CV

Aleksandr Andreev

Lead Data Engineer with 9+ years of experience building streaming, analytics, and lakehouse platforms. Strong in platform design, delivery, and turning messy operational needs into dependable systems.

Built and scaled data platforms that support analytics and near real-time decisioning.

Hands-on with Kafka, Flink, Spark, Airflow, dbt, Iceberg, Trino, and Python.

Experienced in leading delivery, mentoring engineers, and raising engineering standards.

Experience

Lead Data Engineer

AlfaStrakhovanie

2021 — present

  • Led the design of real-time claims pipelines on Kafka, Flink, and Iceberg for high-throughput workloads.
  • Drove the migration of analytical workloads toward open lakehouse patterns built on Trino, dbt, and Iceberg.
  • Built an LLM-assisted merge request review workflow adopted by multiple teams.

Senior Data Engineer

Large financial services company

2018 — 2021

  • Built and operated Spark-based ETL pipelines ingesting from multiple upstream systems.
  • Standardized orchestration patterns in Airflow and improved observability for data SLAs.
  • Improved query performance through better file layout, partitioning, and columnar storage practices.

Data Engineer / Data Analyst

Earlier analytics and data roles

2015 — 2018

  • Moved from SQL-heavy analytics toward Python and distributed data engineering.
  • Built early event pipelines and learned how platform decisions affect downstream teams.

Selected skills

Streaming

Kafka, Kafka Streams, Flink

Batch

Spark, Airflow, dbt

Storage

Iceberg, Parquet, ClickHouse, PostgreSQL

Query

Trino, DuckDB, SQL optimization

Infra

Kubernetes, Docker, Terraform, GitLab CI

AI tooling

Claude API, RAG, Qdrant, internal developer tools