Senior Data Engineer, Division HN
Information Technology Center
Hà Nội
25-ITC-0325
We are looking for a highly skilled and motivated Senior Data Engineer to lead the design and implementation of scalable, high-performance data pipelines and infrastructure that operate seamlessly across both cloud-based and on-premise environments. In alignment with our AI-first company mission, you will play a key role in shaping and developing a robust, self-serve custom data platform that enables MoMo’s business units and strategic partners to access, analyze, and activate data with speed and confidence. This platform will serve as the foundation for powering intelligent decision-making, personalization, and innovation across a wide range of products and services.
Job Description
- Design, implement, and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data across multiple systems.
- Build and manage a hybrid data platform architecture that spans cloud and on-premise infrastructure.
- Support both batch and real-time data processing workflows using modern data engineering tools.
- Collaborate with engineering, analytics, and product teams to define and deliver data infrastructure that meets business needs.
- Monitor and improve pipeline performance, reliability, and scalability.
- Ensure data governance, quality, security, and compliance across all environments.
- Implement infrastructure for data observability, lineage tracking, and logging.
- Monitor and optimize resource usage.
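To give a concrete sense of the extract-transform-load pattern the responsibilities above describe, here is a minimal sketch in plain Python. All names (`extract`, `transform`, `load`, the `order_id`/`amount` fields) are illustrative assumptions, not systems from the role itself; a production pipeline would run such steps under an orchestrator like Apache Airflow.

```python
# Toy extract -> transform -> load sketch; names and schema are illustrative.

def extract(rows):
    """Ingest raw records from a source system (here: an in-memory list)."""
    return list(rows)

def transform(rows):
    """Normalize field names and drop records failing a basic quality check."""
    out = []
    for r in rows:
        if r.get("amount") is None:  # simple data-quality gate
            continue
        out.append({"order_id": r["id"], "amount": float(r["amount"])})
    return out

def load(rows, sink):
    """Deliver cleaned records to a target store (here: a dict keyed by id)."""
    for r in rows:
        sink[r["order_id"]] = r
    return sink

raw = [{"id": 1, "amount": "9.50"}, {"id": 2, "amount": None}]
warehouse = load(transform(extract(raw)), {})
```

The three stages are deliberately separated so each can be monitored, retried, and scaled independently, which mirrors how orchestrated pipelines are structured in practice.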
Job Requirements
- 4+ years of experience as a Data Engineer or Software Engineer, with a minimum of 2 years specifically focused on data engineering.
- Proficiency in backend programming languages such as Java or Kotlin; experience with other languages such as Python or Scala is a plus.
- Experience working with database systems (RDBMS, NoSQL) and the ability to read and analyze complex SQL queries.
- Experience with orchestration and transformation tools such as Apache Airflow, Luigi, or dbt.
- Strong understanding of cloud platforms (AWS, GCP, Azure) and experience integrating with on-premise systems.
- Familiarity with data warehouses (e.g., BigQuery, Snowflake, Redshift) and data lakes.
- Solid grasp of data modeling, ETL best practices, and distributed data processing.
- Experience with big data frameworks such as Apache Spark and/or Apache Flink is a plus.
- Familiarity with streaming technologies (e.g., Kafka, Pub/Sub).
- Strong collaboration and problem-solving skills, with a proactive approach to working with cross-functional teams.
- Ability to work independently and drive projects forward with minimal supervision.
- Bonus: Familiarity with CI/CD, Docker, and infrastructure-as-code tools.