We are seeking an experienced Big Data Engineer to join our team. The successful candidate will be responsible for operating and evolving our big data stack on AWS, designing and implementing ETL pipelines for large-scale data processing, and collaborating with the Data Science team to prepare and manage datasets for model training and deployment.
Key Responsibilities
* Managing and evolving our AWS-based data infrastructure
* Designing and implementing ETL pipelines for large-scale data processing
* Preparing and managing datasets for model training and deployment
* Automating existing workflows and contributing to infrastructure-as-code practices
* Supporting the deployment of internal tools and applications in the cloud
* Participating in R&D activities: evaluating new tools, improving system performance, and ensuring scalability
Requirements
* Knowledge of AWS data services: MSK (managed Kafka), EMR (Spark/Flink), Glue, Kinesis, and MWAA (managed Airflow)
* Familiarity with cloud environments, preferably AWS
* Basic understanding of infrastructure-as-code tools (e.g., Terraform, AWS CDK, or CloudFormation)
* Strong interest in big data processing and scalable architecture
* Collaborative mindset, problem-solving orientation, and willingness to learn in a fast-paced environment
Why Choose Us
* Join a product-driven company making a global impact in digital advertising
* Work with a modern, high-volume data stack in a collaborative and innovative environment
* Develop your skills through hands-on experience with AWS and machine learning infrastructure
* Engage in daily learning through research, experimentation, and cross-functional teamwork