**Who We Are**

At the heart of our outsourcing organization, the Data & Intelligence Competence Center serves as a dedicated hub for advanced data-driven solutions. We specialize in data engineering, analytics, and AI-powered insights, helping businesses turn raw information into actionable intelligence. By combining deep technical expertise with industry best practices, we enable smarter decision-making, optimize processes, and foster innovation across diverse sectors.

To deliver on this mission, we rely on talented professionals who can transform complex data challenges into robust, scalable solutions. This is where you come in. We are looking for a Data Engineer with strong expertise in Azure, Databricks, and Spark to support our transition from the current data platform to a modern, scalable Databricks-based architecture. You will play a key role in designing, building, and migrating data pipelines, ensuring that the next-generation data landscape is efficient, maintainable, and ready for advanced analytics and AI capabilities.

**What You'll Be Doing**

- Design, develop, and optimize data pipelines using Azure Databricks, Spark, and related Azure services
- Lead and support migration activities from the existing data warehouse/ETL stack into Databricks and modern data lakehouse architectures
- Collaborate with architects, data engineers, and analysts to define migration approaches, integration patterns, and technical standards
- Build and maintain data ingestion, transformation, and orchestration workflows aligned with best practices for Databricks and Delta Lake
- Improve the performance, scalability, and reliability of Spark workloads through tuning, optimization, and efficient resource management
- Implement data quality, monitoring, and observability components for migrated pipelines
- Contribute to platform governance, reusable components, and engineering standards to enable consistent delivery across teams
- Document migration procedures, architectural decisions, data models, and operating guidelines
- Participate in Agile ceremonies to ensure predictable and transparent delivery during the migration program

**What You'll Bring Along**

- Strong hands-on experience with Azure Databricks and Spark (batch and/or streaming)
- Demonstrated experience migrating legacy pipelines or data warehouses into Databricks or similar cloud architectures
- Proficiency in Python and SQL for building and optimizing data transformations
- Knowledge of Delta Lake, lakehouse principles, and scalable data modeling approaches
- Familiarity with CI/CD pipelines and DevOps practices (e.g., Azure DevOps) for data engineering workflows
- Understanding of performance tuning, cluster configuration, and efficient resource usage in Spark environments
- Ability to translate business data requirements into robust engineering solutions
- Experience working in Agile delivery environments
- Excellent command of spoken and written English
- Nice to have: experience supporting ML or GenAI workloads in Databricks
- Nice to have: exposure to DataOps or MLOps concepts