The selected candidate, working within project teams involved in all phases of the software development lifecycle, will be responsible for:
* Deployment in testing and production environments
* Development and optimization of data processing pipelines in distributed Big Data environments
* Design and implementation of ETL workflows and large-scale data processing systems
* Test-driven and domain-driven development applied to data engineering contexts
* Test automation of data integration and transformation workflows
* Supporting the Project Manager with project estimates
* Planning of implementation activities
Requirements
* Have at least 3–4 years of experience developing Big Data solutions in enterprise environments
* Have worked with Apache Spark using Java for designing and developing batch and streaming data processing jobs
* Have experience designing and implementing complex ETL workflows and distributed data processing pipelines
* Have worked in Cloudera environments (CDH or CDP) and are familiar with the main services of the Hadoop ecosystem (HDFS, YARN, Hive, Impala, Kafka)
* Are able to write SQL queries to create, modify, and manage data in relational and distributed databases (e.g., Hive, Impala)
* Have professional experience with technologies such as Java 8+, Apache Spark, Spark SQL, and Apache Kafka
* Have working proficiency in English
Nice to Have
* Experience with Palantir Foundry, including building pipelines and ontologies on the platform
* Experience in software development projects within the Banking / Finance sector
Location: The position is open across all our offices, with preference for Molfetta, Lecce, or Palermo.
What We Offer
Contract type, salary (RAL), and job level will be determined during the selection process based on the candidate’s experience.
We offer a dynamic work environment that is attentive to employees’ needs.