ALLSIDES is redefining how the world experiences 3D content. We combine physically accurate scanning and generative AI to power content creation workflows for e-commerce, virtual environments, and immersive experiences. Our clients include global brands such as adidas, Meta, Amazon, and Zalando.

We operate a rapidly scaling photorealistic 3D scanning operation, capturing tens of thousands of assets annually while training next-generation AI models. As an NVIDIA Inception member, we collaborate with leading research institutions and actively participate in top-tier conferences in 3D computer vision and AI.

More info: |

Position Overview

We're looking for a DataOps/MLOps Engineer to build the infrastructure that powers our data and ML workflows. You'll focus on data storage and movement, dataset versioning, ML pipeline automation, experiment tracking, and reproducibility across our 3D reconstruction and training workloads.

Main Responsibilities

- Design and manage data storage systems for large datasets (multi-TB image data, 3D assets, training data)
- Build efficient data access patterns and movement strategies for distributed training and experimentation
- Implement dataset versioning and lineage tracking for reproducibility
- Set up and maintain experiment tracking and model registry infrastructure (MLflow, Weights & Biases)
- Build ML pipelines for data preprocessing, training, validation, and model registration (Kubeflow, Airflow, Prefect)
- Support distributed training workflows across multi-GPU clusters (PyTorch Distributed, Horovod, Ray)
- Profile and optimize training pipelines: data loading bottlenecks, batch sizing, GPU memory utilization
- Ensure reproducibility of experiments: environment pinning, data versioning, artifact management
- Manage artifact storage and distribution (Docker registries, model registries, package repositories)
- Build tooling to improve developer productivity for ML workflows

Qualifications

- Strong Linux knowledge
- Experience with data storage systems and large file handling (object storage, NFS, distributed filesystems)
- Knowledge of dataset versioning tools (DVC, Delta Lake, or similar)
- Experience with ML pipeline orchestration (Airflow, Prefect, Kubeflow)
- Familiarity with experiment tracking tools (MLflow, Weights & Biases, Neptune)
- Understanding of distributed training frameworks and patterns
- Experience with containerization (Docker) and CI/CD pipelines
- Knowledge of Python dependency and environment management

Nice to Have

- Experience with model registries and deployment workflows
- Familiarity with data quality validation frameworks
- Knowledge of 3D graphics processing or computer vision workflows

What we offer

- Compensation that reflects your experience, including stock options
- Lunch vouchers for working days
- Relocation assistance
- Flexible working hours and work-from-home policy
- Family-friendly environment
- Amazing office space in South Tyrol, located at the Durst Group
- Personal and professional growth opportunities

You don't have to tick every box to apply; your drive and passion matter most!

This role is located on-site in Brixen/Bressanone, Italy. If you are interested, please apply with your CV attached to