<h3>About ALLSIDES</h3>
<p>ALLSIDES is redefining how the world experiences 3D content. We combine physically accurate scanning and generative AI to power content-creation workflows for e-commerce, virtual environments, and immersive experiences. Our clients include global brands such as adidas, Meta, Amazon, and Zalando.</p>
<p>We operate a rapidly scaling photorealistic 3D scanning operation, capturing tens of thousands of assets annually while training next-generation AI models. As an NVIDIA Inception member, we collaborate with leading research institutions and actively participate in top-tier conferences in 3D computer vision and AI.</p>
<p>More info: |</p>
<h3>Position Overview</h3>
<p>We're looking for a DataOps/MLOps Engineer to build the infrastructure that powers our data and ML workflows. You'll focus on data storage and movement, dataset versioning, ML pipeline automation, experiment tracking, and ensuring reproducibility across our 3D reconstruction and training workloads.</p>
<h3>Main Responsibilities</h3>
<ul>
<li>Design and manage data storage systems for large datasets (multi-TB image data, 3D assets, training data)</li>
<li>Build efficient data access patterns and movement strategies for distributed training and experimentation</li>
<li>Implement dataset versioning and lineage tracking for reproducibility</li>
<li>Set up and maintain experiment tracking and model registry infrastructure (MLflow, Weights &amp; Biases)</li>
<li>Build ML pipelines for data preprocessing, training, validation, and model registration (Kubeflow, Airflow, Prefect)</li>
<li>Support distributed training workflows across multi-GPU clusters (PyTorch Distributed, Horovod, Ray)</li>
<li>Profile and optimize training pipelines: data-loading bottlenecks, batch sizing, GPU memory utilization</li>
<li>Ensure reproducibility of experiments: environment pinning, data versioning, artifact management</li>
<li>Manage artifact storage and distribution (Docker registries, model registries, package repositories)</li>
<li>Build tooling to improve developer productivity for ML workflows</li>
</ul>
<h3>Qualifications</h3>
<ul>
<li>Strong Linux knowledge</li>
<li>Experience with data storage systems and large-file handling (object storage, NFS, distributed filesystems)</li>
<li>Knowledge of dataset versioning tools (DVC, Delta Lake, or similar)</li>
<li>Experience with ML pipeline orchestration (Airflow, Prefect, Kubeflow)</li>
<li>Familiarity with experiment tracking tools (MLflow, Weights &amp; Biases, Neptune)</li>
<li>Understanding of distributed training frameworks and patterns</li>
<li>Experience with containerization (Docker) and CI/CD pipelines</li>
<li>Knowledge of Python dependency and environment management</li>
</ul>
<h3>Nice to Have</h3>
<ul>
<li>Experience with model registries and deployment workflows</li>
<li>Familiarity with data quality validation frameworks</li>
<li>Knowledge of 3D graphics processing or computer vision workflows</li>
</ul>
<h3>What we offer</h3>
<ul>
<li>Compensation that reflects your experience, including stock options</li>
<li>Lunch vouchers for working days</li>
<li>Relocation assistance</li>
<li>Flexible working hours and a work-from-home policy</li>
<li>Family-friendly environment</li>
<li>Amazing office space in South Tyrol, located at the Durst Group</li>
<li>Personal and professional growth opportunities</li>
</ul>
<p>You don't have to tick every box to apply; your drive and passion matter most!</p>
<p>This role is located on-site in Brixen/Bressanone, Italy. If you are interested, please apply with your CV attached to</p>