About the company

We are an Italian company specialising in applied Artificial Intelligence. Our mission is to design and build new AI-based solutions, from early pilots to MVPs, together with demanding enterprise clients. We work on complex problems where AI is the core of the solution, and where reliability, explainability, and control are essential.

About the R&D unit

Within the company, the R&D unit acts as an AI engineering lab. The team explores new AI capabilities and usage scenarios and transforms them into running systems that can be demonstrated, evaluated, and evolved.

Key characteristics of our R&D work

- Projects built around advanced AI components and AI-native architectures.
- Multiple short, parallel initiatives instead of one long product stream.
- Starting from open, ambiguous problem statements and shaping them into concrete system designs.
- Intensive use of open-source technologies, often extended or adapted for our needs.
- Prototypes that run as complete systems and clearly demonstrate value in controlled environments.

The outcome of our work is a set of robust, well-structured AI services and pipelines that can be tested internally, presented to clients, and, when validated, evolved into full-scale solutions by production-oriented teams.

Role overview

We are looking for an R&D Backend AI Systems Engineer to strengthen this R&D unit. The role focuses on building and integrating AI-based services and APIs, transforming experimental ideas and components into reliable, usable systems.

You will collaborate closely with other R&D engineers who design AI pipelines and models. Your responsibility is to give these components a solid engineering shape: clear interfaces, strong contracts, predictable runtime behaviour, and the control mechanisms needed to use AI safely and effectively.

Your work will be central in:

- Designing and implementing the service layer around AI components.
- Defining how different modules connect and exchange data.
- Turning experimental elements into coherent, testable systems suitable for pilots and pre-MVP stages.

What you will work on

In this role you will typically:

- Design and implement API layers around AI components: create HTTP/JSON APIs for LLM-based services, retrieval and reasoning modules, and in-house models, with clearly defined contracts and behaviours.
- Build and evolve AI service gateways and orchestration: implement routing, composition, and coordination for multiple AI services and pipelines, including request flow, prioritisation, and fallback paths.
- Define and enforce schemas and contracts: use explicit models and validation rules for all inputs and outputs, handle errors consistently, and keep AI behaviour within agreed limits.
- Encapsulate models into controllable services: wrap local and cloud-based models into services with clear responsibilities, configuration, rate limits, and safeguards.
- Integrate and extend open-source components: work with modern AI frameworks, libraries, and tools; select and integrate them into our systems and, when useful, adapt or patch them.
- Make prototypes deployable as systems: ensure that each prototype can be deployed as a small but complete system that demonstrates business value in a controlled environment and is ready for further industrialisation.

Why this role is attractive

- AI-centric engineering: you work at the heart of AI systems, where creating robust, controllable services is as important as the models themselves.
- Variety and pace: you contribute to several short, intense R&D initiatives rather than a single long project, with continuous exposure to new domains and ideas.
- Impact on architecture: you influence how new AI solutions are shaped from day one, designing the service and integration patterns that others will build on.
- Collaborative environment: you join a compact, high-level R&D team and interact with other expert groups in the company, while keeping strong ownership of your systems.
- Clear path from idea to MVP: you see the full journey from initial concept to running prototype and pre-MVP, and your engineering work is a key enabler of that journey.

Technical environment

Our core environment combines modern backend engineering with the contemporary AI stack.

- Python as the primary implementation language.
- Modern web frameworks for APIs (e.g. FastAPI or similar).
- API gateways, routing components, and orchestration logic for multiple backend services.
- Schema-based request/response validation (e.g. Pydantic models or equivalent).
- Integration with AI models, including large language models and other neural components, both on-premise and in the cloud.
- Use of ontologies and structured representations to manage and interpret AI outputs.
- AI orchestration and tooling (e.g. LangChain, LangGraph, or similar frameworks).
- Standard data-layer and messaging components to support AI workflows (e.g. relational databases, caches, queues, background workers).
- Containerisation for deployment in R&D and demo environments.

Candidate profile

We are looking for an engineer who:

- Enjoys designing and building backend services and APIs, and cares about clean, well-structured code.
- Has solid experience with Python-based backend development and modern API frameworks.
- Understands how AI-based components behave at runtime and how to integrate them into larger systems.
- Is comfortable defining data models, schemas, and contracts, and enforcing them through validation and testing.
- Thinks in terms of systems: components, boundaries, flows, reliability, observability.
- Likes working with open problems, iterating from rough concept to running solution.
- Communicates clearly with colleagues from different backgrounds and can align technical decisions with project goals.
- Is motivated by building new things and seeing them run in realistic conditions, with real users and stakeholders.
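To give a flavour of the contract-first service work this role centres on, here is a minimal, dependency-free sketch of wrapping an AI component behind an explicit request/response contract. All names (`SummariseRequest`, `fake_summariser`, and so on) are hypothetical; in practice this boundary would typically be expressed with FastAPI and Pydantic from the stack listed above, with a real model behind it.

```python
from dataclasses import dataclass

# Sketch of the "schemas and contracts" pattern: every request to and
# response from an AI component is an explicit, validated model, so the
# service's behaviour stays within agreed limits. Names are illustrative.

@dataclass(frozen=True)
class SummariseRequest:
    text: str
    max_words: int = 50

    def validate(self) -> None:
        if not self.text.strip():
            raise ValueError("text must be non-empty")
        if not 1 <= self.max_words <= 500:
            raise ValueError("max_words must be between 1 and 500")

@dataclass(frozen=True)
class SummariseResponse:
    summary: str
    model: str

def fake_summariser(text: str, max_words: int) -> str:
    # Stand-in for a real LLM call; here it simply truncates to max_words.
    return " ".join(text.split()[:max_words])

def summarise_endpoint(req: SummariseRequest) -> SummariseResponse:
    # Input contract is enforced at the service boundary, not inside the model.
    req.validate()
    summary = fake_summariser(req.text, req.max_words)
    # The output contract is checked too: never exceed the requested length.
    if len(summary.split()) > req.max_words:
        raise RuntimeError("model output violated the length contract")
    return SummariseResponse(summary=summary, model="fake-summariser-v0")

if __name__ == "__main__":
    resp = summarise_endpoint(
        SummariseRequest(text="AI services need clear contracts at every boundary",
                         max_words=4)
    )
    print(resp.summary)  # -> "AI services need clear"
```

The design choice the sketch illustrates: validation and safeguards live in the service layer around the AI component, so the component itself can be swapped (local model, cloud model, mock) without weakening the contract.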