We’re seeking self-motivated, clever, and creative specialists who can handle the speed required to be on the frontlines of AI security.
Complete extensive training on AI/ML, LLMs, Red Teaming, and jailbreaking, as well as on specific project guidelines and requirements
Collaborate closely with language specialists, team leads, and QA leads to produce the best possible work
Assist our data scientists in conducting automated model attacks
Adapt to the dynamic needs of different projects and clients, navigating shifting guidelines and requirements
Keep up with the evolving capabilities and vulnerabilities of LLMs and help your team’s methods evolve with them
Hit productivity targets, including targets for the number of prompts written and the average handling time per prompt
Strong writing skills in English and Italian
Strong understanding of grammar, syntax, and semantics – knowing what “proper” English and Italian rules are, as well as when to violate them to better test AI responses
Creative thinking
Candidates who clear the profile screening will be required to take a language proficiency assessment.
As a Red Teaming Specialist, you’ll push the boundaries of large language models and seek to expose their vulnerabilities. In this work, you may be dealing with material that is toxic or NSFW.