As a specialist, you will be joining a truly global team of subject matter experts across a wide variety of disciplines and will be entrusted with a range of responsibilities. We’re seeking self-motivated, clever, and creative specialists who can handle the speed required to be on the frontlines of AI security. Below are some responsibilities and tasks of our Red Teaming Specialist role:

- Complete extensive training on AI/ML, LLMs, Red Teaming, and jailbreaking, as well as specific project guidelines and requirements
- Craft clever and sneaky prompts to attempt to bypass the filters and guardrails on LLMs, targeting specific vulnerabilities defined by our clients
- Collaborate closely with language specialists, team leads, and QA leads to produce the best possible work
- Assist our data scientists in conducting automated model attacks
- Adapt to the dynamic needs of different projects and clients, navigating shifting guidelines and requirements
- Keep up with the evolving capabilities and vulnerabilities of LLMs and help your team’s methods evolve with them
- Hit productivity targets, including for number of prompts written and average handling time per prompt

What we need you to bring:

- Language writing skills (in English and Dutch)
- Strong understanding of grammar, syntax, and semantics – knowing what “proper” English and Dutch rules are, as well as when to violate them to better test AI responses
- Ability to adopt different voices and points of view
- Creative thinking
- Strong attention to detail
- Well-honed internet research skills
- Ability to embrace diverse teams
- Ability to navigate ambiguity with grace
- Adaptability to thrive in a dynamic environment, with the agility to adjust to evolving guidelines and requirements

Candidates who clear the profile screening will be required to take a language proficiency assessment.

Please note: As a Red Teaming Specialist, you’ll push the boundaries of large language models and seek to expose their vulnerabilities. In this work, you may be dealing with material that is toxic or NSFW.