Position Summary
We are seeking a Staff AI Software Developer in Test (Staff AI SDET) to help lead the next generation of quality engineering by integrating AI-driven testing approaches and evaluation frameworks into our development process.
This Staff AI SDET will focus on building the QA ecosystem: evaluating, benchmarking, and improving AI-driven testing agents across our engineering streams. The successful candidate will work hands-on to design evaluation strategies, build proof-of-concepts, and conduct experiments to determine how AI can most effectively improve software quality, testing coverage, and engineering efficiency.
The role requires a combination of testing expertise, hands-on experimentation with AI agents, and the ability to design a structured QA ecosystem and evaluation processes. Working closely with the QA Lead and engineering teams, this person will identify gaps in current testing approaches and design AI-assisted solutions to address them.
Responsibilities
Design and implement QA foundations and evaluation frameworks for assessing AI agents, their workflows, and their processes.
Design and run experiments to assess AI-driven QA processes, including prompt strategies, automation workflows, and agent performance.
Benchmark systems across multiple engineering streams to measure the accuracy, reliability, cost, and consistency of their outputs.
Develop approaches for creating synthetic datasets and structured test data to support reliable evaluation.
Identify gaps in existing testing approaches and design AI-based solutions or agents to address them.
Build proof-of-concepts (POCs) to explore new testing approaches and validate ideas.
Structure and document product knowledge, including user journeys, APIs, and business rules, to support AI-assisted testing systems.
Evaluate the cost, ROI, and effectiveness of different QA approaches and automation strategies.
Collaborate closely with the QA Lead and engineering teams to integrate AI-assisted QA into development workflows.
Ensure AI-driven testing processes produce consistent, reliable, and high-quality results.
Remain hands-on in building tooling, experiments, and testing infrastructure.
Key Role Requirements
Strong SDET background with deep test automation experience.
Experience building and maintaining automated testing frameworks and QA tooling.
Hands-on experience building proof-of-concepts or experimental systems.
Experience evaluating or working with AI systems, LLMs, or agent-based workflows.
Ability to design structured experiments and evaluation methodologies.
Strong understanding of testing fundamentals, including UI and API automation tools such as Cypress, Playwright, or similar.
Experience analysing systems and identifying opportunities to improve processes through automation or AI.
Ability to work hands-on while also contributing to process design and strategic improvements.
Preferred Experience
Experience working in AI-focused companies or startups.
Hands-on experience building or experimenting with AI agents or AI-driven automation workflows in Quality Assurance.
Experience designing evaluation frameworks for AI systems.
Experience working with synthetic data generation or test data modelling.
Familiarity with knowledge graph systems or structured product knowledge mapping.
Experience evaluating cost efficiency and ROI of engineering or QA initiatives.
Experience working in fast-moving startup environments.
Personal Characteristics
Builder mindset with a strong desire to experiment and create new solutions.
Proactive and hands-on, comfortable turning ideas into working prototypes.
Strong analytical thinking and problem-solving skills.
Comfortable working in rapidly evolving technical environments.
Curious about emerging AI technologies and their practical applications.
Able to think creatively about testing strategies and data generation.
Collaborative and comfortable working across engineering and QA teams.
Maintains a can-do attitude and flexible approach to tools and technologies.