Oxford Dynamics

Test & AI Evaluation Lead

Location: Harwell Oxford
Working model: Hybrid
Employment type: Full-time
Published: 9 April 2026
Last checked: 6 May 2026

Salary: Competitive depending on experience

Location: 2-3 days on-site at our Harwell office, with travel to client sites when required

Contract type: Full-time, permanent - 37.5 hours per week

A note from the Founders  

Oxford Dynamics is at an inflection point.  

We operate in some of the most complex and high‑stakes environments in the world - defence, national security, AI and robotics. The decisions we make now will define not just how fast we grow, but who we become.

You will work closely with the whole team. You will be trusted with judgment calls. You will influence the business. And you will see the impact of your work every day in the work we do.

If you are excited by ownership, pace and purpose - and by building something that genuinely matters - we would love to hear from you.  

Who We Are  

Founded in 2020, Oxford Dynamics (OD) is a fast‑growing UK deep‑tech company developing AI and robotic systems designed to operate in mission‑critical environments.  

Our flagship AVIS (A Very Intelligent System) AI framework fuses multi‑modal data - text, imagery, telemetry and sensor feeds - enabling operators to interrogate complex information at speed and make better decisions under pressure. Our STRIDER robotic platform performs autonomous tasks in hazardous environments, protecting people while extending operational reach.  

Our ambition is simple but demanding: to converge AI and robotics so machines can sense, understand and act in complex, real‑world environments.  

We work with defence and security organisations internationally to help protect nations, infrastructure and lives. 

 

What you will be doing here / why this role matters

Oxford Dynamics is a small team that relies on a collaborative and positive approach, so the right attitude for this role is just as important as experience. We are at an important stage in our growth, and as our Test & AI Evaluation Lead you will be an essential part of our success.

You’ll work at the cutting edge of agentic and generative AI, building systems that move beyond lab demos and into real-world deployment at pace. At Oxford Dynamics, you’ll have the freedom to experiment in a fast-moving environment, the responsibility to deliver, and the opportunity to shape how multi-agent AI systems operate in complex, constrained, and high-trust environments.

If you’re excited by agent orchestration, LLMs, and deploying AI where it matters, this role is built for you!

Role Summary

We're hiring a Test & AI Evaluation Lead to own how Oxford Dynamics validates its AI-driven, mission-critical systems - from multi-agent orchestration and LLM outputs through to cloud infrastructure and real-time user-facing applications.

You'll design and lead test approaches where correctness, resilience, and security matter as much as feature velocity. Working embedded with AI, Backend, Frontend, and DevOps, you'll shape how we validate agent behaviours, data pipelines, and end-to-end operational workflows - from research prototypes through to production deployments for Defence and Security customers. Quality is built in from day one, not inspected at the end.

Key Responsibilities

Test Strategy & Leadership

  • Define and own the end-to-end test strategy across AI, backend, frontend, and infrastructure layers.
  • Establish testing standards appropriate for agentic AI systems, including non-deterministic behaviour and probabilistic outputs.
  • Ensure testing aligns with mission-critical, safety-conscious, and security-first delivery expectations.
  • Act as the primary quality authority across projects, advising engineering and product leadership on risk and readiness.

AI & Data-Focused Testing

  • Design approaches for testing multi-agent workflows, including orchestration logic, memory/state handling, and tool integrations.
  • Define validation strategies for LLM outputs, including groundedness, hallucination detection, task success rates, and regression testing.
  • Work with AI Engineers to embed evaluation metrics and pass/fail thresholds into pipelines.
  • Validate data ingestion, transformation, and inference pipelines across structured and unstructured data sources.
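As a flavour of what "embedding evaluation metrics and pass/fail thresholds into pipelines" can look like in practice, here is a minimal, self-contained sketch. All names (`run_agent`, `is_grounded`) are hypothetical stand-ins, and the groundedness check is a deliberately toy word-overlap heuristic; a real harness would call the deployed system and use a proper evaluation method.

```python
# Sketch: statistical pass/fail gating for non-deterministic LLM outputs.
# run_agent and is_grounded are hypothetical stand-ins for this example.

def run_agent(prompt: str) -> str:
    # Stand-in for a call to the system under test.
    return "Paris is the capital of France."

def is_grounded(answer: str, sources: list[str]) -> bool:
    # Toy groundedness heuristic: the answer must share at least
    # two words with one of the source documents.
    words = set(answer.lower().split())
    return any(len(words & set(s.lower().split())) >= 2 for s in sources)

def pass_rate(n_trials: int, prompt: str, sources: list[str]) -> float:
    # Non-deterministic outputs are evaluated statistically: repeat
    # the task and measure the fraction of grounded answers.
    passes = sum(is_grounded(run_agent(prompt), sources)
                 for _ in range(n_trials))
    return passes / n_trials

THRESHOLD = 0.9  # pass/fail gate enforced in the pipeline

rate = pass_rate(20, "What is the capital of France?",
                 ["France's capital is Paris."])
assert rate >= THRESHOLD, f"groundedness pass rate {rate:.2f} below gate"
```

The key design point is that a single run proves little for probabilistic systems; gating on a pass rate across repeated trials turns a flaky check into a meaningful regression signal.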

Automation & Tooling

  • Drive a test-automation-first mindset, integrating tests into CI/CD pipelines (GitHub Actions, Argo CD).
  • Oversee automated testing across API and service layers, UI (E2E and accessibility), and infrastructure and deployment workflows.
  • Select, implement, and evolve testing tools and frameworks appropriate to modern cloud-native and AI systems.

Non-Functional Testing

  • Own performance, scalability, reliability, and resilience testing for distributed systems.
  • Coordinate security testing activities in line with secure-by-design principles (e.g. IAM, secrets handling, data boundaries).
  • Validate backup, disaster recovery, and failover scenarios alongside DevOps and Backend teams.
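One common way to wire a non-functional check like the above into a CI/CD stage (the posting names GitHub Actions and Argo CD but does not prescribe a harness, so the metric source here is an assumption) is to gate the pipeline on a script's exit code:

```python
# Sketch: a performance budget gate for a CI step.
# read_latency_p95 is a hypothetical stand-in; in a real pipeline it
# would parse a load-test report artifact produced by an earlier stage.

def read_latency_p95() -> float:
    # Stand-in for a measured 95th-percentile latency, in milliseconds.
    return 180.0

LATENCY_BUDGET_MS = 250.0

def gate() -> int:
    # Returns the exit code a CI step would use: 0 = pass, 1 = fail.
    p95 = read_latency_p95()
    if p95 > LATENCY_BUDGET_MS:
        print(f"FAIL: p95 latency {p95}ms exceeds {LATENCY_BUDGET_MS}ms budget")
        return 1
    print(f"PASS: p95 latency {p95}ms within budget")
    return 0

exit_code = gate()
# In CI this would end with: sys.exit(exit_code)
```

A non-zero exit code fails the pipeline stage, so performance regressions block a deployment the same way a failing unit test does.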

Delivery & Collaboration

  • Embed with delivery teams to ensure testing is planned early and executed continuously.
  • Work closely with Product and Engineering to define clear acceptance criteria and definition of done.
  • Provide clear, decision-ready quality reporting to technical and non-technical stakeholders.
  • Support customer-facing demonstrations, trials, and operational readiness assessments.
