Perplexity

Member of Technical Staff (AI Inference Engineer)

Location: London, GB
Workplace: On-site
Category: IT
IT Category: Data Science & ML
Language: English
Posted: April 13, 2026
Last verified: May 15, 2026

Salary context for this role

JobGrid.eu combines visible employer pay, official public benchmarks, and current JobGrid listings for Data Science & ML.

Similar listings (JobGrid observed)

GBP 75,000 - 135,000 / year

Based on 13 current public JobGrid listings with comparable role and location signals.

Source: JobGrid.eu public listings
Geography: City-level
Match quality: High-confidence comparable role
Data period: Current active listings
Sample size: 13
Latest listing: May 15, 2026
Checked by JobGrid: May 17, 2026

JobGrid listing details

JobGrid.eu keeps the employer description in its original language and adds clear listing facts, freshness, and source context so candidates can evaluate the role before applying.

Key details: 1 location, IT, Data Science & ML, On-site
Current openings: 25 active jobs
Original language: English
Source and freshness: Collected from public career pages and reviewed through JobGrid.eu source availability checks. Last verified: May 15, 2026.
Apply path: JobGrid.eu sends candidates to the original application page and adds non-personal referral parameters.

We are looking for an AI Inference Engineer to join our growing team. We build and run the inference engine behind every Perplexity query and deploy dozens of model architectures at scale with tight latency and cost budgets. Our stack is Rust, Python, CUDA, and CuTe DSL.

Responsibilities:

  • New model support. Bring transformer-based retrieval, text-generation, and multimodal models up in our inference infrastructure, from weight loading, request scheduling, and KV-cache management through to API gateway support.

  • GPU kernel migration to CuTe DSL. Port our in-house CUDA kernels to NVIDIA's CuTe DSL so they run on GB200 today and remain portable to Vera Rubin racks tomorrow.

  • Rust-native serving runtime. Develop our internal Rust-based inference server to eliminate Python's performance pain points and keep up with rapidly growing traffic.

  • Performance optimisation. Profile and fix bottlenecks across the whole request path, from network ingress through continuous batching to GPU kernel interleaving.

  • Reliability and observability. Build dashboards, alerts, and automated remediation so we catch regressions before users do. Respond to and learn from production incidents.

Who we're looking for:

  • Deep experience with GPU programming and performance work (CUDA, Triton, CUTLASS, or similar). Any other deep systems programming experience is a plus.

  • You understand modern LLM architectures and are able to bring them up reliably in a production environment.

  • You've built and operated production distributed systems under real load, ideally performance-critical ones.

  • Comfortable working across languages and layers: Rust for the serving runtime, Python for model code, CUDA/CuTe DSL for kernels.

  • You own problems end-to-end. You can read a research paper on Monday, write a kernel on Wednesday, and debug a production incident on Friday.

  • Self-directed. You do well in fast-moving environments where the path forward isn't laid out for you.

Nice-to-have:

  • ML compilers and framework internals: PyTorch internals, torch.compile, custom operators.

  • Distributed GPU communication: NCCL, NVLink, InfiniBand, RDMA libraries, model/tensor parallelism.

  • Low-precision inference: INT8/FP8/FP4 quantization, mixed-precision serving.

  • Profiling and debugging tools: Nsight Compute/Systems, CUDA-GDB, PTX/SASS analysis.

  • Container orchestration: Kubernetes, GPU scheduling, autoscaling inference workloads.

Qualifications:

  • 3+ years of professional software engineering experience with meaningful work on ML inference or high-performance systems.

  • Familiarity with at least one deep learning framework (PyTorch, JAX, TensorFlow).

  • Understanding of GPU architectures (memory hierarchy, warp scheduling, tensor cores).

  • Understanding of common LLM architectures and inference optimization techniques (e.g. quantization, speculative decoding, prefill-decode disaggregation).

Final offer amounts are determined by multiple factors including experience and expertise.

Equity: In addition to the base salary, equity may be part of the total compensation package.
