Perplexity

Member of Technical Staff (AI Inference Engineer)

Location: San Francisco, US
Workplace: On-site
Language: English
Posted: April 13, 2026
Last verified: May 15, 2026

JobGrid listing details

JobGrid.eu keeps the employer description in its original language and adds clear listing facts, freshness, and source context so candidates can evaluate the role before applying.

Key details: 1 location, On-site
Current openings: 25 active jobs
Original language: English
Source and freshness: Collected from public career pages and reviewed through JobGrid.eu source availability checks. Last verified: May 15, 2026.
Apply path: JobGrid.eu sends candidates to the original application page and adds non-personal referral parameters.

We build and run the inference engine behind every Perplexity query and deploy dozens of model architectures at scale with tight latency and cost budgets. Our stack is Rust, Python, CUDA, and CuTe DSL - and we need another engineer to join us.

What you will work on

Examples of real work the team does:

  • New model support. Bring up transformer-based retrieval, text-generation, and multimodal models in our inference infrastructure, from weight loading, request scheduling, and KV-cache management through to API gateway integration.

  • GPU kernel migration to CuTe DSL. Port our in-house CUDA kernels to NVIDIA's CuTe DSL so they run on GB200 today and are portable to Vera Rubin racks tomorrow.

  • Rust-native serving runtime. Develop our internal Rust-based inference server to move past the pain points of Python-based serving and keep up with rapidly growing traffic.

  • Performance optimization. Profile and fix bottlenecks from network ingress through continuous batching and GPU kernel interleaving (a toy sketch of continuous batching follows this list).

  • Reliability and observability. Build dashboards, alerts, and automated remediation so we catch regressions before users do. Respond to and learn from production incidents.
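
For a sense of the scheduling problems involved, here is a minimal Python sketch of continuous batching with block-based KV-cache accounting. All names, the block-allocation scheme, and the scheduler interface are invented for illustration; this is a sketch of the general idea, not Perplexity's runtime.

```python
# Toy continuous-batching scheduler with block-based KV-cache accounting.
# Illustrative only: names and structure are hypothetical, not the real runtime.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Request:
    rid: int
    prompt_len: int
    max_new_tokens: int
    generated: int = 0                              # tokens produced so far
    kv_blocks: list = field(default_factory=list)   # block ids backing this request's KV cache


class Scheduler:
    """Admit new requests whenever KV-cache blocks free up, instead of waiting
    for a whole static batch to drain (the core of continuous batching)."""

    def __init__(self, total_blocks: int, block_size: int = 16):
        self.free_blocks = deque(range(total_blocks))
        self.block_size = block_size
        self.waiting = deque()   # requests not yet admitted
        self.running = []        # requests currently decoding

    def submit(self, req: Request) -> None:
        self.waiting.append(req)

    def _blocks_needed(self, tokens: int) -> int:
        return -(-tokens // self.block_size)  # ceil division

    def step(self) -> list:
        """One iteration: admit whatever fits, then run one decode step for everything."""
        # Admit waiting requests while enough KV blocks are free for their prompts.
        while self.waiting:
            need = self._blocks_needed(self.waiting[0].prompt_len)
            if need > len(self.free_blocks):
                break
            req = self.waiting.popleft()
            req.kv_blocks = [self.free_blocks.popleft() for _ in range(need)]
            self.running.append(req)

        finished = []
        for req in self.running:
            req.generated += 1  # stand-in for one forward/decode step
            # Grow the KV cache by one block once the current blocks are full
            # (growth is silently skipped here if no block is free, for simplicity).
            if (req.prompt_len + req.generated) > len(req.kv_blocks) * self.block_size:
                if self.free_blocks:
                    req.kv_blocks.append(self.free_blocks.popleft())
            if req.generated >= req.max_new_tokens:
                finished.append(req)

        # Retire finished requests and return their KV blocks to the pool,
        # which lets the next step() admit more waiting requests.
        for req in finished:
            self.running.remove(req)
            self.free_blocks.extend(req.kv_blocks)
        return [req.rid for req in finished]


if __name__ == "__main__":
    sched = Scheduler(total_blocks=8)
    for i in range(3):
        sched.submit(Request(rid=i, prompt_len=20, max_new_tokens=4))
    for _ in range(10):
        done = sched.step()
        if done:
            print("finished:", done)
```

A real scheduler also has to interleave prefill and decode work, respect latency budgets, and preempt or swap requests when memory runs out; the sketch only shows the admit/decode/retire loop.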

Who we're looking for

  • Deep experience with GPU programming and performance work (CUDA, Triton, CUTLASS, or similar). Any other deep systems programming experience is a plus.

  • You understand modern LLM architectures and are able to bring them up reliably in a production environment.

  • You've built and operated production distributed systems under real load - ideally performance-critical ones.

  • Comfortable working across languages and layers: Rust for the serving runtime, Python for model code, CUDA/CuTe DSL for kernels.

  • You own problems end-to-end. You can read a research paper on Monday, write a kernel on Wednesday, and debug a production incident on Friday.

  • Self-directed. You do well in fast-moving environments where the path forward isn't laid out for you.

Good if you've touched any of

  • ML compilers and framework internals: PyTorch internals, torch.compile, custom operators.

  • Distributed GPU communication: NCCL, NVLink, InfiniBand, RDMA libraries, model/tensor parallelism.

  • Low-precision inference: INT8/FP8/FP4 quantization, mixed-precision serving (a minimal quantization sketch follows this list).

  • Profiling and debugging tools: Nsight Compute/Systems, CUDA-GDB, PTX/SASS analysis.

  • Container orchestration: Kubernetes, GPU scheduling, autoscaling inference workloads.
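
To make the low-precision bullet concrete, here is a minimal Python/PyTorch sketch of symmetric per-output-channel INT8 weight quantization. It is a generic textbook example, not the quantization path used in production serving.

```python
# Minimal sketch of symmetric per-output-channel INT8 weight quantization.
# Generic illustration only; not a production quantization path.
import torch


def quantize_int8(w: torch.Tensor):
    """Quantize a 2-D weight [out_features, in_features] to int8, one scale per output channel."""
    # Map each output channel's maximum magnitude to 127.
    max_abs = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    scale = max_abs / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale


def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # In a real kernel the int8 weights feed an int8 GEMM; here we just reconstruct floats.
    return q.to(torch.float32) * scale


if __name__ == "__main__":
    w = torch.randn(4096, 4096)
    q, scale = quantize_int8(w)
    err = (dequantize_int8(q, scale) - w).abs().mean()
    print(f"mean abs quantization error: {err:.6f}")
```

FP8/FP4 serving follows the same quantize-and-rescale pattern but typically relies on hardware-native formats and fused GEMM kernels rather than an int8 round-trip.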

Qualifications

  • 3+ years of professional software engineering experience with meaningful work on ML inference or high-performance systems.

  • Familiarity with at least one deep learning framework (PyTorch, JAX, TensorFlow).

  • Understanding of GPU architectures (memory hierarchy, warp scheduling, tensor cores).

  • Understanding of common LLM architectures and inference optimization techniques, e.g. quantization, speculative decoding, and prefill-decode disaggregation (see the toy speculative-decoding sketch below).
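
To illustrate the last bullet, below is a toy Python sketch of the propose/verify loop at the heart of greedy speculative decoding, with stub "models" standing in for real networks. The function names and stubs are invented for this example; real implementations verify all drafted positions in a single batched target forward pass.

```python
# Toy sketch of greedy speculative decoding: a cheap draft model proposes a short
# run of tokens, the target model verifies them, and the longest agreeing prefix
# is accepted. Stub "models" are used so the example runs standalone.
from typing import Callable, List

Token = int
Model = Callable[[List[Token]], Token]  # greedy next-token predictor


def speculative_step(target: Model, draft: Model, ctx: List[Token], k: int = 4) -> List[Token]:
    """Return the tokens accepted in one speculative step (always at least one)."""
    # 1. Draft model proposes k tokens autoregressively (cheap to run).
    proposed: List[Token] = []
    for _ in range(k):
        proposed.append(draft(ctx + proposed))

    # 2. Target model checks each drafted position. In a real system this is one
    #    batched forward pass over all k positions, which is where the speedup comes from.
    accepted: List[Token] = []
    for tok in proposed:
        expected = target(ctx + accepted)
        if expected == tok:
            accepted.append(tok)       # draft agreed with target: keep the token
        else:
            accepted.append(expected)  # first disagreement: take the target's token and stop
            break
    else:
        # All k drafted tokens accepted; the target contributes one bonus token.
        accepted.append(target(ctx + accepted))
    return accepted


if __name__ == "__main__":
    # Stub models: the target counts up by 1; the draft mostly agrees but sometimes drifts.
    def target(seq):
        return seq[-1] + 1

    def draft(seq):
        return seq[-1] + (2 if len(seq) % 3 == 0 else 1)

    ctx = [0]
    while len(ctx) < 12:
        ctx += speculative_step(target, draft, ctx)
    print(ctx)
```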
