About Etera
Etera is building the first AI-native corporate travel platform for the GCC market — designed from the ground up for how businesses in the region actually operate. We're a lean, high-conviction team replacing legacy TMCs with autonomous agents, real-time inventory, and a product experience that doesn't feel like it was designed in 2009.
The Role
QA at Etera is not a traditional gatekeeper role. You are embedded in the engineering team from the start — defining what correct looks like, documenting it rigorously, validating everything that ships (whether written by a human or produced by an AI system), and building your own team of QA agents to do it all at a scale no single person could reach manually.
You report to the Engineering Manager. You own quality strategy across the entire platform.
What You'll Own
- Test case definition and documentation. This is the foundation. Before anything gets tested, someone has to define what "correct" means — precisely, exhaustively, and in writing. You own the test case library for the entire platform. Your documentation is the source of truth that both human engineers and QA agents work from. You treat this as a first-class engineering artifact, not an afterthought.
- Validation of AI system output. Etera's engineering team uses AI coding tools extensively. The output of these systems needs to meet the same quality bar as human-written work — but it fails differently. You own the QA process for AI-produced output: identifying failure patterns, building validation checks, and feeding findings back to the Engineering Manager so the systems improve over time.
- Your own QA agent team. You don't do all of this manually. You build and maintain a team of QA agents — autonomous tools that execute test suites, generate test cases from specs, scan for regressions, validate API contracts, and flag inconsistencies. You define their scope, configure their workflows, and validate their output. This is how one QA engineer covers an entire platform without drowning.
- Test strategy. You decide what gets tested, at what layer, and with what priority. You think in terms of risk: where does a failure cost the most, and how do we catch it as early as possible?
- Integration and system-level validation. The hardest bugs won't be inside a single service. They'll be in the seams between services, between internal systems and external suppliers, and between asynchronous events that don't arrive in the order you expect. You own coverage for these interaction points.
- Test automation. You build and maintain automated test suites — API-level integration tests, end-to-end flows, regression suites that run on every deployment. Your QA agents handle the scale; you handle the strategy and judgment.
- Quality culture. When you see the same class of bug appearing repeatedly, you work with the Engineering Manager to fix the root cause. You prevent defects, not just catch them.
Tools
- Core — you use these daily:
  - Jest, Playwright, TypeScript test automation, CI/CD pipeline configuration (GitHub Actions), API testing tools (Supertest, Bruno, or equivalent), Claude Code or equivalent agentic coding tools.
- Expected — you know these or can get productive quickly:
  - Pact (contract testing), k6 (performance and load testing), MSW (service mocking), mobile testing frameworks (Detox or Maestro), Git-based workflows, Swagger/OpenAPI spec validation.
- Valuable — these set you apart:
  - OWASP ZAP (security scanning), BrowserStack or AWS Device Farm (real device testing), observability tools (Datadog, Sentry — for tracing test failures back through services), Faker.js (test data generation).