
SDET Interview Prep for AI-Era Hiring: 50 Questions I Actually Ask in 2026

I have interviewed over 300 SDET candidates in the last five years. The questions I ask today are not the questions I asked in 2022. AI has rewritten the hiring playbook. If you walk into a 2026 interview expecting “explain the difference between implicit and explicit waits” to be the hardest question, you are preparing for the wrong test.

This guide contains the 50 questions I actually ask — organized by experience level and mapped to what I am really testing. I also include the salary bands these interviews target, because context matters. A 15 LPA service-company round and a 40 LPA product-company round are not the same conversation.


The New Hiring Reality: What Changed in 2026

Three shifts have permanently altered SDET interviews.

Shift 1: Tool fluency is assumed, architecture is tested. Every candidate with two years of experience knows Selenium locators. I stopped asking “what is XPath” in 2024. Now I ask: “Your test suite has 4,000 tests and takes 6 hours to run. The CTO wants it under 30 minutes without losing coverage. What do you change?” The answer reveals whether you think in systems or scripts.
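To give a flavor of what a systems answer includes: parallelize within a machine, then shard across machines, then rebalance the pyramid. A minimal sketch of the first two levers, assuming a Playwright suite (the worker and shard counts are illustrative, not prescriptive):

```typescript
// playwright.config.ts: the parallelism levers a systems answer starts with.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true, // run tests within each file in parallel, not just across files
  workers: 8,          // per-machine parallelism; tune to CPU and app capacity
  retries: 1,          // a single retry absorbs infra blips without hiding real flakes
});

// In CI, split the suite across machines, for example 8 shards on 8 runners:
//   npx playwright test --shard=1/8   (runner 1)
//   npx playwright test --shard=2/8   (runner 2), and so on.
```

Parallelism alone rarely closes a 6-hour-to-30-minute gap; the rest of a strong answer pushes redundant E2E checks down to API and component tests and runs only the tests affected by each change.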

Shift 2: AI testing is no longer niche. In 2023, AI testing questions were reserved for senior roles at AI startups. In 2026, I ask junior candidates how they use AI copilots to generate test cases. I ask mid-level candidates to evaluate an LLM-powered feature. I ask senior candidates to design a quality strategy for an agentic system. The level-specific breakdown in my earlier guide still holds, but the AI slice has grown from 5% of the interview to roughly 30%.

Shift 3: Playwright dominance is absolute. The numbers are brutal. Playwright pulled 203 million npm downloads last month. Selenium pulled 8.3 million. Playwright has 87,840 GitHub stars versus Selenium’s 34,075. When a candidate lists “Selenium expert” as their primary skill with no Playwright exposure, I mentally downgrade their profile for product-company roles. The stability data explains why teams are migrating aggressively.

Foundation Layer: 10 Questions for 0-2 Years

At this level, I am testing three things: can you write clean code, can you debug a failing test, and do you understand the “why” behind basic concepts.

Code and Debugging (Questions 1-4)

  1. Write a function that validates whether a JSON response contains all required fields. How do you handle nested objects? (A sample answer follows this list.)
  2. You run a test and get “StaleElementReferenceException.” Walk me through your debugging steps without changing the test code.
  3. Explain the difference between async/await and .then() in JavaScript. When would a test fail because you chose the wrong one?
  4. Given a failing CI build where tests pass locally, list the 5 most likely causes in order of probability.
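For Question 1, here is the shape of a strong starting answer. This is a sketch; the dot-separated field-path convention is my own assumption:

```typescript
// Check that every required field path exists in a parsed JSON response.
// Dot-separated paths ("user.address.city") handle nested objects.
function findMissingFields(response: unknown, requiredPaths: string[]): string[] {
  const missing: string[] = [];
  for (const path of requiredPaths) {
    let current: unknown = response;
    for (const key of path.split('.')) {
      if (current === null || typeof current !== 'object' || !(key in current)) {
        missing.push(path);
        break;
      }
      current = (current as Record<string, unknown>)[key];
    }
  }
  return missing; // an empty array means every required field is present
}

// findMissingFields({ user: { id: 1 } }, ['user.id', 'user.email']) -> ['user.email']
```

Candidates who add that, past a handful of checks, schema validation with Zod or Ajv beats hand-rolled traversal earn extra credit.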

What I am really testing: Question 2 separates candidates who Google fixes from candidates who reason about DOM lifecycle. The strong answer starts with “the element was present when I located it but detached before I interacted with it,” then discusses timing, AJAX rendering, and retry strategies. The weak answer says “I would add a Thread.sleep.”
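When the conversation does turn to code, I want re-location, not sleeps. A hedged sketch with selenium-webdriver; the helper name, retry count, and pause are my own choices:

```typescript
import { By, WebDriver, error } from 'selenium-webdriver';

// Re-locate the element on staleness instead of sleeping blindly.
async function clickFresh(driver: WebDriver, locator: By, attempts = 3): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      const el = await driver.findElement(locator); // fresh lookup on every attempt
      await el.click();
      return;
    } catch (e) {
      // The DOM re-rendered between locate and interact; retry unless out of attempts.
      if (!(e instanceof error.StaleElementReferenceError) || i === attempts - 1) throw e;
      await driver.sleep(250);
    }
  }
}
```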

Testing Fundamentals (Questions 5-8)

  5. What is the difference between verification and validation? Give a real example from an e-commerce checkout flow.
  6. Design test cases for a password reset feature. Include negative cases that junior testers often miss.
  7. When would you choose API testing over UI testing for a given scenario? When would you need both?
  8. Explain equivalence partitioning and boundary value analysis using a concrete example (e.g., age field accepting 18-65; a sketch follows this list).
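For Question 8, the technique matters more than the tool, but a parameterized test makes it concrete. A sketch, with the hypothetical isEligibleAge function standing in for the real validation and the 18-65 range taken from the question:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical function under test: accepts ages 18-65 inclusive.
const isEligibleAge = (age: number): boolean => age >= 18 && age <= 65;

// Boundary value analysis: on, just inside, and just outside each boundary.
// Equivalence partitioning: one representative per partition is enough.
const cases: Array<[number, boolean]> = [
  [17, false], [18, true], [19, true],   // lower boundary
  [64, true], [65, true], [66, false],   // upper boundary
  [40, true], [0, false], [120, false],  // partition representatives
];

for (const [age, expected] of cases) {
  test(`age ${age} should ${expected ? 'pass' : 'fail'} validation`, () => {
    expect(isEligibleAge(age)).toBe(expected);
  });
}
```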

Automation Basics (Questions 9-10)

  9. Compare hard waits, implicit waits, and explicit waits. Why is mixing implicit and explicit waits an anti-pattern?
  10. What is the Page Object Model, and what problem does it solve? When would you intentionally not use it?

Salary context for this level: Service companies offer ₹4-8 LPA. Product companies with strong AI-testing expectations start at ₹8-12 LPA. The candidate who answers Question 10 with “POM reduces duplication but over-engineers small projects; for a 10-test MVP I might use inline locators” gets the higher offer.
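For reference, the pattern Question 10 targets is just an encapsulation boundary between locators and tests. A minimal Playwright sketch; the page, labels, and button name are illustrative:

```typescript
import { type Locator, type Page } from '@playwright/test';

// Locators and user actions live here, not in the tests that use them.
export class LoginPage {
  readonly email: Locator;
  readonly password: Locator;
  readonly submit: Locator;

  constructor(page: Page) {
    this.email = page.getByLabel('Email');
    this.password = page.getByLabel('Password');
    this.submit = page.getByRole('button', { name: 'Sign in' });
  }

  async login(email: string, password: string): Promise<void> {
    await this.email.fill(email);
    await this.password.fill(password);
    await this.submit.click();
  }
}

// When the login form changes, only this class changes. That is the duplication POM removes.
```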

Automation Architecture: 15 Questions for 3-5 Years

This is where interviews get serious. I am no longer testing knowledge. I am testing judgment.

Framework Design (Questions 11-18)

  11. Design a test automation framework for a micro-frontend architecture with three independent teams deploying on different schedules.
  12. How do you manage test data across environments when the database is refreshed nightly?
  13. Your suite has 400 flaky tests. You have two weeks to fix them. What is your triage process?
  14. Describe your CI/CD integration strategy. Where do unit tests, integration tests, and E2E tests run, and why?
  15. How do you handle authentication in automated tests when the auth flow involves OAuth2 and MFA?
  16. What reporting strategy would you use for a suite that runs 10,000 tests per day across 5 products?
  17. Explain how you would implement parallel test execution without causing race conditions in shared test data.
  18. When would you choose component testing over E2E testing? What tools would you use?

What separates mid-level from junior: Question 13 is my favorite flakiness diagnostic. Strong candidates propose a data-driven approach: categorize flakes by root cause (timing, data, environment, product bug), fix the top 20% by frequency first, and add retry logic only as a temporary bridge. Weak candidates say “I will add more waits.”
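The data-driven version of that answer can be tiny. A sketch of the triage step, assuming a JSON export of recent failures (the record shape is my assumption):

```typescript
// Bucket flaky failures by root cause and rank buckets by frequency,
// so two weeks of fixing effort goes where it pays off most.
type FlakeRecord = { test: string; cause: 'timing' | 'data' | 'environment' | 'product-bug' };

function triage(failures: FlakeRecord[]): Array<{ cause: string; count: number }> {
  const counts = new Map<string, number>();
  for (const f of failures) {
    counts.set(f.cause, (counts.get(f.cause) ?? 0) + 1);
  }
  return [...counts.entries()]
    .map(([cause, count]) => ({ cause, count }))
    .sort((a, b) => b.count - a.count); // biggest bucket first
}
```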

API and Contract Testing (Questions 19-23)

  19. How do you test a REST API endpoint that returns paginated results? What edge cases matter?
  20. Explain contract testing. When does Pact make sense over schema validation with Zod or Joi?
  21. How do you validate that a GraphQL query returns the exact shape the frontend expects?
  22. Your API returns a 500 status but the body says {"error": "rate limited"}. Is this a bug? How do you report it?
  23. Design a test strategy for a third-party API integration where you do not control uptime or data.

Modern Tooling (Questions 24-25)

  24. Compare Playwright, Cypress, and Selenium for a new greenfield project. What factors drive your choice?
  25. How do you implement visual regression testing without creating brittle screenshot comparisons?

For Question 24, I expect candidates to cite real data. Playwright’s auto-wait architecture eliminates 60-70% of flaky test maintenance. Cypress is strong for component testing but trails Playwright on cross-browser coverage for full E2E. Selenium is the safe choice for legacy migrations but requires explicit wait management. The stability benchmarks back these claims.
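When candidates cite auto-waiting, I push them to show what it replaces. A minimal contrast, assuming a generic submit-then-confirmation flow (URL and labels are illustrative):

```typescript
import { test, expect } from '@playwright/test';

test('order confirmation appears after submit', async ({ page }) => {
  await page.goto('https://example.com/checkout');
  // click() auto-waits for the button to be attached, visible, stable, and enabled,
  // replacing the explicit WebDriverWait logic a Selenium suite encodes by hand.
  await page.getByRole('button', { name: 'Place order' }).click();
  // Web-first assertion: retries until the text appears or the timeout elapses.
  await expect(page.getByText('Order confirmed')).toBeVisible();
});
```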

System Design and Quality Leadership: 12 Questions for 6+ Years

Senior SDET interviews look like staff-engineer system design rounds. I am testing whether you can shape quality culture, not just write tests.

System Design for Quality (Questions 26-32)

  26. Design a testing strategy for a microservices platform with 50 services, 3 frontends, and 200 deployments per week.
  27. How do you test in production safely? Discuss feature flags, canary analysis, and synthetic monitoring.
  28. Your organization wants to shift from 10% automation coverage to 80% in 12 months. Build the roadmap.
  29. How do you handle test data in a GDPR-compliant environment where PII cannot exist in staging databases?
  30. Design a quality gate that prevents deploys without adding 15 minutes to the pipeline.
  31. What metrics would you track to prove the ROI of test automation to a skeptical CFO?
  32. How do you maintain test quality when 15 engineers contribute to the same automation repository?

Leadership and Influence (Questions 33-37)

  33. Tell me about a time you convinced leadership to invest in test infrastructure. What was the business case?
  34. How do you handle a senior developer who dismisses test failures as “the test is wrong, not the code”?
  35. Describe how you would onboard a manual testing team into automation without demoralizing them.
  36. What is your approach to defining “done” for a testing story in an Agile sprint?
  37. How do you balance speed and coverage when the CEO demands a release in 48 hours?

The leadership litmus test: Question 34 is intentionally uncomfortable. I am looking for emotional intelligence. The strong answer involves pairing with the developer, reproducing the failure together, and establishing a team norm that “flaky test” is a diagnosis, not a dismissal. The weak answer blames the developer or proposes process punishment.

The AI Testing Layer: 13 Questions for Every Level

This is the section that did not exist in my interview playbook three years ago. In 2026, every SDET candidate needs AI fluency. The engineers who built these skills first are now earning 35-50 LPA.

AI-Aware Testing (Questions 38-43)

  38. How do you test a feature that uses an LLM to generate product descriptions? What is your oracle?
  39. Explain the difference between deterministic testing and probabilistic testing. How do you apply both to an AI feature?
  40. What evaluation metrics would you use to assess the quality of an LLM-powered chatbot?
  41. How do you write an automated test for an image-generation endpoint that returns different outputs for the same prompt?
  42. Describe how you would test for prompt injection vulnerabilities in an AI-powered application.
  43. What is the “temperature” parameter in LLM inference, and how does it affect your testing strategy?

What I am testing: Question 38 exposes candidates who have never tested non-deterministic software. The strong answer recognizes that traditional pass/fail assertions break down. You need evaluation frameworks — DeepEval, PromptFoo, or custom metrics — that score responses against relevance, accuracy, and safety criteria. You cannot assert expect(response).toBe("exact text") when the model generates natural language.
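In code, the shift is from equality to thresholds. A deliberately generic sketch: judgeRelevance stands in for whatever evaluator you adopt (a DeepEval metric, a PromptFoo assertion, or an LLM-as-judge call), and the 0.8 threshold is an arbitrary example:

```typescript
// Hypothetical evaluator: scores how well the generated description matches the
// product facts, 0.0-1.0. The signature is an assumption for illustration; in
// practice it wraps an eval-framework metric or an LLM-as-judge request.
declare function judgeRelevance(facts: string, output: string): Promise<number>;

async function checkProductDescription(facts: string, generated: string): Promise<void> {
  // No exact-match assertion: the model phrases the output differently every run.
  const relevance = await judgeRelevance(facts, generated);
  if (relevance < 0.8) {
    throw new Error(`Relevance ${relevance.toFixed(2)} below the 0.8 threshold`);
  }
  // Accuracy and safety get their own scored checks in the same style.
}
```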

Agentic and MCP Testing (Questions 44-48)

  44. How do you test an AI agent that autonomously navigates a web application to complete a task?
  45. What is the Model Context Protocol (MCP), and how does it change the way you design automation?
  46. Design a test for a LangGraph workflow with planner, generator, and healer nodes. What do you assert at each stage?
  47. How do you validate that an AI agent’s tool selection is optimal for a given task?
  48. Explain how you would implement self-healing locators and when they are dangerous.

For Question 45, I expect candidates to know that MCP standardizes how LLMs interact with external tools. In testing terms, it means you need to validate both the LLM’s reasoning and the tool’s execution. The planner-generator-healer architecture is becoming the standard pattern for agentic test systems.
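One concrete pattern for this family of questions is trace assertion: record the agent’s tool calls and assert deterministically on the sequence, even though the reasoning that produced it is probabilistic. A sketch; the trace shape and tool names are my assumptions, not an MCP or LangGraph API:

```typescript
// Hypothetical agent trace: which tools the agent invoked, in order.
type ToolCall = { tool: string; args: Record<string, unknown> };

// The agent must search before adding to cart, and must never touch checkout
// in a read-only task. These are rules you can assert even when the wording varies.
function assertToolTrace(trace: ToolCall[]): void {
  const tools = trace.map((t) => t.tool);
  const searchIdx = tools.indexOf('search_products');
  const addIdx = tools.indexOf('add_to_cart');
  if (searchIdx === -1 || addIdx === -1 || searchIdx > addIdx) {
    throw new Error(`Expected search before add_to_cart, got: ${tools.join(' -> ')}`);
  }
  if (tools.includes('checkout')) {
    throw new Error('Agent invoked a forbidden tool for this task: checkout');
  }
}
```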

Tooling and Workflow (Questions 49-50)

  49. How do you use AI copilots in your daily testing workflow without creating blind spots?
  50. Build a 30-day plan to introduce AI-augmented testing into a team that currently uses only Selenium and TestNG.

Question 50 is my closing question for senior candidates. The strong answer starts with people, not tools. Week 1: demonstrate value with a small Playwright + AI pilot on one flaky test suite. Week 2: share metrics (time saved, bugs caught). Week 3: train two volunteers. Week 4: propose a migration roadmap. The weak answer dumps a list of tools with no change-management strategy.

How I Score Answers: The Rubric Candidates Never See

I use a 4-point scale for every question. Here is what each level looks like in practice.

  • 1 point — Memorized: The candidate recites a definition without context. “Explicit wait is when you use WebDriverWait.” So what? Why does that matter?
  • 2 points — Applied: The candidate connects the concept to a real scenario. “I use explicit waits when an element loads via AJAX because implicit waits time out too early.”
  • 3 points — Analyzed: The candidate compares trade-offs. “Explicit waits add boilerplate but eliminate flakiness. Implicit waits are concise but unpredictable. For my current project, I chose explicit waits because we have 12 different async loading patterns.”
  • 4 points — Architected: The candidate designs a system. “I built a custom wait utility that wraps explicit waits with automatic retry and logging. It cut our flaky test rate from 18% to 2% over three months. Here is the TypeScript implementation…”

Most candidates score 2s. The hires score 3s and 4s. A candidate who scores 4 on architecture questions and 2 on AI questions might still get hired at a traditional product company. A candidate who scores 2 on architecture and 4 on AI questions might get hired at an AI-first startup. The 2026 hiring playbook has shifted toward hybrid profiles.

India Context: What 15 LPA vs 40 LPA Interviews Look Like

I want to be specific about money because QA engineers in India are systematically underpaid relative to their impact.

The 15 LPA interview focuses on: Can you automate test cases using Selenium? Can you write basic SQL? Can you report bugs in Jira? These are service-company rounds. The interviewer often has 45 minutes and a checklist. If you answer 7 out of 10 definition questions correctly, you pass.

The 40 LPA interview focuses on: Can you design a testing system that scales? Can you evaluate AI-generated output quality? Can you influence engineering culture? These are product-company rounds. The interviewer is a Principal Engineer or Engineering Manager who has reviewed your GitHub before you entered the room. They ask open-ended questions and probe for depth.

The gap is not intelligence. The gap is skill stack. Five skills bridge 15 LPA to 40 LPA: modern browser automation, LLM evaluation, agentic systems, CI/CD infrastructure, and system design thinking.

Here is the raw data I use when candidates ask about market rates:

  • Entry-level QA (0-2 years): ₹4-8 LPA at service companies, ₹8-14 LPA at product/AI companies
  • Mid-level SDET (3-5 years): ₹12-20 LPA traditional, ₹20-30 LPA with Playwright + AI skills
  • Senior SDET (6+ years): ₹25-35 LPA standard, ₹35-50 LPA for AI infrastructure roles
  • Principal SDET / Quality Architect: ₹40-60 LPA at top product companies

The 3 Answers That Immediately Disqualify You

There are wrong answers, and then there are disqualifying answers. These three signal that a candidate is not ready for a modern SDET role.

1. “I do not use AI tools because I do not trust them.”

This is the fastest way to lose my interest. I am not asking you to replace your brain with ChatGPT. I am asking whether you understand that AI-assisted testing is now a core competency. The strong candidate says: “I use Copilot for boilerplate generation but review every line. I use AI for exploratory test ideas but never for production test assertions without human validation.”

2. “Selenium is the industry standard, so I stick with it.”

Selenium is not the industry standard for new projects in 2026. It is the legacy standard. Playwright’s 203 million monthly npm downloads versus Selenium’s 8.3 million tell the market story. The strong candidate says: “I maintain Selenium suites for legacy systems and build new suites in Playwright because the stability ROI is measurable.”

3. “Testing is about finding bugs.”

This mindset caps your career at mid-level. Testing is about building confidence in releases. It is about preventing bugs, not just finding them. It is about enabling speed, not enforcing bureaucracy. Senior candidates who frame quality as a business enabler rather than a cost center get promoted.

Key Takeaways

  • SDET interviews in 2026 are 30% AI testing, 40% architecture, and 30% fundamentals. Prepare accordingly.
  • The 50 questions above map to a scoring rubric that rewards context, trade-offs, and system thinking over memorization.
  • Playwright fluency is now table stakes for product companies. Selenium alone prices you out of premium roles.
  • AI testing questions are no longer niche. Every candidate must understand LLM evaluation, non-deterministic oracles, and agentic validation.
  • The difference between a 15 LPA and 40 LPA interview is not years of experience. It is the ability to design testing systems and influence engineering culture.

FAQ

How many of these 50 questions should I prepare fully?

Prepare deep answers for 15-20 questions in your experience band. For the rest, have a 30-second framework. I rarely ask all 50 in one interview. I sample 8-12 questions based on the candidate’s resume and the role’s requirements.

Do service companies in India ask AI testing questions?

Not consistently yet. Tier-1 service companies (TCS, Infosys, Wipro) are beginning to add basic AI awareness questions for senior roles. Mid-size product companies and AI startups ask them at every level. If you are targeting 25+ LPA, AI fluency is mandatory.

What if I have no AI testing experience?

Build it. Take an LLM-powered feature you use daily (a chatbot, a recommendation engine, a search autocomplete) and design a test strategy for it. Document your approach in a blog post or GitHub repo. That counts as experience in my interview.

Should I learn Java or TypeScript for 2026 interviews?

TypeScript with Playwright is the strongest signal for product companies. Java with Selenium still works for enterprise roles. Python is excellent for AI testing because most LLM evaluation frameworks (DeepEval, PromptFoo, Ragas) are Python-first. If you have time for one language, choose TypeScript. If you have time for two, add Python.

How do I practice system design questions without real-world experience?

Study public postmortems from companies like Netflix, Shopify, and Slack. Map their quality challenges to testing strategies. Practice designing test architectures for open-source projects you use. The skill is pattern recognition, not tenure.
