AI SDET Roadmap 2026: 8 LPA to 25 LPA for Manual Testers
I have talked to over 200 manual testers in the last 12 months. The story is almost identical: they joined a service company, learned to write basic test cases in Excel, and watched their salary freeze at ₹8 LPA while their developer friends crossed ₹20 LPA. In 2026, that gap is not closing on its own. It is widening.
The difference between a manual tester who stays at ₹8 LPA and one who breaks into ₹25 LPA is not a management promotion. It is a deliberate transition into an AI SDET role. Product companies in Bangalore and Hyderabad are hiring AI SDETs at ₹20-35 LPA right now. These are individual contributor positions. They demand a specific stack of skills that most manual testers have never been told to build.
This article is the exact roadmap. No motivational fluff. Just a month-by-month skill plan, real salary numbers, and the mistakes that keep testers stuck.
Table of Contents
- The Salary Reality — Why Manual Testers Stall at 8 LPA
- What Is an AI SDET in 2026?
- Phase 1: Build the Automation Foundation (Months 1-3)
- Phase 2: Inject AI Into Your Testing (Months 4-6)
- Phase 3: Agentic Systems & Architecture (Months 7-9)
- Phase 4: Portfolio & Interview Prep (Months 10-12)
- India Context — Where the 25 LPA Offers Are Coming From
- Common Traps That Keep Testers Stuck at 8 LPA
- Key Takeaways
- FAQ
The Salary Reality — Why Manual Testers Stall at 8 LPA
Most QA salary guides stop at “QA Manager = ₹30 LPA.” That ceiling is real if your skill set ends at manual regression and basic Selenium scripts. According to The Ultimate Guide to QA Salaries, entry-level QA in India sits at ₹3-5 LPA, mid-level automation engineers hit ₹10-12 LPA, and senior QA professionals plateau around ₹12-25 LPA. QA Managers touch ₹30 LPA.
But there is a parallel track. Manual testers in service companies (TCS, Infosys, Wipro) often get stuck at ₹6-9 LPA because their work is priced as “resource hours,” not “engineering impact.” When a company bills your time at a fixed rate, your salary has a hard cap. Product companies do not bill hours. They ship features. And they pay premiums for engineers who can guarantee that those features work with minimal human intervention.
Here is where the numbers get interesting. Playwright now has 87,888 GitHub stars and pulled 200 million npm downloads last month alone. Selenium, with 34,078 stars, is not dead — but Playwright is where new hiring happens. I did not start getting premium interview calls until I had Playwright + TypeScript on my resume. Product companies are not looking for “automation testers.” They are looking for AI SDETs who can design self-healing suites, evaluate LLM outputs, and build agentic test pipelines.
The gap between 8 LPA and 25 LPA is not years of experience. It is a 12-month skill rebuild.
What Is an AI SDET in 2026?
An AI SDET is not a manual tester who uses ChatGPT to write test cases. That is a power user, not an engineer.
An AI SDET is a software development engineer in test who:
- Builds and maintains automated test frameworks using modern tools like Playwright, Pytest, and TypeScript.
- Evaluates AI model outputs for hallucinations, bias, and safety using frameworks like PromptFoo and DeepEval.
- Designs agentic test systems where AI agents generate, execute, and maintain tests with minimal human intervention.
- Integrates testing into CI/CD pipelines and treats test infrastructure as production code.
LinkedIn job postings in 2026 routinely include titles like “AI Test Lead,” “AI QA Engineer,” and “LLM Test Architect.” These roles did not exist two years ago. Companies like Happiest Minds, Tekion Corp, and Luxoft are listing Principal SDET — AI Infrastructure roles at ₹35-50 LPA. These are not management roles. They are individual contributor positions that demand a fusion of test automation, LLM evaluation, and system design.
If you are still writing test cases in Excel and running regression manually, you are pricing yourself out of the market. The good news: the transition is mechanical. You do not need a computer science degree. You need a 12-month execution plan.
Phase 1: Build the Automation Foundation (Months 1-3)
Month 1: Master One Programming Language
Pick Python or TypeScript. Do not pick Java unless your current company forces it. Product companies hiring AI SDETs overwhelmingly prefer Python (for AI tooling) or TypeScript (for Playwright). I recommend TypeScript because it pairs directly with Playwright and modern frontend stacks.
Your goal in month 1 is not to become a developer. It is to write clean, readable functions. Learn:
- Variables, loops, and conditionals
- Async/await (critical for Playwright)
- Basic object-oriented patterns (classes, inheritance)
- How to use Git and GitHub
Spend 1 hour every weekday. No exceptions.
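To make that bar concrete, here is a sketch of the level of code you should be comfortable writing by the end of month 1. It is in Python (the TypeScript equivalent is just as valid), and the class, function, and values are invented for illustration:

# Month-1 target: clean functions, a simple class, and async/await.
# CartItem, fetch_price, and all values are invented for illustration.
import asyncio

class CartItem:
    def __init__(self, sku: str, qty: int):
        self.sku = sku
        self.qty = qty

    def total(self, unit_price: float) -> float:
        return self.qty * unit_price

async def fetch_price(sku: str) -> float:
    # Stand-in for a real API call; asyncio.sleep mimics network latency.
    await asyncio.sleep(0.1)
    return 499.0

async def main():
    item = CartItem("SKU-123", 2)
    price = await fetch_price(item.sku)
    print(f"Cart total for {item.sku}: {item.total(price)}")

asyncio.run(main())

If you can read every line of that without friction, you are ready for month 2.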
Month 2: Playwright Fundamentals
Playwright is the most important automation tool for an AI SDET in 2026. It has auto-waiting, built-in API testing, tracing, and visual regression. Here is a single script that covers UI navigation, API validation, and visual regression in under 30 lines:
import { test, expect } from '@playwright/test';

test('checkout flow with API validation', async ({ page, request }) => {
  // UI: add item to cart
  await page.goto('/product/123');
  await page.getByTestId('add-to-cart').click();
  await expect(page.getByTestId('cart-count')).toHaveText('1');

  // API: verify cart backend state
  const cart = await request.get('/api/cart');
  expect(await cart.json()).toEqual({ items: [{ id: 123, qty: 1 }] });

  // Visual: screenshot comparison for regression
  await expect(page).toHaveScreenshot('checkout-page.png');
});
No explicit waits. No Thread.sleep. No flaky retries. That stability is why product companies pay ₹35+ LPA for Playwright specialists.
Month 3: Framework Architecture & CI/CD
Stop writing standalone scripts. Build a framework:
- Page Object Model (or Component Object Model)
- Configuration management with environment variables
- Parallel execution with sharding
- Reporting with built-in Playwright HTML report or Allure
Then plug it into GitHub Actions or Jenkins. If your tests do not run on every pull request, they are not real tests. I see too many testers write 500 tests that run once a week. That is a hobby, not a job.
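To show what “framework, not scripts” means, here is a minimal Page Object Model sketch. It uses Playwright's Python binding with the pytest-playwright plugin for consistency with the AI tooling later in this roadmap; the same pattern works in TypeScript. The LoginPage class, the selectors, and the credentials are hypothetical, and page.goto("/login") assumes a base_url in your config:

# Minimal Page Object Model sketch with pytest-playwright.
# LoginPage, its selectors, and the credentials are hypothetical.
from playwright.sync_api import Page, expect

class LoginPage:
    def __init__(self, page: Page):
        self.page = page
        self.email = page.get_by_test_id("email")
        self.password = page.get_by_test_id("password")
        self.submit = page.get_by_test_id("login-submit")

    def login(self, email: str, password: str) -> None:
        self.page.goto("/login")  # assumes base_url is configured
        self.email.fill(email)
        self.password.fill(password)
        self.submit.click()

def test_user_can_log_in(page: Page):
    LoginPage(page).login("qa@example.com", "not-a-real-password")
    expect(page.get_by_test_id("dashboard")).to_be_visible()

The test reads like a requirement, and when the login UI changes, you fix one class instead of fifty scripts. That is the whole point of the pattern.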
Phase 2: Inject AI Into Your Testing (Months 4-6)
Month 4: AI-Assisted Test Generation
Learn to use GitHub Copilot, Cursor, or Codeium to generate test scripts from natural language requirements. The skill is not prompting. The skill is evaluating what the AI produces.
Here is what I do. I paste a user story into Cursor and ask it to generate Playwright tests. Then I check:
- Does it cover the happy path and at least 2 edge cases?
- Are the selectors stable (data-testid, not CSS nth-child)?
- Does it handle async state correctly?
Most AI-generated tests are 70% correct. The remaining 30% is where your value lives. If you can spot the missing edge case, you are already doing AI SDET work.
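Here is what that remaining 30% looks like in practice. In this hedged sketch (Python binding, invented routes, selectors, and messages), the first test is what the AI typically generates; the second is the malformed-input edge case a human adds:

# First test: typical AI output (happy path only).
# Second test: the edge case a human reviewer adds.
# Routes, selectors, and messages are invented for illustration.
from playwright.sync_api import Page, expect

def test_password_reset_happy_path(page: Page):
    page.goto("/forgot-password")
    page.get_by_test_id("email").fill("user@example.com")
    page.get_by_test_id("submit").click()
    expect(page.get_by_test_id("confirmation")).to_be_visible()

def test_password_reset_rejects_malformed_email(page: Page):
    page.goto("/forgot-password")
    page.get_by_test_id("email").fill("not-an-email")
    page.get_by_test_id("submit").click()
    expect(page.get_by_test_id("error")).to_have_text("Enter a valid email address")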
Month 5: LLM Evaluation Basics
AI features are not deterministic. A chatbot might give a correct answer 95% of the time and hallucinate the other 5%. Traditional pass/fail testing does not work here. You need evaluation frameworks.
Start with PromptFoo. It lets you define test cases for LLM prompts, run them against multiple models, and score outputs with automated metrics (BLEU, ROUGE, cosine similarity). Build a simple project: test a customer support chatbot against 50 prompt variations and measure accuracy.
Then move to DeepEval for Python. It provides out-of-the-box metrics like hallucination detection, answer relevancy, and contextual precision. These are the tools that separate a manual tester from an AI SDET.
from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import HallucinationMetric, AnswerRelevancyMetric

def test_support_chatbot():
    test_case = LLMTestCase(
        input="How do I reset my password?",
        actual_output="You can reset your password by clicking the forgot password link on the login page.",
        context=["Users can reset passwords via the forgot password link."]
    )
    assert_test(test_case, [HallucinationMetric(threshold=0.5), AnswerRelevancyMetric(threshold=0.7)])
This is a real test you can run in CI. It scores the LLM output against context and fails the build if hallucination exceeds 50%. That is the kind of pipeline product companies pay premiums for.
Month 6: Hybrid AI + Automation Projects
Combine your Playwright framework with AI evaluation. For example:
- Use Playwright to navigate an AI-powered search page.
- Extract the AI-generated answer.
- Feed the answer into a DeepEval metric to check for hallucination.
- Fail the test if the hallucination score crosses a threshold.
This is not theoretical. I built a similar pipeline for BrowsingBee.com. It catches LLM drift before it reaches production. That is the kind of project that gets you hired at ₹25 LPA.
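Here is a hedged sketch of that loop, assuming a page that renders its AI answer under a data-testid and a known-good context passage. The URL, selector, query, and context are all placeholders:

# Hedged sketch: Playwright drives the AI-powered page, DeepEval scores the
# answer. URL, selector, query, and context passage are placeholders.
# DeepEval's metrics call an LLM judge under the hood, so an API key
# (e.g. OPENAI_API_KEY) must be configured.
from playwright.sync_api import sync_playwright
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase

def test_ai_search_answer_is_grounded():
    # Step 1-2: navigate the page and extract the AI-generated answer
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/search?q=refund+policy")
        answer = page.get_by_test_id("ai-answer").inner_text()
        browser.close()

    # Step 3-4: score the answer and fail on excessive hallucination
    metric = HallucinationMetric(threshold=0.5)
    metric.measure(LLMTestCase(
        input="refund policy",
        actual_output=answer,
        context=["Refunds are issued within 14 days of purchase."],
    ))
    assert metric.is_successful(), f"Hallucination score {metric.score} crossed the threshold"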
Phase 3: Agentic Systems & Architecture (Months 7-9)
Month 7: Understanding Agents
An AI agent is a system that plans, executes, and reflects on tasks with minimal human input. For testing, this means an agent that can:
- Read a requirements document
- Generate test cases
- Execute them in a browser
- Report failures with root-cause analysis
This is the future of QA. And it is not science fiction. LangGraph already enables this.
Month 8: Build a Planner-Generator-Healer Pipeline
LangGraph lets you build state machines where nodes are AI agents and edges are decision logic. I wrote a detailed guide on LangGraph for Test Automation: Building a Planner-Generator-Healer Pipeline. The core idea is three agents working together:
- Planner Agent: Reads Jira tickets and decides what to test.
- Generator Agent: Writes Playwright test code.
- Healer Agent: Fixes broken selectors and updates tests when the UI changes.
You do not need to build all three in month 8. Start with the Planner. Use LangChain + LangGraph to read a user story and output a JSON test plan. Then hook that JSON into your Playwright framework. Even a simple version of this impresses hiring managers because it shows you understand both testing and AI architecture.
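Here is a minimal Planner sketch under those assumptions. The model name, prompt wording, and state fields are my choices, not requirements; it assumes the langgraph and langchain-openai packages and an OpenAI API key:

# Hedged Planner sketch in LangGraph: user story in, JSON test plan out.
# Model, prompt, and state fields are assumptions; error handling omitted.
from typing import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class PlanState(TypedDict):
    user_story: str
    test_plan: str  # JSON string produced by the planner

llm = ChatOpenAI(model="gpt-4o-mini")

def planner(state: PlanState) -> dict:
    prompt = (
        "Return a JSON array of test cases (name, steps, expected) "
        f"for this user story:\n{state['user_story']}"
    )
    return {"test_plan": llm.invoke(prompt).content}

graph = StateGraph(PlanState)
graph.add_node("planner", planner)
graph.add_edge(START, "planner")
graph.add_edge("planner", END)
app = graph.compile()

result = app.invoke({"user_story": "As a user, I can reset my password via email."})
print(result["test_plan"])

Feed that JSON into a script that renders Playwright test stubs, and you have a working Planner-to-Generator handoff.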
Month 9: MCP & Tool Integration
MCP (Model Context Protocol) is how agents talk to external tools — browsers, APIs, databases. Learn to build an MCP server that exposes your test framework to an LLM. This is the bridge between “I write tests” and “I build test infrastructure.”
Here is a minimal MCP server in Python that exposes a “run_playwright_test” tool to an agent:
from mcp.server.fastmcp import FastMCP
import subprocess

mcp = FastMCP("test_runner")

@mcp.tool()
def run_playwright_test(test_path: str) -> str:
    """Run a Playwright spec and return its output for the agent to inspect."""
    result = subprocess.run(
        ["npx", "playwright", "test", test_path],
        capture_output=True, text=True
    )
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    mcp.run()
An LLM agent can now call run_playwright_test("tests/checkout.spec.ts"), receive the output, and decide whether to fix a broken test or report a bug. This is the architecture that senior AI SDETs design.
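For completeness, here is a hedged sketch of the calling side using the MCP Python SDK. In production the caller is your LLM agent; this script stands in for it, and server.py and the test path are placeholders:

# Hedged client sketch with the MCP Python SDK: launch the server over stdio
# and invoke the run_playwright_test tool. "server.py" is a placeholder name;
# in production an LLM agent, not this script, would make the call.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "run_playwright_test", {"test_path": "tests/checkout.spec.ts"}
            )
            print(result.content)

asyncio.run(main())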
By the end of month 9, you should have a GitHub repository with:
- A Playwright framework with 50+ tests
- An AI evaluation pipeline using PromptFoo or DeepEval
- A LangGraph prototype that generates test plans from requirements
This portfolio is your new resume. It is more valuable than any certification.
Phase 4: Portfolio & Interview Prep (Months 10-12)
Month 10: Ship Three Portfolio Projects
Hiring managers do not care about your LinkedIn headline. They care about what you built. Ship three projects:
- End-to-End Automation Suite: Test a real public website (e.g., an e-commerce demo site) with UI, API, and visual regression tests. Run it in CI/CD.
- LLM Evaluation Project: Test a public chatbot or AI feature with PromptFoo/DeepEval. Show metrics, thresholds, and a summary report.
- Agentic Prototype: A LangGraph pipeline that reads a requirements file and generates at least one working Playwright test. Document it with a README and a 2-minute demo video.
Host everything on GitHub. Write a blog post about each project. This proves you can communicate, which is a skill most engineers ignore.
Month 11: Targeted Interview Prep
AI SDET interviews are different from traditional QA interviews. Yes, you will get coding questions. But you will also get system design and AI evaluation questions. I compiled the exact questions I ask candidates in SDET Interview Prep for AI-Era Hiring: 50 Questions I Actually Ask in 2026.
Expect questions like:
- “How would you test a non-deterministic LLM feature?”
- “Design a self-healing test framework.”
- “What metrics would you use to evaluate an AI summarization model?”
Prepare answers with real examples from your portfolio projects. Do not memorize definitions. Show diagrams, code snippets, and trade-off analysis.
Month 12: Apply Strategically
Do not spray resumes. Target product companies and AI-first startups. Use LinkedIn to find SDET managers, not HR. Send a short message with a link to your GitHub and a specific observation about their product. I got my Tekion interview because I sent the hiring manager a bug report I found while testing their public demo.
Negotiate based on data. If you have the portfolio and the skills, ₹25 LPA is not ambitious. It is market rate.
India Context — Where the 25 LPA Offers Are Coming From
In India, the QA salary gap is not just about skills. It is about company type. Service companies (TCS, Infosys, Cognizant) structure QA as a cost center. Their billing rates cap senior manual testers at ₹8-12 LPA. Even “automation engineers” in these firms rarely cross ₹15 LPA because the work is still treated as script maintenance, not engineering.
Product companies operate differently. In Bangalore, a mid-level AI SDET at a Series B startup earns ₹18-25 LPA. At a unicorn or FAANG-style office, Principal AI SDET roles hit ₹35-50 LPA. These numbers come from live job postings I have seen on LinkedIn and Naukri, and they align with the data in our 15 to 40 LPA skill breakdown.
Here is what I see in the 2026 hiring market:
- Companies are dropping “manual QA” headcount and replacing it with “AI SDET” roles.
- Recruiters actively search for “Playwright + LLM evaluation” on LinkedIn.
- Remote roles for US and EU companies now pay ₹25-40 LPA for India-based AI SDETs.
Specific companies that have posted AI SDET or Principal SDET roles in the last quarter include Tekion Corp, Razorpay, Freshworks, and several Sequoia-backed startups in Bangalore. These teams do not care about your ISTQB certificate. They care whether you can ship a Playwright suite that runs in under 5 minutes and evaluate an LLM feature for hallucination.
Remote work has also changed the equation. In 2024, a remote AI SDET role for a US company was rare. In 2026, I see 3-4 such openings every week on LinkedIn. They pay in USD or at dollar-adjusted INR rates. If you have the portfolio, you can live in Indore and earn a Bangalore salary. That was impossible two years ago.
The window is open. It will not stay open forever. As more testers complete this transition, the entry bar will rise. Right now, having even one LLM evaluation project on your GitHub puts you in the top 5% of candidates.
Common Traps That Keep Testers Stuck at 8 LPA
I see the same mistakes every week. Avoid them:
- Chasing certifications instead of projects. ISTQB does not get you a 25 LPA offer. A GitHub repo with a working agentic test pipeline does.
- Sticking to Selenium because it is “stable.” Selenium has 34,078 GitHub stars. Playwright has 87,888. The market has voted. Learn Playwright.
- Learning AI without learning automation first. You cannot evaluate LLM outputs if you do not understand what a good test looks like. Build the automation foundation first.
- Waiting for company training. Your employer will not teach you AI SDET skills. They have no incentive to. Invest your own time.
- Ignoring system design. Senior AI SDET roles ask you to design test infrastructure, not just write scripts. Study CI/CD, Docker, and cloud basics.
Key Takeaways
- Manual testers in India stall at ₹8 LPA because their skills are billed as hours, not engineering impact.
- AI SDET roles in product companies pay ₹20-35 LPA and are growing fast in 2026.
- The exact roadmap is 12 months: automation foundation (months 1-3), AI integration (4-6), agentic systems (7-9), portfolio and interviews (10-12).
- Playwright, PromptFoo/DeepEval, and LangGraph are the three pillars of the AI SDET stack.
- Your portfolio matters more than your resume. Ship three projects before you apply.
FAQ
Can a manual tester with no coding background become an AI SDET?
Yes. I have seen testers with arts degrees make this transition in 12 months. The key is consistency, not prior coding knowledge. One hour of deliberate practice every weekday for a year is 260 hours. That is enough to master the basics of TypeScript, Playwright, and LLM evaluation.
Is Java completely dead for AI SDET roles?
No. Java is still used in enterprise environments. But product companies and AI startups overwhelmingly prefer Python or TypeScript. If you are starting from scratch, pick TypeScript. If you already know Java, add Playwright + Java and immediately start learning Python for AI tooling.
Do I need a machine learning degree to evaluate LLMs?
No. LLM evaluation is a testing discipline, not a research discipline. You need to understand metrics like BLEU, ROUGE, and hallucination scores. You do not need to train models. Frameworks like PromptFoo and DeepEval abstract the complexity.
How long does it take to see a salary jump?
Most testers who follow this roadmap see their first AI SDET interview within 6-9 months. The full salary jump from ₹8 LPA to ₹25 LPA typically happens after 12-18 months, once you have a portfolio and can demonstrate agentic testing experience.
What is the biggest risk of not making this transition?
The risk is not that AI will replace you. The risk is that a younger engineer who knows Playwright and LLM evaluation will replace you at half the cost. Companies are already restructuring QA teams around AI-augmented workflows. The manual testers who do not adapt are being moved to non-technical roles or let go.
Should I quit my job to focus on this roadmap full-time?
No. I transitioned while working a 10-hour day at a service company. One focused hour every morning before work is enough. Quitting without an offer creates financial pressure that hurts learning quality. Build the portfolio on the side, apply selectively, and resign only after you sign the offer letter.
