Playwright Zero to Hero: Complete Setup, Project Structure, CI/CD, and AI-Assisted Testing Guide
This is the Playwright setup guide I wish someone had handed me three years ago. Not just the installation commands — every blog post covers those — but the complete journey from first install to a production-grade testing framework with proper project structure, GitHub Actions CI/CD, Docker containerization, and AI copilot integration. If you are new to Playwright or migrating from another framework, this is your starting point.
The problem with most Playwright tutorials is that they stop at “hello world.” They show you how to navigate to a page, click a button, and write an assertion. That is useful for the first hour. What happens on day two, when you need to organize 50 tests across multiple features, share authentication state between scenarios, handle test data, configure multiple environments, and integrate with your CI/CD pipeline? That is where tutorials fail and engineering begins.
Contents
- Installation and Initial Configuration
- Project Structure That Scales
- GitHub Actions: CI/CD Integration
- Docker: Consistent Test Environments
- AI-Assisted Test Writing
- The Honest Caveats
- Your First Week with Playwright
Installation and Initial Configuration
Start with a clean Node.js project. Run npm init playwright@latest and accept the defaults — TypeScript as the language, tests in a tests directory, and yes to installing the browsers. This gives you a playwright.config.ts file, a sample test, and Chromium, Firefox, and WebKit browsers downloaded to your system.
The configuration file is where most teams make their first mistake: they leave it at defaults. Immediately configure three things. First, set retries: 0 for local development (retries mask flakiness) and retries: 2 for CI (where environmental variance justifies limited retries). Second, configure use.trace: 'on-first-retry' to capture trace files only when a test fails and retries — this gives you debugging data without the storage overhead of tracing every run. Third, set up projects for the browsers you actually need to support rather than running every test on all three browsers.
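Taken together, those three recommendations produce a configuration along these lines. This is a minimal sketch, not a prescription: the baseURL and the project list are placeholders you would replace with your own targets.

```typescript
// playwright.config.ts -- minimal sketch of the three recommendations above;
// baseURL and the browser projects are placeholders.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  // No retries locally (retries mask flakiness); limited retries in CI.
  retries: process.env.CI ? 2 : 0,
  use: {
    baseURL: 'https://example.com',
    // Capture a trace only when a failed test is retried.
    trace: 'on-first-retry',
  },
  // Only the browsers you actually need to support.
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  ],
});
```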
Project Structure That Scales
Your test directory structure determines how maintainable your framework will be at 500 tests. The structure I recommend after years of iteration separates concerns cleanly: tests/ contains spec files organized by feature (auth, checkout, search, admin), pages/ contains page object models with one class per page or component, fixtures/ contains custom test fixtures that extend Playwright’s base test, utils/ contains helper functions for data generation and common operations, and config/ contains environment-specific configurations.
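Laid out as a tree, the structure above looks like this:

```
project-root/
├── tests/        # spec files by feature: auth/, checkout/, search/, admin/
├── pages/        # page object models, one class per page or component
├── fixtures/     # custom fixtures extending Playwright's base test
├── utils/        # data generation and shared helpers
├── config/       # environment-specific configuration
└── playwright.config.ts
```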
Page objects remain the foundation of maintainable test automation. Each page object encapsulates the locators and actions for a specific page or component. When the UI changes, you update the page object in one place rather than hunting through dozens of test files. The Playwright community has experimented with alternatives — fixture-based patterns, component-based abstractions — but page objects persist because they are simple, understood by every QA engineer, and they work.
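As a sketch of the pattern, here is a login page object. To keep the snippet self-contained, Page and Locator are reduced to minimal local interfaces; in a real project you would import the real types from @playwright/test, and the data-testid selectors shown are hypothetical.

```typescript
// Minimal stand-ins for Playwright's Page/Locator types so this sketch is
// self-contained; in a real project, import them from '@playwright/test'.
interface Locator {
  fill(value: string): Promise<void>;
  click(): Promise<void>;
}
interface Page {
  goto(url: string): Promise<void>;
  locator(selector: string): Locator;
}

// One class per page; selectors live here, not in the tests, so a UI
// change is a one-place edit. The data-testid attributes are hypothetical.
class LoginPage {
  constructor(private readonly page: Page) {}

  async goto(): Promise<void> {
    await this.page.goto('/login');
  }

  async login(email: string, password: string): Promise<void> {
    await this.page.locator('[data-testid="email"]').fill(email);
    await this.page.locator('[data-testid="password"]').fill(password);
    await this.page.locator('[data-testid="submit"]').click();
  }
}
```

A test then reads as intent (`await loginPage.login(user.email, user.password)`) rather than as a sequence of raw selectors.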
Custom fixtures are Playwright’s secret weapon for reducing boilerplate. Instead of repeating setup logic in every test — creating a user, logging in, navigating to a starting page — you define fixtures that encapsulate these flows and inject them into tests as parameters. A test that needs an authenticated admin user simply declares it as a fixture dependency, and the framework handles the rest.
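A fixture file for that authenticated-admin case might look like the sketch below. It assumes @playwright/test is installed and that a LoginPage page object exists in your pages/ directory; the credentials and paths are placeholders.

```typescript
// fixtures/auth.ts -- sketch of a custom fixture that hands tests an
// already-authenticated page. Paths and credentials are placeholders.
import { test as base, expect, type Page } from '@playwright/test';
import { LoginPage } from '../pages/login-page';

type AuthFixtures = {
  adminPage: Page; // a page already logged in as an admin user
};

export const test = base.extend<AuthFixtures>({
  adminPage: async ({ page }, use) => {
    const login = new LoginPage(page);
    await login.goto();
    await login.login('admin@example.com', process.env.ADMIN_PASSWORD ?? '');
    await use(page); // hand the authenticated page to the test body
  },
});

export { expect };

// A test simply declares the fixture as a parameter:
// test('admin can open the dashboard', async ({ adminPage }) => { ... });
```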
GitHub Actions: CI/CD Integration
A Playwright test suite without CI/CD integration is a local experiment, not a quality gate. GitHub Actions is the most straightforward integration path. Your workflow file installs Node.js, installs dependencies, installs Playwright browsers, runs the test suite, and uploads the HTML report and any trace files as artifacts.
The critical optimization is sharding. Playwright supports splitting your test suite across multiple parallel runners with the --shard flag. A suite that takes 10 minutes on one runner takes under 4 minutes split across three shards. Configure your GitHub Actions workflow to use a matrix strategy with three or four shards, and merge the results into a single report after all shards complete.
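A sharded workflow might be sketched as follows; the action versions, Node version, and shard count are illustrative, not requirements.

```yaml
# .github/workflows/playwright.yml -- sketch of a three-shard matrix run.
name: Playwright Tests
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test --shard=${{ matrix.shard }}/3
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: blob-report-${{ matrix.shard }}
          path: blob-report
```

If you configure the blob reporter for CI runs, a follow-up job can download the shard artifacts and combine them into one HTML report with npx playwright merge-reports.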
Trigger your tests on every pull request and every push to main. This catches regressions before they merge and provides immediate feedback to developers. For longer-running suites, consider running a smoke subset on PRs and the full suite on merges to main — balancing feedback speed with coverage breadth.
Docker: Consistent Test Environments
The “works on my machine” problem is particularly acute for browser automation because browser behavior varies across operating systems and versions. Docker eliminates this entirely. Playwright provides official Docker images (mcr.microsoft.com/playwright) with all browsers pre-installed and configured. Your tests run inside this container both locally and in CI, guaranteeing identical behavior.
The Dockerfile is minimal: start from the Playwright base image, copy your project files, install Node.js dependencies, and set the test command as the entrypoint. Your CI pipeline builds this image and runs it, and developers can run the same image locally with docker run. The first run downloads the image, but subsequent runs start in seconds because Docker caches the layers.
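That minimal Dockerfile can be sketched as below; pin the image tag to the Playwright version in your package.json (the tag shown here is only an example).

```dockerfile
# Sketch of a Dockerfile built on the official Playwright image.
# Pin the tag to your installed Playwright version; this one is an example.
FROM mcr.microsoft.com/playwright:v1.49.0-noble

WORKDIR /app

# Copy manifests and install dependencies first so Docker caches this layer.
COPY package*.json ./
RUN npm ci

COPY . .

CMD ["npx", "playwright", "test"]
```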
AI-Assisted Test Writing
This is where the Playwright workflow intersects with the broader AI-in-QA trend. GitHub Copilot, when working inside a well-structured Playwright project, can generate test implementations from descriptive comments with surprising accuracy. Write a comment like “test that a user can add an item to cart and see the updated cart count” and Copilot will suggest an implementation using your page objects, your assertion patterns, and your project conventions.
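To make that concrete, here is an illustration of the interaction: the comment is the prompt, and the test body is the kind of suggestion Copilot tends to produce in a well-structured project. The ProductPage and CartPage objects and their methods are hypothetical, not a transcript of real output.

```typescript
// tests/cart.spec.ts -- illustration of comment-driven generation;
// the page objects and their methods are hypothetical.
import { test, expect } from '@playwright/test';
import { ProductPage } from '../pages/product-page';
import { CartPage } from '../pages/cart-page';

// test that a user can add an item to cart and see the updated cart count
test('adding an item updates the cart count', async ({ page }) => {
  const product = new ProductPage(page);
  await product.goto('classic-tee');
  await product.addToCart();

  const cart = new CartPage(page);
  await expect(cart.count).toHaveText('1');
});
```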
The quality of AI suggestions depends entirely on your project structure. If your page objects are consistent, your fixtures are well-documented, and your existing tests follow clear patterns, the AI has strong context to work from. If your project is disorganized, the AI mirrors that disorganization. This creates a virtuous cycle: good project structure improves AI assistance, which makes it easier to maintain good structure.
Playwright’s built-in codegen tool (npx playwright codegen) is the other AI-adjacent productivity tool worth integrating into your workflow. It records browser interactions and generates Playwright code in real time. The generated code needs cleanup — it uses raw locators instead of page objects — but it is an excellent starting point for complex interaction sequences that are tedious to write manually.
The Honest Caveats
This guide reflects my preferred project structure, which works well for mid-sized web application testing (100-1000 tests). Very large test suites (5000+) may need additional organizational patterns — test tagging systems, dynamic test selection, and custom parallelization strategies — that go beyond what I have described here.
The AI-assisted testing section reflects current capabilities as of early 2026. Copilot and similar tools are improving rapidly, and the specific interaction patterns I describe may evolve. The underlying principle — that well-structured code produces better AI suggestions — will remain true regardless of which AI tool you use.
Docker adds complexity to your setup. If your team is small and everyone runs the same operating system and browser versions, the containerization overhead may not be justified. Adopt it when environment consistency becomes a real problem, not preemptively.
Your First Week with Playwright
Day one: install Playwright, run the example test, explore the trace viewer. Day two: create your first page object and write three tests using it. Day three: set up GitHub Actions to run your tests on every push. Day four: add a custom fixture for authentication and refactor your tests to use it. Day five: experiment with codegen and Copilot to accelerate test creation. By the end of the week, you will have a functional framework that is ready to scale — and a clear understanding of whether Playwright is the right tool for your team.
This setup journey — from first install to production-grade CI/CD framework — is the hands-on track in my AI-Powered Testing Mastery course. Students build a complete Playwright framework over Modules 2 through 4, with downloadable starter templates and GitHub Actions workflow files.
