Playwright
Playwright is a tool for automating the testing of web pages, and it can also be used to test for accessibility issues.
Generally, it adds a layer of confidence to tests that need to actually interact with the page, rather than just checking the output of a function.
Accessibility checks are performed using the @axe-core/playwright library.
Playwright can be used with various languages and frameworks, but since @axe-core/playwright is a JavaScript library, this page will only refer to the JavaScript version.
Setup
Installation
To get set up, the basic Playwright installation docs will usually suffice, but the @axe-core/playwright library will also need to be installed.
npm install playwright @axe-core/playwright # Install the dependencies for playwright and axe-core/playwright
npx playwright install # Install the browsers locally
Once installed, the tests can be written as usual, but to run an accessibility test, the axe-core library will need to be used.
The core usage will look like:
import AxeBuilder from "@axe-core/playwright";
import { expect, test } from "@playwright/test";

test("should have no accessibility violations", async ({ page }) => {
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa", "wcag22aa"])
    .analyze();

  expect(results.violations).toEqual([]);
});
This will run all WCAG A and AA checks from the various versions, and return the results, failing the test if there are any violations.
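When first introducing the checks to an existing site, it can help to narrow the scan; @axe-core/playwright exposes exclude and disableRules for this. A minimal sketch, assuming a third-party widget that cannot be fixed (the selector and rule id below are hypothetical examples):

```typescript
import AxeBuilder from "@axe-core/playwright";
import { expect, test } from "@playwright/test";

test("should have no violations outside known exclusions", async ({ page }) => {
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa", "wcag22aa"])
    .exclude("#third-party-chat-widget") // hypothetical: element outside the team's control
    .disableRules(["color-contrast"]) // known issue, tracked separately
    .analyze();

  expect(results.violations).toEqual([]);
});
```

Any disabled rules are best treated as temporary debt and re-enabled once the underlying issue is fixed.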
JUnit output for CI
When running the Playwright tests in the CI pipelines, it is often useful to get the results in a JUnit format, as this is a standard format that can be read by many CI tools.
Playwright offers a junit reporter by default, and it can be useful to ensure traces are captured and attached to the JUnit output too, so that failures can be accompanied by a visual aid. This can be configured as below:
import { defineConfig } from "@playwright/test";

export default defineConfig({
  ...,
  retries: process.env.CI ? 2 : 0, // If not using retries, set the `trace` option to `retain-on-failure`
  reporter: process.env.CI
    ? [["junit", { outputFile: "results.xml" }], ["list"]] // In CI, use the junit reporter and output to "results.xml"
    : "list",
  use: {
    ...,
    trace: process.env.CI ? "on-first-retry" : "retain-on-failure",
  },
});
The traces are most useful for the "normal" Playwright tests, as the accessibility tests don't have much visual support.
Understanding the output
The output from the axe-core libraries can be quite verbose, and may appear as large blobs of JSON to those unfamiliar with it. Once established, the output should only show newly failing issues, but when first introducing it to a project it may be overwhelming.
The main thing to look for will be the set of violations, an array of objects within which the important information for resolution is the id and the nodes, which in turn have html and target properties to help identify the failing element.
The other properties are very useful for understanding the issues and getting information on how to fix them, so they shouldn't be outright ignored.
If these are still proving too difficult to understand, or the elements hard to find, the Accessibility Insights extension can be used to help identify the issues, as it uses the same tool under the hood but gives a more user-friendly and visual output to help identify the problem elements. (The exact errors returned may differ depending on the versions used and tags checked.)
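For reference, a single entry in results.violations has roughly the following shape. This is a hand-written, hypothetical example; the exact fields and values depend on the axe-core version used:

```typescript
// A hand-written, hypothetical example of one entry in `results.violations`
const violation = {
  id: "image-alt", // the axe rule that was violated
  impact: "critical",
  help: "Images must have alternative text",
  helpUrl: "https://dequeuniversity.com/rules/axe/4.8/image-alt",
  nodes: [
    {
      html: '<img src="/logo.png">', // the offending markup
      target: ["#header > img"], // CSS selector(s) locating the element
    },
  ],
};

// The rule id plus a node's target are usually enough to find the element
const summary = `${violation.id}: ${violation.nodes[0].target.join(", ")}`;
console.log(summary);
```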
Following the core usage from above, one potentially helpful output could be to iterate over the violations and log the salient information:
for (const violation of results.violations) {
  for (const node of violation.nodes) {
    console.warn(`Rule: ${violation.id}\nHTML: ${node.html}\nSelector: ${node.target.join(", ")}`);
  }
}
This will log the violated rule, the HTML node causing the violation, and a unique selector for the node.
Including it in the console.warn makes it visible as part of the trace, and it also becomes attached to the JUnit output.
Publishing test results in CI
Below is an example of how to ensure the JUnit results are published in an Azure CI pipeline:
steps:
  - task: <Any task to run the tests>
    displayName: "Run E2E tests"
    env:
      CI: "true"
  - task: PublishTestResults@2
    displayName: "Publish E2E test results"
    condition: succeededOrFailed()
    inputs:
      testResultsFormat: "JUnit"
      testResultsFiles: "**/results.xml"
Sample Test Fixture
Instead of setting this up over and over, it may be useful to create a fixture that can be used across tests.
Below is an example fixture that can be used in any Playwright test, with the accessibility check run via a single call:
// Example fixture in /fixtures/a11yFixture.ts
import AxeBuilder from "@axe-core/playwright";
import { test as base, expect } from "@playwright/test";

type A11yFixture = {
  testA11y: () => Promise<void>;
};

export const test = base.extend<A11yFixture>({
  testA11y: async ({ page }, use, testInfo) => {
    const testA11y = async () => {
      await test.step("check page accessibility", async () => {
        const results = await new AxeBuilder({ page })
          .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa", "wcag22aa"])
          .analyze();

        for (const violation of results.violations) {
          for (const node of violation.nodes) {
            // eslint-disable-next-line no-console
            console.warn(
              `Rule: ${violation.id}\nHTML: ${node.html}\nSelector: ${node.target.join(", ")}`,
            );
          }
        }

        await testInfo.attach("accessibility-scan-results.json", {
          body: JSON.stringify(results, null, 2),
          contentType: "application/json",
        });

        expect(results.violations).toEqual([]);
      });
    };

    await use(testA11y);
  },
});

export { expect } from "@playwright/test";
// Usage in a test file
import { expect, test } from "@/fixtures/a11yFixture.ts";
// PageObjectModel would be imported from the project's own page objects

test("should not have any accessibility violations", async ({
  page,
  testA11y,
}) => {
  const pageModel = new PageObjectModel(page);
  await pageModel.visitPage();
  await pageModel.verifyPage();
  await testA11y();
});
E2E Testing Difficulties
E2E tests are not perfect, as there are a couple of extra considerations needed to keep them working well.
Authentication
If the site needs to authenticate, this can be difficult. There are ways to configure additional fixtures to perform a login flow and cache the session, but this gets more complex the more complicated the login process is.
This approach may require passing through username, password, and TOTP secrets to generate 2FA codes, which may get complicated if the goal is a few simple A11y tests!
Alternatively, the tests could be run on a site that doesn't require authentication, or on a site that has a "test" user that is always logged in. Both options mean the tests likely can't be run against Dev or QA environments; a dedicated environment would be needed (or the pipeline would need to spin the site up itself) if the tests are to be run in CI.
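If the cached-session route is taken, Playwright's storageState mechanism is the usual building block: log in once in a setup step, save the session to disk, and have other tests reuse it. A minimal sketch assuming a simple username/password form; the selectors, URLs, and environment variable names are hypothetical:

```typescript
// auth.setup.ts — log in once and cache the session for other tests.
// Selectors, URLs, and env var names below are hypothetical.
import { test as setup } from "@playwright/test";

const AUTH_FILE = "playwright/.auth/user.json";

setup("authenticate", async ({ page }) => {
  await page.goto("/login");
  await page.getByLabel("Username").fill(process.env.TEST_USERNAME ?? "");
  await page.getByLabel("Password").fill(process.env.TEST_PASSWORD ?? "");
  await page.getByRole("button", { name: "Sign in" }).click();
  await page.waitForURL("/dashboard");

  // Persist cookies and local storage so dependent tests can reuse the session
  await page.context().storageState({ path: AUTH_FILE });
});
```

The main config would then declare a setup project that runs this file first, with the other projects depending on it and setting use.storageState to the same path. Once 2FA/TOTP enters the picture, this file is where that complexity accumulates.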
Expected Data
This is more of a consideration for wider E2E tests, but still has an impact in A11y tests.
If the site needs to be tested in a certain state, such as a page with a list of items, then that list of items must actually be present when the test runs. There is a chance that previous test runs have modified the data such that these items are no longer present, causing test failures.
One way around that is to test only against a freshly seeded site, or to ensure the site is in a known state before running the tests. Alternatively, the data can be configured for every test run, which can be slower but is much more reliable.
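The "known state before running" option can be sketched with Playwright's request fixture, assuming the backend exposes some kind of seeding endpoint; the /api/test/seed URL and its payload here are hypothetical:

```typescript
// Reset and seed the backend before each test so the page under test
// always has predictable data. The endpoint and payload are hypothetical.
import { test } from "@playwright/test";

test.beforeEach(async ({ request }) => {
  const response = await request.post("/api/test/seed", {
    data: { items: 3 },
  });
  if (!response.ok()) {
    throw new Error(`Seeding failed with status ${response.status()}`);
  }
});
```

This keeps the seeding cost per-test, which is slower than a one-off seed but means each test is independent of what ran before it.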