W3C Accessibility Guidelines (WCAG) 3.0 will provide a wide range of recommendations for making web content more accessible to users with disabilities. Following these guidelines will address many of the needs of users with blindness, low vision, and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. These guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.

Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements and assertions to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement or assertion.

This specification is expected to be updated regularly, keeping pace with changing technology by revising and adding methods, requirements, and guidelines to address new needs as technologies evolve. For entities that make formal claims of conformance to these guidelines, several levels of conformance are available to address the diverse nature of digital content and the type of testing that is performed.

See WCAG 3.0 Introduction for an introduction and links to WCAG technical and educational material.

This is an update to the W3C Accessibility Guidelines (WCAG) 3.0. It includes a restructuring of the guidelines and first draft decision trees for three Guidelines: Clear meaning, Image alternatives, and Keyboard focus appearance.

To comment, file an issue in the W3C wcag3 GitHub repository. The Working Group requests that public comments be filed as new issues, one issue per discrete comment. Creating a GitHub account to file issues is free. If filing issues in GitHub is not feasible, email public-agwg-comments@w3.org (comment archive). In-progress updates to the guidelines can be viewed in the public editors’ draft.

Introduction

Summary

What’s new in this version of WCAG 3.0?

This draft includes an updated list of the potential Guidelines and Requirements that we are exploring. The list of Requirements is longer than the list of Success Criteria in WCAG 2.2. This is because:

The final set of Requirements in WCAG 3.0 will be different from what is in this draft. Requirements will be added, combined, and removed. We also expect changes to the text of the Requirements. Only some of the Requirements will be used to meet the base level of conformance.

The Requirements are grouped into the following sections:

The purpose of this update is to demonstrate a potential structure for guidelines and indicate the current direction of the WCAG 3.0 conformance. Please consider the following questions when reviewing this draft:

To provide feedback, please file a GitHub issue or email public-agwg-comments@w3.org (comment archive).

About WCAG 3.0

This specification presents a new model and guidelines to make web content and applications accessible to people with disabilities. The W3C Accessibility Guidelines (WCAG) 3.0 support a wide set of user needs, use new approaches to testing, and allow frequent maintenance of guidelines and related content to keep pace with accelerating technology change. WCAG 3.0 supports this evolution by focusing on the functional needs of users. These needs are then supported by guidelines written as outcome statements, requirements, assertions, and technology-specific methods to meet those needs.

WCAG 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [[WCAG22]] and previous versions, but does not deprecate WCAG 2. It will also incorporate some content from and partially extend User Agent Accessibility Guidelines 2.0 [[UAAG20]] and Authoring Tool Accessibility Guidelines 2.0 [[ATAG20]]. These earlier versions provided a flexible model that kept them relevant for over 15 years. However, changing technology and changing needs of people with disabilities have led to the need for a new model to address content accessibility more comprehensively and flexibly.

There are many differences between WCAG 2 and WCAG 3.0. The WCAG 3.0 guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.

Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement.

Content that conforms to WCAG 2.2 levels A and AA is expected to meet most of the requirements at the minimum conformance level of this new standard but, since WCAG 3.0 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance. Since the new standard will use a different conformance model, the Accessibility Guidelines Working Group expects that some organizations may wish to continue using WCAG 2, while others may wish to migrate to the new standard. For those that wish to migrate, the Working Group will provide transition support materials, which may use mapping and other approaches to facilitate migration.

Section status levels

As part of the WCAG 3.0 drafting process, each normative section of this document is given a status. This status indicates how far along in development the section is, how ready it is for experimental adoption, and what kind of feedback the Accessibility Guidelines Working Group is looking for.

Guidelines

Summary

The following guidelines are being considered for WCAG 3.0. They are currently a list of topics which we expect to explore more thoroughly in future drafts. The list includes current WCAG 2 guidance and additional requirements. The list will change in future drafts.

Unless otherwise stated, requirements assume the content described is provided both visually and programmatically.

The individuals and organizations that use WCAG vary widely and include web designers and developers, policy makers, purchasing agents, teachers, and students. To meet the varying needs of this audience, several layers of guidance will be provided including guidelines written as outcome statements, requirements that can be tested, assertions, a rich collection of methods, resource links, and code samples.

The following list is an initial set of potential guidelines and requirements that the Working Group will be exploring. The goal is to guide the next phase of work. They should be considered drafts and should not be considered as final content of WCAG 3.0.

Ordinarily, exploratory content includes editor's notes listing concerns and questions for each item. Because this Guidelines section is very early in the process of working on WCAG 3.0, this editor's note covers most of the content in this section. Unless otherwise noted, all items in the list are exploratory at this point. It is a list of all possible topics for consideration. Not all items listed will be included in the final version of WCAG 3.0.

The guidelines and requirements listed below came from analysis of user needs that the Working Group has been studying and researching. They have not been refined and do not include essential exceptions or methods. Some requirements may be best addressed by authoring tools or at the platform level. Many requirements need additional work to better define their scope and to ensure they apply correctly across multiple languages, cultures, and writing systems. We will address these questions as we further explore each requirement.

Additional Research

One goal of publishing this list is to identify gaps in current research and request assistance filling those gaps.

Editor's notes indicate the requirements within this list where the Working Group has not found enough research to fully validate the guidance and create methods to support it, or where additional work is needed to evaluate existing research. If you know of existing research, or if you are interested in conducting research in this area, please file a GitHub issue or send email to public-agwg-comments@w3.org (comment archive).

Image and media alternatives

Image alternatives

Users have equivalent alternatives for images.

Which foundational requirements apply?

For each image:

  1. Would removing the image impact how people understand the page?
  2. Is the image presented in a way that is available to user agents and assistive technology?
  3. Is an equivalent text alternative available for the image?
Decorative image

Decorative image is programmatically hidden.

Equivalent text alternative

Equivalent text alternative is available for image that conveys content.

Detectable image

Image is programmatically determinable.
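
In HTML, the three requirements above are commonly met with the `img` element and its `alt` attribute, using techniques established under WCAG 2; WCAG 3.0 methods are not yet defined, and the file names below are illustrative:

```html
<!-- Decorative image: an empty alt programmatically hides it from
     assistive technology -->
<img src="divider.png" alt="">

<!-- Image that conveys content: alt provides an equivalent text alternative -->
<img src="chart-q3.png" alt="Bar chart: Q3 sales rose 12% over Q2">

<!-- Detectable image: an img element is programmatically determinable.
     A CSS background-image that conveys content is not exposed to
     assistive technology and would not satisfy this requirement. -->
```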

Image role

The role and importance of the image is programmatically indicated.

Image type

The image type (photo, icon, etc.) is indicated.

Editable alternatives

Needs additional research

Auto-generated text descriptions are editable by the content creator.

Style guide

Text alternatives follow an organizational style guide.

Media alternatives

Users have equivalent alternatives for media content.

Descriptive transcripts

A transcript is available whenever audio or visual media is used.

Findable media alternatives

Needs additional research

Media that has the desired media alternatives (captions, audio description, and descriptive transcripts) can be found.

Preferred language

Needs additional research

Equivalent audio alternatives are available in the preferred language.

Non-verbal cues

Needs additional research

Media alternatives explain nonverbal cues, such as tone of voice, facial expressions, body gestures, or music with emotional meaning.

Non-text alternatives

Users have alternatives available for non-text, non-image content that conveys context or meaning.

Non-text content

Needs additional research

Equivalent text alternatives are available for non-text, non-image content that conveys context or meaning.

Captions

Where there is audio content in media, there are equivalent synchronized captions.

Captions exist

Text alternatives to the information conveyed by the audio track exist.

Captions are findable

Mechanisms exist to help users find text alternatives to the auditory information conveyed by media.

Captions are controllable

The media player provides a mechanism to turn the captions on and off.

Captions are in the target user's language

When captions are used as a text alternative for an audio track, they are provided in the target user’s language for the media.

Captions are equivalent to audio content

Captions are equivalent and equal in content to the audio.

Captions are synchronized

Captions are in sync with the visual media.

Captions are consistent

The captions are presented consistently throughout the media, and across several related productions, unless exceptions are warranted. This includes consistent styling and placement of the captions text and consistent methods for identifying speakers, languages, and sounds.

Captions do not obstruct visual information

In visual media, captions are placed on the screen so that they do not obstruct important visual information.

Speakers are identified

The speaker is identified in the captions. If there is only one speaker in the media, the speaker can be identified in the media description or at the beginning of the closed captioning. If there is more than one speaker in the media, then changes in speaker need to be identified throughout.

Languages of speech are identified

When there is more than one language spoken in media, the captions identify the language spoken by each speaker.

Sounds are identified or described

Significant sounds, including sound effects and other non-spoken audio, are identified or described in the captions.

Captions are adaptable

The appearance of captions and the language of captions are adaptable.

Alternative language versions are available

Captions in a different language than that of the media are available so that the user can choose to view captions in their preferred language.

Enhanced features to interact with captions are available

Enhanced features that allow users to interact with captions are available.

Captions are available in 360-degree space in augmented, virtual, and extended realities

In augmented, virtual, and extended reality environments, captions are available in 360-degree space.

Speakers are indicated visually in augmented, extended, and virtual realities

In augmented, virtual, and extended environments, a visual indicator or signal, in addition to audio, is available to direct users toward the source of a sound or to indicate who is the speaker.

Style guide

The captions follow an organization’s style guide.

  • Name of the style guide
  • Version (if any)
  • Date of release
  • Description
  • Examples of how core WCAG guidelines are addressed
Testing with users

The organization conducted tests with users who need captions and fixed issues based on findings.

  • Types of disabilities each user had
  • Number of users (for each type of disability)
  • Date of testing
  • Examples of fixed issues based on the results
Video player selection

The organization uses a video player that supports closed captions in a standard caption format or an open captions format.

  • Name of the video player
  • Caption format
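
As one illustration, the HTML `track` element supports closed captions in the standard WebVTT format and can also supply subtitles in additional languages, letting the player's built-in controls turn them on and off (file names are illustrative):

```html
<video controls src="product-tour.mp4">
  <!-- Closed captions in the media's original language -->
  <track kind="captions" srclang="en" label="English"
         src="captions-en.vtt" default>
  <!-- Subtitles in an additional language -->
  <track kind="subtitles" srclang="es" label="Español"
         src="subtitles-es.vtt">
</video>
```
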
Contribution by producer

During the video production process, the video producer converts the dialogue, along with other important sounds and music, into a caption file format.

  • Names of the videos
  • File types
  • Number/Name of video producer
Video player controls for captions

The organization has selected a video player that provides controls for turning captions on and off. In the video player controls, there must be at least one method to turn captions on or off.

  • Name of the video player
Adaptable video player

The organization uses a video player that allows the user to personalize the appearance and location of closed captions. Needs for modification vary from person to person. The organization should allow for adjustment to these styles, including but not limited to: font size, font weight, font style, font color, background color, background transparency, and placement.

  • Name of the video player
  • Customizable styles
AR, VR, or XR video player

The organization uses a video player that supports captions remaining directly in front of the user in a 360-degree augmented, virtual, or extended environment (AR, VR, or XR). In these spaces, the user feels surrounded by content. As the user moves in this space, any caption provided will appear directly in front of the user regardless of where they are looking.

  • Name of the video player
Subtitles in other languages

The organization provides captions in one or more alternative languages for the most common languages in its market. Typically called subtitles when in another language, closed captions in multiple languages assist in understanding the content and in learning another language.

  • Original language for video
  • Languages for subtitles
Visual indicators in 360 field

The organization provides visual indicators in extended reality environments to indicate where the speaker is or the direction of a sound when audio is heard from outside the current view. As users move in extended reality environments, the position of the audio may stay the same. Users can personalize the visual indicators by selecting from a set of options.

Using human captioners

For live events, the organization has a human captioner providing live captions to the audience using CART (Communication Access Realtime Translation).

  • Name of the captioner or service provider
Perfect set of alternatives

As part of the organization’s standard media production procedures, the video producer creates the closed caption files, audio description, and descriptive transcript during the production cycle and then uploads them to their appropriate places during the publishing process.

  • Alternatives provided

Audio descriptions

Where there is visual content in media, there is an equivalent synchronized audio description.

Audio descriptions exist

An audio alternative to the visual information conveyed in visual media exists.

Audio description is findable

Mechanisms exist to help users find audio alternatives to the visual information conveyed in visual media.

Audio description is controllable

The media player provides a mechanism to turn the audio description on and off.

Audio description is in the target user's language

When audio description is provided as an alternative for visual information, it is provided in the target user’s language for the media.

Audio description equitably describes important visual information

Information about actions, charts or informative visuals, scene changes, and on-screen text that are important and are not described or spoken in the main soundtrack are included in the audio description.

Audio description is synchronized

The audio description is in sync with the visual content.

Audio description does not overlap other audio

Audio description is provided during existing pauses in dialogue and does not overlap other important audio.

Audio description is adaptable

Mechanisms are available that allow users to control the audio description volume independently from the audio volume of the video and to change the language of the audio description, if multiple languages are provided.

Extended audio description

In cases where the existing pauses in a soundtrack are not long enough, the video pauses the visual content to extend the audio track and provides an extended audio description to describe all of the important visual information.

Alternative language versions are available

Audio descriptions in languages other than that of the media are available so that the user can choose to listen to the audio description in their preferred language.

Style guide

The script for the audio description follows an organization’s style guide.

  • Name of the style guide
  • Version (if any)
  • Date of release
  • Description
  • Examples of how core WCAG guidelines are addressed
Testing with users

Tests were conducted with users who need audio description, and issues were fixed based on findings.

  • Type of disabilities each user had
  • Number of users (for each type)
  • Date of testing
  • Examples of fixed issues based on the results
Reviewed by content creators

The audio description was reviewed by the person who created the video.

  • Role of the creator
  • Number of creators (for each Role)
  • Date (Period) of review
  • Examples of fixed issues based on the feedback
Using human describers

For live events, the organization has a human describer providing live audio description to the audience using assistive listening devices.

Perfect set of alternatives

As part of the organization’s standard media production procedures, the video producer creates the closed caption files, audio description, and descriptive transcript during the production cycle and then uploads them to their appropriate places during the publishing process.

  • Alternatives provided

Figure captions

Users can view figure captions even when the figure does not have focus.

Persistent captions

Needs additional research

Figure captions persist or can be made to persist even if the focus moves away.

Single sense

Users have content that does not rely on a single sense or perception.

Use of hue

Needs additional research

Information conveyed by graphical elements does not rely on hue.

Use of visual depth

Needs additional research

Information conveyed with visual depth is also conveyed programmatically and/or through text.

Use of sound

Information conveyed with sound is also conveyed programmatically and/or through text.

Use of spatial audio

Information that is conveyed with spatial audio is also conveyed programmatically and/or through text.

Text and wording

Text appearance

Users can read visually rendered text.

Maximum text contrast

Needs additional research

The rendered text against its background meets a maximum contrast ratio test for its text appearance.

Minimum text contrast

Needs additional research

The rendered text against its background meets a minimum contrast ratio test for its text appearance.
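
WCAG 3.0 has not yet settled on a contrast metric. For reference, the WCAG 2.x contrast ratio, which many testing tools implement today and which WCAG 3.0 may revise or replace, can be computed as follows:

```javascript
// WCAG 2.x contrast ratio between two sRGB colors, each given as
// [R, G, B] with channel values 0-255.
function relativeLuminance([r, g, b]) {
  // Linearize each sRGB channel, then weight per the WCAG 2.x formula
  const linear = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white yields the maximum ratio of 21:1
// contrastRatio([0, 0, 0], [255, 255, 255]) → 21
```

Ratios range from 1:1 (identical colors) to 21:1 (black on white); WCAG 2.x level AA requires 4.5:1 for normal-size text.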

Text size

Needs additional research

The rendered text meets a minimum font size and weight.

Text style

The rendered text does not use a decorative or cursive font face.

Text-to-speech

Users can access text content and its meaning with text-to-speech tools.

Text-to-speech supported

Needs additional research

Text content can be converted into speech.

Human language

The human language of the view and content within the view is programmatically available.
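
In HTML, for example, the human language is exposed with the `lang` attribute, both on the view as a whole and on passages in another language:

```html
<html lang="en">
  <body>
    <p>The conference slogan was
      <q lang="fr">L'accessibilité pour tous</q>.</p>
  </body>
</html>
```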

Semantic text appearance

Needs additional research

Meaning conveyed by text appearance is programmatically available.

Clear language

Users can understand the content without having to process complex or unclear language.

This guideline will include exceptions for poetic, scriptural, artistic, and other content whose main goal is expressive rather than informative.

To ensure this guideline works well across different languages, members of AG, COGA, and internationalization (i18n) agreed on an initial set of languages to pressure-test the guidance. The five “guardrail” languages are:

  • Arabic
  • English
  • Hindi
  • Mandarin
  • Russian

We started with the six official languages of the United Nations (UN). Then we removed French and Spanish because they are similar to English. We added Hindi because it is the most commonly spoken language that is not on the UN list.

The group of five languages includes a wide variety of language features, such as:

  • Right-to-left text layout
  • Vertical text layout
  • Tonal sounds that affect meaning

This list doesn’t include every language, but it helps keep the work manageable while making the guidance more useful for a wide audience. We will work with W3C’s Global Inclusion community group, the Internationalization (i18n) task force, and others to review and refine the testing and techniques for these requirements. We also plan to create guidance for translating the guidelines into more languages in the future.

Clear structure

Content has a meaningful and understandable structure that clearly indicates the purpose of each section.

Short blocks of text

Content is organized into short paragraphs or “chunks” to help users understand and remember the information.

Sentence structure

Sentences are streamlined to avoid unnecessary words or phrases and complex structures such as clauses within clauses that can make it hard for users to focus on the main point.

Common words

Common words are used, and definitions are available for uncommon words that the target audience is unlikely to know.

This requirement will include tests and techniques for content that is intended for specialized audiences rather than the general public.

Abbreviations

Explanations are available for the first use of abbreviations, acronyms, initialisms, and numeronyms.

Non-literal language

Explanations or unambiguous alternatives are available for non-literal language, such as idioms and metaphors.

Verb tense

The verb tense used is easiest to understand in context.

Numerical supplements

Alternatives are provided for numbers and numerical concepts.

Unambiguous numerical formatting

Numerical information includes sufficient context to avoid confusion when presenting dates, temperatures, time, and Roman numerals.

Visual aids

Visual aids are available to supplement and aid understanding of complex ideas in written content, such as processes, workflows, relationships, or chronological information.

Summaries

A summary is available for documents and articles that have more than 300 words.

Topic sentence

For content intended to inform the user, each paragraph begins with a sentence stating the aim or purpose.

Unambiguous pronunciation

For Arabic and Hebrew, where letters or diacritics needed for phonetic reading are often omitted, an alternative version is provided with these missing elements included.

Review process
  • The organization has a documented process to review content for clear language before publication.
  • If the organization uses AI tools to generate or alter content, the organization has a documented process for a human to review and attest that the content is clear and conveys the intended meaning.
Style guide
  • The organization has a documented style guide that includes guidance on clear language and a policy that requires editors to follow the style guide.
  • The style guide includes guidance on clear words as well as clear numbers, such as avoiding or explaining Roman numerals, removing numerical information that is not essential for understanding the content, and providing explanations of essential numerical information to aid users with disabilities that impact cognitive accessibility.
Training policy
  • The organization has documented training material for content editors that includes guidance on clear language and a policy that editors are required to complete the training regularly.

Interactive components

Keyboard focus appearance

Users can see which element has keyboard focus.

Which foundational requirements apply?

For each focusable item:

  1. Is the user agent default focus indicator used?
  2. Is the focus indicator defined by the author?
Custom indicator

A custom focus indicator is used with sufficient size, change of contrast, adjacent contrast, distinct style, and adjacency.
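
A minimal sketch of a custom indicator in CSS, embedded here in HTML; the specific thickness, offset, and color are illustrative, since WCAG 3.0 has not finalized thresholds:

```html
<style>
  /* A thick, solid outline offset from the control so it remains visible
     against both the control itself and the surrounding background */
  a:focus-visible,
  button:focus-visible {
    outline: 3px solid #00457c;
    outline-offset: 2px;
  }
</style>
<button>Save draft</button>
```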

User agent default indicator

Focusable item uses the user agent default indicator.

Supplementary indicators

@@

Style guide

Focus indicators follow an organizational style guide.

Pointer focus appearance

Users can see the location of the pointer focus.

Pointer visible

There is a visible indicator of pointer focus.

Navigating content

Users can determine where they are and move through content (including interactive elements) in a systematic and meaningful way regardless of input or movement method.

Focus in viewport

The focus does not move to a position outside the current viewport, unless a mechanism is available to return to the previous focus point.

Focus retention

A user can focus on a content “area,” such as a modal or pop-up, then resume their view of all content using a limited number of steps.

Keyboard focus order

The keyboard focus moves sequentially through content in an order and way that preserves meaning and operability.

Restore focus

When the focus is moved by the content into a temporary change of view (e.g. a modal), the focus is restored to its previous location when returned from the temporary change of view.

Relevant focus

The focus order does not include repetitive, hidden, or static elements.

Expected behavior

Users have interactive components that behave as expected.

Consistent interaction

Interactive components with the same functionality behave consistently.

Consistent labels

Interactive components with the same functionality have consistent labels.

Consistent visual design

Interactive components that have similar function and behavior have a consistent visual design.

Control location

Needs additional research

Interactive components are visually and programmatically located in conventional locations.

Conventions

Needs additional research

Interactive components follow established conventions.

Familiar component

Conventional interactive components are used.

Reliable positioning

Interactive components retain their position unless a user changes the viewport or moves the component.

Control information

Users have information about interactive components that is identifiable and usable visually and using assistive technology.

Control contrast

Needs additional research

Visual information required to identify user interface components and states meets a minimum contrast ratio test, except for inactive components or where the appearance of the component is determined by the user agent and not modified by the author.

Control importance

Needs additional research

The importance of interactive components is indicated.

Control labels

Interactive components have visible labels that identify the purpose of the component.

Control updates

Changes to interactive components’ names, roles, values or states are visually and programmatically indicated.

Distinguishable controls

Interactive components are visually distinguishable without interaction from static content and include visual cues on how to use them.

Field constraints

Field constraints and conditions (required line length, date format, password format, etc.) are available.

Input labels

Inputs have visible labels that identify the purpose of the input.

Label in name

The programmatic name includes the visual label.
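
For example, when an author supplies an accessible name, it should include the visible label text so that speech-input users can activate the control by saying what they see (the button text here is illustrative):

```html
<!-- Passes: the programmatic name contains the visible label "Search" -->
<button aria-label="Search this site">Search</button>

<!-- Fails: a speech-input user who says "click Search" will not match
     the programmatic name "Go" -->
<button aria-label="Go">Search</button>
```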

Name, role, value, state

Accurate names, roles, values, and states are available for interactive components.
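
A custom disclosure control sketched with ARIA; native HTML elements provide these semantics automatically and are generally preferable (identifiers are illustrative):

```html
<!-- Name from content, role from the button element,
     state from aria-expanded -->
<button aria-expanded="false" aria-controls="shipping-details">
  Shipping details
</button>
<div id="shipping-details" hidden>
  Orders ship within two business days.
</div>
```

When script toggles the panel, it must update `aria-expanded` so the state change is programmatically indicated as well as visually shown.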

Input / operation

Keyboard interface input

Users can navigate and operate content using only the keyboard.

All elements keyboard actionable

All elements that can be controlled or activated by pointer, audio (voice or other), gesture, camera input, or other means can be controlled or activated from the keyboard interface.

Here, and throughout this section, “camera input” refers to user control using a camera as a motion sensor to detect gestures of any type, for example “in the air” gestures. It does not include, for example, a static QR code image on a web page.

All content keyboard accessible

All content that can be accessed by other input modalities can be accessed using keyboard interface only.

All content includes content made available via hovers, right clicks, etc.

Other input modalities include pointing devices, voice and speech recognition, gesture, camera input, and any other means of input or control.

The All Elements Keyboard Actionable requirement ensures you can navigate to all actionable elements, but if the next element is five screens down, you also need to be able to access all of the content in between. Similarly, if content is in expanding sections, you need not only to open them but also to access all of their content, not just their actionable elements.

A menu that opens and closes is an interactive group that consists of an icon or label (which opens and closes the menu - and is therefore an interactive element) and a group of interactive elements inside.

Bidirectional navigation

It is always possible to move forward and backward at each point using keyboard navigation.

We are considering making this require that the navigation be symmetrical (i.e., if you navigate forward and then backward, you always end up back in the same place) but are interested in comments on this.

Conflicting keyboard commands

Author generated keyboard commands do not conflict with standard platform keyboard commands or they can be remapped.

Keyboard navigable if responsive

If the page / view uses responsive design, the page / view remains fully keyboard navigable.

No keyboard trap

It is always possible to navigate away from an element after navigating to, entering, or activating it by using a common keyboard navigation technique, or by using a technique for exiting that is described on the page / view or on a page / view earlier in the process where it is used.

User control of keyboard focus

When the keyboard focus is moved, one of the following is true:

  • The focus was moved under direct user control;
  • A new view such as a dialog is introduced and focus is moved to that view;
  • The user is informed of the potential keyboard focus move before it happens and given the chance to avoid the move;
  • The keyboard focus moves to the next item in keyboard navigation order automatically on completion of some user action.

Examples of where it may be useful to jump the user to some other location (after, of course, asking the user whether they want to move there) include:

  • a form message that says “If you answer no, you can skip questions 8 through 15, would you like to skip to question 16”; and
  • when a form error is detected on submit and a message says “There is an error on the page, would you like to jump to it” (especially if it also provided information on what the error was).

Relevant tab order keyboard focus

Except for skip links and other elements that are hidden but specifically added to aid keyboard navigation, tabbing does not move the keyboard focus into content that was not visible before the Tab (or Shift + Tab) key was pressed.

Accordions, dropdown menus, and ARIA tab panels are examples of expandable content. Under this requirement, these would not expand simply because tabbing reaches an element contained in them. They would either not expand, or would not have any tab-order elements in them.

For example, a menu that expands when you tab to it, but then uses arrow keys to navigate in it would pass. But a menu that expands and then requires you to tab through all the newly-visible elements to navigate past it would fail.
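The passing pattern above (arrow keys within the menu, Tab to move past it as a single stop) can be sketched as the index computation alone (JavaScript; key names follow the DOM `KeyboardEvent.key` values, and the function name is illustrative):

```javascript
// Sketch: arrow-key navigation inside an expanded menu (roving focus),
// so Tab leaves the menu while arrows move within it.
// Returns the index of the menu item that should receive focus next.
function nextMenuIndex(current, key, itemCount) {
  switch (key) {
    case "ArrowDown": return (current + 1) % itemCount;             // wrap to first
    case "ArrowUp":   return (current - 1 + itemCount) % itemCount; // wrap to last
    case "Home":      return 0;
    case "End":       return itemCount - 1;
    default:          return current; // Tab / Shift+Tab exit the menu instead
  }
}
```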

Physical or cognitive effort when using keyboard

Users can use keyboard without unnecessary physical or cognitive effort.

Logical keyboard focus order

The keyboard focus moves through content in an order and way that preserves meaning and operability.

Standard or described navigation keys

If any keyboard action needed to navigate, perceive, and operate the full content of the page / view is not a common keyboard navigation technique, then it is described in the page / view where it is required or on a page / view earlier in the process where it is used.

Platform-related functions are not the responsibility of the author as long as they are not overridden by the content. Examples:

  • Tab and Shift + Tab to move through elements
  • Sticky Keys functionality that allows single key activation of multi-key commands

Preserve keyboard focus

When keyboard focus moves from one context to another within a web page, whether automatically or by user request, the keyboard focus is preserved so that, when the user goes back to the previous context, focus is restored to its previous location, unless that location no longer exists.

An example of this would be when a modal dialog or other pop up opens.

Best practice on placing focus when the previous focus location no longer exists, is to put focus on the focusable location just before the one that was removed. An example of this would be a list of subject-matter tags in a document, with each tag having a delete button. A user clicks on the delete button in a tag in the middle of the tag list. When the tag is deleted, focus is placed onto the tag that was before the now-deleted tag.

This is also quite useful when moving between pages, but that would usually have to be done by the browser, unless the user is in a process where the focus location is stored in a cookie or on the server between pages, so that the old location is still available when the person returns to the page.
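The restore-previous-focus behavior, including the best-practice fallback described above, can be sketched as pure logic (JavaScript; elements are represented by plain ids, and the class and method names are ours, not an API from the guidelines):

```javascript
// Sketch: remember the focused element when a dialog or other context opens,
// and on return restore it, or the focusable location just before it if it
// was removed (e.g., a deleted tag in a tag list).
class FocusMemory {
  constructor() { this.stack = []; }
  // Record the focused id and the focus order at the moment a new context opens.
  enterContext(focusedId, focusOrder) {
    this.stack.push({ focusedId, focusOrder });
  }
  // On return, pick the id that should receive focus. `currentOrder` lists the
  // focusable ids that still exist, in document order.
  leaveContext(currentOrder) {
    const { focusedId, focusOrder } = this.stack.pop();
    if (currentOrder.includes(focusedId)) return focusedId;
    const idx = focusOrder.indexOf(focusedId);
    for (let i = idx - 1; i >= 0; i--) {          // walk backward to the nearest survivor
      if (currentOrder.includes(focusOrder[i])) return focusOrder[i];
    }
    return currentOrder[0] ?? null;               // nothing earlier survives: first item
  }
}
```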

Repetitive links

Repetitive adjacent links that have the same destination are avoided.

Supplemental if applicable to all content, else best practice.

A common pattern is having a component that contains a linked image and some linked text, where both links go to the same content. Someone using screen reading software can be disoriented from the unnecessary chatter, and a keyboard user has to navigate through more tab stops than should be necessary. Combining adjacent links that go to the same content improves the user experience.

Comparable keyboard effort

Our user interface design principles include minimizing the difference between the number of input commands required when using the keyboard interface only and the number of commands when using other input modalities.

Other input modalities include pointer and voice.

Pointer input

Pointer input is consistent and all functionality can be done with simple pointer input in a time and pressure insensitive way.

Consistent pointer cancellation - set of pages / views

The method of pointer cancellation is consistent for each type of interaction within a view or set of views, except where it is essential to be different.

Where it is essential to be different, it can be helpful to alert the user.

Pointer cancellation

For functionality that can be activated using a simple pointer input, at least one of the following is true:

No Down-Event
The down event of the pointer is not used to execute any part of the function;
Abort or Undo
Completion of the function is on the up event, and a mechanism is available to abort the function before completion or to undo the function after completion;
Up Reversal
The up event reverses any outcome of the preceding down event;
Essential
Completing the function on the down-event is essential.

An example of Abort would be dragging, where there is a pickup action on the down event but it can be cancelled by dropping on the pickup point or anywhere other than the drop area.

Examples of places where action on the down event may be essential include a Dutch auction or a game trigger.
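The up-event patterns can be sketched as a tiny press tracker (JavaScript; the method names are illustrative, mirroring pointer down/leave/up events):

```javascript
// Sketch: complete the action on the up event so the user can abort by
// dragging off the target before release ("No Down-Event" plus "Abort").
function makePressTracker(action) {
  let armed = false;
  return {
    down()  { armed = true; },   // down event only arms; nothing executes yet
    leave() { armed = false; },  // moving off the target aborts the activation
    up()    { if (armed) { armed = false; action(); } } // activate on release
  };
}
```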

Pointer pressure alternative

Specific pointer pressure is not the only way of achieving any functionality, except where specific pressure is essential to the functionality.

Specific pressure would be essential to a paintbrush feature or advanced signature verification.

Pointer speed alternative

Specific pointer speed is not the only way of achieving any functionality, except where specific pointer speed is essential to the functionality.

Specific pointer speed would be essential to a paintbrush feature, advanced signature verification, or time constrained gaming.

Simple pointer input

Any functionality that uses pointer input other than simple pointer input can also be operated by a simple pointer input or a sequence of simple pointer inputs that do not require timing.

Examples of pointer input that are not simple pointer input are double clicking, swipe gestures, multipoint gestures like pinching or split tap or two-finger rotor, variable pressure or timing, and dragging movements.

Complex pointer inputs are not banned; they just can't be the only way to accomplish an action.

Simple pointer input is different from single pointer input and is more restrictive than simply using a single pointer.

Speech and voice input

Provide alternatives to speech input for those who cannot speak, and facilitate speech control for those for whom it is most effective.

Speech alternative

Speech input is not the only way of achieving any functionality except where a speech input is essential to the functionality.

Real-time bidirectional voice communication

Wherever there is real-time bidirectional voice communication, a real-time text option is available.

Input operation

Users have the option to use different input techniques and combinations and switch between them.

Change keyboard focus with pointer device

If content interferes with pointer or keyboard focus behavior of the user agent, then selecting anything on the view with a pointer moves the keyboard focus to that interactive element, even if the user drags off the element (so as to not activate it).

An example of this is: a user scrolls a document down six screens, then clicks on a paragraph with their pointer. The user then presses the tab key, which moves the focus to the first interactive component after the position on the screen that was clicked, rather than from the previous position, six screens up the document.
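The focus-target computation in that example can be sketched as follows (JavaScript; numeric offsets stand in for document positions, and the function name is ours):

```javascript
// Sketch: after a click on non-interactive content, Tab moves focus to the
// first interactive element at or after the click position, rather than
// continuing from the previously focused element several screens away.
function nextTabStop(clickOffset, interactiveOffsets) {
  const sorted = [...interactiveOffsets].sort((a, b) => a - b);
  // First interactive element at or after the click; wrap to the first on
  // the page if the click was past the last one.
  return sorted.find((o) => o >= clickOffset) ?? sorted[0] ?? null;
}
```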

Content on hover or keyboard focus

When receiving and then removing pointer hover or keyboard focus triggers additional content to become visible and then hidden, and the visual presentation of the additional content is controlled by the author and not by the user agent, all of the following are true:

Dismissible

A mechanism is available to dismiss the additional content without moving pointer hover or keyboard focus, unless the additional content does not obscure or replace other content;

Hoverable

If pointer hover can trigger the additional content, then the pointer can be moved over the additional content without the additional content disappearing;

Persistent

The additional content remains visible until the hover or keyboard focus trigger is removed, the user dismisses it, or its information is no longer valid.

Examples of additional content controlled by the user agent include browser tooltips created through use of the HTML title attribute.

Custom tooltips, sub-menus, and other non-modal popups that display on hover and keyboard focus are examples of additional content covered by this criterion.

This applies to content that appears in addition to the triggering interactive element itself. Since hidden interactive elements that are made visible on keyboard focus (such as links used to skip to another part of a view) do not present additional content they are not covered by this requirement.
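A minimal state machine for the three conditions can be sketched as follows (JavaScript; the class and method names are illustrative, and a real implementation would also track keyboard focus triggers alongside hover):

```javascript
// Sketch: a tooltip that stays up while the pointer is over it (hoverable),
// remains until the trigger is left (persistent), and can be dismissed with
// Escape without moving hover or focus (dismissible).
class TooltipState {
  constructor() { this.overTrigger = false; this.overTooltip = false; this.dismissed = false; }
  triggerEnter() { this.overTrigger = true; this.dismissed = false; }
  triggerLeave() { this.overTrigger = false; }
  tooltipEnter() { this.overTooltip = true; }  // pointer may travel onto the tooltip
  tooltipLeave() { this.overTooltip = false; }
  keydown(key)  { if (key === "Escape") this.dismissed = true; } // dismiss in place
  get visible() { return !this.dismissed && (this.overTrigger || this.overTooltip); }
}
```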

Gesture alternative

Gestures are not the only way of achieving any functionality, except where a gesture is essential to the functionality.

Input method flexibility

Where functionality, including input or navigation, is achievable using different input methods, users have the option to switch between those input methods at any time.

This does not mean that all input technologies (pointer, keyboard, voice, gesture) need to be supported in one’s content, but if an input modality is supported, it is supported everywhere in the content except where a particular input method is essential to the functionality.

Use without body movement

Full or gross body movement is not the only way of achieving any functionality, except where full or gross body movement is essential to the functionality.

This includes both detection of body movement and actions to the device (e.g., shaking) that require body movement.

Authentication

Provide alternatives to authentication for those who cannot use some authentication methods usable by people without disabilities.

Biometric identification

Use of a biometric is not the only way to identify or authenticate.

Voice identification

Voice is not the only way to identify or authenticate.

Error handling

Correct errors

Users know about and can correct errors.

Error association

Error notifications are programmatically associated with the error source so that users can access the error information while focused on the source of the error.
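One common technique is to associate the message with the field using ARIA's `aria-invalid` and `aria-describedby` attributes. A hedged sketch (JavaScript; the function and ids are illustrative, and it returns the attribute set a field would carry rather than touching a DOM):

```javascript
// Sketch: programmatically associate an error message with its field so
// assistive technology can announce it while focus is on the field.
function associateError(fieldId, errorId, hasError) {
  return hasError
    ? { id: fieldId, "aria-invalid": "true", "aria-describedby": errorId }
    : { id: fieldId, "aria-invalid": "false" };
  // The error message itself would be rendered in an element whose id is
  // `errorId`, e.g. <span id="email-error">Enter a valid address.</span>
}
```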

Error cause association

When an error occurs that results from a user interaction with an interactive element or component (the cause), the error is visually and programmatically associated with that element or component.

Error cause in notification

When an error occurs that results from an interactive element or component (the cause), an indicator of that element or component is included in the error notification. If the interactive element or component is located in a different part of a process, then the page or step in the process is included in the error notification.

Error identification

Errors are visually identifiable without relying on only text, only color, or only symbols.

Error linked

When an error notification is not adjacent to the item in error, a link to the error is provided.

Error location

Error notifications are visually collocated with the item in error within the viewport, or provide a link to the source of the error which, when activated, moves the focus to the error.

Error notification label

Error notifications include text with at least two of the following:

  • Error notifications are identified with an error icon in text.
  • Error notifications use a color that differentiates the error from other content.
  • Error notifications are labeled with text that indicates it's an error.

Error notification

Errors that can be automatically detected are identified and described to the user.

Error persists

Error notifications persist until the user dismisses them or the error is resolved.

Error suggestion

When an error occurs and suggestions for correction are known, then the suggestions are provided to the user, unless it would jeopardize the security or purpose of the content.

Make errors distinct

Use a culturally relevant and familiar error symbol, color, and text to indicate errors.

Prevent errors

When users are submitting information, at least one of the following is true:

  • Users can review, confirm, and correct information before submitting;
  • Information is validated and users can correct any errors found; or
  • Users can undo submissions.

Submission confirmation

When users submit information as part of a process, users are notified that submission was completed and what information was provided.

Validate after data entry

During data entry, validation occurs after the user enters the data.

Validate as you go

When completing a multi-step process, validation for errors is completed before the user moves to the next step in the process.

Techniques could include inline error handling as well as step-by-step validation.

Animation and movement

Avoid physical harm

Users do not experience physical harm from content.

Audio shifting

Needs additional research

Audio shifting designed to create a perception of motion is avoided, or can be paused or prevented.

Flashing

Flashing or strobing beyond thresholds defined by safety standards is avoided, or can be paused or prevented.

Motion

Needs additional research

Visual motion and pseudo-motion that lasts longer than 5 seconds is avoided, or can be paused or prevented.

Motion from interaction

Needs additional research

Visual motion and pseudo-motion triggered by interaction is avoided or can be prevented, unless the animation is essential to the functionality or the information being conveyed.

Layout

Relationships

Users can determine relationships between content both visually and using assistive technologies.

Clear relationships

The relationships between parts of the content are clearly indicated.

Clear starting point

The starting point or home is visually and programmatically labeled.

Distinguishable relationships

Needs additional research

Relationships that convey meaning between pieces of content are programmatically determinable. Note: Examples of relationships include items positioned next to each other, arranged in a hierarchy, or visually grouped.

Distinguishable sections

Needs additional research

Sections are visually and programmatically distinguishable.

Recognizable layouts

Users have consistent and recognizable layouts available.

Consistent order

The relative order of content and interactions remains consistent throughout a workflow. Note: Relative order means that content can be added or removed, but repeated items are in the same order relative to each other.

Familiar layout

Conventional layouts are available.

Information about options

Information required to understand options is visually and programmatically associated with the options.

Related information

Related information is grouped together within a visual and programmatic structure.

Orientation

Users can determine their location in content both visually and using assistive technologies.

Current location

Needs additional research

The current location within the view, multi-step process, and product is visually and programmatically indicated.

Multi-step process

Context is provided to orient the user in a site or multi-step process.

Contextual information

Contextual information is provided to help the user orient within the product.

Structure

Users can understand and navigate through the content using structure.

Section labels

Major sections of content have well-structured, understandable visual and programmatic headings within them.

Section length

Needs additional research

Content is organized into short sections of related content.

Section purpose

The purpose of each section of the content is clearly indicated.

Single idea

The number of concepts within a segment of text is minimized.

Topic sentence

For text intended to inform the user, each paragraph of text begins with a topic sentence stating the aim or purpose.

White spacing

Whitespace separates chunks of content.

Title

Content has a title or high-level description.

Lists

Three or more items of related data are presented as bulleted or numbered lists.

Numbered steps

Steps in a multi-step process are numbered.

Consistency across views

Consistency

Users have consistent and alternative methods for navigation.

Consistent navigation

Navigation elements remain consistent across views within the product.

Multiple ways

The product provides at least two ways of navigating and finding information (Search, Scan, Site Map, Menu Structure, Breadcrumbs, contextual links, etc.).

Persistent navigation

Navigation features are available regardless of screen size and magnification (responsive design).

Process and task completion

Avoid cognitive tasks

Users can complete tasks without needing to memorize information or complete advanced cognitive tasks.

Allow automated entry

Automated input from user agents, third-party tools, or copy-and-paste is not prevented.

No cognitive tests

Processes, including authentication, can be completed without puzzles, calculations, or other cognitive tests (essential exceptions would apply).

No memorization

Needs additional research

Processes can be completed without memorizing and recalling information from previous stages of the process.

Adequate time

Users have enough time to read and use content.

Adjust timing at start

For each process with a time limit, a mechanism exists to disable or extend the limit before the time limit starts.

Adjust timing at timeout

For each process with a time limit, a mechanism exists to disable or extend the time limit at timeout.

Disable timeout

For each process with a time limit, a mechanism exists to disable the limit.
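The extend and disable mechanisms can be sketched as a small model (JavaScript; the class is illustrative, and time advances via a manual `tick()` so the sketch needs no real timers):

```javascript
// Sketch: a session time limit that can be extended when it runs out or
// disabled entirely, as the three timing requirements describe.
class TimeLimit {
  constructor(limitSeconds) { this.remaining = limitSeconds; this.disabled = false; }
  tick(seconds) {                         // time passing
    if (!this.disabled) this.remaining = Math.max(0, this.remaining - seconds);
  }
  get expired() { return !this.disabled && this.remaining === 0; }
  extend(seconds) { this.remaining += seconds; } // offered again at timeout
  disable() { this.disabled = true; }            // "Disable timeout"
}
```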

Unnecessary steps

Users can complete tasks without unnecessary steps.

Optional information

Processes can be completed without being forced to read or understand unnecessary content.

Optional input

Processes can be completed without entering unnecessary information.

Avoid deception

Users do not encounter deception when completing tasks, unless essential to the task.

Deceptive controls

Needs additional research

Interactive components are not deceptively designed.

Exploitive behaviors

Needs additional research

Process completion does not include exploitive behaviors.

Misinformation

Needs additional research

Processes can be completed without navigating misinformation or redirections.

Preselections

Preselected options are visible by default during process completion without additional interactions.

Redirection

Needs additional research

A mechanism is available to prevent fraudulent redirection or alert users they are exiting the site.

Retain information

Users do not have to reenter information or redo work.

Go back in process

In a multi-step process, the interface supports stepping backwards in a process and returning to the current point without data loss.

Redundant entry

Information previously entered by or provided to the user that is required to be entered again in the same process is either auto-populated, or available for the user to select.

Save progress

Data entry and other task completion processes allow saving and resuming from the current step in the task.

Complete tasks

Users understand how to complete tasks.

Action required

In a process, the interface indicates when user input or action is required to proceed to the next step.

Inform at start of process

Information needed to complete a multi-step process is provided at the start of the process, including:

  • number of steps it might take (if known in advance),
  • details of any resources needed to perform the task, and
  • overview of the process and next step.

Steps and instructions

The steps and instructions needed to complete a multi-step process are available.

Policy and protection

Content source

Users can determine when content is provided by a third party.

Citation

Needs additional research

The author or source of the primary content is visually and programmatically indicated.

Indicate third-party content

Needs additional research

Third-party content (AI, Advertising, etc.) is visually and programmatically indicated.

Obscuring primary content

Needs additional research

Advertising and other third-party content that obscures the primary content can be moved or removed without interacting with the advertising or third-party content.

Security and privacy

Users' safety, security, or privacy are not decreased by accessibility measures.

Clear agreement

Needs additional research

The interface indicates when a user is entering an agreement or submitting data.

Disability information privacy

Needs additional research

Disability information is not disclosed to or used by third parties and algorithms (including AI).

Sensitive information

Needs additional research

Prompts to hide and remove sensitive information from observers are available.

Risk statements

Needs additional research

Clear explanations of the risks and consequences of choices, including use, are stated.

Algorithms

Users are not disadvantaged by algorithms.

Algorithm bias

Needs additional research

Algorithms (including AI) used are not biased against people with disabilities.

Social media algorithm

Needs additional research

A mechanism is available to understand and control social media algorithms.

Help and feedback

Help available

Users have help available.

Consistent help

Needs additional research

Help is labeled consistently and available in a consistent visual and programmatic location.

Contextual help

Contextual help is available.

Conversational support

Conversational support allowing both text and verbal modes is available.

Data visualizations

Needs additional research

Help is available to understand and use data visualizations.

New interfaces

Needs additional research

When interfaces dramatically change (due to redesign), a mechanism to learn the new interface or revert to the older design is available.

Personalizable help

Needs additional research

Help is adaptable and personalizable.

Sensory characteristics

Instructions and help do not rely on sensory characteristics.

Support available

Needs additional research

Accessible support is available during data entry, task completion and search.

Supplemental content

Users have supplemental content available.

Number supplements

Text or visual alternatives are available for numerical concepts.

Text supplements

Needs additional research

Visual illustrations, pictures, and images are available to help explain complex ideas, events, and processes.

Feedback

Users can provide feedback to authors.

Feedback mechanism

A mechanism is available to provide feedback to authors.

User control

Control text

Users can control text presentation.

Adjust color

Text and background colors can be customized.

Adjust background

Patterns, designs, or images placed behind text are avoided or can be removed by the user.

Font size meaning

When font size conveys visual meaning (such as headings), the text maintains its meaning and purpose when text is resized.

Text customization

Users can change the text style (like font and size) and the layout (such as spacing and single column) to fit their needs.

Adjustable viewport

Users can transform size and orientation of content presentation to make it viewable and usable.

Orientation

Content orientation allows the user to read the language presented without changing head or body position.

Reflow

Content can be viewed in multiple viewport sizes, orientations, and zoom levels without loss of content, functionality, or meaningful relationships, and with scrolling occurring in only one direction.

Transform content

Users can transform content to make it understandable.

Alternative presentation

Needs additional research

Complex information or instructions for complex processes are available in multiple presentation formats.

Content markup

Role and priority of content is programmatically determinable.

Summary

Access to a plain-language summary, abstract, or executive summaries is available.

Transform content

Needs additional research

Content can be transformed to make its purpose clearer.

Media control

Users can control media and media alternatives.

Adjust captions

The position and formatting of captions can be changed.

Audio control

Audio can be turned off while the video continues to play, without affecting the system sound.

Interactive audio alternative

Needs additional research

Alternatives for audio include the ability to search and look up terms.

Media alternative control

Captions and audio descriptions can be turned on and off.

Media chapters

Needs additional research

Media can be navigated by chapters.

Control interruptions

Users can control interruptions.

Control notifications

The timing and positioning of notifications and other interruptions can be changed, suppressed or saved, except interruptions involving an emergency.

Control possible harm

Users can control potential sources of harm.

Disturbing content

Needs additional research

Warnings are available about content that may be emotionally disturbing, and the disturbing content can be hidden.

Haptic stimulation

Needs additional research

Haptic feedback can be reduced or turned off.

Triggers

Needs additional research

Warnings are available about triggering content, and the warnings and triggering content can be hidden.

Verbosity

Needs additional research

Overwhelming wordiness can be reduced or turned off.

Visual stimulation

Needs additional research

Visual stimulation from combinations of density, color, movement, etc. can be reduced or turned off.

User agent support

Users can control content settings from their User Agents including Assistive Technology.

Assistive technology control

Content can be controlled using assistive and adaptive technology.

Printing

Needs additional research

Printing respects user’s content presentation preferences.

User settings

User settings are honored.

Virtual cursor

Assistive technologies can access content and interactions when using mechanisms that convey alternative points of regard or focus (i.e. virtual cursor).

Conformance

Summary

You might want to make a claim that your content or product meets the WCAG 3.0 guidelines. If it does meet the guidelines, we call this “conformance”.

If you want to make a formal conformance claim, you must use the process described in this document. Conformance claims are not required and your content can conform to WCAG 3.0, even if you don’t want to make a claim.

There are two types of content in this document:

We are experimenting with different conformance approaches for WCAG 3.0. Once we have developed enough guidelines, we will test how well each works.

WCAG 3.0 will use a different conformance model than WCAG 2.2 in order to meet its requirements. Developing and vetting the conformance model is a large portion of the work AG needs to complete over the next few years.

AG is exploring a model based on Foundational Requirements, Supplemental Requirements, and Assertions.

The most basic level of conformance will require meeting all of the Foundational Requirements. This set will be somewhat comparable to WCAG 2.2 Level AA.

Higher levels of conformance will be defined and met using Supplemental Requirements and Assertions. AG will be exploring whether meeting the higher levels would work best based on points, percentages, or predefined sets of requirements (modules).

Other conformance concepts AG continues to explore include conformance levels, issue severity, adjectival ratings, and pre-assessment checks.

See Explainer for W3C Accessibility Guidelines (WCAG) 3.0 for more information.

Only accessibility-supported ways of using technologies

The concept of "accessibility-supported" is to account for the variety of user agents and scenarios. How does an author know that a particular technique for meeting a guideline will work in practice with user agents that are used by real people?

The intent is for the responsibility of testing with user agents to vary depending on the level of conformance.

At the foundational level of conformance, assumptions can be made by authors that methods and techniques provided by WCAG 3.0 work. At higher levels of conformance the author may need to test that a technique works, or check that available user agents meet the requirement, or a combination of both.

This approach means the Working Group will ensure that methods and techniques included do have reasonably wide and international support from user agents, and there are sufficient techniques to meet each requirement.

The intent is that WCAG 3.0 will use a content management system to support tagging of methods/techniques with support information. There should also be a process where interested parties can provide information.

An "accessibility support set" is used at higher levels of conformance to define which user agents and assistive technologies you test with. It would be included in a conformance claim, and enables authors to use techniques that are not provided with WCAG 3.0.

An exception for long-present bugs in assistive technology is still under discussion.

Defining conformance scope

When evaluating the accessibility of content, WCAG 3.0 requires the guidelines apply to a specific scope. While the scope can be all of the content within a digital product, it is usually one or more subsets of the whole. There are several reasons for this.

WCAG 3.0 therefore defines two ways to scope content: views and processes. Evaluation is done on one or more complete views or processes, and conformance is determined on the basis of one or more complete views or processes.

Conformance is defined only for processes and views. However, a conformance claim may be made to cover one process and view, a series of processes and views, or multiple related processes and views. All unique steps in a process MUST be represented in the set of views. Views outside of the process MAY also be included in the scope.

We recognize that representative sampling is an important strategy that large and complex sites use to assess accessibility. While it is not addressed within this document at this time, our intent is to later address it within this document or in a separate document before the guidelines reach the Candidate Recommendation stage. We welcome your suggestions and feedback about the best way to incorporate representative sampling in WCAG 3.0.

Glossary

Many of the terms defined here have common meanings. When terms appear with a link to the definition, the meaning is as formally defined here. When terms appear without a link to the definition, their meaning is not explicitly related to the formal definition here. These definitions are in progress and may evolve as the document evolves.

This glossary includes terms used by content that has reached a maturity level of Developing or higher. The definitions themselves include a maturity level and may mature at a different pace than the content that refers to them. The AGWG will work with other task forces and groups to harmonize terminology across documents as much as possible.

Accessibility support set

group of user agents and assistive technologies you test with

The AGWG is considering defining a default set of user agents and assistive technologies that they use when validating guidelines.

Accessibility support sets may vary based on language, region, or situation.

If you are not using the default accessibility support set, the conformance report should indicate which set is being used.

Accessibility supported

supported by at least 2 major free browsers on every operating system, and/or available in assistive technologies used cumulatively by 80% of the AT users on each operating system, for each type of AT used

Actively available

available for the user to read and use any actionable items included

Assertion

formal claim of fact, attributed to a person or organization. An attributable and documented statement of fact regarding procedures practiced in the development and maintenance of the content or product to improve accessibility

Assistive technology

hardware and/or software that acts as a user agent, or along with a mainstream user agent, to provide functionality to meet the requirements of users with disabilities that go beyond those offered by mainstream user agents

Functionality provided by assistive technology includes alternative presentations (e.g., as synthesized speech or magnified content), alternative input methods (e.g., voice), additional navigation or orientation mechanisms, and content transformations (e.g., to make tables more accessible).

Assistive technologies often communicate data and messages with mainstream user agents by using and monitoring APIs.

The distinction between mainstream user agents and assistive technologies is not absolute. Many mainstream user agents provide some features to assist individuals with disabilities. The basic difference is that mainstream user agents target broad and diverse audiences that usually include people with and without disabilities. Assistive technologies target narrowly defined populations of users with specific disabilities. The assistance provided by an assistive technology is more specific and appropriate to the needs of its target users. The mainstream user agent may provide important functionality to assistive technologies like retrieving web content from program objects or parsing markup into identifiable bundles.

Audio describer

person who provides verbal descriptions of visual elements in media, cultural spaces, and live performances to make content and experiences more accessible to individuals who are blind or have low vision

They will describe actions, settings, costumes, and facial expressions, inserting these descriptions into pauses within the dialogue or audio.

Audio description

narration added to the soundtrack to describe important visual details that cannot be understood from the main soundtrack alone

For audiovisual media, audio description provides information about actions, characters, scene changes, on-screen text, and other visual content.

Audio description is also sometimes called “video description”, “described video”, “visual description”, or “descriptive narration”.

In standard audio description, narration is added during existing pauses in dialogue. See also extended audio description.

If all important visual information is already provided in the main audio track, no additional audio description track is necessary.

Automated evaluation

evaluation conducted using software tools, typically evaluating code-level features and applying heuristics for other tests

Automated testing is contrasted with other types of testing that involve human judgement or experience. Semi-automated evaluation allows machines to guide humans to areas that need inspection. The emerging field of testing conducted via machine learning is not included in this definition.

Blocks of text

continuous text consisting of multiple sentences that is not separated by structural elements such as table cells or regions

CART

Communication Access Realtime Translation (CART) is a type of live captioning provided by trained captioners, who use specialized software along with phonetic keyboards or stenography methods to produce real-time visual captioning for meeting and event participants

CART is available primarily in English, with some providers providing French, Spanish, and other languages on demand. It is not available for Japanese and some other languages.

CART is sometimes referred to as “real-time captioning”.

Captions

time-synchronized visual and/or text alternative that communicates the audio portion of a work of multimedia (for example, a movie or podcast recording)

Captions are similar to dialogue-only subtitles, except captions convey not only the content of spoken dialogue, but also equivalents for non-dialogue audio information needed to understand the program content, including sound effects, music, laughter, speaker identification and location.

In some countries, captions are called subtitles.

Change of viewport within a page/view

change of content/context that causes the user's keyboard navigation point to change, where they have the option to move back out of the new content/context

“Within a page/view” is part of this term because, if the new viewport/content/context is within the same page/view, moving back would be under the control of the author. If moving to another page/view, perhaps on a different site, the current author would not have control, and this would be a requirement on the user agent.

This is different from Change of Context in WCAG 2.x: major changes that, if made without user awareness, can disorient users who are not able to view the entire page simultaneously.

Closed captions

captions that are decoded into chunks known as “caption frames” that are synchronized with the media

Closed captions can be turned on and off with some players, and can often be read using assistive technology.
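The caption-frame model can be sketched as a simple lookup: given time-synchronized text chunks, a player determines which frame (if any) covers the current playback time. The frame shape and function name below are illustrative assumptions, not part of these guidelines.

```javascript
// Hypothetical sketch of caption "frames" as time-synchronized text chunks.
// frames: array of { start, end, text }, times in seconds, non-overlapping.
function activeCaption(frames, timeSeconds) {
  // A frame is active while start <= current time < end.
  const frame = frames.find(f => timeSeconds >= f.start && timeSeconds < f.end);
  // Outside any frame, no caption is shown.
  return frame ? frame.text : null;
}
```

Because closed captions are data rather than burned-in pixels, a player can render the active frame as real text, which is what lets assistive technology read it.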

Closed system

information technology that prevents users from easily attaching or installing assistive technologies; for example, kiosks, calculators, and vending machines

Closely available

available in the currently perceivable content, or after one activation of an interactive element

Common keyboard navigation technique

keyboard navigation technique that is the same across most or all applications and platforms and can therefore be relied upon by users who need to navigate by keyboard alone

A sufficient listing of common keyboard navigation techniques for use by authors can be found in the WCAG common keyboard navigation techniques list.

Complex pointer input

any pointer input other than a single pointer input

Component

grouping of interactive elements for a distinct function

Conformance

satisfying all the requirements of the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim

See the Conformance section for more information.

Conformance scope

set of views and/or pages selected to be part of a conformance claim. Where a view or page is part of a process, all the views or pages in the process must be included

How a person or organisation selects the set is not defined in WCAG 3.0. There may be informative guidance on selecting a suitable set in the future, and regional laws or regulations may require a particular methodology.

Content

information and sensory experience to be communicated to the user by an interface, including code or markup that defines the content’s structure, presentation, and interactions

Contrast ratio test

To be defined.

Decorative image

To be defined.

Default direction of text

To be defined.

Default orientation

single orientation that a platform uses to view content by default

Deprecate

declare something outdated and in the process of being phased out, usually in favor of a specified replacement

Deprecated documents are no longer recommended for use and may cease to exist in the future.

Diverse set of users

To be defined.

Down event

platform event that occurs when the trigger stimulus of a pointer is depressed

The down-event may have different names on different platforms, such as “touchstart” or “mousedown”.
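One reason the down/up distinction matters for accessibility is that activation is commonly deferred to the up-event, so a user can cancel by releasing the pointer away from the target. The tracker below is a hypothetical illustration of that pattern; the function name and call sites are assumptions, not part of these guidelines.

```javascript
// Illustrative sketch: activate on the up-event rather than the down-event,
// so the user can cancel by moving the pointer off the target before release.
function createActivationTracker(onActivate) {
  let pressedTarget = null;
  return {
    // Call from a down-event handler ("mousedown", "touchstart", "pointerdown").
    down(target) {
      pressedTarget = target;
    },
    // Call from the matching up-event handler ("mouseup", "touchend", "pointerup").
    up(target) {
      // Activate only if the pointer is released over the same target;
      // releasing anywhere else cancels the action.
      if (pressedTarget !== null && pressedTarget === target) {
        onActivate(target);
      }
      pressedTarget = null;
    },
  };
}
```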

Element

To be defined.

Essential exception

exception that applies when there is no way to carry out the function without doing it this way or fundamentally changing the functionality

Evaluation

process of examining content for conformance to these guidelines

Different approaches to evaluation include automated evaluation, semi-automated evaluation, human evaluation, and user testing.

Extended audio description

audio description that is added to audiovisual media by pausing the video to allow for additional time to add audio description

This technique is only used when the sense of the video would be lost without the additional audio description and the pauses between dialogue or narration are too short.

Figure captions

title, brief explanation, or comment that accompanies a work of visual media and is always visible on the page

Functional need

statement that describes a specific gap in one’s ability, or a specific mismatch between ability and the designed environment or context

Gesture

motion made by the body or a body part used to communicate to technology

Guideline

high-level, plain-language outcome statements used to organize requirements

Guidelines provide high-level, plain-language outcome statements for managers, policy makers, individuals who are new to accessibility, and other individuals who need to understand the concepts but not dive into the technical details. They provide an easy-to-understand way of organizing and presenting the requirements so that non-experts can learn about and understand the concepts.

Each guideline includes a unique, descriptive name along with a high-level plain-language summary. Guidelines address functional needs on specific topics, such as contrast, forms, readability, and more.

Guidelines group related requirements and are technology-independent.

High cognitive load

To be defined.

Human evaluation

evaluation conducted by a human, typically to apply human judgement to tests that cannot be fully automatically evaluated

Human evaluation is contrasted with automated evaluation which is done entirely by machine, though it includes semi-automated evaluation which allows machines to guide humans to areas that need inspection. Human evaluation involves inspection of content features, by contrast with user testing which directly tests the experience of users with content.

Image

To be defined.

Image role

To be defined.

Image type

To be defined.

Informative

content provided for information purposes and not required for conformance. Also referred to as non-normative

Interactive element

part of the interface that responds to user input and can have a distinct programmatically determinable name

This is in contrast to non-interactive elements, such as headings or paragraphs.

Interactive group

grouping of interactive elements for a distinct function. It may also contain non-interactive elements

A component could also include static elements (e.g. instructional text), but must include interactive elements.

A grouping of static elements (which are not interactive) fits in the “content” category (an umbrella term for everything perceivable).

Interactive groups can be nested.

Items

smallest testable unit for testing scope

Items could be interactive components such as a drop-down menu, a link, or a media player.

They could also be units of content such as a phrase, a paragraph, a label or error message, an icon, or an image.

Keyboard focus

point in the content where any keyboard actions would take effect

Keyboard interface

API (application programming interface) from which software receives “keystrokes”

“Keystrokes” that are passed to the software from the “keyboard interface” may come from a wide variety of sources including but not limited to a scanning program, sip-and-puff morse code software, speech recognition software, AI of all sorts, as well as other keyboard substitutes or special keyboards.

Mechanism

process or technique for achieving a result

The mechanism may be explicitly provided in the content, or may be relied upon to be provided by either the platform or by user agents, including assistive technologies.

The mechanism needs to meet all success criteria for the conformance level claimed.

Method

detailed information, either technology-specific or technology-agnostic, on ways to meet the requirement as well as tests and scoring information

Navigated sequentially

navigated in the order defined for advancing focus (from one element to the next) using a keyboard interface

Non-interactive element

part of the interface that does not respond to user input and does not include sub-parts

If a paragraph included a link, the text on either side of the link would be considered a static element, but not the paragraph as a whole.

Letters within text do not constitute a “smaller part”.

Non-literal language

words or phrases used in a way that are beyond their standard or dictionary meaning to express deeper, more complex ideas

This is also called figurative language.

To understand the content, users have to interpret the implied meaning behind the words, rather than just their literal or direct meaning.

Examples include:

  • allusions
  • hyperbole
  • idioms
  • irony
  • jokes
  • litotes
  • metaphors
  • metonymies
  • onomatopoeias
  • oxymorons
  • personification
  • puns
  • sarcasm
  • similes
Non-web software

software that does not qualify as web content

Normative

content whose instructions are required for conformance

Open captions

captions rendered as images of text that are embedded directly in the video

Open captions are also known as burned-in, baked-on, or hard-coded captions. Open captions cannot be turned off and cannot be read using assistive technology.

Page

non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together

Where a URI is available and represents a unique set of content, that would be the preferred conformance unit.

Path-based gesture

gesture that depends on the path of the pointer input and not just its endpoints

Path-based gestures include both time-dependent and non-time-dependent gestures.
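One illustrative way to apply this definition: an input is path-based when the intermediate pointer positions matter, not just the start and end points. The heuristic below (deviation of the trace from the straight segment between its endpoints) and its tolerance value are assumed simplifications for illustration, not a normative test.

```javascript
// Assumed heuristic sketch: classify a pointer trace as "path-based" if any
// intermediate point deviates from the straight endpoint-to-endpoint segment
// by more than a tolerance (in the same units as the coordinates).
function isPathBased(points, tolerance = 5) {
  if (points.length < 3) return false; // no intermediate points to consider
  const [x1, y1] = points[0];
  const [x2, y2] = points[points.length - 1];
  const len = Math.hypot(x2 - x1, y2 - y1) || 1; // avoid divide-by-zero
  return points.slice(1, -1).some(([x, y]) => {
    // Perpendicular distance from the point to the line through the endpoints.
    const d = Math.abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / len;
    return d > tolerance;
  });
}
```

Under this sketch a straight swipe depends only on its endpoints, while an L-shaped drag depends on its path.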

Platform

software, or collection of layers of software, that lie below the subject software and provide services to the subject software and that allows the subject software to be isolated from the hardware, drivers, and other software below

Platform software both makes it easier for subject software to run on different hardware, and provides the subject software with many services (e.g. functions, utilities, libraries) that make the subject software easier to write, keep updated, and work more uniformly with other subject software.

A particular software component might play the role of a platform in some situations and a client in others. For example a browser is a platform for the content of the page but it also relies on the operating system below it.

The platform is the context in which the product exists.

Point of regard

position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard can vary

The point of regard is almost always within the viewport, but it can exceed the spatial or temporal dimensions of the viewport. See rendered content for more information about viewport dimensions.

The point of regard can also refer to a particular moment in time for content that changes over time. For example, an audio-only presentation.

User agents can determine the point of regard in a number of ways, including based on viewport position in content, keyboard focus, and selection.

Pointer

To be defined.

Private and sensitive information

private and sensitive information

Process

series of views or pages associated with user actions, where actions required to complete an activity are performed, often in a certain order, regardless of the technologies used or whether it spans different sites or domains

Product

testing scope that is a combination of all items, views, and task flows that make up the web site, set of web pages, web app, etc

The context for the product would be the platform.

Programmatically determinable

meaning of the content and all its important attributes can be determined by software functionality that is accessibility supported
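As a rough illustration only (a drastic simplification, not the actual accessible-name computation defined for ARIA), software can determine an element's name only when that name is exposed somewhere in the content, such as an aria-label, an associated label, or text content. The object shape and function below are hypothetical.

```javascript
// Hypothetical sketch: a name is "programmatically determinable" only if it is
// exposed through the content. el is a plain object standing in for a parsed
// element; the property precedence here is illustrative, not the ARIA algorithm.
function accessibleName(el) {
  if (el.ariaLabel) return el.ariaLabel; // author-supplied label attribute
  if (el.label) return el.label;         // e.g. an associated label element
  if (el.text) return el.text;           // visible text content
  return null;                           // nothing software can determine
}
```

A button whose meaning exists only in an unlabeled icon image would return null here, which is exactly the failure this definition guards against.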

Purely decorative

content that, if removed, does not affect the meaning or functionality of the page

Relied upon

content would not conform if that technology is turned off or is not supported

Requirement

result of practices that reduce or eliminate barriers that people with disabilities experience

Section

self-contained portion of content that deals with one or more related topics or thoughts

A section may consist of one or more paragraphs and include graphics, tables, lists and sub-sections.

Semi-automated evaluation

evaluation conducted using machines to guide humans to areas that need inspection

Semi-automated evaluation involves components of automated evaluation and human evaluation.

Simple pointer input

input event that involves only a single ‘click’ event or a ‘button down’ and ‘button up’ pair of events with no movement between

Examples of inputs that are not simple pointer actions include double clicks, dragging motions, gestures, any use of multipoint input, and the simultaneous use of a mouse and keyboard.

Single pointer

input modality that only targets a single point on the page/screen at a time – such as a mouse, single finger on a touch screen, or stylus

Single pointer interactions include clicks, double clicks, taps, dragging motions, and single-finger swipe gestures. In contrast, multipoint interactions involve the use of two or more pointers at the same time, such as two-finger interactions on a touchscreen, or the simultaneous use of a mouse and stylus.

Single pointer input

input modality that only targets a single point on the view at a time – such as a mouse, single finger on a touch screen, or stylus

Single pointer interactions include clicks, double clicks, taps, dragging motions, and single-finger swipe gestures. In contrast, multipoint interactions involve the use of two or more pointers at the same time, such as two-finger interactions on a touchscreen, or the simultaneous use of a mouse and stylus.

Single pointer input is in contrast to multipoint input such as two, three or more fingers or pointers touching the surface, or gesturing in the air, at the same time.

Activation is usually by click or tap but can also be by programmatic simulation of a click or tap or other similar simple activation.

Standard platform keyboard commands

keyboard commands that are the same across most or all platforms and are relied upon by users who need to navigate by keyboard alone

A sufficient listing of common keyboard navigation techniques for use by authors can be found in the WCAG standard keyboard navigation techniques list.

Subtitles

captions that are displayed with a work of media that translate or transcribe the dialogue or narrative

Subtitles are synchronized with the soundtrack in real-time and can include spoken dialogue, sound effects, and other auditory information.

Task flow

testing scope that includes a series of views that support a specified user activity

A task flow may include a subset of items in a view or a group of views. Only the part of the views that support the user activity are included in a test of the task flow.

Technology

mechanism for encoding instructions to be rendered, played or executed by user agents

As used in these guidelines “web technology” and the word “technology” (when used alone) both refer to web content technologies.

Web content technologies may include markup languages, data formats, or programming languages that authors may use alone or in combination to create end-user experiences.

Temporary change of context

To be defined.

Test

mechanism to evaluate implementation of a method

Text

sequence of characters that can be programmatically determined, where the sequence is expressing something in human language

Two-dimensional content

To be defined.

Unambiguous numerical formatting

To be defined.

Under the control of the provider

where the provider is able to influence the content and its functionality

This could be by directly creating the content themselves, or by influencing the author of the content by means of financial or other reward, or the removal of reward.

Up event

platform event that occurs when the trigger stimulus of a pointer is released

The up-event may have different names on different platforms, such as “touchend” or “mouseup”.

User agent

software that retrieves and presents external content for users

User interface context

user interface with a specific layout and associated components

If more than X% of the associated components are changed, it is a new user interface context.

User manipulable text

text which the user can adjust

This could include, but is not limited to, changing:

  • Line, word or letter spacing
  • Color
  • Line length — being able to control width of block of text
  • Typographic alignment — justified, flushed right/left, centered
  • Wrapping
  • Columns — number of columns in one-dimensional content
  • Margins
  • Underlining, italics, bold
  • Font face, size, width
  • Capitalization — all caps, small caps, alternating case
  • End of line hyphenation
  • Links
User need

end goal a user has when starting a process through digital means

User testing

evaluation of content by observation of how users with specific functional needs are able to complete a process and how the content meets the relevant requirements

View

content that is actively available in a viewport including that which can be scrolled or panned to, and any additional content that is included by expansion while leaving the rest of the content in the viewport actively available

A modal dialog box would constitute a new view because the other content in the viewport is no longer actively available.

Viewport

object in which the platform presents content

The author has no control over the viewport and almost always cannot know what is presented in it (e.g. what is on screen), because the viewport is provided by the platform. In browsers, the hardware platform is isolated from the content.

Content can be presented through one or more viewports. Viewports include windows, frames, loudspeakers, and virtual magnifying glasses. A viewport may contain another viewport. For example, nested frames. Interface components created by the user agent such as prompts, menus, and alerts are not viewports.

Privacy Considerations

The content of this document has not matured enough to identify privacy considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact privacy.

Security Considerations

The content of this document has not matured enough to identify security considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact security.

Change log

This section shows substantive changes made in WCAG 3.0 since the First Public Working Draft was published on 21 January 2021.

The full commit history to WCAG 3.0 and commit history to Silver is available.

Acknowledgements

Additional information about participation in the Accessibility Guidelines Working Group (AG WG) can be found on the Working Group home page.

Contributors to the development of this document

Previous contributors to the development of this document

Abi James, Abi Roper, Alastair Campbell, Alice Boxhall, Alistair Garrison, Amani Ali, Andrew Kirkpatrick, Andrew Somers, Andy Heath, Angela Hooker, Aparna Pasi, Avneesh Singh, Azlan Cuttilan, Ben Tillyer, Betsy Furler, Brooks Newton, Bruce Bailey, Bryan Trogdon, Caryn Pagel, Charles Hall, Charles Nevile, Chris Loiselle, Chris McMeeking, Christian Perera, Christy Owens, Chuck Adams, Cybele Sack, Daniel Bjorge, Daniel Henderson-Ede, Darryl Lehmann, David Fazio, David MacDonald, David Sloan, David Swallow, Dean Hamack, Detlev Fischer, DJ Chase, E.A. Draffan, Eleanor Loiacono, Francis Storr, Frederick Boland, Garenne Bigby, Gez Lemon, Giacomo Petri, Glenda Sims, Greg Lowney, Gregg Vanderheiden, Gundula Niemann, Imelda Llanos, Jaeil Song, JaEun Jemma Ku, Jake Abma, Jan McSorley, Janina Sajka, Jaunita George, Jeanne Spellman, Jeff Kline, Jennifer Chadwick, Jennifer Delisi, Jennifer Strickland, Jennison Asuncion, Jill Power, Jim Allan, Joe Cronin, John Foliot, John Kirkwood, John McNabb, John Northup, John Rochford, Jon Avila, Joshue O’Connor, Judy Brewer, Julie Rawe, Justine Pascalides, Karen Schriver, Katharina Herzog, Kathleen Wahlbin, Katie Haritos-Shea, Katy Brickley, Kelsey Collister, Kim Dirks, Kimberly Patch, Laura Carlson, Laura Miller, Léonie Watson, Lisa Seeman-Kestenbaum, Lori Samuels, Lucy Greco, Luis Garcia, Lyn Muldrow, Makoto Ueki, Marc Johlic, Marie Bergeron, Mark Tanner, Mary Jo Mueller, Matt Garrish, Matthew King, Melanie Philipp, Melina Maria Möhnle, Michael Cooper, Michael Crabb, Michael Elledge, Michael Weiss, Michellanne Li, Michelle Lana, Mike Crabb, Mike Gower, Nicaise Dogbo, Nicholas Trefonides, Omar Bonilla, Patrick Lauke, Paul Adam, Peter Korn, Peter McNally, Pietro Cirrincione, Poornima Badhan Subramanian, Rachael Bradley Montgomery, Rain Breaw Michaels, Ralph de Rooij, Rebecca Monteleone, Rick Boardman, Ruoxi Ran, Ruth Spina, Ryan Hemphill, Sarah Horton, Sarah Pulis, Scott Hollier, Scott O’Hara, Shadi Abou-Zahra, Shannon Urban, Shari Butler, 
Shawn Henry, Shawn Lauriat, Shawn Thompson, Sheri Byrne-Haber, Shrirang Sahasrabudhe, Shwetank Dixit, Stacey Lumley, Stein Erik Skotkjerra, Stephen Repsher, Steve Lee, Sukriti Chadha, Susi Pallero, Suzanne Taylor, sweta wakodkar, Takayuki Watanabe, Thomas Logan, Thomas Westin, Tiffany Burtin, Tim Boland, Todd Libby, Todd Marquis Boutin, Victoria Clark, Wayne Dick, Wendy Chisholm, Wendy Reid, Wilco Fiers.

Research Partners

These researchers selected a Silver research question, did the research, and graciously allowed us to use the results.

Enabling funders

This publication has been funded in part with U.S. Federal funds from the Health and Human Services, National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), initially under contract number ED-OSE-10-C-0067, then under contract number HHSP23301500054C, and now under HHS75P00120P00168. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Health and Human Services or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.