Interface texts largely determine a user’s experience, so they should be clearly written and accessible to everyone. When we talk about accessibility, we usually mean two groups of users with disabilities: blind or visually impaired people, and people with cognitive disorders, because their conditions significantly affect the way they interact with an interface and its text. So, what can we do to make our message as understandable and accessible as possible?
Think and write from top to bottom and from left to right. Screen readers used by blind people follow exactly this sequence: they read the content out linearly, line by line or element by element. That is why it is hard for blind users to grasp the whole of a message or an image shown on the screen, or even enclosed in a separate block, at a glance.
It is especially important to keep this in mind when adding explanatory notes or instructions to the interface.
Let’s consider a password entry field. The requirements are often shown below it: the minimum number of characters, or whether numbers and special characters are required. A blind person does not learn about these instructions right away. First the app announces that there is a password field, then the person enters a password, and only then might they encounter an error. That is why all critical instructions and tips have to be placed above the input fields.
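A minimal markup sketch of this idea (the field and hint names are made up for illustration): the hint precedes the input in the source order and is linked to it with aria-describedby, so a screen reader announces the requirements before the user starts typing.
<label for="new-password">Password</label>
<p id="password-hint">At least 8 characters, including one number and one special character.</p>
<input type="password" id="new-password" aria-describedby="password-hint">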
The same applies to confirmation buttons. For example, if the checkbox confirming the data policy is placed below the button, the user may only discover it after running into an error.
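A similar sketch for the confirmation case (element names are illustrative): the consent checkbox comes before the button in the source order, so it is announced first.
<input type="checkbox" id="policy-consent">
<label for="policy-consent">I agree to the data policy</label>
<button type="submit">Create account</button>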
Writing plainly should become your UX copywriting rule. It makes your product more inclusive and accessible, not only for people with visual impairments but also for non-native speakers and for people with cognitive disorders, who may struggle to understand figurative texts.
Empty states are a good example. Writers usually place creative and rather figurative texts there. It is better practice to tell users directly what is going on, so that people using screen readers instantly understand where they are.
Try closing your eyes and saying your text out loud. This helps identify ambiguous wording that loses its meaning without visual context.
Speak the same language as your users. For instance, if you write copy for a bank, use “savings account”, not “nest egg” or “stash”. This helps users connect their goals with the actions they need to take in the app more easily.
It is also important to stay consistent and always label elements with the same functionality in the same way.
Spell out abbreviations and define complex terms. Wrap abbreviations in the <abbr> element and put the expansion in its title attribute.
Example
JRF Alex Smith defended his thesis in 2017.
<abbr title="Junior research fellow">JRF</abbr> Alex Smith defended his thesis in 2017.
Use the <dfn> tag to mark up complex terms, the rel attribute for references, and <dl> for lists of definitions.
Example
The tiflo commentary is a short audio description of an item, space, or action that is not otherwise clear.
<dl><dt><dfn>Tiflo commentary</dfn></dt><dd>A short audio description of an item, space, or action that is not otherwise clear.</dd></dl>
ALT texts are short descriptions of images. They are not displayed on the page, but screen readers recognize and read them.
There is no need to add ALT text to each and every image or photo. If an illustration is purely decorative or unrelated to the actual content of the page, the ALT text can (and should) be left blank to reduce the amount of information to digest: large chunks of information are hard to take in by ear.
Note that the <img> element should still include a blank (“null”) ALT attribute, alt="", otherwise the screen reader assumes there is a meaningful image and reads out its URL. Hide the unnecessary.
Which images should have ALT text
Do not include words like “image”, “picture”, or “icon” in ALT texts. Screen readers announce that an element is an image by default.
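A small sketch covering both cases (file names and descriptions are made up): a decorative image gets an empty alt, a meaningful one gets a short description without the word “image”.
<img src="divider.svg" alt="">
<img src="team.jpg" alt="The support team at the annual meet-up">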
If you use a placeholder as the field’s label, it disappears as soon as the user starts interacting with the field. That is not a great design solution in general, and it is even worse for people with cognitive disorders or visual impairments.
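A hedged sketch of the alternative (names are illustrative): a visible <label> stays on the screen while the user types, and the placeholder, if kept at all, only shows an example value.
<label for="email">Email address</label>
<input type="email" id="email" placeholder="name@example.com">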
It is important to notify users with a status update or the result when they successfully perform an action.
Take into account:
If a notification is designed as a background pop-up, give users time to read it. That will help slow readers and people with cognitive disorders alike.
If parts of your page change dynamically, make sure users learn about the changes in time. You can add the aria-live attribute to any parent element of the interactive part. This attribute can take several values.
— aria-live="polite": screen readers will not announce these changes instantly, will not interrupt the current task, and will wait until the user stops interacting with the interface. This value suits notifications about new messages or autosave.
— aria-live="assertive": screen readers announce the changes as soon as they are made. This value suits error messages or alerts that appear after a user’s actions. Instant notifications are disturbing, so use this value only for essential events.
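A minimal sketch of both values (the element ids and the chat scenario are assumptions, not from the original): new chat messages are announced politely, while form errors interrupt immediately.
<div id="messages" aria-live="polite">
<!-- new messages are appended here by the app -->
</div>
<div id="form-errors" aria-live="assertive"></div>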
The user should understand what will happen if they follow a link. Otherwise, the only way for blind or visually impaired users to find out what is behind the link is to follow it. Popular link texts such as “Read next”, “Details”, and “Read here” give little information to users of screen readers. Make them more descriptive, for example, “More details on delivery terms”.
Ideally, the link’s destination should be clear without any context. That allows users with screen readers and keyboard navigation to reach their goals more easily. You can make a list of all your links and check whether they are self-explanatory.
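A short sketch of the difference (the URL is made up): the first link still makes sense when read out of context, the second does not.
<a href="/delivery-terms">More details on delivery terms</a>
<a href="/delivery-terms">Read here</a>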
Content structure is essential for all users. It is hard to take in a huge wall of text without any hierarchy. Users are used to scanning texts, which means they need anchor points to distinguish important content from secondary content.
Breaking the text into paragraphs, making subheadings bold, and using larger fonts is not enough for screen readers. We should lay out the information structure using heading levels. That allows screen reader users to navigate between sections easily and to refer to the list of headings as a table of contents.
Heading levels define the page structure. For instance, h1 is the main page heading, and h2 defines sections, which can have subsections marked with h3.
<h1>Heading</h1>
<h2>Subheading</h2>
<h3>Subsection</h3>
<h3>Subsection 2</h3>
Do not skip levels: start with h1, then use h2, and so on. To reduce the font size, use the font-size CSS property, not a lower heading level.
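A small sketch of that last point (the class name is made up): the heading keeps its logical level, and only its visual size is reduced with CSS.
<h2 class="compact-heading">Delivery terms</h2>
<style>
  .compact-heading { font-size: 1.1rem; }
</style>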