Accessibility is often treated as an afterthought in tech; for me, it should be a starting point. Recently, with the help of AI, I built an experimental tool that turns written words into spoken audio and organizes essential communication phrases into a simple, color-coded interface. The project is designed to help people who are nonverbal or have partial vision communicate more easily.
In this article, I’ll walk you through how I built it and the code that powers it.
Step 1: The Idea
The core idea was simple:
A person presses a button or key (with the word or phrase written on it).
The word or phrase is read out loud by the browser.
Colors and simple categories (greetings, needs, emergencies) help identify important phrases.
This meant I needed two things:
1. A text-to-speech engine (the most important part).
2. An accessible web interface.
Step 2: Using the Web Speech API
Browsers already support speech synthesis: the Web Speech API provides a built-in way to convert text into spoken audio. That solved the most important part.
Here’s the core function I used:
<script>
  // Turn any string of text into spoken audio using the browser's speech engine
  function speak(text) {
    const utterance = new SpeechSynthesisUtterance(text);
    speechSynthesis.speak(utterance);
  }
</script>
This small function is the foundation; it takes any string of text and speaks it out loud.
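The utterance object also exposes a few properties you can tune, such as rate, pitch, and language. Here is a minimal sketch of a variant; the specific values are examples, not settings from my tool:
<script>
  // Variant of speak() with a few tuning options (example values)
  function speakSlowly(text) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = 0.8;     // a bit slower than the default of 1
    utterance.pitch = 1;      // default pitch
    utterance.lang = 'en-US'; // hint for which voice/language to use
    speechSynthesis.speak(utterance);
  }
</script>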
Step 3: Creating the Interface
I divided phrases into categories. Each phrase is represented by a color-coded button.
Here’s an example snippet:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Accessible Communication Tool</title>
  <style>
    body { font-family: Arial, sans-serif; text-align: center; }
    .button {
      padding: 15px;
      margin: 10px;
      border: none;
      border-radius: 8px;
      color: white;
      font-size: 18px;
      cursor: pointer;
    }
    .red { background-color: red; }
    .blue { background-color: blue; }
    .green { background-color: green; }
    .yellow { background-color: goldenrod; }
    .orange { background-color: orange; }
  </style>
</head>
<body>
  <h2>Greetings</h2>
  <button class="button red" onclick="speak('Hello')">Hello</button>
  <button class="button blue" onclick="speak('How are you?')">How are you?</button>
  <button class="button green" onclick="speak('Goodbye')">Goodbye</button>
  <button class="button yellow" onclick="speak('What is your name?')">What’s your name?</button>
  <button class="button orange" onclick="speak('Thank you')">Thank you</button>

  <script>
    // Same speak() helper as in Step 2
    function speak(text) {
      const utterance = new SpeechSynthesisUtterance(text);
      speechSynthesis.speak(utterance);
    }
  </script>
</body>
</html>
You can modify the code however works best for your needs or project.
Opened in a browser, it looks like a simple page of color-coded phrase buttons.
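Step 1 also mentioned pressing a key, so a natural extension is a keyboard mapping on top of the same speak() function. Here is a hedged sketch; the key-to-phrase pairs are hypothetical choices, not part of the snippet above:
<script>
  // Hypothetical key-to-phrase mapping; pick whatever keys suit the user
  const keyPhrases = {
    '1': 'Hello',
    '2': 'How are you?',
    '3': 'Goodbye'
  };

  document.addEventListener('keydown', (event) => {
    const phrase = keyPhrases[event.key];
    if (phrase) {
      speak(phrase); // reuses the speak() function defined above
    }
  });
</script>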
Step 4: Expanding Phrases
I expanded the categories to cover:
Basic needs: I am hungry; I need to rest.
Places: Hospital, Home.
Emergency: I need emergency help. Stay with me.
The idea is to make communication possible even if someone cannot type or sign.
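To make adding new categories painless, one approach is to generate the headings and buttons from a plain object of phrases. This is only a sketch of how it could be structured; the data and class choices are illustrative, not the exact code from my tool:
<script>
  // Hypothetical phrase data; each category becomes a heading plus a row of buttons
  const categories = {
    'Basic needs': ['I am hungry', 'I need to rest'],
    'Places': ['Hospital', 'Home'],
    'Emergency': ['I need emergency help', 'Stay with me']
  };

  for (const [category, phrases] of Object.entries(categories)) {
    const heading = document.createElement('h2');
    heading.textContent = category;
    document.body.appendChild(heading);

    phrases.forEach((phrase) => {
      const button = document.createElement('button');
      button.className = 'button blue';
      button.textContent = phrase;
      button.onclick = () => speak(phrase);
      document.body.appendChild(button);
    });
  }
</script>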
Step 5: Build Decisions
Why AI?
I used AI to speed up prototyping. Instead of spending hours searching for syntax, I asked AI to help generate the base code, which I then refined; you can modify it further however you like.
Why colors?
Colors act as quick cues for people with partial sight and for caregivers assisting them, and because each button's color is a single CSS class, the colors are easy to change.
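For example, adjusting a color for better contrast is a one-line change; the hex value below is just an illustrative choice:
<style>
  /* Example: a darker red for stronger contrast with the white button text */
  .red { background-color: #b30000; }
</style>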
Step 6: Demo
If you save the above code as index.html and open it in your browser, you’ll have a working demo immediately. Just click the buttons, and you’ll hear the phrases spoken out loud.
Final Thoughts
Accessibility is not just a feature; it is a starting point. With a few lines of code and the help of AI, I was able to create a working tool that could help people with disabilities communicate more effectively.
