There are two types of people in this world: those who watch TV shows with captions on, and those who are weird.
All joking aside, the importance of closed captions for video cannot be overstated. In addition to being crucial for viewers who are deaf or hard of hearing, captions are also important when audio is unavailable or not clearly audible. Maybe you're watching a video in a public place and the audio is drowned out by ambient noise. Or maybe the person speaking in the video is using a low-quality microphone, or speaks with an accent or dialect that is unfamiliar to the viewer. Captions are always a good thing. Unfortunately, captioning audio in a live stream is tricky.
Before we dig into the problem of captioning live streams, let's talk about semantics a bit. Did you know that there is a difference between the terms closed caption and subtitle?
The HTML spec describes subtitles as:

Transcription or translation of the dialogue, suitable for when the sound is available but not understood (e.g. because the user does not understand the language of the media resource's audio track). Overlaid on the video.
The spec describes captions as:
Transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information, suitable for when sound is unavailable or not clearly audible (e.g. because it is muted, drowned-out by ambient noise, or because the user is deaf). Overlaid on the video; labeled as appropriate for the hard-of-hearing.
This means that when we talk about "closed captions" for live videos, we're usually referring to subtitles, since captions also include descriptive audio information. Think about a scene in a TV show where an actor gets in the car to leave home and says goodbye to their spouse. The caption for this scene might read "Goodbye, dear. [car engine starts]."
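Outside of live streaming, the same distinction shows up in the HTML `<track>` element's `kind` attribute. As a quick illustration (the WebVTT file names here are hypothetical):

```html
<video controls src="episode.mp4">
  <!-- Pure dialogue transcription/translation -->
  <track kind="subtitles" src="episode.en.vtt" srclang="en" label="English">
  <!-- Dialogue plus sound effects and other audio cues, for the hard-of-hearing -->
  <track kind="captions" src="episode.en.cc.vtt" srclang="en" label="English (CC)">
</video>
```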
We're not close to having AI systems that can describe contextual sound cues like that car engine for us, so we're limited to adding pure "speech-to-text" subtitles to our live stream; we can do that using the method below.
Note: You'll notice that the title and body of this blog post use the terms 'captions' or 'closed captions' even though, based on the definitions above, what we're really talking about here are subtitles. Unfortunately, since the term 'closed captions' is so commonly misused, it makes the most sense to use it here so that developers searching for it can find this blog post and learn how to add this feature to their live streams. Just know that what we're really talking about are subtitles!
The solution that we look at in this post focuses on broadcasting to an Amazon Interactive Video Service (Amazon IVS) live stream from OBS (Open Broadcaster Software).
For this demo, I've chosen to use the OBS-captions-plugin by ratwithacompiler.
Next, select the 'gear' icon in the Captions dock to modify the settings.
Make sure that a Caption Source is selected, and modify the plugin configuration to suit your needs. For example, the default Caption Timeout for me was set to 15.0 seconds, but I found 5.0 seconds to be a better value.
Once you've saved your configuration and started a new live stream, the plugin handles converting your speech to text and publishing the required caption information to the live stream.
To play back the caption data with the Amazon IVS player, we can add an event listener for the TextCue event.
ivsPlayer.addEventListener(IVSPlayer.PlayerEventType.TEXT_CUE, (evt) => {
  console.log(evt);
});
The handler as configured above logs all incoming TextCue events to the console. The text property of the TextCue event contains the caption data.
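For example, instead of logging the whole event, the handler can forward that property to your rendering code (renderCaption is a hypothetical helper, sketched in the next section):

```javascript
ivsPlayer.addEventListener(IVSPlayer.PlayerEventType.TEXT_CUE, (evt) => {
  // evt.text holds the caption string generated by the OBS plugin
  renderCaption(evt.text);
});
```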
With some HTML and CSS, we can render the caption data as an overlay on the <video> element. The implementation depends entirely on your needs, but you should consider auto-hiding the overlay after a period with no new caption data.
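Here's one possible sketch of such an overlay. The element ID, the styling, and the 5-second hide delay are all arbitrary choices for illustration, not part of the IVS player API:

```javascript
const CAPTION_HIDE_DELAY_MS = 5000; // arbitrary; consider matching the plugin's Caption Timeout
let hideTimer;

// Assumes a <div id="caption-overlay"> positioned over the <video> element via CSS,
// e.g. position: absolute; bottom: 10%; left: 0; right: 0; text-align: center;
const overlay = document.getElementById('caption-overlay');

function renderCaption(text) {
  overlay.textContent = text;
  overlay.style.visibility = 'visible';

  // Auto-hide the overlay after a period with no new caption data
  clearTimeout(hideTimer);
  hideTimer = setTimeout(() => {
    overlay.style.visibility = 'hidden';
  }, CAPTION_HIDE_DELAY_MS);
}
```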
In this post, we looked at how to use an OBS plugin to convert speech to text and publish that text as caption data on an Amazon IVS live stream.