Subtitles vs. Captions – The Scoop on Captions

Summary
The primary difference between subtitles and captions is their intended audience.


According to YouTube, over 2 billion logged-in users visit the site globally and watch over a billion hours of video. It is safe to say we consume a lot of video content daily. To meet the Web Content Accessibility Guidelines (WCAG), captions need to be added to those billions of hours of video so that people who are deaf or hard of hearing can benefit from them. What about subtitles? Is there a difference between captions (both open and closed) and subtitles? The quick answer is yes: they share similar characteristics but are not the same. Captions provide a word-for-word transcription of the dialogue, while subtitles are a translation of the spoken language. The primary difference between subtitles and captions is their intended audience.

Subtitles

Subtitles are “the translation of the text display of a video’s dialogue into another language.” Like captions, subtitles are superimposed onto the video and allow a person to understand content that is not in their native language. Subtitles can also be used as a written rendering of the dialogue in the same language when the spoken audio is difficult to understand, for example, a person speaking with a strong accent or speaking very quietly. There are also Subtitles for the Deaf and Hard of Hearing (SDH), which include information such as sound effects, speaker identification, and any other essential non-speech features.

Captions

Captions are “the display of the text version of speech within a video.” Captions are in the same language as the original video. They can be a direct transcription of the dialogue, a description of on-screen action or background noise, or speaker identification. Adding captions to videos is important in order to provide access to individuals who are deaf or hard of hearing. There are two types of captions: open captions and closed captions. The obvious difference is that open captions stay on the whole time and cannot be turned off, while closed captions can be turned off. Closed captions live on a separate track, which allows a video player to toggle them on and off; in practice, they are published by uploading a separate caption file alongside the video. Open captions are burned into the video itself as an overlay and remain on screen the entire time the audio and video are playing.
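For web video, the separate-track approach usually means pairing the video with a WebVTT caption file and a track element. The snippet below is a minimal TypeScript sketch of wiring that up with the standard DOM API; the element id "lecture-video" and the file name "captions.vtt" are hypothetical placeholders, not names from any particular player.

    // Minimal sketch: attach a closed-caption track to an HTML5 video element.
    // "#lecture-video" and "captions.vtt" are placeholder names for this example.
    const video = document.querySelector<HTMLVideoElement>("#lecture-video");

    if (video) {
      const track = document.createElement("track");
      track.kind = "captions";     // captions, as opposed to subtitles or chapters
      track.label = "English";     // label shown in the player's caption menu
      track.srclang = "en";        // language of the caption text
      track.src = "captions.vtt";  // separate WebVTT file, not burned into the video
      track.default = true;        // on by default; viewers can still turn it off
      video.appendChild(track);
    }

Because the captions arrive as their own file, the player can turn them off, restyle them, or offer several languages, none of which is possible once the text has been burned into the frames as open captions.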

In the 1980s, the Federal Communications Commission (FCC) began requiring closed captioning for broadcast TV, and with the growth of the internet, captions are becoming required for web video as well. On October 8, 2010, the Twenty-First Century Communications and Video Accessibility Act (CVAA) was signed into law. The CVAA updates the federal communications laws enacted in the 1980s and 1990s, bringing them in line with twenty-first-century technologies, including new digital, broadband, and mobile innovations.

All videos should contain captioning. One in six people has some form of hearing loss, and captions can also make videos easier to understand for people with some cognitive disabilities. Studies have reported that as many as 85 percent of Facebook videos are watched with the sound off. Captions can be added to your videos manually, or many online services will add them for a small fee. Relying on free automated services such as Google's auto-captioning is not a best practice, because the interpretation is often wrong without a manual check. According to the United States Census Bureau, about 7.6 million people who are deaf or hard of hearing rely on captions in multimedia. Accessible videos can benefit everyone.