
Video - The Rest of the (Accessibility) Story

Auxiliary material for a presentation at the 2019 Accessible Learning Conference by Jim White, Mike Hudson, and Alyssa Bradley. The presentation can be found on Google Slides, as a recording of the presentation (MSU Mediaspace 00:40:00), or in the native Video, the Rest of the Story PowerPoint.

Title Examples

To truly understand what is happening, it is best to "view" these examples using a screen reader, but beware: what you hear will depend on the screen reader and perhaps the user's settings for it. A title is required by 1.1.1 and serves as the equivalent of the alt attribute on img elements if the screen reader reads it. This first one is presented as the default YouTube embed code. Other video-serving sites will have something similar.


The NVDA screen reader initially lets a user know only that there is a YouTube video here. This is the raw embed code: <iframe width="560" height="315" src="https://www.youtube.com/embed/dZ8yxzz7g_k" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

The next one adds two query parameters to the iframe: the first guarantees that the video will not autoplay (unless Google changes the rules) and the second tells Google to show, as related follow-on videos, only ones that belong to the same owner. Additionally, a title is added. These additions are given strong emphasis in the example code that follows the video.


The revised embed code: <iframe width="560" height="315" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" src="https://www.youtube.com/embed/dZ8yxzz7g_k?autoplay=0&amp;rel=0" title="Clean Plates at State Video 00:01:18"></iframe> Fair warning: simply copying the preceding code likely will not work; you have to replicate what it shows (except for the strong emphasis) for your own YouTube video by modifying its default embed code.
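As a rough sketch, the general pattern looks like the following, where VIDEO_ID, the title text, and the running time are placeholders you must fill in from your own video's default embed code (YouTube may change the surrounding attributes over time):

  <iframe width="560" height="315"
          src="https://www.youtube.com/embed/VIDEO_ID?autoplay=0&amp;rel=0"
          title="Descriptive Video Title (YouTube 00:MM:SS)"
          frameborder="0"
          allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
          allowfullscreen></iframe>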

The following two links do exactly the same thing, except the second one is much friendlier because it tells the user they are going to YouTube for a video, that the video will start automatically when they get there, and that the playing time is one minute and 18 seconds. They can easily hit the Tab key and move on if they are not interested in such a video, instead of wondering what "clean plates at state" means (a job announcement? a picture of a stack of freshly washed plates? an ad for a superior dishwashing machine?).

Clean Plates at State

Clean Plates at State Video (YouTube autoplay 00:01:18)
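A minimal sketch of the markup behind two such links, assuming both point at the same YouTube watch page as the video embedded above; only the link text differs:

  <!-- Bare link text: the user learns nothing about the destination, autoplay, or length -->
  <a href="https://www.youtube.com/watch?v=dZ8yxzz7g_k">Clean Plates at State</a>

  <!-- Descriptive link text: destination, autoplay behavior, and playing time are all announced -->
  <a href="https://www.youtube.com/watch?v=dZ8yxzz7g_k">Clean Plates at State Video (YouTube autoplay 00:01:18)</a>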

The following link shows the video in whatever your particular browser's current default player is. And if you peek at the HTML behind the screen, you will also see that, should your browser not support a video player (say it is a text-only browser), there is a fallback link to the YouTube version of the video.

Also, if you peek behind the screen you will find that a proper title (with a playing time) has been included in the video tag, so a screen reader user will know more than just that they landed on some video.
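A minimal sketch of that kind of markup, assuming a locally hosted MP4 (the file name here is hypothetical); the title attribute carries the playing time, and the content between the tags is the fallback for browsers that cannot play the video:

  <video controls width="560" title="Clean Plates at State Video 00:01:18">
    <source src="clean-plates-at-state.mp4" type="video/mp4">
    <!-- Fallback shown only when the browser has no support for the video element -->
    Your browser does not support embedded video:
    <a href="https://www.youtube.com/watch?v=dZ8yxzz7g_k">Clean Plates at State Video (YouTube autoplay 00:01:18)</a>
  </video>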

The following link is an example of the type of link you should, if you include a screenplay, place just before or just after your embedded video or your link to a video. Placing it before lets the screen reader user know that they have not been left out; they will likely go on and try the video first anyway, confident that they can abandon it and have the transcript read to them instead.

Clean Plates at State (transcript/web screenplay)
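One possible arrangement, sketched with the embed code from above and a hypothetical transcript URL, placing the transcript link just before the video:

  <p><a href="clean-plates-transcript.html">Clean Plates at State (transcript/web screenplay)</a></p>
  <iframe width="560" height="315"
          src="https://www.youtube.com/embed/dZ8yxzz7g_k?autoplay=0&amp;rel=0"
          title="Clean Plates at State Video 00:01:18"
          frameborder="0"
          allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
          allowfullscreen></iframe>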

If you do have a separate video with audio description, that link should also be near the link to the transcript and/or the main video or its link. Make it clear that the link goes to the audio-described version, and don't hesitate to again help the user out by warning of an off-site location, autoplay, and length. In our case the addition of audio description added some 30-odd seconds to each video.

Issue No. 7 Accessibility (w/audio description, YouTube autoplay 00:02:24)

We tried hard to find a readily available system for doing second-track audio that we could recommend for universal use, but regret to note that we have not found anything yet, though there are glimmers out there. The ideal would be one where only one video was required and embedded instructions would actually pause video playback to inject audio description from its own user-selectable track. We did not find such a system, so we settled for two videos, since there was no way to include audio description in the original-length video.

And, no, at the moment I don't believe there is any way to land on a YouTube video page and not autoplay the video.

The Video Accessibility Process (Post Creation)

Create a Shot List

Watch the video carefully with closed captions turned on. What is occurring on the screen? Describe each scene. Note what adds information (particularly where there are gaps in the speaking or where text appears on screen but is not voiced). Create a scene-by-scene list of the visuals and of the speakers or whatever is causing sounds (e.g., car chase, tornado, gunfire). In this list you will be roughly identifying things that might go into your audio description, but you don't need to get stuck on nailing that down yet. Material in quotes below is actual text on the screen, but you can do your shot list any way you want; it is only for your use, to help ensure that you get appropriate descriptions into your eventual Media Alternative (text).

  • MSU drawing; camera starts centered on Beaumont Tower and backs off to broader campus view with Red Cedar River and buildings. Cheerful, brisk, light background music throughout video. [no speaking this scene]
  • "Clean Plates at State" Logo. [no speaking this scene; total of about 4 seconds of no speaking]
  • "Selin Sergin, Clean Plates Volunteer" speaking to off screen interviewer in office
  • Table of food trays with leftover food; person weighing it in background.
  • Students leaving trays and volunteers shifting leftovers to platters for weighing.
  • Cafeteria employee throwing food whose time has expired into a waste bin while a student is handed fresh food.
  • Fries being loaded onto a plate beside a burger, with many falling off and one falling on the floor when the overloaded plate is passed to a student.
  • More volunteers transferring waste food to platters for weighing.
  • "Brendan Wang, Clean Plates Volunteer" talking to us.
  • More students placing food trays with leftover food while volunteers prepare it for weighing (2 scenes).
  • Close up of scales and computer screen where food is weighed and recorded.
  • Brendan Wang talking and gesturing.
  • A row of food platters awaiting weighing, obviously arranged for the camera.
  • Back in the office with Selin. Selin begins speaking.
  • "Eat at State" logo appears in lower right corner (remains for duration).
  • More food trays with leftovers placed on a table.
  • "Clean Plates at State, What's on your plate? Food Waste Study" banner on wall.
  • Back to view of Selin speaking in office.

Creating the shot list from the finished video required about 40 minutes for a video 1 minute and 18 seconds long. It could probably be considerably less if done during initial planning and final editing.

Create a Transcript

The automatic captions (https://support.google.com/youtube/answer/6373554?hl=en) of YouTube content are downloaded and processed as follows for videos you do not own (ones where you have direct access to fix the transcript should be easier):

  1. From the "..." More Actions ellipsis dropdown below the video and its title, choose "Open transcript" (pick a language option if multiple ones appear). If the transcript box shows a vertical three-dot ("More Actions") button, click it and select "Toggle timestamps" if available.
  2. Click anywhere in the transcript, then Ctrl-A, then Ctrl-C.
  3. Paste (Ctrl-V) into a new Notepad++ (or other macro-capable editor) file as plain text (automatic in Notepad++).
  4. Something like the following will appear except that your result will probably have many more blank lines:

    If your result contains a bunch of HTML code such as <p></p> instead of only text, don't worry about it; we will deal with it in a minute.
  5. Manually delete everything down through the "00:00" (or first time entry) that begins the actual automatic captions transcript or, if you were able to toggle the timestamps off, just delete everything down to the first line of the transcript.
  6. Find where the automatic captions end and delete the video title line through the end of the file.
  7. You will now have just the spoken words, possibly interspersed with a timestamp marking where each switch to the next block of words occurs during video playback, and possibly with HTML paragraph tags.
  8. If your result has paragraph tags (<p> </p>) in it, use the Replace (or Find and Replace) feature of your editor, first to remove all the begin-paragraph tags and then again to remove all the end-paragraph tags. In Notepad++ you would place the cursor before the "less than" character that begins the first paragraph tag, then Shift-Right Arrow three times to select the tag, Ctrl-F to open the "Find" popup (the selection appears in the "Find what" field), click the "Replace" tab, delete anything in the "Replace with" field, and then click the "Replace All" button. Repeat for the end paragraph tag, except Shift-Right Arrow four times.
  9. If your result has timestamp lines and your editor allows you to create a macro to remove all the timestamps you'll want to do that next. In Notepad++ Ctrl-Home, "Macro" > "Start Recording", Down Arrow, Shift-Down Arrow, Delete, "Macro" > "Stop Recording", "Macro" > "Run a Macro Multiple Times...", (it should already indicate "Macro to run: Current recorded macro"), select "Run until the end of file" > "Run", then "Cancel." You may find it easiest or most comfortable to skip the next step and go directly to creating the web screenplay.
  10. If you want to go further and make it all a monolithic block of text, in Notepad++ you can Ctrl-Home again, Ctrl-F to open the "Find" popup, click the "Replace" tab, enter "\r\n" (without the quotes) in the "Find what" field, enter a single space in the "Replace with" field, make sure the "Extended (\n, \r, \t, \0, \x...)" option under "Search Mode" is selected, then click the "Replace All" button.

Creating the transcript from the already available caption file on YouTube took approximately 15 minutes.

Create a Screenplay

There is no rigidly required format for providing an "alternative for time-based media [that reads] something like a screenplay or book" (1.2.3), but the following example, created from the captions and shot list above, provides one. To create your screenplay (and we will stick with "screenplay" or "web screenplay" because it is convenient and relatively descriptive) you need to go through the following process with your text as created above.

Play the video looking for things going on on the screen that are not in the audio, while listening closely to the audio and matching the spoken words to the transcript you've created above. Keep a trigger finger (so to speak) on the Pause button for the video because you will need it a lot as you go through this process. As you go through the video and the text, fix the capitalization, spelling, punctuation, etc. of the spoken words (even if you have to fudge it a little, like dropping "ands" from run-on sentences).

Whenever anything meaningful happens in the video or audio, pause, look over the shot list you created above, check what the speaker is saying and any sound effects, and decide whether it is meaningful video/audio that needs to be described in text. Whether to break a sentence, slot the description between sentences, or place it before or after is your call. While technically "all" visual or sound things are mandatory per Guideline 1.2.3 Audio Description or Media Alternative, you need to use some judgment: not every outfit Nancy Drew wore was described in the mystery novels of which she was the heroine, and just because outfits (clothing) can be seen in a video doesn't mean they are important to the video. What people look like is generally irrelevant too, but there are exceptions, such as would be found in "Beauty and the Beast" or "The Hunchback of Notre Dame." If you compare the shot list above with the scene descriptions in the media alternative screenplay below, you will find only half the scenes made it into the screenplay.

As you'll see in the web screenplay below, we've used brackets around our inserted descriptions, clearly identified the speakers, and not surrounded the spoken text with quotes. If any of the spoken blocks were longer, they would also have been broken into convenient paragraphs.

Clean Plates at State — Web "Screenplay" (Web Media Alternative Text for Reading)

The camera backs away from a stylized drawing of the MSU campus with Beaumont Tower and the Red Cedar River prominently visible then the Clean Plates at State plate logo assembles on the screen. Cut to a student in an office who starts to speak:

Selin Sergin, Clean Plates Volunteer: A key thing that I learned first was I was so surprised about how much food people wasted. They would get a whole meal and then they would maybe try a bite and that would rack up to over a pound of food waste. [Trays of leftover food on a table with students taking the leftovers to a scale for weighing.] Then imagine hundreds of people coming through and how much food that they waste. Something I realized, maybe it's not just a problem of students not realizing it but it's definitely multi-faceted. [A cafeteria staff member piles fries on a plate beside a hamburger as fries fall off including one onto the floor as the plate is handed to the student.] The cafeteria can think about the food that they're serving and how much they're serving and the students thinking about picking something that they know they'll eat and not wasting what they choose.

Brendan Wang, Clean Plates Volunteer: [Brendan standing talking to us.] I think it was a really eye-opening experience to really kind of see what's going on behind the scenes. It really starts with a mindset and the mentality to want to help our environment and the people around us. Even lots of my friends don't care at all whatsoever. [Food being weighed at checkout.] I think by starting off with education and just informing each other about little actions having big effects. I think that could have a really profound change in how we look at the foods we eat and how much we consume.

Selin: Don't be ashamed that you waste food. Something that was really interesting to me is that when we were doing the food waste audits then they realized that we were gonna weigh their food they always seemed so ashamed. It's something that people care about but it's not necessarily something that people always think about. [Large Clean Plates at State poster on the wall with the logo above "Food Waste Study." Then "Eat at State" logo appears in corner till the end.] Don't be ashamed of it but not be afraid to start thinking about it and having conversations about it.

Back to link to transcript

Melding the raw transcript with descriptive text to create the above web screenplay for the 1 minute and 18 second video took about 1 hour and 25 minutes.

The above screenplay/transcript meets 1.2.3 (A) Audio Description or Media Alternative (Prerecorded) (because it includes "all of the information [...] both visual and auditory") and 1.2.8 (AAA) Media Alternative (Prerecorded).
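If the screenplay above is published as its own web page, a minimal markup sketch might look like the following; the element choices here are only a suggestion, not something required by the guidelines:

  <h2>Clean Plates at State — Web "Screenplay"</h2>
  <p>The camera backs away from a stylized drawing of the MSU campus with Beaumont Tower and the
     Red Cedar River prominently visible then the Clean Plates at State plate logo assembles on
     the screen. Cut to a student in an office who starts to speak:</p>
  <p><strong>Selin Sergin, Clean Plates Volunteer:</strong> A key thing that I learned first was
     I was so surprised about how much food people wasted. ... [Trays of leftover food on a table
     with students taking the leftovers to a scale for weighing.] ...</p>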

Cleanup of the Original .srt (or Other) Captioning File

Some examples of text that the automatic captioning got wrong are shown below. As these are found they also need to be corrected in the online video .srt file.

Transcript: eat and always doing with the juice I

Correct: eat and not wasting what they choose

Transcript: starts with the mites and the mentality

Correct: starts with a mindset and the mentality

Transcript: doing the food waste audit suddenly

Correct: doing the food waste audits then they

"affects" when "effects" was said.

You're on your own for figuring out whether to change "gonna" to "going to" and the like for common speech-versus-writing differences. In any event it is poor form (not recommended and not best practice) to leave automatic captioning in its raw form; it should always be reviewed and corrected. Outright mistakes, such as the common one of injecting a "not" in speech when exactly the opposite is meant, should usually be cleaned up too; e.g., "[not sic]" can indicate that the word was spoken but was not meant.

Going back and cleaning up the automated captioning based on having closely listened to and noted all transcript corrections as the screenplay was developed took about 20 minutes.

Total time in this example was 2 hours and 40 minutes to completely meet the WCAG criteria in a web screenplay for a 1 minute 18 second video. With a little practice, and by doing it simultaneously with the video editing, the total time would probably come down to under an hour. If you have access to edit the captions initially, fixing the captions before downloading them and converting them to a transcript would probably also save some time.

The creators of the video estimate that it took 8-10 hours start-to-finish from planning to uploading the final video, so with a little practice, considering accessibility from the beginning of planning and doing the work as you go should add less than 10% to video production costs.

Audio Description (and More? requiring video editing)

See the slides and some of the above description (for the shot list, etc.) to understand what was done and considered in preparing the below "screenplay." You should note by now that there are three quite different "screenplay" types: a screenplay from which the original video will be produced, a "web" screenplay that provides a text-only media alternative to the video, and a screenplay that provides instructions to the editor and narrator/voice-over (VO) artist.

Clean Plates at State — Video Editor Screenplay (Instructions for Video Editor Remediation)

VO: Illustration of Michigan State University's campus, the Clean Plates at State logo appears.

Selin Sergin, Clean Plates Volunteer: "A key thing that I learned first was I was so surprised about how much food people wasted. They would get a whole meal and then they would maybe try a bite and that would rack up to over a pound of food waste.

"Then imagine hundreds of people coming through and how much food that they waste. Something I realized, maybe it's not just a problem of students not realizing it but it's definitely multi-faceted. The cafeteria can think about the food that they're serving and how much they're serving and the students thinking about picking something that they know they'll eat and not wasting what they choose."

[Pause video] We see several shots of students in the cafeteria, food being piled onto plates, lots of leftover food on trays, and those trays being cleaned up and weighed.

Brendan Wang, Clean Plates Volunteer: "I think it was a really eye-opening experience to really kind of see what's going on behind the scenes. It really starts with a mindset and the mentality to want to help our environment and the people around us. Even lots of my friends don't care at all whatsoever. I think by starting off with education and just informing each other about little actions having big effects. I think that could have a really profound change in how we look at the foods we eat and how much we consume."

[Pause video] More shots of students holding trays of food and leftovers in the cafeteria. Some students take their trays to a Clean Plates at State food audit table to weigh their leftovers.

Selin: "Don't be ashamed that you waste food. Something that was really interesting to me is that when we were doing the food waste audits then they realized that we were gonna weigh their food they always seemed so ashamed. It's something that people care about but it's not necessarily something that people always think about. [Large Clean Plates at State poster on the wall with the logo above "Food Waste Study." Then "Eat at State" logo appears in corner till the end.] Don't be ashamed of it but not be afraid to start thinking about it and having conversations about it."

Developing this editor/narrator screenplay took about an hour using the shot list and web screenplay above; then voice-overs and editing took about another 4 hours. It all could have been done much more efficiently if the thinking and the building had considered accessibility from the beginning.

Our final video meets 1.2.7 (AAA) Extended Audio Description for the simple reason that there was insufficient non-speaking space in the original video to add any audio description, and even then our version leaves out a lot of scene descriptions that were considered not essential to "the sense of the video."

Issue No. 7 Accessibility — Web "Screenplay"

Michigan State University Information Technology Services presents another video from the "How IT Works" comic poster/card series, this time the topic is Accessibility.

It's another beautiful day on the Michigan State University Campus [light background music] — and in the depths of the Brody Engagement Center, two students are studying. Camisha is working at her computer across the table from Justin at his computer. But something is about to go awry...[Shift to more dramatic music].

Camisha: Hey Justin, can you tell me what this webpage says? [looking up and over at Justin]

Justin: Sure thing, Camisha. Is it difficult for you to read? [looking up into the camera]

Camisha: Yes, it is. Many websites have low contrast between text and the background. That makes it tough for some people to read! [arms with palms up in a helpless shrug]

Justin: Boggling brain shaker! I wouldn't have even thought of that! [holding his head between the fingertips of both hands]

Camisha: You're not alone. A lot of people don't give it a second thought, but those who have even minor vision difficulty can be severely affected by websites and content that are inaccessible. [split frame with Camisha and another student in the top and Justin in the bottom looking thoughtfully into the upper distance]

Justin: Gee whiz! What other types of accessibility needs are there?

Camisha: Well, sometimes color is used to emphasize text, but some people can't see color very well, so it can be helpful to make the text bold or underline it. [with a large bright color wheel in the background]

Closed captioning (CC) is another important one. That's a must for any online video!

And, people with vision impairments use screen reader software. [Camisha points to headphones while looking at Justin] Since the computer is reading text for them, formatting that text with heading styles, list styles, and alternative text for images helps out immensely!

Justin: Enlightening illumination! [bright yellow sunburst behind Justin's head as he looks our way] I sure have more of an appreciation for persons with disabilities! Where can I find more info? [with a questioning look]

Camisha: Go to webaccess.msu.edu [holding up a banner that spells it out] to learn more about web accessibility solutions!

Issue Number 7. Accessibility

Copyright 2019 Michigan State University Board of Trustees

Other MSU IT connections: tech.msu.edu; Twitter: TechAtMSU; techstore.msu.edu; Facebook: MSUTechStore; YouTube: go.msu.edu/msuit-youtube

How IT Works splash screen with a silhouette of part of the MSU campus with the Beaumont Tower included in front of a setting sun.

Initial time for creation of screenplay: 50 minutes starting from the initial "director instructions" screenplay for this 1 minute 40 second video.

The original development of the Issue No. 7 published comic page required about 50 hours, not counting printing, and the initial video that was made from it took about 16 hours, including figuring out timing, shooting, editing, and dealing with the music soundtrack.

Issue No. 7 Accessibility — Video Editor Screenplay

Narrator: Michigan State University Information Technology Services presents another video from the "How IT Works" comic poster/card series, featuring the topic of Accessibility.

It's another beautiful day on the Michigan State University Campus — and in the depths of the Brody Engagement Center, two students are studying. Camisha is working at her computer across the table from Justin at his computer. But something is about to go awry...

Camisha: (working at a computer) Hey Justin, can you tell me what this webpage says?

Justin: Sure thing, Camisha. Is it difficult for you to read?

Camisha: [arms with palms up in a helpless shrug] Yes, it is. Many websites have low contrast between text and the background. That makes it tough for some people to read!

Justin: [Looking perplexed and holding his head between the fingertips of both hands] Boggling brain shaker! I wouldn't have even thought of that!

Camisha: You're not alone. A lot of people don't give it a second thought, but those who have even minor vision difficulty can be severely affected by websites and content that are inaccessible.

Justin: Gee whiz! What other types of accessibility needs are there?

[A large bright color wheel fills the background of the panel]

Camisha: Well, sometimes color is used to emphasize text, but some people can't see color very well, so it can be helpful to make the text bold or underline it.

Closed captioning (CC) is another important one. That's a must for any online video!

[Camisha points to headphones she's wearing while looking at Justin] And, people with vision impairments use screen reader software. Since the computer is reading text for them, formatting that text with heading styles, list styles, and alternative text for images helps out immensely!

Justin: Enlightening illumination! I sure have more of an appreciation for persons with disabilities! Where can I find more info?

Camisha: Go to webaccess.msu.edu to learn more about web accessibility solutions!

Narrator: Issue Number 7. Accessibility

Copyright 2019 Michigan State University Board of Trustees

Other MSU IT connections: tech.msu.edu; Twitter: TechAtMSU; techstore.msu.edu; Facebook: MSUTechStore; YouTube: go.msu.edu/msuit-youtube

How IT Works splash screen with a silhouette of part of the MSU campus with the Beaumont Tower included in front of a setting sun.

The above screenplay, built from the prior one, took less than an hour; then recording the voices and reworking the soundtrack took about 4 hours. The original creation of the comic page took about 50 hours and the original version of the video, with just a music soundtrack, took about 16 hours. Do the math and you'll see that the total "accessibility" remediation was maybe 6 percent of the total time spent, and if you think about it you'll realize that building the accessibility considerations in from the start would likely have added maybe one percent to the total.

WCAG 1.1 and 1.2 Guidelines Summarized and Grouped by Disability, Media, and Number

In the various sorted orders of the Guidelines below, only the material with strong emphasis styling (A and AA) is required by current MSU Policy. If multiple guidelines are applicable to a specific item, all must be met. For example, MSU Policy requires meeting the guidelines at the AA level; therefore a prerecorded video with audio always requires both captions and Audio Description. A Media Alternative (text) with both the audio material and a description of the action, such as something like a screenplay would provide, is optional for prerecorded video. However, since it is often impossible to fit audio descriptions into pauses in dialog in videos (or editing restrictions preclude that), something that would read like a screenplay or a book often becomes mandatory for AA and will simultaneously meet the AAA level requirement. An animated GIF that conveys any information will always require a screenplay or sufficient alt text even though it has no sound, or it will need to be converted to a video with Audio Description sound added.

WCAG Video/Audio Guidelines Grouped By Disability

Deaf (AA): 1.2.1 (Prerecorded) Audio-only; 1.2.2 Captions (Prerecorded); 1.2.3 (Prerecorded Video) ... Media Alternative; 1.2.4 Captions (Live); [AAA 1.2.8 Media Alternative; 1.2.9 Audio-only (Live): (nearly) synchronized text]

Deaf-blind: [(AAA) 1.2.8 Media Alternative; 1.2.9 Audio-only (Live): (nearly) synchronized text]

Blind (AA): 1.1.1 Alternative Text; 1.2.1 Video-only (Prerecorded), alternative text or audio description; 1.2.3 (Prerecorded Video) Audio Description or Media Alternative; 1.2.5 Audio Description (Prerecorded); [AAA 1.2.8 Media Alternative]

WCAG Video/Audio Guidelines Grouped By Media

Images, SVG, video, sound recording, etc. (with exceptions) (AA): 1.1.1 Alternative Text (depending on media: alt, title, name, full-text, table)

Audio-only (Prerecorded) (AA): 1.1.1 Non-text Content: text alternative; 1.2.1 Audio-only: alternative text; 1.2.2 Captions (Prerecorded): synchronized

Audio-only (Live) (AA): 1.2.4 Captions (Live): synchronized; [AAA 1.2.9 Audio-only (Live): (nearly) synchronized text]

Video-only (Prerecorded)(AA): 1.1.1 Non-text: alternative text; 1.2.1 Video-only (Prerecorded): alternative text, audio description; 1.2.3 Audio Description or Media Alternative (Prerecorded); 1.2.5 Audio Description (Prerecorded); [AAA 1.2.8 Media Alternative (Prerecorded)]

Video [w/audio] (Prerecorded) (AA): 1.1.1 Non-text Content: text alternative; 1.2.2 Captions (Prerecorded): synchronized; 1.2.3 Audio Description or Media Alternative; 1.2.5 Audio Description (Prerecorded): synchronized; [AAA 1.2.8 Media Alternative (Prerecorded)]

Video (Live): 1.2.4 Captions (Live) (AA): synchronized; [AAA 1.2.9 Audio-only (Live): (nearly) synchronized text]

WCAG Video/Audio Guidelines Ordered By Guideline

1. Perceivable

1.1 Text Alternatives

1.1.1 (A) Non-text Content [images, SVG, video, etc.] has a text alternative [blind] except:

Input Controls: descriptive name required [& 4.1]

Time-Based Media [video and/or audio prerecorded]: descriptive identification required [& 1.2]

Test: descriptive identification required (if text alternative would invalidate test)

Sensory [fear, joy]: descriptive identification required (scary music, joyful music)

CAPTCHA: text describes purpose AND alternative forms for different perceptions too

Decoration, Formatting, Invisible: pure decoration is made ignorable by assistive technologies

1.2 Time-based Media

1.2.1 (A) Audio-only and Video-only (Prerecorded): except when clearly identified as an alternative for text that is also provided

Prerecorded Audio-only: equivalent information in alternative text (deaf)

Prerecorded Video-only: either alternative text (blind) or equivalent audio description (blind) of visuals

1.2.2 (A) Captions (Prerecorded [audio alone or audio in video]): synchronized visual captions (deaf) except when clearly identified as an alternative for text that is also provided

1.2.3 (A) Audio Description [synchronized (blind)] or Media Alternative (blind, deaf) (Prerecorded [video]): except when clearly identified as an alternative for text that is also provided

1.2.4 (AA) Captions (Live [audio alone or in video]): synchronized visual captions (deaf) of speech/speaker, sound effects, other significant audio (no exceptions)

1.2.5 (AA) Audio Description (Prerecorded [video]): synchronized audio (blind) {required but not always possible hence 1.2.7 or 1.2.8 may be useful} {may already be met by audio track (possibly 1.2.3 Audio Description) or due to meaningless video}

1.2.6 (AAA) Sign Language (Prerecorded [audio alone or in video]): (deaf) {video result always}

1.2.7 (AAA) Extended Audio Description (Prerecorded [video]): (blind) {may already be met by audio track or due to meaningless video}

1.2.8 (AAA) Media Alternative (Prerecorded [video only or audio visual]): screenplay (or "book") (deaf, blind, deaf-blind) {meets/met by 1.2.3 ...or Media Alternative}

1.2.9 (AAA) Audio-only (Live): live text including speech and description of other sound necessary for understanding (deaf)

A Few Examples and Resources Related to Good Audio Descriptions

Media Access Group WGBH is the leader in audio description, with a start in the 1990s.

Samples of audio description from American Council of the Blind's Audio Description Project (ADP).

Samples of STEM-based audio description from Penn State.

Links to the Original Versions of the Remediated Videos

Issue No. 7 Accessibility

Clean Plates at State