This repository has been archived by the owner on Sep 24, 2019. It is now read-only.

Test for WCAG technique G87: Providing closed captions #93

Open
hannolans opened this issue Dec 15, 2013 · 8 comments

Comments

@hannolans
Contributor

The objective of this technique is to provide a way for people who have hearing impairments or otherwise have trouble hearing the dialogue in synchronized media material to be able to view the material and see the dialogue and sounds - without requiring people who are not deaf to watch the captions. With this technique all of the dialogue and important sounds are embedded as text in a fashion that causes the text not to be visible unless the user requests it. As a result they are visible only when needed. This requires special support for captioning in the user agent.

Procedure

  • Turn on the closed caption feature of the media player
  • View the synchronized media content
  • Check that captions (of all dialogue and important sounds) are visible
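For HTML5 video, the precondition of this procedure (captions exist at all) could be approximated automatically by inspecting the element's track children. A minimal sketch, assuming track metadata has already been collected into plain objects; the helper name and object shape are hypothetical, not existing QUAIL code:

```javascript
// Hypothetical helper: decide whether a list of <track> descriptors
// satisfies the "captions are available" precondition of G87.
// `tracks` is an array of objects like:
//   { kind: 'captions', srclang: 'en', src: 'captions.vtt' }
function hasCaptionTrack(tracks) {
  return tracks.some(function (track) {
    // WebVTT distinguishes 'captions' (dialogue plus important sounds)
    // from 'subtitles' (dialogue only); either indicates the technique.
    return track.kind === 'captions' || track.kind === 'subtitles';
  });
}
```

Note that even when a track is present, only a human can confirm the captions actually cover all dialogue and important sounds.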
@hannolans
Contributor Author

A first simple check would be to verify that an SRT or WebVTT file is provided, but sadly the absence of one doesn't mean the video should fail: it could be a video without dialogue or important sounds.
To check a video, we could analyse it. For example, a certain failure would be a talking-heads video with no captions provided. To analyse video, we could render it in a canvas and take captures. There seem to be libraries for face detection and further image analysis:
http://wesbos.com/html5-video-face-detection-canvas-javascript/
http://libccv.org/
An even better way would be to analyse the audio track with the Web Audio API. And if the browser QUAIL is running on is WebKit-based, we could do real-time speech-to-text with the Web Speech API in JavaScript. The speech gets analysed (in Chrome by Google) and you get the transcription back. Google has a session limit of 60 seconds, but that should be enough for us to detect whether valid captions are provided.
http://stiltsoft.com/blog/2013/05/google-chrome-how-to-use-the-web-speech-api/

So the test would be:

  1. Start a video where a human voice is assumed.
  2. Play 30 seconds of video and send the audio via the streaming API to the Web Speech API.
  3. Check whether there are captions for that time span.
  4. Check whether the words returned by the Web Speech API and the captions match (to a certain degree).
    Checks 3 and 4 should both pass.
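The comparison in step 4 could be as simple as a word-overlap ratio between the recognised speech and the caption text for the same time span. A minimal sketch; the function name and any pass threshold are assumptions, not part of QUAIL:

```javascript
// Hypothetical sketch of step 4: what fraction of the words the
// Web Speech API returned also appear in the caption text?
function captionMatchRatio(speechText, captionText) {
  // Lowercase, strip punctuation, split on whitespace.
  var normalize = function (s) {
    return s.toLowerCase().replace(/[^a-z0-9\s]/g, '').split(/\s+/).filter(Boolean);
  };
  var speechWords = normalize(speechText);
  var captionWords = new Set(normalize(captionText));
  if (speechWords.length === 0) {
    return 0;
  }
  var hits = speechWords.filter(function (w) { return captionWords.has(w); }).length;
  return hits / speechWords.length;
}
```

A ratio above some threshold (say 0.5, to allow for recognition errors) would count the captions as matching; the exact cutoff would need tuning against real recordings.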

@hannolans
Contributor Author

I talked with Arjan, and this test would be better limited to detecting (valid) caption files only, i.e. checking whether the technique is used at all.
Video analysis could be handled in "F8: Failure of Success Criterion 1.2.2 due to captions omitting some dialogue or important sound effects".
Added new issue for this failure: #152

@hannolans
Contributor Author

OK, this leaves the test with discovering caption files:

  1. Test for an object element (in the main page or in an iframe) with a parameter pointing to a file with a video extension (this will cover most video players, like JW Player),
    OR
  2. test for an HTML5 video element,
    OR other techniques, like testing for an embed element (we might later add more exotic technologies);
  3. test whether it includes caption/track files;
  4. test whether the file is in the inherited language of the page (so it is not a translation into another language);
  5. test that the file is not empty.
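Steps 1 and 3 boil down to recognising file extensions, including inside URL-encoded param values. A minimal sketch; the extension lists are assumptions and certainly not exhaustive:

```javascript
// Hypothetical extension lists -- assumptions for this sketch,
// not a definitive registry of video/caption formats.
var VIDEO_EXTENSIONS = ['flv', 'mp4', 'webm', 'ogv', 'mov', 'avi', 'wmv'];
var CAPTION_EXTENSIONS = ['srt', 'vtt'];

// Extract the file extension from a (possibly URL-encoded) URL,
// ignoring any query string or fragment.
function extensionOf(url) {
  var path = decodeURIComponent(url).split(/[?#]/)[0];
  var match = path.match(/\.([a-z0-9]+)$/i);
  return match ? match[1].toLowerCase() : null;
}

function looksLikeVideo(url) {
  return VIDEO_EXTENSIONS.indexOf(extensionOf(url)) !== -1;
}

function looksLikeCaptionFile(url) {
  return CAPTION_EXTENSIONS.indexOf(extensionOf(url)) !== -1;
}
```

Steps 4 and 5 would then apply to whatever file `looksLikeCaptionFile` finds: fetch it, compare its language against the page's inherited `lang`, and check it is non-empty.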

kevee pushed a commit that referenced this issue Mar 1, 2014
…owsers only. Tested subtitles for language. #93
@kevee kevee closed this as completed Mar 1, 2014
@hannolans
Contributor Author

Great that HTML5 video is covered. I think an object embed is not covered yet.

Is for example this video covered?
http://www.rijksoverheid.nl/documenten-en-publicaties/videos/2014/01/24/persconferentie-na-ministerraad-24-januari-2014.html

The HTML embeds the video via an `object` element whose param values reference the video and subtitle files.

A test would be:

  • Check whether a video extension is mentioned in the object element (in that case we may assume it is a video player and this check is applicable); in the example above the video file can be recognised by the Flash video extension .flv (http%3A%2F%2Fserver.rijksoverheidsvideo.nl%2Fflash%2FMP-240114-5081.flv).
  • Check the captions, here recognised by the .srt extension (http%3A%2F%2Fserver.rijksoverheidsvideo.nl%2Fondertiteling%2FMP-240114-5081.srt).
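Those two checks could be sketched as a scan of a flashvars-style param value for URL-encoded file references. A hypothetical helper, not existing QUAIL code:

```javascript
// Hypothetical sketch: find URL-encoded file references with one of the
// given extensions inside a param value, as in the example above.
function findEncodedFiles(paramValue, extensions) {
  var decoded = decodeURIComponent(paramValue);
  // e.g. matches http://.../MP-240114-5081.flv when extensions = ['flv']
  var pattern = new RegExp('https?://\\S+?\\.(' + extensions.join('|') + ')\\b', 'gi');
  return decoded.match(pattern) || [];
}
```

Running it over the example param value with `['flv', 'mp4']` would recover the video URL, and with `['srt', 'vtt']` the caption URL.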

@hannolans
Contributor Author

We could also check for .mp4 in the param.

@kevee
Collaborator

kevee commented Mar 24, 2014

I'm starting a video-captions branch to move some code into more components rather than putting it all in videoEmbeddedOrLinkedNeedCaptions.

@hannolans
Contributor Author

Great idea. We could then also add a conditional test for whether the video is live or recorded.

kevee pushed a commit that referenced this issue Mar 26, 2014
@kevee
Collaborator

kevee commented Mar 27, 2014

Merged in video-captions branch, any additional use cases we need to capture?

@kevee kevee added this to the Round 4- time-based media milestone Apr 29, 2014