With online video platforms growing rapidly, a law enacted in 2012 obligated companies such as Netflix, Hulu and Amazon Prime to provide subtitles in the same language as the original audio (also known as closed captions) for 100% of their English-language content by 2014. Since then, this kind of video transcription work has grown exponentially for translators and subtitlers.
What should be kept in mind when transcribing this kind of subtitle? For starters, subtitling software helps enormously with the task, especially when it comes to inserting time stamps in the subtitles (something that is practically impossible to do in a word processor).
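To give a concrete picture, a single time-coded cue in the widely used SubRip (.srt) format looks like the sketch below; the cue number, timings and dialogue are invented purely for illustration.

```
1
00:02:14,500 --> 00:02:17,000
Where were you last night?
```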
Putting together these subtitles involves transcribing the video while following certain rules, many of which are the same as for standard subtitles, while others are aimed specifically at making the content understandable without the corresponding audio.
The golden rule when undertaking the transcription is to put oneself in the shoes of the intended audience: viewers with hearing loss. It therefore becomes necessary to transcribe ambient sounds. One of the most common errors, however, is to transcribe absolutely every noise that is heard. The rule of thumb is that if the source of a sound is visible on screen, it does not need to be described. On the other hand, if a wolf howls off in the distance and the main character is startled, that sound must be described so the character's reaction can be understood. While there are several ways to transcribe sounds, the most common is to place the description inside brackets or parentheses, keeping it as succinct as possible and avoiding unnecessary adjectives or adverbs.
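For example, the off-screen howl mentioned above could be captioned as a short bracketed description in its own cue; the timing and wording here are purely illustrative.

```
2
00:05:32,000 --> 00:05:34,000
[wolf howling in the distance]
```

Note that the description names only the sound and where it comes from, with no extra embellishment.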
Another common error is ending lines with prepositions, articles or conjunctions, which makes the text harder to follow. Ideally, neither the cut between subtitles nor the line break within a subtitle should interrupt a unit of meaning (e.g. it should not separate a noun from its article, or an adjective from its noun).
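A quick illustration with invented dialogue: the first break strands the article "the" at the end of a line, while the second keeps each phrase intact.

```
Awkward:
She left the keys on the
kitchen table this morning.

Better:
She left the keys
on the kitchen table this morning.
```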
One difference from classic subtitles is how a change of speaker is marked, which is particularly useful when the characters are not on screen. This can be done with dashes, with >>, or even by placing the caption beneath the character who is speaking. As for curse words, it can generally be assumed that if they are not censored in the audio, they can be transcribed without a problem. Even so, it is advisable to check with the client, since including them could violate local broadcasting rules.
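As an illustration (invented dialogue), a dash at the start of each line signals that a different character is speaking within the same caption; the same exchange could equally be marked with >> before each speaker.

```
- Did you hear that?
- It came from the woods.
```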
Finally, time stamping is just as important as the transcription itself. It is essential that subtitles not stay on screen too briefly, so the viewer has time to read them, and that each subtitle be synchronized with the corresponding audio as accurately as possible.
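As a rough sketch of how such a check can be automated, the short Python script below parses SubRip-style timestamps and flags any cue whose reading speed exceeds a chosen characters-per-second limit. The sample cues, the 17 cps ceiling and the function names are assumptions made for illustration, not part of any particular tool or client style guide.

```python
# Minimal sketch: flag subtitle cues that flash by too quickly.
# Timestamps use the SubRip "HH:MM:SS,mmm" notation; the 17 characters-per-
# second ceiling and the sample cues are illustrative assumptions only.

def to_seconds(timestamp: str) -> float:
    """Convert 'HH:MM:SS,mmm' into seconds."""
    hours, minutes, rest = timestamp.split(":")
    seconds, millis = rest.split(",")
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds) + int(millis) / 1000

def flag_fast_cues(cues, max_cps: float = 17.0):
    """Return (start, end, cps) for every cue read faster than max_cps."""
    flagged = []
    for start, end, text in cues:
        duration = to_seconds(end) - to_seconds(start)
        chars = len(text.replace("\n", " "))
        cps = chars / duration if duration > 0 else float("inf")
        if cps > max_cps:
            flagged.append((start, end, round(cps, 1)))
    return flagged

cues = [
    ("00:05:32,000", "00:05:34,000", "[wolf howling in the distance]"),
    ("00:05:34,200", "00:05:35,000", "I told you we should have taken the main road."),
]

for start, end, cps in flag_fast_cues(cues):
    print(f"{start} --> {end}: {cps} characters per second is too fast")
```

Many subtitling tools display this reading-speed figure for each subtitle as you work; the point of the sketch is simply to show what is being measured.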
Same-language subtitles, or closed captions, are a fundamental tool for the inclusion of people with hearing loss. Now that laws regulate how much material must carry such subtitles, it is up to us to focus on their quality in order to offer viewers the best possible experience.