Adding auto-generated video to your slides

Author(s): Helena Rasche
Reviewers: Saskia Hiltemann, Helena Rasche, Bérénice Batut, Björn Grüning, Nicola Soranzo, Martin Čech
Overview
Questions:
  • How can we add auto-generated video?

  • How does it work?

  • What do I need to do to make it optimal for viewers?

Objectives:
  • Adding a video to a set of slides

Time estimation: 20 minutes
Published: Oct 20, 2020
Last modification: Nov 9, 2023
License: Tutorial Content is licensed under Creative Commons Attribution 4.0 International License. The GTN Framework is licensed under MIT
PURL: https://gxy.io/GTN:T00071
Revision: 12

Video Lectures

Based on the work by Delphine Larivière and James Taylor on their COVID-19 Lectures, we have implemented a similar feature in the Galaxy Training Network.

Agenda

In this tutorial, we will:

  1. Video Lectures
    1. How it Works
  2. Enabling Video
    1. Writing Good Captions
    2. Enable the Video
  3. Voices
  4. How it works: In Detail
  5. Conclusion

How it Works

We wrote a short script which does the following:

Locally and in production:

  • Extracts a ‘script’ from the slides: every presenter comment in the slide deck is turned into a text file.
  • Every line of this text file is then narrated by Amazon Polly (if you have money) or MozillaTTS (free).
  • The slide deck is converted to a PDF, and then each slide is extracted as a PNG.
  • Captions are extracted from the audio components.
  • The narration is stitched together into an mp3.
  • The images are stitched together into an mp4 file.
  • The video, audio, and captions are muxed together into a final mp4 file, as sketched below.
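To make the last three steps concrete, here is a minimal sketch of how they map onto standard ffmpeg invocations. The file names are made up, and this is not the actual ari.sh:

# stitch the per-line narration clips into one mp3
# (list.txt contains one "file 'clip.mp3'" line per clip)
ffmpeg -f concat -safe 0 -i list.txt -c copy narration.mp3

# turn the slide PNGs into a video track; this sketch shows one slide
# per second, whereas the real pipeline times each slide to its narration
ffmpeg -framerate 1 -i slide_%03d.png -c:v libx264 -pix_fmt yuv420p slides.mp4

# mux video, audio, and captions into the final mp4
ffmpeg -i slides.mp4 -i narration.mp3 -i captions.vtt \
  -map 0:v -map 1:a -map 2:s -c:v copy -c:a aac -c:s mov_text final.mp4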

In production

  • We use Amazon Polly, paid for by the Galaxy Project.
  • The result is uploaded to an S3 bucket.
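That upload amounts to something like the following (the bucket name and path here are illustrative, not the real ones):

aws s3 cp final.mp4 s3://example-gtn-videos/topic/tutorial/slides.mp4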

Enabling Video

We have attempted to simplify this process as much as possible, but making good slides which work well is up to you.

Writing Good Captions

Every slide must have some narration in the presenter notes. It does not make sense for students to see a slide without commentary. For each slide, you’ll need to write presenter notes in complete, but short, sentences, as in the example below.
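For reference, in a slide deck the presenter notes are everything between the ??? marker and the --- that ends the slide; a minimal (made-up) slide looks like this:

# Quality Control

- Always check your data before analysing it.

???
Before we analyse our data, we first check its quality.
This catches problems early, before they can affect our results.

---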

Sentence Structure

Use simple, uncomplicated sentences wherever possible. Break ideas up into easy-to-digest pieces. Students will be listening to the spoken narration and possibly reading the captions at the same time.

(Update, 2021-05-01) There used to be a limit of ~120 characters per sentence, but this is no longer an issue: sentences that are too long are now broken up in the captions and shown over multiple timepoints. So if you need to write a really long sentence you can, but we still advise simplifying sentences where possible.

Captions per Slide

Every slide must have some speaker notes in this system, NO exceptions.

Punctuation

Sentences should end with punctuation like . or ? or even ! if you’re feeling excited.

Abbreviations

Abbreviations are generally fine as-is (e.g. “e.g.”, “i.e.”, and RNA are all read correctly). Make sure abbreviations are in all caps, though.

Good: “This role deploys CVMFS.”

“Weird” Names

For words like these, you will want to teach the GTN how to pronounce them by editing bin/ari-map.yml and providing your preferred pronunciation.

E.g.

Word        Pronunciation
SQLAlchemy  SQL alchemy
FastQC      fast QC
nginx       engine X
gxadmin     GX admin
/etc        / E T C

The same applies to the many terms we read differently from how they are written, e.g. ‘src’ vs ‘source’. Most of us would pronounce it like the latter, even though it isn’t spelt that way. Our speaking robot doesn’t know what we mean, so we need to spell it out properly.

So we write the definition in the bin/ari-map.yml file.
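Assuming each entry simply maps the written form to the spoken form (check the real bin/ari-map.yml for its exact layout), the definitions for the examples above would look something like:

SQLAlchemy: SQL alchemy
FastQC: fast QC
nginx: engine X
gxadmin: GX admin
src: source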

Other Considerations

(Written 2020-12-16, things may have changed since.)

Be sure to check the pronunciation in the generated narration. There are known issues with heteronyms: words spelt the same but with different pronunciations and meanings. Consider “read” for a classic example, or “analyses” for one that comes up often in the GTN: “She analyses data” and “Multiple analyses” are pronounced quite differently depending on their usage in sentences. See the Wiktionary page for more information, or the list of English heteronyms you might want to be aware of.

This becomes an issue for AWS Polly and Mozilla’s TTS, which sometimes lack sufficient context to choose between the two pronunciations. You’ll find that “many analyses” is pronounced correctly while “multiple analyses” isn’t.

Oftentimes the services don’t understand part of speech, so by adding adjectives to “analyses” you confuse the engine into thinking it should be the third-person-singular verb pronunciation. This is probably because it only has one or two words of context around the word to be pronounced.

Enable the Video

Lastly, we need to tell the GTN framework we would like videos to be generated.

Hands-on: Enable video
  1. Edit the slides.html for your tutorial
  2. Add video: true to the metadata at the top of the file

That’s it! With this, videos can be automatically generated.
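The metadata at the top of slides.html would then look something like this (the layout and title are illustrative placeholders):

---
layout: tutorial_slides
title: "My Tutorial"
video: true
---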

Voices

There are multiple voices available; see the following list:

Name     Region     Neural
Amy      en-GB      True
Aria     en-NZ      True
Brian    en-GB      True
Emma     en-GB      True
Joanna   en-US      True
Joey     en-US      True
Kendra   en-US      True
Matthew  en-US      True
Nicole   en-AU      False
Olivia   en-AU      True
Raveena  en-IN      False
Salli    en-US      True
Ayanda   en-ZA      True
Geraint  en-GB-WLS  True

By default a random voice is chosen each time the video is rebuilt (which only happens when a change is made to that slide deck). We do this to ensure a good diversity of genders and nationalities across the audio samples.

However, if you have a preferred voice, you can set it permanently for that video by adding the following metadata to the top of your slide deck:

voice:
  id: Lupe
  lang: es-US
  neural: true

The voice in the example above is specific to Spanish-language content, hence it does not appear in the table above.
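For an English-language deck you would instead pick one of the voices from the table, e.g.:

voice:
  id: Amy
  lang: en-GB
  neural: true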

How it works: In Detail

  1. We take our markdown slides, e.g. topics/introduction/tutorials/galaxy-intro-short/slides.html
  2. In order for them to be processed, slides must have an annotation saying video: true in the header metadata, and ‘speaker notes’, i.e. everything after the ??? and before the --- that ends each slide.
  3. This is turned into our ‘plain text slides’, which just renders the markdown a bit more nicely (example).
  4. Then we run ari.sh, which performs the pipeline described above: extracting the script, synthesising the narration, rendering the slides to images, generating captions, and muxing everything into the final video.

All of this is run on a cron schedule by .github/workflows/video.yml, which handles building all of these videos and then uploading them to S3.
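As a rough illustration of the shape of such a workflow (the schedule, job name, and steps here are made up; consult the real video.yml for the actual configuration):

name: Video
on:
  schedule:
    - cron: '0 4 * * *'        # illustrative schedule, not the real one
jobs:
  build-videos:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./bin/ari.sh                                   # hypothetical invocation
      - run: aws s3 sync videos/ s3://example-gtn-videos/   # illustrative upload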

Many of the scripts are internally prefixed with ari: we named our internal version after github.com/jhudsl/ari/, which inspired it. But we wanted a version that would be more closely tied to the GTN and integrate nicely with our infrastructure, so we ended up writing our own.

Conclusion