LinuxFest Northwest (LFNW) is a community conference about Linux put on every year by the Bellingham Linux Users Group and Bellingham Technical College. I spoke at LFNW last year and had proposals accepted to speak again this year. Unfortunately, COVID-19 threw a wrench in the plans, and the in-person conference had to be canceled due to health concerns. It moved online instead, made up of pre-recorded, webinar-style sessions alongside an online discussion board. I recorded one of my sessions, and this post documents what I did as a first-time online presenter who primarily uses Linux.

This year, the LFNW organizers accepted two of my talk proposals: “Linux Container Primitives” and “Extending containerd”. Both are talks I have given before, so I did not need to write new content. Since “Linux Container Primitives” likely had the larger audience, I picked that one.

## Slides

Since I was recording a talk I had given previously, I already had slides prepared. I didn’t make any major modifications, but I went through them to make minor updates where warranted. I primarily use Linux and (except for re:Invent) prefer to present from LibreOffice. Rather than using a screen capture tool to capture the slides, I exported them to a PDF from LibreOffice, converted the individual slides to JPEGs, and managed them individually in the video editing program (described below).

I used `pdftoppm(1)` with the `-jpeg` flag to do the conversion. On Ubuntu and Debian, this program is provided by the `poppler-utils` package.
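As a sketch, the whole conversion looks something like this; the `slides.pdf` filename and the 150 DPI resolution are hypothetical choices for illustration, not details from my actual workflow:

```shell
# Hypothetical filenames; export slides.pdf from LibreOffice
# (File → Export as PDF) first.
pdf="slides.pdf"
prefix="slide"

# -jpeg selects JPEG output; -r 150 raises the render DPI so the frames
# stay sharp at video resolution. Output files are named slide-1.jpg,
# slide-2.jpg, and so on.
cmd="pdftoppm -jpeg -r 150 $pdf $prefix"

if [ -f "$pdf" ]; then
  $cmd
else
  echo "no $pdf yet; would run: $cmd"
fi
```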

## Audio

I decided not to include video of myself, as I didn’t have an environment I’d be comfortable putting on the Internet. I also didn’t think I could record the whole presentation in one sitting, and I didn’t want to have to maintain a consistent environment and appearance across multiple sessions to make the video look reasonable.

Instead, I did an audio-only recording. I wrote scripts for each slide and demo, spoke into a cheap USB headset (which I had from the office for WFH situations before the pandemic started), and recorded the audio with Audacity. I used Audacity’s Noise Reduction effect to clean up background noise, and then went through the recording to remove pauses, mouth clicks, and breathing noises. I made a separate recording for each slide and demo, which let me record at different times and edit each piece in context. I then imported these recordings into the video editing program and synchronized them with the video content.

## Demos

For this talk, I’ve typically done live demos from a terminal. However, for the past few talks I’ve done (on other topics) I used a tool called demo-magic to automate the typing and make the demos more reliable. I wrote a separate demo script for each of my demos using demo-magic, and let the tool type while I narrated.
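A demo-magic script is just a shell script that sources the tool and calls its helpers. As a self-contained sketch, here is a stand-in `pe` (“print and execute”) function mimicking the real helper; the actual demo-magic also simulates keystroke delays and a configurable prompt (via its `TYPE_SPEED` and `DEMO_PROMPT` variables), which this stub omits:

```shell
# Stand-in for demo-magic's pe helper: print the command as if typed
# at a prompt, then execute it. (The real helper is sourced from
# demo-magic.sh and animates the typing.)
pe() {
  printf '$ %s\n' "$1"
  eval "$1"
}

# A demo script is then just a sequence of pe calls:
out=$(pe "echo 'hello from the demo'")
printf '%s\n' "$out"
```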

For capturing terminal output, I had the choice of either using a screen recorder or a text-based terminal recorder. I decided to use a text-based terminal recorder because I thought it would be easier to edit. In retrospect, I wish I had chosen to use a screen recorder instead; the terminal recorder was very labor-intensive.

My recording pipeline looked like this:

1. Record with `script(1)` (part of the `bsdutils` package) for basic terminal recording.
2. Run the demo scripts (with demo-magic) after invoking `script(1)`.
3. Convert the recording with `teseq(1)` (from the `teseq` package) into something a bit more editable, using offset timestamps instead of absolute ones:

   ```
   teseq -ttimings typescript > session.tsq
   ```

4. Edit the text to remove the invocation of the demo script and adjust the timings to synchronize with the audio.
5. Convert the edited session into an asciicast with asciinema:

   ```
   asciinema rec -c 'reseq edited.tsq --replay' demo.cast
   ```

6. Render the recording to a gif with asciicast2gif:

   ```
   docker run -v "$(pwd)":/data -u$UID -it --rm asciinema/asciicast2gif demo.cast demo.gif
   ```

7. Convert the gif to an mp4 with ffmpeg (the `scale` filter rounds the dimensions down to even numbers, which the `yuv420p` pixel format requires):

   ```
   ffmpeg -i demo.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" demo.mp4
   ```

8. Import the result into the video editing program.
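The steps above could be tied together in a wrapper script; here is a hedged sketch for one demo. The `demo1` base name and the `--timing` form of the `script` invocation are my assumptions, and by default the script only prints each command (`DRY_RUN=1`) since running it for real needs `script`, `teseq`/`reseq`, asciinema, docker, and ffmpeg installed, plus a pause in the middle to hand-edit the `.tsq` file:

```shell
# By default, only print each command instead of running it.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" -eq 1 ]; then printf '+ %s\n' "$*"; else "$@"; fi
}

pipeline() {
  demo="demo1"  # hypothetical base name for one demo's files
  # 1. Record the raw session (run the demo-magic script inside it).
  run script --timing="$demo.timings" "$demo.typescript"
  # 2. Convert to an editable form with relative timestamps.
  run sh -c "teseq -t$demo.timings $demo.typescript > $demo.tsq"
  # (hand-edit the .tsq here: cut the script invocation, fix timings)
  # 3. Replay the edited session into an asciicast.
  run asciinema rec -c "reseq $demo.edited.tsq --replay" "$demo.cast"
  # 4. Render a gif, then transcode to an even-dimensioned yuv420p mp4.
  run docker run -v "$(pwd)":/data -u"$(id -u)" --rm \
    asciinema/asciicast2gif "$demo.cast" "$demo.gif"
  run ffmpeg -i "$demo.gif" -movflags faststart -pix_fmt yuv420p \
    -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" "$demo.mp4"
}

pipeline
```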

This is a lot of steps, and manually editing the timings to synchronize with the audio was especially time-consuming. The results were also suboptimal: box-drawing characters did not render properly in asciicast2gif, and for one demo that used tmux the rendered gif had many garbage lines in the output, which I ended up removing by hand-editing the gif.

If I had used a screen recorder instead of a terminal recorder, I could have cut out some of the steps (multiple conversion and rendering passes), reduced the time taken for other steps (like audio synchronization), and had higher-quality results (no lost box-drawing characters, no extra garbage).

## Editing

To put everything together, I used Kdenlive. I was able to install a newer version of the editor via its Snap package. I used two video tracks: one with the slides (each with its length adjusted to match the audio) and one with the demos. Kdenlive let me add a visual fade between the tracks so that transitions between the slides and demos were smoother.

## The final result!

While there are always things to criticize about your own performance, I’m reasonably happy with the final video. Check it out!