Maciej Cieślar A JavaScript developer and a blogger @ https://www.mcieslar.com/

Generating video previews with Node.js and FFmpeg


Every website that deals with video streaming in any way has a way of showing a short preview of a video without actually playing it. YouTube, for instance, plays a 3- to 4-second excerpt from a video whenever users hover over its thumbnail. Another popular way of creating a preview is to take a few frames from a video and make a slideshow.

We are going to take a closer look at how to implement both of these approaches.

How to manipulate a video with Node.js

Manipulating a video with Node.js itself would be extremely hard, so instead we are going to use the most popular video manipulation tool: FFmpeg. In the documentation, we read:

FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much anything that humans and machines have created. It supports the most obscure ancient formats up to the cutting edge. No matter if they were designed by some standards committee, the community or a corporation. It is also highly portable: FFmpeg compiles, runs, and passes our testing infrastructure FATE across Linux, Mac OS X, Microsoft Windows, the BSDs, Solaris, etc. under a wide variety of build environments, machine architectures, and configurations.

With such an impressive résumé, FFmpeg is the perfect choice for video manipulation done from inside a program that needs to run in many different environments.

FFmpeg is accessible through the CLI, but the framework can also be easily controlled through the node-fluent-ffmpeg library. The library, available on npm, generates the FFmpeg commands for us and executes them. It also implements many useful features, such as tracking the progress of a command and error handling.

Although the commands can get pretty complicated quickly, there’s very good documentation available for the tool. Also, in our examples, there won’t be anything too fancy going on.

The installation process is pretty straightforward if you are on a Mac or Linux machine. For Windows, refer to the official FFmpeg download instructions. The fluent-ffmpeg library depends on the ffmpeg executable either being on our $PATH (so it is callable from the CLI like: ffmpeg ...) or on our providing the paths to the executables through environment variables.

An example .env file:

FFMPEG_PATH="D:/ffmpeg/bin/ffmpeg.exe"
FFPROBE_PATH="D:/ffmpeg/bin/ffprobe.exe"

Both paths have to be set if they are not already available in our $PATH.
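fluent-ffmpeg picks the FFMPEG_PATH and FFPROBE_PATH variables up on its own, but the paths can also be handed over explicitly with its setFfmpegPath and setFfprobePath setters, which is useful when the values come from somewhere other than the environment. A minimal sketch:

```javascript
import ffmpeg from 'fluent-ffmpeg';

// point fluent-ffmpeg at the binaries when they are not on $PATH
if (process.env.FFMPEG_PATH) {
  ffmpeg.setFfmpegPath(process.env.FFMPEG_PATH);
}
if (process.env.FFPROBE_PATH) {
  ffmpeg.setFfprobePath(process.env.FFPROBE_PATH);
}
```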

Creating a preview

Now that we know what tools to use for video manipulation from within the Node.js runtime, let’s create the previews in the formats mentioned above. I will be using Childish Gambino’s “This is America” video for testing purposes.


Video fragment

The video fragment preview is pretty straightforward to create; all we have to do is slice the video at the right moment. In order for the fragment to be a meaningful and representative sample of the video content, it is best if we get it from a point somewhere around 25–75 percent of the total length of the video. For this, of course, we must first get the video duration.

In order to get the duration of the video, we can use ffprobe, which comes with FFmpeg. ffprobe is a tool that lets us get the metadata of a video, among other things.

Let’s create a helper function that gets the duration for us:

import ffmpeg from 'fluent-ffmpeg';

export const getVideoInfo = (inputPath: string) => {
  return new Promise((resolve, reject) => {
    ffmpeg.ffprobe(inputPath, (error, videoInfo) => {
      if (error) {
        return reject(error);
      }

      const { duration, size } = videoInfo.format;

      return resolve({
        size,
        durationInSeconds: Math.floor(duration),
      });
    });
  });
};

The ffmpeg.ffprobe method calls the provided callback with the video metadata. The videoInfo is an object containing many useful properties, but we are interested only in the format object, in which there is the duration property. The duration is provided in seconds.

Now we can create a function for creating the preview.

Before we do that, let’s break down the FFmpeg command used to create the fragment:

ffmpeg -ss 146 -i video.mp4 -y -an -t 4 fragment-preview.mp4
  • -ss 146: Start video processing at the 146-second mark of the video (146 is just a placeholder here, our code will randomly generate the number of seconds)
  • -i video.mp4: The input file path
  • -y: Overwrite any existing files while generating the output
  • -an: Remove audio from the generated fragment
  • -t 4: The duration of the fragment (in seconds)
  • fragment-preview.mp4: The path of the output file

Now that we know what the command will look like, let’s take a look at the Node code that will generate it for us.

const createFragmentPreview = (
  inputPath,
  outputPath,
  fragmentDurationInSeconds = 4,
) => {
  return new Promise(async (resolve, reject) => {
    try {
      const { durationInSeconds: videoDurationInSeconds } = await getVideoInfo(
        inputPath,
      );

      const startTimeInSeconds = getStartTimeInSeconds(
        videoDurationInSeconds,
        fragmentDurationInSeconds,
      );

      return ffmpeg()
        .input(inputPath)
        .inputOptions([`-ss ${startTimeInSeconds}`])
        .outputOptions([`-t ${fragmentDurationInSeconds}`])
        .noAudio()
        .output(outputPath)
        .on('end', resolve)
        .on('error', reject)
        .run();
    } catch (error) {
      // without the try/catch, a rejection from getVideoInfo would be
      // silently swallowed by the async executor
      return reject(error);
    }
  });
};

First, we use the previously created getVideoInfo function to get the duration of the video. Then we get the start time using the getStartTimeInSeconds helper function.

Let’s think about the start time (the -ss parameter) because it may be tricky to get it right. The start time has to be somewhere between 25–75 percent of the video length since that is where the most representative fragment will be.

But we also have to make sure that the randomly generated start time plus the duration of the fragment is not larger than the duration of the video (startTime + fragmentDuration ≤ videoDuration). If the sum exceeded the video duration, the fragment would be cut short since there wouldn’t be enough video left.

With these requirements in mind, let’s create the function:

const getStartTimeInSeconds = (
  videoDurationInSeconds,
  fragmentDurationInSeconds,
) => {
  // by subtracting the fragment duration we can be sure that the resulting
  // start time + fragment duration will be less than the video duration
  const safeVideoDurationInSeconds =
    videoDurationInSeconds - fragmentDurationInSeconds;

  // if the fragment duration is longer than the video duration
  if (safeVideoDurationInSeconds <= 0) {
    return 0;
  }

  return getRandomIntegerInRange(
    0.25 * safeVideoDurationInSeconds,
    0.75 * safeVideoDurationInSeconds,
  );
};

First, we subtract the fragment duration from the video duration. By doing so, we can be sure that the resulting start time plus the fragment duration will be smaller than the video duration.

If the result of the subtraction is less than 0, then the start time has to be 0 because the fragment duration is longer than the actual video. For example, if the video were 4 seconds long and the expected fragment were to be 6 seconds long, the fragment would be the entire video.
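To make the boundary behavior concrete, here is a small deterministic sketch of the same arithmetic; the random draw is replaced by the endpoints of the range, and the durations are illustrative:

```javascript
// safe duration: subtracting the fragment length guarantees that
// startTime + fragmentDuration <= videoDuration
const safeDuration = (videoSeconds, fragmentSeconds) =>
  Math.max(videoSeconds - fragmentSeconds, 0);

// 240-second video, 4-second fragment: the start time is drawn from
// 25-75 percent of the 236 "safe" seconds
console.log(0.25 * safeDuration(240, 4)); // 59
console.log(0.75 * safeDuration(240, 4)); // 177

// 4-second video, 6-second fragment: the fragment covers the whole
// video, so the start time falls back to 0
console.log(safeDuration(4, 6)); // 0
```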

The function returns a random number of seconds from the range between 25–75 percent of the video length using the helper function: getRandomIntegerInRange.

export const getRandomIntegerInRange = (min, max) => {
  const minInt = Math.ceil(min);
  const maxInt = Math.floor(max);

  return Math.floor(Math.random() * (maxInt - minInt + 1) + minInt);
};

It makes use of, among other things, Math.random() to get a pseudo-random integer in the inclusive [min, max] range.
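As a quick sanity check (my own addition, not part of the original code), we can confirm that every draw stays inside the inclusive range:

```javascript
const getRandomIntegerInRange = (min, max) => {
  const minInt = Math.ceil(min);
  const maxInt = Math.floor(max);

  return Math.floor(Math.random() * (maxInt - minInt + 1) + minInt);
};

// every draw should land in the inclusive [59, 177] range
for (let i = 0; i < 1000; i += 1) {
  const n = getRandomIntegerInRange(59, 177);
  if (n < 59 || n > 177) {
    throw new Error(`out of range: ${n}`);
  }
}
console.log('all draws in range');
```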

Now, coming back to the command, all that’s left to do is set the command’s parameters with the generated values and run it.

return ffmpeg()
  .input(inputPath)
  .inputOptions([`-ss ${startTimeInSeconds}`])
  .outputOptions([`-t ${fragmentDurationInSeconds}`])
  .noAudio()
  .output(outputPath)
  .on('end', resolve)
  .on('error', reject)
  .run();

The code is self-explanatory. We make use of the .noAudio() method to generate the -an parameter. We also attach the resolve and reject listeners on the end and error events, respectively. As a result, we have a function that is easy to deal with because it’s wrapped in a promise.

In a real-world setting, we would probably take in a stream and output a stream from the function, but here I decided to use promises to make the code easier to understand.

Here are a few sample results from running the function on the “This is America” video. The videos were converted to gifs to embed them more easily.

This Is America Video Preview 1

This Is America Video Preview 2

This Is America Video Preview 3

Since the users are probably going to view the previews in small viewports, we could do without an unnecessarily high resolution and thus save on the file size.

Frames interval

The second option is to get x frames evenly spread throughout the video. For example, if we had a video that was 100 seconds long and we wanted 5 frames out of it for the preview, we would take a frame every 20 seconds. Then we could either merge them together in a video (using ffmpeg) or load them to the website and manipulate them with JavaScript.

Let’s break down the command:

ffmpeg -i video.mp4 -y -vf fps=1/24 thumb%04d.jpg
  • -i video.mp4: The input video file
  • -y: Output overwrites any existing files
  • -vf fps=1/24: The filter that takes a frame every (in this case) 24 seconds
  • thumb%04d.jpg: The output pattern that generates files in the following fashion: thumb0001.jpg, thumb0002.jpg, etc. The %04d part specifies that the frame number should be zero-padded to four digits
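The numbering scheme itself is easy to reproduce; a hypothetical frameName helper (the name is my own) shows what the pattern expands to:

```javascript
// thumb%04d.jpg pads the frame index with zeros to four digits
const frameName = (index) => `thumb${String(index).padStart(4, '0')}.jpg`;

console.log(frameName(1)); // thumb0001.jpg
console.log(frameName(42)); // thumb0042.jpg
console.log(frameName(1234)); // thumb1234.jpg
```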

With the command also being pretty straightforward, let’s implement it in Node.

export const createXFramesPreview = (
  inputPath,
  outputPattern,
  numberOfFrames,
) => {
  return new Promise(async (resolve, reject) => {
    try {
      const { durationInSeconds } = await getVideoInfo(inputPath);

      // 1/frameIntervalInSeconds = 1 frame every x seconds; the Math.max
      // guard avoids a zero interval (and a division by zero in the filter)
      // when numberOfFrames exceeds the duration
      const frameIntervalInSeconds = Math.max(
        Math.floor(durationInSeconds / numberOfFrames),
        1,
      );

      return ffmpeg()
        .input(inputPath)
        .outputOptions([`-vf fps=1/${frameIntervalInSeconds}`])
        .output(outputPattern)
        .on('end', resolve)
        .on('error', reject)
        .run();
    } catch (error) {
      // surface getVideoInfo failures instead of swallowing them
      return reject(error);
    }
  });
};

As was the case with the previous function, we must first know the length of the video in order to calculate when to extract each frame. We get it with the previously defined helper getVideoInfo.

Next, we divide the duration of the video by the number of frames (passed as an argument, numberOfFrames). We use the Math.floor() function to make sure the interval is an integer, which also guarantees that the interval multiplied by the number of frames is less than or equal to the duration of the video.
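The flooring step can be sketched deterministically (the durations here are illustrative):

```javascript
// flooring the interval guarantees that interval * numberOfFrames
// never exceeds the video duration
const frameIntervalInSeconds = (durationInSeconds, numberOfFrames) =>
  Math.floor(durationInSeconds / numberOfFrames);

console.log(frameIntervalInSeconds(250, 10)); // 25 -> -vf fps=1/25
console.log(frameIntervalInSeconds(247, 10)); // 24 -> 24 * 10 = 240 <= 247
```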

Then we generate the command with the values and execute it. Once again, we attach the resolve and reject functions to the end and error events, respectively, to wrap the output in the promise.

Here are some of the generated images (frames):

As stated above, we could now load the images in a browser and use JavaScript to make them into a slideshow or generate a slideshow with FFmpeg. Let’s create a command for the latter approach as an exercise:

ffmpeg -framerate 1/0.6 -i thumb%04d.jpg slideshow.mp4
  • -framerate 1/0.6: Each frame should be seen for 0.6 seconds
  • -i thumb%04d.jpg: The pattern for the images to be included in the slideshow
  • slideshow.mp4: The output video file name
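Since each image is shown for a fixed time, the slideshow length is easy to predict; a small sketch of that arithmetic (the helper name is my own):

```javascript
// -framerate 1/0.6 displays each input image for 0.6 seconds, so the
// slideshow length is simply numberOfFrames * secondsPerFrame
const slideshowDurationInSeconds = (numberOfFrames, secondsPerFrame) =>
  numberOfFrames * secondsPerFrame;

console.log(slideshowDurationInSeconds(10, 0.6)); // 6
```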

Here’s the slideshow video generated from 10 extracted frames. A frame was extracted every 24 seconds.

Final Slideshow Result

This preview shows us a very good overview of the content of the video.

Fun fact

In order to prepare the resulting videos for embedding in the article, I had to convert them to the .gif format. There are many online converters available, as well as apps that could do this for me. But while writing a post about using FFmpeg, it felt weird to not even try to use it in this situation. Sure enough, converting a video to the gif format can be done with one command:

ffmpeg -i video.mp4 -filter_complex "[0:v] split [a][b];[a] palettegen [p];[b][p] paletteuse" converted-video.gif

Here’s the blog post explaining the logic behind it.

Now, sure, this command is not that easy to understand because of the complex filter, but it goes a long way in showing how many use cases FFmpeg has and how useful it is to be familiar with this tool.

Instead of using an online converter, where the conversion could take a while because the free tools do the work server-side, I executed the command locally and had the gif ready after only a few seconds.

Summary

It is not very likely that you will need to create previews of videos yourself, but hopefully by now you know how to use FFmpeg and its basic command syntax well enough to use it in any potential projects. Regarding the preview formats, I would probably go with the video fragment option, as more people will be familiar with it because of YouTube.

We should probably generate the previews of the video with low quality to keep the preview file sizes small since they have to be loaded on users’ browsers. The previews are usually shown in a very small viewport, so the low resolution should not be a problem.
