Jamie Maison Freelance Software Developer specializing in creating dynamic experiences for web, mobile & desktop. Formerly @SkyUK , @NOWTV . Contributor @hackernoon.

A comprehensive guide to video playback in React


In a recent study, Cisco predicted that video content will make up 82% of all consumer internet traffic by 2022, fifteen times higher than in 2017. It has never been more important to understand the technology underpinning video playback and how it can be leveraged to provide the best possible experience.

In this post we will go through the basics of video playback before highlighting some of the most popular JavaScript video player frameworks, their features, and how to implement them in React.

Video Playback Basics

To understand video playback, it is important to first understand the process between the user clicking the play button and the first frame of the video being played. There are a fair number of steps between these two points where things can go wrong, which can ultimately hurt your video's performance or, in the worst case, prevent the video from playing at all.

Unless your video file is very small, it’s unlikely that your video player will download the whole file in one go before playback.

This would ultimately result in the consumer waiting a considerable amount of time before they would see any content, which would likely result in them getting impatient and moving away from the video.

The process of downloading the entire video in one go is often referred to as single source playback and is not recommended for most use cases.

Instead, video content is usually split up into pieces known as “chunks,” which the video player downloads in a series to ensure that the device is only ever downloading small pieces of data at any given time.

On top of the data being split into small chunks, there are usually multiple quality levels available to the player so that the client can choose the bitrate best suited to its network conditions.

If you have ever seen the dreaded buffering symbol, it is because the video player failed to step down to a smaller bitrate in time and is struggling to download each chunk.

This process, in which the client switches between different quality levels of video as it downloads chunks, is called Adaptive Bitrate Streaming (ABS), and it is important to understand when talking about video playback.
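The core of that switching logic can be sketched in a few lines. The function below is a hypothetical simplification (real players also weigh buffer health, switch history, and more): given the bitrates a manifest advertises and a measured throughput, pick the highest rendition that still leaves headroom for network variance.

```javascript
// Hypothetical sketch of the basic ABS decision. Real players factor in
// buffer levels and smoothing; this only compares bitrate to bandwidth.
function pickBitrate(availableBitrates, measuredBandwidthBps, headroom = 0.8) {
  // Leave some headroom so a momentary dip doesn't stall playback
  const budget = measuredBandwidthBps * headroom;
  const affordable = availableBitrates
    .filter(bitrate => bitrate <= budget)
    .sort((a, b) => b - a);
  // Fall back to the lowest rendition if even that exceeds the budget
  return affordable.length ? affordable[0] : Math.min(...availableBitrates);
}

console.log(pickBitrate([688000, 991000, 1427000, 2056000, 2962000], 3000000));
// With 3 Mbps measured and 80% headroom (2.4 Mbps budget) → 2056000
```

When the measured bandwidth drops, the same call returns a lower bitrate, which is exactly the "stepping down" described above.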


MSS, HLS, and DASH are three different technologies created to achieve effective ABS.

MSS

MSS (Microsoft Smooth Streaming) was standardized by Microsoft and first appeared with the introduction of the Silverlight player.

MSS downloads the video in a series of small chunks which are cached on the edge of the network, meaning that the transaction of clients requesting and receiving these chunks can happen much quicker.

To begin playback, the video player requests a manifest file from the server which lays out the details about the requested video such as the duration of the video, the location of each chunk, and the bitrates available to the player.

Below is a sample MSS manifest with some of the key information labelled:

<SmoothStreamingMedia
  MajorVersion="2"
  MinorVersion="1"
  Duration="1209510000"> <!-- Total video duration -->
  
  <!-- Video information: Chunks is the number of chunks, QualityLevels the
       number of bitrates available, and Url the template used to request chunks -->
  <StreamIndex
    Type="video"
    Name="video"
    Chunks="4"
    QualityLevels="5"
    MaxWidth="1280"
    MaxHeight="720"
    DisplayWidth="1280"
    DisplayHeight="720"
    Url="QualityLevels({bitrate})/Fragments(video={start time})">
    
    <!-- Quality levels -->
    <QualityLevel
      Index="0"
      Bitrate="2962000"
      MaxWidth="1280"
      MaxHeight="720" />
    <QualityLevel
      Index="1"
      Bitrate="2056000"
      MaxWidth="992"
      MaxHeight="560" />
    <QualityLevel
      Index="2"
      Bitrate="1427000"
      MaxWidth="768"
      MaxHeight="432" />
    <QualityLevel
      Index="3"
      Bitrate="991000"
      MaxWidth="592"
      MaxHeight="332" />
    <QualityLevel
      Index="4"
      Bitrate="688000"
      MaxWidth="448"
      MaxHeight="252" />
    
    <!-- Chunks -->
    <c d="20020000" /> <!-- Chunk duration -->
    <c d="20020000" />
    <c d="20020000" />
    <c d="20020000" />
  </StreamIndex>
</SmoothStreamingMedia>

Most commonly used on Microsoft platforms, MSS is one of the older ABS technologies and aims to provide minimal buffering and a fast startup time.
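The Url attribute in the manifest above is a template: the client substitutes the bitrate it has chosen and the start time of the chunk it needs. A minimal sketch of that expansion (the helper name is mine; the placeholder syntax is from the sample manifest):

```javascript
// Expand the MSS Url template into a concrete chunk request path.
// Placeholder names ({bitrate}, {start time}) come from the manifest above.
function buildChunkUrl(template, bitrate, startTime) {
  return template
    .replace("{bitrate}", String(bitrate))
    .replace("{start time}", String(startTime));
}

const template = "QualityLevels({bitrate})/Fragments(video={start time})";
console.log(buildChunkUrl(template, 2962000, 20020000));
// → "QualityLevels(2962000)/Fragments(video=20020000)"
```

Each chunk's start time is the sum of the `d` (duration) values of the chunks before it, which is why the manifest only needs to list durations.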

HLS

HLS stands for HTTP Live Streaming and was developed by Apple in 2009, around the release of the iPhone 3GS, as an alternative to the then-popular Adobe Flash format.

HLS divides the video into short, 10-second chunks indexed in a separate playlist file, and it is the only natively supported ABS standard on Apple devices running iOS and macOS.

A sample HLS playlist is as follows:

#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
# Maximum chunk duration (seconds)
#EXT-X-TARGETDURATION:10
#EXT-X-VERSION:4
#EXT-X-MEDIA-SEQUENCE:0
# Chunk duration (10 seconds)
#EXTINF:10.0,
# URL to request the chunk
http://example.com/movie1/fileSequenceA.ts
#EXTINF:10.0,
http://example.com/movie1/fileSequenceB.ts
#EXTINF:10.0,
http://example.com/movie1/fileSequenceC.ts
#EXTINF:9.0,
http://example.com/movie1/fileSequenceD.ts
#EXT-X-ENDLIST

HLS’s main benefit is that it is the only native ABS format for Apple devices. If you want to achieve video playback on your iOS device, HLS is your only option.
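The playlist format is simple enough to parse by hand, which is a good way to internalize it. As a sketch, summing the #EXTINF durations gives the total length of the stream (real players like HLS.js do far more, but the chunk index really is this simple):

```javascript
// Minimal sketch of reading an HLS media playlist: the total stream
// length is just the sum of the #EXTINF chunk durations.
function totalDuration(playlistText) {
  return playlistText
    .split("\n")
    .filter(line => line.startsWith("#EXTINF:"))
    .map(line => parseFloat(line.slice("#EXTINF:".length)))
    .reduce((sum, seconds) => sum + seconds, 0);
}

const playlist = [
  "#EXTM3U",
  "#EXTINF:10.0,",
  "http://example.com/movie1/fileSequenceA.ts",
  "#EXTINF:9.0,",
  "http://example.com/movie1/fileSequenceB.ts",
  "#EXT-X-ENDLIST",
].join("\n");

console.log(totalDuration(playlist)); // → 19
```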

DASH

Dynamic Adaptive Streaming over HTTP (DASH) is a fairly new technology that aims to provide support across all devices, avoiding the unnecessary complication of having to implement multiple technologies such as MSS & HLS for the same video source.

After a call for proposals by MPEG in 2009, over 50 major companies — including Google & Microsoft — collaborated to come up with the MPEG-DASH standard, which was eventually published in 2012.

In essence, DASH aims to combine all of the current technologies into one, providing the benefits of each standard and reducing technical headache. Typically, a DASH video is split into 2-4 second video chunks allowing for faster video download and ultimately better performance.

A sample DASH manifest:

<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:full:2011">
    <Period duration="PT10M"> <!-- Period duration -->
        <BaseURL>main/</BaseURL>
        <AdaptationSet mimeType="video/mp2t">
            <BaseURL>video/</BaseURL>
            <!-- 1st Bitrate -->
            <Representation id="720p" bandwidth="3200000" width="1280" height="720">
                <BaseURL>720p/</BaseURL>
                <!-- Chunk List -->
                <SegmentList timescale="90000" duration="5400000">
                    <RepresentationIndex sourceURL="representation-index.sidx"/>
                    <SegmentURL media="segment-1.ts"/>
                    <SegmentURL media="segment-2.ts"/>
                    <SegmentURL media="segment-3.ts"/>
                    <SegmentURL media="segment-4.ts"/>
                    <SegmentURL media="segment-5.ts"/>
                    <SegmentURL media="segment-6.ts"/>
                    <SegmentURL media="segment-7.ts"/>
                    <SegmentURL media="segment-8.ts"/>
                    <SegmentURL media="segment-9.ts"/>
                    <SegmentURL media="segment-10.ts"/>
                </SegmentList>
            </Representation>
            <!-- 2nd Bitrate -->
            <Representation id="1080p" bandwidth="6800000" width="1920" height="1080">
                <BaseURL>1080p/</BaseURL>
                <!-- Chunk List -->
                <SegmentList timescale="90000" duration="5400000"> <!-- Segment duration in timescale ticks -->
                    <RepresentationIndex sourceURL="representation-index.sidx"/>
                    <SegmentURL media="segment-1.ts"/>
                    <SegmentURL media="segment-2.ts"/>
                    <SegmentURL media="segment-3.ts"/>
                    <SegmentURL media="segment-4.ts"/>
                    <SegmentURL media="segment-5.ts"/>
                    <SegmentURL media="segment-6.ts"/>
                    <SegmentURL media="segment-7.ts"/>
                    <SegmentURL media="segment-8.ts"/>
                    <SegmentURL media="segment-9.ts"/>
                    <SegmentURL media="segment-10.ts"/>
                </SegmentList>
            </Representation>
            </Representation>
        </AdaptationSet>
    </Period>
</MPD>
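One thing worth noting in the manifest above is that DASH expresses durations in ticks of the declared timescale rather than in seconds, so converting is a single division. Using the sample values:

```javascript
// DASH segment durations are in ticks of the declared timescale:
// seconds = duration / timescale. Values below are from the sample manifest.
function segmentSeconds(duration, timescale) {
  return duration / timescale;
}

const perSegment = segmentSeconds(5400000, 90000); // 60 seconds per segment
const total = perSegment * 10;                     // 10 segments listed
console.log(perSegment, total); // → 60 600
```

Ten 60-second segments give 600 seconds, which matches the PT10M (10-minute) Period duration declared at the top of the manifest.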

Video Player Frameworks

Now that you have an understanding of the different ABS technologies, we can look at the various JavaScript player frameworks available to accommodate them.

Although the industry seems to be moving towards DASH, it is still a relatively new technology, and systems running MSS and/or HLS are still present in our video infrastructure, particularly on Apple devices, where HLS is the only option.

Let’s look at some of the most popular JavaScript video player frameworks, the ABS technologies they support, and how they can be implemented in React.

Video.js

The first framework that we will look at is arguably one of the most popular open-source video players available today. Built from the ground up in 2010 with the JavaScript world in mind, Video.js is used on approximately 500,000 websites worldwide.

Its main selling point is that it aims to support all types of video format including adaptive video formats such as HLS and DASH.

Used by companies such as Tumblr and LinkedIn, Video.js is easy to style, cross functional, and easy to implement. But how does it hold up in the React world?

Thankfully, implementing Video.js isn’t too much of a headache in React and can be up and running within minutes. To get started, you need to head here to download Video.js or import it using npm with npm i video.js.

From there, the Video.js player needs to be instantiated on componentDidMount before it is available for use:

import React from 'react';
import videojs from 'video.js';
import 'video.js/dist/video-js.css'; // default player styles

export default class VideoPlayer extends React.Component {
  componentDidMount() {
    // Instantiate Video.js on the DOM node once it exists
    this.player = videojs(this.videoNode, this.props, function onPlayerReady() {
      console.log('Video.js Ready', this);
    });
  }

  componentWillUnmount() {
    // Destroy the player to avoid leaking resources
    if (this.player) {
      this.player.dispose();
    }
  }

  render() {
    return (
      <div>
        <div data-vjs-player>
          <video ref={node => (this.videoNode = node)} className="video-js"></video>
        </div>
      </div>
    );
  }
}

You may notice that the player is wrapped in a data-vjs-player div. This extra step ensures that an additional wrapper isn't created in the DOM. Once your component is created, it can be triggered with various options to initiate playback.

The full list of available options to Video.js can be found here.

const videoJsOptions = {
  autoplay: true,
  controls: true,
  sources: [{
    src: '/path/to/video.mp4',
    type: 'video/mp4'
  }]
}
return <VideoPlayer { ...videoJsOptions } />
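Since Video.js 7 ships with http-streaming built in, the same component can also play adaptive sources: only the source type changes. A sketch of such an options object (the manifest URL is a placeholder):

```javascript
// Video.js 7+ bundles videojs/http-streaming, so an HLS source is passed
// the same way as a progressive MP4; only the MIME type differs.
// The manifest URL below is a placeholder.
const adaptiveOptions = {
  autoplay: true,
  controls: true,
  sources: [{
    src: "https://example.com/stream/master.m3u8",
    type: "application/x-mpegURL" // HLS manifest MIME type
  }]
};
```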

HLS.js

Video.js offers one solution for playing HLS content, but a player framework that is gaining traction as one of the better options for handling the Apple format is HLS.js.

Released in 2015, the technology is now in production across thousands of high profile websites including Twitter and the New York Times.

The attraction of HLS.js is its simplicity of implementation and its tiny footprint: at 71.1KB, it is roughly half the size of the other players mentioned in this article.

Relying on HTML5 video and Media Source Extensions to achieve playback, HLS.js delivers reliable HLS playback in the browser quickly and efficiently.

Your first step to implementing HLS.js in React is to download the latest library and include it in your React project or install it from npm with npm i hls.js.

From there we create a standard <video> tag in our render function.

We will also give the player a reference of this.player so that we can use it later to instantiate our player framework:

import React from "react";
import Hls from "hls.js";

export default class VideoPlayer extends React.Component {
  render() {
    return (
      <video
        className="videoCanvas"
        ref={player => (this.player = player)}
        autoPlay={true}
      />
    );
  }
}

Once the video tag is in place, rendering a video in HLS.js is as simple as adding a componentDidMount that attaches the media (componentDidMount, rather than componentDidUpdate, ensures the stream is attached as soon as the component first mounts):

import React from "react";
import Hls from "hls.js";

export default class VideoPlayer extends React.Component {
  componentDidMount() {
    const video = this.player;
    const url = "https://bitdash-a.akamaihd.net/content/sintel/hls/playlist.m3u8";

    // Only use HLS.js where Media Source Extensions are available
    if (Hls.isSupported()) {
      const hls = new Hls();
      hls.loadSource(url);
      hls.attachMedia(video);
      hls.on(Hls.Events.MANIFEST_PARSED, function() {
        video.play();
      });
    }
  }

  render() {
    return (
      <video
        className="videoCanvas"
        ref={player => (this.player = player)}
        autoPlay={true}
      />
    );
  }
}

Here you can see that our componentDidMount loads the source stream, attaches it to the video element, and listens for the manifest-parsed event so the player knows when it can start playback.

The full documentation for HLS.js, as well as the many events that can be triggered during playback, can be found here.
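One of those events worth handling from day one is Hls.Events.ERROR. The usual recovery policy is to retry fatal network errors with hls.startLoad(), attempt hls.recoverMediaError() for fatal media errors, and tear the player down for anything else. Below is a sketch of that decision written as a plain function (the string values mirror Hls.ErrorTypes in current HLS.js releases) so the policy itself is easy to test; the caller wires it into hls.on(Hls.Events.ERROR, ...):

```javascript
// Sketch of the common HLS.js fatal-error recovery policy. The returned
// action maps onto hls.startLoad(), hls.recoverMediaError(), or hls.destroy().
function recoveryAction(errorData) {
  if (!errorData.fatal) return "ignore"; // non-fatal: HLS.js handles it internally
  switch (errorData.type) {
    case "networkError":                 // Hls.ErrorTypes.NETWORK_ERROR
      return "startLoad";                // retry loading
    case "mediaError":                   // Hls.ErrorTypes.MEDIA_ERROR
      return "recoverMediaError";        // try to recover decode/buffer errors
    default:
      return "destroy";                  // unrecoverable: tear the player down
  }
}

console.log(recoveryAction({ fatal: true, type: "networkError" })); // → "startLoad"
```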

DASH.js

DASH.js was created by the DASH Industry Forum to give developers a video player framework for the increasingly popular MPEG-DASH format that is both browser agnostic and robust enough for production environments.

The first step to implementing the DASH.js video player framework is to import the library into your project, the latest of which can be found here.

As with our other players, you need to create your <video> tag and place it in your render function. This is what DASH.js will target to render your video.

Once again, we will give the video tag a reference of this.player so we can later use it to initialize DASH.js:

import React from "react";
import * as dashjs from "dashjs"; // note: the npm package name is "dashjs"

export default class VideoPlayer extends React.Component {
  render() {
    return (
      <video
        ref={player => (this.player = player)}
        autoPlay={true}
      />
    );
  }
}

The final step to achieve playback is to instantiate your player on componentDidMount (so it runs as soon as the component is first rendered) and provide it with your target URL:

import React from "react";
import * as dashjs from "dashjs";

export default class VideoPlayer extends React.Component {
  componentDidMount() {
    const url = "https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd";
    const video = this.player;

    // Use a distinct variable name so the dashjs import isn't shadowed
    const player = dashjs.MediaPlayer().create();
    player.initialize(video, url, true);
  }

  render() {
    return (
      <video
        ref={player => (this.player = player)}
        autoPlay={true}
      />
    );
  }
}

At this point, you should be seeing some video playback! There are many settings, events and customization options available to DASH.js which can be found in their documentation.
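As one example of that configurability, dash.js exposes an updateSettings() call that takes a nested settings object. The fragment below is a sketch (field names as of dash.js v3; verify against your version's documentation) that disables automatic bitrate switching so an app can drive quality selection manually:

```javascript
// Hypothetical dash.js settings fragment (field names as of dash.js v3 —
// check your version's docs). Passed to player.updateSettings(...) so the
// app, rather than the built-in ABR logic, controls quality selection.
const manualQualitySettings = {
  streaming: {
    abr: {
      autoSwitchBitrate: { video: false, audio: false }
    }
  }
};
```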

Conclusion

In this article, we took a look at the basics of video playback and what happens behind the scenes once the consumer hits the play button. We then went through three of the most popular ABS formats and the applications of each.

Finally, we reviewed some of the popular JavaScript video player frameworks before walking through their various implementations in React.

There is no better time to start delving into the wonderful world of video playback, with video set to account for the overwhelming majority of consumer internet traffic.

I hope this article gets you thinking the next time you see that buffering icon on your favorite streaming service, and that you are inspired to start creating your own video players with one of the frameworks we have covered.

For further reading, I recommend Bitmovin's article on Adaptive Bitrate Streaming, which delves further into how ABS adapts to the network conditions of the client.
