As remote and hybrid work become more common, organizations need to adopt asynchronous forms of communication in their day-to-day operations: recording meetings or taking notes, relying more on written updates, and so on. Many applications offer these capabilities and make asynchronous communication easier to adopt.
In this article, you’ll learn how to add video and audio recording capabilities to your React applications using the MediaRecorder API.
First, we’ll scaffold a new React application using Vite, a super-fast JavaScript build tool:
npm create vite@latest
Answer the prompts that follow the command:

- Name the project react-recorder
- Select React as the framework
- Select JavaScript as the variant

Next, let’s navigate to the newly created project directory, install the required dependencies, and run the development server using the following command:
cd react-recorder && npm i && npm run dev
Once complete, a development server will start on http://localhost:5173/. Open the URL in a web browser; you should see the default Vite + React starter page.
To record audio or video in the browser, we’ll need a MediaStream. MediaStream is an interface that represents media content and consists of audio and video tracks. To obtain a MediaStream object, you can either use the MediaStream() constructor or call one of the following functions: MediaDevices.getUserMedia(), MediaDevices.getDisplayMedia(), or HTMLCanvasElement.captureStream().
For the sake of this tutorial, we’ll focus on the MediaDevices.getUserMedia() function to create a video and audio recorder.
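As a quick, illustrative sketch (the constraint values below are examples, not the exact code we’ll write later), a getUserMedia call prompts the user for permission and resolves with a stream:

// (inside an async function)
// Request an audio-only stream; the browser shows a permission prompt
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true, // request an audio track
  video: false, // skip video for this example
});

// A MediaStream is a collection of tracks we can inspect or record
console.log(stream.getAudioTracks()); // => [MediaStreamTrack]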
In this section, we’ll be creating the demo application’s interface.
First, create a file in the src directory named AudioRecorder.jsx and paste into it the contents of the following code block:
import { useState, useRef } from "react";

const AudioRecorder = () => {
  const [permission, setPermission] = useState(false);
  const [stream, setStream] = useState(null);

  const getMicrophonePermission = async () => {
    if ("MediaRecorder" in window) {
      try {
        const streamData = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: false,
        });
        setPermission(true);
        setStream(streamData);
      } catch (err) {
        alert(err.message);
      }
    } else {
      alert("The MediaRecorder API is not supported in your browser.");
    }
  };

  return (
    <div>
      <h2>Audio Recorder</h2>
      <main>
        <div className="audio-controls">
          {!permission ? (
            <button onClick={getMicrophonePermission} type="button">
              Get Microphone
            </button>
          ) : null}
          {permission ? (
            <button type="button">Record</button>
          ) : null}
        </div>
      </main>
    </div>
  );
};

export default AudioRecorder;
The code block above does the following:

- Declares a getMicrophonePermission function that requests access to the user’s microphone, and alerts the user if the MediaRecorder API isn’t supported by the browser
- Saves the MediaStream received from the navigator.mediaDevices.getUserMedia function to the stream state variable (we’ll get to using that soon)

Next, let’s create the interface for the video recorder component.
Still in the src directory, create another file named VideoRecorder.jsx and paste in the contents of the code block below:
import { useState, useRef } from "react";

const VideoRecorder = () => {
  const [permission, setPermission] = useState(false);
  const [stream, setStream] = useState(null);

  const getCameraPermission = async () => {
    if ("MediaRecorder" in window) {
      try {
        const streamData = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: true,
        });
        setPermission(true);
        setStream(streamData);
      } catch (err) {
        alert(err.message);
      }
    } else {
      alert("The MediaRecorder API is not supported in your browser.");
    }
  };

  return (
    <div>
      <h2>Video Recorder</h2>
      <main>
        <div className="video-controls">
          {!permission ? (
            <button onClick={getCameraPermission} type="button">
              Get Camera
            </button>
          ) : null}
          {permission ? (
            <button type="button">Record</button>
          ) : null}
        </div>
      </main>
    </div>
  );
};

export default VideoRecorder;
Similarly to the audio recorder component, the code block above achieves the following:

- Declares a getCameraPermission function that requests access to the user’s camera and microphone
- Saves the MediaStream received from the getUserMedia method to the stream state variable

We won’t need to write too much code to style the application since most of the styling was taken care of during the app scaffolding.
In the index.css file, located in the src directory, add the following styles at the bottom:
...

.button-flex {
  display: flex;
  justify-content: center;
  align-items: center;
  gap: 10px;
}

.audio-controls,
.video-controls {
  margin-bottom: 20px;
}

.audio-player,
.video-player,
.recorded-player {
  display: flex;
  flex-direction: column;
  align-items: center;
}

.live-player {
  height: 200px;
  width: 400px;
  border: 1px solid #646cff;
  margin-bottom: 30px;
}

.recorded-player video {
  height: 400px;
  width: 800px;
}
Then, change the value of place-items on the body element’s style rule from center to start:
...

body {
  margin: 0;
  display: flex;
  place-items: start;
  min-width: 320px;
  min-height: 100vh;
}

...
To display the newly created components, navigate to App.jsx and replace its contents with the following block of code:
import "./App.css"; import { useState, useRef } from "react"; import VideoRecorder from "../src/VideoRecorder"; import AudioRecorder from "../src/AudioRecorder"; const App = () => { let [recordOption, setRecordOption] = useState("video"); const toggleRecordOption = (type) => { return () => { setRecordOption(type); }; }; return ( <div> <h1>React Media Recorder</h1> <div className="button-flex"> <button onClick={toggleRecordOption("video")}> Record Video </button> <button onClick={toggleRecordOption("audio")}> Record Audio </button> </div> <div> {recordOption === "video" ? <VideoRecorder /> : <AudioRecorder />} </div> </div> ); }; export default App;
The code block above renders either the VideoRecorder or the AudioRecorder component, depending on the selected option.
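Note that toggleRecordOption is a function that returns the actual click handler; calling it during render produces a handler with the chosen type baked in, rather than calling setRecordOption immediately. An inline arrow function would work just as well:

// Equivalent inline form, without the curried helper:
<button onClick={() => setRecordOption("video")}>Record Video</button>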
Going back to the browser, you should get the following results:
With that done, let’s focus on enhancing the functionality of the components.
Our audio recorder needs to let the user request microphone access, start and stop a recording, and then play back and download the result.
Let’s start by declaring our variables and state values.
First, just outside the component’s function scope (since it’s a constant that never changes, it doesn’t need to live in component state or participate in re-renders), let’s declare the variable mimeType:

...
const mimeType = "audio/webm";
...
This variable sets the desired output file type. You can learn more about MIME types in the MDN documentation.
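Not every browser can record every container format (Safari, for example, has historically preferred MP4 over WebM), so if you want to be defensive, the MediaRecorder API provides a static isTypeSupported method for probing. A minimal sketch; the candidate list is our own, not from the original tutorial:

// Pick the first recording MIME type the current browser supports
const candidates = ["audio/webm", "audio/mp4", "audio/ogg"];
const supportedType = candidates.find((type) =>
  MediaRecorder.isTypeSupported(type)
);
console.log(supportedType); // e.g., "audio/webm" in Chrome and Firefox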
Next, let’s declare the following state variables inside the AudioRecorder component scope:

const [permission, setPermission] = useState(false);
const mediaRecorder = useRef(null);
const [recordingStatus, setRecordingStatus] = useState("inactive");
const [stream, setStream] = useState(null);
const [audioChunks, setAudioChunks] = useState([]);
const [audio, setAudio] = useState(null);
- permission uses a Boolean value to indicate whether user permission has been given
- mediaRecorder holds the data from creating a new MediaRecorder object, given a MediaStream to record
- recordingStatus sets the current recording status of the recorder. The three possible values are recording, inactive, and paused
- stream contains the MediaStream received from the getUserMedia method
- audioChunks contains encoded pieces (chunks) of the audio recording
- audio contains a blob URL to the finished audio recording

With that out of the way, let’s define the functions that will enable us to start and stop the recording.
Let’s begin with the startRecording function. Just after the getMicrophonePermission function, add the following code:
...

const startRecording = async () => {
  setRecordingStatus("recording");
  // Create a new MediaRecorder instance using the stream
  const media = new MediaRecorder(stream, { mimeType });
  // Set the MediaRecorder instance to the mediaRecorder ref
  mediaRecorder.current = media;
  // Invoke the start method to begin the recording process
  mediaRecorder.current.start();
  let localAudioChunks = [];
  mediaRecorder.current.ondataavailable = (event) => {
    if (typeof event.data === "undefined") return;
    if (event.data.size === 0) return;
    localAudioChunks.push(event.data);
  };
  setAudioChunks(localAudioChunks);
};

...
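One detail worth knowing: called with no arguments, start() buffers the whole recording and fires dataavailable only when recording stops. If you’d rather receive chunks periodically (for streaming or a progress UI, say), start() accepts an optional timeslice in milliseconds; the 1000 below is an arbitrary example:

// Fire dataavailable roughly every second instead of only on stop;
// each event delivers a Blob chunk in event.data
mediaRecorder.current.start(1000);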
Next, create a stopRecording function below the startRecording function:
const stopRecording = () => {
  setRecordingStatus("inactive");
  // Stops the recording instance
  mediaRecorder.current.stop();
  mediaRecorder.current.onstop = () => {
    // Creates a blob file from the audioChunks data
    const audioBlob = new Blob(audioChunks, { type: mimeType });
    // Creates a playable URL from the blob file
    const audioUrl = URL.createObjectURL(audioBlob);
    setAudio(audioUrl);
    setAudioChunks([]);
  };
};
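A small housekeeping caveat: URLs created with URL.createObjectURL keep their underlying blobs in memory until they’re revoked or the page unloads. The tutorial code skips this, but if a user records repeatedly, you could release the previous recording inside the onstop handler; a hedged sketch:

// Revoke the previous recording's blob URL (if any) before storing a new one;
// `audio` is the state variable that holds the old URL
if (audio) {
  URL.revokeObjectURL(audio);
}
setAudio(audioUrl);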
Next, let’s modify <div className="audio-controls"> to conditionally render the start/stop recording buttons depending on the recordingStatus state:
<div className="audio-controls"> {!permission ? ( <button onClick={getMicrophonePermission} type="button"> Get Microphone </button> ) : null} {permission && recordingStatus === "inactive" ? ( <button onClick={startRecording} type="button"> Start Recording </button> ) : null} {recordingStatus === "recording" ? ( <button onClick={stopRecording} type="button"> Stop Recording </button> ) : null} </div>
To play back the recorded audio file, we’ll use the HTML audio tag. Under the div we created for audio-controls, let’s add the following code:
...

{audio ? (
  <div className="audio-player">
    <audio src={audio} controls></audio>
    <a download href={audio}>
      Download Recording
    </a>
  </div>
) : null}

...
Linking the blob from the recording to the anchor element and adding the download attribute makes it downloadable.
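If you also want to suggest a filename, the download attribute accepts a value; the name below is our own choice, not from the original code:

<a download="recording.webm" href={audio}>
  Download Recording
</a>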
Now, the audio recorder should look like this:
Our complete video recorder needs to meet the same requirements as the audio recorder, with one addition: a live preview of the camera.
We need to see the camera’s field of view when it’s active to know what area is captured in the recording.
First, let’s set the desired file mimeType just outside the function scope of the VideoRecorder component:
... const mimeType = "video/webm"; ...
Next, let’s define the required state variables. We’ll go back to the VideoRecorder.jsx file we previously created:
const [permission, setPermission] = useState(false);
const mediaRecorder = useRef(null);
const liveVideoFeed = useRef(null);
const [recordingStatus, setRecordingStatus] = useState("inactive");
const [stream, setStream] = useState(null);
const [videoChunks, setVideoChunks] = useState([]);
const [recordedVideo, setRecordedVideo] = useState(null);
- permission uses a Boolean value to indicate whether user permission has been given
- liveVideoFeed contains the live video stream of the user’s camera
- recordingStatus sets the current recording status of the recorder. The three possible values are recording, inactive, and paused
- stream contains the MediaStream received from the getUserMedia method
- videoChunks contains encoded pieces (chunks) of the video recording
- recordedVideo contains a blob URL to the finished video recording

Let’s also modify the getCameraPermission function to the following:
...

const getCameraPermission = async () => {
  setRecordedVideo(null);
  if ("MediaRecorder" in window) {
    try {
      const videoConstraints = {
        audio: false,
        video: true,
      };
      const audioConstraints = { audio: true };
      // Create audio and video streams separately
      const audioStream = await navigator.mediaDevices.getUserMedia(
        audioConstraints
      );
      const videoStream = await navigator.mediaDevices.getUserMedia(
        videoConstraints
      );
      setPermission(true);
      // Combine both audio and video streams
      const combinedStream = new MediaStream([
        ...videoStream.getVideoTracks(),
        ...audioStream.getAudioTracks(),
      ]);
      setStream(combinedStream);
      // Set the video stream as the live feed player's source
      liveVideoFeed.current.srcObject = videoStream;
    } catch (err) {
      alert(err.message);
    }
  } else {
    alert("The MediaRecorder API is not supported in your browser.");
  }
};

...
To prevent the microphone from causing an echo during recording, we’ll create two separate media streams for audio and video, respectively, and then combine both streams into one. Finally, we set the liveVideoFeed to contain just the video stream.
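For comparison, an approach you’ll sometimes see elsewhere (not the one this tutorial takes) is to request a single combined stream and simply mute the preview element, so the captured microphone audio isn’t played back through the speakers:

// Alternative: one stream with both tracks; echo avoided by muting the preview
const combined = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});
liveVideoFeed.current.srcObject = combined;
liveVideoFeed.current.muted = true; // prevents the mic from echoing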
Similar to the audio recorder we created earlier, we’ll start by creating the startRecording function just below the getCameraPermission function:
...

const startRecording = async () => {
  setRecordingStatus("recording");
  const media = new MediaRecorder(stream, { mimeType });
  mediaRecorder.current = media;
  mediaRecorder.current.start();
  let localVideoChunks = [];
  mediaRecorder.current.ondataavailable = (event) => {
    if (typeof event.data === "undefined") return;
    if (event.data.size === 0) return;
    localVideoChunks.push(event.data);
  };
  setVideoChunks(localVideoChunks);
};

...
Next, we’ll create the function stopRecording just below the startRecording function to stop the video recording:
...

const stopRecording = () => {
  setPermission(false);
  setRecordingStatus("inactive");
  mediaRecorder.current.stop();
  mediaRecorder.current.onstop = () => {
    const videoBlob = new Blob(videoChunks, { type: mimeType });
    const videoUrl = URL.createObjectURL(videoBlob);
    setRecordedVideo(videoUrl);
    setVideoChunks([]);
  };
};

...
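Note that stopping the MediaRecorder doesn’t release the camera or microphone; the browser’s recording indicator stays on until the stream’s tracks are stopped. The original code leaves them running so the user can record again, but if you want a full teardown, something like this sketch could be added to stopRecording:

// Fully release the camera and microphone by stopping every track,
// then clear the stored stream so the next session requests a fresh one
stream.getTracks().forEach((track) => track.stop());
setStream(null);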
To enable playback and video download, and to see all the changes we’ve made so far, let’s update the HTML section of our component file:
...

<div>
  <h2>Video Recorder</h2>
  <main>
    <div className="video-controls">
      {!permission ? (
        <button onClick={getCameraPermission} type="button">
          Get Camera
        </button>
      ) : null}
      {permission && recordingStatus === "inactive" ? (
        <button onClick={startRecording} type="button">
          Start Recording
        </button>
      ) : null}
      {recordingStatus === "recording" ? (
        <button onClick={stopRecording} type="button">
          Stop Recording
        </button>
      ) : null}
    </div>
    <div className="video-player">
      {!recordedVideo ? (
        <video ref={liveVideoFeed} autoPlay className="live-player"></video>
      ) : null}
      {recordedVideo ? (
        <div className="recorded-player">
          <video src={recordedVideo} controls></video>
          <a download href={recordedVideo}>
            Download Recording
          </a>
        </div>
      ) : null}
    </div>
  </main>
</div>

...
Now, the video recorder should look like this:
Rather than write all this code to enable audio and video recording in your application, you might want to consider using an external library that is well-optimized for what you’re trying to achieve.
A popular example is RecordRTC, a flexible JavaScript library that offers a wide range of customization options. Other examples include react-media-recorder and react-video-recorder.
N.B., Remember to do your research before using any of these packages.
In this tutorial, we learned how to build a custom audio and video recorder in React using the native HTML MediaRecorder API and MediaStream API.
All of the source code for this project can be found in this GitHub repository. Feel free to fork the repository and play around with the code. I’d love to see what you can make of it 🙂
Cheers!