Web Real-Time Communication (WebRTC) is an open standard that enables real-time communication between web apps and sites without plugins or additional software installations. It’s also available as a native library for iOS and Android that provides the same functionality as the browser APIs.
WebRTC works on any operating system and is available on all modern browsers, including Google Chrome, Mozilla Firefox, and Safari. A few major projects that use WebRTC include Google Meet and Hangouts, WhatsApp, Amazon Chime, Facebook Messenger, Snapchat, and Discord.
In this article, we’ll walk through one of WebRTC’s major use cases: peer-to-peer (P2P) audio and video streaming from one system to another. This functionality is similar to that of live-streaming services like Twitch, but on a smaller and simpler scale.
In this section, I’ll review five essential concepts you should know to understand how a web application using WebRTC works. These concepts include peer-to-peer communication, signal servers, and the ICE protocol.
In this guide, we’ll be working with WebRTC’s RTCPeerConnection object, which is primarily responsible for connecting two applications and allowing them to communicate using a peer-to-peer protocol.
In decentralized networks, peer-to-peer communication is a direct link between computer systems (peers) in the network without an intermediary, such as a server. While WebRTC can’t establish a fully direct connection between peers in every scenario, the ICE protocol and signal server it uses allow similar behavior. You’ll find out more about them below.
For a pair of peers in a WebRTC application to begin communicating, they must perform a “handshake,” which is done through offers and answers. One peer generates an offer and shares it with the other peer; that peer then generates an answer and shares it back with the first.
For the handshake to be successful, each peer must have a way to share its offer or answer. This is where signal servers come in.
A signal server’s primary goal is to initiate communication between peers. One peer uses the signal server to share its offer or answer with the other peer, and the other peer uses it to send its own offer or answer back.
In some scenarios, such as when the devices involved aren’t on the same local network, WebRTC applications can struggle to establish peer connections, because direct socket connections between peers aren’t always possible.
To make a peer connection work across different networks, you need the Interactive Connectivity Establishment (ICE) protocol. ICE is used to establish connections between peers over the internet, and ICE servers use it to negotiate those connections and relay information between the peers.
The ICE protocol relies on Session Traversal Utilities for NAT (STUN), Traversal Using Relays around NAT (TURN), or a mix of both.
In this tutorial, we won’t cover the practical side of the ICE protocol because of the complexity involved in building a server, getting it to work, and testing it. However, it’s helpful to know the limitations of WebRTC applications and where the ICE protocol comes in to address them.
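That said, for context, here’s a minimal sketch of what wiring ICE servers into an RTCPeerConnection looks like. The URLs and credentials below are placeholders for illustration, not working servers:

// A hypothetical ICE configuration; the URLs and credentials are placeholders, not real servers.
const config = {
  iceServers: [
    // STUN helps a peer discover its public address
    { urls: "stun:stun.example.org:3478" },
    // TURN relays media when a direct path between peers can't be found
    {
      urls: "turn:turn.example.org:3478",
      username: "demo-user",
      credential: "demo-password",
    },
  ],
};

const peerConnection = new RTCPeerConnection(config);

In the steps below, we’ll pass an empty config ({}) instead, because all of our peers run on the same device.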
Now that we’ve gone through all that, it’s time to start the grunt work. In the next section, we’ll work on the video streaming project. You can see a live demo of the project here as we get started.
Before we get into it, I have a GitHub repo that you can clone to follow along with the article. The repo has a start-tutorial folder organized into the steps you’ll take in the next section, along with a copy of the code at the end of each step. While using the repo isn’t necessary, it is helpful.
The folder we’ll be working in is called start-tutorial. It contains three folders: step-1, step-2, and step-3. These correspond to the steps in the next section.
Now, let’s begin building the project. I divided this process into three steps. We’ll create a project we can run, test, and use with each step.
These steps include video streaming within a webpage, streaming between browser tabs and windows with BroadcastChannel, and finally, using a signal server to stream across different browsers on the same device.
Step 1: Video streaming within a webpage
In this step, we’ll only need an index.html file. If you’re working in the repo, you can use the start-tutorial/step-1/index.html file.

Now, let’s paste this code into it:
<body>
  <video id="local" autoplay muted></video>
  <video id="remote" autoplay></video>

  <button onclick="start(this)">start video</button>
  <button id="stream" onclick="stream(this)" disabled>stream video</button>

  <script>
    // get video elements
    const local = document.querySelector("video#local");
    const remote = document.querySelector("video#remote");

    function start(e) {
      e.disabled = true;
      navigator.mediaDevices.getUserMedia({ audio: true, video: true })
        .then((stream) => {
          local.srcObject = stream;
          document.getElementById("stream").disabled = false; // enable the stream button
        })
        .catch(() => e.disabled = false);
    }

    function stream(e) {
      // disable the stream button
      e.disabled = true;

      const config = {};
      const localPeerConnection = new RTCPeerConnection(config);  // local peer
      const remotePeerConnection = new RTCPeerConnection(config); // remote peer

      // if an icecandidate event is triggered in a peer add the ice candidate to the other peer
      localPeerConnection.addEventListener("icecandidate", e => remotePeerConnection.addIceCandidate(e.candidate));
      remotePeerConnection.addEventListener("icecandidate", e => localPeerConnection.addIceCandidate(e.candidate));

      // if the remote peer detects a track in the connection, it forwards it to the remote video element
      remotePeerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);

      // get camera and microphone source tracks and add them to the local peer
      local.srcObject.getTracks()
        .forEach(track => localPeerConnection.addTrack(track, local.srcObject));

      // Start the handshake process
      localPeerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
        .then(async offer => {
          await localPeerConnection.setLocalDescription(offer);
          await remotePeerConnection.setRemoteDescription(offer);
          console.log("Created offer");
        })
        .then(() => remotePeerConnection.createAnswer())
        .then(async answer => {
          await remotePeerConnection.setLocalDescription(answer);
          await localPeerConnection.setRemoteDescription(answer);
          console.log("Created answer");
        });
    }
  </script>
</body>
It will get you something that looks like this:
Now, let’s take a look at what’s going on.
To build the project, we need two video elements. We’ll use one to capture the user’s camera and microphone. After that, we’ll feed the audio and video stream from this element to the other video element using WebRTC’s RTCPeerConnection object:
<video id="local" autoplay muted></video>
<video id="remote" autoplay></video>
An RTCPeerConnection object is the main object that establishes direct, peer-to-peer connections between web browsers or devices.
Then we need two buttons. One is to activate the user’s webcam and microphone, and the other is to stream the content of the first video element to the second:
<button onclick="start(this)">start video</button>
<button id="stream" onclick="stream(this)" disabled>stream video</button>
The start video button runs the start function when clicked, and the stream video button runs the stream function. Let’s first take a look at the start function:
function start(e) {
  e.disabled = true;
  navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then((stream) => {
      local.srcObject = stream;
      document.getElementById("stream").disabled = false; // enable the stream button
    })
    .catch(() => e.disabled = false);
}
When the start function runs, it first makes the start button unclickable. Then, it requests the user’s permission to use their webcam and microphone with the navigator.mediaDevices.getUserMedia method.

If the user grants permission, the start function sends the video and audio stream to the first video element through its srcObject field and enables the stream button. If there are issues getting permission, or the user rejects the request, the function makes the start button clickable again.
Now, let’s look at the stream function:
function stream(e) {
  // disable the stream button
  e.disabled = true;

  const config = {};
  const localPeerConnection = new RTCPeerConnection(config);  // local peer
  const remotePeerConnection = new RTCPeerConnection(config); // remote peer

  // if an icecandidate event is triggered in a peer add the ice candidate to the other peer
  localPeerConnection.addEventListener("icecandidate", e => remotePeerConnection.addIceCandidate(e.candidate));
  remotePeerConnection.addEventListener("icecandidate", e => localPeerConnection.addIceCandidate(e.candidate));

  // if the remote peer receives tracks from the connection, it feeds them to the remote video element
  remotePeerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);

  // get camera and microphone tracks then feed them to local peer
  local.srcObject.getTracks()
    .forEach(track => localPeerConnection.addTrack(track, local.srcObject));

  // Start the handshake process
  localPeerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
    .then(async offer => {
      await localPeerConnection.setLocalDescription(offer);
      await remotePeerConnection.setRemoteDescription(offer);
      console.log("Created offer");
    })
    .then(() => remotePeerConnection.createAnswer())
    .then(async answer => {
      await remotePeerConnection.setLocalDescription(answer);
      await localPeerConnection.setRemoteDescription(answer);
      console.log("Created answer");
    });
}
I’ve added comments to outline what’s happening in the stream function. However, the handshake process (lines 21–32) and the ICE candidate events (lines 10 and 11) are the essential parts, and we’ll discuss them in more detail.
In the handshake process, each peer sets its local and remote descriptions using the offer and the answer, depending on which one it created: the local peer sets the offer as its local description and the answer as its remote description, while the remote peer sets the offer as its remote description and its own answer as its local description.
After completing this process, the peers immediately start communicating with each other.
An ICE candidate is a peer’s address (IP, port, and other related information). RTCPeerConnection objects use ICE candidates to find and communicate with each other. The icecandidate event on an RTCPeerConnection object is triggered when the object generates an ICE candidate.
Our goal with the event listeners we set up is to pass ICE candidates from one peer to another.
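If you’re curious what a candidate actually contains, you can log it before handing it to the other peer. This snippet is purely for inspection; the exact values depend entirely on your machine and network:

// purely for inspection: log each candidate the local peer generates before passing it on
localPeerConnection.addEventListener("icecandidate", (e) => {
  if (e.candidate) {
    // e.candidate.candidate is an SDP-style string describing one possible route to this peer,
    // roughly of the form "candidate:<id> 1 udp <priority> <ip> <port> typ host ..."
    console.log(e.candidate.candidate, e.candidate.sdpMid, e.candidate.sdpMLineIndex);
  }
});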
Step 2: Streaming between browser tabs and windows with BroadcastChannel
One of the challenging things about setting up a peer-to-peer application with WebRTC is getting it to work across different application instances or webpages. In this section, we’ll use the Broadcast Channel API to allow our project to work outside a single webpage, but still within the browser context.
We’ll start by creating two files, streamer.html and index.html. In the repo, these files are in the start-tutorial/step-2 folder. The streamer.html page allows users to create a live stream from their camera, while the index.html page lets users watch those live streams.
Now, let’s paste these code blocks into the files. Then later, we’ll look deeper into them.
First, in the streamer.html file, paste the following code:
<body>
  <video id="local" autoplay muted></video>

  <button onclick="start(this)">start video</button>
  <button id="stream" onclick="stream(this)" disabled>stream video</button>

  <script>
    // get video elements
    const local = document.querySelector("video#local");

    let peerConnection;

    const channel = new BroadcastChannel("stream-video");
    channel.onmessage = e => {
      if (e.data.type === "icecandidate") {
        peerConnection?.addIceCandidate(e.data.candidate);
      } else if (e.data.type === "answer") {
        console.log("Received answer")
        peerConnection?.setRemoteDescription(e.data);
      }
    }

    // function to ask for camera and microphone permission
    // and stream to #local video element
    function start(e) {
      e.disabled = true;
      document.getElementById("stream").disabled = false; // enable the stream button
      navigator.mediaDevices.getUserMedia({ audio: true, video: true })
        .then((stream) => local.srcObject = stream);
    }

    function stream(e) {
      e.disabled = true;

      const config = {};
      peerConnection = new RTCPeerConnection(config); // local peer connection

      // add ice candidate event listener
      peerConnection.addEventListener("icecandidate", e => {
        let candidate = null;

        // prepare a candidate object that can be passed through browser channel
        if (e.candidate !== null) {
          candidate = {
            candidate: e.candidate.candidate,
            sdpMid: e.candidate.sdpMid,
            sdpMLineIndex: e.candidate.sdpMLineIndex,
          };
        }

        channel.postMessage({ type: "icecandidate", candidate });
      });

      // add media tracks to the peer connection
      local.srcObject.getTracks()
        .forEach(track => peerConnection.addTrack(track, local.srcObject));

      // Create offer and send through the browser channel
      peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
        .then(async offer => {
          await peerConnection.setLocalDescription(offer);
          console.log("Created offer, sending...");
          channel.postMessage({ type: "offer", sdp: offer.sdp });
        });
    }
  </script>
</body>
Then, in the index.html file, paste the following code:
<body>
  <video id="remote" controls></video>

  <script>
    // get video elements
    const remote = document.querySelector("video#remote");

    let peerConnection;

    const channel = new BroadcastChannel("stream-video");
    channel.onmessage = e => {
      if (e.data.type === "icecandidate") {
        peerConnection?.addIceCandidate(e.data.candidate)
      } else if (e.data.type === "offer") {
        console.log("Received offer")
        handleOffer(e.data)
      }
    }

    function handleOffer(offer) {
      const config = {};
      peerConnection = new RTCPeerConnection(config);

      peerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);
      peerConnection.addEventListener("icecandidate", e => {
        let candidate = null;

        if (e.candidate !== null) {
          candidate = {
            candidate: e.candidate.candidate,
            sdpMid: e.candidate.sdpMid,
            sdpMLineIndex: e.candidate.sdpMLineIndex,
          }
        }

        channel.postMessage({ type: "icecandidate", candidate })
      });

      peerConnection.setRemoteDescription(offer)
        .then(() => peerConnection.createAnswer())
        .then(async answer => {
          await peerConnection.setLocalDescription(answer);
          console.log("Created answer, sending...")
          channel.postMessage({
            type: "answer",
            sdp: answer.sdp,
          });
        });
    }
  </script>
</body>
In your browser, the pages would look and function like the animation below:
The streamer.html file
Now, let’s explore these two pages in more detail. We’ll start with the streamer.html page, which only needs a video element and two button elements:
<video id="local" autoplay muted></video>

<button onclick="start(this)">start video</button>
<button id="stream" onclick="stream(this)" disabled>stream video</button>
The start video button works like it did in the last step: it requests the user’s permission to use their camera and microphone and feeds the stream to the video element. Then, the stream video button initializes a peer connection and feeds the video stream to the peer connection.
Since this step involves two webpages, we’re working with the Broadcast Channel API. In our index.html and streamer.html files, we have to initialize a BroadcastChannel object on each page with the same name to allow them to communicate.
A BroadcastChannel object allows you to communicate essential information between browsing contexts (windows or tabs, for example) with the same URL origin.
When you initialize a BroadcastChannel object, you have to give it a name. You can think of this name as a chat room name. If you initialize two BroadcastChannel objects with the same name, they can talk to each other like they’re in a chat room. But if they have different names, they can’t communicate because they’re not in the same chat room.
I say “chat room” because you can have more than one BroadcastChannel object with the same name, and they can all communicate with each other simultaneously.
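Here’s a quick, standalone sketch of that behavior, separate from our project code (the channel name chat-room is just an example):

// create two BroadcastChannel objects with the same name;
// they can live in different tabs, windows, or even the same page (same origin)
const channelA = new BroadcastChannel("chat-room");
channelA.onmessage = (e) => console.log("A received:", e.data);

const channelB = new BroadcastChannel("chat-room");
channelB.onmessage = (e) => console.log("B received:", e.data);

// a message posted on one channel is delivered to every other channel with the same name,
// but not back to the sender itself
channelB.postMessage("hello from B"); // logged by channelA only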
Since we’re working with two pages, each with its own peer connection, we have to use the BroadcastChannel object to pass the offers and answers back and forth between them. We also have to pass each peer connection’s ICE candidates to the other. So, let’s take a look at how that’s done.
It all starts with the stream function:
// streamer.html -> script element
function stream(e) {
  e.disabled = true;

  const config = {};
  peerConnection = new RTCPeerConnection(config); // local peer connection

  // add ice candidate event listener
  peerConnection.addEventListener("icecandidate", e => {
    let candidate = null;

    // prepare a candidate object that can be passed through browser channel
    if (e.candidate !== null) {
      candidate = {
        candidate: e.candidate.candidate,
        sdpMid: e.candidate.sdpMid,
        sdpMLineIndex: e.candidate.sdpMLineIndex,
      };
    }

    channel.postMessage({ type: "icecandidate", candidate });
  });

  // add media tracks to the peer connection
  local.srcObject.getTracks()
    .forEach(track => peerConnection.addTrack(track, local.srcObject));

  // Create offer and send through the browser channel
  peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
    .then(async offer => {
      await peerConnection.setLocalDescription(offer);
      console.log("Created offer, sending...");
      channel.postMessage({ type: "offer", sdp: offer.sdp });
    });
}
There are two areas in the function that interact with the BroadcastChannel object. The first is the ICE candidate event listener:
peerConnection.addEventListener("icecandidate", e => {
  let candidate = null;

  // prepare a candidate object that can be passed through browser channel
  if (e.candidate !== null) {
    candidate = {
      candidate: e.candidate.candidate,
      sdpMid: e.candidate.sdpMid,
      sdpMLineIndex: e.candidate.sdpMLineIndex,
    };
  }

  channel.postMessage({ type: "icecandidate", candidate });
});
The other is after generating an offer:
peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
  .then(async offer => {
    await peerConnection.setLocalDescription(offer);
    console.log("Created offer, sending...");
    channel.postMessage({ type: "offer", sdp: offer.sdp });
  });
Let’s look at the ICE candidate event listener first. If you pass the e.candidate object directly to the BroadcastChannel object, you’ll get a DataCloneError: object can not be cloned error message in the console.
This error happens because the BroadcastChannel object cannot process e.candidate directly. You need to create a plain object containing the required details from e.candidate and send that to the BroadcastChannel object instead. We had to do the same thing when sending the offer.
You need to call the channel.postMessage method to send a message through the BroadcastChannel object. When this method is called, the BroadcastChannel object on the other webpage triggers its onmessage event listener. Have a look at this code from the index.html page:
channel.onmessage = e => {
  if (e.data.type === "icecandidate") {
    peerConnection?.addIceCandidate(e.data.candidate)
  } else if (e.data.type === "offer") {
    console.log("Received offer")
    handleOffer(e.data)
  }
}
As you can see, we have conditional statements checking the type of message coming into the BroadcastChannel object. The message’s contents can be read through e.data, and e.data.type corresponds to the type field of the objects we sent through channel.postMessage:
// from the ICE candidate event listener
channel.postMessage({ type: "icecandidate", candidate });

// from generating an offer
channel.postMessage({ type: "offer", sdp: offer.sdp });
Now, let’s have a look at how the index.html file handles received offers.
The index.html file
The index.html file starts with the handleOffer function:
function handleOffer(offer) {
  const config = {};
  peerConnection = new RTCPeerConnection(config);

  peerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);
  peerConnection.addEventListener("icecandidate", e => {
    let candidate = null;

    if (e.candidate !== null) {
      candidate = {
        candidate: e.candidate.candidate,
        sdpMid: e.candidate.sdpMid,
        sdpMLineIndex: e.candidate.sdpMLineIndex,
      }
    }

    channel.postMessage({ type: "icecandidate", candidate })
  });

  peerConnection.setRemoteDescription(offer)
    .then(() => peerConnection.createAnswer())
    .then(async answer => {
      await peerConnection.setLocalDescription(answer);
      console.log("Created answer, sending...")
      channel.postMessage({
        type: "answer",
        sdp: answer.sdp,
      });
    });
}
When triggered, this function creates a peer connection and sends any ICE candidates it generates to the other peer. It then continues the handshake process: setting the streamer’s offer as its remote description, generating an answer, setting that answer as its local description, and sending the answer to the streamer through the BroadcastChannel object.
Like the BroadcastChannel object in the index.html file, the BroadcastChannel object in the streamer.html file needs an onmessage event listener to receive the ICE candidates and the answer from the index.html file:
channel.onmessage = e => {
  if (e.data.type === "icecandidate") {
    peerConnection?.addIceCandidate(e.data.candidate);
  } else if (e.data.type === "answer") {
    console.log("Received answer")
    peerConnection?.setRemoteDescription(e.data);
  }
}
If you’re wondering about the question mark (?.) after peerConnection, it’s optional chaining: it tells the JavaScript runtime to skip the call instead of throwing an error if peerConnection is null or undefined. It’s roughly shorthand for this:
if (peerConnection) {
  peerConnection.setRemoteDescription(e.data);
}
Step 3: Replacing BroadcastChannel with our signal server
BroadcastChannel only works between browsing contexts within the same browser. In this step, we’ll overcome that limit by using a simple signal server, which we’ll build with Node.js. Like in the previous steps, I’ll first give you the code to paste and then explain what’s going on in it.
So, let’s begin. This step requires four files: index.html, streamer.html, signalserverclass.js, and server/index.js.
We’ll start with the signalserverclass.js file:
class SignalServer {
  constructor(channel) {
    this.socket = new WebSocket("ws://localhost:80");

    this.socket.addEventListener("open", () => {
      this.postMessage({ type: "join-channel", channel });
    });

    this.socket.addEventListener("message", (e) => {
      const object = JSON.parse(e.data);

      if (object.type === "connection-established") console.log("connection established");
      else if (object.type === "joined-channel") console.log("Joined channel: " + object.channel);
      else this.onmessage({ data: object });
    });
  }

  onmessage(e) {}

  postMessage(data) {
    this.socket.send( JSON.stringify(data) );
  }
}
Next, let’s update the index.html and streamer.html files. The only changes to these files are the line where we initialized the BroadcastChannel object and a new script tag that imports the signalserverclass.js file.
Here’s the updated index.html file:
<body>
  <video id="remote" controls></video>

  <script src="signalserverclass.js"></script> <!-- new change -->
  <script>
    const remote = document.querySelector("video#remote");

    let peerConnection;

    const channel = new SignalServer("stream-video"); // <- new change
    channel.onmessage = e => {
      if (e.data.type === "icecandidate") {
        peerConnection?.addIceCandidate(e.data.candidate);
      } else if (e.data.type === "offer") {
        console.log("Received offer");
        handleOffer(e.data);
      }
    }

    function handleOffer(offer) {
      const config = {};
      peerConnection = new RTCPeerConnection(config);

      peerConnection.addEventListener("track", e => remote.srcObject = e.streams[0]);
      peerConnection.addEventListener("icecandidate", e => {
        let candidate = null;

        if (e.candidate !== null) {
          candidate = {
            candidate: e.candidate.candidate,
            sdpMid: e.candidate.sdpMid,
            sdpMLineIndex: e.candidate.sdpMLineIndex,
          };
        }

        channel.postMessage({ type: "icecandidate", candidate });
      });

      peerConnection.setRemoteDescription(offer)
        .then(() => peerConnection.createAnswer())
        .then(async answer => {
          await peerConnection.setLocalDescription(answer);
          console.log("Created answer, sending...");
          channel.postMessage({
            type: "answer",
            sdp: answer.sdp,
          });
        });
    }
  </script>
</body>
Here’s the updated streamer.html file:
<body>
  <video id="local" autoplay muted></video>

  <button onclick="start(this)">start video</button>
  <button id="stream" onclick="stream(this)" disabled>stream video</button>

  <script src="signalserverclass.js"></script> <!-- new change -->
  <script>
    const local = document.querySelector("video#local");

    let peerConnection;

    const channel = new SignalServer("stream-video"); // <- new change
    channel.onmessage = e => {
      if (e.data.type === "icecandidate") {
        peerConnection?.addIceCandidate(e.data.candidate);
      } else if (e.data.type === "answer") {
        console.log("Received answer");
        peerConnection?.setRemoteDescription(e.data);
      }
    }

    // function to ask for camera and microphone permission
    // and stream to #local video element
    function start(e) {
      e.disabled = true;
      document.getElementById("stream").disabled = false; // enable the stream button
      navigator.mediaDevices.getUserMedia({ audio: true, video: true })
        .then((stream) => local.srcObject = stream);
    }

    function stream(e) {
      e.disabled = true;

      const config = {};
      peerConnection = new RTCPeerConnection(config); // local peer connection

      peerConnection.addEventListener("icecandidate", e => {
        let candidate = null;

        if (e.candidate !== null) {
          candidate = {
            candidate: e.candidate.candidate,
            sdpMid: e.candidate.sdpMid,
            sdpMLineIndex: e.candidate.sdpMLineIndex,
          };
        }

        channel.postMessage({ type: "icecandidate", candidate });
      });

      local.srcObject.getTracks()
        .forEach(track => peerConnection.addTrack(track, local.srcObject));

      peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
        .then(async offer => {
          await peerConnection.setLocalDescription(offer);
          console.log("Created offer, sending...");
          channel.postMessage({ type: "offer", sdp: offer.sdp });
        });
    }
  </script>
</body>
Finally, here are the contents of the server/index.js file:
const { WebSocketServer } = require("ws");

const channels = {};

const server = new WebSocketServer({ port: 80 });
server.on("connection", handleConnection);

function handleConnection(ws) {
  console.log('New connection');
  ws.send( JSON.stringify({ type: 'connection-established' }) );

  let id;
  let channel = "";

  ws.on("error", () => console.log('websocket error'));

  ws.on('message', message => {
    const object = JSON.parse(message);

    if (object.type === "join-channel") {
      channel = object.channel;
      if (channels[channel] === undefined) channels[channel] = [];
      id = channels[channel].length || 0;
      channels[channel].push(ws);
      ws.send(JSON.stringify({type: 'joined-channel', channel}));
    } else {
      // forward the message to other channel members
      channels[channel]?.filter((_, i) => i !== id).forEach((member) => {
        member.send(message.toString());
      });
    }
  });

  ws.on('close', () => {
    console.log('Client has disconnected!');
    if (channel !== "") {
      channels[channel] = channels[channel].filter((_, i) => i !== id);
    }
  });
}
In the browser, they should look and run like this:
To get the server running, you need to open the server folder in the terminal, initialize the folder as a Node project, install the ws package, and then run the index.js file. These steps can be done with these commands:
# initialize the project directory
npm init --y

# install the `ws` package
npm install ws

# run the `index.js` file
node index.js
Now, let’s look into the files. To reduce the need to edit our code after swapping the BroadcastChannel constructor for the SignalServer constructor, I tried to make the SignalServer class imitate the way you use a BroadcastChannel object, at least for our use case:
class SignalServer {
  constructor(channel) {
    // what the constructor does
  }

  onmessage(e) {}

  postMessage(data) {
    // what postMessage does
  }
}
This class has a constructor that joins a channel when the object is initialized, a postMessage method for sending messages, and an onmessage method that gets called when a message is received from another SignalServer object.
Another aim of the SignalServer class is to abstract away our backend processes. Our signal server is a WebSocket server, because WebSockets give us event-based, bidirectional communication between the server and the client, making them the go-to choice for building a signal server.
The SignalServer class starts its operations in its constructor:
constructor(channel) {
  this.socket = new WebSocket("ws://localhost:80");

  this.socket.addEventListener("open", () => {
    this.postMessage({ type: "join-channel", channel });
  });

  this.socket.addEventListener("message", (e) => {
    const object = JSON.parse(e.data);

    if (object.type === "connection-established") console.log("connection established");
    else if (object.type === "joined-channel") console.log("Joined channel: " + object.channel);
    else this.onmessage({ data: object });
  });
}
It starts by initializing a connection to the backend. When the connection becomes active, it sends an object that we’re using as our join-channel request to the server:
this.socket.addEventListener("open", () => {
  this.postMessage({ type: "join-channel", channel });
});
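One thing to note: the constructor hardcodes ws://localhost:80, which is fine for this tutorial. If you ever want to point the pages at a signal server running elsewhere, a small, hypothetical tweak is to accept the URL as a constructor argument (the wss:// URL below is just an example):

class SignalServer {
  // hypothetical variation: let callers pass the server URL instead of hardcoding localhost
  constructor(channel, url = "ws://localhost:80") {
    this.socket = new WebSocket(url);
    // ... the rest of the constructor stays the same
  }
}

// usage with a deployed server (example URL):
// const channel = new SignalServer("stream-video", "wss://signal.example.com");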
Now, let’s take a look at our WebSocket server:
const { WebSocketServer } = require("ws");

const channels = {};

const server = new WebSocketServer({ port: 80 });
server.on("connection", handleConnection);

function handleConnection(ws) {
  // I cut out the details because it's not in focus right now
}
This is a pretty standard WebSocket server. We have our server initialization and an event listener for when a new client connects. The only new thing is the channels variable, which we’re using to store the channels that every SignalServer object joins.
If an object wants to join a channel that doesn’t exist yet, we want the server to create an empty array with that WebSocket connection as the first element, and then store the array in the channels object under the channel’s name.
You can see this in the message event listener below. The code looks a little complex, but the explanation above is a general overview of what it does:
// ... first part of the code

ws.on('message', message => {
  const object = JSON.parse(message);

  if (object.type === "join-channel") {
    channel = object.channel;
    if (channels[channel] === undefined) channels[channel] = [];
    id = channels[channel].length || 0;
    channels[channel].push(ws);
    ws.send(JSON.stringify({type: 'joined-channel', channel}));

// ... rest of the code
Afterwards, the event listener sends a joined-channel message to the SignalServer object, telling it that its request to join a channel was successful.
The rest of the event listener forwards any message that isn’t of type join-channel to the other SignalServer objects in the channel:
  // rest of the event listener
  } else {
    // forward the message to other channel members
    channels[channel]?.filter((_, i) => i !== id).forEach((member) => {
      member.send(message.toString());
    });
  }
});
In the handleConnection function, the id and channel variables store, respectively, the position of the SignalServer object’s WebSocket connection within its channel and the name of the channel that the connection belongs to:
let id;
let channel = "";
These variables are set when the SignalServer object joins a channel. They’re used to pass messages from one SignalServer object to the others in the channel, as you can see in the else block, and to remove the SignalServer object’s connection from the channel when it disconnects for whatever reason:
ws.on('close', () => {
  console.log('Client has disconnected!');
  if (channel !== "") {
    channels[channel] = channels[channel].filter((_, i) => i !== id);
  }
});
Finally, let’s head back to the SignalServer class in the signalserverclass.js file and look at the section that receives messages from the WebSocket server:
this.socket.addEventListener("message", (e) => {
  const object = JSON.parse(e.data);

  if (object.type === "connection-established") console.log("connection established");
  else if (object.type === "joined-channel") console.log("Joined channel: " + object.channel);
  else this.onmessage({ data: object });
});
If you look at the WebSocket server’s handleConnection function, there are two message types that the server sends directly to the SignalServer object: joined-channel and connection-established. These two message types are handled directly by this event listener.
Any other message type is forwarded to the onmessage event listener for our frontend app to handle.
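To make the flow concrete, here’s roughly what the JSON messages passing through the WebSocket look like, based on the objects our code sends. The sdp and candidate values are shortened placeholders here:

// sent by the server as soon as a client connects
{ "type": "connection-established" }

// sent by the SignalServer constructor once its socket opens
{ "type": "join-channel", "channel": "stream-video" }

// the server's reply confirming the join
{ "type": "joined-channel", "channel": "stream-video" }

// everything else is forwarded as-is to the other members of the channel
{ "type": "offer", "sdp": "v=0 ..." }
{ "type": "answer", "sdp": "v=0 ..." }
{ "type": "icecandidate", "candidate": { "candidate": "candidate:...", "sdpMid": "0", "sdpMLineIndex": 0 } }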
In this article, we went through how we can build a P2P video streaming application with WebRTC — one of its primary use cases.
We started by creating peer connections within a single page to get a simple look at how WebRTC applications work without having to worry about signaling. Then, we touched on signaling with the Broadcast Channel API. Finally, we built our own signal server.
If you want to learn more about WebRTC, including a deep dive into some of its other use cases, feel free to check out other LogRocket articles on the topic.
I hope this article has been helpful and easy to understand. If you’d like to refer to the source code for our WebRTC video streaming project, you can check it out in this GitHub repo. You can also view the live demo here.