In this tutorial, we’ll show you how to implement voice assistance in your React app using React Speech Recognition.
React Speech Recognition is a React Hook that works with the Web Speech API to translate speech from your device’s mic into text. This text can then be read by your React app and used to perform tasks.
React Speech Recognition provides a command option to perform a certain task based on a specific speech phrase. For example, when a user asks for weather information, you can perform a weather API call. This is just a basic example, but when it comes to voice assistance and control, the possibilities are endless.
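As a rough preview of what that might look like, here is a minimal sketch of such a command. Note that fetchWeather is a hypothetical helper you would implement against the weather API of your choice; we'll cover commands in detail later in this tutorial:

const commands = [
  {
    // Whatever the user says after "what is the weather in"
    // is captured by * and passed to the callback
    command: "what is the weather in *",
    callback: (city) => fetchWeather(city),
  },
];

// Passing the commands to the Hook
const { transcript } = useSpeechRecognition({ commands });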
As of February 2021, React Speech Recognition supports Chromium-based browsers such as Google Chrome, Microsoft Edge, Google Chrome for Android, and Samsung Internet.
Unfortunately, iOS does not support these APIs.
To add React Speech Recognition to your React project, simply open your terminal and type:
npm i --save react-speech-recognition
Running this command adds the Hook to your project's dependencies.
To see how the speech recognition Hook works, we’ll build a simple UI.
First, we’ll add a round button with a mic icon, a button with text to indicate whether or not we are listening to user speech, and a stop button to stop listening.
Below these elements, we’ll show the user’s speech-to-text translation and create a reset button to clear the text and stop listening.
Here is our JSX for the component described above:
// App.js
function App() {
  return (
    <div className="microphone-wrapper">
      <div className="mircophone-container">
        <div className="microphone-icon-container">
          <img src={microPhoneIcon} className="microphone-icon" />
        </div>
        <div className="microphone-status">Click to start Listening</div>
        <button className="microphone-stop btn">Stop</button>
      </div>
      <div className="microphone-result-container">
        <div className="microphone-result-text">Speech text here</div>
        <button className="microphone-reset btn">Reset</button>
      </div>
    </div>
  );
}
With that set up, we can now add some styling:
/* App.css */
* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}
body {
  background-color: rgba(0, 0, 0, 0.8);
  font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
  color: white;
}
.mircophone-container {
  display: flex;
  justify-content: center;
  align-items: center;
  width: 100vw;
  height: 50vh;
}
.microphone-icon-container {
  width: 100px;
  height: 100px;
  border-radius: 50%;
  background-image: linear-gradient(128deg, #ffffff, #647c88);
  padding: 20px;
  margin-right: 20px;
  position: relative;
  cursor: pointer;
}
.microphone-icon-container.listening::before {
  content: "";
  width: 100px;
  height: 100px;
  background-color: #ffffff81;
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%) scale(1.4);
  border-radius: 50%;
  animation: listening infinite 1.5s;
}
@keyframes listening {
  0% {
    opacity: 1;
    transform: translate(-50%, -50%) scale(1);
  }
  100% {
    opacity: 0;
    transform: translate(-50%, -50%) scale(1.4);
  }
}
.microphone-icon {
  width: 100%;
  height: 100%;
}
.microphone-status {
  font-size: 22px;
  margin-right: 20px;
  min-width: 215px;
}
.btn {
  border: none;
  padding: 10px 30px;
  margin-right: 10px;
  outline: none;
  cursor: pointer;
  font-size: 20px;
  border-radius: 25px;
  box-shadow: 0px 0px 10px 5px #ffffff1a;
}
.microphone-result-container {
  text-align: center;
  height: 50vh;
  display: flex;
  flex-direction: column;
  justify-content: space-between;
  align-items: center;
  padding-bottom: 30px;
}
.microphone-result-text {
  margin-bottom: 30px;
  width: 70vw;
  overflow-y: auto;
}
.microphone-reset {
  border: 1px solid #fff;
  background: none;
  color: white;
  width: fit-content;
}
As you may have noticed, we also included an animation that will play when listening has started, thereby alerting the user that they can now speak.
To use React Speech Recognition, we must first import it into the component. We will use the useSpeechRecognition Hook and the SpeechRecognition object.
To import React Speech Recognition:
import SpeechRecognition, { useSpeechRecognition } from "react-speech-recognition";
To start listening to the user’s voice, we need to call the startListening function:
SpeechRecognition.startListening()
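By default, listening stops automatically when the user pauses. If you want the mic to keep listening, or if you want to target a specific language, you can pass options to startListening. This is a minimal sketch; the language code shown is just an example:

// Keep listening until stopListening is called, and recognize US English
SpeechRecognition.startListening({ continuous: true, language: "en-US" });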
To stop listening, we can call stopListening:
SpeechRecognition.stopListening()
To get the transcript of the user’s speech, we will use transcript:
const { transcript } = useSpeechRecognition()
The transcript value will update whenever the user says something.
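For example, a bare-bones component that starts listening on a button click and renders whatever the user says could look like this sketch:

import SpeechRecognition, {
  useSpeechRecognition,
} from "react-speech-recognition";

function Dictaphone() {
  // transcript updates as speech is recognized
  const { transcript } = useSpeechRecognition();

  return (
    <div>
      <button onClick={() => SpeechRecognition.startListening()}>Start</button>
      <p>{transcript}</p>
    </div>
  );
}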
To reset or clear the value of transcript, you can call resetTranscript:
const { resetTranscript } = useSpeechRecognition()
Calling the resetTranscript function will set the transcript back to an empty string.
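For instance, wiring it to a clear button is as simple as this small sketch:

import { useSpeechRecognition } from "react-speech-recognition";

function TranscriptPanel() {
  const { transcript, resetTranscript } = useSpeechRecognition();

  return (
    <div>
      <p>{transcript}</p>
      {/* Sets the transcript back to an empty string */}
      <button onClick={resetTranscript}>Clear</button>
    </div>
  );
}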
Finally, to check whether the browser supports the Web Speech APIs or not, we can use this function:
if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
  // Browser not supported & return some useful info.
}
With everything we’ve reviewed to this point, we are now ready to set up our code. Note that in the block below, we added the listening events and corresponding states:
import { useRef, useState } from "react";
import SpeechRecognition, {
  useSpeechRecognition,
} from "react-speech-recognition";
import "./App.css";
import microPhoneIcon from "./microphone.svg";

function App() {
  const { transcript, resetTranscript } = useSpeechRecognition();
  const [isListening, setIsListening] = useState(false);
  const microphoneRef = useRef(null);

  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
    return (
      <div className="mircophone-container">
        Browser does not support speech recognition.
      </div>
    );
  }

  const handleListening = () => {
    setIsListening(true);
    microphoneRef.current.classList.add("listening");
    SpeechRecognition.startListening({
      continuous: true,
    });
  };

  const stopHandle = () => {
    setIsListening(false);
    microphoneRef.current.classList.remove("listening");
    SpeechRecognition.stopListening();
  };

  const handleReset = () => {
    stopHandle();
    resetTranscript();
  };

  return (
    <div className="microphone-wrapper">
      <div className="mircophone-container">
        <div
          className="microphone-icon-container"
          ref={microphoneRef}
          onClick={handleListening}
        >
          <img src={microPhoneIcon} className="microphone-icon" />
        </div>
        <div className="microphone-status">
          {isListening ? "Listening........." : "Click to start Listening"}
        </div>
        {isListening && (
          <button className="microphone-stop btn" onClick={stopHandle}>
            Stop
          </button>
        )}
      </div>
      {transcript && (
        <div className="microphone-result-container">
          <div className="microphone-result-text">{transcript}</div>
          <button className="microphone-reset btn" onClick={handleReset}>
            Reset
          </button>
        </div>
      )}
    </div>
  );
}

export default App;
Now the app is set up so that when a user clicks the mic button, it listens to their voice and outputs the transcript below. You will need to grant microphone permission the first time you run the app.
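As a side note, useSpeechRecognition also returns a listening flag that reflects whether the mic is currently active, so you could lean on it instead of keeping your own isListening state. This is just a sketch, assuming a reasonably recent version of the library:

const { transcript, listening, resetTranscript } = useSpeechRecognition();

// ...

// listening is true while the browser is capturing speech,
// so it can drive the status text directly
<div className="microphone-status">
  {listening ? "Listening........." : "Click to start Listening"}
</div>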
Now comes the fun part: adding commands to perform a task based on user speech/phrases.
To add commands, we must pass an array of commands as an option to the useSpeechRecognition Hook.
Before we can do that, however, we must prepare our commands array like so:
const commands = [
  {
    command: "open *",
    callback: (website) => {
      window.open("http://" + website.split(" ").join(""));
    },
  },
  {
    command: "change background colour to *",
    callback: (color) => {
      document.body.style.background = color;
    },
  },
  {
    command: "reset",
    callback: () => {
      handleReset();
    },
  },
  {
    command: "reset background colour",
    callback: () => {
      document.body.style.background = `rgba(0, 0, 0, 0.8)`;
    },
  },
];
Remember that commands is an array of objects, each with command and callback properties. command is where you define the phrase to listen for, and the corresponding callback fires when that phrase is recognized.
In the example above, you may have noticed that we passed an asterisk in the first and second commands. This symbol acts as a wildcard: it captures the words spoken in that position and passes them back as an argument to the callback function.
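The same idea extends to multiple wildcards: each asterisk captures its own chunk of speech and is passed to the callback as a separate argument, as in this small sketch:

const commands = [
  {
    // Saying "I would like to order pizza and coke" calls the
    // callback with food = "pizza" and drink = "coke"
    command: "I would like to order * and *",
    callback: (food, drink) => console.log(`Order: ${food} and ${drink}`),
  },
];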
You can pass the commands variable to useSpeechRecognition like this:
const { transcript, resetTranscript } = useSpeechRecognition({ commands });
Now you should be able to run your app and try out the commands — for example, say “change background colour to red” to change the page background.
For future reference, our full code for the app we created using React Speech Recognition Hooks looks like this:
import { useRef, useState } from "react";
import SpeechRecognition, {
  useSpeechRecognition,
} from "react-speech-recognition";
import "./App.css";
import microPhoneIcon from "./microphone.svg";

function App() {
  const commands = [
    {
      command: "open *",
      callback: (website) => {
        window.open("http://" + website.split(" ").join(""));
      },
    },
    {
      command: "change background colour to *",
      callback: (color) => {
        document.body.style.background = color;
      },
    },
    {
      command: "reset",
      callback: () => {
        handleReset();
      },
    },
    {
      command: "reset background colour",
      callback: () => {
        document.body.style.background = `rgba(0, 0, 0, 0.8)`;
      },
    },
  ];
  const { transcript, resetTranscript } = useSpeechRecognition({ commands });
  const [isListening, setIsListening] = useState(false);
  const microphoneRef = useRef(null);

  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
    return (
      <div className="mircophone-container">
        Browser does not support speech recognition.
      </div>
    );
  }

  const handleListening = () => {
    setIsListening(true);
    microphoneRef.current.classList.add("listening");
    SpeechRecognition.startListening({
      continuous: true,
    });
  };

  const stopHandle = () => {
    setIsListening(false);
    microphoneRef.current.classList.remove("listening");
    SpeechRecognition.stopListening();
  };

  const handleReset = () => {
    stopHandle();
    resetTranscript();
  };

  return (
    <div className="microphone-wrapper">
      <div className="mircophone-container">
        <div
          className="microphone-icon-container"
          ref={microphoneRef}
          onClick={handleListening}
        >
          <img src={microPhoneIcon} className="microphone-icon" />
        </div>
        <div className="microphone-status">
          {isListening ? "Listening........." : "Click to start Listening"}
        </div>
        {isListening && (
          <button className="microphone-stop btn" onClick={stopHandle}>
            Stop
          </button>
        )}
      </div>
      {transcript && (
        <div className="microphone-result-container">
          <div className="microphone-result-text">{transcript}</div>
          <button className="microphone-reset btn" onClick={handleReset}>
            Reset
          </button>
        </div>
      )}
    </div>
  );
}

export default App;
By now, you should hopefully have a better understanding of how you can use the React Speech Recognition Hook in your project. For further reading, I recommend learning more about programming by voice and the other ways AI can assist in your coding endeavors.
Thank you for reading the article. Please leave any feedback or comments below.