I’m going to cover a few things in this article, including how to migrate from the now-deprecated react-native-camera to the powerful react-native-vision-camera, handle permissions, optimize performance, and implement features like custom UI and advanced use cases such as face detection. By the end, you’ll have the tools to build a production-ready camera experience for your app’s needs. I’ll also refer to react-native-vision-camera as VisionCamera throughout the piece.
For this article, we will be using the react-native-vision-camera package.
To install VisionCamera, run the command below:
# Expo
npx expo install react-native-vision-camera
Next, we’ll configure VisionCamera by adding it to the plugins array in our app.json file:
{ "name": "camera-app", "slug": "camera-app", "plugins": [ [ "react-native-vision-camera", { "cameraPermissionText": "$(PRODUCT_NAME) needs access to your Camera.", "enableMicrophonePermission": true, "microphonePermissionText": "$(PRODUCT_NAME) needs access to your Microphone." } ] ] }
Next, install the expo-dev-client package. It replaces the default in-app development tools with ones that support network debugging, launching updates, and more. To install the package, run the command below:
npx expo install expo-dev-client
Prebuild your app to compile and apply the changes:
npx expo prebuild
Finally, run the development build on your device using the command:
eas build --profile development --platform android
If you encounter the error “Failed to upload metadata to EAS Build”, it’s because EAS doesn’t know which files you want to upload. To fix the error, simply initialize your directory as a Git repository with git init.
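For example — a minimal sketch, assuming your project isn’t under version control yet:

git init
git add .
git commit -m "Initial commit"

Once the directory is a Git repository, rerun the eas build command.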
Migrating from react-native-camera to react-native-vision-camera
Previously, react-native-camera was the go-to package for implementing camera functionality. It has been deprecated and unmaintained for a while now, so you have to migrate to react-native-vision-camera.
A typical react-native-camera implementation looks like this:
import { RNCamera } from 'react-native-camera';

const Camera = () => (
  <RNCamera
    style={styles.camera}
    type={RNCamera.Constants.Type.back}
    flashMode={RNCamera.Constants.FlashMode.auto}
    androidCameraPermissionOptions={{
      title: 'Grant Camera Permission',
      message: 'Camera App needs access',
      buttonPositive: 'OK',
      buttonNegative: 'Cancel',
    }}
  />
);
To use react-native-vision-camera, import the camera component, device, and permission hooks from the package. The useCameraDevice hook is used to select the camera you want to use, i.e., either the back or front camera, while the useCameraPermission hook lets you request permission and check the permission status:
import { useEffect } from "react";
import {
  Camera,
  useCameraDevice,
  useCameraPermission,
} from "react-native-vision-camera";

const NewCamera = () => {
  const device = useCameraDevice("back");
  const { hasPermission, requestPermission } = useCameraPermission();

  useEffect(() => {
    if (!hasPermission) {
      requestPermission();
    }
  }, [hasPermission]);

  if (!hasPermission) return null;
  if (device == null) return null;

  return (
    <Camera
      style={{ flex: 1 }}
      device={device}
      isActive={true}
      photo={true}
      video={false}
    />
  );
};
Before you can use the camera component, you’ll have to grant permission to the application. Fortunately, react-native-vision-camera has a hook that lets us do this easily:
import { useCameraPermission } from "react-native-vision-camera";

export default function HomeScreen() {
  const { hasPermission, requestPermission } = useCameraPermission();

  if (!hasPermission) {
    return (
      <View style={styles.permissionView}>
        <Text style={styles.permissionText}>
          Camera App requires permission.
        </Text>
        <TouchableOpacity
          onPress={requestPermission}
          style={styles.permissionButton}
        >
          <Text style={styles.permissionButtonText}>Grant Permission</Text>
        </TouchableOpacity>
      </View>
    );
  }

  return (
    ....
  );
}
hasPermission returns a boolean that tells us whether permission has been granted. If the user hasn’t granted permission, a button is displayed asking them to do so. react-native-vision-camera exports a function called requestPermission that we use for this purpose, and the function returns a promise.
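Because requestPermission returns a promise that resolves to a boolean, you can also await it if you need to react to the user’s choice. Here’s a small sketch (the handleRequest name is just for illustration, and Alert comes from react-native):

const handleRequest = async () => {
  // Resolves to true if the user granted camera access, false otherwise.
  const granted = await requestPermission();
  if (!granted) {
    Alert.alert(
      "Permission denied",
      "Enable camera access in your device settings to continue."
    );
  }
};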
Finally, we can handle the full camera permissions:
import React from "react"; import { View, Text, StyleSheet, TouchableOpacity } from "react-native"; import { Camera, useCameraDevice, useCameraPermission, } from "react-native-vision-camera"; export default function HomeScreen() { const device = useCameraDevice("back"); const { hasPermission, requestPermission } = useCameraPermission(); if (!hasPermission || !device) { return ( <View style={styles.permissionView}> <Text style={styles.permissionText}> Camera App requires permission. </Text> <TouchableOpacity onPress={requestPermission} style={styles.permissionButton} > <Text style={styles.permissionButtonText}>Grant Permission</Text> </TouchableOpacity> </View> ); } return ( <View style={styles.container}> <Camera photo={true} device={device} isActive={!!device} style={StyleSheet.absoluteFill} /> </View> ); } const styles = StyleSheet.create({ container: { flex: 1, justifyContent: "center", alignItems: "center", backgroundColor: "black", }, permissionView: { flex: 1, justifyContent: "center", alignItems: "center", backgroundColor: "#fff", }, permissionText: { fontSize: 18, marginBottom: 20, }, permissionButton: { backgroundColor: "#007BFF", paddingVertical: 12, paddingHorizontal: 24, borderRadius: 6, }, permissionButtonText: { color: "#fff", fontSize: 16, }, });
In the code above, if hasPermission returns true, meaning camera permission has been granted, we proceed to using the camera. If it is false, permission hasn’t been granted, and we have to request it. Finally, we use the back camera since we explicitly stated this in the useCameraDevice hook.
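If you ever need finer control over which lens is picked, useCameraDevice also accepts a filter object in VisionCamera v3 and later — a quick sketch:

// Prefer a combined multi-lens back camera when the device has one.
const device = useCameraDevice("back", {
  physicalDevices: [
    "ultra-wide-angle-camera",
    "wide-angle-camera",
    "telephoto-camera",
  ],
});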
Now that we have implemented camera permissions, we can start building our functionality. Let’s dive in and look at some of the camera use cases we’ll be implementing.
For taking pictures, we’ll create a ref for our camera and, using the ref, call the takePhoto({}) method. The takePhoto({}) method takes options like flash, enableShutterSound, enableAutoRedEyeReduction, etc., and resolves with a photo object whose path points to the captured file. We will only be setting flash and enableShutterSound for now:
const cameraRef = useRef<Camera>(null);

const takePhoto = async () => {
  if (cameraRef.current) {
    const { path } = await cameraRef.current.takePhoto({
      flash: "on",
      enableShutterSound: true,
    });
    console.log(path); // save to camera roll here
  } else {
    Alert.alert("Camera not ready");
  }
};
Click the button, and the path will be logged to your console. After getting the path, you can then save it to your camera roll.
To save the picture to your camera roll, install this package:
npm install @react-native-camera-roll/camera-roll
Rebuild your app so the new package can take effect.
The package exports a CameraRoll module that provides access to the local camera roll, or photo library. With it, we can save our image like this:
const takePhoto = async () => {
  if (cameraRef.current) {
    const { path } = await cameraRef.current.takePhoto({
      flash: "on",
      enableShutterSound: true,
    });
    await CameraRoll.saveAsset(`file://${path}`, {
      type: "photo",
    });
  } else {
    Alert.alert("Camera not ready");
  }
};
Go to your camera roll and you will see the image you just captured.
The full code will look like this:
import React, { useRef } from "react"; import { View, Text, Alert, StyleSheet, TouchableOpacity } from "react-native"; import { Camera, useCameraDevice, useCameraPermission, } from "react-native-vision-camera"; import { CameraRoll } from "@react-native-camera-roll/camera-roll"; export default function HomeScreen() { const cameraRef = useRef<Camera>(null); const device = useCameraDevice("back"); const { hasPermission, requestPermission } = useCameraPermission(); const takePhoto = async () => { if (cameraRef.current) { const { path } = await cameraRef.current.takePhoto({ flash: "on", enableShutterSound: true, }); await CameraRoll.saveAsset(`file://${path}`, { type: "photo", }); } else { Alert.alert("Camera not ready"); } }; if (!hasPermission || !device) { return ( <View style={styles.permissionView}> <Text style={styles.permissionText}> Camera App requires permission. </Text> <TouchableOpacity onPress={requestPermission} style={styles.permissionButton} > <Text style={styles.permissionButtonText}>Grant Permission</Text> </TouchableOpacity> </View> ); } return ( <View style={styles.container}> <Camera ref={cameraRef} photo={true} device={device} isActive={!!device} style={StyleSheet.absoluteFill} /> <TouchableOpacity onPress={takePhoto} style={styles.takePhoto}> <View style={styles.takePhotoButton}></View> </TouchableOpacity> </View> ); } const styles = StyleSheet.create({ container: { flex: 1, justifyContent: "center", alignItems: "center", backgroundColor: "black", }, permissionView: { flex: 1, justifyContent: "center", alignItems: "center", backgroundColor: "#fff", }, permissionText: { fontSize: 18, marginBottom: 20, }, permissionButton: { backgroundColor: "#007BFF", paddingVertical: 12, paddingHorizontal: 24, borderRadius: 6, }, permissionButtonText: { color: "#fff", fontSize: 16, }, takePhoto: { position: "absolute", bottom: 80, width: 70, height: 70, borderRadius: 50, backgroundColor: "#fff", padding: 4, }, takePhotoButton: { borderWidth: 2, borderColor: "#000", backgroundColor: "#fff", borderRadius: 50, width: "100%", height: "100%", }, });
Next, we will implement video recording. Similar to what we did with taking photos, we’ll use cameraRef to record videos:
const startRecording = async () => {
  if (cameraRef.current) {
    try {
      cameraRef.current.startRecording({
        flash: "off",
        onRecordingError: (error) => console.error("Recording error:", error),
        onRecordingFinished: async ({ path }) => {
          try {
            await CameraRoll.saveAsset(`file://${path}`, { type: "video" });
          } catch (error) {
            console.error("Error saving video:", error);
          }
        },
      });
    } catch (error) {
      console.error("Error during recording:", error);
    }
  }
};
startRecording takes options like flash, fileType, onRecordingError, onRecordingFinished, etc. flash takes two values, on and off; fileType lets you choose whether the video is saved as an mp4 or a mov; onRecordingError lets you capture runtime errors while recording; and onRecordingFinished is called when the recording completes successfully so you can save the file to your camera roll.
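For instance, if you want to force a specific container format, you can pass fileType alongside the callbacks — a small sketch based on the options above:

cameraRef.current?.startRecording({
  flash: "off",
  fileType: "mp4", // or "mov"
  onRecordingError: (error) => console.error("Recording error:", error),
  onRecordingFinished: ({ path }) => console.log("Recording saved at", path),
});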
In our camera component, we will set video and audio to true since we want to use the video capabilities:
<View style={styles.container}>
  <Camera
    ref={cameraRef}
    video={true}
    audio={true}
    device={device}
    isActive={!!device}
    style={StyleSheet.absoluteFill}
  />
  <TouchableOpacity onPress={startRecording} style={styles.takePhoto}>
    <View style={[styles.takePhotoButton, { backgroundColor: "red" }]}></View>
  </TouchableOpacity>
</View>
You might have noticed that we haven’t implemented a way to end the video recording, so for now it just keeps recording. To stop the recording, we’ll use a state value that we set to true when recording starts and back to false when we want to end it:
const [isRecording, setIsRecording] = useState(false);

const startRecording = async () => {
  if (cameraRef.current) {
    try {
      if (isRecording) {
        await cameraRef.current.stopRecording();
        setIsRecording(false);
      } else {
        cameraRef.current.startRecording({
          flash: "off",
          onRecordingError: (error) => console.error("Recording error:", error),
          onRecordingFinished: async ({ path }) => {
            try {
              await CameraRoll.saveAsset(`file://${path}`, { type: "video" });
            } catch (error) {
              console.error("Error saving video:", error);
            }
          },
        });
        setIsRecording(true);
      }
    } catch (error) {
      console.error("Error during recording:", error);
    }
  }
};
Similarly, we will use a conditional to change the style of our button when the recording starts and stops. This makes for a better user experience:
<TouchableOpacity onPress={startRecording} style={styles.takePhoto}>
  <View
    style={[
      styles.takePhotoButton,
      !isRecording
        ? { backgroundColor: "red" }
        : { backgroundColor: "#fff", borderWidth: 3 },
    ]}
  ></View>
</TouchableOpacity>
Previously, we implemented photo and video capture, but this section focuses on enhancing the user experience with features like flash toggling, camera switching, in-app gallery navigation, and recording timers.
This is the StyleSheet we will be using for our customization:
const deviceWidth = Dimensions.get("window").width; const styles = StyleSheet.create({ container: { flex: 1, justifyContent: "center", alignItems: "center", backgroundColor: "black", }, permissionView: { flex: 1, justifyContent: "center", alignItems: "center", backgroundColor: "#fff", }, permissionText: { fontSize: 18, marginBottom: 20, }, permissionButton: { backgroundColor: "#007BFF", paddingVertical: 12, paddingHorizontal: 24, borderRadius: 6, }, permissionButtonText: { color: "#fff", fontSize: 16, }, galleryGrid: { flexDirection: "row", flexWrap: "wrap", justifyContent: "space-evenly", paddingBottom: 50, }, galleryPhotos: { width: deviceWidth / 3 - 1, height: 150, borderWidth: 2, borderColor: "#fff", marginVertical: 1, justifyContent: "flex-start", alignItems: "flex-start", borderRadius: 8, }, videoTimer: { position: "absolute", top: 50, fontSize: 24, fontWeight: "bold", color: "white", }, flash: { position: "absolute", top: 80, right: 20, }, slider: { position: "absolute", bottom: 150, width: "70%", }, textOverlayView: { position: "absolute", top: 80, left: 20, flexDirection: "row", justifyContent: "space-between", alignItems: "center", width: 100, }, textOverlayText: { fontSize: 20, color: "#fff", paddingVertical: 4, paddingHorizontal: 12, }, controlsView: { position: "absolute", bottom: 30, left: 0, right: 0, alignItems: "center", justifyContent: "space-between", flexDirection: "row", paddingHorizontal: 30, width: deviceWidth, }, photogallery: { width: 60, height: 60, borderRadius: 10, backgroundColor: "rgba(255, 255, 255, 0.1)", }, takePhoto: { width: 70, height: 70, borderRadius: 50, backgroundColor: "#fff", padding: 4, }, takePhotoButton: { borderWidth: 2, borderColor: "#000", backgroundColor: "#fff", borderRadius: 50, width: "100%", height: "100%", }, toggleCamera: { backgroundColor: "rgba(255, 255, 255, 0.1)", borderRadius: 50, alignItems: "center", justifyContent: "center", height: 60, width: 60, }, photoContainer: { position: "absolute", top: 50, left: 20, right: 20, backgroundColor: "rgba(0, 0, 0, 0.5)", padding: 10, borderRadius: 10, }, photoOverlay: { position: "absolute", bottom: 80, left: 20, width: 140, height: 170, borderRadius: 12, borderWidth: 1, borderColor: "#fff", overflow: "hidden", justifyContent: "center", alignItems: "center", zIndex: 2, }, photoImage: { width: "100%", height: "100%", }, });
First, we add two buttons and an image placeholder. The middle button takes photos, the right button switches cameras, and the placeholder image opens the gallery:
<View style={styles.controlsView}>
  <Pressable>
    <Image
      style={styles.photogallery}
      source={{ uri: "https://reactnative.dev/img/tiny_logo.png" }}
    />
  </Pressable>
  <TouchableOpacity onPress={takePhoto} style={styles.takePhoto}>
    <View style={styles.takePhotoButton}></View>
  </TouchableOpacity>
  <TouchableOpacity onPress={toggleCamera} style={styles.toggleCamera}>
    <MaterialIcons name="cameraswitch" size={28} color="white" />
  </TouchableOpacity>
</View>
The camera roll package we installed earlier has a hook that lets us easily save images and get photos from the camera roll. To retrieve images, call the getPhotos() method to get the first 20 images:
const [photos, getPhotos, save] = useCameraRoll();
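getPhotos takes the same configuration object as CameraRoll.getPhotos, so you can control how many items are fetched and of which type — a quick sketch, assuming the defaults aren’t what you want:

// Fetch the 20 most recent photos; the results end up in `photos.edges`.
getPhotos({ first: 20, assetType: "Photos" });

// e.g., photos.edges[0]?.node.image.uri holds the URI of the newest photo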
Copy the code below into your app, and I’ll explain how it works afterward:
import React, { useEffect, useRef, useState } from "react"; import { View, Text, Image, Alert, Pressable, ScrollView, Dimensions, StyleSheet, TouchableOpacity, Platform, } from "react-native"; import { Camera, useCameraDevice, useCameraPermission, } from "react-native-vision-camera"; import { CameraRoll, PhotoIdentifier, useCameraRoll, } from "@react-native-camera-roll/camera-roll"; import { Entypo, MaterialIcons } from "@expo/vector-icons"; import { SafeAreaView } from "react-native-safe-area-context"; import { hasAndroidPermission } from "@/hooks/usePermission"; export default function HomeScreen() { const cameraRef = useRef<Camera>(null); const [photos, getPhotos] = useCameraRoll(); const timerRef = useRef<NodeJS.Timeout | null>(null); const { hasPermission, requestPermission } = useCameraPermission(); const [toggleFrontCamera, setToggleFrontCamera] = useState(false); const device = useCameraDevice(toggleFrontCamera ? "front" : "back"); const [showPhoto, setShowPhoto] = useState(false); const [showGallery, setShowGallery] = useState(true); const [isRecording, setIsRecording] = useState(false); const [toggleVideo, setToggleVideo] = useState(false); const [videoTimer, setVideoTimer] = useState<number>(0); const [photoUri, setPhotoUri] = useState<string | null>(null); const [gallery, setGallery] = useState<PhotoIdentifier[]>([]); const [turnOnFlash, setTurnOnFlash] = useState<"off" | "on">("off"); useEffect(() => { const getGallery = () => { CameraRoll.getPhotos({ first: 1, assetType: "Photos", }) .then((photo) => { setGallery(photo.edges); }) .catch((err) => { console.error(err); }); }; getGallery(); return () => getGallery(); }, [photos, photoUri]); useEffect(() => { let timer: NodeJS.Timeout; if (showPhoto) { timer = setTimeout(() => { setShowPhoto(false); }, 3000); } return () => { clearTimeout(timer); }; }, [showPhoto]); const takePhoto = async () => { if (Platform.OS === "android" && !(await hasAndroidPermission())) { return; } if (cameraRef.current) { const { path } = await cameraRef.current.takePhoto({ flash: turnOnFlash, enableShutterSound: true, }); await CameraRoll.saveAsset(`file://${path}`, { type: "photo", }); setPhotoUri(path); setShowPhoto(true); } else { Alert.alert("Camera not ready"); } }; const startRecording = async () => { if (cameraRef.current) { try { if (isRecording) { await cameraRef.current.stopRecording(); setIsRecording(false); if (timerRef.current) { clearInterval(timerRef.current); timerRef.current = null; } } else { cameraRef.current.startRecording({ flash: turnOnFlash, onRecordingError: (error) => console.error("Recording error:", error), onRecordingFinished: async ({ path }) => { try { await CameraRoll.saveAsset(`file://${path}`, { type: "video" }); } catch (error) { console.error("Error saving video:", error); } if (timerRef.current) { clearInterval(timerRef.current); } }, }); setIsRecording(true); setVideoTimer(0); timerRef.current = setInterval(() => { setVideoTimer((prevTime) => prevTime + 1); }, 1000); } } catch (error) { console.error("Error during recording:", error); setIsRecording(false); if (timerRef.current) { clearInterval(timerRef.current); } } } }; const toggleFlash = () => { setTurnOnFlash((prev) => (prev === "on" ? 
"off" : "on")); }; const toggleCamera = () => { setToggleFrontCamera((prev) => !prev); }; const startVideoTimer = (timeInSeconds: number) => { const minutes = Math.floor(timeInSeconds / 60); const seconds = timeInSeconds % 60; return `${String(minutes).padStart(2, "0")}:${String(seconds).padStart( 2, "0" )}`; }; if (!hasPermission || !device) { return ( <View style={styles.permissionView}> <Text style={styles.permissionText}> Camera App requires permission. </Text> <TouchableOpacity onPress={requestPermission} style={styles.permissionButton} > <Text style={styles.permissionButtonText}>Grant Permission</Text> </TouchableOpacity> </View> ); } return ( <View style={styles.container}> {showGallery && photos.edges.length > 0 ? ( <ScrollView> <SafeAreaView> <TouchableOpacity onPress={() => setShowGallery(false)} style={{ margin: 10, marginVertical: 20, padding: 10, borderRadius: 5, backgroundColor: "#ccc", alignItems: "flex-end", marginLeft: "auto", }} > <View> <Text>Back to Camera</Text> </View> </TouchableOpacity> <View style={styles.galleryGrid}> {photos.edges.map((item, index) => { return ( <Image key={index} style={styles.galleryPhotos} source={{ uri: item.node.image.uri }} /> ); })} </View> </SafeAreaView> </ScrollView> ) : ( <> <Camera ref={cameraRef} photo={!toggleVideo} video={toggleVideo} style={StyleSheet.absoluteFill} device={device} isActive={!!device} pixelFormat="yuv" audio={toggleVideo} enableZoomGesture={true} torch={turnOnFlash} /> {showPhoto && photoUri && ( <View style={styles.photoOverlay}> <Image source={{ uri: `file://${photoUri}` }} style={styles.photoImage} /> </View> )} {isRecording && ( <Text style={styles.videoTimer}>{startVideoTimer(videoTimer)}</Text> )} <TouchableOpacity style={styles.flash} onPress={toggleFlash}> <Entypo name="flash" size={40} color={ turnOnFlash === "on" ? "yellow" : "rgba(255, 255, 255, 0.4)" } /> </TouchableOpacity> <View style={styles.textOverlayView}> <TouchableOpacity onPress={() => setToggleVideo(false)}> <Text style={[ styles.textOverlayText, !toggleVideo && { borderWidth: 1, borderColor: "#fff", borderRadius: 6, }, ]} > Photo </Text> </TouchableOpacity> <TouchableOpacity onPress={() => setToggleVideo(true)}> <Text style={[ styles.textOverlayText, toggleVideo && { borderWidth: 1, borderColor: "#fff", borderRadius: 6, }, ]} > Video </Text> </TouchableOpacity> </View> <View style={styles.controlsView}> {gallery.length > 0 ? ( gallery.splice(0, 1).map((item, index) => { return ( <Pressable onPress={() => { getPhotos(); setShowGallery(true); }} key={index} > <Image style={styles.photogallery} source={{ uri: item.node.image.uri }} /> </Pressable> ); }) ) : ( <Pressable> <Image style={styles.photogallery} source={{ uri: "https://reactnative.dev/img/tiny_logo.png" }} /> </Pressable> )} {toggleVideo ? ( <TouchableOpacity onPress={startRecording} style={styles.takePhoto} > <View style={[ styles.takePhotoButton, !isRecording ? { backgroundColor: "red" } : { backgroundColor: "#fff", borderWidth: 3 }, ]} ></View> </TouchableOpacity> ) : ( <TouchableOpacity onPress={takePhoto} style={styles.takePhoto}> <View style={styles.takePhotoButton}></View> </TouchableOpacity> )} <TouchableOpacity onPress={toggleCamera} style={styles.toggleCamera} > <MaterialIcons name="cameraswitch" size={28} color="white" /> </TouchableOpacity> </View> </> )} </View> ); }
From the code above, we first check whether the user has granted camera access using useCameraPermission. If permission hasn’t been granted, the application prompts the user to grant it.
Next, we have the following states to handle our functionalities:
toggleFrontCamera: Toggles between the front and back camera
showPhoto: Acts as our preview, i.e., it displays the last captured photo over the camera
showGallery: Switches between the camera view and the gallery viewer
isRecording: Tracks whether a video is currently being recorded
toggleVideo: Switches between video and photo modes
videoTimer: Keeps track of the duration of the current video recording
photoUri: Stores the URI path of the last captured photo
gallery: Stores the fetched gallery photos for display
turnOnFlash: Turns the camera flash on and off

When the user taps the capture button in photo mode, a photo is snapped using cameraRef.current.takePhoto() and then saved to the gallery using CameraRoll.saveAsset(). The URI of the photo is stored in the photoUri state, and showPhoto is set to true to display it temporarily.
Similarly, in video mode, recording starts with cameraRef.current.startRecording() when the video button is pressed, and the timer starts counting, tracked via the videoTimer state and displayed at the top center of the screen. When the user ends the recording, the video is saved to the gallery and the timer is cleared.
There is also a flash control (turnOnFlash) to toggle the camera flash on and off, a switch-camera button to swap between the front and back camera using toggleFrontCamera, and a pressable image at the bottom left that takes the user to the gallery viewer.
Another good use case for the camera is QR and barcode scanning. For this, react-native-vision-camera ships with a code scanner that can be used to detect codes.
First, enable the code scanner in your app.json file:
{ "name": "my app", "plugins": [ [ "react-native-vision-camera", { // ... "enableCodeScanner": true } ] ] }
Next, let’s create the scanner and pass it to our camera stream:
const codeScanner = useCodeScanner({
  codeTypes: ["qr", "ean-13", "upc-a"],
  onCodeScanned: (codes) => {
    for (const code of codes) {
      console.log(`Scanned ${code.type}: ${code.frame}, ${code.value}`);
    }
  },
});

return (
  <Camera
    ref={cameraRef}
    photo={!toggleVideoRecorder}
    style={StyleSheet.absoluteFill}
    device={device}
    isActive={!!device}
    pixelFormat="yuv"
    codeScanner={codeScanner}
    torch={turnOnFlash}
  />
);
Try scanning any barcode, and you’ll get accurate values.
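In a real app, you’ll usually want to act on the scanned value rather than just logging it — for example, showing it to the user and pausing the scanner so the same code isn’t reported on every frame. A rough sketch (the isScanningRef guard is just for illustration, and useRef and Alert come from react and react-native):

const isScanningRef = useRef(true);

const codeScanner = useCodeScanner({
  codeTypes: ["qr", "ean-13", "upc-a"],
  onCodeScanned: (codes) => {
    if (!isScanningRef.current || codes.length === 0) return;
    isScanningRef.current = false;
    // Show the first detected value, then resume scanning once dismissed.
    Alert.alert("Code detected", codes[0].value ?? "Unknown value", [
      { text: "OK", onPress: () => { isScanningRef.current = true; } },
    ]);
  },
});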
Another good use case is face detection. Being able to identify and locate human faces within images and videos is a real-world scenario that is widely used in various fields, from photography to security systems.
For face detection, we need react-native-vision-camera’s frame processors. Before that, we will install Worklets, which will help us run JavaScript functions on separate threads.
To install Worklets and the package we will use for face detection, run the command below:
npm i react-native-worklets-core vision-camera-face-detector
Next, add the plugin below to your babel.config.js file. If you’re using Expo and the file doesn’t exist yet, create it and add the plugin as seen below:
module.exports = {
  plugins: [
    ['react-native-worklets-core/plugin'],
  ],
};
To implement the face detection, see the code below:
import { StyleSheet } from "react-native";
import {
  Camera,
  useCameraDevice,
  useFrameProcessor,
} from "react-native-vision-camera";
import { Worklets } from "react-native-worklets-core";
import { scanFaces } from "vision-camera-face-detector";

export default function App() {
  const device = useCameraDevice("back");

  // Create the JS-thread callback once; the frame processor itself runs on a worklet thread.
  const onFacesDetected = Worklets.createRunOnJS((faces) => {
    console.log(faces);
  });

  const faceDetectionProcessor = useFrameProcessor(
    (frame) => {
      "worklet";
      try {
        const detectedFaces = scanFaces(frame);
        if (detectedFaces) {
          onFacesDetected(detectedFaces);
        }
      } catch (error) {
        console.log("Error scanning faces", error);
      }
    },
    [onFacesDetected]
  );

  if (device == null) return null;

  return (
    <Camera
      photo={true}
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={!!device}
      pixelFormat="yuv"
      frameProcessor={faceDetectionProcessor}
    />
  );
}
There are several other camera use cases, from motion detection to image labeling and object detection. Facial recognition is another popular and prevalent use case, where the camera can be used to authenticate users in applications.
You can find the full source code here for the implementations we looked at in the sections above.
If you want to try out other use cases, here you will find several plugins that you can integrate with VisionCamera and your camera application.