Object classification is one of the most fundamental applications of machine learning. With the advances in data generation and collection across all industries, it has become an important tool in many day-to-day operations. If you're not familiar with it already, object classification refers to the process of identifying and labeling objects in images or videos. It is commonly used in areas like autonomous driving, security systems, medical imaging, scientific research, and more.
TensorFlow is an open source machine learning framework developed by Google. It provides a long list of pre-trained models for common tasks like object classification and pose detection, and it can be easily integrated across platforms. TensorFlow.js brings the power of machine learning to JavaScript: it offers a wide variety of lightweight, easy-to-use machine learning models that you can import and run in the browser, in Node.js, and, as we'll see in this article, in React Native.
In this article, we'll use TensorFlow.js and React Native to build an object classification app that you can use to classify objects in any image. You'll learn how to use the MobileNet and Coco SSD classification models, but you could apply the same approach to any other machine learning model supported by TensorFlow.js. Let's get started!
Before getting started, you'll need to have Node.js and npm installed locally on your system. You'll also need an Android or iOS phone or emulator to test the application during development. Lastly, we'll use Expo to scaffold, run, and test the app.
To get started, create a new React Native project using the Expo CLI by running the following command:
npx create-expo-app rn-object-classification
The CLI will take some time to initialize a new React Native project for you. Once it’s done, you can then add the required dependencies.
To keep things simple, we'll add the dependencies directly to the package.json file instead of installing them one by one with npm or Expo; it's quite common to run into compatibility issues when dependency versions aren't matched correctly. Paste the following code into your package.json file to include the required dependencies:
{ "name": "rn-object-classification", "main": "index.js", "version": "0.0.1", "license": "Apache-2.0", "scripts": { "start": "expo start" }, "dependencies": { "@react-native-async-storage/async-storage": "~1.17.3", "@tensorflow-models/coco-ssd": "^2.0.3", "@tensorflow-models/mobilenet": "2.1.0", "@tensorflow/tfjs": "3.18.0", "@tensorflow/tfjs-react-native": "0.8.0", "expo": "~45.0.6", "expo-camera": "^12.2.0", "expo-file-system": "~14.0.0", "expo-gl": "^11.3.0", "expo-gl-cpp": "^11.3.0", "expo-image-picker": "~13.1.1", "jpeg-js": "0.3.7", "react": "17.0.2", "react-dom": "17.0.2", "react-native": "0.68.2", "react-native-fs": "2.14.1", "react-native-gesture-handler": "~2.2.1" }, "devDependencies": { "@babel/core": "^7.9.0", "@expo/webpack-config": "^0.15.0", "@types/react": "~16.9.35", "@types/react-native": "~0.63.2", "babel-preset-expo": "~8.3.0", "jest-expo": "~41.0.0", "typescript": "~4.0.0" }, "jest": { "preset": "react-native" }, "private": true }
Since you're also working with some Expo-specific packages, you'll need to update your app.json file to configure those dependencies correctly. Replace the content of your app.json file with the following code snippet:
{ "name": "rn-object-classification", "displayName": "rn-object-classification", "expo": { "name": "rn-object-classification", "slug": "rn-object-classification", "version": "1.0.0", "assetBundlePatterns": [ "**/*" ] }, "plugins": [ [ "expo-image-picker", { "photosPermission": "The app accesses your photos to run them through object detection" } ] ] }
This configuration also sets up the image picker plugin that you’ll use later to pick an image for classification. Run the following command to install all dependencies locally:
yarn install
Before working with the machine learning models, you need to set up a basic UI that allows users to pick images from their gallery for classification and displays the result when done. Head over to the App.tsx file in the root of your project directory and add the following code to it:
import React, { useState } from 'react';
import { View, Text, Image, Button } from 'react-native';
import * as tf from '@tensorflow/tfjs';
import { decodeJpeg } from '@tensorflow/tfjs-react-native';
import * as ImagePicker from 'expo-image-picker';
import * as FileSystem from 'expo-file-system';
import * as jpeg from 'jpeg-js';

const App = () => {
  // State containers for the app
  const [isTfReady, setIsTfReady] = useState(false);
  const [result, setResult] = useState('');
  const [pickedImage, setPickedImage] = useState('');

  // This function calls the image picker so the user can choose an image for classification
  const pickImage = async () => {
    // No permissions request is necessary for launching the image library
    let result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.All,
      allowsEditing: true,
      aspect: [4, 3],
      quality: 1,
    });

    if (!result.cancelled) {
      setPickedImage(result.uri);
    }
  };

  return (
    <View
      style={{
        height: '100%',
        display: 'flex',
        flexDirection: 'column',
        alignItems: 'center',
        justifyContent: 'center',
      }}
    >
      {/* Show the picked image */}
      <Image
        source={{ uri: pickedImage }}
        style={{ width: 200, height: 200, margin: 40 }}
      />
      {/* Show a button to open the image picker */}
      {isTfReady && <Button title="Pick an image" onPress={pickImage} />}
      <View style={{ width: '100%', height: 20 }} />
      {/* Display the state and result of processing */}
      {!isTfReady && <Text>Loading TFJS model...</Text>}
      {isTfReady && result === '' && <Text>Pick an image to classify!</Text>}
      {result !== '' && <Text>{result}</Text>}
    </View>
  );
};

export default App;
This completes the setup of your basic UI.
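If you'd rather request gallery access explicitly before opening the picker, expo-image-picker exposes requestMediaLibraryPermissionsAsync for exactly that. Below is a minimal sketch of a guarded pickImage; the denial message is just an illustration:

const pickImage = async () => {
  // Explicitly ask for media library access before opening the picker
  const permission = await ImagePicker.requestMediaLibraryPermissionsAsync();
  if (!permission.granted) {
    setResult('Permission to access the photo library was denied');
    return;
  }

  const result = await ImagePicker.launchImageLibraryAsync({
    mediaTypes: ImagePicker.MediaTypeOptions.All,
    allowsEditing: true,
    aspect: [4, 3],
    quality: 1,
  });

  if (!result.cancelled) {
    setPickedImage(result.uri);
  }
};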
The MobileNet model is a pre-trained machine learning model that helps you classify objects. It trades some accuracy for a small footprint and low performance overhead, which makes it a good model to start with for quick, non-sensitive classification tasks. It uses labels from the ImageNet database to classify objects.
First, you need to import MobileNet into your App.tsx file:
import * as mobilenet from '@tensorflow-models/mobilenet';
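As an aside, mobilenet.load() accepts an optional configuration object, so you can pick a model variant that trades accuracy for speed. A minimal sketch (the exact alpha values supported depend on the chosen version):

// Pick a lighter MobileNet variant: version 1 or 2, plus an alpha
// (width multiplier); smaller alphas are faster but less accurate
const model = await mobilenet.load({ version: 2, alpha: 0.5 });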
Next, you’ll need to create a function that prepares the selected image for processing and calls the model to classify the image data:
const classifyUsingMobilenet = async () => {
  try {
    // Load mobilenet
    await tf.ready();
    const model = await mobilenet.load();
    setIsTfReady(true);
    console.log('starting inference with picked image: ' + pickedImage);

    // Convert image to tensor
    const imgB64 = await FileSystem.readAsStringAsync(pickedImage, {
      encoding: FileSystem.EncodingType.Base64,
    });
    const imgBuffer = tf.util.encodeString(imgB64, 'base64').buffer;
    const raw = new Uint8Array(imgBuffer);
    const imageTensor = decodeJpeg(raw);

    // Classify the tensor and show the result
    const prediction = await model.classify(imageTensor);
    if (prediction && prediction.length > 0) {
      setResult(
        `${prediction[0].className} (${prediction[0].probability.toFixed(3)})`
      );
    }
  } catch (err) {
    console.log(err);
  }
};
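Note that tensors in TensorFlow.js hold on to memory that isn't reclaimed by JavaScript's garbage collector, so it's good practice to dispose of the image tensor once inference is done. A small addition you could make after the classify call:

const prediction = await model.classify(imageTensor);
// Release the tensor's memory explicitly; TFJS tensors are not
// cleaned up by the JavaScript garbage collector
imageTensor.dispose();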
Finally, you need to set up the function to be called every time the user picks a new image. You can do so by using the useEffect Hook:
useEffect(() => { classifyUsingMobilenet() }, [pickedImage]);
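One thing to be aware of: this effect also fires on the initial render, when pickedImage is still an empty string. In the code above, the resulting file-read error is simply swallowed by the catch block, which conveniently still loads the model and unlocks the Pick an image button. If you'd prefer to make that explicit, a small guard inside the function keeps the model loading but skips classification until an image exists. A minimal sketch:

const classifyUsingMobilenet = async () => {
  try {
    // Always load the model so the UI can unlock the image picker
    await tf.ready();
    const model = await mobilenet.load();
    setIsTfReady(true);

    // On the initial render no image has been picked yet, so stop here
    if (!pickedImage) return;

    // ...convert the image to a tensor and classify it, as before
  } catch (err) {
    console.log(err);
  }
};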
Make sure to import the Hook at the top of the App.tsx file before using it. This completes our setup of an object classification app using the MobileNet model. When done, your complete App.tsx file should look like the following:
import React, { useEffect, useState } from 'react';
import { View, Text, Image, Button } from 'react-native';
import * as tf from '@tensorflow/tfjs';
import { decodeJpeg } from '@tensorflow/tfjs-react-native';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as cocossd from '@tensorflow-models/coco-ssd';
import * as ImagePicker from 'expo-image-picker';
import * as FileSystem from 'expo-file-system';
import * as jpeg from 'jpeg-js';

const App = () => {
  const [isTfReady, setIsTfReady] = useState(false);
  const [result, setResult] = useState('');
  const [pickedImage, setPickedImage] = useState('');

  const pickImage = async () => {
    // No permissions request is necessary for launching the image library
    let result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.All,
      allowsEditing: true,
      aspect: [4, 3],
      quality: 1,
    });

    if (!result.cancelled) {
      setPickedImage(result.uri);
    }
  };

  const classifyUsingMobilenet = async () => {
    try {
      // Load mobilenet
      await tf.ready();
      const model = await mobilenet.load();
      setIsTfReady(true);
      console.log('starting inference with picked image: ' + pickedImage);

      // Convert image to tensor
      const imgB64 = await FileSystem.readAsStringAsync(pickedImage, {
        encoding: FileSystem.EncodingType.Base64,
      });
      const imgBuffer = tf.util.encodeString(imgB64, 'base64').buffer;
      const raw = new Uint8Array(imgBuffer);
      const imageTensor = decodeJpeg(raw);

      // Classify the tensor and show the result
      const prediction = await model.classify(imageTensor);
      if (prediction && prediction.length > 0) {
        setResult(
          `${prediction[0].className} (${prediction[0].probability.toFixed(3)})`
        );
      }
    } catch (err) {
      console.log(err);
    }
  };

  useEffect(() => {
    classifyUsingMobilenet();
  }, [pickedImage]);

  return (
    <View
      style={{
        height: '100%',
        display: 'flex',
        flexDirection: 'column',
        alignItems: 'center',
        justifyContent: 'center',
      }}
    >
      <Image
        source={{ uri: pickedImage }}
        style={{ width: 200, height: 200, margin: 40 }}
      />
      {isTfReady && <Button title="Pick an image" onPress={pickImage} />}
      <View style={{ width: '100%', height: 20 }} />
      {!isTfReady && <Text>Loading TFJS model...</Text>}
      {isTfReady && result === '' && <Text>Pick an image to classify!</Text>}
      {result !== '' && <Text>{result}</Text>}
    </View>
  );
};

export default App;
Now, you can run the app using the command below:
npx expo start
Once the app loads, you can select images from your gallery and see the model's classification for each one.
Next, we’ll learn how to use another model, Coco SSD, to classify objects.
Coco SSD is a pre-trained object detection model that can identify multiple objects in a single image. Unlike MobileNet, which assigns a label to the whole image, Coco SSD also localizes each object it finds with a bounding box. It is built with the TensorFlow Object Detection API and supports 80 object classes, as listed in the docs. Like MobileNet, it's a handy model for getting started quickly on low-powered machines with fairly light workloads.
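Before wiring it up, it helps to know the shape of what detect() returns: a promise resolving to an array of detections, each carrying a class label, a confidence score, and a bounding box. In TypeScript terms, each entry looks roughly like this:

// Approximate shape of each detection returned by model.detect()
interface DetectedObject {
  bbox: [number, number, number, number]; // [x, y, width, height] in pixels
  class: string; // e.g., "person" or "dog"
  score: number; // confidence between 0 and 1
}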
To start using the Coco SSD model to classify objects in your React Native application, you'll first need to import it in your App.tsx file:
import * as cocossd from '@tensorflow-models/coco-ssd';
Next, you can use the following function to process the selected image and classify it using the Coco SSD model:
const classifyUsingCocoSSD = async () => {
  try {
    // Load Coco-SSD
    await tf.ready();
    const model = await cocossd.load();
    setIsTfReady(true);
    console.log('starting inference with picked image: ' + pickedImage);

    // Read the image and decode it into raw RGBA pixel data
    const imgB64 = await FileSystem.readAsStringAsync(pickedImage, {
      encoding: FileSystem.EncodingType.Base64,
    });
    const imgBuffer = tf.util.encodeString(imgB64, 'base64').buffer;
    const raw = new Uint8Array(imgBuffer);
    const TO_UINT8ARRAY = true;
    const { width, height, data } = jpeg.decode(raw, TO_UINT8ARRAY);

    // Drop the alpha channel: copy each RGBA pixel (4 bytes) into an RGB buffer (3 bytes)
    const buffer = new Uint8Array(width * height * 3);
    let offset = 0;
    for (let i = 0; i < buffer.length; i += 3) {
      buffer[i] = data[offset];
      buffer[i + 1] = data[offset + 1];
      buffer[i + 2] = data[offset + 2];
      offset += 4;
    }
    const imageTensor = tf.tensor3d(buffer, [height, width, 3]);

    // Detect objects in the tensor and show the top result
    const prediction = await model.detect(imageTensor);
    if (prediction && prediction.length > 0) {
      setResult(`${prediction[0].class} (${prediction[0].score.toFixed(3)})`);
    }
  } catch (err) {
    console.log(err);
  }
};
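The function above reports only the highest-confidence detection, but since Coco SSD can find multiple objects in one image, you may want to surface all of them. A minimal variation of the result-handling block:

// List every detected object instead of just the first one
const predictions = await model.detect(imageTensor);
if (predictions.length > 0) {
  setResult(
    predictions.map((p) => `${p.class} (${p.score.toFixed(3)})`).join(', ')
  );
}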
You'll also need to update the useEffect Hook to call this function instead of the MobileNet function from before:
useEffect(() => { classifyUsingCocoSSD() }, [pickedImage]);
When done, your App.tsx file should look like the following code:
import React, { useEffect, useState } from 'react';
import { View, Text, Image, Button } from 'react-native';
import * as tf from '@tensorflow/tfjs';
import { decodeJpeg } from '@tensorflow/tfjs-react-native';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as cocossd from '@tensorflow-models/coco-ssd';
import * as ImagePicker from 'expo-image-picker';
import * as FileSystem from 'expo-file-system';
import * as jpeg from 'jpeg-js';

const App = () => {
  const [isTfReady, setIsTfReady] = useState(false);
  const [result, setResult] = useState('');
  const [pickedImage, setPickedImage] = useState('');

  const pickImage = async () => {
    // No permissions request is necessary for launching the image library
    let result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true,
      aspect: [1, 1],
      quality: 1,
    });

    if (!result.cancelled) {
      setPickedImage(result.uri);
    }
  };

  const classifyUsingMobilenet = async () => {
    try {
      // Load mobilenet
      await tf.ready();
      const model = await mobilenet.load();
      setIsTfReady(true);
      console.log('starting inference with picked image: ' + pickedImage);

      // Convert image to tensor
      const imgB64 = await FileSystem.readAsStringAsync(pickedImage, {
        encoding: FileSystem.EncodingType.Base64,
      });
      const imgBuffer = tf.util.encodeString(imgB64, 'base64').buffer;
      const raw = new Uint8Array(imgBuffer);
      const imageTensor = decodeJpeg(raw);

      // Classify the tensor and show the result
      const prediction = await model.classify(imageTensor);
      if (prediction && prediction.length > 0) {
        setResult(
          `${prediction[0].className} (${prediction[0].probability.toFixed(3)})`
        );
      }
    } catch (err) {
      console.log(err);
    }
  };

  const classifyUsingCocoSSD = async () => {
    try {
      // Load Coco-SSD
      await tf.ready();
      const model = await cocossd.load();
      setIsTfReady(true);
      console.log('starting inference with picked image: ' + pickedImage);

      // Read the image and decode it into raw RGBA pixel data
      const imgB64 = await FileSystem.readAsStringAsync(pickedImage, {
        encoding: FileSystem.EncodingType.Base64,
      });
      const imgBuffer = tf.util.encodeString(imgB64, 'base64').buffer;
      const raw = new Uint8Array(imgBuffer);
      const TO_UINT8ARRAY = true;
      const { width, height, data } = jpeg.decode(raw, TO_UINT8ARRAY);

      // Drop the alpha channel: copy each RGBA pixel (4 bytes) into an RGB buffer (3 bytes)
      const buffer = new Uint8Array(width * height * 3);
      let offset = 0;
      for (let i = 0; i < buffer.length; i += 3) {
        buffer[i] = data[offset];
        buffer[i + 1] = data[offset + 1];
        buffer[i + 2] = data[offset + 2];
        offset += 4;
      }
      const imageTensor = tf.tensor3d(buffer, [height, width, 3]);

      // Detect objects in the tensor and show the top result
      const prediction = await model.detect(imageTensor);
      if (prediction && prediction.length > 0) {
        setResult(`${prediction[0].class} (${prediction[0].score.toFixed(3)})`);
      }
    } catch (err) {
      console.log(err);
    }
  };

  useEffect(() => {
    classifyUsingCocoSSD();
  }, [pickedImage]);

  return (
    <View
      style={{
        height: '100%',
        display: 'flex',
        flexDirection: 'column',
        alignItems: 'center',
        justifyContent: 'center',
      }}
    >
      <Image
        source={{ uri: pickedImage }}
        style={{ width: 200, height: 200, margin: 40 }}
      />
      {isTfReady && <Button title="Pick an image" onPress={pickImage} />}
      <View style={{ width: '100%', height: 20 }} />
      {!isTfReady && <Text>Loading TFJS model...</Text>}
      {isTfReady && result === '' && <Text>Pick an image to classify!</Text>}
      {result !== '' && <Text>{result}</Text>}
    </View>
  );
};

export default App;
You can now run and test the app on your mobile device or emulator and try detecting objects in a few images.
And with that, we've finished building an object classification app in React Native using TensorFlow.js. We covered how to pick, process, and pass images to TensorFlow.js machine learning models, and we learned how to work with two different pre-trained models from the TensorFlow library.
If a pre-trained model doesn't fit your needs, you can always consider retraining an existing TensorFlow model on your own dataset, or setting up a full-fledged machine learning pipeline in the cloud using AWS SageMaker. In the meantime, feel free to explore other tasks like pose detection, face detection, and depth estimation in TensorFlow's official models repository.
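If you do end up training your own model, note that @tensorflow/tfjs-react-native can also load model files bundled with the app via its bundleResourceIO helper. The sketch below assumes you've exported a Layers model to a model.json plus a binary weights file under assets/model (hypothetical paths), and that Metro is configured to treat .bin files as assets:

import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

// Hypothetical asset paths: replace them with your own exported model files
const modelJson = require('./assets/model/model.json');
const modelWeights = require('./assets/model/weights.bin');

const loadCustomModel = async () => {
  await tf.ready();
  // bundleResourceIO serves the model files that ship inside the app binary
  return tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));
};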