Zain Sajjad Head of Product Experience at Peekaboo Guru. In love with mobile machine learning, React, React Native, and UI designing.

AI in browsers: Comparing TensorFlow, ONNX, and WebDNN for image classification



The web has transformed from the world’s most widely used document platform into its most widely used application platform. In the past few years, we have seen tremendous growth in the field of AI, and the web as a platform has made great progress, allowing developers to ship excellent experiences that leverage AI advancements. Today, we have devices with great processing power and browsers capable of leveraging it to the full extent.

Tech giants have invested heavily in making it easier for developers to ship AI features with their web apps. Today, we have many libraries to perform complex AI tasks inside the browser. In this article, we will compare three major libraries that allow us to perform image recognition inside the browser.

Three major image classification libraries

Before we dive in, let’s go over the basics of TensorFlow.js, ONNX.js, and WebDNN (if you’re already familiar with these libraries, feel free to scroll to the next section).


Backed by Google, TensorFlow.js allows users to develop machine learning models in JavaScript and use ML directly in the browser or Node.js. It enables developers to train and execute models in the browser and to retrain the existing model via transfer learning using their data. The recent acquisition of Keras.js has already brought some significant improvements to TensorFlow and is poised to enhance the library’s capabilities further.
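
As a quick illustration, classifying an image in TensorFlow.js typically follows a load-preprocess-predict flow. The model URL, input size, and normalization scheme below are placeholder assumptions that vary by model; the commented calls follow the documented TensorFlow.js API, while the normalization step is shown as runnable plain JavaScript:

```javascript
// Hypothetical TensorFlow.js classification flow (model URL is a placeholder).
// In the browser, the shape of the code is roughly:
//
//   const model = await tf.loadGraphModel('/models/squeezenet/model.json');
//   const input = tf.browser.fromPixels(imgElement)
//     .resizeBilinear([224, 224])
//     .toFloat()
//     .div(255)            // normalization scheme is model-dependent
//     .expandDims(0);
//   const logits = model.predict(input);
//
// The normalization step itself is plain arithmetic. In isolation:
function normalizePixels(pixels) {
  // Map 8-bit channel values in [0, 255] to floats in [0, 1].
  return pixels.map((p) => p / 255);
}

console.log(normalizePixels([0, 51, 255])); // [0, 0.2, 1]
```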


The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. ONNX is developed and supported by a community of partners that includes AWS, Facebook Open Source, Microsoft, AMD, IBM, and Intel AI. ONNX.js uses a combination of Web Workers and WebAssembly to achieve extraordinary CPU performance.
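
The shape of an ONNX.js session is sketched below. The backend hint and model path are placeholders; the commented calls follow the ONNX.js README, and the prediction step on the raw scores is runnable plain JavaScript:

```javascript
// Hypothetical ONNX.js setup (model path is a placeholder):
//
//   const session = new onnx.InferenceSession({ backendHint: 'wasm' });
//   await session.loadModel('/models/squeezenet.onnx');
//   const input = new onnx.Tensor(float32Data, 'float32', [1, 3, 224, 224]);
//   const outputMap = await session.run([input]);
//   const scores = outputMap.values().next().value.data;
//
// Picking the winning class from the scores is plain JavaScript:
function argmax(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return best;
}

console.log(argmax([0.1, 0.7, 0.2])); // 1
```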


Deep neural networks show great promise when it comes to getting accurate results. In contrast to libraries like TensorFlow, MIL WebDNN provides an efficient architecture for deep learning applications such as image recognition and language modeling using convolutional and recurrent neural networks. The framework optimizes the trained DNN model to compress the model data and accelerate execution, and it leverages novel browser APIs such as WebAssembly and WebGPU to achieve zero-overhead execution.
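
Whichever backend executes the model, the raw output still has to be post-processed in plain JavaScript. The WebDNN calls in the comments follow its documented runner API but should be treated as an assumption; the softmax helper is runnable as-is:

```javascript
// Hypothetical WebDNN flow (model path is a placeholder):
//
//   const runner = await WebDNN.load('/webdnn-model');  // picks the best backend
//   runner.getInputViews()[0].set(preprocessedImage);
//   await runner.run();
//   const scores = runner.getOutputViews()[0].toActual();
//
// Converting raw scores into probabilities with a numerically stable softmax:
function softmax(scores) {
  // Subtract the max before exponentiating to avoid overflow.
  const max = Math.max(...scores);
  const exps = scores.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

console.log(softmax([0, 0])); // [0.5, 0.5]
```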

Comparing performance

To evaluate the performance of all three libraries, we developed a React app that uses the SqueezeNet model for image classification. Let’s take a look at the results.
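
A harness along these lines can be used to time inference fairly across libraries (simplified; the real app renders results in React). The `infer` parameter is a stand-in for whichever library's predict call is being measured:

```javascript
// Average the time of several inference runs for a given predict function.
async function timeInference(infer, runs = 5) {
  // Warm-up run so one-time costs (shader/WASM compilation, model
  // initialization) don't skew the measurement.
  await infer();
  const start = Date.now();
  for (let i = 0; i < runs; i++) {
    await infer();
  }
  return (Date.now() - start) / runs; // average milliseconds per inference
}
```

In the browser, `performance.now()` would give higher-resolution timings than `Date.now()`; the structure is the same.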

Inference on CPU

All three libraries support multiple backends but use the CPU as a fallback for older browsers. Besides having WebAssembly and Web Workers as backends, ONNX.js and WebDNN also treat plain JavaScript as a distinct backend. We gave an image of red wine to all three libraries and compared their judgments.

When it comes to CPU inference, as shown below, TensorFlow.js leads with a speed of 1501ms, followed by ONNX.js at 2195ms. Both WebDNN and ONNX.js also have WASM backends, which can be considered CPU backends as well since they don’t use the GPU.

TensorFlow, ONNX, and WebDNN inference on CPU

Inference on WebAssembly

WASM has emerged as one of the best performance boosters for web apps, and it is now available for use with all the major browsers. WASM enables developers to deliver performant experiences on devices without GPU. The image below shows how the libraries judged red wine using WASM.

TensorFlow, ONNX, and WebDNN inference on WebAssembly

ONNX.js and WebDNN both scored high here; figures such as 135ms (ONNX.js) and 328ms (WebDNN) aren’t too far from GPU performance. ONNX’s speed is due to its wise use of the web worker to offload many calculations from the main thread.

Inference on WebGL

WebGL is based on OpenGL. It provides developers with a great API to perform complex calculations in an optimized way. All of these libraries use WebGL as a backend to provide boosted results.

TensorFlow, ONNX, and WebDNN inference on WebGL

As shown above, ONNX.js takes the lead here with 48ms, compared to TensorFlow.js’s 69ms. WebDNN isn’t really in this race; they may be preparing for WebGL2 or perhaps focusing more on WebMetal.

Note: These results were obtained using Safari on a MacBook Pro (2018), 2.2GHz 6-Core Intel Core i7, 16GB 2400MHz DDR4, Intel UHD Graphics 630 1536MB.

Backends supported

There are four backends available in modern browsers:

  1. WebMetal — Compute on GPU via the WebMetal API. This is the fastest of the four backends, but it is currently supported only in Safari. Apple originally proposed this API as WebGPU in 2017 and renamed it to WebMetal in 2019
  2. WebGL — Today, all major browsers are shipped with the support of WebGL. It is up to 100 times faster than the vanilla CPU backend
  3. WebAssembly — A binary instruction format for a stack-based virtual machine, WebAssembly aims to execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms
  4. PlainJS — Compute on CPU by ECMAScript3. This backend is only for backward compatibility and is not very fast
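
The availability of these backends can be probed with simple feature checks, which is roughly how a library decides what to use. The sketch below is an assumption-laden illustration; in particular, the `'webmetal'` canvas context name is Safari-specific and experimental:

```javascript
// Probe which inference backends the current environment could support.
function detectBackends(env = globalThis) {
  const canvas = typeof env.document !== 'undefined'
    ? env.document.createElement('canvas')
    : null;
  return {
    plainjs: true, // ECMAScript fallback, always available
    wasm: typeof env.WebAssembly === 'object',
    webgl: !!(canvas && canvas.getContext('webgl')),
    // Experimental Safari-only context name (assumption):
    webmetal: !!(canvas && canvas.getContext('webmetal')),
  };
}

console.log(detectBackends());
```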

All three libraries support both CPU and WebGL backends. WebDNN takes the lead by also letting you leverage the experimental WebMetal feature. ONNX.js, meanwhile, smartly combines WASM and Web Workers to make CPU inference more efficient.

Library/Browser   CPU            WebAssembly    WebGL   WebMetal
TensorFlow.js     ✔              ✘              ✔       ✘
ONNX.js           ✔ [+ Worker]   ✔ [+ Worker]   ✔       ✘
WebDNN            ✔              ✔              ✔       ✔

Browser support

Supporting all of the major browsers across different operating systems is a major challenge when handling heavy computational tasks. The chart below compares browser support for these libraries.

Library/Browser   Chrome   Firefox   Safari        Edge   IE
TensorFlow.js     ✔        ✔         ✔             ✔      ✘
ONNX.js           ✔        ✔         ✔             ✔      ..
WebDNN            ✔        ✔         ✔ + WebGPU    ✔      ✘

Popularity and adoption

Popularity and adoption are also important parameters. The chart below shows the download trend for each of the three major libraries over a six-month period.

Popularity and adoption of TensorFlow, ONNX, and WebDNN

(Source: npm trends)

As you can see, TensorFlow.js is far ahead in the race for adoption compared to other ML libraries available today. However, ONNX.js and WebDNN are ahead in performance, indicating a promising future for both.


TensorFlow.js, ONNX.js, and WebDNN all have their own advantages, and any one of them can serve as a strong foundation for your next AI-based web app. We found ONNX.js to be the most promising library when it comes to performance, while TensorFlow.js has the highest adoption rate. WebDNN, meanwhile, is focused on leveraging modern hardware and, as a result, has made significant improvements recently.

In addition to the three major libraries we compared in this post, there are other libraries you can check out for performing tasks other than image recognition in browsers.
