The web has transformed from the world’s most widely used document platform into its most widely used application platform. Recent years have also seen tremendous growth in the field of AI, and the web platform has kept pace, enabling developers to ship excellent experiences that leverage these advancements. Today, we have devices with great processing power and browsers capable of leveraging it to the full extent.
Tech giants have invested heavily in making it easier for developers to ship AI features with their web apps. Today, we have many libraries to perform complex AI tasks inside the browser. In this article, we will compare three major libraries that allow us to perform image recognition inside the browser.
Three major image classification libraries
Before we dive in, let’s go over the basics of TensorFlow.js, ONNX.js, and WebDNN (if you’re already familiar with these libraries, feel free to scroll to the next section).
TensorFlow.js is an open-source library, backed by Google, that lets you define, train, and run machine learning models entirely in the browser using JavaScript.

The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. ONNX is developed and supported by a community of partners that includes AWS, Facebook OpenSource, Microsoft, AMD, IBM, and Intel AI. ONNX.js uses a combination of web workers and WebAssembly to achieve extraordinary CPU performance.

WebDNN, developed by the Machine Intelligence Laboratory at the University of Tokyo, is a framework for fast execution of deep neural network models in the browser.
To evaluate the performance of all three libraries, we developed a React app that uses the SqueezeNet model for image classification. Let’s take a look at the results.
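Whichever library runs the model, a classifier like SqueezeNet produces a vector of class probabilities that the app has to turn into human-readable predictions. As a minimal sketch, here is a hypothetical `topK` helper (our own, not part of any of the three libraries) that pairs probabilities with labels and keeps the most likely ones:

```javascript
// Hypothetical helper: map a SqueezeNet-style probability vector to the
// top-K (label, probability) pairs. `labels` is assumed to be the
// ImageNet class list, loaded elsewhere in the app.
function topK(probabilities, labels, k = 5) {
  return Array.from(probabilities)
    .map((p, i) => ({ label: labels[i], probability: p }))
    .sort((a, b) => b.probability - a.probability)
    .slice(0, k);
}
```

A React component would call something like `topK(output, imagenetLabels)` after each inference and render the result.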
Inference on CPU
When it comes to CPU inference, as shown below, TensorFlow.js leads with a magnificent speed of 1501ms, followed by ONNX.js at 2195ms. Both WebDNN and ONNX.js have other WASM backends that can be considered CPU backends as well, since they don’t use the GPU.
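For numbers like these to be comparable, each measurement needs a warm-up run (the first inference typically pays one-off costs such as kernel compilation and buffer allocation) followed by an average over several timed runs. A minimal, library-agnostic sketch of that benchmarking loop (the `benchmark` helper is our own, hypothetical name):

```javascript
// Minimal benchmarking sketch: `infer` is any async function that runs
// one inference (e.g., a wrapper around a library's predict/run call).
async function benchmark(infer, runs = 10) {
  await infer(); // warm-up run, excluded from timing
  const start = Date.now();
  for (let i = 0; i < runs; i++) {
    await infer();
  }
  return (Date.now() - start) / runs; // mean latency in ms
}
```

In a browser, `performance.now()` would give finer-grained timestamps than `Date.now()`.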
Inference on WebAssembly
WASM has emerged as one of the best performance boosters for web apps, and it is now available in all major browsers. WASM enables developers to deliver performant experiences even on devices without a GPU. The image below shows how the libraries performed using WASM.
ONNX.js and WebDNN both scored high here; figures such as 135ms (ONNX.js) and 328ms (WebDNN) aren’t too far from GPU performance. ONNX.js owes its speed to its wise use of web workers to offload many calculations from the main thread.
Inference on WebGL
WebGL is based on OpenGL. It provides developers with a great API to perform complex calculations in an optimized way. All of these libraries use WebGL as a backend to provide boosted results.
As shown above, ONNX.js takes the lead here with 48ms, compared to TensorFlow.js’s 69ms. WebDNN isn’t really in this race; its developers may be preparing for WebGL2 or perhaps focusing more on WebMetal.
Note: These results were obtained using Safari on a MacBook Pro (2018), 2.2GHz 6-Core Intel Core i7, 16GB 2400MHz DDR4, Intel UHD Graphics 630 1536MB.
There are four backends available in modern browsers:
- WebMetal — Compute on GPU via the WebMetal API. This is the fastest of the four backends, but it is currently supported only in Safari. Apple originally proposed this API as WebGPU in 2017 and renamed it to WebMetal in 2019
- WebGL — Today, all major browsers ship with support for WebGL. It is up to 100 times faster than the vanilla CPU backend
- WebAssembly — A binary instruction format for a stack-based virtual machine, WebAssembly aims to execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms
- PlainJS — Compute on CPU by ECMAScript3. This backend is only for backward compatibility and is not very fast
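Given that preference order, a library (or an app choosing between libraries) typically falls back through the backends from fastest to slowest. A hypothetical `pickBackend` helper sketches the selection logic, assuming feature flags for the current browser have been detected elsewhere:

```javascript
// Hypothetical backend picker: prefer the fastest backend available,
// in the order described above (WebMetal > WebGL > WebAssembly > PlainJS).
function pickBackend(features) {
  if (features.webmetal) return 'webmetal';
  if (features.webgl) return 'webgl';
  if (features.wasm) return 'webassembly';
  return 'plainjs'; // always available as a last resort
}
```

In TensorFlow.js, for example, the equivalent choice is made explicitly with `tf.setBackend('webgl')` or `tf.setBackend('cpu')`.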
All three libraries support both CPU and WebGL backends. WebDNN takes the lead by additionally letting you leverage the experimental WebMetal feature. ONNX.js, meanwhile, smartly combines WASM and web workers to make CPU inferencing more efficient.
Supporting all of the major browsers across different operating systems is a major challenge when handling heavy computational tasks. The chart below compares browser support for these libraries.
| | Chrome | Firefox | Safari | Edge | IE |
| --- | --- | --- | --- | --- | --- |
| WebDNN | ✔ | ✔ | ✔ + WebGPU | ✔ | ✔ |
Popularity and adoption
Popularity and adoption are also important parameters. The chart below shows the download trend for each of the three major libraries over a six-month period.
(Source: npm trends)
As you can see, TensorFlow.js is far ahead in the race for adoption compared to other ML libraries available today. However, ONNX.js and WebDNN are ahead in performance, indicating a promising future for both.
TensorFlow.js, ONNX.js, and WebDNN each have their own advantages, and any one of them can serve as a strong foundation for your next AI-based web app. We found ONNX.js to be the most promising library when it comes to performance, while TensorFlow.js has the highest adoption rate. WebDNN, meanwhile, is focused on leveraging modern hardware and, as a result, has made significant improvements recently.
In addition to the three major libraries we compared in this post, you can also check out the following libraries to perform tasks other than image recognition in browsers: