Three.js is the most popular 3D WebGL library, powering countless 3D experiences like landing pages, VR rooms, games, and even entire 3D editors! If you're interested in developing, say, a 3D editor for modeling or 3D printing, or a procedural geometry generator, you might consider bringing SVGs to the party.

In this tutorial, I'll show you how you can bring your vector graphics into Three.js with its SVGLoader, and how to extrude and preview them in 3D!

Let's start with the basics. We'll install the required dependencies, configure the Vite build tool, and set up the Three.js scene.

First, start a new project from Vite's "vanilla" template and install Three.js:
# npm 6.x
npm init @vitejs/app svg-threejs --template vanilla

# npm 7+, extra double-dash is needed:
npm init @vitejs/app svg-threejs -- --template vanilla

cd svg-threejs
npm install three
npm run dev
With those few lines, the development environment is all set up.
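For orientation, here's roughly the file layout we'll end up with. Vite's vanilla template generates the first few files (the exact scaffold can vary slightly between Vite versions), while scene.js, svg.js, and example.js are modules we'll add ourselves as we go:

svg-threejs/
├── index.html
├── favicon.svg
├── style.css
├── main.js
├── scene.js      (Three.js scene setup)
├── svg.js        (SVG loading and extrusion)
├── example.js    (sample SVG string)
└── package.json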
Next, we'll make a few changes to the default HTML and CSS files:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="favicon.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Three.js SVG extruder</title>
  </head>
  <body>
    <div id="app"></div>
    <div class="controls">
      <input type="range" min="1" max="50" id="input" />
    </div>
    <script type="module" src="/main.js"></script>
  </body>
</html>
In HTML, add a type=range input field to control the level of SVG extrusion. Then, in CSS, position and style it to your needs. In the example below, I position the slider and size the top-level elements so that the Three.js canvas covers the entire window.
html,
body,
#app {
  height: 100%;
  margin: 0;
  overflow: hidden;
}

.controls {
  position: fixed;
  bottom: 1rem;
  right: 1rem;
}
With this done, we can move on to JavaScript and start building the Three.js scene.
Starting in the main.js file created by Vite, we access the DOM elements, listen to the input event for future extrusion change handling, and delegate creating the Three.js scene to another module: scene.js.
import "./style.css"; import { setupScene } from "./scene"; const defaultExtrusion = 1; const container = document.querySelector("#app"); const extrusionInput = document.querySelector("#input"); const scene = setupScene(container); extrusionInput.addEventListener("input", () => { // Handle extrusion change }); extrusionInput.value = defaultExtrusion;
The heavy lifting in the scene.js file is all focused on creating a Three.js scene:
import * as THREE from "three";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls";

const setupScene = (container) => {
  const scene = new THREE.Scene();
  const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
  const camera = new THREE.PerspectiveCamera(
    50,
    window.innerWidth / window.innerHeight,
    0.01,
    1e7
  );
  const ambientLight = new THREE.AmbientLight("#888888");
  const pointLight = new THREE.PointLight("#ffffff", 2, 800);
  const controls = new OrbitControls(camera, renderer.domElement);
  const animate = () => {
    renderer.render(scene, camera);
    controls.update();

    requestAnimationFrame(animate);
  };

  renderer.setSize(window.innerWidth, window.innerHeight);
  scene.add(ambientLight, pointLight);
  camera.position.z = 50;
  camera.position.x = 50;
  camera.position.y = 50;
  controls.enablePan = false;
  container.append(renderer.domElement);
  window.addEventListener("resize", () => {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(window.innerWidth, window.innerHeight);
  });
  animate();

  return scene;
};

export { setupScene };
Now, I'll assume you've got some knowledge of Three.js. If not, there are some great guides on the web, including on this very blog. With that said, here's a general overview of what's going on.
First, the basic parts of every Three.js scene are created: the scene, renderer, and camera. Notice the options for the THREE.WebGLRenderer (turning on anti-aliasing and background transparency), which are important for making the app look good.
Then, there are lights and THREE.OrbitControls. These are necessary for properly illuminating the materials we'll be using and for allowing easy control of the 3D view, respectively.
Lastly, there's the render loop, plus additional settings like the camera position, the renderer's viewport size, and a window resize handler. The function returns the THREE.Scene instance for easy access from the main module.
With the scene set up, it's time to load some SVG files! For that, we'll move to another module: svg.js.
import * as THREE from "three";
import { SVGLoader } from "three/examples/jsm/loaders/SVGLoader";

const fillMaterial = new THREE.MeshBasicMaterial({ color: "#F3FBFB" });
const strokeMaterial = new THREE.LineBasicMaterial({
  color: "#00A5E6",
});
const renderSVG = (extrusion, svg) => {
  const loader = new SVGLoader();
  const svgData = loader.parse(svg);

  // ...
};

export { renderSVG };
Here, you can see the materials that will be used for the extruded geometry. Using separate fill and stroke materials makes it easier to visualize the source SVG shapes and their extrusion in 3D space.
Next, there's our focal point: the renderSVG() function that'll use the SVGLoader to load and later extrude the SVG shapes. But before we do that, let's take a quick look at the SVGLoader API.
SVGLoader extends the Three.js Loader class, inheriting and extending its methods and properties, most notably load(), loadAsync(), and parse().
These three methods are responsible for the majority of SVGLoader's functionality. All of them result in an array of ShapePath instances, just in different ways.
// ...
const loader = new SVGLoader();
const svgUrl = "..."; // SVG URL
const svg = "..."; // SVG data

loader.load(svgUrl, (data) => {
  const shapePaths = data.paths;
  // ...
});
// or
loader.loadAsync(svgUrl).then((data) => {
  const shapePaths = data.paths;
  // ...
});
// or
const data = loader.parse(svg);
const shapePaths = data.paths;
The gist is that you'll always use at least one of these methods when working with SVGLoader, depending on how you want to access the SVG data. For more detailed info, you can refer to the official docs.
Once you have the ShapePaths, you need to convert them to an array of Shapes. To do that, you should use the SVGLoader.createShapes() static method, like so:
shapePaths.forEach((path) => {
  const shapes = SVGLoader.createShapes(path);

  // ...
});
From here, all that's left is to generate ExtrudeGeometry from the available shapes.
To extrude our SVG-originated Shapes, we need to update the renderSVG() function.
// ...
const renderSVG = (extrusion, svg) => {
  const loader = new SVGLoader();
  const svgData = loader.parse(svg);
  const svgGroup = new THREE.Group();
  const updateMap = [];

  svgGroup.scale.y *= -1;
  svgData.paths.forEach((path) => {
    const shapes = SVGLoader.createShapes(path);

    shapes.forEach((shape) => {
      const meshGeometry = new THREE.ExtrudeBufferGeometry(shape, {
        depth: extrusion,
        bevelEnabled: false,
      });
      const linesGeometry = new THREE.EdgesGeometry(meshGeometry);
      const mesh = new THREE.Mesh(meshGeometry, fillMaterial);
      const lines = new THREE.LineSegments(linesGeometry, strokeMaterial);

      updateMap.push({ shape, mesh, lines });
      svgGroup.add(mesh, lines);
    });
  });

  const box = new THREE.Box3().setFromObject(svgGroup);
  const size = box.getSize(new THREE.Vector3());
  const yOffset = size.y / -2;
  const xOffset = size.x / -2;

  // Offset all of group's elements, to center them
  svgGroup.children.forEach((item) => {
    item.position.x = xOffset;
    item.position.y = yOffset;
  });
  svgGroup.rotateX(-Math.PI / 2);

  return {
    object: svgGroup,
    update(extrusion) {
      updateMap.forEach((updateDetails) => {
        const meshGeometry = new THREE.ExtrudeBufferGeometry(
          updateDetails.shape,
          {
            depth: extrusion,
            bevelEnabled: false,
          }
        );
        const linesGeometry = new THREE.EdgesGeometry(meshGeometry);

        updateDetails.mesh.geometry.dispose();
        updateDetails.lines.geometry.dispose();
        updateDetails.mesh.geometry = meshGeometry;
        updateDetails.lines.geometry = linesGeometry;
      });
    },
  };
};
Let's break down what's happening here.
First, you'll notice that in between the SVG loading bits, we create a THREE.Group to hold all our extruded shapes. It's flipped on the Y-axis (SVG's Y-axis points down, while Three.js' points up), and later on, we offset it, rotate it into position, and center it in the scene. This ensures an optimal user experience when using the OrbitControls, so that with no panning, the controls orbit primarily around the object's base.
There's also some important code inside the shapes loop, where we generate the THREE.ExtrudeBufferGeometry from the shapes. As we don't need to interact with these geometries in any complex way, opting for buffer geometries improves performance at no additional cost.
We also use THREE.EdgesGeometry, together with THREE.LineSegments, to highlight the edges.
Meshes are added to the group, and the required details are saved to our updateMap. This is used in the returned update() method to correctly update the geometry according to the selected extrusion. To do that, we create new geometries and dispose of the old ones to free up memory.
With the renderSVG() function ready, we can now go back to the main.js module and put it to good use.
// ...
import { renderSVG } from "./svg";
import { svg } from "./example";

// ...
const { object, update } = renderSVG(defaultExtrusion, svg);

scene.add(object);
extrusionInput.addEventListener("input", () => {
  update(Number(extrusionInput.value));
});
// ...
From example.js, I'll export an SVG string for testing. Here, it's imported and passed to renderSVG() along with the default extrusion. The resulting object is destructured, with the THREE.Group added to the scene and the update() method used to handle extrusion changes.
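If you need something to test with, here's a rough sketch of what example.js could contain. The exact SVG markup is entirely up to you; the simple placeholder graphic below is just a hypothetical example:

// example.js
// Any valid SVG markup works here; this placeholder graphic is only for testing
const svg = `
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">
  <circle cx="32" cy="32" r="20" fill="#00A5E6" />
  <rect x="8" y="8" width="16" height="16" fill="#F3FBFB" />
</svg>
`;

export { svg };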
And with that, we've got the base SVG extruder ready!
See the Pen "Three.js SVG extruder" by Arek Nawo (@areknawo) on CodePen.
Naturally, the app above is fairly basic, and it could benefit from additional functionality. The first thing that comes to mind is a "focus" feature that would adjust the OrbitControls and camera when changing the extrusion. Let's take a look!
We'll put this function in the scene.js module, as it's closely related.
// ...
// Inspired by https://discourse.threejs.org/t/camera-zoom-to-fit-object/936/3
const fitCameraToObject = (camera, object, controls) => {
  const boundingBox = new THREE.Box3().setFromObject(object);
  const center = boundingBox.getCenter(new THREE.Vector3());
  const size = boundingBox.getSize(new THREE.Vector3());
  const offset = 1.25;
  const maxDim = Math.max(size.x, size.y, size.z);
  const fov = camera.fov * (Math.PI / 180);
  const cameraZ = Math.abs((maxDim / 4) * Math.tan(fov * 2)) * offset;
  const minZ = boundingBox.min.z;
  const cameraToFarEdge = minZ < 0 ? -minZ + cameraZ : cameraZ - minZ;

  controls.target = center;
  controls.maxDistance = cameraToFarEdge * 2;
  controls.minDistance = cameraToFarEdge * 0.5;
  controls.saveState();
  camera.position.z = cameraZ;
  camera.far = cameraToFarEdge * 3;
  camera.updateProjectionMatrix();
};

export { fitCameraToObject, setupScene };
The steps are as follows:

1. Calculate the object's bounding box, along with its center and size
2. Use the object's largest dimension and the camera's field of view to work out a camera distance that fits the whole object in view
3. Set the OrbitControls target to orbit the camera around the object, and prevent it from zooming in and out too much by adjusting the maxDistance and minDistance properties
4. Position the camera, extend its far plane, and update the projection matrix

The setupScene() function also needs an adjustment to get easy access to the camera and controls instances.
// ...
const setupScene = (container) => {
  // ...

  return { scene, camera, controls };
};
// ...
Then, just add a #focus button to the .controls container in HTML (shown below), and edit main.js to integrate all the changes.
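For reference, the updated controls markup could look something like this; the button label is arbitrary, and only the #focus id matters here:

<div class="controls">
  <input type="range" min="1" max="50" id="input" />
  <!-- New button used to trigger the "focus" feature -->
  <button id="focus">Focus</button>
</div>

With the markup in place, main.js ties everything together: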
// ...
import { fitCameraToObject, setupScene } from "./scene";

// ...
const focusButton = document.querySelector("#focus");
const { scene, camera, controls } = setupScene(container);

// ...
focusButton.addEventListener("click", () => {
  fitCameraToObject(camera, object, controls);
});
// ...
That's how we add focus functionality to our 3D app!
See the Pen "Three.js SVG extruder with focus" by Arek Nawo (@areknawo) on CodePen.
As you can see, Three.js is a very powerful library. Its SVGLoader, as well as its countless other APIs, make it very versatile.

With an idea, some learning, and time, you can use Three.js to bring native-like 3D experiences to the web like never before. The sky's the limit!