Modern browsers are quietly eating the JavaScript ecosystem alive.
Over the past two years, the major engines have shipped native Web APIs that replace a surprising number of the utilities we still install by default. Yet most developers keep reaching for the same familiar packages anyway. If a dependency has always worked, it stays in the stack, even when the browser already does the same job.
That reflex costs more than it seems. Every extra package adds weight, maintenance overhead, version churn, and long-term abandonment risk. Native APIs ship 0KB to users, run deep in the engine (often off the main thread), and benefit from optimizations userland libraries can’t match.
If you’re trying to shrink bundles, improve runtime performance, or audit dependency bloat, it’s worth knowing which “defaults” have become optional.
In this article, we’ll walk through 10 Web APIs that replace common modern JavaScript libraries, with honest browser support numbers and clear guidance on when libraries still win. If you’ve read my piece on CSS replacing JavaScript workarounds, consider this the sequel. This time, we’re going after your entire dependency tree. First on our list is fetch().
Replaces: Axios, jQuery.ajax, request libraries
At some point, Axios became the default HTTP client for every JavaScript project. Start a new React app? npm install axios. The reasoning was fair back in 2016: XMLHttpRequest was painful, and Axios gave you a clean promise-based API with interceptors, automatic JSON parsing, and request cancellation.
The thing is, fetch() has been available in all major browsers since 2017. It’s promise-based, supports streaming, handles every HTTP method, and works in both browser and Node.js contexts. Yet developers still install Axios by reflex.
Here’s a typical Axios setup:
import axios from 'axios';
const api = axios.create({
baseURL: 'https://api.example.com',
timeout: 5000,
headers: { 'Content-Type': 'application/json' }
});
// GET
const { data: users } = await api.get('/users');
// POST
const { data: newUser } = await api.post('/users', {
name: 'Marvel',
role: 'developer'
});
This is clean, but here’s the native equivalent:
const BASE_URL = 'https://api.example.com';
// GET
const res = await fetch(`${BASE_URL}/users`);
const users = await res.json();
// POST
const res2 = await fetch(`${BASE_URL}/users`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ name: 'Marvel', role: 'developer' })
});
const newUser = await res2.json();
The only real difference is that fetch() doesn’t throw on HTTP error status codes. A 404 response resolves the promise successfully; you have to check res.ok yourself. Axios throws automatically.
That trips people up, but it’s a one-line guard:
if (!res.ok) throw new Error(`HTTP ${res.status}`);
For request cancellation, pair fetch with AbortController:
const controller = new AbortController();
fetch('/api/data', { signal: controller.signal });
// Cancel the request
controller.abort();
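For timeouts, which Axios handles with its `timeout` option, `AbortSignal.timeout()` gets you the same behavior without wiring up a controller by hand. A minimal sketch (the function name and default are illustrative):

```javascript
// Fetch with an automatic timeout via AbortSignal.timeout()
// (supported in all modern browsers and recent Node.js versions).
// If the timer fires before the response, fetch rejects with a TimeoutError.
async function fetchWithTimeout(url, ms = 5000, options = {}) {
  return fetch(url, { ...options, signal: AbortSignal.timeout(ms) });
}
```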
Browser support: All modern browsers.
Axios still wins: If you’re building a simple app with a few API calls, fetch is plenty. If you’re building a large app with centralized auth handling, retry strategies, and request/response transformations across plenty of endpoints, Axios’s interceptor pattern earns its bundle weight.
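That said, if your needs stop at a shared base URL, default headers, and automatic error throwing, a few native lines cover it. A sketch, where `BASE_URL` and the header defaults are placeholders:

```javascript
// Minimal fetch wrapper covering the common Axios conveniences:
// base URL, default headers, JSON parsing, and throwing on HTTP errors.
const BASE_URL = 'https://api.example.com';

async function api(path, { headers, ...options } = {}) {
  const res = await fetch(`${BASE_URL}${path}`, {
    ...options,
    headers: { 'Content-Type': 'application/json', ...headers },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} on ${path}`);
  return res.json();
}
```

Anything beyond that (retry queues, token refresh, request deduplication) is where the interceptor pattern starts to pay for itself.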
Replaces: Form serialization libraries, file upload helpers.
Before FormData, collecting form values for submission meant either leaning on Axios’s automatic serialization, writing the collection logic yourself, or, more commonly, letting React Hook Form or Formik handle everything from validation to submission so you never had to think about serialization at all.
The catch is that most of those libraries do far more than you need. If all you want is to grab form values and send them to an endpoint, you’re importing an entire form management system to do a one-liner’s job.
The manual approach looked something like this:
function getFormValues(form) {
const data = {};
const inputs = form.querySelectorAll('input, select, textarea');
inputs.forEach(input => {
if (input.type === 'checkbox') {
data[input.name] = input.checked;
} else if (input.type === 'file') {
data[input.name] = input.files[0];
} else {
data[input.name] = input.value;
}
});
return data;
}
That’s before you even deal with multi-select fields, radio buttons, or disabled inputs. And file uploads? You’d need to read the file, encode it, and set the right content-type headers yourself. It’s a lot to manage.
With FormData you can handle it this way:
const form = document.querySelector('#myForm');
const formData = new FormData(form);
That’s it. It grabs every field, handles files automatically, and when you pass it to fetch, the browser sets the correct multipart/form-data content type with the right boundary:
const form = document.querySelector('#upload-form');
form.addEventListener('submit', async (e) => {
e.preventDefault();
const formData = new FormData(form);
// Files, text fields, checkboxes — all handled
await fetch('/api/upload', {
method: 'POST',
body: formData // No manual content-type needed
});
});
You can also build FormData programmatically:
const formData = new FormData();
formData.append('name', 'Marvel');
formData.append('avatar', fileInput.files[0]);
formData.append('tags', JSON.stringify(['react', 'typescript']));
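And if your endpoint expects JSON rather than multipart, FormData converts to a plain object in one line, with one caveat worth knowing:

```javascript
// Convert FormData to a plain object for JSON endpoints.
// Caveat: Object.fromEntries keeps only the LAST value for a repeated
// field name, so use getAll() for multi-value fields like checkboxes.
const fd = new FormData();
fd.append('name', 'Marvel');
fd.append('tags', 'react');
fd.append('tags', 'typescript');

const flat = Object.fromEntries(fd.entries());
// flat.tags === 'typescript' (last value wins)

const payload = { ...flat, tags: fd.getAll('tags') };
// payload.tags === ['react', 'typescript']
```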
Browser support: All major browsers since July 2015. This API is older than most npm packages trying to replace it.
When to still use libraries: FormData grabs values and ships them. That’s about it. It doesn’t validate inputs, manage state across steps, or conditionally show fields based on what the user picked three screens ago. If you’re building a multi-step checkout or an onboarding wizard where field “A ”determines what shows up in step “B”, React Hook Form and Formik still earn their bundle weight.
Replaces: URI.js
Parsing URLs in JavaScript used to mean one of two things: regex or npm install. The URI.js package still pulls 1.4 million weekly downloads for something the browser handles natively; it’s just muscle memory at the end of the day.
Here’s the kind of code developers wrote (and some still write):
// The regex approach
function getQueryParam(url, param) {
const regex = new RegExp('[?&]' + param + '=([^&#]*)');
const match = url.match(regex);
return match ? decodeURIComponent(match[1]) : null;
}
// Or string splitting
const query = window.location.search.substring(1);
const pairs = query.split('&');
const params = {};
pairs.forEach(pair => {
const [key, value] = pair.split('=');
params[key] = decodeURIComponent(value);
});
The URL and URLSearchParams APIs handle all of this natively:
const url = new URL('https://example.com/search?q=web+apis&page=2&sort=recent#results');
url.hostname; // 'example.com'
url.pathname; // '/search'
url.hash; // '#results'
url.searchParams.get('q'); // 'web apis' (decoded automatically)
url.searchParams.has('sort'); // true
// Modify params
url.searchParams.set('page', '3');
url.searchParams.append('filter', 'new');
url.searchParams.delete('sort');
url.toString();
// 'https://example.com/search?q=web+apis&page=3&filter=new#results'
Parsing, encoding, decoding, and modification are all handled. URLSearchParams also works for building query strings from scratch:
const params = new URLSearchParams({ q: 'hello world', page: '1' });
params.toString(); // 'q=hello+world&page=1'
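URLSearchParams is also iterable, and getAll() handles repeated keys, which the regex approach never did cleanly:

```javascript
// Repeated keys and iteration, no regex required
const search = new URLSearchParams('tag=react&tag=typescript&page=2');

search.getAll('tag'); // ['react', 'typescript']

const pairs = [];
for (const [key, value] of search) {
  pairs.push(`${key}=${value}`);
}
// pairs: ['tag=react', 'tag=typescript', 'page=2']
```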
Browser support: All major browsers. URLSearchParams has been supported everywhere since April 2018. Works in Node.js 10+ too.
When libraries are still better: Libraries like URI.js are mostly unnecessary now. The native URL and URLSearchParams APIs cover 95% of what developers actually need.
Replaces: Tippy.js, Floating UI
Tooltips, dropdowns, and popovers have been a JavaScript problem for as long as anyone can remember. You need an element to float above everything else, positioned relative to a trigger, dismissible when the user clicks outside, and accessible via keyboard. That’s a lot of moving parts, so developers reach for libraries like Tippy.js or Floating UI, and @floating-ui/react alone pulls over 10 million weekly downloads.
Here’s what a basic popover looks like with Floating UI in React:
import { useFloating, offset, flip, shift, autoUpdate } from '@floating-ui/react';
import { useState } from 'react';
function Popover() {
const [isOpen, setIsOpen] = useState(false);
const { refs, floatingStyles } = useFloating({
open: isOpen,
placement: 'bottom',
middleware: [offset(8), flip(), shift()],
whileElementsMounted: autoUpdate,
});
return (
<>
<button
ref={refs.setReference}
onClick={() => setIsOpen(!isOpen)}
>
Toggle popover
</button>
{isOpen && (
<div ref={refs.setFloating} style={floatingStyles}>
Popover content
</div>
)}
</>
);
}
That’s the minimal setup, and it doesn’t even include light dismiss (clicking outside to close), focus trapping, or keyboard handling.
The Popover API does all of this with HTML attributes:
<button popovertarget="my-popover">Toggle popover</button>
<div id="my-popover" popover>Popover content</div>
Two lines, and the browser handles top-layer rendering (no z-index fight), light dismiss (click outside or press Escape to close), focus management (tab navigates into the popover automatically), and accessible keyboard bindings, all natively.
If you would like to control it with JavaScript, you can also take this approach:
const popover = document.querySelector('#my-popover');
popover.showPopover(); // Open
popover.hidePopover(); // Close
popover.togglePopover(); // Toggle
For styling, the :popover-open pseudo-class targets the popover when it’s visible, and ::backdrop lets you add effects behind it:
#my-popover {
padding: 1rem;
border: 1px solid #ddd;
border-radius: 8px;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
}
#my-popover::backdrop {
background: rgba(0, 0, 0, 0.15);
}
#my-popover:popover-open {
animation: fadeIn 0.2s ease-out;
}
@keyframes fadeIn {
from { opacity: 0; transform: translateY(-4px); }
to { opacity: 1; transform: translateY(0); }
}
The popover attribute has two modes: popover="auto" (the default) gives you light dismiss and auto-closes other open popovers. popover="manual" keeps the popover open until you explicitly close it, useful for persistent notifications or multi-step flows.
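For instance, a persistent save notification with popover="manual" could look like this (the ids and copy are illustrative):

```html
<!-- popover="manual": no light dismiss; stays open until explicitly closed -->
<div id="save-toast" popover="manual">
  Changes saved.
  <button popovertarget="save-toast" popovertargetaction="hide">Dismiss</button>
</div>
```

The popovertargetaction="hide" attribute makes the button only close the popover rather than toggle it.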
Browser support: Baseline Newly available since January 2025. Chrome 114+, Firefox 125+, Safari 17+.
When to still use libraries: The Popover API doesn’t handle positioning. Anchoring a popover to a specific side of the trigger with collision detection and viewport flipping is where Floating UI still wins.
CSS Anchor Positioning is coming to solve that too, but browser support isn’t there yet. So if you need a tooltip that intelligently flips from bottom to top when it hits the viewport edge, Floating UI still earns its spot. But for action menus, notification toasts, teaching UIs, and basic disclosure patterns, the browser handles it now.
Replaces: clipboard.js, copy-to-clipboard
Every “copy to clipboard” button you’ve ever built probably used clipboard.js or the copy-to-clipboard package. And before those existed, the hack was even worse: create a hidden <textarea>, inject the text into it, select it programmatically, run document.execCommand('copy'), then remove the element from the DOM.
That was the standard approach for years, and it was as fragile as it sounds.
// The old hack
function copyToClipboard(text) {
const textarea = document.createElement('textarea');
textarea.value = text;
textarea.style.position = 'fixed';
textarea.style.opacity = '0';
document.body.appendChild(textarea);
textarea.select();
document.execCommand('copy');
document.body.removeChild(textarea);
}
That’s 9 lines of DOM manipulation to copy a string. And document.execCommand('copy') has been deprecated for years.
The Clipboard API replaces all of that with:
await navigator.clipboard.writeText('Hello, clipboard!');
That’s it. Promise-based, async, clean. Reading from the clipboard is just as simple:
const text = await navigator.clipboard.readText();
For a real “copy” button implementation:
const copyBtn = document.querySelector('#copy-btn');
copyBtn.addEventListener('click', async () => {
try {
await navigator.clipboard.writeText(codeSnippet.textContent);
copyBtn.textContent = 'Copied!';
} catch (err) {
copyBtn.textContent = 'Failed to copy';
}
});
One thing to know: the Clipboard API only works in secure contexts (HTTPS) and requires the page to be focused. The browser handles permission prompts natively; no library needs to manage that for you. That’s a security win, not a limitation.
Browser support: writeText and readText work in all major browsers. The more advanced write() method (for copying images and rich content) has slightly patchier support.
When to still use libraries: If you’re copying complex clipboard formats like rich HTML or images across all browsers, library wrappers still smooth out the inconsistencies. For text, which is 95% of copy-to-clipboard use cases, the native API is all you need.
Replaces: Window resize event listeners, element-resize-detector, react-resize-detector
Tracking element dimensions used to mean listening to window.resize and hoping for the best. The problem is that window.resize fires when the viewport changes, not when a specific element changes. Your sidebar could collapse or your container could reflow, and the listener would never fire.
So developers wrote workarounds: polling getBoundingClientRect() on intervals, attaching throttled resize listeners and recalculating layouts manually, or installing packages like element-resize-detector or react-resize-detector to do what the browser should’ve handled natively.
// The old way: throttled polling
let lastWidth = 0;
window.addEventListener('resize', throttle(() => {
const el = document.querySelector('.card');
const width = el.getBoundingClientRect().width;
if (width !== lastWidth) {
lastWidth = width;
updateLayout(width);
}
}, 200));
This is expensive. Every call to getBoundingClientRect() forces the browser to recalculate layout. Do it in a loop or on frequent events, and you’re going to get the performance problem you were trying to avoid.
ResizeObserver watches individual elements and fires a callback when their dimensions change, regardless of what caused the change:
const observer = new ResizeObserver((entries) => {
for (const entry of entries) {
const width = entry.contentBoxSize[0].inlineSize;
if (width < 400) {
entry.target.classList.add('compact');
} else {
entry.target.classList.remove('compact');
}
}
});
observer.observe(document.querySelector('.card'));
The browser handles the batching and debouncing internally, no throttle utility needed, and it reacts to any size change.
The real use case here is container-aware components: a card that switches from horizontal to vertical layout based on its own width (not the viewport), a chart component that redraws when its container shrinks, or a dashboard panel that adapts its content density based on how much space it actually has.
Browser support: All major browsers since July 2020.
When to still use libraries: Container queries (@container) in CSS now handle a lot of the responsive-component use cases ResizeObserver was solving. If your layout logic is purely visual (switch styles at breakpoints), container queries are the better tool. ResizeObserver should still be used when you need JavaScript to react to size changes.
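For comparison, the purely visual version of the compact-card example above fits in a few lines of container query CSS (class names are illustrative):

```css
/* Make the card's wrapper a size container */
.card-wrapper {
  container-type: inline-size;
}

/* Style the card based on its container's width, not the viewport's */
@container (max-width: 400px) {
  .card {
    flex-direction: column;
  }
}
```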
Replaces: Motion(Framer Motion), GSAP (for page transitions)
Page transitions have always been a JavaScript problem. And before this section sounds bigger than it is: the View Transitions API doesn’t replace our dear Motion; it replaces one feature of it.
Say you want a smooth crossfade between routes, or a slide animation when navigating between views. You’d most likely reach for one of these libraries. Not a bad choice if it’s already your go-to, but here’s the kind of setup you end up running:
import { AnimatePresence, motion } from "framer-motion";
import { useLocation } from "react-router-dom";
function App() {
const location = useLocation();
return (
<AnimatePresence mode="wait">
<motion.div
key={location.pathname}
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
exit={{ opacity: 0, y: -20 }}
transition={{ duration: 0.3 }}
>
<Routes location={location}>
<Route path="/" element={<Home />} />
<Route path="/about" element={<About />} />
</Routes>
</motion.div>
</AnimatePresence>
);
}
That’s Framer Motion wrapping every route transition. It works, and it looks great. But you’re pulling in ~30KB of animation library for what amounts to a crossfade between two pages.
The View Transitions API does this natively, and for free. The browser takes a snapshot of the current DOM, you update it however you want, and it animates between the old and new states. Here’s what it looks like:
function navigateTo(url) {
if (!document.startViewTransition) {
updateContent(url);
return;
}
document.startViewTransition(() => updateContent(url));
}
The browser captures the old state, calls your callback, captures the new state, and runs a crossfade using CSS animations on the compositor thread. You can add styles:
::view-transition-old(root) {
animation: fade-out 0.3s ease-in;
}
::view-transition-new(root) {
animation: fade-in 0.3s ease-out;
}
And if you also need specific elements to morph between views, assign matching view-transition-name values:
.product-thumbnail { view-transition-name: hero-image; }
.product-hero { view-transition-name: hero-image; }
The browser smoothly morphs the thumbnail into the hero. The same effect that takes 50+ lines of Motion layoutId config becomes three lines of CSS.
Browser support: Chrome 111+, Safari 18+, Firefox 144+ (shipped October 2025). Same-document view transitions are Baseline Newly available across all three major engines. Cross-document transitions work in Chrome 126+ and Safari 18.2+.
When to still use libraries: Spring physics, complex timeline choreography with staggered children, interruptible gesture-driven animations, or precise playback controls (pause, reverse, seek). The View Transitions API replaces the 80% case: page-level transitions and element morphing. The remaining 20% is where Framer Motion and GSAP justify their bundle weight.
Replaces: react-modal, @headlessui/react dialogs, custom modal implementations
Modals are one of those things that look simple until you actually build one properly. If you’d like to build one yourself, keep these in mind: you need to trap focus inside the dialog, prevent scrolling on the body, handle the Escape key to close, return focus to the trigger when it closes, and render above everything else without z-index fights.
That’s why libraries like react-modal and Headless UI exist; they handle the accessibility and layering logic for you. Their efforts are highly appreciated.
The typical React modal setup with react-modal:
import Modal from 'react-modal';
Modal.setAppElement('#root');
function App() {
const [isOpen, setIsOpen] = useState(false);
return (
<>
<button onClick={() => setIsOpen(true)}>Open modal</button>
<Modal
isOpen={isOpen}
onRequestClose={() => setIsOpen(false)}
contentLabel="Example Modal"
style={{
overlay: { backgroundColor: 'rgba(0, 0, 0, 0.5)' },
content: { maxWidth: '500px', margin: 'auto' }
}}
>
<h2>Modal Title</h2>
<p>Some content here</p>
<button onClick={() => setIsOpen(false)}>Close</button>
</Modal>
</>
);
}
And here’s the native <dialog> equivalent:
<button onclick="document.getElementById('my-dialog').showModal()">
Open modal
</button>
<dialog id="my-dialog">
<h2>Modal Title</h2>
<p>Some content here</p>
<button onclick="this.closest('dialog').close()">Close</button>
</dialog>
In the code above, calling .showModal() gives you a true modal: a ::backdrop pseudo-element for the overlay, focus trapping (Tab stays inside the dialog), Escape key to close, and focus restoration when the dialog closes. All the accessibility behavior that react-modal implements in JavaScript, the browser does for free.
Feel free to style it however you want, or copy mine:
dialog {
max-width: 500px;
border: none;
border-radius: 12px;
padding: 2rem;
box-shadow: 0 8px 30px rgba(0, 0, 0, 0.12);
}
dialog::backdrop {
background: rgba(0, 0, 0, 0.5);
backdrop-filter: blur(4px);
}
For non-modal dialogs (ones that don’t block the rest of the page), use .show() instead of .showModal(). And if you need a return value from the dialog (like a form submission), the returnValue property handles that:
dialog.addEventListener('close', () => {
console.log(dialog.returnValue); // Whatever was passed to dialog.close('value')
});
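A common pattern here is a form with method="dialog" inside the dialog: submitting it closes the dialog and sets returnValue to the submit button’s value, with no close handler needed. A sketch:

```html
<dialog id="confirm-dialog">
  <form method="dialog">
    <p>Delete this item?</p>
    <!-- Submitting closes the dialog; returnValue becomes the button's value -->
    <button value="cancel">Cancel</button>
    <button value="confirm">Delete</button>
  </form>
</dialog>
```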
Browser support: All major browsers. Chrome 37+, Firefox 98+, Safari 15.4+. This has been Baseline since March 2022. There’s no reason not to use it.
When to still use libraries: If you’re building a design system that needs animation on open/close (the dialog element doesn’t animate natively by default, though you can pull that off with CSS transitions), or if you need portal behavior in React where the modal must live outside the component tree for state management reasons. For everything else (confirmation dialogs, alert modals, form dialogs), the native element does more than most libraries, with zero JavaScript dependencies.
Replaces: Moment.js, date-fns, Day.js, Luxon
Date handling in JavaScript has been broken since 1995. The Date object mutates when you touch it, months are zero-indexed (January is 0, December is 11), timezone support is basically nonexistent, and parsing the same date string gives different results in different browsers. To be honest, Day.js does it for me: I install it and pretend the native Date object doesn’t exist.
The Temporal API fixes all of this at the language level. It’s not a browser API; it’s a JavaScript language feature (TC39 Stage 3) that introduces immutable date/time objects, first-class timezone support, nanosecond precision, and separate types for the different things “date” can mean.
Here’s the mess with Date:
// Months are 0-indexed. This is February, not January.
const date = new Date(2025, 1, 14);
// Mutation. This modifies the original object.
date.setMonth(11);
// date is now December 14, 2025. The original is gone.
// Timezone chaos.
new Date('2025-02-14'); // Different results in different browsers
With Temporal:
// Clear, readable, immutable
const date = Temporal.PlainDate.from('2025-02-14');
// Returns a NEW object. Original is untouched.
const later = date.add({ months: 3 }); // 2025-05-14
console.log(date.toString()); // Still 2025-02-14
// Timezone-aware, no guessing
const meeting = Temporal.ZonedDateTime.from('2025-02-14T10:00[America/New_York]');
const inLagos = meeting.withTimeZone('Africa/Lagos');
console.log(inLagos.toString()); // 2025-02-14T16:00:00+01:00[Africa/Lagos]
// Duration math that actually makes sense
const start = Temporal.PlainDate.from('2025-01-01');
const end = Temporal.PlainDate.from('2025-03-15');
const diff = start.until(end);
console.log(diff.toString()); // P2M14D (2 months, 14 days)
Temporal gives you separate types for separate concerns: PlainDate for dates without time, PlainTime for time without dates, PlainDateTime for both, ZonedDateTime for timezone-aware moments, and Instant for exact points in time. No more guessing which “date” your Date object actually represents.
Browser support: This is the honest part. Temporal is not Baseline yet. Firefox 139+ shipped it in May 2025, and Chrome 144 followed in January 2026. But Safari and Edge haven’t shipped it, meaning you can’t use it in production without a polyfill (@js-temporal/polyfill or temporal-polyfill). It’s coming, but it’s not here for everyone yet.
When to still use libraries: Right now, if you need cross-browser support, date-fns and Day.js are still the practical choice. They’re lightweight, and work everywhere. But keep Temporal on your radar. Once Safari ships it, Moment.js, Luxon, and every other date library in your package.json become candidates for deletion. This is the one API on this list where the migration story is “not yet, but soon.”
Replaces: IP-based location services, geoip-lite, third-party geolocation APIs
When developers need a user’s location, the first instinct is usually a third-party service. Call an IP geolocation API like ipapi.co or ip-api.com, parse the response, and get a rough city-level location. Some apps install geoip-lite on the server. Others pay for services like MaxMind or IPinfo. All of this for coordinates that might be 50 kilometers off because IP-based location is really just an educated guess.
Here’s the IP lookup approach:
// Third-party API call
const res = await fetch('https://ipapi.co/json/');
const data = await res.json();
console.log(data.city); // "Lagos" (maybe)
console.log(data.latitude); // 6.4541 (roughly)
console.log(data.longitude); // 3.3947 (roughly)
That’s an external HTTP request, a dependency on a third-party service (with pricing tiers), and city-level accuracy at best. If the user is on a VPN, you get the VPN server’s location. If the ISP routes traffic through a different region, the coordinates are wrong.
The Geolocation API gives you GPS-level precision directly from the user’s device:
navigator.geolocation.getCurrentPosition(
(position) => {
console.log(position.coords.latitude); // 6.5244 (precise)
console.log(position.coords.longitude); // 3.3792 (precise)
console.log(position.coords.accuracy); // Accuracy in meters
},
(error) => {
console.error('Location access denied:', error.message);
},
{ enableHighAccuracy: true }
);
The browser asks the user for permission, the user decides, and the device returns coordinates using GPS, Wi-Fi, or cell tower triangulation, whichever is available. On mobile devices with GPS, you’re looking at accuracy within a few meters, not a few kilometers.
For apps that need continuous tracking (ride-sharing, delivery, fitness), watchPosition fires a callback every time the device moves:
const watchId = navigator.geolocation.watchPosition(
(position) => {
updateMap(position.coords.latitude, position.coords.longitude);
},
(error) => console.error(error),
{ enableHighAccuracy: true }
);
// Stop watching when done
navigator.geolocation.clearWatch(watchId);
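Since getCurrentPosition is callback-based, a small Promise wrapper makes it fit async/await code. A sketch; the geo parameter defaults to navigator.geolocation and is injectable only to make the example easy to exercise outside a browser:

```javascript
// Promisified getCurrentPosition. The `geo` parameter defaults to
// navigator.geolocation; it's injectable purely for illustration/testing.
function getPosition(options = { enableHighAccuracy: true }, geo = navigator.geolocation) {
  return new Promise((resolve, reject) => {
    geo.getCurrentPosition(resolve, reject, options);
  });
}

// Usage in the browser:
// const { coords } = await getPosition();
// console.log(coords.latitude, coords.longitude);
```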
The privacy model is also cleaner. The user grants or denies permission through the browser’s native UI, there are no cookie consent banners for location tracking, and no hidden IP lookups. The user is in control.
We saw the limits of the old way when X rolled out its location visibility feature in late 2025. My guess is that it relied on IP-based geolocation to flag where accounts were based, and the feature wasn’t entirely accurate: journalists in New York were flagged as being in London because of corporate VPNs, and rural users were shown as being in different states because their ISP routed traffic through a major hub.
That’s the risk of guessing location on the server. The next time a platform wants to verify where its users actually are, it shouldn’t look at IP logs; it should call navigator.geolocation and ask the device itself. If the user is actually standing in Lagos, GPS will say so, VPN or not.
Browser support: All major browsers. HTTPS only. This has been universally supported for years.
When to still use IP geolocation: When you need location without asking the user’s permission, for example, default currency selection. IP geolocation works silently in the background. The Geolocation API requires explicit user consent, which means a permission prompt. For any use case where precision matters and the user is willing to share, the native API is the right call.
This isn’t a call to delete your package.json. Third-party libraries exist for valid reasons: better developer experience, edge case handling, ecosystem integration, and faster iteration. In many cases, they remain the right choice.
The goal is simply to shift the default reflex. Before reaching for npm install, check whether a modern Web API already solves the problem. Browsers now provide native solutions for many tasks that once required dependencies, from data handling to performance management and background processing.
If you’re auditing dependencies, reducing bundle size, or modernizing your frontend architecture, start by asking a simple question: does the browser already support this? Making that check part of your workflow can lead to smaller bundles, fewer maintenance risks, and a more resilient codebase built closer to the platform itself.
