Developers are the primary users of programming APIs. We often care about the UI and UX of our product but overlook the importance of creating a good UX for APIs.
That might not cause problems in the initial stages of product development, but once the API is consumed by multiple groups of developers with different needs, it easily becomes a bottleneck for development speed and product execution.
In this post, we’re going to talk about how you can avoid this issue and make sure your API scales smoothly as the product grows.
We'll discuss some of the best practices and guidelines for building better UX for APIs, especially the widely used RESTful API.
This is not a guide that says, 'this is the best way to build a REST API'. Every product has different requirements; these are general guidelines to give your REST API a better DX (developer experience).
You won’t make a good API by blindly following web standards. RESTful is a flexible architectural style for creating APIs. It doesn’t dictate how to do it — instead, it just tells you what you’ll need to keep in mind during design.
Below, we'll walk through some basic tips for REST API design.
In this post, we’re going to create a mock API for a job board along these guidelines.
A REST API revolves around creating resources. Essentially, a resource is a logical splitting of your application.
It doesn't need to map one-to-one onto your data models. Because a single resource can draw on multiple data models, it's broader than plain CRUD.
For example, in our job board, we can have multiple resources, some of which use multiple data models in their operations.
Inside these resources, there will be multiple operations — not just CRUD for a data model. In the next section, we’ll explore how to use HTTP verbs and URLs to separate these operations.
There are several HTTP verbs – GET, POST, PUT, PATCH, DELETE. Each of these verbs has a specific function, and combined with a resource, they let a single URL expose multiple operations.
For example:
GET /jobs – Retrieves all the jobs
GET /jobs/1234 – Retrieves the specific job with the job ID 1234
POST /jobs – Creates a new job listing
PUT /jobs/1234 – Updates the job with the job ID 1234
DELETE /jobs/1234 – Deletes the job with the job ID 1234
PATCH /jobs/1234 – Updates parts of the job with the job ID 1234. It is similar to PUT, but PUT replaces the whole job, whereas PATCH updates only specific parts of the job data.

A quick tip: don't construct the URL like this:
POST /createJobs to create a job ❌
GET /getAllJobs to fetch all the jobs ❌
GET /getJobById to get a specific job with an ID ❌

This approach will work, and it's also a REST API. There's no rule saying you can't use a REST API this way.
However, this approach doesn’t scale well.
It would be a nightmare for the developer using it, and they’d need to go through the documentation each time to check the URL schema needed for a specific operation.
I'd advise using a noun, not a verb, for resource URLs. That way, users can tell how to update or delete a resource just by looking at the URL.
POST /jobs – Create a job ✅
GET /jobs – Retrieve all jobs ✅
Using this template for URLs helps developers easily understand that they need to send a DELETE request to /jobs/:id to delete a job.
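To make this concrete, here's a minimal sketch of those verb-plus-noun routes in an Express-style Node server. The framework choice, the in-memory store, and the handler bodies are assumptions for illustration, not part of the original example.

const express = require('express');
const app = express();
app.use(express.json());

// In-memory store used only to make the sketch runnable
const jobs = new Map();

// The resource is a noun (/jobs); the HTTP verb carries the action
app.get('/jobs', (req, res) => res.json([...jobs.values()]));

app.get('/jobs/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).json({ errors: [{ status: 'NotFound', message: 'Job not found' }] });
  res.json(job);
});

app.post('/jobs', (req, res) => {
  const id = Date.now().toString();
  jobs.set(id, { id, ...req.body });
  res.status(201).json(jobs.get(id));
});

app.put('/jobs/:id', (req, res) => {
  jobs.set(req.params.id, { id: req.params.id, ...req.body }); // replace the whole job
  res.json(jobs.get(req.params.id));
});

app.patch('/jobs/:id', (req, res) => {
  jobs.set(req.params.id, { ...jobs.get(req.params.id), ...req.body }); // merge only the provided parts
  res.json(jobs.get(req.params.id));
});

app.delete('/jobs/:id', (req, res) => {
  jobs.delete(req.params.id);
  res.status(204).end();
});

app.listen(3000);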
Always fall back to a default content type if the request doesn't explicitly specify one.
Nowadays, JSON is the de facto default, and you should send a Content-Type header with every response so users know what type of content the API URL returns.
Some common content type headers include application/json, application/xml, text/html, and multipart/form-data.
Resources often have a lot of relationships, so we might need to fetch those relations through nested resources. This can be tricky if the nested resources are not defined properly.
In our job board example, a job can have multiple applications. You can fetch those through the job resource itself.
For example:
GET /jobs/1234/applications – Get all applications for a specific job ID (1234)
GET /jobs/1234/applications/123 – Get the specific application with the application ID (123) for the job with the job ID (1234)
GET /companies/12345/applications – Get all applications for a specific company (12345)

Here you can see that both Jobs and Companies have a relation to the Applications resource.
In such cases, it's inadvisable to create new applications through a nested resource. Instead, retrieve applications through the nested resources, but create new ones through the Applications resource itself. In other words, use POST /applications to create a new application, which will contain information about the specific job.
This is the most efficient approach under certain circumstances, but not all. Ultimately, it depends on the use case.
If an application's only direct connection is to jobs and not to companies, then this approach will work. You can create an application for a job with POST /jobs/1234/applications.
Still, it’s always good to separate resources and avoid nesting as much as possible.
In general, try not to go deeper than one level of nesting and make sure to split into separate resources logically.
In our use case, using filtering can help us to avoid nesting:
GET /applications?jobId=1234 – This will fetch all the applications for the specific job with that ID
GET /applications?companyId=12345 – This will fetch all the applications for the specific company with that ID

Filters can also be based on fields:

GET /jobs?jobType=Remote – This fetches the jobs with jobType: Remote
GET /jobs?categories=developers,designers,marketers – Filters can also be an array. In this case, it fetches all the jobs within the categories developers, designers, and marketers
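As a rough sketch, these filters map naturally onto query-string handling in a handler. The data, field names, and helper below are illustrative assumptions, not part of the original post.

// Hypothetical helper: apply query-string filters to a list of jobs
function filterJobs(jobs, query) {
  let results = jobs;
  if (query.jobType) {
    results = results.filter((job) => job.jobType === query.jobType);
  }
  if (query.categories) {
    const wanted = query.categories.split(','); // array filters arrive comma-separated
    results = results.filter((job) => job.categories.some((c) => wanted.includes(c)));
  }
  return results;
}

// Usage for GET /jobs?jobType=Remote&categories=developers,designers
const sampleJobs = [
  { title: 'Frontend dev', jobType: 'Remote', categories: ['developers'] },
  { title: 'Office manager', jobType: 'Onsite', categories: ['operations'] },
];
console.log(filterJobs(sampleJobs, { jobType: 'Remote', categories: 'developers,designers' }));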
There are two types of search: general search and field-based search.
General search can be passed as a query string with either q or search as the key. For example: /jobs?q=searchterm
Field-based searches are the same as filtering based on fields. Some fields filter on exact matches, while others allow partial, regex-based matches. For example, /jobs?title=marketing ninja searches for jobs whose title partially matches marketing ninja.
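Here is a sketch of how both kinds of search could be applied to the same list. The field names and the choice of case-insensitive matching are assumptions for the example.

// Hypothetical helpers for general and field-based search
function generalSearch(jobs, term) {
  const re = new RegExp(term, 'i');
  // General search scans a few text fields for a partial match
  return jobs.filter((job) => re.test(job.title) || re.test(job.description));
}

function searchByTitle(jobs, title) {
  const re = new RegExp(title, 'i'); // partial, case-insensitive match on one field
  return jobs.filter((job) => re.test(job.title));
}

// Usage for /jobs?q=designer and /jobs?title=marketing ninja
const sampleJobs = [
  { title: 'Marketing Ninja (Senior)', description: 'Own our campaigns' },
  { title: 'Product Designer', description: 'Design the job board' },
];
console.log(generalSearch(sampleJobs, 'designer'));
console.log(searchByTitle(sampleJobs, 'marketing ninja'));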
We all know what a specific HTTP status code means – 200, 4xx, 5xx, 302 etc.
We use those status codes to let the API consumer know exactly what happened when processing their request. Using them consistently is the key to a good API user experience.
It’s important to note that you don’t need to support all the HTTP status codes, but you should try to support the HTTP status codes that align with what your API needs.
You don't want to send a Not Found error with a status code of 200. That's bad practice, and it leaves the user confused about whether an error happened at all.
Here are some examples of HTTP status codes used for successful requests: 200 OK, 201 Created, 202 Accepted, and 204 No Content.
The following are a few status codes for errors: 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, and 500 Internal Server Error.
It’s also a good idea to send the details of client errors in responses so the API user can show error details to their end user.
A sample response with a proper error response is as follows:
// A sample response
{
  errors: [
    {
      'status': 'InvalidError',
      'message': 'Invalid value for email',
      ... // Other details of the error
    },
    {
      ... // Next error object
    }
  ],
  data: {
    ... // Any data
  }
}
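As a rough sketch of how a server might produce that shape, here is a small helper. The helper name and error fields are hypothetical, not from the original post.

// Hypothetical helper that wraps validation failures in the error envelope above
function buildErrorResponse(validationErrors, data = {}) {
  return {
    errors: validationErrors.map((err) => ({
      status: err.status,   // e.g. 'InvalidError'
      message: err.message, // human-readable detail for the end user
      field: err.field,     // which input field failed
    })),
    data,
  };
}

// Usage: respond with 400 and the structured error body
// res.status(400).json(buildErrorResponse([{ status: 'InvalidError', message: 'Invalid value for email', field: 'email' }]));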
If an API action is doing an asynchronous operation in the background, send a response to the user immediately. Don’t wait for the process to end to send a response with the appropriate status code.
Usually, you'll use 202 Accepted in this case. This doesn't mean that the operation is complete, just that the request has been accepted.
Email triggers and heavy calculations are common examples of asynchronous operations.
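For illustration, here's a minimal sketch of that pattern in an Express-style handler. The route, the queue function, and the use of setImmediate as a stand-in for a real job queue are assumptions.

const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical endpoint that kicks off a slow background job
app.post('/jobs/:id/notifications', (req, res) => {
  // Queue the work instead of awaiting it (setImmediate stands in for a real job queue)
  setImmediate(() => sendEmailsForJob(req.params.id));

  // Respond right away: accepted, but not yet completed
  res.status(202).json({ status: 'accepted', jobId: req.params.id });
});

// Stand-in for the actual slow operation
function sendEmailsForJob(jobId) {
  console.log(`Sending notification emails for job ${jobId}...`);
}

app.listen(3000);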
Allow your API users to select the fields they want. By default, send them all relevant data.
If the user explicitly asks for specific details, send only the requested details. That way your API will have the flexibility to send the exact data clients ask for.
Example:
GET /jobs?fields=id,title,description,jobType,categories – This returns only the fields explicitly passed in the fields query string.

Data models often hold ID references to other models. If your response time is slow, don't expand objects from multiple models by default when resolving resources.
For example, the following code snippet shows a jobs response with jobType and categories as IDs:
// GET /jobs
[
  {
    title: 'Job title',
    description: 'Job description',
    jobType: 1233043949238923, // ID ref to jobType model
    categories: [ // ID refs to categories model
      1029102901290129,
      0232392930920390,
    ]
  },
  {
    ... // Job Objects
  }
]
Next, we’ll expand the jobType and Categories data using an explicit request: GET /jobs?expand=jobType,categories
// GET /jobs?expand=jobType,categories
[
  {
    title: 'Job title',
    description: 'Job description',
    jobType: 'Remote', // Resolved from jobType model
    categories: [ // Resolved from categories model
      { name: 'Front end developer' },
      { name: 'React developer' },
    ]
  },
  {
    ... // Job Objects
  }
]
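A rough sketch of how a handler might honor the fields parameter is shown below. The helper is hypothetical and skips the expand lookups, which would depend on your data layer.

// Hypothetical helper: keep only the requested fields on each object
function selectFields(items, fieldsParam) {
  if (!fieldsParam) return items; // no ?fields= given: return everything
  const fields = fieldsParam.split(',');
  return items.map((item) =>
    Object.fromEntries(fields.filter((f) => f in item).map((f) => [f, item[f]]))
  );
}

// Usage with a sample job list
const sampleJobs = [
  { id: 1, title: 'Job title', description: 'Job description', jobType: 'Remote', internalNotes: 'hidden' },
];
console.log(selectFields(sampleJobs, 'id,title,jobType'));
// -> [ { id: 1, title: 'Job title', jobType: 'Remote' } ]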
Every resource can have a different default sorting order, so it's better to give API users the flexibility to sort based on fields. It's pretty easy to support responses in both ascending and descending order.
For example:
GET /jobs?sort=createdDate – This sorts the response by createdDate in ascending order
GET /jobs?sort=-createdDate – This sorts in reverse (descending) order
GET /jobs?sort=-createdDate,title – This sorts by multiple values (createdDate in descending order and title in ascending order)

You don't need to follow this exact convention; it completely depends on the framework you're using. This is just a general example of how you can support sorting for your resources.
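As a sketch, a ?sort= value like the ones above might be parsed into a comparator roughly like this. The helper is hypothetical, and the date strings are assumed to be in a lexically sortable (ISO) format.

// Hypothetical helper: turn '-createdDate,title' into a comparator
function buildComparator(sortParam) {
  const keys = sortParam.split(',').map((k) =>
    k.startsWith('-') ? { field: k.slice(1), dir: -1 } : { field: k, dir: 1 }
  );
  return (a, b) => {
    for (const { field, dir } of keys) {
      if (a[field] < b[field]) return -1 * dir;
      if (a[field] > b[field]) return 1 * dir;
    }
    return 0;
  };
}

// Usage
const jobs = [
  { title: 'B', createdDate: '2024-01-02' },
  { title: 'A', createdDate: '2024-01-02' },
  { title: 'C', createdDate: '2024-03-01' },
];
console.log(jobs.sort(buildComparator('-createdDate,title')));
// createdDate descending first, then title ascending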
For smaller resources, you don't need to use pagination.
However, once the response exceeds a certain size, pagination comes to the rescue. Make your pagination implementation simple and explicit.
For example:
GET /jobs?page=2&size=10 – Here, page denotes the page number and size denotes the limit on the number of jobs per page. In this example, page 2 contains jobs 11-20.

In the response, we'll send the API user the relevant page information along with the content:
// Sample paginated list example
{
  data: [
    {
      ... // actual response data
    }
  ],
  pageInfo: {
    currentPage: 2,
    hasNextPage: false,
    hasPrevPage: true,
    ... // Add any more pagination related information
  }
}
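A minimal sketch of how a handler might build that response, assuming an in-memory list. The names here are illustrative only.

// Hypothetical helper: slice a list into a page plus page metadata
function paginate(items, page = 1, size = 10) {
  const start = (page - 1) * size;
  const data = items.slice(start, start + size);
  return {
    data,
    pageInfo: {
      currentPage: page,
      hasNextPage: start + size < items.length,
      hasPrevPage: page > 1,
      totalItems: items.length,
    },
  };
}

// Usage: GET /jobs?page=2&size=10 would map to
// const { page = 1, size = 10 } = req.query;
// res.json(paginate(allJobs, Number(page), Number(size)));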
So far, we’ve covered the bare minimum concepts you’d need to know to create a REST API.
Now we’re going to switch gears and discuss some advanced concepts for creating a developer-friendly, production-ready RESTful API.
Developers often hate HATEOAS, and not just because ‘hate’ is in the name itself. I’m not going to get into what HATEOAS is — I’m just going to tell you what it does.
HATEOAS is a way to explicitly send all related resource URLs to your endpoints. It allows consumers to easily navigate between your resources without having to build the URL themselves.
This is one of the main concepts behind RESTful APIs. It allows the API user to have an awareness of different operations on any given resource and its related resources.
For example:
GET /jobs – Gets all jobs

Its response with HATEOAS looks like this:
// HATEOAS links are in the links section
{
  data: [{...job1}, {...job2}, {...job3}, ...],
  links: [
    // GET all applications
    {
      "rel": "applications",
      "href": "https://example.com/applications",
      "action": "GET",
      "types": ["text/xml", "application/json"]
    },
    {
      "rel": "jobs",
      "href": "https://example.com/jobs",
      "action": "POST",
      "types": ["application/json"]
    },
    {
      "rel": "jobs",
      "href": "https://example.com/jobs",
      "action": "DELETE",
      "types": []
    }
  ]
}
All related links are added onto the response itself. It helps the API user navigate between resources and different actions.
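One way to generate those links is with a small helper per resource. This is an illustrative sketch only; the base URL and link shapes are assumptions mirroring the example above.

// Hypothetical helper that builds HATEOAS links for the jobs resource
const BASE_URL = 'https://example.com';

function jobCollectionLinks() {
  return [
    { rel: 'applications', href: `${BASE_URL}/applications`, action: 'GET', types: ['application/json'] },
    { rel: 'jobs', href: `${BASE_URL}/jobs`, action: 'POST', types: ['application/json'] },
    { rel: 'jobs', href: `${BASE_URL}/jobs`, action: 'DELETE', types: [] },
  ];
}

// Usage in a handler:
// res.json({ data: jobs, links: jobCollectionLinks() });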
Always authenticate and authorize users before allowing them to complete any action that will alter the data.
You should also limit access to all sensitive information by protecting it behind an authorization wall. Only public information should be available to users who don’t complete the necessary authentication and authorization.
Here are some tips to keep in mind during authentication and authorization:
If a user isn't allowed to perform an action, send a 403 Forbidden response.
If a user isn't authenticated, send a 401 Unauthorized response.
If a user's credentials or token are invalid or expired, also send a 401 Unauthorized response.

Security is a broad topic. At the API level, stick to well-established best practices: serve everything over HTTPS, validate and sanitize input, and never expose sensitive data in responses or error messages.
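Here's a minimal sketch of how that 401/403 split might look as Express-style middleware. The token check, role field, and placeholder helpers are hypothetical, not a real auth implementation.

const express = require('express');
const app = express();

// Hypothetical authentication middleware: missing or invalid credentials -> 401
function authenticate(req, res, next) {
  const token = req.headers.authorization;
  if (!token || !isValidToken(token)) {
    return res.status(401).json({ errors: [{ status: 'Unauthorized', message: 'Missing or invalid credentials' }] });
  }
  req.user = decodeToken(token); // attach the caller's identity
  next();
}

// Hypothetical authorization middleware: authenticated but not allowed -> 403
function requireRole(role) {
  return (req, res, next) => {
    if (req.user.role !== role) {
      return res.status(403).json({ errors: [{ status: 'Forbidden', message: 'Not allowed to perform this action' }] });
    }
    next();
  };
}

// Placeholder token helpers for the sketch
function isValidToken(token) { return token === 'Bearer demo-token'; }
function decodeToken() { return { id: 1, role: 'employer' }; }

// Only employers may create jobs
app.post('/jobs', authenticate, requireRole('employer'), (req, res) => res.status(201).json({ ok: true }));

app.listen(3000);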
An API is a contract between users and developers. When you make a significant change in the schema, it’s common to forget about the contract and break things for existing API clients.
This is where API versioning comes in.
For example:
GET /v1/jobs – Fetches version 1 of the API, which sends the XML response
GET /v2/jobs – Fetches version 2, which sends the JSON response by default

This way, we won't break the API for existing consumers. Instead, we can show a deprecation warning wherever necessary and ask existing users to move to the new version of the API.
Versioning also helps you out in other ways, such as letting you roll out breaking changes gradually and support legacy clients while they migrate.
Some examples of widely-used versioning methods include number-based and date-based versioning.
Finally, versioning doesn't need to be in the URL. Some APIs, like the GitHub REST API, pass the version in a header:
Accept: application/vnd.github.v3+json
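Here's a sketch of URL-based versioning with Express routers. The split between v1 and v2 handlers is purely illustrative.

const express = require('express');
const app = express();

// Each version gets its own router so old clients keep working
const v1 = express.Router();
v1.get('/jobs', (req, res) => {
  res.type('application/xml').send('<jobs><job><title>Job title</title></job></jobs>');
});

const v2 = express.Router();
v2.get('/jobs', (req, res) => {
  res.json([{ title: 'Job title' }]);
});

app.use('/v1', v1);
app.use('/v2', v2);

app.listen(3000);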
Most APIs don’t call for rate limiting, but it can add some basic security to your API.
There are several levels at which you can rate limit, such as per user, per API key, or per IP address.
This is how Github does rate limiting for their API:
curl -i https://api.github.com/users/octocat

HTTP/1.1 200 OK
Date: Mon, 01 Jul 2013 17:27:06 GMT
Status: 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 56
X-RateLimit-Reset: 1372700873
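A very rough sketch of per-client rate limiting with similar headers, using a simple in-memory counter. A real deployment would typically use a shared store such as Redis; all names and limits below are illustrative.

const express = require('express');
const app = express();

const LIMIT = 60;                   // requests allowed per window
const WINDOW_MS = 60 * 60 * 1000;   // one hour
const counters = new Map();         // key: client IP, value: { count, resetAt }

app.use((req, res, next) => {
  const now = Date.now();
  let entry = counters.get(req.ip);
  if (!entry || now > entry.resetAt) {
    entry = { count: 0, resetAt: now + WINDOW_MS };
    counters.set(req.ip, entry);
  }
  entry.count += 1;

  // Mirror the GitHub-style rate limit headers
  res.set('X-RateLimit-Limit', String(LIMIT));
  res.set('X-RateLimit-Remaining', String(Math.max(LIMIT - entry.count, 0)));
  res.set('X-RateLimit-Reset', String(Math.floor(entry.resetAt / 1000)));

  if (entry.count > LIMIT) {
    return res.status(429).json({ errors: [{ status: 'TooManyRequests', message: 'Rate limit exceeded' }] });
  }
  next();
});

app.get('/jobs', (req, res) => res.json([]));
app.listen(3000);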
With caching, you don't need to fetch data from the database on every request. Modern databases are optimized for reads, so this may not always be necessary, but caching wherever possible can help improve read speed.
While caching is valuable, it adds an additional level of complexity to your API since you need to bust and recache whenever there is a change in data.
If the data hasn't changed, the server should return 304 Not Modified. This response tells the browser or client that the data hasn't changed and that it can reuse the data it fetched previously.
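Here's a sketch of that conditional-request check. Generating the ETag from a hash of the response body is just one possible approach, and the sample data is hypothetical.

const express = require('express');
const crypto = require('crypto');
const app = express();

const jobs = [{ id: 1, title: 'Job title' }]; // sample data

app.get('/jobs', (req, res) => {
  const body = JSON.stringify(jobs);
  // Derive an ETag from the response body; any stable hash works
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  res.set('ETag', etag);

  // If the client already has this exact version, skip the body
  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end();
  }

  res.type('application/json').send(body);
});

app.listen(3000);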
CORS allows cross-domain access to the API. Most applications just need to whitelist certain domains to allow CORS from those domains.
For public APIs, you may need to allow anyone to fetch the data if they have the proper authentication key set. In such cases, implement CORS to allow all domains and start blacklisting domains if they seem suspicious.
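Below is a bare-bones sketch of a whitelist-based CORS middleware. The allowed origins are placeholders, and in practice many teams reach for a dedicated CORS package instead of rolling their own.

const express = require('express');
const app = express();

const ALLOWED_ORIGINS = ['https://example.com', 'https://admin.example.com']; // placeholder whitelist

app.use((req, res, next) => {
  const origin = req.headers.origin;
  if (origin && ALLOWED_ORIGINS.includes(origin)) {
    res.set('Access-Control-Allow-Origin', origin);
    res.set('Access-Control-Allow-Methods', 'GET,POST,PUT,PATCH,DELETE');
    res.set('Access-Control-Allow-Headers', 'Content-Type,Authorization');
  }
  // Answer preflight requests without hitting the route handlers
  if (req.method === 'OPTIONS') return res.status(204).end();
  next();
});

app.get('/jobs', (req, res) => res.json([]));
app.listen(3000);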
Logging is an integral part of developing any web platform. The same is true for APIs: we need to segregate logs based on priority (errors, info, warnings).
Proper logging and separation will expedite debugging later when errors and security issues arise.
Keep your logging conventions consistent so the logs stay as useful as possible when you need them.
Set up monitoring on top of those logs so errors and unusual traffic surface before your users report them.
When developing API documentation for developers, it's important to make sure everything is kept up to date.
Postman collections and Swagger API documentation are good examples of developer docs.
Public API documentation needs the same care, with clear guides for authentication, endpoints, and example requests and responses.
If you want to see good API documentation in practice, the docs for popular public APIs are a great place to start.
You can apply this last piece of advice to any development project you’re working on, including API development.
In general, it’s easier to reuse open source frameworks to build a solid API for consumers rather than reinvent the wheel.
This guide serves as a jumping-off point for building a great API user experience.
In many cases, we just need to build a quick API that may not be used by the general public.
Make sure to assess the users of your API, implement only what is necessary for the current stage of the product, and then scale things as needed. Premature optimization is never a good idea.
Feel free to share your insights and experiences with building APIs in the comments.