Microservices have become popular throughout the software industry for their flexibility and scalability. Depending on the use case, microservices often need to communicate with other microservices, clients (like web apps and mobile apps), or datastores.
During this process, one of the problems that software development teams often encounter is making sure the correct payloads are sent to and from services. Developers can agree on payload structure informally (e.g., through docs or email), or they can opt into a technology, like a shared schema, that handles this for them.
In this article, we’ll showcase two serialization technologies, Typical and Protobuf, and we’ll explore common tasks developers may encounter when using schemas with TypeScript. We’ll walk through a tutorial for working with employee data, including how to serialize and deserialize the data, safely make schema changes, and take advantage of options for nullable fields.
Data serialization is the process of converting data into a format, like bytes, that can be easily transmitted or stored. A schema is the set of criteria that describes or defines the data.
Buying into one serialization technology makes it easy for developers to share knowledge, like schemas, and build utilities around the common format, which results in shipping code that is less prone to validation errors.
OpenAPI is a popular, language-agnostic description format for REST APIs. Binary data serialization technologies include Typical, Protobuf, Thrift, and Avro; Protobuf, for example, is the format underpinning the gRPC framework. An example use case is two services within an organization that need to exchange employee data over a topic in Kafka.
These technologies use a schema to define the expected fields and types for their data. The schema is used to serialize payloads into binary data and to deserialize binary data back into payloads. They offer type safety in that these processes will fail if a payload does not have the correct structure.
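To make the idea concrete, here's a minimal hand-rolled sketch (purely illustrative, not from either library) of what these tools automate: validating a payload against an agreed shape before encoding it, and failing loudly on a mismatch:

// Purely illustrative: what schema-driven serializers automate for us
type Employee = {
  id: string;
  name: string;
  active: boolean;
};

function serializeEmployee(payload: Employee): Uint8Array {
  // Generated serializers perform structural checks like this one
  if (typeof payload.id !== "string" || typeof payload.name !== "string") {
    throw new Error("Payload does not match the Employee schema");
  }
  return new TextEncoder().encode(JSON.stringify(payload));
}

function deserializeEmployee(binary: Uint8Array): Employee {
  const decoded = JSON.parse(new TextDecoder().decode(binary));
  if (typeof decoded.id !== "string" || typeof decoded.active !== "boolean") {
    throw new Error("Binary does not decode to a valid Employee");
  }
  return decoded as Employee;
}

Typical and Protobuf generate this kind of validation and encoding for us, directly from the schema, and use a compact binary format rather than JSON.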
Typical and Protobuf are both trusted technologies for using schemas. Typical is relatively new to the landscape, and is maintained by a small set of core developers. In comparison, Protobuf has fifteen years behind it and is maintained by Google.
Both Typical and Protobuf offer a CLI tool to generate code based on a schema, meaning the schema can be used as the source of truth for what payloads should look like. If changes need to be made, this can be handled in the schema, saving developers a lot of time and headaches.
Typical and Protobuf have a lot of similarities, but Typical offers more modern features. For example, Typical has unique features like “asymmetric” fields for ensuring backward/forward compatibility and “choices” for supporting pattern-matching. However, Protobuf supports plugins and has a long-standing reputation as a battle-tested solution.
For the purpose of this tutorial, we’ll consider a hypothetical service that sends employee data to other services within an organization. When it comes to data interchange and schemas, the service sending the data is referred to as the “writer” (or the producer, serializer, or publisher), and the services that receive the data are referred to as the “readers” (or the consumers, deserializers, or subscribers).
We could use Kafka, GCP Pub/Sub, or any of a myriad of other technologies that support binary payloads to send the data. But, for this tutorial, we’ll simply assume our services are written in TypeScript and run on Node.js.
To start, we’ll get our service set up to use both Typical and Protobuf so we can demonstrate the differences. Then, we’ll use these tools to generate serializers, deserializers, and TypeScript types. All of the code used in this article can be referenced on GitHub.
To set up the development machine, we’ll run a script to install the Typical and Protobuf CLI tools via Homebrew.
N.B., it would be nice if Typical and Protobuf came packaged up on npm, but I wasn’t able to find any suitable packages; let me know if you find a better way to install them
We’ll create a new folder for the project, create a package.json file with npm init, and get TypeScript installed:
mkdir typical-vs-protobuf-example && cd typical-vs-protobuf-example
brew install typical
brew install protobuf

# Fix the macOS JS protobuf compiler issue
brew install protobuf@3
brew link --overwrite protobuf@3

npm init -y
npx tsc --init
npm add -D typescript ts-node @types/node

# Needed for TypeScript type gen in Protobuf
npm add -D ts-protoc-gen
Here’s a sample payload structure for the data that we’ll send from service to service:
{ "id": "1", "name": "John Doe", "hourly_rate": 20, "department": "HR", "email": "[email protected]", "active": true }
As a next step, we’ll define a schema for the payload. The schema specifies what the payload should look like. If the payload doesn’t follow the schema, the serialization and deserialization will fail when the service is run.
This functionality is common among most data interchange technologies. Schemas are what provide “safety” to services and clients, ensuring that they receive the correct data in a shared format.
In both Typical and Protobuf, the schemas will look similar, with minor differences. Let’s define an Employee struct that contains basic information, like name and hourly_rate. We’ll use an enum to offer a set of choices for the department instead of a string, since we only have two options: HR and NOT_HR.
Here’s the schema in Typical:
// typical-example/types-1.t
struct Employee {
  id: String = 1
  name: String = 2
  hourly_rate: U64 = 3
  department: Department = 4
  email: String = 5
  active: Bool = 6
}

choice Department {
  HR = 0
  NOT_HR = 1
}
Here’s the schema in Protobuf:
// protobuf-example/types-1.proto
syntax = "proto3";

package employee.v1;

message Employee {
  string id = 1;
  string name = 2;
  string email = 3;
  int32 hourly_rate = 4;
  Department department = 5;
  bool active = 6;

  enum Department {
    HR = 0;
    NOT_HR = 1;
  }
}
Now that we’ve defined the schema, we can generate the TypeScript types and code using some npm scripts. Let’s add these scripts to our package.json file:
{ "scripts" { "generate:typical:types:1": "typical generate typical-example/types-1.t --typescript typical-example/generated/types.ts", "generate:protobuf:types:1": "protoc --plugin=\\"protoc-gen-ts=./node_modules/.bin/protoc-gen-ts\\" --ts_opt=esModuleInterop=true --js_out=\\"./protobuf-example/generated\\" --ts_out=\\"./protobuf-example/generated\\" ./protobuf-example/types-1.proto" } }
Next, we’ll run the generate:typical:types:1 command and inspect the generated code in the typical-example/generated/types.ts file. We’ll also do the same for Protobuf, using the following path: protobuf-example/generated/types-1_pb.ts.
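As a rough guide to what you’ll find there, Typical emits separate writer (“Out”) and reader (“In”) types plus serialize/deserialize functions per struct, while the Protobuf output is a class with getters and setters. Here’s an abridged sketch of the Typical side (the exact shapes and variant casing come from the CLI, so treat this as an approximation and inspect the real file):

// Abridged sketch of Typical's generated output — run the codegen to see the real file
export type DepartmentOut =
  | { $field: "hr" }     // variant names are derived from the schema;
  | { $field: "notHr" }; // the exact casing comes from Typical's codegen

export type EmployeeOut = {
  id: string;
  name: string;
  hourlyRate: bigint; // U64 maps to bigint in TypeScript
  department: DepartmentOut;
  email: string;
  active: boolean;
};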
Now, let’s write some code to serialize our sample payload into a binary.
Here, we serialize the payload in Typical:
import { Types1 } from "./generated/types";

// Take our sample payload
const payload = {
  id: "1",
  name: "John Doe",
  hourlyRate: BigInt(20),
  department: { $field: "hr" as const },
  email: "john.doe@example.com",
  active: true,
};

// Serialize the Employee object to binary using the generated serializer from Typical
const binary = Types1.Employee.serialize(payload);

// Log that it was successful
console.log("Successfully serialized Employee object to binary:", binary);

// Send the binary off using Kafka, etc ...
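Before moving on, it’s worth seeing the reader side in Typical, since we’ll lean on the same pattern later in the article. The generated module exposes a deserialize function that takes a DataView over the bytes; a sketch, continuing from the binary produced above (and assuming it’s a Uint8Array, as it would be when read from a file):

// Deserialize the binary back into a typed payload; Typical's generated
// deserialize expects a DataView over the underlying bytes
const view = new DataView(binary.buffer, binary.byteOffset, binary.byteLength);
const employee = Types1.Employee.deserialize(view);

console.log("Successfully deserialized Employee object:", employee);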
Here, we serialize the payload in Protobuf:
import { Employee } from "./protobuf-example/generated/types-1_pb";

// Take our sample payload
const payload = {
  id: "1",
  name: "John Doe",
  hourlyRate: 20,
  department: Employee.Department.HR,
  email: "john.doe@example.com",
  active: true,
};

// Create the Employee object based on our sample payload
const employee = new Employee();
employee.setId(payload.id);
employee.setName(payload.name);
employee.setHourlyRate(payload.hourlyRate);
employee.setDepartment(payload.department);
employee.setEmail(payload.email);
employee.setActive(payload.active);

// Serialize the Employee object to binary
const binary = employee.serializeBinary();

// Log that it was successful
console.log("Successfully serialized Employee object to binary:", binary);

// Send the binary off using Kafka, etc ...
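And here’s the reader side in Protobuf: google-protobuf generated classes expose a static deserializeBinary, and the resulting instance is read via getters rather than plain properties:

// Deserialize the binary back into an Employee instance and read it
// via the generated getters
const employeeDeserialized = Employee.deserializeBinary(binary);

console.log(
  "Successfully deserialized Employee object:",
  employeeDeserialized.getName(),
  employeeDeserialized.getHourlyRate()
);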
As you can see, there are not yet many differences between Typical and Protobuf. But we’ll see this change when we start looking at schema changes and nullables.
Let’s say we found a crucial omission in our employee data example: we should have included the employee’s phone number in the schema. After some consideration, we decide to add it and make the field required.
This change means that all employees will have to input their phone number into the service, so the field will always be present. Depending on the technology used and the number of readers and writers, this could be a breaking schema change, meaning we need to be careful not to send incompatible payloads to services that can’t handle them.
Let’s say we have multiple writers, who are part of a large organization, and they all use this schema. If we add a required field, it’s unlikely that all writers will be updated at the same time. So, readers won’t find (or shouldn’t expect) the new field to be present on all messages until every writer has updated their code and schema.
There may be other changes we want to make to our example as well. The big question is: how do we roll out this change safely and effectively?
N.B., this is a big topic when it comes to messaging systems; based on the scope of this article, we’ll only focus on the differences between Typical and Protobuf
Here are some important definitions with regard to schema changes:

Backward compatible: readers using the new schema can still handle data produced by writers on the old schema

Forward compatible: readers still on the old schema can handle data produced by writers using the new schema
First, let’s dig into how Typical handles schema changes. Every change is forward and backward compatible in Typical, making things easy to understand.
Typical has a feature called “asymmetric” fields, which was made for exactly this use case. When we add the asymmetric keyword to a field, it becomes required for the writer and optional for the reader.
When we are completely sure that all of the writers have been updated, we simply remove the keyword, which makes the field required for readers as well; all fields in Typical are required by default.
Here’s an example of the employee phone number schema in Typical:
struct Employee {
  id: String = 1
  ...
  asymmetric phone_number: String = 7
}
Now that we have the schema, let’s run type generation again and inspect the outputted types:
export type EmployeeOut = {
  id: string;
  ...
  phoneNumber: string;
};

export type EmployeeIn = {
  id: string;
  ...
  phoneNumber?: string;
};
We can see that this field is now required for writers (EmployeeOut) but optional for readers (EmployeeIn). In short, adding an asymmetric field is always a safe schema change in Typical.
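To see what this buys us in practice, here’s a minimal reader-side sketch (assuming the regenerated module is exported as Types2; the module name is illustrative). Until every writer is updated, readers must handle the field being absent, and the generated types force exactly that:

import { Types2 } from "./generated/types"; // illustrative module name for the updated schema

function logContactInfo(employee: Types2.EmployeeIn): void {
  // EmployeeIn marks phoneNumber as optional, so the compiler forces
  // readers to handle messages from writers that predate the change
  if (employee.phoneNumber === undefined) {
    console.log(`No phone number yet for ${employee.name}`);
  } else {
    console.log(`${employee.name} can be reached at ${employee.phoneNumber}`);
  }
}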
N.B., there are no nullable field types in Typical, which is a big difference compared to other technologies
All fields in a proto3 schema are optional in practice: there is no required keyword, and an absent field simply takes its default value. The traditional optional keyword instead gives a field explicit presence, so readers can tell whether it was actually set. This makes the rules governing how we can evolve a schema more involved. Some changes are forward compatible and some are backward compatible, but overall it provides a more granular approach.
To make a field required in Protobuf, we follow these steps (see the sketch after this list):

1. Add phoneNumber as optional, so payloads from writers that haven’t been updated yet still deserialize cleanly
2. Once every writer sets the field, treat phoneNumber as required (proto3 has no required keyword, so this is enforced in application code)

This process is involved and fairly complex; it is similar to the comparable process in Thrift and Avro.
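As a sketch of step one, suppose we add optional string phone_number = 8; to the message and regenerate (the types-2 path below is hypothetical). With explicit presence enabled, the generated class should expose a hazzer, hasPhoneNumber() here, that readers can check before trusting the value; confirm the method name against your generated output:

import { Employee } from "./protobuf-example/generated/types-2_pb"; // hypothetical path for the updated schema

function logPhoneNumber(employee: Employee): void {
  // proto3 `optional` enables explicit presence tracking, so we can
  // distinguish "unset" from an empty string
  if (employee.hasPhoneNumber()) {
    console.log("Phone number:", employee.getPhoneNumber());
  } else {
    console.log("This writer has not populated phone_number yet");
  }
}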
There may be times when payloads with nullable (optional) fields make sense. An example is a payload with success and error fields: when a success message is sent, an error will not be present, and vice versa.
Typical’s spec doesn’t support nullable fields; instead, it uses something called “choices”. This offers pattern-matching-like capabilities to readers and acts like an enum (we used one earlier for department). It’s much more flexible than a traditional enum, as the fields inside a choice can carry data such as strings or nested structs.
Here is an example of adding a details field to the Employee struct, which tells a reader whether the payload represents a success or an error.
Here is the Typical schema:
struct Employee {
  id: String = 1
  ...
  details: Details = 7
}

choice Details {
  success = 0
  error: String = 1
}
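For context, Typical’s codegen turns a choice into a tagged union on the TypeScript side. Roughly, the generated reader type looks something like the sketch below; the exact property carrying the error payload is an assumption, so check the generated types.ts:

// Rough sketch of the generated union for the Details choice (abridged);
// the `error` payload property name is an assumption — inspect the real
// generated file for the exact shape
type DetailsIn =
  | { $field: "success" }
  | { $field: "error"; error: string };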
Here is what the writer may serialize:
const payload: Types2.EmployeeOut = {
  id: "1",
  ...
  details: {
    $field: "success",
  },
};
Here is what the reader may deserialize:
// Read the binary payload from a file
const fileContents = readFileSync(filePath);

// Deserialize using Typical generated code
const payloadDeserialized = Types2.Employee.deserialize(
  new DataView(
    fileContents.buffer,
    fileContents.byteOffset,
    fileContents.byteLength
  )
);

// Handle the details field
switch (payloadDeserialized.details.$field) {
  case "success":
    console.log("We have a success!");
    break;
  case "error":
    console.log("We have an error!");
    break;
  default:
    throw new Error("Unknown details field");
}
In Protobuf, you can use the optional keyword as we described before. If you want to mimic the behavior of a choice in Typical, you can use Protobuf’s oneof keyword. This lets you codify the different options a field can take, and you can define more than a traditional enum allows.
Here is an example of the same schema as above using oneof:
message Employee {
  string id = 1;
  ...
  oneof details {
    bool success = 7;
    string error = 8;
  }
}
Here is what the reader may deserialize:
...

// Read the binary payload from a file
const fileContents = readFileSync(filePath);
const payloadDeserialized = Employee.deserializeBinary(fileContents);

// Handle the details field
switch (payloadDeserialized.getDetailsCase()) {
  case Employee.DetailsCase.SUCCESS:
    console.log("We have a success!");
    break;
  case Employee.DetailsCase.ERROR:
    console.log("We have an error!");
    break;
  default:
    throw new Error("Unknown details field");
}
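On the writer side, google-protobuf generates one setter per oneof member, and setting one member clears any other. Here’s a sketch, reusing the hypothetical regenerated types-2 module:

import { Employee } from "./protobuf-example/generated/types-2_pb"; // hypothetical path for the updated schema

const employee = new Employee();
employee.setId("1");

// Setting a oneof member clears any member that was set before it,
// so only `success` ends up populated here
employee.setError("Something went wrong");
employee.setSuccess(true);

const binary = employee.serializeBinary();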
While Typical offers some unique features, like asymmetric fields for ensuring backward/forward compatibility and choices for pattern-matching, Protobuf’s long-standing reputation and widespread popularity make it an attractive choice. Being a relatively new technology, Typical can offer some advantages, like a single cohesive CLI tool that generates both types and code, instead of the separate packages Protobuf requires.
However, Protobuf supports plugins and has a wider range of language support compared to Typical. It also offers traditional nullable/optional fields, making schema evolution more involved but also more granular when compared to the opinionated approach of Typical.
Both Typical and Protobuf have their own unique advantages and limitations. The choice between the two depends on the specific needs and preferences of your organization.