A computer science professor of mine once said, “For me to understand your code, show me your data.” The design of data is central to the design of code; it can shape the code’s whole character. Architectural decisions can turn on an estimate of how much and what kind of data is used during program execution.
While it’s not uncommon in software applications to read data from relational databases or even flat files with columns of data (CSV or TSV), often a more elegant structure is needed to express more intricate relationships between data. This is where XML and JSON have come into wide use. XML was used for many years, but gradually JSON has taken over as the data format of choice in many applications.
XML and JSON each have some fundamental features that reflect the way data is organized in computer applications:
Data with attributes is a fundamental concept in computer science. It’s a central feature of object-oriented programming, and before that, C and C++ had structs and Lisp had association lists and property lists. Attributes capture features of data: a data object representing a customer would have details like a first name, last name, age, and gender. Data objects with attributes can also express dictionaries, constructs that map one set of data values to another (like a map of month names to month numbers: “January” is 1, “February” is 2, and so on). This is a powerful way of encoding some intelligence in software, defining associations between pieces of data that reflect their meaning.
Hierarchy is a common way of expressing a relationship between related objects. A customer might have an address, which in turn has attributes like street name, city, country and mail code. Hierarchy might also involve grouping, like a list of product orders outstanding for a customer.
Arrays provide a way to collect multiple instances of data in one place, offering the opportunity to process the data in a simple loop construct in code. The same loop can process any amount of data, be it 500 items or 5,000,000, and is key to writing code that can flexibly handle arbitrarily large amounts of data (see the sketch below).
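To make these three ideas concrete, here’s a minimal JavaScript sketch (the field names are purely illustrative): an object with attributes, a nested object expressing hierarchy, a dictionary, and a loop that processes an array the same way regardless of its size:

// Attributes: a data object with named properties
const customer = { firstName: "Pat", lastName: "Smith" };

// Hierarchy: a nested object holding related details
customer.address = { city: "Anytown", country: "United States" };

// A dictionary mapping month names to month numbers
const monthNumbers = { January: 1, February: 2, March: 3 };

// Arrays: the same loop handles two orders or two million
const orders = [{ orderId: "11111111" }, { orderId: "22222222" }];
for (const order of orders) {
  console.log(order.orderId);
}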
In the late 1990s, software developers started using XML to define structured data. HTML had been used very successfully to tag elements of a web document to specify their appearance. XML used a very similar tagged notation to specify parts of data and their significance. HTML was designed to be read and interpreted by a web browser; XML was designed to be read mostly by application software.
Here’s an example of XML syntax, representing some data about a customer and their recent orders, demonstrating attributes, hierarchy, and arrays:
<customers>
  <customer firstName="Pat" lastName="Smith">
    <address>
      <city>Anytown</city>
      <country>United States</country>
      <state>Missouri</state>
      <street>123 Main Street</street>
    </address>
    <orders>
      <order>
        <orderDate>20180901</orderDate>
        <orderId>11111111</orderId>
        <price>159.99</price>
        <productName>Floating Bluetooth Speaker</productName>
        <quantity>1</quantity>
        <sku>123123123</sku>
      </order>
      <order>
        <orderDate>20180915</orderDate>
        <orderId>22222222</orderId>
        <price>39.95</price>
        <productName>Quad Copter</productName>
        <quantity>1</quantity>
        <sku>456456456</sku>
      </order>
    </orders>
  </customer>
</customers>
(The example here is nicely formatted and indented for readability. In real applications, the newlines and indentation would most likely be stripped away — computers can still read it even if humans can’t.)
XML became wildly popular as a way to exchange data between the client and server sides of so-called “multi-tier” applications, and it was also commonly used to define the format of configuration files for many applications. Software standards and tools were developed to specify, validate, and manipulate XML-structured data: DTDs (Document Type Definitions) and later XML Schema to express the structure of XML data, and XSLT to transform XML from one format to another. Each of these is itself encoded in XML (or an XML-like syntax, in the case of DTDs).
But the popularity of XML also coincided with the growth of B2B applications. XML began to be used to pass business-critical data between partner corporations large and small, and startups like Ariba and Commerce One appeared at this time, providing platforms and toolkits for exchanging that data.
SOAP (“Simple Object Access Protocol”) was introduced as an XML-based interchange protocol: a common “envelope” of XML headers that provided a way to specify addressing/routing and security, plus a “payload” section carrying the application-specific data to be sent from one computer to another. Other standards were developed under the general umbrella of “Electronic Data Interchange” (EDI) for B2B applications.
XML was a powerful standard for structuring data for processing and exchanging data. But it had some quirks and limitations.
It could be very verbose. The opening tag of an XML element identifies the content both for machine processing and for human readers: when you see “Customer” at the start of an element, you know what kind of data it encloses. The closing tag improves readability slightly for people but adds nothing for machine readability. Eliminating the closing tag of an XML element in favor of a simpler way of terminating the content could measurably reduce the size of the data.
Also, there is no explicit representation of an array in XML. Collections of similar objects intended to be processed as a group are simply placed together under a common element, but nothing in the XML data itself marks that intention. A DTD or XML Schema definition could be written to capture it, and it would be clear from reading the code that processes the data that it loops over the repeated XML elements.
But XML offers no visual indicator of a data array. It’s possible to create such an indicator by using a wrapping element (like an <orders> element around a group of <order> elements), but this syntax is not required in XML.
XML does support namespacing, a prefix to the element name indicating that it belongs in a certain group of related tags, most likely originated by a separate organization and governed by a distinct XML schema. It’s useful for organization and validation by a computer (especially for partitioning/classifying the parts of a data exchange: SOAP envelope vs. the payload, etc.), but adds complexity to parsing of XML as well as visual clutter for the human reader.
Then there’s one of the classic topics of debate in software engineering (right up there with “curly braces on the same line or the next line”): should attributes or elements be used for the properties of a data object? XML leaves this choice open to the implementer. Details about a customer object could equally be specified using XML attributes:
<customers>
  <customer firstName="Pat" lastName="Smith">
    ...
…or using subelements of the XML data object:
<customers>
  <customer>
    <firstName>Pat</firstName>
    <lastName>Smith</lastName>
    ...
Attribute names must be unique within an element; the same attribute can’t appear twice. But any number of subelements with the same tag name can appear under a given element.
Subelements have an implicit order that the producing and consuming code may treat as significant (with no visual cue that the order matters). Attributes have no defined order.
There’s a rough convention that attributes should express an “is-a” relationship to the XML element, whereas subelements express a “has-a” relationship, but in many cases the decision is a gray area.
In the early 2000s, an alternative format was proposed: JavaScript Object Notation, aka JSON. Based on the object literal notation of an early version of the ECMAScript specification, JSON was championed by Douglas Crockford (author of “JavaScript: The Good Parts”). In 2002, Crockford created the json.org website to extol the virtues of JSON, describing it as “a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language.”
Here’s an example of the same customer data, formatted as JSON:
{"customers": [{ "customer": { "lastName": "Smith", "firstName": "Pat", "address": { "city": "Anytown", "country": "United States", "state": "Missouri", "street": "123 Main Street" }, "orders": [ { "orderDate": "20180901", "orderId": "11111111", "price": 159.99, "productName": "Floating Bluetooth Speaker", "quantity": 1, "sku": "123123123" }, { "orderDate": "20180915", "orderId": "22222222", "price": 39.95, "productName": "Quad Copter", "quantity": 1, "sku": "456456456" } ] } }]}
JSON represents objects (dictionaries) and arrays explicitly; it is inherently a dictionary style of data representation. Where XML expresses hierarchy with nested elements, JSON expresses it with an attribute (or, in JavaScript terminology, a property) on the parent object whose value is the child object (notice the “address” and “orders” properties in the example above). Arrays are expressed explicitly using square brackets and can hold primitive values like strings or numbers as well as objects.
JSON simplified things quite a bit compared to XML. The only association JSON can express is an attribute. Hierarchy is expressed by nested curly braces, where each brace-enclosed object is the value of a property on its parent. And there’s no terminating name or label at each level of the hierarchy, just a closing curly brace, which makes JSON a much simpler and more succinct way than XML to encode a collection of data.
And there’s a close alignment with the JavaScript language: JSON is essentially the representation of a JavaScript object literal, and object literals are one of the core features of JavaScript.
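For instance, the customer data from the JSON example can be written directly as a JavaScript object literal and assigned to a variable (a trimmed-down sketch):

const customer = {
  firstName: "Pat",
  lastName: "Smith",
  address: {
    street: "123 Main Street",
    city: "Anytown",
    state: "Missouri",
    country: "United States"
  },
  orders: [
    { orderId: "11111111", productName: "Floating Bluetooth Speaker", price: 159.99 },
    { orderId: "22222222", productName: "Quad Copter", price: 39.95 }
  ]
};

One visible difference from the JSON form is that property names don’t need to be quoted in a JavaScript literal.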
JSON’s popularity certainly grew along with the rise of JavaScript to the preeminent software development language it is today. With ever more sophisticated JavaScript frameworks like Angular and React (as well as grunt, gulp, webpack… the list goes on and on), the notion of isomorphic development took hold: JavaScript used everywhere.
Several books were written about “MEAN” development, using MongoDB, Express, Angular, and Node for all tiers of a web application (substitute your choice of front-end framework for Angular). JSON was a natural choice for the data interchange format between server side and front end.
It’s the natural format in which data is stored in MongoDB (MongoDB is implemented in C++ but stores data in BSON, a binary serialization of JSON). Conditions in MongoDB queries are expressed as JavaScript object literals, and JavaScript code can be used to interpret the JSON results of a MongoDB query.
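As a rough sketch of what that looks like in the mongo shell (the customers collection name here is just an assumption), the query condition is an ordinary object literal and the results come back as JSON-like documents:

// Find customers with a matching last name; the condition is an object literal
db.customers.find({ lastName: "Smith" })

// Nested fields use dot notation, and results can be processed with plain JavaScript
db.customers.find({ "address.state": "Missouri" }).forEach(function (doc) {
  print(doc.firstName + " " + doc.lastName);
});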
Parsing XML involves using an API, some kind of library written in the programming language being used. The same is true for JSON, except in JavaScript: the JSON.parse() function (part of the language since ES5) converts JSON from string form into native JavaScript objects, arrays, and primitive values. Once the JSON has been parsed, it can be traversed as a regular JavaScript data structure.
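For example, here’s a minimal sketch that parses a trimmed-down version of the customer JSON shown earlier and reads a few values from the result:

// A JSON string (typically received over the network rather than hard-coded)
const json =
  '{"customers":[{"customer":{"firstName":"Pat","lastName":"Smith",' +
  '"orders":[{"orderId":"11111111","price":159.99}]}}]}';

// Convert the string into native JavaScript objects and arrays
const data = JSON.parse(json);
const customer = data.customers[0].customer;

console.log(customer.firstName);       // "Pat"
console.log(customer.orders[0].price); // 159.99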
This is another way JSON helps make isomorphic programming in JavaScript a big win! Other software development languages (Python, PHP, Ruby, Java) also provide JSON parsing, either built in or through widely used libraries, making JSON a convenient way to exchange data between applications written in different languages.
That JSON data looks so much like JavaScript object literal syntax is likely no accident.
Brendan Eich, the original creator of JavaScript, borrowed ideas from the languages Scheme and Self. Scheme is a dialect of Lisp, and Lisp’s syntax is “homoiconic”: code and data are represented in exactly the same way, using a very simple nested, parenthesized syntax. All Lisp code and data is built from lists (much like arrays), and dictionaries can be represented using nested lists.
Here is an example of the same customer data represented in Lisp:
(setq customer '((firstName "Pat")
                 (lastName "Smith")
                 (address (street "123 Main Street")
                          (city "Anytown")
                          (state "Missouri")
                          (country "United States"))
                 (orders ((order (orderId "11111111")
                                 (orderDate "20180901")
                                 (productName "Floating Bluetooth Speaker")
                                 (quantity 1)
                                 (sku "123123123")
                                 (price 159.99))
                          (order (orderId "22222222")
                                 (orderDate "20180915")
                                 (productName "Quad Copter")
                                 (quantity 1)
                                 (sku "456456456")
                                 (price 39.95))))))
And here is a simple Lisp function that interprets the data:
(defun find-orders (customer)
  (assoc 'orders customer))
…and a demo of how the function and the data work together:
> (find-orders customer)
(orders ((order (orderId "11111111") (orderDate "20180901") ...)))
The first element of a Lisp list is significant. In code, it begins an executable “form” (a function call), but in data it often serves as a label associated with the elements that follow it in the list. As the code above demonstrates, the assoc function looks up data by testing the first element of each sublist, the equivalent of a dictionary lookup in other programming languages.
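A rough JavaScript analogue (using the customer object literal sketched earlier) shows the same idea: the property name plays the role of the label at the head of the Lisp sublist:

// Equivalent of (assoc 'orders customer): look up the value stored under a key
function findOrders(customer) {
  return customer.orders;
}

findOrders(customer); // => the array of order objects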
This equivalence of data and code carried over to JavaScript to a large extent. Not only is JSON strongly similar (though not quite homoiconic) to the representation of a JavaScript object literal, it is also parseable JavaScript code. Years ago, it was common to use the built-in JavaScript eval() function to evaluate and convert JSON data into an object literal.
The eval() function is also standard in Lisp, which was perhaps the first programming language to use a REPL, or read-eval-print loop. Today it’s considered a security risk to call eval() on arbitrary data submitted from an external source, but the newer (and more secure) JSON.parse() method fits the purpose. There’s also the Function constructor, which provides a way to convert a string into a JavaScript function, again honoring the duality of code and data that began in Lisp and carries forward in JavaScript today.
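Here’s a small sketch of the difference (illustrative only): eval() will execute whatever the string contains, while JSON.parse() accepts nothing but valid JSON:

const text = '{"firstName": "Pat", "lastName": "Smith"}';

// The old approach: evaluate the text as JavaScript
// (this runs arbitrary code if the string turns out to be malicious)
const viaEval = eval('(' + text + ')');

// The safer approach: parse the text strictly as JSON data
const viaParse = JSON.parse(text);

console.log(viaEval.firstName, viaParse.lastName); // "Pat Smith"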
JSON uses a simpler syntax to represent two of the most fundamental data structures in software development: dictionaries and arrays. Its close alignment with the syntax of JavaScript makes it the ideal choice of data format for many applications. Parsing JSON data is as simple as calling JSON.parse() to convert it to JavaScript and then traversing the result as a regular JavaScript object.
Element for element, JSON’s syntax is simpler than XML’s, consuming less space to capture a collection of data and leaving the markup less dense and easier for humans to read. Features of JSON like explicit arrays and the unambiguous representation of data object attributes as JavaScript properties promote a simpler, cleaner syntax.
However, XML is hardly dead and gone today. Website syndication with RSS is still widely used (it’s a basic feature of WordPress, which powers a significant number of today’s websites), and a recent article suggested that it may stage a comeback. Electronic data interchange (EDI) is still in wide use by major corporations.
A recent story about the NotPetya ransomware attack told how the international shipping firm Maersk was shut down for days when its shipping and logistics EDI systems would no longer run, leaving container trucks lined up at shipping terminals and stalling deliveries around the world.
But representing associations between objects as a nested hierarchy doesn’t fit some application domains. One example is social network data, for which GraphQL (championed by Facebook, and still using a JSON-like representation) is often a choice.
RDF (a data representation developed by the W3C Semantic Web group, traditionally serialized as XML) also expresses non-hierarchical graphs of data using (subject, predicate, object) triples, where the “object” part may be a reference to another triple, defining a general graph of relationships between data. It’s being used in many projects on the web.
And the namespacing originally used in XML now finds its way into tag metadata in HTML (for example, semantic markup like the “twitter:” and “og:” prefixes in Twitter and Facebook card markup).
But still, for many applications, JSON greatly simplifies implementation of Internet-based software systems. It’s a JavaScript world out there and JSON plays a big role!