2019 is a truly amazing time for all aspiring frontend developers.
There’s plenty of educational materials, courses, and tutorials. Every day, endless blog posts and articles sprout like mushrooms after rain. Anyone who wants to become a homegrown pro has access to everything they need — usually for free.
Many people took this opportunity and successfully taught themselves the quirky craft of frontend development. A lot of them had a chance to work on full-scale projects, then quickly began writing features, fixing bugs, and architecting their code in a proper way.
After some time, the lucky ones had a chance to see their own features in live, deployed code somewhere on the internet as a web app, portal, or just a regular website — a truly glorious moment, especially for junior frontend devs. Surprisingly, few of them raise a very important question: We developers create the app, but what magic puts it on the internet?
Common thinking is that it’s also done by developers, just more “senior” ones. Some of you might have heard of DevOps engineers, operators, cloud managers, sysadmins, and other whatnots living closer to some magic realm than the mortal plane.
Well, that’s true — to some extent. Everything that happens after coding and successful testing is often associated with the dark arts of scripts, Linux commands, and container-ish black magic. And there is an unwritten law that only the most experienced and trusted developers/admins in a given organization are responsible for successful delivery finalization.
Should it be this way? It certainly makes sense — after all, it’s a complicated and largely critical task. But does that mean it’s knowledge reserved for some elite caste? Absolutely not.
As frontend developers, we could blissfully ignore this aspect and go on believing everything will be done by other magicians — but we shouldn’t. Competencies in the IT world are changing at a great pace, and soon, knowledge about every stack element will make you more valuable as a developer, regardless of whether you’re on the frontend or the backend.
If you want to progress faster with your development career and stand out among your peers, you’re going to need this knowledge sooner or later. Let me convince you why.
As we’ve already touched upon, writing code is just one piece in the grand scheme of software production. Let’s try to list the steps needed to ship any product — not necessarily software:
What we’ll discuss here isn’t strictly related to the coding itself; what we’re trying to focus on is what happens after the main development phase. Why is it important? Because it can be complicated — and the more serious the solution is, the more sophisticated this part will be.
Imagine a web-based application with a certain number of features. Let’s assume the version release cycle is designed in a way that the app will be deployed to the web periodically, one feature after another. We can consider a precondition that every functionality is tested before shipment to production.
The thing is, we probably won’t employ just one programmer to do the job; features will be coded by a team. Those assumptions also imply that — apart from every developer’s local environment for coding and the final, stable environment for production — it’d be good to have a “staging” server to push the features into. Here, it’s possible for testers/clients to assess their quality before putting them into production.
Now we’re getting closer to a schema like this:
As you can see, things are getting complicated quickly (and believe me, we’re talking about a pretty simple example here). But we’re not here to cover the subject of product management lifecycle. Let’s focus on the technical aspect.
Assume that a frontend developer needs a few minutes to build an app. If we care about the code quality, they will need to run linting, unit tests, integration tests, and possibly other checks before marking their part as complete. This takes time.
Finally, putting the completed bundle on the server takes another couple minutes. And if we’re talking about assigning one programmer all those tasks, remember that we didn’t even consider the time required for switching their context (e.g., change code branch, refocus their work, etc.).
Now, who wants to take care of manually deploying every single feature? What if there are three new features tested every day? What if there are 15? Depending on the scale, it could certainly take more than one full-time employee just to handle the tasks described above.
That’s why we should apply the same principle here that gave birth to the whole idea of computing: we should get a machine to do it for us.
Before we talk about specific software solutions that will build, test, and deploy our code for us, let’s become familiar with two terms that describe this process. You’ve probably already heard of them:

- continuous integration (CI)
- continuous deployment (CD)
Why are there two separate phrases, and what do they even mean? Don’t worry — to avoid confusion, let’s clear this up and describe the general idea behind both.
The continuous integration part of CI/CD is an idea that covers repeated testing of our app’s integrity. From a technical point of view, it means we need to constantly perform linting, run unit/E2E tests, check preliminary code quality, etc. And by continuously, we mean that this must happen during every new code push — which implies it should be done automatically.
For example, the CI process can define a batch of unit tests that will run with the code as part of the pull request. In this scenario, every time new code attempts to appear on, e.g., the develop branch, some machine checks whether it meets the standards and contains no errors.
The continuous deployment piece of CI/CD usually covers everything related to the process of building and moving the application to the usable environment — also automatically. For example, it can fetch our app’s code from the designated branch (e.g., `master`), build it using the proper tools (e.g., webpack), and deploy it to the right environment (e.g., the hosting service).
It’s not strictly limited to production environments; for instance, we could set up a pipeline that will build a “staging” version of an app and push it in the proper hosting slot for testing purposes.
Those two terms are separate concepts with different origins in software management lifecycle theory, but in practice, they’re often complementary processes living in one large pipeline. Why are they so closely related? Often, parts of CI and CD can overlap.
For instance, we could have a project in which both E2E tests and deployment need to build the frontend code with webpack. Still, in most “serious” production projects, there are a number of both CI and CD processes.
Now let’s go back to our imaginary project with numerous features. Where can CI/CD help here?
Now think of what we can derive from the flow above. Let’s look at it from a cause-and-effect point of view. It’s possible to extract particular scenarios that form our hypothetical workflow. For instance:
**When** a developer tries to push their code to the common codebase, **then** a set of unit tests needs to pass.
This way, we have something with a clear beginning and an action — something we could automate using scripts or some other machinery. In your future adventures with CI/CD, you’ll see those scenarios called pipelines.
Note the bolded words above: when and then. Every reaction first needs an action. In order to run a particular pipeline, we need some kind of kickstart — a trigger — to be initiated. These could be:

- a code push to the repository
- a created pull request
- a scheduled time (e.g., a nightly build)
- a manual request
It’s possible to invoke particular pipelines from other ones as well, especially when we need to integrate a complex application consisting of many subparts that are being built separately.
Alright, we have pretty much covered the theory. Now let’s talk about the software that was designed to do all that dirty work for us.
On a basic level, every piece of CI/CD software is essentially just some kind of task runner that runs jobs when some action is triggered. Our role here is to configure it by feeding it the right information on what job needs to be done and when.
Despite this basic description, CI/CD software comes in many shapes, sizes, and flavors — and some of them can be so sophisticated that they need hundreds of manual pages. Anyway, don’t be frightened: before the end of this article, you will become familiar with one of them.
For starters, we can break CI/CD software into two categories:
It’s hard to discuss explicit advantages of either of these; as is often the case with this topic, it comes down to the app’s requirements, the organization’s budget and policies, and other factors.
It’s worth mentioning that a few of the popular repository providers (e.g., Bitbucket) maintain their own CI/CD web services that are tied closely to their source code control systems, which is intended to ease the configuration process. Also, some cloud-hosted CI/CD services are free and open to the public — as long as the application is open source.
One popular example of a service with a free tier is CircleCI. We’re going to take advantage of this and configure a fully functional CI/CD pipeline for our example frontend application — in just a few steps!
CircleCI is a cloud-based CI/CD service capable of integrating with GitHub, from which it can easily fetch source code. There is an interesting principle represented in this service: pipelines are defined from inside the source code. This means all your actions and reactions are configured by setting up a special file in your source code; in this case, it’s a file named `config.yml` in the folder named `.circleci`.
For the purposes of our tutorial, we’re going to do the following:

- take a look at the example application and run it locally
- create an S3 bucket and deploy the app to it manually
- connect the repository to CircleCI
- set up a `config.yml` file that will contain the pipeline process definition

The whole process should take no more than 30 minutes. If you’re still with me, let’s get down to the list of preparations. You’ll need:
You can start by forking and cloning the aforementioned repository to your local computer. For starters, let’s check what it does. After a successful fetch, you can navigate to the target directory and invoke the following commands:
```shell
npm install
npm start
```
Now open up your browser and navigate to the http://localhost:8080 URL. You should see something like this:
It’s a very simple frontend app that indicates the successful loading of `.js` and `.css` files in respective areas. You can look up the source code and see that it’s a very plain mechanism.
Of course, you can continue with this tutorial while working with your own application; you’ll just need to change the build script commands if necessary. As long as it’s a pretty standard app built by a Node toolset such as npm, you should be good to go.
Before we try to automate the process and set up our continuous magic, let’s build the app and manually put it into S3. This way, we’ll be sure our target environment is set up properly.
We’ll start by building the app bundle locally. If you’re using our provided example app, you can achieve it by invoking the `npm run build` command. You should end up with a folder named `dist` appearing in your project’s root directory:
Neat. Our app was built and the bundle was prepared. You can check how it behaves in a simulated server environment by invoking the `npx serve -s dist` command. This one will run the `serve` package, which is a micro HTTP server that will distribute the contents of the `dist` directory.
After running the command, you can navigate to `http://localhost:5000` in your browser. You should see the same view as in the development server mode.
OK, now let’s put the app somewhere on the internet. To do this, we’ll start working with S3.
Amazon S3, which is part of the AWS ecosystem, is a pretty simple concept: it gives you a bucket where you can upload any kind of file (including static HTML, CSS, and JavaScript assets) and enable a simple HTTP server to distribute them. And the best part is that (under certain circumstances) it’s free!
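As a side note, everything we’re about to click through in the web console can also be done from the command line. This is only an illustrative sketch — the bucket name and region below are this example’s assumptions, and the commands require locally configured AWS credentials:

```shell
# Create the bucket (S3 bucket names must be globally unique; this one is assumed)
aws s3 mb s3://demo-ci-cd-article --region us-east-1

# Upload the built files and make them publicly readable
aws s3 sync dist s3://demo-ci-cd-article --acl public-read

# Enable static website hosting with index.html as the entry point
aws s3 website s3://demo-ci-cd-article --index-document index.html
```

The console route we follow below is friendlier for a first run, but the CLI version is handy once you know what each step does.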
First, start by logging in to the console:
Next, navigate to the S3 control panel by clicking Services and selecting S3 under Storage.
Now we’ll create a new bucket to host our web application. Enter a name, consisting of alphanumeric characters and hyphens only. Next, select the proper region for the bucket, and write down both values — we’ll need them later.
It’s important to set up proper permissions so the files will be public. To do so, click Next until you reach Set permissions. There, uncheck the first three boxes to enable public hosting of files:
This way, HTTP servers will be able to expose uploaded files as the website. After finalizing the bucket, you can access it and see the empty file list:
Click Upload, and you’ll be prompted to select the files you want to upload. You can select the three bundle files from the `dist` folder and put them here. Again, it’s of the utmost importance to navigate to Set permissions and select the Grant public read access to this object(s) option under the Manage public permissions box.
Voilà! The files are there. There’s one last thing we need to enable our hosting on S3. Navigate to the Properties tab on the bucket view, find the Static website hosting option, and enable it:
You’ll need to add `index.html` as your Index document; this will be the entry point to our app. Now it seems to be ready. A link to your newly generated site is at the top of this dialog box. Click it to see your newly deployed app:
Great, we have the website working — unfortunately, that’s not our goal. Nothing is automated here. You wouldn’t want to go through this process of logging in to the S3 console and uploading a bunch of files each time something changes; that’s the job for the robots.
Let’s set up a continuous deployment process!
If you look closely at the code in our example repository, you can see that we’ve put a sample CD process definition there. Open the `.circleci/config.yml` file.
```yaml
version: 2.1
orbs:
  aws-s3: circleci/[email protected]
jobs:
  build:
    docker:
      - image: circleci/python:2.7-node
    environment:
      AWS_REGION: us-east-1
    steps:
      - checkout
      - run: npm install
      - run: npm run build
      - aws-s3/sync:
          from: dist
          to: 's3://demo-ci-cd-article/'
          arguments: |
            --acl public-read
            --cache-control "max-age=86400"
          overwrite: true
workflows:
  version: 2.1
  build:
    jobs:
      - build:
          filters:
            branches:
              only: master
```
As mentioned before, `config.yml` is a file recognized by CircleCI containing the definition of the pipeline that will be invoked during the CD process. In this case, these couple dozen lines contain complete information about:
If you’re unfamiliar with the YAML format, you’ll notice that it relies heavily on indentation. This is how these files are structured and organized: each section can have children, and the hierarchy is denoted by indenting with two spaces (YAML doesn’t allow tab characters).
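For instance, in this small illustrative fragment, each nested key is a child of the one above it because it’s indented two spaces deeper:

```yaml
jobs:            # top-level section
  build:         # child of jobs
    steps:       # child of build
      - checkout # list item under steps
```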
Now, let’s dissect this file section by section:
```yaml
version: 2.1
orbs:
  aws-s3: circleci/[email protected]
```
The lines above contain information about the interpreter version used and define additional packages (“orbs” in CircleCI nomenclature) necessary in the deployment process. In this case, we need to import an orb named `aws-s3`, which contains the tools needed to send files to the S3 bucket.
```yaml
jobs:
  build:
    docker:
      - image: circleci/python:2.7-node
    environment:
      AWS_REGION: us-east-1
    steps:
      - checkout
      - run: npm install
      - run: npm run build
      - aws-s3/sync:
          from: dist
          to: 's3://demo-ci-cd-article/'
          arguments: |
            --acl public-read
            --cache-control "max-age=86400"
          overwrite: true
```
The lines above carry information about the job definition — the heart of our pipeline.
For starters, note that we have named our job `build`, which you can see in the second line of the section. We’ll see the same name later in the CircleCI console reports.
In the next lines, by using the `docker` directive, we define which container (effectively, which virtual machine) will be used to build the app. If you’re not familiar with containerization or Docker yet, you can safely imagine this step as selecting the virtual computer that will carry out the build task.
In this case, it’s a Linux VM with Python and Node.js on board; we need Python for the AWS S3 toolset to work and Node.js to build our frontend app.
The `environment` section defines `AWS_REGION`, an environment variable the AWS toolset needs in order to run. The exact value is irrelevant in this case; S3 will work anyway.
The next section — `steps` — should be more self-descriptive. Effectively, it’s a list of stages invoked one by one to finish the described job. The steps defined in this example are:

- `checkout`: grabs the source code from the repository
- `run: npm install`: pretty straightforward. This installs the node dependencies
- `run: npm run build`: the heart of our pipeline. This step invokes the build of our code
- `aws-s3/sync`: another important stage, this deploys (“synchronizes”) the contents of the `dist` directory to the given S3 bucket. Please note that this example uses `demo-ci-cd-article` as the bucket name; if you’re following this tutorial, you should change this value to match your own bucket’s name

On a basic level, you can imagine a single job as the group of actions you would normally run on your local computer. This way, you just tell the VM what to do step by step. Likewise, you can consider it to be a somewhat unusual shell script with some extra powers.
There is one significant principle regarding a job: every single step is expected to succeed. If any single command fails, the remaining portion of the job will immediately stop, and the current run of the pipeline will be marked as `FAILED`. Job failure will be indicated later in the CI/CD console with relevant errors, which is a hint at what went wrong.
There are various reasons for failure. For instance, in a pipeline meant to perform automatic testing, it might just indicate that a unit test failed and a certain developer needs to fix their code. Or it could be an incorrect configuration of tools, which prevents a successful build and deployment. Regardless of the reason, CI/CD processes usually notify admins (or culprits) about pipeline failure via email for proper remediation.
That’s why it’s important to define our jobs in a relatively safe way; if something bad happens at a certain step, we need to make sure previous steps didn’t yield any permanent side effects.
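This fail-fast behavior matches the shell script analogy quite well. Here’s a tiny, purely illustrative snippet — not part of the real pipeline — showing the same principle: with `set -e`, the first command that exits with a non-zero code aborts everything after it, just as a failing step fails the job:

```shell
#!/bin/sh
# Each echo stands in for a pipeline step; `exit 1` simulates a failing step.
sh -c '
  set -e
  echo "checkout: ok"
  echo "npm install: ok"
  exit 1              # e.g., the build step fails here
  echo "deploy: ok"   # never reached
' || echo "pipeline: FAILED"
```

Running it prints the first two steps and then `pipeline: FAILED`; the deploy step never runs.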
We’re getting close to the end. The last section is `workflows`:

```yaml
workflows:
  version: 2.1
  perform_build:
    jobs:
      - build:
          filters:
            branches:
              only: master
```
In CircleCI, a “workflow” is a group of jobs that are started together. Since we have only one job defined here (`build`), we could omit this one. By defining a workflow, however, we get access to an important feature: branch filtering.
If you look closely at the last lines of the configuration file, you’ll see a `filters` section. In this example, it contains `branches: only: master`. This means that, by definition, the build job should run only when the code on the `master` branch changes.
This way, we can filter out which branches we want to be “watched” by our CI/CD process. For instance, we can invoke different workflows (with different jobs) on distinct branches, build separate versions, or run tests only in particular circumstances.
If you haven’t done it yet, connect your GitHub account with CircleCI by selecting Log In with GitHub.
After logging in to GitHub and authorizing the CircleCI integration, you should see a sidebar with an option to Add project. Click it to see the list of your active GitHub repositories:
We’ll assume you’ve got one repository that you have either cloned from the example or prepared yourself (remember the proper `.circleci/config.yml` file).
Locate this project in the list and click Set Up Project next to it. You should see an information screen that describes the principles of CircleCI:
See the Start building button at the bottom? Yep, that’s it — click it to enable our automated process and make this machinery do the job for us.
After clicking this one, you will see … an error.
Bummer.
There is one thing we still need to configure: the mechanism that authorizes CircleCI to use the AWS API. Until now, we haven’t put our AWS credentials anywhere in the code, GitHub, or CircleCI. There’s no way for AWS to know it’s us asking to put things in S3, hence the error.
We can fix it by changing our project’s settings in the CircleCI panel. To enter it, click the gear icon in the top right corner, then locate the AWS permissions tab on the left pane. You should see something like this:
Access Key ID and Secret Access Key are special AWS authorization values that allow third-party services like CircleCI to do stuff for you — for instance, upload files to an S3 bucket. Initially, those keys will have the same permissions as the user to whom they are assigned.
You can generate these in the IAM section of the AWS console. There, expand the Access keys (access key ID and secret access key) pane. Click Create New Access Key and generate a key pair you can copy into CircleCI:
Click Save AWS keys, and we should be good to go. You can either try to reinitialize the repository on CircleCI, or use the quicker way: go to the failed attempt report, locate the Rerun workflow button, and click it.
There should be no unaddressed issues now, and the build should finish seamlessly.
Yay! You can log in to the S3 console and check the file modification time. It should indicate that the files are freshly uploaded. But it’s not the end just yet — let’s see how the “continuous” part works. I’m going back to the code editor to introduce a small change in the source code of the app (`index.html`):
Now, let’s push the code to the repository:
```shell
git add .
git commit -m "A small update!"
git push origin master
```
You can see the magic happening in the CircleCI panel. In the blink of an eye, just after the successful push, you should see that CircleCI consumed the updated code and started building it automatically:
After a few seconds, you should see a `SUCCESS` message. Now you can navigate to your S3-hosted web page and refresh it to see that the changes were applied:
That’s it! It’s all happening automatically: you push the code, and some robot on the internet builds it for you and deploys it to the production environment.
Of course, this was just a small example. Now we have a good opportunity to review a more complicated use case — for instance, deploying to multiple environments and changing the app’s behavior based on that.
If you go back to our example source code, you’ll notice there are two separate build scripts in `package.json`: one for `production` and one for `staging`. Since it’s only an example project, it doesn’t introduce any heavy changes; here, it just ends up in a different JavaScript console message.
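For reference, the relevant part of such a `package.json` might look roughly like this — the bundler commands below are an assumption for illustration, so check the repository’s own file for the real scripts:

```json
{
  "scripts": {
    "start": "webpack-dev-server --mode development",
    "build": "webpack --mode production",
    "build:staging": "webpack --mode production --env staging",
    "test": "node src/modules/jsChecker.test.js"
  }
}
```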
After running the app built with the `staging` variant and opening the browser, you should see the relevant log entry in the JavaScript console:
Now, we can take advantage of this mechanism and extend our build pipelines. Consider the following code:
```yaml
version: 2.1
orbs:
  aws-s3: circleci/[email protected]
jobs:
  build:
    docker:
      - image: circleci/python:2.7-node
    environment:
      AWS_REGION: us-east-1
    steps:
      - checkout
      - run: npm install
      - run: npm run build
      - aws-s3/sync:
          from: dist
          to: 's3://demo-ci-cd-article/'
          arguments: |
            --acl public-read
            --cache-control "max-age=86400"
          overwrite: true
  build-staging:
    docker:
      - image: circleci/python:2.7-node
    environment:
      AWS_REGION: us-east-1
    steps:
      - checkout
      - run: npm install
      - run: npm run build:staging
      - aws-s3/sync:
          from: dist
          to: 's3://demo-ci-cd-article/'
          arguments: |
            --acl public-read
            --cache-control "max-age=86400"
          overwrite: true
workflows:
  version: 2.1
  build:
    jobs:
      - build:
          filters:
            branches:
              only: master
  build-staging:
    jobs:
      - build-staging:
          filters:
            branches:
              only: develop
```
Note that we’ve added a new job and a new workflow named `build-staging`. There are two differences: the new job invokes the previously mentioned `npm run build:staging` script, and the respective workflow is filtered by the `develop` branch.
This means all changes pushed to `develop` will invoke the “staging” build, while all changes on the `master` branch will retain their original behavior and trigger the “production” build. In this case, both end up in the same S3 bucket, but we can always change that and have separate target environments.
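For example, pointing the staging job at its own bucket would be a small change to its sync step — assuming a separate, hypothetical staging bucket has been created first:

```yaml
- aws-s3/sync:
    from: dist
    to: 's3://demo-ci-cd-article-staging/'  # hypothetical staging bucket
    arguments: |
      --acl public-read
      --cache-control "max-age=86400"
    overwrite: true
```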
Give it a try: create a new `develop` branch based on `master` and push it to the repo. In your CircleCI console, you should see that a distinct workflow has been invoked:
The respective change was just pushed to the S3 bucket, but this time, it’s a staging build originating from the `develop` branch. Your multiversion build is working perfectly. Neat — we’re getting close to our original workflow from the previous part of the article!
We’ve dealt with the continuous deployment part, but what about continuous integration? As we already discussed, this one is related to performing regular checks of your code quality, i.e., running tests.
If you look closely at the example repository, you can see that a sample unit test was added there. You can invoke it by running the `npm run test` command. It doesn’t do much; it just compares a dummy function’s result to some pattern by assertion:
```javascript
// jsChecker.js
function getMessage() {
  return 'True!';
}

// ...

module.exports = getMessage;
```

```javascript
// jsChecker.test.js
const getMessage = require('./jsChecker');
const assert = require('assert');

assert.equal(getMessage(), 'True!');
```
We can include this test in our pipeline, then set up our repository to perform it on every pull request created. To achieve this, we’ll start by creating a new job and a new workflow in our `config.yml`:
```yaml
version: 2.1
orbs:
  aws-s3: circleci/[email protected]
jobs:
  build:
    # ...
  build-staging:
    # ...
  test:
    docker:
      - image: circleci/python:2.7-node
    steps:
      - checkout
      - run: npm install
      - run: npm run test
workflows:
  version: 2.1
  build:
    # ...
  build-staging:
    # ...
  test:
    jobs:
      - test
```
We have defined a new job and a new workflow named `test`. Its sole purpose is invoking the `npm run test` script for us. You can push this file to the repository and check what happened in the CircleCI console:
A new workflow was automatically invoked, which resulted in a successful test run. Now, let’s wire it up with our GitHub repository. It’s possible to make this job run every time a new pull request to a particular branch is created. To do so, open your GitHub project page and navigate to the Settings view. There, select the Branches tab:
By clicking Add rule, you can add a new policy that will enforce performing certain checks before allowing a pull request to be merged. One of the available checks is invoking the CircleCI workflow, as you can see below:
By checking the Require status checks to pass before merging box and selecting `ci/circleci: test` below, we have just set the rule to run this workflow as a prerequisite for a pull request to be valid.
You can test this behavior by attempting to create a new pull request and expanding the Checks pane:
Of course, we can break it. You can try to create a commit that will cause the test to fail, put it on a new branch, and create a pull request:
We have broken the successful test — with the changed code, the `assert.equal(getMessage(), 'True!');` assertion will fail:

```
> node src/modules/jsChecker.test.js

assert.js:42
  throw new errors.AssertionError({
  ^
AssertionError [ERR_ASSERTION]: 'True, but different!' == 'True!'
    at Object.<anonymous>
```
Now the pull request won’t be available for merging since it is trying to introduce the code that makes the tests fail:
Neat! Our example project is pretty well covered by continuous testing, and no one will succeed in introducing bad code to the production branch as long as the test cases are properly written. The same mechanism can be used to perform code linting, static code analysis, E2E tests, and other automatic checks.
OK, that’s it! Although our example project is awfully simple, it’s now wired up with a real, working CI/CD process. Both integration and deployment are orchestrated by a robot living in the cloud, so you can shift all your focus to the coding.
Regardless of the number of people involved, your machinery will tirelessly work for you and check if everything is in place. Of course, setting everything up also took some time; but in the long term, the benefits of delegating all the mundane work are invaluable.
Of course, it’s not a free paradise forever: sooner or later, additional costs will be involved. For instance, CircleCI provides 1,000 build minutes per month for free. That should be sufficient for smaller teams and simple open source projects, but any larger enterprise project will surely exceed this quota.
We’ve reviewed the basics, but there are still plenty of other important subjects untouched by this post.
One is making use of environment variables. Usually, you wouldn’t want to hold passwords, API keys, and other sensitive info directly in the source code. In a scenario where CI/CD automation gets involved, you’ll need to feed the machine the proper variables first — just like we did with the AWS access keys in this example.
Apart from that, environment variables are used to control the flow of the building, e.g., which target should be built or which features of the app should be enabled in a particular version. You may want to read more about their use in CircleCI.
Another topic: many CI/CD processes introduce the idea of artifact management. An artifact is a general name for the code resulting from a particular build process. For example, a bundled package or a generated container image holding the particular version of the app can be an artifact.
In certain organizations, proper management of versioned artifacts is important due to various requirements; for instance, they might be cataloged and archived for rollback or legal purposes.
Another important subject is the vast world of roles, permissions, and security. This post covers the basic technical aspects of defining pipelines and workflows, but in large, real-life projects, it’s necessary to take the organization’s processes and strategies into consideration. For instance, we might want certain pipelines to be invoked or approved only by a certain person in the company’s structure.
Another example is fine-grained access to particular pipeline settings or VM configurations. But again, it’s all a matter of software used and particular project/company requirements; there is no single schema for a good automation process, just as there’s no single recipe for a good IT project.
Whew. We’re there.
What did you just achieve by reading this post? Most importantly, you now have a general understanding of what happens behind the curtain in “serious” projects. Regardless of the approach and software used, the principles will always be similar: there will be tasks, pipelines, and agents that do the job. Hopefully, this big, scary concept isn’t so scary anymore. Finally, you had a chance to create your own working CI/CD pipeline and deploy an application to the cloud using automation.
What can you do next?
Of course, expand your knowledge and strive to be better. If you’re working on a project for a company, you can try to play with the code and invent your own testing/deploying pipeline after hours. You can (or even should) introduce an automatic testing and/or packaging process in your next open source project. You can also become familiar with more CI/CD software specimens: Travis, Jenkins, or Azure DevOps.
Also, you can check out other posts related to front-end development on my profile. Good luck on your journey!