Pulumi is an increasingly popular Infrastructure as Code (IaC) platform that lets us use several general-purpose programming languages to interact with cloud resources. Pulumi programs are blueprints of our infrastructure and describe how it should be composed.
In this article, we’re going to focus on programs written in TypeScript. First, we’ll take a look at the benefits of Infrastructure as a Service (IaaS) providers and those of IaC. Then, we’ll dive into how to use Pulumi and TypeScript together: we’ll set up a small example project and test it out using Amazon Web Services as the cloud provider.
Infrastructure as a Service aims at replacing on-premise data centers and infrastructure by providing computational power, memory, storage, and the related software as a cloud service. In a few words, instead of building expensive data centers on our own, we rent those from another company (the so-called IaaS or cloud provider).
Examples of popular IaaS providers are Google Compute Engine, AWS, and Microsoft Azure.
Migrating to an IaaS-based solution has several advantages, including lower upfront investment, on-demand scalability, and freedom from maintaining physical hardware.
Despite these advantages, IaaS comes with some challenges as well. First, with IaaS we are heavily dependent on a cloud provider. Additionally, there is no standard for the resources made available by different cloud providers. Hence, migrating from one provider to another can be extremely complex, depending on the level of our infrastructure’s sophistication. While it is true that IaC can mitigate this problem, if we’re running a very tailored infrastructure it may not be possible to migrate it effortlessly.
Second, IaaS can also come with unexpected costs. It is very easy to misconfigure something, and every misconfiguration can very well lead to unexpected expenses. For example, if we set up auto scaling to handle increases in our website’s load, we might find ourselves with increased costs in the case of a DoS attack. Lastly, another consequence of misconfiguration is ending up with the wrong security policies.
Generally speaking, we should always examine what every cloud provider offers in terms of Service Level Agreement, bandwidth, and features and carefully consider if those offerings match our needs.
Furthermore, before fully committing to a given IaaS provider, we should ensure we have the necessary competencies. Otherwise, we might find ourselves with unexpected costs and security holes.
If we move our infrastructure to a cloud provider, we still have to manually create and configure our resources. That operation is extremely costly and error-prone.
Infrastructure as Code aims to solve this issue by letting us configure our system using machine-readable code, rather than physical configuration or interactive configuration tools.
Even though IaC and IaaS are often used together, they are completely independent of one another. On the one hand, we can rely on an IaaS provider and manage our infrastructure manually. On the other hand, we can use IaC to configure our on-premise environment.
IaC offers several advantages, such as repeatable and automated deployments, versioned and reviewable infrastructure changes, and faster provisioning of new environments.
However, just like IaaS, IaC comes with some challenges. First, re-deploying previous versions of our infrastructure is not always possible. Depending on the resource we’re redeploying, as well as on our selected cloud provider (if any), restoring a previous state may not be feasible in a totally automated way.
This poses even greater challenges, forcing us to work out the issue manually. Similarly, if the deployment fails somewhere in the middle of the process, it may be difficult to restart it from the same point, and re-deploying everything from scratch could take a long time.
Second, the code describing an infrastructure can become very large, very soon. Understanding what the code does and tracking all the dependencies within the code base might be difficult.
Third, our code will likely depend on some libraries provided by our cloud provider. Hence, we’ll have to manage the (possibly breaking) updates of such dependencies. Furthermore, if the infrastructure is not managed carefully, we might have drifts.
Drifts happen when the deployed version of our resources does not match the description provided by our code. This could happen because someone did something manually, but it could also occur due to automatic updates of the deployed resources.
Lastly, since the code is likely versioned in some repository, we ought to restrict access to that repository or we might have security issues.
Now, let’s focus on writing infrastructural code in Pulumi using TypeScript as the programming language.
For our example, we’ll set up a very simple TypeScript-based Pulumi project to create an S3 bucket on AWS. However, we could use Pulumi with other cloud providers (such as Azure and GCP) or programming languages (such as Java, Python, or Go).
As mentioned previously, Pulumi programs describe a blueprint for the infrastructure of a project. In particular, they allocate resources and set their properties to match the desired state of the infrastructure.
Resources can also be used throughout the program to set dependencies. For instance, we might want a resource R1 to be created after another resource R2.
Pulumi programs reside in projects. These projects are directories containing source files (e.g., TypeScript files) as well as metadata to configure the deployment (i.e., the way the program is run).
Instances of Pulumi programs are called stacks and represent different deployment environments. For example, we might have one stack each for development, staging, and production.
In its simplest form, a TypeScript-based Pulumi project contains the following files:
- index.ts: the “main” file of our Pulumi program, describing the resources to be deployed as part of the current stack
- package.json and package-lock.json: the files describing our project’s dependencies
- Pulumi.<stack-name>.yaml: one or more files setting the configuration parameters of our stack(s)
- Pulumi.yaml: a file setting some general information about our project, such as the name and the description
- tsconfig.json: a file for configuring TypeScript

The following JSON snippet shows a package.json file similar to the default one generated by Pulumi for TypeScript projects using AWS as a cloud provider:
```json
{
    "name": "LogRocket",
    "main": "main.ts",
    "devDependencies": {
        "@types/node": "^14"
    },
    "dependencies": {
        "@pulumi/pulumi": "^3.0.0",
        "@pulumi/aws": "^5.0.0",
        "@pulumi/awsx": "^1.0.0"
    }
}
```
This file defines the name of the project, LogRocket, the main file, main.ts, and several dependencies. In particular, pulumi/aws and pulumi/awsx contain the necessary classes to describe the resources of our infrastructure.

pulumi/awsx defines opinionated components, following AWS Well-Architected best practices, with default values meant to simplify and speed up the deployment of a working infrastructure.
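For example, a single awsx component can stand up a whole network with sensible defaults. Here’s a minimal sketch, assuming a recent version of @pulumi/awsx (the resource name example-vpc is purely illustrative):

```typescript
import * as awsx from "@pulumi/awsx";

// One component provisions a VPC together with subnets, route tables,
// and gateways, using AWS best-practice defaults
const vpc = new awsx.ec2.Vpc("example-vpc", {});

// The component exposes the IDs of the resources it created
export const vpcId = vpc.vpcId;
export const privateSubnetIds = vpc.privateSubnetIds;
```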
The following YAML snippet shows the default Pulumi.yaml file:

```yaml
name: LogRocket
description: A minimal AWS TypeScript Pulumi program
runtime: nodejs
```
As we can see, the basic configuration of our project is fairly minimal. Once again, we have to set the name of the project and provide a brief description and the target runtime. Pulumi will use the latter property to establish how to run our program.

Since we’re using TypeScript, we should also provide a tsconfig.json file.
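A tsconfig.json for a Pulumi project might look something like the following; the exact compiler options are up to us, and this is just a plausible starting point:

```json
{
    "compilerOptions": {
        "strict": true,
        "outDir": "bin",
        "target": "es2016",
        "module": "commonjs",
        "moduleResolution": "node",
        "sourceMap": true,
        "noImplicitReturns": true,
        "forceConsistentCasingInFileNames": true
    },
    "files": ["index.ts"]
}
```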
The last piece of configuration we have to worry about is that of each stack. As we saw above, stacks in Pulumi are just instances of our program, corresponding to different deployment environments of our infrastructure.
For example, we might have the same set of resources deployed for staging and production, but those for production could be more performant than those for staging.
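As a quick sketch of how the same program can adapt to the stack it is deployed to, we could read the current stack name and a hypothetical instanceType configuration value (neither key is part of the example project; they are only for illustration):

```typescript
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();

// The name of the stack being deployed, e.g. "dev", "staging", or "production"
const stack = pulumi.getStack();

// A hypothetical per-stack setting, falling back to a small instance type
const instanceType = config.get("instanceType") ?? "t3.micro";

export const deploymentInfo = `${stack} uses ${instanceType}`;
```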
The following YAML snippet shows a possible configuration file named Pulumi.dev.yaml:

```yaml
config:
  aws:profile: ProfileName
  aws:region: us-west-2
  aws:assumeRole:
    roleArn: "arn:aws:iam::000000000000:role/AccessRole"
```
The above code simply configures the AWS provider, telling Pulumi which AWS profile to use to create the infrastructure. In this case, we’re asking Pulumi to assume a given role, rather than hardcoding the AWS credentials.
We can add any other configuration value using the Pulumi CLI. For example, pulumi config set bucketName my-bucket will add a new setting, bucketName, with the value my-bucket:

```yaml
config:
  LogRocket:bucketName: my-bucket
```
Configuration values are namespaced. In the example above, Pulumi uses the default namespace, which is the project name. If we want a different namespace, we’ll need to specify it as part of the key’s name: pulumi config set namespace:key value.
If --secret is passed to the command, then Pulumi will encrypt the value and not show it in plain text in the .yaml file. Secret encryption is stack-dependent; hence, the same secret will result in different encrypted values if set in different stacks.
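Secrets set this way can then be read from our program. Here’s a minimal sketch, assuming a secret named dbPassword was previously added with pulumi config set --secret dbPassword <value>:

```typescript
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();

// requireSecret returns an Output<string> that Pulumi keeps encrypted in
// the stack's state and masks in console output
const dbPassword = config.requireSecret("dbPassword");

// Any value derived from a secret remains a secret output
export const passwordIsSet = dbPassword.apply(p => p.length > 0);
```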
After setting up all the required configuration values, we can finally focus on the code to create an S3 bucket:
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Access the configuration and read the name of the bucket
const config = new pulumi.Config();
const bucketName = config.require("bucketName");

// Create a new resource
const exampleBucket = new aws.s3.BucketV2("bucket", {
    bucket: bucketName
});

new aws.s3.BucketVersioningV2("bucket-versioning", {
    bucket: exampleBucket.bucket,
    versioningConfiguration: {
        status: "Enabled"
    }
});

// Export the ARN of the newly-created bucket
export const bucketArn = exampleBucket.arn;
```
As a first step, we access the configuration to retrieve the name of the bucket we want to create. This is done by creating an instance of pulumi.Config(). By default, the config object will access the values in the LogRocket namespace, where LogRocket is the name of the project.

To fetch values from a different namespace, we just have to pass the namespace name in the constructor of pulumi.Config(). Then, we retrieve the name of the bucket using config.require(). This will throw an exception if a key named bucketName is not found in the .yaml file for the stack we’re currently deploying.
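The following sketch summarizes these configuration lookups; the bucketPrefix key is hypothetical and is only there to show the non-throwing get() variant:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Reads keys from the project's own namespace (LogRocket)
const config = new pulumi.Config();
const bucketName = config.require("bucketName");

// Passing a namespace lets us read provider settings such as aws:region
const awsConfig = new pulumi.Config("aws");
const region = awsConfig.require("region");

// get() returns undefined instead of throwing when the key is missing
const bucketPrefix = config.get("bucketPrefix") ?? "logrocket";
```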
We can now create a versioned bucket. First, we instantiate a new resource of type aws.s3.BucketV2. The first argument in the constructor is an ID local to the deployment. Hence, we can re-use the same ID if we deploy the same resource in different stacks.

The second argument is an object of properties. In this case, we just set the name of the resource, using the default values for all the other properties.
We then create another resource, of type aws.s3.BucketVersioningV2, to tell AWS to create a versioned bucket. In this case, we set up an implicit dependency on exampleBucket: we use exampleBucket.bucket as the value of the bucket property of aws.s3.BucketVersioningV2. This ensures that Pulumi will create the aws.s3.BucketVersioningV2 resource after the aws.s3.BucketV2 resource.
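If two resources don’t reference each other’s outputs, we could instead state the ordering explicitly with the dependsOn resource option. Here’s a minimal sketch that reuses exampleBucket from the snippet above (the log group itself is purely illustrative and not part of the example project):

```typescript
import * as aws from "@pulumi/aws";

// Force this resource to be created only after exampleBucket,
// even though none of its arguments reference the bucket's outputs
const auditLogs = new aws.cloudwatch.LogGroup("audit-logs", {}, {
    dependsOn: [exampleBucket],
});
```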
Lastly, we export the identifier of the newly created bucket, called the ARN in AWS. Stack outputs are shown during an update and can be accessed from the command line. We generally export the identifiers of important resources in our stacks, so that other stacks can access them.
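For instance, another stack could read this output through a StackReference. Here’s a minimal sketch, assuming a hypothetical organization named my-org and the dev stack of the LogRocket project:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Reference another stack by its fully qualified name: <org>/<project>/<stack>
const infra = new pulumi.StackReference("my-org/LogRocket/dev");

// Read the exported bucket ARN as an Output
const bucketArn = infra.getOutput("bucketArn");

export const referencedBucketArn = bucketArn;
```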
We can now ask Pulumi to deploy our stack by running pulumi up. If we just want to see the changes applied by our program, we can use pulumi preview:
```
$ pulumi preview
Previewing update (dev)

View Live: https://app.pulumi.com/…

     Type                                Name               Plan
 +   pulumi:pulumi:Stack                 LogRocket-dev      create
 +   ├─ aws:s3:BucketV2                  bucket             create
 +   └─ aws:s3:BucketVersioningV2        bucket-versioning  create

Outputs:
    bucketArn: output<string>

Resources:
    + 3 to create
```
The output of the pulumi preview command shows us some useful information about our deployment. First, it displays the name of the current stack, dev.
Second, it shows a list of resources. It shows the ID for each resource and tells us whether the resource will be created, updated, or deleted.
Lastly, it displays a list of outputs and a final recap of how many resources are to be created, deleted, or updated.
Components in Pulumi are logical groupings of resources. We can use them to instantiate a set of related resources to create a larger abstraction. In our case, we might want to create a component resource for a VersionedBucket.
All we have to do to create a component is subclass Pulumi’s ComponentResource class, using the constructor to allocate the child resources:
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

export interface VersionedBucketArgs {
    bucketName: string
}

export class VersionedBucket extends pulumi.ComponentResource {
    public readonly bucket: aws.s3.BucketV2

    constructor(
        name: string,
        args: VersionedBucketArgs,
        opts?: pulumi.ComponentResourceOptions
    ) {
        super("LogRocket:example:VersionedBucket", name, {}, opts);

        this.bucket = new aws.s3.BucketV2(
            `${name}-bucket`,
            { bucket: args.bucketName },
            { parent: this }
        );

        new aws.s3.BucketVersioningV2(
            `${name}-bucket-versioning`,
            {
                bucket: this.bucket.bucket,
                versioningConfiguration: { status: "Enabled" }
            },
            { parent: this }
        );

        this.registerOutputs({
            bucketArn: this.bucket.arn
        });
    }
}
```
In the above example, we first defined an interface, VersionedBucketArgs, to describe the parameters of our component. In this case, we’re just interested in the name of the bucket.

Then, VersionedBucket extends ComponentResource to create a new component.
First, we invoke the parent constructor. This registers the component resource instance in the Pulumi engine so that we can see the differences across different deployments.
Second, component resources must also register a unique type. Generally speaking, it should be in the form package:module:type. In this case, we chose LogRocket:example:VersionedBucket. We’ll see this type in the pulumi preview command output.
Third, we can simply create child resources as we did before. In this case, however, we explicitly set the parent, so that Pulumi knows we’re creating a child resource. Furthermore, it is good practice to derive the names of the children from the name of the parent. Hence, in the example, we used the name parameter as a prefix in the names of the child resources.
Lastly, we can register some outputs to tell Pulumi we’re done creating child resources. As a best practice, we should always call registerOutputs, even if our component doesn’t output anything, to let Pulumi know that our component can be considered fully constructed. Also note how our component exposes the child bucket, promoting it to a class field.
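If we’d rather not expose the whole child resource, a possible variation is to surface only the values consumers actually need, such as the bucket’s ARN. Here’s a minimal sketch of that alternative (it is not the version used in the rest of this article):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

export class VersionedBucketArnOnly extends pulumi.ComponentResource {
    // Only the ARN is exposed to consumers of the component
    public readonly bucketArn: pulumi.Output<string>;

    constructor(name: string, args: { bucketName: string }, opts?: pulumi.ComponentResourceOptions) {
        super("LogRocket:example:VersionedBucketArnOnly", name, {}, opts);

        const bucket = new aws.s3.BucketV2(
            `${name}-bucket`,
            { bucket: args.bucketName },
            { parent: this }
        );

        new aws.s3.BucketVersioningV2(
            `${name}-bucket-versioning`,
            {
                bucket: bucket.bucket,
                versioningConfiguration: { status: "Enabled" }
            },
            { parent: this }
        );

        this.bucketArn = bucket.arn;
        this.registerOutputs({ bucketArn: this.bucketArn });
    }
}
```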
We can now rewrite the index.ts file to use VersionedBucket:
```typescript
import * as pulumi from "@pulumi/pulumi";
import { VersionedBucket } from "./components/VersionedBucket";

// Access the configuration and read the name of the bucket
const config = new pulumi.Config();
const bucketName = config.require("bucketName");

const versionedBucket = new VersionedBucket("versioned-bucket", {
    bucketName: bucketName
});

// Export the ARN of the newly-created bucket
export const bucketArn = versionedBucket.bucket.arn;
```
The code is now much cleaner. We simply import our new component into the scope and use it to create a versioned bucket. Then, we register a stack output by accessing the underlying BucketV2 object.
The pulumi preview output will now be slightly different:
```
$ pulumi preview
Previewing update (dev)

View Live: https://app.pulumi.com/…

     Type                                       Name                                 Plan
 +   pulumi:pulumi:Stack                        LogRocket-dev                        create
 +   └─ LogRocket:example:VersionedBucket       versioned-bucket                     create
 +      ├─ aws:s3:BucketV2                      versioned-bucket-bucket              create
 +      └─ aws:s3:BucketVersioningV2            versioned-bucket-bucket-versioning   create

Outputs:
    bucketArn: output<string>

Resources:
    + 4 to create
```
In this article, we investigated some of the pros and cons of IaaS- and IaC-based solutions. We also demonstrated how to leverage Pulumi to create a simple S3 bucket in AWS using TypeScript.
In my experience, Infrastructure as Code is worth a try. Setting up complex infrastructure manually is definitely more prone to errors. IaC lets us describe our infrastructure with precision, possibly automating deployments as needed.
However, writing infrastructural code is also very different from writing application code. For instance, in many cases it is just fine to duplicate infrastructural code, whereas duplicating application code is often a smell that something is wrong with our design.
Additionally, testing infrastructural code is much more complex and sometimes not even entirely possible. Hence, programmers should always start by keeping things simple, avoiding complex architectures or premature generalizations in infrastructural code.
At the time of writing, Pulumi is definitely one of the best tools available to write code for AWS-based infrastructure. The code we can write in TypeScript is just much more readable than its Terraform- or CloudFormation-based alternatives where we have to use a proprietary language or write JSON/YAML deployment files, respectively.
The closest competitor to Pulumi in the AWS ecosystem is AWS Cloud Development Kit (CDK), which relies on CloudFormation under the hood. In CDK, deployments are much slower and more fragile than their Pulumi counterparts, which instead rely on the AWS SDK.