Terraform is an amazing tool for managing infrastructure, and it’s simple enough to get the hang of in just a few hours. However, once you get started using Terraform, you’ll quickly run into tasks that seem easy yet have no obvious solution. Let’s go over some tricks and hacks to help you get the most out of the popular infrastructure as code (IaC) solution.
count as an on-off switch for resources

One of Terraform's strengths is its ability to turn blocks of resources and data into reusable modules. As part of this process, you'll often need a way to disable the creation of certain resources based on an input variable. At present, there's no attribute like resource_enabled = false to disable the creation of a resource. Fortunately, you can achieve a similar effect by setting count = 0 to disable resource creation or count = 1 to enable it.
count can be used to create an array of resources instead of just a single resource, so setting count = 0 will create an array of resources of length zero, effectively disabling the resource. This technique is common even within official Terraform modules. For example, the following abbreviated code snippet is from the official terraform-aws-autoscaling module source code.
resource "aws_launch_configuration" "this" { count = var.create_lc ? 1 : 0 image_id = var.image_id instance_type = var.instance_type # ... }
This code creates an AWS autoscaling launch configuration if the variable create_lc is set to true when using the module.
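On the calling side, consumers of the module can then toggle the launch configuration with a single boolean. Here's a minimal usage sketch; the variable values are illustrative, and the real module takes many more arguments:

module "autoscaling" {
  source = "terraform-aws-modules/autoscaling/aws"

  create_lc     = true # set to false to skip creating the launch configuration
  image_id      = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
  # ... remaining required arguments omitted
}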
Setting count causes the resource to become an array instead of a single item, so if you need to access properties of that resource, you'll have to access them as an array. For example, if you need to access the id attribute of the above aws_launch_configuration, you'd need to write something like concat(aws_launch_configuration.this.*.id, [""])[0] to safely pull the id out of the resources array.
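For instance, a module output that safely exposes the ID whether or not the resource was created could look like the following sketch (the output name is illustrative):

output "launch_configuration_id" {
  # concat() appends a fallback empty string so indexing [0] never fails,
  # even when count = 0 and the resource array is empty
  value = concat(aws_launch_configuration.this.*.id, [""])[0]
}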
null_resource
Sometimes the built-in functionality Terraform provides just isn't enough. For instance, you may need to execute some command locally on the machine running Terraform. You can do this using the mysteriously named null_resource. This acts like a normal resource within the Terraform resource graph but doesn't actually do anything.
Why is this useful? Because null_resource can run provisioners just like any normal resource, including the local-exec provisioner, which runs a command on the local machine. You can control when this provisioner runs by passing in a triggers map.
For example, if the Kubernetes Terraform provider doesn't have all the functionality you need, you could manually run the kubectl apply command using null_resource, as shown below.
variable "config_path" { description = "path to a kubernetes config file" } variable "k8s_yaml" { description = "path to a kubernetes yaml file to apply" } resource "null_resource" "kubectl_apply" { triggers = { config_contents = filemd5(var.config_path) k8s_yaml_contents = filemd5(var.k8s_yaml) } provisioner "local-exec" { command = "kubectl apply --kubeconfig ${var.config_path} -f ${var.k8s_yaml}" } }
In the above example, any change to the contents of the Kubernetes config file or the Kubernetes YAML will cause the command to rerun. Unfortunately, there's no easy way to capture the output of the local-exec command and save it to the Terraform state using this method. You'll also need to make sure the machine running Terraform has the dependencies installed to run the actual command specified by the local-exec provisioner.
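As a side note, if you want the command to rerun on every apply rather than only when inputs change, one common variation is a timestamp trigger. This is a sketch of the general pattern, not something from the example above:

resource "null_resource" "always_run" {
  triggers = {
    # timestamp() produces a new value on every run, so the provisioner
    # reruns on each apply
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "echo 'this runs on every apply'"
  }
}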
terraform_remote_state

If you're building a large infrastructure in Terraform, you'll likely need to create a service in Terraform and then configure that service via a separate Terraform provider. Terraform is great at handling dependencies between resources, but it can't handle situations where one Terraform provider depends on the creation of a resource in another provider.
For example, you'll run into trouble if you need to create a Kubernetes cluster using Terraform, then configure that same cluster using the Terraform Kubernetes provider after it's created. That's because Terraform will try to connect to all defined providers and read the state of all defined resources during planning, but it can't connect to the Kubernetes provider because the cluster doesn't exist yet.
It would be great if Terraform could handle dependencies between providers like this, but until then, you can solve the chicken-and-egg dilemma by breaking up your Terraform project into smaller projects that can be run in a chain.
Assuming you're using remote state for Terraform, you can import the Terraform state from previous runs using the terraform_remote_state data source. This allows the outputs of previous Terraform runs to act as inputs to the next Terraform run.
Let’s say a Terraform run creates a Kubernetes cluster and outputs the connection info for that cluster. The next Terraform run could import that state from the first run and read the cluster connection info into the Terraform Kubernetes provider.
The technique is demonstrated below. The first stage might look something like this:
# stage1/main.tf

provider "aws" {
  region = "us-east-1"
}

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "stage1.tfstate"
    region = "us-east-1"
  }
}

resource "aws_eks_cluster" "k8s" {
  name = "sample-kubernetes-cluster"
  # ...
}

# Output connection info for the kubernetes cluster into the Terraform state
output "k8s_endpoint" {
  value = aws_eks_cluster.k8s.endpoint
}

output "k8s_ca_data" {
  value = aws_eks_cluster.k8s.certificate_authority.0.data
}
The second stage of the Terraform config would then appear as follows.
# stage2/main.tf

provider "aws" {
  region = "us-east-1"
}

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "stage2.tfstate"
    region = "us-east-1"
  }
}

# Import the state from stage 1 and read the outputs
data "terraform_remote_state" "stage1" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state-bucket"
    key    = "stage1.tfstate"
    region = "us-east-1"
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.stage1.outputs.k8s_endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.stage1.outputs.k8s_ca_data)
  # ...
}

resource "kubernetes_deployment" "example" {
  # ... continue configuring cluster
}
In the above example, we ran the first stage to create a Kubernetes cluster and output connection info for the cluster into the Terraform state. The second stage then imported the first stage's Terraform state as data and read that connection info to configure the cluster.
templatefile()
Terraform makes it easy to take outputs from one resource and pipe them as inputs to another resource. However, it struggles when a resource writes a file on the local filesystem that another resource needs to read as an input.
Ideally, resources would never do this, but in reality, providers sometimes write outputs into local files instead of returning the output as a string. Terraform has no way to figure out that there’s a dependency between resources when that dependency comes in the form of writing and reading from a local file.
Fortunately, you can trick Terraform into recognizing this dependency by using the templatefile() function. This function reads a file from the filesystem and substitutes any variables you pass to it into the file as it reads it. If those variables come from the outputs of another resource, then Terraform must wait for that resource to be applied before reading the file.
This is demonstrated below using the alicloud_cs_managed_kubernetes resource from the Alicloud provider. This resource creates a Kubernetes cluster and writes the cluster config to a file on the local disk. We then read that file using templatefile() and write its contents to an output.
resource "alicloud_cs_managed_kubernetes" "k8s" { name_prefix = "sample kubernetes cluster" kube_config = "${path.module}/kube.config" # ... } output "k8s_cluster_config_contents" { value = templatefile("${path.module}/kube.config", { # This variable creates a dependency on the cluster before reading the file cluster_id = alicloud_cs_managed_kubernetes.k8s.id }) }
In the above example, the kube.config file is read via the templatefile() function with a variable that depends on an output of the cluster resource. The cluster_id variable is not actually used; it's only there to force Terraform to wait for the cluster to be created before it tries to read the kube.config contents. If you use this technique, you'll need to create the file on your local filesystem manually before the first run, since Terraform expects the file to exist before it begins its run.
Hopefully, these techniques will come in handy in your Terraform excursions. If there are any Terraform tricks you’ve found useful, feel free to share them in the comments below.
Happy Terraforming!
One Reply to "Dirty Terraform hacks"
I have terraform that creates a storage account for storing the tfstate, which creates a chicken-and-egg problem. My solution, also a hack, is to create a local_file resource with the filename = "backend.tf" and contents set to a terraform backend block configured to match the storage account. The first time the terraform is applied, the storage account and backend.tf file are created. The second time it is applied, the state is migrated to the backend specified in the generated backend.tf.
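A rough sketch of that bootstrap pattern, assuming Azure since the commenter mentions a storage account; every name, the backend type, and the argument values here are illustrative guesses at the commenter's setup:

resource "azurerm_storage_account" "tfstate" {
  name                     = "mytfstatestorage"
  resource_group_name      = "my-resource-group"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "tfstate" {
  name                 = "tfstate"
  storage_account_name = azurerm_storage_account.tfstate.name
}

# Generate backend.tf pointing at the storage account created above.
# On the first apply this file is written; on the second apply, Terraform
# reads it and offers to migrate the local state to the remote backend.
resource "local_file" "backend" {
  filename = "${path.module}/backend.tf"
  content  = <<-EOT
    terraform {
      backend "azurerm" {
        resource_group_name  = "my-resource-group"
        storage_account_name = "${azurerm_storage_account.tfstate.name}"
        container_name       = "${azurerm_storage_container.tfstate.name}"
        key                  = "terraform.tfstate"
      }
    }
  EOT
}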