Imagine being able to sculpt your cloud infrastructure with the precision of a master builder, all from a single blueprint. With Terraform, this vision becomes reality. In this guide, we’ll explore how to use Terraform code on IBM Cloud, guiding you through the art of crafting both Classic and VPC environments.
Whether you're an architect of digital landscapes or a curious newcomer, this guide will equip you with the tools to create and control your cloud world with ease.
Ready to terraform your future? Let’s get started!
Reference: https://cloud.ibm.com/docs/ibm-cloud-provider-for-terraform?topic=ibm-cloud-provider-for-terraform-getting-started
Note: Use Visual Studio Code for best results!
Before we start, a little introduction about Terraform:
What it is:
Terraform is an infrastructure as code tool that lets you build, change, and version cloud and on-prem resources safely and efficiently.
How it works:
Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Providers enable Terraform to work with virtually any platform or service with an accessible API.
The core Terraform workflow consists of three stages:
Write:
You define resources, which may be across multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.
Plan:
Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
Apply:
On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.
Why Terraform?
Manage any infrastructure:
Terraform takes an immutable approach to infrastructure, reducing the complexity of upgrading or modifying your services and infrastructure.
Track your infrastructure:
It generates a plan and prompts you for your approval before modifying your infrastructure. It also keeps track of your real infrastructure in a state file, which acts as a source of truth for your environment. Terraform uses the state file to determine the changes to make to your infrastructure so that it will match your configuration.
Automate changes:
Terraform configuration files are declarative, meaning that they describe the end state of your infrastructure. You do not need to write step-by-step instructions to create resources because Terraform handles the underlying logic. Terraform builds a resource graph to determine resource dependencies and creates or modifies non-dependent resources in parallel. This allows Terraform to provision resources efficiently.
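As a small illustration (a hypothetical snippet, separate from the scenarios later in this post), simply referencing one resource's attribute from another is enough for Terraform to infer the dependency and order the operations:
resource "ibm_is_vpc" "example" {
name = "example-vpc"
}
resource "ibm_is_subnet" "example" {
name = "example-subnet"
// Referencing the VPC's id creates an implicit dependency, so the VPC is created first
vpc = ibm_is_vpc.example.id
zone = "us-south-1"
total_ipv4_address_count = 256
}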
Standardize configurations:
It supports reusable configuration components called modules that define configurable collections of infrastructure, saving time and encouraging best practices. You can use publicly available modules from the Terraform Registry or write your own.
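For example, a module call looks like this (a hypothetical local module; the source path and inputs are illustrative, not a published registry module):
module "network" {
// Could also point to a module from the Terraform Registry instead of a local directory
source = "./modules/vpc"
region = "br-sao"
vpc_name = "my-vpc"
}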
Collaborate:
Since your configuration is written in a file, you can commit it to a Version Control System (VCS) and use HCP Terraform to efficiently manage Terraform workflows across teams. HCP Terraform runs Terraform in a consistent, reliable environment and provides secure access to shared state and secret data, role-based access controls, a private registry for sharing both modules and providers, and more.
We will walk through a few scenarios in this blog:
- A VPC, a VSI, and a security group created for the VSI
- A VSI with a public IP and an IBM Cloud load balancer
- A VPN tunnel connection between two VPCs
- A Transit Gateway (TGW) connection between two VPCs
- An application load balancer (ALB) created for a VPC
- A network load balancer (NLB) created for a VPC
Terraform configuration files are used for writing your Terraform code. They have a .tf extension and use a declarative language called HashiCorp Configuration Language (HCL) to describe the different components that are used to automate your infrastructure.
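Every HCL block has the same shape: a block type (such as resource, variable, or output), one or more labels, and a body of arguments. A minimal example (illustrative only, not one of the resources used later):
// Block type "resource", resource type "ibm_is_vpc", local name "demo", and a body with one argument
resource "ibm_is_vpc" "demo" {
name = "demo-vpc"
}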
Step 1:
Open a terminal and run the commands below. The export line assumes the Terraform binary was downloaded to $HOME/terraform; adjust the path to wherever you installed it. Running terraform with no arguments simply verifies the installation by printing the available subcommands.
$mkdir terraform && cd terraform
$export PATH=$PATH:$HOME/terraform
$terraform
$mkdir myproject && cd myproject
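If the binary is on your PATH, you can confirm the installation and version at any time (an optional check):
$terraform -version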
Step 2:
Create “versions.tf” file as mentioned below:
versions.tf defines the required Terraform and provider versions.
terraform {
required_version = ">=1.0.0, <2.0"
required_providers {
ibm = {
source = "IBM-Cloud/ibm"
}
}
}
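Optionally, you can also pin the provider to a version range so that a future provider release does not change behaviour unexpectedly. A hedged variation on the block above (the constraint value is only an example; choose a range you have tested):
terraform {
required_version = ">=1.0.0, <2.0"
required_providers {
ibm = {
source = "IBM-Cloud/ibm"
version = "~> 1.60"
}
}
}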
Step 3:
Create or retrieve an IBM Cloud API key
Step 4:
Create “terraform.tfvars” file as mentioned below:
The .tfvars files are used to assign values to the input variables declared in other Terraform configuration files.
By default, Terraform will load variable values from files called terraform.tfvars or any_name.auto.tfvars. If you have both files, any_name.auto.tfvars will take precedence over terraform.tfvars.
ibmcloud_api_key = "<ibmcloud api key>"
region = "<region, for example br-sao>"
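Because terraform.tfvars contains your API key, avoid committing it (or the Terraform state files) to version control. A minimal .gitignore along these lines is a common convention (adjust it to your own repository layout):
.terraform/
*.tfstate
*.tfstate.backup
terraform.tfvars
*.auto.tfvars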
Step 5:
Create “provider.tf” file as mentioned below:
In the provider.tf file, you declare the providers required by a Terraform configuration, specifying details like authentication credentials, API endpoints, and other provider-specific settings needed to interact with external systems or cloud platforms.
variable "ibmcloud_api_key" {}
variable "region" {}
provider "ibm" {
ibmcloud_api_key = var.ibmcloud_api_key
region = var.region
}
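Optionally, you can mark the API key variable as sensitive so that Terraform redacts its value from plan and apply output, and give the region a default. This is a variation on the variable declarations above, not a required change:
variable "ibmcloud_api_key" {
type = string
sensitive = true
}
variable "region" {
type = string
default = "br-sao"
}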
Note: To check the sample templates for VPC infrastructure, see the link below:
https://cloud.ibm.com/docs/ibm-cloud-provider-for-terraform?topic=ibm-cloud-provider-for-terraform-provider-template#vpc-templates
Note: For the vpc.tf file that we will need, see step 4 of the document below:
https://cloud.ibm.com/docs/ibm-cloud-provider-for-terraform?topic=ibm-cloud-provider-for-terraform-sample_vpc_config
The above steps are the same for every scenario. The next step is what actually determines your configuration.
The main.tf (here vpc.tf) file is the starting point where you will implement the logic of infrastructure as code. This file will include Terraform resources, but it can also contain data sources and locals.
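For example, a data source reads information about something that already exists (such as an OS image), and a locals block names intermediate values you want to reuse. A short sketch (the locals block here is illustrative and is not used by the later examples):
// Look up an existing image instead of hard-coding its ID
data "ibm_is_image" "centos" {
name = "ibm-centos-7-9-minimal-amd64-10"
}
locals {
zone = "br-sao-1"
}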
Example 1:
- Create a VPC along with a VSI.
- The VSI will have a floating IP (FIP) assigned to it.
- A security group (SG) is created for the VSI.
Steps 1-5 are the same as mentioned above.
Step 6 is as follows:
Create "vpc.tf" file as below:
// Creating ssh-key. Provide the public key here and use private key to login into the VSI.
resource "ibm_is_ssh_key" "ssh-key" {
name = "testterranewkey"
public_key = "xxxxx"
type = "rsa"
}
// Allow all incoming network traffic on port 22 for VPC
resource "ibm_is_security_group_rule" "ingress_ssh_all" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "0.0.0.0/0"
tcp {
port_min = 22
port_max = 22
}
}
// Getting image for VSI
data "ibm_is_image" "centos" {
name = "ibm-centos-7-9-minimal-amd64-10"
}
//
#Code for VPC
//
resource "ibm_is_vpc" "vpc" {
name = "vpctest1"
}
// Creating address-prefix
resource "ibm_is_vpc_address_prefix" "vpc_prefix" {
name = "test-address-prefix"
zone = "br-sao-1"
vpc = ibm_is_vpc.vpc.id
cidr = "192.168.10.0/24"
}
// Creating Subnet resource
resource "ibm_is_subnet" "subnet1" {
depends_on = [
ibm_is_vpc_address_prefix.vpc_prefix
]
ipv4_cidr_block = "192.168.10.0/24"
name = "subnettest1"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
}
// Defining Security Group Rules
resource "ibm_is_security_group" "sg1" {
name = "sgtestgroup1"
vpc = ibm_is_vpc.vpc.id
}
resource "ibm_is_security_group_rule" "sgrule1" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
icmp {
}
}
resource "ibm_is_security_group_rule" "sgrule2" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
udp {
port_min = 500
port_max = 500
}
}
resource "ibm_is_security_group_rule" "sgrule3" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
udp {
port_min = 4500
port_max = 4500
}
}
resource "ibm_is_security_group_rule" "sgrule4" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
tcp {
port_min = 22
port_max = 22
}
}
resource "ibm_is_security_group_rule" "sgrule5" {
group = ibm_is_security_group.sg1.id
direction = "outbound"
remote = "0.0.0.0/0"
}
// Creating VSI
resource "ibm_is_instance" "vsi1" {
name = "vsitest1"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
keys = [ibm_is_ssh_key.ssh-key.id]
image = data.ibm_is_image.centos.id
profile = "cx2-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
// Assigning Floating-IP
resource "ibm_is_floating_ip" "fip1" {
name = "fiptest1"
target = ibm_is_instance.vsi1.primary_network_interface[0].id
}
// Getting ssh command to login VSI
output "sshcommand" {
value = "ssh root@${ibm_is_floating_ip.fip1.address}"
Step 7:
After saving the vpc.tf file, run the commands below in the terminal:
$terraform init
This command initializes a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.
$terraform fmt
Run this command on your configuration files so that formatting standards and language style conventions are applied uniformly across your configuration.
$terraform validate
Validates the configuration files in a directory.
$terraform plan
The terraform plan command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
$terraform apply
The terraform apply command performs a plan just like terraform plan does, but then actually carries out the planned changes to each resource using the relevant infrastructure provider's API.
$terraform show
The terraform show command is used to provide human-readable output from a state or plan file. This can be used to inspect a plan to ensure that the planned operations are expected, or to inspect the current state as Terraform sees it.
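$terraform output
The terraform output command prints the values of the output blocks recorded in the state. For example, after the apply you can run terraform output sshcommand to print the ready-to-use SSH command defined in vpc.tf (an optional convenience, not a required step).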
$terraform plan -destroy
The -destroy flag creates a plan whose goal is to destroy all remote objects that currently exist, letting you preview a destroy operation before you run the terraform destroy command.
$terraform destroy
The terraform destroy command is a convenient way to destroy all remote objects managed by a particular Terraform configuration.
Note: To delete a specific resource from the vpc.tf file, there are two ways (a third, command-line option is shown below):
- Comment out the resource block you wish to delete and run the “terraform apply” command.
- Delete the resource block you wish to delete and run the “terraform apply” command.
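A third option (use it with care; -target is intended for exceptional situations) is to destroy a single resource by its address, for example:
$terraform destroy -target=ibm_is_floating_ip.fip1
Note that if the resource block remains in vpc.tf, the next terraform apply will recreate it.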
Example 2 (classic infrastructure):
- A VSI with both a public and a private address
- An IBM Cloud load balancer (LBaaS) provisioned with the new VSI attached to it as a member
Create "vpc.tf" file as below:
resource "ibm_compute_ssh_key" "test_ssh_key" {
label = "testterranewkey"
notes = "test_key_terra_classic"
public_key = "xxxxx"
}
resource "ibm_compute_vm_instance" "compute_test1" {
hostname = "testvsi1"
domain = "SoftLayer-Internal-Network-Operations.cloud"
os_reference_code = "CENTOS_7_64"
datacenter = "sjc04"
network_speed = 100
hourly_billing = true
private_network_only = false
cores = 1
memory = 1024
public_vlan_id = 3023244
private_vlan_id = 3023246
}
resource "ibm_lbaas" "lbaas" {
name = "terraformLB"
description = "testLB_terraform"
subnets = [2095401]
protocols {
frontend_protocol = "TCP"
frontend_port = 22
backend_protocol = "TCP"
backend_port = 22
load_balancing_method = "round_robin"
}
}
resource "ibm_lbaas_health_monitor" "lbaas_hm" {
protocol = ibm_lbaas.lbaas.health_monitors[0].protocol
port = ibm_lbaas.lbaas.health_monitors[0].port
timeout = 3
interval = 5
max_retries = 6
url_path = "/"
lbaas_id = ibm_lbaas.lbaas.id
monitor_id = ibm_lbaas.lbaas.health_monitors[0].monitor_id
}
resource "ibm_lbaas_server_instance_attachment" "lbaas_member" {
count = 1
private_ip_address = element(
ibm_compute_vm_instance.compute_test1.*.ipv4_address_private,
count.index,
)
lbaas_id = ibm_lbaas.lbaas.id
depends_on = [ibm_lbaas.lbaas]
}
resource "ibm_subnet" "testsubnet" {
type = "Portable"
private = true
ip_version = 4
capacity = 8
vlan_id = xyz
}
Use the built-in cidrhost function with index 1 to get the first usable host address from the new subnet's CIDR (for example, cidrhost("192.168.10.0/24", 1) returns 192.168.10.1):
output "first_ip_address" {
value = cidrhost(ibm_subnet.testsubnet.subnet_cidr,1)
}
Example 3:
- Create 2 VPCs along with 2 VSIs.
- VSIs will have FIPs assigned to them.
- Both VPCs will have address prefixes and subnets added.
- We will create a working VPN tunnel between the VPCs.
Create "vpc.tf" file as below:
//Create ssh-key
resource "ibm_is_ssh_key" "ssh-key" {
name = "testterranewkey"
public_key = "xxxx"
type = "rsa"
}
//allow all incoming network traffic on port 22 for 1st VPC
resource "ibm_is_security_group_rule" "ingress_ssh_all" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "0.0.0.0/0"
tcp {
port_min = 22
port_max = 22
}
}
//allow all incoming network traffic on port 22 for 2nd VPC
resource "ibm_is_security_group_rule" "ingress_ssh_all2" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "0.0.0.0/0"
tcp {
port_min = 22
port_max = 22
}
}
// Getting VSI image
data "ibm_is_image" "centos" {
name = "ibm-centos-7-9-minimal-amd64-10"
}
//
#Code for First VPC
//
resource "ibm_is_vpc" "vpc" {
name = "vpctest1"
}
// Creating address-prefix
resource "ibm_is_vpc_address_prefix" "vpc_prefix" {
name = "test-address-prefix"
zone = "br-sao-1"
vpc = ibm_is_vpc.vpc.id
cidr = "192.168.10.0/24"
}
// Creating subnet
resource "ibm_is_subnet" "subnet1" {
depends_on = [
ibm_is_vpc_address_prefix.vpc_prefix
]
ipv4_cidr_block = "192.168.10.0/24"
name = "subnettest1"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
}
//Creating Security Group
resource "ibm_is_security_group" "sg1" {
name = "sgtestgroup1"
vpc = ibm_is_vpc.vpc.id
}
// Creating Security group rules
resource "ibm_is_security_group_rule" "sgrule1" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
icmp {
}
}
resource "ibm_is_security_group_rule" "sgrule2" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
udp {
port_min = 500
port_max = 500
}
}
resource "ibm_is_security_group_rule" "sgrule3" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
udp {
port_min = 4500
port_max = 4500
}
}
resource "ibm_is_security_group_rule" "sgrule4" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
tcp {
port_min = 22
port_max = 22
}
}
resource "ibm_is_security_group_rule" "sgrule5" {
group = ibm_is_security_group.sg1.id
direction = "outbound"
remote = "0.0.0.0/0"
}
// Creating VSI1
resource "ibm_is_instance" "vsi1" {
name = "vsitest1"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
keys = [ibm_is_ssh_key.ssh-key.id]
image = data.ibm_is_image.centos.id
profile = "cx2-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
// Creating FIP1
resource "ibm_is_floating_ip" "fip1" {
name = "fiptest1"
target = ibm_is_instance.vsi1.primary_network_interface[0].id
}
// Creating VPN gateway
resource "ibm_is_vpn_gateway" "is_vpn_gateway" {
name = "my-vpn-gateway"
subnet = ibm_is_subnet.subnet1.id
mode = "route"
}
// Creating VPN gateway connection
resource "ibm_is_vpn_gateway_connection" "conn1" {
name = "vpn-gateway-connection1"
vpn_gateway = ibm_is_vpn_gateway.is_vpn_gateway.id
peer_address = ibm_is_vpn_gateway.is_vpn_gateway2.public_ip_address
preshared_key = "xxxx"
admin_state_up = true
local_cidrs = [ibm_is_subnet.subnet1.ipv4_cidr_block]
peer_cidrs = [ibm_is_subnet.subnet2.ipv4_cidr_block]
}
output "sshcommand" {
value = "ssh root@${ibm_is_floating_ip.fip1.address}"
}
//
#Code for Second VPC
//
resource "ibm_is_vpc" "vpc2" {
name = "vpctest2"
}
// Creating address-prefix
resource "ibm_is_vpc_address_prefix" "vpc_prefix2" {
name = "test-address-prefix2"
zone = "br-sao-1"
vpc = ibm_is_vpc.vpc2.id
cidr = "192.168.30.0/24"
}
// Creating subnet
resource "ibm_is_subnet" "subnet2" {
depends_on = [
ibm_is_vpc_address_prefix.vpc_prefix2
]
ipv4_cidr_block = "192.168.30.0/24"
name = "subnettest-2"
vpc = ibm_is_vpc.vpc2.id
zone = "br-sao-1"
}
// Creating Security Group
resource "ibm_is_security_group" "sg2" {
name = "sgtestgroup2"
vpc = ibm_is_vpc.vpc2.id
}
// Creating Security Group rules
resource "ibm_is_security_group_rule" "sgrule6" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
icmp {
}
}
resource "ibm_is_security_group_rule" "sgrule7" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
udp {
port_min = 500
port_max = 500
}
}
resource "ibm_is_security_group_rule" "sgrule8" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
udp {
port_min = 4500
port_max = 4500
}
}
resource "ibm_is_security_group_rule" "sgrule9" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
tcp {
port_min = 22
port_max = 22
}
}
resource "ibm_is_security_group_rule" "sgrule10" {
group = ibm_is_security_group.sg2.id
direction = "outbound"
remote = "0.0.0.0/0"
}
// Creating VSI2
resource "ibm_is_instance" "vsi2" {
name = "vsitest2"
vpc = ibm_is_vpc.vpc2.id
zone = "br-sao-1"
keys = [ibm_is_ssh_key.ssh-key.id]
image = data.ibm_is_image.centos.id
profile = "cx2-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet2.id
security_groups = [ibm_is_security_group.sg2.id]
}
}
// Creating FIP2
resource "ibm_is_floating_ip" "fip2" {
name = "fiptest2"
target = ibm_is_instance.vsi2.primary_network_interface[0].id
}
// Creating VPN Gateway 2
resource "ibm_is_vpn_gateway" "is_vpn_gateway2" {
name = "my-vpn-gateway2"
subnet = ibm_is_subnet.subnet2.id
mode = "route"
}
// Creating VPN gateway Connection 2
resource "ibm_is_vpn_gateway_connection" "conn2" {
name = "vpn-gateway-connection2"
vpn_gateway = ibm_is_vpn_gateway.is_vpn_gateway2.id
peer_address = ibm_is_vpn_gateway.is_vpn_gateway.public_ip_address
preshared_key = "xxxx"
admin_state_up = true
local_cidrs = [ibm_is_subnet.subnet2.ipv4_cidr_block]
peer_cidrs = [ibm_is_subnet.subnet1.ipv4_cidr_block]
}
output "sshcommand2" {
value = "ssh root@${ibm_is_floating_ip.fip2.address}"
}
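After terraform apply completes, you can check that both gateway connections come up in the IBM Cloud console (or in the terraform show output). As an optional check, assuming the tunnel is established and relying on the security group rules above (which already allow ICMP and SSH from the peer subnet), SSH into the first VSI using the sshcommand output and ping the second VSI's private address:
$ssh root@<floating IP printed by sshcommand>
$ping <private IP of vsitest2>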
Example 4:
- Create 2 VPCs along with 2 VSIs.
- VSIs will have FIPs assigned to them.
- Both VPCs will have address prefixes and subnets added.
- Create a TGW and attach both VPCs to it.
- We will create a working TGW connection between the VPCs.
Create "vpc.tf" file as below:
// Use this resource block if you wish to create new API-Key
resource "ibm_iam_api_key" "iam_api_key" {
name = "test-api-2"
apikey = "xxx"
}
// To create new SSH-Key
resource "ibm_is_ssh_key" "ssh-key" {
name = "testterranewkey2"
public_key = "xxxxx"
type = "rsa"
}
// Use below data block if you want to use existing ssh Key (Edit the name field as per your need)
data "ibm_is_ssh_key" "ssh" {
name = "abcd"
}
// Use below data block if you want to use existing API Key (Edit the apikey_id field as per your need)
data "ibm_iam_api_key" "iam_api_key" {
apikey_id = "xxxx"
}
// Creating transit gateway
resource "ibm_tg_gateway" "new_tg_gw"{
name="transit-gateway-test-terra"
location="br-sao"
global=true
}
// Creating transit gateway connection for VPC 1
resource "ibm_tg_connection" "test_ibm_tg_connection" {
gateway = ibm_tg_gateway.new_tg_gw.id
network_type = "vpc"
name = "myconnection"
network_id = ibm_is_vpc.vpc.resource_crn
}
// Creating transit gateway connection for VPC 2
resource "ibm_tg_connection" "test_ibm_tg_connection2" {
gateway = ibm_tg_gateway.new_tg_gw.id
network_type = "vpc"
name = "myconnection2"
network_id = ibm_is_vpc.vpc2.resource_crn
}
// Allow all incoming network traffic on port 22 for 1st VPC
resource "ibm_is_security_group_rule" "ingress_ssh_all" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "0.0.0.0/0"
tcp {
port_min = 22
port_max = 22
}
}
// Allow all incoming network traffic on port 22 for 2nd VPC
resource "ibm_is_security_group_rule" "ingress_ssh_all2" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "0.0.0.0/0"
tcp {
port_min = 22
port_max = 22
}
}
// Getting image for VSI
data "ibm_is_image" "centos" {
name = "ibm-centos-7-9-minimal-amd64-10"
}
//
#Code for First VPC
//
resource "ibm_is_vpc" "vpc" {
name = "vpctest1"
}
resource "ibm_is_vpc_address_prefix" "vpc_prefix" {
name = "test-address-prefix"
zone = "br-sao-1"
vpc = ibm_is_vpc.vpc.id
cidr = "192.168.10.0/24"
}
// Creating Subnet1
resource "ibm_is_subnet" "subnet1" {
depends_on = [
ibm_is_vpc_address_prefix.vpc_prefix
]
ipv4_cidr_block = "192.168.10.0/24"
name = "subnettest1"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
}
// Creating SG1
resource "ibm_is_security_group" "sg1" {
name = "sgtestgroup1"
vpc = ibm_is_vpc.vpc.id
}
// Creating SG rules
resource "ibm_is_security_group_rule" "sgrule1" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
icmp {
}
}
resource "ibm_is_security_group_rule" "sgrule2" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
udp {
port_min = 500
port_max = 500
}
}
resource "ibm_is_security_group_rule" "sgrule3" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
udp {
port_min = 4500
port_max = 4500
}
}
resource "ibm_is_security_group_rule" "sgrule4" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
tcp {
port_min = 22
port_max = 22
}
}
resource "ibm_is_security_group_rule" "sgrule5" {
group = ibm_is_security_group.sg1.id
direction = "outbound"
remote = "0.0.0.0/0"
}
// Creating VSI1
resource "ibm_is_instance" "vsi1" {
name = "vsitest-terra"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
keys = [ibm_is_ssh_key.ssh-key.id]
image = data.ibm_is_image.centos.id
profile = "cx2-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
// Creating Floating IP1
resource "ibm_is_floating_ip" "fip1" {
name = "fiptest1-terra"
target = ibm_is_instance.vsi1.primary_network_interface[0].id
}
output "sshcommand" {
value = "ssh root@${ibm_is_floating_ip.fip1.address}"
}
//
#Code for Second VPC
//
resource "ibm_is_vpc" "vpc2" {
name = "vpctest2"
}
resource "ibm_is_vpc_address_prefix" "vpc_prefix2" {
name = "test-address-prefix2"
zone = "br-sao-1"
vpc = ibm_is_vpc.vpc2.id
cidr = "192.168.30.0/24"
}
// Creating Subnet2
resource "ibm_is_subnet" "subnet2" {
depends_on = [
ibm_is_vpc_address_prefix.vpc_prefix2
]
ipv4_cidr_block = "192.168.30.0/24"
name = "subnettest-2"
vpc = ibm_is_vpc.vpc2.id
zone = "br-sao-1"
}
#Creating SG2
resource "ibm_is_security_group" "sg2" {
name = "sgtestgroup2"
vpc = ibm_is_vpc.vpc2.id
}
// Creating SG2 Rules
resource "ibm_is_security_group_rule" "sgrule6" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
icmp {
}
}
resource "ibm_is_security_group_rule" "sgrule7" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
udp {
port_min = 500
port_max = 500
}
}
resource "ibm_is_security_group_rule" "sgrule8" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
udp {
port_min = 4500
port_max = 4500
}
}
resource "ibm_is_security_group_rule" "sgrule9" {
group = ibm_is_security_group.sg2.id
direction = "inbound"
remote = "192.168.10.0/24"
tcp {
port_min = 22
port_max = 22
}
}
resource "ibm_is_security_group_rule" "sgrule10" {
group = ibm_is_security_group.sg2.id
direction = "outbound"
remote = "0.0.0.0/0"
}
// Creating VSI2
resource "ibm_is_instance" "vsi2" {
name = "vsitest2-terra"
vpc = ibm_is_vpc.vpc2.id
zone = "br-sao-1"
keys = [ibm_is_ssh_key.ssh-key.id]
image = data.ibm_is_image.centos.id
profile = "cx2-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet2.id
security_groups = [ibm_is_security_group.sg2.id]
}
}
// Creating FIP2
resource "ibm_is_floating_ip" "fip2" {
name = "fiptest2-terra"
target = ibm_is_instance.vsi2.primary_network_interface[0].id
}
output "sshcommand2" {
value = "ssh root@${ibm_is_floating_ip.fip2.address}"
}
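Once the apply completes and both VPC connections are attached, the Transit Gateway exchanges routes between the two VPCs automatically. As an optional check (relying on the security group rules above, which allow ICMP and SSH from the peer subnet), SSH into the first VSI via the sshcommand output and ping the second VSI's private address in 192.168.30.0/24:
$ssh root@<floating IP printed by sshcommand>
$ping <private IP of vsitest2-terra>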
Example 5:
- Create a VPC along with a VSI.
- The VSI will have a floating IP (FIP) assigned to it.
- A security group (SG) is created for the VSI.
- An application load balancer (ALB) is created for the VPC.
"vpc.tf" file will be as below:
// Use this resource block if you wish to create new API-Key
resource "ibm_iam_api_key" "iam_api_key" {
name = "test-api-2-terra-lavisha"
apikey = "xxxxx"
}
// Use below data block if you want to use existing API Key (Edit the apikey_id field as per your need)
data "ibm_iam_api_key" "iam_api_key" {
apikey_id = "xxxxx"
}
// Use this resource block if you wish to create new SSH-Key
resource "ibm_is_ssh_key" "ssh-key" {
name = "testterranewkey2"
public_key = "xxxxl"
type = "rsa"
}
// Use below data block if you want to use existing ssh Key (Edit the name field as per your need)
# data "ibm_is_ssh_key" "ssh" {
# name = "testterranewkey2"
# }
//Allow all incoming network traffic on port 22 for VPC
resource "ibm_is_security_group_rule" "ingress_ssh_all" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "0.0.0.0/0"
tcp {
port_min = 22
port_max = 22
}
}
// Creating ALB
resource "ibm_is_lb" "albtest" {
name = "alb-test-terra"
subnets = [ibm_is_subnet.subnet1.id, ibm_is_subnet.subnet2.id]
}
// Getting VSI image
data "ibm_is_image" "centos" {
name = "ibm-centos-7-9-minimal-amd64-10"
}
//
#Code for VPC
//
resource "ibm_is_vpc" "vpc" {
name = "vpctestlavisha1"
}
resource "ibm_is_vpc_address_prefix" "vpc_prefix" {
name = "test-address-prefix"
zone = "br-sao-1"
vpc = ibm_is_vpc.vpc.id
cidr = "192.168.10.0/24"
}
resource "ibm_is_vpc_address_prefix" "vpc_prefix2" {
name = "test-address-prefix2"
zone = "br-sao-1"
vpc = ibm_is_vpc.vpc.id
cidr = "192.168.30.0/24"
}
// Creating Subnet resource
resource "ibm_is_subnet" "subnet1" {
depends_on = [
ibm_is_vpc_address_prefix.vpc_prefix
]
ipv4_cidr_block = "192.168.10.0/24"
name = "subnettest1"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
}
resource "ibm_is_subnet" "subnet2" {
depends_on = [
ibm_is_vpc_address_prefix.vpc_prefix2
]
ipv4_cidr_block = "192.168.30.0/24"
name = "subnettest2"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
}
// Defining Security Group Rules
resource "ibm_is_security_group" "sg1" {
name = "sgtestgroup1"
vpc = ibm_is_vpc.vpc.id
}
resource "ibm_is_security_group_rule" "sgrule1" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
icmp {
}
}
resource "ibm_is_security_group_rule" "sgrule4" {
group = ibm_is_security_group.sg1.id
direction = "inbound"
remote = "192.168.30.0/24"
tcp {
port_min = 22
port_max = 22
}
}
resource "ibm_is_security_group_rule" "sgrule5" {
group = ibm_is_security_group.sg1.id
direction = "outbound"
remote = "0.0.0.0/0"
}
// Creating VSI
resource "ibm_is_instance" "vsi1" {
name = "vsitest1"
vpc = ibm_is_vpc.vpc.id
zone = "br-sao-1"
keys = [ibm_is_ssh_key.ssh-key.id]
image = data.ibm_is_image.centos.id
profile = "cx2-2x4"
primary_network_interface {
subnet = ibm_is_subnet.subnet1.id
security_groups = [ibm_is_security_group.sg1.id]
}
}
// Assigning Floating-IP
resource "ibm_is_floating_ip" "fip1" {
name = "fiptest1"
target = ibm_is_instance.vsi1.primary_network_interface[0].id
}
// Getting ssh command to login VSI
output "sshcommand" {
value = "ssh root@${ibm_is_floating_ip.fip1.address}"
}
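The ibm_is_lb resource above creates only the load balancer itself. To actually serve traffic, you would typically also add a pool, a pool member, and a listener as separate resources. A hedged sketch along these lines (names, ports, and the HTTP protocol are illustrative; adjust them to your application):
resource "ibm_is_lb_pool" "alb_pool" {
name = "alb-test-pool"
lb = ibm_is_lb.albtest.id
algorithm = "round_robin"
protocol = "http"
health_delay = 60
health_retries = 5
health_timeout = 30
health_type = "tcp"
}
resource "ibm_is_lb_pool_member" "alb_member" {
lb = ibm_is_lb.albtest.id
pool = element(split("/", ibm_is_lb_pool.alb_pool.id), 1)
port = 80
target_address = ibm_is_instance.vsi1.primary_network_interface[0].primary_ipv4_address
}
resource "ibm_is_lb_listener" "alb_listener" {
lb = ibm_is_lb.albtest.id
default_pool = element(split("/", ibm_is_lb_pool.alb_pool.id), 1)
protocol = "http"
port = 80
}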
Example 6:
- Create a VPC along with a VSI.
- The VSI will have a floating IP (FIP) assigned to it.
- A security group (SG) is created for the VSI.
- A network load balancer (NLB) is created for the VPC.
"vpc.tf" file will be as below:
// Create a VPC
resource "ibm_is_vpc" "vpc" {
name = "my-vpc"
}
// Create a Subnet
resource "ibm_is_subnet" "subnet" {
name = "my-subnet"
vpc = ibm_is_vpc.vpc.id
zone = "us-south-1"
ipv4_cidr_block = "10.10.10.0/24"
}
// Create a Security Group
resource "ibm_is_security_group" "security_group" {
name = "my-security-group"
vpc = ibm_is_vpc.vpc.id
}
// Allow all inbound traffic for security group
resource "ibm_is_security_group_rule" "allow_inbound" {
direction = "inbound"
remote = "0.0.0.0/0"
ip_version = "ipv4"
action = "allow"
protocol = "all"
security_group_id = ibm_is_security_group.security_group.id
}
// Allow all outbound traffic for security group
resource "ibm_is_security_group_rule" "allow_outbound" {
direction = "outbound"
remote = "0.0.0.0/0"
ip_version = "ipv4"
action = "allow"
protocol = "all"
security_group_id = ibm_is_security_group.security_group.id
}
// Look up the image and the SSH key by name (the image and keys arguments expect IDs, not names)
data "ibm_is_image" "ubuntu" {
name = "ibm-ubuntu-20-04-1-minimal-amd64-3"
}
data "ibm_is_ssh_key" "ssh_key" {
name = "<your-ssh-key-name>"
}
// Create a VSI
resource "ibm_is_instance" "vsi" {
name = "my-vsi"
vpc = ibm_is_vpc.vpc.id
profile = "bx2-2x8"
zone = "us-south-1"
image = data.ibm_is_image.ubuntu.id
primary_network_interface {
subnet = ibm_is_subnet.subnet.id
security_groups = [ibm_is_security_group.security_group.id]
}
keys = [data.ibm_is_ssh_key.ssh_key.id]
}
// Assign a Floating IP to the VSI
resource "ibm_is_floating_ip" "fip" {
name = "my-fip"
zone = "us-south-1"
target = ibm_is_instance.vsi.primary_network_interface[0].id
}
// Create a Network Load Balancer (an NLB is selected with the "network-fixed" profile)
resource "ibm_is_lb" "nlb" {
name = "my-nlb"
subnets = [ibm_is_subnet.subnet.id]
profile = "network-fixed"
}
// Listener, pool, and pool member are separate resources for ibm_is_lb
resource "ibm_is_lb_pool" "nlb_pool" {
name = "my-nlb-pool"
lb = ibm_is_lb.nlb.id
algorithm = "round_robin"
protocol = "tcp"
health_delay = 60
health_retries = 5
health_timeout = 30
health_type = "tcp"
}
resource "ibm_is_lb_pool_member" "nlb_member" {
lb = ibm_is_lb.nlb.id
pool = element(split("/", ibm_is_lb_pool.nlb_pool.id), 1)
port = 80
target_id = ibm_is_instance.vsi.id
}
resource "ibm_is_lb_listener" "nlb_listener" {
lb = ibm_is_lb.nlb.id
default_pool = element(split("/", ibm_is_lb_pool.nlb_pool.id), 1)
protocol = "tcp"
port = 80
}
In this blog, we’ve explored how Terraform simplifies Infrastructure as Code (IaC), providing a flexible, declarative approach to managing cloud infrastructure.
We've covered the basics of Terraform’s functionality, from its core components to its multi-cloud capabilities. With a specific focus on IBM Cloud, we’ve shown how Terraform can be used to efficiently automate the provisioning and management of resources in this environment, helping you streamline cloud operations.
The sample use cases for IBM Cloud demonstrate Terraform’s real-world applicability, showing how it integrates with services such as IBM Cloud Virtual Servers, load balancers, Transit Gateways, and VPNs.
Now that you've been introduced to both Terraform and its practical applications in IBM Cloud, you're ready to experiment and build your own infrastructure. By leveraging Terraform, you can manage your resources with confidence, knowing that your infrastructure is easily reproducible and adaptable to future needs.
So… happy coding! If you’re hungry for more examples or have any questions, drop a comment below — I’d love to hear from you.
Thanks for stopping by, and happy Terraforming!