https://www.digitalocean.com/community/tutorials/how-to-structure-a-terraform-project
Apply changes without interactive approval:
terraform apply -auto-approve
Check whether the plan matches expectations and store it in the output file plan-out:
terraform plan -out "plan-out"
Once the plan is verified, apply it to create the desired infrastructure components:
terraform apply "plan-out"
Filter terraform show output with grep:
terraform show | grep "ipv4"
Destroy a module by name:
If you have a custom module and want to destroy only its resources, target it:
terraform destroy \
-var "do_token=${DO_PAT}" \
-var "mongo_uri=${MONGO_URI}" \
-target="module.backend"
Move a resource to a new address in state:
terraform state mv 'module.my_module.some_resource.resource_name' 'module.other_module.some_resource.resource_name'
Terraform Visualizations
terraform graph | dot -Tsvg > graph.svg
terraform graph -type=plan | dot -Tpng > graph.png
or
terraform graph -type=plan | dot -Tpng -o graph.png
https://learn.hashicorp.com/tutorials/terraform/data-sources
data "aws_region" "current" { }
output "aws_region" {
description = "AWS region"
value = data.aws_region.current.name
}
Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory.
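For example, a minimal call to a local module might look like this (the path and input are illustrative; module.backend is the name referenced in the destroy example above):
module "backend" {
  source = "./modules/backend" # directory containing the module's .tf / .tf.json files

  mongo_uri = var.mongo_uri
}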
Resource provisioners allow you to execute scripts on local or remote machines as part of resource creation or destruction.
They are used for various tasks, such as bootstrapping, copying files, hacking into the mainframe, etc.
You can attach a resource provisioner to any resource, but most of the time it doesn’t make sense to do so, which is why provisioners are most commonly seen on null resources.
Null resources are basically resources that don’t do anything, so having a provisioner on one is as close as you can get to having a standalone provisioner.
NOTE:
Most people who use provisioners exclusively use creation-time provisioners: for example, to run a script or kick off some miscellaneous automation task. The following example is unusual because it uses both:
resource "google_project_service" "enabled_service" {
for_each = toset(local.services)
project = var.project_id
service = each.key
provisioner "local-exec" {
command = "sleep 60"
}
provisioner "local-exec" {
when = destroy
command = "sleep 15"
}
}
https://github.com/hashicorp/terraform-provider-aws/issues/11409#issuecomment-568254554
resource "aws_ecs_cluster" "cluster" {
name = local.cluster_name
capacity_providers = [aws_ecs_capacity_provider.cp.name]
default_capacity_provider_strategy {
capacity_provider = aws_ecs_capacity_provider.cp.name
}
//We need to terminate all instances before the cluster can be destroyed.
//(Terraform would handle this automatically if the autoscaling group depended
//on the cluster, but we need to have the dependency in the reverse
//direction due to the capacity_providers field above).
provisioner "local-exec" {
when = destroy
command = <<CMD
//Get the list of capacity providers associated with this cluster
CAP_PROVS="$(aws ecs describe-clusters --clusters "${self.arn}" \
--query 'clusters[*].capacityProviders[*]' --output text)"
//Now get the list of autoscaling groups from those capacity providers
ASG_ARNS="$(aws ecs describe-capacity-providers \
--capacity-providers "$CAP_PROVS" \
--query 'capacityProviders[*].autoScalingGroupProvider.autoScalingGroupArn' \
--output text)"
if [ -n "$ASG_ARNS" ] && [ "$ASG_ARNS" != "None" ]
then
for ASG_ARN in $ASG_ARNS
do
ASG_NAME=$(echo $ASG_ARN | cut -d/ -f2-)
// Set the autoscaling group size to zero
aws autoscaling update-auto-scaling-group \
--auto-scaling-group-name "$ASG_NAME" \
--min-size 0 --max-size 0 --desired-capacity 0
// Remove scale-in protection from all instances in the asg
INSTANCES="$(aws autoscaling describe-auto-scaling-groups \
--auto-scaling-group-names "$ASG_NAME" \
--query 'AutoScalingGroups[*].Instances[*].InstanceId' \
--output text)"
aws autoscaling set-instance-protection --instance-ids $INSTANCES \
--auto-scaling-group-name "$ASG_NAME" \
--no-protected-from-scale-in
done
fi
CMD
}
}
This creation-time provisioner invokes the command sleep 60 to wait for 60 seconds after Create() has completed but before the resource is marked as “created” by Terraform (see figure 7.8).
Likewise, the destruction-time provisioner waits for 15 seconds before Delete() is called (see figure 7.9).
Both of these pauses (determined experimentally through trial and error) are essential to avoid potential race conditions.
Null resource with a local-exec provisioner:
If both a creation-time and a destruction-time provisioner are attached to the same null_resource, you can cobble together a sort of custom Terraform resource.
Null resources don’t do anything on their own.
Therefore, if you have a null resource with a creation-time provisioner that calls a create script and a destruction-time provisioner that calls a cleanup script, it wouldn't behave all that differently from a conventional Terraform resource.
The following example code creates a custom resource that prints “Hello World!” on resource creation and “Goodbye cruel world!” on resource deletion.
I’ve spiced it up a bit by using cowsay, a CLI tool that prints a picture of an ASCII cow saying the message:
resource "null_resource" "cowsay" {
provisioner "local-exec" {
command = "cowsay Hello World!"
}
provisioner "local-exec" {
when = destroy
command = "cowsay -d Goodbye cruel world!"
}
}
Build and push a Docker image to ECR with a bash script.
~/project/push.sh
#!/bin/bash -x
#
# Builds a Docker image and pushes it to an AWS ECR repository
# name of the file - push.sh
set -e

source_path="$1"     # 1st argument from command line
repository_url="$2"  # 2nd argument from command line
tag="${3:-latest}"   # 3rd argument if present, otherwise "latest"
userid="$4"

# splits string using '.' and picks 4th item (the region)
region="$(echo "$repository_url" | cut -d. -f4)"

# splits string using '/' and picks 2nd item (the image name)
image_name="$(echo "$repository_url" | cut -d/ -f2)"

# builds docker image
(cd "$source_path" && DOCKER_BUILDKIT=1 docker build -t "$image_name" .)

# login to ecr (use the parsed region instead of a hard-coded one)
aws --region "$region" ecr get-login-password | docker login --username AWS --password-stdin "${userid}.dkr.ecr.${region}.amazonaws.com"
#$(aws ecr get-login-password --region "$region")

# tag image
docker tag "$image_name" "$repository_url":"$tag"

# push image
docker push "$repository_url":"$tag"
~/main.tf
resource "null_resource" "push" {
provisioner "local-exec" {
command = "${coalesce("push.sh", "${path.module}/push.sh")} ${var.source_path} ${aws_ecr_repository.repo.repository_url} ${var.tag} ${data.aws_caller_identity.current.account_id}"
interpreter = ["bash", "-c"]
}
}
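The command above references data.aws_caller_identity.current for the account id; if it is not already declared elsewhere, the data source is simply:
data "aws_caller_identity" "current" {}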
Environment variables can be used to set variables.
The environment variables must be in the format TF_VAR_name and this will be checked last for a value. For example:
export TF_VAR_region=us-west-1
export TF_VAR_ami=ami-049d8641
export TF_VAR_alist='[1,2,3]'
export TF_VAR_amap='{ foo = "bar", baz = "qux" }'
types
arguments
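A variable block ties these together: a type constraint plus optional arguments such as description, default, sensitive, and validation. A minimal sketch (names and values are illustrative):
variable "image_id" {
  type        = string
  description = "The id of the machine image (AMI) to use for the server."

  validation {
    condition     = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
    error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
  }
}

variable "availability_zone_names" {
  type    = list(string)
  default = ["us-west-1a"]
}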
variables in the cli
terraform apply -var="image_id=ami-abc123"
terraform apply -var='image_id_list=["ami-abc123","ami-def456"]' -var="instance_type=t2.micro"
terraform apply -var='image_id_map={"us-east-1":"ami-abc123","us-east-2":"ami-def456"}'
variables in a .tfvars file
terraform apply -var-file="testing.tfvars"
variables as env vars
export TF_VAR_image_id=ami-abc123
terraform plan
# or
export TF_VAR_availability_zone_names='["us-west-1b","us-west-1d"]'
# or
export TF_VAR_image_id='ami-abc123'
variable definition precedence
The above mechanisms for setting variables can be used together in any combination.
Terraform loads variables in the following order, with later sources taking precedence over earlier ones:
Environment variables
The terraform.tfvars file, if present
The terraform.tfvars.json file, if present
Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames
Any -var and -var-file options on the command line, in the order they are provided
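For example, given the declaration below, a value supplied with -var on the command line overrides one from terraform.tfvars, which in turn overrides TF_VAR_region from the environment:
variable "region" {
  type    = string
  default = "us-east-1"
}
# terraform.tfvars:  region = "us-west-1"
# environment:       export TF_VAR_region=us-west-2
# command line:      terraform apply -var="region=eu-west-1"
# effective value:   "eu-west-1" (command-line flags have the highest precedence)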
Terraform evaluates all of the configuration files in a module, effectively treating the entire module as a single document.
Separating various blocks into different files is purely for the convenience of readers and maintainers, and has no effect on the module's behavior.
tl;dr:
Given...
resource "employee" "mike" {
name = "Michael"
}
We want to change the resource name from "mike" to "michael".
First, we update the resource name.
resource "employee" "michael" {
name = "Michael"
}
Next, we update the state.
terraform state mv employee.mike employee.michael
Given...
module "the_employer" {
name = "Ferrari"
}
We want to change the module name from "the_employer" to "employer".
First, we update the module name.
module "employer" {
name = "Ferrari"
}
Next, we update the state.
terraform state mv module.the_employer module.employer
Given...
locals { names = [ "Alain", "Jim", "Lewis" ] }
resource "employee" "employee_1" {
name = local.names[0]
}resource "employee" "employee_2" {
name = local.names[1]
}
resource "employee" "employee_3" {
name = local.names[2]
}
We want to manage these similar objects via a single block.
First, we create a new resource block that leverages the count meta-argument.
locals {
  names = [
    "Alain",
    "Jim",
    "Lewis"
  ]
}

resource "employee" "employees" {
  count = length(local.names)
  name  = local.names[count.index]
}
Next, we update the state.
terraform state mv employee.employee_1 employee.employees[0]
terraform state mv employee.employee_2 employee.employees[1]
terraform state mv employee.employee_3 employee.employees[2]
provider "docker" {
host = "unix:///var/run/docker.sock"
registry_auth {
address = local.docker_address
config_file = pathexpand("~/.docker/config.json")
}
}
// build image locally
resource "docker_image" "build_image" {
  name = "${local.docker_username}/${local.image_name}"

  build {
    path         = local.dockerfile_path
    force_remove = true
    no_cache     = true
    label = {
      author : local.docker_username
    }
  }

  provisioner "local-exec" {
    command = "docker push ${local.docker_username}/${local.image_name}:${local.image_version}"
  }
}
//p.214
resource "null_resource" "docker_build" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "docker build -t ${local.image_tag} --file ../${local.service_name}/Dockerfile-prod ../${local.service_name}"
}
}
resource "null_resource" "docker_login" {
depends_on = [ null_resource.docker_build ]
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "docker login ${local.login_server} --username ${local.username} --password ${local.password}"
}
}
resource "null_resource" "docker_push" {
depends_on = [ null_resource.docker_login ]
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "docker push ${local.image_tag}"
}
}
Summary
Services
service_type (in terraform: type = var.service_type)
ClusterIP: Exposes the Service on a cluster-internal IP. Only reachable from within the cluster.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort).
ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
image_pull_secrets - (Optional) DockerConfig type secrets are honored.
port
target_port
session_affinity - (Optional) Used to maintain session affinity. Supports ClientIP and None. Defaults to None.
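A minimal sketch of wiring the Service type through a variable (the variable, selector, and ports are illustrative):
variable "service_type" {
  type    = string
  default = "ClusterIP" # ClusterIP | NodePort | LoadBalancer | ExternalName
}

resource "kubernetes_service" "example" {
  metadata { name = "example" }
  spec {
    selector = { pod = "example" }
    port {
      port        = 80
      target_port = 8080
    }
    session_affinity = "None"
    type             = var.service_type
  }
}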
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}

provider "kubernetes" {
  host                   = digitalocean_kubernetes_cluster.primary.endpoint
  token                  = digitalocean_kubernetes_cluster.primary.kube_config[0].token
  cluster_ca_certificate = base64decode(digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate)
}
locals {
  // # image_tag = "${var.login_server}/${var.service_name}:${var.app_version}"
  dockercreds = {
    auths = {
      "${var.login_server}" = {
        auth = base64encode("${var.username}:${var.password}")
      }
    }
  }
}

// # https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret
// # https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/secret
resource "kubernetes_secret" "docker_credentials" {
  metadata { name = "docker-credentials" }
  data = { ".dockerconfigjson" = jsonencode(local.dockercreds) }
  type = "kubernetes.io/dockerconfigjson"
}
data "digitalocean_kubernetes_versions" "current" {
version_prefix = var.cluster_version
}
// # https://docs.digitalocean.com/reference/terraform/reference/resources/kubernetes_cluster/
resource "digitalocean_kubernetes_cluster" "primary" {
name = var.cluster_name
region = var.cluster_region
# auto_upgrade = true
version = data.digitalocean_kubernetes_versions.current.latest_version
node_pool {
name = "default"
size = var.worker_size
node_count = var.worker_count
}
}
// Deploys a MongoDB database to the Kubernetes cluster.
resource "kubernetes_deployment" "database" {
  metadata {
    name   = "database"
    labels = { pod = "database" }
  }
  spec {
    replicas = 1
    selector { match_labels = { pod = "database" } }
    template {
      metadata { labels = { pod = "database" } }
      spec {
        container {
          image = "mongo:4.2.8"
          name  = "database"
          port { container_port = 27017 }
        }
      }
    }
  }
}

resource "kubernetes_service" "database" {
  metadata { name = "database" }
  spec {
    selector = { pod = kubernetes_deployment.database.metadata[0].labels.pod }
    port { port = 27017 }
    type = "LoadBalancer"
  }
}
// ########################
// ########################
resource "kubernetes_deployment" "api" {
  metadata {
    name   = "api"
    labels = { pod = "api" }
  }
  spec {
    replicas = 1
    selector {
      match_labels = { pod = "api" }
    }
    template {
      metadata {
        labels = { pod = "api" }
      }
      spec {
        container {
          image = "<username>/<image_name>:latest"
          name  = "api"
          env {
            name  = "MONGO_URI"
            value = "uri conn string"
          }
          env {
            name  = "MONGO_DB_NAME"
            value = "db_name"
          }
          port { container_port = 5000 }
        }
        // # provides dockerhub login creds
        image_pull_secrets {
          name = kubernetes_secret.docker_credentials.metadata[0].name
        }
      }
    }
  }
}

resource "kubernetes_service" "api" {
  metadata { name = "api" }
  spec {
    selector = { pod = kubernetes_deployment.api.metadata[0].labels.pod }
    port {
      // # these must be the same as container_port in kubernetes_deployment
      port        = 5000
      target_port = 5000
    }
    session_affinity = "ClientIP"
    type             = "LoadBalancer"
  }
}
variable "app_version" {
description = "image tag"
default = "latest"
}
variable "cluster_version" {
default = "1.23.14"
}
variable "worker_count" {
default = 1
}
variable "worker_size" {
description = "s-2vcpu-2gb | s-2vcpu-4gb"
default = "s-1vcpu-2gb"
}
variable "write_kubeconfig" {
type = bool
default = false
}
variable "cluster_name" {
type = string
default = "kube-cluster"
}
variable "cluster_region" {
description = "tor | nyc3 | sgp1"
type = string
default = "sgp1"
}
variable "username" {
description = "username for dockerhub or other"
default = "username"
}
variable "password" {
description = "password to login"
default = "asdasdasdad"
}
variable "login_server" {
// # login_server = azurerm_container_registry.container_registry.login_server
description = "server url of the docker repo"
default = "https://index.docker.io/v1/"
}
// # https://docs.digitalocean.com/reference/terraform/reference/resources/app/
variable "do_token" {}
variable "domain_name" {}
variable "private_key" {}
variable "mongo_uri" {
type = string
description = "mongodb connection string"
}
locals {
  // # db_creds = jsondecode(data.aws_secretsmanager_secret_version.creds.secret_string)
  // # docker_creds_arn = data.aws_secretsmanager_secret_version.creds.secret_id
  // # docker_creds_arn = "arn:aws:secretsmanager:us-east-1:14661:secret:user_dockerhub_login-ffdds"
  // # credentialsParameter = "arn:aws:secretsmanager:us-east-1:qq212:secret:DockerHth-l33Lwe"
  // # docker_username = "username_or_org_name"
  // # image_name = "terraform-testing"
  // # image_version = "latest"
  // # dockerfile_path = "."
  // # aws_region = "us-east-2"
  // # iam_role_name = "terraTestRole"
  local_folder_name    = "new_upload"
  handler_is_func_name = "lambda_handler"
  python_runtime       = "python3.8"
  zip_name             = "lambda.zip"
  s3_bucket_name       = "bucket_one"
}
Run a bash command
zip -r lambda.zip a_watch_new_upload
resource "null_resource" "zip_func" {
provisioner "local-exec" {
command = "zip -r lambda.zip my_lambda_func"
//command = "${coalesce("push.sh", "${path.module}/push.sh")} ${var.source_path} ${aws_ecr_repository.repo.repository_url} ${var.tag}"
interpreter = ["bash", "-c"]
}
}
resource "aws_lambda_function" "conversion_lambda" {
runtime = "python3.9"
filename = data.archive_file.lambda_zip.output_path
function_name = local.lambda_name
handler = "conversion_lambda_python_file.lambda_handler"
source_code_hash = data.archive_file.lambda_zip.output_base64sha256
role = aws_iam_role.lambda_role.arn
memory_size = 10240
timeout = 900
environment {
variables = {
incoming_bucket_arn = aws_s3_bucket.incoming.arn
}
}
}
data "archive_file" "zip_the_python_code" {
type = "zip"
source_dir = "${path.module}/python/"
output_path = "${path.module}/python/hello-python.zip"
}
// or
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "${path.module}/src/conversion_lambda_python_file.py"
output_file_mode = "0666"
output_path = "${path.module}/bin/conversion_lambda_python_file.zip"
}
or
data "archive_file" "lambda" {
type = "zip"
# output_file_mode = "0666"
output_path = "${path.module}/.terraform/${locals.zip_name}"
source {
filename = "app.py"
content = <<-EOF
zip -r ${locals.zip_name} a_watch_new_upload
EOF
}
}
https://registry.terraform.io/modules/terraform-aws-modules/step-functions/aws/latest
https://github.com/terraform-aws-modules/terraform-aws-step-functions
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-states.html
States:
Task:
Do some work in your state machine (a Task state)
Choice:
Make a choice between branches of execution (a Choice state)
Fail/Succeed:
Stop an execution with a failure or success (a Fail or Succeed state)
Pass:
Simply pass its input to its output or inject some fixed data (a Pass state)
Wait:
Provide a delay for a certain amount of time or until a specified time/date (a Wait state)
Parallel:
Begin parallel branches of execution (a Parallel state)
Map:
Dynamically iterate steps (a Map state)
After you have created and executed Express Workflows, and if logging is enabled, you can access information about the execution in Amazon CloudWatch Logs.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sfn_state_machine
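A minimal sketch of the Terraform resource with a single Task state (the IAM role and the Lambda reference are assumptions):
resource "aws_sfn_state_machine" "example" {
  name     = "example-state-machine"
  role_arn = aws_iam_role.sfn_role.arn # assumed role that Step Functions can assume

  definition = jsonencode({
    Comment = "Single Task state that invokes a Lambda function"
    StartAt = "DoWork"
    States = {
      DoWork = {
        Type     = "Task"
        Resource = aws_lambda_function.conversion_lambda.arn
        End      = true
      }
    }
  })
}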
Statement: a single statement or an array of individual statements
Sid
Action
Effect: Allow or Deny
Resource:
Principal
Service Principals:
"Principal": {
  "Service": [
    "ecs.amazonaws.com",
    "elasticloadbalancing.amazonaws.com"
  ]
}
All Principals
Example lambda + s3 policy
// # Lambda Function Policy
resource "aws_iam_policy" "lambda_policy" {
name = "${var.app_env}-lambda-policy"
description = "${var.app_env}-lambda-policy-description"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "${aws_s3_bucket.bucket.arn}"
},
{
"Action": [
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes"
],
"Effect": "Allow",
"Resource": "${aws_sqs_queue.queue.arn}"
},
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_sns_topic" "sns_topic_one" {
name = local.topic_one_name
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[{
"Effect": "Allow",
"Principal": { "Service": ["s3.amazonaws.com", "comprehend.amazonaws.com"] },
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:*:*:${local.topic_one_name}",
"Condition":{
"ArnLike":{"aws:SourceArn":"${aws_s3_bucket.incoming_bucket.arn}"}
}
}]
}
POLICY
}
// creates an SNS subscription which pipes messages to an SQS queue
resource "aws_sns_topic_subscription" "user_updates_sqs_target" {
  topic_arn = aws_sns_topic.sns_topic_one.arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.sqs_topic_one.arn
}

resource "aws_sqs_queue" "sqs_topic_one" {
  name                      = local.sqs_topic_name
  max_message_size          = 2048
  message_retention_seconds = 86400
  receive_wait_time_seconds = 10
}
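Note that for the subscription to actually deliver messages, the queue also needs a policy allowing the topic to send to it; a minimal sketch using the resources above:
resource "aws_sqs_queue_policy" "allow_sns" {
  queue_url = aws_sqs_queue.sqs_topic_one.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "sns.amazonaws.com" }
      Action    = "sqs:SendMessage"
      Resource  = aws_sqs_queue.sqs_topic_one.arn
      Condition = {
        ArnEquals = { "aws:SourceArn" = aws_sns_topic.sns_topic_one.arn }
      }
    }]
  })
}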
aws_s3_bucket_notification
AWS DOCS:
S3 Event Notifications
https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html
https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_EventBus.html
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_bus
https://www.beabetterdev.com/2021/09/10/aws-sqs-vs-sns-vs-eventbridge/
Ability to create scheduled events that periodically ‘poke’ your event bus to broadcast a message.
Can use a cron expression to set a periodic event that fires at a certain time.
Scheduled events can be used for periodic maintenance tasks and a wide variety of other use cases.
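A minimal sketch of such a scheduled rule (the name, schedule, and target are illustrative; invoking a Lambda also requires an aws_lambda_permission for events.amazonaws.com):
resource "aws_cloudwatch_event_rule" "nightly" {
  name                = "nightly-maintenance"
  schedule_expression = "cron(0 3 * * ? *)" # every day at 03:00 UTC
}

resource "aws_cloudwatch_event_target" "nightly_lambda" {
  rule = aws_cloudwatch_event_rule.nightly.name
  arn  = aws_lambda_function.conversion_lambda.arn # assumed target
}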
Use Eventbridge when:
https://github.com/hashicorp/terraform-provider-aws/issues/22013#issuecomment-996796168
resource "null_resource" "bucket_notification_mybucket" {
triggers = {
bucket = "mybucket"
notification-configuration=<<-EOF
{
"LambdaFunctionConfigurations": [
{
"Id": "mybucket01",
"LambdaFunctionArn": "somearn",
"Events": [
"s3:ObjectCreated:*"
],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "Suffix",
"Value": "foo"
}
]
}
}
}
],
"EventBridgeConfiguration": {}
}
EOF
}
provisioner "local-exec" {
interpreter = ["bash", "-c"]
command = <<EOF
aws s3api put-bucket-notification-configuration --bucket ${self.triggers.bucket} --notification-configuration '${self.triggers.notification-configuration}'
EOF
}
provisioner "local-exec" {
interpreter = ["bash", "-c"]
when = destroy
command = <<EOF
aws s3api put-bucket-notification-configuration --bucket ${self.triggers.bucket} --notification-configuration '{ }'
EOF
}
}
// # TERRAFORM PROVIDER BLOCK
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.54.0"
    }
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.15.0"
    }
  }
}
locals {
  ecs_policy_arn       = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
  apprunner_policy_arn = "arn:aws:iam::aws:policy/service-role/AWSAppRunnerServicePolicyForECRAccess"

  docker_username = "docker_user_name"
  image_name      = "main-api-gateway"
  image_version   = "latest"
  dockerfile_path = "."

  iam_role_name              = "apiGatewayRole"
  ecs_service_name           = "api-gateway-service"
  ecs_cluster_name           = "api-gateway-cluster"
  ecs_task_definition_family = "testAppTaskDefinition2"
  target_group_name          = "api-gateway-target-group"
  lb_name                    = "api-gateway-lb"
  lb_container_port          = 5001 # port app is running on
  ecs_task_memory            = 512
  ecs_task_cpu               = 256

  //ecr_login_cmd = "aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 1234.dkr.ecr.us-east-2.amazonaws.com"
  ecr_login_cmd = "aws ecr get-login-password --region ${var.aws_region} | docker login --username AWS --password-stdin ${var.aws_user_id}.dkr.ecr.${var.aws_region}.amazonaws.com"
}
resource "aws_ecr_repository" "gateway_repo" {
name = "${local.image_name}"
image_tag_mutability = "MUTABLE"
}
resource "docker_image" "build_image" {
name = "${local.image_name}:${local.image_version}"//use this name if pushing to ECR
//name = "${local.docker_username}/${local.image_name}:${local.image_version}"# use this name if pushing to dockerhub
build {
path = "${local.dockerfile_path}"
force_remove = true
no_cache = true
label = {
author : "${local.docker_username}"
}
}
provisioner "local-exec" {
# use this command for pushing to ECR
command = "${local.ecr_login_cmd} && docker tag ${local.image_name}:${local.image_version} ${aws_ecr_repository.gateway_repo.repository_url} && docker push ${aws_ecr_repository.gateway_repo.repository_url}"
}
}
data "aws_iam_policy_document" "role_policy_data" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"build.apprunner.amazonaws.com",
"tasks.apprunner.amazonaws.com"
]
}
}
}
// # CREATING REQUIRED IAM ROLE AND STS ASSUME ROLE
resource "aws_iam_role" "ecsTaskExecutionRole" {
  name               = local.iam_role_name
  description        = "Allows ecs to execute external api commands on your behalf"
  assume_role_policy = data.aws_iam_policy_document.role_policy_data.json
}

resource "aws_iam_role_policy_attachment" "ecsTaskExecutionRole_policy" {
  role       = aws_iam_role.ecsTaskExecutionRole.name
  policy_arn = local.apprunner_policy_arn
}

resource "aws_apprunner_auto_scaling_configuration_version" "app_runner_config" {
  auto_scaling_configuration_name = "demo_auto_scaling"
  # auto_scaling_configuration_name = "one_autoscale"
  max_concurrency = 100
  max_size        = 5
  min_size        = 1

  tags = {
    Name = "demo_auto_scaling"
  }
}
resource "aws_apprunner_service" "api_gateway" {
service_name = "api_gateway"
source_configuration {
image_repository {
image_configuration {
port = "80"
}
# image_identifier = "1234.dkr.ecr.us-east-2.amazonaws.com/api-gateway:latest"
image_identifier = "${aws_ecr_repository.gateway_repo.repository_url}:${local.image_version}"# must have tag?
image_repository_type = "ECR"
}
authentication_configuration{
access_role_arn = aws_iam_role.ecsTaskExecutionRole.arn
}
auto_deployments_enabled = true
}
auto_scaling_configuration_arn = aws_apprunner_auto_scaling_configuration_version.app_runner_config.arn
health_check_configuration {
healthy_threshold = 1
interval = 10
path = "/"
protocol = "TCP"
timeout = 5
unhealthy_threshold = 5
}
tags = {
Name = "api_gateway"
}
}
output "app_runner_url" {
value = aws_apprunner_service.api_gateway.service_url
}
https://github.com/praveenv4/Terraform-Lab/blob/main/modules/ecr/main.tf
resource "aws_ecr_repository" "ecr" {
for_each = toset(var.ecr_name)
name = each.key
image_tag_mutability = var.image_mutability
encryption_configuration {
encryption_type = var.encrypt_type
}
image_scanning_configuration {
scan_on_push = true
}
tags = var.tags
}
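The for_each above implies variables along these lines (types inferred from usage; defaults are assumptions):
variable "ecr_name" {
  type        = list(string)
  description = "Repository names to create"
}
variable "image_mutability" {
  type    = string
  default = "MUTABLE"
}
variable "encrypt_type" {
  type    = string
  default = "AES256" # or "KMS"
}
variable "tags" {
  type    = map(string)
  default = {}
}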
Commands:
doctl account get
doctl auth list
create / switch between accounts
doctl auth init --context <NAME>
doctl auth switch --context <NAME>
List the droplet sizes available for Kubernetes node pools:
doctl compute size list
Create a droplet with an SSH key and a DNS record
data "external" "droplet_name" {
program = ["python3", "${path.module}/external/name-generator.py"]
}
data "digitalocean_ssh_key" "ssh_key" {
name = "ssh_key_name_in_acct"
}
resource "digitalocean_droplet" "web" {
image = "ubuntu-20-04-x64"
name = data.external.droplet_name.result.name
region = "fra1"
size = "s-1vcpu-1gb"
ssh_keys = [data.digitalocean_ssh_key.ssh_key.id]
connection {
# connection specifies how terraform should connect to target droplet
host = self.ipv4_address
user = "root"
type = "ssh"
private_key = file(var.private_key)
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
// # Install Apache
"apt update",
"apt -y install apache2"
]
}
}
resource "digitalocean_record" "www" {
domain = var.domain_name
type = "A"
name = "@"
value = digitalocean_droplet.web.ipv4_address
}
File upload S3 -> S3 Event Notification -> Eventbridge Rule Target -> Execute State Machine
For a practical application, you could launch a state machine that performs operations on files that you add to the bucket, such as creating thumbnails or running Amazon Rekognition analysis on image and video files.
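A hedged sketch of that wiring (bucket, state machine, and role names are assumptions; the bucket must publish notifications to EventBridge):
resource "aws_s3_bucket_notification" "incoming" {
  bucket      = aws_s3_bucket.incoming.id
  eventbridge = true
}

resource "aws_cloudwatch_event_rule" "s3_object_created" {
  name = "s3-object-created"
  event_pattern = jsonencode({
    source        = ["aws.s3"]
    "detail-type" = ["Object Created"]
    detail        = { bucket = { name = [aws_s3_bucket.incoming.bucket] } }
  })
}

resource "aws_cloudwatch_event_target" "start_state_machine" {
  rule     = aws_cloudwatch_event_rule.s3_object_created.name
  arn      = aws_sfn_state_machine.example.arn
  role_arn = aws_iam_role.events_to_sfn.arn # role allowing events.amazonaws.com to call states:StartExecution
}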
ex. 1
│ Error: Error deleting IAM User boto3-user: DeleteConflict: Cannot delete entity, must delete policies first.
│ status code: 409, request id: 1cdc5eca-2cb2-4698-869a-cb8e6ecfc949
│
│
╵
╷
│ Error: failed creating IAM User (boto3-user): EntityAlreadyExists: User with name boto3-user already exists.
│ status code: 409, request id: 9294d99e-7d99-4f33-8353-cee2c84b96ba
│
│ with aws_iam_user.boto_user,
Solution: