2023 iThome 鐵人賽

DAY 24

DevOps
Everyone Uses Terraform for IaC, So Why Not Write It Cleaner and More Readable? - Series, Part 24

Implementing Terraform Modules for Common AWS Services: EKS with Node Group


AWS EKS with Node Group Module Implementation

This post implements Terraform modules for the commonly used AWS EKS with Node Group setup, including my_eks and my_aws_load_balancer_controller, and uses my_iam to create the required IAM roles. The complete project code is shared on my GitHub.

A quick overview: the AWS Load Balancer Controller (commonly called the AWS LB Controller) is a Kubernetes controller that simplifies the management and configuration of AWS load balancers, especially within an Amazon EKS (Elastic Kubernetes Service) cluster.

It handles the following main functions:

  • Automatic load balancer provisioning: the AWS LB Controller automatically creates and configures AWS load balancers, including Application Load Balancers (ALB) and Network Load Balancers (NLB), managing their creation, deletion, and configuration for you.

  • Ingress resource support: the AWS LB Controller supports Kubernetes Ingress resources, translating the rules defined in an Ingress into AWS load balancer configuration so traffic is routed to services in the Kubernetes cluster.

  • Service resource support: besides Ingress, the AWS LB Controller also supports Kubernetes Service resources, mapping a Service to a load balancer Target Group to provide service discovery and load balancing.

  • TLS offload: the AWS LB Controller can terminate (offload) TLS/SSL at the load balancer to reduce the load on backend services. It manages certificate import and renewal and configures the SSL handshake on the load balancer.

  • Dynamic load balancing: the AWS LB Controller adjusts Target Group membership in response to Kubernetes pod scaling events (for example, the Horizontal Pod Autoscaler).

  • Logging and monitoring: the AWS LB Controller integrates with AWS CloudWatch logging and monitoring, making it easy to inspect and analyze traffic and performance data.

  • Automatic updates and maintenance: the AWS LB Controller automatically detects and applies load balancer updates and fixes, reducing operational and maintenance effort.

  • Multi-protocol support: besides HTTP/HTTPS, the AWS LB Controller also supports TCP and UDP, making it suitable for many different kinds of applications.
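As an illustration of the Service support above, annotating a LoadBalancer-type Service lets the controller provision an NLB with IP targets. This is a hedged sketch; the app name and ports are placeholders, not taken from the project:

```yaml
# Illustrative only: a Service the AWS LB Controller would back with an NLB.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80            # NLB listener port
      targetPort: 8080    # placeholder container port
```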


The following walks through implementing the Terraform modules for AWS EKS and the AWS LB Controller:

  1. First, define where the my_aws_load_balancer_controller and my_eks modules live: modules/my_aws_load_balancer_controller and modules/my_eks:
├── configs
│   ├── cloudfront
│   │   └── distributions.yaml
│   ├── cloudwatch
│   │   └── loggroups.yaml
│   ├── iam
│   │   ├── assume_role_policies
│   │   │   ├── eks-cluster.json
│   │   │   ├── eks-fargate-pod-execution-role.json
│   │   │   └── eks-node-group.json
│   │   ├── iam.yaml
│   │   ├── role_policies
│   │   │   └── eks-cluster-cloudwatch-metrics.json
│   │   └── user_policies
│   │       └── admin_access.json
│   ├── kinesis
│   │   └── streams.yaml
│   ├── kms
│   │   ├── keys.yaml
│   │   └── policies
│   │       └── my-key-policy.json
│   ├── s3
│   │   ├── policies
│   │   │   └── my-bucket.json
│   │   └── s3.yaml
│   ├── subnet
│   │   └── my-subnets.yaml
│   └── vpc
│       └── my-vpcs.yaml
├── example.tfvars
├── locals.tf
├── main.tf
├── modules
│   ├── my_aws_load_balancer_controller
│   │   ├── aws-load-balancer-controller.tf
│   │   ├── outputs.tf
│   │   ├── provider.tf
│   │   └── variables.tf
│   ├── my_cloudfront
│   ├── my_cloudwatch
│   ├── my_eips
│   ├── my_eks
│   │   ├── eks_cluster.tf
│   │   ├── eks_fargate_profile.tf
│   │   ├── eks_node_group.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── my_eventbridge
│   ├── my_iam
│   ├── my_igw
│   ├── my_instances
│   ├── my_kinesis_stream
│   ├── my_kms
│   ├── my_msk
│   ├── my_nacls
│   ├── my_route_tables
│   ├── my_s3
│   ├── my_subnets
│   └── my_vpc
├── my-ingress-controller-values.yaml
├── my-ingress-node-red.yaml
├── packer
│   └── apache-cassandra
└── variables.tf
  2. Implement the IAM roles required by EKS
  • ./configs/iam/assume_role_policies/eks-cluster.json: Assume Role for the EKS Cluster Role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "eks.amazonaws.com",
                    "eks-fargate-pods.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

  • ./configs/iam/assume_role_policies/eks-fargate-pod-execution-role.json: Assume Role for Fargate Execution Role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "eks.amazonaws.com",
                    "eks-fargate-pods.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
  • ./configs/iam/assume_role_policies/eks-node-group.json: Assume Role for EKS Node Group Role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

  • ./configs/iam/role_policies/eks-cluster-cloudwatch-metrics.json: CloudWatch Metrics Policy for EKS Role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:PutMetricData"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}

  • ./configs/iam/iam.yaml: defines the following three roles
    • EKS Role: eks-cluster
    • EKS Node Group Role: eks-node-group
    • EKS Fargate Pod Execution Role: eks-fargate-pod-execution-role
# Define Policy
policies: []

# Define user, bind user group and attach existing policy
users: []

# Define user group, attach inline and existing policies
groups: []

# Define role, attach inline and existing policies
roles:
  - name: eks-cluster
    description: "EKS cluster role"
    assume_role_policy_json_file: ./configs/iam/assume_role_policies/eks-cluster.json
    inline_policies:
      - name: AmazonEKSClusterCloudWatchMetricsPolicy
        json_file: ./configs/iam/role_policies/eks-cluster-cloudwatch-metrics.json
    policy_arns:
      - "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      - "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
      - "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
    tag_name: "EKS cluster role"
    path: "/"
  - name: eks-node-group
    description: "EKS node group"
    assume_role_policy_json_file: ./configs/iam/assume_role_policies/eks-node-group.json
    policy_arns:
      - "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      - "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      - "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
    tag_name: "EKS node group"
    path: "/"
  - name: eks-fargate-pod-execution-role
    description: "EKS fargate pod execution role"
    assume_role_policy_json_file: ./configs/iam/assume_role_policies/eks-fargate-pod-execution-role.json
    policy_arns:
      - "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
      - "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      - "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
    tag_name: "EKS fargate pod execution role"
    path: "/"

instance_profiles: []

  3. Write the my_eks module:
  • ./modules/my_eks/outputs.tf:
output "eks_cluster_id" {
  value = aws_eks_cluster.eks_cluster.id
}

output "cluster_name" {
  value = aws_eks_cluster.eks_cluster.name
}

output "oidc_url" {
  value = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
}

output "ca_certificate" {
  value = aws_eks_cluster.eks_cluster.certificate_authority[0].data
}

output "endpoint" {
  value = aws_eks_cluster.eks_cluster.endpoint
}

output "oidc_arn" {
  value = aws_iam_openid_connect_provider.oidc_provider.arn
}
  • ./modules/my_eks/provider.tf:
provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile
}
  • ./modules/my_eks/variables.tf:
variable "aws_region" {
  description = "AWS region"
  default     = "ap-northeast-1"
}

variable "aws_profile" {
  description = "AWS profile"
  default     = ""
}

variable "project_name" {
  type    = string
  description = "Project name"
  default = ""
}

variable "department_name" {
  type        = string
  description = "Department name"
  default     = "SRE"
}

variable "cluster_name" {
  type        = string
  description = "Name of the EKS Cluster"
}

variable "cluster_role_arn" {
  type        = string
  description = "Role ARN of the EKS Cluster"
}

variable "endpoint_private_access" {
  type        = bool
  default     = false
  description = "Endpoint Private Access of the EKS Cluster"
}

variable "endpoint_public_access" {
  type        = bool
  default     = true
  description = "Endpoint Public Access of the EKS Cluster"
}

variable "public_subnets" {
  type        = list(any)
  description = "List of all the Public Subnets"
}

variable "private_subnets" {
  type        = list(any)
  description = "List of all the Private Subnets"
}

variable "public_access_cidrs" {
  type        = list(any)
  description = "List of all the Public Access CIDRs"
}

variable "eks_version" {
  type        = string
  description = "Version of the EKS Cluster"
}

variable "node_groups" {
  type        = list(any)
  description = "List of all the Node Groups"
  default     = []
}

variable "fargate_profiles" {
  type        = list(any)
  description = "List of all the Fargate Profiles"
  default     = []
}

  • ./modules/my_eks/eks_cluster.tf:
resource "aws_eks_cluster" "eks_cluster" {
  name                      = var.cluster_name
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  role_arn                  = var.cluster_role_arn

  vpc_config {
    endpoint_private_access = var.endpoint_private_access
    public_access_cidrs     = var.public_access_cidrs
    subnet_ids              = concat(var.public_subnets, var.private_subnets)
  }

  version = var.eks_version

  tags = {
    Name = var.cluster_name
  }

  depends_on = [
    var.cluster_role_arn
  ]
}

data "tls_certificate" "certificate" {
  url = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "oidc_provider" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.certificate.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
}
  • ./modules/my_eks/eks_fargate_profile.tf:
resource "aws_eks_fargate_profile" "profiles" {
  for_each = { for r in var.fargate_profiles : r.name => r }

  cluster_name           = var.cluster_name
  fargate_profile_name   = each.value.name
  pod_execution_role_arn = each.value.pod_execution_role_arn
  subnet_ids             = var.private_subnets

  selector {
    namespace = each.value.namespace
    labels = each.value.labels
  }

  depends_on = [
    var.private_subnets
  ]
}
  • ./modules/my_eks/eks_node_group.tf:
resource "aws_eks_node_group" "groups" {
  for_each = { for r in var.node_groups : r.name => r }

  cluster_name    = var.cluster_name
  node_group_name = each.value.name
  node_role_arn   = each.value.node_role_arn
  subnet_ids      = var.private_subnets
  ami_type        = lookup(each.value, "ami_type", "AL2_x86_64")
  capacity_type   = each.value.capacity_type
  instance_types  = each.value.instance_types
  disk_size       = each.value.disk_size

  scaling_config {
    desired_size = each.value.desired_nodes
    max_size     = each.value.max_nodes
    min_size     = each.value.min_nodes
  }

  labels = each.value.labels
  dynamic "taint" {
    for_each = lookup(each.value, "taint", [])
    content {
      key    = taint.value.key
      value  = taint.value.value
      effect = taint.value.effect
    }
  }

  depends_on = [
    aws_eks_cluster.eks_cluster,
    var.private_subnets
  ]
}
  4. Write the my_aws_load_balancer_controller module: this module installs aws-load-balancer-controller via Helm
  • ./modules/my_aws_load_balancer_controller/outputs.tf:
output "chart" {
  value = join("", helm_release.aws_load_balancer_controller.*.chart)
}

output "repository" {
  value       = join("", helm_release.aws_load_balancer_controller.*.repository)
  description = "Repository URL where to locate the requested chart."
}

output "version" {
  value       = join("", helm_release.aws_load_balancer_controller.*.version)
  description = "Specify the exact chart version to install. If this is not specified, the latest version is installed."
}
  • ./modules/my_aws_load_balancer_controller/provider.tf: this uses two providers for the first time in the series: kubernetes and helm
data "aws_eks_cluster_auth" "eks_auth" {
  name = var.eks_cluster_name
}
 
provider "kubernetes" {
  host                   = var.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(var.eks_ca_certificate)
  token                  = data.aws_eks_cluster_auth.eks_auth.token
  #load_config_file      = false
}

provider "helm" {
  kubernetes {
    host                   = var.eks_cluster_endpoint
    token                  = data.aws_eks_cluster_auth.eks_auth.token
    cluster_ca_certificate = base64decode(var.eks_ca_certificate)
  }
}
  • ./modules/my_aws_load_balancer_controller/variables.tf:
variable "aws_region" {
  type        = string
  description = "AWS Region"
}

variable "chart_version" {
  type        = string
  description = "Chart Version of eks/aws-load-balancer-controller"
  default     = "1.6.0"
}

variable "vpc_id" {
  type        = string
  description = "EKS Cluster VPC ID"
}

variable "vpc_cidr" {
  type        = string
  description = "VPC CIDR Block"
}

variable "eks_cluster_name" {
  type        = string
  description = "Name of the EKS Cluster"
}

variable "eks_cluster_endpoint" {
  type        = string
  description = "EKS Cluster Endpoint"
}

variable "eks_oidc_url" {
  type        = string
  description = "EKS Cluster OIDC Provider URL"
}

variable "eks_ca_certificate" {
  type        = string
  description = "EKS Cluster CA Certificate"
}

  • ./modules/my_aws_load_balancer_controller/aws-load-balancer-controller.tf:
data "aws_caller_identity" "current" {}

data "http" "iam_policy" {
  url = "https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.6.0/docs/install/iam_policy.json"
  request_headers = {
    Accept = "application/json"
  }
}

resource "aws_iam_policy" "AWSLoadBalancerControllerIAMPolicy" {
  name = "AWSLoadBalancerControllerIAMPolicy"
  policy = data.http.iam_policy.response_body
}

data "aws_iam_policy_document" "elb_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(var.eks_oidc_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
    }

    principals {
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${replace(var.eks_oidc_url, "https://", "")}"]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "eks_lb_controller" {
  assume_role_policy = data.aws_iam_policy_document.elb_assume_role_policy.json
  name               = "AmazonEKSLoadBalancerControllerRole"
}

resource "aws_iam_role_policy_attachment" "ALBIngressControllerIAMPolicy" {
  policy_arn = aws_iam_policy.AWSLoadBalancerControllerIAMPolicy.arn
  role       = aws_iam_role.eks_lb_controller.name
}

resource "kubernetes_service_account" "load_balancer_Controller" {
  automount_service_account_token = true
  metadata {
    name      = "aws-load-balancer-controller"
    namespace = "kube-system"
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.eks_lb_controller.arn
    }
    labels = {
      "app.kubernetes.io/name"       = "aws-load-balancer-controller"
      "app.kubernetes.io/component"  = "controller"
      "app.kubernetes.io/managed-by" = "terraform"
    }
  }
}

resource "kubernetes_cluster_role" "load_balancer_Controller" {
  metadata {
    name = "aws-load-balancer-controller"

    labels = {
      "app.kubernetes.io/name"       = "aws-load-balancer-controller"
      "app.kubernetes.io/managed-by" = "terraform"
    }
  }

  rule {
    api_groups = ["", "extensions"]
    resources  = ["configmaps", "endpoints", "events", "ingresses", "ingresses/status", "services"]
    verbs      = ["create", "get", "list", "update", "watch", "patch"]
  }

  rule {
    api_groups = ["", "extensions"]
    resources  = ["nodes", "pods", "secrets", "services", "namespaces"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "load_balancer_Controller" {
  metadata {
    name = "aws-load-balancer-controller"

    labels = {
      "app.kubernetes.io/name"       = "aws-load-balancer-controller"
      "app.kubernetes.io/managed-by" = "terraform"
    }
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.load_balancer_Controller.metadata[0].name
  }

  subject {
    api_group = ""
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.load_balancer_Controller.metadata[0].name
    namespace = kubernetes_service_account.load_balancer_Controller.metadata[0].namespace
  }
  depends_on = [ kubernetes_cluster_role.load_balancer_Controller ]
}

resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  version    = var.chart_version
  namespace  = "kube-system"
  atomic     = true

  set {
    name  = "clusterName"
    value = var.eks_cluster_name
  }
  set {
    name  = "serviceAccount.create"
    value = "false"
  }
  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }
  set {
    name  = "region"
    value = var.aws_region
  }
  set {
    name  = "vpcId"
    value = var.vpc_id
  }
  depends_on = [kubernetes_cluster_role_binding.load_balancer_Controller]
}
  5. Write the project-level code:
  • example.tfvars:
aws_region="ap-northeast-1"
aws_profile="<YOUR_PROFILE>"
project_name="example"
department_name="SRE"
cassandra_root_password="<CASSANDRA_ROOT_PASSWORD>"
  • main.tf:
    • Define the Node Groups to create in the node_groups list
    • Define the Fargate Profiles to create in the fargate_profiles list
      • karpenter will be introduced in the next post; note that coredns and aws-load-balancer-controller can both run as Fargate pods
terraform {
  required_providers {
    aws = {
      version = "5.15.0"
    }
  }

  backend "s3" {
    bucket                  = "<YOUR_S3_BUCKET_NAME>"
    dynamodb_table          = "<YOUR_DYNAMODB_TABLE_NAME>"
    key                     = "terraform.tfstate"
    region                  = "ap-northeast-1"
    shared_credentials_file = "~/.aws/config"
    profile                 = "<YOUR_PROFILE>"
  }
}

# Other modules omitted...

# Create an AWS EKS cluster with 1 Node Group to test the module, and install aws_load_balancer_controller on the EKS cluster

# eks
module "eks" {
  aws_region       = var.aws_region
  aws_profile      = var.aws_profile
  cluster_name     = "MY-EKS-CLUSTER"
  cluster_role_arn = module.iam.iam_role_arn["eks-cluster"].arn

  endpoint_private_access = true

  public_subnets = [
    module.subnet.subnets["my-public-ap-northeast-1a"].id,
    module.subnet.subnets["my-public-ap-northeast-1c"].id,
    module.subnet.subnets["my-public-ap-northeast-1d"].id
  ]

  public_access_cidrs = local.bastion_allowed_ips

  private_subnets = [
    module.subnet.subnets["my-application-ap-northeast-1a"].id,
    module.subnet.subnets["my-application-ap-northeast-1c"].id,
    module.subnet.subnets["my-application-ap-northeast-1d"].id
  ]

  eks_version = "1.25"

  node_groups = [
    {
      name           = "ng-arm-spot"
      node_role_arn  = module.iam.iam_role_arn["eks-node-group"].arn
      ami_type       = "AL2_ARM_64"
      capacity_type  = "SPOT" # ON_DEMAND or SPOT
      instance_types = ["t4g.small"]
      disk_size      = 20
      desired_nodes  = 1
      max_nodes      = 2
      min_nodes      = 1
      labels         = {}
      taint          = [
        {
          key    = "spotInstance"
          value  = "true"
          effect = "PREFER_NO_SCHEDULE"
        }
      ]
    }
  ]

  fargate_profiles = [
    {
      name                   = "karpenter",
      namespace              = "karpenter",
      pod_execution_role_arn = module.iam.iam_role_arn["eks-fargate-pod-execution-role"].arn,
      labels                 = {}
    },
    {
      name                   = "coredns",
      namespace              = "kube-system",
      pod_execution_role_arn = module.iam.iam_role_arn["eks-fargate-pod-execution-role"].arn,
      labels                 = {
        k8s-app = "kube-dns"
      }
    },
    {
      name                   = "aws-load-balancer-controller",
      namespace              = "kube-system",
      pod_execution_role_arn = module.iam.iam_role_arn["eks-fargate-pod-execution-role"].arn,
      labels                 = {
        "app.kubernetes.io/name" = "aws-load-balancer-controller"
      }
    }
  ]

  source = "./modules/my_eks"
}

# aws_load_balancer_controller
module "aws_load_balancer_controller" {
  aws_region            = var.aws_region
  vpc_id                = module.vpc.my_vpcs["my-vpc"].id
  vpc_cidr              = module.vpc.my_vpcs["my-vpc"].cidr_block
  eks_cluster_name      = module.eks.cluster_name
  eks_cluster_endpoint  = module.eks.endpoint
  eks_oidc_url          = module.eks.oidc_url
  eks_ca_certificate    = module.eks.ca_certificate

  source                = "./modules/my_aws_load_balancer_controller"
}
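Once this is applied, the controller watches Ingress resources in the cluster and provisions ALBs for them. The project tree above includes my-ingress-node-red.yaml; as a hedged illustration (not the actual file contents; the Service name and port are assumptions based on Node-RED's default), such an Ingress might look like:

```yaml
# Hypothetical Ingress the installed controller would reconcile into an
# internet-facing ALB. Backend Service name/port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-red
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: node-red
                port:
                  number: 1880
```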


Terraform Execution Plan

  1. In the project directory, run terraform init && terraform plan --out .plan -var-file=example.tfvars to check the result:

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # module.aws_load_balancer_controller.data.aws_iam_policy_document.elb_assume_role_policy will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "elb_assume_role_policy" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions = [
              + "sts:AssumeRoleWithWebIdentity",
            ]
          + effect  = "Allow"

          + condition {
              + test     = "StringEquals"
              + values   = [
                  + "system:serviceaccount:kube-system:aws-load-balancer-controller",
                ]
              + variable = (known after apply)
            }

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "Federated"
            }
        }
    }

  # module.aws_load_balancer_controller.aws_iam_policy.AWSLoadBalancerControllerIAMPolicy will be created
  + resource "aws_iam_policy" "AWSLoadBalancerControllerIAMPolicy" {
      + arn         = (known after apply)
      + id          = (known after apply)
      + name        = "AWSLoadBalancerControllerIAMPolicy"
      + name_prefix = (known after apply)
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = [
                          + "iam:CreateServiceLinkedRole",
                        ]
                      + Condition = {
                          + StringEquals = {
                              + "iam:AWSServiceName" = "elasticloadbalancing.amazonaws.com"
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = "*"
                    },
                  + {
                      + Action   = [
                          + "ec2:DescribeAccountAttributes",
                          + "ec2:DescribeAddresses",
                          + "ec2:DescribeAvailabilityZones",
                          + "ec2:DescribeInternetGateways",
                          + "ec2:DescribeVpcs",
                          + "ec2:DescribeVpcPeeringConnections",
                          + "ec2:DescribeSubnets",
                          + "ec2:DescribeSecurityGroups",
                          + "ec2:DescribeInstances",
                          + "ec2:DescribeNetworkInterfaces",
                          + "ec2:DescribeTags",
                          + "ec2:GetCoipPoolUsage",
                          + "ec2:DescribeCoipPools",
                          + "elasticloadbalancing:DescribeLoadBalancers",
                          + "elasticloadbalancing:DescribeLoadBalancerAttributes",
                          + "elasticloadbalancing:DescribeListeners",
                          + "elasticloadbalancing:DescribeListenerCertificates",
                          + "elasticloadbalancing:DescribeSSLPolicies",
                          + "elasticloadbalancing:DescribeRules",
                          + "elasticloadbalancing:DescribeTargetGroups",
                          + "elasticloadbalancing:DescribeTargetGroupAttributes",
                          + "elasticloadbalancing:DescribeTargetHealth",
                          + "elasticloadbalancing:DescribeTags",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                  + {
                      + Action   = [
                          + "cognito-idp:DescribeUserPoolClient",
                          + "acm:ListCertificates",
                          + "acm:DescribeCertificate",
                          + "iam:ListServerCertificates",
                          + "iam:GetServerCertificate",
                          + "waf-regional:GetWebACL",
                          + "waf-regional:GetWebACLForResource",
                          + "waf-regional:AssociateWebACL",
                          + "waf-regional:DisassociateWebACL",
                          + "wafv2:GetWebACL",
                          + "wafv2:GetWebACLForResource",
                          + "wafv2:AssociateWebACL",
                          + "wafv2:DisassociateWebACL",
                          + "shield:GetSubscriptionState",
                          + "shield:DescribeProtection",
                          + "shield:CreateProtection",
                          + "shield:DeleteProtection",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                  + {
                      + Action   = [
                          + "ec2:AuthorizeSecurityGroupIngress",
                          + "ec2:RevokeSecurityGroupIngress",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                  + {
                      + Action   = [
                          + "ec2:CreateSecurityGroup",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                  + {
                      + Action    = [
                          + "ec2:CreateTags",
                        ]
                      + Condition = {
                          + Null         = {
                              + "aws:RequestTag/elbv2.k8s.aws/cluster" = "false"
                            }
                          + StringEquals = {
                              + "ec2:CreateAction" = "CreateSecurityGroup"
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = "arn:aws:ec2:*:*:security-group/*"
                    },
                  + {
                      + Action    = [
                          + "ec2:CreateTags",
                          + "ec2:DeleteTags",
                        ]
                      + Condition = {
                          + Null = {
                              + "aws:RequestTag/elbv2.k8s.aws/cluster"  = "true"
                              + "aws:ResourceTag/elbv2.k8s.aws/cluster" = "false"
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = "arn:aws:ec2:*:*:security-group/*"
                    },
                  + {
                      + Action    = [
                          + "ec2:AuthorizeSecurityGroupIngress",
                          + "ec2:RevokeSecurityGroupIngress",
                          + "ec2:DeleteSecurityGroup",
                        ]
                      + Condition = {
                          + Null = {
                              + "aws:ResourceTag/elbv2.k8s.aws/cluster" = "false"
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = "*"
                    },
                  + {
                      + Action    = [
                          + "elasticloadbalancing:CreateLoadBalancer",
                          + "elasticloadbalancing:CreateTargetGroup",
                        ]
                      + Condition = {
                          + Null = {
                              + "aws:RequestTag/elbv2.k8s.aws/cluster" = "false"
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = "*"
                    },
                  + {
                      + Action   = [
                          + "elasticloadbalancing:CreateListener",
                          + "elasticloadbalancing:DeleteListener",
                          + "elasticloadbalancing:CreateRule",
                          + "elasticloadbalancing:DeleteRule",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                  + {
                      + Action    = [
                          + "elasticloadbalancing:AddTags",
                          + "elasticloadbalancing:RemoveTags",
                        ]
                      + Condition = {
                          + Null = {
                              + "aws:RequestTag/elbv2.k8s.aws/cluster"  = "true"
                              + "aws:ResourceTag/elbv2.k8s.aws/cluster" = "false"
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = [
                          + "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
                          + "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
                          + "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*",
                        ]
                    },
                  + {
                      + Action   = [
                          + "elasticloadbalancing:AddTags",
                          + "elasticloadbalancing:RemoveTags",
                        ]
                      + Effect   = "Allow"
                      + Resource = [
                          + "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*",
                          + "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*",
                          + "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*",
                          + "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*",
                        ]
                    },
                  + {
                      + Action    = [
                          + "elasticloadbalancing:ModifyLoadBalancerAttributes",
                          + "elasticloadbalancing:SetIpAddressType",
                          + "elasticloadbalancing:SetSecurityGroups",
                          + "elasticloadbalancing:SetSubnets",
                          + "elasticloadbalancing:DeleteLoadBalancer",
                          + "elasticloadbalancing:ModifyTargetGroup",
                          + "elasticloadbalancing:ModifyTargetGroupAttributes",
                          + "elasticloadbalancing:DeleteTargetGroup",
                        ]
                      + Condition = {
                          + Null = {
                              + "aws:ResourceTag/elbv2.k8s.aws/cluster" = "false"
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = "*"
                    },
                  + {
                      + Action    = [
                          + "elasticloadbalancing:AddTags",
                        ]
                      + Condition = {
                          + Null         = {
                              + "aws:RequestTag/elbv2.k8s.aws/cluster" = "false"
                            }
                          + StringEquals = {
                              + "elasticloadbalancing:CreateAction" = [
                                  + "CreateTargetGroup",
                                  + "CreateLoadBalancer",
                                ]
                            }
                        }
                      + Effect    = "Allow"
                      + Resource  = [
                          + "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
                          + "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
                          + "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*",
                        ]
                    },
                  + {
                      + Action   = [
                          + "elasticloadbalancing:RegisterTargets",
                          + "elasticloadbalancing:DeregisterTargets",
                        ]
                      + Effect   = "Allow"
                      + Resource = "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*"
                    },
                  + {
                      + Action   = [
                          + "elasticloadbalancing:SetWebAcl",
                          + "elasticloadbalancing:ModifyListener",
                          + "elasticloadbalancing:AddListenerCertificates",
                          + "elasticloadbalancing:RemoveListenerCertificates",
                          + "elasticloadbalancing:ModifyRule",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + policy_id   = (known after apply)
      + tags_all    = (known after apply)
    }

  # module.aws_load_balancer_controller.aws_iam_role.eks_lb_controller will be created
  + resource "aws_iam_role" "eks_lb_controller" {
      + arn                   = (known after apply)
      + assume_role_policy    = (known after apply)
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "AmazonEKSLoadBalancerControllerRole"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)
    }

  # module.aws_load_balancer_controller.aws_iam_role_policy_attachment.ALBIngressControllerIAMPolicy will be created
  + resource "aws_iam_role_policy_attachment" "ALBIngressControllerIAMPolicy" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = "AmazonEKSLoadBalancerControllerRole"
    }

  # module.aws_load_balancer_controller.helm_release.aws_load_balancer_controller will be created
  + resource "helm_release" "aws_load_balancer_controller" {
      + atomic                     = true
      + chart                      = "aws-load-balancer-controller"
      + cleanup_on_fail            = false
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "aws-load-balancer-controller"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://aws.github.io/eks-charts"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "1.6.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          + name  = "clusterName"
          + value = "COSMO-EKS-CLUSTER"
        }
      + set {
          + name  = "region"
          + value = "ap-northeast-1"
        }
      + set {
          + name  = "serviceAccount.create"
          + value = "false"
        }
      + set {
          + name  = "serviceAccount.name"
          + value = "aws-load-balancer-controller"
        }
      + set {
          + name  = "vpcId"
          + value = (known after apply)
        }
    }

  # module.aws_load_balancer_controller.kubernetes_cluster_role.load_balancer_Controller will be created
  + resource "kubernetes_cluster_role" "load_balancer_Controller" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "app.kubernetes.io/managed-by" = "terraform"
              + "app.kubernetes.io/name"       = "aws-load-balancer-controller"
            }
          + name             = "aws-load-balancer-controller"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + rule {
          + api_groups = [
              + "",
              + "extensions",
            ]
          + resources  = [
              + "configmaps",
              + "endpoints",
              + "events",
              + "ingresses",
              + "ingresses/status",
              + "services",
            ]
          + verbs      = [
              + "create",
              + "get",
              + "list",
              + "update",
              + "watch",
              + "patch",
            ]
        }
      + rule {
          + api_groups = [
              + "",
              + "extensions",
            ]
          + resources  = [
              + "nodes",
              + "pods",
              + "secrets",
              + "services",
              + "namespaces",
            ]
          + verbs      = [
              + "get",
              + "list",
              + "watch",
            ]
        }
    }

  # module.aws_load_balancer_controller.kubernetes_cluster_role_binding.load_balancer_Controller will be created
  + resource "kubernetes_cluster_role_binding" "load_balancer_Controller" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "app.kubernetes.io/managed-by" = "terraform"
              + "app.kubernetes.io/name"       = "aws-load-balancer-controller"
            }
          + name             = "aws-load-balancer-controller"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + role_ref {
          + api_group = "rbac.authorization.k8s.io"
          + kind      = "ClusterRole"
          + name      = "aws-load-balancer-controller"
        }

      + subject {
          + api_group = (known after apply)
          + kind      = "ServiceAccount"
          + name      = "aws-load-balancer-controller"
          + namespace = "kube-system"
        }
    }

  # module.aws_load_balancer_controller.kubernetes_service_account.load_balancer_Controller will be created
  + resource "kubernetes_service_account" "load_balancer_Controller" {
      + automount_service_account_token = true
      + default_secret_name             = (known after apply)
      + id                              = (known after apply)

      + metadata {
          + annotations      = (known after apply)
          + generation       = (known after apply)
          + labels           = {
              + "app.kubernetes.io/component"  = "controller"
              + "app.kubernetes.io/managed-by" = "terraform"
              + "app.kubernetes.io/name"       = "aws-load-balancer-controller"
            }
          + name             = "aws-load-balancer-controller"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # module.eks.data.tls_certificate.certificate will be read during apply
  # (config refers to values not yet known)
 <= data "tls_certificate" "certificate" {
      + certificates = (known after apply)
      + id           = (known after apply)
      + url          = (known after apply)
    }

  # module.eks.aws_eks_cluster.eks_cluster will be created
  + resource "aws_eks_cluster" "eks_cluster" {
      + arn                       = (known after apply)
      + certificate_authority     = (known after apply)
      + cluster_id                = (known after apply)
      + created_at                = (known after apply)
      + enabled_cluster_log_types = [
          + "api",
          + "audit",
          + "authenticator",
          + "controllerManager",
          + "scheduler",
        ]
      + endpoint                  = (known after apply)
      + id                        = (known after apply)
      + identity                  = (known after apply)
      + name                      = "COSMO-EKS-CLUSTER"
      + platform_version          = (known after apply)
      + role_arn                  = (known after apply)
      + status                    = (known after apply)
      + tags                      = {
          + "Name" = "COSMO-EKS-CLUSTER"
        }
      + tags_all                  = {
          + "Name" = "COSMO-EKS-CLUSTER"
        }
      + version                   = "1.27"

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = true
          + endpoint_public_access    = true
          + public_access_cidrs       = [
              + "118.163.66.16/28",
              + "60.250.143.40/29",
            ]
          + subnet_ids                = (known after apply)
          + vpc_id                    = (known after apply)
        }
    }

  # module.eks.aws_eks_node_group.groups["ng-arm-spot"] will be created
  + resource "aws_eks_node_group" "groups" {
      + ami_type               = "AL2_ARM_64"
      + arn                    = (known after apply)
      + capacity_type          = "SPOT"
      + cluster_name           = "COSMO-EKS-CLUSTER"
      + disk_size              = 20
      + id                     = (known after apply)
      + instance_types         = [
          + "t4g.small",
        ]
      + node_group_name        = "ng-arm-spot"
      + node_group_name_prefix = (known after apply)
      + node_role_arn          = (known after apply)
      + release_version        = (known after apply)
      + resources              = (known after apply)
      + status                 = (known after apply)
      + subnet_ids             = (known after apply)
      + tags_all               = (known after apply)
      + version                = (known after apply)

      + scaling_config {
          + desired_size = 1
          + max_size     = 2
          + min_size     = 1
        }

      + taint {
          + effect = "PREFER_NO_SCHEDULE"
          + key    = "spotInstance"
          + value  = "true"
        }
    }

  # module.eks.aws_iam_openid_connect_provider.oidc_provider will be created
  + resource "aws_iam_openid_connect_provider" "oidc_provider" {
      + arn             = (known after apply)
      + client_id_list  = [
          + "sts.amazonaws.com",
        ]
      + id              = (known after apply)
      + tags_all        = (known after apply)
      + thumbprint_list = (known after apply)
      + url             = (known after apply)
    }

  # module.iam.aws_iam_role.roles["eks-cluster"] will be created
  + resource "aws_iam_role" "roles" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = [
                              + "eks.amazonaws.com",
                              + "eks-fargate-pods.amazonaws.com",
                            ]
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + description           = "EKS cluster role"
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-cluster"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags                  = {
          + "Department" = "SRE"
          + "Name"       = "EKS cluster role"
          + "Project"    = "cosmo"
        }
      + tags_all              = {
          + "Department" = "SRE"
          + "Name"       = "EKS cluster role"
          + "Project"    = "cosmo"
        }
      + unique_id             = (known after apply)
    }

  # module.iam.aws_iam_role.roles["eks-fargate-pod-execution-role"] will be created
  + resource "aws_iam_role" "roles" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = [
                              + "eks.amazonaws.com",
                              + "eks-fargate-pods.amazonaws.com",
                            ]
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + description           = "EKS fargate pod execution role"
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-fargate-pod-execution-role"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags                  = {
          + "Department" = "SRE"
          + "Name"       = "EKS fargate pod execution role"
          + "Project"    = "cosmo"
        }
      + tags_all              = {
          + "Department" = "SRE"
          + "Name"       = "EKS fargate pod execution role"
          + "Project"    = "cosmo"
        }
      + unique_id             = (known after apply)
    }

  # module.iam.aws_iam_role.roles["eks-node-group"] will be created
  + resource "aws_iam_role" "roles" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + description           = "EKS node group"
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-node-group"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags                  = {
          + "Department" = "SRE"
          + "Name"       = "EKS node group"
          + "Project"    = "cosmo"
        }
      + tags_all              = {
          + "Department" = "SRE"
          + "Name"       = "EKS node group"
          + "Project"    = "cosmo"
        }
      + unique_id             = (known after apply)
    }

  # module.iam.aws_iam_role_policy.role_policies["eks-cluster:AmazonEKSClusterCloudWatchMetricsPolicy"] will be created
  + resource "aws_iam_role_policy" "role_policies" {
      + id     = (known after apply)
      + name   = "AmazonEKSClusterCloudWatchMetricsPolicy"
      + policy = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "cloudwatch:PutMetricData",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + role   = "eks-cluster"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-cluster/arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = "eks-cluster"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-cluster/arn:aws:iam::aws:policy/AmazonEKSServicePolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
      + role       = "eks-cluster"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-cluster/arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
      + role       = "eks-cluster"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-fargate-pod-execution-role/arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = "eks-fargate-pod-execution-role"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-fargate-pod-execution-role/arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
      + role       = "eks-fargate-pod-execution-role"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-fargate-pod-execution-role/arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
      + role       = "eks-fargate-pod-execution-role"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-node-group/arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = "eks-node-group"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-node-group/arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = "eks-node-group"
    }

  # module.iam.aws_iam_role_policy_attachment.attachments["eks-node-group/arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"] will be created
  + resource "aws_iam_role_policy_attachment" "attachments" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = "eks-node-group"
    }

Plan: 68 to add, 0 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────

Saved the plan to: .plan

To perform exactly these actions, run the following command to apply:
    terraform apply ".plan"
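
The `helm_release` shown in the plan above can be produced by a Terraform configuration along these lines. This is a minimal sketch reconstructed from the plan output, not the full module source; the `aws_eks_cluster.eks_cluster` reference and the `var.vpc_id` variable name are illustrative assumptions.

```hcl
# Minimal sketch of the helm_release behind the plan output above.
# The cluster reference and variable names are assumptions for illustration.
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  version    = "1.6.0"
  namespace  = "kube-system"
  atomic     = true

  set {
    name  = "clusterName"
    value = aws_eks_cluster.eks_cluster.name # "COSMO-EKS-CLUSTER" in the plan
  }
  set {
    name  = "region"
    value = "ap-northeast-1"
  }
  set {
    name  = "serviceAccount.create"
    value = "false" # the ServiceAccount is created separately by Terraform
  }
  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }
  set {
    name  = "vpcId"
    value = var.vpc_id # shown as (known after apply) in the plan
  }
}
```

Setting `serviceAccount.create` to `"false"` matches the plan, where the `kubernetes_service_account` resource is managed by Terraform directly so that its IRSA role annotation can reference the `aws_iam_role` created in the same module.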

The next article will walk through implementing a Terraform module for AWS EKS with Karpenter.

