This post is a quick note on how I use Terraform. For a detailed introduction there are already plenty of Ironman-series posts and other articles online, so here I'll just jot down the essentials based on my own experience.

One more important thing: make sure AWS configure (your AWS profile) points at the account you intend to deploy into, because by default Terraform picks up the account from your environment variables.
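
For example, the deployment account can be pinned in the provider block itself instead of relying on environment variables. A minimal sketch; "my-deploy-profile" is a hypothetical name, not something from this post:

provider "aws" {
  region  = "ap-northeast-1"

  # "my-deploy-profile" is a hypothetical profile name; point it at the entry
  # in ~/.aws/credentials that maps to the account you want to deploy into
  profile = "my-deploy-profile"
}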

Command line

Terraform is driven largely from the command line: you run apply and destroy there, and occasionally output to fetch a specific resource ID. It's worth writing the basic commands down, otherwise Terraform can feel unnecessarily hard to use.

terraform init                      # download the required providers and modules
terraform plan                      # compare against state and preview the changes to apply
terraform apply -target module.vpc  # deploy; without -target, everything is applied
terraform destroy                   # tear down the managed resources
terraform output                    # print the declared output values
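
When you only need one value, for instance to feed into a script, terraform output can take the output's name, and -raw strips the quoting. A small usage sketch, assuming the configure_kubectl output defined near the end of this post:

terraform output configure_kubectl        # print a single output by name
terraform output -raw configure_kubectl   # raw string, handy inside $(...) in scripts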

Version

This block is declared up front to say that Terraform is what deploys this IaC. As with any code, version information is a critical piece, so this is where we pin both the Terraform version and the versions of the providers we deploy with. There is also the backend configuration (commented out below), which I'll leave for a later post because it is extremely important.

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.47"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.9"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.20"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }

  # Used for end-to-end testing on the project; update to suit your needs
  # backend "s3" {
  #   bucket = "terraform-ssp-github-actions-state"
  #   region = "us-west-2"
  #   key    = "e2e/karpenter/terraform.tfstate"
  # }
}

Provider

This part is easy to understand: you declare each provider the configuration needs and specify how it connects. For AWS that is mainly the region; for the Kubernetes-related providers it is the EKS cluster endpoint plus an auth token.

provider "aws" {
  region = local.region
}

# Required for public ECR where Karpenter artifacts are hosted
provider "aws" {
  region = "us-east-1"
  alias  = "virginia"
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the awscli to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}

provider "kubectl" {
  apply_retry_count      = 5
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
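
If you would rather not require the awscli wherever Terraform runs, a known alternative to the exec block is fetching a token with the aws_eks_cluster_auth data source. A sketch, not what the upstream example uses:

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  # The token is issued once per plan/apply and expires after about 15 minutes,
  # which is the trade-off against the exec-based refresh above
  token = data.aws_eks_cluster_auth.this.token
}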

Data

Data sources are read-only: they look up existing infrastructure instead of creating it, and Terraform errors out if a lookup matches more than one resource or finds none at all.

data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.virginia
}

data "aws_availability_zones" "available" {}

Module

A module wraps a whole service up as one reusable unit, which saves a great deal of configuration and wiring; thanks to all the community maintainers for their hard work. Just note that the source and version arguments take different forms depending on where the module comes from, as shown in the sketch after the module block below.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.16"

  cluster_name                   = local.name
  cluster_version                = "1.27"
  cluster_endpoint_public_access = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # Fargate profiles use the cluster primary security group so these are not utilized
  create_cluster_security_group = false
  create_node_security_group    = false

  manage_aws_auth_configmap = true
  aws_auth_roles = [
    # We need to add in the Karpenter node IAM role for nodes launched by Karpenter
    {
      rolearn  = module.eks_blueprints_addons.karpenter.node_iam_role_arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups = [
        "system:bootstrappers",
        "system:nodes",
      ]
    },
  ]

  fargate_profiles = {
    karpenter = {
      selectors = [
        { namespace = "karpenter" }
      ]
    }
    kube_system = {
      name = "kube-system"
      selectors = [
        { namespace = "kube-system" }
      ]
    }
  }

  tags = merge(local.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = local.name
  })
}
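
As mentioned above, source changes shape with the module's origin. A few common forms; the Git URL and paths here are hypothetical, for illustration only:

module "from_registry" {
  source  = "terraform-aws-modules/vpc/aws" # Terraform Registry: version is supported
  version = "~> 5.0"
}

module "from_git" {
  # Git sources pin a revision with ?ref= instead of a version argument
  source = "git::https://github.com/example-org/terraform-modules.git//vpc?ref=v1.2.0"
}

module "from_local_path" {
  source = "../modules/vpc" # local paths take no version argument
}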

Resource

Resources are the most basic unit for creating a service. Duplicates cause errors here too: declaring the same resource twice will fail.

resource "kubectl_manifest" "karpenter_provisioner" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1alpha5
    kind: Provisioner
    metadata:
      name: default
    spec:
      requirements:
        - key: "karpenter.k8s.aws/instance-category"
          operator: In
          values: ["c", "m"]
        - key: "karpenter.k8s.aws/instance-cpu"
          operator: In
          values: ["8", "16", "32"]
        - key: "karpenter.k8s.aws/instance-hypervisor"
          operator: In
          values: ["nitro"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ${jsonencode(local.azs)}
        - key: "kubernetes.io/arch"
          operator: In
          values: ["arm64", "amd64"]
        - key: "karpenter.sh/capacity-type" # If not included, the webhook for the AWS cloud provider will default to on-demand
          operator: In
          values: ["spot", "on-demand"]
      kubeletConfiguration:
        containerRuntime: containerd
        maxPods: 110
      limits:
        resources:
          cpu: 1000
      consolidation:
        enabled: true
      providerRef:
        name: default
      ttlSecondsUntilExpired: 604800 # 7 Days = 7 * 24 * 60 * 60 Seconds
  YAML

  depends_on = [
    module.eks_blueprints_addons
  ]
}
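
The providerRef above points at an AWSNodeTemplate named default, which tells Karpenter which subnets and security groups to launch nodes into. A sketch of that companion manifest, assuming the karpenter.sh/discovery tags applied by the VPC and EKS modules:

resource "kubectl_manifest" "karpenter_node_template" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1alpha1
    kind: AWSNodeTemplate
    metadata:
      name: default
    spec:
      # Discover subnets and security groups by the karpenter.sh/discovery tag
      subnetSelector:
        karpenter.sh/discovery: ${local.name}
      securityGroupSelector:
        karpenter.sh/discovery: ${local.name}
      tags:
        karpenter.sh/discovery: ${local.name}
  YAML

  depends_on = [
    module.eks_blueprints_addons
  ]
}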

Output

Outputs will interact with the cdk8s code later in this series: when deploying Kubernetes resources we need to bind to the IDs of various AWS services, so we rely heavily on outputs. A quick example for now; later posts will show how to use them.

output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
}
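
For the cdk8s integration, the pattern is simply to output whichever IDs the Kubernetes side needs to bind to. A sketch, assuming a vpc module like the one the eks module above already references:

output "vpc_id" {
  description = "VPC ID for downstream cdk8s manifests to reference"
  value       = module.vpc.vpc_id
}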

That covers the overview. There are many more techniques and use cases, varying with each situation and each person's habits, so none of this is necessarily the one right way. Questions and discussion are welcome.


Reference

https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/examples/karpenter/main.tf

