Creating an Amazon Elastic Kubernetes Service (EKS) Cluster Using Azure DevOps and Terraform

In this blog post, we’ll walk through the process of setting up an Amazon Elastic Kubernetes Service (EKS) cluster on AWS using Azure DevOps and Terraform. We’ll cover the essential Terraform configuration files (providers.tf, main.tf, variables.tf, output.tf) and the Kubernetes manifests (deployment.yaml and services.yaml). Additionally, we’ll discuss how to integrate Azure DevOps with AWS.

Prerequisites

  • AWS account with necessary permissions to create EKS clusters.
  • Azure DevOps account.
  • Terraform installed on your local machine.
  • AWS CLI configured on your local machine.

Step 1: Setting Up Terraform Configuration Files

providers.tf

This file specifies the providers required for Terraform to interact with AWS.
provider "aws" {
  region = var.aws_region
}
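
The provider block above picks up credentials from your environment (for example, the AWS CLI configuration or environment variables). If you also want to pin the provider version, a terraform block such as the following can sit alongside it; the version constraint here is just a suggested assumption:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # adjust to the provider version you have tested against
    }
  }
}
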
variables.tf

Define the variables used in the Terraform configuration.
variable "aws_region" {
  description = "The AWS region to deploy resources"
  default     = "us-west-2"
}

variable "cluster_name" {
  description = "The name of the EKS cluster"
  default     = "my-eks-cluster"
}

variable "node_group_name" {
  description = "The name of the node group"
  default     = "my-node-group"
}

variable "node_instance_type" {
  description = "EC2 instance type for the nodes"
  default     = "t3.medium"
}

variable "desired_capacity" {
  description = "Desired number of worker nodes"
  default     = 2
}

variable "max_capacity" {
  description = "Maximum number of worker nodes"
  default     = 3
}

variable "min_capacity" {
  description = "Minimum number of worker nodes"
  default     = 1
}
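
All of these variables have defaults, so none of them are strictly required. If you want to override them without editing the files, a hypothetical terraform.tfvars could look like this:
aws_region         = "us-east-1"
cluster_name       = "demo-eks-cluster"
node_instance_type = "t3.large"
desired_capacity   = 3
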
main.tf

This file contains the main configuration for creating the EKS cluster and node group.
resource "aws_eks_cluster" "eks_cluster" {
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_cluster_role.arn

  vpc_config {
    subnet_ids = aws_subnet.eks_subnet[*].id
  }
}

resource "aws_eks_node_group" "node_group" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.eks_node_role.arn
  subnet_ids      = aws_subnet.eks_subnet[*].id

  scaling_config {
    desired_size = var.desired_capacity
    max_size     = var.max_capacity
    min_size     = var.min_capacity
  }

  instance_types = [var.node_instance_type]
}
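
Note that main.tf references resources that are not shown here: the IAM roles (aws_iam_role.eks_cluster_role and aws_iam_role.eks_node_role) and the subnets (aws_subnet.eks_subnet) must be defined elsewhere in your configuration. As a rough sketch, the cluster role could look something like this:
resource "aws_iam_role" "eks_cluster_role" {
  name = "eks-cluster-role"

  # Allow the EKS control plane to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}
The node role is similar, except it trusts ec2.amazonaws.com and needs AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly attached.
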
output.tf

Output the necessary information after the resources are created.
output "cluster_name" {
  value = aws_eks_cluster.eks_cluster.name
}

output "cluster_endpoint" {
  value = aws_eks_cluster.eks_cluster.endpoint
}

output "cluster_certificate_authority_data" {
  value = aws_eks_cluster.eks_cluster.certificate_authority[0].data
}
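
Once terraform apply completes, these outputs make it easy to point kubectl at the new cluster. Assuming the AWS CLI is configured, something like this works (us-west-2 matches the default aws_region above):
# Write or update the kubeconfig entry for the new cluster
aws eks update-kubeconfig --region us-west-2 --name $(terraform output -raw cluster_name)

# Confirm the worker nodes have joined
kubectl get nodes
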
Step 2: Kubernetes Manifests

deployment.yaml

Define the deployment for your application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app-image:latest
        ports:
        - containerPort: 80

services.yaml

Define the service to expose your application.
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: my-app
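
With kubectl pointed at the cluster, you can apply both manifests and watch the LoadBalancer service get an external address. The manifests/ folder below is only an assumed layout; use whatever path your repository has:
kubectl apply -f manifests/deployment.yaml -f manifests/services.yaml
kubectl get service my-app-service   # EXTERNAL-IP shows the load balancer hostname once provisioned
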
Step 3: Integrating Azure DevOps with AWS
  • Create an IAM User in AWS (a CLI equivalent is sketched after this list):
    • Go to the IAM console in AWS.
    • Create a new user with programmatic access.
    • Attach the AdministratorAccess policy or a custom policy with necessary permissions.
    • Save the Access Key ID and Secret Access Key.
  • Create a Service Connection in Azure DevOps:
    • Navigate to your Azure DevOps project.
    • Go to Project Settings > Service connections.
    • Click on “New service connection” and select “AWS” (this option is provided by the AWS Toolkit for Azure DevOps extension, which must be installed in your organization).
    • Enter the Access Key ID and Secret Access Key from the IAM user.
    • Verify the connection and save it.
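
If you prefer the AWS CLI over the console, the IAM steps above roughly translate to the commands below; the user name azure-devops is just an example:
# Create the user and attach a policy (prefer a least-privilege custom policy in practice)
aws iam create-user --user-name azure-devops
aws iam attach-user-policy --user-name azure-devops \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate the Access Key ID and Secret Access Key used by the service connection
aws iam create-access-key --user-name azure-devops
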
Step 4: Setting Up Azure DevOps Pipeline
  • Create a new pipeline in Azure DevOps.
  • Connect your repository containing the Terraform and Kubernetes configuration files.
  • Add tasks to the pipeline to:
    • Install Terraform.
    • Initialize Terraform.
    • Apply Terraform configuration.
    • Deploy Kubernetes manifests using kubectl.

Here’s an example of a simple Azure DevOps pipeline YAML:
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: TerraformInstaller@0
  inputs:
    terraformVersion: '1.0.0'

- script: |
    terraform init
    terraform apply -auto-approve
  displayName: 'Run Terraform'

- task: Kubernetes@1
  inputs:
    connectionType: 'Kubernetes Service Connection'
    kubernetesServiceEndpoint: '<your-k8s-service-connection>'
    namespace: 'default'
    command: 'apply'
    useConfigurationFile: true
    configuration: '$(Pipeline.Workspace)/manifests/deployment.yaml'
    arguments: '-f $(Pipeline.Workspace)/manifests/services.yaml'
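
One thing to watch: the terraform commands above need AWS credentials on the build agent. A minimal option, assuming you have stored the IAM user’s keys as secret pipeline variables named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, is to map them into the script step’s environment:
- script: |
    terraform init
    terraform apply -auto-approve
  displayName: 'Run Terraform'
  env:
    # Secret variables are not exposed to scripts automatically; map them explicitly.
    AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
    AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
    AWS_DEFAULT_REGION: 'us-west-2'
Alternatively, the AWS Toolkit for Azure DevOps provides tasks that can run the same commands against the AWS service connection created in Step 3.
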
Conclusion

By following these steps, you can set up an AWS EKS cluster using Terraform and deploy your application using Azure DevOps. This approach ensures a consistent and repeatable process for managing your Kubernetes infrastructure and deployments.

Feel free to customize the configurations and pipeline according to your specific requirements. Happy deploying! 

If you have any questions or need further assistance, let me know!
