Kubernetes on EKS (AWS) Setup

Overview

Heeler can automatically harvest EKS resources and model them as services linked to your existing code base. To do so, three pieces must be in place:

AWS Connection

Heeler must be able to connect to the AWS accounts that host your EKS resources. By following the steps in the Organization or Account setup, you will have deployed these resources, which support EKS connectivity:

  • IAM policy, HeelerEKS, that allows EKS cluster API access configuration.

  • IAM policy, Heeler, that denies visibility into the customer data plane, including actions such as s3:GetObject, dynamodb:GetItem, and more.

  • IAM role, heeler-member, that uses the provided permissions and trust policy for Heeler connectivity.
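To confirm these resources were deployed in an account, you can query IAM with the AWS CLI. This is a sketch using the role and policy names from the list above; adjust the profile or region flags for your environment:

```shell
# Verify the Heeler IAM role exists.
aws iam get-role --role-name heeler-member \
  --query 'Role.{Name:RoleName,Arn:Arn}'

# List the policies attached to the role; the output should include
# the HeelerEKS and Heeler policies described above.
aws iam list-attached-role-policies --role-name heeler-member \
  --query 'AttachedPolicies[].PolicyName'
```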

EKS Access Granted to Heeler IPs

Heeler must have network connectivity to your EKS clusters. Each cluster's networking configuration must enable public API server endpoint access and include Heeler's two IPs in the Public access source allowlist. To set those values in the AWS console:

  1. Navigate to the Networking tab

  2. Click Manage and choose the Endpoint access setting

     1. Select the Public and private endpoints option

     2. Under Advanced settings, enter the two Heeler IPs under CIDR block

        • 44.221.229.40/32

        • 52.73.231.96/32

  3. Save changes
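After saving, you can confirm the endpoint settings took effect with the AWS CLI (the cluster name below is a placeholder):

```shell
# Confirm public/private endpoint access and the allowlisted CIDRs.
# Replace my-cluster with your EKS cluster name.
aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.[endpointPublicAccess,endpointPrivateAccess,publicAccessCidrs]'
```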

If using Terraform to manage your EKS cluster, the vpc_config block should include these values for endpoint_public_access and public_access_cidrs:

resource "aws_eks_cluster" "this" {
  name = var.cluster_name

  vpc_config {
    subnet_ids              = var.subnet_ids
    security_group_ids      = [var.cluster_sg_id]

    # 1) "Public and private" endpoint access
    endpoint_public_access  = true
    endpoint_private_access = true

    # 2) Public access source allowlist includes Heeler's two IPs
    public_access_cidrs = [
      "44.221.229.40/32",
      "52.73.231.96/32",
    ]
  }
}
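If the cluster is managed outside of Terraform, the same change can be made with the AWS CLI. This is a sketch with placeholder cluster name and region; note that publicAccessCidrs replaces the existing allowlist, so include any CIDRs you already rely on:

```shell
# Enable public + private endpoint access and allowlist Heeler's IPs.
# This replaces the current public access CIDR list, so append these
# two entries to any CIDRs already in use.
aws eks update-cluster-config --name my-cluster --region us-east-1 \
  --resources-vpc-config \
  '{"endpointPublicAccess":true,"endpointPrivateAccess":true,"publicAccessCidrs":["44.221.229.40/32","52.73.231.96/32"]}'
```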

EKS Access Includes EKS API

Clusters created after January 1, 2024 already have the required Cluster Authentication Mode setting enabled; no action is required.

Legacy clusters, however, may not have the required setting enabled. To check and, if needed, update the Cluster Authentication Mode in the console:

  1. Navigate to the Access tab

  2. Select Manage

     1. A legacy EKS cluster may have ConfigMap selected. Choose EKS API and ConfigMap (do not choose EKS API alone if you are still using the ConfigMap to manage access)

     2. Save changes

If using Terraform to manage your EKS clusters, they likely already have the proper Cluster Authentication Mode setting, as Terraform will throw an error when attempting to apply the legacy value.

To confirm, you can verify that the access_config definition includes the API_AND_CONFIG_MAP or API value for authentication_mode:

resource "aws_eks_cluster" "this" {
  name = var.cluster_name

  access_config {
    authentication_mode = "API_AND_CONFIG_MAP" # or "API"
  }
}
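You can also check and update the authentication mode with the AWS CLI (the cluster name below is a placeholder). Keep in mind that authentication mode changes are one-way: a cluster can move from ConfigMap toward the EKS API, but not back:

```shell
# Check the current authentication mode. Legacy clusters may report
# CONFIG_MAP (or no accessConfig at all).
aws eks describe-cluster --name my-cluster \
  --query 'cluster.accessConfig.authenticationMode' --output text

# Switch a legacy cluster to API_AND_CONFIG_MAP. This change is
# one-way; you cannot revert to CONFIG_MAP afterwards.
aws eks update-cluster-config --name my-cluster \
  --access-config authenticationMode=API_AND_CONFIG_MAP
```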
