
How to Install and Use an EKS Distribution on the Zadara Cloud

Updated: Mar 6




Written by Osmar R Leão

SA @Zadara


This article demonstrates how to create a Kubernetes cluster on Zadara’s zCompute. It walks through creating all the virtual network infrastructure, setting up access, and integrating with the cloud orchestrator so that Kubernetes can use cloud resources automatically.


The requirements for access to zCompute and how to create an account on Zadara will not be discussed.


Installation Steps

  1. Project

  2. User and Privileges

  3. AWS Credentials (Access Keys)

  4. Key Pairs

  5. Deployment Workflow

  6. File Downloading

  7. Volume Type Choice

  8. CNI Networking Provider Choice

  9. Instance Image Choice

  10. Installing EKS-D

  11. Conclusion


1. Project

After logging into the Cloud Panel, go to Identity & Access and click on Accounts:



Within Accounts, create a project in “Create Project”:



Name the project, give it a description, choose the IP Pool, and click Create. Remember to choose the VPC option:




2. User and Privileges

Create a user and assign it to the project you created. To do this, go to Accounts and the Users tab:



On the Users tab, click Create User:



Fill in the data for the user:



Then assign the MemberFullAccess and IAMFullAccess permissions (required by the Terraform AWS provider) to the created project:




3. AWS Credentials (Access Keys)

Access keys are required to use the Terraform script. To create an AWS-compatible access key, go to My Profile and Security:



Within Access Keys, click Create to create a new key:



Pay attention to the instructions and keep both the access key ID and the secret:




4. Key Pairs

You need at least one SSH key pair to be placed on the EKS-D hosts and one for the bastion host (they can be the same pair). To do this, go to Compute, Key Pairs:



Within Key Pairs, click Create to create the key pair:



Name your key. It is also possible to create the key pair yourself and upload only the public key, but this will not be covered in this tutorial:



Pay attention to the instructions; you need to save your private key:



NOTE: On your operating system, restrict the private key file’s permissions so that only your user can read it, removing all other permissions.
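For example, on Linux or macOS the downloaded key can be restricted like this (a minimal sketch; the file name and path are placeholders for wherever you saved the key):

chmod 400 ~/Downloads/eksd-key.pem   # only the owner can read the private key
ls -l ~/Downloads/eksd-key.pem       # verify the resulting permissions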



5. Deployment Workflow

It is possible to create EKS-D on the Zadara cloud in two ways: (a) in a simple, all-in-one way; or (b) in more elaborate, separate stages. This document follows the staged method, which is more complex than the standard one.


6. File Downloading

The first step is to download the Terraform scripts:


git clone https://github.com/zadarastorage/zadara-examples.git
cd zadara-examples/k8s/eksd/

7. Volume Type Choice

The remaining steps define the EKS-D infrastructure. To choose the types of volumes to be created (following the AWS standard), you need to check which disk types your cloud offers and which one is the default for EBS creation.


In the Symphony console, check the types:



The idea is to use your cloud’s default type (gp3 in the example below):



Both checks can be combined into a single command:


volume volume-types list -c name -c alias -c is_default -c is_provisioning_disabled

8. CNI Networking Provider Choice

There are three CNI network providers to choose from: Flannel, Calico, or Cilium. The default is Flannel; Cilium has limited support.


9. Instance Image Choice

The images for the instances to be created must be available in your cloud’s image list. The default is Ubuntu 22.04 LTS for the bastion host and EKS-D 1.29 for the EKS nodes. Go to Images under Machine Images in the Zadara dashboard:



The images must be listed:



If they are not listed, you can download the images from the Marketplace within Machine Images:



You need to collect the AWS-format IDs (AMI IDs) of the images. By clicking on an image you can check its ID (a CLI alternative is sketched right below):
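Alternatively, the AMI IDs can be listed from the command line through the cloud’s EC2-compatible API. This is only a sketch: the endpoint URL is a placeholder for your zCompute API address, and it assumes the AWS CLI is configured with the access keys created earlier:

# List image names and their AWS-format IDs (endpoint URL is a placeholder)
aws ec2 describe-images \
  --endpoint-url https://<URL_Cloud_Zadara_EC2_endpoint> \
  --region us-east-1 \
  --query 'Images[].[Name,ImageId]' \
  --output table
# a region must be set for the AWS CLI; adjust it to match your cloud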



With all the steps and choices made, the next step is to configure Terraform with the defined options, access keys, and passwords.


10. Installing EKS-D

To create the EKS-D infrastructure and a cluster with the options defined in the previous steps, you need to create a new options file based on the templates from the Zadara GitHub repository.


The method used here is more complete and complex than the standard one intended for demonstration environments; it is closer to what should be done for production environments.


You need to configure the EKS-D environment in phases, the first phase being the deployment of the infrastructure. To do this, go to the infra-terraform directory and copy the terraform.auto.tfvars.template file to terraform.auto.tfvars, as sketched below:
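A minimal sketch of that step, assuming you are still inside the k8s/eksd directory of the cloned repository:

cd infra-terraform
cp terraform.auto.tfvars.template terraform.auto.tfvars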


Edit the terraform.auto.tfvars file by changing, or adding in some cases, the parameters below:


api_endpoint                       = "URL_Cloud_Zadara"
cluster_access_key                 = "Access_Key_Id"
cluster_access_secret_id           = "Secret_Access_Key"
bastion_keyname                    = "Key_Name"
bastion_ami                        = "AMI_ID"
environment                        = "ENV_Name"

Run the terraform init command to initialize Terraform:


# terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 3.33.0"...
- Finding latest version of hashicorp/cloudinit...
- Installing hashicorp/aws v3.33.0...
- Installed hashicorp/aws v3.33.0 (signed by HashiCorp)
- Installing hashicorp/cloudinit v2.3.5...
- Installed hashicorp/cloudinit v2.3.5 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Run a terraform plan to check everything that will be created as infrastructure:


# terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

...

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if
you run "terraform apply" now.

With everything settled, you need to apply the resources in Zadara with the terraform apply --auto-approve command:


# terraform apply --auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

...

Apply complete! Resources: 28 added, 0 changed, 0 destroyed.

Outputs:

bastion_ip = "IP"
environment = "ENV_Name"
masters_load_balancer_id = "NLB_ID"
masters_load_balancer_internal_dns = "LB_HOSTNAME"
private_subnet_id = "subnet-PRIV-ID"
public_subnet_id = "subnet-Pub-ID"
security_group_id = "sg-ID"
vpc_id = "vpc-ID"
x_loadbalancer_script = "./get_loadbalancer.sh IP LB_HOSTNAME <access_key> <secret_key> <bastion_user> <bastion_key>"

Run the command from the last line of the terraform apply output (x_loadbalancer_script) to get the private and public IPs of the Load Balancer, or check the IPs under the Load Balancers option of the Zadara cloud interface:
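For reference, the invocation looks like the sketch below; every value is a placeholder taken from the terraform outputs and the earlier steps, and the bastion key path is hypothetical:

# Arguments: bastion IP, LB hostname, access key, secret key, bastion user, bastion key file
./get_loadbalancer.sh <bastion_ip> <LB_HOSTNAME> <Access_Key_Id> <Secret_Access_Key> ubuntu ~/keys/bastion-key.pem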



Now it is necessary to create the EKS-D (Kubernetes) layer on top of the infrastructure created in the Zadara cloud. To build it, change to the eksd-terraform directory and copy the terraform.auto.tfvars.template file to terraform.auto.tfvars, as sketched below:
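A minimal sketch of that step, run from the infra-terraform directory used above:

cd ../eksd-terraform
cp terraform.auto.tfvars.template terraform.auto.tfvars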



Fill in all the data relating to the created infrastructure:


cluster_access_key                 = "Access_Key_Id"
cluster_access_secret_id           = "Secret_Access_Key"
api_endpoint                       = "URL_Cloud_Zadara"
bastion_ip                         = "Bastion_IP"
bastion_ami                        = "AMI_ID"
bastion_keyfile                    = "PATH_to_Key"
bastion_keyname                    = "Key_Name"
bastion_user                       = "ubuntu"
environment                        = "ENV_Name"
vpc_id                             = "vpc-ID"
public_subnet_id                   = "subnet-PUB-ID"
private_subnet_id                  = "subnet-PRIV-ID"
security_group_id                  = "sg-ID"
masters_load_balancer_id           = "NLB-ID"
masters_load_balancer_internal_dns = "NLB-Hostname"
masters_load_balancer_public_ip    = "NLB-PUB-IP"
masters_load_balancer_private_ip   = "NLB-PRIV-IP"
eksd_ami                           = "ami-ID"
masters_keyname                    = "Key_Name"
masters_keyfile                    = "PATH_to_PEM"
workers_keyname                    = "Key_Name"
workers_keyfile                    = "PATH_to_PEM"

Some parameters are optional; the ones changed for this example are shown below with comments:


masters_instance_type              = "z4.large"
workers_instance_type              = "z8.large"
masters_count                      = 3     # minimum recommended
manage_masters_using_asg           = false # do not use Auto Scaling for the master nodes
workers_count                      = 5     # there is no minimum
masters_volume_size                = "50"
workers_volume_size                = "100"
workers_addition                   = 3     # maximum workers that can be added by the ASG
ebs_csi_volume_type                = "gp3"
install_ebs_csi                    = true
install_lb_controller              = true
install_autoscaler                 = true
install_kasten_k10                 = false

The instance type options (mainly for the workers) should be sized according to the expected Kubernetes pod load and can be set higher or lower based on usage. The masters_count option defines how many master nodes are used for redundancy, with the master autoscaling option disabled (as recommended by the Zadara team); autoscaling of masters is avoided because of etcd database corruption issues.


The workers_count and workers_addition options define how the cluster grows automatically through the zCompute cloud orchestrator. workers_count is the initial (and minimum) number of workers in the Kubernetes cluster, and workers_addition is how many extra nodes can be created automatically to meet additional demand for computing capacity (with the values above, the cluster can grow from 5 up to 8 workers).


There are other parameters for backing up the Kubernetes etcd database to a compatible S3 bucket (not covered in this tutorial and linked to the “install_kasten_k10” option):


backup_access_key_id     # access key of the NGOS/S3 user
backup_secret_access_key # secret key of the NGOS/S3 user
backup_region            # NGOS/S3 region (default is us-east-1)
backup_endpoint          # NGOS/S3 endpoint
backup_bucket            # NGOS/S3 bucket name
backup_rotation          # maximum number of backup files to keep (default is 100)

With the parameters configured, it’s time to initialize terraform, plan, and deploy EKS-D on the Zadara cloud:


# terraform init

Initializing the backend...

Initializing modules...
- master_instance_profile in modules\instance-profile
- masters_asg in modules\asg
- worker_instance_profile in modules\instance-profile
- workers_asg in modules\asg

Initializing provider plugins...
- Finding latest version of hashicorp/cloudinit...
- Finding hashicorp/aws versions matching "~> 3.33.0"...
- Finding latest version of hashicorp/random...
- Installing hashicorp/cloudinit v2.3.5...
- Installed hashicorp/cloudinit v2.3.5 (signed by HashiCorp)
- Installing hashicorp/aws v3.33.0...
- Installed hashicorp/aws v3.33.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.3...
- Installed hashicorp/random v3.6.3 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
# terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

...

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if
you run "terraform apply" now.
# terraform apply --auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

...

Apply complete! Resources: 22 added, 0 changed, 0 destroyed.

Outputs:

apiserver_private = "PRIV-IP"
apiserver_public = "PUB-IP"
bastion_ip = "PUB-IP"
master_hostname = "k8s-master-1"
x_kubeconfig_script = "./get_kubeconfig.sh k8s-master-1 PRIV-IP PUB-IP PUB-IP <bastion_user> <bastion_keypair> <master_user> <master_keypair>"

Run the script from the last line of the terraform apply output (x_kubeconfig_script) to retrieve the kubeconfig file:


NOTE: The script actually takes 9 parameters, while the example in the terraform output shows only 8.


# ./get_kubeconfig.sh k8s-master-1 BASTION_PRIV_IP BASTION_PUB_IP MASTER_IP ubuntu KEYPATH/KEY_FILE ubuntu KEYPATH/KEY_FILE PATH_TO_KUBECONFIG
eksd.pem                100% 1674   157.0KB/s   00:00
kubeconfig              100% 5653   267.2KB/s   00:00

Using the downloaded kubeconfig file with a client such as OpenLens, for example, you can access the cluster:
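You can also check access from the command line with kubectl, assuming the kubeconfig was saved to the path given as the last argument of get_kubeconfig.sh (placeholder below):

export KUBECONFIG=<PATH_TO_KUBECONFIG>
kubectl get nodes -o wide   # masters and workers should be listed as Ready
kubectl get pods -A         # system pods should be Running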


11. Conclusion

The installation is complete, and you can now use an EKS-D (AWS-standard) Kubernetes cluster with a few master and worker nodes to create and deploy applications.


You can define automatic, load-based growth (autoscaling) for the Kubernetes workers, integrated with the Zadara cloud. The default is to add at most 3 extra nodes, but you can raise this value through the workers_addition line in the previously edited terraform.auto.tfvars file.


Automatic creation of application access through Load Balancers (Network and Application) and of volumes (PVs and PVCs), integrated with zCompute, was also enabled with the install_lb_controller and install_ebs_csi options. It is now possible to create PVCs with the default StorageClass already defined for the cloud’s EBS, and when access is requested through an Ingress or a LoadBalancer Service, the corresponding ALB or NLB will also be created in the Zadara cloud to serve Kubernetes.
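As a quick check of the storage integration, the sketch below lists the StorageClasses and creates a small test PVC against the default one; the PVC name and size are arbitrary examples:

kubectl get storageclass

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-ebs-pvc            # arbitrary example name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # arbitrary example size; the default StorageClass is used
EOF

kubectl get pvc test-ebs-pvc    # may stay Pending until a pod uses it, depending on the binding mode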


