# Image from ECR to EKS not working: the resulting Pod is always 0/2

The problem: an application image pushed to ECR (Amazon Elastic Container Registry) will not run on EKS (Amazon Elastic Kubernetes Service). The Deployment's pods stay at 0/2, meaning neither container starts. The cause can sit in the image itself, in the container configuration, or in the surrounding network and permissions setup. The question and answer below walk through the full setup and a fix.
I've tried almost everything to get things on the right track, but I still can't get my pods into a usable state.

I have a basic application written in Go. I created an image of it with:

```bash
docker build --tag docker-gs-ping .
```

Then I ran a container from that image:

```bash
docker run --publish 8080:8080 docker-gs-ping
```
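(As a quick local sanity check — assuming the Go app listens on 8080, matching the `--publish` flag above — the running container can be probed with curl:)

```bash
# Hit the published port while the container is running;
# an HTTP response confirms the app serves traffic locally
curl -i http://localhost:8080/
```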
Then I wanted to store my image in Amazon ECR, so I created a repository there.

After creating the repository, I tagged the image in my local registry:

```bash
docker tag f49366b7f534 ****40312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest
```

Here f49366b7f534 is my local image ID and docker-gs-ping is the repository name in ECR.
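(For what it's worth, the new tag can be verified to point at the same image: both references should list the same IMAGE ID.)

```bash
# The local tag and the ECR tag should both show the ID f49366b7f534
docker images | grep f49366b7f534
```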
Then I uploaded the tagged image to ECR with:

```bash
docker push ****40312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest
```

I wasn't sure whether this command pushes the tagged image or the most recent local image, since there is no way to name a specific image ID in the push command.
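(Two notes here. First, `docker push` sends exactly the image that the `repository:tag` reference points to, so the `docker tag` step above determines what gets pushed. Second, the push only succeeds if the Docker CLI is authenticated to the registry; the usual login, using the same region and registry URI as above, looks like this:)

```bash
# Authenticate the local Docker CLI to the ECR registry
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin ****40312665.dkr.ecr.us-east-1.amazonaws.com
```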
The current result: (screenshot of the push output not reproduced here).
After completing the above steps, I created a VPC and an EKS cluster using the following templates and commands (note that the EKS stack imports values exported by the VPC stack, so the VPC stack must be created first):
eks-stack.yaml:
```yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS cluster'
Parameters:
  ClusterName:
    Type: String
    Default: my-eks-cluster
  NumberOfWorkerNodes:
    Type: Number
    Default: 1
  WorkerNodesInstanceType:
    Type: String
    Default: t2.micro
  KubernetesVersion:
    Type: String
    Default: 1.22
Resources:
  ###########################################
  ## Roles
  ###########################################
  EksRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: my.eks.cluster.role
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - eks.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  EksNodeRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: my.eks.node.role
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        - "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
        - "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  ###########################################
  ## EKS cluster
  ###########################################
  EksCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Ref ClusterName
      Version: !Ref KubernetesVersion
      RoleArn: !GetAtt EksRole.Arn
      ResourcesVpcConfig:
        SecurityGroupIds:
          - !ImportValue ControlPlaneSecurityGroupId
        SubnetIds: !Split [ ',', !ImportValue PrivateSubnetIds ]
  EksNodegroup:
    Type: AWS::EKS::Nodegroup
    DependsOn: EksCluster
    Properties:
      ClusterName: !Ref ClusterName
      NodeRole: !GetAtt EksNodeRole.Arn
      ScalingConfig:
        MinSize: !Ref NumberOfWorkerNodes
        DesiredSize: !Ref NumberOfWorkerNodes
        MaxSize: !Ref NumberOfWorkerNodes
      Subnets: !Split [ ',', !ImportValue PrivateSubnetIds ]
```
Command:

```bash
aws cloudformation create-stack --region us-east-1 --stack-name my-eks-cluster \
  --capabilities CAPABILITY_NAMED_IAM --template-body file://eks-stack.yaml
```
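(Stack creation is asynchronous, so before using the cluster it helps to wait for the stack to finish and confirm its status:)

```bash
# Block until the stack reaches CREATE_COMPLETE (fails fast on rollback)
aws cloudformation wait stack-create-complete \
  --region us-east-1 --stack-name my-eks-cluster

# Inspect the final status
aws cloudformation describe-stacks --region us-east-1 \
  --stack-name my-eks-cluster --query 'Stacks[0].StackStatus'
```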
eks-vpc-stack.yaml:
```yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS VPC - private and public subnets'
Parameters:
  VpcBlock:
    Type: String
    Default: 192.168.0.0/16
    Description: The CIDR range for the VPC. This should be a valid private (RFC 1918) CIDR range.
  PublicSubnet01Block:
    Type: String
    Default: 192.168.0.0/18
    Description: CidrBlock for public subnet 01 within the VPC
  PublicSubnet02Block:
    Type: String
    Default: 192.168.64.0/18
    Description: CidrBlock for public subnet 02 within the VPC
  PrivateSubnet01Block:
    Type: String
    Default: 192.168.128.0/18
    Description: CidrBlock for private subnet 01 within the VPC
  PrivateSubnet02Block:
    Type: String
    Default: 192.168.192.0/18
    Description: CidrBlock for private subnet 02 within the VPC
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: "Worker network configuration"
        Parameters:
          - VpcBlock
          - PublicSubnet01Block
          - PublicSubnet02Block
          - PrivateSubnet01Block
          - PrivateSubnet02Block
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcBlock
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-VPC'
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  VPCGatewayAttachment:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Public Subnets
        - Key: Network
          Value: Public
  PrivateRouteTable01:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Private Subnet AZ1
        - Key: Network
          Value: Private01
  PrivateRouteTable02:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Private Subnet AZ2
        - Key: Network
          Value: Private02
  PublicRoute:
    DependsOn: VPCGatewayAttachment
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PrivateRoute01:
    DependsOn:
      - VPCGatewayAttachment
      - NatGateway01
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable01
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway01
  PrivateRoute02:
    DependsOn:
      - VPCGatewayAttachment
      - NatGateway02
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable02
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway02
  NatGateway01:
    DependsOn:
      - NatGatewayEIP1
      - PublicSubnet01
      - VPCGatewayAttachment
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt 'NatGatewayEIP1.AllocationId'
      SubnetId: !Ref PublicSubnet01
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-NatGatewayAZ1'
  NatGateway02:
    DependsOn:
      - NatGatewayEIP2
      - PublicSubnet02
      - VPCGatewayAttachment
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt 'NatGatewayEIP2.AllocationId'
      SubnetId: !Ref PublicSubnet02
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-NatGatewayAZ2'
  NatGatewayEIP1:
    DependsOn:
      - VPCGatewayAttachment
    Type: 'AWS::EC2::EIP'
    Properties:
      Domain: vpc
  NatGatewayEIP2:
    DependsOn:
      - VPCGatewayAttachment
    Type: 'AWS::EC2::EIP'
    Properties:
      Domain: vpc
  PublicSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 01
    Properties:
      MapPublicIpOnLaunch: true
      AvailabilityZone:
        Fn::Select:
          - '0'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PublicSubnet01Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet01"
        - Key: kubernetes.io/role/elb
          Value: 1
  PublicSubnet02:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 02
    Properties:
      MapPublicIpOnLaunch: true
      AvailabilityZone:
        Fn::Select:
          - '1'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PublicSubnet02Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet02"
        - Key: kubernetes.io/role/elb
          Value: 1
  PrivateSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 03
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '0'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PrivateSubnet01Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet01"
        - Key: kubernetes.io/role/internal-elb
          Value: 1
  PrivateSubnet02:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Private Subnet 02
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '1'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PrivateSubnet02Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet02"
        - Key: kubernetes.io/role/internal-elb
          Value: 1
  PublicSubnet01RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet01
      RouteTableId: !Ref PublicRouteTable
  PublicSubnet02RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet02
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet01RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet01
      RouteTableId: !Ref PrivateRouteTable01
  PrivateSubnet02RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet02
      RouteTableId: !Ref PrivateRouteTable02
  ControlPlaneSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster communication with worker nodes
      VpcId: !Ref VPC
Outputs:
  PublicSubnetIds:
    Description: Public subnets IDs in the VPC
    Value: !Join [ ",", [ !Ref PublicSubnet01, !Ref PublicSubnet02 ] ]
    Export:
      Name: PublicSubnetIds
  PrivateSubnetIds:
    Description: Private subnets IDs in the VPC
    Value: !Join [ ",", [ !Ref PrivateSubnet01, !Ref PrivateSubnet02 ] ]
    Export:
      Name: PrivateSubnetIds
  ControlPlaneSecurityGroupId:
    Description: Security group for the cluster control plane communication with worker nodes
    Value: !Ref ControlPlaneSecurityGroup
    Export:
      Name: ControlPlaneSecurityGroupId
  VpcId:
    Description: The VPC ID
    Value: !Ref VPC
    Export:
      Name: VpcId
```
Command:

```bash
aws cloudformation create-stack --region us-east-1 --stack-name my-eks-vpc \
  --template-body file://eks-vpc-stack.yaml
```
Result after the command: (screenshot of the stack creation output not reproduced here).
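(One thing worth double-checking before deploying anything: kubectl talks to whatever cluster the current kubeconfig context points at. A typical way to point it at the new EKS cluster, using the names above, and to confirm the active context:)

```bash
# Write/refresh a kubeconfig entry for the new cluster and switch to it
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster

# Confirm which context kubectl is now using
kubectl config current-context
```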
Now I try to deploy the deployment.yaml and service.yaml files.

deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: new-container
          image: ****40312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest
          ports:
            - containerPort: 80
```
Commands and results: (screenshot not reproduced here).
Now service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: LoadBalancer
  selector:
    app: helloworld
  ports:
    - name: http
      port: 80
      targetPort: 80
```
Commands and results: (screenshot not reproduced here).
After all this is done, when I run kubectl get deploy, I get the following results: (screenshot not reproduced; the deployment reports 0/2 pods ready).
For debugging, I ran kubectl describe pod helloworld and got the following:
```text
C:\Users\visratna\GolandProjects\testaws>kubectl describe pod helloworld
Name:             helloworld-c6dc56598-jmpvr
Namespace:        default
Priority:         0
Service Account:  default
Node:             docker-desktop/192.168.65.4
Start Time:       Fri, 07 Jul 2023 22:22:18 +0530
Labels:           app=helloworld
                  pod-template-hash=c6dc56598
Annotations:      <none>
Status:           Pending
IP:               10.1.0.7
IPs:
  IP:           10.1.0.7
Controlled By:  ReplicaSet/helloworld-c6dc56598
Containers:
  new-container:
    Container ID:
    Image:          549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sldvv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-sldvv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  23m                   default-scheduler  Successfully assigned default/helloworld-c6dc56598-jmpvr to docker-desktop
  Normal   Pulling    22m (x4 over 23m)     kubelet            Pulling image "549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest"
  Warning  Failed     22m (x4 over 23m)     kubelet            Failed to pull image "549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://549840312665.dkr.ecr.us-east-1.amazonaws.com/v2/docker-gs-ping/manifests/latest": no basic auth credentials
  Warning  Failed     22m (x4 over 23m)     kubelet            Error: ErrImagePull
  Warning  Failed     22m (x6 over 23m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m47s (x85 over 23m)  kubelet            Back-off pulling image "549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest"

Name:             helloworld-c6dc56598-r9b4d
Namespace:        default
Priority:         0
Service Account:  default
Node:             docker-desktop/192.168.65.4
Start Time:       Fri, 07 Jul 2023 22:22:18 +0530
Labels:           app=helloworld
                  pod-template-hash=c6dc56598
Annotations:      <none>
Status:           Pending
IP:               10.1.0.6
IPs:
  IP:           10.1.0.6
Controlled By:  ReplicaSet/helloworld-c6dc56598
Containers:
  new-container:
    Container ID:
    Image:          549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-84rw4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-84rw4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  23m                   default-scheduler  Successfully assigned default/helloworld-c6dc56598-r9b4d to docker-desktop
  Normal   Pulling    22m (x4 over 23m)     kubelet            Pulling image "549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest"
  Warning  Failed     22m (x4 over 23m)     kubelet            Failed to pull image "549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://549840312665.dkr.ecr.us-east-1.amazonaws.com/v2/docker-gs-ping/manifests/latest": no basic auth credentials
  Warning  Failed     22m (x4 over 23m)     kubelet            Error: ErrImagePull
  Warning  Failed     22m (x6 over 23m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m43s (x86 over 23m)  kubelet            Back-off pulling image "549840312665.dkr.ecr.us-east-1.amazonaws.com/docker-gs-ping:latest"
```
I've tried many of the solutions suggested on Stack Overflow, but nothing seems to work for me. Any suggestions on how I can get things working? Thank you very much in advance.
A few things. First, avoid using the `latest` tag; it is an anti-pattern. When you push an image to ECR, use a build label or version number as the image tag. Second, verify that your worker nodes have permission to pull images from ECR, specifically the AmazonEC2ContainerRegistryReadOnly managed policy; without it, the kubelet cannot pull images from ECR. If the registry lives in a different account than the cluster, you also need to create a repository (resource) policy. See https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-policies.html.
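(As a concrete sketch of the second point — the role name here is taken from the eks-stack.yaml above; adjust it if yours differs — the node role's attached policies can be checked and, if needed, the ECR read-only policy attached from the CLI:)

```bash
# List the managed policies attached to the worker-node role
aws iam list-attached-role-policies --role-name my.eks.node.role

# Attach the ECR read-only policy if it is missing from the list
aws iam attach-role-policy \
  --role-name my.eks.node.role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
```

Two further observations from the question itself. The `kubectl describe` output shows the pods scheduled on `docker-desktop`, not on an EKS worker node; a node outside EKS has no IAM role that lets it pull from ECR, so either switch kubectl to the EKS cluster's context or give the pod an `imagePullSecrets` entry created from `aws ecr get-login-password`. And once the pull issue is resolved, note that the Deployment and Service use port 80 while the container was run locally with `--publish 8080:8080`; if the Go app listens on 8080, `containerPort` and `targetPort` will likely need to match it.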