Kubernetes on AWS + kops
Prerequisites
- kops 1.9.0
- kubectl client v1.10.1, server v1.9.3
Installation
Install kops and kubectl by following this reference.
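For reference, a minimal install sketch for macOS. The download URLs are assumptions based on the usual release layout for these versions; follow the linked guide if they have moved.

# kops 1.9.0 binary for macOS (URL pattern assumed from the kops releases page)
curl -Lo kops https://github.com/kubernetes/kops/releases/download/1.9.0/kops-darwin-amd64
chmod +x kops && sudo mv kops /usr/local/bin/kops

# kubectl v1.10.1 client (URL pattern assumed from the kubernetes release bucket)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.1/bin/darwin/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

# verify the versions
kops version
kubectl version --client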
And then, as always, an error shows up…
Error
After building the cluster with kops (that is, after running kops update cluster --yes $NAME), you will see output like this:
...
Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.useast1.k8s.example.com
* the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
* read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.
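For context, this is roughly the command sequence that leads up to that output, assuming an S3 state store and the cluster name used in this post (the bucket name below is a placeholder):

export NAME=useast1.k8s.example.com
export KOPS_STATE_STORE=s3://my-kops-state-store   # placeholder bucket

# define the cluster in the state store, then actually create the AWS resources
kops create cluster --zones us-east-1a --node-count 2 ${NAME}
kops update cluster ${NAME} --yes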
Run the following command:
kops validate cluster
Output:
tkwon-macbook:~/projects/k8s ktg$ kops validate cluster
Using cluster from kubectl context: useast1.k8s.example.com
Validating cluster useast1.k8s.example.com
INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-east-1a  Master  m5.large     1    1    us-east-1a
nodes              Node    c5.large     2    2    us-east-1a
NODE STATUS
NAME ROLE READY
VALIDATION ERRORS
KIND  NAME       MESSAGE
dns   apiserver  Validation Failed
The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
Validation Failed
Not sure why, but it failed.
Let's try to SSH into the master node.
ssh -i ~/.ssh/id_rsa admin@api.useast1.k8s.example.com
The SSH connection did not go through…
Checking Route 53, the domain's record had not been updated to the master node's IP.
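This can also be confirmed from the command line; a quick sketch, assuming dig and the AWS CLI are available and ZONE_ID is a placeholder for the hosted zone kops uses:

# what the API name currently resolves to (still the 203.0.113.123 placeholder)
dig +short api.useast1.k8s.example.com

# the Route 53 record kops created for the API endpoint
aws route53 list-resource-record-sets \
  --hosted-zone-id ZONE_ID \
  --query "ResourceRecordSets[?Name=='api.useast1.k8s.example.com.']"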
Checking the EC2 dashboard, though, the instances are running normally.
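The master's public IP can also be pulled with the AWS CLI instead of the console; a sketch that assumes kops' usual Name tag for the master instance:

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=master-us-east-1a.masters.useast1.k8s.example.com" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text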
Let's try connecting over SSH with the instance's public IP instead.
ssh -i ~/.ssh/id_rsa admin@1.1.1.1
The connection succeeds.
sudo docker ps
Output:
protokube 1.9.0
A container running the protokube image is in the list.
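To watch what protokube is doing, you can tail its container logs directly; the timestamped lines shown below look like journald output for the docker unit, so journalctl is another way to see them. A sketch, assuming the image is tagged protokube:1.9.0 as in the docker ps output:

# find the running protokube container and follow its logs
sudo docker logs -f $(sudo docker ps -q --filter ancestor=protokube:1.9.0)

# or follow the docker unit's journal
sudo journalctl -u docker -f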
Checking the docker logs, the following error messages keep repeating:
...
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.659365 1576 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f1a04a36c6baaaae"
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.659373 1576 volume_mounter.go:107] Waiting for volume "vol-0f1a04a36c6baaaae" to be attached
Mar 30 22:30:39 ip-172-31-1-13 docker[1546]: I0330 22:30:39.659499 1576 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f1a04a36c6baaaae"
Mar 30 22:30:39 ip-172-31-1-13 docker[1546]: I0330 22:30:39.659519 1576 volume_mounter.go:107] Waiting for volume "vol-0f1a04a36c6baaaae" to be attached
Mar 30 22:30:40 ip-172-31-1-13 docker[1546]: I0330 22:30:40.659641 1576 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f1a04a36c6baaaae"
Mar 30 22:30:40 ip-172-31-1-13 docker[1546]: I0330 22:30:40.659660 1576 volume_mounter.go:107] Waiting for volume "vol-0f1a04a36c6baaaae" to be attached
...
The cause turned out to be explained in the link above: m5 and c5 are Nitro-based instance types that expose EBS volumes as NVMe devices, which this kops version and its default image do not handle, so protokube keeps waiting for the etcd volumes to attach and the master never finishes starting. I left a thumbs up and moved on to the fix.
$ kops edit ig master-us-east-1a
$ kops edit ig nodes
Run the commands above and change the instance types from m5/c5 to m4/c4.
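kops edit ig opens the instance group manifest in your editor; the field to change is machineType. A sketch of checking the spec and pushing the change (the rolling update that follows is the next command in this post):

# inspect the current spec; look for the machineType field
kops get ig master-us-east-1a -o yaml
kops get ig nodes -o yaml

# in the editor opened by "kops edit ig ...", change for example:
#   machineType: m5.large  ->  machineType: m4.large
#   machineType: c5.large  ->  machineType: c4.large

# push the new spec to AWS before rolling the instances
kops update cluster --yes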
kops rolling-update cluster --cloudonly --yes
Run it… and wait.
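Rather than guessing when the rolling update is done, you can poll validation in a loop; a small sketch, assuming kops validate cluster exits non-zero while the cluster is still unhealthy:

until kops validate cluster; do
  echo "not ready yet, retrying in 60s"
  sleep 60
done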
kops validate cluster
Output:
tkwon-macbook:~/projects/k8s ktg$ kops validate cluster
Using cluster from kubectl context: useast1.k8s.example.com
Validating cluster useast1.k8s.example.com
INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-east-1a  Master  m4.large     1    1    us-east-1a
nodes              Node    c4.large     2    2    us-east-1a
NODE STATUS
NAME                           ROLE    READY
ip-10-110-42-179.ec2.internal  node    True
ip-10-110-45-25.ec2.internal   master  True
ip-10-110-47-243.ec2.internal  node    True
Your cluster useast1.k8s.example.com is ready
The Kubernetes cluster is now ready to use!
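A few basic sanity checks to finish with:

kubectl get nodes -o wide
kubectl cluster-info
kubectl get pods -n kube-system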