Hello, Cloud / Trying the Kubernetes guestbook tutorial on Amazon EKS
Amazon EKS
A record of working through the Kubernetes guestbook tutorial on Amazon EKS. Fitting it into an existing environment was full of pitfalls around IAM and VPC, but once it was running it looked like it would be quite convenient to work with.
Update awscli first
An old awscli cannot operate eks, so update it before starting.
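After upgrading, it is worth confirming that the eks subcommand is actually available; an illustrative check, not part of the original run:
%sh
aws --version
# "aws eks help" exits non-zero on awscli builds that predate EKS support
aws eks help > /dev/null && echo "eks subcommand available"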
%sh pip install --upgrade awscli
Requirement already up-to-date: awscli in /opt/conda/lib/python2.7/site-packages
Requirement already up-to-date: s3transfer<0.2.0,>=0.1.12 in /opt/conda/lib/python2.7/site-packages (from awscli)
Requirement already up-to-date: botocore==1.12.18 in /opt/conda/lib/python2.7/site-packages (from awscli)
Requirement already up-to-date: PyYAML<=3.13,>=3.10 in /opt/conda/lib/python2.7/site-packages (from awscli)
Requirement already up-to-date: rsa<=3.5.0,>=3.1.2 in /opt/conda/lib/python2.7/site-packages (from awscli)
Requirement already up-to-date: colorama<=0.3.9,>=0.2.5 in /opt/conda/lib/python2.7/site-packages (from awscli)
Requirement already up-to-date: docutils>=0.10 in /opt/conda/lib/python2.7/site-packages (from awscli)
Requirement already up-to-date: futures<4.0.0,>=2.2.0; python_version == "2.6" or python_version == "2.7" in /opt/conda/lib/python2.7/site-packages (from s3transfer<0.2.0,>=0.1.12->awscli)
Requirement already up-to-date: jmespath<1.0.0,>=0.7.1 in /opt/conda/lib/python2.7/site-packages (from botocore==1.12.18->awscli)
Requirement already up-to-date: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /opt/conda/lib/python2.7/site-packages (from botocore==1.12.18->awscli)
Requirement already up-to-date: urllib3<1.24,>=1.20 in /opt/conda/lib/python2.7/site-packages (from botocore==1.12.18->awscli)
Collecting pyasn1>=0.1.3 (from rsa<=3.5.0,>=3.1.2->awscli)
Downloading https://files.pythonhosted.org/packages/d1/a1/7790cc85db38daa874f6a2e6308131b9953feb1367f2ae2d1123bb93a9f5/pyasn1-0.4.4-py2.py3-none-any.whl (72kB)
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore==1.12.18->awscli)
Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Installing collected packages: pyasn1, six
Found existing installation: pyasn1 0.1.9
DEPRECATION: Uninstalling a distutils installed project (pyasn1) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling pyasn1-0.1.9:
Successfully uninstalled pyasn1-0.1.9
Found existing installation: six 1.10.0
DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling six-1.10.0:
Successfully uninstalled six-1.10.0
Successfully installed pyasn1-0.4.4 six-1.11.0
You are using pip version 9.0.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Preparation (IAM Role)
Create an IAM Role for EKS ahead of time.
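The role in this walkthrough already existed; below it is only verified with get-role. For reference, a minimal sketch of creating an equivalent role (the trust-policy file name is illustrative; the two managed policy ARNs are the standard ones for the EKS cluster role):
%sh
cat << EOF > eks-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# Create the role with the trust policy above
aws iam create-role \
  --role-name hello-eks \
  --assume-role-policy-document file://eks-trust-policy.json
# Attach the managed policies the EKS control plane needs
aws iam attach-role-policy --role-name hello-eks \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name hello-eks \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy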
%sh aws iam get-role --role-name hello-eks
{
    "Role": {
        "Description": "Allows EKS to manage clusters on your behalf.",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "eks.amazonaws.com"
                    }
                }
            ]
        },
        "MaxSessionDuration": 3600,
        "RoleId": "AROAICYPXU3SFVEBZSBDQ",
        "CreateDate": "2018-09-30T10:43:59Z",
        "RoleName": "hello-eks",
        "Path": "/",
        "Arn": "arn:aws:iam::845933287843:role/hello-eks"
    }
}
Preparing the VPC
This time I carved subnets for EKS out of an existing VPC. Note that EKS needs at least two subnets to run.
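When reusing an existing VPC like this, the VPC and internet gateway IDs that the template below takes as parameters can be looked up as follows (the filter value is the VPC from this walkthrough):
%sh
aws ec2 describe-vpcs --query "Vpcs[].VpcId"
aws ec2 describe-internet-gateways \
  --filters Name=attachment.vpc-id,Values=vpc-fee0fb85 \
  --query "InternetGateways[].InternetGatewayId"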
%sh
cat <<EOF > ./hello-eks-vpc.yaml
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  Vpc:
    Type: String
    Default: vpc-fee0fb85
  InternetGateway:
    Type: String
    Default: igw-4739613f
Resources:
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref Vpc
      Tags:
        - Key: Name
          Value: public subnets for eks
  Route:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref RouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '0'
          - Fn::GetAZs: us-east-1
      CidrBlock: 10.0.10.0/24
      VpcId: !Ref Vpc
      Tags:
        - Key: Name
          Value: EKS-PublicSubnet1
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '1'
          - Fn::GetAZs: us-east-1
      CidrBlock: 10.0.11.0/24
      VpcId: !Ref Vpc
      Tags:
        - Key: Name
          Value: EKS-PublicSubnet2
  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref RouteTable
  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref RouteTable
  ControlPlaneSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster communication with worker nodes
      VpcId: vpc-fee0fb85
Outputs:
  SecurityGroup:
    Value: !Ref ControlPlaneSecurityGroup
EOF
aws cloudformation deploy \
  --stack-name hello-eks-vpc \
  --template-file ./hello-eks-vpc.yaml
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - hello-eks-vpc
Note down the security group and subnet IDs; they are needed when launching the cluster.
%sh aws cloudformation describe-stack-resources --stack-name hello-eks-vpc --query StackResources[].PhysicalResourceId
[
    "sg-0b9c339d18f49f74b",
    "subnet-0b23128d40de826f0",
    "rtbassoc-03894c3e5d0d2fd5b",
    "subnet-09f154e7175f9f396",
    "rtbassoc-01d609ab6a9c59582",
    "hello-Route-1UYG7N4KIZRKK",
    "rtb-0479468e156a1784d"
]
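Picking the IDs out of that flat list is error-prone; filtering by resource type is a bit friendlier (same command, standard JMESPath filters):
%sh
aws cloudformation describe-stack-resources --stack-name hello-eks-vpc \
  --query "StackResources[?ResourceType=='AWS::EC2::Subnet'].PhysicalResourceId"
aws cloudformation describe-stack-resources --stack-name hello-eks-vpc \
  --query "StackResources[?ResourceType=='AWS::EC2::SecurityGroup'].PhysicalResourceId"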
Launching the EKS cluster itself only requires the IAM role and the VPC configuration.
%sh
aws eks create-cluster \
  --name hello-eks \
  --role-arn arn:aws:iam::845933287843:role/hello-eks \
  --resources-vpc-config subnetIds=subnet-0b23128d40de826f0,subnet-09f154e7175f9f396,securityGroupIds=sg-0b9c339d18f49f74b
{
    "cluster": {
        "status": "CREATING",
        "name": "hello-eks",
        "certificateAuthority": {},
        "roleArn": "arn:aws:iam::845933287843:role/hello-eks",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-0b23128d40de826f0",
                "subnet-09f154e7175f9f396"
            ],
            "vpcId": "vpc-fee0fb85",
            "securityGroupIds": [
                "sg-0b9c339d18f49f74b"
            ]
        },
        "version": "1.10",
        "arn": "arn:aws:eks:us-east-1:845933287843:cluster/hello-eks",
        "platformVersion": "eks.2",
        "createdAt": 1538961762.219
    }
}
The cluster takes about 10 to 20 minutes to come up, so install kubectl and friends while waiting.
%sh
# kubectl
curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl
curl -o kubectl.sha256 https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl.sha256
cat kubectl.sha256
openssl sha -sha256 kubectl
chmod +x kubectl
cp kubectl /usr/local/bin/
# aws-iam-authenticator: referenced from the kubeconfig
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator
curl -o aws-iam-authenticator.sha256 https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator.sha256
cat aws-iam-authenticator.sha256
openssl sha -sha256 aws-iam-authenticator
chmod +x ./aws-iam-authenticator
cp aws-iam-authenticator /usr/local/bin/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 51.6M  100 51.6M    0     0  4794k      0  0:00:11  0:00:11 --:--:-- 5387k
100    73  100    73    0     0    133      0 --:--:-- --:--:-- --:--:--   132
a624d08f7cae5e64aa73686c3b0fe7953a0733e20c4333b01635d1351fabfa2f kubectl
SHA256(kubectl)= a624d08f7cae5e64aa73686c3b0fe7953a0733e20c4333b01635d1351fabfa2f
100 25.1M  100 25.1M    0     0  5338k      0  0:00:04  0:00:04 --:--:-- 5568k
100    87  100    87    0     0    117      0 --:--:-- --:--:-- --:--:--   117
246f6d13b051bbfb12962edca074c8f67436930e84b2bec3a45a5d9242dc6f0c aws-iam-authenticator
SHA256(aws-iam-authenticator)= 246f6d13b051bbfb12962edca074c8f67436930e84b2bec3a45a5d9242dc6f0c
%sh aws eks describe-cluster --name hello-eks --query cluster.status
"ACTIVE"
Once CREATING has turned into ACTIVE, it is ready to serve.
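Instead of re-running describe-cluster by hand, a small polling loop does the job (a sketch; the 30-second interval is arbitrary):
%sh
until [ "$(aws eks describe-cluster --name hello-eks --query cluster.status --output text)" = "ACTIVE" ]; do
  sleep 30  # still CREATING; check again shortly
done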
%sh aws eks describe-cluster --name hello-eks --query cluster.endpoint
"https://948358B29DBCDE3450C18739761A7828.yl4.us-east-1.eks.amazonaws.com"
Configuring kubectl
kubectl needs credentials for the EKS cluster. Until recently you apparently had to write the kubeconfig by hand, but awscli now ships an update-kubeconfig subcommand, which generates .kube/config for you.
%sh aws eks update-kubeconfig --name hello-eks
Updated context arn:aws:eks:us-east-1:845933287843:cluster/hello-eks in /root/.kube/config
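For reference, the generated entry authenticates through aws-iam-authenticator via an exec plugin. An abridged sketch of the user section that update-kubeconfig writes (reconstructed from the awscli of this era, not copied from this run):
users:
- name: arn:aws:eks:us-east-1:845933287843:cluster/hello-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - hello-eks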
%sh kubectl version --short
Client Version: v1.10.3
Server Version: v1.10.3-eks
If the Server Version carries the eks suffix, the setup is complete.
%sh kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   2m
The cluster is recognized as well.
Launching worker nodes
Launch the worker instances. As with the cluster, you describe the VPC settings, along with the auto scaling configuration and the instance type and image to use.
%sh aws eks describe-cluster --name hello-eks --query cluster.resourcesVpcConfig
{
    "subnetIds": [
        "subnet-0b23128d40de826f0",
        "subnet-09f154e7175f9f396"
    ],
    "vpcId": "vpc-fee0fb85",
    "securityGroupIds": [
        "sg-0b9c339d18f49f74b"
    ]
}
%sh
aws cloudformation create-stack \
  --stack-name hello-eks-nodegroup \
  --template-body https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml \
  --parameters \
    ParameterKey=ClusterName,ParameterValue=hello-eks \
    ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-0b9c339d18f49f74b \
    ParameterKey=NodeGroupName,ParameterValue=hello-eks-nodegroup \
    ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
    ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=2 \
    ParameterKey=NodeInstanceType,ParameterValue=t2.medium \
    ParameterKey=NodeImageId,ParameterValue=ami-0440e4f6b9713faf6 \
    ParameterKey=KeyName,ParameterValue=gateway \
    ParameterKey=VpcId,ParameterValue=vpc-fee0fb85 \
    ParameterKey=Subnets,ParameterValue='subnet-0b23128d40de826f0\,subnet-09f154e7175f9f396' \
  --capabilities CAPABILITY_IAM
{
    "StackId": "arn:aws:cloudformation:us-east-1:845933287843:stack/hello-eks-nodegroup/cb1ffbe0-ca9c-11e8-b555-500c28604cae"
}
Bringing up the node instances also quietly takes a while, so nod along while skimming the documentation.
%sh
aws cloudformation wait stack-create-complete --stack-name hello-eks-nodegroup
aws cloudformation describe-stacks --stack-name hello-eks-nodegroup --query Stacks[0].StackStatus
"CREATE_COMPLETE"
Once it reaches CREATE_COMPLETE, create the configmap.
%sh aws cloudformation describe-stacks --stack-name hello-eks-nodegroup --query Stacks[0].Outputs
[
    {
        "Description": "The node instance role",
        "OutputKey": "NodeInstanceRole",
        "OutputValue": "arn:aws:iam::845933287843:role/hello-eks-nodegroup-NodeInstanceRole-1RUXQ4KQDNVF0"
    }
]
%sh
cat << EOF > configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::845933287843:role/hello-eks-nodegroup-NodeInstanceRole-1RUXQ4KQDNVF0
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF
kubectl apply -f configmap.yaml
configmap "aws-auth" configured
Once that is applied, try running kubectl get nodes.
%sh kubectl get nodes
NAME                        STATUS     ROLES     AGE       VERSION
ip-10-0-10-6.ec2.internal   NotReady   <none>    5s        v1.10.3
ip-10-0-11-6.ec2.internal   NotReady   <none>    4s        v1.10.3
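The nodes start out NotReady and should flip to Ready within a minute or so; you can watch the transition instead of polling:
%sh kubectl get nodes --watch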
Running the guestbook
Start redis first.
%sh
cat << EOF > redis-master-deployment.json
{
  "kind":"ReplicationController",
  "apiVersion":"v1",
  "metadata":{
    "name":"redis-master",
    "labels":{
      "app":"redis",
      "role":"master"
    }
  },
  "spec":{
    "replicas":1,
    "selector":{
      "app":"redis",
      "role":"master"
    },
    "template":{
      "metadata":{
        "labels":{
          "app":"redis",
          "role":"master"
        }
      },
      "spec":{
        "containers":[
          {
            "name":"redis-master",
            "image":"redis:2.8.23",
            "ports":[
              {
                "name":"redis-server",
                "containerPort":6379
              }
            ]
          }
        ]
      }
    }
  }
}
EOF
kubectl apply -f redis-master-deployment.json
replicationcontroller "redis-master" created
%sh kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-d4gdq   1/1       Running   0          15s
The pod is up. With kubectl exec you can run commands inside a pod.
%sh kubectl exec redis-master-d4gdq whoami
root
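As another illustrative check (not part of the original run), you could ask redis itself for a sign of life; it should answer PONG:
%sh kubectl exec redis-master-d4gdq -- redis-cli ping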
%sh
cat << EOF > redis-master-service.json
{
  "kind":"Service",
  "apiVersion":"v1",
  "metadata":{
    "name":"redis-master",
    "labels":{
      "app":"redis",
      "role":"master"
    }
  },
  "spec":{
    "ports": [
      {
        "port":6379,
        "targetPort":"redis-server"
      }
    ],
    "selector":{
      "app":"redis",
      "role":"master"
    }
  }
}
EOF
kubectl apply -f redis-master-service.json
service "redis-master" created
%sh kubectl get services
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   172.20.0.1     <none>        443/TCP    29m
redis-master   ClusterIP   172.20.70.52   <none>        6379/TCP   9s
Next, bring up the redis slaves.
%sh
cat << EOF > redis-slave-deployment.json
{
  "kind":"ReplicationController",
  "apiVersion":"v1",
  "metadata":{
    "name":"redis-slave",
    "labels":{
      "app":"redis",
      "role":"slave"
    }
  },
  "spec":{
    "replicas":2,
    "selector":{
      "app":"redis",
      "role":"slave"
    },
    "template":{
      "metadata":{
        "labels":{
          "app":"redis",
          "role":"slave"
        }
      },
      "spec":{
        "containers":[
          {
            "name":"redis-slave",
            "image":"kubernetes/redis-slave:v2",
            "ports":[
              {
                "name":"redis-server",
                "containerPort":6379
              }
            ]
          }
        ]
      }
    }
  }
}
EOF
kubectl apply -f redis-slave-deployment.json
replicationcontroller "redis-slave" created
%sh kubectl get pods
NAME                 READY     STATUS              RESTARTS   AGE
redis-master-d4gdq   1/1       Running             0          1m
redis-slave-ftswv    0/1       ContainerCreating   0          8s
redis-slave-tkls5    0/1       ContainerCreating   0          8s
%sh
cat << EOF > redis-slave-service.json
{
  "kind":"Service",
  "apiVersion":"v1",
  "metadata":{
    "name":"redis-slave",
    "labels":{
      "app":"redis",
      "role":"slave"
    }
  },
  "spec":{
    "ports": [
      {
        "port":6379,
        "targetPort":"redis-server"
      }
    ],
    "selector":{
      "app":"redis",
      "role":"slave"
    }
  }
}
EOF
kubectl apply -f redis-slave-service.json
service "redis-slave" created
%sh kubectl get services
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   172.20.0.1       <none>        443/TCP    31m
redis-master   ClusterIP   172.20.70.52     <none>        6379/TCP   2m
redis-slave    ClusterIP   172.20.213.112   <none>        6379/TCP   9s
Bring up the web application the same way as redis.
%sh
cat <<EOF > frontend-controller.json
{
  "kind":"ReplicationController",
  "apiVersion":"v1",
  "metadata":{
    "name":"guestbook",
    "labels":{
      "app":"guestbook"
    }
  },
  "spec":{
    "replicas":3,
    "selector":{
      "app":"guestbook"
    },
    "template":{
      "metadata":{
        "labels":{
          "app":"guestbook"
        }
      },
      "spec":{
        "containers":[
          {
            "name":"guestbook",
            "image":"k8s.gcr.io/guestbook:v3",
            "ports":[
              {
                "name":"http-server",
                "containerPort":3000
              }
            ]
          }
        ]
      }
    }
  }
}
EOF
kubectl apply -f frontend-controller.json
replicationcontroller "guestbook" created
%sh
cat << EOF > frontend-service.json
{
  "kind":"Service",
  "apiVersion":"v1",
  "metadata":{
    "name":"guestbook",
    "labels":{
      "app":"guestbook"
    }
  },
  "spec":{
    "ports": [
      {
        "port":3000,
        "targetPort":"http-server"
      }
    ],
    "selector":{
      "app":"guestbook"
    },
    "type": "LoadBalancer"
  }
}
EOF
kubectl apply -f frontend-service.json
service "guestbook" created
%sh kubectl get services
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)          AGE
guestbook      LoadBalancer   172.20.223.78    a894ee3c7ca9e...  3000:30068/TCP   2m
kubernetes     ClusterIP      172.20.0.1       <none>            443/TCP          36m
redis-master   ClusterIP      172.20.70.52     <none>            6379/TCP         7m
redis-slave    ClusterIP      172.20.213.112   <none>            6379/TCP         5m
When the EXTERNAL-IP column is truncated, pass -o wide to kubectl get services.
%sh kubectl get services -o wide
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE       SELECTOR
guestbook      LoadBalancer   172.20.223.78    a894ee3c7ca9e11e8a49b020fd078860-812237424.us-east-1.elb.amazonaws.com   3000:30068/TCP   2m        app=guestbook
kubernetes     ClusterIP      172.20.0.1       <none>                                                                    443/TCP          36m       <none>
redis-master   ClusterIP      172.20.70.52     <none>                                                                    6379/TCP         7m        app=redis,role=master
redis-slave    ClusterIP      172.20.213.112   <none>                                                                    6379/TCP         5m        app=redis,role=slave
Let's connect (note that it takes a little while for the load balancer configuration to propagate).
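To script the wait, the hostname can be pulled out with jsonpath and polled until the ELB starts answering (a sketch; the retry interval is arbitrary):
%sh
LB=$(kubectl get service guestbook -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Retry until DNS propagation and ELB target registration catch up
until curl -sf "http://${LB}:3000/" > /dev/null; do
  sleep 10
done
echo "guestbook is reachable at http://${LB}:3000/"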
%sh curl -s http://a894ee3c7ca9e11e8a49b020fd078860-812237424.us-east-1.elb.amazonaws.com:3000
<!DOCTYPE html>
<html lang="en">
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<meta charset="utf-8">
<meta content="width=device-width" name="viewport">
<link href="style.css" rel="stylesheet">
<title>Guestbook</title>
</head>
<body>
<div id="header">
<h1>Guestbook</h1>
</div>
<div id="guestbook-entries">
<p>Waiting for database connection...</p>
</div>
<div>
<form id="guestbook-form">
<input autocomplete="off" id="guestbook-entry-content" type="text">
<a href="#" id="guestbook-submit">Submit</a>
</form>
</div>
<div>
<p><h2 id="guestbook-host-address"></h2></p>
<p><a href="env">/env</a>
<a href="info">/info</a></p>
</div>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="script.js"></script>
</body>
</html>
Where I got stuck:
- I had the wrong CloudFormation template URL when creating the nodegroup
  - Be careful… :-|
  - It worked once I used https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml
- The Service's EXTERNAL-IP stayed PENDING and never changed
  - The cause seems to have been that I had assigned only one public subnet
Bonus: diff of the CloudFormation (nodegroup) templates
%sh
diff <(curl -s https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml) <(curl -s https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml)
echo
3c3
< Description: 'Amazon EKS - Node Group'
---
> Description: 'Amazon EKS - Node Group - Released 2018-08-30'
75c75
< ConstraintDescription: must be a valid EC2 instance type
---
> ConstraintDescription: Must be a valid EC2 instance type
86a87,91
> NodeVolumeSize:
> Type: Number
> Description: Node volume size
> Default: 20
>
88c93,98
< Description: The cluster name provided when the cluster was created. If it is incorrect, nodes will not be able to join the cluster.
---
> Description: The cluster name provided when the cluster was created. If it is incorrect, nodes will not be able to join the cluster.
> Type: String
>
> BootstrapArguments:
> Description: Arguments to pass to the bootstrap script. See files/bootstrap.sh in https://github.com/awslabs/amazon-eks-ami
> Default: ""
107,219d116
< Mappings:
< MaxPodsPerNode:
< c4.large:
< MaxPods: 29
< c4.xlarge:
< MaxPods: 58
< c4.2xlarge:
< MaxPods: 58
< c4.4xlarge:
< MaxPods: 234
< c4.8xlarge:
< MaxPods: 234
< c5.large:
< MaxPods: 29
< c5.xlarge:
< MaxPods: 58
< c5.2xlarge:
< MaxPods: 58
< c5.4xlarge:
< MaxPods: 234
< c5.9xlarge:
< MaxPods: 234
< c5.18xlarge:
< MaxPods: 737
< i3.large:
< MaxPods: 29
< i3.xlarge:
< MaxPods: 58
< i3.2xlarge:
< MaxPods: 58
< i3.4xlarge:
< MaxPods: 234
< i3.8xlarge:
< MaxPods: 234
< i3.16xlarge:
< MaxPods: 737
< m3.medium:
< MaxPods: 12
< m3.large:
< MaxPods: 29
< m3.xlarge:
< MaxPods: 58
< m3.2xlarge:
< MaxPods: 118
< m4.large:
< MaxPods: 20
< m4.xlarge:
< MaxPods: 58
< m4.2xlarge:
< MaxPods: 58
< m4.4xlarge:
< MaxPods: 234
< m4.10xlarge:
< MaxPods: 234
< m5.large:
< MaxPods: 29
< m5.xlarge:
< MaxPods: 58
< m5.2xlarge:
< MaxPods: 58
< m5.4xlarge:
< MaxPods: 234
< m5.12xlarge:
< MaxPods: 234
< m5.24xlarge:
< MaxPods: 737
< p2.xlarge:
< MaxPods: 58
< p2.8xlarge:
< MaxPods: 234
< p2.16xlarge:
< MaxPods: 234
< p3.2xlarge:
< MaxPods: 58
< p3.8xlarge:
< MaxPods: 234
< p3.16xlarge:
< MaxPods: 234
< r3.xlarge:
< MaxPods: 58
< r3.2xlarge:
< MaxPods: 58
< r3.4xlarge:
< MaxPods: 234
< r3.8xlarge:
< MaxPods: 234
< r4.large:
< MaxPods: 29
< r4.xlarge:
< MaxPods: 58
< r4.2xlarge:
< MaxPods: 58
< r4.4xlarge:
< MaxPods: 234
< r4.8xlarge:
< MaxPods: 234
< r4.16xlarge:
< MaxPods: 737
< t2.small:
< MaxPods: 8
< t2.medium:
< MaxPods: 17
< t2.large:
< MaxPods: 35
< t2.xlarge:
< MaxPods: 44
< t2.2xlarge:
< MaxPods: 44
< x1.16xlarge:
< MaxPods: 234
< x1.32xlarge:
< MaxPods: 234
<
237a135
> - NodeVolumeSize
238a137
> - BootstrapArguments
315a215,236
> NodeSecurityGroupFromControlPlaneOn443Ingress:
> Type: AWS::EC2::SecurityGroupIngress
> DependsOn: NodeSecurityGroup
> Properties:
> Description: Allow pods running extension API servers on port 443 to receive communication from cluster control plane
> GroupId: !Ref NodeSecurityGroup
> SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
> IpProtocol: tcp
> FromPort: 443
> ToPort: 443
>
> ControlPlaneEgressToNodeSecurityGroupOn443:
> Type: AWS::EC2::SecurityGroupEgress
> DependsOn: NodeSecurityGroup
> Properties:
> Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
> GroupId: !Ref ClusterControlPlaneSecurityGroup
> DestinationSecurityGroupId: !Ref NodeSecurityGroup
> IpProtocol: tcp
> FromPort: 443
> ToPort: 443
>
357a279,284
> BlockDeviceMappings:
> - DeviceName: /dev/xvda
> Ebs:
> VolumeSize: !Ref NodeVolumeSize
> VolumeType: gp2
> DeleteOnTermination: true
360,394c287,294
< Fn::Join: [
< "",
< [
< "#!/bin/bash -xe\n",
< "CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki", "\n",
< "CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt", "\n",
< "MODEL_DIRECTORY_PATH=~/.aws/eks", "\n",
< "MODEL_FILE_PATH=$MODEL_DIRECTORY_PATH/eks-2017-11-01.normal.json", "\n",
< "mkdir -p $CA_CERTIFICATE_DIRECTORY", "\n",
< "mkdir -p $MODEL_DIRECTORY_PATH", "\n",
< "curl -o $MODEL_FILE_PATH https://s3-us-west-2.amazonaws.com/amazon-eks/1.10.3/2018-06-05/eks-2017-11-01.normal.json", "\n",
< "aws configure add-model --service-model file://$MODEL_FILE_PATH --service-name eks", "\n",
< "aws eks describe-cluster --region=", { Ref: "AWS::Region" }," --name=", { Ref: ClusterName }," --query 'cluster.{certificateAuthorityData: certificateAuthority.data, endpoint: endpoint}' > /tmp/describe_cluster_result.json", "\n",
< "cat /tmp/describe_cluster_result.json | grep certificateAuthorityData | awk '{print $2}' | sed 's/[,\"]//g' | base64 -d > $CA_CERTIFICATE_FILE_PATH", "\n",
< "MASTER_ENDPOINT=$(cat /tmp/describe_cluster_result.json | grep endpoint | awk '{print $2}' | sed 's/[,\"]//g')", "\n",
< "INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)", "\n",
< "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /var/lib/kubelet/kubeconfig", "\n",
< "sed -i s,CLUSTER_NAME,", { Ref: ClusterName }, ",g /var/lib/kubelet/kubeconfig", "\n",
< "sed -i s,REGION,", { Ref: "AWS::Region" }, ",g /etc/systemd/system/kubelet.service", "\n",
< "sed -i s,MAX_PODS,", { "Fn::FindInMap": [ MaxPodsPerNode, { Ref: NodeInstanceType }, MaxPods ] }, ",g /etc/systemd/system/kubelet.service", "\n",
< "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /etc/systemd/system/kubelet.service", "\n",
< "sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service", "\n",
< "DNS_CLUSTER_IP=10.100.0.10", "\n",
< "if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi", "\n",
< "sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service", "\n",
< "sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig" , "\n",
< "sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g /etc/systemd/system/kubelet.service" , "\n",
< "systemctl daemon-reload", "\n",
< "systemctl restart kubelet", "\n",
< "/opt/aws/bin/cfn-signal -e $? ",
< " --stack ", { Ref: "AWS::StackName" },
< " --resource NodeGroup ",
< " --region ", { Ref: "AWS::Region" }, "\n"
< ]
< ]
---
> !Sub |
> #!/bin/bash
> set -o xtrace
> /etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
> /opt/aws/bin/cfn-signal --exit-code $? \
> --stack ${AWS::StackName} \
> --resource NodeGroup \
> --region ${AWS::Region}
400d299
<
The initialization part appears to be what changed: the new template hands node setup off to /etc/eks/bootstrap.sh baked into the AMI instead of inlining it.
Cleaning up
Clean up when you are done.
%sh
aws cloudformation delete-stack --stack-name hello-eks-nodegroup
aws cloudformation delete-stack --stack-name hello-eks-vpc
aws eks delete-cluster --name hello-eks --query cluster.status
"DELETING"
// Aside: I feel like I still don't quite get the difference between a ReplicationController and a Deployment :-|
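For what it's worth: a ReplicationController only keeps N replicas alive, while a Deployment manages ReplicaSets (the RC's successor) and adds rolling updates and rollback on top. The frontend controller above rewritten as a Deployment would look roughly like this (a sketch, not applied in this walkthrough):
%sh
cat << EOF > frontend-deployment.json
{
  "kind":"Deployment",
  "apiVersion":"apps/v1",
  "metadata":{
    "name":"guestbook",
    "labels":{ "app":"guestbook" }
  },
  "spec":{
    "replicas":3,
    "selector":{ "matchLabels":{ "app":"guestbook" } },
    "template":{
      "metadata":{ "labels":{ "app":"guestbook" } },
      "spec":{
        "containers":[
          {
            "name":"guestbook",
            "image":"k8s.gcr.io/guestbook:v3",
            "ports":[ { "name":"http-server", "containerPort":3000 } ]
          }
        ]
      }
    }
  }
}
EOF
kubectl apply -f frontend-deployment.json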