Scheduling on Fargate
So why isn't the checkout service already running on Fargate? Let's check its labels:
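One way to check, assuming the checkout namespace used throughout this lab:

# Display the checkout Pods along with all of their labels
kubectl get pods -n checkout --show-labels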
Looks like our Pod is missing the label fargate=yes, so let's fix that by updating the Deployment for this service so its Pod spec includes the label the profile needs to schedule it on Fargate.
The Kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  template:
    metadata:
      labels:
        fargate: "yes"
The complete Deployment/checkout manifest after the patch is applied:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/created-by: eks-workshop
    app.kubernetes.io/type: app
  name: checkout
  namespace: checkout
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: service
      app.kubernetes.io/instance: checkout
      app.kubernetes.io/name: checkout
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/component: service
        app.kubernetes.io/created-by: eks-workshop
        app.kubernetes.io/instance: checkout
        app.kubernetes.io/name: checkout
        fargate: "yes"
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: checkout
          image: public.ecr.aws/aws-containers/retail-store-sample-checkout:1.2.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 3
          name: checkout
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
          securityContext:
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
      securityContext:
        fsGroup: 1000
      serviceAccountName: checkout
      volumes:
        - emptyDir:
            medium: Memory
          name: tmp-volume
The relevant section of the diff:

         app.kubernetes.io/component: service
         app.kubernetes.io/created-by: eks-workshop
         app.kubernetes.io/instance: checkout
         app.kubernetes.io/name: checkout
+        fargate: "yes"
     spec:
       containers:
         - envFrom:
             - configMapRef:
Apply the kustomization to the cluster:
[...]
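For reference, applying a kustomization is typically done with kubectl apply -k; the directory below is an assumption standing in for the module path used in this lab:

# The path here is hypothetical; substitute the module directory from your environment
kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/fargate/scheduling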
This will update the Pod specification for the checkout service and trigger a new deployment, replacing all of its Pods. As the new Pods are scheduled, the Fargate scheduler will match the new label applied by the kustomization against our target profile and intervene to ensure the Pods are scheduled on capacity managed by Fargate.
How can we confirm that it worked? Describe the new Pod that's been created and take a look at the Events section:
[...]
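One way to do this, assuming the Pods carry the app.kubernetes.io/name=checkout label from the manifest above:

# Describe the checkout Pods by label selector rather than by generated name
kubectl describe pod -n checkout -l app.kubernetes.io/name=checkout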
Events:
  Type     Reason           Age    From               Message
  ----     ------           ----   ----               -------
  Warning  LoggingDisabled  10m    fargate-scheduler  Disabled logging because aws-logging configmap was not found. configmap "aws-logging" not found
  Normal   Scheduled        9m48s  fargate-scheduler  Successfully assigned checkout/checkout-78fbb666b-fftl5 to fargate-ip-10-42-11-96.us-west-2.compute.internal
  Normal   Pulling          9m48s  kubelet            Pulling image "public.ecr.aws/aws-containers/retail-store-sample-checkout:1.2.1"
  Normal   Pulled           9m5s   kubelet            Successfully pulled image "public.ecr.aws/aws-containers/retail-store-sample-checkout:1.2.1" in 43.258137629s
  Normal   Created          9m5s   kubelet            Created container checkout
  Normal   Started          9m4s   kubelet            Started container checkout
The events from fargate-scheduler give us some insight into what has happened. The entry we're mainly interested in at this stage of the lab is the event with the reason Scheduled. Inspecting it closely gives us the name of the Fargate instance that was provisioned for this Pod, which in the example above is fargate-ip-10-42-11-96.us-west-2.compute.internal.
We can inspect this node from kubectl to get additional information about the compute that was provisioned for this Pod:
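For example, using the node name from the Scheduled event:

kubectl describe node fargate-ip-10-42-11-96.us-west-2.compute.internal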
Name:               fargate-ip-10-42-11-96.us-west-2.compute.internal
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    eks.amazonaws.com/compute-type=fargate
                    failure-domain.beta.kubernetes.io/region=us-west-2
                    failure-domain.beta.kubernetes.io/zone=us-west-2b
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-10-42-11-96.us-west-2.compute.internal
                    kubernetes.io/os=linux
                    topology.kubernetes.io/region=us-west-2
                    topology.kubernetes.io/zone=us-west-2b
[...]
This provides us with a number of insights into the nature of the underlying compute instance:
- The label eks.amazonaws.com/compute-type confirms that a Fargate instance was provisioned (we'll use this label in the selector below)
- Another label, topology.kubernetes.io/zone, specifies the availability zone that the Pod is running in
- In the System Info section (not shown above) we can see that the instance is running Amazon Linux 2, as well as the version information for system components like containerd, kubelet and kube-proxy
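Building on these labels, we can list every Fargate-backed node in the cluster with a label selector (a quick sketch using standard kubectl flags):

# Select Fargate nodes and surface their availability zone as an extra column
kubectl get nodes -l eks.amazonaws.com/compute-type=fargate -L topology.kubernetes.io/zone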