Running Serverless Workloads on AWS Using Fargate

Even though the managed Kubernetes service on AWS takes much of the cluster-management burden off your hands, you might often find yourself held back by the overhead of managing the compute instances for your Kubernetes cluster.

In this article, you will learn about the Fargate compute engine on AWS and how it can be used to reduce your container costs. You will then follow a hands-on demonstration that uses Fargate to run the pods for a containerized React.js application.

Prerequisites

This article contains a hands-on demonstration involving Fargate and the Elastic Kubernetes Service (EKS) on AWS. To follow along, it is recommended that you have the following:

  • An active AWS account.
  • The AWS CLI and eksctl installed and configured on your computer.
  • A basic understanding of Kubernetes, although the Kubernetes resources used within the article are explained.

Introduction

Running an application in a serverless manner means leveraging a third-party service to manage the underlying infrastructure for your application while you focus on building the application itself. While the underlying infrastructure still exists, you are not responsible for it. This is the same approach that AWS Fargate applies to manage the compute infrastructure for your Kubernetes clusters, although a third party is not introduced in this scenario.

Fargate is a serverless, pay-as-you-go compute engine on AWS that manages the compute infrastructure for your containers. Originally built for the Elastic Container Service (ECS), Fargate now also supports the Elastic Kubernetes Service (EKS), and this article focuses on its use with EKS. As shown in the diagram below, when using Fargate you don’t have to manage or pay for the EC2 instances running your containers.

Fargate applied in a container architecture

A scenario where you will find Fargate beneficial is running web applications. A web application that experiences sudden peaks in traffic can rely on Fargate to automatically provision the right compute capacity when needed and scale back down afterward, avoiding underutilized instances.

Operations of Fargate

A Fargate profile is used to determine which pods within a cluster run on Fargate. It describes, through namespaces and selectors, the kinds of pods that qualify to run on Fargate. Using Fargate profiles, you can run a mixed cluster environment consisting of Fargate nodes and normal EKS worker nodes: pods that don’t match any Fargate profile are assigned to the worker nodes.
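The matching rule is straightforward: a pod is scheduled onto Fargate when its namespace, and all the labels a selector specifies, match one of the profile’s selectors. The sketch below models that rule in Python; it is a simplified illustration of the documented behavior, not the actual EKS scheduler code.

```python
# Simplified model of Fargate profile matching (illustration only,
# not the real EKS scheduler implementation).
def matches_profile(pod_namespace, pod_labels, selectors):
    """A pod matches if any selector's namespace equals the pod's
    namespace and the selector's labels (if any) are all present
    on the pod with the same values."""
    for selector in selectors:
        namespace_ok = selector["namespace"] == pod_namespace
        labels_ok = all(pod_labels.get(key) == value
                        for key, value in selector.get("labels", {}).items())
        if namespace_ok and labels_ok:
            return True
    return False

# A hypothetical profile selecting labeled pods in the "default" namespace:
selectors = [{"namespace": "default", "labels": {"runtime": "fargate"}}]

print(matches_profile("default", {"runtime": "fargate"}, selectors))  # True
print(matches_profile("default", {}, selectors))  # False -> worker nodes
```

A pod that fails every selector, as in the second call, is left for the regular worker nodes.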

Also, to access the workloads running within Fargate pods, you can use either an Application Load Balancer or a Network Load Balancer, both of which target the pods via their IP addresses. In the demonstration in this article, you will create a Network Load Balancer to provide an endpoint for accessing a workload.

Fargate can also lower your AWS bill when running a Kubernetes cluster, as you do not pay for the EC2 instances that would otherwise run your workloads. Let’s generate an estimate of the cost incurred when running 3 Kubernetes pods in a cluster with Fargate support.

Operating Expenditure Involved in AWS Fargate

Within the context of cloud engineering, Operating Expenditure (OpEx) refers to costs incurred while using services from a cloud service provider. As we have learned, a major benefit of using Fargate to manage your cluster’s compute capacity is that you are not billed for EC2 instances.

Through the AWS Pricing Calculator, you can estimate the cost of using a particular service. Let’s use it to calculate the OpEx incurred for the EKS control plane and for Fargate compute.

  • On your web browser, open the AWS Pricing Calculator and click the Create Estimate button.

    On the Select Services page that follows, type EKS into the search bar below the AWS Services text. This will limit the services shown to container-related services only.

Click the **Configure** button within the **Amazon EKS** card to navigate to the next page where you will create an estimate of the cost involved in using the EKS.

Searching for EKS within Pricing Calculator

From the next estimation page, provide a description for the estimate in the Description input field, and leave the number of clusters at its default of 1.

Total monthly cost for using EKS

As highlighted in the image above, the estimated cost for running a single cluster on the EKS is placed at a flat fee of $73.
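That flat fee follows from the EKS control-plane rate of $0.10 per cluster per hour, applied over the calculator’s 730-hour month. The rate is the published price at the time of writing and may change; check the EKS pricing page for current values. A quick sanity check of the arithmetic:

```python
# EKS control-plane cost sanity check.
# Assumed rate: $0.10 per cluster per hour (subject to change).
HOURLY_RATE = 0.10
HOURS_PER_MONTH = 730  # the Pricing Calculator's monthly-hours convention

monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
print(f"EKS control plane: ${monthly_cost:.2f}/month")  # $73.00/month
```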

Let’s proceed to estimate the cost for running 3 pods using Fargate.

  • Navigate back to the Select Services page of the pricing calculator and click the Configure button within the AWS Fargate card.

Selecting AWS Fargate service

At the next estimation page, configure the input fields to have the following values:

  • Enter 3 and select per month in the Number of tasks or pods input fields.
  • Enter 12 and select hours in the Average duration input fields.
  • Select 4 in the Amount of vCPU allocated input field.
  • Select 10 in the Amount of memory allocated input field.

Total monthly cost for using AWS Fargate

From the calculation generated, you can see that the Fargate bill is much more flexible than the flat EKS fee: it is based directly on the time the specified number of pods (referred to as “tasks” for ECS) spend running, and on the vCPU and memory resources allocated to them.

It is also important to note that the first 20 GB of ephemeral storage consumed is not billed. If the ephemeral storage consumed exceeds 20 GB, only the excess over 20 GB appears on your bill.
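To see how such a bill is put together, the sketch below multiplies pod-hours by per-vCPU and per-GB rates and bills only the ephemeral storage beyond the free 20 GB. The rates used are the published us-east-1 Linux/x86 Fargate prices at the time of writing and should be treated as assumptions; verify them against the Fargate pricing page before relying on any estimate.

```python
# Rough Fargate cost model (assumed us-east-1 Linux/x86 rates; verify
# against the current AWS Fargate pricing page).
VCPU_RATE = 0.04048      # USD per vCPU per hour
MEMORY_RATE = 0.004445   # USD per GB of memory per hour
STORAGE_RATE = 0.000111  # USD per GB of ephemeral storage per hour
FREE_STORAGE_GB = 20     # first 20 GB of ephemeral storage is free

def monthly_fargate_cost(pods, hours_per_day, vcpus, memory_gb,
                         storage_gb=FREE_STORAGE_GB, days=30):
    pod_hours = pods * hours_per_day * days
    # Only storage beyond the free 20 GB is billed.
    billable_storage = max(0, storage_gb - FREE_STORAGE_GB)
    hourly = (vcpus * VCPU_RATE
              + memory_gb * MEMORY_RATE
              + billable_storage * STORAGE_RATE)
    return pod_hours * hourly

# The estimate from the calculator walkthrough: 3 pods, 12 hours/day,
# 4 vCPU and 10 GB of memory each, no extra ephemeral storage.
print(f"${monthly_fargate_cost(3, 12, 4, 10):.2f}/month")  # $222.88/month
```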

Building a demo Kubernetes workload

In the past sections, you have learned about Fargate. Let’s proceed to apply the knowledge gained by deploying a demo workload to AWS Fargate.

To make it easier to navigate, this section has been broken down into subsections with numbered steps.

Pushing a Docker image to the Elastic Container Registry (ECR)

In this section, you will run through the steps outlined below to create and push a Docker image to the Elastic Container Registry service on AWS.

  1. Execute the command below to create a new React.js application using the create-react-app CLI. The application created will serve as the web application to be deployed as a workload within the Kubernetes cluster.

    ```
    npx create-react-app fargate-react
    ```

    After creating the application, execute the two commands below to change into the new fargate-react directory and start the application.

    ```
    # change directory
    cd fargate-react

    # start react application
    npm run start
    ```

With the React application running, you can view it in your web browser at [http://localhost:3000](http://localhost:3000).

Local React.js application

  2. Using your code editor, create a new file within the fargate-react project directory called Dockerfile and add the content of the code block below into the new Dockerfile.

    The code below contains steps to be used by Docker to build a containerized image of the web application.

```
FROM node:alpine

WORKDIR /fargate-image

COPY . .

RUN npm install

EXPOSE 3000

CMD ["npm", "run", "start"]
```

Next, execute the command below to build the Docker image with the tag fargate-ecr-image. The tag will be useful for identifying the image later.

```
sudo docker build . -t fargate-ecr-image 
```

Executing Docker build steps

  3. Using your configured AWS CLI, execute the ecr-public command below to create a new public ECR repository named fargate-react-image-repository in the us-east-1 region. This new repository will be used to store the image built in step 2.

    When executed, the command below will return an output describing the repository that was created. Take note of the repositoryUri included in the repository details.

    ```
    aws ecr-public create-repository --repository-name fargate-react-image-repository --region us-east-1
    ```

    The image below highlights the repositoryUri of the repository created within this tutorial.

![Repository URL for created repository](https://res.cloudinary.com/dhuoxl63u/image/upload/v1635935621/blog/fargate-react-images/8.png)

Next, replace the REPOSITORY_URI placeholder within the command below, and execute it to tag the Docker image for the React application with the repositoryUri value returned by the previous command.

The repositoryUri tag added to the Docker image will be used when pushing the image into the _fargate-react-image-repository_.


```
sudo docker tag fargate-ecr-image:latest <REPOSITORY_URI>
```
  4. With the Docker image tagged, replace the REPOSITORY_URI placeholder in the command below and execute it to authenticate your Docker client with the ECR registry using an authentication token.

    ```
    aws ecr-public get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin <REPOSITORY_URI>
    ```

    Create public ECR Repository

    Next, execute the command below to push the image into the _fargate-react-image-repository_ in the ECR.

    Note: Replace the REPOSITORY_URL placeholder with your repository’s URL.

    ```
    docker push public.ecr.aws/{REPOSITORY_URL}
    ```

    Pushing Docker image to ECR

    After the push command completes, you can view the repository within the ECR section of the AWS Management Console for your account.

    Viewing pushed Docker image

Creating an EKS Cluster With Fargate Support

One quick way to manage EKS clusters is through eksctl, the official CLI tool for Amazon EKS. Eksctl leverages the CloudFormation service on AWS to group and create cluster-related resources within multiple stacks.

The eksctl create command below is the quickest way to create a cluster that uses Fargate. When executed, it creates an EKS cluster with Fargate support, assigning it a random name, one Fargate profile, and two namespaces.

```
eksctl create cluster --fargate
```

Using your code editor, create a file within the fargate-react project directory named fargate-config.yaml. Rather than using the default configurations, you will use the configurations defined in fargate-config.yaml with eksctl to create the EKS cluster.

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fargate-react-cluster
  region: us-east-2

fargateProfiles:
  - name: react-fargate-profile-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
```

Next, execute the command below to create a cluster using the fargate-config.yaml file above. The cluster creation process takes a significant amount of time, so be patient.

```
eksctl create cluster -f fargate-config.yaml
```

To view the nodes within the newly created cluster, execute the kubectl get command below:

```
kubectl get nodes -o wide
```

As shown in the image below, the two nodes created for the cluster have a “fargate-” prefix in their names, rather than the “ip-” prefix used by user-managed worker nodes.

Viewing Fargate nodes

Execute the command below to switch your kubectl context to that of the fargate-react-cluster. This enables kubectl to connect to your EKS cluster.

```
aws eks --region us-east-2 update-kubeconfig --name fargate-react-cluster
```

Execute the _eksctl_ command below to view details of the Fargate profile created within the _fargate-react-cluster_ in YAML format:

```
eksctl get fargateprofile --cluster fargate-react-cluster -o yaml
```

Viewing created Fargate profile

Creating The Deployment Cluster Resources

At this point, you should have an empty cluster with a default and kube-system namespace. Next, let’s proceed to create a Deployment resource within the cluster. The Deployment resource will be used to create the pods that will be picked up by Fargate.

Using your code editor, create a file named deployment.yaml and add the code within the code block below into the file. This code forms the configuration for a Deployment resource.

Note: Replace the FARGATE_REACT_IMAGE_REPOSITORY_URI placeholder with the repositoryUri returned when you created the fargate-react-image-repository in the ECR.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-fargate-k8-deployment
  namespace: default
  labels:
    app: react-fargate-k8
spec:
  replicas: 2
  selector:
    matchLabels:
      app: react-fargate-k8
  template:
    metadata:
      labels:
        app: react-fargate-k8
    spec:
      containers:
        - name: react-fargate-k8
          image: <FARGATE_REACT_IMAGE_REPOSITORY_URI>:latest
          ports:
            - containerPort: 3000
```

When used in the next step, the Deployment in the code block above will create two pods in the default namespace, each running the fargate-ecr-image for the React.js application pulled from the fargate-react-image-repository within the ECR.

Execute the kubectl command below to create the deployment resource using the deployment.yaml file you created above.

```
kubectl apply -f deployment.yaml
```

Execute the command below to view details of the two pods created by the deployment above.

```
kubectl get pods -o wide
```

Viewing Kubernetes pods within Deployment

Accessing The Fargate Pods

Within the previous section, you used a deployment resource to create two pods running the React.js Docker image. Let’s proceed to create a Kubernetes service resource to expose the Fargate pods to ingress traffic over the public internet.

Before proceeding, you must have the AWS Load Balancer Controller installed within your cluster. The installation guide from the AWS EKS documentation will walk you through the process of installing the AWS Load Balancer Controller on your cluster.

Execute `kubectl get pods --namespace kube-system` to view the pods running the AWS Load Balancer Controller within the kube-system namespace. These pods were created during the installation guide.

Viewing all pods within Kube-system

With the Load Balancer Controller installed, create a loadbalancer.yaml file within the fargate-react project directory to store the configurations for a load balancer service type.

Add the configurations within the YAML formatted code block below into the loadbalancer.yaml file.

```
apiVersion: v1
kind: Service
metadata:
  name: fargate-react-loadbalancer
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: react-fargate-k8
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
```

The configuration file above contains the following sections of key importance:

  • The metadata section which declares information about the Service resource, including its name and namespace. The keys within the annotations object determine what kind of load balancer is created within AWS.

    As an example, the ip value of the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type key configures the load balancer to route traffic to the Fargate pods via their IPs, rather than the default “instance” value used for an EKS cluster.

  • The spec section contains a type field that specifies the Service as a LoadBalancer, a selector that matches the pods created by the Deployment resource earlier, and the port configuration.

Execute the command below to create a load balancer using the configurations in the loadbalancer.yaml file.

```
kubectl apply -f loadbalancer.yaml
```

The load balancer created through the command above will contain an external IP address that can be used to access the workload within the pods.

Execute `kubectl get services` to retrieve the external IP address of the load balancer.

Viewing Kubernetes loadbalancer service for Fargate

Navigate to the load balancer’s external IP address using your browser to view the React.js application running within the Fargate pods.

Viewing React.js application running within Fargate deployment

As shown in the address bar of the browser image above, the React.js application was accessed through the external IP of the load balancer, indicating that the Fargate-backed cluster is operating as expected.

Conclusion

In this tutorial, you have deployed a React.js application to a serverless Kubernetes cluster created using the Fargate compute engine on AWS. You started the deployment process by building and pushing a Docker image to the ECR, after which you used the eksctl CLI tool to create a cluster with Fargate support and its related Kubernetes resources.

Now that you have practical knowledge of AWS Fargate, you can proceed to run your own workloads within Fargate pods, keeping in mind the cost and scheduling considerations outlined above.