Containers allow applications to be packaged once and run consistently across many different development environments, because a single container encapsulates everything needed to run an application. Container technologies have exploded in popularity in recent years, leading to diverse use cases as well as new and unexpected challenges. This Zone offers insights into how teams can solve these challenges through its coverage of container performance, Kubernetes, testing, container orchestration, the use of microservices to build and deploy containers, and more.
AWS Fargate is a serverless compute engine for containers that allows developers to run Docker containers without having to manage the underlying infrastructure. Fargate provides a scalable, secure, and cost-effective way to run containers in the cloud, making it a popular choice for modern application architectures. In this blog, we will explore the key concepts of Fargate and how they can help you build and manage containerized applications on AWS. Introduction Fargate is a compute engine for Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) that allows you to run containers without managing the underlying infrastructure. Fargate abstracts away the complexity of managing servers, clusters, and infrastructure scaling, allowing you to focus on your application code. AWS Fargate is a fully managed service that, with ECS or EKS handling the orchestration, automates the provisioning and scaling of the infrastructure behind your containerized applications. It removes the need for manual infrastructure management and allows you to deploy applications faster and more efficiently. Architecture of AWS Fargate AWS Fargate is built on top of Amazon Elastic Container Service (ECS). The architecture of AWS Fargate can be divided into the following components: Container definition: A container definition is a blueprint that describes how a container should be run. It includes information such as the Docker image, CPU and memory requirements, and the command to run. Task definition: A task definition is a blueprint that describes how to run one or more containers together as a single unit. It includes the container definition, networking, and other parameters. Task: A task is an instance of a task definition that is running on AWS Fargate. It includes one or more containers and the resources required to run them. Cluster: A cluster is a logical grouping of tasks and services. It includes the AWS Fargate resources required to run them. Service: A service is a set of tasks that are running together and can be scaled up or down based on demand. How Fargate Works Fargate works by launching containers on a shared infrastructure that is managed by AWS. When you launch a container using Fargate, you specify the resources that the container needs, such as CPU and memory, and Fargate provisions the necessary resources to run the container. Fargate automatically manages the infrastructure scaling and availability of the containers, ensuring that they are always available to handle incoming traffic. It scales the infrastructure up or down based on demand and automatically replaces unhealthy containers to maintain high availability. Fargate integrates with ECS and EKS, allowing you to launch and manage your containers using the same APIs and management tools. You can use ECS or EKS to create task definitions that describe your containerized application, and Fargate takes care of launching the containers and managing the underlying infrastructure. Fargate also integrates with other AWS services, such as Amazon Elastic Container Registry (ECR) for storing and managing container images, AWS CloudFormation for infrastructure as code, and AWS Identity and Access Management (IAM) for role-based access control. Benefits of Fargate Fargate provides several benefits for running containers on AWS: No infrastructure management: With Fargate, you do not have to manage servers, clusters, or infrastructure scaling. Fargate handles all of this automatically, allowing you to focus on your application code.
Scalable and flexible: Fargate can scale your containers automatically based on your application’s needs, allowing you to handle sudden spikes in traffic without manual intervention. It also provides the flexibility to run different types of applications, such as stateless, stateful, and batch-processing workloads. Secure: Fargate provides a secure environment for running your containers, isolating them from other containers running on the same infrastructure. Fargate also integrates with AWS Identity and Access Management (IAM) and Amazon Virtual Private Cloud (VPC) to provide you with additional security features. Cost-effective: With Fargate, you only pay for the resources you use, making it a cost-effective way to run containers on AWS. Fargate also eliminates the need for over-provisioning of infrastructure, reducing your operational costs. Use Cases for Fargate Fargate can be used in a variety of scenarios, including: Modern application architectures: Fargate is ideal for building modern application architectures, such as microservices and serverless applications, that require scalable and flexible infrastructure. Continuous integration and delivery (CI/CD): Fargate can be used in a CI/CD pipeline to build and deploy containerized applications automatically. Machine learning and data processing: Fargate can be used to run containerized machine learning workloads and data processing tasks that require scalable infrastructure. IoT and edge computing: Fargate can be used to run containerized workloads at the edge, providing a scalable and flexible way to process and analyze data. Hybrid cloud deployments: Fargate can be used to deploy containerized applications in hybrid cloud environments, providing a consistent way to manage containers across on-premises and cloud environments. Getting Started With Fargate To get started with Fargate, you need to create an ECS or EKS cluster and launch a task definition that describes your containerized application. AWS Fargate can be managed through the AWS Management Console or the AWS CLI. Create an Amazon Elastic Container Registry (ECR) repository: Before you can deploy containers to AWS Fargate, you need to create an Amazon ECR repository to store your Docker images. Create a task definition: A task definition is a blueprint that describes how to run one or more containers together as a single unit. Create a cluster: A cluster is a logical grouping of tasks and services. Create a service: A service is a set of tasks that are running together and can be scaled up or down based on demand. Deploy the service: To deploy the service, either use the AWS Management Console or use the AWS CLI and run the following command: Shell aws ecs create-service --cluster [cluster-name] --service-name [service-name] --task-definition [task-definition-arn] --desired-count [desired-count] Conclusion AWS Fargate is a powerful tool for building and managing containerized applications on AWS. Its serverless approach to container orchestration eliminates the need for manual infrastructure management, making it a popular choice for modern application architectures. With Fargate, you can deploy and scale your containers quickly and easily while also maintaining a secure and cost-effective environment.
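To make the getting-started steps above more concrete, here is a minimal sketch of registering a Fargate task definition and creating a service with the AWS CLI. Every name, the image URI, the execution role, and the subnet and security group IDs below are placeholders to replace with your own values; a real task definition will usually need additional settings such as logging and environment variables.

Shell
# Register a minimal Fargate task definition (all values are illustrative)
cat > taskdef.json <<'EOF'
{
  "family": "my-fargate-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json

# Create a cluster and a Fargate service that keeps two copies of the task running
aws ecs create-cluster --cluster-name my-fargate-cluster
aws ecs create-service \
  --cluster my-fargate-cluster \
  --service-name my-fargate-service \
  --task-definition my-fargate-app \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"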
In this blog post, you will be using AWS Controllers for Kubernetes on an Amazon EKS cluster to put together a solution where HTTP requests sent to a REST endpoint exposed by Amazon API Gateway are processed by a Lambda function and persisted to a DynamoDB table. AWS Controllers for Kubernetes (also known as ACK) leverage Kubernetes Custom Resource and Custom Resource Definitions and give you the ability to manage and use AWS services directly from Kubernetes without needing to define resources outside of the cluster. The idea behind ACK is to enable Kubernetes users to describe the desired state of AWS resources using the Kubernetes API and configuration language. ACK will then take care of provisioning and managing the AWS resources to match the desired state. This is achieved by using Service controllers that are responsible for managing the lifecycle of a particular AWS service. Each ACK service controller is packaged into a separate container image that is published in a public repository corresponding to an individual ACK service controller. There is no single ACK container image. Instead, there are container images for each individual ACK service controller that manages resources for a particular AWS API. This blog post will walk you through how to use the API Gateway, DynamoDB, and Lambda service controllers for ACK. Prerequisites To follow along step-by-step, in addition to an AWS account, you will need to have AWS CLI, kubectl, and Helm installed. There are a variety of ways in which you can create an Amazon EKS cluster. I prefer using the eksctl CLI because of the convenience it offers. Creating an EKS cluster using eksctl can be as easy as this: eksctl create cluster --name <my-cluster> --region <region-code> For details, refer to the Getting Started with Amazon EKS – eksctl documentation. Clone this GitHub repository and change to the right directory: git clone https://github.com/abhirockzz/k8s-ack-apigw-lambda cd k8s-ack-apigw-lambda Ok, let's get started! 
Setup the ACK Service Controllers for AWS Lambda, API Gateway, and DynamoDB Install ACK Controllers Log into the Helm registry that stores the ACK charts: aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws Deploy the ACK service controller for Amazon Lambda using the lambda-chart Helm chart: RELEASE_VERSION_LAMBDA_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/lambda-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/lambda-chart "--version=${RELEASE_VERSION_LAMBDA_ACK}" --generate-name --set=aws.region=us-east-1 Deploy the ACK service controller for API Gateway using the apigatewayv2-chart Helm chart: RELEASE_VERSION_APIGWV2_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/apigatewayv2-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/apigatewayv2-chart "--version=${RELEASE_VERSION_APIGWV2_ACK}" --generate-name --set=aws.region=us-east-1 Deploy the ACK service controller for DynamoDB using the dynamodb-chart Helm chart: RELEASE_VERSION_DYNAMODB_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/dynamodb-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/dynamodb-chart "--version=${RELEASE_VERSION_DYNAMODB_ACK}" --generate-name --set=aws.region=us-east-1 Now, it's time to configure the IAM permissions for the controller to invoke Lambda, DynamoDB, and API Gateway. Configure IAM Permissions Create an OIDC Identity Provider for Your Cluster For the steps below, replace the EKS_CLUSTER_NAME and AWS_REGION variables with your cluster name and region. 
export EKS_CLUSTER_NAME=demo-eks-cluster export AWS_REGION=us-east-1 eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --region $AWS_REGION --approve OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f2- | cut -d '/' -f2-) Create IAM Roles for Your Lambda, API Gateway, and DynamoDB ACK Service Controllers ACK Lambda Controller Set the following environment variables: ACK_K8S_SERVICE_ACCOUNT_NAME=ack-lambda-controller ACK_K8S_NAMESPACE=ack-system AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) Create the trust policy for the IAM role: read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_lambda.json Create the IAM role: ACK_CONTROLLER_IAM_ROLE="ack-lambda-controller" ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK lambda controller deployment on EKS cluster using Helm charts" aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_lambda.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}" Attach IAM policy to the IAM role: # we are getting the policy directly from the ACK repo INLINE_POLICY="$(curl https://raw.githubusercontent.com/aws-controllers-k8s/lambda-controller/main/config/iam/recommended-inline-policy)" aws iam put-role-policy \ --role-name "${ACK_CONTROLLER_IAM_ROLE}" \ --policy-name "ack-recommended-policy" \ --policy-document "${INLINE_POLICY}" Attach ECR permissions to the controller IAM role: these are required since Lambda functions will be pulling images from ECR. Make sure to update the ecr-permissions.json file with the AWS_ACCOUNT_ID and AWS_REGION variables. aws iam put-role-policy \ --role-name "${ACK_CONTROLLER_IAM_ROLE}" \ --policy-name "ecr-permissions" \ --policy-document file://ecr-permissions.json Associate the IAM role to a Kubernetes service account: ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text) export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN Repeat the steps for the API Gateway controller. 
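One note on the ecr-permissions.json file referenced above: its contents ship with the cloned repository and are not reproduced in this post. As a rough, hedged sketch of the kind of policy it contains (the actions listed here are an assumption based on what is needed to create a Lambda function from a private ECR image; the file in the repository may differ), it would look something like this once AWS_REGION and AWS_ACCOUNT_ID are substituted:

Shell
# Illustrative only: the repository already ships an ecr-permissions.json;
# substitute AWS_REGION and AWS_ACCOUNT_ID before use.
cat > ecr-permissions.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecr:GetAuthorizationToken"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:SetRepositoryPolicy"
      ],
      "Resource": "arn:aws:ecr:AWS_REGION:AWS_ACCOUNT_ID:repository/*"
    }
  ]
}
EOF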
ACK API Gateway Controller Set the following environment variables: ACK_K8S_SERVICE_ACCOUNT_NAME=ack-apigatewayv2-controller ACK_K8S_NAMESPACE=ack-system AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) Create the trust policy for the IAM role: read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_apigatewayv2.json Create the IAM role: ACK_CONTROLLER_IAM_ROLE="ack-apigatewayv2-controller" ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK apigatewayv2 controller deployment on EKS cluster using Helm charts" aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_apigatewayv2.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}" Attach managed IAM policies to the IAM role: aws iam attach-role-policy --role-name "${ACK_CONTROLLER_IAM_ROLE}" --policy-arn arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator aws iam attach-role-policy --role-name "${ACK_CONTROLLER_IAM_ROLE}" --policy-arn arn:aws:iam::aws:policy/AmazonAPIGatewayInvokeFullAccess Associate the IAM role to a Kubernetes service account: ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text) export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN Repeat the steps for the DynamoDB controller. 
ACK DynamoDB Controller Set the following environment variables: ACK_K8S_SERVICE_ACCOUNT_NAME=ack-dynamodb-controller ACK_K8S_NAMESPACE=ack-system AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) Create the trust policy for the IAM role: read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_dynamodb.json Create the IAM role: ACK_CONTROLLER_IAM_ROLE="ack-dynamodb-controller" ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK dynamodb controller deployment on EKS cluster using Helm charts" aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_dynamodb.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}" Attach IAM policy to the IAM role: # for dynamodb controller, we use the managed policy ARN instead of the inline policy (like we did for Lambda controller) POLICY_ARN="$(curl https://raw.githubusercontent.com/aws-controllers-k8s/dynamodb-controller/main/config/iam/recommended-policy-arn)" aws iam attach-role-policy --role-name "${ACK_CONTROLLER_IAM_ROLE}" --policy-arn "${POLICY_ARN}" Associate the IAM role to a Kubernetes service account: ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text) export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN Restart ACK Controller Deployments and Verify the Setup Restart the ACK service controller Deployment using the following commands. It will update the service controller Pods with IRSA environment variables. Get the list of ACK service controller deployments: export ACK_K8S_NAMESPACE=ack-system kubectl get deployments -n $ACK_K8S_NAMESPACE Restart Lambda, API Gateway, and DynamoDB Deployments: DEPLOYMENT_NAME_LAMBDA=<enter deployment name for lambda controller> kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_LAMBDA DEPLOYMENT_NAME_APIGW=<enter deployment name for apigw controller> kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_APIGW DEPLOYMENT_NAME_DYNAMODB=<enter deployment name for dynamodb controller> kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_DYNAMODB List Pods for these Deployments. Verify that the AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN environment variables exist for your Kubernetes Pod using the following commands: kubectl get pods -n $ACK_K8S_NAMESPACE LAMBDA_POD_NAME=<enter Pod name for lambda controller> kubectl describe pod -n $ACK_K8S_NAMESPACE $LAMBDA_POD_NAME | grep "^\s*AWS_" APIGW_POD_NAME=<enter Pod name for apigw controller> kubectl describe pod -n $ACK_K8S_NAMESPACE $APIGW_POD_NAME | grep "^\s*AWS_" DYNAMODB_POD_NAME=<enter Pod name for dynamodb controller> kubectl describe pod -n $ACK_K8S_NAMESPACE $DYNAMODB_POD_NAME | grep "^\s*AWS_" Now that the ACK service controller has been set up and configured, you can create AWS resources! 
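Before creating any resources, a quick sanity check that the ACK custom resource definitions and controller Pods are in place can save some debugging later (the exact list of CRDs depends on the controller versions you installed):

Shell
kubectl get crd | grep services.k8s.aws
kubectl get pods -n ack-system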
Create API Gateway Resources, DynamoDB table, and Deploy the Lambda Function Create API Gateway Resources In the file apigw-resources.yaml, replace the AWS account ID in the integrationURI attribute for the Integration resource. This is what the ACK manifest for API Gateway resources (API, Integration, and Stage) looks like: apiVersion: apigatewayv2.services.k8s.aws/v1alpha1 kind: API metadata: name: ack-demo-apigw-httpapi spec: name: ack-demo-apigw-httpapi protocolType: HTTP --- apiVersion: apigatewayv2.services.k8s.aws/v1alpha1 kind: Integration metadata: name: ack-demo-apigw-integration spec: apiRef: from: name: ack-demo-apigw-httpapi integrationType: AWS_PROXY integrationMethod: POST integrationURI: arn:aws:lambda:us-east-1:AWS_ACCOUNT_ID:function:demo-dynamodb-func-ack payloadFormatVersion: "2.0" --- apiVersion: apigatewayv2.services.k8s.aws/v1alpha1 kind: Stage metadata: name: demo-stage spec: apiRef: from: name: ack-demo-apigw-httpapi stageName: demo-stage autoDeploy: true description: "demo stage for http api" Create the API Gateway resources (API, Integration, and Stage) using the following command: kubectl apply -f apigw-resources.yaml Create DynamoDB Table This is what the ACK manifest for the DynamoDB table looks like: apiVersion: dynamodb.services.k8s.aws/v1alpha1 kind: Table metadata: name: user annotations: services.k8s.aws/region: us-east-1 spec: attributeDefinitions: - attributeName: email attributeType: S billingMode: PAY_PER_REQUEST keySchema: - attributeName: email keyType: HASH tableName: user You can replace the us-east-1 region with your preferred region. Create a table (named user) using the following command: kubectl apply -f dynamodb-table.yaml # list the tables kubectl get tables Build Function Binary and Create Docker Image GOARCH=amd64 GOOS=linux go build -o main main.go aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws docker build -t demo-apigw-dynamodb-func-ack . 
Create a private ECR repository, tag, and push the Docker image to ECR: AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com aws ecr create-repository --repository-name demo-apigw-dynamodb-func-ack --region us-east-1 docker tag demo-apigw-dynamodb-func-ack:latest $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest Create an IAM execution role for the Lambda function and attach the required policies: export ROLE_NAME=demo-apigw-dynamodb-func-ack-role ROLE_ARN=$(aws iam create-role \ --role-name $ROLE_NAME \ --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}' \ --query 'Role.[Arn]' --output text) aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole Since the Lambda function needs to write data to DynamoDB, let's add the following policy to the IAM role: aws iam put-role-policy \ --role-name "${ROLE_NAME}" \ --policy-name "dynamodb-put" \ --policy-document file://dynamodb-put.json Create the Lambda Function Update the function.yaml file with the following info: imageURI - The URI of the Docker image that you pushed to ECR; e.g., <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest role - The ARN of the IAM role that you created for the Lambda function; e.g., arn:aws:iam::<AWS_ACCOUNT_ID>:role/demo-apigw-dynamodb-func-ack-role This is what the ACK manifest for the Lambda function looks like: apiVersion: lambda.services.k8s.aws/v1alpha1 kind: Function metadata: name: demo-apigw-dynamodb-func-ack annotations: services.k8s.aws/region: us-east-1 spec: architectures: - x86_64 name: demo-apigw-dynamodb-func-ack packageType: Image code: imageURI: AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest environment: variables: TABLE_NAME: user role: arn:aws:iam::AWS_ACCOUNT_ID:role/demo-apigw-dynamodb-func-ack-role description: A function created by ACK lambda-controller To create the Lambda function, run the following command: kubectl create -f function.yaml # list the function kubectl get functions Add API Gateway Trigger Configuration Here is an example using the AWS Console. Open the Lambda function in the AWS Console and click on the Add trigger button. Select API Gateway as the trigger source, select the existing API, and click on the Add button. Now you are ready to try out the end-to-end solution! Test the Application Get the API Gateway endpoint: export API_NAME=ack-demo-apigw-httpapi export STAGE_NAME=demo-stage export URL=$(kubectl get api/"${API_NAME}" -o=jsonpath='{.status.apiEndpoint}')/"${STAGE_NAME}"/demo-apigw-dynamodb-func-ack Invoke the API Gateway endpoint: curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user1@foo.com","name":"user1"}' $URL curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user2@foo.com","name":"user2"}' $URL curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user3@foo.com","name":"user3"}' $URL curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user4@foo.com","name":"user4"}' $URL The Lambda function should be invoked and the data should be written to the DynamoDB table.
Check the DynamoDB table using the CLI (or AWS console): aws dynamodb scan --table-name user Clean Up After you have explored the solution, you can clean up the resources by running the following commands: Delete the API Gateway resources, DynamoDB table, and the Lambda function: kubectl delete -f apigw-resources.yaml kubectl delete -f function.yaml kubectl delete -f dynamodb-table.yaml To uninstall the ACK service controllers, run the following commands: export ACK_SYSTEM_NAMESPACE=ack-system helm ls -n $ACK_SYSTEM_NAMESPACE helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the apigw chart> helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the lambda chart> helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the dynamodb chart> Conclusion and Next Steps In this post, we have seen how to use AWS Controllers for Kubernetes to create a Lambda function, an API Gateway integration, and a DynamoDB table, and wire them together to deploy a solution. Almost all of this was done using Kubernetes! I encourage you to try out other AWS services supported by ACK: here is a complete list. Happy building!
Do you still write lengthy Dockerfiles describing every step necessary to build a container image? Then, buildpacks come to your rescue! Developers simply feed them an application, buildpacks do their magic, and turn it into a fully functional container ready to be deployed on any cloud. But how exactly does the magic happen? And what should you do if the resulting container performance doesn’t meet the business requirements? This article will look under the hood of buildpacks to see how they operate and give tips on optimizing the default settings to reach better performance outcomes. What Are Buildpacks? A buildpack turns the application source code into a runnable production-ready container image. Buildpacks save time and effort for developers because there’s no need to configure the image and manually manage dependencies through a Dockerfile. Heroku was the first company to develop buildpacks in 2011. Since then, many other companies (Cloud Foundry, Google, etc.) have adopted the concept. In 2018, Heroku partnered with Pivotal to create the Cloud Native Buildpacks project, encompassing modern standards and specifications for container images, such as the OCI format. The project is part of the Cloud Native Computing Foundation (CNCF). Paketo buildpacks, which we will use in this article, is an open-source project backed by Cloud Foundry and sponsored by VMware. It implements Cloud Native Buildpacks specifications and supports the most popular languages, including Java. Containers produced with Paketo buildpacks can run on any cloud. How Buildpacks Work Buildpacks operate in two phases: detect and build. 1. The Detect Phase During the detection phase, the buildpack analyzes the source code looking for indicators of whether or not it should be applied to the application. In other words, a group of buildpacks is tested against the source code, and the first group deemed fit for the code is selected for building the app. After the buildpack detects the necessary indicators, it returns a contract of what is required for creating an image and proceeds to the build phase. 2. The Build Phase During the build phase, the buildpack transforms the codebase, fulfilling the contract requirements composed earlier. It provides the build-time and runtime environment, downloads necessary dependencies, compiles the code if needed, and sets the entry points and startup scripts. Builders A builder is a combination of components required for building a container image: Buildpacks, sets of executables that analyze the code and provide a plan for building and running the app; Stack consists of two images: the build image and the run image. The build image provides the built environment (a containerized environment where build packs are executed), the run image offers the environment for the application image during runtime; Lifecycle manages the buildpack execution and assembles the resulting artifact into a final image. Therefore, one builder can automatically detect and build different applications. Buildpacks Offer a Variety of JVMs — How to Choose? Paketo buildpacks use Liberica JVM by default. Liberica is a HotSpot-based Java runtime supported by a major OpenJDK contributor and recommended by Spring. It provides JDK and JRE for all LTS versions (8, 11, 17), the current version, and Liberica Native Image Kit (NIK), a GraalVM-based utility for converting JVM-based apps into native images with an accelerated startup. Native images are beneficial when you need to avoid cold starts in AWS. 
But the buildpacks support several Java distributions, which can be used instead of the default JVM: Adoptium Alibaba Dragonwell Amazon Corretto Azul Zulu BellSoft Liberica (default) Eclipse OpenJ9 GraalVM Oracle JDK Microsoft OpenJDK SapMachine If you want to switch JVMs, you have to keep in mind several nuances: Alibaba Dragonwell, Amazon Corretto, GraalVM, Oracle JDK, and Microsoft OpenJDK offer only JDK. The resulting container will be twice as big as the JRE-based one; Adoptium provides JDK and JRE for Java 8 and 11 and only JDK for Java 16+; Oracle JDK provides only Java 17. Another important consideration: buildpacks facilitate and accelerate deployment, but if you are dissatisfied with container performance or seek to improve essential KPIs (throughput, latency, or memory consumption), you have to tune the JVM yourself. For more details, see the section Configuring the JVM below. For instance, Eclipse OpenJ9 based on the OpenJ9 JVM may demonstrate better performance than HotSpot in some cases because HotSpot comes with default settings, and OpenJ9 is already tuned. Adding a few simple parameters will give you equal or superior performance with HotSpot. How to Use Paketo Buildpacks Let’s build a Java container utilizing a Paketo buildpack. First, make sure Docker is up and running. If you don’t have it, follow these instructions to install Docker Desktop for your system. The next step is to install pack CLI, a Command Line Interface maintained by Cloud Native Buildpack that can be used to work with buildpacks. Follow the guide to complete the installation for your platform (macOS, Linux, and Windows are supported). Pack is one of the several available tools. Spring Boot developers, for instance, can look into Spring Boot Maven Plugin or Spring Boot Gradle Plugin. We will use Paketo sample applications, so run the following command: git clone https://github.com/paketo-buildpacks/samples && cd samples Alternatively, utilize your own demo app. Make Paketo Base builder the default builder: pack config default-builder paketobuildpacks/builder:base To build an image from source with Maven, run pack build samples/java \ --path java/maven --env BP_JVM_VERSION=17 Java example images should return {"status":"UP"} from the actuator health endpoint: docker run --rm --tty --publish 8080:8080 samples/java curl -s http://localhost:8080/actuator/health | jq . It is also possible to build an image from a compiled artifact. The following archive formats are supported: executable JAR, WAR, or distribution ZIP. To compile an executable JAR and build an image using pack, run cd java/maven ./mvnw package pack build samples/java \ --path ./target/demo-0.0.1-SNAPSHOT.jar Extracting a Software Bill of Materials Software supply chains consist of numerous libraries, tools, and processes used to develop and run applications. It is often hard to trace the origin of all software components in a software product, increasing the risk of nested vulnerabilities. A software bill of materials (SBOM) lists all library dependencies utilized to build a software artifact. It is similar to a traditional bill of materials, which summarizes the raw materials, parts, components, and exact quantities required to manufacture a product. SBOMs enable the developers to monitor the version of software components, integrate security patches promptly, and keep vulnerable libraries out. Buildpacks also enable the developers to see an SBOM for their image. 
Run the following command to extract the SBOM for the samples/java image built previously: pack sbom download samples/java --output-dir /tmp/samples-java-sbom After that, you can browse the folder. SBOMs are presented in JSON format. To list all .json files in the folder, run the following: find /tmp/samples-java-sbom -name "*.json" /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_executable-jar/sbom.cdx.json /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_executable-jar/sbom.syft.json /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_spring-boot/helper/sbom.syft.json /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_spring-boot/spring-cloud-bindings/sbom.syft.json /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_bellsoft-liberica/jre/sbom.syft.json /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_bellsoft-liberica/helper/sbom.syft.json /tmp/samples-java-sbom/layers/sbom/launch/sbom.legacy.json /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_ca-certificates/helper/sbom.syft.json Now, you can open the file with any text editor. For instance, if you have Visual Studio Code installed, run the following: code /tmp/samples-java-sbom/layers/sbom/launch/paketo-buildpacks_bellsoft-liberica/jre/sbom.syft.json You will get the following output: { "Artifacts": [ { "ID": "1f2d01eeb13b5894", "Name": "BellSoft Liberica JRE", "Version": "17.0.6", "Type": "UnknownPackage", "FoundBy": "libpak", "Locations": [ { "Path": "buildpack.toml" } ], "Licenses": [ "GPL-2.0 WITH Classpath-exception-2.0" ], "Language": "", "CPEs": [ "cpe:2.3:a:oracle:jre:17.0.6:*:*:*:*:*:*:*" ], "PURL": "pkg:generic/bellsoft-jre@17.0.6?arch=amd64" } ], "Source": { "Type": "directory", "Target": "/layers/paketo-buildpacks_bellsoft-liberica/jre" }, "Descriptor": { "Name": "syft", "Version": "0.32.0" }, "Schema": { "Version": "1.1.0", "URL": "https://raw.githubusercontent.com/anchore/syft/main/schema/json/schema-1.1.0.json" } } Configuring the JVM The BellSoft Liberica Buildpack provides the newest patch updates of Java versions supported in the buildpack. The buildpack uses the latest LTS version by default. If you want to use another Java version, use the BP_JVM_VERSION environment variable. For instance, BP_JVM_VERSION=11 will install the newest release of Liberica JDK and JRE 11. In addition, you can change the JDK type. The buildpack uses JDK at build-time and JRE at runtime. Specifying the BP_JVM_TYPE=JDK option will force the buildpack to use JDK at runtime. The BP_JVM_JLINK_ENABLED option runs the jlink tool with Java 9+, which cuts out a custom JRE. If you deploy a Java application to an application server, the buildpack uses Apache Tomcat by default. You can select another server (TomEE or Open Liberty). For instance, run the following command to switch to TomEE: pack build samples/war -e BP_JAVA_APP_SERVER=tomee You can configure JVM at runtime by using the JAVA_TOOL_OPTIONS environment variable. For instance, you can configure garbage collection, number of threads, memory limits, etc., to reach optimal performance for your specific needs: docker run --rm --tty \ --env JAVA_TOOL_OPTIONS='-XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40' \ --env BPL_JVM_THREAD_COUNT=100 \ samples/java The whole list of JVM configuration options can be found on the Liberica Buildpack page. Conclusion As you can see, buildpacks are great automation tools saving developers time. 
But it would help if you used them wisely, or there’s a risk you will get a cat in the sack. Our general recommendation is to define the KPIs and adjust JVM settings accordingly. What can you do if you are not happy with the size of the resulting image? After all, it’s not possible to change the base OS image utilized by buildpacks. One option is to migrate to the Native Image to optimize resource consumption. Another alternative is to manually build containers and switch to a smaller OS image, such as Alpine or Alpaquita Linux. The latter supports two libc implementations (optimized musl and glibc) and comes with numerous performance and security enhancements.
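One note on the native-image route mentioned above: Paketo buildpacks can also produce a GraalVM native image for you. As a minimal, hedged sketch, assuming a Spring Boot application already prepared for native compilation and using the Paketo tiny builder (the builder name and the BP_NATIVE_IMAGE variable are taken from the Paketo documentation and may change between releases):

Shell
# "my-native-app" is a hypothetical image name; run from the application directory
pack build my-native-app \
  --path . \
  --builder paketobuildpacks/builder:tiny \
  --env BP_NATIVE_IMAGE=true
docker run --rm -p 8080:8080 my-native-app

The resulting container typically starts noticeably faster and is smaller than the JVM-based one, at the cost of a longer build and the usual native-image restrictions around reflection.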
In this post, you will learn how to create a Docker image for your GraalVM native image. By means of some hands-on experiments, you will learn that it is a bit trickier than what you are used to when creating Docker images. Enjoy! Introduction In a previous post, you learned how to create a GraalVM native image for a Spring Boot 3 application. Nowadays, applications are often distributed as Docker images, so it is interesting to verify how this is done for a GraalVM native image. A GraalVM native image does not need a JVM, so can you use a more minimalistic Docker base image for example? You will execute some experiments during this blog and will learn by doing. The sources used in this blog are available on GitHub. The information provided in the GraalVM documentation is a good starting point for learning. It is good reference material when reading this blog. As an example application, you will use the Spring Boot application from the previous post. The application contains one basic RestController which just returns a hello message. The RestController also includes some code in order to execute tests in combination with Reflection, but this part was added for the previous post. Java @RestController public class HelloController { @RequestMapping("/hello") public String hello() { // return "Hello GraalVM!" String helloMessage = "Default message"; try { Class<?> helloClass = Class.forName("com.mydeveloperplanet.mygraalvmplanet.Hello"); Method helloSetMessageMethod = helloClass.getMethod("setMessage", String.class); Method helloGetMessageMethod = helloClass.getMethod("getMessage"); Object helloInstance = helloClass.getConstructor().newInstance(); helloSetMessageMethod.invoke(helloInstance, "Hello GraalVM!"); helloMessage = (String) helloGetMessageMethod.invoke(helloInstance); } catch (ClassNotFoundException e) { throw new RuntimeException(e); } catch (InvocationTargetException e) { throw new RuntimeException(e); } catch (InstantiationException e) { throw new RuntimeException(e); } catch (IllegalAccessException e) { throw new RuntimeException(e); } catch (NoSuchMethodException e) { throw new RuntimeException(e); } return helloMessage; } } Build the application: Shell $ mvn clean verify Run the application from the root of the repository: Shell $ java -jar target/mygraalvmplanet-0.0.1-SNAPSHOT.jar Test the endpoint: Shell $ curl http://localhost:8080/hello Hello GraalVM! You are now ready for Dockerizing this application! Prerequisites Prerequisites for this blog are: Basic Linux knowledge, Ubuntu 22.04 is used during this post Basic Java and Spring Boot knowledge Basic GraalVM knowledge Basic Docker knowledge Basic SDKMAN knowledge Create Docker Image for Spring Boot Application In this section, you will create a Dockerfile for the Spring Boot application. This is a very basic Dockerfile and is not to be used in production code. See previous posts "Docker Best Practices" and "Spring Boot Docker Best Practices" for tips and tricks for production-ready Docker images. The Dockerfile you will be using is the following: Dockerfile FROM eclipse-temurin:17.0.5_8-jre-alpine COPY target/mygraalvmplanet-0.0.1-SNAPSHOT.jar app.jar ENTRYPOINT ["java", "-jar", "app.jar"] You use a Docker base image containing a Java JRE, copy the JAR file into the image, and, in the end, you run the JAR file. Build the Docker image: Shell $ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT Verify the size of the image. It is 188MB in size. 
Shell $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE mydeveloperplanet/mygraalvmplanet 0.0.1-SNAPSHOT be12e1deda89 33 seconds ago 188MB Run the Docker image: Shell $ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT ... 2023-02-26T09:20:48.033Z INFO 1 --- [ main] c.m.m.MyGraalVmPlanetApplication : Started MyGraalVmPlanetApplication in 2.389 seconds (process running for 2.981) As you can see, the application started in about 2 seconds. Test the endpoint again. First, find the IP Address of your Docker container. In the output below, the IP Address is 172.17.0.2, but it will probably be something else on your machine. Shell $ docker inspect mygraalvmplanet | grep IPAddress "SecondaryIPAddresses": null, "IPAddress": "172.17.0.2", "IPAddress": "172.17.0.2", Invoke the endpoint with the IP Address and verify that it works. Shell $ curl http://172.17.0.2:8080/hello Hello GraalVM! In order to continue, stop the container, remove it, and also remove the image. Do this after each experiment. This way, you can be sure that you start from a clean situation each time. Shell $ docker rm mygraalvmplanet $ docker rmi mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT Create Docker Image for GraalVM Native Image Let’s do the same for the GraalVM native image. First, switch to using GraalVM. Shell $ sdk use java 22.3.r17-nik Create the native image: Shell $ mvn -Pnative native:compile Create a similar Dockerfile (Dockerfile-native-image). This time, you use an Alpine Docker base image without a JVM. You do not need a JVM for running a GraalVM native image as it is an executable and not a JAR file. Dockerfile FROM alpine:3.17.1 COPY target/mygraalvmplanet mygraalvmplanet ENTRYPOINT ["/mygraalvmplanet"] Build the Docker image, this time with an extra --file argument because the file name deviates from the default. Shell $ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT --file Dockerfile-native-image Verify the size of the Docker image. It is now only 76.5MB instead of the 188MB earlier. Shell $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE mydeveloperplanet/mygraalvmplanet 0.0.1-SNAPSHOT 4f7c5c6a9b29 25 seconds ago 76.5MB Run the container and note that it does not start correctly. Shell $ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT exec /mygraalvmplanet: no such file or directory What is wrong here? Why does this not work? It is a vague error, but the Alpine Linux Docker image uses musl as its standard C library, whereas the GraalVM native image was compiled on an Ubuntu Linux distro, which uses glibc. Let’s change the Docker base image to Ubuntu. The Dockerfile is Dockerfile-native-image-ubuntu: Dockerfile FROM ubuntu:jammy COPY target/mygraalvmplanet mygraalvmplanet ENTRYPOINT ["/mygraalvmplanet"] Build the Docker image. Shell $ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT --file Dockerfile-native-image-ubuntu Verify the size of the Docker image; it is now 147MB. Shell $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE mydeveloperplanet/mygraalvmplanet 0.0.1-SNAPSHOT 1fa90b1bfc54 3 hours ago 147MB Run the container and it starts successfully in less than 200ms. Shell $ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT ...
2023-02-26T12:48:26.140Z INFO 1 --- [ main] c.m.m.MyGraalVmPlanetApplication : Started MyGraalVmPlanetApplication in 0.131 seconds (process running for 0.197) Create Docker Image Based on Distroless Image The size of the Docker image build with the Ubuntu base image is 147MB. But, the Ubuntu image does contain a lot of tooling which is not needed. Can we reduce the size of the image by using a distroless image which is very small in size? Create a Dockerfile Dockerfile-native-image-distroless and use a distroless base image. Dockerfile FROM gcr.io/distroless/base COPY target/mygraalvmplanet mygraalvmplanet ENTRYPOINT ["/mygraalvmplanet"] Build the Docker image. Shell $ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT --file Dockerfile-native-image-distroless Verify the size of the Docker image, it is now 89.9MB. Shell $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE mydeveloperplanet/mygraalvmplanet 0.0.1-SNAPSHOT 6fd4d44fb622 9 seconds ago 89.9MB Run the container and see that it is failing to start. It appears that several necessary libraries are not present in the distroless image. Shell $ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT /mygraalvmplanet: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory When Googling this error message, you will find threads that mention copying the required libraries from other images (e.g., the Ubuntu image), but you will encounter a next error and a next error. This is a difficult path to follow and costs some time. See, for example, this thread. A solution for using distroless images can be found here. Create Docker Image Based on Oracle Linux Another approach for creating Docker images is the one that can be found on the GraalVM GitHub page. Build the native image in a Docker container and use a multistage build to build the target image. The Dockerfile being used is copied from here and can be found in the repository as Dockerfile-oracle-linux. Create a new file Dockerfile-native-image-oracle-linux, copy the contents of Dockerfile-oracle-linux into it, and change the following: Update the Maven SHA and DOWNLOAD_URL. Change L36 in order to compile the native image as you used to do: mvn -Pnative native:compile Change L44 and L45 in order to copy and use the mygraalvmplanet native image. 
The resulting Dockerfile is the following: Dockerfile FROM ghcr.io/graalvm/native-image:ol8-java17-22 AS builder # Install tar and gzip to extract the Maven binaries RUN microdnf update \ && microdnf install --nodocs \ tar \ gzip \ && microdnf clean all \ && rm -rf /var/cache/yum # Install Maven # Source: # 1) https://github.com/carlossg/docker-maven/blob/925e49a1d0986070208e3c06a11c41f8f2cada82/openjdk-17/Dockerfile # 2) https://maven.apache.org/download.cgi ARG USER_HOME_DIR="/root" ARG SHA=1ea149f4e48bc7b34d554aef86f948eca7df4e7874e30caf449f3708e4f8487c71a5e5c072a05f17c60406176ebeeaf56b5f895090c7346f8238e2da06cf6ecd ARG MAVEN_DOWNLOAD_URL=https://dlcdn.apache.org/maven/maven-3/3.9.0/binaries/apache-maven-3.9.0-bin.tar.gz RUN mkdir -p /usr/share/maven /usr/share/maven/ref \ && curl -fsSL -o /tmp/apache-maven.tar.gz ${MAVEN_DOWNLOAD_URL} \ && echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \ && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \ && rm -f /tmp/apache-maven.tar.gz \ && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn ENV MAVEN_HOME /usr/share/maven ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2" # Set the working directory to /home/app WORKDIR /build # Copy the source code into the image for building COPY . /build # Build RUN mvn -Pnative native:compile # The deployment Image FROM docker.io/oraclelinux:8-slim EXPOSE 8080 # Copy the native executable into the containers COPY --from=builder /build/target/mygraalvmplanet . ENTRYPOINT ["/mygraalvmplanet"] Build the Docker image. Relax, this will take quite some time. Shell $ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT -f Dockerfile-native-image-oracle-linux This image size is 177MB. Shell $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE mydeveloperplanet/mygraalvmplanet 0.0.1-SNAPSHOT 57e0fda006f0 9 seconds ago 177MB Run the container and it starts in 55ms. Shell $ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT ... 2023-02-26T13:13:50.188Z INFO 1 --- [ main] c.m.m.MyGraalVmPlanetApplication : Started MyGraalVmPlanetApplication in 0.055 seconds (process running for 0.061) So, this works just fine. This is the way to go when creating Docker images for your GraalVM native image: Prepare a Docker image based on your target base image; Install the necessary tooling, in the case of this application, GraalVM and Maven; Use a multistage Docker build in order to create the target image. Conclusion Creating a Docker image for your GraalVM native image is possible, but you need to be aware of what you are doing. Using a multistage build is the best option. Dependent on whether you need to shrink the size of the image by using a distroless image, you need to prepare the image to build the native image yourself.
If you’re a developer, chances are you’ve worked with Docker at some point in your career. Docker has become the standard for containerization, allowing developers to package their applications in containers and deploy them anywhere. However, Docker isn’t the only containerization solution out there. Rancher Desktop is a popular alternative that offers many benefits over Docker. In this blog post, we’ll explore the reasons for migrating from Docker to Rancher Desktop, as well as the steps involved in the migration process. Why Migrate from Docker to Rancher Desktop? Before we dive into the migration process, let’s first explore the reasons why you might want to migrate from Docker to Rancher Desktop. Here are some of the key benefits of using Rancher Desktop: Improved UI/UX: Rancher Desktop offers a user-friendly interface that makes it easy to manage containers and Kubernetes clusters. The interface is intuitive and allows developers to manage their containers without having to use the command line. Kubernetes Integration: Rancher Desktop integrates with Kubernetes, allowing developers to manage their Kubernetes clusters from within the Rancher Desktop interface. This integration makes it easy to deploy, scale, and manage Kubernetes clusters, all from a single dashboard. Multi-Platform Support: Rancher Desktop supports multiple platforms, including Windows, macOS, and Linux. This makes it easy for developers to work with containers on any platform of their choice. Secure: Rancher Desktop offers several security features, including secure container isolation and network segmentation. These features make it easy to deploy and manage containers securely. Open-Source: Rancher Desktop is open-source, which means that developers can contribute to the project and help improve the software. Now that we’ve explored the benefits of using Rancher Desktop, let’s dive into the migration process. Migration Steps The migration process from Docker to Rancher Desktop involves the following steps: Step 1: Back Up Docker Images Before you migrate from Docker to Rancher Desktop, you should back up the Docker images you rely on. Note that docker save archives images, not running containers; if a container holds state you want to keep, commit it to an image first with docker commit. This will ensure that you don’t lose anything during the migration process. You can use the Docker CLI to back up an image using the following command: docker save -o my_containers.tar my_image:latest This command will create a backup of your Docker image in a .tar file. Step 2: Uninstall Docker The next step is to uninstall Docker from your system. You can do this using the following command: sudo apt-get remove docker docker-engine docker.io containerd runc This command will remove all Docker-related packages from your system. Step 3: Install Rancher Desktop Once you’ve uninstalled Docker, you can proceed with installing Rancher Desktop. You can download the latest version of Rancher Desktop from the official website. Step 4: Import Docker Images After you’ve installed Rancher Desktop, you can import your saved images into Rancher Desktop using the following command: docker load -i my_containers.tar This command will import your Docker images into Rancher Desktop’s container runtime. Step 5: Start Containers Once you’ve imported your Docker images into Rancher Desktop, you can start containers from them using the Rancher Desktop interface. Simply navigate to the “Containers” section of the interface and click on “Start” to start your containers. Step 6: Test Containers After you’ve started your containers, you should test them to ensure that they’re working correctly.
You can do this by accessing your containers’ web interfaces and checking that they’re functioning correctly. Steps to build container images using Rancher Desktop and create a Docker image with a Dockerfile: Install Rancher Desktop (on macOS, for example, with Homebrew using the command brew install --cask rancher). Choose CLI options: a. nerdctl for the containerd runtime b. docker CLI for the Moby (dockerd) runtime Create a folder with two files inside it, a Dockerfile and an HTML file. To create a Dockerfile, use the command vi Dockerfile. Populate the Dockerfile with the following code: FROM alpine CMD ["echo", "Hello World"] Build the image using nerdctl with the command nerdctl -n k8s.io build --tag helloworld:latest . List images with the command nerdctl -n k8s.io images Create a namespace with the command nerdctl namespace create test List namespaces with the command nerdctl namespace ls Go to Rancher Desktop, navigate to Images, select the Image Namespace test, click on Add Image and pull the nginx:latest image. Run the built image with the command nerdctl -n k8s.io run -d -p 8084:80 helloworld:latest in the k8s.io namespace. Conclusion Migrating from Docker to Rancher Desktop is a straightforward process.
Containerization has resulted in many businesses and organizations developing and deploying applications differently. A recent report by Gartner indicated that by 2022, more than 75% of global organizations would be running containerized applications in production, up from less than 30% in 2020. However, while containers come with many benefits, they certainly remain a source of cyberattack exposure if not appropriately secured. Previously, cybersecurity meant safeguarding a single "perimeter." By introducing new layers of complexity, containers have rendered this concept outdated. Containerized environments have many more abstraction levels, which necessitates using specific tools to interpret, monitor, and protect these new applications. What Is Container Security? Container security means using a set of tools and policies to protect containers from potential threats that could affect the application, infrastructure, system libraries, runtime, and more. Container security involves implementing a secure environment for the container stack, which consists of the following: Container image Container engine Container runtime Registry Host Orchestrator Most software professionals automatically assume that Docker and the Linux kernel are secure from malware, an assumption that is easy to make but risky to rely on. Top 5 Container Security Best Practices 1. Host and OS Security Containers provide isolation from the host, although they both share kernel resources. This aspect is often overlooked: isolation makes it more difficult, but not impossible, for an attacker to compromise the OS through a kernel exploit and gain root access to the host. Hosts that run your containers need their own set of security controls, starting with keeping the underlying host operating system up to date and, for example, running the latest version of the container engine. Ideally, you will need to set up some monitoring to be alerted to any vulnerabilities at the host layer. Also, choose a "thin OS," which will speed up your application deployment and reduce the attack surface by removing unnecessary packages and keeping your OS as minimal as possible. Essentially, in a production environment, there is no need to let a human admin SSH to the host to apply any configuration changes. Instead, it would be best to manage all hosts through IaC with Ansible or Chef, for instance. This way, only the orchestrator can have ongoing access to run and stop containers. 2. Container Vulnerability Scans Regular vulnerability scans of your container or host should be carried out to detect and fix potential threats that hackers could use to access your infrastructure. Some container registries provide this kind of feature; when your image is pushed to the registry, it will automatically scan it for potential vulnerabilities. One way you can be proactive is to set up a vulnerability scan in your CI pipeline by adopting the "shift left" philosophy, which means you implement security early in your development cycle. Trivy would be an excellent choice to achieve this. If you are trying to set up this kind of scan for your on-premises nodes, Wazuh is a solid option that will log every event and verify it against multiple CVE (Common Vulnerabilities and Exposures) databases. 3. Container Registry Security Container registries provide a convenient and centralized way to store and distribute images. It is common to find organizations storing thousands of images in their registries.
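As a concrete illustration of the image scanning described above, whether it runs in a CI pipeline or against images already pushed to a registry, here is a minimal sketch assuming Trivy is installed; the image name is a placeholder.

Shell
# Scan an image and fail the pipeline if high or critical vulnerabilities are found
trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/my-app:1.0.0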
Since the registry is so important to the way a containerized environment works, it must be well protected. Therefore, investing time to monitor and prevent unauthorized access to your container registry is something you should consider. 4. Kubernetes Cluster Security Another action you can take is to reinforce security around your container orchestration, such as preventing risks from over-privileged accounts or attacks over the network. Following the least-privilege access model and protecting pod-to-pod communications would limit the damage done by an attack. A tool we would recommend in this case is Kube Hunter, which acts as a penetration testing tool. As such, it allows you to run a variety of tests on your Kubernetes cluster so you can start taking steps to improve its security. You may also be interested in Kubescape, which is similar to Kube Hunter; it scans your Kubernetes cluster, YAML files, and Helm charts to provide you with a risk score. 5. Secrets Security A container or Dockerfile should not contain any secrets (certificates, passwords, tokens, API keys, etc.), and yet we often see secrets hard-coded into the source code, images, or build process. Choosing a secret management solution will allow you to store secrets in a secure, centralized vault. Conclusion These are some of the proactive security measures you can take to protect your containerized environments. This is vital because Docker has only been around for a short time, which means its built-in management and security capabilities are still in their infancy. The good news is that achieving decent security in a containerized environment can be done with multiple tools, such as the ones listed in this article.
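As a practical starting point for the cluster checks described above, both Kube Hunter and Kubescape can be driven from the command line; this is a rough sketch, and the installation method, node IP, and manifest file name are placeholders that will vary by environment:
Shell
# Kube Hunter: probe a node you own for known weaknesses (the IP is a placeholder).
pip install kube-hunter
kube-hunter --remote 10.0.0.12
# Kubescape: scan the live cluster and local manifests to get a risk score.
kubescape scan
kubescape scan deployment.yaml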
Container orchestration is a critical aspect of modern software development, enabling organizations to deploy and manage large-scale containerized applications. In this article, we will discuss what container orchestration is, why it is important, and some of the popular container orchestration tools available today. What Is Container Orchestration? Container orchestration is the process of automating the deployment, scaling, and management of containerized applications. Containers are lightweight, portable software units that can run anywhere, making them ideal for modern, distributed applications. However, managing containerized applications can be complex, as they typically consist of multiple containers that must be deployed, configured, and managed as a single entity. Container orchestration tools provide a platform for automating tasks, enabling organizations to manage large-scale containerized applications easily. They typically provide features such as automated deployment, load balancing, service discovery, scaling, and monitoring, making it easier to manage complex containerized applications. One of the most popular container orchestration tools is Kubernetes, which was developed by Google. Kubernetes provides a platform for automating the deployment, scaling, and management of containerized applications and has a large and active community. Other popular container orchestration tools include Docker Swarm, Apache Mesos, and Nomad. Container orchestration is important for organizations that develop and deploy modern, distributed applications. Containerization provides several benefits, including improved portability, scalability, and agility. However, managing containerized applications can be challenging, particularly as the number of containers and applications increases. Container orchestration tools provide a way to automate the management of containerized applications, enabling organizations to deploy and manage complex applications with ease. They also help ensure applications are highly available, scalable, and reliable, making it easier to deliver high-quality services to customers. Why Is Container Orchestration Important? Container orchestration is important for several reasons, particularly for organizations that develop and deploy modern, distributed applications. Here are some of the key reasons why container orchestration is important: Automation Container orchestration tools enable organizations to automate the deployment, scaling, and management of containerized applications. This reduces the need for manual intervention, making it easier to manage large-scale applications. Scalability Container orchestration tools provide features, such as automatic scaling and load balancing, which make it easier to scale applications up or down as demand changes. Container orchestration platforms make it easy to scale applications horizontally by adding or removing containers based on demand. Availability Container orchestration tools help ensure applications are highly available and reliable by providing features such as service discovery and self-healing. Portability Containers are portable, meaning they can be run anywhere, from local development environments to public cloud platforms. Container orchestration tools enable organizations to manage containerized applications across different environments and platforms, making it easier to move applications between different infrastructure providers. 
Container orchestration platforms provide a high degree of portability, enabling developers to run their applications in any environment, from on-premises data centers to public cloud environments. Flexibility Container orchestration tools provide a flexible and modular platform for managing containerized applications, making it easier to customize and extend the platform to meet specific requirements. Efficiency Container orchestration platforms automate many of the tasks involved in managing containerized applications, which can save developers time and reduce the risk of errors. Resiliency Container orchestration platforms offer self-healing capabilities that ensure applications remain available and responsive even with failures. Overall, container orchestration is essential for organizations that are developing and deploying modern, distributed applications. By automating the deployment, scaling, and management of containerized applications, container orchestration tools enable organizations to deliver high-quality services to customers, while also reducing the complexity and cost of managing containerized applications. Popular Container Orchestration Tools There are several container orchestration tools available, each with its own strengths and weaknesses. The most popular container orchestration tool is Kubernetes, which is an open-source platform for managing containerized applications. Kubernetes provides a robust set of features for managing containers, including container deployment, scaling, and health monitoring. Other popular container orchestration tools include Docker Swarm, which is a simple and lightweight orchestration tool, and Apache Mesos, which is a highly scalable and flexible orchestration tool. Kubernetes Kubernetes is one of the most popular container orchestration tools and is widely used in production environments. It provides a rich set of features, including automatic scaling, load balancing, service discovery, and self-healing. Docker Swarm Docker Swarm is a container orchestration tool that is tightly integrated with the Docker ecosystem. It provides a simple and easy-to-use platform for managing containerized applications but has fewer features than Kubernetes. Apache Mesos Apache Mesos is a distributed systems kernel that provides a platform for managing resources across clusters of machines. It can be used to manage a wide range of workloads, including containerized applications. Nomad Nomad is a container orchestration tool developed by HashiCorp. It provides a simple and flexible platform for managing containerized applications and can be used to manage containers and non-container workloads. OpenShift OpenShift is a container application platform developed by Red Hat. It is based on Kubernetes but provides additional features and capabilities, such as integrated developer tools and enterprise-grade security. Amazon ECS Amazon Elastic Container Service (ECS) is a fully managed container orchestration service provided by Amazon Web Services. It provides a simple and easy-to-use platform for managing containerized applications on the AWS cloud platform. Google Cloud Run Google Cloud Run is a fully managed serverless container platform provided by Google Cloud. It allows developers to run containerized applications without the need to manage the underlying infrastructure. 
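Whichever platform you pick, the day-to-day workflow is broadly the same: you describe the desired state declaratively and let the orchestrator converge on it. The sketch below uses Kubernetes, since it is the most common choice; the deployment name, image, and scaling thresholds are illustrative placeholders:
Shell
# Declare a deployment with three replicas; the orchestrator keeps them running.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: nginx:1.25
          ports:
            - containerPort: 80
EOF
# Scale manually, or hand scaling decisions to an autoscaler based on CPU usage.
kubectl scale deployment hello-web --replicas=5
kubectl autoscale deployment hello-web --min=3 --max=10 --cpu-percent=80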
Overall, the choice of container orchestration tool will depend on a range of factors, including the specific requirements of the organization, the size and complexity of the application, and the preferred infrastructure platform. Container Orchestration Best Practices To ensure successful container orchestration, there are several best practices that organizations should follow. These include: Standardize Container Images Use standardized container images to ensure consistency and repeatability in deployments. Monitor Container Health Use container monitoring tools to ensure containers are healthy and performing as expected. Automate Deployments Use automated deployment tools to reduce the risk of human error and ensure consistent deployments. Implement Resource Quotas Implement resource quotas to ensure containerized applications are not overprovisioned and to optimize resource utilization. Plan for Disaster Recovery Plan for disaster recovery by implementing backup and restore processes, and testing disaster recovery plans regularly. Conclusion Container orchestration is an essential aspect of modern software development, enabling organizations to manage large-scale containerized applications with ease. By automating the deployment, scaling, and management of containerized applications, container orchestration tools enable organizations to deliver high-quality services to customers, while also reducing the complexity and cost of managing containerized applications. With several popular container orchestration tools available, organizations have a wide range of options for managing containerized applications and can choose the platform that best meets their needs. Container orchestration is a critical element of modern software development and deployment. It enables organizations to manage containerized applications at scale, ensuring they are highly available and resilient. By following best practices and leveraging container orchestration tools like Kubernetes, organizations can optimize resource utilization, accelerate the software development lifecycle, and reduce the risk of human error.
Docker has revolutionized the way we build and deploy applications. It provides a platform-independent environment that allows developers to package their applications and dependencies into a single container. This container can then be easily deployed across different environments, making it an ideal solution for building and deploying applications at scale. Building Docker images from scratch is a must-have skill for any DevOps engineer working with Docker. It allows you to create custom images tailored to your application's specific needs, making your deployments more efficient and reliable. In this blog, we'll explore what Docker images are, their benefits, the process of building Docker images from scratch, and the best practices for building them. What Is a Docker Image? A Docker image is a lightweight, standalone, executable package that includes everything needed to run the software, including code, libraries, system tools, and settings. Docker images are built using a Dockerfile, which is a text file that contains a set of instructions for building the image. These instructions specify the base image to use, the packages and dependencies to install, and the configuration settings for the application. Docker images are designed to be portable and can be run on any system that supports Docker. They are stored in a central registry, such as Docker Hub, where they can easily be shared and downloaded. By using Docker images, developers can quickly and easily deploy their applications in a consistent and reproducible manner, regardless of the underlying infrastructure. This makes Docker images an essential tool for modern software development and deployment. Benefits of Building a Docker Image By building Docker images, you can improve the consistency, reliability, and security of your applications. In addition, Docker images make it easy to deploy and manage applications, which helps to reduce the time and effort required to maintain your infrastructure. Here are some major benefits of building a Docker image: Portability: Docker images are portable and can run on any platform that supports Docker. This makes moving applications between development, testing, and production environments easy. Consistency: Docker images provide a consistent environment for running applications. This ensures that the application behaves the same way across different environments. Reproducibility: Docker images are reproducible, which means you can recreate the same environment every time you run the image. Scalability: Docker images are designed to be scalable, which means that you can easily spin up multiple instances of an application to handle increased traffic. Security: Docker images provide a secure way to package and distribute applications. They allow you to isolate your application from the host system and other applications running on the same system. Efficiency: Docker images are lightweight and take up minimal disk space. This makes it easy to distribute and deploy applications quickly. Versioning: Docker images can be versioned, which allows you to track changes and roll back to previous versions if necessary. Structure of a Docker Image A Docker image is a read-only template that contains the instructions for creating a Docker container. Before you learn how to build a Docker image, let's read about its structure first.
The structure of a Docker image includes the following components: Base Image A Docker image is built on top of a base image, which is the starting point for the image. The base image can be an official image from the Docker Hub registry or a custom image created by another user. Filesystem The filesystem of a Docker image is a series of layers that represent the changes made to the base image. Each layer contains a set of files and directories that represent the differences from the previous layer. Metadata Docker images also include metadata that provides information about the image, such as its name, version, author, and description. This metadata is stored in a file called the manifest. Dockerfile The Dockerfile is a text file that contains the instructions for building the Docker image. It specifies the base image, the commands to run in the image, and any additional configuration needed to create the image. Before learning how to build an image with the docker build command, it is helpful to understand how a Dockerfile works. Configuration Files Docker images may also include configuration files that are used to customize the image at runtime. These files can be mounted as volumes in the container to provide configuration data or environment variables. Runtime Environment Finally, Docker images may include a runtime environment that specifies the software and libraries needed to run the application in the container. This can include language runtimes such as Python or Node.js, or application servers such as Apache or Nginx. The structure of a Docker image is designed to be modular and flexible, allowing technology teams to create images tailored to their specific needs while maintaining consistency and compatibility across different environments. How to Build a Docker Image? To build a Docker image, you need to follow these steps: Create a Dockerfile A Dockerfile is a script that contains instructions on how to build your Docker image. The Dockerfile specifies the base image, dependencies, and application code that are required to build the image. Once you have created the Dockerfile and understand how it works, move to the next step. Define the Dockerfile Instructions In the Dockerfile, you need to define the instructions for building the Docker image. These instructions include defining the base image, installing dependencies, copying files, and configuring the application. Build the Docker Image To build the Docker image, use the docker build command. This command takes the Dockerfile as input and builds the Docker image. When running the docker build command, you can also specify the name and tag for the image using the -t option. Test the Docker Image Once the Docker image is built, you can test it locally using the docker run command. This command runs a container from the Docker image and allows you to test the application. Push the Docker Image to a Registry Once you have tested the Docker image, you can push it to a Docker registry such as Docker Hub or a private registry. This makes it easy to share the Docker image with others and deploy it to other environments. Let's see an example of the docker build command. Once you've created your Dockerfile, you can use the docker build command to build the image.
Here's the basic syntax for the docker build command: docker build -t <image-name> <build-context> The build context is the directory that is sent to the Docker daemon, and the Dockerfile is read from it by default. In this example, if your Dockerfile is located in the current directory and you want to name your image "my-app," you can use the following command: docker build -t my-app . This command builds the Docker image using the current directory as the build context and sets the name and tag of the image to "my-app." Best Practices for Building a Docker Image Here are some best practices to follow when building a Docker image: First, use a small base image: Use a small base image such as Alpine Linux or BusyBox when building a Docker image. This helps to reduce the size of your final Docker image and improves security by minimizing the attack surface. Use a .dockerignore file: Use a .dockerignore file to exclude files and directories that are not needed in the Docker image. This helps to reduce the size of the context sent to the Docker daemon during the build process. Use multistage builds: Use multistage builds to optimize your Docker image size. Multistage builds allow you to build multiple images in a single Dockerfile, which can help reduce the number of layers in your final image. Minimize the number of layers: Minimize the number of layers in your Docker image to reduce the build time and image size. Each layer in a Docker image adds overhead, so it's important to combine multiple commands into a single layer. Use specific tags: Use specific tags for your Docker image instead of the latest tag. This helps to ensure that you have a consistent and reproducible environment. Avoid installing unnecessary packages: Avoid installing unnecessary packages in your Docker image to reduce the image size and improve security. Use COPY instead of ADD: Use the COPY command instead of ADD to copy files into your Docker image. The COPY command is more predictable and has fewer side effects than the ADD command. Avoid using root user: Avoid using the root user in your Docker image to improve security. Instead, create a non-root user and use that user in your Docker image. Docker Images: The Key to Seamless Container Management By following the steps and practices outlined in this blog, you can create custom Docker images tailored to your application's specific needs. This will not only make your deployments more efficient and reliable, but it will also help you to save time and resources. With these skills, you can take your Docker knowledge to the next level and build more efficient and scalable applications. Docker is a powerful tool for building and deploying applications, but it can also be complex and challenging to manage. Whether you're facing issues with image compatibility, security vulnerabilities, or performance problems, it's important to have a plan in place for resolving these issues quickly and effectively.
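Pulling several of these practices together, the sketch below shows a hypothetical build that uses a small pinned base image, a .dockerignore file, COPY rather than ADD, a non-root user, and a specific version tag; app.sh, the image name, and the registry URL are placeholders, not files from this article:
Shell
cat > Dockerfile <<'EOF'
FROM alpine:3.19
# Create and switch to a non-root user.
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY app.sh .
USER app
CMD ["sh", "./app.sh"]
EOF
cat > .dockerignore <<'EOF'
.git
*.md
EOF
# Build with a specific tag, test locally, then tag and push to a registry.
docker build -t my-app:1.0.0 .
docker run --rm my-app:1.0.0
docker tag my-app:1.0.0 registry.example.com/my-app:1.0.0
docker push registry.example.com/my-app:1.0.0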
Native Image technology is gaining traction among developers whose primary goal is to accelerate application startup time. In this article, we will learn how to turn Java applications into native images and then containerize them for further deployment in the cloud. We will use: Spring Boot 3.0 with baked-in support for Native Image as the framework for our Java application; Liberica Native Image Kit (NIK) as a native-image compiler; Alpaquita Stream as a base image. Building Native Images from Spring Boot Apps Installing Liberica NIK It is best to use a powerful computer with several gigabytes of RAM to work with native images. Opt for a cloud service provided by Amazon or a workstation so as not to overload your laptop. We will be using Linux bash commands further on because bash is a convenient way of accessing the code remotely. macOS commands are similar. As for Windows, you can use any alternative, for instance, the bash included in the Git package for Windows. Download Liberica Native Image Kit for your system. Choose the Full version for our purposes. Unpack the tar.gz with:
tar -xzvf ./bellsoft-liberica.tar.gz
Now, add the compiler to $PATH with:
GRAALVM_HOME=/home/user/opt/bellsoft-liberica
export PATH=$GRAALVM_HOME/bin:$PATH
Check that Liberica NIK is installed:
java -version
openjdk version "17.0.5" 2022-10-18 LTS
OpenJDK Runtime Environment GraalVM 22.3.0 (build 17.0.5+8-LTS)
OpenJDK 64-Bit Server VM GraalVM 22.3.0 (build 17.0.5+8-LTS, mixed mode, sharing)
native-image --version
GraalVM 22.3.0 Java 17 CE (Java Version 17.0.5+8-LTS)
If you get the error "java: No such file or directory" on Linux, you installed the binary for Alpine Linux instead of standard Linux. Check the binary carefully. Creating a Spring Boot Project The easiest way to create a new Spring Boot project is to generate one with Spring Initializr. Select Java 17, Maven, JAR, and the Spring snapshot version (3.0.5 at the time of writing this article), then fill in the fields for project metadata. We don’t need any dependencies. Add the following line to your main class: System.out.println("Hello from Native Image!"); Spring has a separate plugin for native compilation, which utilizes multiple context-dependent parameters under the hood. Let’s add the required configuration to our pom.xml file:
XML
<profiles>
  <profile>
    <id>native</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.graalvm.buildtools</groupId>
          <artifactId>native-maven-plugin</artifactId>
          <executions>
            <execution>
              <id>build-native</id>
              <goals>
                <goal>compile-no-fork</goal>
              </goals>
              <phase>package</phase>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
Let’s build the project with the following command: ./mvnw clean package -Pnative The resulting native image is in the target directory. Write a Dockerfile We need a Dockerfile to generate a Docker container image. Put the following file into the application folder:
Dockerfile
FROM bellsoft/alpaquita-linux-base:stream-musl
COPY target/native-image-demo .
CMD ["./native-image-demo"]
Where we: Create an image from the Alpaquita Linux base image (the native image doesn’t need a JVM to execute); Copy the app into the new image; Run the program inside the container. We can also skip the Liberica NIK installation step and build the native image directly in a container, which is useful when the development and deployment architectures are different.
For that purpose, create another folder, and put your application and the following Dockerfile there:
Dockerfile
FROM bellsoft/liberica-native-image-kit-container:jdk-17-nik-22.3-stream-musl as builder
WORKDIR /home/myapp
ADD native-image-demo /home/myapp/native-image-demo
RUN cd native-image-demo && ./mvnw clean package -Pnative
FROM bellsoft/alpaquita-linux-base:stream-musl
WORKDIR /home/myapp
COPY --from=builder /home/myapp/native-image-demo/target/native-image-demo .
CMD ["./native-image-demo"]
Where we: Specify the base image for Native Image generation; Point to the directory where the image will execute inside Docker; Copy the program to the directory; Build a native image; Create another image from the Alpaquita Linux base image (the native image doesn’t need a JVM to execute); Specify the executable directory; Copy the app into the new image; Run the program inside the container. Build a Native Image Container To generate a native image and containerize it, run: docker build . Note that if you use an Apple M1 machine, you may experience trouble building a native image inside a container. Check that the image was created with the following command:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 8ebc2a97ef8e 18 seconds ago 45.2MB
Tag the newly created image: docker tag 8ebc2a97ef8e nik-example Now you can run the image with: docker run -it --rm 8ebc2a97ef8e Hello from Native Image! Conclusion Native image containerization is as simple as creating Docker container images of standard Java apps. Migrating a Java application to Native Image is much trickier. We used a simple program that didn’t require any manual configuration. But dynamic Java features (reflection, JNI, serialization, etc.) are not automatically supported by GraalVM, so you have to make the native-image tool aware of them.
IBM App Connect Enterprise (ACE) has provided support for the concept of “shared classes” for many releases, enabling various use cases including providing supporting Java classes for JMS providers and also for caching data in Java static variables to make it available across whole servers (plus other scenarios). Some of these scenarios are less critical in a containerized server, and others might be handled by using shared libraries instead, but for the remaining scenarios there is still a need for the shared classes capability in containers. What Is the Equivalent of /var/mqsi/shared-classes in Containers? Adding JARs to shared classes is relatively simple when running ACE in a virtual machine: copying the JAR files into a specific directory such as /var/mqsi/shared-classes allows all flows in all servers to make use of the Java code. There are other locations that apply only to certain integration nodes or servers, but the basic principle is the same, and this only needs to be performed once for a given version of the supporting JAR, as the copy action is persistent across redeploys and reboots. The container world is different, in that it starts with a fixed image every time, so copying files into a specific location must either be done when building the container image, or else done every time the container starts (because changes to running containers are generally non-persistent). Further complicating matters is the way flow redeploy works with containers: the new flow is run in a new container, and the old container with the old flow is deleted, so any changes to the old container are lost. Two main categories of solution exist in the container world: Copy the shared classes JARs into the container image during the container build, and Deploy the shared classes JARs in a BAR file or configuration in IBM Cloud Pak for Integration (CP4i) and configure the server to look for them. There is also a modified form of the second category that uses persistent volumes to hold the supporting JARs, but from an ACE point of view it is very similar to the CP4i configuration method. The following discussion uses an example application from the GitHub repo at https://github.com/trevor-dolby-at-ibm-com/ace-shared-classes to illustrate the question and some of the answers. Original Behavior With ACE in a Virtual Machine Copying the supporting JAR file into /var/mqsi/shared-classes was sufficient when running in a virtual machine, as the application would be able to use the classes without further configuration. The application would start and run successfully, and other applications would also be able to use the same shared classes across all servers. Container Solution 1: Copy the Shared Classes JARs in While Building the Container Image This solution has several variants, but they all result in the container starting up with the supporting JAR already in place. ACE servers will automatically look in the “shared-classes” directory within the work directory, and so it is possible to simply copy the JARs into the correct location; the following example from the Dockerfile in the repo mentioned above shows this:
# Copy the pre-built shared JAR file into place
RUN mkdir /home/aceuser/ace-server/shared-classes
COPY SharedJava.jar /home/aceuser/ace-server/shared-classes/
and the server in the container will load the JAR into the shared classloader. Note that this solution also works for servers running locally during development in a virtual machine.
It also means that any change to the supporting JAR requires a rebuild of the container image, but this may not be a problem if a CI/CD pipeline is used to build application-specific container images. The server may also be configured to look elsewhere for shared classes by setting the additionalSharedClassesDirectories parameter in server.conf.yaml. This parameter can be set to a list of directories to use, and then the supporting JAR files can be placed anywhere in the container. The following example shows the JAR file in the “/git/ace-shared-classes” directory. This solution would be most useful for cases where the needed JAR files are already present in the image, possibly as part of another application installation. Container Solution 2: Deploy the Shared Classes JARs in a BAR File or Configuration in CP4i For many CP4i use cases, the certified container image will be used unmodified, so the previous solution will not work as it requires modification of the container image. In these cases, the supporting JAR files can be deployed either as a BAR file or else as a “generic files” configuration. In both cases, the server must be configured to look for shared classes in the desired location. If the JAR files are small enough or if the shared artifacts are just properties files, then using a “generic files” configuration is a possible solution, as that type of configuration is a ZIP file that can contain arbitrary contents. The repo linked above shows an example of this, where the supporting JAR file is placed in a ZIP file in a subdirectory called “extra-classes” and additionalSharedClassesDirectories is set to “/home/aceuser/generic/extra-classes”. (If a persistent volume is used instead, then the “generic files” configuration is not needed and the additionalSharedClassesDirectories setting should point to the PV location; note that this requires the PV to be populated separately and managed appropriately (including allowing multiple simultaneous versions of the JARs in many cases)). The JAR file can also be placed in a shared library and deployed in a BAR file, which allows the supporting JARs to be any size and also allows a specific version of the supporting JARs to be used with a given application. In this case, the supporting JARs must be copied into a shared library and then additionalSharedClassesDirectories must be set to point the server at the shared library to tell it to use it as shared classes. This example uses a shared library called SharedJavaLibrary and so additionalSharedClassesDirectories is set to “{SharedJavaLibrary}”. Shared libraries used this way cannot also be used by applications in the server. Summary Existing solutions that require the use of shared classes can be migrated to containers without needing to be rewritten, with two categories of solution that allow this. The first category would be preferred if building container images is possible, while the second would be preferred if a certified container image is used as-is. For further reading on container image deployment strategies, see Comparing Styles of Container-Based Deployment for IBM App Connect Enterprise; ACE servers can be configured to work with shared classes regardless of which strategy is chosen.
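For the “generic files” route described above, a rough sketch of the packaging step might look like the following; the ZIP file name is a placeholder, while SharedJava.jar, the extra-classes subdirectory, and the additionalSharedClassesDirectories value follow the example in the linked repo:
Shell
# Package the supporting JAR as a "generic files" configuration (a ZIP).
mkdir -p extra-classes
cp SharedJava.jar extra-classes/
zip -r shared-classes-config.zip extra-classes
# The server is then pointed at the unpacked location, for example in server.conf.yaml:
#   additionalSharedClassesDirectories: '/home/aceuser/generic/extra-classes'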
Yitaek Hwang
Software Engineer,
NYDIG
Abhishek Gupta
Principal Developer Advocate,
AWS
Alan Hohn
Director, Software Strategy,
Lockheed Martin
Marija Naumovska
Product Manager,
Microtica