An Intro to Azure Container Instances
Azure Container Instances is out for public preview! See how you can use it to easily create containers without worrying about the underlying infrastructure.
Microsoft just released a public preview of a new service, Azure Container Instances. This may seem confusing at first, since Azure already has a container offering, Azure Container Service (ACS), but this is a somewhat different product. ACS is a full container hosting solution, including orchestrators, deployed on top of multiple IaaS-based Azure virtual machines. Azure Container Instances (ACI) is not an orchestrator; it is a platform for deploying containers quickly and simply. The exciting difference with ACI is that containers are now a first-class object in Azure. Using ACI, you just deploy a container object; there is no need to deploy and manage VMs to host your containers. You simply specify the size of the container you require, without having to care about the underlying VMs or storage.
This simple container-as-a-service offering should be useful to anyone looking to quickly spin up a container without having to worry about the infrastructure behind it. What is even more exciting is its potential as a platform that orchestrators can use to provision containers (more on this later in the article).
In this article, we’ll take a look at some of the basics of this service, how it can be used, and its limitations.
ACI is in preview, so there are some limitations on what you can do with the service:
- Only Linux containers are supported at the moment — Windows containers to come in the future.
- It’s not currently possible to attach a container to a virtual network.
- Use of ACI is currently only through the Azure Cloud Shell or using Azure Resource Manager templates. There is no GUI in the portal and no PowerShell or command line option to run locally. We’ll focus on using the cloud shell in this article.
- There are some limitations both on region availability and the size of containers in a region.
Containers run on the host machine using Hyper-V isolation, so container instances are as isolated as virtual machines in Azure. This means containers should offer the same level of security and performance isolation from other users sharing the same host.
Creating a Container
Creating a container is a single-line command in the cloud shell, with various options to go along with it. At a minimum, you need to specify a name for the container, the image you want to use (which can come from Docker Hub or a private registry), and the resource group to create it in; optionally, you can request a public IP with the ip-address option. Note that your container name needs to be lower case, otherwise you’ll see an error.
az container create --name helloworld --image microsoft/aci-helloworld --resource-group containertesting --ip-address public
This creates a new container with the default size of 1 CPU core and 1.5 GB of memory, assigns a public IP, and exposes port 80.
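Once deployed, you can check the container’s state and pull its logs from the cloud shell. A quick sketch, assuming the helloworld container and containertesting resource group from the create command above (the snippet is guarded so it is a no-op where the Azure CLI is not installed):

```shell
# Hypothetical follow-up commands for the container created above.
# Guarded so this does nothing on a machine without the Azure CLI.
if command -v az >/dev/null 2>&1; then
  # Show provisioning state and the assigned public IP
  az container show --name helloworld --resource-group containertesting
  # Fetch the container's stdout/stderr logs
  az container logs --name helloworld --resource-group containertesting
fi
```

The show command is handy for finding the public IP that was assigned, since there is no portal GUI to look it up in.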
As mentioned, there’s no interface in the portal for containers, so if you look in your resource group, you will see the object you created, but there is very little you can do with it in the GUI.
Once you're done with the container, you can delete it with the container delete command:
az container delete --name helloworld --resource-group containertesting
As mentioned, containers can also be deployed using ARM templates. You can find some examples here.
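As a rough sketch of what such a template looks like (the resource name, location, and API version here are illustrative; check the linked examples for the current schema), a single-container deployment is declared as a containerGroups resource:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2017-08-01-preview",
      "name": "helloworld",
      "location": "eastus",
      "properties": {
        "osType": "Linux",
        "containers": [
          {
            "name": "helloworld",
            "properties": {
              "image": "microsoft/aci-helloworld",
              "resources": { "requests": { "cpu": 1, "memoryInGb": 1.5 } },
              "ports": [ { "port": 80 } ]
            }
          }
        ],
        "ipAddress": { "type": "Public", "ports": [ { "protocol": "TCP", "port": 80 } ] }
      }
    }
  ]
}
```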
Unlike virtual machines, containers aren’t restricted to pre-set sizes. At deployment time, you can choose the number of CPU cores and the amount of memory. To specify these, amend your command to include the cpu and memory options:
az container create --name helloworld --image microsoft/aci-helloworld --resource-group containertesting --cpu 2 --memory 4 --ip-address public
I have seen some errors when deploying larger containers in certain regions (mainly West Europe), so I suspect there are some limits in place for the preview. Larger sizes in East US seemed to work fine. I’ve not yet been able to find documentation of how big your containers can get.
Data and Persistence
As mentioned, it’s not currently possible to connect a container to an Azure virtual network, so using local network resources for persisting data is not possible. Container instances do offer the ability to mount Azure file shares for persistent data, with plans to add Azure disks in the future.
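In an ARM template, mounting a file share comes down to declaring a volume on the container group and a matching volumeMount on the container. A hedged fragment (the share, storage account, and mount path names are placeholders, and the key would normally come from a template parameter):

```json
{
  "properties": {
    "containers": [
      {
        "name": "helloworld",
        "properties": {
          "image": "microsoft/aci-helloworld",
          "resources": { "requests": { "cpu": 1, "memoryInGb": 1.5 } },
          "volumeMounts": [
            { "name": "sharedvolume", "mountPath": "/mnt/share" }
          ]
        }
      }
    ],
    "volumes": [
      {
        "name": "sharedvolume",
        "azureFile": {
          "shareName": "myshare",
          "storageAccountName": "mystorageaccount",
          "storageAccountKey": "[parameters('storageKey')]"
        }
      }
    ]
  }
}
```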
Alternatively, if your data can be stored in a database, you can make use of public services like Azure SQL, Cosmos DB, or Table storage.
ACI has the concept of container groups. These are multiple containers that are deployed to the same host and share the same network and any mounted volumes. The concept is similar to Kubernetes pods. Container groups need to be deployed using an ARM template rather than through the cloud shell. You can see an example of this here.
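A container group is declared in the template simply by listing more than one container in the containers array; they then share the group’s IP address and volumes. A minimal sketch (the sidecar image name is illustrative, borrowed from Microsoft’s tutorial images):

```json
{
  "properties": {
    "osType": "Linux",
    "containers": [
      {
        "name": "web",
        "properties": {
          "image": "microsoft/aci-helloworld",
          "resources": { "requests": { "cpu": 1, "memoryInGb": 1.5 } },
          "ports": [ { "port": 80 } ]
        }
      },
      {
        "name": "sidecar",
        "properties": {
          "image": "microsoft/aci-tutorial-sidecar",
          "resources": { "requests": { "cpu": 1, "memoryInGb": 1.5 } }
        }
      }
    ],
    "ipAddress": { "type": "Public", "ports": [ { "protocol": "TCP", "port": 80 } ] }
  }
}
```

Because both containers land on the same host, the sidecar can reach the web container over localhost, which is what makes the pod-style pattern work.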
As already mentioned, ACI is not an orchestrator; it’s a tool for easy deployment of individual containers. However, ACI is intended to be used with orchestrators: the orchestrator can use ACI as its method of provisioning containers and does not have to care about managing hosts. ACI comes ready with an example (experimental) connector for Kubernetes, and I am sure that, further down the line, we will see connectors for other orchestrators.
Container pricing is a bit more complex than for VMs. You are charged on three metrics:
- Create request – a one-off fee for each creation of a container, currently $0.0025 in the East US region.
- Memory usage – The amount of memory used by each container per second. Billed at $0.0000125 per GB per second.
- CPU usage – The number of CPU cores used by each container per second. Billed at $0.000013 per core per second.
So a container created once with two cores and 2GB of memory, running for an hour, would cost:
$0.0025 + ((0.0000125 × 3600) × 2) + ((0.000013 × 3600) × 2) = $0.0025 + $0.09 + $0.0936 = $0.1861
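The same arithmetic, sketched as a small shell calculation (the rates are the preview East US prices quoted above):

```shell
# Cost of one container: create fee + memory-seconds + core-seconds.
# Rates are the preview East US prices quoted in the list above.
awk 'BEGIN {
  create_fee = 0.0025     # per create request
  mem_rate   = 0.0000125  # per GB per second
  cpu_rate   = 0.000013   # per core per second
  seconds    = 3600       # one hour
  cores      = 2
  mem_gb     = 2
  total = create_fee + mem_rate * seconds * mem_gb + cpu_rate * seconds * cores
  printf "$%.4f\n", total  # -> $0.1861
}'
```

Per-second billing means a container that runs for only a few minutes costs a fraction of a cent, which is where ACI undercuts keeping a VM running.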
Published at DZone with permission of Sam Cogan, DZone MVB. See the original article here.