How Are OpenStack Components Related?
By V. K. Cody Bumgardner

If you'll be working with OpenStack at a high level, even if you're simply responsible for a vendor-managed solution, this article will help you understand the collection of interacting components that make up an OpenStack deployment.
Since the first public release of OpenStack in 2010, the framework has grown from a few core components to nearly ten. There are now hundreds of OpenStack-related projects, each with various levels of interoperability. These projects range from OpenStack library dependencies to projects where the OpenStack framework itself is the dependency.

In an effort to provide structure around this diverse set of projects, the OpenStack Foundation created several project designations, including core, incubated, library, gating, supporting, and related. These project designations and their descriptions can be found in table 1.

Table 1: Project designations

Incubated projects, once fully developed and accepted, will eventually function in the same way core projects do. Library functions will be abstracted (not observable) by core project interaction. Gating and supporting projects aren't used to provide resources in a deployed system, so you don't need to worry about those. That leaves the related projects, which, as the name implies, have some affiliation with OpenStack, even if the affiliation is self-nominated.
Understanding Component Communication
Often when we refer to "OpenStack," we're referring to projects with a "core" designation. Core projects can use the OpenStack trademark and must pass all "must-pass" tests, as defined by the OpenStack Foundation. Simply put, core components are those that almost everyone will use in an OpenStack deployment. Projects like Compute, Networking, Storage, Shared Services, and Dashboard are examples of projects with a core designation, as shown in table 2 below.
Table 2: Core projects

| Project | Codename | Description |
| --- | --- | --- |
| Compute | Nova | Manages virtual machine (VM) resources, including CPU, memory, disk, and network interfaces |
| Networking | Neutron | Provides resources used by VM network interfaces, including IP addressing, routing, and software-defined networking (SDN) |
| Object Storage | Swift | Provides object-level storage accessible via a RESTful API |
| Block Storage | Cinder | Provides block-level (traditional disk) storage to VMs |
| Identity Service (shared service) | Keystone | Manages role-based access control (RBAC) for OpenStack components; provides authorization services |
| Image Service (shared service) | Glance | Manages VM disk images; provides image delivery to VMs and snapshot (backup) services |
| Telemetry Service (shared service) | Ceilometer | Centralized collection of metering and monitoring data for OpenStack components |
| Orchestration Service (shared service) | Heat | Template-based cloud application orchestration for OpenStack environments |
| Database Service (shared service) | Trove | Provides users with relational and non-relational database services |
| Dashboard | Horizon | Provides a web-based GUI for working with OpenStack |
In addition to the various project designations, there are also several topologies in which you can deploy project components. Specifically, if more of a specific resource (storage, compute, networking, and so on) is required, more component-specific servers can be added.
Dashboard Authentication Communication
Let's jump right in and take a look at how core components communicate. We'll walk through the process of accessing the OpenStack dashboard, reviewing the VM creation options, and creating a virtual machine.

You must first provide the dashboard with your login credentials and obtain an authentication token. The authentication token will be saved as a cookie in your web browser and used with subsequent requests. As shown in figure 1, you obtain an authentication token from the Identity service. While you can use the dashboard (instead of the CLI or APIs) to navigate through the rest of this example, we won't show the interaction with the dashboard. Even during the login process, the dashboard simply relays interactions between the web browser and the OpenStack APIs. We're primarily concerned with component interaction at the API level.
Figure 1: Dashboard login
#A User supplies credentials to log in: <username> <password>
#B Check whether the user is who they say they are
#C If the user is correctly identified, supply an authentication token they can use for the rest of their session
#D Welcome the user to the OpenStack dashboard, and provide the token for authentication
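To make the login exchange concrete, here's a minimal sketch of the token request from figure 1 in Python, using the requests library against a Keystone v2.0-style endpoint. The controller hostname, tenant, username, and password are placeholders for whatever your deployment uses.

```python
import requests

# Placeholder identity endpoint; substitute your deployment's Keystone URL.
KEYSTONE_URL = "http://controller:5000/v2.0/tokens"

payload = {
    "auth": {
        "tenantName": "demo",                 # tenant (project) to scope the token to
        "passwordCredentials": {
            "username": "demo",               # steps A/B: present credentials
            "password": "secret",
        },
    }
}

resp = requests.post(KEYSTONE_URL, json=payload)
resp.raise_for_status()

# Step C: Keystone returns a token to use for the rest of the session.
token = resp.json()["access"]["token"]["id"]
print("Authentication token:", token)
```

Every subsequent request in this walkthrough carries that token in an X-Auth-Token header, which is how the other components hand authorization decisions back to the Identity service.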
Once you have your authentication token, you can take the second step and access the Compute component to create your virtual machine (VM).
Resource Query and Request Communication
OpenStack works on a tenant model. If the OpenStack deployment is a hotel of resources, you can think of tenants as rooms in the hotel. Each tenant (room) is assigned a resource quota (a number of towels, beds, and so on). OpenStack users (guests) are assigned to tenants (rooms) through the use of roles. The identity information is kept by the Identity component, and the quota information is maintained by the Compute component.
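You can inspect a tenant's quota directly through the compute API. Here's a minimal sketch, assuming the legacy Nova v2 os-quota-sets extension; the endpoint, tenant ID, and token are placeholder values your deployment would supply.

```python
import requests

# Placeholder values; in practice the compute endpoint and tenant ID are
# discovered from the service catalog returned alongside the Keystone token.
TENANT_ID = "<tenant-id>"
NOVA_URL = f"http://controller:8774/v2/{TENANT_ID}"
HEADERS = {"X-Auth-Token": "<authentication-token>"}

# Ask the compute service (Nova) for this tenant's quota: instances,
# cores (vCPUs), RAM, and so on.
resp = requests.get(f"{NOVA_URL}/os-quota-sets/{TENANT_ID}", headers=HEADERS)
resp.raise_for_status()

for resource, limit in resp.json()["quota_set"].items():
    print(f"{resource}: {limit}")
```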
In the dashboard, when you click the Launch Instance button, the Compute component is queried to determine what resources and configurations are available in your current tenant. Based on the available options, you describe the VM you want and submit the configuration for creation.

The communication between components during a VM creation request is shown in figure 2. Because the creation of a VM isn't instantaneous, the process is executed asynchronously, so after you submit a VM for provisioning, you're returned to the dashboard. In the dashboard, your browser will periodically update the VM status information.
Figure 2: Resource query and request
#A What resources are available for the creation of virtual machines?
#B You have a quota of X units of {CPU, RAM, storage} resources, access to private and public networks, and an image for Ubuntu Linux 12.04.
#C I want to create a VM named myVM with {CPU: 2, RAM: 8 GB, storage: 40 GB} on the private network. Please load the Ubuntu Linux 12.04 image on myVM.
#D OK, I will start the process of provisioning myVM.
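At the API level, the exchange in figure 2 amounts to two GETs and a POST. Here's a minimal sketch against the legacy Nova v2 endpoints; the endpoint, token, and network ID are placeholders, and a real client would pick the flavor and image by name rather than taking the first entry in each list.

```python
import requests

TENANT_ID = "<tenant-id>"
NOVA_URL = f"http://controller:8774/v2/{TENANT_ID}"   # placeholder compute endpoint
HEADERS = {"X-Auth-Token": "<authentication-token>"}

# Steps A/B: ask Nova which flavors (resource bundles) and images are available.
flavors = requests.get(f"{NOVA_URL}/flavors", headers=HEADERS).json()["flavors"]
images = requests.get(f"{NOVA_URL}/images", headers=HEADERS).json()["images"]

# Step C: describe the VM and submit it for creation.
server_request = {
    "server": {
        "name": "myVM",
        "flavorRef": flavors[0]["id"],                # e.g., a 2-vCPU / 8 GB flavor
        "imageRef": images[0]["id"],                  # e.g., the Ubuntu Linux 12.04 image
        "networks": [{"uuid": "<private-network-id>"}],
    }
}
resp = requests.post(f"{NOVA_URL}/servers", headers=HEADERS, json=server_request)

# Step D: Nova accepts the request (HTTP 202) and provisions asynchronously.
print(resp.status_code, resp.json()["server"]["id"])
```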
Resource Provisioning Communication
When VM creation requests are submitted, the Compute service component will interact with other components to provision the requested VM. The first thing that happens is that the VM object record is registered with the Compute service component. This object record contains information about the VM status and configuration; the VM object isn't the VM instance, only a record describing the instance.

When components communicate in the VM creation process, they reference common objects, like this VM object. For instance, the Compute service component might request a storage assignment from the storage service component. The storage service component would then provision the requested storage and provide a reference to a storage object, which would then be referenced in the VM object record.

As shown in figure 3, the Compute service component communicates with other core components to provision and assign resources to the VM object. Compute will first request infrastructure components like storage and networking. When the virtual infrastructure has been assigned to the VM and referenced in the VM object, Compute will clone the requested operating system to the virtual storage volume. At this point, the VM creation process is complete and the VM is ready to start.
Figure 3: Provisioning of resources
#A Let's create a VM instance named myVM with a resource reservation of {CPU: 2, RAM: 8 GB}.
#B I need a 40 GB volume for myVM.
#C OK, a 40 GB volume has been allocated for myVM.
#D I need a virtual adapter created on the "private" network for myVM.
#E OK, an adapter has been assigned to myVM and placed on the "private" network.
#F I need the Ubuntu Linux 12.04 image cloned to the 40 GB volume on myVM.
#G OK. I have cloned the image to the volume on myVM.
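Nova drives these steps internally over its message bus, but you can sketch the same flow from the API side: ask Cinder (block storage) for a volume cloned from the image, then boot the VM from that volume. This is an illustrative analogue, not Nova's internal protocol; the endpoints, token, and IDs are placeholders, and it assumes the Cinder v2 API and Nova's boot-from-volume (block_device_mapping_v2) request form.

```python
import requests

TENANT_ID = "<tenant-id>"
CINDER_URL = f"http://controller:8776/v2/{TENANT_ID}"  # placeholder block storage endpoint
NOVA_URL = f"http://controller:8774/v2/{TENANT_ID}"    # placeholder compute endpoint
HEADERS = {"X-Auth-Token": "<authentication-token>"}

# Steps B/C and F/G: create a 40 GB volume with the Ubuntu Linux 12.04
# image cloned onto it by the block storage service.
volume_request = {
    "volume": {
        "name": "myVM-root",
        "size": 40,                                    # GB
        "imageRef": "<ubuntu-12.04-image-id>",         # image to clone onto the volume
    }
}
volume = requests.post(
    f"{CINDER_URL}/volumes", headers=HEADERS, json=volume_request
).json()["volume"]

# In practice you'd poll the volume until its status is "available" here.

# Steps A, D, and E: reserve CPU/RAM, attach an adapter on the "private"
# network, and boot the VM from the volume created above.
server_request = {
    "server": {
        "name": "myVM",
        "flavorRef": "<2-vcpu-8gb-flavor-id>",
        "networks": [{"uuid": "<private-network-id>"}],
        "block_device_mapping_v2": [{
            "boot_index": 0,
            "uuid": volume["id"],
            "source_type": "volume",
            "destination_type": "volume",
        }],
    }
}
requests.post(f"{NOVA_URL}/servers", headers=HEADERS, json=server_request)
```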
As demonstrated in the previous figures, core components work in concert to provide OpenStack services. OpenStack interactions, even those initiated in the dashboard, eventually find their way to the OpenStack API. As you'll see next, related projects often use these API calls exclusively to interact with OpenStack.
Related Project Communication
Let's take a look at how Ubuntu Juju, a related project, interacts with OpenStack. Juju is a cloud automation package that uses OpenStack for virtual infrastructure. Juju automates the deployment and configuration of applications on virtual infrastructure using application-specific charms.

For lack of a better description, Juju charms are collections of installation scripts that define how services and applications integrate into virtual infrastructure. Because infrastructure, including networks and storage, can be provisioned programmatically using OpenStack, Juju can deploy entire application suites from a charm. Simply put, Juju turns newly provisioned VM instances into applications. We'll discuss this process in detail in later chapters, but essentially you tell an application charm how large you want your instances to be and how many instances you want, and it does the work to deploy your applications.

The first step in using Juju in your OpenStack deployment is to deploy what Juju calls its bootstrap, using the Juju CLI. The bootstrap is a VM that Juju uses to control its automation processes. The deployment of the bootstrap, from a component perspective, is similar to what you've seen in the previous figures (see figures 1, 2, and 3). The difference here is that in place of the web browser making the request, it's the Juju application.

Once the bootstrap node has been started, Juju commands are issued to the bootstrap node, not directly to the OpenStack APIs. The reason for this is that the provisioning process is asynchronous, as mentioned earlier, and it's sometimes time-consuming. You don't want to maintain a connection from the desktop to the OpenStack deployment while a 20-VM application is deployed.

Let's consider an example where you use Juju and OpenStack to deploy a load-balanced, five-node WordPress service with a clustered MySQL backend. In this case, you have three types of service nodes: load-balancing, WordPress (Apache/PHP), and MySQL DB. Using the Juju charm developed for WordPress, you'd describe the number of nodes for each service, their virtual size (CPU, RAM, and so on), and how the nodes relate. You'd submit this charm to your bootstrap node, which would then interact with OpenStack to provision the application. This process is shown in figure 4.
Figure 4: OpenStack interacting with a related project
#A I want a five-node WordPress site with a pair of load balancers and a clustered database.
#B OK. I will start the process.
#C Create two lb_ instances with a resource reservation of {CPU: 2, RAM: 4 GB, storage: 10 GB}. Create five web_ instances with a resource reservation of {CPU: 2, RAM: 8 GB, storage: 40 GB}. Create two db_ instances with a resource reservation of {CPU: 4, RAM: 16 GB, storage: 80 GB}. Create a virtual network named wb_net between the lb_, web_, and db_ instances. Assign the "public" network to lb_.
#D OK. I will start the process.
Let's assume that OpenStack, through the direction of the bootstrap node, successfully provisions all the required virtual infrastructure. At this point you have a collection of VMs, but no applications. The bootstrap node polls OpenStack, watching for its requested VMs to come online. Once the VMs are online, it will start a process outside the OpenStack framework to complete the install. As shown in figure 5, the bootstrap node will communicate directly with the newly provisioned VMs. From this point forward, OpenStack simply provides the virtual infrastructure and is unaware of the application roles assigned to each VM.
Figure 5: Juju bootstrap controls the VMs
#A Install MySQL and configure an active/passive DB cluster using hosts db_0 and db_1.
#B Install Apache, PHP, and WordPress using the database cluster db_0/db_1.
#C Install HAProxy and load-balance web traffic for web_0–web_n.
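Before the bootstrap node can run the installs shown in figure 5, it has to wait for its VMs to come online. That polling loop is easy to picture at the API level: fetch each server's record until it leaves the BUILD state. A minimal sketch, again with placeholder endpoint, token, and server IDs:

```python
import time

import requests

TENANT_ID = "<tenant-id>"
NOVA_URL = f"http://controller:8774/v2/{TENANT_ID}"   # placeholder compute endpoint
HEADERS = {"X-Auth-Token": "<authentication-token>"}

def wait_for_server(server_id, interval=10):
    """Poll Nova until the given VM leaves the BUILD state."""
    while True:
        server = requests.get(
            f"{NOVA_URL}/servers/{server_id}", headers=HEADERS
        ).json()["server"]
        if server["status"] != "BUILD":               # e.g., ACTIVE or ERROR
            return server["status"]
        time.sleep(interval)

# Once every VM reports ACTIVE, the bootstrap node takes over, outside
# OpenStack, to install MySQL, WordPress, and HAProxy on the instances.
for server_id in ["<db_0-id>", "<db_1-id>", "<web_0-id>"]:
    print(server_id, wait_for_server(server_id))
```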
We've now discussed how the components of OpenStack communicate on the logical level. In the figures, we've illustrated component communication as if everything were running inside a single big node (physical instance). In practice, however, OpenStack components will be distributed across many physical commodity servers in a multinode topology.
Published at DZone with permission of Cody Bumgardner.