Configuring an OpenStack VM with Multiple Network Cards
[This article was written by Barak Merimovich.]
We have discussed OpenStack networking extensively in previous posts. In this post, I’d like to dive into a more advanced OpenStack networking scenario.
Many cloud images are not configured to automatically bring up all of their network cards; they will usually come up with only a single network card configured. To correctly set up a cloud host with multiple network cards, log on to the machine and bring up the additional interfaces:
    echo $'auto eth1\niface eth1 inet dhcp' | sudo tee /etc/network/interfaces.d/eth1.cfg > /dev/null
    sudo ifup eth1
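For reference, that one-liner simply writes this minimal ifupdown stanza into /etc/network/interfaces.d/eth1.cfg:

    auto eth1
    iface eth1 inet dhcp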
Networks in the cloud
A complex network architecture is a mainstay of modern IaaS clouds. Understanding how to configure your cloud-based networks, and hosts, is critical to getting your application working in the cloud. This is especially true with Cloudify, the open source cloud orchestration platform I work on.
The cloud, like the world, used to be flat
It was not that long ago that most IaaS providers only supported flat networks – all of your hosts were in one large network. Separation between services running in the cloud was enforced in software or with firewalls/security-groups. But technically, all of the hosts were connected to the same network and visible to each other.
The flat network model is simple, and therefore easy to reason about and understand. It was a good choice for the early days of the IaaS cloud, and no doubt helped with getting applications into the cloud in the first place. It was one of the things that made EC2 so easy to use for anyone just starting out with the ‘cloud’. This model is in fact still available on Amazon Web Services under the title ‘EC2-Classic’. And for many applications, a flat network is good enough.
But as cloud adoption increases, more complex applications are moving into the cloud, and issues like network separation, security, SLAs and broadcast domains make more complex network models a must. Software Defined Networks (SDN) fill that gap, and they are now a staple of most major IaaS clouds. AWS has VPC, OpenStack has the Neutron project, and there are many other implementations.
Working with SDN requires knowing a bit more about how information moves around between your cloud resources. In this post I am going to discuss how to set up a host in the cloud so it will play nice with complex networks. I’ll be using OpenStack, but the concepts are similar for other cloud infrastructures.
OpenStack configuration
I am going to start with an empty tenant, where only the public network is available.
First, let’s set up our networks and router:
    neutron router-create demo-router
    neutron net-create demo-network-1
    neutron net-create demo-network-2
    neutron subnet-create --name demo-subnet-1 demo-network-1 10.0.0.0/24
    neutron subnet-create --name demo-subnet-2 demo-network-2 10.0.1.0/24
    neutron router-interface-add demo-router demo-subnet-1
    neutron router-interface-add demo-router demo-subnet-2
    neutron router-gateway-set demo-router public
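If you are on a newer OpenStack release, note that the standalone neutron client has since been deprecated in favor of the unified openstack client. To the best of my knowledge, the equivalent setup looks roughly like this (exact flags may vary between releases):

    openstack router create demo-router
    openstack network create demo-network-1
    openstack network create demo-network-2
    openstack subnet create --network demo-network-1 --subnet-range 10.0.0.0/24 demo-subnet-1
    openstack subnet create --network demo-network-2 --subnet-range 10.0.1.0/24 demo-subnet-2
    openstack router add subnet demo-router demo-subnet-1
    openstack router add subnet demo-router demo-subnet-2
    openstack router set --external-gateway public demo-router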
Note the network IDs:

    neutron net-list
    | id                                   | name           | subnets                                          |
    | 2c33efe2-6204-4125-9716-3bc525630016 | demo-network-1 | 928dafa0-83ef-459c-b20d-71d8ea596fa2 10.0.0.0/24 |
    | aa30627e-c181-4a4b-89bf-5dd7c26c244e | demo-network-2 | 26d573f7-7953-4a54-825b-ed7bbc0661c7 10.0.1.0/24 |
    | e502de8d-929a-4ee0-bd18-efa297875cf6 | public         | d40dab51-a729-452c-9ee6-b9ad08d10808             |
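We will need these IDs when booting the VM below. As a small convenience of my own, not part of the original walkthrough, you can capture them into shell variables instead of copy-pasting (this assumes the tabular output format shown above):

    NETWORK1_ID=$(neutron net-list | awk '/demo-network-1/ {print $2}')
    NETWORK2_ID=$(neutron net-list | awk '/demo-network-2/ {print $2}')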
We’ll start with a standard Ubuntu cloud image:

    glance image-create --name "Ubuntu 12.04 Standard" \
        --location "http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img" \
        --disk-format qcow2 --container-format bare

Create the keypair and security group:
    nova keypair-add demo-keypair > demo-keypair.pem
    chmod 400 demo-keypair.pem
    nova secgroup-create demo-security-group "Security group for demo"
    nova secgroup-add-rule demo-security-group tcp 22 22 0.0.0.0/0

Let’s spin up an instance connected to both our networks:
    nova boot --flavor m1.small --image "Ubuntu 12.04 Standard" \
        --nic net-id=2c33efe2-6204-4125-9716-3bc525630016 \
        --nic net-id=aa30627e-c181-4a4b-89bf-5dd7c26c244e \
        --security-groups demo-security-group --key-name demo-keypair demo-vm

And set up a floating IP for the first network:
    nova list
    | ID                                   | Name    | Status | Task State | Power State | Networks                                         |
    | 2b17588b-8980-4489-9a04-6539a159dc3c | demo-vm | ACTIVE | None       | Running     | demo-network-1=10.0.0.2; demo-network-2=10.0.1.2 |

    neutron floatingip-create public
    neutron floatingip-list
    | id                                   | fixed_ip_address | floating_ip_address | port_id |
    | 49c8b05e-bb8f-4b07-80ed-3155ab6ffc09 |                  | 192.168.15.31       |         |

    neutron port-list
    | id                                   | name | mac_address       | fixed_ips                                                                       |
    | 1ccfd334-7328-4b22-b93e-24a0888276ab |      | fa:16:3e:14:39:39 | {"subnet_id": "94598487-c1fc-4f55-ac1f-ef2545d5cfeb", "ip_address": "10.0.1.3"} |
    | a482c4f6-fa74-476e-b1ce-cd8dd0c70815 |      | fa:16:3e:18:92:79 | {"subnet_id": "94598487-c1fc-4f55-ac1f-ef2545d5cfeb", "ip_address": "10.0.1.2"} |
    | b23d7836-30c5-4bff-b873-15c87ba051f6 |      | fa:16:3e:3a:28:40 | {"subnet_id": "dec6ec74-cfa9-4a08-8792-54900631b98e", "ip_address": "10.0.0.3"} |
    | d421b447-2adf-406f-876b-142238683344 |      | fa:16:3e:9d:fc:7f | {"subnet_id": "dec6ec74-cfa9-4a08-8792-54900631b98e", "ip_address": "10.0.0.2"} |
    | dcf8696b-cc80-4b48-b09c-61c0f8ab02ac |      | fa:16:3e:5b:39:fb | {"subnet_id": "94598487-c1fc-4f55-ac1f-ef2545d5cfeb", "ip_address": "10.0.1.1"} |
    | f6a1666e-495a-4d3f-afa3-754b3cb3cfc0 |      | fa:16:3e:8a:1b:fb | {"subnet_id": "dec6ec74-cfa9-4a08-8792-54900631b98e", "ip_address": "10.0.0.1"} |

    neutron floatingip-associate 49c8b05e-bb8f-4b07-80ed-3155ab6ffc09 d421b447-2adf-406f-876b-142238683344
Note how we matched the VM’s IP to its port, and associated the floating IP with that port. I wish there were an easier way to do this from the CLI…
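Until then, a small script can do the matching for us. This is my own sketch, not part of the original walkthrough; it assumes the tabular CLI output shown above and exactly one port holding the fixed IP:

    # Find the port that holds the VM's fixed IP and attach the floating IP to it.
    FIXED_IP=10.0.0.2
    FLOATINGIP_ID=49c8b05e-bb8f-4b07-80ed-3155ab6ffc09
    PORT_ID=$(neutron port-list | grep -F "\"$FIXED_IP\"" | awk '{print $2}')
    neutron floatingip-associate "$FLOATINGIP_ID" "$PORT_ID"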
If everything worked correctly, you should now have a VM connected to both networks, with a floating IP associated with its port on the first network.
Let’s make sure ssh works correctly:

    ssh -i demo-keypair.pem ubuntu@192.168.15.31 hostname
    demo-vm

Cool, ssh works. Now, we should have two network cards, right?

    ssh -i demo-keypair.pem ubuntu@192.168.15.31 ifconfig
    eth0      Link encap:Ethernet  HWaddr fa:16:3e:5f:a2:5f
              inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
              inet6 addr: fe80::f816:3eff:fe5f:a25f/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:230 errors:0 dropped:0 overruns:0 frame:0
              TX packets:224 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:46297 (46.2 KB)  TX bytes:31130 (31.1 KB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Huh?! The VM only has one working network interface! Where is my second NIC? Was there a configuration problem with the OpenStack network setup? The answer is here:
    ssh -i demo-keypair.pem ubuntu@192.168.15.31 ifconfig -a
    eth0      Link encap:Ethernet  HWaddr fa:16:3e:5f:a2:5f
              inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
              inet6 addr: fe80::f816:3eff:fe5f:a25f/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:324 errors:0 dropped:0 overruns:0 frame:0
              TX packets:332 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:69973 (69.9 KB)  TX bytes:47218 (47.2 KB)

    eth1      Link encap:Ethernet  HWaddr fa:16:3e:29:6d:22
              BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The second NIC exists, but is not running.
The issue is not with the OpenStack network configuration – it’s with the image. The image itself should be configured to work correctly with multiple NICs. All we have to do is bring up the NIC. So we ssh into the instance:

    ssh -i demo-keypair.pem ubuntu@192.168.15.31

And run the following commands:
    echo $'auto eth1\niface eth1 inet dhcp' | sudo tee /etc/network/interfaces.d/eth1.cfg > /dev/null
    sudo ifup eth1

The second NIC should now be running:
    ifconfig eth1
    eth1      Link encap:Ethernet  HWaddr fa:16:3e:18:92:79
              inet addr:10.0.1.2  Bcast:10.0.1.255  Mask:255.255.255.0
              inet6 addr: fe80::f816:3eff:fe18:9279/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:81 errors:0 dropped:0 overruns:0 frame:0
              TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:15376 (15.3 KB)  TX bytes:3960 (3.9 KB)
And there you go – your VM can access both networks.
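One caveat, and this is my assumption rather than something covered above: classic ifupdown only reads /etc/network/interfaces.d/*.cfg if the main interfaces file sources that directory. A quick check, with the line to append if it is missing:

    grep -q '^source /etc/network/interfaces.d' /etc/network/interfaces || \
        echo 'source /etc/network/interfaces.d/*.cfg' | sudo tee -a /etc/network/interfaces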
This issue can make life complicated when setting up a complex, or even a not very complex, application. When will this issue hurt you? Well, imagine a scenario where you have a web server and a database server. The web server is connected to both Network1 and Network2, and the database server is only connected to Network2. Network1 is connected to the external world over a router, and Network2 is completely internal, adding another layer of security to the critical database server. So what happens if the web server only has one network card? If only the NIC for Network1 is up, the web server can’t access the database. If only the NIC for Network2 is up, the web server can’t be reached from the external world. Even worse, if this web server is accessed via a floating IP, this IP will also not work, so you won’t be able to access the web server and fix the issue. Tricky.
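To make the scenario concrete, here is a sketch of how those two servers would be booted, reusing the style of the commands above ($NETWORK1_ID and $NETWORK2_ID are the network IDs captured earlier; the server names are hypothetical):

    nova boot --flavor m1.small --image "Ubuntu 12.04 Standard" \
        --nic net-id=$NETWORK1_ID --nic net-id=$NETWORK2_ID \
        --security-groups demo-security-group --key-name demo-keypair web-server
    nova boot --flavor m1.small --image "Ubuntu 12.04 Standard" \
        --nic net-id=$NETWORK2_ID \
        --security-groups demo-security-group --key-name demo-keypair db-server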
In conclusion
The above commands will bring up your additional network card. You will of course need to repeat this process for each additional network card, and for each VM. You can use a start-up script (a.k.a. a user-data script) or a system service to run these commands, but there are better ways. I’ll discuss how to automate the network setup in a follow-up post.
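In the meantime, a minimal user-data sketch might look like this. It assumes a Debian/Ubuntu-style image where ifupdown picks up /etc/network/interfaces.d/*.cfg, and it runs as root at first boot:

    #!/bin/bash
    # Write a DHCP stanza for every eth* interface that is still down,
    # then bring it up.
    for path in /sys/class/net/eth*; do
        name=$(basename "$path")
        if [ "$(cat "$path/operstate")" = "down" ]; then
            printf 'auto %s\niface %s inet dhcp\n' "$name" "$name" \
                > "/etc/network/interfaces.d/${name}.cfg"
            ifup "$name"
        fi
    done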
This was originally posted at Barak’s blog, Head in the Clouds.
Published at DZone with permission of Sharone Zitzman, DZone MVB.