More on setting up Openstack Ansible

Since I wrote this blog post in December 2015, things have been changing in the openstack-ansible world. Here I will describe an ‘all command line’ method for setting up a cloud server in the Rackspace Public Cloud and go through the steps necessary to get an All In One Openstack installation running.

This will include installing and setting up a command line tool for bringing up server instances, getting logged in to a new server instance, obtaining openstack-ansible, making some small configuration adjustments and using openstack-ansible to set up an ‘All in One’ Openstack setup.

The rack tool

Command line provisioning of cloud servers in the Rackspace public cloud may be accomplished using the rack tool, available for Mac OS X, Linux and Windows from the Rackspace Developer site.

To use the tool, you’ll need a Rackspace Cloud account, and to set the tool up you’ll need the username and API key from that account.

Follow the installation instructions on the developer site to get the rack tool set up for your operating system.

Before you create a server

Once you have created a server, you will want to ssh in to it to set up Openstack. You could just use ssh with a password, but it’s entirely possible that the Openstack installation process will disable password logins (hint: It will) so you’d be well advised to set things up so that you can log in with an ssh key instead.

If you already have an ssh keypair available, you’re ready to use the rack tool to add the public key to your mycloud account so that it can be automatically installed in your cloud servers. If you don’t, or if you’re not quite sure how to go about making a keypair, the nice folks at GitHub have some great instructions for doing that.
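The short version of those instructions is a single ssh-keygen run. As a sketch (the filename id_rsa_rack is just an example; use any path you like, and drop the empty -N "" if you’d rather protect the key with a passphrase):

```shell
# Generate a new 4096-bit RSA keypair, if one doesn't already exist.
# id_rsa_rack is an example name, not anything the rack tool requires.
KEYFILE="$HOME/.ssh/id_rsa_rack"
mkdir -p "$HOME/.ssh"
[ -f "$KEYFILE" ] || ssh-keygen -t rsa -b 4096 -N "" -f "$KEYFILE"
cat "${KEYFILE}.pub"   # this is the public key you'll upload with rack
```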

With your public key living at ~/.ssh/, you can:

rack servers keypair upload --file ~/.ssh/ --name mi_key
Name		mi_key
Fingerprint	e6:bb:27:21:b8:6a:d8:17:04:c6:26:56:a8:bc:ef:b6
PublicKey	ssh-rsa <snipped the actual key>

This will give your public key the name mi_key and upload it to your mycloud account, ready to use when you create servers. Choose your own name for the key when you upload it.

Creating a server

A couple of things that you’ll need to know before you create a server are an image name, which will determine your server’s base operating system, and a flavor, which will determine its memory, disk and CPU characteristics. The rack tool includes some useful commands to help you decide which image and flavor you want to use.
To work out what system images are available, use:

rack servers image list

And to work out what flavors are available:

rack servers flavor list

Having run these commands to determine what options are available, I can now create a server with:

rack servers instance create --name myserver --image-name "Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)" --flavor-id io1-15 --keypair mi_key

Once the command has run, you’ll see an ID for the server and a root admin password (which you may want to copy) and you should be able to see details of the new server via your list of servers with:

rack servers instance list | grep myserver

Your ssh key should already be installed on the new server, so using the IP address revealed by the list, you can log in with:

ssh root@<server.ip.address>

Preparing the system

The Ubuntu 14.04 image will benefit from some package upgrades, so start out by doing that:

apt-get update && apt-get upgrade -y

And you’ll need a couple of extra tools set up:

apt-get install git tmux

The tmux tool lets you set up a session that will keep going if you lose your network connection. This command will create a session called ‘admin’ that you can later re-connect to with the tmux attach -t admin command.

tmux new -s admin

Next, get a copy of the openstack-ansible code using git:

git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible

The procedure for kicking off ansible has changed slightly since the last time I wrote about it. Start with:

cd /opt/openstack-ansible

It’s important to have quite a lot of disk available for this setup. On the I/O flavor servers the extra data disk appears as xvde, and we can point the bootstrap role at it with a config change:

sed -i '/#bootstrap_host_data_disk_device:/c\bootstrap_host_data_disk_device: xvde' tests/roles/bootstrap-host/defaults/main.yml
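If you’re curious about what that sed command actually does, it simply replaces the whole commented-out line with a real setting. You can see the effect safely against a throwaway file (the file contents here are made up for illustration, not copied from the real defaults/main.yml):

```shell
# Demonstrate the sed edit against a scratch file rather than the real one.
cat > /tmp/bootstrap-defaults.yml <<'EOF'
## Default data disk device (commented out by default):
#bootstrap_host_data_disk_device: null
EOF

# The 'c\' command replaces the entire matching line with the new setting.
sed -i '/#bootstrap_host_data_disk_device:/c\bootstrap_host_data_disk_device: xvde' /tmp/bootstrap-defaults.yml

grep bootstrap_host_data_disk_device /tmp/bootstrap-defaults.yml
```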

At the time of writing, the master branch of openstack-ansible was not in a stable state, so let’s run playbooks from the known-good ‘liberty’ branch:

git checkout liberty

Now the Openstack setup can run:

scripts/bootstrap-ansible.sh
scripts/bootstrap-aio.sh
scripts/run-playbooks.sh

This will need an hour or more to run. Once it completes, you should be able to see all of the lxc containers running Openstack services on your server with:

lxc-ls -f

Setting up Openstack Ansible AIO

These notes are an attempt to bring together a few sources of information that collectively explain how to set up an ‘All In One’ (AIO) Openstack installation on a single host.

The official Openstack Docs Quickstart guide is very good and is a great place to start.
In addition, Richard Jones has blogged about setting up an AIO installation for Horizon development work. His notes point out some of the customisation that can be done if you have a need to not deploy all of the components of the stack. Also, Miguel Grinberg has written a blog post about his use of AIO for development work.

Setting up an AIO instance may be done on a standalone computer or on a virtual machine. Here I am working with a virtual machine in the Rackspace Public Cloud. One of the first things that the Quick Start guide makes clear is the minimum system requirements: 16 gig of RAM and 80 gig of disk. I initially attempted an AIO setup on a virtual machine with only 8 gig of RAM; the process seems to mostly proceed as it should, but it eventually fails and the installation cannot be used. Others have reported AIO installation failures when using virtual servers from the ‘standard’ set of Rackspace server ‘flavors’ due to scripted assumptions about disk partitioning. I have had success setting up AIO on a virtual server from the ‘I/O’ group with 15 gig of RAM, 40 gig of disk and Ubuntu 14.04 as the operating system. This combination is called 15 GB I/O v1.

Note: When creating the server, you must scroll to the Recommended Installs section at the bottom of the Create screen and select the checkbox for ‘Operating system security patches applied on selected images’.

Tip: Doing protracted operations on a server via ssh runs the risk that a flaky network connection may drop out and lose all of your hard work. To mitigate that risk, I like to use tmux (terminal multiplexer). Using tmux, you can create a shell session that will persist even if you lose the link. When you ssh in again, your session is still there and you can attach to it. So once you have ssh’d into your new server:

tmux new -s admin  #creates a session called admin

Now just do everything you normally would, but safely ensconced in a tmux session. Later on, if you lose the link and ssh back in again, you just:

tmux attach -t admin  #reattaches to your admin session. 

Okay. On with what we came here to do:

apt-get install git
git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh
scripts/bootstrap-aio.sh

Once the AIO is up and running, you’ll likely want to log into the Horizon web interface to interact with the system. The scripts will have helpfully created some memorable passwords like “8237b1b4e89221693c81c50156bc7b69d2cd91f6a9a37b9c36db2” which you can use to log in to the web UI. If you find that a bit tedious, now is the time to go and edit /etc/openstack_deploy/user_secrets.yml and seek the line that begins with keystone_auth_admin_password. Change the long string there to something you can remember like ‘TrickyPa55’.
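That password edit can be done with sed rather than an editor. Assuming the file has a single keystone_auth_admin_password line, a sketch of the edit (shown here against a scratch copy; on the real server the file is /etc/openstack_deploy/user_secrets.yml):

```shell
# Sketch: swap the generated keystone admin password for a memorable one.
# We operate on a scratch copy with a made-up hash so this is safe to run anywhere.
SECRETS=/tmp/user_secrets.yml
echo 'keystone_auth_admin_password: 8237b1b4e89221693c81c50156bc7b69d2cd91f6a9a37b9c36db2' > "$SECRETS"

sed -i 's/^keystone_auth_admin_password:.*$/keystone_auth_admin_password: TrickyPa55/' "$SECRETS"
grep keystone_auth_admin_password "$SECRETS"
```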

Now you can finish the setup process:

scripts/run-playbooks.sh
Watch a great deal of logging go past for perhaps an hour or so. Once the setup is complete, that log may be found at /var/log/cloud-init.log

If all has gone well, it should now be possible to point a web browser at the public IP of your virtual server and log in to the Openstack dashboard.

User: admin
Password: TrickyPa55

…or whatever password you set in /etc/openstack_deploy/user_secrets.yml

From the shell prompt where you ran the setup scripts, you can take a look at the containers that have been set up using:

lxc-ls -f

This will list all of the containers (32 at the time of writing) and some metadata; most usefully their IP addresses. To ‘log in’ to a container, you might (as Miguel suggests in his blog post) use:

lxc-attach -n the_full_container_name_and_identifier

Since the installation scripts have helpfully copied your virtual server’s root ssh public key into each container, it is perhaps easier to look inside a container using ssh. This is a matter of getting the container’s IP address from the lxc-ls -f command and just ssh-ing to that address. For example:

root@cadence:~# ssh <container.ip.address>
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-58-generic x86_64)
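If you find yourself looking up container addresses often, the IP lookup can be scripted. The exact column layout of lxc-ls -f output varies between lxc versions, so treat this as a sketch and check your own output first; here it is demonstrated against a captured sample rather than a live lxc-ls run:

```shell
# Pull the IPv4 address of the utility container out of lxc-ls -f style output.
# Assumes columns of the form: NAME STATE IPV4 IPV6 (check yours first!).
# The sample text below stands in for real output on a live server.
lxc_sample='NAME                             STATE    IPV4        IPV6
aio1_utility_container-60daf7b8  RUNNING  10.0.3.141  -'

UTIL_IP=$(echo "$lxc_sample" | awk '/utility/ {print $3}')
echo "$UTIL_IP"
# On a real server, you would then: ssh "root@$UTIL_IP"
```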

Executing openstack CLI commands can be done by accessing the ‘utility’ container. ssh to its IP address, then:

source openrc

(or even)

. openrc
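The openrc file itself is nothing magic: it is just a set of exported OS_* environment variables that the Openstack CLI clients read. The values below are illustrative placeholders, not copied from a real deployment:

```shell
# Illustrative openrc contents -- every value here is a made-up placeholder.
export OS_USERNAME=admin
export OS_PASSWORD=TrickyPa55
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://172.29.236.100:5000/v2.0
export OS_REGION_NAME=RegionOne
```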

Then it’s possible to run queries like:

root@aio1_utility_container-60daf7b8:~# cinder type-list
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 341ae6ec-5298-4cd0-b433-2fa5ffe5990d | lvm  | -           | True      |
+--------------------------------------+------+-------------+-----------+