Setting up OpenStack-Ansible AIO

These notes are an attempt to bring together a few sources of information that collectively explain how to set up an ‘All In One’ (AIO) OpenStack installation on a single host.

The official OpenStack Docs Quickstart guide is very good and is a great place to start.
In addition, Richard Jones has blogged about setting up an AIO installation for Horizon development work. His notes point out some of the customisation that can be done if you do not need to deploy all of the components of the stack. Also, Miguel Grinberg has written a blog post about his use of AIO for development work.

Setting up an AIO instance may be done on a standalone computer or on a virtual machine. Here I am working with a virtual machine in the Rackspace Public Cloud. One of the first things the Quick Start guide makes clear is the minimum system requirements: 16 GB of RAM and 80 GB of disk. I initially attempted an AIO setup on a virtual machine with only 8 GB of RAM; the process seems to mostly proceed as it should, but it eventually fails and the installation cannot be used. Others have reported AIO installation failures when using virtual servers from the ‘standard’ set of Rackspace server ‘flavors’, due to scripted assumptions about disk partitioning. I have had success setting up AIO on a virtual server from the ‘I/O’ group with 15 GB of RAM, 40 GB of disk and Ubuntu 14.04 as the operating system. This combination is called 15 GB I/O v1.

Note: When creating the server, you must scroll to the Recommended Installs section at the bottom of the Create screen and select the checkbox for ‘Operating system security patches applied on selected images’.

Tip: Doing protracted operations on a server over ssh runs the risk that a flaky network connection will drop out and lose all of your hard work. To mitigate that risk, I like to use tmux (the terminal multiplexer). With tmux you can create a shell session that persists even if you lose the link; when you ssh in again, your session is still there and you can reattach to it. So, once you have ssh’d into your new server:

tmux new -s admin   # creates a session called admin

Now just do everything you normally would, but safely ensconced in a tmux session. Later on, if you lose the link and ssh back in again, you just:

tmux attach -t admin   # reattaches to your admin session

Okay. On with what we came here to do:

apt-get install git
git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh
scripts/bootstrap-aio.sh

Once the AIO is up and running, you’ll likely want to log in to the Horizon web interface to interact with the system. The scripts will have helpfully created some ‘memorable’ passwords like “8237b1b4e89221693c81c50156bc7b69d2cd91f6a9a37b9c36db2” which you can use to log in to the web UI. If you find that a bit tedious, now is the time to edit /etc/openstack_deploy/user_secrets.yml and find the line that begins with keystone_auth_admin_password. Change the long string there to something you can remember, like ‘TrickyPa55’.
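If you would rather make that edit from the command line, a sed one-liner will do it. The sketch below works on a sample copy of the file so it can be tried safely; on the real host you would point sed at /etc/openstack_deploy/user_secrets.yml instead.

```shell
# Work on a sample copy for illustration; substitute the real path
# /etc/openstack_deploy/user_secrets.yml on the AIO host.
cat > /tmp/user_secrets.yml <<'EOF'
keystone_auth_admin_password: 8237b1b4e89221693c81c50156bc7b69d2cd91f6a9a37b9c36db2
EOF

# Replace the generated string with a memorable password
sed -i 's/^keystone_auth_admin_password:.*/keystone_auth_admin_password: TrickyPa55/' /tmp/user_secrets.yml

# Confirm the change took effect
grep '^keystone_auth_admin_password' /tmp/user_secrets.yml
```

The `^` anchor keeps sed from touching any other line that merely mentions the key.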

Now you can finish the setup process:

scripts/run-playbooks.sh

Watch a great deal of logging go past for an hour or so. Once the setup is complete, that log may be found at /var/log/cloud-init.log.

If all has gone well, it should now be possible to point a web browser at the public IP of your virtual server and log in to the OpenStack dashboard.

User: admin
Password: TrickyPa55

…or whatever password you set in /etc/openstack_deploy/user_secrets.yml

From the shell prompt where you ran the setup scripts, you can take a look at the containers that have been set up using:

lxc-ls -f

This will list all of the containers (32 at the time of writing) along with some metadata, most usefully their IP addresses. To ‘log in’ to a container, you might (as Miguel suggests in his blog post) use:

lxc-attach -n the_full_container_name_and_identifier

Since the installation scripts have helpfully copied your virtual server’s root ssh public key into each container, it is perhaps easier to look inside a container using ssh: get the container’s IP address from the lxc-ls -f output and ssh to that address. For example:

root@cadence:~# ssh 172.29.238.249
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-58-generic x86_64)
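If you want to script that lookup rather than copy addresses by eye, awk can pull an IP out of the lxc-ls -f output. The sample output below is illustrative only, and the column layout of lxc-ls -f varies between LXC versions, so treat the column number as an assumption to verify on your own host.

```shell
# Sample `lxc-ls -f` output (illustrative; a real AIO host lists one line
# per container and column layout varies by LXC version)
cat > /tmp/lxc-ls.out <<'EOF'
NAME                             STATE    IPV4            IPV6  AUTOSTART
aio1_utility_container-60daf7b8  RUNNING  172.29.238.249  -     YES
EOF

# Grab the IPv4 address of the utility container (assumes IPV4 is column 3)
UTIL_IP=$(awk '/utility_container/ {print $3}' /tmp/lxc-ls.out)
echo "$UTIL_IP"
```

On the real host you would pipe lxc-ls -f straight into awk and then run ssh "$UTIL_IP".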

OpenStack CLI commands can be run from the ‘utility_container’. ssh to its IP address, then:
source openrc

(or even)

. openrc
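What sourcing openrc actually does is export a set of OS_* environment variables that the CLI clients read their credentials from. A hypothetical minimal openrc looks something like the following; the variable values here are made up for illustration, and the generated file on the utility container contains more variables and the real endpoint.

```shell
# Hypothetical minimal openrc; values are illustrative, not the real file
cat > /tmp/openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=TrickyPa55
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0
EOF

# Sourcing the file puts the credentials into the environment
. /tmp/openrc

# The openstack/cinder/nova clients pick these OS_* variables up
env | grep '^OS_'
```

This is why the commands below work without any explicit --os-username or --os-password arguments.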

Then it’s possible to run queries like:

root@aio1_utility_container-60daf7b8:~# cinder type-list
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 341ae6ec-5298-4cd0-b433-2fa5ffe5990d | lvm  | -           | True      |
+--------------------------------------+------+-------------+-----------+