Welcome to part two of the series of Aptira blogs where we explore the potential of running OpenStack on FreeBSD! At the end of part one I mentioned that the next section would cover the OpenStack Image service (Glance), but after considering it some more I decided we would try for something a bit harder and go for OpenStack Object Storage (Swift) instead. Swift is a fully fledged, user-facing service (actually my favorite of all the OpenStack services) and it can also be used as an active-active, highly available, horizontally scalable backend for Glance.
So, grab your popcorn and settle in, because this job will be a bit more involved than the last one! If you aren’t familiar with object storage as a concept, or with how OpenStack Swift is designed, it’s definitely worth doing a bit of reading before proceeding, as this post is intended more as an operator’s guide than an introduction to either object storage or Swift itself.
To start with, we are going to create a new Vagrant environment building on the work we did in part one, but with some modifications to the OpenStack Identity service Keystone definition (you can tear down your Vagrant environment from part one as we will be starting this environment from scratch).
Here is the Vagrantfile we will be using for this guide (see part one for the command to download the vagrant box we use below, hfm4/freebsd-10.0):
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "keystonebsd" do |keystonebsd|
    keystonebsd.vm.box = "hfm4/freebsd-10.0"
    keystonebsd.vm.hostname = "keystonebsd.sy3.aptira.com"
    keystonebsd.vm.network "private_network", ip: "10.0.3.15"
    keystonebsd.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
    end
  end

  config.vm.define "swiftproxybsd01" do |swiftproxybsd01|
    swiftproxybsd01.vm.box = "hfm4/freebsd-10.0"
    swiftproxybsd01.vm.hostname = "swiftproxybsd01.sy3.aptira.com"
    swiftproxybsd01.vm.network "private_network", ip: "10.0.3.16"
    swiftproxybsd01.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
    end
  end

  config.vm.define "swiftstoragebsd01" do |swiftstoragebsd01|
    swiftstoragebsd01.vm.box = "hfm4/freebsd-10.0"
    swiftstoragebsd01.vm.hostname = "swiftstoragebsd01.sy3.aptira.com"
    swiftstoragebsd01.vm.network "private_network", ip: "10.0.3.17"
    swiftstoragebsd01.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
      v.customize ['createhd', '--filename', '/tmp/swiftstoragebsd01.vdi', '--size', 500]
      v.customize ['storageattach', :id, '--storagectl', 'IDE Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', '/tmp/swiftstoragebsd01.vdi']
    end
  end

  config.vm.define "swiftstoragebsd02" do |swiftstoragebsd02|
    swiftstoragebsd02.vm.box = "hfm4/freebsd-10.0"
    swiftstoragebsd02.vm.hostname = "swiftstoragebsd02.sy3.aptira.com"
    swiftstoragebsd02.vm.network "private_network", ip: "10.0.3.18"
    swiftstoragebsd02.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
      v.customize ['createhd', '--filename', '/tmp/swiftstoragebsd02.vdi', '--size', 500]
      v.customize ['storageattach', :id, '--storagectl', 'IDE Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', '/tmp/swiftstoragebsd02.vdi']
    end
  end

  config.vm.define "swiftstoragebsd03" do |swiftstoragebsd03|
    swiftstoragebsd03.vm.box = "hfm4/freebsd-10.0"
    swiftstoragebsd03.vm.hostname = "swiftstoragebsd03.sy3.aptira.com"
    swiftstoragebsd03.vm.network "private_network", ip: "10.0.3.19"
    swiftstoragebsd03.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
      v.customize ['createhd', '--filename', '/tmp/swiftstoragebsd03.vdi', '--size', 500]
      v.customize ['storageattach', :id, '--storagectl', 'IDE Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', '/tmp/swiftstoragebsd03.vdi']
    end
  end
end
As you can see, the main differences from the part one keystone definition are that we have changed the VM name and added a private_network definition. Once you have created the Vagrantfile, start up the keystone VM and follow the steps from part one of the guide, except that when running the endpoint-create command you should point it at the private_network IP defined above.
$ vagrant up keystonebsd
$ vagrant ssh keystonebsd
(now from inside the vm)
$ sudo -i
# ...(follow all the steps in part one of this guide except the last endpoint-create command which follows as)
# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://10.0.3.15:35357/v2.0/ endpoint-create --service=identity --publicurl=http://10.0.3.15:5000/v2.0 --internalurl=http://10.0.3.15:5000/v2.0 --adminurl=http://10.0.3.15:35357/v2.0
# ...(run test commands from part one of this guide using 10.0.3.15 instead of localhost as the --os-auth-url flag)
While we are logged into our keystone node we should also prepare the swift service, user/tenant and endpoint, and then we can exit out of the keystone node:
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://10.0.3.15:5000/v2.0 service-create --name swift --type object-store
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://10.0.3.15:5000/v2.0 tenant-create --name service
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://10.0.3.15:5000/v2.0 user-create --name swift --tenant service --pass password
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://10.0.3.15:5000/v2.0 user-role-add --user swift --tenant service --role admin
# ...(you can confirm the above commands worked by using the test commands at the end of part one of this guide)
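The swift client relies on the keystone service catalog to find the proxy, so we should also register an endpoint for the object-store service. Something along the following lines should work, assuming the proxy we build below at 10.0.3.16 with bind_port 8000 and the default keystoneauth reseller prefix (AUTH_):
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://10.0.3.15:5000/v2.0 endpoint-create --service=object-store --publicurl='http://10.0.3.16:8000/v1/AUTH_%(tenant_id)s' --internalurl='http://10.0.3.16:8000/v1/AUTH_%(tenant_id)s' --adminurl='http://10.0.3.16:8000/v1'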
# exit
$ exit
Now the fun really begins: we can start spinning up our swift servers! First on the list is our swift proxy. As mentioned above, if you aren’t sure what any of these services are or what their function is, it’s worth reading the architecture overview (at a minimum) to familiarise yourself before continuing:
$ vagrant up swiftproxybsd01
$ vagrant ssh swiftproxybsd01
(now from inside the vm)
$ sudo -i
Installing memcached, enabling it as a service and starting it:
# pkg install memcached
# echo 'memcached_enable="YES"' >> /etc/rc.conf
# service memcached start
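If you want to confirm that memcached is up and listening on its default port (11211), FreeBSD’s sockstat is handy:
# sockstat -4l | grep 11211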
Installing swift (bootstrapping pip first, the same way we do on the storage nodes below, since it isn’t included with the python package):
# pkg install python git wget
# pkg install py27-xattr libxslt
# wget https://bootstrap.pypa.io/get-pip.py
# python get-pip.py
# pip install pbr six prettytable oslo.config python-keystoneclient netaddr keystonemiddleware
# git clone https://github.com/openstack/swift.git
# cd swift
# python setup.py install
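setup.py should have dropped the swift binaries under /usr/local/bin; a quick sanity check before continuing:
# which swift-init swift-proxy-server swift-ring-builder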
Configuring swift user and required directories:
# mkdir /var/run/swift
# mkdir /etc/swift
# pw groupadd -n swift
# pw useradd -n swift -g swift -s /sbin/nologin -d /var/run/swift
# chown -R swift:swift /var/run/swift
Now we will copy over the provided example swift configuration files and modify them. swift.conf is first (below commands assume your current working directory is the cloned git repository):
# cp etc/swift.conf.sample /etc/swift/swift.conf
and modify the swift_hash_path_* lines in /etc/swift/swift.conf (again, as in part one, we are not using the secure values that would be required in production, only demonstrating the concepts):
swift_hash_path_suffix = suffsuffsuff
swift_hash_path_prefix = prefprefpref
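For anything beyond a demo these values should be long, random and kept secret; the openssl in the FreeBSD base system can generate suitable ones:
# openssl rand -hex 16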
then we will copy and modify the proxy-server.conf:
# cp etc/proxy-server.conf-sample /etc/swift/proxy-server.conf
Carefully modify the following settings, starting with bind_port and admin_key in [DEFAULT] (account_autocreate lives further down, in the [app:proxy-server] section):
bind_port = 8000
admin_key = adminkeyadminkey
account_autocreate = true
then [pipeline:main] (in this case we are modifying the application pipeline to remove the tempauth authentication module and use keystone authentication instead):
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo proxy-logging proxy-server
so we should also comment out the tempauth filter:
#[filter:tempauth]
#use = egg:swift#tempauth
then uncomment and modify the authtoken filter to use the keystone configuration we set up at the start of this guide:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.0.3.15
auth_port = 35357
auth_protocol = http
auth_uri = http://10.0.3.15:5000/
admin_tenant_name = service
admin_user = swift
admin_password = password
delay_auth_decision = 1
cache = swift.cache
include_service_catalog = False
we also need to uncomment the keystoneauth filter:
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
reseller_admin_role = ResellerAdmin
Now we can write out that configuration file and exit. The next step is to build our ringfiles, which provide the logical storage configuration to all the swift servers (the rings are described in detail in the official Swift documentation):
# cd /etc/swift
# swift-ring-builder container.builder create 18 3 1
# swift-ring-builder account.builder create 18 3 1
# swift-ring-builder object.builder create 18 3 1
Once the ringfiles are created (the three arguments to create are the partition power, the replica count and min_part_hours) we can populate them with information about the storage nodes and their devices (in the configuration below each storage node is configured as its own zone):
# swift-ring-builder object.builder add z1-10.0.3.17:6000/swiftdisk 100
# swift-ring-builder container.builder add z1-10.0.3.17:6001/swiftdisk 100
# swift-ring-builder account.builder add z1-10.0.3.17:6002/swiftdisk 100
# swift-ring-builder object.builder add z2-10.0.3.18:6000/swiftdisk 100
# swift-ring-builder container.builder add z2-10.0.3.18:6001/swiftdisk 100
# swift-ring-builder account.builder add z2-10.0.3.18:6002/swiftdisk 100
# swift-ring-builder object.builder add z3-10.0.3.19:6000/swiftdisk 100
# swift-ring-builder container.builder add z3-10.0.3.19:6001/swiftdisk 100
# swift-ring-builder account.builder add z3-10.0.3.19:6002/swiftdisk 100
and once the rings are populated the final step is to rebalance them:
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance
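Before distributing the rings it’s worth eyeballing the result; running swift-ring-builder with just a builder file prints its device table and balance:
# swift-ring-builder object.builder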
Finally, we can set ownership of all the files in /etc/swift to the swift user and start the service! As I mentioned at the beginning of the guide, swift is an active-active, horizontally scalable service, so you can add as many swift proxies as you like to your Vagrantfile and repeat the above procedure for each of them. You don’t need to build the ringfiles on each node; simply copy the gzipped ringfiles (/etc/swift/*.gz) to /etc/swift on each of the nodes (we will do this on the swift storage nodes as you will see below).
# chown -R swift:swift /etc/swift
# /usr/local/bin/swift-init proxy start
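Since the healthcheck middleware is in our pipeline, a quick way to check that the proxy is answering (using fetch from the FreeBSD base system) is the following, which should print OK:
# fetch -qo - http://127.0.0.1:8000/healthcheck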
Now we can start spinning up our swift storage nodes. I am only going to go through this procedure for one of the nodes; simply repeat it for each of the swift storage nodes defined in the Vagrantfile:
$ vagrant up swiftstoragebsd01
$ vagrant ssh swiftstoragebsd01
(now inside the VM)
$ sudo -i
Enabling and starting the ZFS service (we are using ZFS since I couldn’t find any documentation on using XFS, the filesystem normally used for swift clusters, on FreeBSD) and configuring the second disk defined in the Vagrantfile as a ZFS pool:
# echo 'zfs_enable="YES"' >> /etc/rc.conf
# service zfs start
# zpool create swiftdisk /dev/ada1
# zfs set mountpoint=/srv/node/swiftdisk swiftdisk
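You can confirm the pool was created and is mounted where the swift storage services will expect to find it:
# zpool status swiftdisk
# zfs list swiftdisk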
Next, install rsync (which we will use as a daemon):
# pkg install rsync
and copy the following configuration to /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 0.0.0.0
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/spool/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/spool/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/spool/lock/object.lock
Enable the rsyncd service and start it:
# echo 'rsyncd_enable="YES"' >> /etc/rc.conf
# service rsyncd start
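Connecting to the daemon without specifying a module should list the three modules we just defined, which is an easy way to verify rsyncd picked up the configuration:
# rsync rsync://127.0.0.1/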
Installing swift:
# pkg install git wget python
# pkg install py27-xattr py27-sqlite3
# wget https://bootstrap.pypa.io/get-pip.py
# python get-pip.py
# git clone https://github.com/openstack/swift.git
# cd swift
# python setup.py install
Configuring swift user and required directories:
# mkdir /etc/swift
# mkdir /var/cache/swift
# mkdir /var/run/swift
# pw groupadd -n swift
# pw useradd -n swift -g swift -s /sbin/nologin -d /var/run/swift
# chown -R swift:swift /var/run/swift
# chown -R swift:swift /var/cache/swift
# chown -R swift:swift /srv
Now we will copy over the provided example swift configuration files and modify them. swift.conf is first (below commands assume your current working directory is the cloned git repository):
# cp etc/swift.conf.sample /etc/swift/swift.conf
and modify the swift_hash_path_* lines in /etc/swift/swift.conf (again, as in part one, we are not using the secure values that would be required in production, only demonstrating the concepts):
swift_hash_path_suffix = suffsuffsuff
swift_hash_path_prefix = prefprefpref
then we can copy the configuration files for the swift storage services (these should not require any modification):
# cp etc/account-server.conf-sample /etc/swift/account-server.conf
# cp etc/container-server.conf-sample /etc/swift/container-server.conf
# cp etc/object-server.conf-sample /etc/swift/object-server.conf
At this point you should copy the ringfiles created on the swiftproxybsd01 vm to /etc/swift. I set a password for the vagrant user and copied them using scp to each vm. This is definitely not a recommended method of doing things in production but is fine for demonstration purposes:
# scp vagrant@10.0.3.16:/etc/swift/*.gz /etc/swift
Finally, we can set ownership of all the files in /etc/swift to the swift user and start all of the storage node services!
# chown -R swift:swift /etc/swift
# /usr/local/bin/swift-init all start
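At this point the account, container and object servers should be listening on the ports we put in the rings (6002, 6001 and 6000 respectively), which sockstat can confirm:
# sockstat -4l | egrep '6000|6001|6002'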
After you have completed these steps on one node, repeat them for the remaining storage nodes defined in the Vagrantfile.
Once this is done, your swift service is up and running! You can test it now, so log back into your keystone node (or any computer that can reach 10.0.3.15 and 10.0.3.16) and install the swift client:
# pip install python-swiftclient
Now you can run some test commands!
# swift --os-auth-url=http://10.0.3.15:5000/v2.0 --os-username=admin --os-password=test123 --os-tenant-name=admin post testcontainer
# swift --os-auth-url=http://10.0.3.15:5000/v2.0 --os-username=admin --os-password=test123 --os-tenant-name=admin upload testcontainer get-pip.py
# swift --os-auth-url=http://10.0.3.15:5000/v2.0 --os-username=admin --os-password=test123 --os-tenant-name=admin upload testcontainer get-pip.py (for some reason the first object never shows up in stat so I always run it twice)
# swift --os-auth-url=http://10.0.3.15:5000/v2.0 --os-username=admin --os-password=test123 --os-tenant-name=admin stat
which, if everything worked correctly, should show the following output:
Account: AUTH_516f9ace29294cff91316153d793bdab
Containers: 1
Objects: 1
Bytes: 1340903
X-Account-Storage-Policy-Policy-0-Bytes-Used: 1340903
X-Timestamp: 1408880268.17095
X-Account-Storage-Policy-Policy-0-Object-Count: 1
X-Trans-Id: txfc005fa6c999449b81e7b-0053fa972f
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
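To round out the test you can also list the container and download the object back, which exercises the read path through the proxy and storage nodes:
# swift --os-auth-url=http://10.0.3.15:5000/v2.0 --os-username=admin --os-password=test123 --os-tenant-name=admin list testcontainer
# swift --os-auth-url=http://10.0.3.15:5000/v2.0 --os-username=admin --os-password=test123 --os-tenant-name=admin download testcontainer get-pip.py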
and that’s it! You’re now running two OpenStack services on FreeBSD. Once we have set up and installed Glance, the hard work of somehow making OpenStack Compute (Nova) work with FreeBSD begins. As I mentioned in part one, I am very interested in the idea of getting FreeBSD Jails supported in Nova! Stay tuned!