At Loom Systems, we receive a continuous stream of questions about OpenStack and OpenStack monitoring, and we take the extra step of categorizing them. That’s given us a wealth of information about both the common and not-so-common issues that pop up. Since our goal is to be helpful and give back both to the OpenStack community and to the IT industry at large, we’ve put together general answers to the five most common OpenStack questions we receive. If you have a follow-up question, feel free to share it in the comments section; and, of course, keep sending us your questions at [email protected]
How can I find out which version of OpenStack I have installed?
This is one of the most frequent questions we get, and the answer affects your entire OpenStack environment, so it’s important to get it right. Here’s a quick way to find out:
- SSH to your OpenStack hosts
- Run
openstack --version
If you want to know which version of a specific service you have installed, the approach is similar:
- SSH to your OpenStack hosts
- Run the relevant management command:
nova-manage --version
cinder-manage --version
glance-manage --version
Add these to a post-it or an ongoing cheatsheet. They’ll come in handy again and again.
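The version checks above can be rolled into one short script for your cheatsheet. This is a minimal sketch (the print_versions helper name is our own, not an OpenStack tool), and it assumes the various CLIs are on your PATH; any that are missing are reported rather than breaking the loop:

```shell
#!/bin/sh
# Print the version of each CLI passed in, or note that it is not
# installed. print_versions is a hypothetical helper name.
print_versions() {
    for cli in "$@"; do
        if command -v "$cli" >/dev/null 2>&1; then
            # Some releases print the version on stderr, hence 2>&1.
            printf '%s: %s\n' "$cli" "$("$cli" --version 2>&1)"
        else
            printf '%s: not installed\n' "$cli"
        fi
    done
}

print_versions openstack nova-manage cinder-manage glance-manage
```

Run it on each host and paste the output straight into your cheatsheet.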
How do I start/stop OpenStack services manually through the command line?
This is another important one to add to your cheatsheet.
- SSH to your OpenStack hosts
- List all your OpenStack services by running
systemctl list-units --type=service
- Start or stop a service by running
systemctl start/stop SERVICE_NAME
You can tab-complete service names in case you don’t remember them in full; auto-complete is your friend. It could look like this:
systemctl stop openstack-glance-api
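If you’d rather not rely on tab completion, you can filter the unit list down to OpenStack-related services first. A small sketch, assuming RDO/CentOS-style unit names (the openstack_units helper name is our own):

```shell
# Filter stdin down to lines mentioning common OpenStack services.
openstack_units() {
    grep -E 'openstack|nova|neutron|glance|cinder|keystone|heat'
}

# On a systemd host, list only the OpenStack service units; `|| true`
# keeps the pipeline quiet on machines with no matching units.
if command -v systemctl >/dev/null 2>&1; then
    systemctl list-units --type=service --no-pager | openstack_units || true
fi
```

Pick a unit name from the output and pass it to systemctl start or systemctl stop as shown above.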
How do I manually configure a firewall to permit OpenStack service traffic?
On deployments that have restrictive firewalls in place, you may need to configure a firewall manually to permit OpenStack service traffic. Here’s a working list of default ports that OpenStack services respond to:
OpenStack service | Default ports | Port type
--- | --- | ---
Block Storage (cinder) | 8776 | publicurl and adminurl
Compute (nova) endpoints | 8774 | publicurl and adminurl
Compute API (nova-api) | 8773, 8775 |
Compute ports for access to virtual machine consoles | 5900-5999 |
Compute VNC proxy for browsers (openstack-nova-novncproxy) | 6080 |
Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy) | 6081 |
Identity service (keystone) administrative endpoint | 35357 | adminurl
Identity service public endpoint | 5000 | publicurl
Image Service (glance) API | 9292 | publicurl and adminurl
Image Service registry | 9191 |
Networking (neutron) | 9696 | publicurl and adminurl
Object Storage (swift) | 6000, 6001, 6002 |
Orchestration (heat) endpoint | 8004 | publicurl and adminurl
Telemetry (ceilometer) | 8777 | publicurl and adminurl
HTTP | 80 | OpenStack dashboard (Horizon) when it is not configured to use secure access.
HTTP alternate | 8080 | OpenStack Object Storage (swift) service.
HTTPS | 443 | Any OpenStack service that is enabled for SSL, especially the secure-access dashboard.
rsync | 873 | OpenStack Object Storage. Required.
iSCSI target | 3260 | OpenStack Block Storage. Required.
MySQL database service | 3306 | Most OpenStack components.
Message Broker (AMQP traffic) | 5672 | OpenStack Block Storage, Networking, Orchestration, and Compute.
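On hosts that use firewalld, the ports in the table can be opened with firewall-cmd. The sketch below only prints the commands so you can review them before running anything; open_ports_cmds is our own helper name, and the ports shown are the defaults from the table, which may differ in your deployment:

```shell
# Emit one firewall-cmd invocation per port, plus a final reload.
open_ports_cmds() {
    for port in "$@"; do
        printf 'firewall-cmd --permanent --add-port=%s/tcp\n' "$port"
    done
    printf 'firewall-cmd --reload\n'
}

# Example: Block Storage, Compute, Identity, Image and Networking APIs.
open_ports_cmds 8776 8774 5000 35357 9292 9696
# Once the output looks right, pipe it through a root shell:
# open_ports_cmds 8776 8774 5000 35357 9292 9696 | sudo sh
```

Printing first and piping to a shell second is a cheap dry-run safeguard for any firewall change.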
How do I properly reboot a machine running DevStack?
I frequently hear this one: “I couldn’t find ./rejoin-stack.sh. How can I just reboot the server and bring it all back up?” I’ve gone through this a lot, and here’s the answer. Because DevStack was never meant to run a cloud or to restore a running stack after a reboot, rejoin-stack.sh was removed. Instead, you will need to run stack.sh and create a new cloud. Remember to put the things you need (like your public key) into local.sh so they will be available for the next deployment. And if you do need to run a real cloud and were relying on DevStack, please investigate one of the main alternatives that are designed and tested for cloud operation.
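For example, a local.sh along these lines re-creates your keypair after every fresh stack.sh run. The key path and keypair name below are placeholders, not DevStack defaults:

```shell
# ~/devstack/local.sh -- executed once at the end of stack.sh, so it
# is the right place for per-deployment setup you want re-applied.

# Load admin credentials (openrc ships with DevStack).
source ~/devstack/openrc admin admin

# Re-register a public key; the path and the name "mykey" are
# placeholders -- adjust for your environment.
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
```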
How do I delete a Cinder volume stuck in an “error_deleting” state?
A frequent issue with Cinder volumes is failing to delete them. If you tried cinder delete $volume_id and got an “error_deleting” response, here’s what to do.
- Get volume UUID by running the following command:
[root@rdo-vm-2 devops]# cinder list
- Check the volume’s status. If it shows “error_deleting” or “detaching”, reset the state of the volume with:
[root@rdo-vm-2 devops]# cinder reset-state --state available $volume_uuid
- If that also fails, log in to MySQL and switch to the cinder database:
mysql> use cinder
- The following MySQL query sets the volume’s state back to available:
mysql> update volumes set attach_status='detached', status='available' where id='$volume_uuid';
- If the above workflow does not help, the following query marks the volume as deleted and should resolve the issue:
mysql> update volumes set deleted=1, status='deleted', deleted_at=now(), updated_at=now() where deleted=0 and id='$volume_uuid';
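If you script this cleanup, it helps to build the SQL once and inspect it before touching the database. A sketch, with stuck_volume_sql as our own helper name; back up the cinder database first, since this bypasses the Cinder API entirely:

```shell
# Produce the "force delete" SQL for one volume UUID.
stuck_volume_sql() {
    printf "update volumes set deleted=1, status='deleted', deleted_at=now(), updated_at=now() where deleted=0 and id='%s';\n" "$1"
}

# Review the statement, then feed it to MySQL only when you are sure:
stuck_volume_sql "11111111-2222-3333-4444-555555555555"
# stuck_volume_sql "$volume_uuid" | mysql cinder
```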
Aviv Lichtigstein is the head of product evangelism at Loom Systems. This post first appeared on Loom Systems’ blog. Superuser is always interested in community content; get in touch at editorATopenstack.org.
- The five most common OpenStack questions, answered - October 9, 2017