OpenStack (Grizzly) Errors and Solutions

A summary of the errors encountered while installing OpenStack and how they were resolved.

 

1. "Unable to retrieve quota information." error when clicking Volumes after logging in

Check the cinder configuration. Verify that /etc/tgt/conf.d/cinder.conf exists:

# more cinder.conf
include /var/lib/cinder/volumes/*

# /etc/init.d/tgt restart

# cinder-manage db sync
2013-09-11 21:47:24 INFO [migrate.versioning.api] 0 -> 1…
2013-09-11 21:47:26 INFO [migrate.versioning.api] done
2013-09-11 21:47:26 INFO [migrate.versioning.api] 1 -> 2…
2013-09-11 21:47:26 INFO [migrate.versioning.api] done
2013-09-11 21:47:26 INFO [migrate.versioning.api] 2 -> 3…
2013-09-11 21:47:27 INFO [migrate.versioning.api] done
2013-09-11 21:47:27 INFO [migrate.versioning.api] 3 -> 4…
2013-09-11 21:47:27 INFO [004_volume_type_to_uuid] Created foreign key volume_type_extra_specs_ibfk_1
2013-09-11 21:47:27 INFO [migrate.versioning.api] done
2013-09-11 21:47:27 INFO [migrate.versioning.api] 4 -> 5…
2013-09-11 21:47:27 INFO [migrate.versioning.api] done
2013-09-11 21:47:27 INFO [migrate.versioning.api] 5 -> 6…
2013-09-11 21:47:27 INFO [migrate.versioning.api] done
2013-09-11 21:47:27 INFO [migrate.versioning.api] 6 -> 7…
2013-09-11 21:47:27 INFO [migrate.versioning.api] done
2013-09-11 21:47:27 INFO [migrate.versioning.api] 7 -> 8…
2013-09-11 21:47:28 INFO [migrate.versioning.api] done
2013-09-11 21:47:28 INFO [migrate.versioning.api] 8 -> 9…
2013-09-11 21:47:28 INFO [migrate.versioning.api] done

Running cinder-manage db sync had initially produced no output at all; once the problem was fixed, it printed the messages above.
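If it is unclear whether the sync actually ran, cinder-manage can also report the current schema version (a quick check; the number should match the last migration in the log above):

# cinder-manage db version
9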

2. ERROR [cinder.openstack.common.rpc.common] AMQP server on 192.168.100.1:5672 is unreachable: Socket closed. Trying again in 23 seconds.

This message appeared in /var/log/cinder/cinder-volume.log while investigating why a volume would not delete.

Check the following settings in /etc/cinder/cinder.conf:

rabbit_userid = guest
rabbit_password = password
rabbit_virtual_host = /

# rabbitmqctl change_password guest password    (changes the guest user's password to "password")

rabbit_virtual_host had been set to /nova; it was changed to /.

After restarting rabbitmq-server and the cinder services, the error no longer appeared.
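To verify the broker side matches the settings above, rabbitmqctl can list the users and virtual hosts cinder will connect with, and the services can be bounced in one go (a sketch; service names assume the Ubuntu cinder packages):

# rabbitmqctl list_users
# rabbitmqctl list_vhosts
# service rabbitmq-server restart
# for s in cinder-api cinder-scheduler cinder-volume; do service $s restart; done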

 

3. Volume creation failure

Creating a volume failed, and /var/log/cinder/cinder-volume.log showed the following errors:

INFO [cinder.volume.iscsi] Creating iscsi_target for: volume-ff6f0165-8973-4c0a-abcf-14fe541db21e
2013-09-13 11:57:19 ERROR [cinder.volume.iscsi] Failed to create iscsi target for volume id:volume-ff6f0165-8973-4c0a-abcf-14fe541db21e.
2013-09-13 11:57:19 ERROR [cinder.volume.manager] volume volume-ff6f0165-8973-4c0a-abcf-14fe541db21e: create failed
2013-09-13 11:57:19 ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling

Check that the volume path in the iSCSI configuration file /etc/tgt/conf.d/cinder.conf is correct, then restart tgt. The error went away.
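To confirm tgt picked the targets up after the restart, its admin tool can dump the configured targets (a sketch, assuming the Ubuntu tgt package):

# /etc/init.d/tgt restart
# tgt-admin --show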

 

4. Instance error: starting an instance from Horizon fails with the error below

Unable to get log for instance "b06225ed-1f98-4818-8e19-e555a36844a1".

/var/log/nova/nova-scheduler.log showed the following warning:

WARNING nova.scheduler.driver [req-911ed196-a6e6-4dbf-8c0d-7d32ac117ef2 d9d6feab3c684fc5a6fd8301a668ec21 1ac47050923f46a7ad3045c87305c414] [instance: b06225ed-1f98-4818-8e19-e555a36844a1] Setting instance to ERROR state.

nova-compute.log showed:

TRACE nova Stderr: "qemu-img: Could not open '/var/lib/libvirt/images/ubuntu_1.img': Permission denied\n"
2013-09-14 16:43:29.952 10250 TRACE nova

Changing the ownership of that directory to nova:nova and restarting the nova-compute service fixed the problem.
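A minimal sketch of the fix, using the image path from the log above:

# chown -R nova:nova /var/lib/libvirt/images
# service nova-compute restart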

# nova-manage service list
Binary            Host  Zone      Status   State  Updated_At
nova-conductor    tech  internal  enabled  :-)    2013-09-14 07:48:36
nova-scheduler    tech  internal  enabled  :-)    2013-09-14 07:48:44
nova-consoleauth  tech  internal  enabled  :-)    2013-09-14 07:48:43
nova-cert         tech  internal  enabled  :-)    2013-09-14 07:48:44
nova-compute      fox2  nova      enabled  XXX    None

This was caused by the nova-compute service not running on the compute node. After starting it:

# nova-manage service list
Binary            Host  Zone      Status   State  Updated_At
nova-conductor    tech  internal  enabled  :-)    2013-09-14 07:58:18
nova-scheduler    tech  internal  enabled  :-)    2013-09-14 07:58:15
nova-consoleauth  tech  internal  enabled  :-)    2013-09-14 07:58:15
nova-cert         tech  internal  enabled  :-)    2013-09-14 07:58:16
nova-compute      fox2  nova      enabled  :-)    2013-09-14 07:58:18

 

5. Instance creation failure

When creating an instance, an error occurred during spawning. /var/log/nova/nova-compute.log showed the following message:

WARNING nova.virt.libvirt.utils [req-1f0eeb1f-46f1-4be6-9c4a-5c86a2b5b36b d9d6feab3c684fc5a6fd8301a668ec21 1ac47050923f46a7ad3045c87305c414] systool is not installed

# apt-get install sysfsutils

The cause was a missing systool binary; installing sysfsutils resolved it.

The error below occurred because the compute node had no br-int bridge. (In my case the bridge was not actually missing; I had created it under the wrong name, br-in.)

ERROR nova.compute.manager [req-b9ca79da-4615-46fa-a778-a41b07a9a8a0 d9d6feab3c684fc5a6fd8301a668ec21 1ac47050923f46a7ad3045c87305c414] [instance: 8f273c09-de91-406a-85d4-9cbcceba4d31] Error: ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 848, in _run_instance\n    set_access_ip=set_access_ip)\n', '  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1107, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', '  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n    self.gen.next()\n', '  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1103, in _spawn\n    block_device_info)\n', '  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1528, in spawn\n    block_device_info)\n', '  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2444, in _create_domain_and_network\n    domain = self._create_domain(xml, instance=instance)\n', '  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2405, in _create_domain\n    domain.createWithFlags(launch_flags)\n', '  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit\n    result = proxy_call(self._autowrap, f, *args, **kwargs)\n', '  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call\n    rv = execute(f,*args,**kwargs)\n', '  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker\n    rv = meth(*args,**kwargs)\n', '  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 650, in createWithFlags\n    if ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', dom=self)\n', "libvirtError: Cannot get interface MTU on 'br-int': No such device\n"]

Creating a bridge named br-int resolves it.

 

The error below occurred even though the compute node had the bridge, because it was not registered with Open vSwitch.

ERROR nova.compute.manager [req-61cc9569-b914-4a2f-ab1c-8325f32c4989 d9d6feab3c684fc5a6fd8301a668ec21 1ac47050923f46a7ad3045c87305c414] [instance: 94d19e29-f231-4660-a2bf-1ffaf89f0b38] Error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 848, in _run_instance\n set_access_ip=set_access_ip)\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1107, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', ' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1103, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1528, in spawn\n block_device_info)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2444, in _create_domain_and_network\n domain = self._create_domain(xml, instance=instance)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2405, in _create_domain\n domain.createWithFlags(launch_flags)\n', ' File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 187, in doit\n result = proxy_call(self._autowrap, f, *args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 147, in proxy_call\n rv = execute(f,*args,**kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 76, in tworker\n rv = meth(*args,**kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/libvirt.py", line 650, in createWithFlags\n if ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', dom=self)\n', 'libvirtError: Unable to add port tap495f6c61-11 to OVS bridge br-int: Operation not permitted\n']

# ovs-vsctl add-br br-int

# ovs-vsctl list-br
br-int
br-tun

Register br-int with the ovs-vsctl command, as shown above.
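To confirm the registration took effect (ovs-vsctl is the same tool used above):

# ovs-vsctl show
# ovs-vsctl list-ports br-int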

 

6. When an instance cannot be deleted

Find the instance ID in the GUI: d783ea77-847c-46c5-b98e-3d59a7e89877

# nova delete d783ea77-847c-46c5-b98e-3d59a7e89877
The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-4cea6e76-5ff5-43da-8fd4-725e42705fcd)

If it still cannot be deleted, remove it forcibly from the nova database:

mysql> update instances set vm_state='deleted',task_state=NULL,deleted=1,deleted_at=now() where uuid='d783ea77-847c-46c5-b98e-3d59a7e89877';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0
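The change can be verified before moving on (a sketch; run against the nova database):

mysql> select uuid, vm_state, deleted from instances where uuid='d783ea77-847c-46c5-b98e-3d59a7e89877';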

 

7. Instance created but not running (the compute node's /var/log/nova/nova-compute.log shows the error below)

2013-09-16 12:21:10.400 32395 ERROR nova.compute.manager [-] Instance 3f7b3255-43d2-c391-24cd-5746b387a82f found in the hypervisor, but not in the database

# nova service-list

+------------------+----------+----------+---------+-------+----------------------------+
| Binary           | Host     | Zone     | Status  | State | Updated_at                 |
+------------------+----------+----------+---------+-------+----------------------------+
| nova-cert        | tech     | internal | enabled | up    | 2013-09-16T03:31:12.000000 |
| nova-compute     | compute1 | nova     | enabled | up    | 2013-09-16T03:31:15.000000 |
| nova-compute     | fox2     | nova     | enabled | down  | 2013-09-14T08:24:37.000000 |
| nova-conductor   | tech     | internal | enabled | up    | 2013-09-16T03:31:14.000000 |
| nova-consoleauth | tech     | internal | enabled | up    | 2013-09-16T03:31:12.000000 |
| nova-scheduler   | tech     | internal | enabled | up    | 2013-09-16T03:31:16.000000 |
+------------------+----------+----------+---------+-------+----------------------------+

fox2 and compute1 are actually the same host (fox2 via its public IP, compute1 via its private IP), which appears to have caused the problem. Removing the stale entry as shown below restored normal operation.

Remove the compute node from the nova services (delete the host from the services table in the nova database):

mysql> select * from services;
+---------------------+---------------------+------------+----+----------+------------------+-------------+--------------+----------+---------+
| created_at          | updated_at          | deleted_at | id | host     | binary           | topic       | report_count | disabled | deleted |
+---------------------+---------------------+------------+----+----------+------------------+-------------+--------------+----------+---------+
| 2013-09-13 03:49:12 | 2013-09-16 03:42:46 | NULL       |  1 | tech     | nova-conductor   | conductor   |        25800 |        0 |       0 |
| 2013-09-13 03:49:39 | 2013-09-16 03:42:48 | NULL       |  2 | tech     | nova-scheduler   | scheduler   |        25793 |        0 |       0 |
| 2013-09-13 03:49:43 | 2013-09-16 03:42:45 | NULL       |  3 | tech     | nova-consoleauth | consoleauth |        25792 |        0 |       0 |
| 2013-09-13 03:49:47 | 2013-09-16 03:42:45 | NULL       |  4 | tech     | nova-cert        | cert        |        25792 |        0 |       0 |
| 2013-09-13 03:49:50 | 2013-09-14 08:24:37 | NULL       |  5 | fox2     | nova-compute     | compute     |          195 |        0 |       0 |
| 2013-09-14 08:24:40 | 2013-09-16 03:42:49 | NULL       |  6 | compute1 | nova-compute     | compute     |        15506 |        0 |       0 |
+---------------------+---------------------+------------+----+----------+------------------+-------------+--------------+----------+---------+
6 rows in set (0.00 sec)

mysql> delete from services where host='fox2';
Query OK, 1 row affected (0.01 sec)

mysql> select * from services;
+---------------------+---------------------+------------+----+----------+------------------+-------------+--------------+----------+---------+
| created_at          | updated_at          | deleted_at | id | host     | binary           | topic       | report_count | disabled | deleted |
+---------------------+---------------------+------------+----+----------+------------------+-------------+--------------+----------+---------+
| 2013-09-13 03:49:12 | 2013-09-16 03:43:06 | NULL       |  1 | tech     | nova-conductor   | conductor   |        25802 |        0 |       0 |
| 2013-09-13 03:49:39 | 2013-09-16 03:43:08 | NULL       |  2 | tech     | nova-scheduler   | scheduler   |        25795 |        0 |       0 |
| 2013-09-13 03:49:43 | 2013-09-16 03:43:05 | NULL       |  3 | tech     | nova-consoleauth | consoleauth |        25794 |        0 |       0 |
| 2013-09-13 03:49:47 | 2013-09-16 03:43:05 | NULL       |  4 | tech     | nova-cert        | cert        |        25794 |        0 |       0 |
| 2013-09-14 08:24:40 | 2013-09-16 03:43:09 | NULL       |  6 | compute1 | nova-compute     | compute     |        15508 |        0 |       0 |
+---------------------+---------------------+------------+----+----------+------------------+-------------+--------------+----------+---------+
5 rows in set (0.00 sec)
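After the delete, the service list should no longer show the stale fox2 entry:

# nova-manage service list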

 

8. The following error appears in /var/log/quantum-l3-agent.log when running quantum-l3-agent on the network node

Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-f76ddf26-1fb2-424a-8f22-deed44198644', 'sysctl', '-w', 'net.ipv4.ip_forward=1']
Exit code: 1
Stdout: ''
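The failing command can be reproduced outside the agent to narrow down the cause (a diagnostic sketch; the namespace name is taken from the log above):

# ip netns list
# ip netns exec qrouter-f76ddf26-1fb2-424a-8f22-deed44198644 sysctl -w net.ipv4.ip_forward=1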

 

9. When a network cannot be deleted from Horizon

Delete it on the quantum server with the commands below:

# quantum

(quantum) net-list
+--------------------------------------+----------+---------+
| id                                   | name     | subnets |
+--------------------------------------+----------+---------+
| 4665cff9-85d4-466c-b2f5-dafedb09455e | demo-net |         |
+--------------------------------------+----------+---------+
(quantum) net-delete demo-net
Deleted network: demo-net
(quantum)
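If net-delete refuses because ports or subnets are still attached, they can be listed and removed first (a sketch using the standard quantum CLI; substitute the real IDs):

(quantum) port-list
(quantum) subnet-list
(quantum) subnet-delete <subnet-id>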

10. quantum-plugin-openvswitch-agent error

INFO [quantum.common.config] Logging enabled!
2013-09-28 17:50:12 ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed to create OVS patch port. Cannot have tunneling enabled on this agent, since this version of OVS does not support tunnels or patch ports. Agent terminated!

This error occurs when the openvswitch-datapath package is not installed.

# apt-get install openvswitch-datapath-source
# apt-get install module-assistant
# module-assistant auto-install openvswitch-datapath

module-assistant may fail during the build; in that case, installing the package below can resolve it.

# apt-get install openvswitch-datapath-dkms
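To check that the datapath module is actually available after installation (a sketch):

# lsmod | grep openvswitch
# service openvswitch-switch restart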

 

11. When a volume cannot be deleted from Horizon

When the volume's details cannot be retrieved and it cannot be deleted, check with lvdisplay on the cinder server and then delete the unused cinder volume.

First note the IDs of the volumes that are in use and take care not to delete those. The LV name is volume-[volumeID].

# lvdisplay
--- Logical volume ---
LV Path /dev/cinder-volumes/volume-ab79c6ab-fd5b-4101-98a1-5045a637cf79
LV Name volume-ab79c6ab-fd5b-4101-98a1-5045a637cf79
VG Name cinder-volumes
LV UUID MdRxsG-rOSc-SKsU-fLDL-jDHQ-qxtq-ImXf6m
LV Write Access read/write
LV Creation host, time tech, 2013-09-14 16:18:56 +0900
LV Status available
# open 0
LV Size 5.00 GiB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:3

--- Logical volume ---
LV Path /dev/cinder-volumes/volume-25211d80-38ac-4ac5-8ebb-826e5244d0d1
LV Name volume-25211d80-38ac-4ac5-8ebb-826e5244d0d1
VG Name cinder-volumes
LV UUID e05FVO-tTc6-JLR1-gLLK-dgff-gtzs-yt7d71
LV Write Access read/write
LV Creation host, time cloud, 2013-09-16 12:19:24 +0900
LV snapshot status source of
_snapshot-4e4f4bb0-febe-4505-be9b-cdb751bdccad [active]
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:4

--- Logical volume ---
LV Path /dev/cinder-volumes/_snapshot-4e4f4bb0-febe-4505-be9b-cdb751bdccad
LV Name _snapshot-4e4f4bb0-febe-4505-be9b-cdb751bdccad
VG Name cinder-volumes
LV UUID t8hFNj-n2T6-cFFZ-OaNx-WiUU-KXkk-7SpKcD
LV Write Access read/write
LV Creation host, time cloud, 2013-09-16 16:58:55 +0900
LV snapshot status active destination for volume-25211d80-38ac-4ac5-8ebb-826e5244d0d1
LV Status available
# open 0
LV Size 20.00 GiB
Current LE 5120
COW-table size 20.00 GiB
COW-table LE 5120
Allocated to snapshot 0.03%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:5

--- Logical volume ---
LV Path /dev/cinder-volumes/volume-2f7e9492-7d03-49d9-8517-727cba5fd257
LV Name volume-2f7e9492-7d03-49d9-8517-727cba5fd257
VG Name cinder-volumes
LV UUID MzeF9B-5YPQ-9kGj-hE5i-KS4W-InBz-3pz8Zl
LV Write Access read/write
LV Creation host, time cloud, 2013-09-16 17:01:10 +0900
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:8

--- Logical volume ---
LV Path /dev/cinder-volumes/volume-161b2d9f-6d3d-46d5-942e-d0dbb8610448
LV Name volume-161b2d9f-6d3d-46d5-942e-d0dbb8610448
VG Name cinder-volumes
LV UUID 80uyJI-Sjxw-oHH6-eVzV-GetK-rHlq-DZjml3
LV Write Access read/write
LV Creation host, time cloud, 2013-09-21 18:45:56 +0900
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2

--- Logical volume ---
LV Path /dev/cinder-volumes/volume-8475b871-3310-43c6-a63b-0fbf3dfce94f
LV Name volume-8475b871-3310-43c6-a63b-0fbf3dfce94f
VG Name cinder-volumes
LV UUID ZnelGS-I21J-Pn11-15iO-vcBe-lw7c-mDq5uK
LV Write Access read/write
LV Creation host, time cloud, 2013-10-02 09:06:18 +0900
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:10

--- Logical volume ---
LV Path /dev/fox2/root
LV Name root
VG Name fox2
LV UUID uvO1cf-4qop-Vy9o-zYBb-wr3b-i4QV-e80wwf
LV Write Access read/write
LV Creation host, time fox2, 2012-09-02 02:51:03 +0900
LV Status available
# open 1
LV Size 132.49 GiB
Current LE 33918
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

--- Logical volume ---
LV Path /dev/fox2/swap_1
LV Name swap_1
VG Name fox2
LV UUID ArmZzh-vdUM-lLQM-fEDA-vm7A-OZ8r-QuUlBp
LV Write Access read/write
LV Creation host, time fox2, 2012-09-02 02:51:04 +0900
LV Status available
# open 2
LV Size 4.00 GiB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1

Deleting the volume with the command below removes it from Horizon as well.

# cinder force-delete 2f7e9492-7d03-49d9-8517-727cba5fd257
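If the backing LV is left behind even after the force-delete, it can be removed manually with lvremove (double-check the ID first; deleting the wrong LV destroys its data):

# lvremove /dev/cinder-volumes/volume-2f7e9492-7d03-49d9-8517-727cba5fd257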

12. Deleting an unused cinder volume from the database (when the symptom is the same as in 11)

mysql> select id, deleted, status from volumes;
+--------------------------------------+---------+----------+
| id                                   | deleted | status   |
+--------------------------------------+---------+----------+
| 04ae27b2-05a7-4631-9ce7-969239e01477 |       1 | deleted  |
| 161b2d9f-6d3d-46d5-942e-d0dbb8610448 |       0 | in-use   |
| 25211d80-38ac-4ac5-8ebb-826e5244d0d1 |       0 | in-use   |
| 2939aa30-4e3f-4e5d-9acc-8ddd55eb7d0f |       1 | deleted  |
| 2f7e9492-7d03-49d9-8517-727cba5fd257 |       0 | deleting |
| 599c8daa-077c-43c4-9895-46b1d6bb706a |       1 | deleted  |
| 63c31d4d-0ba0-4e7b-ae17-8274e7bf10af |       1 | deleted  |
| 659e1fce-6587-40ac-8e8b-3d5e83f4cc0d |       1 | deleted  |
| 6cb22cda-afcb-4645-bbec-ab4b2e17097a |       1 | deleted  |
| 76643c9f-23b7-41c0-bf96-6bc98a04fd29 |       1 | deleted  |
| 83598ffc-7ea6-4ce2-bf33-a65db4d5adfe |       1 | deleted  |
| 8475b871-3310-43c6-a63b-0fbf3dfce94f |       0 | in-use   |
| d8194fe1-0f58-4c96-8968-4787224032fa |       1 | deleted  |
| dbdb340a-1f9c-4499-9495-31178d192db8 |       1 | deleted  |
| dcb8eafb-d247-44e9-9aac-ac13511d230e |       1 | deleted  |
| e928ed52-2b28-42a9-9ef4-4060d209f93f |       1 | deleted  |
| f8330a57-92da-4515-b084-7bd5517d5730 |       1 | deleted  |
| ff6f0165-8973-4c0a-abcf-14fe541db21e |       1 | deleted  |
+--------------------------------------+---------+----------+
18 rows in set (0.00 sec)

mysql> update volumes set status='deleted' where id='25211d80-38ac-4ac5-8ebb-826e5244d0d1';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> select id, deleted, status from volumes;
+--------------------------------------+---------+----------+
| id                                   | deleted | status   |
+--------------------------------------+---------+----------+
| 04ae27b2-05a7-4631-9ce7-969239e01477 |       1 | deleted  |
| 161b2d9f-6d3d-46d5-942e-d0dbb8610448 |       0 | in-use   |
| 25211d80-38ac-4ac5-8ebb-826e5244d0d1 |       0 | deleted  |
| 2939aa30-4e3f-4e5d-9acc-8ddd55eb7d0f |       1 | deleted  |
| 2f7e9492-7d03-49d9-8517-727cba5fd257 |       0 | deleting |
| 599c8daa-077c-43c4-9895-46b1d6bb706a |       1 | deleted  |
| 63c31d4d-0ba0-4e7b-ae17-8274e7bf10af |       1 | deleted  |
| 659e1fce-6587-40ac-8e8b-3d5e83f4cc0d |       1 | deleted  |
| 6cb22cda-afcb-4645-bbec-ab4b2e17097a |       1 | deleted  |
| 76643c9f-23b7-41c0-bf96-6bc98a04fd29 |       1 | deleted  |
| 83598ffc-7ea6-4ce2-bf33-a65db4d5adfe |       1 | deleted  |
| 8475b871-3310-43c6-a63b-0fbf3dfce94f |       0 | in-use   |
| d8194fe1-0f58-4c96-8968-4787224032fa |       1 | deleted  |
| dbdb340a-1f9c-4499-9495-31178d192db8 |       1 | deleted  |
| dcb8eafb-d247-44e9-9aac-ac13511d230e |       1 | deleted  |
| e928ed52-2b28-42a9-9ef4-4060d209f93f |       1 | deleted  |
| f8330a57-92da-4515-b084-7bd5517d5730 |       1 | deleted  |
| ff6f0165-8973-4c0a-abcf-14fe541db21e |       1 | deleted  |
+--------------------------------------+---------+----------+
18 rows in set (0.00 sec)

mysql> update volumes set deleted=1 where id='25211d80-38ac-4ac5-8ebb-826e5244d0d1';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0
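Both columns can also be set in a single statement (a sketch combining the two updates above; assumes the volumes table carries the usual deleted_at column):

mysql> update volumes set status='deleted', deleted=1, deleted_at=now() where id='25211d80-38ac-4ac5-8ebb-826e5244d0d1';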

13. Error when clicking 'Images & Snapshots' in Horizon

The end of /var/log/cinder/cinder-api.log shows the following messages:

2013-10-15 16:05:54 INFO [cinder.api.openstack.wsgi] GET http://192.168.100.1:8776/v1/1ac47050923f46a7ad3045c87305c414/snapshots/detail
2013-10-15 16:05:54 INFO [cinder.api.openstack.wsgi] http://192.168.100.1:8776/v1/1ac47050923f46a7ad3045c87305c414/snapshots/detail returned with HTTP 200
2013-10-15 16:05:54 INFO [cinder.api.openstack.wsgi] GET http://192.168.100.1:8776/v1/1ac47050923f46a7ad3045c87305c414/volumes/25211d80-38ac-4ac5-8ebb-826e5244d0d1
2013-10-15 16:05:54 INFO [cinder.api.openstack.wsgi] HTTP exception thrown: The resource could not be found.
2013-10-15 16:05:54 INFO [cinder.api.openstack.wsgi] http://192.168.100.1:8776/v1/1ac47050923f46a7ad3045c87305c414/volumes/25211d80-38ac-4ac5-8ebb-826e5244d0d1 returned with HTTP 404
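The 404 points at a volume that no longer exists; snapshots still referencing it can be found in the cinder database (a diagnostic sketch, assuming the standard snapshots table):

mysql> select id, volume_id, status from snapshots where volume_id='25211d80-38ac-4ac5-8ebb-826e5244d0d1';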

 

14. rabbitmq error after changing the cloud controller's hostname (from tech to cloud)

# /etc/init.d/rabbitmq-server start
* Starting message broker rabbitmq-server
* FAILED - check /var/log/rabbitmq/startup_{log, _err}

# cat /var/log/rabbitmq/startup_log
ERROR: node with name "rabbit" already running on "cloud"

DIAGNOSTICS
===========

nodes in question: [rabbit@cloud]

hosts, their running nodes and ports:
- cloud: [{rabbit,35263},
{rabbitmqprelaunch29269,41417},
{rabbitmqctl29299,49432}]

current node details:
- node name: rabbitmqprelaunch29269@cloud
- home dir: /var/lib/rabbitmq
- cookie hash: k85cpMwvJuloEZutJLr0tw==

The /var/lib/rabbitmq/mnesia directory still contains entries named after the old hostname, such as rabbit@beforehostname and rabbit@beforehostname-plugins-expand. Reinstalling rabbitmq-server resolved it. (Reference: http://www.techsfo.com/blog/2013/06/rabbitmq-breaks-when-you-rename-hostname/)

# apt-get autoremove rabbitmq-server
# apt-get purge rabbitmq-server
# apt-get install rabbitmq-server

After reinstalling, rabbitmqctl change_password guest [password] must be applied again or the nova services will not work properly.
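A minimal sketch of the post-reinstall steps (service names assume the Ubuntu nova packages; substitute your own password):

# rabbitmqctl change_password guest password
# for s in nova-api nova-scheduler nova-conductor; do service $s restart; done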

15. When nova-compute will not start on the compute node

# /etc/init.d/nova-compute start
nova-compute main process (6355) terminated with status 1

Check that the files under /etc/nova/ are owned by nova:nova.
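A minimal sketch of the check and fix:

# ls -l /etc/nova/
# chown -R nova:nova /etc/nova
# /etc/init.d/nova-compute start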

 

 

 
