I successfully installed openstack nova-lxd on a KVM virtual machine using conjure-up.
After the VM was shut down and I started it again, openstack did not come up. lxc list shows one instance as RUNNING while all the others are STOPPED.
How do I start the other instances?
Answer 1
I ran into the same problem. I did a fresh install of Ubuntu 16.04 on the server, updated all packages, and the only thing I installed was conjure-up, which I used to install lxc openstack. lxc list showed all instances up and running, and I could log into the openstack dashboard, which was great. Then I rebooted... and now lxc list shows everything except one instance as not running. @gangstaluv, to answer your question about my environment ("does juju status return anything?"):
$ juju status
Model Controller Cloud/Region Version
conjure-up-openstack-novalxd-561 conjure-up-localhost-1e7 localhost/localhost 2.1.0.1
App Version Status Scale Charm Store Rev OS Notes
ceph-mon 10.2.5 active 0/3 ceph-mon jujucharms 7 ubuntu
ceph-osd 10.2.5 active 0/3 ceph-osd jujucharms 239 ubuntu
ceph-radosgw 10.2.5 active 0/1 ceph-radosgw jujucharms 245 ubuntu
glance 12.0.0 active 0/1 glance jujucharms 254 ubuntu
keystone 9.2.0 active 0/1 keystone jujucharms 262 ubuntu
lxd 2.0.9 active 0/1 lxd jujucharms 7 ubuntu
mysql 5.6.21-25.8 active 0/1 percona-cluster jujucharms 247 ubuntu
neutron-api 8.3.0 active 0/1 neutron-api jujucharms 247 ubuntu
neutron-gateway 8.3.0 active 0/1 neutron-gateway jujucharms 232 ubuntu
neutron-openvswitch 8.3.0 active 0/1 neutron-openvswitch jujucharms 238 ubuntu
nova-cloud-controller 13.1.2 active 0/1 nova-cloud-controller jujucharms 292 ubuntu
nova-compute 13.1.2 active 0/1 nova-compute jujucharms 262 ubuntu
ntp waiting 0 ntp jujucharms 17 ubuntu
openstack-dashboard 9.1.0 active 0/1 openstack-dashboard jujucharms 243 ubuntu exposed
rabbitmq-server 3.5.7 active 0/1 rabbitmq-server jujucharms 59 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0 unknown lost 0 10.0.8.183 agent lost, see 'juju show-status-log ceph-mon/0'
ceph-mon/1 unknown lost 1 10.0.8.209 agent lost, see 'juju show-status-log ceph-mon/1'
ceph-mon/2 unknown lost 2 10.0.8.141 agent lost, see 'juju show-status-log ceph-mon/2'
ceph-osd/0 unknown lost 3 10.0.8.159 agent lost, see 'juju show-status-log ceph-osd/0'
ceph-osd/1 unknown lost 4 10.0.8.115 agent lost, see 'juju show-status-log ceph-osd/1'
ceph-osd/2 unknown lost 5 10.0.8.216 agent lost, see 'juju show-status-log ceph-osd/2'
ceph-radosgw/0 unknown lost 6 10.0.8.48 80/tcp agent lost, see 'juju show-status-log ceph-radosgw/0'
glance/0 unknown lost 7 10.0.8.61 9292/tcp agent lost, see 'juju show-status-log glance/0'
keystone/0 unknown lost 8 10.0.8.117 5000/tcp agent lost, see 'juju show-status-log keystone/0'
mysql/0 unknown lost 9 10.0.8.123 agent lost, see 'juju show-status-log mysql/0'
neutron-api/0 unknown lost 10 10.0.8.96 9696/tcp agent lost, see 'juju show-status-log neutron-api/0'
neutron-gateway/0 unknown lost 11 10.0.8.140 agent lost, see 'juju show-status-log neutron-gateway/0'
nova-cloud-controller/0 unknown lost 12 10.0.8.238 8774/tcp agent lost, see 'juju show-status-log nova-cloud-controller/0'
nova-compute/0 unknown lost 13 10.0.8.190 agent lost, see 'juju show-status-log nova-compute/0'
lxd/0 unknown lost 10.0.8.190 agent lost, see 'juju show-status-log lxd/0'
neutron-openvswitch/0 unknown lost 10.0.8.190 agent lost, see 'juju show-status-log neutron-openvswitch/0'
openstack-dashboard/0 unknown lost 14 10.0.8.111 80/tcp,443/tcp agent lost, see 'juju show-status-log openstack-dashboard/0'
rabbitmq-server/0 unknown lost 15 10.0.8.110 5672/tcp agent lost, see 'juju show-status-log rabbitmq-server/0'
Machine State DNS Inst id Series AZ
0 down 10.0.8.183 juju-ec5bf1-0 xenial
1 down 10.0.8.209 juju-ec5bf1-1 xenial
2 down 10.0.8.141 juju-ec5bf1-2 xenial
3 down 10.0.8.159 juju-ec5bf1-3 xenial
4 down 10.0.8.115 juju-ec5bf1-4 xenial
5 down 10.0.8.216 juju-ec5bf1-5 xenial
6 down 10.0.8.48 juju-ec5bf1-6 xenial
7 down 10.0.8.61 juju-ec5bf1-7 xenial
8 down 10.0.8.117 juju-ec5bf1-8 xenial
9 down 10.0.8.123 juju-ec5bf1-9 xenial
10 down 10.0.8.96 juju-ec5bf1-10 xenial
11 down 10.0.8.140 juju-ec5bf1-11 xenial
12 down 10.0.8.238 juju-ec5bf1-12 xenial
13 down 10.0.8.190 juju-ec5bf1-13 xenial
14 down 10.0.8.111 juju-ec5bf1-14 xenial
15 down 10.0.8.110 juju-ec5bf1-15 xenial
Relation Provides Consumes Type
mon ceph-mon ceph-mon peer
mon ceph-mon ceph-osd regular
mon ceph-mon ceph-radosgw regular
ceph ceph-mon glance regular
ceph ceph-mon nova-compute regular
cluster ceph-radosgw ceph-radosgw peer
identity-service ceph-radosgw keystone regular
cluster glance glance peer
identity-service glance keystone regular
shared-db glance mysql regular
image-service glance nova-cloud-controller regular
image-service glance nova-compute regular
amqp glance rabbitmq-server regular
cluster keystone keystone peer
shared-db keystone mysql regular
identity-service keystone neutron-api regular
identity-service keystone nova-cloud-controller regular
identity-service keystone openstack-dashboard regular
lxd-migration lxd lxd peer
lxd lxd nova-compute regular
cluster mysql mysql peer
shared-db mysql neutron-api regular
shared-db mysql nova-cloud-controller regular
cluster neutron-api neutron-api peer
neutron-plugin-api neutron-api neutron-gateway regular
neutron-plugin-api neutron-api neutron-openvswitch regular
neutron-api neutron-api nova-cloud-controller regular
amqp neutron-api rabbitmq-server regular
cluster neutron-gateway neutron-gateway peer
quantum-network-service neutron-gateway nova-cloud-controller regular
amqp neutron-gateway rabbitmq-server regular
neutron-plugin neutron-openvswitch nova-compute regular
amqp neutron-openvswitch rabbitmq-server regular
cluster nova-cloud-controller nova-cloud-controller peer
cloud-compute nova-cloud-controller nova-compute regular
amqp nova-cloud-controller rabbitmq-server regular
lxd nova-compute lxd subordinate
neutron-plugin nova-compute neutron-openvswitch subordinate
compute-peer nova-compute nova-compute peer
amqp nova-compute rabbitmq-server regular
ntp-peers ntp ntp peer
cluster openstack-dashboard openstack-dashboard peer
cluster rabbitmq-server rabbitmq-server peer
You can run lxc start to restart them.
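For reference, a quick way to bring every stopped container back in one go (a minimal shell sketch; it assumes the default table layout of lxc list, where the container name is the second field and the state column reads STOPPED):
$ for c in $(lxc list | awk '/STOPPED/ {print $2}'); do lxc start "$c"; done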
When I tried that, I got an error that probably explains why things didn't come back up on their own:
$ lxc start juju-ec5bf1-0
error: Missing parent 'conjureup0' for nic 'eth1'
Try `lxc info --show-log juju-ec5bf1-0` for more info
I'm not sure what to do about it. Is there anything else I can check? I reinstalled Ubuntu and conjure-up in case I had done something wrong, but each time it works fine until a reboot, and then it ends up in this state again.
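One thing worth checking at this point (a hedged suggestion, not from the original answer) is whether the bridge the error complains about actually exists on the host:
$ ip link show conjureup0    # "does not exist" here confirms the missing parent
$ brctl show                 # lists all bridges (requires the bridge-utils package)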
Edit 1: I didn't think to include what lxc info says to look at, so I'm adding it now.
$ lxc info --show-log juju-ec5bf1-0
Name: juju-ec5bf1-0
Remote: unix:/var/lib/lxd/unix.socket
Architecture: x86_64
Created: 2017/02/20 04:12 UTC
Status: Stopped
Type: persistent
Profiles: default, juju-conjure-up-openstack-novalxd-561
Log:
lxc 20160220041252.329 WARN lxc_start - start.c:signal_handler:322 - Invalid pid for SIGCHLD. Received pid 437, expected pid 452.
Edit 2: I just fixed it myself!
After a lot of research, I found the lxc profile show command.
$ lxc profile show juju-conjure-up-openstack-novalxd-561
config:
  boot.autostart: "true"
  linux.kernel_modules: openvswitch,nbd,ip_tables,ip6_tables,netlink_diag
  raw.lxc: |
    lxc.aa_profile=unconfined
    lxc.mount.auto=sys:rw
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  eth0:
    mtu: "9000"
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    mtu: "9000"
    name: eth1
    nictype: bridged
    parent: conjureup0
    type: nic
  root:
    path: /
    type: disk
name: juju-conjure-up-openstack-novalxd-561
From the output of lxc info --show-log juju-ec5bf1-0, I deduced that juju (or some other component) had somehow seen my other NIC (if I'm reading that right; I'm running this on real hardware, not in a VM, Mirto Busico) and was looking for a bridge named conjureup0 that doesn't exist. I suspect there's a bug somewhere, which is why it was never created. I figured I could do one of two things to fix this: 1) create the missing bridge, or 2) remove the eth1 device from the profile. I chose the latter (a sketch of option 1 follows at the end of this answer).
$ lxc profile device remove juju-conjure-up-openstack-novalxd-561 eth1
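To confirm the device is actually gone from the profile, you can list the remaining devices (a small verification step, not part of the original answer):
$ lxc profile device list juju-conjure-up-openstack-novalxd-561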
After a reboot, lxc list now shows all my instances up and running as expected, and my dashboard works again.
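For anyone who prefers option 1 instead, a minimal sketch of creating the missing bridge by hand looks like this (hedged: conjure-up normally manages conjureup0 itself, and a hand-made bridge will not survive a reboot unless you also persist it, e.g. in /etc/network/interfaces):
$ sudo ip link add name conjureup0 type bridge
$ sudo ip link set conjureup0 up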
Answer 2
@gangstaluv: I retried a from-scratch install in March, and conjureup0 was still there after a reboot.
After the reboot, all the lxd containers were up and running, but rabbitmq-server had an error (I'll open another thread for that).
Answer 3
I've run into this kind of problem before, and it's most likely a network setup issue. By default, the MTU size should be 1500 when conjure-up openstack is first deployed. My solution was to change the MTU size in the lxc profile configuration. You could try that as well.
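A concrete sketch of that suggestion, assuming the profile and device names from answer 1 (on LXD versions without lxc profile device set, lxc profile edit lets you change the mtu key in the YAML directly):
$ lxc profile device set juju-conjure-up-openstack-novalxd-561 eth0 mtu 1500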