I am writing some bootstrap scripts, but I cannot get unprivileged (user-mode) LXC containers to work on a vanilla headless Ubuntu 14.04 without rebooting.
Here is what I did.
First, I downloaded and installed Ubuntu Server 14.04.1 amd64, all defaults, on a fresh machine (a virtual guest under VirtualBox).
Then I logged in, updated and upgraded it, and rebooted if the kernel had been upgraded.
Then I logged in and issued the following commands:
$ sudo apt-get --yes install lxc
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
bridge-utils cgmanager cloud-image-utils debootstrap distro-info
distro-info-data dnsmasq-base euca2ools genisoimage libaio1
libboost-system1.54.0 libboost-thread1.54.0 liblxc1 libmnl0
libnetfilter-conntrack3 librados2 librbd1 libseccomp2 libxslt1.1
lxc-templates python-distro-info python-lxml python-requestbuilder
python-setuptools python3-lxc qemu-utils sharutils uidmap
Suggested packages:
cgmanager-utils shunit2 wodim cdrkit-doc lxctl qemu-user-static
python-lxml-dbg bsd-mailx mailx
The following NEW packages will be installed:
bridge-utils cgmanager cloud-image-utils debootstrap distro-info
distro-info-data dnsmasq-base euca2ools genisoimage libaio1
libboost-system1.54.0 libboost-thread1.54.0 liblxc1 libmnl0
libnetfilter-conntrack3 librados2 librbd1 libseccomp2 libxslt1.1 lxc
lxc-templates python-distro-info python-lxml python-requestbuilder
python-setuptools python3-lxc qemu-utils sharutils uidmap
0 upgraded, 29 newly installed, 0 to remove and 0 not upgraded.
Need to get 5219 kB of archives.
...
$ rm -rf /home/zosia/.config/lxc /home/zosia/.local/share/lxc
$ sudo mkdir /opt/lxc
$ sudo chown -R zosia /opt/lxc
$ mkdir /opt/lxc/config /opt/lxc/store
$ ln -s /opt/lxc/store /home/zosia/.local/share/lxc
$ ln -s /opt/lxc/config /home/zosia/.config/lxc
$ sudo usermod --add-subuids 100000-165536 zosia
$ sudo usermod --add-subgids 100000-165536 zosia
$ sudo chmod +x /home/zosia
$ tee /home/zosia/.config/lxc/default.conf <<EOT
lxc.include = /etc/lxc/default.conf
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
EOT
$ echo 'zosia veth lxcbr0 10' | sudo tee -a /etc/lxc/lxc-usernet
zosia veth lxcbr0 10
$ mkdir -p /home/zosia/.cache/lxc
$ sudo chmod -R +x /home/zosia/.local
$ lxc-create -t download -n usik -- -d ubuntu -r trusty -a amd64
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs
---
You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)
To enable sshd, run: apt-get install openssh-server
For security reason, container images ship without user accounts
and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.
$ lxc-start -n usik
lxc_container: call to cgmanager_create_sync failed: invalid request
lxc_container: Failed to create hugetlb:usik
lxc_container: Error creating cgroup hugetlb:usik
lxc_container: failed creating cgroups
lxc_container: failed to spawn 'usik'
lxc_container: The container failed to start.
lxc_container: Additional information can be obtained by setting the --logfile and --logpriority options.
Unless the host is rebooted after all of these commands have been issued, lxc-start -n usik raises the error above. Restarting the lxc, lxc-net, or cgmanager services does not help either.
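As a diagnostic step (not part of the original transcript), it is worth checking which cgroups the failing shell's session belongs to, since an unprivileged container can only be created under cgroups the session already owns:

```shell
# Show which cgroup (per controller) the current shell sits in.
# On 14.04, a session registered at login should appear under a path
# like /user/<uid>.user/<n>.session; a bare "/" in every controller
# means cgmanager will refuse to create container cgroups for an
# unprivileged user.
cat /proc/self/cgroup
```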
The log file contains the following:
lxc-start 1418283881.262 INFO lxc_start_ui - using rcfile /home/zosia/.local/share/lxc/usik/config
lxc-start 1418283881.262 INFO lxc_confile - read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 1418283881.262 INFO lxc_confile - read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 1418283881.263 WARN lxc_log - lxc_log_init called with log already initialized
lxc-start 1418283881.263 INFO lxc_lsm - LSM security driver AppArmor
lxc-start 1418283881.264 DEBUG lxc_conf - allocated pty '/dev/pts/1' (5/6)
lxc-start 1418283881.264 DEBUG lxc_conf - allocated pty '/dev/pts/6' (7/8)
lxc-start 1418283881.264 DEBUG lxc_conf - allocated pty '/dev/pts/7' (9/10)
lxc-start 1418283881.264 DEBUG lxc_conf - allocated pty '/dev/pts/8' (11/12)
lxc-start 1418283881.264 INFO lxc_conf - tty's configured
lxc-start 1418283881.264 DEBUG lxc_start - sigchild handler set
lxc-start 1418283881.264 DEBUG lxc_console - opening /dev/tty for console peer
lxc-start 1418283881.264 DEBUG lxc_console - using '/dev/tty' as console
lxc-start 1418283881.264 DEBUG lxc_console - 3809 got SIGWINCH fd 17
lxc-start 1418283881.264 DEBUG lxc_console - set winsz dstfd:14 cols:151 rows:41
lxc-start 1418283881.309 INFO lxc_start - 'usik' is initialized
lxc-start 1418283881.309 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp
lxc-start 1418283881.309 INFO lxc_start - Cloning a new user namespace
lxc-start 1418283881.309 INFO lxc_cgroup - cgroup driver cgmanager initing for usik
lxc-start 1418283881.310 ERROR lxc_cgmanager - call to cgmanager_create_sync failed: invalid request
lxc-start 1418283881.311 ERROR lxc_cgmanager - Failed to create hugetlb:usik
lxc-start 1418283881.311 ERROR lxc_cgmanager - Error creating cgroup hugetlb:usik
lxc-start 1418283881.312 INFO lxc_cgmanager - cgroup removal attempt: hugetlb:usik did not exist
lxc-start 1418283881.312 INFO lxc_cgmanager - cgroup removal attempt: perf_event:usik did not exist
lxc-start 1418283881.312 INFO lxc_cgmanager - cgroup removal attempt: blkio:usik did not exist
lxc-start 1418283881.312 INFO lxc_cgmanager - cgroup removal attempt: freezer:usik did not exist
lxc-start 1418283881.313 INFO lxc_cgmanager - cgroup removal attempt: devices:usik did not exist
lxc-start 1418283881.313 INFO lxc_cgmanager - cgroup removal attempt: memory:usik did not exist
lxc-start 1418283881.313 INFO lxc_cgmanager - cgroup removal attempt: cpuacct:usik did not exist
lxc-start 1418283881.313 INFO lxc_cgmanager - cgroup removal attempt: cpu:usik did not exist
lxc-start 1418283881.313 INFO lxc_cgmanager - cgroup removal attempt: cpuset:usik did not exist
lxc-start 1418283881.313 INFO lxc_cgmanager - cgroup removal attempt: name=systemd:usik did not exist
lxc-start 1418283881.313 ERROR lxc_start - failed creating cgroups
lxc-start 1418283881.314 ERROR lxc_start - failed to spawn 'usik'
lxc-start 1418283881.315 ERROR lxc_start_ui - The container failed to start.
lxc-start 1418283881.315 ERROR lxc_start_ui - Additional information can be obtained by setting the --logfile and --logpriority options.
Answer 1
It might help (just a guess) to enable unprivileged user namespaces in your case:
sysctl -w kernel.unprivileged_userns_clone=1
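Note that this knob is a Debian/Ubuntu kernel patch and may not exist on a given kernel at all; a safe read-only probe before trying to write it might look like:

```shell
# Probe for the Debian/Ubuntu-specific knob and show its current value;
# 1 means unprivileged user namespaces are allowed. If the file is
# absent, the kernel simply follows the stock upstream behaviour.
if [ -e /proc/sys/kernel/unprivileged_userns_clone ]; then
    cat /proc/sys/kernel/unprivileged_userns_clone
else
    echo "knob not present: kernel was built without this patch"
fi
```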
Answer 2
You need to restart dbus. You have to log out and log back in (I was using SSH), but then the cgroups get set up correctly and you can start containers without rebooting the whole server.
If you don't want to log out and back in, you can try creating the cgroups manually with cgm, as described at https://linuxcontainers.org/cgmanager/getting-started/. I was able to start a container that way, but after logging out and back in I could no longer use it, because the cgroups I had created manually differed from the ones created automatically at login.
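A sketch of that manual workaround, following the cgm commands from the linked getting-started guide; it assumes cgmanager is running, and reuses the container name usik from the question:

```shell
# Manually create and claim a per-user cgroup, then start the
# container from inside it (sketch; requires a running cgmanager).
if command -v cgm >/dev/null 2>&1; then
    sudo cgm create all "$USER"                       # create a cgroup named after the user in every controller
    sudo cgm chown all "$USER" "$(id -u)" "$(id -g)"  # hand ownership to the unprivileged user
    cgm movepid all "$USER" $$                        # move the current shell into that cgroup
    lxc-start -n usik -d
else
    echo "cgm not found; it ships with the cgmanager package"
fi
```

As noted above, cgroups created this way differ from the ones the login machinery creates, so a container started from them may become unreachable after the next logout/login cycle.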