Proxmox/KVM: cannot clone VM

I want to create a clone of a KVM machine. Yesterday I was able to clone this same machine within a few minutes; now the task has been running for hours and is still stuck at "0%":

create full clone of drive ide0 (local:100/vm-100-disk-1.raw)
Formatting '/var/lib/vz/images/111/vm-111-disk-1.raw', fmt=raw size=536870912000
transferred: 0 bytes remaining: 536870912000 bytes total: 536870912000 bytes progression: 0.00 %

I don't know why this is happening. The task is running (I restarted it a few minutes ago) - http://d.pr/tJn4

Any idea what has gone wrong this time?

Edit: disk I/O is low because the task is not writing any data to the disk (http://d.pr/1furj).

If I strace the process ID shown in the task overview, I get the following:

select(24, [16 19], NULL, NULL, {0, 553078}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
select(24, [16 19], NULL, NULL, {1, 0}) = 0 (Timeout)
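For what it's worth, those `select` lines just mean the traced worker is polling its descriptors with a one-second timeout and finding nothing ready, i.e. it is waiting, not working. A minimal Python sketch of the same pattern (the socketpair here is only a stand-in for the worker's real descriptors):

```python
import select
import socket

# Stand-in descriptors; the real worker polls its own pipes/sockets
a, b = socket.socketpair()

# select() with a 1 s timeout and no ready descriptors returns three
# empty lists -- strace reports exactly this as "= 0 (Timeout)"
readable, writable, errored = select.select([a], [], [], 1.0)
print(readable)  # [] -- nothing was ready, the call simply timed out

a.close()
b.close()
```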

If I strace process 233113 (which is the one actually doing the work), I get the following:

lseek(7, 49650073600, SEEK_DATA)        = 49650073600
lseek(7, 49650073600, SEEK_HOLE)        = 49650077696
lseek(7, 49650077696, SEEK_DATA)        = 51797557248
lseek(7, 51797557248, SEEK_DATA)        = 51797557248
lseek(7, 51797557248, SEEK_HOLE)        = 51797565440
lseek(7, 51797565440, SEEK_DATA)        = 51797622784
lseek(7, 51797622784, SEEK_DATA)        = 51797622784
lseek(7, 51797622784, SEEK_HOLE)        = 51797630976
lseek(7, 51797630976, SEEK_DATA)        = 51797688320
lseek(7, 51797688320, SEEK_DATA)        = 51797688320
lseek(7, 51797688320, SEEK_HOLE)        = 51801108480
lseek(7, 51801108480, SEEK_DATA)        = 51831242752
lseek(7, 51831242752, SEEK_DATA)        = 51831242752
lseek(7, 51831242752, SEEK_HOLE)        = 51833991168
lseek(7, 51833991168, SEEK_DATA)        = 51931774976
lseek(7, 51931774976, SEEK_DATA)        = 51931774976
lseek(7, 51931774976, SEEK_HOLE)        = 51962368000
lseek(7, 51962368000, SEEK_DATA)        = 51962380288
lseek(7, 51962380288, SEEK_DATA)        = 51962380288
lseek(7, 51962380288, SEEK_HOLE)        = 51963707392
lseek(7, 51963707392, SEEK_DATA)        = 51963756544
lseek(7, 51963756544, SEEK_DATA)        = 51963756544
lseek(7, 51963756544, SEEK_HOLE)        = 51963973632
lseek(7, 51963973632, SEEK_DATA)        = 53945040896
lseek(7, 53945040896, SEEK_DATA)        = 53945040896
lseek(7, 53945040896, SEEK_HOLE)        = 53945049088
lseek(7, 53945049088, SEEK_DATA)        = 53945106432
lseek(7, 53945106432, SEEK_DATA)        = 53945106432
lseek(7, 53945106432, SEEK_HOLE)        = 53945110528
lseek(7, 53945110528, SEEK_DATA)        = 53945171968
lseek(7, 53945171968, SEEK_DATA)        = 53945171968
lseek(7, 53945171968, SEEK_HOLE)        = 53947260928
lseek(7, 53947260928, SEEK_DATA)        = 53978726400
lseek(7, 53978726400, SEEK_DATA)        = 53978726400
lseek(7, 53978726400, SEEK_HOLE)        = 53981294592
lseek(7, 53981294592, SEEK_DATA)        = 54079258624
lseek(7, 54079258624, SEEK_DATA)        = 54079258624
lseek(7, 54079258624, SEEK_HOLE)        = 54096543744
lseek(7, 54096543744, SEEK_DATA)        = 54096560128
lseek(7, 54096560128, SEEK_DATA)        = 54096560128
lseek(7, 54096560128, SEEK_HOLE)        = 54096625664
lseek(7, 54096625664, SEEK_DATA)        = 56092524544
lseek(7, 56092524544, SEEK_DATA)        = 56092524544
lseek(7, 56092524544, SEEK_HOLE)        = 56092532736
lseek(7, 56092532736, SEEK_DATA)        = 56092590080
lseek(7, 56092590080, SEEK_DATA)        = 56092590080
lseek(7, 56092590080, SEEK_HOLE)        = 56092594176
lseek(7, 56092594176, SEEK_DATA)        = 56092655616
lseek(7, 56092655616, SEEK_DATA)        = 56092655616
lseek(7, 56092655616, SEEK_HOLE)        = 56094593024
lseek(7, 56094593024, SEEK_DATA)        = 56126210048
lseek(7, 56126210048, SEEK_DATA)        = 56126210048
lseek(7, 56126210048, SEEK_HOLE)        = 56128737280
lseek(7, 56128737280, SEEK_DATA)        = 56226742272
lseek(7, 56226742272, SEEK_DATA)        = 56226742272
lseek(7, 56226742272, SEEK_HOLE)        = 56250884096
lseek(7, 56250884096, SEEK_DATA)        = 58240008192
lseek(7, 58240008192, SEEK_DATA)        = 58240008192
lseek(7, 58240008192, SEEK_HOLE)        = 58240016384
lseek(7, 58240016384, SEEK_DATA)        = 58240073728
lseek(7, 58240073728, SEEK_DATA)        = 58240073728
lseek(7, 58240073728, SEEK_HOLE)        = 58240077824
lseek(7, 58240077824, SEEK_DATA)        = 58240139264
lseek(7, 58240139264, SEEK_DATA)        = 58240139264
lseek(7, 58240139264, SEEK_HOLE)        = 58241294336
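If I read this trace correctly, the process is walking the raw image's data extents: each `SEEK_DATA` jumps to the start of the next block of real data and each `SEEK_HOLE` finds where that data ends, so holes are skipped instead of read. A minimal Python sketch of that hole-skipping copy pattern (Linux-only; `copy_sparse` is my own illustration, not Proxmox/QEMU code):

```python
import errno
import os

def copy_sparse(src_path, dst_path):
    """Copy a sparse file by walking its data extents with
    SEEK_DATA/SEEK_HOLE, skipping the holes entirely -- the same
    lseek() pattern visible in the strace output above."""
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        size = os.fstat(src).st_size
        os.ftruncate(dst, size)             # destination starts as one big hole
        offset = 0
        while offset < size:
            try:
                data = os.lseek(src, offset, os.SEEK_DATA)
            except OSError as e:
                if e.errno == errno.ENXIO:  # no data past offset: trailing hole
                    break
                raise
            hole = os.lseek(src, data, os.SEEK_HOLE)
            while data < hole:              # copy this one data extent
                chunk = os.pread(src, min(1 << 20, hole - data), data)
                os.pwrite(dst, chunk, data)
                data += len(chunk)
            offset = hole
    finally:
        os.close(src)
        os.close(dst)
```

On a heavily sparse 500 GB image, most of the time can go into this extent scanning rather than into actual writes, which would match the low disk I/O I'm seeing.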

The VM's disk is 500 GB and there is plenty of free space on the host:

udev             10M       0   10M    0% /dev
tmpfs            26G    319M   25G    2% /run
/dev/sda4       3,6T    878G  2,6T   26% /
tmpfs            63G     43M   63G    1% /dev/shm
tmpfs           5,0M       0  5,0M    0% /run/lock
tmpfs            63G       0   63G    0% /sys/fs/cgroup
/dev/sda3       946M    188M  694M   22% /boot
cgmfs           100K       0  100K    0% /run/cgmanager/fs
/dev/fuse        30M     28K   30M    1% /etc/pve
tmpfs           100K       0  100K    0% /run/lxcfs/controllers
tmpfs            13G       0   13G    0% /run/user/1000
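Given the extent scanning above, one thing worth checking is how sparse the source image actually is, since `st_size` (the logical 500 GB) and the blocks actually allocated on disk can differ wildly. A small helper for that (the `sparseness` function and the example path are my own, taken from the task log; adjust the VM ID for your setup):

```python
import os

def sparseness(path):
    """Return (apparent_size, allocated_bytes) for a file.
    st_blocks counts 512-byte units actually backed by data on disk;
    a large gap between the two numbers means the file is sparse."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# Example, using the source image path from the clone task log:
# print(sparseness("/var/lib/vz/images/100/vm-100-disk-1.raw"))
```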

This is a single Proxmox installation without a cluster.

I am running the following Proxmox versions:

proxmox-ve: 4.1-28 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-2 (running version: 4.1-2/78c5f4a2)
pve-kernel-4.2.6-1-pve: 4.2.6-28
pve-kernel-2.6.32-43-pve: 2.6.32-166
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 0.17.2-1
pve-cluster: 4.0-29
qemu-server: 4.0-42
pve-firmware: 1.1-7
libpve-common-perl: 4.0-42
libpve-access-control: 4.0-10
libpve-storage-perl: 4.0-38
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-18
pve-container: 1.0-35
pve-firewall: 2.0-14
pve-ha-manager: 1.0-16
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-5
lxcfs: 0.13-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1

The syslog shows:

Dec 29 19:03:49 hostname pvedaemon[26539]: <root@pam> starting task UPID:hostname:0000681B:0329CBE4:5682CB05:qmclone:100:root@pam:
Dec 29 19:04:59 hostname pveproxy[19936]: worker exit
Dec 29 19:04:59 hostname pveproxy[24482]: worker 19936 finished
Dec 29 19:04:59 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:04:59 hostname pveproxy[24482]: worker 27331 started
Dec 29 19:05:02 hostname lxcfs[18709]: select: No such file or directory
Dec 29 19:05:12 hostname pvedaemon[26539]: <root@pam> successful auth for user 'root@pam'
Dec 29 19:06:51 hostname lxcfs[18709]: select: Success
Dec 29 19:07:55 hostname kernel: [530914.403182] audit: type=1400 audit(1451412475.136:1387): apparmor="DENIED" operation="ptrace" profile="lxc-container-default" pid=1994 comm="sw-collectd" requested_mask="trace" denied_mask="trace" peer="unconfined"
Dec 29 19:07:55 hostname kernel: [530914.403204] audit: type=1400 audit(1451412475.136:1388): apparmor="DENIED" operation="ptrace" profile="lxc-container-default" pid=1994 comm="sw-collectd" requested_mask="read" denied_mask="read" peer="unconfined"
Dec 29 19:09:25 hostname pvedaemon[26539]: <root@pam> successful auth for user 'root@pam'
Dec 29 19:09:42 hostname pveproxy[24482]: worker 27331 finished
Dec 29 19:09:42 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:09:42 hostname pveproxy[24482]: worker 29677 started
Dec 29 19:09:44 hostname pveproxy[29676]: got inotify poll request in wrong process - disabling inotify
Dec 29 19:09:46 hostname pveproxy[29676]: worker exit
Dec 29 19:10:02 hostname lxcfs[18709]: select: Success
Dec 29 19:10:32 hostname pvedaemon[25727]: worker exit
Dec 29 19:10:32 hostname pvedaemon[1160]: worker 25727 finished
Dec 29 19:10:32 hostname pvedaemon[1160]: starting 1 worker(s)
Dec 29 19:10:32 hostname pvedaemon[1160]: worker 30242 started
Dec 29 19:11:45 hostname lxcfs[18709]: select: Success
Dec 29 19:11:48 hostname pvedaemon[30242]: <root@pam> successful auth for user 'root@pam'
Dec 29 19:13:48 hostname pvedaemon[26539]: worker exit
Dec 29 19:13:48 hostname pvedaemon[1160]: worker 26539 finished
Dec 29 19:13:48 hostname pvedaemon[1160]: starting 1 worker(s)
Dec 29 19:13:48 hostname pvedaemon[1160]: worker 31618 started
Dec 29 19:13:55 hostname pveproxy[29677]: worker exit
Dec 29 19:13:55 hostname pveproxy[24482]: worker 29677 finished
Dec 29 19:13:55 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:13:55 hostname pveproxy[24482]: worker 31677 started
Dec 29 19:14:28 hostname pveproxy[24482]: worker 26350 finished
Dec 29 19:14:28 hostname pveproxy[24482]: starting 1 worker(s)
Dec 29 19:14:28 hostname pveproxy[24482]: worker 31867 started
Dec 29 19:14:31 hostname pveproxy[31866]: worker exit
Dec 29 19:15:02 hostname lxcfs[18709]: select: Success
Dec 29 19:15:40 hostname pvedaemon[23612]: worker exit
Dec 29 19:15:40 hostname pvedaemon[1160]: worker 23612 finished
Dec 29 19:15:40 hostname pvedaemon[1160]: starting 1 worker(s)
Dec 29 19:15:40 hostname pvedaemon[1160]: worker 32588 started
