Assigning CUDA to a specific GPU

I have two NVIDIA 780 Ti cards installed and am using CUDA 7.5 on Ubuntu 14.04. The post-install checklist showed CUDA installed correctly and working. My display is connected to device 0. I compiled the CUDA samples and ran nvidia-smi. Its output shows both NVIDIA cards, as expected:

Fri Apr  1 01:04:31 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 352.79     Driver Version: 352.79         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 780 Ti  Off  | 0000:01:00.0     N/A |                  N/A |
| 38%   50C    P2    N/A /  N/A |   1084MiB /  3071MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 780 Ti  Off  | 0000:03:00.0     N/A |                  N/A |
| 29%   34C    P8    N/A /  N/A |     11MiB /  3071MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+

But deviceQuery shows only one card:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 780 Ti"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 3072 MBytes (3221028864 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Max Clock rate:                            1084 MHz (1.08 GHz)
  Memory Clock rate:                             3500 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = GeForce GTX 780 Ti
Result = PASS

I tried to point CUDA at the other card with the environment variable

CUDA_VISIBLE_DEVICES=1

I added the line

 export CUDA_VISIBLE_DEVICES=1 

to .bashrc and opened a new terminal window. printenv shows me CUDA_VISIBLE_DEVICES=1, among other things.
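Besides exporting the variable in .bashrc, it can also be scoped to a single command, which avoids affecting the rest of the session. A minimal sketch, assuming the CUDA samples have been built in the current directory:

```shell
# Scope the variable to one command only (no export needed);
# the guard keeps this harmless if deviceQuery is not present here.
if [ -x ./deviceQuery ]; then
    CUDA_VISIBLE_DEVICES=1 ./deviceQuery
fi

# Or export it so every later command in this shell inherits it:
export CUDA_VISIBLE_DEVICES=1
printenv CUDA_VISIBLE_DEVICES   # prints: 1
```

Note that the CUDA runtime renumbers the devices that remain visible, so with CUDA_VISIBLE_DEVICES=1 the single remaining GPU is still enumerated as device 0 by CUDA applications.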

I ran bandwidthTest. Its output was:

[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GTX 780 Ti
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(MB/s)
   33554432         11618.3

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(MB/s)
   33554432         12909.9

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(MB/s)
   33554432         265048.1

Result = PASS

I rebooted and reran bandwidthTest, but its output still begins with:

[CUDA Bandwidth Test] - Starting...
    Running on...

     Device 0: GeForce GTX 780 Ti
     Quick Mode 

bandwidthTest is still using device 0. I want it to use device 1. Why does deviceQuery see only one card? What am I missing?

Answer 1

This appears to be an issue with deviceQuery.

When I started

nvidia-smi -l 1 --query --display=PERFORMANCE >> gpu_utillization.log

and then launched one of the compiled CUDA sample applications, particles,

the log showed something interesting. In the "resting" state, before launching particles, GPU 0 was in performance state 2 and GPU 1 in performance state 8. After launching particles, both performance states were 0.

After quitting particles, the performance states returned to baseline. GPU 0 is driving my display, so I suppose that is why it never drops to state 8.

To explain the performance states:

P0/P1 - Maximum 3D performance
P2/P3 - Balanced 3D performance-power
P8 - Basic HD video playback
P10 - DVD playback
P12 - Minimum idle power consumption
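
The current performance state can also be queried directly rather than parsed out of a log; a sketch using nvidia-smi's query interface (the `pstate` field is available in reasonably recent drivers; the guard keeps the snippet harmless on machines without the NVIDIA driver installed):

```shell
# Print index, name, and current performance state for each GPU.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=index,name,pstate --format=csv
else
    echo "nvidia-smi not available on this machine"
fi
```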
