RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES

I am trying to check the GPU device name, but after running this code I get this unknown runtime error. Please help me resolve it and give complete instructions for fixing this error. Thank you.

    (base) kumar@kumar:~$ conda activate pytorch
    (pytorch) kumar@kumar:~$ python
    Python 3.8.5 (default, Sep  4 2020, 07:30:14)
    [GCC 7.3.0] :: Anaconda, Inc. on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import torch
    >>> print(torch.__version__)
    1.9.0a0+gitb39eeb0
    >>> print(torch.version.cuda)
    11.2
    >>> print(torch.cuda.current_device())
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/kumar/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/cuda/__init__.py", line 430, in current_device
        _lazy_init()
      File "/home/kumar/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/cuda/__init__.py", line 170, in _lazy_init
        torch._C._cuda_init()
    RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.
    >>> exit()
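
Since the error message specifically mentions CUDA_VISIBLE_DEVICES, it is worth confirming that the variable is not set to something that hides the GPU. This is only a minimal diagnostic sketch, assuming the same pytorch conda environment; note that torch.cuda.is_available() returns False (with a warning) instead of raising when initialization fails:

    import os
    import torch

    # An empty string or "-1" here hides every GPU from the CUDA runtime;
    # None means the variable is simply not set.
    print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))

    # is_available() and device_count() do not raise on a failed CUDA init,
    # so they are a gentler first check than current_device().
    print("is_available:", torch.cuda.is_available())
    print("device_count:", torch.cuda.device_count())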

Here is the output of nvcc -V:

    (pytorch) kumar@kumar:~$ nvcc -V
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2021 NVIDIA Corporation
    Built on Sun_Feb_14_21:12:58_PST_2021
    Cuda compilation tools, release 11.2, V11.2.152
    Build cuda_11.2.r11.2/compiler.29618528_0
    (pytorch) kumar@kumar:~$

Here is the output of nvidia-smi:

Thu Apr  8 15:04:49 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67       Driver Version: 460.39       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr: Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 3070    Off  | 00000000:01:00.0  On |                  N/A |
|  0%   38C    P8    10W / 220W |    525MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1015      G   /usr/lib/xorg/Xorg                 70MiB |
|    0   N/A  N/A      1542      G   /usr/lib/xorg/Xorg                257MiB |
|    0   N/A  N/A      1675      G   /usr/bin/gnome-shell               89MiB |
|    0   N/A  N/A      3560      G   ...AAAAAAAAA= --shared-files       94MiB |
+-----------------------------------------------------------------------------+

However, when I try to run this code:

print(torch.cuda.current_device())

I get the following error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/kumar/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/cuda/__init__.py", line 430, in current_device
        _lazy_init()
      File "/home/kumar/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/cuda/__init__.py", line 170, in _lazy_init
        torch._C._cuda_init()
    RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.

Answer 1

Taken from here. Try reloading the NVIDIA kernel modules:

# Unload the NVIDIA Unified Memory module and then the core driver module...
sudo rmmod nvidia_uvm
sudo rmmod nvidia
# ...then load them back in the reverse order.
sudo modprobe nvidia
sudo modprobe nvidia_uvm

Test:

>>> import torch
>>> torch.cuda.is_available()
True
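
Once torch.cuda.is_available() returns True, the original goal of checking the GPU device name should also work. A minimal sketch of that check, continuing the same interpreter session (the exact name string depends on the driver version, but it should report the RTX 3070 that nvidia-smi shows):

>>> torch.cuda.current_device()
0
>>> torch.cuda.get_device_name(0)   # exact string depends on the driver
'GeForce RTX 3070'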
