Steve

Reputation: 21

GPU drivers (CUDA, cuDF, etc.) installed but don't work

My GPU is a GTX 2070. I have followed every step from https://github.com/rapidsai/cudf (I used the steps for CUDA 10.1), but no luck — I can't use my GPU. I have also reinstalled Ubuntu and the drivers many times. Does anyone know how to solve this problem? I have been stuck on this step for a few months. Appreciate it!

OS: Ubuntu 16.04
Driver version: 430.64
CUDA Version: 10.1
python=3.6
cudf==0.13.0
These versions should be compatible according to the linked instructions, so why can't I run code on my GPU? Every time I run my code in the terminal, it shows this error:

Traceback (most recent call last):
  File "/home/user/Documents/test.py", line 5, in <module>
    import cudf
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/__init__.py", line 7, in <module>
    from cudf import core, datasets
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/core/__init__.py", line 3, in <module>
    from cudf.core import buffer, column
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/core/column/__init__.py", line 1, in <module>
    from cudf.core.column.categorical import CategoricalColumn  # noqa: F401
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/core/column/categorical.py", line 11, in <module>
    import cudf._libxx as libcudfxx
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/_libxx/__init__.py", line 5, in <module>
    from . import (
  File "cudf/_libxx/aggregation.pxd", line 9, in init cudf._libxx.reduce
  File "cudf/_libxx/aggregation.pyx", line 11, in init cudf._libxx.aggregation
  File "/home/user/miniconda3/lib/python3.6/site-packages/cudf/utils/cudautils.py", line 7, in <module>
    from numba import cuda, numpy_support
ImportError: cannot import name 'numpy_support'
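The final ImportError is the key clue: cudf 0.13 does `from numba import numpy_support`, but numba reorganized its module layout around version 0.49, moving that module to `numba.np.numpy_support`, so a too-new numba in the environment breaks this import. A minimal sketch of that version check (the 0.49 threshold is my reading of numba's changelog, so treat it as an assumption):

```python
def has_old_numpy_support(numba_version: str) -> bool:
    """Return True if this numba version still exposes
    `numba.numpy_support` at the top level (assumption: the
    module moved to `numba.np.numpy_support` in numba 0.49)."""
    major, minor = (int(p) for p in numba_version.split(".")[:2])
    return (major, minor) < (0, 49)

# cudf 0.13's `from numba import numpy_support` only works pre-0.49:
print(has_old_numpy_support("0.48.0"))  # True
print(has_old_numpy_support("0.49.0"))  # False
```

If that is the cause, pinning numba below 0.49 in the same environment (or upgrading cudf) should make the import succeed.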

Code that I run:

import cupy as cp
import cudf
import pandas as pd
import glob

# For each matching CSV: read it on the GPU, take rows 1-4 of
# the "low" column, and compute their natural log with CuPy
for f in glob.glob("/home/user/Documents/btc_test.csv"):
    data = cudf.read_csv(f)
    num = data.iloc[1:5]["low"]
    numcp = cp.log(num)
    print(numcp)
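For reference, the same steps run fine on the CPU with pandas and NumPy (the column values below are made up), which helps separate the indexing/log logic from the GPU setup problem:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for btc_test.csv's "low" column
data = pd.DataFrame({"low": [100.0, 98.5, 97.2, 99.1, 101.3, 102.0]})
num = data.iloc[1:5]["low"]   # rows 1..4, as in the cudf version
numcp = np.log(num)           # element-wise natural log
print(numcp)
```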

Upvotes: 1

Views: 1491

Answers (2)

Steve

Reputation: 21

Thanks for the answer. I have found out that every time you want to use the GPU instead of the CPU for processing, you have to activate the environment first with this command: source activate dask-cudf.

Upvotes: 0

MZe

Reputation: 168

I had the same error. This command worked for me in an Anaconda environment with Python 3.6. I also have CUDA 10.1 installed, so please substitute your own installed version.

conda install -c rapidsai -c nvidia -c conda-forge \
    -c defaults cudf=0.14 python=3.6 cudatoolkit=10.1

Reference: https://rapids.ai/start.html
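Since the `cudatoolkit` pin has to match the locally installed CUDA, a tiny (hypothetical) helper can derive the pin from the version string that `nvcc --version` or `nvidia-smi` reports:

```python
def cudatoolkit_pin(cuda_version: str) -> str:
    """Build the conda pin for an installed CUDA version,
    keeping only major.minor (e.g. '10.1.243' -> '10.1')."""
    major_minor = ".".join(cuda_version.split(".")[:2])
    return f"cudatoolkit={major_minor}"

print(cudatoolkit_pin("10.1"))     # cudatoolkit=10.1
print(cudatoolkit_pin("10.2.89"))  # cudatoolkit=10.2
```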

Upvotes: 1
