1. NVIDIA graphics driver: usually already bundled when the system is installed
On Deepin 15 you should be able to switch to the proprietary NVIDIA driver with the graphics driver manager
In other cases, download the driver from the NVIDIA website and install it manually
If the driver is already installed, the command below shows the GPU information; as the output shows, CUDA is already available on my machine
$ nvidia-smi
Fri Nov 6 15:58:58 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 965M Off | 00000000:01:00.0 Off | N/A |
| N/A 53C P8 N/A / N/A | 11MiB / 2002MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
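If you want these fields from a script instead of the full table, nvidia-smi also has a machine-readable query mode. Below is a minimal Python sketch of my own (not part of the original setup; it only assumes nvidia-smi is on the PATH, and the field names are standard --query-gpu properties):
import subprocess

# Ask nvidia-smi for specific fields as CSV instead of the ASCII table
fields = "name,driver_version,memory.total,memory.used,utilization.gpu"
out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=" + fields, "--format=csv,noheader"],
    universal_newlines=True,
)
for line in out.strip().splitlines():
    # e.g. "GeForce GTX 965M, 440.100, 2002 MiB, 11 MiB, 0 %"
    print(line)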
2. Install conda
Miniconda is used here:
https://docs.conda.io/en/latest/miniconda.html
3. Install CUDA
You can pause at this step and first try whether step 4 works on its own.
When conda installs a pinned tensorflow-gpu version, it automatically selects compatible packages
for you, including the CUDA toolkit, cuDNN, and the other miscellaneous dependencies. Downloading and installing
from the NVIDIA website instead is not only time-consuming, you also have to keep track of the version matrix yourself, and the result may conflict with packages already on the system.
If you do choose to install the CUDA toolkit first, check the CUDA version after the installation finishes,
then pick a matching tensorflow-gpu version in step 4.
sudo apt install nvidia-cuda-dev nvidia-cuda-toolkit
sudo apt install libcupti-dev nvidia-nsight nvidia-visual-profiler
After CUDA is installed, check the version with nvcc; here it reports 9.2:
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148
4. Install tensorflow-gpu with conda
#(Figure omitted: the tested-build matrix of software versions for each tensorflow-gpu release; see https://tensorflow.google.cn/install/source. The row for tensorflow_gpu-1.12.0 lists Python 2.7/3.3-3.6, cuDNN 7, CUDA 9. Pick your software versions according to that matrix.)
#Install tensorflow-gpu=1.12, since the CUDA version above is 9.2
#If you skipped step 3, you can try installing the pinned tensorflow-gpu version directly;
#conda may well resolve CUDA for you too. If it does not, go back to step 3;
#after coming back, whether conda needs to rebuild the environment depends on which CUDA version you installed
#The Python version is 3.6
#Let conda resolve the remaining dependencies
#First create a python36 environment with conda
conda create --name python36 python=3.6
#Activate the environment
source activate python36
#Install tensorflow-gpu
#First check which tensorflow-gpu versions are available
~$ conda search tensorflow-gpu
#Loading channels: done
# Name Version Build Channel
#tensorflow-gpu 1.4.1 0 pkgs/main
#tensorflow-gpu 1.5.0 0 pkgs/main
#tensorflow-gpu 1.6.0 0 pkgs/main
#tensorflow-gpu 1.7.0 0 pkgs/main
#tensorflow-gpu 1.8.0 h7b35bdc_0 pkgs/main
#tensorflow-gpu 1.9.0 hf154084_0 pkgs/main
#tensorflow-gpu 1.10.0 hf154084_0 pkgs/main
#tensorflow-gpu 1.11.0 h0d30ee6_0 pkgs/main
#tensorflow-gpu 1.12.0 h0d30ee6_0 pkgs/main
#tensorflow-gpu 1.13.1 h0d30ee6_0 pkgs/main
#tensorflow-gpu 1.14.0 h0d30ee6_0 pkgs/main
#tensorflow-gpu 1.15.0 h0d30ee6_0 pkgs/main
#tensorflow-gpu 2.0.0 h0d30ee6_0 pkgs/main
#tensorflow-gpu 2.1.0 h0d30ee6_0 pkgs/main
#tensorflow-gpu 2.2.0 h0d30ee6_0 pkgs/main
#The list includes the version we want
conda install tensorflow-gpu=1.12
#Test after the installation finishes
python
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
#If the output includes the GPU name, memory size, and so on, TensorFlow can use the GPU
~$ python
>>> import tensorflow as tf
#/home/ains/miniconda3/envs/keras_gpu/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
# _np_qint8 = np.dtype([("qint8", np.int8, 1)])
#/home/ains/miniconda3/envs/keras_gpu/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
# _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
#/home/ains/miniconda3/envs/keras_gpu/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
# _np_qint16 = np.dtype([("qint16", np.int16, 1)])
#/home/ains/miniconda3/envs/keras_gpu/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
# _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
#/home/ains/miniconda3/envs/keras_gpu/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
# _np_qint32 = np.dtype([("qint32", np.int32, 1)])
#/home/ains/miniconda3/envs/keras_gpu/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
# np_resource = np.dtype([("resource", np.ubyte, 1)])
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
#2020-11-06 16:24:03.718066: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
#2020-11-06 16:24:04.021124: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
#2020-11-06 16:24:04.021375: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
#name: GeForce GTX 965M major: 5 minor: 2 memoryClockRate(GHz): 1.15
#pciBusID: 0000:01:00.0
#totalMemory: 1.96GiB freeMemory: 1.91GiB
#2020-11-06 16:24:04.021392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
#2020-11-06 16:24:04.315574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
#2020-11-06 16:24:04.315634: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
#2020-11-06 16:24:04.315693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
#2020-11-06 16:24:04.315916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1663 MB memory) -> physical GPU (device: 0, name: GeForce GTX 965M, pci bus id: 0000:01:00.0, compute capability: 5.2)
#Device mapping:
#/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
#/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 965M, pci bus id: 0000:01:00.0, compute capability: 5.2
#/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
#2020-11-06 16:24:04.317697: I tensorflow/core/common_runtime/direct_session.cc:307] Device mapping:
#/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
#/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 965M, pci bus id: 0000:01:00.0, compute capability: 5.2
#/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
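Two quicker checks, if you would rather not read the whole session log: tf.test.is_gpu_available() and device_lib.list_local_devices() are standard TF 1.x APIs. A minimal sketch, run inside the python36 environment:
import tensorflow as tf
from tensorflow.python.client import device_lib

# True if TensorFlow can see at least one CUDA-capable GPU
print(tf.test.is_gpu_available())

# Every device TensorFlow registered, e.g. /device:CPU:0 and /device:GPU:0
for d in device_lib.list_local_devices():
    print(d.name, d.device_type)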
#Once the test succeeds, you can add other packages; these are the ones I use most often
conda install keras
conda install jupyter notebook
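To confirm that Keras picked up the TensorFlow backend and is actually exercising the GPU, here is a minimal sketch of my own (a toy model on random data, not part of the original walkthrough); while it runs, GPU memory usage should rise in nvidia-smi:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Random toy data: 1024 samples, 32 features, binary labels
x = np.random.rand(1024, 32).astype('float32')
y = np.random.randint(0, 2, size=(1024, 1))

model = Sequential([
    Dense(64, activation='relu', input_shape=(32,)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, batch_size=128, epochs=1)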
Test code 1: basic multi-GPU computation
from __future__ import print_function
'''
Basic Multi GPU computation example using TensorFlow library.
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''
'''
This tutorial requires your machine to have 2 GPUs
"/cpu:0": The CPU of your machine.
"/gpu:0": The first GPU of your machine
"/gpu:1": The second GPU of your machine
'''
import numpy as np
import tensorflow as tf
import datetime
# Processing Units logs
log_device_placement = True
# Num of multiplications to perform
n = 10
'''
Example: compute A^n + B^n on 2 GPUs
Results on 8 cores with 2 GTX-980:
* Single GPU computation time: 0:00:11.277449
* Multi GPU computation time: 0:00:07.131701
'''
# Create random large matrix
A = np.random.rand(5000, 5000).astype('float32')
B = np.random.rand(5000, 5000).astype('float32')
# Create a graph to store results
c1 = []
c2 = []
def matpow(M, n):
    if n < 1:  # Abstract cases where n < 1
        return M
    else:
        return tf.matmul(M, matpow(M, n-1))
'''
Single GPU computing
'''
with tf.device('/gpu:0'):
    a = tf.placeholder(tf.float32, [5000, 5000])
    b = tf.placeholder(tf.float32, [5000, 5000])
    # Compute A^n and B^n and store results in c1
    c1.append(matpow(a, n))
    c1.append(matpow(b, n))
with tf.device('/cpu:0'):
    sum = tf.add_n(c1)  # Addition of all elements in c1, i.e. A^n + B^n
t1_1 = datetime.datetime.now()
with tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:
    # Run the op.
    sess.run(sum, {a: A, b: B})
t2_1 = datetime.datetime.now()
'''
Multi GPU computing
'''
# # GPU:0 computes A^n
# with tf.device('/gpu:0'):
#     # Compute A^n and store result in c2
#     a = tf.placeholder(tf.float32, [5000, 5000])
#     c2.append(matpow(a, n))
# # GPU:1 computes B^n
# with tf.device('/gpu:1'):
#     # Compute B^n and store result in c2
#     b = tf.placeholder(tf.float32, [5000, 5000])
#     c2.append(matpow(b, n))
# with tf.device('/cpu:0'):
#     sum = tf.add_n(c2)  # Addition of all elements in c2, i.e. A^n + B^n
# t1_2 = datetime.datetime.now()
# with tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:
#     # Run the op.
#     sess.run(sum, {a: A, b: B})
# t2_2 = datetime.datetime.now()
print("Single GPU computation time: " + str(t2_1-t1_1))
# print("Multi GPU computation time: " + str(t2_2-t1_2))
Test code 2: multi-GPU CNN (MNIST)
''' Multi-GPU Training Example.
Train a convolutional neural network on multiple GPU with TensorFlow.
This example is using TensorFlow layers, see 'convolutional_network_raw' example
for a raw TensorFlow implementation with variables.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''
from __future__ import division, print_function, absolute_import
import numpy as np
import tensorflow as tf
import time
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Training Parameters
num_gpus = 1
num_steps = 200
learning_rate = 0.001
batch_size = 128
display_step = 10
# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
# Build a convolutional neural network
def conv_net(x, n_classes, dropout, reuse, is_training):
    # Define a scope for reusing the variables
    with tf.variable_scope('ConvNet', reuse=reuse):
        # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
        # Reshape to match picture format [Height x Width x Channel]
        # Tensor input become 4-D: [Batch Size, Height, Width, Channel]
        x = tf.reshape(x, shape=[-1, 28, 28, 1])
        # Convolution Layer with 64 filters and a kernel size of 5
        x = tf.layers.conv2d(x, 64, 5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        x = tf.layers.max_pooling2d(x, 2, 2)
        # Convolution Layer with 256 filters and a kernel size of 3
        x = tf.layers.conv2d(x, 256, 3, activation=tf.nn.relu)
        # Convolution Layer with 512 filters and a kernel size of 3
        x = tf.layers.conv2d(x, 512, 3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
        x = tf.layers.max_pooling2d(x, 2, 2)
        # Flatten the data to a 1-D vector for the fully connected layer
        x = tf.contrib.layers.flatten(x)
        # Fully connected layer (in contrib folder for now)
        x = tf.layers.dense(x, 2048)
        # Apply Dropout (if is_training is False, dropout is not applied)
        x = tf.layers.dropout(x, rate=dropout, training=is_training)
        # Fully connected layer (in contrib folder for now)
        x = tf.layers.dense(x, 1024)
        # Apply Dropout (if is_training is False, dropout is not applied)
        x = tf.layers.dropout(x, rate=dropout, training=is_training)
        # Output layer, class prediction
        out = tf.layers.dense(x, n_classes)
        # Because 'softmax_cross_entropy_with_logits' loss already applies
        # softmax, we only apply softmax to the testing network
        out = tf.nn.softmax(out) if not is_training else out
    return out
def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Note that each grad_and_vars looks like the following:
        # ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
        grads = []
        for g, _ in grad_and_vars:
            # Add 0 dimension to the gradients to represent the tower.
            expanded_g = tf.expand_dims(g, 0)
            # Append on a 'tower' dimension which we will average over below.
            grads.append(expanded_g)
        # Average over the 'tower' dimension.
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)
        # Keep in mind that the Variables are redundant because they are shared
        # across towers. So .. we will just return the first tower's pointer to
        # the Variable.
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
# By default, all variables will be placed on '/gpu:0'
# So we need a custom device function, to assign all variables to '/cpu:0'
# Note: If GPUs are peered, '/gpu:0' can be a faster option
PS_OPS = ['Variable', 'VariableV2', 'AutoReloadVariable']
def assign_to_device(device, ps_device='/cpu:0'):
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        if node_def.op in PS_OPS:
            return "/" + ps_device
        else:
            return device
    return _assign
# Place all ops on CPU by default
with tf.device('/cpu:0'):
    tower_grads = []
    reuse_vars = False
    # tf Graph input
    X = tf.placeholder(tf.float32, [None, num_input])
    Y = tf.placeholder(tf.float32, [None, num_classes])
    # Loop over all GPUs and construct their own computation graph
    for i in range(num_gpus):
        with tf.device(assign_to_device('/gpu:{}'.format(i), ps_device='/cpu:0')):
            # Split data between GPUs
            _x = X[i * batch_size: (i+1) * batch_size]
            _y = Y[i * batch_size: (i+1) * batch_size]
            # Because Dropout have different behavior at training and prediction time, we
            # need to create 2 distinct computation graphs that share the same weights.
            # Create a graph for training
            logits_train = conv_net(_x, num_classes, dropout,
                                    reuse=reuse_vars, is_training=True)
            # Create another graph for testing that reuse the same weights
            logits_test = conv_net(_x, num_classes, dropout,
                                   reuse=True, is_training=False)
            # Define loss and optimizer (with train logits, for dropout to take effect)
            loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
                logits=logits_train, labels=_y))
            optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
            grads = optimizer.compute_gradients(loss_op)
            # Only first GPU compute accuracy
            if i == 0:
                # Evaluate model (with test logits, for dropout to be disabled)
                correct_pred = tf.equal(tf.argmax(logits_test, 1), tf.argmax(_y, 1))
                accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
            reuse_vars = True
            tower_grads.append(grads)
    tower_grads = average_gradients(tower_grads)
    train_op = optimizer.apply_gradients(tower_grads)
    # Initialize the variables (i.e. assign their default value)
    init = tf.global_variables_initializer()
    # Start Training
    with tf.Session() as sess:
        # Run the initializer
        sess.run(init)
        # Keep training until reach max iterations
        for step in range(1, num_steps + 1):
            # Get a batch for each GPU
            batch_x, batch_y = mnist.train.next_batch(batch_size * num_gpus)
            # Run optimization op (backprop)
            ts = time.time()
            sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
            te = time.time() - ts
            if step % display_step == 0 or step == 1:
                # Calculate batch loss and accuracy
                loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                     Y: batch_y})
                print("Step " + str(step) + ": Minibatch Loss= " + \
                      "{:.4f}".format(loss) + ", Training Accuracy= " + \
                      "{:.3f}".format(acc) + ", %i Examples/sec" % int(len(batch_x)/te))
        print("Optimization Finished!")
        # Calculate accuracy for MNIST test images
        print("Testing Accuracy:", \
              np.mean([sess.run(accuracy, feed_dict={X: mnist.test.images[i:i+batch_size],
                                                     Y: mnist.test.labels[i:i+batch_size]}) for i in range(0, len(mnist.test.images), batch_size)]))
References
https://tensorflow.google.cn/install/source
https://blog.csdn.net/HappyCtest/article/details/86747306
https://www.cnblogs.com/minglex/p/9464980.html
https://blog.csdn.net/wangjie5540/article/details/100527558