Run TFLearn on GPU

Author: MapleLuv | Published 2018-06-29 15:52
    • I needed to run my code on a Windows machine at work, on the GPU. I had previously only worked on a Mac and had never dealt with GPUs.
    • Around the same time, I found that TFLearn is very convenient and saves a lot of boilerplate code. So I looked into how to make TFLearn run on the GPU, since plain TensorFlow requires adding several lines of configuration. However, I could not find any explicit GPU setting for TFLearn.
    • The most useful thing I found was this:

    TFLearn

    • Easy-to-use and understand high-level API for implementing deep neural networks, with tutorial and examples.
    • Fast prototyping through highly modular built-in neural network layers, regularizers, optimizers, metrics…
    • Full transparency over TensorFlow. All functions are built over tensors and can be used independently of TFLearn.
    • Powerful helper functions to train any TensorFlow graph, with support of multiple inputs, outputs and optimizers.
    • Easy and beautiful graph visualization, with details about weights, gradients, activations and more…
    • Effortless device placement for using multiple CPU/GPU.

    So my rough understanding is that no special setup is needed: TFLearn handles device placement automatically when a GPU is available.

    Here is an example:

    import tflearn

    # Limit TensorFlow to 8 CPU cores and pre-allocate 50% of GPU memory.
    tflearn.init_graph(num_cores=8, gpu_memory_fraction=0.5)

    net = tflearn.input_data(shape=[None, 784])
    net = tflearn.fully_connected(net, 64)
    net = tflearn.dropout(net, 0.5)
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

    model = tflearn.DNN(net)
    model.fit(X, Y)  # X, Y: training data, e.g. MNIST images and one-hot labels
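If you need to pin the process to one specific GPU (or force CPU-only execution), the standard TensorFlow approach of setting the `CUDA_VISIBLE_DEVICES` environment variable also works for TFLearn, since TFLearn runs on top of TensorFlow. A minimal sketch; the variable must be set before TensorFlow is imported:

```python
import os

# Expose only GPU 0 to TensorFlow/TFLearn.
# This must run before `import tensorflow` / `import tflearn`.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# To force CPU-only execution instead, hide all GPUs:
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```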

    • Sometimes we want to limit the compute resources a process uses, for example to allocate more or less GPU memory. For this purpose, TFLearn provides a graph initializer to configure a graph before running it:

    tflearn.init_graph(seed=8888, num_cores=16, gpu_memory_fraction=0.5)
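As a concrete illustration of what `gpu_memory_fraction` does (the 8 GB card size below is an assumption for illustration, not from the source): a fraction of 0.5 makes TensorFlow pre-allocate roughly half of the card's memory for the process.

```python
# Hypothetical card with 8 GB of GPU memory (assumed for illustration).
total_gpu_mem_gb = 8.0
gpu_memory_fraction = 0.5  # the value passed to tflearn.init_graph

# TensorFlow pre-allocates roughly this much memory for the process:
preallocated_gb = total_gpu_mem_gb * gpu_memory_fraction
print(preallocated_gb)  # 4.0
```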

    Looking at the source, the file /Users/apple/anaconda3/lib/python3.6/site-packages/tflearn/config.py contains:

    def init_graph(seed=None, log_device=False, num_cores=0, gpu_memory_fraction=0,
                   soft_placement=True):
        """ init_graph.

        Initialize a graph with specific parameters.

        Arguments:
            seed: `int`. Set the graph random seed.
            log_device: `bool`. Log device placement or not.
            num_cores: Number of CPU cores to be used. Default: All.
            gpu_memory_fraction: A value between 0 and 1 that indicates what
                fraction of the available GPU memory to pre-allocate for each
                process. 1 means to pre-allocate all of the GPU memory,
                0.5 means the process allocates ~50% of the available GPU
                memory. Default: Use all GPU's available memory.
            soft_placement: `bool`. Whether soft placement is allowed. If true,
                an op will be placed on CPU if:
                    1. there's no GPU implementation for the OP
                        or
                    2. no GPU devices are known or registered
                        or
                    3. need to co-locate with reftype input(s) which are from CPU.
        """
        if seed: tf.set_random_seed(seed)
        gs = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction)
        config = tf.ConfigProto(log_device_placement=log_device,
                                inter_op_parallelism_threads=num_cores,
                                intra_op_parallelism_threads=num_cores,
                                gpu_options=gs,
                                allow_soft_placement=soft_placement)
        tf.add_to_collection(tf.GraphKeys.GRAPH_CONFIG, config)

        return config
    

    Source: https://www.haomeiwen.com/subject/apjayftx.html