TensorFlow Learning Basics (Part 1)

Author: zhglance | Published 2019-11-29 11:21

    1. Overview

    TensorFlow is a powerful open-source library for deep neural networks developed by Google Brain. It abstracts away the underlying CPU/GPU hardware and exposes a high-level API. TensorFlow supports automatic differentiation, as well as recurrent neural networks (RNN), convolutional neural networks (CNN), and deep belief networks (DBN), and it offers bindings for Python, Java, Go, R, and other languages.

    TensorFlow comes in CPU and GPU versions; the differences are explained in the following article:
    https://blog.csdn.net/sinat_36458870/article/details/78783587

    2. Installing TensorFlow

    2.1 Install Anaconda

    Download Anaconda from https://www.anaconda.com and run the standard installer on Windows.
    Then open a command prompt (Run -> cmd) and type:
    conda -V
    If a version number is printed, the installation succeeded; otherwise it failed.

    2.2 Create the tensorflow environment

    conda create -n tensorflow python=3.7
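    To confirm the environment was created, you can list the conda environments (an extra verification step, not part of the original article):

    conda env list

    The new tensorflow environment should appear in the output.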


    Activate the environment:
    activate tensorflow

    Deactivate it:
    deactivate

    Then install tensorflow in PyCharm (remember to pin the version to 1.15.0). The installation takes quite a while, so be patient.
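    As an alternative to PyCharm (not covered in the original), the package can also be installed from the command line inside the activated environment and verified afterwards:

    activate tensorflow
    pip install tensorflow==1.15.0
    python -c "import tensorflow as tf; print(tf.__version__)"

    The last command should print 1.15.0 if the installation succeeded.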


    2.3 First program: Hello World
    import tensorflow as tf
    
    if __name__ == "__main__":
        print("==========start tensorflow===================")
    
        # define a constant string tensor
        msg = tf.constant('Hello World!')
    
        # tf.Session() still works in TF 1.15 but is deprecated in favor of tf.compat.v1.Session()
        sess = tf.Session()
    
        # run the graph to evaluate the tensor
        result = sess.run(msg)
    
        sess.close()
        print(result)
        print("==========end tensorflow===================")
    
    

    Output:

    ==========start tensorflow===================
    WARNING:tensorflow:From D:/zhangzh/python/lance-tensorflow-demo/lance/tensorflow/demo/HelloWorld.py:8: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
    
    2019-12-03 10:34:20.288735: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  AVX AVX2
    To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
    2019-12-03 10:34:20.292028: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
    b'Hello World!'
    ==========end tensorflow===================
    

    Note: the result is printed with a leading b (for bytes) and quotes because sess.run() returns a bytes object; call decode() to keep only the string content.
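    For example, building on the result variable from the program above:

    print(result.decode())  # prints: Hello World!

    decode() converts the bytes object returned by sess.run() into a regular Python string.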

    3. TensorFlow Program Development

    3.1 Basic addition, subtraction, multiplication, and division:
    import tensorflow as tf
    
    if __name__ == "__main__":
        print("==========start AddMatrixDemo===================")
        matrix_1 = tf.constant([100, 200, 1, 50, 2, 1000])
        matrix_2 = tf.constant([123, 23454, 54, 6, 657, 5])
    
        # tf.add and the overloaded -, *, / operators all build element-wise operations
        matrix_add = tf.add(matrix_1, matrix_2)
    
        matrix_subtraction = matrix_1 - matrix_2
    
        matrix_multiplication = matrix_1 * matrix_2
    
        matrix_division = matrix_1 / matrix_2
    
        sess = tf.compat.v1.Session()
    
        result_add = sess.run(matrix_add)
    
        result_subtraction = sess.run(matrix_subtraction)
    
        result_multiplication = sess.run(matrix_multiplication)
    
        result_division = sess.run(matrix_division)
    
        sess.close()
        print("add result:" + str(result_add))
        print("subtraction result:" + str(result_subtraction))
        print("multiplication result:" + str(result_multiplication))
        print("division result:" + str(result_division))
        print("==========end AddMatrixDemo===================")
    
    

    Output:

    ==========start AddMatrixDemo===================
    2019-12-03 14:08:06.320353: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  AVX AVX2
    To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
    2019-12-03 14:08:06.322350: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
    add result:[  223 23654    55    56   659  1005]
    subtraction result:[   -23 -23254    -53     44   -655    995]
    multiplication result:[  12300 4690800      54     300    1314    5000]
    division result:[8.13008130e-01 8.52733009e-03 1.85185185e-02 8.33333333e+00
     3.04414003e-03 2.00000000e+02]
    ==========end AddMatrixDemo===================
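    Note that matrix_1 / matrix_2 performs true division, which is why the result is floating-point even though both inputs are int32 tensors. As a small addition to the program above (placed before sess.close()), element-wise floor division keeps integer values:

    # floor division (//) maps to tf.math.floordiv and stays integer for integer inputs
    matrix_floordiv = matrix_1 // matrix_2
    print("floordiv result:" + str(sess.run(matrix_floordiv)))  # expected: [0 0 0 8 0 200]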
    
    3.2 TensorFlow tensors

    A tensor can be thought of as an n-dimensional array: scalars (0-dimensional, e.g. 4, 7, 8), vectors (1-dimensional, e.g. [1, 2, 3, 4]), and matrices (2-dimensional, e.g. [[1, 2, 3, 4], [2, 4, 6, 8]]) are all special cases of tensors.

    import tensorflow as tf
    
    if __name__ == "__main__":
        print("==========start tensorDemo===================")
        # scalar constant
        scalar = tf.constant(100)
    
        # vector constant
        vector = tf.constant([1, 2, 3, 4])
    
        # create a [6, 8] matrix in which every element is zero
        zero_tensor = tf.zeros([6, 8], tf.int32)
    
        # create a [10, 10] matrix in which every element is one
        one_tensor = tf.ones([10, 10], tf.int32)
    
        # TensorFlow also provides evenly spaced sequences (linspace), normally distributed random
        # arrays (random_normal), truncated normal random arrays (truncated_normal),
        # uniformly distributed random arrays (random_uniform), and more
    
        # [10, 10] matrix drawn from a normal distribution with mean=2.0, stddev=4 and random seed=100
        normal_distribution = tf.random_normal([10, 10], mean=2.0, stddev=4, seed=100)
    
        # TensorFlow variables are typically used for weights and biases in a neural network
        variable = tf.Variable(normal_distribution)
    
        # TensorFlow placeholder: a value that is fed in at run time
        x = tf.placeholder(tf.int32, shape=None, name="demo")
    
        print("==========end tensorDemo===================")
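    The placeholder x above is declared but never evaluated. As a sketch (not part of the original program), it could be fed a concrete value through feed_dict when running a session; this assumes the x defined above:

    sess = tf.compat.v1.Session()
    doubled = x * 2
    # feed_dict supplies the placeholder's value at graph-execution time
    print(sess.run(doubled, feed_dict={x: [1, 2, 3]}))  # [2 4 6]
    sess.close()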
    
    

    Printing matrices:

    import tensorflow as tf
    
    if __name__ == "__main__":
        print("==========start interactive demo===================")
    
        # InteractiveSession installs itself as the default session, so eval()/run() work directly
        sess = tf.compat.v1.InteractiveSession()
        matrix = tf.eye(5)
        print("Identity matrix [5,5]:")
        print(matrix.eval())
    
        matrix_v = tf.Variable(tf.eye(8))
        # variables must be initialized before they can be evaluated
        matrix_v.initializer.run()
        print("Identity matrix variable [8,8]:")
        print(matrix_v.eval())
    
        random_normal_v = tf.Variable(tf.random.normal([5,5]))
        random_normal_v.initializer.run()
        print("Random normal matrix variable [5,5]:")
        print(random_normal_v.eval())
    
        sess.close()
    
        print("==========end interactive demo===================")
    
    

    Output:

    ==========start interactive demo===================
    2019-12-03 15:46:19.350444: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  AVX AVX2
    To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
    2019-12-03 15:46:19.352187: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
    Identity matrix [5,5]:
    [[1. 0. 0. 0. 0.]
     [0. 1. 0. 0. 0.]
     [0. 0. 1. 0. 0.]
     [0. 0. 0. 1. 0.]
     [0. 0. 0. 0. 1.]]
    Identity matrix variable [8,8]:
    [[1. 0. 0. 0. 0. 0. 0. 0.]
     [0. 1. 0. 0. 0. 0. 0. 0.]
     [0. 0. 1. 0. 0. 0. 0. 0.]
     [0. 0. 0. 1. 0. 0. 0. 0.]
     [0. 0. 0. 0. 1. 0. 0. 0.]
     [0. 0. 0. 0. 0. 1. 0. 0.]
     [0. 0. 0. 0. 0. 0. 1. 0.]
     [0. 0. 0. 0. 0. 0. 0. 1.]]
    Random normal matrix variable [5,5]:
    [[-0.29164872  2.2333603   0.02524624  0.5131775  -4.4677563 ]
     [-1.9241036  -0.04562743 -0.9280727  -0.8892461  -1.5311104 ]
     [ 1.2640982  -0.81486684 -1.3741654  -1.6512661  -0.2854395 ]
     [-1.7600229   1.3423405  -0.49197945 -0.6679723   0.16603552]
     [-0.8282764   0.15621257 -0.8586982   0.9839028   0.5759689 ]]
    ==========end interactive demo===================
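    In the program above each variable is initialized individually with initializer.run(). A common alternative (a sketch, not taken from the original) is a regular session that initializes all variables in one call:

    import tensorflow as tf
    
    with tf.compat.v1.Session() as sess:
        matrix_v = tf.Variable(tf.eye(8))
        random_normal_v = tf.Variable(tf.random.normal([5, 5]))
        # initialize every variable defined so far in one step
        sess.run(tf.compat.v1.global_variables_initializer())
        print(sess.run(matrix_v))
        print(sess.run(random_normal_v))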
    
