Standard Steps for Solving a Machine Learning Problem, with an Example Program

Author: LabVIEW_Python | Published 2021-09-21 15:45

    The standard steps for solving a machine learning problem are:

    • Obtain the training data.
    • Define the model.
    • Define a loss function.
    • Run through the training data, computing the loss against the target values.
    • Compute the gradients of that loss, and use an optimizer to adjust the variables to fit the data.
    • Evaluate the results.

    The following linear regression example program walks through all of these steps:

    import tensorflow as tf
    import matplotlib.pyplot as plt

    TRUE_W = 3.0
    TRUE_B = 2.0
    NUM_SAMPLES = 1000

    # Generate the data/label pairs
    x = tf.random.normal(shape=[NUM_SAMPLES], stddev=3)
    noise = tf.random.normal(shape=[NUM_SAMPLES], stddev=1)
    y = x * TRUE_W + TRUE_B + noise

    # Define the model
    class MyModel(tf.Module):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            self.w = tf.Variable(tf.random.normal(()))
            self.b = tf.Variable(0.0)

        def __call__(self, x):
            return x * self.w + self.b

    model = MyModel()

    # Define the loss function (mean squared error)
    def loss(target_y, pred_y):
        return tf.reduce_mean(tf.square(target_y - pred_y))

    # Plot the data (blue) and the untrained model's predictions (red)
    plt.scatter(x, y, c='b')
    plt.scatter(x, model(x), c='r')
    plt.show()
    print("Start Loss: %1.6f" % loss(y, model(x)).numpy())

    # Define one training step
    def train_step(model, x, y, lr):
        with tf.GradientTape() as tape:
            # Compute the loss
            current_loss = loss(y, model(x))

        # Compute the gradients of the loss with respect to the model parameters
        dw, db = tape.gradient(current_loss, [model.w, model.b])

        # Update the model parameters using the gradients (plain gradient descent)
        model.w.assign_sub(lr * dw)
        model.b.assign_sub(lr * db)

    ws, bs = [], []
    epochs = range(30)

    # Training loop
    for epoch in epochs:
        train_step(model, x, y, lr=0.1)
        ws.append(model.w.numpy())
        bs.append(model.b.numpy())
        current_loss = loss(y, model(x))

        print(f"Epoch:{epoch}, w={ws[-1]}, b={bs[-1]}, Loss:{current_loss}")

    # Plot the data (blue) and the trained model's predictions (red)
    plt.scatter(x, y, c='b')
    plt.scatter(x, model(x), c='r')
    plt.show()
    

    Sample output (exact values vary from run to run, since the data and the initial weight are random):

    Start Loss: 36.010113
    Epoch:0, w=4.407978057861328,b=0.37662187218666077, Loss:21.411149978637695
    Epoch:1, w=1.9188339710235596,b=0.7346959114074707, Loss:12.916654586791992
    Epoch:2, w=3.811997890472412,b=0.978225588798523, Loss:7.968216896057129
    Epoch:3, w=2.381014347076416,b=1.205700159072876, Loss:5.0818190574646
    Epoch:4, w=3.469748020172119,b=1.3630000352859497, Loss:3.395845651626587
    Epoch:5, w=2.6471059322357178,b=1.5076169967651367, Loss:2.4095771312713623
    Epoch:6, w=3.2732346057891846,b=1.6091227531433105, Loss:1.8316911458969116
    Epoch:7, w=2.8003251552581787,b=1.701125979423523, Loss:1.4925024509429932
    Epoch:8, w=3.1604185104370117,b=1.7665724754333496, Loss:1.2930482625961304
    Epoch:9, w=2.8885650634765625,b=1.825140118598938, Loss:1.1755305528640747
    Epoch:10, w=3.0956637859344482,b=1.8673056364059448, Loss:1.1061452627182007
    Epoch:11, w=2.93939208984375,b=1.9046097993850708, Loss:1.0650876760482788
    Epoch:12, w=3.0585029125213623,b=1.931757926940918, Loss:1.0407360792160034
    Epoch:13, w=2.968674898147583,b=1.9555307626724243, Loss:1.026258111000061
    Epoch:14, w=3.037182331085205,b=1.9729998111724854, Loss:1.0176283121109009
    Epoch:15, w=2.985548973083496,b=1.9881565570831299, Loss:1.0124708414077759
    Epoch:16, w=3.0249528884887695,b=1.9993914365768433, Loss:1.009380578994751
    Epoch:17, w=2.9952750205993652,b=2.009058952331543, Loss:1.007523536682129
    Epoch:18, w=3.0179402828216553,b=2.0162811279296875, Loss:1.0064043998718262
    Epoch:19, w=3.000882625579834,b=2.0224497318267822, Loss:1.0057282447814941
    Epoch:20, w=3.0139200687408447,b=2.027090549468994, Loss:1.0053184032440186
    Epoch:21, w=3.0041167736053467,b=2.0310280323028564, Loss:1.0050692558288574
    Epoch:22, w=3.0116162300109863,b=2.034008741378784, Loss:1.0049173831939697
    Epoch:23, w=3.0059826374053955,b=2.03652286529541, Loss:1.0048246383666992
    Epoch:24, w=3.010296583175659,b=2.0384368896484375, Loss:1.004767656326294
    Epoch:25, w=3.007059335708618,b=2.0400424003601074, Loss:1.004732608795166
    Epoch:26, w=3.0095410346984863,b=2.0412709712982178, Loss:1.0047111511230469
    Epoch:27, w=3.007680892944336,b=2.0422966480255127, Loss:1.0046977996826172
    Epoch:28, w=3.009108781814575,b=2.0430850982666016, Loss:1.0046894550323486
    Epoch:29, w=3.008039951324463,b=2.0437405109405518, Loss:1.0046844482421875
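
    The step list above mentions using an optimizer, whereas the example adjusts w and b by hand with assign_sub. A minimal sketch of the same training step written against a built-in optimizer (tf.keras.optimizers.SGD here, not something the original program uses), reusing the model and loss defined above:

    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

    def train_step_with_optimizer(model, x, y):
        with tf.GradientTape() as tape:
            current_loss = loss(y, model(x))  # same MSE loss as above

        # Gradients with respect to every trainable variable tracked by the tf.Module
        grads = tape.gradient(current_loss, model.trainable_variables)

        # Let the optimizer apply its update rule; plain SGD computes v -= lr * g,
        # so this is numerically the same as the manual assign_sub updates
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

    Swapping in another optimizer (for example tf.keras.optimizers.Adam) then becomes a one-line change.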

    [Figure: scatter plot of the data vs. the model's predictions, before training]
    [Figure: scatter plot of the data vs. the model's predictions, after training]
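
    As an aside, the same workflow can be condensed with the high-level Keras API. A minimal sketch, assuming the same x and y generated above:

    # A single Dense unit is exactly the linear model y = w * x + b
    keras_model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
    keras_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
                        loss='mean_squared_error')

    # Dense expects 2-D inputs, so add a feature axis to the 1-D tensors
    keras_model.fit(x[:, tf.newaxis], y[:, tf.newaxis], epochs=30, verbose=0)
    print(keras_model.layers[0].get_weights())  # learned w and b, close to 3.0 and 2.0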
