Teaching a Computer to Do Addition --- an RNN Example

Author: gaoshine | Published 2017-09-13 15:03


    When I first saw this topic I thought: how hard can addition be for a computer? print(x+y) and you are done, so easy!
    But if we look carefully at how we humans actually do addition, and then ask the computer to learn it the way we did, it is not so simple.
    Consider the following scenes:

    1. As small children we first met numbers by dividing up candy, and our best calculating tool was our two hands.
    2. As we learned more, sums within 10, such as 3+5=8, started to come out automatically, with no finger counting needed.
    3. Today every one of us can blurt out the sum of any two numbers within 10 without thinking and without counting fingers.

    These scenes tell us that our brain is not a calculator: simple sums are produced from memory. Getting a computer to imitate the brain in this way, instead of using its arithmetic unit, is exactly today's topic.

    This is an application of a recurrent neural network (RNN): given a large dataset of examples (1+1=2, 2+3=5, ...), we teach the computer addition so that, like a human brain, it blurts out the answer — and, also like a human, it occasionally gets one wrong.

    Of course we do not memorize every case either; some sums can be worked out by inference.
    The example below teaches the computer three-digit addition (XYZ+ABC). We generate 50,000 records as the dataset, while the full space of ordered pairs is 1000x1000 = 1,000,000, so the dataset covers only about 5% of it (and since the generator treats a+b and b+a as the same question, roughly 10% of the ~500,500 unordered pairs). This also matches how we humans learn: master a small part and generalize to the whole.

    Implementing addition with a sequence-to-sequence (seq2seq) neural network

    Input: "535+61"
    Output: "596"
    Strings are padded with spaces to a fixed length

    The approach is sequence-to-sequence learning, as described in "Sequence to Sequence Learning with Neural Networks".
    The code comes from addition_rnn.py in the examples directory of the official Keras GitHub repository; I made a few small changes, mainly to aid understanding. See:
    [keras examples](https://github.com/fchollet/keras/tree/master/examples)
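The fixed-width framing can be sketched in a few lines of plain Python, using the same DIGITS = 3 convention as the script below (the helper name make_pair is mine, not part of the original code):

```python
DIGITS = 3
MAXLEN = DIGITS + 1 + DIGITS  # 'XYZ+ABC' is at most 7 characters

def make_pair(a, b):
    # Pad query and answer with trailing spaces to fixed widths,
    # the same scheme the data generator below uses.
    q = '{}+{}'.format(a, b).ljust(MAXLEN)
    ans = str(a + b).ljust(DIGITS + 1)  # a sum can need DIGITS + 1 digits
    return q, ans

q, ans = make_pair(535, 61)  # q == '535+61 ', ans == '596 '
```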

    %matplotlib inline
    from __future__ import print_function
    from keras.models import Sequential
    from keras import layers
    import numpy as np
    from six.moves import range
    
    from IPython.display import SVG
    from keras.utils.vis_utils import model_to_dot
    import pandas as pd
    import matplotlib.pyplot as plt
    
    Using TensorFlow backend.
    

    We use an XYZ+ABC addition training set, which we need to generate ourselves

    class CharacterTable(object):
        """Given a set of characters:
        + Encode them to a one hot integer representation
        + Decode the one hot integer representation to their character output
        + Decode a vector of probabilities to their character output
        """
        def __init__(self, chars):
            """Initialize character table.
            # Arguments
                chars: Characters that can appear in the input.
            """
            self.chars = sorted(set(chars))
            self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
            self.indices_char = dict((i, c) for i, c in enumerate(self.chars))
    
        def encode(self, C, num_rows):
            """One hot encode given string C.
            # Arguments
                num_rows: Number of rows in the returned one hot encoding. This is
                    used to keep the # of rows for each data the same.
            """
            x = np.zeros((num_rows, len(self.chars)))
            for i, c in enumerate(C):
                x[i, self.char_indices[c]] = 1
            return x
    
        def decode(self, x, calc_argmax=True):
            if calc_argmax:
                x = x.argmax(axis=-1)
            return ''.join(self.indices_char[i] for i in x.flatten())
    
    
    class colors:
        ok = '\033[92m'
        fail = '\033[91m'
        close = '\033[0m'
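To see what CharacterTable is doing, here is a round-trip on a toy string — a self-contained sketch that re-creates the same encode/decode logic with numpy:

```python
import numpy as np

chars = sorted(set('0123456789+ '))          # 12 symbols; ' ' sorts first
char_indices = {c: i for i, c in enumerate(chars)}
indices_char = {i: c for i, c in enumerate(chars)}

def encode(s, num_rows):
    # One row per character position, one column per symbol in the alphabet.
    x = np.zeros((num_rows, len(chars)))
    for i, c in enumerate(s):
        x[i, char_indices[c]] = 1
    return x

def decode(x):
    # argmax over the symbol axis recovers the character at each position.
    return ''.join(indices_char[i] for i in x.argmax(axis=-1))

onehot = encode('12+3   ', 7)  # shape (7, 12)
```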
        
        
    
    # Parameters for the model and dataset.
    # TRAINING_SIZE: size of the training set; 50,000 generated records, about 5% of the possible data
    # DIGITS: maximum number of digits in each operand
    # MAXLEN: maximum length of the input
    # e.g. '345+678' has DIGITS = 3 and MAXLEN = 3+1+3
    # INVERT: whether to reverse the input digits; it felt a bit odd to me, so I set it to False and skip it.
    
    TRAINING_SIZE = 50000
    DIGITS = 3
    INVERT = False
    
    # Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of
    # int is DIGITS.
    MAXLEN = DIGITS + 1 + DIGITS
    
    
    # Generate 50,000 records (questions and expected) as the training set
    # All the numbers, plus sign and space for padding.
    chars = '0123456789+ '
    ctable = CharacterTable(chars)
    
    questions = []
    expected = []
    seen = set()
    print('Generating data...')
    while len(questions) < TRAINING_SIZE:
        f = lambda: int(''.join(np.random.choice(list('0123456789'))
                        for i in range(np.random.randint(1, DIGITS + 1))))
        a, b = f(), f()
        # Skip any addition questions we've already seen
        # Also skip any such that x+Y == Y+x (hence the sorting).
        key = tuple(sorted((a, b)))
        if key in seen:
            continue
        seen.add(key)
        # Pad the data with spaces such that it is always MAXLEN.
        q = '{}+{}'.format(a, b)
        query = q + ' ' * (MAXLEN - len(q))
        ans = str(a + b)
        # Answers can be of maximum size DIGITS + 1.
        ans += ' ' * (DIGITS + 1 - len(ans))
        if INVERT:
            # Reverse the query, e.g., '12+345  ' becomes '  543+21'. (Note the
            # space used for padding.)
            query = query[::-1]
        questions.append(query)
        expected.append(ans)
    print('Total addition questions:', len(questions))
    # Print a sample of the dataset (questions and answers, entries 2-100)
    print(questions[1:100])
    print('Total addition  expected:', len(expected))
    print(expected[1:100]) 
    
    Generating data...
    Total addition questions: 50000
    ['51+9   ', '346+7  ', '2+11   ', '750+8  ', '539+84 ', '439+3  ', '778+7  ', '0+438  ', '47+4   ', '7+22   ', '8+7    ', '0+95   ', '900+285', '84+1   ', '975+6  ', '7+6    ', '828+830', '183+8  ', '63+254 ', '781+31 ', '4+2    ', '27+9   ', '453+550', '6+38   ', '927+47 ', '500+507', '593+536', '55+11  ', '479+2  ', '19+56  ', '377+946', '776+718', '85+920 ', '28+327 ', '2+92   ', '70+4   ', '0+9    ', '8+3    ', '32+200 ', '37+8   ', '6+64   ', '204+949', '96+94  ', '36+21  ', '63+101 ', '442+77 ', '463+988', '608+24 ', '2+6    ', '453+11 ', '413+7  ', '61+590 ', '1+556  ', '76+140 ', '9+6    ', '7+0    ', '9+260  ', '1+73   ', '2+5    ', '4+5    ', '84+8   ', '1+1    ', '5+0    ', '544+0  ', '906+4  ', '72+11  ', '213+954', '110+67 ', '245+28 ', '224+4  ', '975+412', '96+58  ', '26+335 ', '84+43  ', '9+8    ', '5+7    ', '1+0    ', '5+1    ', '6+95   ', '453+69 ', '61+230 ', '3+179  ', '1+4    ', '474+12 ', '3+81   ', '6+46   ', '52+4   ', '55+8   ', '337+22 ', '35+0   ', '815+29 ', '202+98 ', '796+81 ', '89+47  ', '68+827 ', '0+2    ', '74+191 ', '7+357  ', '99+52  ']
    Total addition  expected: 50000
    ['60  ', '353 ', '13  ', '758 ', '623 ', '442 ', '785 ', '438 ', '51  ', '29  ', '15  ', '95  ', '1185', '85  ', '981 ', '13  ', '1658', '191 ', '317 ', '812 ', '6   ', '36  ', '1003', '44  ', '974 ', '1007', '1129', '66  ', '481 ', '75  ', '1323', '1494', '1005', '355 ', '94  ', '74  ', '9   ', '11  ', '232 ', '45  ', '70  ', '1153', '190 ', '57  ', '164 ', '519 ', '1451', '632 ', '8   ', '464 ', '420 ', '651 ', '557 ', '216 ', '15  ', '7   ', '269 ', '74  ', '7   ', '9   ', '92  ', '2   ', '5   ', '544 ', '910 ', '83  ', '1167', '177 ', '273 ', '228 ', '1387', '154 ', '361 ', '127 ', '17  ', '12  ', '1   ', '6   ', '101 ', '522 ', '291 ', '182 ', '5   ', '486 ', '84  ', '52  ', '56  ', '63  ', '359 ', '35  ', '844 ', '300 ', '877 ', '136 ', '895 ', '2   ', '265 ', '364 ', '151 ']
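Each printed record can be sanity-checked by parsing the padded strings back into integers (check_pair is a small helper of mine, not part of the original script):

```python
def check_pair(query, answer):
    # Strip the space padding, split on '+', and verify the sum.
    a, b = query.strip().split('+')
    return int(a) + int(b) == int(answer.strip())

# e.g. the first printed record above: '51+9   ' -> '60  '
ok = check_pair('51+9   ', '60  ')
```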
    
    # Preprocess the dataset: vectorize (one-hot encode) it
    # After shuffling, 45,000 records are used for training and 5,000 for validation
    print('Vectorization...')
    x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=bool)
    y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=bool)
    for i, sentence in enumerate(questions):
        x[i] = ctable.encode(sentence, MAXLEN)
    for i, sentence in enumerate(expected):
        y[i] = ctable.encode(sentence, DIGITS + 1)
    
    # Shuffle (x, y) in unison as the later parts of x will almost all be larger
    # digits.
    indices = np.arange(len(y))
    np.random.shuffle(indices)
    x = x[indices]
    y = y[indices]
    
    # Explicitly set apart 10% for validation data that we never train over.
    split_at = len(x) - len(x) // 10
    (x_train, x_val) = x[:split_at], x[split_at:]
    (y_train, y_val) = y[:split_at], y[split_at:]
    
    print('Training Data:')
    print(x_train.shape)
    print(y_train.shape)
    
    print('Validation Data:')
    print(x_val.shape)
    print(y_val.shape)
    
    Vectorization...
    Training Data:
    (45000, 7, 12)
    (45000, 4, 12)
    Validation Data:
    (5000, 7, 12)
    (5000, 4, 12)
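These shapes follow directly from the parameters: 45,000/5,000 is the 90/10 split, 7 is MAXLEN, 4 is DIGITS + 1, and 12 is len('0123456789+ '). A quick arithmetic check:

```python
TRAINING_SIZE = 50000
DIGITS = 3
MAXLEN = DIGITS + 1 + DIGITS
chars = '0123456789+ '

split_at = TRAINING_SIZE - TRAINING_SIZE // 10   # 45000 training records
x_train_shape = (split_at, MAXLEN, len(chars))
y_val_shape = (TRAINING_SIZE - split_at, DIGITS + 1, len(chars))
```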
    
    # The code below builds the model; see model.summary() and the generated SVG diagram for details.
    # The RNN layer type is LSTM here; try replacing it with GRU or SimpleRNN.
    RNN = layers.LSTM
    HIDDEN_SIZE = 128
    BATCH_SIZE = 128
    LAYERS = 1
    
    print('Build model...')
    model = Sequential()
    # "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE.
    # Note: in a situation where your input sequences have variable length,
    # use input_shape=(None, num_feature), where num_feature = len(chars).
    model.add(RNN(HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))
    # As the decoder RNN's input, repeatedly provide the encoder's last hidden
    # state for each time step. Repeat DIGITS + 1 times, the maximum length of
    # the output: when DIGITS = 3, the largest sum is 999+999 = 1998, four characters.
    model.add(layers.RepeatVector(DIGITS + 1))
    # The decoder RNN can be a single layer or multiple stacked layers.
    for _ in range(LAYERS):
        
        # By setting return_sequences=True we return not only the last output but
        # the outputs at every timestep, in the form (num_samples, timesteps,
        # output_dim). This is required because the TimeDistributed wrapper below
        # expects the first dimension after the batch to be the timesteps. The
        # original example is terse here; it is worth reading about the output
        # modes of LSTM layers and the role of the TimeDistributed layer.
        model.add(RNN(HIDDEN_SIZE, return_sequences=True))
        
    # Apply a dense layer to every temporal slice of the input: for each step
    # of the output sequence, decide which character should be chosen.
    model.add(layers.TimeDistributed(layers.Dense(len(chars))))
    model.add(layers.Activation('softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    model.summary()
    SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
    
    Build model...
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    lstm_1 (LSTM)                (None, 128)               72192     
    _________________________________________________________________
    repeat_vector_1 (RepeatVecto (None, 4, 128)            0         
    _________________________________________________________________
    lstm_2 (LSTM)                (None, 4, 128)            131584    
    _________________________________________________________________
    time_distributed_1 (TimeDist (None, 4, 12)             1548      
    _________________________________________________________________
    activation_1 (Activation)    (None, 4, 12)             0         
    =================================================================
    Total params: 205,324
    Trainable params: 205,324
    Non-trainable params: 0
    _________________________________________________________________
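The parameter counts in the summary can be verified by hand: an LSTM with input dimension f and h units holds 4*(h*(f+h) + h) weights (kernel, recurrent kernel, and bias for each of the four gates), and a Dense layer holds f*units + units:

```python
def lstm_params(f, h):
    # 4 gates, each with kernel (f x h), recurrent kernel (h x h), bias (h)
    return 4 * (h * (f + h) + h)

def dense_params(f, units):
    return f * units + units

h, n_chars = 128, 12
p_encoder = lstm_params(n_chars, h)   # lstm_1: 72192
p_decoder = lstm_params(h, h)         # lstm_2: 131584
p_dense = dense_params(h, n_chars)    # time_distributed_1: 1548
total = p_encoder + p_decoder + p_dense   # 205324, matching the summary
```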
    
    The RNN model
    # Train the model for one epoch per generation and show predictions
    # against the validation dataset.
    loss = []
    acc = []
    val_loss = []
    val_acc = []
    for iteration in range(1, 200):
        print()
        print('-' * 50)
        print('Iteration', iteration)
        hist = model.fit(x_train, y_train,
                  batch_size=BATCH_SIZE,
                  epochs=1,
                  validation_data=(x_val, y_val))
        # Select 10 samples from the validation set at random so we can visualize
        # errors.
        print(hist.history)
        loss.append(hist.history['loss'][0])
        acc.append(hist.history['acc'][0])
        val_loss.append(hist.history['val_loss'][0])
        val_acc.append(hist.history['val_acc'][0])    
        for i in range(10):
            ind = np.random.randint(0, len(x_val))
            rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
            preds = model.predict_classes(rowx, verbose=0)
            q = ctable.decode(rowx[0])
            correct = ctable.decode(rowy[0])
            guess = ctable.decode(preds[0], calc_argmax=False)
            print('Q', q[::-1] if INVERT else q)
            print('T', correct)
            if correct == guess:
                print(colors.ok + '☑' + colors.close, end=" ")
            else:
                print(colors.fail + '☒' + colors.close, end=" ")
            print(guess)
            print('---')
    
    Training process
    --------------------------------------------------
    Iteration 1
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 1.8916 - acc: 0.3204 - val_loss: 1.7901 - val_acc: 0.3464
    {'val_loss': [1.7900752948760987], 'val_acc': [0.34644999999999998], 'loss': [1.8915647541681926], 'acc': [0.32042222222222222]}
    Q 18+532   T 550    ☒ 129
    Q 326+29   T 355    ☒ 122
    Q 78+753   T 831    ☒ 100
    Q 738+62   T 800    ☒ 102
    Q 28+159   T 187    ☒ 109
    Q 230+401  T 631    ☒ 122
    Q 427+724  T 1151   ☒ 102
    Q 57+777   T 834    ☒ 109
    Q 298+55   T 353    ☒ 109
    Q 416+29   T 445    ☒ 129
    
    --------------------------------------------------
    Iteration 2
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 1.7554 - acc: 0.3522 - val_loss: 1.6911 - val_acc: 0.3728
    {'val_loss': [1.6910695877075195], 'val_acc': [0.37275000000000003], 'loss': [1.7553744444105359], 'acc': [0.35220555555025734]}
    Q 192+12   T 204    ☒ 221
    Q 353+99   T 452    ☒ 103
    Q 381+886  T 1267   ☒ 1222
    Q 35+50    T 85     ☒ 55
    Q 150+62   T 212    ☒ 567
    Q 285+4    T 289    ☒ 33
    Q 698+699  T 1397   ☒ 1692
    Q 447+3    T 450    ☒ 35
    Q 91+215   T 306    ☒ 121
    Q 695+7    T 702    ☒ 103
    
    --------------------------------------------------
    Iteration 3
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 1.6050 - acc: 0.3986 - val_loss: 1.5398 - val_acc: 0.4189
    {'val_loss': [1.5398105825424195], 'val_acc': [0.41894999999999999], 'loss': [1.6049728581110636], 'acc': [0.39855000001589458]}
    Q 9+123    T 132    ☒ 123
    Q 676+76   T 752    ☒ 762
    Q 956+59   T 1015   ☒ 1021
    Q 215+249  T 464    ☒ 355
    Q 793+975  T 1768   ☒ 1687
    Q 615+1    T 616    ☒ 621
    Q 60+947   T 1007   ☒ 1021
    Q 39+820   T 859    ☒ 891
    Q 321+1    T 322    ☒ 22
    Q 339+7    T 346    ☒ 337
    
    --------------------------------------------------
    Iteration 4
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 1.4618 - acc: 0.4521 - val_loss: 1.3862 - val_acc: 0.4834
    {'val_loss': [1.3862218780517579], 'val_acc': [0.4834], 'loss': [1.4617731448067559], 'acc': [0.45206111112170749]}
    Q 58+326   T 384    ☒ 353
    Q 323+9    T 332    ☑ 332
    Q 845+808  T 1653   ☒ 1554
    Q 773+6    T 779    ☒ 744
    Q 496+259  T 755    ☒ 667
    Q 34+83    T 117    ☒ 13
    Q 26+431   T 457    ☒ 466
    Q 91+25    T 116    ☒ 12
    Q 23+25    T 48     ☒ 36
    Q 165+5    T 170    ☒ 166
    
    --------------------------------------------------
    Iteration 5
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 1.3244 - acc: 0.5041 - val_loss: 1.2633 - val_acc: 0.5250
    {'val_loss': [1.2633318698883056], 'val_acc': [0.52500000000000002], 'loss': [1.3243814388910928], 'acc': [0.50412777777777773]}
    Q 49+51    T 100    ☒ 10
    Q 885+176  T 1061   ☒ 100
    Q 318+68   T 386    ☒ 421
    Q 98+523   T 621    ☒ 641
    Q 73+469   T 542    ☒ 521
    Q 49+75    T 124    ☒ 111
    Q 76+365   T 441    ☒ 421
    Q 279+617  T 896    ☒ 804
    Q 402+322  T 724    ☒ 767
    Q 835+435  T 1270   ☒ 1277
    
    
    --------------------------------------------------
    Iteration 11
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 36s - loss: 0.6989 - acc: 0.7403 - val_loss: 0.6459 - val_acc: 0.7578
    {'val_loss': [0.64586523256301875], 'val_acc': [0.75775000000000003], 'loss': [0.69888072902891374], 'acc': [0.7403388889100817]}
    Q 256+30   T 286    ☒ 277
    Q 8+721    T 729    ☑ 729
    Q 341+36   T 377    ☑ 377
    Q 263+112  T 375    ☒ 474
    Q 1+705    T 706    ☒ 707
    Q 713+95   T 808    ☑ 808
    Q 951+737  T 1688   ☒ 1679
    Q 47+767   T 814    ☒ 824
    Q 415+625  T 1040   ☒ 967
    Q 300+4    T 304    ☒ 305
    
    
    --------------------------------------------------
    Iteration 21
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0879 - acc: 0.9817 - val_loss: 0.1036 - val_acc: 0.9716
    {'val_loss': [0.10357598056793213], 'val_acc': [0.97165000000000001], 'loss': [0.087886847303973309], 'acc': [0.98168333332273694]}
    Q 571+58   T 629    ☑ 629
    Q 396+781  T 1177   ☑ 1177
    Q 234+980  T 1214   ☑ 1214
    Q 145+627  T 772    ☒ 762
    Q 91+137   T 228    ☑ 228
    Q 7+419    T 426    ☑ 426
    Q 772+75   T 847    ☑ 847
    Q 705+128  T 833    ☑ 833
    Q 520+6    T 526    ☑ 526
    Q 732+74   T 806    ☑ 806
    
    --------------------------------------------------
    Iteration 22
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0881 - acc: 0.9788 - val_loss: 0.0858 - val_acc: 0.9790
    {'val_loss': [0.085751581120491027], 'val_acc': [0.97899999999999998], 'loss': [0.088109891496764292], 'acc': [0.97884444448682995]}
    Q 63+61    T 124    ☑ 124
    Q 807+60   T 867    ☑ 867
    Q 768+63   T 831    ☑ 831
    Q 438+52   T 490    ☑ 490
    Q 51+763   T 814    ☑ 814
    Q 775+84   T 859    ☑ 859
    Q 642+69   T 711    ☑ 711
    Q 66+622   T 688    ☑ 688
    Q 166+500  T 666    ☑ 666
    Q 624+28   T 652    ☑ 652
    
    
    --------------------------------------------------
    Iteration 30
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0276 - acc: 0.9949 - val_loss: 0.0443 - val_acc: 0.9864
    {'val_loss': [0.044335523164272306], 'val_acc': [0.98640000000000005], 'loss': [0.027582680843936072], 'acc': [0.99492777774598862]}
    Q 498+7    T 505    ☑ 505
    Q 444+207  T 651    ☑ 651
    Q 546+746  T 1292   ☑ 1292
    Q 673+68   T 741    ☑ 741
    Q 316+45   T 361    ☑ 361
    Q 8+109    T 117    ☑ 117
    Q 538+594  T 1132   ☑ 1132
    Q 58+876   T 934    ☑ 934
    Q 99+867   T 966    ☑ 966
    Q 302+840  T 1142   ☑ 1142
    
    
    
    --------------------------------------------------
    Iteration 40
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0084 - acc: 0.9992 - val_loss: 0.0225 - val_acc: 0.9932
    {'val_loss': [0.022462421262264252], 'val_acc': [0.99324999999999997], 'loss': [0.0084158510135279758], 'acc': [0.99916666670905219]}
    Q 666+930  T 1596   ☑ 1596
    Q 21+26    T 47     ☑ 47
    Q 70+245   T 315    ☑ 315
    Q 927+65   T 992    ☑ 992
    Q 614+16   T 630    ☑ 630
    Q 799+40   T 839    ☑ 839
    Q 11+28    T 39     ☑ 39
    Q 429+86   T 515    ☑ 515
    Q 295+94   T 389    ☑ 389
    Q 728+5    T 733    ☑ 733
    
    --------------------------------------------------
    Iteration 41
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0347 - acc: 0.9901 - val_loss: 0.0987 - val_acc: 0.9671
    {'val_loss': [0.098733154940605167], 'val_acc': [0.96714999999999995], 'loss': [0.034694219418366749], 'acc': [0.99013888886769608]}
    Q 560+9    T 569    ☑ 569
    Q 416+390  T 806    ☒ 706
    Q 991+860  T 1851   ☑ 1851
    Q 85+532   T 617    ☑ 617
    Q 54+13    T 67     ☑ 67
    Q 436+330  T 766    ☑ 766
    Q 14+809   T 823    ☑ 823
    Q 55+602   T 657    ☑ 657
    Q 90+824   T 914    ☑ 914
    Q 909+44   T 953    ☑ 953
    
    
    --------------------------------------------------
    Iteration 50
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 41s - loss: 0.0336 - acc: 0.9891 - val_loss: 0.0226 - val_acc: 0.9926
    {'val_loss': [0.022579708782583474], 'val_acc': [0.99265000000000003], 'loss': [0.033597714668015637], 'acc': [0.98904999999999998]}
    Q 82+44    T 126    ☑ 126
    Q 97+434   T 531    ☑ 531
    Q 157+85   T 242    ☑ 242
    Q 951+319  T 1270   ☑ 1270
    Q 131+0    T 131    ☑ 131
    Q 80+935   T 1015   ☑ 1015
    Q 313+41   T 354    ☑ 354
    Q 393+11   T 404    ☑ 404
    Q 214+197  T 411    ☑ 411
    Q 371+1    T 372    ☑ 372
    
    
    --------------------------------------------------
    Iteration 80
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0309 - acc: 0.9907 - val_loss: 0.0110 - val_acc: 0.9969
    {'val_loss': [0.011006494026631116], 'val_acc': [0.99690000000000001], 'loss': [0.030902873884472583], 'acc': [0.99073333335452607]}
    Q 929+75   T 1004   ☑ 1004
    Q 211+71   T 282    ☑ 282
    Q 205+6    T 211    ☑ 211
    Q 0+148    T 148    ☑ 148
    Q 51+518   T 569    ☑ 569
    Q 813+618  T 1431   ☑ 1431
    Q 649+86   T 735    ☑ 735
    Q 59+514   T 573    ☑ 573
    Q 790+31   T 821    ☑ 821
    Q 707+932  T 1639   ☑ 1639
    
    --------------------------------------------------
    Iteration 81
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0020 - acc: 0.9999 - val_loss: 0.0081 - val_acc: 0.9979
    {'val_loss': [0.0081367643300443888], 'val_acc': [0.99785000000000001], 'loss': [0.0020035561124069822], 'acc': [0.99992777777777775]}
    Q 245+34   T 279    ☑ 279
    Q 2+78     T 80     ☑ 80
    Q 63+391   T 454    ☑ 454
    Q 45+888   T 933    ☑ 933
    Q 28+653   T 681    ☑ 681
    Q 45+826   T 871    ☑ 871
    Q 33+814   T 847    ☑ 847
    Q 552+978  T 1530   ☑ 1530
    Q 802+2    T 804    ☑ 804
    Q 22+538   T 560    ☑ 560
    
    --------------------------------------------------
    Iteration 82
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0012 - acc: 1.0000 - val_loss: 0.0069 - val_acc: 0.9979
    {'val_loss': [0.0068942253075540069], 'val_acc': [0.99790000000000001], 'loss': [0.0012494151224899622], 'acc': [0.99998888888888893]}
    Q 7+881    T 888    ☑ 888
    Q 166+500  T 666    ☑ 666
    Q 333+3    T 336    ☑ 336
    Q 720+55   T 775    ☑ 775
    Q 752+189  T 941    ☑ 941
    Q 935+479  T 1414   ☑ 1414
    Q 30+882   T 912    ☑ 912
    Q 66+37    T 103    ☑ 103
    Q 289+717  T 1006   ☒ 9006
    Q 174+72   T 246    ☑ 246
    
    --------------------------------------------------
    Iteration 83
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0010 - acc: 1.0000 - val_loss: 0.0062 - val_acc: 0.9981
    {'val_loss': [0.0062321711093187336], 'val_acc': [0.99814999999999998], 'loss': [0.0010194639238839348], 'acc': [0.99999444444444441]}
    Q 604+393  T 997    ☑ 997
    Q 206+9    T 215    ☑ 215
    Q 708+38   T 746    ☑ 746
    Q 31+434   T 465    ☑ 465
    Q 240+48   T 288    ☑ 288
    Q 772+75   T 847    ☑ 847
    Q 979+445  T 1424   ☑ 1424
    Q 591+168  T 759    ☑ 759
    Q 55+7     T 62     ☑ 62
    Q 60+877   T 937    ☑ 937
    
    --------------------------------------------------
    Iteration 84
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 9.5497e-04 - acc: 1.0000 - val_loss: 0.0072 - val_acc: 0.9979
    {'val_loss': [0.0072223034381866452], 'val_acc': [0.99785000000000001], 'loss': [0.00095497155957337883], 'acc': [0.99998333333333334]}
    Q 91+25    T 116    ☑ 116
    Q 429+86   T 515    ☑ 515
    Q 5+218    T 223    ☑ 223
    Q 5+583    T 588    ☑ 588
    Q 84+362   T 446    ☑ 446
    Q 428+4    T 432    ☑ 432
    Q 35+768   T 803    ☑ 803
    Q 430+21   T 451    ☑ 451
    Q 95+486   T 581    ☑ 581
    Q 0+642    T 642    ☑ 642
    
    --------------------------------------------------
    Iteration 85
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 9.4473e-04 - acc: 1.0000 - val_loss: 0.0069 - val_acc: 0.9980
    {'val_loss': [0.0069201477531343697], 'val_acc': [0.998], 'loss': [0.00094472687745259865], 'acc': [0.99998333333333334]}
    Q 772+75   T 847    ☑ 847
    Q 94+634   T 728    ☑ 728
    Q 836+335  T 1171   ☑ 1171
    Q 64+8     T 72     ☑ 72
    Q 8+984    T 992    ☑ 992
    Q 806+61   T 867    ☑ 867
    Q 1+576    T 577    ☑ 577
    Q 57+256   T 313    ☑ 313
    Q 393+37   T 430    ☑ 430
    Q 69+968   T 1037   ☑ 1037
    
    --------------------------------------------------
    Iteration 86
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 40s - loss: 0.0420 - acc: 0.9873 - val_loss: 0.0184 - val_acc: 0.9942
    {'val_loss': [0.018363300597667696], 'val_acc': [0.99424999999999997], 'loss': [0.041988874252306088], 'acc': [0.98727777777777781]}
    Q 636+4    T 640    ☑ 640
    Q 63+439   T 502    ☑ 502
    Q 245+34   T 279    ☑ 279
    Q 574+387  T 961    ☑ 961
    Q 802+70   T 872    ☑ 872
    Q 659+234  T 893    ☑ 893
    Q 29+505   T 534    ☑ 534
    Q 87+217   T 304    ☑ 304
    Q 849+988  T 1837   ☑ 1837
    Q 9+874    T 883    ☑ 883
    
    --------------------------------------------------
    Iteration 87
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 40s - loss: 0.0031 - acc: 0.9997 - val_loss: 0.0095 - val_acc: 0.9975
    {'val_loss': [0.0095259528324007983], 'val_acc': [0.99750000000000005], 'loss': [0.003114491027055515], 'acc': [0.99965000000000004]}
    Q 659+447  T 1106   ☑ 1106
    Q 39+614   T 653    ☑ 653
    Q 550+776  T 1326   ☑ 1326
    Q 825+66   T 891    ☑ 891
    Q 848+344  T 1192   ☑ 1192
    Q 823+648  T 1471   ☑ 1471
    Q 241+364  T 605    ☑ 605
    Q 3+283    T 286    ☑ 286
    Q 177+87   T 264    ☑ 264
    Q 770+995  T 1765   ☑ 1765
    
    --------------------------------------------------
    Iteration 88
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 40s - loss: 0.0014 - acc: 0.9999 - val_loss: 0.0088 - val_acc: 0.9973
    {'val_loss': [0.0088253471940755845], 'val_acc': [0.99729999999999996], 'loss': [0.0014127093569686014], 'acc': [0.99994444444444441]}
    Q 93+749   T 842    ☑ 842
    Q 76+537   T 613    ☑ 613
    Q 52+256   T 308    ☑ 308
    Q 43+554   T 597    ☑ 597
    Q 470+32   T 502    ☑ 502
    Q 729+66   T 795    ☑ 795
    Q 58+869   T 927    ☑ 927
    Q 112+655  T 767    ☑ 767
    Q 503+9    T 512    ☑ 512
    Q 3+199    T 202    ☑ 202
    
    --------------------------------------------------
    Iteration 89
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0010 - acc: 1.0000 - val_loss: 0.0072 - val_acc: 0.9980
    {'val_loss': [0.0071900495752692225], 'val_acc': [0.99804999999999999], 'loss': [0.001022424675565627], 'acc': [0.99998888888888893]}
    Q 23+54  
    T 77  
    �[92m☑�[0m 77  
    ---
    Q 7+281  
    T 288 
    �[92m☑�[0m 288 
    ---
    Q 566+330
    T 896 
    �[92m☑�[0m 896 
    ---
    Q 700+97 
    T 797 
    �[92m☑�[0m 797 
    ---
    Q 6+826  
    T 832 
    �[92m☑�[0m 832 
    ---
    Q 58+446 
    T 504 
    �[92m☑�[0m 504 
    ---
    Q 461+601
    T 1062
    �[92m☑�[0m 1062
    ---
    Q 541+78 
    T 619 
    �[92m☑�[0m 619 
    ---
    Q 188+359
    T 547 
    �[92m☑�[0m 547 
    ---
    Q 283+83 
    T 366 
    �[92m☑�[0m 366 
    ---
    
    --------------------------------------------------
    Iteration 90
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 8.3694e-04 - acc: 1.0000 - val_loss: 0.0066 - val_acc: 0.9981
    {'val_loss': [0.0066044864896684886], 'val_acc': [0.99809999999999999], 'loss': [0.00083693527401321459], 'acc': [0.99999444444444441]}
    Q 915+99 
    T 1014
    �[92m☑�[0m 1014
    ---
    Q 874+530
    T 1404
    �[92m☑�[0m 1404
    ---
    Q 40+244 
    T 284 
    �[92m☑�[0m 284 
    ---
    Q 321+503
    T 824 
    �[92m☑�[0m 824 
    ---
    Q 146+9  
    T 155 
    �[92m☑�[0m 155 
    ---
    Q 400+528
    T 928 
    �[92m☑�[0m 928 
    ---
    Q 620+818
    T 1438
    �[92m☑�[0m 1438
    ---
    Q 86+24  
    T 110 
    �[92m☑�[0m 110 
    ---
    Q 659+67 
    T 726 
    �[92m☑�[0m 726 
    ---
    Q 584+737
    T 1321
    �[92m☑�[0m 1321
    ---
    
    --------------------------------------------------
    Iteration 91
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 7.5902e-04 - acc: 1.0000 - val_loss: 0.0068 - val_acc: 0.9981
    {'val_loss': [0.0068030334025621416], 'val_acc': [0.99809999999999999], 'loss': [0.00075902411151263448], 'acc': [0.99998888888888893]}
    Q 96+511 
    T 607 
    �[92m☑�[0m 607 
    ---
    Q 97+558 
    T 655 
    �[92m☑�[0m 655 
    ---
    Q 16+78  
    T 94  
    �[92m☑�[0m 94  
    ---
    Q 97+87  
    T 184 
    �[92m☑�[0m 184 
    ---
    Q 34+480 
    T 514 
    �[92m☑�[0m 514 
    ---
    Q 518+50 
    T 568 
    �[92m☑�[0m 568 
    ---
    Q 786+370
    T 1156
    �[92m☑�[0m 1156
    ---
    Q 55+23  
    T 78  
    �[92m☑�[0m 78  
    ---
    Q 96+725 
    T 821 
    �[92m☑�[0m 821 
    ---
    Q 638+9  
    T 647 
    �[92m☑�[0m 647 
    ---
    
    --------------------------------------------------
    Iteration 92
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0319 - acc: 0.9898 - val_loss: 0.0276 - val_acc: 0.9911
    {'val_loss': [0.027552293083071708], 'val_acc': [0.99109999999999998], 'loss': [0.03191252367479934], 'acc': [0.98983333332273693]}
    Q 58+497 
    T 555 
    �[92m☑�[0m 555 
    ---
    Q 71+40  
    T 111 
    �[92m☑�[0m 111 
    ---
    Q 0+175  
    T 175 
    �[92m☑�[0m 175 
    ---
    Q 600+97 
    T 697 
    �[92m☑�[0m 697 
    ---
    Q 940+115
    T 1055
    �[92m☑�[0m 1055
    ---
    Q 25+863 
    T 888 
    �[92m☑�[0m 888 
    ---
    Q 480+82 
    T 562 
    �[92m☑�[0m 562 
    ---
    Q 463+2  
    T 465 
    �[92m☑�[0m 465 
    ---
    Q 177+47 
    T 224 
    �[92m☑�[0m 224 
    ---
    Q 455+83 
    T 538 
    �[92m☑�[0m 538 
    ---
    
    --------------------------------------------------
    Iteration 93
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 0.0047 - acc: 0.9990 - val_loss: 0.0096 - val_acc: 0.9969
    {'val_loss': [0.009624259917996824], 'val_acc': [0.99690000000000001], 'loss': [0.0046642686388972737], 'acc': [0.99896666666666667]}
    Q 4+459  
    T 463 
    �[92m☑�[0m 463 
    ---
    Q 846+408
    T 1254
    �[92m☑�[0m 1254
    ---
    Q 678+15 
    T 693 
    �[92m☑�[0m 693 
    ---
    Q 85+474 
    T 559 
    �[92m☑�[0m 559 
    ---
    Q 573+12 
    T 585 
    �[92m☑�[0m 585 
    ---
    Q 437+243
    T 680 
    �[92m☑�[0m 680 
    ---
    Q 224+34 
    T 258 
    �[92m☑�[0m 258 
    ---
    Q 96+588 
    T 684 
    �[92m☑�[0m 684 
    ---
    Q 410+41 
    T 451 
    �[92m☑�[0m 451 
    ---
    Q 18+210 
    T 228 
    �[92m☑�[0m 228 
    ---
    
    --------------------------------------------------
    Iteration 94
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 0.0012 - acc: 1.0000 - val_loss: 0.0066 - val_acc: 0.9982
    {'val_loss': [0.0066092373952269558], 'val_acc': [0.99819999999999998], 'loss': [0.0011797052747259537], 'acc': [0.99998333333333334]}
    Q 647+774
    T 1421
    �[92m☑�[0m 1421
    ---
    Q 976+86 
    T 1062
    �[92m☑�[0m 1062
    ---
    Q 275+598
    T 873 
    �[92m☑�[0m 873 
    ---
    Q 342+634
    T 976 
    �[92m☑�[0m 976 
    ---
    Q 830+42 
    T 872 
    �[92m☑�[0m 872 
    ---
    Q 85+85  
    T 170 
    �[92m☑�[0m 170 
    ---
    Q 677+521
    T 1198
    �[92m☑�[0m 1198
    ---
    Q 940+949
    T 1889
    �[92m☑�[0m 1889
    ---
    Q 41+750 
    T 791 
    �[92m☑�[0m 791 
    ---
    Q 12+6   
    T 18  
    �[92m☑�[0m 18  
    ---
    
    --------------------------------------------------
    Iteration 95
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 8.5304e-04 - acc: 1.0000 - val_loss: 0.0059 - val_acc: 0.9982
    {'val_loss': [0.0059119948847219349], 'val_acc': [0.99819999999999998], 'loss': [0.00085304205637011261], 'acc': [1.0]}
    Q 13+57  
    T 70  
    �[92m☑�[0m 70  
    ---
    Q 534+71 
    T 605 
    �[92m☑�[0m 605 
    ---
    Q 762+703
    T 1465
    �[92m☑�[0m 1465
    ---
    Q 136+8  
    T 144 
    �[92m☑�[0m 144 
    ---
    Q 707+932
    T 1639
    �[92m☑�[0m 1639
    ---
    Q 31+51  
    T 82  
    �[92m☑�[0m 82  
    ---
    Q 2+804  
    T 806 
    �[92m☑�[0m 806 
    ---
    Q 823+648
    T 1471
    �[92m☑�[0m 1471
    ---
    Q 655+86 
    T 741 
    �[92m☑�[0m 741 
    ---
    Q 612+799
    T 1411
    �[92m☑�[0m 1411
    ---
    
    --------------------------------------------------
    Iteration 96
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 40s - loss: 0.0021 - acc: 0.9996 - val_loss: 0.0922 - val_acc: 0.9789
    {'val_loss': [0.092193102195858953], 'val_acc': [0.97894999999999999], 'loss': [0.0021130382461680306], 'acc': [0.99959444446563717]}
    Q 97+558 
    T 655 
    �[92m☑�[0m 655 
    ---
    Q 183+583
    T 766 
    �[92m☑�[0m 766 
    ---
    Q 20+32  
    T 52  
    �[92m☑�[0m 52  
    ---
    Q 26+58  
    T 84  
    �[92m☑�[0m 84  
    ---
    Q 166+66 
    T 232 
    �[92m☑�[0m 232 
    ---
    Q 98+523 
    T 621 
    �[92m☑�[0m 621 
    ---
    Q 92+673 
    T 765 
    �[92m☑�[0m 765 
    ---
    Q 12+565 
    T 577 
    �[92m☑�[0m 577 
    ---
    Q 460+97 
    T 557 
    �[92m☑�[0m 557 
    ---
    Q 703+17 
    T 720 
    �[92m☑�[0m 720 
    ---
    
    --------------------------------------------------
    Iteration 97
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 40s - loss: 0.0276 - acc: 0.9914 - val_loss: 0.0154 - val_acc: 0.9949
    {'val_loss': [0.015444085666537285], 'val_acc': [0.99495], 'loss': [0.027583559429811107], 'acc': [0.99141666666666661]}
    Q 535+32 
    T 567 
    �[92m☑�[0m 567 
    ---
    Q 863+966
    T 1829
    �[92m☑�[0m 1829
    ---
    Q 699+374
    T 1073
    �[92m☑�[0m 1073
    ---
    Q 29+190 
    T 219 
    �[92m☑�[0m 219 
    ---
    Q 1+994  
    T 995 
    �[92m☑�[0m 995 
    ---
    Q 422+84 
    T 506 
    �[92m☑�[0m 506 
    ---
    Q 633+525
    T 1158
    �[92m☑�[0m 1158
    ---
    Q 604+376
    T 980 
    �[92m☑�[0m 980 
    ---
    Q 863+966
    T 1829
    �[92m☑�[0m 1829
    ---
    Q 874+530
    T 1404
    �[92m☑�[0m 1404
    ---
    
    --------------------------------------------------
    Iteration 98
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 40s - loss: 0.0018 - acc: 0.9999 - val_loss: 0.0067 - val_acc: 0.9979
    {'val_loss': [0.0066717502040788534], 'val_acc': [0.99790000000000001], 'loss': [0.001755557962672578], 'acc': [0.99988888888888894]}
    Q 561+65 
    T 626 
    �[92m☑�[0m 626 
    ---
    Q 979+81 
    T 1060
    �[92m☑�[0m 1060
    ---
    Q 361+78 
    T 439 
    �[92m☑�[0m 439 
    ---
    Q 3+104  
    T 107 
    �[92m☑�[0m 107 
    ---
    Q 104+974
    T 1078
    �[92m☑�[0m 1078
    ---
    Q 660+545
    T 1205
    �[92m☑�[0m 1205
    ---
    Q 72+243 
    T 315 
    �[92m☑�[0m 315 
    ---
    Q 99+617 
    T 716 
    �[92m☑�[0m 716 
    ---
    Q 567+55 
    T 622 
    �[92m☑�[0m 622 
    ---
    Q 290+286
    T 576 
    �[92m☑�[0m 576 
    ---
    
    --------------------------------------------------
    Iteration 99
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 8.7092e-04 - acc: 1.0000 - val_loss: 0.0058 - val_acc: 0.9984
    {'val_loss': [0.0057973034713417288], 'val_acc': [0.99839999999999995], 'loss': [0.00087091847145929937], 'acc': [0.99998888888888893]}
    Q 95+638 
    T 733 
    �[92m☑�[0m 733 
    ---
    Q 407+26 
    T 433 
    �[92m☑�[0m 433 
    ---
    Q 565+786
    T 1351
    �[92m☑�[0m 1351
    ---
    Q 505+227
    T 732 
    �[92m☑�[0m 732 
    ---
    Q 778+309
    T 1087
    �[92m☑�[0m 1087
    ---
    Q 929+138
    T 1067
    �[92m☑�[0m 1067
    ---
    Q 436+330
    T 766 
    �[92m☑�[0m 766 
    ---
    Q 356+55 
    T 411 
    �[92m☑�[0m 411 
    ---
    Q 276+195
    T 471 
    �[92m☑�[0m 471 
    ---
    Q 149+962
    T 1111
    �[92m☑�[0m 1111
    ---
    
    --------------------------------------------------
    Iteration 100
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 7.1227e-04 - acc: 1.0000 - val_loss: 0.0054 - val_acc: 0.9983
    {'val_loss': [0.0053955351205542687], 'val_acc': [0.99834999999999996], 'loss': [0.00071227497477084395], 'acc': [0.99999444444444441]}
    Q 31+437
    T 468
    ☑ 468
    ---
    Q 426+38
    T 464
    ☑ 464
    ---
    Q 941+842
    T 1783
    ☑ 1783
    ---
    Q 323+463
    T 786
    ☑ 786
    ---
    Q 646+72
    T 718
    ☑ 718
    ---
    Q 834+296
    T 1130
    ☑ 1130
    ---
    Q 688+214
    T 902
    ☒ 802
    ---
    Q 349+669
    T 1018
    ☑ 1018
    ---
    Q 232+67
    T 299
    ☑ 299
    ---
    Q 613+309
    T 922
    ☑ 922
    ---
    
    --------------------------------------------------
    Iteration 101
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 6.9362e-04 - acc: 1.0000 - val_loss: 0.0060 - val_acc: 0.9980
    {'val_loss': [0.0059655166847631339], 'val_acc': [0.99804999999999999], 'loss': [0.00069362363646634749], 'acc': [0.99998333333333334]}
    Q 472+332
    T 804
    ☑ 804
    ---
    Q 8+636
    T 644
    ☑ 644
    ---
    Q 258+138
    T 396
    ☒ 496
    ---
    Q 873+788
    T 1661
    ☒ 1671
    ---
    Q 238+452
    T 690
    ☑ 690
    ---
    Q 932+11
    T 943
    ☑ 943
    ---
    Q 2+74
    T 76
    ☑ 76
    ---
    Q 33+633
    T 666
    ☑ 666
    ---
    Q 971+81
    T 1052
    ☑ 1052
    ---
    Q 5+218
    T 223
    ☑ 223
    ---
    
    --------------------------------------------------
    Iteration 141
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0229 - acc: 0.9937 - val_loss: 0.1356 - val_acc: 0.9604
    {'val_loss': [0.13560998322889209], 'val_acc': [0.96040000000000003], 'loss': [0.022925767360006771], 'acc': [0.99366111115349665]}
    Q 49+30
    T 79
    ☑ 79
    ---
    Q 41+720
    T 761
    ☑ 761
    ---
    Q 854+27
    T 881
    ☑ 881
    ---
    Q 589+54
    T 643
    ☑ 643
    ---
    Q 840+2
    T 842
    ☒ 852
    ---
    Q 440+95
    T 535
    ☒ 545
    ---
    Q 7+864
    T 871
    ☑ 871
    ---
    Q 713+95
    T 808
    ☒ 898
    ---
    Q 587+29
    T 616
    ☑ 616
    ---
    Q 752+61
    T 813
    ☑ 813
    ---
    
    --------------------------------------------------
    Iteration 150
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 4.3214e-04 - acc: 1.0000 - val_loss: 0.0037 - val_acc: 0.9987
    {'val_loss': [0.0037493921197950838], 'val_acc': [0.99875000000000003], 'loss': [0.00043214037894374793], 'acc': [0.99998888888888893]}
    Q 704+36 
    T 740 
    �[92m☑�[0m 740 
    ---
    Q 429+82 
    T 511 
    �[92m☑�[0m 511 
    ---
    Q 341+630
    T 971 
    �[92m☑�[0m 971 
    ---
    Q 270+41 
    T 311 
    �[92m☑�[0m 311 
    ---
    Q 264+4  
    T 268 
    �[92m☑�[0m 268 
    ---
    Q 113+46 
    T 159 
    �[92m☑�[0m 159 
    ---
    Q 210+527
    T 737 
    �[92m☑�[0m 737 
    ---
    Q 92+673 
    T 765 
    �[92m☑�[0m 765 
    ---
    Q 180+895
    T 1075
    �[92m☑�[0m 1075
    ---
    Q 416+29 
    T 445 
    �[92m☑�[0m 445 
    ---
    
    --------------------------------------------------
    Iteration 151
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 3.4723e-04 - acc: 1.0000 - val_loss: 0.0034 - val_acc: 0.9991
    {'val_loss': [0.0034389279445633291], 'val_acc': [0.99909999999999999], 'loss': [0.00034723099155558479], 'acc': [1.0]}
    Q 774+32 
    T 806 
    �[92m☑�[0m 806 
    ---
    Q 107+799
    T 906 
    �[92m☑�[0m 906 
    ---
    Q 54+629 
    T 683 
    �[92m☑�[0m 683 
    ---
    Q 38+678 
    T 716 
    �[92m☑�[0m 716 
    ---
    Q 863+656
    T 1519
    �[92m☑�[0m 1519
    ---
    Q 580+829
    T 1409
    �[92m☑�[0m 1409
    ---
    Q 80+935 
    T 1015
    �[92m☑�[0m 1015
    ---
    Q 3+964  
    T 967 
    �[92m☑�[0m 967 
    ---
    Q 485+983
    T 1468
    �[92m☑�[0m 1468
    ---
    Q 69+943 
    T 1012
    �[92m☑�[0m 1012
    ---
    
    
    --------------------------------------------------
    Iteration 160
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 3.6144e-04 - acc: 1.0000 - val_loss: 0.0032 - val_acc: 0.9991
    {'val_loss': [0.0031691691763699056], 'val_acc': [0.99909999999999999], 'loss': [0.00036144207823866355], 'acc': [1.0]}
    Q 8+285  
    T 293 
    ☑ 293 
    ---
    Q 98+36  
    T 134 
    ☑ 134 
    ---
    Q 593+791
    T 1384
    ☑ 1384
    ---
    Q 987+28 
    T 1015
    ☑ 1015
    ---
    Q 6+845  
    T 851 
    ☑ 851 
    ---
    Q 239+18 
    T 257 
    ☑ 257 
    ---
    Q 607+2  
    T 609 
    ☑ 609 
    ---
    Q 13+194 
    T 207 
    ☑ 207 
    ---
    Q 929+75 
    T 1004
    ☑ 1004
    ---
    Q 356+2  
    T 358 
    ☑ 358 
    ---
    
    --------------------------------------------------
    Iteration 161
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 3.1419e-04 - acc: 1.0000 - val_loss: 0.0033 - val_acc: 0.9990
    {'val_loss': [0.0032845419609919191], 'val_acc': [0.999], 'loss': [0.00031418984433015189], 'acc': [1.0]}
    Q 20+316 
    T 336 
    ☑ 336 
    ---
    Q 567+289
    T 856 
    ☑ 856 
    ---
    Q 627+290
    T 917 
    ☑ 917 
    ---
    Q 23+182 
    T 205 
    ☑ 205 
    ---
    Q 40+655 
    T 695 
    ☑ 695 
    ---
    Q 60+632 
    T 692 
    ☑ 692 
    ---
    Q 297+45 
    T 342 
    ☑ 342 
    ---
    Q 37+229 
    T 266 
    ☑ 266 
    ---
    Q 456+982
    T 1438
    ☑ 1438
    ---
    Q 734+863
    T 1597
    ☑ 1597
    ---
    
    --------------------------------------------------
    Iteration 162
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 37s - loss: 2.7604e-04 - acc: 1.0000 - val_loss: 0.0032 - val_acc: 0.9991
    {'val_loss': [0.0031552912279963494], 'val_acc': [0.99909999999999999], 'loss': [0.00027604213720187546], 'acc': [1.0]}
    Q 0+852  
    T 852 
    ☑ 852 
    ---
    Q 90+20  
    T 110 
    ☑ 110 
    ---
    Q 82+19  
    T 101 
    ☑ 101 
    ---
    Q 72+104 
    T 176 
    ☑ 176 
    ---
    Q 71+832 
    T 903 
    ☑ 903 
    ---
    Q 7+599  
    T 606 
    ☑ 606 
    ---
    Q 723+453
    T 1176
    ☑ 1176
    ---
    Q 409+42 
    T 451 
    ☑ 451 
    ---
    Q 104+948
    T 1052
    ☑ 1052
    ---
    Q 364+69 
    T 433 
    ☑ 433 
    ---
    
    
    --------------------------------------------------
    Iteration 182
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 3.4842e-04 - acc: 1.0000 - val_loss: 0.0040 - val_acc: 0.9989
    {'val_loss': [0.0039902217447757718], 'val_acc': [0.99890000000000001], 'loss': [0.00034841958908364177], 'acc': [1.0]}
    Q 1+887  
    T 888 
    ☑ 888 
    ---
    Q 51+71  
    T 122 
    ☑ 122 
    ---
    Q 228+83 
    T 311 
    ☑ 311 
    ---
    Q 569+32 
    T 601 
    ☑ 601 
    ---
    Q 8+198  
    T 206 
    ☑ 206 
    ---
    Q 39+448 
    T 487 
    ☑ 487 
    ---
    Q 410+471
    T 881 
    ☑ 881 
    ---
    Q 22+436 
    T 458 
    ☑ 458 
    ---
    Q 462+996
    T 1458
    ☑ 1458
    ---
    Q 74+420 
    T 494 
    ☑ 494 
    ---
    
    --------------------------------------------------
    Iteration 183
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 2.9004e-04 - acc: 1.0000 - val_loss: 0.0040 - val_acc: 0.9988
    {'val_loss': [0.0039785686612129213], 'val_acc': [0.99880000000000002], 'loss': [0.00029003697105476424], 'acc': [1.0]}
    Q 654+936
    T 1590
    ☑ 1590
    ---
    Q 6+530  
    T 536 
    ☑ 536 
    ---
    Q 982+460
    T 1442
    ☑ 1442
    ---
    Q 286+407
    T 693 
    ☑ 693 
    ---
    Q 169+20 
    T 189 
    ☑ 189 
    ---
    Q 56+848 
    T 904 
    ☑ 904 
    ---
    Q 912+8  
    T 920 
    ☑ 920 
    ---
    Q 16+942 
    T 958 
    ☑ 958 
    ---
    Q 454+480
    T 934 
    ☑ 934 
    ---
    Q 8+853  
    T 861 
    ☑ 861 
    ---
    
    
    --------------------------------------------------
    Iteration 192
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 0.0148 - acc: 0.9955 - val_loss: 0.0072 - val_acc: 0.9978
    {'val_loss': [0.0072148059517145157], 'val_acc': [0.99780000000000002], 'loss': [0.014829264489975241], 'acc': [0.99554444444444445]}
    Q 3+39   
    T 42  
    ☑ 42  
    ---
    Q 13+458 
    T 471 
    ☑ 471 
    ---
    Q 18+210 
    T 228 
    ☑ 228 
    ---
    Q 71+872 
    T 943 
    ☑ 943 
    ---
    Q 166+500
    T 666 
    ☑ 666 
    ---
    Q 57+606 
    T 663 
    ☑ 663 
    ---
    Q 705+34 
    T 739 
    ☑ 739 
    ---
    Q 6+413  
    T 419 
    ☑ 419 
    ---
    Q 484+648
    T 1132
    ☑ 1132
    ---
    Q 734+2  
    T 736 
    ☑ 736 
    ---
    
    --------------------------------------------------
    Iteration 193
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 8.2297e-04 - acc: 1.0000 - val_loss: 0.0036 - val_acc: 0.9989
    {'val_loss': [0.0035778338901698591], 'val_acc': [0.99890000000000001], 'loss': [0.00082297382206759516], 'acc': [0.9999555555555556]}
    Q 314+9  
    T 323 
    ☑ 323 
    ---
    Q 320+106
    T 426 
    ☑ 426 
    ---
    Q 591+168
    T 759 
    ☑ 759 
    ---
    Q 100+4  
    T 104 
    ☑ 104 
    ---
    Q 37+471 
    T 508 
    ☑ 508 
    ---
    Q 463+382
    T 845 
    ☑ 845 
    ---
    Q 0+663  
    T 663 
    ☑ 663 
    ---
    Q 90+14  
    T 104 
    ☑ 104 
    ---
    Q 905+478
    T 1383
    ☑ 1383
    ---
    Q 627+7  
    T 634 
    ☑ 634 
    ---
    
    --------------------------------------------------
    Iteration 194
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 3.8677e-04 - acc: 1.0000 - val_loss: 0.0035 - val_acc: 0.9990
    {'val_loss': [0.0035221813589334486], 'val_acc': [0.99895], 'loss': [0.00038677227831859556], 'acc': [1.0]}
    Q 52+835 
    T 887 
    ☑ 887 
    ---
    Q 97+559 
    T 656 
    ☑ 656 
    ---
    Q 274+751
    T 1025
    ☑ 1025
    ---
    Q 63+994 
    T 1057
    ☑ 1057
    ---
    Q 564+94 
    T 658 
    ☑ 658 
    ---
    Q 2+987  
    T 989 
    ☑ 989 
    ---
    Q 0+796  
    T 796 
    ☑ 796 
    ---
    Q 85+722 
    T 807 
    ☑ 807 
    ---
    Q 9+874  
    T 883 
    ☑ 883 
    ---
    Q 616+879
    T 1495
    ☑ 1495
    ---
    
    --------------------------------------------------
    Iteration 195
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 38s - loss: 2.9780e-04 - acc: 1.0000 - val_loss: 0.0033 - val_acc: 0.9990
    {'val_loss': [0.0032784495733678342], 'val_acc': [0.999], 'loss': [0.00029779781326651574], 'acc': [1.0]}
    Q 183+960
    T 1143
    ☑ 1143
    ---
    Q 52+583 
    T 635 
    ☑ 635 
    ---
    Q 64+83  
    T 147 
    ☑ 147 
    ---
    Q 657+335
    T 992 
    ☑ 992 
    ---
    Q 366+32 
    T 398 
    ☑ 398 
    ---
    Q 716+6  
    T 722 
    ☑ 722 
    ---
    Q 6+968  
    T 974 
    ☑ 974 
    ---
    Q 88+559 
    T 647 
    ☑ 647 
    ---
    Q 876+38 
    T 914 
    ☑ 914 
    ---
    Q 847+4  
    T 851 
    ☑ 851 
    ---
    
    
    --------------------------------------------------
    Iteration 198
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 1.9145e-04 - acc: 1.0000 - val_loss: 0.0031 - val_acc: 0.9989
    {'val_loss': [0.003103120744228363], 'val_acc': [0.99890000000000001], 'loss': [0.00019145288352285408], 'acc': [1.0]}
    Q 1+191  
    T 192 
    ☑ 192 
    ---
    Q 106+242
    T 348 
    ☑ 348 
    ---
    Q 55+8   
    T 63  
    ☑ 63  
    ---
    Q 909+42 
    T 951 
    ☑ 951 
    ---
    Q 30+640 
    T 670 
    ☑ 670 
    ---
    Q 508+2  
    T 510 
    ☑ 510 
    ---
    Q 645+7  
    T 652 
    ☑ 652 
    ---
    Q 232+94 
    T 326 
    ☑ 326 
    ---
    Q 0+906  
    T 906 
    ☑ 906 
    ---
    Q 36+67  
    T 103 
    ☑ 103 
    ---
    
    --------------------------------------------------
    Iteration 199
    Train on 45000 samples, validate on 5000 samples
    Epoch 1/1
    45000/45000 [==============================] - 39s - loss: 1.7051e-04 - acc: 1.0000 - val_loss: 0.0032 - val_acc: 0.9990
    {'val_loss': [0.0031555439017713072], 'val_acc': [0.999], 'loss': [0.00017050956679094169], 'acc': [1.0]}
    Q 14+285 
    T 299 
    ☑ 299 
    ---
    Q 13+368 
    T 381 
    ☑ 381 
    ---
    Q 916+69 
    T 985 
    ☑ 985 
    ---
    Q 292+79 
    T 371 
    ☑ 371 
    ---
    Q 454+50 
    T 504 
    ☑ 504 
    ---
    Q 12+6   
    T 18  
    ☑ 18  
    ---
    Q 18+182 
    T 200 
    ☑ 200 
    ---
    Q 6+433  
    T 439 
    ☑ 439 
    ---
    Q 38+69  
    T 107 
    ☑ 107 
    ---
    Q 21+57  
    T 78  
    ☑ 78  
    ---
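The Q / T / ☑ lines above are produced by one-hot encoding each question string, running the model, and decoding the prediction back to characters. As a standalone sanity check of that encode/decode round trip, here is a NumPy-only sketch of the `CharacterTable` idea shown at the start of the article (this version is for illustration and omits the Keras model):

```python
import numpy as np

class CharacterTable:
    """One-hot encode/decode over a fixed character set."""
    def __init__(self, chars):
        self.chars = sorted(set(chars))
        self.char_indices = {c: i for i, c in enumerate(self.chars)}
        self.indices_char = {i: c for i, c in enumerate(self.chars)}

    def encode(self, C, num_rows):
        # One row per character position, zero-padded to num_rows.
        x = np.zeros((num_rows, len(self.chars)))
        for i, c in enumerate(C):
            x[i, self.char_indices[c]] = 1
        return x

    def decode(self, x, calc_argmax=True):
        # Take the most probable character at each position, then join.
        if calc_argmax:
            x = x.argmax(axis=-1)
        return ''.join(self.indices_char[i] for i in x)

ctable = CharacterTable('0123456789+ ')
encoded = ctable.encode('535+61 ', 7)   # MAXLEN = 3 + 1 + 3 = 7
print(encoded.shape)                    # (7, 12)
print(ctable.decode(encoded))           # '535+61 '
```

Because `decode` argmaxes over a probability vector per position, the same call works unchanged on the softmax output of the trained model, which is how the predicted answers in the log are turned back into strings like `1015`.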
    
    # Plot the whole training run: 50-iteration rolling mean, log scale
    ax = pd.DataFrame(
        {
            'val_loss': val_loss,
            'val_acc': val_acc,
            'loss': loss,
            'acc': acc,
        }
    ).rolling(50).mean()[50:].plot(title='Training loss & accuracy', logy=True)

    ax.set_xlabel("Epochs")
    ax.set_ylabel("Loss & Acc")
    
    Training curves

    My environment is Python 3 and Keras 2, running in a Jupyter notebook. If you would like the complete code, email 582711548@qq.com.


        Original link: https://www.haomeiwen.com/subject/smaksxtx.html