Deep Learning Time Series with LSTM


Author: 曦宝 | Published 2018-12-25 15:41

References

https://github.com/jaungiers/LSTM-Neural-Network-for-Time-Series-Prediction
https://blog.csdn.net/a819825294/article/details/54376781
I spent quite a while on time-series research before this, but my grasp was shallow: I used traditional ARMA models and the results were very poor. After some time away on other topics, and with my husband's encouragement, I decided to give it a proper effort. So I am now studying LSTM. I won't repeat the theory here; instead, this post walks through getting a classic example from the web to run. Even just reproducing someone else's well-known code, I stumbled into plenty of pitfalls, so consider this both a tribute to the classic and a note on how to avoid those pitfalls.

(A reminder first: because of Python-version differences, the reference code needs three changes:

1. Replace xrange with range.

2. Change len(data)/prediction_len to int(len(data)/prediction_len).

3. Move the plot's savefig call before show, otherwise the saved image is blank.)
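The three fixes above can be sketched as follows (a minimal illustration; `data` and `prediction_len` are stand-ins for the example's variables, not the repo's exact code):

```python
# Illustrative stand-ins for the example's variables.
data = list(range(100))
prediction_len = 7

# 1) Python 3 removed xrange -- use range instead.
for i in range(len(data)):
    pass

# 2) Python 3's / returns a float; anything used as a count or index
#    must be wrapped in int() (len(data) // prediction_len also works).
n_chunks = int(len(data) / prediction_len)

# 3) Save the figure before showing it, otherwise the saved file
#    can come out blank, e.g.:
#       plt.savefig("results.png")
#       plt.show()

print(n_chunks)
```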

The first problem I ran into was: ImportError: cannot import name 'isna'. I wrote up the cause and the fix in a separate Jianshu post:

https://www.jianshu.com/p/073e5c02340d

The next problem was an error when building the model (the error screenshot is not preserved here). Searching around, I found others reporting that the cause was a Keras version that is too new:

https://blog.csdn.net/zyh2004883/article/details/84337872

So I downgraded Keras in Anaconda to 2.1.2, ran the script again, and it really did succeed.
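Before running the example it can help to confirm which versions are actually installed; the post above reports success with Keras 2.1.2. This is a small hedged helper (the function name is mine, not from the repo):

```python
import importlib

def report_version(pkg):
    """Return a package's version string, 'unknown' if it defines no
    __version__, or None when the package is not installed."""
    try:
        mod = importlib.import_module(pkg)
        return getattr(mod, "__version__", "unknown")
    except ImportError:
        return None

# Check the two libraries the example depends on.
for pkg in ("keras", "tensorflow"):
    print(pkg, report_version(pkg))
```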

Here is the run output.

D:\ProgramData\Anaconda3\envs\tensorflow\python.exe D:/pythonworkspace/深度学习时间序列LSTM/1example.py
Using TensorFlow backend.
> Loading data... 
data len: 4172
sequence len: 50
result len: 4121
result shape: (4121, 51)
[['1455.219971', '1399.420044', '1402.109985', '1403.449951', '1441.469971', '1457.599976', '1438.560059', '1432.25', '1449.680054', '1465.150024', '1455.140015', '1455.900024', '1445.569946', '1441.359985', '1401.530029', '1410.030029', '1404.089966', '1398.560059', '1360.160034', '1394.459961', '1409.280029', '1409.119995', '1424.969971', '1424.369995', '1424.23999', '1441.719971', '1411.709961', '1416.829956', '1387.119995', '1389.939941', '1402.050049', '1387.670044', '1388.26001', '1346.089966', '1352.170044', '1360.689941', '1353.430054', '1333.359985', '1348.050049', '1366.420044', '1379.189941', '1381.76001', '1409.170044', '1391.280029', '1355.619995', '1366.699951', '1401.689941', '1395.069946', '1383.619995', '1359.150024', '1392.140015']]
[[0.0, -0.03834466823710192, -0.03649619099406931, -0.03557539137153576, -0.009448743333663301, 0.001635495009297161, -0.011448380541775882, -0.01578453529895929, -0.003806927550748962, 0.006823747060848984, -5.494427068997165e-05, 0.0004673197272937468, -0.006631317046431495, -0.009524323659793943, -0.03689472593143783, -0.031053684597900588, -0.035135585010467096, -0.03893563387606924, -0.0653234142565241, -0.04175314468660485, -0.03156907059791858, -0.03167904297542101, -0.02078723533405935, -0.021199527641721727, -0.0212888646509648, -0.00927694800032397, -0.029899266686190917, -0.026380901695308046, -0.04679703230928234, -0.044859218057006656, -0.03653737789446543, -0.046419048904050575, -0.046013635281535015, -0.07499210234519249, -0.07081398623823587, -0.06495927205770868, -0.06994813088639229, -0.08373990766238593, -0.07364516989576142, -0.06102165223789391, -0.05224641739059799, -0.050480313948357725, -0.03164465023686791, -0.043938334598350504, -0.06844324431003834, -0.06082930537241782, -0.036784837390058, -0.04133397438097686, -0.04920216697603308, -0.06601747427502835, -0.04334736827220198]]
normalise_windows result shape: (4121, 51)
X_train shape: (3709, 50, 1)
y_train shape: (3709,)
X_test shape: (412, 50, 1)
y_test shape: (412,)
> Data Loaded. Compiling...
Compilation Time :  0.038895368576049805
Train on 3523 samples, validate on 186 samples
Epoch 1/1
2018-12-25 15:24:01.158574: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2018-12-25 15:24:01.161823: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-25 15:24:01.163144: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-25 15:24:01.165020: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-25 15:24:01.165745: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-25 15:24:01.168269: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-12-25 15:24:01.170652: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-12-25 15:24:01.171459: W c:\l\tensorflow_1501907206084\work\tensorflow-1.2.1\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.

 512/3523 [===>..........................] - ETA: 16s - loss: 0.0040
1024/3523 [=======>......................] - ETA: 11s - loss: 0.0046
1536/3523 [============>.................] - ETA: 8s - loss: 0.0038 
2048/3523 [================>.............] - ETA: 5s - loss: 0.0031
2560/3523 [====================>.........] - ETA: 3s - loss: 0.0026
3072/3523 [=========================>....] - ETA: 1s - loss: 0.0023
3523/3523 [==============================] - 14s 4ms/step - loss: 0.0021 - val_loss: 0.0012
multiple_predictions shape: (8, 50)
full_predictions shape: (412,)
predicted shape: (412, 1)
point_by_point_predictions shape: (412,)
Training duration (s) :  32.810912132263184

Process finished with exit code 0
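The shapes in the log come from the example's sliding-window and normalisation step. Below is a minimal sketch reconstructed from the log (function names are paraphrases, not the repo's exact API): each window holds 50 inputs plus 1 target, and every value is rescaled relative to the window's first price, n_i = p_i / p_0 - 1, which is why each normalised window starts at 0.0.

```python
def make_windows(data, seq_len):
    # Each window = seq_len inputs + 1 target, so its length is seq_len + 1.
    # With 4172 prices and seq_len = 50 this yields 4172 - 51 = 4121 windows,
    # matching "result shape: (4121, 51)" in the log.
    window = seq_len + 1
    return [data[i:i + window] for i in range(len(data) - window)]

def normalise_windows(windows):
    # n_i = p_i / p_0 - 1: every value relative to the window's first price.
    return [[p / w[0] - 1 for p in w] for w in windows]

# First few S&P closes from the log, rounded for illustration.
prices = [1455.22, 1399.42, 1402.11, 1403.45, 1441.47]
windows = make_windows(prices, 3)
normed = normalise_windows(windows)
print(normed[0][0])   # always 0.0: the first element is p_0 / p_0 - 1
print(normed[0][1])   # close to the log's -0.03834...
```

A 90/10 split of the 4121 normalised windows then gives the 3709 training and 412 test sequences reported above.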


Original post: https://www.haomeiwen.com/subject/bvuflqtx.html