DeepLens is an amazing playground for testing how emerging technologies such as IoT, edge computing, machine learning, and serverless computing come together to address powerful scenarios.
Configuration:
AWS IoT System
The following steps illustrate how AWS DeepLens works.
1. When turned on, the AWS DeepLens captures a video stream.
2. AWS DeepLens produces two output streams:
Device stream – the video stream is passed through with no processing.
Project stream – the results of the model's processing of video frames.
3. The Inference Lambda function receives unprocessed video frames.
4. The Inference Lambda function passes the unprocessed frames to the project's deep learning model where they are processed.
5. The Inference Lambda function receives the processed frames back from the model and then passes the processed frames on in the project stream.
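The loop inside such an Inference Lambda looks roughly like the following. This is a minimal sketch based on the AWS DeepLens sample projects, assuming the on-device awscam SDK and the /tmp/results.mjpeg FIFO that the samples use for the project stream; the model path, the 300x300 input size, and the 'ssd' model type are placeholders.

```python
# Minimal sketch of a DeepLens inference loop (patterned after the AWS sample projects).
# MODEL_PATH, the input size, and the 'ssd' model type are assumptions.
import cv2
import awscam  # DeepLens on-device SDK

MODEL_PATH = '/opt/awscam/artifacts/my_model.xml'   # placeholder: optimized model artifact
RESULTS_FIFO = '/tmp/results.mjpeg'                 # project-stream FIFO used by the samples

def infinite_infer_run():
    model = awscam.Model(MODEL_PATH, {'GPU': 1})    # load the model onto the device GPU
    with open(RESULTS_FIFO, 'wb') as fifo:          # blocks until a viewer opens the FIFO
        while True:
            ret, frame = awscam.getLastFrame()      # step 3: grab an unprocessed frame
            if not ret:
                continue
            resized = cv2.resize(frame, (300, 300)) # match the model's expected input size
            inference = model.doInference(resized)  # step 4: run the deep learning model
            detections = model.parseResult('ssd', inference)['ssd']
            # overlay confident detections on the frame (label ids may need mapping to names)
            for i, det in enumerate(d for d in detections if d['prob'] > 0.5):
                text = '{}: {:.2f}'.format(det['label'], det['prob'])
                cv2.putText(frame, text, (20, 40 + 40 * i),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
            # step 5: push the processed frame to the project stream
            fifo.write(cv2.imencode('.jpg', frame)[1].tobytes())
```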
Issues encountered during Device Registration:
After connecting to the device's Wi-Fi, the deeplens.config page cannot be resolved; use arp -a to find the DeepLens IP address and open the Setup page from that IP directly.
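For reference, a small helper (my own sketch, not part of any AWS tooling) that shells out to arp -a and lists the IPs seen on the local network; the Setup page can then be opened at one of them:

```python
# Sketch: list ARP-cache IPs to help locate the DeepLens device on the local network.
# Assumes a Unix-style `arp -a` output such as "? (192.168.0.1) at aa:bb:cc:dd:ee:ff ...".
import re
import subprocess

output = subprocess.check_output(['arp', '-a'], text=True)
for line in output.splitlines():
    match = re.search(r'\((\d{1,3}(?:\.\d{1,3}){3})\)', line)
    if match:
        print(match.group(1), '-', line.strip())
```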
Importing a Customized Model
The previously trained Fire Detection model uses ResNet, which DeepLens does not support for now, so the current attempt is to bypass the AWS Greengrass + Lambda workflow: read the video stream directly (Device/Live Stream), run a Python program on the device, and export the results to the Project Stream.
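A rough sketch of that bypass, assuming OpenCV can read the on-device camera output directly; both stream paths below are assumptions based on the AWS docs for viewing the output streams on the device and may differ per firmware version.

```python
# Sketch: read the raw device stream with OpenCV, process each frame with our own
# Python code, and push annotated frames to the project-stream FIFO.
import cv2

DEVICE_STREAM = '/opt/awscam/out/ch2_out.mjpeg'  # assumed raw camera (device) stream path
PROJECT_FIFO = '/tmp/results.mjpeg'              # assumed project-stream FIFO path

def run(process_frame):
    cap = cv2.VideoCapture(DEVICE_STREAM)
    with open(PROJECT_FIFO, 'wb') as fifo:
        while True:
            ok, frame = cap.read()
            if not ok:
                continue
            annotated = process_frame(frame)     # e.g. run the fire-detection model here
            fifo.write(cv2.imencode('.jpg', annotated)[1].tobytes())
```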
Experiment result: not feasible for now.
1. The previously trained model uses TensorFlow 1.9.0; unless DeepLens runs the same version, the frozen_inference_graph cannot be imported and fails with a "No op named NonMaxSuppressionV3" error (see the loading sketch after this list).
2. Installing TensorFlow 1.9.0 on DeepLens with pip makes "import tensorflow as tf" fail immediately with "Illegal instruction (core dumped)".
3. Building TensorFlow from source with Bazel (it may take hours to compile... https://blog.csdn.net/MyArrow/article/details/79923600) succeeded, and the build runs.
4. tensorflow/core now warns that an allocation exceeds 10% of system memory; it is unclear whether this is a hardware or a program issue. To be continued...
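For reference, the standard TF 1.x way of loading the frozen graph; this is where the NonMaxSuppressionV3 error shows up when the installed TensorFlow predates the version that exported the graph. The file name is the usual Object Detection API export name.

```python
# Sketch: load frozen_inference_graph.pb with TensorFlow 1.x.
# tf.import_graph_def raises "No op named NonMaxSuppressionV3" if the installed
# TensorFlow is older than the version used to export the graph.
import tensorflow as tf

print(tf.__version__)  # should match the training version (1.9.0)

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')
```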
If this does not work out, the entire training will have to be redone with SageMaker + S3.
import matplotlib.pyplot as plt fails with an error asking to install python-tk.
The likely cause is that the server has no TkAgg backend: call matplotlib.use('agg') before importing pyplot, and add backend: agg to the matplotlibrc file.
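A minimal sketch of the workaround: select the Agg backend before pyplot is imported, so no Tk display is needed.

```python
# Sketch: force the non-interactive Agg backend so matplotlib works without python-tk.
import matplotlib
matplotlib.use('Agg')            # must run before importing pyplot
import matplotlib.pyplot as plt

plt.plot([0, 1], [0, 1])
plt.savefig('test.png')          # write to a file instead of opening a window
```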
Useful Links: