TY - GEN
T1 - Convolutional Neural Networks on Apache Storm
AU - Zhang, Wenyu
AU - Lu, Yanfeng
AU - Li, Yi
AU - Qiao, Hong
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - The performance of deep learning largely depends on the size of the data. One data source is real-time streaming data, produced by mobile devices, sensors, social media, etc. Streaming data is high-speed and large-scale and requires real-time processing; however, current mainstream frameworks are mainly designed for off-line data. To address this, we propose a deep learning framework based on Apache Storm, a fast and fault-tolerant distributed stream processing framework. Our framework implements the distributed training of CNNs and, unlike MMLSpark or TensorFlowOnSpark, is a pure Java implementation. The design of message passing and synchronization is also suitable for other MapReduce-family distributed computing platforms. To validate our work, the MNIST and CIFAR-10 datasets are used for evaluation and comparison with similar architectures. The results show that, in a resource-limited environment, our framework achieves about a 10-times speedup.
AB - The performance of deep learning largely depends on the size of the data. One data source is real-time streaming data, produced by mobile devices, sensors, social media, etc. Streaming data is high-speed and large-scale and requires real-time processing; however, current mainstream frameworks are mainly designed for off-line data. To address this, we propose a deep learning framework based on Apache Storm, a fast and fault-tolerant distributed stream processing framework. Our framework implements the distributed training of CNNs and, unlike MMLSpark or TensorFlowOnSpark, is a pure Java implementation. The design of message passing and synchronization is also suitable for other MapReduce-family distributed computing platforms. To validate our work, the MNIST and CIFAR-10 datasets are used for evaluation and comparison with similar architectures. The results show that, in a resource-limited environment, our framework achieves about a 10-times speedup.
KW - Computer Vision
KW - Distributed Systems
KW - Neural Networks
KW - Speedup
KW - Streaming Data
UR - https://www.scopus.com/pages/publications/85080037478
U2 - 10.1109/CAC48633.2019.8996300
DO - 10.1109/CAC48633.2019.8996300
M3 - Conference contribution
AN - SCOPUS:85080037478
T3 - Proceedings - 2019 Chinese Automation Congress, CAC 2019
SP - 2399
EP - 2404
BT - Proceedings - 2019 Chinese Automation Congress, CAC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 Chinese Automation Congress, CAC 2019
Y2 - 22 November 2019 through 24 November 2019
ER -