TY - JOUR
T1 - Lightweight Configuration Adaptation With Multi-Teacher Reinforcement Learning for Live Video Analytics
AU - Zhang, Yuanhong
AU - Zhang, Weizhan
AU - Yuan, Muyao
AU - Xu, Liang
AU - Yan, Caixia
AU - Gong, Tieliang
AU - Du, Haipeng
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The proliferation of video data and advancements in Deep Neural Networks (DNNs) have greatly boosted live video analytics, driven by the growing video capture capabilities of mobile devices. However, resource limitations necessitate the transmission of endpoint-collected videos to servers for inference. To meet real-time requirements and ensure accurate inference, it is essential to adjust video configurations at the endpoint. Traditional methods rely on deterministic strategies, posing difficulties in adapting to dynamic networks and video content. Meanwhile, emerging learning-based schemes suffer from trial-and-error exploration mechanisms, resulting in a concerning long-tail effect on upload latency. In this paper, we propose a novel lightweight and robust configuration adaptation policy (LCA), which fuses heuristic and RL-based agents using multi-teacher knowledge distillation (MKD) theory. First, we propose a content-sensitive and bandwidth-adaptive RL agent and introduce a Lyapunov-based optimization agent for ensuring latency robustness. To leverage both agents' strengths, we design a feature-guided multi-teacher distillation network to transfer their advantages to the student. The experimental results across two vision tasks (pose estimation and semantic segmentation) demonstrate that LCA significantly reduces transmission latency compared to prior work (average reduction of 47.11%-89.55%, 95-percentile reduction of 27.63%-88.78%) and computational overhead while maintaining comparable inference accuracy.
AB - The proliferation of video data and advancements in Deep Neural Networks (DNNs) have greatly boosted live video analytics, driven by the growing video capture capabilities of mobile devices. However, resource limitations necessitate the transmission of endpoint-collected videos to servers for inference. To meet real-time requirements and ensure accurate inference, it is essential to adjust video configurations at the endpoint. Traditional methods rely on deterministic strategies, posing difficulties in adapting to dynamic networks and video content. Meanwhile, emerging learning-based schemes suffer from trial-and-error exploration mechanisms, resulting in a concerning long-tail effect on upload latency. In this paper, we propose a novel lightweight and robust configuration adaptation policy (LCA), which fuses heuristic and RL-based agents using multi-teacher knowledge distillation (MKD) theory. First, we propose a content-sensitive and bandwidth-adaptive RL agent and introduce a Lyapunov-based optimization agent for ensuring latency robustness. To leverage both agents' strengths, we design a feature-guided multi-teacher distillation network to transfer their advantages to the student. The experimental results across two vision tasks (pose estimation and semantic segmentation) demonstrate that LCA significantly reduces transmission latency compared to prior work (average reduction of 47.11%-89.55%, 95-percentile reduction of 27.63%-88.78%) and computational overhead while maintaining comparable inference accuracy.
KW - Mobile and edge intelligence
KW - configuration adaptation
KW - machine-centric video streaming
KW - multi-teacher knowledge distillation
UR - https://www.scopus.com/pages/publications/105002302613
U2 - 10.1109/TMC.2025.3526359
DO - 10.1109/TMC.2025.3526359
M3 - Article
AN - SCOPUS:105002302613
SN - 1536-1233
VL - 24
SP - 4466
EP - 4480
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 5
ER -