TY - JOUR
T1 - NB-Cache
T2 - Non-Blocking In-Network Caching for High-Performance Content Routers
AU - Pan, Tian
AU - Lin, Xingchen
AU - Song, Enge
AU - Xu, Cheng
AU - Zhang, Jiao
AU - Li, Hao
AU - Lv, Jianhui
AU - Huang, Tao
AU - Liu, Bin
AU - Zhang, Beichuan
N1 - Publisher Copyright:
© 1993-2012 IEEE.
PY - 2021/10/1
Y1 - 2021/10/1
N2 - Information-Centric Networking (ICN) provides scalable and efficient content distribution at the Internet scale due to in-network caching and native multicast. To support these features, a content router needs high performance at its data plane, which consists of three forwarding steps: checking the Content Store (CS), then the Pending Interest Table (PIT), and finally the Forwarding Information Base (FIB). In this work, we build an analytical model of the router and identify that CS is the actual bottleneck. Then, we propose a novel mechanism called 'NB-Cache' to address CS's performance issue from a network-wide point of view. In NB-Cache, when packets arrive at a router whose CS is fully loaded, instead of being blocked and waiting for the CS, these packets are forwarded to the next-hop router, whose CS may not be fully loaded. This approach essentially utilizes the Content Stores of all the routers along the forwarding path in parallel rather than checking each CS sequentially. NB-Cache follows a design pattern of on-demand load balancing and can be formulated as a non-trivial N-queue bypass model. We use a Markov chain to establish its theoretical base and devise an algorithm for automated transition rate matrix generation. Experiments show significant improvement of data plane performance: a 70% reduction in round-trip time (RTT) and a 130% increase in throughput. NB-Cache decouples fast packet forwarding from slower content retrieval, thus substantially reducing CS's heavy dependency on fast but expensive memory.
AB - Information-Centric Networking (ICN) provides scalable and efficient content distribution at the Internet scale due to in-network caching and native multicast. To support these features, a content router needs high performance at its data plane, which consists of three forwarding steps: checking the Content Store (CS), then the Pending Interest Table (PIT), and finally the Forwarding Information Base (FIB). In this work, we build an analytical model of the router and identify that CS is the actual bottleneck. Then, we propose a novel mechanism called 'NB-Cache' to address CS's performance issue from a network-wide point of view. In NB-Cache, when packets arrive at a router whose CS is fully loaded, instead of being blocked and waiting for the CS, these packets are forwarded to the next-hop router, whose CS may not be fully loaded. This approach essentially utilizes the Content Stores of all the routers along the forwarding path in parallel rather than checking each CS sequentially. NB-Cache follows a design pattern of on-demand load balancing and can be formulated as a non-trivial N-queue bypass model. We use a Markov chain to establish its theoretical base and devise an algorithm for automated transition rate matrix generation. Experiments show significant improvement of data plane performance: a 70% reduction in round-trip time (RTT) and a 130% increase in throughput. NB-Cache decouples fast packet forwarding from slower content retrieval, thus substantially reducing CS's heavy dependency on fast but expensive memory.
KW - Bloom filter
KW - ICN
KW - N-queue bypass model
KW - bottleneck bypassing
KW - content router
KW - non-blocking I/O
UR - https://www.scopus.com/pages/publications/85107364400
U2 - 10.1109/TNET.2021.3083599
DO - 10.1109/TNET.2021.3083599
M3 - Article
AN - SCOPUS:85107364400
SN - 1063-6692
VL - 29
SP - 1976
EP - 1989
JO - IEEE/ACM Transactions on Networking
JF - IEEE/ACM Transactions on Networking
IS - 5
ER -