A distributed cache architecture for quality-of-service routing in communication networks
Thesis Discipline: Computer Science
Degree Grantor: University of Canterbury
Degree Name: Doctor of Philosophy
The rapid advances in telecommunication and Internet technologies are the driving force behind the emergence of a new generation of network-oriented multimedia applications, such as Internet telephony and real-time video. To successfully deploy these applications, communication networks must be QoS-capable, so that network resources such as bandwidth can be reserved for the lifetime of an application. One of the most fundamental elements of a QoS-capable network is QoS routing. The computing load caused by frequent execution of QoS routing algorithms, especially in large networks, is a concern. In addition, QoS routing algorithms need up-to-date network topology and state information to compute QoS routes, which mandates the regular distribution of state and topology information across the network. The overhead traffic caused by the frequent update and redistribution of this information, especially in large networks, is also a concern. In this thesis, we propose a distributed cache architecture to reduce the route computing load caused by on-demand execution of QoS routing algorithms, assuming a bandwidth-based QoS model. The distributed cache architecture has been designed to scale easily to large networks. In addition, we show that such an architecture increases the robustness of QoS routing in the presence of inaccurate network state information caused by long network state update intervals. This means that the proposed distributed cache architecture can also reduce the overhead traffic caused by the frequent distribution of network state information, while still achieving good performance. To further exploit the advantages offered by the distributed cache architecture, we propose several novel techniques to improve its performance. Firstly, we introduce a distributed technique called cache snooping.
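The cache-first routing described above can be sketched as follows. This is a minimal illustrative sketch only, assuming a bandwidth-based QoS model; the names (`RouteCache`, `route_call`, `on_demand_algorithm`) are hypothetical and not the thesis's actual implementation.

```python
# Hypothetical sketch: a per-node route cache consulted before falling
# back to expensive on-demand QoS route computation. Cached entries
# record the bandwidth each path was believed to support when cached.

class RouteCache:
    def __init__(self):
        # destination -> list of (path, cached_bandwidth) entries
        self._entries = {}

    def insert(self, dest, path, bandwidth):
        self._entries.setdefault(dest, []).append((path, bandwidth))

    def lookup(self, dest, required_bw):
        """Return a cached path believed to satisfy the bandwidth
        request, or None (a miss triggers on-demand routing)."""
        for path, bw in self._entries.get(dest, []):
            if bw >= required_bw:
                return path
        return None


def route_call(cache, dest, required_bw, on_demand_algorithm):
    """Try the cache first; run the on-demand QoS routing algorithm
    only on a miss, caching any route it finds."""
    path = cache.lookup(dest, required_bw)
    if path is not None:
        return path, "cache"
    path = on_demand_algorithm(dest, required_bw)
    if path is not None:
        cache.insert(dest, path, required_bw)
    return path, "computed"
```

Only cache misses incur the route-computation cost, which is how the architecture reduces the computing load of frequent on-demand routing.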
The goal of cache snooping is to counteract the effects of changes in the network state, increasing the accuracy of the cached routes so that the distributed cache architecture can reduce the route computing load more effectively. Cache snooping selects and monitors those route segments that are most likely to be reused by arriving calls; monitoring is performed by sending snooping packets across the network links. The second proposed technique is called route freezing. While cache snooping increases the accuracy of cached routes in a statistical fashion, route freezing provides 100% accurate routes that are unaffected by changes in the network state. By manipulating the normal tear-down procedure performed when a call ends, route freezing creates frozen cached routes. The third proposed technique is called route borrowing. Normally, a cached route can be reused only at its terminal points, where the route starts and ends. Route borrowing allows long end-to-end cached routes to be partially reused from their intermediate points as well as their end points. The goal of route borrowing is to increase the likelihood that arriving calls reuse cached routes; fewer calls then have to be routed by on-demand computation, reducing the route computing load even further. We propose and evaluate realistic, practical solutions that can be deployed in real-life large networks. We use stochastic discrete-event simulation to evaluate the performance of the proposed distributed cache architecture and its associated techniques, considering realistic network topologies, routing algorithms, traffic models, and topology aggregation techniques.
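The idea behind route borrowing can be sketched as below: a call arriving at an intermediate node of a cached end-to-end route borrows the remaining sub-route to the destination. This is an illustrative sketch under the assumption that routes are stored as node lists; `borrow_route` is a hypothetical name, not the thesis's implementation.

```python
# Hypothetical sketch of route borrowing: a cached end-to-end route can
# be partially reused from an intermediate node, not only from its
# source, increasing the chance that an arriving call hits the cache.

def borrow_route(cached_path, entry_node):
    """If entry_node lies on the cached path, return the borrowed
    sub-route from that node to the path's destination; otherwise
    return None (no borrowing possible from this node)."""
    if entry_node in cached_path:
        i = cached_path.index(entry_node)
        return cached_path[i:]
    return None
```

For example, a route cached as A-B-C-D can be borrowed by a call arriving at C, yielding the sub-route C-D without any on-demand computation.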