## Matrix Multiplication and All Pairs Shortest Paths

##### Author

##### Date

2002

##### Permanent Link

http://hdl.handle.net/10092/14771

##### Degree Grantor

University of Canterbury

##### Degree Level

Doctoral

##### Degree Name

Other

The all pairs shortest path (APSP) problem is to compute shortest paths between all pairs of vertices of a directed graph with non-negative real numbers as edge costs. We focus on shortest distances between vertices, as shortest paths can be obtained with a slight increase of cost. Classically the APSP problem can be solved in cubic time of $O(n^3)$. The problem here is to achieve a sub-cubic time for a graph with small integer costs.

A directed graph is given by $G = (V, E)$, where $V = \{1, \dots, n\}$ is the set of vertices and $E$ is the set of edges. The cost of edge $(i, j) \in E$ is denoted by $d_{ij}$. The $(n, n)$-matrix $D$ is one whose $(i, j)$ element is $d_{ij}$. We assume that $d_{ij} \ge 0$ and $d_{ii} = 0$ for all $i, j$. If there is no edge from $i$ to $j$, we let $d_{ij} = \infty$. The cost, or distance, of a path is the sum of the costs of the edges in the path. The length of a path is the number of edges in the path. The shortest distance from vertex $i$ to vertex $j$ is the minimum cost over all paths from $i$ to $j$, denoted by $d^*_{ij}$. Let $D^* = \{d^*_{ij}\}$. We call $n$ the size of the matrices.

Let $A$ and $B$ be $(n, n)$-matrices. The three products are defined using the elements of $A$ and $B$ as follows: (1) the ordinary matrix product over a ring, $C = AB$; (2) the Boolean matrix product, $C = A \cdot B$; and (3) the distance matrix product, $C = A \times B$, where

$$(1)\;\; c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad (2)\;\; c_{ij} = \bigvee_{k=1}^{n} a_{ik} \wedge b_{kj}, \qquad (3)\;\; c_{ij} = \min_{1 \le k \le n} \{a_{ik} + b_{kj}\}.$$

The matrix $C$ is called a product in each case; the computational process is called multiplication, such as distance matrix multiplication. In those three cases, $k$ ranges over the entire set $\{1, \dots, n\}$. We define a partial matrix product of $A$ and $B$ by taking $k$ in a subset $I$ of $V$. In other words, a partial product is obtained by multiplying a vertically rectangular matrix $A(*, I)$, whose columns are extracted from $A$ corresponding to the set $I$, by a horizontally rectangular matrix $B(I, *)$, extracted from $B$ with rows corresponding to $I$.
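To make the distance matrix product concrete, here is a minimal NumPy sketch; the 4-vertex graph, its costs, and all variable names are invented for illustration, and the naive triple loop is the classical $O(n^3)$ method, not the sub-cubic algorithms the thesis develops. Squaring $D$ under the min-plus product $\lceil \log_2(n-1) \rceil$ times yields $D^*$, since that covers all paths of length up to $n-1$:

```python
import numpy as np

INF = float("inf")

def distance_product(A, B):
    """Distance (min-plus) product: c_ij = min_k (a_ik + b_kj).

    Accepts rectangular operands, so A(*, I) x B(I, *) gives a
    partial product restricted to check points k in I."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), INF)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

# Toy 4-vertex cost matrix D: d_ii = 0, INF marks a missing edge.
D = np.array([
    [0.0, 3.0, INF, 7.0],
    [8.0, 0.0, 2.0, INF],
    [5.0, INF, 0.0, 1.0],
    [2.0, INF, INF, 0.0],
])

# Shortest distances D*: repeated squaring under the min-plus product.
Dstar = D
for _ in range(2):          # ceil(log2(n-1)) = 2 for n = 4
    Dstar = distance_product(Dstar, Dstar)

# Partial product over the check-point set I (0-based indices here):
I = [0, 2]
partial = distance_product(D[:, I], D[I, :])
```

For instance, `Dstar[0, 3]` comes out to 6, the cost of the three-edge path through vertices 2 and 3 (1-based), rather than the direct edge of cost 7.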
Intuitively, $I$ is the set of check points $k$ on the way from $i$ to $j$. The best algorithm [3] computes (1) in $O(n^{\omega})$ time, where $\omega = 2.376$; we carry three decimal places. To compute (2), we can regard the Boolean values 0 and 1 in $A$ and $B$ as integers
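The standard reduction being set up here treats the Boolean entries as the integers 0 and 1: in the ordinary integer product, $c_{ij} > 0$ exactly when some $k$ has $a_{ik} = b_{kj} = 1$, so thresholding recovers the Boolean product. A minimal NumPy sketch (the matrices are invented for illustration):

```python
import numpy as np

# Boolean operands, with 0/1 regarded as integers.
A = np.array([[1, 0],
              [1, 1]])
B = np.array([[0, 1],
              [1, 0]])

C_int = A @ B                      # ordinary matrix product over the integers
C_bool = (C_int > 0).astype(int)   # c_ij = 1 iff a_ik = b_kj = 1 for some k
```

This is what lets fast $O(n^{\omega})$ integer matrix multiplication be applied to Boolean matrix products.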