Minimum Spanning Tree Verification, Fast Priority Queues, and Massively Parallel Factoring (Thesis)

Report ID: TR-428-93
Author: Dixon, Brandon
Date: October 1993
Pages: 104
Download Formats: PostScript
Abstract:

We present algorithms for problems from three different areas of research on parallel computation.

The first problem that we consider is verifying minimum spanning trees. Given a weighted, undirected graph $G$ and a spanning tree $T$ of $G$, our algorithms determine whether $T$ is a minimum spanning tree. We present the first linear-time algorithm for this problem: if $G$ has $n$ vertices and $m$ edges, our algorithm runs in $O(m + n)$ time on a unit-cost random access machine (RAM) with $O(\log n)$ word size. Along with the spanning tree verification problem, we consider the spanning tree sensitivity analysis problem: determine, for each edge, the maximum possible change in its cost that does not destroy the minimality of the given spanning tree. We give two algorithms for this problem: the first runs in $O(m + n)$ expected time, and the second runs in optimal deterministic time, though the exact time bound is not known. We additionally present optimal parallel algorithms for the minimum spanning tree verification and sensitivity analysis problems. The parallel algorithms share top-level ideas with the sequential algorithms above, but their implementations require different techniques. The parallel verification algorithm runs in $O(\log n)$ time using $\Theta((m+n)/\log n)$ processors in the concurrent-read, exclusive-write (CREW) PRAM model. The parallel sensitivity analysis algorithms are the parallel counterparts of the sequential ones: we give an algorithm that runs in $O(\log n)$ expected time using $\Theta((m+n)/\log n)$ CREW processors, and a deterministic algorithm that runs in optimal but unknown time, also using $\Theta((m+n)/\log n)$ processors.

The second problem that we consider is allowing concurrent operations on a priority queue over a bounded universe. Given a bounded universe, say $[0 \ldots N]$, we give algorithms for performing insertions, deletions, and find-minimum operations on a set of elements from the universe. Each operation completes in $O(\log \log N)$ time. Our algorithms are distinctive in that the operations can be pipelined so that a new operation can always begin on the set within a constant amount of time. This allows concurrent operations on the set and increases the throughput of the data structure without increasing the latency of any single operation. We give an application of this data structure to the longest increasing subsequence problem, obtaining an optimal linear-time algorithm using $\Theta(\log \log N)$ processors.

The last problem we consider is factoring integers. Our contribution to this widely studied problem is to give efficient factoring algorithms for massively parallel single-instruction, multiple-data (SIMD) machines. The algorithms presented here have been implemented on a 16K-processor MasPar SIMD machine. We present both special-purpose factoring algorithms (where the running time depends strongly on the size of the smallest factor) and general-purpose methods (where the running time depends solely on the size of the integer to be factored). Additionally, we present fast algorithms for multiplying large integers modulo a fixed integer, a problem with many applications in computational number theory and cryptography.
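
As an informal illustration of the verification criterion only (this sketch is not the thesis's linear-time algorithm), the cycle property says that $T$ is a minimum spanning tree exactly when no non-tree edge is lighter than the heaviest edge on the tree path between its endpoints. A naive $O(mn)$ Python check, with edge-list conventions made up for this sketch, might look like this:

    from collections import defaultdict, deque

    def is_minimum_spanning_tree(n, edges, tree_edges):
        """Cycle-property check: T is an MST of G iff no non-tree edge is
        lighter than the heaviest edge on the tree path joining its endpoints.
        Edges are (u, v, w) triples over vertices 0 .. n-1.
        This naive version costs O(m * n); it only illustrates the criterion."""
        if len(tree_edges) != n - 1:
            return False                      # T cannot be a spanning tree
        adj = defaultdict(list)
        tree_keys = set()
        for u, v, w in tree_edges:
            adj[u].append((v, w))
            adj[v].append((u, w))
            tree_keys.add((min(u, v), max(u, v)))

        def max_edge_on_tree_path(s, t):
            # BFS over the tree; track the heaviest edge on the unique s-t path.
            heaviest = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                if u == t:
                    return heaviest[u]
                for v, w in adj[u]:
                    if v not in heaviest:
                        heaviest[v] = max(heaviest[u], w)
                        queue.append(v)
            return None                       # t unreachable: T is not spanning

        for u, v, w in edges:
            if (min(u, v), max(u, v)) in tree_keys:
                continue
            path_max = max_edge_on_tree_path(u, v)
            if path_max is None or w < path_max:
                return False                  # (u, v) could replace a heavier tree edge
        return True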
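
For the bounded-universe priority queue, one classical sequential structure achieving $O(\log \log N)$ insert, delete, and find-minimum is the van Emde Boas tree. The Python sketch below shows only that sequential structure; it does not attempt the pipelined, concurrent behavior described in the thesis, and the class name and interface are illustrative rather than the thesis's.

    class VEB:
        """Minimal van Emde Boas tree over the universe {0, ..., u-1}, u a power of two.
        insert / delete / find_min each take O(log log u) time."""

        def __init__(self, u):
            self.u = u
            self.min = None          # the minimum is stored lazily, outside the clusters
            self.max = None
            if u > 2:
                k = (u - 1).bit_length()                   # u = 2^k
                self.low_bits = k // 2
                self.lo_size = 1 << self.low_bits          # size of each cluster
                self.hi_size = 1 << (k - self.low_bits)    # number of clusters
                self.clusters = {}       # cluster index -> VEB(lo_size), created lazily
                self.summary = None      # VEB(hi_size) over non-empty cluster indices

        def _high(self, x): return x >> self.low_bits
        def _low(self, x):  return x & (self.lo_size - 1)
        def _index(self, h, l): return (h << self.low_bits) | l

        def find_min(self):
            return self.min

        def insert(self, x):
            if self.min is None:
                self.min = self.max = x
                return
            if x == self.min or x == self.max:
                return                           # already present
            if x < self.min:
                x, self.min = self.min, x        # new minimum; push old minimum down
            if x > self.max:
                self.max = x
            if self.u > 2:
                h, l = self._high(x), self._low(x)
                cluster = self.clusters.get(h)
                if cluster is None:
                    cluster = self.clusters[h] = VEB(self.lo_size)
                if cluster.min is None:          # first element of this cluster
                    if self.summary is None:
                        self.summary = VEB(self.hi_size)
                    self.summary.insert(h)
                cluster.insert(l)

        def delete(self, x):
            if self.min is None:
                return
            if self.min == self.max:             # at most one element stored
                if x == self.min:
                    self.min = self.max = None
                return
            if self.u == 2:                      # both 0 and 1 are present
                self.min = 1 if x == 0 else 0
                self.max = self.min
                return
            if x == self.min:                    # promote a new minimum out of the clusters
                h = self.summary.min
                x = self._index(h, self.clusters[h].min)
                self.min = x
            h, l = self._high(x), self._low(x)
            cluster = self.clusters.get(h)
            if cluster is None:
                return                           # x was not present
            cluster.delete(l)
            if cluster.min is None:              # cluster became empty
                del self.clusters[h]
                self.summary.delete(h)
                if x == self.max:
                    if self.summary.min is None:
                        self.max = self.min
                    else:
                        top = self.summary.max
                        self.max = self._index(top, self.clusters[top].max)
            elif x == self.max:
                self.max = self._index(h, cluster.max)

A short usage example over the universe $[0 \ldots 15]$:

    pq = VEB(16)
    for x in (7, 3, 12):
        pq.insert(x)
    assert pq.find_min() == 3
    pq.delete(3)
    assert pq.find_min() == 7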
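
As an example of a special-purpose method in the sense used above, whose running time is governed by the smallest prime factor $p$ (roughly $\sqrt{p}$ iterations in expectation), here is a sequential Python sketch of Pollard's rho algorithm. The thesis's SIMD implementations are not reproduced here; this particular method is shown only for illustration.

    import math
    import random

    def pollard_rho(n):
        """Return a non-trivial factor of a composite n.
        Expected work is roughly sqrt(p), where p is n's smallest prime factor."""
        if n % 2 == 0:
            return 2
        while True:
            c = random.randrange(1, n)           # random polynomial x^2 + c mod n
            f = lambda x: (x * x + c) % n
            x = y = random.randrange(2, n)
            d = 1
            while d == 1:                        # Floyd cycle detection
                x = f(x)
                y = f(f(y))
                d = math.gcd(abs(x - y), n)
            if d != n:                           # otherwise retry with a new c
                return d

    # Example: 8051 = 83 * 97
    d = pollard_rho(8051)
    assert 8051 % d == 0 and d not in (1, 8051)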
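
Multiplication of large integers modulo a fixed integer is commonly accelerated by reductions such as Montgomery's method; whether the thesis uses this particular technique is not stated in the abstract, so the sketch below is purely illustrative and assumes an odd modulus.

    def montgomery_setup(n):
        """Precompute constants for Montgomery reduction modulo an odd n > 1."""
        assert n % 2 == 1 and n > 1
        k = n.bit_length()
        R = 1 << k                        # R = 2^k > n, gcd(R, n) = 1
        n_prime = (-pow(n, -1, R)) % R    # n * n' == -1 (mod R)
        return R, k, n_prime

    def redc(T, n, R, k, n_prime):
        """Montgomery reduction: return T * R^{-1} mod n, assuming 0 <= T < R*n."""
        m = ((T & (R - 1)) * n_prime) & (R - 1)   # m = (T mod R) * n' mod R
        t = (T + m * n) >> k                      # T + m*n is divisible by R
        return t - n if t >= n else t

    def mont_mul(a_bar, b_bar, n, R, k, n_prime):
        """Multiply two residues held in Montgomery form."""
        return redc(a_bar * b_bar, n, R, k, n_prime)

    # Example with a small (hypothetical) modulus
    n = 101
    R, k, n_prime = montgomery_setup(n)
    a, b = 57, 88
    a_bar = (a * R) % n                            # convert to Montgomery form
    b_bar = (b * R) % n
    prod_bar = mont_mul(a_bar, b_bar, n, R, k, n_prime)
    assert redc(prod_bar, n, R, k, n_prime) == (a * b) % n

The conversion into and out of Montgomery form costs one reduction each, so the method pays off when many multiplications share the same fixed modulus, as in modular exponentiation.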