Many dynamic graph algorithms have an amortized update time, rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where each individual operation has to be executed quickly. For this reason, many recent randomized results aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which gives a bound on the time for each individual operation that holds with high probability.
In this article, we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and the dynamic maximal matching problem.
(1) For dynamic spanner, the only known o(n) worst-case bounds were O(n^{3/4}) high-probability worst-case update time for maintaining a 3-spanner and O(n^{5/9}) for maintaining a 5-spanner. We give an O(1)^k log^3(n) high-probability worst-case time bound for maintaining a (2k-1)-spanner, which yields the first worst-case polylog update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k-1 and Õ(n^{1+1/k}) edges.)
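To make the stretch/size tradeoff concrete (an illustration only, not the paper's dynamic algorithm), the classical static greedy construction achieves exactly this guarantee: an edge is added to the spanner H only if its endpoints are currently more than 2k-1 apart in H, so every omitted edge is stretched by at most 2k-1, and a girth argument bounds the size by O(n^{1+1/k}). A minimal sketch, assuming networkx and unweighted graphs; all identifiers below are ours:

```python
# Illustration only: a static greedy (2k-1)-spanner, not the dynamic algorithm
# from the paper. An edge is kept only if its endpoints are currently more than
# 2k-1 apart in H, so every skipped edge already has a path of length <= 2k-1.
import networkx as nx

def greedy_spanner(G, k):
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        try:
            d = nx.shortest_path_length(H, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > 2 * k - 1:
            H.add_edge(u, v)
    return H

if __name__ == "__main__":
    G = nx.gnm_random_graph(200, 4000, seed=1)
    for k in (2, 3):
        H = greedy_spanner(G, k)
        print(f"k={k}: {G.number_of_edges()} edges -> {H.number_of_edges()} edges")
```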
(2) For dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with an o(n) worst-case time bound was known, and we present an algorithm with O(log^5(n)) high-probability worst-case time; similar worst-case bounds existed only for maintaining a matching that was (2+ϵ)-approximate, and hence not maximal.
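The equivalence used above is the standard fact that any maximal matching is a 2-approximate maximum matching: every maximum-matching edge shares an endpoint with some maximal-matching edge, and each maximal-matching edge can block at most two of them. A small sanity check of this bound (illustration only, assuming networkx is available; all identifiers below are ours):

```python
# Sanity check (illustration only): a greedy maximal matching is always at
# least half the size of a maximum matching, i.e., a 2-approximation.
import networkx as nx

def greedy_maximal_matching(G):
    """Scan edges once, keeping an edge iff neither endpoint is matched yet."""
    matched, M = set(), []
    for u, v in G.edges():
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

if __name__ == "__main__":
    for seed in range(5):
        G = nx.gnm_random_graph(60, 240, seed=seed)
        maximal = greedy_maximal_matching(G)
        maximum = nx.max_weight_matching(G, maxcardinality=True)
        assert len(maximum) <= 2 * len(maximal)
        print(f"maximal: {len(maximal)}, maximum: {len(maximum)}")
```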
Our results are achieved using a new approach for converting amortized guarantees to worst-case ones for randomized data structures by going through a third type of guarantee, which is a middle ground between the two above: An algorithm is said to have worst-case expected update time α if for every update σ, the expected time to process σ is at most α. Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: A worst-case expected update time of O(1) still allows for the possibility that one in every f(n) updates requires Θ(f(n)) time to process, for arbitrarily high f(n). In this article, we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: The query time remains the same, while the update time increases by a factor of O(log^2(n)).
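To illustrate the amplification that such a reduction can exploit (a toy simulation of the general idea, not the paper's construction, which must also keep slow copies consistent and accounts for the full O(log^2(n)) overhead): by Markov's inequality, a structure with worst-case expected update time α exceeds a 2α budget on any fixed update with probability at most 1/2, so among Θ(log n) independent copies, with high probability at least one copy stays within budget on every update. All parameters and identifiers below are ours:

```python
# Toy simulation (not the paper's reduction): per-update cost has expectation
# O(1) but spikes to f with probability 1/f. A single fixed copy sees Theta(f)
# spikes, while the fastest of Theta(log n) independent copies stays small on
# every update with high probability (Markov's inequality + independence).
import math
import random

random.seed(0)
n = 10**6                        # hypothetical instance size
f = 1000                         # spike size f(n)
num_updates = 10**5
copies = 2 * int(math.log2(n))   # Theta(log n) independent copies

def update_cost():
    """Cost f with probability 1/f, cost 1 otherwise; expected cost is below 2."""
    return f if random.random() < 1.0 / f else 1

worst_single, worst_fastest = 0, 0
for _ in range(num_updates):
    costs = [update_cost() for _ in range(copies)]
    worst_single = max(worst_single, costs[0])         # one fixed copy alone
    worst_fastest = max(worst_fastest, min(costs))     # best copy on this update

print("worst update cost, single copy:  ", worst_single)   # typically f
print("worst update cost, fastest copy: ", worst_fastest)  # typically 1
```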
Thus, we achieve our results in two steps: (1) First, we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times. (2) Then, we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bound. All our algorithms are Las-Vegas-type algorithms.