Howard’s policy iteration algorithm is one of the most widely used algorithms for finding optimal policies for controlling
Markov Decision Processes
(MDPs). When applied to weighted directed graphs, which may be viewed as
deterministic MDPs (DMDPs), Howard’s algorithm can be used to find Minimum Mean-Cost Cycles (MMCCs). Experimental studies suggest that Howard’s algorithm works extremely well in this context. The theoretical complexity of Howard’s algorithm for finding MMCCs is a mystery: no polynomial time bound is known on its running time. Prior to this work, there were only linear lower bounds on the number of iterations performed by Howard’s algorithm. We provide the first weighted graphs on which Howard’s algorithm performs Ω(n²) iterations, where n is the number of vertices in the graph.
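To make the setting concrete, the following is a minimal sketch of Howard's policy iteration specialized to finding a minimum mean-cost cycle in a weighted directed graph. It follows the standard two-phase scheme (policy evaluation via cycle means and potentials, then policy improvement); the function name, variable names, and tie-breaking details are illustrative assumptions, not taken from the paper.

```python
def howard_min_mean_cycle(n, edges, eps=1e-9):
    """Sketch of Howard's policy iteration for the minimum mean-cost
    cycle of a directed graph with vertices 0..n-1 and weighted edges
    (u, v, w).  Assumes every vertex has at least one outgoing edge,
    so every policy leads into some cycle.  Returns the minimum
    cycle mean.  Illustrative sketch, not the paper's construction."""
    out = [[] for _ in range(n)]
    for u, v, w in edges:
        out[u].append((v, w))
    # A policy fixes one outgoing edge per vertex.
    policy = [out[u][0] for u in range(n)]

    while True:
        # --- Policy evaluation ---
        # Following the policy from any vertex reaches a unique cycle.
        # mu[v] is that cycle's mean cost; h[v] are potentials with
        # h[v] = w(v, policy(v)) - mu[v] + h(policy(v)).
        mu = [None] * n
        h = [0.0] * n
        for s in range(n):
            if mu[s] is not None:
                continue
            path, pos = [], {}
            u = s
            while mu[u] is None and u not in pos:
                pos[u] = len(path)
                path.append(u)
                u = policy[u][0]
            if mu[u] is None:            # closed a brand-new cycle at u
                cyc = path[pos[u]:]
                mu[u] = sum(policy[x][1] for x in cyc) / len(cyc)
                h[u] = 0.0               # one free potential per cycle
            for x in reversed(path):     # propagate back along the walk
                if mu[x] is None:
                    v, w = policy[x]
                    mu[x] = mu[v]
                    h[x] = w - mu[x] + h[v]

        # --- Policy improvement ---
        # Switch a vertex's edge if some alternative reaches a cycle
        # with a strictly smaller mean, or the same mean with a
        # strictly smaller reduced cost w + h[target].
        improved = False
        for u in range(n):
            bv, bw = policy[u]
            for v, w in out[u]:
                if (mu[v] < mu[bv] - eps
                        or (abs(mu[v] - mu[bv]) <= eps
                            and w + h[v] < bw + h[bv] - eps)):
                    bv, bw = v, w
                    improved = True
            policy[u] = (bv, bw)
        if not improved:
            return min(mu)
```

For example, on a three-vertex graph containing a 2-cycle of mean 1, a 2-cycle of mean 2, and a self-loop of cost 3, the routine returns 1.0. The lower bound of this paper concerns how many improvement rounds the outer loop can take before this test fails.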