Examples with Decreasing Largest Inscribed Ball for Deterministic Rescaling Algorithms

Recently, Pena and Soheili presented a deterministic rescaling perceptron algorithm and proved that it solves a feasible perceptron problem in $O(m^2n^2\log(\rho^{-1}))$ perceptron update steps, where $\rho$ is the radius of the largest inscribed ball. The original stochastic rescaling perceptron algorithm of Dunagan and Vempala is based on a systematic increase of $\rho$, whereas the proof of Pena and Soheili is based on increasing the volume of a so-called cap. In this note we present a perceptron example showing that, with this deterministic rescaling method, $\rho$ may decrease after a rescaling step. Furthermore, inspired by our previous work on the duality relationship between the perceptron and the von Neumann algorithms, we propose a deterministic rescaling von Neumann algorithm, which is a direct transformation of the deterministic rescaling perceptron algorithm. Although the complexity of this algorithm has not yet been established, we construct a von Neumann example showing that $\rho$ does not increase monotonically for the deterministic rescaling von Neumann algorithm either; this von Neumann example also serves as the foundation of the perceptron example. It further shows that a complexity proof for the rescaling von Neumann algorithm cannot rely on a monotonic expansion of $\rho$. Finally, we present computational results for the deterministic rescaling von Neumann algorithm: on the test problems, the rescaling algorithm outperforms the original von Neumann algorithm.
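
For context, the abstract refers to the perceptron and von Neumann update steps around which the rescaling variants are built. Below is a minimal Python sketch of those two classical (non-rescaled) updates; the function names, starting points, and stopping tolerances are our own illustrative choices, and the deterministic rescaling steps of Pena and Soheili are not implemented here.

import numpy as np

def perceptron(A, max_iter=100000):
    # Seek y with A.T @ y > 0, assuming the columns of A are unit vectors
    # and the problem is feasible (inscribed-ball radius rho > 0).
    y = A.mean(axis=1)                    # start from the average of the columns
    for _ in range(max_iter):
        margins = A.T @ y
        j = int(np.argmin(margins))
        if margins[j] > 0:                # every inner product positive: y solves A.T y > 0
            return y
        y = y + A[:, j]                   # classical perceptron update step
    return None                           # iteration budget exhausted

def von_neumann(A, eps=1e-6, max_iter=100000):
    # Seek x >= 0 with sum(x) = 1 and ||A @ x|| <= eps, i.e. a point of the
    # convex hull of the (unit) columns of A within eps of the origin.
    m = A.shape[1]
    x = np.full(m, 1.0 / m)
    b = A @ x
    for _ in range(max_iter):
        if np.linalg.norm(b) <= eps:
            return x
        j = int(np.argmin(A.T @ b))       # column with the smallest inner product with b
        a = A[:, j]
        d = b - a
        if d @ d < 1e-16:                 # degenerate step; stop
            return None
        lam = float(np.clip((a @ (a - b)) / (d @ d), 0.0, 1.0))
        b = a + lam * d                   # new iterate: lam * b + (1 - lam) * a, closest to the origin
        x *= lam
        x[j] += 1.0 - lam
    return None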
