Note: The results presented on this page were obtained using the first prototype version. The current version provides significantly better results.
One distinctive feature of twinchers is their ability to overcome the compute-efficiency frontier: the heuristic observation that the error of a neural network decreases polynomially (as a power law) with increasing training resources but never reaches zero. This behaviour has been observed across a wide variety of problems (see the video from Welch Labs and Refs. [1-3]). Here we model this phenomenon on a simple problem and demonstrate how twinchers achieve zero error on idealized data with limited resources.
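For intuition, the following is a minimal sketch of the frontier itself, not of the experiment on this page: it generates synthetic test errors from an assumed power law in training compute, `E(C) = a * C**(-alpha)` (the constants `a` and `alpha` are illustrative, not measured values), fits the law on log-log axes, and shows that the fitted curve predicts a strictly positive error at any finite compute budget.

```python
# Minimal sketch of the compute-efficiency frontier (illustrative only).
# Assumption: test error follows a power law in training compute,
# E(C) = a * C**(-alpha), so it decreases with compute but never hits zero.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants; not taken from this page's experiments.
a, alpha = 2.0, 0.35
compute = np.logspace(15, 24, 30)  # training compute budgets (e.g. FLOPs)
error = a * compute**(-alpha) * np.exp(rng.normal(0.0, 0.05, compute.size))

# Fit the power law on log-log axes: log E = log a - alpha * log C.
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
print(f"fitted alpha ~ {-slope:.3f}, fitted a ~ {np.exp(intercept):.3f}")

# Extrapolation: even at an enormous budget the fitted frontier predicts
# a nonzero error -- the behaviour twinchers are claimed to overcome.
huge_budget = 1e30
predicted = np.exp(intercept) * huge_budget**slope
print(f"predicted error at C = {huge_budget:.0e}: {predicted:.2e}")
```

The log-log fit is the standard way such scaling curves are extracted: a straight line in log-log coordinates corresponds exactly to a power law, and its negative slope gives the decay exponent.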