Reliable, Efficient Nonconvex Learning Algorithms
Machine-learning algorithms based on nonconvex optimization techniques are dramatically expanding what we can learn from massive data sets. These algorithms have created new technological capabilities in health care, imaging, transportation, and information processing. Despite these successes, no coherent mathematical foundation currently explains why these algorithms work or which tasks they provably solve. Establishing such a foundation could guide efforts to improve their performance.
With this CAREER award, Damek Davis of Operations Research and Information Engineering is advancing the design, analysis, and deployment of rigorously justified nonconvex optimization algorithms, with the aim of establishing a mathematical foundation for their operation. Davis focuses on a class of algorithms that are uniquely scalable to modern high-dimensional statistical estimation and learning tasks. The project uses simple iterative methods that compute with data in their ambient form. The overarching goal is (1) to understand when these methods converge to local or global optima and (2) to provide efficiency estimates of their performance, measured in terms of both the data and the computational resources consumed.
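To make the algorithm class concrete, the sketch below is a hypothetical illustration of one such simple iterative method: a stochastic subgradient scheme applied to robust phase retrieval, a nonsmooth, nonconvex estimation task of the general kind the project studies. The problem setup, initialization, and step-size schedule are illustrative assumptions for this sketch, not the project's actual algorithms or guarantees.

```python
import numpy as np

# Illustrative (not the project's method): stochastic subgradient descent on
# the nonconvex, nonsmooth robust phase retrieval objective
#     f(x) = (1/m) * sum_i | <a_i, x>^2 - b_i |.
# The method touches one raw measurement (a_i, b_i) per step -- it "computes
# with data in its ambient form" rather than lifting or convexifying.

rng = np.random.default_rng(0)
d, m = 10, 200
x_true = rng.standard_normal(d)
A = rng.standard_normal((m, d))          # measurement vectors a_i (rows)
b = (A @ x_true) ** 2                    # noiseless measurements b_i

# Initialize near the solution (local convergence is the typical regime
# analyzed for such methods); the perturbation scale is an assumption.
x = x_true + 0.3 * rng.standard_normal(d)
loss0 = np.mean(np.abs((A @ x) ** 2 - b))

for t in range(2000):
    i = rng.integers(m)                  # sample one measurement index
    r = A[i] @ x
    # Subgradient of | r^2 - b_i | with respect to x: sign(r^2 - b_i) * 2r * a_i
    g = np.sign(r ** 2 - b[i]) * 2.0 * r * A[i]
    x -= 0.1 * (0.995 ** t) * g          # geometrically decaying step size

loss = np.mean(np.abs((A @ x) ** 2 - b))
```

The two quantities `loss0` and `loss` correspond to the efficiency question in the paragraph above: the analysis the project pursues asks how fast, and under what conditions, such an iteration drives the objective toward a local or global optimum.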
This research will create guaranteed procedures for training practical machine-learning systems used in government and industry, with the potential to produce predictive machine-learning models that are more reliable and robust while requiring less data and computation.