Can I speed up distributed QP solving with deep learning?
Deep Distributed Optimization for Large-Scale Quadratic Programming
This paper introduces DeepDistributedQP, an approach to solving large-scale quadratic programming (QP) problems relevant to fields like machine learning and robotics by distributing the computational load. It combines deep learning, by "unfolding" optimization iterations into network layers, with a new distributed optimization algorithm called DistributedQP, which itself builds on the existing OSQP solver and the consensus ADMM framework.
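To make the OSQP/ADMM foundation concrete, here is a minimal, simplified ADMM iteration in the OSQP problem form (minimize ½xᵀPx + qᵀx subject to l ≤ Ax ≤ u). This is an illustrative sketch only, not the paper's DistributedQP algorithm or the real OSQP implementation (which adds relaxation, adaptive penalty updates, and sparse factorizations); the function name and parameter defaults are my own choices.

```python
import numpy as np

def admm_qp(P, q, A, l, u, rho=1.0, sigma=1e-6, iters=5000):
    """Simplified OSQP-style ADMM for: min 1/2 x'Px + q'x  s.t. l <= Ax <= u.
    Illustrative only: real OSQP adds over-relaxation, adaptive rho, etc."""
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    # System matrix for the x-update; factored (here: inverted) once up front.
    K_inv = np.linalg.inv(P + sigma * np.eye(n) + rho * A.T @ A)
    for _ in range(iters):
        # x-update: minimize the augmented Lagrangian in x
        x = K_inv @ (sigma * x - q + A.T @ (rho * z - y))
        # z-update: project A x + y/rho onto the box [l, u]
        z = np.clip(A @ x + y / rho, l, u)
        # dual update: accumulate the constraint residual
        y = y + rho * (A @ x - z)
    return x

# Small demo problem (the standard OSQP example): solution is x* = [0.3, 0.7]
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])
x_star = admm_qp(P, q, A, l, u)
```

The "unfolding" idea in the paper amounts to treating a fixed number of such iterations as network layers and learning the iteration parameters (e.g. the penalty ρ) rather than hand-tuning them.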
For LLM-based multi-agent systems, DeepDistributedQP offers a potential route to managing the computational demands of large-scale collaborative problem-solving. Because the policy is trained on small problems yet transfers to much larger ones, it could improve scalability and potentially communication efficiency in large multi-agent scenarios. Its reliance on QP also connects it to other areas where LLMs interact with optimization, such as optimal control and resource allocation.