GBOML: a structure-exploiting optimization modelling language in Python

Bibliographic Details
Published in: Optimization Methods & Software, Vol. 39, No. 1, pp. 227-256
Main authors: Miftari, Bardhyl; Berger, Mathias; Derval, Guillaume; Louveaux, Quentin; Ernst, Damien
Format: Journal Article
Language: English
Published: Abingdon: Taylor & Francis, 01.01.2024
ISSN: 1055-6788, 1029-4937
Description
Abstract: Mixed-Integer Linear Programs (MILPs) have many practical applications. Most modelling tools for MILPs fall into two broad categories: algebraic modelling languages allow practitioners to compactly encode models using syntax close to mathematical notation but usually lack support for special structures, while component-based tools provide predefined components that are easy to assemble but difficult to modify or extend. In this work, we present the inner workings of the Graph-Based Optimization Modelling Language (GBOML), an open-source modelling tool implemented in Python that combines the strengths of both worlds. GBOML natively supports special structures that can be encoded by a hierarchical hypergraph, offers syntax close to mathematical notation, and facilitates the modular construction and reuse of time-indexed models. We detail the design choices enabling these features and show that they simplify problem encoding, lead to faster instance generation times and sometimes faster solve times. We benchmark the times taken by GBOML, JuMP, Plasmo, Pyomo and AMPL to generate instances of a structured MILP. We find that GBOML outperforms Plasmo and Pyomo, is tied with JuMP, but is slower than AMPL. With parallel model generation, GBOML outperforms JuMP and closes the gap with AMPL. GBOML has the smallest memory footprint.
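
To illustrate the node/hyperedge structure the abstract refers to, the following is a minimal sketch of a GBOML input file, based on the block syntax used in GBOML's public documentation. The node names, parameter values, and data files (pv.csv, load.csv) are illustrative assumptions, not taken from the paper.

    // Illustrative sketch: a solar node and a demand node coupled
    // by a power-balance hyperedge. All names and values are
    // hypothetical, chosen only to show the block structure.
    #TIMEHORIZON
    T = 24;

    #NODE SOLAR_PV
    #PARAMETERS
    capex = 600;                           // hypothetical cost coefficient
    capacity_factor = import "pv.csv";     // hypothetical time series file
    #VARIABLES
    internal: capacity;
    external: power[T];
    #CONSTRAINTS
    capacity >= 0;
    power[t] >= 0;
    power[t] <= capacity_factor[t] * capacity;
    #OBJECTIVES
    min: capex * capacity;

    #NODE DEMAND
    #PARAMETERS
    load = import "load.csv";              // hypothetical time series file
    #VARIABLES
    external: consumption[T];
    #CONSTRAINTS
    consumption[t] == load[t];

    #HYPEREDGE POWER_BALANCE
    #CONSTRAINTS
    SOLAR_PV.power[t] == DEMAND.consumption[t];

In this sketch, each #NODE is a self-contained subproblem with its own parameters, variables, constraints, and objective; only variables declared external are visible to #HYPEREDGE blocks, which couple the nodes. This separation is what encodes the hierarchical hypergraph structure described in the abstract and enables the modular reuse and parallel instance generation the paper benchmarks.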
Scopus ID: 2-s2.0-85169916717
DOI: 10.1080/10556788.2023.2246169