Transformer-Based Feature Learning for Algorithm Selection in Combinatorial Optimisation
Saved in:
| Title: | Transformer-Based Feature Learning for Algorithm Selection in Combinatorial Optimisation |
|---|---|
| Authors: | Pellegrino, Alessio; Akgün, Özgür; Dang, Nguyen; Kiziltan, Zeynep; Miguel, Ian |
| Other contributors: | de la Banda, Maria Garcia; EPSRC; University of St Andrews. Global Research Centre for Diverse Intelligences; University of St Andrews. Centre for Interdisciplinary Research in Computational Algebra; University of St Andrews. School of Computer Science; University of St Andrews. Centre for Research into Ecological & Environmental Modelling; University of St Andrews. Sir James Mackenzie Institute for Early Diagnosis |
| Publisher: | Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2025. |
| Publication year: | 2025 |
| Keywords: | MCC; NDAS; ddc:004; algorithm selection; machine learning; constraint modelling; feature extraction; transformer architecture; Software |
| Description: | Given a combinatorial optimisation problem, there are typically multiple ways of modelling it for presentation to an automated solver. Choosing the right combination of model and target solver can have a significant impact on the effectiveness of the solving process. The best combination of model and solver can also be instance-dependent: there may not exist a single combination that works best for all instances of the same problem. We consider the task of building machine learning models to automatically select the best combination for a problem instance. Critical to the learning process is to define instance features, which serve as input to the selection model. Our contribution is the automatic learning of instance features directly from the high-level representation of a problem instance using a transformer encoder. We evaluate the performance of our approach using the Essence modelling language via a case study of three problem classes. |
| Publication type: | Conference object |
| File description: | application/pdf |
| Language: | English |
| DOI: | 10.4230/lipics.cp.2025.31 |
| Access URLs: | https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.31 https://portal.findresearcher.sdu.dk/da/publications/ab8994c1-7cca-4b14-96e7-c824875f4317 https://doi.org/10.4230/LIPIcs.CP.2025.31 https://hdl.handle.net/10023/32828 |
| Rights: | CC BY |
| Document code: | edsair.dedup.wf.002..2a2c584619a1f4d5b102bacfc7cd330f |
| Database: | OpenAIRE |