The Procedural Content Generation Benchmark: An Open-Source Testbed for Generative Challenges in Games
| Title: | The Procedural Content Generation Benchmark: An Open-Source Testbed for Generative Challenges in Games |
|---|---|
| Authors: | Khalifa, Ahmed; Gallotta, Roberto; Barthet, Matthew; Liapis, Antonios; Togelius, Julian; Yannakakis, Georgios N. |
| Conference: | FDG '25: International Conference on the Foundations of Digital Games |
| Publisher Information: | ACM |
| Publication Year: | 2025 |
| Collection: | University of Malta: OAR@UM / L-Università ta' Malta |
| Subject Terms: | Application software, Computational intelligence, User interfaces (Computer systems), Video games -- Programming, Artificial intelligence, Benchmarking (Management) |
| Description: | This paper introduces the Procedural Content Generation Benchmark for evaluating generative algorithms on different game content creation tasks. The benchmark comes with 12 game-related problems, each with multiple variants. Problems range from creating levels of different kinds to creating rule sets for simple arcade games. Each problem has its own content representation, control parameters, and evaluation metrics for quality, diversity, and controllability. This benchmark is intended as a first step towards a standardized way of comparing generative algorithms. We use the benchmark to score three baseline algorithms: a random generator, an evolution strategy, and a genetic algorithm. Results show that some problems are easier to solve than others, and reveal the impact the chosen objective has on the quality, diversity, and controllability of the generated artifacts. (Peer-reviewed.) |
| Document Type: | conference object |
| Language: | English |
| Relation: | https://www.um.edu.mt/library/oar/handle/123456789/135741 |
| DOI: | 10.1145/3723498.3723794 |
| Availability: | https://www.um.edu.mt/library/oar/handle/123456789/135741 https://doi.org/10.1145/3723498.3723794 |
| Rights: | info:eu-repo/semantics/openAccess ; The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. |
| Accession Number: | edsbas.5FF923A5 |
| Database: | BASE |