Distribution network path planning method and system based on artificial intelligence optimization algorithm
| Published in: | Discover Applied Sciences, Vol. 7, No. 10, p. 1074 (18 pp.) |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Cham: Springer International Publishing (Springer Nature B.V.), 26.09.2025 |
| Subjects: | |
| ISSN: | 3004-9261, 2523-3963, 2523-3971 |
| Online Access: | Get full text |
| Summary: | Distribution network path planning is the task of finding the most efficient, cost-effective, and reliable routes for delivering electricity from substations to consumers. It is a difficult but necessary task for meeting changing load demands and integrating renewable energy sources. The network is represented by a computerized model of the electrical system comprising power cables and nodes (such as transformers and substations), together with the capacity, impedance, and position of each component. Route planning is a fundamental problem with applications in many domains, and scholarly interest in solving it with deep reinforcement learning has grown in recent years, making it a popular avenue for path-planning research. This study examines a power-distribution route-optimization approach, applies deep reinforcement learning to the continuous path-planning problem, and runs experiments in a Miniworld maze. A comparison of Deep Deterministic Policy Gradient (DDPG) with genetic algorithms, binary swarm optimization, and a historical-average approach found that the historical average was easy to compute but achieved low, less-than-ideal accuracy that changed little as the amount of data increased. A reward-shaping DDPG algorithm is proposed in which the reward function is represented by a neural network and optimized dynamically. The genetic algorithm's accuracy hovers around 70% and degrades as the training set grows, whereas the DDPG model's forecasting accuracy rises as the training set expands and eventually stabilizes at about 83%, reflecting a deeper learning model with a higher training level. |
|---|---|
| DOI: | 10.1007/s42452-025-07699-3 |
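The abstract mentions a reward-shaping DDPG variant in which the reward function is represented by a neural network and adjusted dynamically. The paper's exact formulation is not given in this record; as a minimal illustrative sketch, the snippet below shows the standard potential-based form of reward shaping, where a small (here randomly initialized, hypothetical) network plays the role of the learned potential Φ(s). All dimensions and names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP potential Phi(s): one hidden tanh layer over a 2-D state.
# In a full reward-shaping DDPG pipeline these weights would be trained alongside
# the actor and critic; here they are fixed random values for illustration.
W1 = rng.normal(scale=0.1, size=(8, 2))
b1 = np.zeros(8)
w2 = rng.normal(scale=0.1, size=8)

def potential(s):
    """Scalar potential Phi(s) computed by the small network."""
    h = np.tanh(W1 @ s + b1)
    return float(w2 @ h)

def shaped_reward(r, s, s_next, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * Phi(s') - Phi(s).

    This form is known to preserve the optimal policy of the underlying
    MDP, which is why it is a common template for learned reward shaping.
    """
    return r + gamma * potential(s_next) - potential(s)
```

With `gamma = 1.0` and an unchanged state, the shaping term cancels exactly, so `shaped_reward(1.0, s, s, gamma=1.0)` returns the environment reward unchanged.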