Dynamic appliance scheduling and energy management in smart homes using adaptive reinforcement learning techniques


Detailed Bibliography
Published in: Scientific Reports, Volume 15, Issue 1, pp. 24594-26
Main authors: Saroha, Poonam; Singh, Gopal; Lilhore, Umesh Kumar; Simaiya, Sarita; Khan, Monish; Alroobaea, Roobaea; Alsafyani, Majed; Alsufyani, Hamed
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK (Nature Portfolio), 09.07.2025
ISSN: 2045-2322
Description
Summary: Smart home energy management is complicated by varying user preferences, electricity costs, and consumption patterns. These dynamics are difficult for traditional systems to handle, but recent developments in reinforcement learning and optimization may help. The paper introduces a novel Demand Response (DR) method that integrates a Self-Adaptive Puma Optimizer Algorithm (SAPOA) with a Multi-Objective Deep Q-Network (MO-DQN), improving the management of smart home energy consumption, cost, and user preferences. SAPOA adaptively optimizes multiple objectives, while the DQN improves decision-making by learning from interactions. The proposed method adapts to user preferences by learning from previous energy-usage patterns and optimizes the scheduling of critical household appliances, enhancing energy efficiency. Static optimization in traditional home energy management systems (HEMS) makes it difficult to handle changing costs and dynamic user preferences, and reinforcement learning (RL) methods currently in use frequently lack sophisticated optimization integration. The experimental results show that, outperforming the multi-objective reinforcement learning puma optimizer algorithm (MORL-POA), SAPOA, and POA methods, the proposed solution dramatically lowers the peak-to-average ratio (PAR) from 3.4286 to 1.9765 without renewable energy sources (RES) and to 1.0339 with RES. By combining SAPOA with the DQN, the proposed approach improves energy management, optimizes appliance scheduling, and efficiently handles uncertainty, increasing performance and flexibility. Performance is assessed using metrics such as the peak-to-average ratio (PAR), energy usage, and electricity cost, and the method is implemented on the MATLAB platform.
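
The abstract's headline evaluation metric is the peak-to-average ratio (PAR) of the household load profile. The short sketch below only illustrates how that metric responds to appliance scheduling; it is written in Python rather than the paper's MATLAB implementation, and the load profiles, numbers, and function name are hypothetical, not taken from the paper.

import numpy as np

# Illustrative sketch (not the paper's code): PAR is the peak hourly load
# divided by the mean hourly load, so shifting appliance use out of peak
# hours lowers it.
def peak_to_average_ratio(load):
    load = np.asarray(load, dtype=float)
    return load.max() / load.mean()

# Hypothetical 24-hour load profile with appliance use crowded into the evening peak.
unscheduled = [1, 1, 1, 1, 1, 1, 2, 3, 3, 2, 2, 2,
               2, 2, 2, 3, 4, 6, 7, 7, 5, 3, 2, 1]

# Hypothetical rescheduled profile carrying the same total energy (64 units),
# with deferrable appliances shifted to off-peak hours.
scheduled = [2, 2, 2, 2] + [3] * 16 + [2, 2, 2, 2]

print(f"PAR before scheduling: {peak_to_average_ratio(unscheduled):.4f}")  # 2.6250
print(f"PAR after scheduling:  {peak_to_average_ratio(scheduled):.4f}")    # 1.1250

In the paper itself, the load shift is produced by the SAPOA/MO-DQN scheduler; the sketch only reproduces the metric used to score it.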
DOI: 10.1038/s41598-025-08125-9