
ePrints (Open Access)

A New Error Temporal Difference Algorithm for Deep Reinforcement Learning in Microgrid Optimization

Lookup NU author(s): Fulong Yao, Dr Wanqing Zhao, Dr Matthew Forshaw


Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

Predictive control approaches based on deep reinforcement learning (DRL) have gained significant attention in microgrid energy optimization. However, existing research often overlooks the uncertainty stemming from imperfect prediction models, which can lead to suboptimal control strategies. This paper presents a new error temporal difference (ETD) algorithm for DRL to address the uncertainty in predictions, aiming to improve the performance of microgrid operations. First, a microgrid system integrated with renewable energy sources (RES) and energy storage systems (ESS), along with its Markov decision process (MDP), is modelled. Second, a predictive control approach based on a deep Q network (DQN) is presented, in which a weighted average algorithm and a new ETD algorithm are designed to quantify and address the prediction uncertainty, respectively. Finally, simulations on a real-world US dataset suggest that the developed ETD effectively improves the performance of DRL in optimizing microgrid operations.
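
The abstract outlines the approach but not the ETD update itself. As context only, the sketch below shows the conventional one-step temporal-difference target used in standard DQN training, which the paper's ETD algorithm modifies to account for prediction error; the function and variable names, the three-action ESS example, and the numerical values are illustrative assumptions and do not reproduce the paper's algorithm.

import numpy as np

def dqn_td_target(reward, next_q_values, done, gamma=0.99):
    # One-step TD target: y = r + gamma * max_a' Q(s', a'), zeroed at episode end.
    return reward + gamma * (1.0 - done) * np.max(next_q_values)

def td_error(q_value, target):
    # TD error that a DQN update drives towards zero for the taken action.
    return target - q_value

# Hypothetical microgrid step: Q-values of the next state for three
# illustrative ESS actions (charge, discharge, idle).
next_q = np.array([1.2, 0.7, 0.9])
y = dqn_td_target(reward=0.5, next_q_values=next_q, done=0.0)
print(td_error(q_value=1.0, target=y))  # signed TD error for the taken action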


Publication metadata

Author(s): Yao F, Zhao W, Forshaw M

Publication type: Conference Proceedings (inc. Abstract)

Publication status: Published

Conference Name: 2024 9th International Conference on Renewable Energy and Conservation (ICREC 2024)

Year of Conference: 2024

Online publication date: 22/11/2024

Acceptance date: 13/08/2024

Date deposited: 11/12/2024

ISSN: 1865-3537

Publisher: Springer

URL: https://www.icrec.org/2024.html

ePrints DOI: 10.57711/yz04-5338

Series Title: Green Energy and Technology
