Scientists in Poland have developed a machine-learning method to help solar+storage system owners manage battery charging and discharging so they can minimize power purchases from the grid.
Researchers from Poland's Warsaw University of Technology have proposed a reinforcement learning approach to minimize energy purchases from the grid in PV systems coupled with battery storage.
The trade-off between storing too much and too little power in the battery is key to this complex task. Storing too much electricity means some surplus solar power could be lost, while storing too little could expose system owners to higher electricity prices.
Reinforcement learning is a machine-learning technique in which an agent learns to take suitable actions to maximize rewards in uncertain, potentially complex environments.
“To make this approach applicable, a novel formulation of the decision problem is presented, which focuses on the optimization of grid energy purchases rather than on direct storage control,” the scientists said.
Their approach is based on the Q-learning algorithm, which allows an agent to learn which actions pay off from the rewards it receives; the scientists combined it with a neural network. They conducted a realistic simulation of a grid-connected PV system linked to a battery that, for grid stability reasons, could not inject surplus power into the network.
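In practice, this means the network predicts the value of each possible purchase decision in a given situation and is nudged toward the standard Q-learning target after every step. The sketch below shows that update with a small neural network standing in for the Q-table; the state layout, network size, discount factor and reward are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

N_STATE, N_ACTIONS = 4, 5   # assumed: [PV power, load, price, SoC] and five purchase levels
GAMMA = 0.99                # assumed discount factor

# A small neural network approximating Q(state, action) for each discrete action
q_net = nn.Sequential(nn.Linear(N_STATE, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def q_update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) toward reward + GAMMA * max_a' Q(s', a')."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + GAMMA * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example step with made-up numbers; the reward could be the negative purchase cost.
state = torch.tensor([120.0, 300.0, 0.08, 0.5])
next_state = torch.tensor([0.0, 280.0, 0.25, 0.7])
q_update(state, action=2, reward=-10.0, next_state=next_state)
```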
“The decision module that performs energy purchasing continuously monitors real-time PV generation, loads and electricity prices, as well as the State of Charge (SoC) of the storage,” they said. “Based on this information, the module determines the charging energy volume and sends the corresponding signal to the controller.”
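Read literally, the module's observation is just those four monitored quantities bundled together. A minimal sketch of that bundling, with hypothetical field names and units, could look like this:

```python
import numpy as np

def build_state(pv_power_kw, load_kw, price_per_kwh, soc):
    """Assemble the quantities the decision module monitors into one observation.
    Field names and units are illustrative; the paper lists PV generation,
    load, electricity price and storage state of charge as the inputs."""
    return np.array([pv_power_kw, load_kw, price_per_kwh, soc], dtype=np.float32)

# Example: 120 kW of PV output, 300 kW of load, an off-peak price, half-full storage
state = build_state(pv_power_kw=120.0, load_kw=300.0, price_per_kwh=0.08, soc=0.5)
```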
Under their proposed approach, the controller generates recommendations for electricity purchases from the grid, rather than controlling charge and discharge operations in the battery.
“It makes it possible to define a relatively small set of discrete actions, corresponding to discretized relative purchased energy volumes,” the scientists explained.
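As an illustration of what such a discretization might look like, the sketch below maps a handful of relative purchase levels to an energy volume to buy; the number of levels and the per-step purchase cap are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical discretization: the agent chooses one of a few relative purchase volumes.
PURCHASE_LEVELS = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

def action_to_energy(action_index, max_purchase_kwh):
    """Map a discrete action index to the energy volume to buy from the grid."""
    return PURCHASE_LEVELS[action_index] * max_purchase_kwh

# Example: action 2 with an assumed 250 kWh per-step cap -> buy 125 kWh
energy_kwh = action_to_energy(2, max_purchase_kwh=250.0)
```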
The Polish group tested the effectiveness of this technique on two different solar+storage configurations.
“The base configuration assumes a large-size storage with 1 MWh energy capacity, while in the second case a small-size 200 kWh storage is assumed,” they said. “One should expect quite different purchase decisions, depending on the size of the storage. In both cases, power capacity is 25% of the total capacity and both charging and discharging have an 80% efficiency.”
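Taking that description at face value, the two test cases can be parameterized as below; reading “power capacity is 25% of the total capacity” as a kW rating equal to a quarter of the kWh capacity is one possible interpretation, assumed here.

```python
# Illustrative parameters for the two tested configurations.
CONFIGS = {
    "large": {"energy_kwh": 1000.0},   # 1 MWh base case
    "small": {"energy_kwh": 200.0},    # 200 kWh second case
}
for cfg in CONFIGS.values():
    cfg["power_kw"] = 0.25 * cfg["energy_kwh"]   # assumed reading: 250 kW and 50 kW
    cfg["charge_efficiency"] = 0.80              # 80% efficiency, per the paper
    cfg["discharge_efficiency"] = 0.80
```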
Their analysis showed that the Q-learning algorithm tends to make more frequent but smaller electricity purchases from the grid, typically at lower, off-peak prices.
“The cost distributions suggest that a small-size storage is much better suited to the needs of the system, because a large-size battery is often overloaded with potentially unused energy, some of which is bought,” they said. “These results clearly indicate the importance of both sizing and control decisions, as also noted by other researchers.”
The scientists now plan to conduct further research on how to apply the approach to other forms of storage or to day-ahead operation planning. “More realistic, large-scale simulations should be performed with the results compared to those obtained using other learning methods to confirm the credibility of achieved performance improvements,” they concluded.
They presented their findings in “Real-time energy purchase optimization for a storage-integrated photovoltaic system by deep reinforcement learning,” which was recently published in Control Engineering Practice.