LAPSE:2023.12652
Published Article

Carbon-Neutral Cellular Network Operation Based on Deep Reinforcement Learning
February 28, 2023
Abstract
With the exponential growth of traffic demand, ultra-dense networks have been proposed to cope with it. However, increasing network density raises power consumption, and carbon neutrality has become an important goal for reducing carbon emissions and production. In cellular networks, carbon emissions and production are directly related to power consumption. In this paper, we aim to achieve carbon neutrality while maximizing network capacity under given power constraints. We assume that base stations have their own renewable energy sources to generate power. To achieve carbon neutrality, we control the power consumption of base stations by adjusting their transmission power and switching off base stations so that consumption balances the generated power. Given such power constraints, our goal is to maximize the network capacity, i.e., the rate achievable for the users. To this end, we carefully design the objective function and then propose an efficient Deep Deterministic Policy Gradient (DDPG) algorithm to maximize it. Extensive simulations validate the benefits of the proposed method, showing that it achieves carbon neutrality and provides a better rate than other baseline schemes. Specifically, the DDPG algorithm achieved up to a 63% gain in reward value compared to the baseline schemes.
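The trade-off described in the abstract — maximize the users' achievable rate while keeping each base station's power consumption within its own renewable generation — can be illustrated with a toy reward function of the kind a DDPG agent would maximize. The penalty form and the weight `lam` below are illustrative assumptions; the paper's exact objective function is not reproduced here.

```python
def carbon_neutral_reward(user_rates, consumed_power, generated_power, lam=1.0):
    """Toy reward: total achievable user rate minus a weighted penalty for
    any base station whose power consumption exceeds its renewable
    generation. The penalty form and weight are illustrative assumptions,
    not the paper's exact objective.

    user_rates      -- achievable rate per user
    consumed_power  -- power consumed per base station
    generated_power -- renewable power generated per base station
    lam             -- penalty weight on the total power deficit
    """
    sum_rate = sum(user_rates)
    # Penalize only the deficit (consumption above generation);
    # surplus renewable generation incurs no penalty.
    deficit = sum(max(0.0, c - g)
                  for c, g in zip(consumed_power, generated_power))
    return sum_rate - lam * deficit
```

When every base station stays within its generated power, the reward equals the sum rate; exceeding the renewable budget reduces the reward, steering the agent toward carbon-neutral transmission-power and on/off decisions.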
Record ID: LAPSE:2023.12652
Keywords: carbon neutrality, DDPG, reinforcement learning
Subject
Suggested Citation: Kim H, So J, Kim H. Carbon-Neutral Cellular Network Operation Based on Deep Reinforcement Learning. (2023). LAPSE:2023.12652
Author Affiliations
Journal Name: Energies
Volume: 15
Issue: 12
First Page: 4504
Year: 2022
Publication Date: 2022-06-20
ISSN: 1996-1073
Version Comments: Original Submission
Other Meta: PII: en15124504, Publication Type: Journal Article
Record Map
Published Article: LAPSE:2023.12652 (This Record)
External Link: https://doi.org/10.3390/en15124504 (Publisher Version)
Record Statistics
Record Views: 166
Version History
[v1] (Original Submission): Feb 28, 2023 (verified by curator on Feb 28, 2023)
This Version Number: v1
URL: https://psecommunity.org/LAPSE:2023.12652
Record Owner: Auto Uploader for LAPSE