Show simple item record
dc.contributor.author | Remmache, Mohammed Idris |
dc.contributor.author | Boudouh, Saida Sarra |
dc.contributor.author | Bendouma, Tahar |
dc.contributor.author | Abdelhafidi, Zohra |
dc.date.accessioned | 2025-05-20T08:00:10Z |
dc.date.available | 2025-05-20T08:00:10Z |
dc.date.issued | 2024-10-25 |
dc.identifier.uri | http://depot.umc.edu.dz/handle/123456789/14619 |
dc.description.abstract | The rapid evolution of edge computing, particularly Mobile Edge Computing (MEC), has prompted the need for efficient task offloading strategies to optimize network resources, energy consumption, and latency. As applications requiring low latency and high data processing capabilities proliferate, offloading computational tasks from mobile devices to edge servers has become essential. This paper explores the use of Deep Reinforcement Learning (RL) models, specifically Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG), to optimize task offloading in MEC systems. We analyze a multi-user, single-server scenario and compare the performance of DQN and DDPG in reducing energy consumption and delay. Our results demonstrate that DQN outperforms DDPG in terms of reward stability, energy efficiency, and latency management, making it more suitable for real-time applications. The study highlights the potential of RL strategies to improve MEC performance and suggests future research on multi-server environments. | fr_FR
dc.language.iso | en | fr_FR
dc.publisher | Université Frères Mentouri - Constantine 1 | fr_FR
dc.subject | Learning-Driven | fr_FR
dc.title | Reinforcement Learning-Driven Task Offloading: Improving MEC Efficiency with DQN and DDPG | fr_FR
dc.type | Article | fr_FR
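
The abstract above contrasts DQN, which selects among discrete actions, with DDPG, which outputs continuous actions, for task offloading in a multi-user, single-server MEC system. As a purely illustrative aside (not the authors' implementation), the Python sketch below shows what a DQN-style decision step for such a setting might look like: each user's task is either executed locally or offloaded to the single edge server, and an epsilon-greedy policy picks the joint decision from a small Q-network. The state layout, network sizes, and cost function are hypothetical placeholders, and PyTorch is assumed.

# Illustrative sketch only: a toy DQN-style agent choosing, per user, whether
# to execute a task locally (bit = 0) or offload it to the single MEC server
# (bit = 1). Dimensions and the cost model are hypothetical placeholders.
import random
import torch
import torch.nn as nn

N_USERS = 4                # hypothetical multi-user, single-server setup
STATE_DIM = 3 * N_USERS    # e.g. task size, CPU cycles, channel gain per user
N_ACTIONS = 2 ** N_USERS   # one binary offload decision per user

class QNetwork(nn.Module):
    """Small MLP mapping the observed MEC state to Q-values over joint decisions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

def select_action(qnet, state, epsilon):
    # Epsilon-greedy over the discrete joint offloading decisions (DQN-style).
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

def toy_cost(state, action):
    # Placeholder weighted energy-plus-delay cost; a real system model would use
    # channel, server CPU, and local CPU parameters instead of these constants.
    offload = [(action >> i) & 1 for i in range(N_USERS)]
    task_sizes = state[:N_USERS]
    local = sum(float(s) for s, off in zip(task_sizes, offload) if not off)
    remote = 0.5 * sum(float(s) for s, off in zip(task_sizes, offload) if off)
    return local + remote

if __name__ == "__main__":
    qnet = QNetwork()
    state = torch.rand(STATE_DIM)   # random stand-in for the observed MEC state
    action = select_action(qnet, state, epsilon=0.1)
    print("joint offload decision:", action, "toy cost:", toy_cost(state, action))

One intuition consistent with the abstract's finding is that offloading decisions in this setting are naturally discrete (offload or not, per user), which suits DQN's value-based action selection, whereas DDPG targets continuous action spaces and its output must be mapped back to binary choices.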