DSpace Repository

Reinforcement Learning-Driven Task Offloading: Improving MEC Efficiency with DQN and DDPG


dc.contributor.author Remmache, Mohammed Idris
dc.date.accessioned 2025-03-17T10:09:59Z
dc.date.available 2025-03-17T10:09:59Z
dc.date.issued 2024
dc.identifier.uri http://depot.umc.edu.dz/handle/123456789/14529
dc.description.abstract The rapid evolution of edge computing, particularly Mobile Edge Computing (MEC), has prompted the need for efficient task offloading strategies to optimize network resources, energy consumption, and latency. As applications requiring low latency and high data-processing capabilities proliferate, offloading computational tasks from mobile devices to edge servers has become essential. This paper explores the use of Deep Reinforcement Learning (DRL) models, specifically Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG), to optimize task offloading in MEC systems.
dc.title Reinforcement Learning-Driven Task Offloading: Improving MEC Efficiency with DQN and DDPG
dc.type Article
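The abstract frames offloading as a discrete decision (execute locally vs. offload to an edge server) that a DQN agent learns to make. As a rough illustration only, the sketch below shows an epsilon-greedy action selection over Q-values; the `q_values` function is a toy stand-in for a trained Q-network, and all state variables, names, and cost formulas here are illustrative assumptions, not taken from the paper.

```python
import random

# 0: execute the task locally, 1: offload to the edge server
ACTIONS = [0, 1]

def q_values(task_size_mb, link_rate_mbps, cpu_load):
    # Toy surrogate for a trained Q-network: Q-values as negated
    # crude cost estimates (lower cost -> higher Q-value).
    local_cost = task_size_mb * (1.0 + cpu_load)             # compute-bound
    offload_cost = task_size_mb / max(link_rate_mbps, 1e-6)  # transfer-bound
    return [-local_cost, -offload_cost]

def select_action(state, epsilon=0.1, rng=random.Random(0)):
    # Epsilon-greedy policy: explore with probability epsilon,
    # otherwise pick the action with the highest Q-value.
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    q = q_values(*state)
    return max(ACTIONS, key=lambda a: q[a])

# A large task over a fast link favours offloading:
print(select_action((50.0, 100.0, 0.8), epsilon=0.0))  # -> 1
# A small task over a slow link favours local execution:
print(select_action((1.0, 0.5, 0.1), epsilon=0.0))     # -> 0
```

In the DQN setting `q_values` would be a neural network trained from replayed transitions; DDPG extends the idea to continuous actions (e.g. what fraction of a task to offload) via an actor-critic pair.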

