Abstract:
The rapid evolution of edge computing, particularly Mobile Edge Computing
(MEC), has prompted the need for efficient task offloading strategies to optimize
network resources, energy consumption, and latency. As applications requiring
low latency and high data processing capabilities proliferate, offloading
computational tasks from mobile devices to edge servers has become essential.
This paper explores the use of Deep Reinforcement Learning (DRL) models,
specifically Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient
(DDPG), to optimize task offloading in MEC systems.