Network slicing for vehicular communications: a multi-agent deep reinforcement learning approach.

Citation metadata

Date: Oct. 2021
From: Annales des Telecommunications (Vol. 76, Issue 9-10)
Publisher: Springer
Document Type: Report; Brief article
Length: 374 words


Abstract:

Keywords: Network slicing; Vehicle-to-vehicle; Non-orthogonal multiple access; Resource allocation; Deep Q-learning; Deep reinforcement learning

This paper studies the multi-agent resource allocation problem in vehicular networks using non-orthogonal multiple access (NOMA) and network slicing. Vehicles broadcast multiple packets with heterogeneous quality-of-service (QoS) requirements: safety-related packets (e.g., accident reports) require very low-latency communication, while raw sensor data sharing (e.g., high-definition map sharing) requires high-throughput communication. To satisfy these heterogeneous service requirements, we propose a network slicing architecture. We focus on a non-cellular scenario in which vehicles broadcast directly over the device-to-device interface (i.e., sidelink communication). Resource allocation in such a vehicular network is very difficult, mainly due to (i) the rapid variation of wireless channels among highly mobile vehicles and (ii) the lack of a central coordination point, which precludes acquiring instantaneous channel state information for centralized resource allocation. The resulting resource allocation problem is therefore highly complex: it includes not only the usual spectrum and power allocation, but also coverage selection (which target vehicles to broadcast to) and packet selection (which network slice to use). These decisions must be made jointly, because selected packets can be overlaid using NOMA, so spectrum and power must be carefully allocated to achieve better vehicle coverage. To this end, we first provide a mathematical programming formulation and a thorough NP-hardness analysis of the problem. Then, we model it as a multi-agent Markov decision process. Finally, to solve it efficiently, we adopt a deep reinforcement learning (DRL) approach and specifically propose a deep Q-learning (DQL) algorithm.
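The power-domain overlaying the abstract refers to can be illustrated with a small sketch. When two packets are superposed with NOMA, the receiver decodes them by successive interference cancellation (SIC): it decodes the strongest signal first while treating the others as interference, subtracts it, and repeats. The function below is a generic illustration of that rate computation, not code from the paper; the power values, channel gain, and noise level are made-up assumptions.

```python
import numpy as np

def sic_rates(powers, gain, noise):
    """Per-packet spectral efficiency (bit/s/Hz) for NOMA-overlaid packets
    decoded by SIC at a single receiver.

    powers : transmit power allocated to each overlaid packet
    gain   : channel power gain to the receiver
    noise  : receiver noise power
    """
    powers = np.asarray(powers, dtype=float)
    order = np.argsort(powers)[::-1]        # decode strongest first
    received = powers[order] * gain
    rates = np.zeros(len(powers))
    for i, idx in enumerate(order):
        interference = received[i + 1:].sum()  # not-yet-decoded packets
        sinr = received[i] / (interference + noise)
        rates[idx] = np.log2(1.0 + sinr)
    return rates

# Toy example: a high-rate map-sharing packet overlaid with a safety packet.
rates = sic_rates([0.8, 0.2], gain=1.0, noise=0.01)
```

The stronger packet suffers interference from the weaker one, while the weaker packet, decoded last, sees only noise; this asymmetry is what the joint power/spectrum allocation must balance.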
The proposed DQL algorithm is practical because it can be implemented in an online and distributed manner. It is based on a cooperative learning strategy in which all agents perceive a common reward and thus learn cooperatively and distributedly, improving the resource allocation solution through offline training. We show that our approach is robust and efficient under different variations of the network parameters and when compared to centralized benchmarks.

Author Affiliation: (1) Department of Electrical and Computer Engineering, Université de Sherbrooke, Sherbrooke, Canada (a) zoubeir.mlika@usherbrooke.ca

Article History: Received Date: 02/16/2021; Accepted Date: 07/19/2021; Registration Date: 07/20/2021; Online Date: 08/09/2021
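The cooperative scheme described above, where every agent observes the same reward and updates its own value function independently, can be sketched in miniature. The snippet below is a tabular Q-learning stand-in for the paper's DQL (no neural network), with a made-up reward that penalizes colliding spectrum choices; all sizes, rates, and the reward function are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_STATES, N_ACTIONS = 2, 4, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# One independent Q-table per agent; all agents share one common reward.
Q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def common_reward(actions):
    # Toy stand-in: high reward when agents pick distinct actions
    # (e.g., non-colliding spectrum blocks), penalty on collision.
    return 1.0 if len(set(actions)) == len(actions) else -1.0

def step(state):
    # Each agent acts epsilon-greedily on its own Q-table.
    actions = [
        int(rng.integers(N_ACTIONS)) if rng.random() < EPS
        else int(np.argmax(q[state]))
        for q in Q
    ]
    r = common_reward(actions)
    next_state = int(rng.integers(N_STATES))  # toy random channel dynamics
    # Distributed update: every agent applies the same shared reward.
    for q, a in zip(Q, actions):
        q[state, a] += ALPHA * (r + GAMMA * q[next_state].max() - q[state, a])
    return next_state, r

state = 0
for _ in range(5000):
    state, _ = step(state)
```

Because the reward is common, each agent's local update implicitly accounts for the others' choices, which is what lets training remain distributed while still steering the agents toward a jointly good allocation.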

Source Citation

Gale Document Number: GALE|A679580258