
Path planning based on reinforcement learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Path planning based on reinforcement learning. / Lin, Jin.
In: Applied and Computational Engineering, Vol. 5, No. 1, 31.05.2023, p. 853-858.


Harvard

Lin, J 2023, 'Path planning based on reinforcement learning', Applied and Computational Engineering, vol. 5, no. 1, pp. 853-858. https://doi.org/10.54254/2755-2721/5/20230728

APA

Lin, J. (2023). Path planning based on reinforcement learning. Applied and Computational Engineering, 5(1), 853-858. https://doi.org/10.54254/2755-2721/5/20230728

Vancouver

Lin J. Path planning based on reinforcement learning. Applied and Computational Engineering. 2023 May 31;5(1):853-858. doi: 10.54254/2755-2721/5/20230728

Author

Lin, Jin. / Path planning based on reinforcement learning. In: Applied and Computational Engineering. 2023 ; Vol. 5, No. 1. pp. 853-858.

Bibtex

@article{2005a98545934891a175abe0cfe296d0,
title = "Path planning based on reinforcement learning",
abstract = "With the wide application of mobile robots in industry, path planning has long been a difficult problem for mobile robots. Reinforcement learning algorithms such as Q-learning play a major role in path planning. The traditional Q-learning algorithm mainly uses an $\epsilon$-greedy search policy, but a fixed exploration factor $\epsilon$ brings problems such as slow convergence, long running time, and many consecutive action changes (for example, the number of turns during robot movement), which conflict with the stability requirements of mobile robots in industrial transportation. Especially in the transportation of dangerous chemicals, repeated turning increases the risk of the load toppling. This paper proposes a dynamic search strategy based on an improved $\epsilon$-greedy policy to improve the stability of mobile robots in motion planning. Experiments show that the dynamic search strategy converges faster, consumes less time, requires fewer consecutive action changes, and achieves higher motion stability in the test environment.",
author = "Jin Lin",
year = "2023",
month = may,
day = "31",
doi = "10.54254/2755-2721/5/20230728",
language = "English",
volume = "5",
pages = "853--858",
journal = "Applied and Computational Engineering",
issn = "2755-2721",
publisher = "EWA Publishing",
number = "1",
}

RIS

TY - JOUR

T1 - Path planning based on reinforcement learning

AU - Lin, Jin

PY - 2023/5/31

Y1 - 2023/5/31

N2 - With the wide application of mobile robots in industry, path planning has long been a difficult problem for mobile robots. Reinforcement learning algorithms such as Q-learning play a major role in path planning. The traditional Q-learning algorithm mainly uses an ε-greedy search policy, but a fixed exploration factor ε brings problems such as slow convergence, long running time, and many consecutive action changes (for example, the number of turns during robot movement), which conflict with the stability requirements of mobile robots in industrial transportation. Especially in the transportation of dangerous chemicals, repeated turning increases the risk of the load toppling. This paper proposes a dynamic search strategy based on an improved ε-greedy policy to improve the stability of mobile robots in motion planning. Experiments show that the dynamic search strategy converges faster, consumes less time, requires fewer consecutive action changes, and achieves higher motion stability in the test environment.

AB - With the wide application of mobile robots in industry, path planning has long been a difficult problem for mobile robots. Reinforcement learning algorithms such as Q-learning play a major role in path planning. The traditional Q-learning algorithm mainly uses an ε-greedy search policy, but a fixed exploration factor ε brings problems such as slow convergence, long running time, and many consecutive action changes (for example, the number of turns during robot movement), which conflict with the stability requirements of mobile robots in industrial transportation. Especially in the transportation of dangerous chemicals, repeated turning increases the risk of the load toppling. This paper proposes a dynamic search strategy based on an improved ε-greedy policy to improve the stability of mobile robots in motion planning. Experiments show that the dynamic search strategy converges faster, consumes less time, requires fewer consecutive action changes, and achieves higher motion stability in the test environment.

U2 - 10.54254/2755-2721/5/20230728

DO - 10.54254/2755-2721/5/20230728

M3 - Journal article

VL - 5

SP - 853

EP - 858

JO - Applied and Computational Engineering

JF - Applied and Computational Engineering

SN - 2755-2721

IS - 1

ER -
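
The abstract describes Q-learning with an ε-greedy policy whose exploration factor decays over training rather than staying fixed. The sketch below illustrates that general idea on a toy 1-D corridor; it is a minimal illustration, not the paper's implementation — the environment, the linear decay schedule, and all hyperparameters here are assumptions.

```python
import random

# Tabular Q-learning with a decaying epsilon-greedy policy on a 1-D corridor.
# States 0..5; state 5 is the goal. Actions: move left (-1) or right (+1).

N_STATES = 6
ACTIONS = [-1, +1]

def step(state, action):
    """Apply an action; small step penalty, reward 1.0 at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else -0.01
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=300, alpha=0.5, gamma=0.9, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for ep in range(episodes):
        # Dynamic exploration: decay epsilon linearly from 1.0 toward 0.05,
        # so early episodes explore widely and later ones mostly exploit.
        eps = max(0.05, 1.0 - ep / episodes)
        state, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.randrange(2)            # explore
            else:
                a = q[state].index(max(q[state]))  # exploit
            nxt, r, done = step(state, ACTIONS[a])
            target = r + (0.0 if done else gamma * max(q[nxt]))
            q[state][a] += alpha * (target - q[state][a])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # Greedy policy over non-terminal states: index 1 means "move right".
    policy = [row.index(max(row)) for row in q[:-1]]
    print(policy)
```

A fixed large ε would keep injecting random turns even after the value table has converged; decaying it is one simple way to reduce late-training action changes, which is the stability concern the abstract raises.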