An improved version of the NEH algorithm and its application to large-scale flow-shop scheduling problems

2007 ◽  
Vol 39 (2) ◽  
pp. 229-234 ◽  
Author(s):  
Feng Jin ◽  
Shiji Song ◽  
Cheng Wu

2013 ◽  
Vol 30 (05) ◽  
pp. 1350014 ◽  
Author(s):  
Zhicong Zhang ◽  
Weiping Wang ◽  
Shouyan Zhong ◽  
Kaishun Hu

Reinforcement learning (RL) is a machine learning method based on state or action values that solves large-scale multi-stage decision problems such as Markov decision processes (MDPs) and semi-Markov decision processes (SMDPs). We minimize the makespan of flow-shop scheduling problems with an RL algorithm. We convert flow-shop scheduling problems into SMDPs by constructing elaborate state features, actions and a reward function, so that minimizing the accumulated reward is equivalent to minimizing the schedule objective. We apply the on-line TD(λ) algorithm with linear gradient-descent function approximation to solve the SMDPs. To examine the performance of the proposed RL algorithm, computational experiments are conducted on benchmark problems in comparison with other scheduling methods. The experimental results support the efficiency of the proposed algorithm and show that the RL approach is a promising computational approach to flow-shop scheduling problems, worthy of further investigation.
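The abstract names on-line TD(λ) with linear gradient-descent function approximation as the learning core. The sketch below is a generic illustration of that technique only, not the paper's method: the episode data, feature vectors, step sizes and the toy test case are all assumptions, and the paper's flow-shop SMDP formulation (state features, actions, reward) is not reproduced here.

```python
# Minimal sketch of on-line TD(lambda) with a linear value function
# V(s) = w . phi(s) and accumulating eligibility traces.
# An "episode" is a list of (feature_vector, reward) pairs; the last
# pair is treated as the terminal transition.

def td_lambda(episodes, n_features, alpha=0.1, gamma=1.0, lam=0.8):
    """Estimate weights w so that V(s) ~ w . phi(s) via on-line TD(lambda)."""
    w = [0.0] * n_features
    for episode in episodes:
        z = [0.0] * n_features                      # eligibility trace
        for t in range(len(episode) - 1):
            phi, r = episode[t]
            phi_next, _ = episode[t + 1]
            v = sum(wi * fi for wi, fi in zip(w, phi))
            v_next = sum(wi * fi for wi, fi in zip(w, phi_next))
            delta = r + gamma * v_next - v          # TD error
            z = [gamma * lam * zi + fi for zi, fi in zip(z, phi)]
            w = [wi + alpha * delta * zi for wi, zi in zip(w, z)]
        # terminal transition: the target is the final reward alone
        phi, r = episode[-1]
        v = sum(wi * fi for wi, fi in zip(w, phi))
        delta = r - v
        z = [gamma * lam * zi + fi for zi, fi in zip(z, phi)]
        w = [wi + alpha * delta * zi for wi, zi in zip(w, z)]
    return w
```

With one-hot features the weights converge to the state values of the chain; in a scheduling SMDP the features would instead be hand-crafted descriptors of the partial schedule, as the abstract describes.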


2019 ◽  
Vol 50 (1) ◽  
pp. 87-100 ◽  
Author(s):  
Fuqing Zhao ◽  
Xuan He ◽  
Yi Zhang ◽  
Wenchang Lei ◽  
Weimin Ma ◽  
...  

4OR ◽  
2006 ◽  
Vol 4 (1) ◽  
pp. 15-28 ◽  
Author(s):  
Jean-Louis Bouquard ◽  
Christophe Lenté
