An Adaptive Meta-Reinforcement Learning Framework for Dynamic Flexible Job Shop Scheduling

Research output: Contribution to journal › Article › peer-review

Abstract

The optimization of flexible job shop scheduling is essential for improving manufacturing efficiency and performance in dynamic production environments. However, existing scheduling methods face challenges in scalability and adaptability, which limit their effectiveness in such environments. To address these limitations, this paper proposes a generalized and modular Dynamic Flexible Job Shop Scheduling Problem (DFJSP) framework that systematically decomposes shop-floor elements into key modules, enabling flexible and resilient scheduling under dynamic conditions. Building on this framework, an Adaptive Markov Decision Process (AMDP) is formulated to capture real-time shop-floor states and guide optimal action selection. Leveraging Meta-Reinforcement Learning (MRL), the proposed approach integrates Model-Agnostic Meta-Learning (MAML) with Proximal Policy Optimization (PPO) to enable rapid adaptation to new scheduling tasks while enhancing policy generalization. Numerical experiments demonstrate that the framework effectively balances multiple dynamic objectives, including makespan and energy consumption, and adapts efficiently to real-time variations in job priorities, machine availability, and processing times. The results highlight the potential of combining modular problem formulation, AMDP modeling, and MRL for scalable, efficient, and robust DFJSP solutions in modern manufacturing environments.
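The MAML-style meta-learning loop described in the abstract can be illustrated with a minimal first-order sketch. This is not the paper's method: the task losses, the scheduling environment, and the inner optimizer below are all hypothetical stand-ins (a toy quadratic per-task loss and plain gradient descent replace the actual PPO updates on the Adaptive MDP), intended only to show the inner-adaptation / outer-meta-update structure.

```python
import numpy as np

# First-order MAML-style meta-training sketch on toy quadratic "tasks".
# Each task is minimizing (theta - c)^2 for a task-specific target c,
# standing in for a task-specific scheduling objective; the inner-loop
# optimizer here is plain gradient descent, NOT the paper's PPO.

rng = np.random.default_rng(0)

def task_grad(theta, c):
    """Gradient of the per-task loss L(theta) = (theta - c)^2."""
    return 2.0 * (theta - c)

def adapt(theta, c, inner_lr=0.1, steps=3):
    """Inner loop: fast adaptation to one task from the meta-parameters."""
    for _ in range(steps):
        theta = theta - inner_lr * task_grad(theta, c)
    return theta

def meta_train(theta=0.0, meta_lr=0.05, iterations=200):
    """Outer loop: move meta-parameters toward post-adaptation optima
    (a first-order, Reptile-style approximation of the MAML update)."""
    for _ in range(iterations):
        c = rng.uniform(-1.0, 1.0)   # sample a hypothetical scheduling task
        adapted = adapt(theta, c)
        theta = theta + meta_lr * (adapted - theta)
    return theta

theta_meta = meta_train()
# Task targets are drawn uniformly from [-1, 1], so the meta-learned
# initialization should settle near their mean, 0, from which any single
# task can be reached in a few inner-loop steps.
print(round(theta_meta, 2))
```

The point of the sketch is the two-level structure: the inner loop specializes quickly to one task, while the outer loop shapes an initialization from which that specialization is cheap, which is the adaptation property the abstract attributes to combining MAML with PPO.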
Original language: English
Journal: IEEE Transactions on Automation Science and Engineering
Publication status: Published - 30 Oct 2025

