GAMHE

Group of advanced Automation of Machines, Highly complex processes and Environments

Authors: Beruvides, G; Juanes, C; Castaño, F; Haber, R.

Conference: IEEE International Conference on Industrial Informatics – INDIN 2015, Cambridge, UK, pp. 1180-1185, July 22-24, 2015

Summary: This paper presents a self-learning strategy for artificial cognitive control based on reinforcement learning, in particular an online version of the Q-learning algorithm. An architecture for artificial cognitive control was initially reported by Sanchez-Boza et al. (2011), but without an effective self-learning strategy to deal with nonlinear and time-variant behavior. The artificial cognitive control architecture has two operating modes: the anticipation mode (i.e., inverse model control) and the single-loop mode. The main goal of the Q-learning algorithm is to cope at run-time with the intrinsic uncertainty, nonlinearities, and noisy behavior of the process. To validate the proposed method, experiments are carried out to measure and control the microdrilling process; the real-time control of the drilling force is presented as a proof of concept. Reinforcement learning improves the performance of the artificial cognitive control system, yielding good transient responses and acceptable steady-state error. The Q-learning mechanism, implemented on a low-cost computing platform, demonstrates the suitability of the approach for an industrial setup.
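To illustrate the kind of online Q-learning update the summary refers to, the sketch below shows the standard tabular temporal-difference rule on a toy episodic task. The discretization (three error states, two corrective actions), the reward scheme, and all parameter values are hypothetical choices for illustration only, not the controller or process model used in the paper.

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One online Q-learning step: Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a])."""
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy run: 3 discretized error states, 2 corrective actions (hypothetical setup).
random.seed(0)
n_states, n_actions = 3, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
s = 0
for _ in range(500):
    # Epsilon-greedy action selection: explore 10% of the time.
    if random.random() < 0.1:
        a = random.randrange(n_actions)
    else:
        a = max(range(n_actions), key=lambda i: Q[s][i])
    # Hypothetical environment: action 1 earns reward 1 and advances the
    # state; action 0 earns nothing and leaves the state unchanged.
    if a == 1:
        r, s_next = 1.0, min(s + 1, n_states - 1)
    else:
        r, s_next = 0.0, s
    q_update(Q, s, a, r, s_next)
    # Episodic reset once the final state is reached.
    s = 0 if s_next == n_states - 1 else s_next

print(Q[0][1] > Q[0][0])  # the rewarding action ends up preferred in state 0
```

Because the update runs one sample at a time, the same rule can be applied in-loop as new process measurements arrive, which is the essential property exploited by an online self-learning controller.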

DOI: 10.1109/INDIN.2015.7281903