In this paper we contribute two new algorithms, PImGA and PIrlEA, which construct their populations online at each iteration. Unlike standard EAs and GAs, which typically rely on the inefficient value iteration method, both algorithms employ the more efficient policy iteration as the computational method for searching for optimal control actions or policies. They also depart from the usual EA and GA selection operator: instead of selecting an optimal policy directly, the agent learns a good, or elite, policy from its parent population, and the resulting policy becomes a member of the next population. Because this policy is obtained by an optimal reinforcement learning algorithm together with a greedy policy, each new population is constructed from better policies than its parents; that is, the offspring inherit the elite abilities of their parents. Intuitively, for a finite problem, the population produced by the simulation will contain near-optimal policies after a number of iterations. Our experiments show that the algorithms work well.
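The core computation the abstract describes, policy iteration applied to each member of a policy population, can be sketched as follows. This is a minimal illustration on a small random MDP: the MDP itself, the population size, and the rule for picking the elite member are assumptions made here for clarity, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Random illustrative MDP: transitions P[s, a, s'] and rewards R[s, a].
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

def policy_evaluation(policy):
    """Solve V = R_pi + gamma * P_pi V exactly as a linear system."""
    P_pi = P[np.arange(n_states), policy]          # (S, S)
    R_pi = R[np.arange(n_states), policy]          # (S,)
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

def policy_iteration(policy):
    """Alternate evaluation and greedy improvement until the policy is stable."""
    while True:
        V = policy_evaluation(policy)
        Q = R + gamma * P @ V                      # action values, shape (S, A)
        improved = Q.argmax(axis=1)                # greedy policy w.r.t. Q
        if np.array_equal(improved, policy):
            return policy, V
        policy = improved

# Assumed population step: improve each parent by policy iteration and
# carry the elite result forward as a member of the next population.
population = [rng.integers(n_actions, size=n_states) for _ in range(4)]
results = [policy_iteration(p) for p in population]
elite_policy, elite_V = max(results, key=lambda r: r[1].mean())
print(elite_V.mean())
```

Because greedy policy improvement never decreases the value function, the elite policy is at least as good as every parent it was derived from, which is the inheritance property the abstract relies on.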
- Reinforcement Learning Algorithms Based on mGA and EA with Policy Iterations
- Springer Berlin Heidelberg