2008 | OriginalPaper | Chapter
The Concept of Opposition and Its Use in Q-Learning and Q(λ) Techniques
Authors: Maryam Shokri, H. R. Tizhoosh, Mohamed S. Kamel
Published in: Oppositional Concepts in Computational Intelligence
Publisher: Springer Berlin Heidelberg
Reinforcement learning (RL) is a goal-directed method for solving problems in uncertain and dynamic environments. RL agents explore the states of the environment in order to find an optimal policy, which maps states to reward-bearing actions. This chapter discusses recently introduced techniques to expedite some of the tabular RL methods for off-policy, step-by-step, incremental, and model-free reinforcement learning with discrete state and action spaces. The concept of opposition-based reinforcement learning has been introduced for Q-value updating. Based on this concept, the Q-values can be simultaneously updated for an action and its opposite action in a given state; hence, the learning process in general is accelerated. Several algorithms are outlined in this chapter. OQ(λ) has been introduced to accelerate the Q(λ) algorithm in discrete state spaces. The NOQ(λ) method is an extension of OQ(λ) that operates in a broader range of non-deterministic environments. The update of the opposition trace in OQ(λ) depends on the next state of the opposite action (which is generally not taken by the agent). This limits the applicability of the technique to deterministic environments, because that next state must be known to the agent. NOQ(λ) updates the opposition trace without requiring knowledge of the next state for the opposite action. Preliminary results show that NOQ(λ) can be employed in non-deterministic environments and performs even faster than OQ(λ).
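The core idea of simultaneously updating the Q-values for an action and its opposite can be sketched as follows. This is a minimal, hypothetical illustration of a single tabular update step, not the chapter's exact OQ(λ) algorithm: the table sizes, the action-to-opposite mapping, and the opposite reward `r_opp` are all illustrative assumptions, and the requirement to supply `s_next_opp` (the next state of the opposite action) reflects the limitation that restricts OQ(λ) to deterministic environments.

```python
import numpy as np

# Illustrative tabular setup (not from the chapter): a small discrete
# state/action space where each action has a designated opposite,
# e.g. left <-> right, up <-> down.
N_STATES, N_ACTIONS = 16, 4
OPPOSITE = {0: 1, 1: 0, 2: 3, 3: 2}  # action index -> opposite action index

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed values)

def oppositional_q_update(Q, s, a, r, s_next, r_opp, s_next_opp):
    """One opposition-based Q-learning step (sketch).

    Besides the standard update for the taken action (s, a), the Q-value
    of the opposite action is updated with an opposite reward r_opp and
    its next state s_next_opp.  Needing s_next_opp up front is exactly
    why this style of update assumes a deterministic environment.
    """
    # Standard off-policy Q-learning update for the action actually taken
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
    # Simultaneous update for the opposite action (never executed by the agent)
    a_opp = OPPOSITE[a]
    Q[s, a_opp] += ALPHA * (r_opp + GAMMA * Q[s_next_opp].max() - Q[s, a_opp])
    return Q

Q = np.zeros((N_STATES, N_ACTIONS))
Q = oppositional_q_update(Q, s=0, a=0, r=1.0, s_next=1, r_opp=-1.0, s_next_opp=2)
```

With a zero-initialized table, one such step moves Q[0, 0] toward the positive reward and Q[0, 1] (the opposite action) toward the negative one, so two entries of the table learn from a single interaction.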