Sarsa algorithm: why do the Q-values tend toward zero?

Posted 2024-06-25 23:57:42


I am trying to implement the Sarsa algorithm to solve the Frozen Lake environment from OpenAI Gym. I only started working on this recently, but I think I understand it.

I also understand how the Sarsa algorithm works; there are plenty of sites where its pseudocode can be found, and I know it. In my implementation I have followed all of those steps, but when I check the final Q function after all the episodes, I notice that all the values tend toward zero, and I don't know why.
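
For reference, the tabular Sarsa update rule the code below is meant to implement, written with the same names used in the code (learning_rate for the step size, discount for the discount factor), is:

    Q[s, a] += learning_rate * (reward + discount * Q[s_next, a_next] - Q[s, a])

Here s, a, s_next, a_next stand for the currentState, currentAction, nextState and nextAction variables below, and a_next is the action actually selected in s_next by the same epsilon-greedy policy; Q-learning would instead use the maximum over the actions in s_next.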

Here is my code. I hope someone can tell me why this happens.

import gym
import random
import numpy as np

env = gym.make('FrozenLake-v0')

#Initialize the Q matrix 16(rows)x4(columns)
Q = np.zeros([env.observation_space.n, env.action_space.n])

#States 5, 7, 11 and 12 are holes and 15 is the goal; their Q-values stay at zero
for i in range(env.observation_space.n):
    if (i != 5) and (i != 7) and (i != 11) and (i != 12) and (i != 15):
        for j in range(env.action_space.n):
            Q[i, j] = np.random.rand()

#Epsilon-greedy policy: given a state, the agent chooses the action it believes
#has the best long-term effect with probability 1-eps; otherwise it chooses an
#action uniformly at random. Epsilon may change its value over time.

bestreward = 0
epsilon = 0.1
discount = 0.99
learning_rate = 0.1
num_episodes = 50000
a = [0,0,0,0,0,0,0,0,0,0] #total rewards of the 10 most recent episodes

for i_episode in range(num_episodes):

    # Observe current state s
    observation = env.reset()
    currentState = observation

    # Select action a using a policy based on Q
    if np.random.rand() <= epsilon: #pick randomly
        currentAction = random.randint(0,env.action_space.n-1)
    else: #pick greedily            
        currentAction = np.argmax(Q[currentState, :])

    totalreward = 0
    while True:
        env.render()

        # Carry out action a
        observation, reward, done, info = env.step(currentAction)
        if done:
            break

        # Observe reward r and state s'
        totalreward += reward
        nextState = observation

        # Select action a' using a policy based on Q
        if np.random.rand() <= epsilon: #pick randomly
            nextAction = random.randint(0,env.action_space.n-1)
        else: #pick greedily            
            nextAction = np.argmax(Q[nextState, :])

        # Sarsa update of Q(s, a): uses the action a' actually selected by the policy, not the max
        Q[currentState, currentAction] += learning_rate * (reward + discount * Q[nextState, nextAction] - Q[currentState, currentAction])

        currentState = nextState
        currentAction = nextAction

        print "Episode: %d reward %d best %d epsilon %f" % (i_episode, totalreward, bestreward, epsilon)
        if totalreward > bestreward:
            bestreward = totalreward
        if i_episode > num_episodes/2:
            epsilon = epsilon * 0.9999
        if i_episode >= num_episodes-10:
            a.insert(0, totalreward)
            a.pop()
        print(a)

        for i in range(env.observation_space.n):
            print("-----")
            for j in range(env.action_space.n):
                print(Q[i, j])

1 answer

#1 · Posted 2024-06-25 23:57:42

When an episode ends, you break out of the while loop before updating the Q function. So whenever the agent receives a reward different from zero (i.e. it has reached the goal state), the Q function is never updated with that reward. Since the only non-zero reward in FrozenLake comes on that final transition into the goal, every update your loop actually performs uses a reward of 0, which is why all the values decay toward zero.

You should check for the end of the episode in the last part of the while loop, after the Q update.
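
A minimal sketch of the reordered inner loop, keeping the variable names from the question (on the terminal transition the update target is reduced to just the reward, since the value of a terminal state is zero):

while True:
    env.render()

    # Carry out the current action a
    observation, reward, done, info = env.step(currentAction)
    totalreward += reward
    nextState = observation

    # Select action a' with the same epsilon-greedy policy
    if np.random.rand() <= epsilon:
        nextAction = random.randint(0, env.action_space.n - 1)
    else:
        nextAction = np.argmax(Q[nextState, :])

    # Sarsa update; on the terminal step the target is just the reward,
    # so the goal reward is no longer discarded
    target = reward if done else reward + discount * Q[nextState, nextAction]
    Q[currentState, currentAction] += learning_rate * (target - Q[currentState, currentAction])

    # Check for the end of the episode only after the update
    if done:
        break

    currentState = nextState
    currentAction = nextAction

With this ordering, the reward of 1 for reaching the goal actually flows into Q[currentState, currentAction] and can then propagate backwards through the table over later episodes.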
