What's wrong with Dyna-Q? (Dyna-Q vs Q-learning)

Posted 2024-06-24 13:24:17


I implemented the Q-learning algorithm and ran it on FrozenLake-v0 from OpenAI Gym. Over 10000 training episodes I get a total reward of about 185, and about 7333 in testing. Is that good?

I also tried the Dyna-Q algorithm, but it performs worse than Q-learning: the total reward during training is around 200, and during testing around 700-900, over 10000 episodes with 50 planning steps.

Why is that?

The code is below. Is there a problem with it?

# Setup
import gym
import numpy as np

env = gym.make('FrozenLake-v0')

epsilon = 0.9
lr_rate = 0.1
gamma = 0.99
planning_steps = 0

total_episodes = 10000
max_steps = 100

Training and testing loop:

while t < max_steps:
    action = agent.choose_action(state)  
    state2, reward, done, info = agent.env.step(action)  
    # Removed in testing
    agent.learn(state, state2, reward, action)
    agent.model.add(state, action, state2, reward)
    agent.planning(planning_steps)
    # Till here
    state = state2

def add(self, state, action, state2, reward):
    self.transitions[state, action] = state2
    self.rewards[state, action] = reward

def sample(self, env):
    state, action = 0, 0
    # Random visited state
    if all(np.sum(self.transitions, axis=1)) <= 0:
        state = np.random.randint(env.observation_space.n)
    else:
        state = np.random.choice(np.where(np.sum(self.transitions, axis=1) > 0)[0])

    # Random action in that state
    if all(self.transitions[state]) <= 0:
        action = np.random.randint(env.action_space.n)
    else:    
        action = np.random.choice(np.where(self.transitions[state] > 0)[0])
    return state, action

def step(self, state, action):
    state2 = self.transitions[state, action]
    reward = self.rewards[state, action]
    return state2, reward

def choose_action(self, state):
    if np.random.uniform(0, 1) < epsilon:
        return self.env.action_space.sample()
    else:
        return np.argmax(self.Q[state, :])

def learn(self, state, state2, reward, action):
    # predict = Q[state, action]
    # Q[state, action] = Q[state, action] + lr_rate * (target - predict)
    target = reward + gamma * np.max(self.Q[state2, :])
    self.Q[state, action] = (1 - lr_rate) * self.Q[state, action] + lr_rate * target

def planning(self, n_steps):
    # if len(self.transitions)>planning_steps:
    for i in range(n_steps):
        state, action = self.model.sample(self.env)
        state2, reward = self.model.step(state, action)
        self.learn(state, state2, reward, action)

2 Answers

I think it may be because the environment is stochastic. Learning a model of a stochastic environment can lead to a suboptimal policy. In Sutton & Barto's book, they state that they assume a deterministic environment.
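
To make this concrete: FrozenLake-v0 is slippery by default, so the same (state, action) pair can land in different next states, while the tabular model in the question only remembers the last observed transition. Below is a minimal sketch of a model that instead counts every observed outcome and samples next states from the empirical distribution; the class name StochasticModel and its method layout are my own illustration, not code from the question.

import numpy as np
from collections import defaultdict

class StochasticModel:
    def __init__(self):
        # (state, action) -> {(state2, reward): visit count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def add(self, state, action, state2, reward):
        self.counts[(state, action)][(state2, reward)] += 1

    def sample(self):
        # Uniformly pick a previously visited (state, action) pair
        pairs = list(self.counts.keys())
        state, action = pairs[np.random.randint(len(pairs))]
        return state, action

    def step(self, state, action):
        # Sample an outcome in proportion to how often it was observed
        outcomes = self.counts[(state, action)]
        keys = list(outcomes.keys())
        probs = np.array([outcomes[k] for k in keys], dtype=float)
        probs /= probs.sum()
        state2, reward = keys[np.random.choice(len(keys), p=probs)]
        return state2, reward

The Dyna-Q planning loop would then call model.sample() and model.step(state, action) exactly as before; only the storage changes. A quicker sanity check, if your gym version passes keyword arguments through to the environment, is gym.make('FrozenLake-v0', is_slippery=False): if Dyna-Q then catches up with plain Q-learning, the stochastic dynamics were indeed the problem.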

Check whether, in the planning step, the next sample uses the next state (state2) after taking the model step.

Otherwise, planning may keep repeating steps from the same starting state given by self.env.

However, I may be misunderstanding what the self.env argument does in self.model.sample(self.env).
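
If I read that suggestion correctly, it amounts to carrying the simulated state forward inside planning() instead of drawing an independent (state, action) pair on every iteration, roughly like the sketch below. Note that this chaining is the answerer's idea as I understand it, not standard Dyna-Q (which samples previously observed pairs uniformly), and it assumes the model already has entries for whatever actions the policy picks.

def planning(self, n_steps):
    # Start the simulated rollout from a previously visited state
    state, _ = self.model.sample(self.env)
    for i in range(n_steps):
        action = self.choose_action(state)
        state2, reward = self.model.step(state, action)
        self.learn(state, state2, reward, action)
        # Continue from the state the model predicts, not from self.env;
        # cast in case the transition table stores floats
        state = int(state2)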
