2. Policy Gradient Methods
The goal is to keep updating the policy so that its return keeps increasing.
For each sampled trajectory, we compute its return and the probability that the policy assigns to it.
The idea is to make high-return trajectories more probable; this also raises the expected return of the policy as a whole.
What is the expected return of a policy?
It is the average of the returns obtained by running a number of trajectories under that policy.
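In standard notation, the expected return of a policy $\pi_\theta$ and its gradient (the REINFORCE estimator implemented below) are:

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\big[R(\tau)\big], \qquad R(\tau) = \sum_{t=0}^{T} \gamma^{t} r_t$$

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\Big[\sum_{t=0}^{T} G_t \,\nabla_\theta \log \pi_\theta(a_t \mid s_t)\Big], \qquad G_t = \sum_{k=t}^{T} \gamma^{\,k-t} r_k$$

In practice the expectation is approximated by averaging over sampled trajectories, which is exactly the "run several trajectories and average" idea described above.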
2.1 The Main Idea of Policy Gradients
# 1. Sample one complete episode
log_probs = []  # stores log π(a_t|s_t) for each (s_t, a_t)
rewards = []    # stores the reward r_t at each time step
while not done:
    action_probs = policy_net(state_tensor)      # π(a|s)
    action = sample_action(action_probs)         # a_t ~ π(a|s)
    log_prob = torch.log(action_probs[action])   # log π(a_t|s_t)
    log_probs.append(log_prob)
    next_state, reward, done = env.step(action)
    rewards.append(reward)

# 2. Compute the discounted return G_t for each time step
discounted_rewards = compute_discounted_rewards(rewards, gamma=0.99)

# 3. Compute the policy gradient loss
policy_loss = []
for log_prob, G_t in zip(log_probs, discounted_rewards):
    policy_loss.append(-log_prob * G_t)  # negative sign because PyTorch does gradient descent by default

# 4. Backpropagation
total_loss = torch.stack(policy_loss).sum()  # sum the loss over all time steps
optimizer.zero_grad()
total_loss.backward()   # computes the gradient of the loss, i.e. -∇θ J(θ)
optimizer.step()        # updates θ ← θ + α ∇θ J(θ)
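The sketch above calls a compute_discounted_rewards helper that is not defined there. A minimal version, consistent with the inline computation in the full implementation in 2.2, could look like this:

def compute_discounted_rewards(rewards, gamma=0.99):
    """Compute the discounted return G_t for every time step t:
    G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ...
    """
    discounted = []
    G = 0.0
    for r in reversed(rewards):   # walk the episode backwards
        G = r + gamma * G         # G_t = r_t + gamma * G_{t+1}
        discounted.insert(0, G)   # prepend so the list stays in time order
    return discounted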
2.2 The REINFORCE algorithm, also known as Monte Carlo policy gradient, is a policy gradient method that updates the policy parameters using estimated returns from complete episodes.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F


class PolicyNetwork(nn.Module):
    """Policy network: takes a state and outputs action probabilities."""

    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super(PolicyNetwork, self).__init__()
        self.fc1 = nn.Linear(state_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, action_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.softmax(self.fc2(x), dim=-1)
        return x


def reinforce(env, policy_net, optimizer, num_episodes=1000, gamma=0.99):
    """
    REINFORCE algorithm implementation.

    Args:
        env: environment
        policy_net: policy network
        optimizer: optimizer
        num_episodes: number of training episodes
        gamma: discount factor

    Returns:
        A list with the total reward of each episode.
    """
    episode_rewards = []

    for episode in range(num_episodes):
        state = env.reset()
        log_probs = []
        rewards = []

        # Sample one complete episode
        done = False
        while not done:
            # Convert the state to a tensor
            state_tensor = torch.FloatTensor(state).unsqueeze(0)  # shape: (1, state_dim)

            # Get the action probabilities from the policy network
            action_probs = policy_net(state_tensor)  # shape: (1, action_dim)

            # Sample an action from the probability distribution
            action = torch.multinomial(action_probs, 1).item()
            # Alternatively:
            # dist = torch.distributions.Categorical(action_probs)
            # action = dist.sample()  # scalar

            # Log probability of the chosen action
            log_prob = torch.log(action_probs.squeeze(0)[action])  # shape: scalar

            # Take the action
            next_state, reward, done, _ = env.step(action)

            # Store the log probability and the reward
            log_probs.append(log_prob)
            rewards.append(reward)

            # Move to the next state
            state = next_state

        # Compute the discounted returns for the episode
        discounted_rewards = []
        R = 0
        for r in reversed(rewards):
            R = r + gamma * R
            discounted_rewards.insert(0, R)

        # Normalize the discounted returns (reduces variance)
        discounted_rewards = torch.FloatTensor(discounted_rewards)
        discounted_rewards = (discounted_rewards - discounted_rewards.mean()) / (discounted_rewards.std() + 1e-9)

        # Compute the policy gradient loss
        policy_loss = []
        for log_prob, R in zip(log_probs, discounted_rewards):
            policy_loss.append(-log_prob * R)  # negative sign because we want to maximize the return

        # Backpropagation
        optimizer.zero_grad()
        policy_loss = torch.stack(policy_loss).sum()  # shape: scalar
        policy_loss.backward()
        optimizer.step()

        # Record the total reward
        episode_rewards.append(sum(rewards))

    return episode_rewards
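For reference, a minimal training sketch (hypothetical setup: Gym's CartPole-v1, assuming an older Gym version whose env.step() returns a 4-tuple and env.reset() returns only the state, to match the reinforce() code above):

import gym  # assumed: classic Gym API matching the code above

env = gym.make("CartPole-v1")
state_dim = env.observation_space.shape[0]   # 4 for CartPole
action_dim = env.action_space.n              # 2 for CartPole

policy_net = PolicyNetwork(state_dim, action_dim)
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)

episode_rewards = reinforce(env, policy_net, optimizer, num_episodes=500)
print("mean reward over the last 50 episodes:", np.mean(episode_rewards[-50:]))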
At first I thought policy_loss itself was the policy gradient, which seemed wrong; in fact it is not, since the gradient only appears after differentiation (when .backward() is called).
To summarize: the gradient of policy_loss and the gradient of the objective function have opposite signs. Therefore maximizing the objective is equivalent to minimizing policy_loss.
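Concretely, for one sampled trajectory the code minimizes

$$\text{policy\_loss}(\theta) = -\sum_{t} G_t \log \pi_\theta(a_t \mid s_t), \qquad \nabla_\theta\,\text{policy\_loss}(\theta) = -\sum_{t} G_t \,\nabla_\theta \log \pi_\theta(a_t \mid s_t) \approx -\nabla_\theta J(\theta),$$

so one gradient-descent step on policy_loss is one gradient-ascent step on J(θ).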