Analysis of the Code's Principles
1. Core Idea
This code implements a reinforcement-learning policy network based on a diffusion model. A diffusion model generates actions through a step-by-step denoising process; the core ideas are:
- Forward process: over T steps, Gaussian noise is gradually added to the expert action until it becomes pure noise
- Reverse process: a neural network is trained to predict the noise, and an action is generated by denoising step by step over T steps
- Mathematical basis: the DDPM (Denoising Diffusion Probabilistic Models) framework
Algorithm steps:
1.1 Forward noising: Gaussian noise is added step by step in action space, transforming the true action distribution into a Gaussian distribution
q(\mathbf{a}_t \mid \mathbf{a}_{t-1}) = \mathcal{N}\left(\mathbf{a}_t;\ \sqrt{1-\beta_t}\,\mathbf{a}_{t-1},\ \beta_t\mathbf{I}\right)
where β_t is the noise-schedule parameter.
1.2 Reverse denoising: generate the action by denoising conditioned on the observation o_t
p_\theta(\mathbf{a}_{t-1} \mid \mathbf{a}_t, \mathbf{o}_t) = \mathcal{N}\left(\mathbf{a}_{t-1};\ \mu_\theta(\mathbf{a}_t, \mathbf{o}_t, t),\ \Sigma_t\right)
where the denoising network behind μ_θ predicts the noise residual.
1.3 Training objective: minimize the noise-prediction error
\mathcal{L} = \mathbb{E}_{t,\mathbf{a}_0,\epsilon}\left[ \left\| \epsilon - \epsilon_\theta\left( \sqrt{\bar{\alpha}_t}\,\mathbf{a}_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ \mathbf{o}_t,\ t \right) \right\|^2 \right]
where ᾱ_t = ∏_{s=1}^t (1 - β_s) is the cumulative noise-retention coefficient (the alpha_bars tensor in the code).
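As a quick numerical sanity check of the schedule (a minimal sketch assuming the same settings as the code below: T = 20 and β linearly spaced from 1e-4 to 0.02):

import torch

T = 20
betas = torch.linspace(1e-4, 0.02, T)        # linear noise-variance schedule
alphas = 1 - betas                           # α_t = 1 - β_t
alpha_bars = torch.cumprod(alphas, dim=0)    # ᾱ_t, the cumulative product
print(alpha_bars[-1].item())                 # ᾱ_T ≈ 0.82 with this short schedule

Because ᾱ_T stays around 0.82, a_T = √ᾱ_T·a_0 + √(1-ᾱ_T)·ε is still far from pure noise, which is one reason the conclusion below suggests tuning the noise-schedule parameters.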
2. Key Mathematical Formulas
- Forward process (diffusion):
q(a_t | a_{t-1}) = N(a_t; √(α_t) a_{t-1}, (1 - α_t) I)
α_t = 1 - β_t, ᾱ_t = ∏_{i=1}^t α_i
a_t = √ᾱ_t a_0 + √(1 - ᾱ_t) ε, where ε ~ N(0, I)
- Training objective (noise prediction; s denotes the conditioning state, i.e. the observation o_t above):
L = ||ε - ε_θ(a_t, s, t)||^2
- Reverse process (sampling):
p_θ(a_{t-1} | a_t, s) = N(a_{t-1}; μ_θ(a_t, s, t), Σ_t)
μ_θ = 1/√α_t · (a_t - β_t/√(1 - ᾱ_t) · ε_θ)
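To make the reverse-process mean concrete before looking at the code, here is a minimal sketch of a single denoising update. reverse_step is a hypothetical helper; it assumes betas, alphas, and alpha_bars tensors built exactly as in the class below, plus an already-predicted noise ε_θ.

import torch

def reverse_step(a_t, pred_noise, t, betas, alphas, alpha_bars):
    # μ_θ = 1/√α_t · (a_t - β_t/√(1 - ᾱ_t) · ε_θ)
    mean = (a_t - betas[t] / torch.sqrt(1 - alpha_bars[t]) * pred_noise) / torch.sqrt(alphas[t])
    if t > 0:
        # simple DDPM choice Σ_t = β_t·I; no noise is added at the final step
        return mean + torch.sqrt(betas[t]) * torch.randn_like(a_t)
    return mean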
Annotated Code (Line by Line)
import torch
import gymnasium as gym
import numpy as np

class DiffusionPolicy(torch.nn.Module):
    def __init__(self, state_dim=4, action_dim=2, T=20):
        super().__init__()
        self.T = T                                           # total number of diffusion steps
        self.betas = torch.linspace(1e-4, 0.02, T)           # noise-variance schedule β_t
        self.alphas = 1 - self.betas                         # forward-process coefficients α_t = 1 - β_t
        self.alpha_bars = torch.cumprod(self.alphas, dim=0)  # cumulative products ᾱ_t
        # Denoising network (input dim: state (4) + action (2) + timestep (1) = 7)
        self.denoiser = torch.nn.Sequential(
            torch.nn.Linear(7, 64),   # input layer
            torch.nn.ReLU(),          # activation
            torch.nn.Linear(64, 2)    # outputs the predicted noise
        )
        self.optimizer = torch.optim.Adam(self.denoiser.parameters(), lr=1e-3)

    def train_step(self, states, expert_actions):
        batch_size = states.size(0)
        t = torch.randint(0, self.T, (batch_size,))    # sample random timesteps
        alpha_bar_t = self.alpha_bars[t].unsqueeze(1)  # look up the corresponding ᾱ_t
        # Forward noising (closed-form formula from Section 2)
        noise = torch.randn_like(expert_actions)       # draw Gaussian noise
        noisy_actions = torch.sqrt(alpha_bar_t) * expert_actions + \
                        torch.sqrt(1 - alpha_bar_t) * noise
        # Concatenate the inputs (state, noisy action, normalized timestep)
        inputs = torch.cat([
            states,
            noisy_actions,
            (t.float() / self.T).unsqueeze(1)  # normalize the timestep to [0, 1]
        ], dim=1)                              # final shape: batch_size x 7
        pred_noise = self.denoiser(inputs)            # predict the noise
        loss = torch.mean((noise - pred_noise) ** 2)  # MSE loss
        return loss

    def sample_action(self, state):
        state_tensor = torch.FloatTensor(state).unsqueeze(0)
        a_t = torch.randn(1, 2)  # initialize with random noise (action dimension 2)
        # Reverse denoising loop (to be completed)
        for t in reversed(range(self.T)):
            # Steps to implement:
            # 1. Look up the parameters of the current timestep
            # 2. Concatenate the inputs (state, current action, timestep)
            # 3. Predict the noise ε_θ
            # 4. Compute the mean μ from the formula
            # 5. Sample the new action (no noise added at the last step)
            pass
        return a_t.detach().numpy()[0]  # return the final action
Execution Walkthrough
Training flow
- Sample random timesteps: a diffusion step t ∈ [0, T-1] is drawn uniformly at random for each sample
- Forward noising: the expert action is corrupted with the corresponding amount of noise according to the closed-form formula
- Input construction: the state, the noisy action, and the normalized timestep are concatenated
- Noise prediction: the neural network predicts the noise that was added
- Loss computation: minimize the MSE between the predicted and the true noise (the gradient step itself happens outside train_step; see the sketch below)
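train_step only returns the loss; the optimizer update has to happen in an external loop. A minimal driver sketch, assuming states_tensor and expert_actions_tensor are pre-collected batches (the full script at the end of the article does exactly this):

for epoch in range(100):
    loss = policy.train_step(states_tensor, expert_actions_tensor)
    policy.optimizer.zero_grad()  # clear gradients from the previous step
    loss.backward()               # backpropagate the noise-prediction MSE
    policy.optimizer.step()       # update the denoiser parameters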
Sampling flow (to be completed)
- Initialization: start from Gaussian noise
- Iterative denoising: denoise step by step from t = T down to t = 1 (the code's reversed(range(T)), i.e. indices T-1 down to 0)
  - Predict the noise from the current action and state
  - Compute the mean of the previous step
  - Add random noise (except at the last step)
- Output: the final denoised action (the completed loop appears in the final executable code below)
Key Improvement Suggestions
- Implement the reverse process: the timestep loop and the denoising formula still need to be filled in
- Add a variance schedule: use a more sophisticated variance computation at sampling time
- Timestep embedding: a sinusoidal positional encoding could replace the simple normalization (see the sketch after this list)
- Network architecture: consider a Transformer or conditional batch normalization
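For the timestep-embedding suggestion, one option is a sinusoidal encoding in the style of Transformer positional encodings. The sketch below is illustrative only: timestep_embedding is a hypothetical helper, dim=16 is an arbitrary choice, and the denoiser's input layer would need to grow to accept the extra features.

import math
import torch

def timestep_embedding(t, dim=16):
    # Map integer timesteps t of shape [batch] to sinusoidal features of shape [batch, dim].
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float().unsqueeze(1) * freqs.unsqueeze(0)            # [batch, half]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=1)   # [batch, dim]

With dim=16 the denoiser input would become state (4) + noisy action (2) + embedding (16) = 22 features instead of 7.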
This implementation demonstrates the core idea of a diffusion policy, but a complete diffusion policy still needs the full reverse sampling process, and the noise-schedule parameters may need tuning for better performance (one common alternative schedule is sketched below).
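On tuning the noise schedule: a frequently used alternative to the linear schedule is the cosine schedule from Nichol & Dhariwal's improved DDPM. The sketch below is not part of the original code; cosine_beta_schedule is a hypothetical helper whose output could replace torch.linspace(1e-4, 0.02, T) in __init__.

import math
import torch

def cosine_beta_schedule(T, s=0.008):
    # ᾱ_t follows a squared-cosine curve; per-step betas are recovered from its ratios.
    steps = torch.arange(T + 1, dtype=torch.float32)
    f = torch.cos(((steps / T) + s) / (1 + s) * math.pi / 2) ** 2
    alpha_bars = f / f[0]
    betas = 1 - alpha_bars[1:] / alpha_bars[:-1]
    return betas.clamp(max=0.999)  # cap β_t for numerical stability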
Final executable code:
import torch
import gymnasium as gym
import numpy as np

class DiffusionPolicy(torch.nn.Module):
    def __init__(self, state_dim=4, action_dim=2, T=20):
        super().__init__()
        self.T = T
        self.betas = torch.linspace(1e-4, 0.02, T)
        self.alphas = 1 - self.betas
        self.alpha_bars = torch.cumprod(self.alphas, dim=0)
        # Denoising network (input dim: 4 + 2 + 1 = 7)
        self.denoiser = torch.nn.Sequential(
            torch.nn.Linear(7, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 2)
        )
        self.optimizer = torch.optim.Adam(self.denoiser.parameters(), lr=1e-3)

    def train_step(self, states, expert_actions):
        batch_size = states.size(0)
        t = torch.randint(0, self.T, (batch_size,))
        alpha_bar_t = self.alpha_bars[t].unsqueeze(1)
        # Forward noising (closed-form formula)
        noise = torch.randn_like(expert_actions)
        noisy_actions = torch.sqrt(alpha_bar_t) * expert_actions + torch.sqrt(1 - alpha_bar_t) * noise
        # Concatenate the inputs (dimensions aligned)
        inputs = torch.cat([
            states,
            noisy_actions,
            (t.float() / self.T).unsqueeze(1)
        ], dim=1)  # final shape: batch_size x 7
        pred_noise = self.denoiser(inputs)
        loss = torch.mean((noise - pred_noise) ** 2)
        return loss

    def sample_action(self, state):
        state_tensor = torch.FloatTensor(state).unsqueeze(0)
        a_t = torch.randn(1, 2)  # 2-dimensional action vector
        # Reverse denoising loop
        for t in reversed(range(self.T)):
            alpha_t = self.alphas[t]
            alpha_bar_t = self.alpha_bars[t]
            inputs = torch.cat([
                state_tensor,
                a_t,
                torch.tensor([[t / self.T]], dtype=torch.float32)
            ], dim=1)
            pred_noise = self.denoiser(inputs)
            # Posterior mean: μ = (a_t - (1-α_t)/√(1-ᾱ_t) · ε_θ) / √α_t
            a_t = (a_t - (1 - alpha_t) / torch.sqrt(1 - alpha_bar_t) * pred_noise) / torch.sqrt(alpha_t)
            if t > 0:
                a_t += torch.sqrt(self.betas[t]) * torch.randn_like(a_t)
        return torch.argmax(a_t).item()  # discrete action selection (index of the larger component)


if __name__ == "__main__":
    env = gym.make('CartPole-v1')
    policy = DiffusionPolicy()

    # Key fix: keep the state data dimensions consistent
    states, actions = [], []
    state, _ = env.reset()
    for _ in range(1000):
        action = env.action_space.sample()
        next_state, _, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Force the state into a numpy array and check its dimension
        state = np.array(state, dtype=np.float32).flatten()
        if len(state) != 4:
            raise ValueError(f"Invalid state shape: {state.shape}")
        states.append(state)  # every stored state is a (4,) array
        actions.append(action)
        if done:
            state, _ = env.reset()
        else:
            state = next_state

    # Shape validation and conversion
    states_array = np.stack(states)  # force shape (1000, 4)
    if states_array.shape != (1000, 4):
        raise ValueError(f"States shape error: {states_array.shape}")
    actions_onehot = np.eye(2)[np.array(actions)]  # one-hot encode the discrete actions

    states_tensor = torch.FloatTensor(states_array)
    actions_tensor = torch.FloatTensor(actions_onehot)

    # Training loop
    for epoch in range(100):
        loss = policy.train_step(states_tensor, actions_tensor)
        policy.optimizer.zero_grad()
        loss.backward()
        policy.optimizer.step()
        print(f"Epoch {epoch}, Loss: {loss.item():.4f}")

    # Test rollout
    state, _ = env.reset()
    for _ in range(200):
        action = policy.sample_action(state)
        state, _, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            break