Proximal Policy Optimization (PPO) is a reinforcement learning algorithm proposed by OpenAI to address the problem of excessively large policy updates in traditional policy gradient methods. By introducing a mechanism that limits how far the policy can move in each update, PPO improves training stability and efficiency while maintaining convergence.
PPO Algorithm Principles
The core idea of PPO is to update the policy by optimizing a surrogate objective while restricting how much the policy is allowed to change in each update. Concretely, PPO draws on the ideas of clipping and of a trust region to ensure the policy does not change too drastically.
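For context, the trust-region idea that PPO simplifies can be written, as in TRPO, as a constrained optimization problem; the formulation below is the standard textbook form rather than something derived elsewhere in this article:

$$
\max_{\theta}\ \mathbb{E}_t\!\left[\frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\,\hat{A}_t\right]
\quad \text{subject to} \quad
\mathbb{E}_t\!\left[\mathrm{KL}\!\big(\pi_{\theta_{\text{old}}}(\cdot \mid s_t)\,\|\,\pi_{\theta}(\cdot \mid s_t)\big)\right] \le \delta
$$

PPO replaces this hard KL constraint with either a clipped surrogate objective or an adaptive KL penalty, both of which are introduced below and can be optimized with ordinary stochastic gradient methods.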
PPO Algorithm Formulas
PPO has two main variants: the clipped version (Clipped PPO) and the adaptive KL penalty version (Adaptive KL Penalty PPO). This article focuses on the clipped version of PPO.
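For reference, the adaptive KL penalty variant leaves the probability ratio $r_t(\theta)$ (defined in the steps below) unclipped and instead subtracts a KL penalty whose coefficient $\beta$ is adapted after each update; this is the standard form from the PPO paper:

$$
L^{KLPEN}(\theta) = \mathbb{E}_t\!\left[ r_t(\theta)\,\hat{A}_t \;-\; \beta\,\mathrm{KL}\!\big(\pi_{\theta_{\text{old}}}(\cdot \mid s_t)\,\|\,\pi_{\theta}(\cdot \mid s_t)\big)\right]
$$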
The clipped objective is built up in the following steps:
- Old policy: the update is measured against the old policy $\pi_{\theta_{\text{old}}}$, where $\theta_{\text{old}}$ denotes the policy parameters from the previous update.
- Compute the probability ratio: $r_t(\theta) = \dfrac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$.
- Form the clipped objective: $L^{CLIP}(\theta) = \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\big)\right]$, where $\hat{A}_t$ is the advantage function and $\epsilon$ is the clipping-range hyperparameter, typically set to 0.2. A numerical illustration of the clipping follows this list.
- Update the policy parameters by maximizing $L^{CLIP}(\theta)$, in practice with several epochs of minibatch stochastic gradient ascent: $\theta \leftarrow \theta + \alpha\,\nabla_{\theta} L^{CLIP}(\theta)$.
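As a quick numerical check of the clipping behavior, the short snippet below (a standalone sketch; the helper clipped_surrogate is only for illustration and is not part of the implementation later in this article) evaluates the per-sample objective for a few hand-picked ratios and advantages:

import numpy as np

def clipped_surrogate(ratio, advantage, epsilon=0.2):
    # Per-sample PPO objective: min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    return np.minimum(unclipped, clipped)

# Positive advantage: pushing the ratio above 1 + epsilon earns no extra objective value
print(clipped_surrogate(1.5, 2.0))   # 2.4, capped at (1 + 0.2) * 2.0
print(clipped_surrogate(1.1, 2.0))   # 2.2, inside the clip range, unchanged
# Negative advantage: pushing the ratio below 1 - epsilon likewise earns no extra credit
print(clipped_surrogate(0.5, -1.0))  # -0.8, the min keeps the pessimistic clipped term (1 - 0.2) * (-1.0)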
Implementing the PPO Algorithm
Below is a sample implementation of the PPO algorithm in Python with TensorFlow:
import tensorflow as tf
import numpy as np
import gym

# Define the policy network
class PolicyNetwork(tf.keras.Model):
    def __init__(self, action_space):
        super(PolicyNetwork, self).__init__()
        self.dense1 = tf.keras.layers.Dense(128, activation='relu')
        self.dense2 = tf.keras.layers.Dense(128, activation='relu')
        self.logits = tf.keras.layers.Dense(action_space, activation=None)

    def call(self, inputs):
        x = self.dense1(inputs)
        x = self.dense2(x)
        return self.logits(x)

# Define the value network
class ValueNetwork(tf.keras.Model):
    def __init__(self):
        super(ValueNetwork, self).__init__()
        self.dense1 = tf.keras.layers.Dense(128, activation='relu')
        self.dense2 = tf.keras.layers.Dense(128, activation='relu')
        self.value = tf.keras.layers.Dense(1, activation=None)

    def call(self, inputs):
        x = self.dense1(inputs)
        x = self.dense2(x)
        return self.value(x)

# Hyperparameters
learning_rate = 0.0003
clip_ratio = 0.2
epochs = 10        # number of outer training iterations (one episode is collected per iteration)
batch_size = 64    # not used in this simplified full-batch version
gamma = 0.99

# Create the environment
env = gym.make('CartPole-v1')
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

# Create the policy and value networks
policy_net = PolicyNetwork(n_actions)
value_net = ValueNetwork()

# Optimizers
policy_optimizer = tf.keras.optimizers.Adam(learning_rate)
value_optimizer = tf.keras.optimizers.Adam(learning_rate)

def get_action(observation):
    # Sample an action from the current policy
    logits = policy_net(observation)
    action = tf.random.categorical(logits, 1)
    return int(action[0, 0])

def compute_advantages(rewards, values, next_values, dones):
    # Advantage estimates as the discounted sum of TD errors (GAE with lambda = 1)
    advantages = []
    gae = 0.0
    for i in reversed(range(len(rewards))):
        delta = rewards[i] + gamma * next_values[i] * (1 - dones[i]) - values[i]
        gae = delta + gamma * (1 - dones[i]) * gae
        advantages.insert(0, gae)
    return np.array(advantages, dtype=np.float32)

def ppo_update(observations, actions, advantages, returns, train_iters=4):
    # Log-probabilities under the policy as it was *before* this update. They are computed
    # once, outside the gradient tape, so that the probability ratio below is meaningful;
    # recomputing them from the current network inside the tape would make the ratio
    # identically 1 and disable the clipping.
    old_logits = policy_net(observations)
    old_log_probs = tf.nn.log_softmax(old_logits)
    old_action_log_probs = tf.reduce_sum(old_log_probs * tf.one_hot(actions, n_actions), axis=1)

    for _ in range(train_iters):
        # Policy update with the clipped surrogate objective
        with tf.GradientTape() as tape:
            logits = policy_net(observations)
            log_probs = tf.nn.log_softmax(logits)
            action_log_probs = tf.reduce_sum(log_probs * tf.one_hot(actions, n_actions), axis=1)
            ratio = tf.exp(action_log_probs - old_action_log_probs)
            surr1 = ratio * advantages
            surr2 = tf.clip_by_value(ratio, 1.0 - clip_ratio, 1.0 + clip_ratio) * advantages
            policy_loss = -tf.reduce_mean(tf.minimum(surr1, surr2))
        policy_grads = tape.gradient(policy_loss, policy_net.trainable_variables)
        policy_optimizer.apply_gradients(zip(policy_grads, policy_net.trainable_variables))

        # Value function update with a squared-error loss
        with tf.GradientTape() as tape:
            value_pred = tf.squeeze(value_net(observations), axis=1)
            value_loss = tf.reduce_mean((returns - value_pred) ** 2)
        value_grads = tape.gradient(value_loss, value_net.trainable_variables)
        value_optimizer.apply_gradients(zip(value_grads, value_net.trainable_variables))

# Training loop
for epoch in range(epochs):
    observations, actions, rewards, values, next_values, dones = [], [], [], [], [], []

    # Collect one episode with the current policy
    # (assumes the classic Gym API, i.e. gym < 0.26; newer Gym/Gymnasium versions return
    # (obs, info) from reset() and five values from step())
    obs = env.reset()
    done = False
    while not done:
        obs = np.asarray(obs, dtype=np.float32).reshape(1, -1)
        observations.append(obs)

        action = get_action(obs)
        actions.append(action)

        value = value_net(obs)
        values.append(float(value[0, 0]))

        obs, reward, done, _ = env.step(action)
        rewards.append(reward)
        dones.append(done)

        if done:
            next_values.append(0.0)
        else:
            next_value = value_net(np.asarray(obs, dtype=np.float32).reshape(1, -1))
            next_values.append(float(next_value[0, 0]))

    # Advantages from the TD errors; returns are advantages plus the value estimates
    advantages = compute_advantages(rewards, values, next_values, dones)
    returns = advantages + np.array(values, dtype=np.float32)

    observations = np.concatenate(observations, axis=0)
    actions = np.array(actions)

    ppo_update(observations, actions, advantages, returns)
    print(f'Epoch {epoch+1} completed')
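After training, a quick way to sanity-check the learned policy is a greedy rollout. The short sketch below is not part of the example above; it reuses the policy_net and env defined there and assumes the same classic (pre-0.26) Gym API:

# Greedy evaluation rollout (assumes the classic Gym API used above)
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    logits = policy_net(np.asarray(obs, dtype=np.float32).reshape(1, -1))
    action = int(tf.argmax(logits, axis=1)[0])  # pick the most probable action instead of sampling
    obs, reward, done, _ = env.step(action)
    total_reward += reward
print(f'Evaluation episode return: {total_reward}')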
Summary
By introducing the clipping mechanism and a trust-region-style constraint, PPO limits the size of each policy update, which improves the stability and efficiency of training. Its simplicity and effectiveness have made it one of the most popular algorithms in reinforcement learning today. Understanding and implementing PPO makes it easier to apply the algorithm to a wide range of reinforcement learning tasks and to improve model performance.