nanoGPT's data folder contains two very similar subfolders: shakespeare and shakespeare-char. Both prepare the Shakespeare dataset, but shakespeare encodes the text with tiktoken (the GPT-2 BPE tokenizer), while shakespeare-char builds its own character-level vocabulary.
1. shakespeare-char (building the vocabulary yourself)
Fetching the data
import os
import pickle
import requests
import numpy as np

data_path = os.path.join(os.path.dirname(__file__), 'input.txt')
if not os.path.exists(data_path):
    url = 'https://cdn.jsdelivr.net/gh/karpathy/char-rnn@master/data/tinyshakespeare/input.txt'
    with open(data_path, 'w', encoding='utf-8') as f:
        f.write(requests.get(url).text)
with open(data_path, 'r', encoding='utf-8') as f:
    data = f.read()
When I ran this, the download did not go through. If the same happens to you, simply open the URL in a browser and download input.txt by hand.
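If you'd rather have the script fail loudly than silently write an HTML error page into input.txt, a small defensive variant could look like this (the timeout and error handling are my additions, not part of nanoGPT; it reuses url and data_path from the snippet above):

try:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()   # raise on HTTP errors instead of saving an error page
    with open(data_path, 'w', encoding='utf-8') as f:
        f.write(resp.text)
except requests.RequestException as e:
    print(f'download failed ({e}); fetch the file manually from {url}')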
Building the vocabulary
chars = sorted(list(set(data)))              # every distinct character in the corpus
stoi = {s: i for i, s in enumerate(chars)}   # character -> integer id
itos = {i: s for i, s in enumerate(chars)}   # integer id -> character

def encode(x):
    return [stoi[s] for s in x]

def decode(l):
    return ''.join([itos[i] for i in l])
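For example (the exact ids depend on which characters occur in input.txt, so the values in the comments are illustrative only):

ids = encode('hello')
print(ids)            # e.g. [46, 43, 50, 50, 53] -- positions in the sorted character list
print(decode(ids))    # 'hello' -- decode is the exact inverse of encode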
Splitting into training and validation sets
n = len(data)
train_data = data[:int(0.9 * n)]   # first 90% for training
val_data = data[int(0.9 * n):]     # remaining 10% for validation
train_idx = encode(train_data)
val_idx = encode(val_data)
Saving the encoded splits as .bin files
train_idx = np.array(train_idx, dtype=np.uint16)   # uint16 is plenty: the character vocabulary is tiny
val_idx = np.array(val_idx, dtype=np.uint16)
train_idx.tofile(os.path.join(os.path.dirname(__file__), 'train.bin'))
val_idx.tofile(os.path.join(os.path.dirname(__file__), 'val.bin'))
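To verify the result, the .bin file can be read back with np.memmap, which is also how nanoGPT's train.py loads it during training; a minimal sketch:

train = np.memmap('train.bin', dtype=np.uint16, mode='r')  # dtype must match what tofile() wrote
print(train.shape, train[:10])      # a flat 1-D array of ids; the shape was never stored in the file
print(decode(train[:50].tolist()))  # should print the opening lines of the corpus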
Saving the vocabulary as meta.pkl (it is used later by sample.py)
meta = {
    'vocab_size': len(chars),
    'itos': itos,
    'stoi': stoi,
}
with open(os.path.join(os.path.dirname(__file__), 'meta.pkl'), 'wb') as f:
    pickle.dump(meta, f)
print('finish')
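At sampling time the pickle is loaded back and the two mappings are reused; this mirrors what sample.py does:

with open('meta.pkl', 'rb') as f:
    meta = pickle.load(f)
stoi, itos = meta['stoi'], meta['itos']
encode = lambda s: [stoi[c] for c in s]
decode = lambda l: ''.join([itos[i] for i in l])
print(meta['vocab_size'])  # 65 for tiny shakespeare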
2. shakespeare (using tiktoken)
The data download and the train/val split are identical to the above, so I won't repeat them.
Encoding the data
import tiktoken

enc = tiktoken.get_encoding('gpt2')          # the GPT-2 BPE tokenizer
train_ids = enc.encode_ordinary(train_data)  # encode_ordinary ignores special tokens such as <|endoftext|>
val_ids = enc.encode_ordinary(val_data)
print(f"train has {len(train_ids):,} tokens")
print(f"val has {len(val_ids):,} tokens")
Saving the data
train_ids = np.array(train_ids, dtype=np.uint16)
val_ids = np.array(val_ids, dtype=np.uint16)
val_ids.tofile(os.path.join(os.path.dirname(__file__), 'val.bin'))
train_ids.tofile(os.path.join(os.path.dirname(__file__), 'train.bin'))
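uint16 works here only because GPT-2's vocabulary has 50,257 entries, so the largest possible id (50,256) fits under 2**16. A one-line sanity check (my addition, not in nanoGPT) makes that assumption explicit; it has to run on the raw id lists before the uint16 cast, since the cast would wrap overflowing values silently:

assert max(train_ids) < 2**16 and max(val_ids) < 2**16, 'token ids would overflow uint16'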
3. Comparing the ways data gets saved here
You may have noticed that these few dozen lines of code already use three different file read/write mechanisms (a small roundtrip demo follows after the list):
(1) f.write / f.read
Reads and writes strings or byte streams directly, with no interpretation of the format (e.g. .txt files).
(2) val_ids.tofile
Raw binary storage (e.g. .bin files). Compact, but neither shape nor dtype is saved, so you must know the data layout in advance in order to read it back.
(3) pickle.dump
Serializes an arbitrary Python object (a list, dict, class, model, ...) into a binary stream.
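A minimal sketch putting the three side by side (the file names are throwaway examples):

import numpy as np
import pickle

# (1) plain text: what you write is exactly what you read back
with open('demo.txt', 'w', encoding='utf-8') as f:
    f.write('hello')
with open('demo.txt', 'r', encoding='utf-8') as f:
    assert f.read() == 'hello'

# (2) raw binary: tofile() stores only the bytes; the dtype (and shape, if any)
#     must be supplied again at read time
arr = np.array([1, 2, 3], dtype=np.uint16)
arr.tofile('demo.bin')
back = np.fromfile('demo.bin', dtype=np.uint16)   # reading with the wrong dtype yields garbage
assert (back == arr).all()

# (3) pickle: the whole object, structure included, survives the roundtrip
with open('demo.pkl', 'wb') as f:
    pickle.dump({'stoi': {'a': 0}, 'vocab_size': 1}, f)
with open('demo.pkl', 'rb') as f:
    assert pickle.load(f)['vocab_size'] == 1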