
We now arrive at the second part, which is about training.
Evaluating generative text models
After briefly reviewing the text generation from chapter 4, we set up our LLM for text generation and then discuss basic ways to evaluate the quality of the generated text. We then compute the training and validation losses.
Using GPT to generate text
Using the GPTModel instance, we adopt the generate_text_simple function from chapter 4 and introduce two utility functions, text_to_token_ids and token_ids_to_text, which make it convenient to convert between text and token representations.

import tiktoken
import torch
from chapter04 import generate_text_simple

def text_to_token_ids(text, tokenizer):
    encoded = tokenizer.encode(text, allowed_special={'<|endoftext|>'})
    encoded_tensor = torch.tensor(encoded).unsqueeze(0)  # add a batch dimension
    return encoded_tensor

def token_ids_to_text(token_ids, tokenizer):
    flat = token_ids.squeeze(0)  # remove the batch dimension
    return tokenizer.decode(flat.tolist())

start_context = "Every effort moves you"
tokenizer = tiktoken.get_encoding("gpt2")
token_ids = generate_text_simple(
    model=model,
    idx=text_to_token_ids(start_context, tokenizer),
    max_new_tokens=10,
    context_size=GPT_CONFIG_124M["context_length"]
)
print("Output text:\n", token_ids_to_text(token_ids, tokenizer))

Calculating the text generation loss

For each of the three input tokens, we compute a vector containing a probability score for every token in the vocabulary. Within each vector, the index position of the highest probability score represents the most likely next-token ID. The token IDs associated with the highest probability scores are selected and mapped back into text that represents the model's generated output. For example:
inputs = torch.tensor([[16833, 3626, 6100],  # ["every effort moves",
                       [40, 1107, 588]])     #  "I really like"]

The desired outputs are:

targets = torch.tensor([[3626, 6100, 345],   # [" effort moves you",
                        [1107, 588, 11311]]) #  " really like chocolate"]

In other words, the targets are the inputs shifted ahead by one token. Next, we implement the text-evaluation logic: the training objective is to increase the softmax probability corresponding to the correct target token IDs.

The loss computation proceeds in a handful of steps: apply softmax to the logits, pick out the probabilities assigned to the target tokens, take their logarithms, average them, and negate the result.
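A minimal sketch of these steps, assuming the model and the inputs/targets tensors defined above:

with torch.no_grad():
    logits = model(inputs)                 # shape: (batch_size, num_tokens, vocab_size)
probas = torch.softmax(logits, dim=-1)     # probability scores over the vocabulary

# Probabilities the model assigns to the correct target tokens
batch_idx = torch.arange(targets.shape[0]).unsqueeze(-1)  # shape (2, 1)
token_idx = torch.arange(targets.shape[1])                # shape (3,)
target_probas = probas[batch_idx, token_idx, targets]     # shape (2, 3)

# Cross entropy: negative mean of the log probabilities of the targets
loss = -torch.log(target_probas).mean()
print(loss)

This yields the same value as torch.nn.functional.cross_entropy(logits.flatten(0, 1), targets.flatten()), which is what the utility functions below use.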

Calculating the training and validation set losses

To compute the loss on the training and validation datasets, we use a very small text dataset: Edith Wharton's short story "The Verdict", which we already worked with in chapter 2. By choosing a text in the public domain, we avoid any issues related to usage rights.
file_path = "the-verdict.txt"
with open(file_path, "r", encoding="utf-8") as file:
    text_data = file.read()

Next, we count the characters and tokens in the dataset:
total_characters = len(text_data)
total_tokens = len(tokenizer.encode(text_data))
print("Characters:", total_characters)
print("Tokens:", total_tokens)

To implement the data splitting and loading, we first define a train_ratio to use 90% of the data for training and the remaining 10% as validation data for evaluating the model during training:
train_ratio = 0.90
split_idx = int(train_ratio * len(text_data))
train_data = text_data[:split_idx]
val_data = text_data[split_idx:]
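The loaders themselves can be built with the create_dataloader_v1 function from chapter 2. A sketch, assuming that function (and its chapter02 module) is available; a batch size of 2 and a 256-token context length match the shapes printed below:

from chapter02 import create_dataloader_v1  # assumed import; chapter 2's data loader utility

torch.manual_seed(123)
train_loader = create_dataloader_v1(
    train_data,
    batch_size=2,
    max_length=GPT_CONFIG_124M["context_length"],
    stride=GPT_CONFIG_124M["context_length"],
    drop_last=True,
    shuffle=True,
    num_workers=0
)
val_loader = create_dataloader_v1(
    val_data,
    batch_size=2,
    max_length=GPT_CONFIG_124M["context_length"],
    stride=GPT_CONFIG_124M["context_length"],
    drop_last=False,
    shuffle=False,
    num_workers=0
)

Iterating over both loaders and printing the shapes of each (input, target) pair produces the following output:

Train loader: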
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
torch.Size([2, 256]) torch.Size([2, 256])
Validation loader:
torch.Size([2, 256]) torch.Size([2, 256])

Next, we implement a utility function to compute the cross-entropy loss for a given batch returned by the training and validation loaders, along with a second function that averages this loss over all (or a user-specified number of) batches in a data loader:
def calc_loss_batch(input_batch, target_batch, model, device):
    input_batch = input_batch.to(device)  # transfer the data to the given device (e.g., a GPU)
    target_batch = target_batch.to(device)
    logits = model(input_batch)
    loss = torch.nn.functional.cross_entropy(
        logits.flatten(0, 1), target_batch.flatten()
    )
    return loss

def calc_loss_loader(data_loader, model, device, num_batches=None):
    total_loss = 0.
    if len(data_loader) == 0:
        return float("nan")
    elif num_batches is None:
        num_batches = len(data_loader)  # iterate over all batches if no fixed number is specified
    else:
        num_batches = min(num_batches, len(data_loader))  # cap at the number of batches available
    for i, (input_batch, target_batch) in enumerate(data_loader):
        if i < num_batches:
            loss = calc_loss_batch(
                input_batch, target_batch, model, device
            )
            total_loss += loss.item()  # sum the loss for each batch
        else:
            break
    return total_loss / num_batches  # average the loss over all batches

We now apply these functions to the training and validation loaders:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)  # no assignment needed for an nn.Module; .to() moves the model in place
with torch.no_grad():  # disable gradient tracking for efficiency, since we are not training yet
    train_loss = calc_loss_loader(train_loader, model, device)
    val_loss = calc_loss_loader(val_loader, model, device)
print("Training loss:", train_loss)
print("Validation loss:", val_loss)

With that, the loss computation for the training and validation sets is in place.

Training an LLM
Now it is time to implement the code for pretraining our LLM, the GPTModel. For this, we focus on a simple training loop to keep the code concise and readable.

The loop comprises eight steps: it iterates over the epochs, processes the batches, resets the gradients, computes the loss and new gradients, and updates the weights, concluding with monitoring steps such as printing the losses and generating a text sample.
def train_model_simple(model, train_loader, val_loader,
                       optimizer, device, num_epochs,
                       eval_freq, eval_iter, start_context, tokenizer):
    train_losses, val_losses, track_tokens_seen = [], [], []  # initialize lists to track losses and tokens seen
    tokens_seen, global_step = 0, -1
    for epoch in range(num_epochs):  # start the main training loop
        model.train()
        for input_batch, target_batch in train_loader:
            optimizer.zero_grad()  # reset loss gradients from the previous batch iteration
            loss = calc_loss_batch(
                input_batch, target_batch, model, device
            )
            loss.backward()  # calculate loss gradients
            optimizer.step()  # update model weights using the loss gradients
            tokens_seen += input_batch.numel()
            global_step += 1
            if global_step % eval_freq == 0:  # optional evaluation step
                train_loss, val_loss = evaluate_model(
                    model, train_loader, val_loader, device, eval_iter)
                train_losses.append(train_loss)
                val_losses.append(val_loss)
                track_tokens_seen.append(tokens_seen)
                print(f"Ep {epoch + 1} (Step {global_step:06d}): "
                      f"Train loss {train_loss:.3f}, "
                      f"Val loss {val_loss:.3f}"
                )
        generate_and_print_sample(  # print a sample text after each epoch
            model, tokenizer, device, start_context
        )
    return train_losses, val_losses, track_tokens_seen
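train_model_simple relies on two helper functions that are not shown above. A minimal sketch of both, consistent with the loss utilities defined earlier (the 50-token sample length is an arbitrary choice):

def evaluate_model(model, train_loader, val_loader, device, eval_iter):
    model.eval()  # disable dropout for a stable, reproducible loss estimate
    with torch.no_grad():  # no gradients needed during evaluation
        train_loss = calc_loss_loader(
            train_loader, model, device, num_batches=eval_iter)
        val_loss = calc_loss_loader(
            val_loader, model, device, num_batches=eval_iter)
    model.train()
    return train_loss, val_loss

def generate_and_print_sample(model, tokenizer, device, start_context):
    model.eval()
    context_size = model.pos_emb.weight.shape[0]  # infer the context length from the positional embeddings
    encoded = text_to_token_ids(start_context, tokenizer).to(device)
    with torch.no_grad():
        token_ids = generate_text_simple(
            model=model, idx=encoded,
            max_new_tokens=50, context_size=context_size
        )
    decoded_text = token_ids_to_text(token_ids, tokenizer)
    print(decoded_text.replace("\n", " "))  # print the sample in a compact, single-line format
    model.train()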
device = "cpu" # "mps"
torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)
model.to(device)
optimizer = torch.optim.AdamW(
model.parameters(), # 1
lr=0.0004, weight_decay=0.1
)
num_epochs = 10
train_losses, val_losses, tokens_seen = train_model_simple(
model, train_loader, val_loader, optimizer, device,
num_epochs=num_epochs, eval_freq=5, eval_iter=5,
start_context="Every effort moves you", tokenizer=tokenizer
)
Decoding strategies to control randomness
We now turn to text generation strategies (also called decoding strategies) for generating more original text:
tokenizer = tiktoken.get_encoding("gpt2")
token_ids = generate_text_simple(
    model=model,
    idx=text_to_token_ids("Every effort moves you", tokenizer),
    max_new_tokens=25,
    context_size=GPT_CONFIG_124M["context_length"]
)
print("Output text:\n", token_ids_to_text(token_ids, tokenizer))

At each generation step, the token with the largest probability score among all tokens in the vocabulary is selected. This means that even if we run the preceding generate_text_simple function multiple times on the same start context (Every effort moves you), the LLM will always generate the same output.
Temperature scaling
We can further control the distribution and the selection process with a concept called temperature scaling. Temperature scaling is just a fancy term for dividing the logits by a number greater than 0:
def softmax_with_temperature(logits, temperature):
    scaled_logits = logits / temperature
    return torch.softmax(scaled_logits, dim=0)
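For instance (a quick sketch reusing the hypothetical next_token_logits from above), temperatures greater than 1 flatten the probability distribution, while temperatures below 1 sharpen it, bringing it closer to greedy decoding:

for temperature in [0.1, 1.0, 5.0]:
    print(temperature, softmax_with_temperature(next_token_logits, temperature))

Top-k sampling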
Combining top-k sampling with probabilistic sampling and temperature scaling can improve the text generation results. In top-k sampling, we restrict the sampled tokens to the k most probable tokens and exclude all other tokens from the selection process by masking their probability scores, as the sketch below shows.
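As an illustration (again reusing the hypothetical logits, with top_k = 3), every logit below the third-largest is set to negative infinity, so softmax assigns those tokens zero probability:

top_k = 3
top_logits, top_pos = torch.topk(next_token_logits, top_k)
new_logits = torch.where(
    next_token_logits < top_logits[-1],  # mask every logit below the k-th largest
    torch.tensor(float('-inf')),
    next_token_logits
)
print(torch.softmax(new_logits, dim=0))  # masked positions get probability 0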
Combining temperature scaling and top-k sampling with the earlier generation logic yields the following generate function:
def generate(model, idx, max_new_tokens, context_size,
             temperature=0.0, top_k=None, eos_id=None):
    for _ in range(max_new_tokens):  # generate one token at a time, as before
        idx_cond = idx[:, -context_size:]
        with torch.no_grad():
            logits = model(idx_cond)
        logits = logits[:, -1, :]
        if top_k is not None:  # filter logits with top-k sampling
            top_logits, _ = torch.topk(logits, top_k)
            min_val = top_logits[:, -1]
            logits = torch.where(
                logits < min_val,
                torch.tensor(float('-inf')).to(logits.device),
                logits
            )
        if temperature > 0.0:  # apply temperature scaling and sample probabilistically
            logits = logits / temperature
            probs = torch.softmax(logits, dim=-1)
            idx_next = torch.multinomial(probs, num_samples=1)
        else:  # greedy decoding when temperature scaling is disabled
            idx_next = torch.argmax(logits, dim=-1, keepdim=True)
        if idx_next == eos_id:  # stop early if an end-of-sequence token is produced
            break
        idx = torch.cat((idx, idx_next), dim=1)
    return idx

Loading and saving model weights in PyTorch
Saving a PyTorch model is relatively straightforward. The recommended approach is to save the model's state_dict, a dictionary mapping each layer to its parameters, using the torch.save function:
torch.save(model.state_dict(), "model.pth")

After saving the model weights via the state_dict, we can load them into a new GPTModel instance:
model = GPTModel(GPT_CONFIG_124M)
model.load_state_dict(torch.load("model.pth", map_location=device))
model.eval()

Dropout helps prevent the model from overfitting the training data by randomly "dropping out" a layer's neurons during training. During inference, however, we don't want to randomly drop any of the information the network has learned. Calling model.eval() switches the model into evaluation mode for inference, disabling its dropout layers. If we plan to continue pretraining the model later, for example with the train_model_simple function defined earlier in this chapter, saving the optimizer state is also recommended.
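A quick sanity check of this behavior on a standalone nn.Dropout layer (a sketch, independent of our model):

drop = torch.nn.Dropout(0.5)
x = torch.ones(5)
drop.train()
print(drop(x))  # about half the entries zeroed, survivors scaled by 1/(1-0.5)
drop.eval()
print(drop(x))  # identical to x: dropout is a no-op in eval mode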
Adaptive optimizers such as AdamW store additional parameters for each model weight. AdamW uses historical data to adjust the learning rate of each model parameter dynamically. Without this state, the optimizer resets, and the model may learn suboptimally or even fail to converge properly, meaning it would lose the ability to generate coherent text. Using torch.save, we can save both the model and optimizer state_dict contents:
torch.save(
    {
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "model_and_optimizer.pth"
)
checkpoint = torch.load("model_and_optimizer.pth", map_location=device)
model = GPTModel(GPT_CONFIG_124M)
model.load_state_dict(checkpoint["model_state_dict"])
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
model.train()  # back to training mode, since we intend to continue pretraining

Loading pretrained weights from OpenAI

We download the pretrained GPT-2 weights from OpenAI and transfer them from the settings and params dictionaries into our GPTModel instance:
import tiktoken
import torch
import numpy as np
from gpt_download import download_and_load_gpt2
from main import GPT_CONFIG_124M, GPTModel, generate, text_to_token_ids, token_ids_to_text

settings, params = download_and_load_gpt2(
    model_size="124M", models_dir="gpt2"
)
print("Settings:", settings)
print("Parameter dictionary keys:", params.keys())
model_configs = {
    "gpt2-small (124M)": {"emb_dim": 768, "n_layers": 12, "n_heads": 12},
    "gpt2-medium (355M)": {"emb_dim": 1024, "n_layers": 24, "n_heads": 16},
    "gpt2-large (774M)": {"emb_dim": 1280, "n_layers": 36, "n_heads": 20},
    "gpt2-xl (1558M)": {"emb_dim": 1600, "n_layers": 48, "n_heads": 25},
}
model_name = "gpt2-small (124M)"
NEW_CONFIG = GPT_CONFIG_124M.copy()
NEW_CONFIG.update(model_configs[model_name])
NEW_CONFIG.update({"context_length": 1024})  # GPT-2 was trained with a 1,024-token context length
NEW_CONFIG.update({"qkv_bias": True})  # OpenAI used bias vectors in the multi-head attention module's linear layers to compute the query, key, and value matrices

gpt = GPTModel(NEW_CONFIG)
gpt.eval()
device = "cpu"
def assign(left, right):
    if left.shape != right.shape:
        raise ValueError(f"Shape mismatch. Left: {left.shape}, "
                         f"Right: {right.shape}")
    return torch.nn.Parameter(torch.tensor(right))
def load_weights_into_gpt(gpt, params):  # copy OpenAI's weights into our GPTModel instance
    gpt.pos_emb.weight = assign(gpt.pos_emb.weight, params['wpe'])
    gpt.tok_emb.weight = assign(gpt.tok_emb.weight, params['wte'])
    for b in range(len(params["blocks"])):  # iterate over each transformer block
        q_w, k_w, v_w = np.split(  # split the combined attention weights into query, key, and value parts
            (params["blocks"][b]["attn"]["c_attn"])["w"], 3, axis=-1)
        gpt.trf_blocks[b].att.W_query.weight = assign(
            gpt.trf_blocks[b].att.W_query.weight, q_w.T)
        gpt.trf_blocks[b].att.W_key.weight = assign(
            gpt.trf_blocks[b].att.W_key.weight, k_w.T)
        gpt.trf_blocks[b].att.W_value.weight = assign(
            gpt.trf_blocks[b].att.W_value.weight, v_w.T)
        q_b, k_b, v_b = np.split(
            (params["blocks"][b]["attn"]["c_attn"])["b"], 3, axis=-1)
        gpt.trf_blocks[b].att.W_query.bias = assign(
            gpt.trf_blocks[b].att.W_query.bias, q_b)
        gpt.trf_blocks[b].att.W_key.bias = assign(
            gpt.trf_blocks[b].att.W_key.bias, k_b)
        gpt.trf_blocks[b].att.W_value.bias = assign(
            gpt.trf_blocks[b].att.W_value.bias, v_b)
        gpt.trf_blocks[b].att.out_proj.weight = assign(
            gpt.trf_blocks[b].att.out_proj.weight,
            params["blocks"][b]["attn"]["c_proj"]["w"].T)
        gpt.trf_blocks[b].att.out_proj.bias = assign(
            gpt.trf_blocks[b].att.out_proj.bias,
            params["blocks"][b]["attn"]["c_proj"]["b"])
        gpt.trf_blocks[b].ff.layers[0].weight = assign(
            gpt.trf_blocks[b].ff.layers[0].weight,
            params["blocks"][b]["mlp"]["c_fc"]["w"].T)
        gpt.trf_blocks[b].ff.layers[0].bias = assign(
            gpt.trf_blocks[b].ff.layers[0].bias,
            params["blocks"][b]["mlp"]["c_fc"]["b"])
        gpt.trf_blocks[b].ff.layers[2].weight = assign(
            gpt.trf_blocks[b].ff.layers[2].weight,
            params["blocks"][b]["mlp"]["c_proj"]["w"].T)
        gpt.trf_blocks[b].ff.layers[2].bias = assign(
            gpt.trf_blocks[b].ff.layers[2].bias,
            params["blocks"][b]["mlp"]["c_proj"]["b"])
        gpt.trf_blocks[b].norm1.scale = assign(
            gpt.trf_blocks[b].norm1.scale,
            params["blocks"][b]["ln_1"]["g"])
        gpt.trf_blocks[b].norm1.shift = assign(
            gpt.trf_blocks[b].norm1.shift,
            params["blocks"][b]["ln_1"]["b"])
        gpt.trf_blocks[b].norm2.scale = assign(
            gpt.trf_blocks[b].norm2.scale,
            params["blocks"][b]["ln_2"]["g"])
        gpt.trf_blocks[b].norm2.shift = assign(
            gpt.trf_blocks[b].norm2.shift,
            params["blocks"][b]["ln_2"]["b"])
    gpt.final_norm.scale = assign(gpt.final_norm.scale, params["g"])
    gpt.final_norm.shift = assign(gpt.final_norm.shift, params["b"])
    gpt.out_head.weight = assign(gpt.out_head.weight, params["wte"])  # OpenAI's GPT-2 reuses the token embedding weights in the output layer (weight tying)

load_weights_into_gpt(gpt, params)
gpt.to(device)
tokenizer = tiktoken.get_encoding("gpt2")
torch.manual_seed(123)
token_ids = generate(
    model=gpt,
    idx=text_to_token_ids("Every effort moves you", tokenizer).to(device),
    max_new_tokens=25,
    context_size=NEW_CONFIG["context_length"],
    top_k=50,
    temperature=1.5
)
print("Output text:\n", token_ids_to_text(token_ids, tokenizer))