I'm building a general-purpose NN that classifies both images (dog/no dog) and movie reviews (good/bad). I have to stick to a very specific architecture and loss function, so changing those two seems infeasible. My architecture is a two-layer network with a ReLU, followed by a sigmoid and a cross-entropy loss function. With 1000 epochs and a learning rate of about .001 I get 100% training accuracy and .72 test accuracy. I'm looking for suggestions on how to improve my test accuracy. Here's the layout of what I have:
'''
import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

def train_net(epochs, batch_size, train_x, train_y, model_size, lr):
    n_x, n_h, n_y = model_size
    model = Net(n_x, n_h, n_y)
    optim = torch.optim.Adam(model.parameters(), lr=lr)  # use the lr argument, not a hard-coded rate
    loss_function = nn.BCELoss()
    train_losses = []
    for epoch in range(epochs):
        count = 0
        model.train()
        train_loss = []
        for idx in range(0, train_x.shape[0], batch_size):
            batch_x = torch.from_numpy(train_x[idx : idx + batch_size]).float()
            # train_y has shape (1, N); transpose the slice to (batch, 1)
            # so it matches the model output for the loss and the comparison below
            batch_y = torch.from_numpy(train_y[:, idx : idx + batch_size]).float().T
            model_output = model(batch_x)
            loss = loss_function(model_output, batch_y)
            train_loss.append(loss.item())
            preds = model_output > 0.5
            nb_correct = (preds == batch_y).sum()
            count += nb_correct.item()
            optim.zero_grad()
            loss.backward()
            # Scheduler made it worse
            # scheduler.step(loss.item())
            optim.step()
        if epoch % 100 == 1:
            train_losses.append(train_loss)
            print("Iteration : {}, Training loss: {}, Accuracy %: {}".format(
                epoch, np.mean(train_loss), (count / train_x.shape[0]) * 100))
    plt.plot(np.squeeze(train_losses))
    plt.ylabel('loss')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate = " + str(lr))
    plt.show()
    return model
'''
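(The Net class isn't shown above; a minimal sketch consistent with the described architecture, i.e. one hidden layer with a ReLU followed by a sigmoid output, would look like the following. The layer names fc1/fc2 are placeholders.)
'''
import torch
import torch.nn as nn

class Net(nn.Module):
    # Two-layer network: linear -> ReLU -> linear -> sigmoid
    def __init__(self, n_x, n_h, n_y):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(n_x, n_h)  # input layer -> hidden layer
        self.fc2 = nn.Linear(n_h, n_y)  # hidden layer -> single output unit

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return torch.sigmoid(self.fc2(x))  # probability in (0, 1) for BCELoss
'''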
My model parameters:
'''
batch_size = 32
lr = 0.0001
epochs = 1500
n_x = 12288  # num_px * num_px * 3
n_h = 7
n_y = 1
model_size = n_x, n_h, n_y
model = train_net(epochs, batch_size, train_x, train_y, model_size, lr)
'''
Here is the test phase:
'''
model.eval()  # set the model to eval mode, making it deterministic
test_loss = []
count = 0
loss_function = nn.BCELoss()
with torch.no_grad():
    for idx in range(0, test_x.shape[0], batch_size):
        batch_x = torch.from_numpy(test_x[idx : idx + batch_size]).float()
        # test_y has shape (1, N); transpose the slice to (batch, 1) as in training
        batch_y = torch.from_numpy(test_y[:, idx : idx + batch_size]).float().T
        model_output = model(batch_x)
        preds = model_output > 0.5
        loss = loss_function(model_output, batch_y)
        test_loss.append(loss.item())
        nb_correct = (preds == batch_y).sum()
        count += nb_correct.item()
print("test loss: {}, test accuracy: {}".format(np.mean(test_loss), count / test_x.shape[0]))
'''
Things I've tried: tweaking the learning rate, adding momentum, using a scheduler, and changing the batch size. Of course, these were mostly guesses and not based on any valid hypotheses.
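(The scheduler setup isn't shown above; since the commented-out line passes a metric to scheduler.step(), it was presumably something like ReduceLROnPlateau. The settings below are assumptions for illustration.)
'''
# Assumed scheduler: reduce the learning rate when the loss plateaus.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, mode='min', patience=10)
# then, inside the training loop: scheduler.step(loss.item())
'''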
Based on your statement that your training accuracy sits at 100% while your test accuracy is as low as 72%, it appears that you are overfitting your dataset.
In short, this means that your model is training too specifically on the training data you've provided, picking up on quirks that may exist in that data but which are not inherent to the classification. For example, if all the dogs in your training data were white, the model would eventually learn to associate the color white with dogs, and it would have a hard time recognizing dogs of other colors in the test dataset.
There are many avenues to address this issue: a well-sourced overview of the subject, written in simple terms, can be found here.
Without more information about the specific constraints on changing your network's architecture, it's hard to say for certain what to change. However, weight regularization and dropout often go a long way (and are described in the overview above); both are sketched below. You should also be free to implement early stopping and a weight constraint on the model.
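As an illustration only (not your exact setup): dropout can be inserted after the ReLU, and weight regularization can be added as an L2 penalty via the optimizer's weight_decay argument. The dropout probability and decay value below are common defaults, not tuned numbers.
'''
import torch
import torch.nn as nn

# Same two-layer architecture, with dropout added after the ReLU.
class NetWithDropout(nn.Module):
    def __init__(self, n_x, n_h, n_y):
        super(NetWithDropout, self).__init__()
        self.fc1 = nn.Linear(n_x, n_h)
        self.dropout = nn.Dropout(p=0.5)  # randomly zeroes hidden units during training
        self.fc2 = nn.Linear(n_h, n_y)

    def forward(self, x):
        x = self.dropout(torch.relu(self.fc1(x)))
        return torch.sigmoid(self.fc2(x))

model = NetWithDropout(12288, 7, 1)
# weight_decay adds an L2 penalty on the weights (weight regularization)
optim = torch.optim.Adam(model.parameters(), lr=0.0001, weight_decay=1e-4)
'''
Note that dropout is only active in model.train() mode, which your training loop already sets; model.eval() disables it at test time.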
I'll leave it to you to find resources on how to implement these specific strategies in PyTorch, but the above should provide a good starting point.
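For instance, a rough sketch of early stopping combined with a max-norm weight constraint might look like the following. Here val_x and val_y are an assumed validation split carved out of the training data, the fc1/fc2 names follow the sketch of Net above, and the patience and norm cap are arbitrary illustrative values.
'''
import numpy as np
import torch

best_val_loss = float('inf')
patience, bad_epochs = 20, 0

for epoch in range(epochs):
    model.train()
    for idx in range(0, train_x.shape[0], batch_size):
        batch_x = torch.from_numpy(train_x[idx : idx + batch_size]).float()
        batch_y = torch.from_numpy(train_y[:, idx : idx + batch_size]).float().T
        loss = loss_function(model(batch_x), batch_y)
        optim.zero_grad()
        loss.backward()
        optim.step()
        # Weight constraint: cap the L2 norm of each weight row at 3 (max-norm).
        with torch.no_grad():
            for w in (model.fc1.weight, model.fc2.weight):
                norms = w.norm(dim=1, keepdim=True).clamp(min=1e-12)
                w.mul_(norms.clamp(max=3.0) / norms)

    # Early stopping: track validation loss, stop once it stops improving.
    model.eval()
    with torch.no_grad():
        vx = torch.from_numpy(val_x).float()
        vy = torch.from_numpy(val_y).float().T
        val_loss = loss_function(model(vx), vy).item()
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), 'best.pt')  # checkpoint the best weights
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break

model.load_state_dict(torch.load('best.pt'))  # restore the best checkpoint
'''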