
Optimizing Your Neural Networks


Last week I published a post on how to build a simple neural network, specifically a multilayer perceptron. This article will dig deeper into the details of neural networks and discuss how to get the most performance out of them by tuning their configuration.

How Long Should You Train Your Model?

When training a model, you might assume that if you train it long enough, it will become flawless. That may be true, but only for the dataset it was trained on. In fact, if you give it another dataset with different values, the model could output completely wrong predictions.

To understand this better, suppose you practiced for your driving test every day by driving in a straight line without ever turning the steering wheel. (Please don't do this.) While you might become excellent at driving straight, if you're told to turn left on the actual test, you might end up running into a stop sign.

This phenomenon is called overfitting. Your model can learn every aspect and pattern of its training data, but if it binds itself too tightly to those patterns, it will perform poorly when given a new dataset. At the same time, if you don't train your model enough, it won't be able to recognize patterns in other datasets correctly. In that case, you would be underfitting.

[Figure] An example of overfitting: the validation loss (orange line) gradually increases while the training loss (blue line) gradually decreases.

In the example above, the best place to stop training the model is right when the validation loss reaches its minimum. You can do this with early stopping, which halts training once the validation loss fails to improve after an arbitrary number of training cycles (epochs).

Training a model is about finding the balance between overfitting and underfitting, using early stopping when necessary. That's why your training dataset should be as representative of the overall population as possible, so your model can make more accurate predictions on data it has never seen.
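To make this concrete, here is a minimal early-stopping sketch in PyTorch. The synthetic dataset, the `patience` value, and the one-step-per-epoch training loop are illustrative choices, not a prescription:

```python
import torch
import torch.nn as nn

# Synthetic data: y = 2x + noise, shuffled and split into train/validation sets
torch.manual_seed(0)
x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = 2 * x + 0.1 * torch.randn_like(x)
perm = torch.randperm(200)
x_train, y_train = x[perm[:150]], y[perm[:150]]
x_val, y_val = x[perm[150:]], y[perm[150:]]

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

best_val_loss = float("inf")
patience = 5            # epochs to wait without improvement before stopping
epochs_no_improve = 0

for epoch in range(1000):
    # One training step on the training set
    optimizer.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    optimizer.step()

    # Measure loss on held-out validation data (no gradients needed)
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_no_improve = 0           # improvement: reset the counter
    else:
        epochs_no_improve += 1          # no improvement this epoch
        if epochs_no_improve >= patience:
            print(f"Stopping early at epoch {epoch}, val loss {val_loss:.4f}")
            break
```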

Loss Functions

Perhaps one of the most important training configurations you can tune is the loss function, which measures the "inaccuracy" between the model's predictions and the actual values. This "inaccuracy" can be expressed in many different mathematical ways, one of the most common being mean squared error (MSE):

$$\text{MSE} = \frac{\sum_{i=1}^{n} (\bar{y_i} - y_i)^2}{n}$$

where $\bar{y_i}$ is the model's prediction and $y_i$ is the true value. There is a similar variant called mean absolute error (MAE):

$$\text{MAE} = \frac{\sum_{i=1}^{n} |\bar{y_i} - y_i|}{n}$$
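To make the two formulas concrete, here is a quick sketch that computes both metrics by hand for a few made-up prediction/target pairs:

```python
# Computing MSE and MAE by hand for a few (prediction, true value) pairs.
predictions = [2.5, 0.0, 2.1, 7.8]
targets     = [3.0, -0.5, 2.0, 8.0]

n = len(predictions)
mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n
mae = sum(abs(p - t) for p, t in zip(predictions, targets)) / n

print(f"MSE = {mse:.4f}")  # squares each error, so large errors dominate
print(f"MAE = {mae:.4f}")  # weighs all errors proportionally
```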

What's the difference between these two and which one is better? The real answer is that it depends on a variety of factors. Let's consider a simple 2-dimensional linear regression example.

In many cases, there can be data points that act as outliers: points that are far away from the rest of the data. In terms of linear regression, this means there are a few points on the $xy$-plane that sit far from the others. If you remember your statistics classes, points like these can significantly affect the calculated linear regression line.

[Figure] A simple graph with points at (1, 1), (2, 2), (3, 3), and (4, 4).

If you wanted to find a line that could cross all four points, then $y = x$ would be a great choice because it passes through every point.

[Figure] The same graph with the line $y = x$ passing through the points.

However, let's say I decide to add another point at $(5, 1)$. Now what should the regression line be? Well, it turns out that it's completely different: $y = 0.2x + 1.6$.

[Figure] A simple graph with points at (1, 1), (2, 2), (3, 3), (4, 4), and (5, 1), with the linear regression line going through them.

Given the previous data points, the line would expect the value of $y$ at $x = 5$ to be 5, but because of the outlier and its contribution to the MSE, the regression line is "pulled downwards" significantly.
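You can verify this line yourself. Here's a quick sketch using NumPy's `polyfit`, which performs an ordinary least-squares fit (i.e., it minimizes the squared error, just like MSE):

```python
import numpy as np

# The four original points plus the outlier at (5, 1)
x = np.array([1, 2, 3, 4, 5])
y = np.array([1, 2, 3, 4, 1])

# A degree-1 polynomial fit is the ordinary least-squares regression line
slope, intercept = np.polyfit(x, y, 1)
print(f"y = {slope:.1f}x + {intercept:.1f}")  # prints: y = 0.2x + 1.6
```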

This is just a simple example, but this poses a question that you, as a machine learning developer, need to stop and think about: How sensitive should my model be to outliers? If you want your model to be more sensitive to outliers, then you would choose a metric like MSE, because in that case, errors involving outliers are more pronounced due to the squaring and your model will adjust itself to minimize that. Otherwise, you would choose a metric like MAE, which doesn't care as much about outliers.
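One way to see this difference in action is to fit the same linear model to the outlier dataset twice, once under each loss. Here's a sketch using PyTorch's built-in `MSELoss` and `L1Loss` (L1 loss is MAE); since this is plain gradient descent, the exact fitted numbers will vary slightly from run to run:

```python
import torch
import torch.nn as nn

# The outlier dataset from above
x = torch.tensor([[1.], [2.], [3.], [4.], [5.]])
y = torch.tensor([[1.], [2.], [3.], [4.], [1.]])

def fit_line(loss_fn, steps=5000, lr=0.01):
    """Fit y = wx + b by gradient descent under the given loss."""
    model = nn.Linear(1, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    return model.weight.item(), model.bias.item()

w_mse, b_mse = fit_line(nn.MSELoss())  # pulled toward the outlier
w_mae, b_mae = fit_line(nn.L1Loss())   # stays close to y = x

print(f"MSE fit: y = {w_mse:.2f}x + {b_mse:.2f}")  # roughly y = 0.2x + 1.6
print(f"MAE fit: y = {w_mae:.2f}x + {b_mae:.2f}")  # roughly y = 1.0x + 0.0
```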

Optimizers

In my previous post, I also discussed the concepts of backpropagation and gradient descent, and how they work to minimize the model's loss. The gradient is a vector that points in the direction of greatest change. A gradient descent algorithm calculates this vector and moves in the exact opposite direction so that it eventually reaches a minimum.

Most optimizers have a specific learning rate, commonly denoted as $\alpha$, that they adhere to. Essentially, this represents how far the algorithm moves toward the minimum each time it calculates the gradient. Be careful not to set your learning rate too large! Your algorithm may never reach the minimum, because the large steps it takes could repeatedly skip over it.
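You can see this behavior with a tiny sketch: plain gradient descent on the one-dimensional loss $f(x) = x^2$, whose gradient is $2x$ and whose minimum is at $x = 0$. For this particular function, any learning rate above 1.0 makes each update overshoot further than the last:

```python
def gradient_descent(lr, steps=20, x0=1.0):
    """Minimize f(x) = x^2 (gradient is 2x) starting from x0."""
    x = x0
    for _ in range(steps):
        x = x - lr * 2 * x   # step in the opposite direction of the gradient
    return x

print(gradient_descent(lr=0.1))   # converges toward the minimum at x = 0
print(gradient_descent(lr=1.1))   # overshoots: |x| grows with every step
```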

[Figure] [Tensorflow's neural network playground](https://playground.tensorflow.org) showing what can happen if you set the learning rate too large. Notice how the testing and training loss are both `NaN`.

Going back to gradient descent: while it is effective at minimizing loss, it can significantly slow down training because the loss function is calculated over the entire dataset. There are several alternatives to gradient descent that are more efficient but come with their own downsides.

Stochastic Gradient Descent

One of the most popular alternatives to standard gradient descent is a variant called stochastic gradient descent (SGD). As with gradient descent, SGD has a fixed learning rate. But rather than running through the entire dataset like gradient descent, SGD randomly selects a small sample and updates the weights of your neural network based on that sample instead. Eventually, the parameter values converge to a point that approximately (but not exactly) minimizes the loss function. This is one of the downsides of SGD, as it doesn't always reach the exact minimum. Additionally, like gradient descent, it remains sensitive to the learning rate you set.
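Here's a minimal PyTorch sketch of that idea: the `DataLoader` draws random mini-batches (`shuffle=True`), and `torch.optim.SGD` with a fixed learning rate updates the weights after each batch. The synthetic dataset and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic dataset: y = 3x - 1 with noise
torch.manual_seed(0)
x = torch.randn(1000, 1)
y = 3 * x - 1 + 0.1 * torch.randn_like(x)

# shuffle=True gives the random sampling that makes SGD "stochastic"
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)  # fixed learning rate
loss_fn = nn.MSELoss()

for epoch in range(10):
    for xb, yb in loader:              # one small random sample at a time
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)  # loss on the mini-batch only
        loss.backward()
        optimizer.step()               # update weights from this batch alone

print(model.weight.item(), model.bias.item())  # roughly 3 and -1
```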

The Adam Optimizer

The name Adam is derived from adaptive moment estimation. It essentially combines two variants of SGD to adjust the learning rate for each input parameter based on how often it gets updated during each training iteration (an adaptive learning rate). At the same time, it also keeps track of past gradient calculations as a moving average to smooth out updates (momentum). However, because of its momentum characteristic, it can sometimes take longer to converge than other algorithms.
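In PyTorch, switching from SGD to Adam is a one-line change. A minimal sketch; the `lr` and `betas` shown are PyTorch's defaults:

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)

# betas[0] controls the momentum moving average of past gradients;
# betas[1] controls the moving average of past squared gradients used
# for the per-parameter adaptive learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

# The training step itself looks identical to the SGD version:
x = torch.randn(32, 1)
y = 3 * x - 1
loss = nn.MSELoss()(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```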

Putting it All Together

Now for an example!

I've created an example walkthrough on Google Colab that uses PyTorch to create a neural network that learns a simple linear relationship.

If you're a bit new to Python, don't worry! I've included some explanations that discuss what's going on in each section.

Reflection

While this obviously doesn't cover everything about optimizing neural networks, I wanted to at least cover a few of the most important concepts that you can take advantage of while training your own models. Hopefully you've learned something this week and thanks for reading!
