Neural Networks
The structure of the neural network
A neuron can be a binary logistic regression unit
In formula form: $h_{w,b}(x) = f(w^\top x + b)$, with $f(z) = \frac{1}{1 + e^{-z}}$
b: We can have an "always on" feature, which gives a class prior, or separate it out as a bias term; b is what we usually call the bias.
A neural network = running several logistic regressions at the same time
If we feed an input vector through a collection of logistic regression functions, we obtain an output vector, but we do not need to decide in advance which quantities those logistic regressions should be trying to predict.
Multi-layer neural networks
We can feed these outputs into yet another logistic regression function; the loss function will decide what the intermediate variables should be in order to better predict the targets of the next layer.
Matrix notation for a layer
For example:
We have: $a_1 = f(W_{11}x_1 + W_{12}x_2 + W_{13}x_3 + b_1)$, $a_2 = f(W_{21}x_1 + W_{22}x_2 + W_{23}x_3 + b_2)$, and so on.
In summary: $z = Wx + b$, $a = f(z)$
f is applied elementwise: $f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]$
Why a non-linear f is necessary
Importance:
Without non-linear activation functions, a deep neural network cannot compute anything more complex than a linear transform.
Stacking linear layers gives nothing more than a single linear transform: $W_1 W_2 x = Wx$
With more layers that include non-linear activation functions, the network can fit much more complex functions.
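A quick numerical illustration of this point (a minimal numpy sketch, not from the original notes):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=3)            # input vector
W1 = rng.normal(size=(4, 3))      # first layer, no activation
W2 = rng.normal(size=(5, 4))      # second layer, no activation

# Two stacked linear layers ...
two_linear = W2 @ (W1 @ x)

# ... collapse into a single linear transform with W = W2 @ W1
W = W2 @ W1
one_linear = W @ x
print(np.allclose(two_linear, one_linear))           # True

# With a non-linearity in between, the composition is no longer linear
relu = lambda z: np.maximum(z, 0)
print(np.allclose(W2 @ relu(W1 @ x), one_linear))    # False in general
```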
Named Entity Recognition (NER)
The task: find and classify names in text, for example:
Possible purposes:
Tracking mentions of particular entities in documents
For question answering, answers are often named entities
Much of the information we want to extract is really associations between named entities
The same techniques can be extended to other slot-filling classification tasks
Why might NER be hard?
It is hard to determine the boundaries of an entity
It is hard to know whether something is an entity at all
It is hard to know the class of an unknown or novel entity
Entity classes can be ambiguous and depend on context
Binary word window classification
The problem:
In general, we rarely classify a single word in isolation
Ambiguity arises in context: the same word can have several senses
Example 1: auto-antonyms
"To sanction" can mean "to permit" or "to punish"
"To seed" can mean "to place seeds" or "to remove seeds"
Example 2: resolving the linking of ambiguous named entities
Paris -> Paris, France vs. Paris Hilton vs. Paris, Texas
Hathaway -> Berkshire Hathaway vs. Anne Hathaway
Window classification: Softmax
Idea: classify a word in its context window of neighboring words.
A simple way to classify a word in context might be to average the word vectors in the window and classify the average vector, but this throws away position information.
Another approach: train a softmax classifier to classify the center word by taking the concatenation of the word vectors surrounding it in the window.
For example: classify "Paris" in the context of this sentence with window length 2.
The resulting vector $x_{window} = x \in \mathbb{R}^{5d}$ is a column vector, which is then fed through the softmax classifier.
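A minimal sketch of this window classifier (the word vectors and class set here are hypothetical placeholders, not from the original lecture):

```python
import numpy as np

d, num_classes = 4, 3              # word-vector dimension; e.g. classes {LOC, PER, O}
window = ["museums", "in", "Paris", "are", "amazing"]   # center word with window length 2

# Hypothetical lookup table: one d-dimensional vector per word
rng = np.random.default_rng(0)
word_vectors = {w: rng.normal(size=d) for w in window}

# Concatenate the 5 word vectors into a single vector x_window in R^{5d}
x_window = np.concatenate([word_vectors[w] for w in window])    # shape (20,)

def softmax(z):
    z = z - z.max()                # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Softmax classifier over the concatenated window
W = rng.normal(size=(num_classes, 5 * d))
b = np.zeros(num_classes)
y_hat = softmax(W @ x_window + b)
print(y_hat, y_hat.sum())          # class probabilities for the center word; sums to 1
```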
Binary classification with unnormalized scores
In the previous example:
Suppose we want to determine whether the center word is a Location.
As with word2vec, we go over all positions in the corpus, but this time only some of them should receive a high score.
The positions that have an actual NER Location at their center are "true" positions and get a high score.
Neural Network Feed-forward Computation
We use the neural activations a to give an unnormalized score.
We compute the score with a 3-layer neural network: $s = U^\top a = U^\top f(Wx + b)$
s = score("museums in Paris are amazing")
Main intuition for extra layer
The middle layer learns non-linear interactions between the input word vectors. Example: only if "museums" is the first vector should it matter that "in" is in the second position.
The max-margin loss
Idea for the training objective: make the true window's score larger and the corrupt window's score lower (until they are good enough).
s = score(museums in Paris are amazing), $s_c$ = score(Not all museums in Paris)
Minimize $J = \max(0, 1 - s + s_c)$
This is not differentiable everywhere, but it is continuous, so we can optimize it with SGD.
Each window with an NER Location at its center should have a score +1 higher than any window without a Location at its center.
For the full objective function: sample several corrupt windows per true one (much as in negative sampling) and sum over all training windows, as in the sketch below.
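A minimal sketch of this objective (the `score` function and the corrupt-window sampler are placeholders for whatever model and corpus you use; they are not from the original notes):

```python
def hinge(s_true, s_corrupt, delta=1.0):
    """Max-margin loss for one (true window, corrupt window) pair."""
    return max(0.0, delta - s_true + s_corrupt)

def full_objective(score, true_windows, sample_corrupt, k=5):
    """Sum the loss over all training windows, sampling k corrupt windows per true one."""
    total = 0.0
    for window in true_windows:
        s = score(window)                         # score of the true window
        for _ in range(k):
            s_c = score(sample_corrupt(window))   # score of a sampled corrupt window
            total += hinge(s, s_c)
    return total
```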
Matrix calculus (we will not derive everything in detail here)
Example Jacobian: elementwise activation function
A multivariate differentiation example: for $h = f(z)$ applied elementwise, $\frac{\partial h_i}{\partial z_j} = f'(z_i)$ if $i = j$ and $0$ otherwise, so $\frac{\partial h}{\partial z} = \mathrm{diag}(f'(z))$.
Applying this to our scoring formula $s = U^\top h$, $h = f(z)$, $z = Wx + b$:
By the chain rule, $\frac{\partial s}{\partial b} = \frac{\partial s}{\partial h}\frac{\partial h}{\partial z}\frac{\partial z}{\partial b} = U^\top \mathrm{diag}(f'(z))\, I = (U \circ f'(z))^\top$, where $\circ$ denotes the elementwise (Hadamard) product.
Notes: Neural Networks, Backpropagation
Neural Networks: Foundations
A neuron
A neuron is a generic computational unit that takes n inputs and produces a single output. What differentiates the outputs of different neurons is their parameters (also referred to as their weights). A common choice is the sigmoid ("binary logistic regression") unit, which computes $a = \frac{1}{1 + \exp(-(w^\top x + b))}$.
We can see that a neuron is one of many functions that allow non-linearities to accumulate in the network.
A single layer of neurons
We extend the idea of a single neuron to multiple neurons by feeding the input x to several such neurons in parallel.
If we refer to the different neurons' weights as $w^{(1)}, \dots, w^{(m)}$ and the biases as $b_1, \dots, b_m$, the respective activations are $a_i = \sigma(w^{(i)\top} x + b_i)$.
Simplifying the notation: stacking the weight vectors into a matrix W and the biases into a vector b, we can write $z = Wx + b$, and the activations become $a = \sigma(z)$.
Feed-forward computation
First consider an NLP named entity recognition problem as an example: in "Museums in Paris are amazing", we want to decide whether the center word "Paris" is a named entity. In this case we most likely want to capture not only the word vectors in the window but also some interactions between the words for the sake of classification. For instance, maybe it should matter that "Museums" is the first word only if "in" is the second word. Such non-linear decisions usually cannot be captured by feeding the inputs directly to a softmax; instead we add an intermediate layer and score its output. We therefore use another matrix $U$ to compute an unnormalized score from the activations: $s = U^\top a = U^\top f(Wx + b)$, where f is the activation function.
Analysis of dimensions: If we represent each word using a 4-dimensional word vector and we use a 5-word window as input (as in the above example), then the input $x \in \mathbb{R}^{20}$. If we use 8 sigmoid units in the hidden layer and generate 1 score output from the activations, then $W \in \mathbb{R}^{8 \times 20}$, $b \in \mathbb{R}^{8}$, $U \in \mathbb{R}^{8 \times 1}$, and $s \in \mathbb{R}$.
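With those dimensions, the forward pass looks roughly like the following numpy sketch (randomly initialized parameters stand in for trained ones):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=20)          # 5-word window of 4-dim word vectors -> x in R^20
W = rng.normal(size=(8, 20))     # hidden layer: 8 sigmoid units
b = np.zeros(8)
U = rng.normal(size=8)           # scoring vector mapping activations to a scalar

z = W @ x + b                    # z in R^8
a = sigmoid(z)                   # a in R^8
s = U @ a                        # unnormalized score, a scalar

print(z.shape, a.shape, float(s))
```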
Maximum Margin Objective Function
We use a maximum margin objective function, which ensures that the score computed for "true" data is higher than the score computed for "false" data.
Notation: using the previous example, we call the score computed for the "true" labeled window "Museums in Paris are amazing" $s$, and the score computed for the "false" labeled window "Not all museums in Paris" $s_c$ (subscripted c to signify that the window is "corrupt").
Our objective would then be to maximize $s - s_c$, or equivalently to minimize $s_c - s$. However, we modify the objective so that an error is only accumulated when $s_c > s$: all we care about is that the score of the "true" data is higher than the score of the "false" data, and nothing beyond that. So the error is $s_c - s$ when $s_c > s$ and 0 otherwise, and the optimization objective becomes: minimize $J = \max(s_c - s, 0)$.
However, the above optimization objective is risky in the sense that it does not attempt to create a margin of safety. We would want the "true" labeled data point to score higher than the "false" labeled data point by some positive margin $\Delta$. In other words, we would want error to be calculated if $(s - s_c < \Delta)$ and not just when $(s - s_c < 0)$. Thus, we modify the optimization objective to: minimize $J = \max(\Delta + s_c - s, 0)$.
We can scale this margin such that $\Delta = 1$ and let the other parameters in the optimization problem adapt to this without any change in performance (readers who know the SVM derivation will recognize this step), giving: minimize $J = \max(1 + s_c - s, 0)$.
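For the gradient updates in the next section it helps to note the (sub)gradient of this hinge with respect to the two scores, which follows directly from the definition above:

$$
\frac{\partial J}{\partial s} = \begin{cases} -1 & \text{if } 1 + s_c - s > 0 \\ 0 & \text{otherwise} \end{cases}
\qquad
\frac{\partial J}{\partial s_c} = \begin{cases} +1 & \text{if } 1 + s_c - s > 0 \\ 0 & \text{otherwise} \end{cases}
$$

When the margin is already satisfied, the loss is zero and no gradient flows; otherwise these gradients are simply passed back through the network.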
Gradient updates
Backpropagation is a technique that uses the chain rule to compute the gradient of the loss with respect to any parameter of the model.
Definitions:
1. $x$ is an input to the neural network.
2. $s$ is the output of the neural network.
3. Each layer (including the input and output layers) has neurons which receive an input and produce an output. The j-th neuron of layer k receives the scalar input $z_j^{(k)}$ and produces the scalar activation output $a_j^{(k)}$.
4. We will call the backpropagated error calculated at $z_j^{(k)}$ as $\delta_j^{(k)}$.
5. Layer 1 refers to the input layer and not the first hidden layer.
6. $W^{(k)}$ is the transfer matrix that maps the output of the k-th layer to the input of the (k+1)-th layer.
Starting backpropagation
Now we start backpropagating. Suppose the loss $J$ is positive and we want to update a parameter $W_{ij}^{(1)}$. We observe that $W_{ij}^{(1)}$ only participates in the computation of $z_i^{(2)}$ and hence $a_i^{(2)}$. This point is crucial for understanding backpropagation: the gradient reaching a parameter is only affected by the values that parameter contributed to. In the subsequent forward computation, $a_i^{(2)}$ is multiplied by $W_i^{(2)}$ to compute the score. From the max-margin loss we can see that $\frac{\partial J}{\partial s} = -\frac{\partial J}{\partial s_c} = -1$ whenever the loss is positive.
For simplicity we analyze only $\frac{\partial s}{\partial W_{ij}^{(1)}}$.
Working through the chain rule, the gradient simplifies to $\frac{\partial s}{\partial W_{ij}^{(1)}} = \delta_i^{(2)} a_j^{(1)}$, where $a^{(1)}$ refers to the input of the input layer and $\delta_i^{(2)}$ is essentially the error backpropagated to the i-th neuron in layer 2; $a_j^{(1)}$ is the quantity that gets multiplied by $W_{ij}$ on its way into the i-th neuron of layer 2.
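Putting these pieces together, a minimal sketch (not the notes' own code) of backprop for the one-hidden-layer scorer $s = U^\top f(Wx + b)$ with a sigmoid f:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, b, U):
    z = W @ x + b
    a = sigmoid(z)
    s = U @ a
    return s, (x, z, a)

def backward(W, U, cache):
    """Gradients of the score s = U^T f(Wx + b) with respect to U, W, b, and x."""
    x, z, a = cache
    dU = a                                   # ds/dU = a
    delta = U * a * (1 - a)                  # delta^{(2)} = U o f'(z) for the sigmoid
    db = delta                               # ds/db_i = delta_i
    dW = np.outer(delta, x)                  # ds/dW_ij = delta_i * x_j  (outer product)
    dx = W.T @ delta                         # error passed down to the layer below
    return dU, dW, db, dx
```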
Training with Backpropagation – Vectorized
For a given parameter $W_{ij}^{(k)}$, we have seen that its error gradient is $\delta_i^{(k+1)} a_j^{(k)}$, where $W^{(k)}$ is the matrix that maps $a^{(k)}$ to $z^{(k+1)}$. We can therefore establish that the error gradient for the entire matrix $W^{(k)}$ is $\nabla_{W^{(k)}} = \delta^{(k+1)} a^{(k)\top}$.
Thus we can write the gradient for the whole matrix as the outer product of the error vector propagating backwards and the forward activation output, and the error itself propagates backwards as $\delta^{(k)} = f'(z^{(k)}) \circ (W^{(k)\top} \delta^{(k+1)})$, where $\circ$ denotes elementwise multiplication.
Neural Networks: Tips and Tricks
Gradient Check
Given a model with parameter vector θ and loss function J, the numerical gradient around $\theta_i$ is simply given by the centered difference formula: $\frac{\partial J}{\partial \theta_i} \approx \frac{J(\theta^{(i+)}) - J(\theta^{(i-)})}{2\varepsilon}$, where $\theta^{(i\pm)}$ is θ with its i-th component shifted by $\pm\varepsilon$.
This is simply an estimate of the gradient.
Now, a natural question you might ask is: if this method is so precise, why do we not use it to compute all of our network gradients instead of applying back-propagation? The simple answer, as hinted earlier, is inefficiency. Every time we want to compute the gradient with respect to one element, we need to make two forward passes through the network, which is computationally expensive. Furthermore, many large-scale neural networks contain millions of parameters, and computing two passes per parameter is clearly not optimal. So although the estimate above is accurate, it is only a way to spot-check that our gradients are correct; the practical, computationally efficient algorithm is backpropagation.
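A minimal sketch of such a gradient check (the loss here is a toy stand-in, not a real network):

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Centered-difference estimate of dJ/dtheta, one parameter at a time."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        old = theta.flat[i]
        theta.flat[i] = old + eps
        j_plus = J(theta)
        theta.flat[i] = old - eps
        j_minus = J(theta)
        theta.flat[i] = old                  # restore the parameter
        grad.flat[i] = (j_plus - j_minus) / (2 * eps)
    return grad

# Toy check: for J(theta) = sum(theta^2) the analytic gradient is 2*theta
theta = np.array([1.0, -2.0, 3.0])
numeric = numerical_gradient(lambda t: np.sum(t ** 2), theta)
print(np.allclose(numeric, 2 * theta, atol=1e-6))   # True
```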
Regularization
As with many machine learning models, neural networks are highly prone to overfitting, where a model obtains near-perfect performance on the training dataset but loses the ability to generalize to unseen data. A common remedy is L2 regularization: we simply add a regularization term to the loss J, giving the modified loss $J_R = J + \lambda \sum_{i=1}^{L} \lVert W^{(i)} \rVert_F$.
In this formula, λ is a hyperparameter that controls how much weight the regularization term gets, and $\lVert W^{(i)} \rVert_F$ is the Frobenius norm of the weight matrix of the i-th layer: the square root of the sum of the squared entries of the matrix, $\lVert W \rVert_F = \sqrt{\sum_j \sum_k W_{jk}^2}$.
What regularization is essentially doing is penalizing weights for being too large while optimizing over the original cost function: it keeps the weight values spread more evenly and prevents any individual weight from growing too large.
Due to the quadratic nature of the Frobenius norm (which involves the squared elements of a matrix), L2 regularization effectively reduces the flexibility of the model and thereby reduces the overfitting phenomenon. Imposing such a constraint can also be interpreted as the prior Bayesian belief that the optimal weights are close to zero; how close depends on the value of λ.
Too high a value of λ causes most of the weights to be set too close to 0, and the model does not learn anything meaningful from the training data, often obtaining poor accuracy on training, validation, and test sets. In short, λ must be chosen appropriately. A small sketch of the penalty term follows below.
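A minimal sketch of adding the penalty to a loss (here using the squared Frobenius norm, i.e. the sum of squared entries, which is the quadratic form described above; biases are excluded):

```python
import numpy as np

def l2_penalty(weight_matrices, lam):
    """lambda * sum over layers of the squared Frobenius norm ||W||_F^2 (biases excluded)."""
    return lam * sum(np.sum(W ** 2) for W in weight_matrices)

# Usage: regularized loss J_R = data loss + penalty over all weight matrices
# loss = data_loss + l2_penalty([W1, W2], lam=1e-4)
```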
Why the bias has no regularization term
Regularization exists to prevent overfitting, and overfitting shows up as the model producing wildly different outputs for tiny changes in the input. That behavior is driven by W: some weights become too large. The bias b is not to blame here; it merely shifts the output and is insensitive to whether the input changes a little or a lot, so there is no need to penalize it.
Dropout
The idea is simple yet effective: during training, we randomly "drop" a subset of neurons with some probability (1 − p) during each forward/backward pass (or equivalently, we keep each neuron alive with probability p). Then, during testing, we use the full network to compute our predictions. The result is that the network typically learns more meaningful information from the data, is less likely to overfit, and usually obtains higher overall performance on the task at hand. One intuitive reason why this technique is so effective is that dropout is essentially training exponentially many smaller networks at once and averaging over their predictions.
However, a key subtlety is that for dropout to work effectively, the expected output of a neuron during testing should be approximately the same as it was during training; otherwise the magnitude of the outputs could be radically different and the behavior of the network is no longer well-defined. Thus, we must rescale so that the two match: multiply each neuron's output by the keep probability p at test time, or equivalently (the common "inverted dropout" formulation) divide the surviving activations by p during training and leave test time untouched.
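A minimal sketch of the inverted-dropout variant just described (scaling during training so the test-time pass needs no change):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p, train):
    """Inverted dropout: keep each unit with probability p and rescale while training."""
    if not train:
        return h                                  # test time: use the full network as-is
    mask = (rng.random(h.shape) < p) / p          # surviving units are scaled by 1/p
    return h * mask

h = np.ones(10)
print(dropout(h, p=0.8, train=True))              # ~80% of entries equal 1/0.8, the rest 0
print(dropout(h, p=0.8, train=False))             # unchanged at test time
```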
Parameter Initialization
A key step towards achieving superlative performance with a neural network is initializing the parameters in a reasonable way. A good starting strategy is to initialize the weights to small random numbers distributed around 0, with a scale that depends on $n^{(l)}$, the number of input units of W (fan-in), and $n^{(l+1)}$, the number of output units of W (fan-out).
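The exact formula is not reproduced in this copy of the notes; a common choice consistent with the fan-in/fan-out description is Xavier (Glorot) initialization, sketched below:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(fan_in, fan_out):
    """Xavier/Glorot init: W ~ U[-sqrt(6/(n_in + n_out)), +sqrt(6/(n_in + n_out))]."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

W = xavier_uniform(fan_in=20, fan_out=8)   # e.g. hidden-layer weights from the earlier example
print(W.shape, W.min(), W.max())
```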
Extra: optimization algorithms (this part comes from Datawhale's "顏值擔(dān)當(dāng)")
The family of optimizers that adapt their updates using the history of past gradients includes:
SGD
Momentum
NAG
AdaGrad
AdaDelta
Adam
Nadam
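The comparison figure for these optimizers is not reproduced here. As a representative of how gradient history enters the update, here is a minimal sketch of plain SGD versus SGD with momentum (standard update rules, not taken from the missing figure):

```python
import numpy as np

def sgd_step(theta, grad, lr=0.1):
    """Vanilla SGD: theta <- theta - lr * grad."""
    return theta - lr * grad

def momentum_step(theta, grad, v, lr=0.1, mu=0.9):
    """SGD with momentum: v accumulates an exponentially decaying history of gradients."""
    v = mu * v - lr * grad
    return theta + v, v

# Toy usage on J(theta) = theta^2, whose gradient is 2 * theta
theta, v = np.array([5.0]), np.zeros(1)
for _ in range(100):
    theta, v = momentum_step(theta, 2 * theta, v)
print(theta)   # the iterates oscillate and decay toward 0
```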
Original title: 【CS224N筆記】一文詳解神經(jīng)網(wǎng)絡(luò)來龍去脈
Source: the WeChat public account 深度學(xué)習(xí)自然語言處理 (WeChat ID: zenRRan). Please credit the source when reposting.