How to build a three-layer neural network from scratch

by Daphne Cornelisse

In this post, I will go through the steps required to build a three-layer neural network. I'll work through a problem and explain the process, along with the most important concepts, as we go.

The problem to solve

A farmer in Italy was having a problem with his labelling machine: it mixed up the labels of three wine cultivars. Now he has 178 bottles left, and nobody knows which cultivar made them! To help this poor man, we will build a classifier that recognizes the cultivar based on 13 attributes of the wine.

The fact that our data is labeled (with one of the three cultivars' labels) makes this a supervised learning problem. Essentially, what we want to do is use our input data (the 178 unclassified wine bottles), put it through our neural network, and then get the right label for each wine cultivar as the output.

We will train our algorithm to get better and better at predicting (y-hat) which bottle belongs to which label.

Now it is time to start building the neural network!

Approach

Building a neural network is almost like building a very complicated function, or putting together a very difficult recipe. In the beginning, the ingredients or steps you will have to take can seem overwhelming. But if you break everything down and do it step by step, you will be fine.

In short:

  • The input layer (x) holds the 13 wine attributes of each sample (our dataset contains 178 bottles).

  • A1, the first layer, consists of 8 neurons.

  • A2, the second layer, consists of 5 neurons.

  • A3, the third and output layer, consists of 3 neurons.

Step 1: the usual prep

Import all the necessary libraries (NumPy, scikit-learn, pandas) and the dataset, and define X and y.

# Importing all the libraries and the dataset
import pandas as pd
import numpy as np

df = pd.read_csv('../input/W1data.csv')
df.head()

# Package imports
# Matplotlib
import matplotlib
import matplotlib.pyplot as plt

# SciKitLearn is a machine learning utilities library
import sklearn

# The sklearn dataset module helps generating datasets
import sklearn.datasets
import sklearn.linear_model
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score
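
The snippet above loads the data but doesn't show how X and y are defined. A minimal sketch, assuming the CSV holds the 13 attributes plus a single label column named 'Cultivar' (that column name is an assumption; adjust it to the actual file, which may already store the labels one-hot encoded):

# Minimal sketch of defining X and y (the 'Cultivar' column name is an assumption)
X = df.drop('Cultivar', axis=1).values                          # (178, 13) feature matrix
y = OneHotEncoder().fit_transform(df[['Cultivar']]).toarray()   # (178, 3) one-hot labels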

Step 2: initialization

Before we can use our weights, we have to initialize them. Because we don’t have values to use for the weights yet, we use random values between 0 and 1.

In Python, the random.seed function generates “random numbers.” However, these numbers are not truly random. They are pseudorandom, meaning they are generated by a complicated formula that makes the output look random. In order to generate a number, the formula takes the previously generated value as its input. If there is no previous value, it often takes the current time as the first value.

That is why we seed the generator — to make sure that we always get the same random numbers. We provide a fixed value that the number generator can start with, which is zero in this case.

np.random.seed(0)
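
The training code in Step 5 calls an initialise_parameters helper that isn't shown in this excerpt. A minimal sketch, assuming the 13 → 8 → 5 → 3 architecture described above and the random weights between 0 and 1 just mentioned (the original code may scale the weights differently):

# Minimal sketch of the weight/bias initialisation (layer sizes follow the
# architecture above; the first hidden layer of 8 neurons is hardcoded here)
def initialise_parameters(nn_input_dim, nn_hdim, nn_output_dim):
    W1 = np.random.rand(nn_input_dim, 8)   # weights between 0 and 1
    b1 = np.zeros((1, 8))                  # biases start at 0
    W2 = np.random.rand(8, nn_hdim)
    b2 = np.zeros((1, nn_hdim))
    W3 = np.random.rand(nn_hdim, nn_output_dim)
    b3 = np.zeros((1, nn_output_dim))
    return {'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2, 'W3': W3, 'b3': b3}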

Step 3: forward propagation

There are roughly two parts to training a neural network. First, you propagate forward through the NN. That is, you are “making steps” forward and comparing those results with the real values to get the difference between your output and what it should be. You basically see how the NN is doing and find the errors.

After we have initialized the weights with pseudo-random numbers, we take a linear step forward. We calculate this by taking the dot product of our input A0 and the randomly initialized weights, and then adding a bias. We start with a bias of 0. This is represented as:

z1 = a0 · W1 + b1

Now we take our z1 (our linear step) and pass it through our first activation function. Activation functions are very important in neural networks. Essentially, they convert an input signal to an output signal — this is why they are also known as Transfer functions. They introduce non-linear properties to our functions by converting the linear input to a non-linear output, making it possible to represent more complex functions.

There are different kinds of activation functions (explained in depth in this article). For this model, we chose to use the tanh activation function for our two hidden layers — A1 and A2 — which gives us an output value between -1 and 1.

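As a quick illustration (not part of the original post), tanh squashes any real-valued input into the range (-1, 1):

# tanh maps any real value into (-1, 1)
z = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(np.tanh(z))   # approximately [-0.9999 -0.7616  0.      0.7616  0.9999]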

Since this is a multi-class classification problem (we have 3 output labels), we will use the softmax function for the output layer — A3 — because this will compute the probabilities for the classes by spitting out a value between 0 and 1.

由于這是一個多類分類問題 (我們有3個輸出標簽),因此我們將softmax函數用于輸出層A3,因為這將通過吐出0到1之間的值來計算類的概率。

By passing z1 through the activation function, we have created our first hidden layer — A1 — which can be used as input for the computation of the next linear step, z2.

In Python, this process looks like this:

# This is the forward propagation function
def forward_prop(model, a0):

    # Load parameters from model
    W1, b1, W2, b2, W3, b3 = model['W1'], model['b1'], model['W2'], model['b2'], model['W3'], model['b3']

    # Do the first linear step
    z1 = a0.dot(W1) + b1

    # Put it through the first activation function
    a1 = np.tanh(z1)

    # Second linear step
    z2 = a1.dot(W2) + b2

    # Put it through the second activation function
    a2 = np.tanh(z2)

    # Third linear step
    z3 = a2.dot(W3) + b3

    # For the third linear step we use the softmax activation function
    a3 = softmax(z3)

    # Store all results in these values
    cache = {'a0': a0, 'z1': z1, 'a1': a1, 'z2': z2, 'a2': a2, 'a3': a3, 'z3': z3}
    return cache

In the end, all our values are stored in the cache.

Step 4: backwards propagation

After we forward propagate through our NN, we backward propagate our error gradient to update our weight parameters. We know our error, and want to minimize it as much as possible.

We do this by taking the derivative of the error function, with respect to the weights (W) of our NN, using gradient descent.

Let's visualize this process with an analogy.

Imagine you went out for a walk in the mountains during the afternoon. But now it's an hour later and you are a bit hungry, so it's time to go home. The only problem is that it is dark and there are many trees, so you can't see either your home or where you are. Oh, and you forgot your phone at home.

But then you remember your house is in a valley, the lowest point in the whole area. So if you just walk down the mountain step by step until you don’t feel any slope, in theory you should arrive at your home.

So there you go, step by step carefully going down. Now think of the mountain as the loss function, and you are the algorithm, trying to find your home (i.e. the lowest point). Every time you take a step downwards, we update your location coordinates (the algorithm updates the parameters).

The loss function is represented by the mountain. To get to a low loss, the algorithm follows the slope — that is the derivative — of the loss function.

When we walk down the mountain, we are updating our location coordinates. The algorithm updates the parameters of the neural network. By getting closer to the minimum point, we are approaching our goal of minimizing our error.

In reality, gradient descent looks more like this:

We always start by calculating the slope of the loss function with respect to z, the linear step we take.

Notation is as follows: dv is the derivative of the loss function, with respect to a variable v.

Next we calculate the slope of the loss function with respect to our weights and biases. Because this is a 3-layer NN, we iterate this process for z3, z2, z1 together with W3, W2, W1 and b3, b2, b1, propagating backwards from the output to the input layer.

This is how this process looks in Python:

# This is the backward propagation function
def backward_prop(model, cache, y):

    # Load parameters from model
    W1, b1, W2, b2, W3, b3 = model['W1'], model['b1'], model['W2'], model['b2'], model['W3'], model['b3']

    # Load forward propagation results
    a0, a1, a2, a3 = cache['a0'], cache['a1'], cache['a2'], cache['a3']

    # Get number of samples
    m = y.shape[0]

    # Calculate loss derivative with respect to the output
    dz3 = loss_derivative(y=y, y_hat=a3)

    # Calculate loss derivative with respect to third layer weights
    dW3 = 1/m * (a2.T).dot(dz3)

    # Calculate loss derivative with respect to third layer bias
    db3 = 1/m * np.sum(dz3, axis=0)

    # Calculate loss derivative with respect to the second layer
    dz2 = np.multiply(dz3.dot(W3.T), tanh_derivative(a2))

    # Calculate loss derivative with respect to second layer weights
    dW2 = 1/m * np.dot(a1.T, dz2)

    # Calculate loss derivative with respect to second layer bias
    db2 = 1/m * np.sum(dz2, axis=0)

    # Repeat for the first layer
    dz1 = np.multiply(dz2.dot(W2.T), tanh_derivative(a1))
    dW1 = 1/m * np.dot(a0.T, dz1)
    db1 = 1/m * np.sum(dz1, axis=0)

    # Store gradients
    grads = {'dW3': dW3, 'db3': db3, 'dW2': dW2, 'db2': db2, 'dW1': dW1, 'db1': db1}
    return grads
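
backward_prop relies on two helpers, loss_derivative and tanh_derivative, that aren't shown in this excerpt. Minimal sketches, assuming a cross-entropy loss on the softmax output (whose derivative with respect to z3 conveniently simplifies to y_hat - y):

# Minimal sketches of the helpers used above (the cross-entropy loss is an assumption)
def loss_derivative(y, y_hat):
    # For softmax outputs with cross-entropy loss, dL/dz3 simplifies to (y_hat - y)
    return y_hat - y

def tanh_derivative(x):
    # x is the tanh activation a = tanh(z), so the derivative with respect to z is 1 - a**2
    return 1 - np.power(x, 2)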

Step 5: the training phase

In order to reach the optimal weights and biases that will give us the desired output (the three wine cultivars), we will have to train our neural network.

I think this is very intuitive. For almost anything in life, you have to train and practice many times before you are good at it. Likewise, a neural network will have to undergo many epochs or iterations to give us an accurate prediction.

When you are learning anything, let's say you are reading a book, you have a certain pace. This pace should not be too slow, as reading the book would take ages. But it should not be too fast either, since you might miss a very valuable lesson in the book.

In the same way, you have to specify a “learning rate” for the model. The learning rate is the multiplier used to update the parameters. It determines how rapidly they can change. If the learning rate is low, training will take longer. However, if the learning rate is too high, we might miss a minimum. The update using the learning rate is expressed as:

w := w - a · dL(w)

  • := means that this is a definition, not an equation or proven statement.

  • a is the learning rate, called alpha.

  • dL(w) is the derivative of the total loss with respect to our weight w.

  • a · dL(w) is therefore the amount by which each weight is adjusted at every step.

We chose a learning rate of 0.07 after some experimenting.

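The train function called below, and the parameter update it performs, aren't shown in this excerpt either. A minimal sketch, assuming it simply loops over forward propagation, backward propagation, and a plain gradient-descent update while collecting the losses plotted afterwards (the loss formula is my assumption):

# Minimal sketch of the training loop (structure and loss formula are assumptions)
losses = []

def update_parameters(model, grads, learning_rate):
    # Gradient-descent step: w := w - a * dL(w), and likewise for the biases
    for key in ['W1', 'b1', 'W2', 'b2', 'W3', 'b3']:
        model[key] = model[key] - learning_rate * grads['d' + key]
    return model

def train(model, X, y, learning_rate, epochs, print_loss=False):
    for epoch in range(epochs):
        cache = forward_prop(model, X)                            # forward pass
        grads = backward_prop(model, cache, y)                    # backward pass
        model = update_parameters(model, grads, learning_rate)    # update weights and biases
        # Cross-entropy loss on the softmax output (assumption), tracked for plotting
        loss = -np.sum(y * np.log(cache['a3'] + 1e-8)) / y.shape[0]
        losses.append(loss)
        if print_loss and epoch % 100 == 0:
            print('Loss after epoch %i: %f' % (epoch, loss))
    return model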

# This is what we return at the end
model = initialise_parameters(nn_input_dim=13, nn_hdim=5, nn_output_dim=3)
model = train(model, X, y, learning_rate=0.07, epochs=4500, print_loss=True)
plt.plot(losses)

Finally, there is our graph. You can plot your accuracy and/or loss to get a nice graph of your prediction accuracy. After 4,500 epochs, our algorithm has an accuracy of 99.4382022472 %.

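The accuracy_score import from Step 1 is never used in the snippets shown. A plausible sketch of how the reported accuracy could be computed (my assumption) is to take the arg-max of the softmax output and compare it with the true labels:

# Plausible accuracy computation (my assumption): pick the most probable class per sample
cache = forward_prop(model, X)
y_hat = np.argmax(cache['a3'], axis=1)   # predicted cultivar index per bottle
y_true = np.argmax(y, axis=1)            # true cultivar index (y is one-hot)
print(accuracy_score(y_true, y_hat))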

Brief summary

We start by feeding data into the neural network and perform several matrix operations on this input data, layer by layer. For each of our three layers, we take the dot product of the input and the weights, and add a bias. Next, we pass this output through an activation function of choice.

The output of this activation function is then used as the input for the following layer, which follows the same procedure. This process is iterated three times, since we have three layers. Our final output is y-hat, which is the prediction of which wine belongs to which cultivar. This is the end of the forward propagation process.

We then calculate the difference between our prediction (y-hat) and the expected output (y) and use this error value during backpropagation.

During backpropagation, we take our error — the difference between our prediction y-hat and y — and we mathematically push it back through the NN in the other direction. We are learning from our mistakes.

By taking the derivative of the functions we used during the first process, we try to discover what value we should give the weights in order to achieve the best possible prediction. Essentially we want to know what the relationship is between the value of our weight and the error that we get out as the result.

And after many epochs or iterations, the NN has learned to give us more accurate predictions by adapting its parameters to our dataset.

This post was inspired by the week 1 challenge from the Bletchley Machine Learning Bootcamp that started on the 7th of February. In the coming nine weeks, I’m one of 50 students who will go through the fundamentals of Machine Learning. Every week we discuss a different topic and have to submit a challenge, which requires you to really understand the materials.

If you have any questions or suggestions, let me know!

Or if you want to check out the whole code, you can find it here on Kaggle.

Recommended videos to get a deeper understanding of neural networks:

  • 3Blue1Brown's series on neural networks

  • Siraj Raval's series on Deep Learning

Translated from: https://www.freecodecamp.org/news/building-a-3-layer-neural-network-from-scratch-99239c4af5d3/
