How to Fool a Neural Network?

Imagine you’re in the year 2050 and you’re on your way to work in a self-driving car (probably). Suddenly, you realize your car is cruising at 100KMPH on a busy road after passing through an intersection, and you don’t know why.

PURE TERROR!

What could’ve happened?

Well, there might be many reasons. But in this article, we are going to focus on one particular reason — the car was fooled.

To be precise, the neural network that saw the signboard at the intersection was tricked into reading a STOP sign as a 100KMPH sign, and that resulted in the car’s sudden acceleration.

Is that even possible?

Yes, it is. But before digging deeper, let’s first understand what a neural network sees after it has been trained. It is often believed that every neuron in the network works much like a biological neuron, and that the network therefore perceives an image the same way our brain does. In practice, that isn’t the case. Let’s look at an example.

Guess what the below image is.

[Image]

You guessed it right. It’s a temple and the neural network predicts it as a temple with 97% confidence.

Now, guess what this image is.

[Image]

Temple again?

They look identical, but they aren’t. The second image is predicted as an ostrich with 98% confidence by the same model we used for the previous one. The network has been fooled by this image. But how?

This second image didn’t come from a real-world camera. Instead, it was hand-engineered specifically to fool the neural network classifier while appearing unchanged to our visual system.

[Image]

This noisy image is responsible for the model’s misclassification. Adding this noise to the first image produced the modified second image, which is called an adversarial example. The external noise that was added is called a perturbation.
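
To make this concrete, here is a minimal sketch of how you might compare a classifier’s prediction on a clean image and on the same image with a perturbation added. This is not the article’s exact setup: it assumes a recent PyTorch/torchvision install, "temple.jpg" and "perturbation.npy" are hypothetical files, and the perturbation is assumed to come from an attack such as the FGSM sketch later in the article.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

# A pretrained ImageNet classifier (any torchvision model would do here)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                      # pixels scaled to [0, 1]
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

clean = to_tensor(Image.open("temple.jpg").convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
noise = torch.from_numpy(np.load("perturbation.npy")).float()            # same shape as `clean`
adversarial = (clean + noise).clamp(0.0, 1.0)                            # still a valid image

with torch.no_grad():
    for name, batch in [("clean", clean), ("adversarial", adversarial)]:
        probs = torch.softmax(model(normalize(batch)), dim=1)
        confidence, class_idx = probs.max(dim=1)
        print(f"{name}: class {class_idx.item()} with confidence {confidence.item():.2%}")
```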

[Image]

In the same way, the car might have misclassified the STOP sign as a 100KMPH sign.

[Image: Designed using Canva]

Let me give you an idea of why this is a significant threat to many real-world machine learning applications, beyond the self-driving car case above.

  • It is possible to create a pair of 3D-printed glasses such that, when you put them on, you suddenly become unrecognizable to any existing facial recognition software.

  • It is also possible to print a custom license plate that looks perfectly normal but gets misread by any existing traffic surveillance camera.

Neural networks are prone to a ton of different attacks like these: white-box attacks, black-box attacks, physical attacks, digital attacks, perceptible and imperceptible attacks, and more. To work in any real-world situation, a network must be robust to all such types of attacks.

How does this work?

Andrej Karpathy has written a very interesting blog post on this topic, which you can read here. Here’s a small sneak peek of it.

So what do we do in a traditional training process? We compute the loss function, backpropagate, calculate the gradient, and use that gradient to perform a parameter update, which nudges every parameter in the model a tiny amount in the correct direction to increase the prediction score. These parameter updates are responsible for increasing the confidence score of the correct class for the input image.

Notice how this worked. We kept the input image fixed and adjusted the model parameters to increase the score of whatever class we wanted. We can easily flip this process around to create fooling images: we hold the model parameters fixed, and instead compute the gradient, with respect to all pixels of the input image, of any class we wish. For example, we can ask a question like this —

What happens to the score of (whatever class you want) when I tweak the pixels of the image instead?

[Image: Designed using Canva]

We compute the gradient just as before with backpropagation, and then we perform an image update instead of a parameter update, with the end result that we increase the score of whatever class we want. For example, we can take a panda image and adjust every pixel according to the gradient of the cat class score with respect to that image. This changes the image by a tiny amount, but the score of the cat class now increases. Somewhat unintuitively, it turns out that you don’t have to change the image very much to flip it from being classified correctly as a panda to being classified as anything else (e.g. a cat).
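
As a rough sketch of that image-update loop (assuming a PyTorch classifier `model` and a preprocessed input tensor `image` of shape (1, 3, H, W) with pixel values in [0, 1]; the step size, number of steps, and target class are arbitrary illustrative choices):

```python
import torch

def make_fooling_image(model, image, target_class, steps=50, step_size=0.01):
    """Gradient ascent on the input pixels: the parameters stay fixed, only the image changes."""
    model.eval()
    fooling = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = model(fooling)[0, target_class]    # raw score (logit) of the class we want to boost
        model.zero_grad()
        if fooling.grad is not None:
            fooling.grad.zero_()
        score.backward()                           # gradient of that score w.r.t. every input pixel
        with torch.no_grad():
            fooling += step_size * fooling.grad    # an image update instead of a parameter update
            fooling.clamp_(0.0, 1.0)               # keep the result a valid image
    return fooling.detach()

# e.g. fooling = make_fooling_image(model, panda_image, target_class=281)  # 281: "tabby cat" in ImageNet
```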

Now that you have a basic idea of how this works, there’s one popular technique you should know, called the Fast Gradient Sign Method, used to generate adversarial examples. It was introduced by Ian J. Goodfellow in Explaining and Harnessing Adversarial Examples.

Fast Gradient Sign Method

In this method, you take an input image and use the gradient of the loss function with respect to that image to create a new image that maximizes the existing loss. In this way, we obtain an image whose change is almost imperceptible to our visual system, yet the same neural network sees a significant difference. This new image is called the adversarial image. The method can be summarised by the following expression:

adv_x = x + ε * sign(∇x J(θ, x, y))

where

  • adv_x: Adversarial image.
  • x: Original input image.
  • y: Original input label.
  • ε: Multiplier to ensure the perturbations are small.
  • θ: Model parameters.
  • J: Loss.

You can play around with this method by generating your own adversarial examples for images in this notebook. There, you’ll find a model trained on the MNIST dataset, and you can see how the confidence scores change as you tweak the ε (epsilon) parameter.
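
For reference, here is a minimal PyTorch sketch of the same formula (the linked notebook may use a different framework). It assumes `model` is an already-trained MNIST classifier and `x`, `y` are a batch of images with pixel values in [0, 1] and their integer labels.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """adv_x = x + ε * sign(∇x J(θ, x, y)) for a batch of images x with true labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)        # J(θ, x, y)
    model.zero_grad()
    loss.backward()                            # fills x.grad with ∇x J(θ, x, y)
    adv_x = x + epsilon * x.grad.sign()        # one step in the direction that increases the loss
    return adv_x.clamp(0.0, 1.0).detach()      # keep pixels in a valid range

# Sweeping ε reproduces the trade-off the notebook illustrates: a larger ε fools the model
# more often, but the perturbation also becomes visible to us.
# for eps in (0.0, 0.05, 0.1, 0.2, 0.3):
#     adv = fgsm_attack(model, x, y, epsilon=eps)
#     accuracy = (model(adv).argmax(dim=1) == y).float().mean().item()
#     print(f"epsilon={eps:.2f}  accuracy={accuracy:.2%}")
```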

[Image: Results from Colab]

For any x → y, x indicates the actual class and y the predicted class.

As you can see, as you increase the epsilon value, the perturbations become more evident and the change becomes perceptible to our visual system. Nevertheless, our own visual system is robust enough to still recognize the correct class.

The method achieves this by finding how much each pixel in the given input image contributes to the loss value and adding the perturbation accordingly.

The Fast Gradient Sign Method isn’t the only option: there are other popular methods such as the adversarial patch, the single-pixel attack, adversarially perturbed 3D models, and many more. Let’s take a look at some of them.

Adversarial Patch

In 2018, Google came up with the unique idea of placing an adversarial patch in the image frame, in the following way.

[Image]

The paper shows how it is possible to show the model any image and have it classified as a toaster. The patch is designed in such a way that it can fool whatever underlying neural network is doing the classification into thinking it sees a toaster, no matter what image you give it; you just need to place the sticker beside the object. It works pretty well and is capable of fooling models that are not robust enough.
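
The paper’s full training recipe also varies the patch’s location, scale, rotation, and lighting, but a rough sketch of the core idea could look like the following. It assumes a PyTorch classifier and a DataLoader of ImageNet-sized images in [0, 1]; the patch size and learning rate are arbitrary, and 859 is the usual ImageNet index for "toaster".

```python
import torch
import torch.nn.functional as F

TOASTER = 859          # ImageNet class index for "toaster"
PATCH_SIZE = 64

def train_patch(model, loader, epochs=5, lr=0.05):
    """Optimize one universal patch that pushes every image it is pasted onto towards 'toaster'."""
    model.eval()
    for p in model.parameters():               # the model is fixed; only the patch is trained
        p.requires_grad_(False)
    patch = torch.rand(3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(epochs):
        for images, _ in loader:
            _, _, h, w = images.shape
            # paste the patch at a random location by padding it out to the full image size
            top = torch.randint(0, h - PATCH_SIZE + 1, (1,)).item()
            left = torch.randint(0, w - PATCH_SIZE + 1, (1,)).item()
            pad = (left, w - left - PATCH_SIZE, top, h - top - PATCH_SIZE)
            canvas = F.pad(patch, pad)                       # (3, h, w), patch surrounded by zeros
            mask = F.pad(torch.ones_like(patch), pad)        # 1 where the patch sits, 0 elsewhere
            patched = images * (1.0 - mask) + canvas * mask
            loss = -model(patched)[:, TOASTER].mean()        # maximize the toaster score
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                patch.clamp_(0.0, 1.0)                       # keep the patch a printable image
    return patch.detach()
```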

Printing an Adversarially Perturbed 3D Model

It’s not just images: you can also create a 3D object that is specifically designed to fool the model from any angle.

[Video demo]

Now that we’ve seen how these adversarial examples fool a neural network, the same examples can also be used to train the network, making the model robust against such attacks. This can also act as a good regularizer.
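
A minimal sketch of such an adversarial training loop, reusing the hypothetical `fgsm_attack` from the FGSM section above (the ε value and the even clean/adversarial loss split are arbitrary choices):

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.1):
    """Train each batch on both its clean and its FGSM-perturbed version."""
    model.train()
    for x, y in loader:
        adv_x = fgsm_attack(model, x, y, epsilon=epsilon)   # crafted against the current weights
        optimizer.zero_grad()                               # drop gradients left over from the attack
        loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(adv_x), y)
        loss.backward()
        optimizer.step()
```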

[Image]

From the above graph, it is evident that after training with adversarial examples the model is now less prone to being fooled.

And now the final question.

Do we humans have adversarial examples ourselves?

And I think the answer is Yes! For example, if you look at some optical illusions like this,

[Image]

You’ll notice that the lines don’t look parallel at first, but when observed closely, they are parallel to one another.

And yes, this is exactly what adversarial examples are: images in which we see something that we shouldn’t be seeing. So our human visual system can also be fooled by certain examples, but we are clearly robust to the adversarial examples that fool our neural networks.

Conclusion

These adversarial examples are not limited to images. Any model, from a simple perceptron to a natural language processing model, is prone to such attacks. But these attacks can be curbed to an extent with some strategies, such as Reactive and Proactive strategies, which will be discussed in detail in my upcoming articles.

On the brighter side, I think these adversarial examples hint at some very interesting new research directions that we can use to improve our existing models. I hope you learned something new today!

If you’d like to get in touch, connect with me on LinkedIn.

Original article: https://towardsdatascience.com/how-to-fool-a-neural-network-958ba5d82d8a
