If we want AI to work for us — not against us — we need collaborative design

by Mariya Yao

The trope “there’s an app for that” is becoming “there’s an AI for that.”

Want to assess the narrative quality of a story? Disney’s got an AI for that.

Got a shortage of doctors but still need to treat patients? IBM Watson prescribes the same treatment plan as human physicians 99% of the time.

Tired of waiting for George R.R. Martin to finish writing Game of Thrones? Rest easy, because a neural network has done the hard work for him.

But is all this rapid-fire progress good for humanity? Elon Musk, our favorite AI alarmist, recently took down Mark Zuckerberg’s positive outlook on AI. He dismissed the latter’s views as “limited”.

Whether you’re in Camp Zuck of “AI is awesome” or in Camp Musk of “AI will doom us all”, one fact is clear. With AI touching all aspects of our lives, intelligent technology needs deliberate design to reflect and serve human needs and values.

Biased AI has unexpected and severe consequences

Software applications used by U.S. government agencies for crime litigation and prevention algorithmically generate information that influences human decisions about sentencing, bail, and parole. Some of these programs have been found to erroneously attribute a much higher likelihood of committing further criminal activity to black defendants. The same algorithms also err in attributing much lower risk assessment scores to white defendants.

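Disparities like these are usually surfaced by comparing error rates across groups. The sketch below is a minimal illustration of that kind of audit using made-up records; it is not the actual risk-scoring data or the published methodology.

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("B", False, True),  ("B", False, False), ("B", True,  True),
]

def false_positive_rates(records):
    """False positive rate per group: flagged as high risk but did not reoffend."""
    fp = defaultdict(int)   # flagged high risk among non-reoffenders
    neg = defaultdict(int)  # all non-reoffenders
    for group, predicted_high, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if predicted_high:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

print(false_positive_rates(records))
# A large gap between groups is the kind of disparity the studies report.
```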

According to a study from Carnegie Mellon University, Google served targeted ads for getting high-paying jobs (those that pay more than $200,000) much more often to men (1,800 times) than women (just a paltry 300).

It is unclear whether the discrepancy is the result of advertisers’ preferences or an inadvertent outcome of the machine learning (ML) algorithms behind the ad recommendation engine. The outcome is that a professional landscape that already demonstrates preferential treatment for one gender over another is being reinforced at scale with technology.

In healthcare, AI systems are at risk of producing unreliable insights even when their algorithms are perfectly implemented. The underlying healthcare data is shaped by social inequalities: poorer communities lack access to digital healthcare, which leaves a gaping hole in the trove of medical information that feeds AI algorithms. Randomized controlled trials often exclude groups such as pregnant women, the elderly, or those suffering from other medical complications.

A Princeton University study demonstrated that ML systems inherit human biases found in English language texts. Since language is a reflection of culture and society, our everyday biases get picked up in the mathematical models behind natural language processing (NLP) tasks. Failing to carefully review and de-bias such models has real-world consequences. Google’s Perspective API is intended to analyze online conversations and flag “toxic” content. But it unintentionally flags names and foods associated with non-white groups as far more toxic than their white counterparts.

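The Princeton study measured these biases with word-embedding association tests. The sketch below only illustrates the core idea, using tiny hypothetical vectors; a real test would use pretrained embeddings (e.g. GloVe) and the study’s full statistic.

```python
import math

# Hypothetical 3-d embeddings; real tests use pretrained word vectors.
vectors = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.2, 0.8, 0.4],
    "he":     [1.0, 0.0, 0.2],
    "she":    [0.1, 1.0, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def association(word, male_terms, female_terms):
    """Positive means the word sits closer to the male terms, negative to the female terms."""
    male = sum(cosine(vectors[word], vectors[m]) for m in male_terms) / len(male_terms)
    female = sum(cosine(vectors[word], vectors[f]) for f in female_terms) / len(female_terms)
    return male - female

for word in ("doctor", "nurse"):
    print(word, round(association(word, ["he"], ["she"]), 3))
# A systematic skew across many occupation words is the bias the study quantified.
```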

Many gender, economic and racial biases in AI have been documented over the last few years.

With AI also becoming integral in the fields of security, defense and warfare, how do we design systems that don’t backfire?

Mechanisms and manifestos are a start…

It is not enough for AI systems to succeed at their core tasks. They must do so without harming human society. Designing safe and ethical AI is a monumental challenge, but a critical one to tackle now.

In a joint study, Google DeepMind and The Future of Humanity Institute explored the possibility of AI going rogue. They recommended that AI be designed to have a “big red button” that can be activated by a human operator to “prevent an AI agent from continuing a harmful sequence of actions.” In practical terms, this red button will be a trigger or a signal that will “trick” the machine to internally make a decision to stop, without recognizing it as a shutdown signal by an external agent.

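The paper itself frames this formally for reinforcement-learning agents; the toy loop below only illustrates the intuition and is not DeepMind’s implementation. The operator’s interrupt overrides the chosen action directly and never enters the reward signal, so the agent (here a stand-in `choose_action` stub) gains nothing by learning to resist it. All names are hypothetical.

```python
import random

def choose_action(state):
    """Stand-in for a learned policy."""
    return random.choice(["left", "right", "forward"])

def environment_step(state, action):
    """Stand-in for the environment; returns (next_state, reward)."""
    return state + 1, 1.0

def interrupt_requested():
    """Stand-in for the human operator's 'big red button'."""
    return random.random() < 0.05

SAFE_ACTION = "stop"

state, total_reward = 0, 0.0
for step in range(100):
    action = choose_action(state)
    if interrupt_requested():
        # The override happens outside the agent's decision and does not change
        # its reward, so the agent has no incentive to learn to avoid the button.
        action = SAFE_ACTION
    if action == SAFE_ACTION:
        break
    state, reward = environment_step(state, action)
    total_reward += reward

print("steps taken:", step, "reward:", total_reward)
```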

Meanwhile, the world’s largest association of technical professionals, the Institute of Electrical and Electronics Engineers (IEEE), published its General Principles for Ethically Aligned Design. It covers all types of artificial intelligence and autonomous systems.

The document sets a general standard for designers to ensure that AI and autonomous systems:

  1. do not infringe human rights
  2. are transparent to a wide range of stakeholders
  3. extend their benefits and minimize their associated risks
  4. clearly lay out accountability for their design and operation

…but collaborative design is critical for success

Hypothetical fail-safe mechanisms and hopeful manifestos are important. But they are insufficient to address the myriad of ways that AI systems can go wrong. Creations adopt the biases of their creators. Homogeneous development teams, insular thinking, and lack of perspective lie at the root of many of the challenges already manifesting in AI today.

Diversity and user-centered design in technology have never been so important. Luckily, as AI education and tooling become more accessible, designers and other domain experts are increasingly empowered to contribute to a field that was previously reserved for academics and a niche community of experts.

Three approaches to enhance collaboration in AI

Approach #1: Build user-friendly products to collect better data for AI

Elaine Lee, an AI designer at eBay, emphasizes that human input and user-generated data are critical for smarter AI. If the products collecting the requisite data to power AI systems do not encourage positive engagement, then the data generated from user interactions tend to be incomplete, incorrect, or compromised. In Lee’s words, “We need to design experiences that incentivize engagement and improve AI.”

Google Design’s Jess Holbrook recommends a 7-step approach to designing human-centered ML systems. He cautions against relying on algorithms to tell you what problems to solve. Instead he encourages designers to build systems that enable “co-learning and adaptation” between man and machine as technologies evolve. Holbrook also points out that many legitimate problems do not need ML to be successfully solved.

Collaborating with users seems like common sense. But few companies go beyond cursory user research and passive behavioral data collection. The next step is to enable a productive, long-term feedback loop in which users of AI systems not only actively define the functionality and vision of your technology, but also perform important tasks like flagging and minimizing biases.

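One way to make such a feedback loop concrete is to log explicit user flags alongside each AI output and route them back into review and retraining. The sketch below is a hypothetical structure, not any particular product’s pipeline.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feedback:
    prediction_id: str
    user_rating: int          # e.g. a 1-5 usefulness score
    flagged_as_biased: bool   # explicit bias flag from the user
    comment: str = ""

@dataclass
class FeedbackQueue:
    items: List[Feedback] = field(default_factory=list)

    def submit(self, fb: Feedback) -> None:
        self.items.append(fb)

    def for_human_review(self) -> List[Feedback]:
        """Bias flags and very low ratings go to a reviewer before retraining."""
        return [fb for fb in self.items if fb.flagged_as_biased or fb.user_rating <= 2]

queue = FeedbackQueue()
queue.submit(Feedback("pred-001", user_rating=4, flagged_as_biased=False))
queue.submit(Feedback("pred-002", user_rating=1, flagged_as_biased=True,
                      comment="Recommendation differs sharply by gender."))
print(len(queue.for_human_review()), "items need review")
```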

Approach #2: Prioritize domain expertise and business value over algorithms

Michael Schrage, a research fellow at MIT Sloan, argues that “strategically speaking, a brilliant data-driven algorithm typically matters less than thoughtful UX design. Thoughtful UX designs can better train machine learning systems to become even smarter.”

In order to develop “thoughtful UX”, you need domain expertise and business value. A common pattern in both academia and industry engineering teams is the propensity to optimize for tactical wins over strategic initiatives. While brilliant minds worry about achieving marginal improvements on competition benchmarks, the nitty-gritty issues of productizing and operationalizing AI for real-world use cases are often ignored. Who cares if you can solve a problem with 99% accuracy if no one needs that problem solved? Or if your tool is so arcane that no one is sure what problem it’s trying to solve in the first place?

In working with Fortune 500 enterprises looking to re-invent their workflows with automation and AI, a complaint I commonly hear about promising AI startups is this: “These guys seem really smart and their product has a lot of bells and whistles. But they don’t understand my business.”

Approach #3: Empower human designers with machine intelligence

Designing AI is yet another challenge where human and machine can combine forces for superior results. Software developer, author and inventor Patrick Hebron demonstrates that machine learning can be used to simplify design tools without limiting creativity or removing control from human designers.

Hebron describes several ways ML can transform how people interact with design tools. These include emergent feature sets, design through exploration, design by description, process organization, and conversational interfaces. He believes these approaches can streamline the design process and enable human designers to focus on the creative and imaginative side of the process instead of the technical aspects (i.e., how to use a particular design software). This way, “designers will lead the tool, not the other way around.”

Hebron描述了ML可以改變人們與設計工具交互方式的幾種方式。 其中包括緊急功能集,通過探索進行設計,通過描述進行設計,過程組織以及對話界面。 他認為,這些方法可以簡化設計過程,并使人類設計師能夠專注于過程的創造性和富于想象力的方面,而不是技術方面(即,如何使用特定的設計軟件)。 這樣,“設計人員將主導工具,而不是相反。”

Designing AI is yet another challenge where human and machine can combine forces for superior results. Software developer, author and inventor Patrick Hebron demonstrates that machine learning can be used to simplify design tools without limiting creativity or removing control from human designers.

設計AI是另一個挑戰,人與機器可以結合力量以獲得卓越的結果。 軟件開發人員,作家和發明家Patrick Hebron 證明了機器學習可用于簡化設計工具,而不會限制創造力或消除人類設計師的控制權。

Hebron describes several ways ML can transform how people interact with design tools. These include emergent feature sets, design through exploration, design by description, process organization, and conversational interfaces. He believes these approaches can streamline the design process, and enable human designers to focus on the creative and imaginative side of the process instead of the technical aspects such as how to use a particular design software. This way, “designers will lead the tool, not the other way around.”

Hebron描述了ML可以改變人們與設計工具交互方式的幾種方式。 其中包括緊急功能集,通過探索進行設計,通過描述進行設計,過程組織以及對話界面。 他認為,這些方法可以簡化設計過程,并使人類設計師能夠專注于過程的創造性和富于想象力的方面,而不是技術方面,例如如何使用特定的設計軟件。 這樣,“設計師將主導工具,而不是相反。”

The field of “AI Design” is nascent. We are still figuring out which best practices we should preserve and what new ones we need to invent. But many promising AI-driven creative tools already exist. Greater access to tools and education means that experts from all fields and functions can help evolve a field that has traditionally been driven by an elite few. With AI’s exponential impact on all aspects of our lives, this collaboration will be essential to developing technology that works for everyone, every day.

Thanks for reading. You can read more of my writing on AI by following me here and checking out the TOPBOTS blog.
