Learning bad is easy! Artificial intelligence will inherit human racial and gender prejudice.

(Original title: Learning bad is easy! Artificial intelligence will inherit human racial and gender biases)


Editor's note: In Republic.com, Cass Sunstein argued that algorithms shape how we perceive the world, and in Infotopia he was the first to clearly describe the harm of algorithms trapping people in "information cocoons." That is the effect of algorithms on the human mind. Algorithms are also what power artificial intelligence, and through them bias enters AI as well: because language itself is biased, artificial intelligence acquires those biases as it learns, and may even reinforce them. Whether that is entirely a bad thing, however, is still worth exploring.

In the past few years, programs such as Google Translate have made rapid progress in language translation. This advance has been driven by new machine learning techniques and the vast amount of text now available online, on which the algorithms can be trained and tested.

Artificial intelligence (AI) tools have revolutionized computers' ability to interpret everyday language, but they have also displayed significant gender and racial biases. According to the latest research published in the journal Science, as machines come closer to acquiring human-like language abilities, they also absorb the prejudices deeply ingrained in the patterns of language use.

As more and more of the decisions affecting our daily lives are handed over to machines, the finding raises the nightmarish prospect that existing social inequalities and prejudices will be reinforced in new and unpredictable ways.

The closer machines come to acquiring human-like language abilities, the more readily they absorb the prejudices ingrained in patterns of language use. Image: KTS Design/Getty Images/Science Photo Library RF

Joanna Bryson, a computer scientist at the University of Bath and co-author of the paper, said: "Many people think this shows that AI is biased. It is not. This shows that we are biased and that AI is learning this bias."

But Bryson also warned that AI has the potential to reinforce existing prejudices because, unlike humans, algorithms cannot consciously resist the biases they have learned. She said: "The danger is an AI system that is not driven by morality. Once you have such a system, that is bad."

Word Embeddings: The Cultural and Social Meaning Behind Words

The paper focuses on a machine learning tool known as "word embedding," which has already transformed the way computers interpret speech and text. Some argue that the natural next step is for machines to develop human-like abilities such as common-sense judgment and logic.

Arvind Narayanan, a computer scientist at Princeton University and the paper's senior author, said: "The main reason we chose to study word embeddings is that, in recent years, efforts to help machines understand language have achieved amazing success."

This method, already applied to web search and machine translation, works by building a mathematical representation of language in which each word is abstracted into a series of numbers (a vector) based on the other words it most frequently appears alongside. Perhaps surprisingly, the algorithm appears to capture the rich cultural and social context behind a word that a dictionary definition cannot convey.
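To make the vector idea concrete, here is a minimal sketch in Python using made-up three-dimensional vectors (real embeddings such as GloVe or word2vec have hundreds of dimensions learned from co-occurrence statistics). Words that appear in similar contexts end up as nearby vectors, and cosine similarity measures how close they are.

```python
# Minimal sketch of how word embeddings encode association.
# The vectors below are hypothetical placeholders, not real embeddings.
import numpy as np

embeddings = {
    "flower":     np.array([0.9, 0.8, 0.1]),
    "pleasant":   np.array([0.8, 0.9, 0.2]),
    "insect":     np.array([0.1, 0.2, 0.9]),
    "unpleasant": np.array([0.2, 0.1, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means closely associated, near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["flower"], embeddings["pleasant"]))  # high similarity
print(cosine(embeddings["insect"], embeddings["pleasant"]))  # low similarity
```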

For example, in this mathematical "language space," words for flowers cluster near words linked to pleasantness, while words for insects cluster near unpleasant words, reflecting common views about the relative merits of flowers and insects. The latest paper shows that some of the more insidious implicit biases seen in human psychology experiments are also readily learned by the algorithm. "Female" and "woman" were more closely associated with arts and humanities occupations and with the home, while "male" and "man" were closer to mathematics and engineering. At the same time, the AI system was more likely to associate European-American names with pleasant words, such as "talent" or "happiness," while African-American names were more commonly associated with unpleasant words.
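The study quantifies such associations with a statistic the authors call the Word Embedding Association Test (WEAT). The sketch below illustrates the core idea using placeholder vectors (the actual study uses pretrained embeddings): a target word's bias is the difference between its average similarity to one attribute set and its average similarity to another.

```python
# Sketch of a WEAT-style association score with toy, hypothetical vectors.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word_vec, attrs_a, attrs_b):
    """Positive -> the word leans toward attribute set A; negative -> toward B."""
    sim_a = np.mean([cosine(word_vec, v) for v in attrs_a])
    sim_b = np.mean([cosine(word_vec, v) for v in attrs_b])
    return sim_a - sim_b

# Placeholder vectors standing in for a name and two attribute word sets.
name_vec   = np.array([0.7, 0.1, 0.3])
pleasant   = [np.array([0.8, 0.2, 0.2]), np.array([0.6, 0.1, 0.4])]
unpleasant = [np.array([0.1, 0.9, 0.1]), np.array([0.2, 0.8, 0.3])]

print(association(name_vec, pleasant, unpleasant))
```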

This finding suggests that the algorithm has learned the same prejudice that leads people (at least in the UK and the United States) to link pleasant words with white faces in implicit association tests.

These prejudices have a profound effect on human behavior. Studies have shown that, for an otherwise identical resume, a candidate with a European-American name is around 50% more likely to be invited to interview than a candidate with an African-American name. The latest results suggest that, unless explicitly programmed otherwise, algorithms will be riddled with the same social biases.

"If you don't believe there is a link between names and racism, that's evidence," Bryson said.

In this study, the machine learning tool was trained and tested on the "Common Crawl" corpus, a body of text scraped from material published online containing roughly 840 billion words. Similar results were obtained when Google News data was used instead.

Algorithms provide opportunities for dealing with bias

Sandra Wachter, a researcher in data ethics and algorithms at Oxford University, said: "The world is biased and historical data is biased, so it is not surprising that we get biased results." Rather than algorithms representing a threat, she added, they could offer an opportunity to address bias and to eliminate it where appropriate.

"At least we may be aware of this prejudice when algorithms are biased," she said. "And humans can lie about not hiring someone. In comparison, we don't expect the algorithm to deceive us."

However, Wachter notes that the challenge ahead is how to eliminate unwarranted prejudice from algorithms while retaining their powerful interpretive abilities; after all, the algorithms are designed to understand language as it is actually used.

"In theory, we can establish systems to detect biased decisions and take action on them," Wachter said. She joined others in calling for the establishment of artificial intelligence supervision. "This is a complicated task, but it is us." Social responsibility cannot be avoided."????
