
Don’t dismiss all black box models. Make them work.

Last updated: 2019-04-15



Black box systems can have a real and positive impact on fields such as science, technology, engineering, and mathematics: they can generate value, improve outcomes, and inspire innovation.

Text | Yang Xiaofan

Leifeng.com AI Technology Review: Black box systems such as deep learning have long drawn criticism. Even though deep learning has made some progress on interpretability, many voices of doubt and resistance remain. Recently, however, Elizabeth A. Holm, a professor of materials science and engineering at Carnegie Mellon University, published a short perspective in Science that, unusually, offers black box systems some affirmation. Her argument is a reminder to reconsider whether backing away from black box systems at the first mention of them is really best practice. Leifeng.com AI Technology Review's full translation follows.

Once upon a time, science fiction writer Douglas Adams imagined that humanity had built the most powerful computer ever, named Deep Thought, whose programs could answer the most profound questions humans could ask, such as the meaning of life, why the universe exists, and every other great question. After computing for 7.5 million years, Deep Thought delivered its answer: the number 42.

As AI systems enter every area of human endeavor, including science, engineering, and health care, we must now confront the question Douglas Adams so cleverly posed in that story: Is an answer worth having if we do not understand how it was reached? Is a black box good or bad?

For most of my colleagues in the physical sciences and engineering, the biggest reason not to use AI methods such as deep learning is that they do not know how to explain how the AI's answers are generated. This objection carries real weight, and the concerns behind it may be practical, ethical, or even legal. The mission of scientists and the responsibility of engineers demand not only predicting what will happen but also understanding why it happens.

An engineer can learn to predict whether a bridge will collapse, and an AI system can learn to do the same. But only the engineer can explain how the prediction was reached, in terms of a physical model, then communicate that reasoning to others and let them evaluate it. Suppose there are two bridges: a human engineer judges that one will not collapse, and an AI judges the same of the other. Which bridge would you trust more?

It is not just scientists and engineers who are unconvinced by the answers of black box systems. The European Union's General Data Protection Regulation (GDPR), which took effect in 2018, requires that automated decision-making systems based on personal data be able to provide the people affected with "meaningful information about the logic involved." How this requirement will be enforced in legal practice is still being debated, but it already shows that the legal system distrusts unexplainable systems.

In this climate of society-wide suspicion, the response of AI researchers is understandable. Rather than publicly championing black box decision systems, they are doing more research into how black box systems reach their decisions, the problem we usually call "explainability." It is, in fact, one of the biggest challenges in computer science today.

However, rejecting all black box systems may be a bit reckless. In reality, scientists and engineers, being human, make decisions based on accumulated judgment and experience just like everyone else, relying on the "deep learning system" inside their own brains.

In this sense, neuroscience faces the same explainability challenge as computer science. Yet we routinely accept decisions and conclusions made by humans without demanding a full account of where they came from. By the same token, the answers given by AI systems may be worth considering and may carry similar benefits; when they can be validated, we should be willing to use them. There are at least three situations in which this is the case.

The first and most obvious situation is when the cost of a wrong answer is low relative to the value of a correct one. Targeted advertising is a classic example.

From the advertiser's perspective, the cost of showing an ad that the target audience does not want to see is small, while a successful ad can bring in considerable revenue. In my own research field, materials science, image segmentation usually requires a human to manually outline the boundaries of the complex internal structures of interest in an image of a material. The process is so costly that, whether in a doctoral thesis or an industrial quality-control system, any step requiring image segmentation must keep the number of images as small as possible.

An AI system, by contrast, can carry out large-scale image segmentation quickly and with high (though not perfect) fidelity. Perfect segmentation is not necessary here, because the cost of a few misclassified pixels is far lower than the time and effort a graduate student would spend doing the work without the AI system.
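
To make the cost argument concrete, here is a minimal back-of-the-envelope Python sketch comparing manual annotation with imperfect AI segmentation. Every number in it (per-image value, labeling cost, error rate, cost per error) is a hypothetical, chosen only to illustrate the regime where errors are cheap and human time is expensive.

    # All numbers are hypothetical, for illustration only.

    def net_value(images, value_per_image, cost_per_image, error_rate, cost_per_error):
        """Expected net value of segmenting a batch of images."""
        gain = images * value_per_image          # value of having the segmentations
        labeling_cost = images * cost_per_image  # human or compute time per image
        error_cost = images * error_rate * cost_per_error
        return gain - labeling_cost - error_cost

    # Manual annotation: near-perfect, but each image costs hours of expert time.
    manual = net_value(images=1000, value_per_image=10.0,
                       cost_per_image=50.0, error_rate=0.01, cost_per_error=5.0)

    # AI segmentation: cheap per image, with more frequent small errors.
    ai = net_value(images=1000, value_per_image=10.0,
                   cost_per_image=0.05, error_rate=0.08, cost_per_error=5.0)

    print(f"manual: {manual:,.0f}   ai: {ai:,.0f}")  # manual: -40,050   ai: 9,550

The point is not the specific numbers but the structure of the comparison: when the cost term is dominated by human time rather than by a few misclassified pixels, the imperfect black box wins by a wide margin.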

The second situation that favors a black box system is equally obvious, but slightly subtler: if a black box system produces the best results, we should use it. For example, when evaluating standard planar medical images, a trained AI system can help human radiologists make more accurate cancer assessments. The cost of a wrong answer here (whether a false positive or a false negative) is not low, but the black box system reaches an accuracy that no other approach can match, making it the current best solution.
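
One common deployment pattern keeps the radiologist in the loop. The minimal Python sketch below routes each scan by the model's confidence: clearly positive and clearly negative predictions are accepted for confirmation or spot checks, and everything uncertain is deferred entirely to a human. The single malignancy probability and the thresholds are illustrative assumptions, not the setup of any particular study.

    # The model and thresholds here are hypothetical, for illustration only.

    def triage(probability, low=0.05, high=0.95):
        """Route a scan based on a classifier's malignancy probability.

        Confident predictions are accepted for confirmation or spot checks;
        uncertain ones are deferred entirely to a human radiologist.
        """
        if probability >= high:
            return "likely positive: radiologist confirms and plans follow-up"
        if probability <= low:
            return "likely negative: cleared, with random spot checks"
        return "uncertain: full radiologist review"

    for p in (0.99, 0.50, 0.01):
        print(f"p = {p:.2f} -> {triage(p)}")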

Of course, some would argue that letting AI read X-rays is acceptable in part because a human doctor will always review the AI's results, whereas letting AI drive a car is more worrying: the black box makes decisions that affect life and death, yet leaves no room for human intervention. Even so, the day will come when self-driving cars are safer than human-driven ones, outperforming human drivers in both accident and fatality rates.

If we measure by reasonable metrics, we will know as soon as that day arrives. But whether human drivers should give way to AI drivers will be a decision for society as a whole, one that must weigh moral values, fairness, the accountability of non-human agents, and much else.

Note, however, that being able to list these situations does not mean that black box models are automatically permitted in them. Both cases above assume an idealized black box, with someone responsible for its operation who can determine its costs or clearly define what the best result is. Either assumption can fail. AI systems can suffer from a range of shortcomings, including bias, inapplicability outside the training domain, and brittleness (being easily fooled).

More importantly, evaluating costs and optimal outcomes is itself a complex decision problem that must weigh economics, individual needs, social and cultural factors, ethics, and much else. Worse, these factors can be intertwined: a biased model may carry hidden costs, showing up either as wrong predictions by the model itself or as inaccurate outside assessments of the model's fairness.

A brittle model may contain blind spots that lead to disastrously wrong decisions at some point. As with any decision-making system, using a black box system still demands knowledge, judgment, and responsibility.
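
As one concrete guardrail against the "outside the training domain" failure above, a deployment can refuse to trust the model on inputs that look nothing like its training data. The minimal Python sketch below flags such inputs with a crude per-feature z-score test; the feature vectors, statistics, and threshold are all illustrative assumptions, and real systems use more careful out-of-distribution detectors.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for feature vectors of the model's training data.
    train_features = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
    mean = train_features.mean(axis=0)
    std = train_features.std(axis=0)

    def out_of_domain(x, z_threshold=4.0):
        """Flag inputs with any feature far outside the training distribution."""
        z = np.abs((x - mean) / std)
        return bool(np.any(z > z_threshold))

    in_dist = rng.normal(0.0, 1.0, size=8)   # looks like the training data
    far_out = np.full(8, 10.0)               # clearly unlike the training data

    print(out_of_domain(in_dist))  # False: the model's answer may be usable
    print(out_of_domain(far_out))  # True: defer to a human instead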

By definition, humans cannot explain how a black box algorithm arrives at a specific answer. Yet black box systems can still bring us value when they produce the best available output, when the cost of a wrong answer is small, or when they inspire new thinking.

Although an AI's "thought processes" are limited, possibly biased, and sometimes outright wrong, they are very different from the way humans think, and so they may reveal new connections and new approaches. This gives black box systems a third use: as tools that guide human thinking and questioning.

For example, in a breakthrough medical imaging study, scientists trained a deep learning system to diagnose diabetic retinopathy from photographs of the eye, with results that matched or exceeded the performance of a panel of ophthalmologists. Even more surprising, the system could also extract information that plays no part in an ophthalmological diagnosis, including heart disease risk, age, and sex.

No one had previously paid attention to differences between the retinas of the two sexes, so this black box discovery hands researchers a new lead to pursue. Following up on such questions is work for human intelligence and for explainable approaches, not for the black box itself.

With that, let us return to Deep Thought's answer of 42. We cannot use black box AI systems to establish causal relationships, build systems of knowledge and logic, or achieve understanding. A black box cannot tell us why the bridge collapsed, answer the great questions of life and the universe, or explain everything in the world.

At least for now, those problems belong to the realm of human intelligence and of evolving explainable AI. In the meantime, we can still accept black box systems where it is appropriate to do so. They can have a real and positive impact on science, technology, engineering, mathematics, and other fields, generating value, improving outcomes, and inspiring innovation.

Via science.sciencemag.org/content/364/6435/26, Science, 5 April 2019: Vol. 364, Issue 6435, pp. 26-27. Translated by Leifeng.com AI Technology Review.

- END -



 