2020 has been accompanied by many unprecedented things. After this bumpy year, what kind of new era will we usher in?
On the last day of 2020, Andrew Ng invited top AI scholars such as Harry Shum and Fei-Fei Li to look ahead to the development of AI technology in 2021. As talent continues to flow into industry and the computing power of conventional architectures hits a bottleneck, what should practitioners be watching? Here is what they said...
As the new year approaches, Andrew Ng shared his three wishes for the development of artificial intelligence in the coming year:
- Close the gap between proof of concept and production.
While building a good model is important, many people now realize that more work is needed to put it into practice, from data management to deployment to tracking. In 2021, I hope we can better understand the full cycle of machine learning projects, build MLOps tools to support related work, and systematically build, produce, and maintain AI models.
- Strengthen the shared values of the AI community.
Over the past decade, Deeplearning.ai has grown from a few thousand members to millions worldwide, and part of our success comes from opening our arms to anyone who wants to join us. At the same time, this can also lead to some misunderstandings. Therefore, it is more important than ever to establish a set of shared values.
- Ensure that the results of our work are fair and equitable.
Issues of bias and fairness in the field of artificial intelligence have been widely discussed. There is still a lot of difficult and important work to be done in these areas, and we must not let up. At the same time, AI's role in widening the gap between rich and poor has received less attention. Many high-tech companies seem to operate on a "winner takes all" principle. Is the world becoming one in which wealth is concentrated in a handful of companies? How can we ensure fair distribution?
I’m very optimistic about AI and the role you all will play in it in 2021, and I look forward to us solving these challenging problems together!
In addition, many famous scholars and entrepreneurs from the AI community also shared their outlook for 2021.
Ayanna Howard, Georgia Institute of Technology: Training ethical AI
Ayanna Howard, Director of Interactive Computing, Georgia Institute of Technology
As AI engineers, we have the tools to design and build technology-based solutions, but many AI developers do not see it as their responsibility to address potential negative impacts, and we already see the resulting inequalities in health care, educational opportunities, and more.
In the new year, I hope the AI community can reach a broad consensus on how to build ethical AI.
We need to consider our work in the context of its deployment and take responsibility for the potential harm it may cause, just as we take responsibility for identifying and fixing bugs in our code.
This sounds like a sea change, but it can happen quickly. Just like during the COVID-19 pandemic, many companies implemented work-from-home policies that they previously thought were impossible. A characteristic of technology is that when the top players change, others follow to avoid losing their competitive advantage. It only takes a few leaders to set a new direction, and the entire field will change.
Stanford University Professor Fei-Fei Li: Activating the AI ecosystem and reversing the trend of top talent flowing to the industry
Fei-Fei Li, member of the U.S. National Academy of Engineering, professor at Stanford University, and renowned artificial intelligence scholar
I hope that 2021 will be the year that the U.S. government makes a firm commitment to supporting AI innovation.
The United States has always been a leader in science and technology because its innovation ecosystem takes full advantage of contributions from academia, government, and industry. However, the emergence of artificial intelligence has tilted it toward industry, largely because the three most important resources for AI research and development - computing power, data, and talent - are concentrated in a few companies. For example, according to data in the AI21 Labs paper "THE COST OF TRAINING NLP MODELS", OpenAI and Microsoft may have spent $5 million to $10 million worth of resources to train the large-scale language model GPT-3. No American university can perform calculations on this scale.
Big data is also crucial to the development of artificial intelligence. But today, the richest databases are in the hands of large companies. The lack of sufficient computing power and data has hindered the research of academic researchers and accelerated the flow of top AI talents from academia to private enterprises.
In 2020, the U.S. government provided some new support for colleges and universities, but it is far from enough. At the Stanford Institute for Human-Centered AI (HAI), which I co-direct with philosopher John Etchemendy, we have proposed a National Research Cloud initiative. The initiative will invest $1 billion to $10 billion per year over the next decade to inject new energy into collaboration between academia, government, and industry. This initiative will provide academic researchers with the computing power and data they need to conduct cutting-edge research, which in turn will attract and retain new faculty and students, potentially reversing the loss of academic researchers to industry.
Progress on the National Research Cloud is encouraging, and several agencies, including the National Science Foundation and the National Institutes of Health, have issued calls for proposals for AI projects.
AI is a tool, and a powerful one at that. But every tool is a double-edged sword, and how it is used inevitably reflects the values of its designers, developers, and implementers. There are still many challenges to ensuring that AI is safe, fair, protects individual privacy, and benefits all of humanity. Activating the AI research ecosystem is an important part of addressing these issues.
Matthew Mattina: I hope TinyML and small devices will play a greater role
Matthew Mattina, Distinguished Engineer and Senior Director, Machine Learning Research Lab, Arm
Imagine performing more than a trillion multiplication operations per second in an area the size of the tip of a standard No. 2 pencil. This is achievable with today's 7nm semiconductor technology. Combining this massive computing power with deep neural networks on small, low-cost, battery-powered devices will help address challenges ranging from Covid-19 to Alzheimer's disease.
The neural networks behind remarkable systems like AlphaGo, Alexa, GPT-3, and AlphaFold need this computing power to work their magic. These systems typically run on data center servers, GPUs, and massive power supplies, but soon they will be able to run on devices that consume less electricity than a single LED bulb on a holiday light string.
A machine learning technique called TinyML is bringing these large, math-heavy neural networks to sensors, wearables, and phones. Neural networks rely heavily on multiplication, and emerging hardware performs those multiplications using low-precision numbers (8 bits or fewer). Compared with typical 32-bit single-precision floating-point multipliers, low-precision multipliers let chip designers fit far more of them into a smaller area and power envelope. Research shows that in many real-world cases, using low-precision numbers in neural networks has little impact on accuracy. This approach can provide ultra-efficient neural network inference where it is most needed.
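The low-precision idea can be sketched in a few lines of NumPy. The following is a minimal illustration of symmetric 8-bit quantization, one common scheme (not any specific TinyML or Arm implementation): floats are mapped to int8 with a per-tensor scale, the multiply-accumulate runs in integer arithmetic, and the result is rescaled back to float with only a small error.

```python
import numpy as np

def quantize_int8(x):
    """Map a float32 array to int8 using a symmetric per-tensor scale."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a_q, a_scale, b_q, b_scale):
    """Multiply in low precision; accumulate in int32, then rescale to float."""
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    return acc.astype(np.float32) * (a_scale * b_scale)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 3)).astype(np.float32)

a_q, a_s = quantize_int8(a)
b_q, b_s = quantize_int8(b)

exact = a @ b                               # full float32 reference
approx = int8_matmul(a_q, a_s, b_q, b_s)    # int8 approximation
print(np.max(np.abs(exact - approx)))       # small quantization error
```

The int32 accumulator mirrors what real low-precision hardware does: the individual operands are 8-bit, but sums of products are kept at higher precision so error does not compound across the dot product.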
For example, in the response to the Covid-19 pandemic, detecting and confirming infected people has been a major obstacle. Recent research shows that an ensemble of neural networks trained on thousands of "forced cough" audio clips may be able to detect whether a cougher is infected with Covid-19, even if they show no symptoms. The neural networks used in this case are computationally expensive, requiring trillions of multiplication operations per second. TinyML can run such cough-analysis networks on small devices.
As we head into 2021, I hope that complex medical applications powered by massive neural networks running on small devices can usher in a new era of personalized medicine, improving the lives of billions of people.
Dr. Harry Shum: I hope AI can help humans create art
Harry Shum, foreign member of the U.S. National Academy of Engineering, chairman of XiaoIce, and dual-appointed professor at Tsinghua University
In 2021, I hope the AI community will create more tools to help humans unleash their creativity. AI will help people around the world communicate and express their emotions in their own unique ways.
We have created machines that excel at logical tasks and can perform large-scale calculations much faster than humans, an achievement vividly demonstrated in recent lunar exploration missions. In our daily lives, we use tools such as Microsoft Word and Excel to increase our productivity. However, there are other tasks where humans still have an absolute advantage, especially in the field of art.
The human left brain is responsible for logic, while the right brain is responsible for creativity and imagination. The two complement each other. The creative right brain ignites many daily interactions. We use language to communicate with each other and express abstract concepts and emotions. We also express ourselves artistically, creating music, paintings, dance and design.
Recent advances in AI, especially deep learning techniques such as generative adversarial networks and language models (such as GPT-3), have made it possible to synthesize realistic images and plausible text from scratch. The XiaoIce chatbot has demonstrated human-like performance in poetry, painting, and music. For example, XiaoIce helped WeChat users write more poems in one week than all the poems in Chinese history!
Top performers in the arts, such as painting, music, poetry, or dance, must undergo years of training. There is a saying that it takes 10,000 hours of practice to become an expert in a field. Tools like XiaoIce can significantly reduce the time investment, allowing everyone to gain more complex, creative, and imaginative ways of expression.
I expect to see more AI-created tools in 2021 to help people express their artistic ideas and inspirations. AI has proven that it can help improve productivity, so now let us look forward to AI helping humans unleash their creativity.
Ilya Sutskever, co-founder of OpenAI: Looking forward to the integration of language and vision
Ilya Sutskever, co-founder and chief scientist of OpenAI
This past year, general-purpose models generated economic value for the first time. GPT-3 showed that large language models have amazing linguistic capabilities and can perform a wide range of useful tasks. I expect that the models that come next will be even better, and that the best models of 2021 will eclipse the best models of 2020, while also unlocking many applications that are unimaginable today.
In 2021, language models will begin to understand the visual world. Text alone can tell a lot about the world, but it’s incomplete because we live in a visual world, too. The next generation of AI models will be able to edit text inputs and generate images, and we hope they’ll be able to better understand text based on the images they’ve seen.
The ability to jointly process text and images will make models smarter. Humans are exposed not only to what they read, but also to what they see and hear. If models can process similar data, they can learn concepts in a human-like way. This idea has yet to be proven, and I hope to see progress in this area in 2021.
As models get smarter, we also need to make them safe. GPT-3 can handle multiple tasks, but it's not as reliable as we thought it would be. We want to give the model a task and have it return an output that doesn't need to be changed or confirmed. At OpenAI, we've come up with a new approach: reinforcement learning with human feedback. This approach allows human judges to use reinforcement signals to guide the model's behavior in the way we want, so we can reinforce desired behaviors and suppress unwanted behaviors.
Systems like GPT-3 absorb information passively. They take in data and internalize its correlations, which is a big problem when the training dataset contains examples of behaviors we don't want the model to imitate. Using reinforcement learning with human feedback, we can have the language model exhibit a variety of behaviors, and human judges give feedback on whether the behavior meets expectations. We found that the GPT-3 language model is able to learn quickly from this feedback, allowing us to quickly and precisely adjust the model's behavior with relatively little human interaction.
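The feedback loop Sutskever describes can be sketched as a toy example. The sketch below is purely illustrative (it is not OpenAI's method): a tiny softmax "policy" over three canned behaviors is sampled, a stand-in judge returns +1 or -1, and a REINFORCE-style update shifts probability mass toward the approved behavior.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "policy": a softmax over three canned behaviors for a fixed prompt.
behaviors = ["helpful answer", "evasive answer", "toxic answer"]
logits = np.zeros(3)  # start indifferent between behaviors

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def human_feedback(i):
    """Stand-in for a human judge: approve only the helpful behavior."""
    return 1.0 if i == 0 else -1.0

lr = 0.5
for _ in range(200):
    p = softmax(logits)
    i = rng.choice(3, p=p)   # model exhibits a behavior
    r = human_feedback(i)    # judge scores it
    # REINFORCE update: gradient of log p(i) w.r.t. logits is onehot(i) - p.
    grad = -p
    grad[i] += 1.0
    logits += lr * r * grad

print(softmax(logits).round(3))  # mass concentrates on "helpful answer"
```

The key property this illustrates is the one in the text: the judge never labels a dataset up front; sparse reward signals on sampled behavior are enough to steer the policy, reinforcing desired behaviors and suppressing unwanted ones.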
By having language models process both data modalities, text and images, and train them through interaction with humans, we see a path to making models more powerful, more trustworthy, and therefore more useful to more people. This path will provide more exciting developments in 2021.
Source: Synced. Editors: Du Wei, Mowang, Danjiang.
This article comes from the artificial intelligence weekly newsletter "The Batch" (WeChat public account @deeplearningaichina), edited by Andrew Ng, founder of Deep Learning AI and CEO of Landing AI.
Original link: https://blog.deeplearning.ai/blog/the-batch-new-year-wishes-from-fei-fei-li-harry-shum-ayanna-howard-ilya-sutskever-matthew-mattina