
Federated learning offers a new learning paradigm with broad applications: data never leaves the local device, yet models still enjoy the benefits of training on big data.

Latest update time: 2021-09-03 13:34


It’s only been two years since it was proposed.

Text | Jia Wei

Leifeng.com AI Technology Review: Recently, Blaise Agüera y Arcas, one of the originators of the concept of federated learning, held an online workshop on federated learning from South Korea for a global audience.

Blaise Agüera y Arcas joined Google in 2014. Before that, he was a Distinguished Engineer at Microsoft. At Google, Blaise has led the company's on-device machine intelligence work and is also responsible for basic research and new product development.

The concept of federated learning was first proposed by Blaise and colleagues in a post on the Google AI Blog in 2017. Although only two years have passed since then, research on it has become very active, with at least one related paper appearing almost every day. At the end of 2018, federated learning even became the subject of an IEEE international standard project, promoted by Professor Qiang Yang of HKUST and others.

The main reason why federated learning has been able to quickly transform from an idea into a discipline in such a short period of time is that federated learning technology, as a learning paradigm, can solve the "data island" problem while ensuring user data privacy.

However, unlike the domestic focus on federated learning for "data islands" between enterprises, Blaise and others (perhaps also representing Google to some extent) are more concerned with federated learning on devices, which is also the application scenario when the concept of federated learning was first proposed.

1. The initial motivation for proposing federated learning

Blaise started researching federated learning shortly after joining Google five years ago. It wasn't until 2017, when the team had achieved some results, that they published them in a blog post.

At first, federated learning was just a concept, but it quickly grew into a research field of its own within artificial intelligence. Thousands of papers already discuss federated learning, and NeurIPS, the top machine learning conference being held in Vancouver this December, will feature a dedicated session on it. Meanwhile, many companies are now building their models on this foundation. The entire artificial intelligence community has clearly begun to take the technology seriously.

So why has federated learning been taken seriously by the entire community so quickly?

As you all know, artificial intelligence has now developed to a point where we hope to be able to do more work with less data. This is also one of the core topics of current artificial intelligence.

Neural networks can do a lot of cognitive work, such as language processing, speech synthesis, image recognition, and even playing Go. These can reach or even surpass the level of humans. This is what we have achieved in the past few years. However, compared with humans, current neural networks still lack one thing, which is learning efficiency. They need a lot of data for training. Therefore, when some large companies, such as Google, Microsoft, and Amazon, began to provide artificial intelligence services, they needed to collect a lot of data to train large neural networks. This is also what the entire community has been doing.

For intelligent applications on the device side (such as mobile phones), the usual model works like this: data generated by the user on the device is uploaded to a server; a neural network deployed on the server is trained on the large volume of collected data; and the service provider then serves users with the resulting model. As data on users' devices is continuously generated and uploaded, the server keeps updating the model on that basis. This is, obviously, a centralized approach to model training.

However, this approach has several problems: 1) The user's data privacy cannot be guaranteed, and all data generated during the user's use of the device will be collected by the service provider; 2) It is difficult to overcome the lag caused by network delays, which is particularly evident in services that require real-time performance (such as input methods).

Blaise and others wondered whether they could create a large-scale distributed neural network model training framework so that users could get the same service experience while keeping their data local (training on their own devices).

2. Federated Learning on Devices

The solution is: upload weights, not data.

We know that neural network models are made up of connections between neurons in different layers. The connections between layers are realized through weights. These weights determine what the neural network can do: some weights are used to distinguish between cats and dogs; another group can distinguish between tables and chairs. Everything from visual recognition to audio processing is determined by weights. Training a neural network model is essentially training these weights.


The device-side federated learning that Blaise proposed no longer requires users to send data to a server where the model is trained. Instead, users train locally and upload the resulting model weights in encrypted form. The server integrates the models of thousands of users and then sends improved model updates back to them.
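To make "upload weights, not data" concrete, here is a minimal Python sketch of what one client's contribution could look like. The toy linear model, the `local_update` function, and the SGD details are illustrative assumptions, not Google's actual implementation; a real deployment would also encrypt the update before upload.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear model y = w . x, trained with one pass of SGD on local data.
def local_update(global_weights, features, labels, lr=0.1):
    w = global_weights.copy()
    for x, y in zip(features, labels):
        pred = w @ x
        w -= lr * (pred - y) * x      # plain SGD step, entirely on-device
    return w - global_weights         # only the weight *delta* leaves the device

global_w = np.zeros(3)
X = rng.normal(size=(20, 3))          # private local data: stays on the phone
y = X @ np.array([1.0, -2.0, 0.5])

delta = local_update(global_w, X, y)
# `delta` (a handful of floats) is what would be encrypted and uploaded,
# not the raw data X and y.
```

The point of returning the delta rather than the full weights is that the server only needs each client's improvement relative to the shared model it handed out.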

Take input methods, a typical intelligent-recommendation application. When people use Google's Gboard keyboard to message family and friends, the traditional approach uploads their typing data to Google's servers, where large amounts of collected data train recommendations that better match user habits. With federated learning, the keyboard data always stays on the device. A continuously updated model on the user's phone learns from that data, encrypts the updated weights, and uploads them to the server. After receiving models from a large number of users, the server trains on them comprehensively and feeds improvements back for the next round of model updates.

It is worth emphasizing that the on-device model is compressed, unlike the large neural network models on servers, so the energy consumed by local training is very small, almost undetectable. Blaise also offered a vivid metaphor: just as people update their brain's cognitive system by dreaming while they sleep, the device can train and update its model while it is idle. Overall, then, this has no noticeable impact on the user experience.

Let’s summarize the process of on-device federated learning: 1) The device downloads the current version of the model; 2) Improve the model by learning from local data; 3) Summarize the improvements to the model into a relatively small update; 4) The update is encrypted and sent to the cloud; 5) Instantly integrate with other users’ updates as an improvement to the shared model.

The whole process has three key steps:
1) Each phone improves the model locally, personalized to its user's usage;
2) The individual improvements are combined into an overall model update;
3) The update is applied to the shared model, and the cycle repeats.
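The aggregation at the heart of this cycle can be sketched as Federated Averaging, the algorithm Google described for this setting: the server combines client updates weighted by how much data each client trained on. The function name and the plain-array inputs are simplifying assumptions; a real server would first decrypt or securely aggregate the uploads.

```python
import numpy as np

def federated_average(global_weights, client_updates, client_sizes):
    """Weight each client's update by its share of the total training examples."""
    total = sum(client_sizes)
    avg = sum(n / total * u for u, n in zip(client_updates, client_sizes))
    return global_weights + avg       # the improved shared model

global_w = np.zeros(3)
updates = [np.array([0.2, 0.0, -0.1]),
           np.array([0.4, 0.2, 0.1])]
sizes = [100, 300]                    # examples seen on each device

new_global = federated_average(global_w, updates, sizes)
# weighted mean: 0.25*[0.2, 0, -0.1] + 0.75*[0.4, 0.2, 0.1] = [0.35, 0.15, 0.05]
```

Weighting by example count keeps clients with very little data from dominating the shared model.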

The advantages are obvious.

First, we don’t have to upload data to the cloud, so service providers can’t see the user’s data, which can improve the privacy of user data. Therefore, in this way, we don’t have to make a trade-off between privacy and functionality, but can have both. This is particularly important in the current situation where data privacy is increasingly valued.

Secondly, it reduces latency. Even with the 5G era approaching, network speed cannot be guaranteed in every location under all circumstances. If all user data must be uploaded to the cloud, and the service itself is served back from the cloud, then network latency will badly degrade the user experience wherever connectivity is slow. Services backed by federated learning avoid this, because the service itself runs locally.

Of course, perhaps another benefit is that under the traditional method, users are just spectators of artificial intelligence - I use it, but I don't participate. In the federated learning scenario, everyone is a "dragon tamer" and everyone is a participant in the development of artificial intelligence.

3. Learn a new paradigm

In fact, the idea of federated learning applies beyond protecting device users' data privacy and updating their models. If we abstract the device user into a data owner, which may be a phone holder, a company, a hospital, or a bank, and regard the server or cloud as a shared model platform, the same framework carries over.

Therefore, federated learning is a new learning paradigm with the following characteristics:

Under the framework of federated learning, all participants have equal status and can achieve fair cooperation;

Data is retained locally to avoid data leakage and meet user privacy protection and data security requirements;

All participants maintain their independence while exchanging information and model parameters in encrypted form, so that every party grows at the same time;

The modeling effect is not much different from that of traditional deep learning algorithms;

Federated learning is a “closed-loop” learning mechanism, and the model effect depends on the contribution of data providers.
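The "encrypted exchange of model parameters" above can be illustrated with the core trick behind secure aggregation, sketched here under strong simplifying assumptions (two clients, one shared pairwise mask; real protocols handle many clients and dropouts): each pair of clients agrees on a random mask that one adds and the other subtracts, so individual uploads look random to the server, but the masks cancel in the sum.

```python
import numpy as np

rng = np.random.default_rng(42)

u_alice = np.array([1.0, 2.0])        # Alice's true model update
u_bob   = np.array([3.0, -1.0])       # Bob's true model update

mask = rng.normal(size=2)             # pairwise secret shared by Alice and Bob

upload_alice = u_alice + mask         # the server sees only masked vectors,
upload_bob   = u_bob - mask           # neither of which reveals a real update

aggregate = upload_alice + upload_bob  # masks cancel: the sum of true updates
```

The server thus learns only the aggregate it needs for averaging, never any individual client's contribution.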

These characteristics speak directly to the dilemmas currently facing the development of artificial intelligence.

Currently, most application fields have the problem of limited data and poor quality. In some highly professional sub-fields (such as medical diagnosis), it is even more difficult to obtain labeled data sufficient to support the implementation of artificial intelligence technology.

At the same time, there are insurmountable barriers between different data sources. Except for a few "giant" companies with massive users and product and service advantages, most companies find it difficult to cross the data gap in the implementation of artificial intelligence in a reasonable and legal way, or they need to pay huge costs to solve this problem.

In addition, with the development of big data, paying attention to data privacy and security has become a global trend. The introduction of a series of regulations such as the EU General Data Protection Regulation (GDPR) has further increased the difficulty of data acquisition, which has also brought unprecedented challenges to the implementation of artificial intelligence.

Judging from the current research progress, federated learning is arguably the only viable option for solving these problems.
Note from Leifeng.com: For more on the development of federated learning in China, see the Leifeng.com article "From concept to technology, and then to international standards and open source communities, federated learning only took two years". It is worth mentioning a story behind the name "federated learning": early on, "Federated Learning" was mostly translated into Chinese as "joint learning", and it is now mostly rendered as "federated learning". The difference: when the participants are individuals, the technique really does "join" their models together to learn, as Blaise and his colleagues did; when the participants are big data owners such as enterprises, banks, or hospitals, the technique looks more like uniting many "city-states", and "federation" is the more accurate word. This shift in naming also reflects how the subject of federated learning research has moved from theory toward practical application.

