
Exclusive Interview | Song Jiqiang, Director of Intel China Research Institute: gcForest is open source, what hardware should be used for training?

Latest update time: 2017-06-07

300+ star startups and 3,000+ industry professionals will gather at the Global Artificial Intelligence and Robotics Summit (GAIR 2017) to witness the crest of the AI wave. Summit tickets are in high demand; today, 5 unconditional coupons worth 1,300 yuan off are being released (see the end of the article). Open a coupon link in your browser to use it immediately.


Recently, Professor Zhou Zhihua open-sourced gcForest, a new algorithm he has been studying in the field of deep learning. In his paper, he notes that, unlike the neural-network structure of a DNN, gcForest is a method based on decision-tree ensembles. Compared with DNNs, its training process is efficient and scalable, and it works even with only small-scale training data. Moreover, as a tree-based method, gcForest should be easier to analyze theoretically than deep neural networks.


In addition, Zhou Zhihua specifically noted at the end of the paper that, for his new method, Intel's KNL may offer the kind of potential acceleration that GPUs provide for DNNs.


What leads to this conclusion? With this question, Leifeng.com interviewed Song Jiqiang, director of Intel China Research Institute, to analyze for readers the computational advantages of KNL for the gcForest algorithm.


Song Jiqiang told Leifeng.com that he is happy to see new algorithmic ideas like gcForest, because such diversity greatly benefits the development of AI technology. On hardware acceleration of AI algorithms, Song Jiqiang said there is no universal solution that solves every problem; different hardware platforms should be chosen for different application scenarios. He systematically explained the respective scopes of KNL and the GPU from a technical perspective. He did not shy away from the GPU's computing strength for deep neural networks, but explained dialectically the advantages of KNL over the GPU for gcForest and even for DNN computation.


He also analyzed the positioning of Intel's three important product lines in AI chips, the KN series, Lake Crest, and FPGA, and gave corresponding selection advice, a timely response to the raging debate over AI chip architectures.


The interview also touched on the Intel China Research Institute's AI plans for the second half of the year, the institute's relationship with the newly established AIPG, and the cooperation with Zhou Zhihua's team.


Song Jiqiang


Dr. Song Jiqiang is the director of Intel China Research Institute. His research interests include interaction technology between intelligent robots and the outside world, innovation in various forms of intelligent devices, mobile multimedia computing, performance optimization of mobile platforms, new human-computer interfaces, and building software and hardware environments for new application usage models.


Dr. Song joined Intel China Research Institute in 2008 as Director of Application R&D at the Tsinghua University-Intel Advanced Mobile Computing Center. He was a core member of the team that created the Intel Edison product prototype. After Edison was successfully productized, he drove the development of Edison-based smart-device development kits to popularize the technology in the maker community, and he invented a new device category called interactive porcelain. He is currently committed to developing an intelligent service-robot platform based on Intel technology.


From 2001 to 2008, he served as a postdoctoral researcher at the Chinese University of Hong Kong, chief engineer at the Hong Kong Applied Science and Technology Research Institute (ASTRI), and director of multimedia R&D at Beijing Jianyue Nano Electronics Co., Ltd. In 2003, an algorithm he developed won first prize in the IAPR GREC International Arc Recognition Algorithm Competition. In 2006, computer image-reading research he participated in won the Ministry of Education's Second Prize for Science and Technology of Higher Education Institutions (as second contributor). He is a senior member of IEEE and CCF, and has published more than 40 academic papers in international journals and conferences such as IEEE TPAMI, IEEE TCSVT, Pattern Recognition, CVPR, and ICPR.


Song Jiqiang received his Ph.D. in computer science from Nanjing University in 2001, and his doctoral dissertation was rated as a national outstanding doctoral dissertation.


The following is the transcript of the interview with Director Song Jiqiang, which Leifeng.com has edited without changing the original meaning:


1. About the gcForest algorithm


Leifeng.com: What do you think of deep forests (gcForest) and deep neural networks (DNNs)?


Song Jiqiang: gcForest is still in its infancy, much like deep neural networks in 2006. Professor Zhou has now open-sourced gcForest to the entire academic and industrial communities, which is a very good thing. At present, its advantages over DNNs lie in two aspects: one is interpretability, the other is the range of applicable domains.


Interpretability


At present, neural networks remain a black box: they involve many hyperparameters, initial settings, and subsequent tuning steps, and it is hard to pin down their theoretical basis. Making neural networks interpretable is now a research hotspot, but that work is still in its infancy.


If the intermediate learning components are replaced with something else, such as the series of decision-tree-based work to which gcForest belongs, the biggest advantage is that a judgment is made at each branch as a feature comes in: if the feature exceeds a certain value, the tree branches left; if it falls below, it branches right. Natural rules thus emerge. The parameter space is much smaller, and the model can be analyzed and explained after training. This is why most winning models on the well-known Kaggle data-science competition platform are still based on decision-tree extensions and ensembles (such as XGBoost).
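
To make those natural rules concrete, here is a minimal, illustrative sketch (not Professor Zhou's code) that trains a small decision tree with scikit-learn and prints the threshold tests it learned:

```python
# Illustrative sketch: every internal node of a decision tree is a readable
# threshold test ("feature <= value"), which is where the natural rules and
# interpretability discussed above come from.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Prints the learned rules as plain if/else text, e.g.
# |--- petal width (cm) <= 0.80  ->  class: 0
print(export_text(tree, feature_names=list(data.feature_names)))
```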


If this can be explained theoretically, we can have full confidence in the model. Once machine learning actually produces a result, I can turn it into knowledge rather than stopping at the model level, because going from letting the machine learn to finally producing theory and knowledge that people can recognize is the complete process.


Application Areas


In fact, Professor Zhou raises a question at the beginning of his paper. In deep learning, everyone agrees that for representation learning, a deep model with many layers can represent such problems better. Achieving that requires a model of sufficiently large capacity, but must it be trained as a neural network? That is debatable.


I think neural networks are not the best training and learning model in all cases, because sometimes using them can complicate things.


⬆️ Intel's classification of the AI field


On the other hand, although deep neural networks are well suited to speech and image recognition, AI includes more than these: it also covers understanding, reasoning, and making corresponding decisions. Only when that whole pipeline is well constructed can artificial intelligence complete a task fully.


Today, deep neural networks help us interpret visual information, because visual information is a series of images with strong spatial relationships. What we see is made up of pixels, with spatial relationships between them. Convolution is very well suited to extracting features from such spatial relationships: it extracts features across multiple dimensions, directions, and resolutions, and finally feeds them into a huge multi-layer network to find the corresponding associations.
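
As a rough illustration of that spatial locality (a generic sketch, not tied to gcForest or to any Intel code), a small kernel sliding over a pixel grid responds wherever a local pattern appears:

```python
# Generic sketch of convolutional feature extraction: a 3x3 kernel slides
# over the pixel grid and responds strongly at vertical edges, exploiting
# exactly the spatial relationships described above.
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(28, 28)        # stand-in for a grayscale image
sobel_v = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])      # vertical-edge detector kernel

feature_map = convolve2d(image, sobel_v, mode="valid")
print(feature_map.shape)              # (26, 26): one response per location
```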


At the next level, the timeline comes in. For example, here is a person, there is a table, there is a chair; as time passes, the person moves, and an extra object may appear on the table. To keep understanding this scene, we may not be able to rely entirely on current neural-network feature extraction; we need to turn to other methods.


The several test sets used in the gcForest paper are very interesting. On image-level tasks, its results are roughly comparable to those of deep neural networks, and when the dataset is large, the deep neural network still does better.


However, on data in the time dimension, on continuous signals, or when emotions must be extracted from disorganized text, that is, on data with weak spatial relationships, gcForest shows a clear improvement. For example, in hand-movement recognition, which uses electromyographic (EMG) signals to recognize gestures, its accuracy is nearly double that of other methods. This is very convincing; it means gcForest is concise and efficient at processing these continuous, regular signals (gestures follow certain rules).


In short, as we explore AI technology further: first, decision-tree-based models are more interpretable; second, deep forests can help us understand many things that follow rules in time and space.


So I think this is a new idea, and from Intel's perspective we welcome it, because it genuinely reflects diversity.


2. The architecture debate and software ecosystem of AI chips


Leifeng.com: From a product perspective, can you explain why KNL has an advantage over the GPU in running gcForest? What are the differences and connections between KNL and the GPU, or even the TPU? What do you think of the AI-chip architecture debate?


Song Jiqiang: We should look at this question from the perspective of different application fields. In AI chips, anyone who claims there is a universal solution is wrong. Intel has never said we have one thing that can solve all problems.


So Intel has Movidius at the device end; at the edge we use solutions like FPGAs; and in the cloud we have KNL, Lake Crest, and others.


For different applications, we provide users with the most suitable solutions, taking into account data acceleration, bandwidth, even communication processing, and power consumption at the front end.


Take gcForest as an example. It has many different trees, and each tree can be trained separately, so it has a large degree of model-level parallelism. Both KNL and the GPU can be used, but KNL's advantage is its 72 cores, each supporting 4 hyper-threads, for 288 hardware threads in total, making it an ideal model-level parallel accelerator.
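
As a rough analogy, here is a sketch using scikit-learn's ordinary random forest (a stand-in, not the gcForest implementation): because the trees are independent, n_jobs=-1 trains them concurrently on every available core, the same per-tree parallelism that KNL's 288 hardware threads could soak up:

```python
# Sketch of model-level parallelism in a tree ensemble: each tree is an
# independent training task, so they can be fit on all cores at once.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10_000, n_features=50, random_state=0)
forest = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
forest.fit(X, y)                      # trees are trained in parallel
print(forest.score(X, y))
```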


⬆️ KNL hardware architecture diagram


At the same time, tree processing involves many conditional jumps in the middle. This is actually very easy on the classic x86 architecture, which accelerates at the instruction level; both SIMD and scalar (SISD) execution are its strengths.


Handling this kind of jump-heavy workload on a GPU, however, leads to a huge waste of resources, because a GPU mostly executes the same instructions over many data streams in parallel. It needs to prepare and package a lot of data, and only once that is assembled does it process everything together.


If there are many branches in the middle, a branch means the instruction may jump elsewhere to execute, which breaks the data-level parallelism and causes many problems; the data may sit in another memory location and have to be fetched. KNL has a natural advantage in this kind of processing.

Therefore, Professor Zhou specifically noted on the last page of the second version of his paper:


  • The GPU is a suitable accelerator for DNNs;

  • but for workloads like decision trees, KNL is more suitable as the accelerator.


Comparing the two, we see that different acceleration modes should be chosen for different data-processing patterns.


KNL can also place data very flexibly between ordinary memory and its high-speed memory. The new KNL lets users configure how the high-speed memory is used, so the algorithm has better flexibility in how it uses data.


At the same time, we use Omni-Path (OPA), a new high-speed interconnect, to break the bottlenecks of memory access and multi-node interconnection. We know that when GPUs are scaled out to many nodes, there is an I/O limit; once it is reached, adding more GPUs does not improve training performance much, because I/O has become the bottleneck. With KNL, we can break through this bottleneck and scale even to thousands of nodes while maintaining a nearly linear speedup. KNL also includes new technologies such as I/O storage, which bring other kinds of scalability.


As for the TPU, it sits at the same level as Lake Crest in Intel's AI roadmap: both are accelerators custom-built for deep neural networks, and the degree of customization is very high. Switch to deep forests, and they may not adapt quickly.


So when we discuss this, we will not flatly say that KNL is better than the GPU or the GPU is better than KNL; the comparison has to be put in the context of which application is being accelerated.


Leifeng.com: Does that mean that GPU is more suitable than KNL for accelerating deep neural networks?


Song Jiqiang: It's not that KNL cannot accelerate deep neural networks. We have tried it, and we released the data at AI Day last year. When we first put Caffe code on KNL, we got a baseline performance; after our software department parallelized and optimized the software, performance on KNL improved 400-fold.


You can see there is a big gap between doing software-level optimization and not doing it. For deep neural networks, we can likewise improve performance many times over. Meanwhile, hardware performance keeps rising: KNL is about 3 times faster than KNC, and KNM, to be released in the second half of this year, improves on KNL's hardware by another 4 times. Algorithm researchers using the KN series can therefore enjoy this dividend.


On the one hand, you can use software optimization; on the other, you enjoy the KN series' year-on-year hardware performance gains.


In deep learning, if the model capacity is large, acceleration like KNL and KNM is definitely needed. The GPU is a coprocessor, and the model and data generally live in video memory; the latest cards typically have 16 GB, with the largest at perhaps 24 GB, and those are hard to buy. Some settings require a particularly large model that may not fit into video memory, which is a big problem. On KNL, that may work out better.


Whether the GPU or KNL is better for the intermediate components really depends on the situation. For example, Professor Zhou said he thinks it is also fine to put the multi-grained scanning of features at the front on the GPU, because that stage involves many data operations, such as block operations, which can be batched together.


Looking back, KNL actually provides two 512-bit-wide vector units (AVX-512) on each core. Used properly, these vector units are also very effective at block-parallel operations. In fact, when the vector computations, what we call SIMD computations, and the branch-heavy computations are both parallelized well, KNL's performance can be very good, potentially improving unoptimized performance by a factor of a thousand or more.
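
The contrast between the two styles can be sketched in a few lines of Python (an illustration only, not Intel code): a data-dependent jump on every element versus one block-wide select that a vector unit can stream through:

```python
# Branchy versus block-style computation. np.where() performs one vectorized
# select over the whole array (the SIMD-friendly pattern), while the loop
# takes a data-dependent conditional jump on every element.
import numpy as np

x = np.random.randn(1_000_000)

def branchy(values):
    out = np.empty_like(values)
    for i, v in enumerate(values):    # one conditional jump per element
        out[i] = v * 2.0 if v > 0 else v * 0.5
    return out

def vectorized(values):
    return np.where(values > 0, values * 2.0, values * 0.5)

assert np.allclose(branchy(x[:1000]), vectorized(x[:1000]))
```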


Leifeng.com: What is the relationship between Lake Crest and KNL in terms of product positioning? Can we say that Lake Crest is Intel's competitor to NVIDIA's GPU, while KNL, because it cooperates better with the CPU, is hardware better suited to decision-tree algorithms?


Song Jiqiang: That’s not quite accurate.


Lake Crest is deeply customized for deep neural networks in order to achieve a higher speedup. Its customization takes comprehensive account of DNN compute and bandwidth requirements, as well as factors such as the number of scale-out nodes. Lake Crest contains many compute nodes, each chip can interconnect with 12 other chips to form a large grid, and the interconnect bandwidth is very high, so it will be very strong for large-scale deep-learning model training.


For example, if you are processing massive amounts of video, new cases keep coming in and the model must be retrained repeatedly. The data volume will also be far larger than before, because scenarios like "safe cities" deploy huge numbers of cameras. So you need this kind of cloud-side capability to train quickly and push the updated model to the front end.


Besides camera data in the cloud, the data that driverless cars generate every day will also contain many corner cases that cannot yet be handled. Pedestrians and objects on the ground differ from country to country, so repeated training is required, and that data volume is also very large.


We do not think the current GPU solution is the end point; a faster and more energy-efficient solution is surely needed. The TPU and Lake Crest are both ASIC solutions today, and because they are deeply customized, they offer the best performance per watt. So if you want to handle truly massive data while keeping a data center's operating cost, that is, its electricity bill, low, an ASIC has to be the final answer, and its scalability will be very good.


If new algorithms emerge in the future that handle other kinds of applications, such as gcForest, the TPU and Lake Crest may not be the best accelerators for those new methods; in such cases we can use KNL and KNM.


As for NVIDIA, the GPGPU can support different fields, but among them it provides its highest speedup for deep neural networks.


As with any GPU, someone has to do dedicated optimization on it. NVIDIA worked on the CUDA project for ten years before deep learning finally emerged as its killer app. If a new killer app appears, CUDA may not be the best platform for it. To cope with unknown future killer apps, FPGA and KNL are both good options.


So I think dividing things this way most completely explains the layout Intel has made in AI chips:


  • For the now-known killer app, using deep neural networks for visual and similar processing, Lake Crest is the obvious choice.

  • For less-established killer apps, or for the many well-known life-science and financial analyses, or for weather and meteorology workloads that were high-performance computing to begin with, KNL already serves them, and it will continue to support applications that need flexible tuning but have not yet reached the scale that justifies an ASIC.

  • If a user has some understanding of hardware acceleration but is still unsure, they can experiment with an FPGA, a platform that allows hardware-level experiments faster than an ASIC does.


Leifeng.com: KNL has a previous-generation product, KNC. What upgrades does KNL make by comparison?


Song Jiqiang: This upgrade is quite big. Let me talk about a few key points.


First, overall performance has improved 2.5 to 3 times, and single-core performance has also improved by nearly 3 times. KNL can have up to 72 cores. In terms of compute, the KNL processor delivers about 3 TFLOPS of double-precision floating point and about 6 TFLOPS of single precision.
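
Those headline figures are consistent with a back-of-the-envelope peak-FLOPS calculation; the sketch below assumes a roughly 1.4 GHz AVX clock, which is an assumption on our part, since exact clocks vary by SKU:

```python
# Rough peak double-precision FLOPS for a 72-core KNL.
cores = 72
vpus_per_core = 2             # two AVX-512 vector units per core
dp_lanes = 512 // 64          # 8 doubles fit in one 512-bit vector
flops_per_fma = 2             # a fused multiply-add counts as 2 FLOPs
avx_ghz = 1.4                 # assumed AVX clock; varies by SKU

peak_dp = cores * vpus_per_core * dp_lanes * flops_per_fma * avx_ghz * 1e9
print(f"{peak_dp / 1e12:.1f} TFLOPS DP")  # ~3.2 TFLOPS; single precision doubles the lanes
```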


Compared with the original KNC, KNL uses a new two-dimensional mesh architecture and allows out-of-order instruction execution, so the processor can better fill the gaps caused by cache misses, branches, and so on, improving instruction-level parallelism.


KNL also widens the AVX vector unit to 512 bits, processing 64 bytes at a time in vector operations. On the memory side, it adds 16 GB of integrated high-speed MCDRAM (multi-channel DRAM), which users can configure either as a cache or as memory they manage themselves. For example, a user may want to keep data the cores access frequently there without it being mapped automatically through the cache, since caches incur misses; managing it directly, the user takes responsibility for loading and re-accessing that data.
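
For the self-managed ("flat") configuration, one common approach on Linux is to bind a job's allocations to the MCDRAM NUMA node with the standard numactl tool. The sketch below is hypothetical: train_worker.py is a made-up name, and the assumption that MCDRAM appears as NUMA node 1 should be verified with numactl -H on the actual machine:

```python
# Hypothetical launcher: on a KNL booted in flat mode, the 16 GB MCDRAM is
# exposed as a separate NUMA node, so a memory-hungry worker can be bound to it.
import subprocess

subprocess.run(
    ["numactl", "--membind=1",       # assumes MCDRAM is NUMA node 1; check `numactl -H`
     "python", "train_worker.py"],   # hypothetical worker script
    check=True,
)
```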


Its bandwidth reaches about 500 GB/s, four to five times that of ordinary DDR4, which is a significant improvement. And for the first time, Omni-Path, a high-speed network interconnect, is integrated into KNL. KNC, by contrast, was positioned as a coprocessor and had to work with a host, which could be a Xeon or Core series processor.


There are two types of KNL:


  • One can continue to serve as a coprocessor;

  • The other is a standalone processor that requires no host, which is also a new feature.


Both make things more convenient for users. We also provide KNL SKUs at different levels and price points: some lean toward compute, some toward data I/O, and so on.


At the same time, since last year we have been running a pilot in the United States, providing users with an educational cluster platform in the cloud. Multi-node KNL is still quite expensive for students, but with this cloud platform users can submit their own jobs and experiment.


This year, we hope to deploy and open the cloud experiment platform in China. We want to support both kinds of experiments, compute acceleration and node-scaling acceleration, across different algorithms and different applications. That way, more people can try more hardware, and Intel will keep adding tool support on the software side.


I think KNL is still a very promising hardware acceleration product. It offers parallelism across many models, a certain degree of data parallelism, and it is particularly helpful when decisions must be made in the middle of the computation.


Leifeng.com: Many deep-learning users say they care more about the software and hardware ecosystem than about a chip's underlying architecture. How does Intel approach this?


Song Jiqiang: On the software side, Intel supports a general software stack for AI applications (Leifeng.com note: see the figure below). The middle three layers shield the implementation differences of the underlying hardware, support popular open-source frameworks, and provide tools that accelerate application solutions.

⬆️ AI solutions provided by Intel


Specifically, Intel plans to provide a more consistent software interface at the software layer.


For users who are already using open source frameworks


If you already use an open-source framework, such as the popular TensorFlow, MXNet, or Caffe, Intel's strategy is to offer a unified interface, so developers can keep training models in the framework they are familiar with.

At the same time, we connect these frameworks to Intel's middle layer, which maps them onto Intel's MKL acceleration and what we call the scaling acceleration library. What does the scaling library do? MKL accelerates mathematical operations such as tensor operations; but to scale out to many nodes, say 72 cores or even 1,000 cores, the scaling library handles the acceleration of data transfer and control transfer across the interconnect, so the user does not have to worry about it.

So although the user is working in an open-source framework, they can easily tap the underlying hardware acceleration, whether KNL or Lake Crest, and map onto different hardware.

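From the user's side, one quick, generic way to see this mapping (not an Intel-specific tool) is to check which BLAS/LAPACK build NumPy links against; when it is MKL, an ordinary matrix multiply is dispatched to MKL's multithreaded kernels with no change to user code:

```python
# Check whether the installed NumPy is backed by MKL, then run a matmul that
# such a build silently hands off to MKL's threaded kernels.
import numpy as np

np.show_config()                 # look for "mkl" in the BLAS/LAPACK sections

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = a @ b                        # dispatched to the linked BLAS (MKL if present)
print(c.shape)
```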

For new learners


If you are a new learner, you have two choices: use an open-source framework, or use the Neon-based framework provided by Intel, which Nervana developed before being acquired by Intel and which also ranked among the top 10 frameworks. They are still building on it today.

Users who choose this framework can directly tap a lot of acceleration. At the same time Nervana, which is now Intel's AIPG, is building a software-layer tool called nGraph. The idea is that a user's program actually contains many parallelizable parts, but because the algorithm is complex, a user analyzing it by hand may not split it up well.

The nGraph tool extracts the parallelizable parts and distributes them to different hardware acceleration units, for example to different Lake Crests, which contain many accelerated vector compute units. Deciding how to distribute them is an optimization problem, a bit like Threading Building Blocks in Intel's Parallel Studio in traditional high-performance computing, which automatically extracts the parallelizable threads in a program and places them on different cores.

Beginners therefore do not have to figure out how to optimize every layer of the software stack themselves. They can first train their algorithms and models with open-source tools or tools provided by Intel, then map them down through the toolchains we provide.


This is our current strategy, combining software and hardware.


Leifeng.com: In addition to deep forest and DNN, what other applications and algorithms is KNL suitable for?


Song Jiqiang: KNL also has many applications in life science, such as gene sequencing and precision medicine. In some retail settings, it can handle many different user requests at the same time and perform the related security verification. These application fields are well suited to many cores working simultaneously, because the tasks naturally divide up for processing. Moreover, the processing is not always a matter of comparing image blocks, generating many complete block vectors, and stepping forward; it also requires a lot of flexible, branchy code to run.


3. About Intel China Research Institute


Leifeng.com: Can you tell us more about the cooperation between Intel and Zhou Zhihua’s team?


Song Jiqiang: Professor Zhou is an outstanding scholar and leader in China's artificial-intelligence field, so both Intel's product departments and its academic research departments have been paying attention to him. We are now forming a strategic partnership, and the framework will include several aspects:


First, the academic community is good at defining algorithms and new theories, but on its own it still finds it hard to push performance to industry-benchmark levels, and different AI applications require different kinds of software and hardware acceleration.

The industry has strong engineering resources and computing power. Intel, for example, can provide new software and hardware to academia to validate the corresponding algorithms, and our technical team can work with them on performance optimization. This combination of strengths can truly push new theories to the level of market application.


On the other hand, Intel wants to do three things in the field of AI:


  • The first is to fuel AI: empower it so that it is not limited to the few fields everyone sees today but can be applied widely across new industries.

  • The second is to democratize AI: let ordinary engineering developers use it. We think open-sourcing, as Professor Zhou did, is excellent. After open-sourcing, many students, teams, and teachers can try it out and then move to commercial platforms like Intel's. Today they mostly run it on ordinary PCs, so the cost is relatively low; for those willing to experiment on Intel's KNL, we will also release cloud-based or physical-device programs in China to help academia use KNL at lower cost.

  • The third is for Intel to ensure AI does the right thing and to guide AI toward work that genuinely benefits society.

Now we mainly focus on the first two.


In industry, Intel is a leading company; in academia, Professor Zhou's team has appeal and influence. Together, we can promote the AI ecosystem better. That is the current state of our cooperation.


Leifeng.com: Intel established AIPG this year. At the company level, how do the institute and AIPG collaborate and divide the work in promoting AI?


Song Jiqiang: AIPG is a product department, Intel's direct outlet for AI-driven products. This outlet integrates many of Intel's internal software, hardware, and even algorithm resources into solutions that are promoted externally.


The research institute faces relatively inward and does not provide products directly to the outside. Some of the AI research we do is more forward-looking and carries more uncertainty. We watch what happens in the outside academic world and conduct academic research accordingly.


Besides following external research hotspots, we even look for new hotspots to work on. We have won world championships in relevant CV (computer vision) competitions, such as emotion recognition. These technologies are developed in the research institute; once they mature to the point where they can be applied directly to certain products, the product department turns them into its products.

I think the institute and AIPG cooperate very well. Our timelines do not overlap, and the relationship is one of technology research feeding into product output.


Leifeng.com: Does the research institute have any new plans for the second half of the year, especially in the field of AI?


Song Jiqiang: We will mainly focus on deep-learning training, including working with our American research institutes on how to improve the training of deep neural networks and how to compress the networks. We will also run new experiments on new hardware such as KNL (Knights Landing) and KNM (Knights Mill), an approach that may use more nodes for training.


In addition, after a neural network is trained, we will do the corresponding trimming and deploy it on hardware such as Lake Crest, FPGA, and Movidius. In this way, AI's deep-learning capabilities can be put onto smaller devices.


So there are two directions: conduct larger-scale training in the cloud while shortening training time; and, at the end, try to make the trained model smaller.


100+ high-quality booths, 1,000+ traditional supply-chain players, and the world's top technology solution providers will all be on hand to help enterprises quickly connect with AI technology solutions and tap into the trillion-yuan AI industry. High-end resources and high-quality booths are limited; if you don't apply now, they will be gone! Contact by phone or WeChat: 15013779392

1,300 yuan discount coupons for June 7

Valid only for "Conference Tickets"


https://gair.leiphone.com/gair/coupon/s/5937b2ebd65d0

https://gair.leiphone.com/gair/coupon/s/5937a65428a3f

https://gair.leiphone.com/gair/coupon/s/5937a654287ba

https://gair.leiphone.com/gair/coupon/s/5937a6542858c

https://gair.leiphone.com/gair/coupon/s/5937a654282dd


P.S.: Each coupon is valid for one "conference ticket" and can be used only once. The discount decreases by 50 yuan per day, and each coupon is valid for one day. Long-press to copy a link and open it in your browser to use it immediately.



