Apple AI: betting on its own chips
Source: compiled from HPCwire by Semiconductor Industry Observer (ID: icbank).
The new Mac development tools will take advantage of Apple's own chips, limiting HPC users' ability to use parallel programming frameworks from Intel or Nvidia.
Apple's latest release of its development environment, Xcode 16, was unveiled at the recent WWDC conference with a series of new features that use AI to make programming and app integration easier.
It also adds new Swift tooling: a predictive code-completion feature that can anticipate and complete code as the developer types, and Swift Assist, which can answer coding questions and help developers use APIs.
Apple's personal computers now use its own Apple Silicon, which integrates the CPU, GPU, and a neural engine on a single chip. Macs previously relied on x86 CPUs and GPUs from AMD and Nvidia, but no longer support external GPUs. This leaves Mac developers with a constrained environment for writing AI applications, putting them in a difficult position.
At WWDC, Apple encouraged developers to move machine learning models to its CoreML format, which takes advantage of its own CPUs, GPUs, and neural processors.
An open source Python package called CoreML Tools converts PyTorch models to be compatible with Apple's AI hardware. Developers can also use JAX, TensorFlow, or MLX.
Intel and Nvidia, for their part, are no longer investing in macOS support. Intel dropped macOS support from the latest 2024 release of its oneAPI parallel programming framework.
Apple shared its wider AI plans at WWDC, revealing that it has trained its large language models (LLMs) on Google's Tensor Processing Units (TPUs).
Apple has also built its own Private Cloud Compute service, which will be hosted in Google's data centers. Apple will not rely on Nvidia GPUs for AI in the cloud, because its AI strategy centers on energy efficiency and its own algorithm technology. Nvidia's GPUs run training and inference for larger LLMs, which consumes more power.
Nvidia discontinued macOS support for its CUDA AI and HPC programming tools years ago. Developers have had to switch to Linux or Windows to create applications for Nvidia GPUs.
CUDA provides the necessary tools for AI applications to run on Nvidia hardware. Like Apple, Nvidia is trying to lock customers into its hardware and software. Nvidia's development tools are packaged into a suite called AI Enterprise, but they are not free.
Apple has its own graphics and compute framework, Metal, optimized for its GPUs and used for both gaming and AI workloads. Metal was once supported on a handful of older AMD and Nvidia GPUs, and is now fully supported only on Apple's in-house GPUs.
However, Mac developers can still use Nvidia GPUs hosted in the cloud, which is now common practice. Cloud providers typically supply an environment in which Nvidia GPUs work independently of the client machine's operating system.
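In practice, code that may run either on a Mac or on a cloud machine with an Nvidia GPU often selects its backend at runtime. This is a common PyTorch pattern, not something from the article: prefer CUDA when available, fall back to Apple's Metal backend (MPS), then to the CPU.

```python
# Common device-selection pattern for code that runs on both Macs and
# cloud Nvidia machines (an illustrative sketch, not Apple's tooling).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")      # cloud-hosted Nvidia GPU
elif torch.backends.mps.is_available():
    device = torch.device("mps")       # Apple Silicon GPU via Metal
else:
    device = torch.device("cpu")       # portable fallback

x = torch.rand(2, 3, device=device)
print(f"running on {device}")
```

The same script then works unchanged on a Mac laptop and on a rented cloud GPU instance.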
Apple plans to produce its own server chips this year
Apple Inc will roll out some of its upcoming artificial intelligence features this year through data centers equipped with its own in-house processors, part of a major move to infuse its devices with AI capabilities.
Apple is putting high-end chips, similar to those designed for Macs, into cloud computing servers to handle the most advanced artificial intelligence tasks on Apple devices, according to people familiar with the matter. Simpler AI-related functions will be handled directly on iPhones, iPads and Macs, the people said, asking not to be identified because the plan remains confidential.
The move is part of Apple’s much-anticipated push into generative AI, the technology behind ChatGPT and other popular tools. The company is playing catch-up to big tech rivals in the space but is preparing to lay out an ambitious AI strategy at its Worldwide Developers Conference on June 10.
Apple began planning to use its own chips to process AI tasks in the cloud about three years ago, but accelerated the timeline after the AI boom driven by OpenAI's ChatGPT and Google's Gemini.
The first AI server chip will be the M2 Ultra, which launched last year as part of the Mac Pro and Mac Studio computers, but the company is already eyeing future versions based on the M4 chip.
Apple shares hit an intraday high of $184.59 in New York trading after Bloomberg reported the details. The stock is down more than 4 percent this year. A representative for Cupertino, California-based Apple declined to comment.
Relatively simple AI tasks, such as providing users with a summary of iPhone notifications they missed or text messages they received, can be handled by the chips inside Apple devices. More complex tasks, such as generating images or summarizing lengthy news articles and creating long responses in emails, may require a cloud-based approach, as would an updated version of Apple's Siri voice assistant.
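The split described above can be sketched as a simple routing rule. The task names, routing table, and default below are purely hypothetical illustrations, not Apple's actual logic:

```python
# Hypothetical sketch of the reported on-device vs. cloud split.
# Task names and the routing table are illustrative assumptions.
ON_DEVICE_TASKS = {"summarize_notifications", "suggest_reply"}
CLOUD_TASKS = {"generate_image", "summarize_article", "draft_long_email"}

def route(task: str) -> str:
    """Return where a given AI task would run."""
    if task in ON_DEVICE_TASKS:
        return "on-device"
    if task in CLOUD_TASKS:
        return "private-cloud"
    return "on-device"  # default: keep data local when in doubt

print(route("summarize_notifications"))  # → on-device
print(route("generate_image"))           # → private-cloud
```

The interesting design choice such a scheme implies is the default: routing unknown tasks on-device keeps user data local, consistent with Apple's stated privacy emphasis.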
The move, part of Apple's fall rollout of iOS 18, represents a shift for the company. For years, Apple has prioritized on-device processing, touting it as a better way to ensure security and privacy. But people involved in creating Apple's server project, codenamed ACDC, or Apple Chip in the Data Center, say there are already components in its processors that protect user privacy. The company uses an approach called Secure Enclave, which isolates data from security breaches.
For now, Apple plans to use its own data centers to operate cloud functions, but will eventually rely on outside facilities — as it does with iCloud and other services. Some aspects of the server plans were earlier reported by The Wall Street Journal.
Apple Chief Financial Officer Luca Maestri hinted at this approach during last week's earnings call. "We have our own data center capacity, and then we use capacity from third parties," he said when asked about the company's AI infrastructure. "Historically, that model has worked well for us, and we plan to continue along the same path."
Processing AI features on-device will remain an important part of Apple's AI strategy. But some of these features will require its latest chips, such as the A17 Pro introduced in last year's iPhone 15 Pro and the M4 that debuted in the iPad Pro earlier this week. These processors include major upgrades to the so-called neural engine, the part of the chip that handles AI tasks.
Apple is rapidly upgrading its product line with more powerful chips, and for the first time plans to bring a next-generation processor, the M4, to its entire Mac line. According to a Bloomberg News report in April, the Mac mini, iMac and MacBook Pro will get the M4 later this year, while the MacBook Air, Mac Studio and Mac Pro will follow next year.
Taken together, these plans lay the groundwork for Apple to incorporate artificial intelligence into much of its product line. The company will focus on features that make it easier for users to go about their daily lives — for example, by making recommendations and providing customized experiences. Apple doesn’t plan to launch its own ChatGPT-style service, though it has been discussing offering that option through partnerships.
In May, Apple said the ability to run artificial intelligence on its devices would help it stand out from competitors.
“We believe in the transformative power and promise of AI, and we believe we have the strengths to differentiate ourselves in this new era, including Apple’s uniquely seamless combination of hardware, software, and services,” CEO Tim Cook said on the earnings call.
Cook didn’t get into specifics but said Apple’s own semiconductors would give it an edge in the nascent field, adding that the company’s focus on privacy “underpins everything we create.”
The company has invested hundreds of millions of dollars in the cloud initiative over the past three years, according to people familiar with the matter. But gaps in its products remain. For users who want chatbots, Apple has held discussions with Alphabet Inc.’s Google and OpenAI about integrating chatbots into iPhones and iPads.
Talks between Apple and OpenAI have intensified recently, pointing to a possible partnership, and people familiar with the matter said Apple could also offer a range of options from outside companies.
Chip project codename also revealed
Apple has been developing its own chips to run artificial intelligence (AI) tools in data centers, The Wall Street Journal reported in early May, but it was unclear whether or when the chips would be deployed.
The move would build on Apple's previous efforts to produce in-house chips that run its iPhones, Macs and other devices, The Wall Street Journal said, citing unnamed people familiar with the matter.
The server project, code-named ACDC (Apple Data Center Chip) within the company, aims to apply Apple's expertise in chip design to the company's server infrastructure, the newspaper said.
While the move has been in the works for several years, a specific timeline or possible release date remains uncertain, the report said.
The Cupertino, California-based company, which has been playing catch-up in generative artificial intelligence, the technology that underlies chatbots and other popular new tools, is preparing to unveil a new AI strategy at its Worldwide Developers Conference next month.
"We remain very optimistic about the opportunities in generative AI and are investing significantly," Apple Chief Executive Tim Cook told Reuters in an interview last week.
Apple's server chips will focus primarily on running AI models, the so-called inference process, rather than training AI models.
According to the Wall Street Journal, the company has been working closely with Taiwan Semiconductor Manufacturing Company to design and start production of chips, but it is uncertain whether the cooperation has produced clear results.
Apple's strategy is expected to focus on new proactive features that can help users in their daily lives. The company has also held talks with potential partners such as Alphabet Inc's Google and OpenAI to provide generative artificial intelligence services.
If Apple goes ahead with developing its own server processors, it would be following in the footsteps of several of the largest technology companies. Amazon.com Inc’s Amazon Web Services, Google, Microsoft Corp and Meta Platforms Inc all operate data centers that use in-house designed semiconductors to some extent. Those efforts have eroded the traditional dominance of Intel Corp’s components.
Meanwhile, Apple planned to hold a virtual event yesterday where it was expected to show off new iPad models, some of which may feature new chips designed to speed up AI tasks performed by the devices.
Carolina Milanesi, an analyst at Creative Strategies Inc, said the updated iPads could be a way for Apple to get the new chips to market ahead of its developer conference next month, where the company is likely to reveal more about its plans for tackling artificial intelligence.