Chip companies take action to eliminate copper interconnects
Source: content compiled from nextplatform.
With each generation of AI machines, bottlenecks between compute engines, memory, and network adapters grow larger, and the need to shift data center systems from copper to fiber has never been more urgent.
In fact, AI is the killer app that developers of silicon photonics and interconnects based on this family of technologies have been waiting for. For most hyperscalers, cloud builders, and HPC centers, the normal two-year cadence of doubling network bandwidth has been acceptable, given the modest memory and network demands between cluster nodes. But there is an impedance mismatch: the PCI-Express peripheral bus takes three years to double its bandwidth, and on top of that, each new PCI-Express generation takes a year or two after ratification to reach production.
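The effect of that cadence mismatch compounds over time. A minimal sketch, taking only the two doubling periods from the text (two years for network bandwidth, three years for PCI-Express) and treating everything else as illustrative:

```python
# Illustrative only: compare two exponential doubling cadences,
# normalized to 1x at year 0.
def growth(years: float, doubling_period: float) -> float:
    """Bandwidth multiple after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for year in (0, 6, 12):
    net = growth(year, 2.0)    # network bandwidth doubles every 2 years
    pcie = growth(year, 3.0)   # PCI-Express doubles every 3 years
    print(f"year {year:2d}: network {net:4.0f}x, PCIe {pcie:4.0f}x, gap {net / pcie:.1f}x")
```

After twelve years the network has grown 64x while PCI-Express has grown only 16x, so the peripheral bus falls a full 4x behind even before production delays are counted.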
This is annoying for those of us making system interconnects, but the real issue is that if AI models are to be well fed and run efficiently, the bandwidth in and out of our compute engines, memory, and network interfaces must increase by an order of magnitude. Otherwise, we are just going through the motions.
To put it bluntly, we need at least PCI-Express 8.0 bandwidth today, which should be 1 TB/sec on a x16 duplex card when the first devices come to market around 2029 or 2030. (The PCI-Express 8.0 spec is expected to be ratified around 2028, and will likely use 256 Gb/sec signaling, and perhaps even some advanced form of pulse amplitude modulation beyond the current PAM-4.)
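The 1 TB/sec figure can be sanity-checked with simple arithmetic. The sketch below assumes 256 Gb/sec per-lane signaling on an x16 link, counts both directions for the duplex number, and ignores encoding overhead:

```python
# Back-of-the-envelope check of the PCIe bandwidth figures cited above.
def pcie_x16_duplex_tb_per_s(lane_gbps: float, lanes: int = 16) -> float:
    """Aggregate duplex bandwidth of an x16 link, in TB/s (overhead ignored)."""
    per_direction_gbps = lane_gbps * lanes   # e.g. 256 * 16 = 4096 Gb/s
    duplex_gbps = per_direction_gbps * 2     # both directions combined
    return duplex_gbps / 8 / 1000            # bits -> bytes -> TB

# Each PCI-Express generation doubles the per-lane signaling rate.
for gen, gbps in [(6, 64), (7, 128), (8, 256)]:
    print(f"PCIe {gen}.0 x16 duplex: ~{pcie_x16_duplex_tb_per_s(gbps):.2f} TB/s")
```

At 256 Gb/sec per lane, an x16 link delivers about 0.5 TB/sec in each direction, or roughly 1 TB/sec duplex, matching the figure above.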
Hence the interest and investment in silicon photonics and optical interconnects to connect compute engines and storage directly together, and at a very large scale – at the scale of a data center or a region.
The excitement around silicon photonics and optical interconnects is perfectly reflected in the $400 million Series D round raised by Lightmatter, an optical interconnect startup founded in 2017. According to the company, this round brings its total raised to $822 million, at a current valuation of $4.4 billion, roughly a 4x jump in valuation and a 2x jump in total funding compared with its previous round.
Lightmatter’s latest funding round was led by T. Rowe Price, with participation from Google Ventures and Fidelity Investments. Hewlett Packard Enterprise and Lockheed Martin also participated in previous rounds, along with other venture capital firms eager to cash in on the AI boom.
We first spoke with Lightmatter in 2021, when it came out of stealth mode and detailed its Passage optical interposer and Envise matrix math accelerator. But the Passage interposer (which we discussed in detail earlier this year) may be the more valuable technology Lightmatter has created, and perhaps the one that will propel it toward an IPO, possibly after a Series E funding round in the near future.
Lightmatter, having raised the most funding, is the darling of the next wave of silicon photonics. Ayar Labs has raised $219.7 million across three rounds, Celestial AI $338.9 million across three rounds, and Eliyan more than $100 million across two rounds, while new startup Xscape Photonics came out of stealth just two weeks ago with $57 million raised in a single round. This is by no means an exhaustive list of silicon photonics startups, of course; chip companies Marvell, Broadcom, and Intel also have large silicon photonics programs.
Breaking up the copper interconnect monopoly will likely require a number of different approaches, and one can infer that many startups will either sell their technology to hyperscalers, cloud builders, compute engine and network chip makers, or be acquired by one of these companies seeking to gain and maintain a technological edge. Nvidia can make acquisitions as it pleases, as can Taiwan Semiconductor Manufacturing Co., which may want to acquire Passage technology.
Lightmatter co-founder and CEO Nicholas Harris received his Ph.D. from MIT, where his thesis covered programmable nanophotonics for quantum computing and AI compute engines. Chief scientist and co-founder Darius Bunandar also holds a Ph.D. from MIT, in physics, where he researched quantum computing and nanophotonic circuit communications. Co-founder Thomas Graham was a Morgan Stanley investment banker and a product manager on several Google projects. The company also has a new CFO, Simona Jankowski, who was a managing director at Goldman Sachs and then worked at Nvidia from 2017 to 2024. If Lightmatter is planning a big IPO, Jankowski is one of the few executives who has lived through Nvidia-style rocket growth firsthand.
It will be interesting to see if Nvidia uses Passage to replace copper interconnects and NVSwitches in its future optical interconnect systems. You can bet that this is a deal that all the silicon photonics startups are chasing.
But we know that Nvidia is either developing its own optical interconnects or licensing technology to make its own. If the rumors are true, we’ll have to wait until the “Rubin Ultra” generation of GPU accelerators arrives in 2027 to find out. This gives Nvidia’s growing number of competitors (including hyperscalers and cloud builders with their own AI chips) the opportunity to adopt optical technology first and try to grab some AI market share.
Celestial AI Acquires Rockley Photonics Patent Portfolio
Celestial AI today announced the acquisition of silicon photonics intellectual property from Rockley Photonics, including issued and pending patents worldwide. The combination of Celestial AI and Rockley IP creates one of the strongest IP portfolios in silicon photonics for optical computing interconnects, bringing its total global portfolio to more than 200 patents. The acquired portfolio spans three major technology categories: optoelectronic system-level packaging, electro-absorption modulators (EAMs), and optical switching, all relevant to a variety of AI data center infrastructure applications.
Rockley is an early pioneer in silicon photonics, with foundational IP dating back to 2014, predating the creation of many market competitors. The acquired IP aligns directly with Celestial AI’s core technology roadmap and enhances the company’s deployment and commercialization strategy for its Photonic Fabric technology platform.
The IP complements Celestial AI’s existing product portfolio, which has rapidly grown into the industry’s leading optical interconnect technology platform. The company is focused on delivering solutions to hyperscale datacenter customers, both directly and through its ecosystem partners, to enable transformative performance, scalability and energy efficiency advantages in next-generation AI computing and networking.
“The addition of Rockley’s IP to our technology platform further accelerates the growth of Celestial AI’s valuable IP portfolio and strengthens our position. These patents fit well with our expanding Photonic Fabric patent portfolio, which covers advanced packaging, thermally stable silicon photonics, and system architectures for optical computing interconnects,” said Dave Lazovsky, founder and CEO of Celestial AI. “This acquisition reflects Celestial AI’s commitment to protecting the Photonic Fabric-based solutions being implemented in our customers’ AI data center infrastructure.”
Ayar Labs CEO: Optical chiplets will soon be used in SOCs
In artificial intelligence, time is money. Top AI companies are investing billions of dollars in computing infrastructure to meet the demand for speed, yet compute limitations at the chip, memory, and I/O levels are holding AI back. Startup Ayar Labs believes it has arrived at exactly the right time.
Ayar Labs has a solution: replace wires with pulses of light, allowing complex chips and memory to communicate faster over short distances. This increases system utilization, which in turn boosts revenue and productivity.
Ayar Labs has a product coming to market soon, and CEO Mark Wade sat down with HPCwire to discuss the company’s product and future direction.
HPCwire: Could you please tell us more about your current situation, your focus, and future development plans for the company?
WADE: We are building optical I/O solutions, which means a full range of products that support optical communications directly from the ASIC package.
We have two main revenue-generating products right now. One is our SuperNova light source -- this is a remote light source external to the package. Think of it as an optical power source that sits somewhere external to the ASIC package.
We also manufacture and sell the TeraPHY optical I/O chiplet, a silicon die containing approximately 70 million transistors and over 10,000 optical devices. We integrate the silicon photonics devices into a CMOS process to create a die that we sell as a chiplet, which is integrated into the customer's SOC package.
The focus is on enabling optical communications directly from the SOC package. Many system-level performance bottlenecks come from the connectivity and bandwidth limitations between different SOC or ASIC packages.
If you push optics into the SOC package in the right way, you get high-bandwidth, low-power, low-latency optical connectivity directly out of the package, and you also break the traditional bandwidth-distance tradeoff of today's electrical communications. Over the same fiber, you can extend the reach to half a meter, 10 meters, or a kilometer.
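The bandwidth-distance tradeoff Wade describes can be illustrated with a toy model. The numbers below are assumptions for illustration, not measured figures from Ayar Labs; the point is only that usable copper reach shrinks as per-lane rates climb, while fiber reach stays effectively flat at these distances:

```python
# Toy model (illustrative assumptions, not vendor data): copper reach is
# taken to halve each time the per-lane data rate doubles, anchored at an
# assumed ~5 m of usable reach at 25 Gb/s.
def copper_reach_m(gbps: float, reach_at_25g: float = 5.0) -> float:
    """Rough usable copper reach in meters under the halving assumption."""
    return reach_at_25g * (25.0 / gbps)

for gbps in (25, 50, 100, 200):
    print(f"{gbps:3d} Gb/s: copper ~{copper_reach_m(gbps):.2f} m; fiber: tens of m to km")
```

Under these assumptions, copper that spans a rack at 25 Gb/s barely crosses a chassis at 200 Gb/s, while the optical link's reach is unchanged.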
HPCwire: Why is optics better suited for larger-scale implementations, especially in the areas of AI and high-performance computing?
WADE: The value of our optical solution becomes apparent when you see a lot of bandwidth spilling out of the package and running workloads that require hundreds of SOC packages to work together. This is a regime that the high-performance computing community is very familiar with.
As Moore's Law slowed, we could still fit more transistors per package, but the speed at which data could be fed in could not keep up. Looking at the trend lines for memory capacity and memory bandwidth, we realized that we really needed a way to break this I/O limitation. We made that decision a long time ago.
For the past 10 years or so, we have been saying that computing systems are on a path to electrical I/O failure, and that these failures will get worse.
HPCwire: When did you start focusing on AI workloads?
WADE: We recognize that large-scale AI systems or AI workloads really require HPC-like systems to run effectively. The idea we raised in our Series A in 2018 was that large-scale AI clusters would be the biggest opportunity for commercial data centers, and to scale large-scale AI workloads, you had to have optical I/O.
What really changed the world's perspective was ChatGPT. Everyone started to realize just how different AI workloads are from what came before.
HPCwire: Do customers buy the chips outright, or do they license the IP and manufacture it themselves?
WADE: Our primary business model right now is selling actual products. There has been a whole paradigm shift in the SOC space to drive chiplet adoption. If you take the lid off an ASIC, you’ll see multiple chips inside.
We sell what we call known-good-die (KGD) optical chiplets that go into our customers' packages. Optical I/O chiplets are revenue-generating products: customers simply buy the chips directly from us.
HPCwire: In terms of product delivery, can you walk us through when you were founded, when your product was launched, and what happened in between? How has it been going so far?
WADE: We worked with our manufacturing partners to develop core technologies several years ago, and now they are functional and we are shipping small batches.
In the past 18 months, we have shipped more than 15,000 products to multiple Tier 1 commercial customers, with a steady monthly shipment. These products are mainly used for small-scale system building, helping customers improve the manufacturing and integration process of large-scale, deeply integrated optical systems.
We are delivering thousands to tens of thousands of units per year. This sets the stage for us to achieve volume production in a two-year focus window between mid-2026 and mid-2028. We expect production to scale to hundreds of thousands to millions of units per month, potentially reaching over 100 million units per year by 2028 and beyond.
HPCwire: What are those volume insertion points? This is a two-part question, but I'll let you answer the first part before I follow up.
WADE: The main commercial segment driving volume production is large AI systems: rack-scale and multi-rack-scale AI clusters for training and inference. That is really what is driving the vast majority of volume adoption. There are plenty of other custom applications that are also interesting, but they are much smaller in scale than what AI drives.
HPCwire: Was that the first insertion point? And then you mentioned a second one?
WADE: This is really a multi-generational set of products that are driving the development of large-scale optical fabrics in optically connected AI systems. We also have other applications in telecommunications and more general data center architectures and infrastructure.
The U.S. government has been a long-term supporter of our company, and today we provide many applications to the defense and aerospace sectors.
Over time, we see this move to optical I/O as a general paradigm shift occurring across many different application areas, but the rationale for volume adoption and large-scale investment in these optical technologies and products is driven by AI systems.
HPCwire: These AI systems are driven by a handful of chip makers right now. Are you dependent on those chip makers, or do you have a broader customer base?
WADE: Our market strategy is focused on solving high-volume, high-quality manufacturing problems in the photonics space. We have strategic partnerships with major companies such as GlobalFoundries, Applied Materials, Intel, and TSMC, and are working with all the first-tier CMOS manufacturers.
We have also established a strategic partnership with Nvidia, a leader in large-scale AI systems, to integrate our technology into future AI systems. Our direct customers are building SOCs and SOC systems with a best-in-class ecosystem including Nvidia, AMD, Intel, Broadcom, and Qualcomm.
End customers building large-scale AI models, such as Anthropic and OpenAI, are critical. Data centers have run into serious problems trying to scale AI workloads, and we find that these companies' visions of the future match what we have been predicting for years, which validates our approach.
Our success depends on access to these areas. We are addressing the challenges in photonics technology, especially in high-volume, high-quality manufacturing. This approach allows us to push the boundaries of AI technology by partnering with key industry players while addressing the needs of end users.
HPCwire: Do you need to get customers to think chiplet-first rather than monolithic-silicon-first?
WADE: There has been a lot of education going on about why people are using chiplets and whether the chiplet ecosystem can stabilize quickly so that customers can consider it as a low-risk insertion point in their designs.
I don't know if customers are thinking chiplet first, but they are increasingly thinking system first. The question then is how those system requirements translate into requirements on the SOC package.
AMD, Intel, Nvidia: all the Tier 1 companies have already moved to chiplets. The rest of the ecosystem has to follow because the Tier 1 companies are blazing that trail. We want to use that as a springboard; now we just need to introduce the concept of optical chiplets.
HPCwire: Tier 2 and Tier 3 offer various chips, such as CPUs or GPUs. How do you think your products fit into this ecosystem? For example, could your optical chips be an add-on product for companies that currently sell RISC-V CPU chips?
WADE: Yes, it is definitely an option we are considering. Our model is to be a pure play optical solutions provider, and we don’t know what package we will be integrated into. This creates an exciting business model and solution portfolio that gives us the flexibility to deliver connectivity value that can scale elegantly.
We may have customers who only want one optical chip per package, but we also have customers who want 8 to 12 optical chips per package. We can meet different customer needs and give them flexibility in how they adopt our technology - how many chips they use, what system-level form factor they want to integrate into.
HPCwire: How do you justify the cost and power consumption of optical interconnects compared to electrical interconnects?
WADE: We focus on the economics at the application level, especially for large-scale AI. The unit economics of large-scale AI no longer work: the cost is too high. We need to weigh the pros and cons at the application level, not just on component-level metrics such as power consumption.
We developed a system architecture simulator to estimate profitability, interactivity, and throughput of AI workloads and core technology components. Our results show that while current systems have improved in performance, they do not offer a significant advantage when profitability is the criterion.
However, when comparing next-generation systems built using electrical I/O and optical I/O, we see a huge difference in profitability and interactivity. This economic factor is driving the move to optical I/O.
HPCwire: Is this more of a CapEx or OpEx consideration for your customers?
WADE: This is primarily a CapEx consideration. The main question from customers is unit economics - tokens per second per dollar. This is primarily driven by CapEx, specifically the cost of the system amortized over its ability to produce a high throughput token stream. Our estimates show a cost structure of about 80-90% CapEx amortization and 10-20% OpEx.
Essentially, the capital expenditure for a system is divided by the total useful throughput it can provide.
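That tokens-per-second-per-dollar metric can be sketched as a simple calculation. All figures below (system price, lifetime, OpEx, throughput) are hypothetical, chosen only so the CapEx share lands in the 80-90% range Wade cites:

```python
# Sketch of the unit-economics metric described above: total cost of a
# system amortized over the token throughput it delivers in its lifetime.
# All inputs are hypothetical illustration values, not Ayar Labs figures.
def cost_per_million_tokens(capex_usd: float, lifetime_years: float,
                            opex_usd_per_year: float,
                            tokens_per_sec: float) -> float:
    """Dollars per million tokens over the system's useful life."""
    seconds = lifetime_years * 365 * 24 * 3600
    total_cost = capex_usd + opex_usd_per_year * lifetime_years
    total_tokens = tokens_per_sec * seconds
    return total_cost / total_tokens * 1e6

# Hypothetical system: $3M CapEx, 4-year life, $150k/yr OpEx
# (CapEx is ~83% of total cost), sustaining 50,000 tokens/sec.
cost = cost_per_million_tokens(3_000_000, 4, 150_000, 50_000)
print(f"~${cost:.2f} per million tokens")
```

Under these assumed inputs the system produces tokens at well under a dollar per million, and because CapEx dominates the numerator, any I/O technology that raises sustained throughput per dollar of hardware moves the metric almost directly.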
HPCwire: As a startup, how do you deal with scaling challenges?
WADE: We are currently in the fundraising phase to drive the company to scale over the next two to three years. Our main challenge is to drive this growth with our go-to-market ecosystem partners, including supply chain and early customers.
We view Tier 1 companies as ecosystem builders. With the right business model and product strategy, our goal is to enable Tier 2 and Tier 3 customers within 9-18 months after Tier 1 companies establish mass production.
HPCwire: Will you license your IP to companies if they ask for it?
WADE: While we are not opposed to IP licensing, our current focus is on delivering actual products. This approach is more scalable and simplifies customer adoption of optical I/O. In the coming years, we believe we are best positioned to deliver products that are successfully integrated into the manufacturing ecosystem.
We support customization and IP licensing conversations for customers in different application areas. However, approximately 90% of our focus is on delivering optical chiplet products, with only a small portion focused on IP models.
*Disclaimer: This article is originally written by the author. The content of the article is the author's personal opinion. Semiconductor Industry Observer reprints it only to convey a different point of view. It does not mean that Semiconductor Industry Observer agrees or supports this point of view. If you have any objections, please contact Semiconductor Industry Observer.
Today is the 3924th content shared by "Semiconductor Industry Observer" for you, welcome to follow.
"The first vertical media in semiconductor industry"
Real-time professional original depth
Public account ID: icbank
If you like our content, please click "Reading" to share it with your friends.