
Analog AI may turn this company into a unicorn

Latest update time: 2019-07-05

Source: This article is translated from "nextplatform" by the public account Semiconductor Industry Observer (ID: icbank). The original author is Nicole Hemsoth; our thanks to her.


To date, there are two trends in artificial intelligence inference, and only a few companies are deeply involved in them.


The first trend takes us back to the future with analog computing engines, which can greatly reduce power consumption and, potentially, cost, though with trade-offs in the complexity they create and the complexity they avoid. More on this in a moment.


The second trend is that any company entering the inference market takes a similar approach, hedging its bets with a dual focus on data center and edge inference. More specifically, the same architecture is used in edge devices and, crammed onto a PCIe card, as a data center accelerator.


While we talk about trends in AI inference, remember that in the data center the CPU is still king. Offloading models to accelerators has not yet caught on in this particular part of the workflow, but as the complexity (and ROI) of trained models continues to grow, it will become meaningful enough for large companies to build their own hardware. Until someone does, though, it remains economically and technically out of reach.


One of the rare funded inference startups just got a fresh injection of cash to fund its push into the data center and address the challenges above. Mythic, which we covered last year, announced a $30 million Series B-1 round, bringing its total raised to $86 million.


If you skipped our architectural introduction, the short story is this: founded in 2013, the company built on decades of work in analog devices and claims to have perfected some extremely complex analog-to-digital (and back again) circuitry, along with optimizations for complex neural network inference (CNNs and RNNs, with an eye on transformer networks and other new approaches to AI coming out of Google and other hyperscale research labs).


One could argue that Mythic's data center ambitions detract from its emphasis on edge devices, which is where the real mainstream opportunity lies. Especially since the data center market is made up mostly of hyperscalers, which either test a new device just long enough to decide to build one themselves, or leave a startup in the precarious position of running out of money and stalling development before its chips ever make it inside those data centers.


While margins can be maintained in the emerging edge mainstream, data center mindshare is a hot commodity. It is what several AI chip startups targeting training and inference are chasing, but given the limited number of potential large customers, their tendency to build and buy at scale, and the long hardware qualification cycles, it is hard to say whether any of these needs are being met well. Add the heavy software integration work and the need for a long-term roadmap, and, to put it bluntly, the data center may remain a distant hope even for the best technology and teams.


But, as Mythic CEO Mike Henry told us, there are ways around the obstacles that keep startups from landing big customers. He thinks Mythic can make the biggest companies an offer they can't refuse. Well, more accurately, one they won't want to refuse, because refusing makes no sense from a development and cost perspective.


“The key is to have something really differentiated that those hyperscalers couldn’t build themselves, or that would cost them enough that they wouldn’t want to,” Henry said, and he has a point. “These companies have large systems and hardware teams, but they excel at building large-scale digital integration and systems. I haven’t seen any complex analog chips come out of these companies, aside from some network communication fabrics.” That is not to say no teams are working on it, but as Henry explained, it takes years to get all of this analog/digital conversion right, not to mention the many other optimizations required for such devices to work at the edge or in the data center.


“If you look at a relatively simple analog device, like an automotive sensor from Texas Instruments, there are eight analog-to-digital converters on the chip. They have high sampling rates and 8 to 16 bits of precision, and there are maybe eight such chips on the sensor,” he said. “Our problem is putting about 22,000 of these converters on one chip while keeping roughly the same power budget. The density of converters on the chip is far beyond anything that has come before. And we have to figure out how to make them small and thin so we can line them up with the flash memory without blowing the power budget.”
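To make the converter problem concrete, here is a minimal sketch of how a compute-in-memory array might perform a matrix-vector product, with DAC/ADC quantization modeled as simple uniform rounding. The function names, bit widths, and quantizer are illustrative assumptions, not Mythic's actual design.

```python
import numpy as np

# Minimal sketch of analog compute-in-memory, under assumed parameters;
# illustrative only, not Mythic's actual circuit design.
# Weights sit as cell conductances, inputs arrive as DAC-driven voltages,
# bitline currents accumulate the products, and per-column ADCs digitize.

def quantize(x, bits, full_scale):
    """Model a DAC/ADC: clip to +/- full_scale, round to 2**bits levels."""
    levels = 2 ** bits - 1
    x = np.clip(x, -full_scale, full_scale)
    step = 2 * full_scale / levels
    return np.round((x + full_scale) / step) * step - full_scale

def analog_matvec(weights, inputs, dac_bits=8, adc_bits=8):
    g = quantize(weights, dac_bits, np.abs(weights).max())  # programmed cells
    v = quantize(inputs, dac_bits, np.abs(inputs).max())    # DAC word lines
    i = g @ v                                               # analog summation
    return quantize(i, adc_bits, np.abs(i).max())           # per-column ADC

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)
err = np.linalg.norm(W @ x - analog_matvec(W, x)) / np.linalg.norm(W @ x)
print(f"relative error from 8-bit conversion: {err:.4f}")
```

The point of the sketch is the shape of the problem: every column of the array needs its own converter, which is why a device-scale design implies tens of thousands of them inside one power budget.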


Taking a neural network trained digitally and running it on an analog device is no small feat. The complex data flows of most deep learning jobs simply don’t lend themselves to analog, Henry said. “We had to build a lot of digital wraparound fabric to give these networks a programmable, flexible architecture. When we thought about what we would need over the next few years, we realized it was a lot of raw, brute-force matrix math on the compute side, with no network-specific accelerators, but rather built-in digital conversion of data in and out so we can run CNNs, RNNs, tensor and transformer networks and whatever new things come along. It took us more than five years, keeping up with changes in topology all the while. But the key is analog to digital, and vice versa.”
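As a rough illustration of what such a digital wraparound fabric does, the hypothetical sketch below splits a large matrix-vector product across fixed-size analog tiles and accumulates the partial sums digitally. The tile size and the exact-arithmetic stand-in for each analog array are assumptions made for clarity.

```python
import numpy as np

# Hypothetical sketch of digital "wraparound" logic around analog tiles:
# a large matrix-vector product is split across fixed-size analog arrays,
# and the digital fabric routes inputs and accumulates partial sums.

TILE = 64  # assumed analog array dimension; real sizes are design-specific

def analog_tile(w_tile, x_tile):
    # Stand-in for one analog array's matrix-vector product;
    # a real tile would add DAC/ADC quantization and noise.
    return w_tile @ x_tile

def tiled_matvec(W, x, tile=TILE):
    rows, cols = W.shape
    y = np.zeros(rows)
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            # Digital accumulation of per-tile partial sums.
            y[r:r+tile] += analog_tile(W[r:r+tile, c:c+tile], x[c:c+tile])
    return y

W = np.arange(128 * 128, dtype=np.float64).reshape(128, 128) % 7 - 3
x = np.ones(128)
assert np.allclose(tiled_matvec(W, x), W @ x)  # matches a direct product
```

Because the digital fabric only routes vectors and adds partial sums, the same tiles can serve whatever layer shapes a CNN, RNN, or transformer throws at them.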


“AI is an entirely new kind of workload for the semiconductor industry. It is low precision, memory intensive, and computationally simple from a control-flow perspective. Computing in existing memory can attack bottlenecks that Moore’s Law cannot solve.”
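To see why the memory-intensive part dominates, here is a back-of-the-envelope calculation with illustrative numbers (the model size, frame rate, and DRAM energy figure are assumptions, not Mythic's specs): a conventional accelerator moves every weight from DRAM to the compute units on each inference, while compute-in-memory leaves the weights where they are.

```python
# Illustrative back-of-the-envelope numbers, not Mythic's specs:
params = 50e6            # assumed model size: 50M weights
bytes_per_weight = 1     # int8 inference
fps = 30                 # assumed real-time video rate
dram_pj_per_byte = 160   # rough DRAM read energy, picojoules per byte

traffic_bps = params * bytes_per_weight * fps        # weight bytes moved per second
dram_watts = traffic_bps * dram_pj_per_byte * 1e-12  # power spent just moving weights

print(f"{traffic_bps / 1e9:.1f} GB/s of weight traffic")  # 1.5 GB/s
print(f"~{dram_watts:.2f} W for DRAM reads alone")        # ~0.24 W
# Compute-in-memory avoids this traffic entirely: the weights never move.
```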


It is hard to argue with that logic. Google, Facebook, Amazon and a few others could certainly find and leverage the expertise to build analog devices, but why bother reinventing the drivers and the devices themselves when the complexity and ramp-up time are so long, and when existing, relatively cheap memory technology makes the cost/benefit look a bit lopsided? The only advantage would be full control of the architecture to speed hardware and software co-development, and even that is a stretch.


On the other hand, with so much analog expertise on the planet and so much cheap memory available, making an inference chip could become anyone’s game, and on that basis things will only get easier for startups taking the same dual-market approach Mythic did. But that is just how business and competition work, and as we heard at a recent VC panel, inference is still anyone’s game.


“Sure, in grad school someone can put together some circuits that demonstrate analog computation, but that is a far cry from mass production; it is not the same as shipping it in a pile of devices or loading a network trained in TensorFlow. The hard part is loading it with negligible accuracy loss and ensuring consistency across millions of chips,” Henry added.


The risk of Mythic’s extended research and development over the past few years is mitigated by the fact that the market is still largely undefined and potentially explosive for certain niches. That makes a new round of investment in a dual strategy sound less scary than money poured into AI accelerators that reach the market long after the frameworks and de facto devices have changed (as we saw with training accelerators).


We fully expect to hear about a whole new crop of chip startups attacking from the edge. We also expect to hear from established companies that have spent decades making great strides in analog devices and finally have a chance to energize their businesses with the year’s hottest workload. As with so much else in this space, there will be far more “solutions” on the market than problems complex enough to warrant anything beyond off-the-shelf parts. But the real hardware battles will be fought over problems rooted in AI, if those problems prove persistent enough.


At this point, the biggest analog AI inference chip story comes from IBM, which is also developing a device based on 8-bit phase-change memory.


Mythic's story also tells us that this is a good time to watch what the world's memory makers are doing. One small but telling detail in the Mythic news: industry giant Micron, through Micron Ventures, joined the funding round.





