In the verification field, the driving force behind the widespread acceptance of transaction-based verification methodologies is emerging standards. Standards such as OSCI's TLM (Transaction-Level Modeling) 2.0 and Accellera's Standard Co-Emulation Modeling Interface (SCE-MI) have led to a surge of interest in transactions. In addition, verification flows now employ hardware acceleration and emulation to speed up transaction-based verification.
Why use transactions?
Ran Avinun, director of product management for system design and verification at Cadence Design Systems, said there are three goals to achieve when modeling a system design. “The first is early software development, the second is early system definition, and the third is the description of executable specifications, which is what you need initially when you’re making architectural tradeoffs,” Avinun said.
Where do transactions fit in? Why would a designer want to start with transaction-level models and eventually implement them with hardware acceleration? For many users, the answer is that it enables much faster simulations. “If you write your models as TLMs or communicate through transaction-based verification, you can achieve much faster simulations,” Avinun said.
Another benefit of adopting TLM is faster and easier debugging. "Generally speaking, if you write TLM, you will generate fewer bugs and spend less time debugging. It also provides an opportunity to distinguish between functionality and implementation," Avinun said.
“You want to write a model that describes the functionality and then separate the constraints. They can be clock constraints or things that change over time for a specific process node. It’s easier to reuse the model as you move from one application to another or from one node to another,” Avinun said.
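To make this separation concrete, here is a minimal SystemC/TLM-2.0 sketch (illustrative only, not taken from any of the tools discussed in this article): the block's functionality lives in one routine, while timing is a separate, swappable delay constant that can be retuned per process node or application. The module name, the data transform, and the 20-ns figure are all hypothetical.

// Minimal sketch: functionality in transform(), timing as a separate constant.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

using namespace sc_core;

struct FilterModel : sc_module {                      // hypothetical block
  tlm_utils::simple_target_socket<FilterModel> socket;

  // Timing kept apart from functionality; retune per node or application.
  sc_time processing_delay;

  SC_CTOR(FilterModel) : socket("socket"), processing_delay(20, SC_NS) {
    socket.register_b_transport(this, &FilterModel::b_transport);
  }

  // Pure functionality: what the block does, independent of timing.
  static unsigned char transform(unsigned char in) { return in ^ 0x5A; }

  // TLM transport hook: wraps the functionality and annotates the delay.
  void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
    unsigned char* data = trans.get_data_ptr();
    for (unsigned i = 0; i < trans.get_data_length(); ++i)
      data[i] = transform(data[i]);
    delay += processing_delay;                        // timing annotation only
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};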
How transactions are used
According to Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys, when it comes to TLM, at least five usage models have become mainstream (see table below). At the top of the list is the reuse scenario.
In this case, a large portion of the design has already been written in RTL. The best approach here is a mixed-mode simulation methodology: the existing RTL runs on the FPGA while TLMs of the design's new blocks run as a virtual prototype.
The second-ranked scenario is the verification use case, in which you bring up a testbench against a virtual prototype before RTL is available. This is achieved with the help of untimed TLMs.
“People start with the untimed model and define the verification scenarios they want to cover,” Schirrmeister said. “When I’m playing games and downloading stuff on my phone, can it still answer calls? With a virtual platform, you can define these cases very easily early on because they’re done in software running on the processor. Then you can use them in your project.”
The third usage model is evaluating the connections between the system and the outside world. These connections can take the form of either physical or virtual I/O.
“For example, with a USB interface, you want to connect to the real world with high fidelity. But if the interface doesn’t exist yet, you can connect to it virtually so you can start developing software,” Schirrmeister said.
While USB is a compelling example, Schirrmeister also cited a case in which a design team used the methodology to implement a wireless interface in a cell phone; the interface was implemented in software on the virtual side of the FPGA prototype.
The fourth use model is remote software development. When physical hardware does not yet exist, a virtual prototype can serve as the early software-development environment. "In this case, you create a development environment in which the software developers working in it do not even need to know whether their software is running on an FPGA prototype or a virtual platform." As Schirrmeister put it, this is a way to "keep the software developers out of the lab."
The fifth (and final) use case is a hybrid approach to software development built on FPGA hardware prototypes. “It turns out that FPGAs are not the first choice for running processors because FPGAs are more focused on DSP,” Schirrmeister said. By turning the processor model itself into a software implementation and connecting it to the hardware prototype, the processing load is balanced between the two sides. In addition, you can get very fast execution because you no longer have to worry about certain parts of the software.
Tool and process evolution
For TLM to develop further, it needs to serve three requirements. The two most obvious are embedded-software development and design verification. "Verification engineers need simple, direct tests," Schirrmeister said. An evolving aspect of the verification requirement is random test generation, with checkers, monitors, and coverage expressed in TLM for comprehensive system-on-chip (SoC) verification. In time, random test-pattern generation, coverage checking, and the use of monitors will spread to the TLM world and virtual prototyping.
The third requirement is a direct link from TLM to implementation. "We call this flow 'TLM to GDSII,'" Schirrmeister said. "In the past, there were two worlds. One focused on virtual platforms and the other focused on high-level synthesis. We think that at some point in the future, these two worlds will merge into one."
The question for many design engineers and EDA vendors is how to establish the link between the virtual platform and the high-level synthesis (HLS) flow.
“There has always been an attempt to bridge these worlds using TLM,” said Brett Cline, vice president of marketing and sales at Forte Design Systems. “But the problem has always been that the standard only considers verification and not synthesis. There are some very basic things missing from the TLM specification that are critical to hardware design. For example, there is no provision for a reset mechanism.”
The effort within OSCI eventually led to a revision of the TLM 1.0 standard, resulting in TLM 2.0. "We expanded on OSCI TLM 1.0 and did something you might expect us to do, which was to look at synthesis," Cline said. "TLM 2.0 is a more synthesis-focused standard, aimed primarily at bus-based systems." TLM 2.0 includes a variety of transactional application programming interfaces (APIs) for bus-based systems.
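As a rough illustration of those bus-oriented APIs, the sketch below shows an initiator issuing a single write through TLM 2.0's standard generic payload and blocking-transport call. The module name and target address are hypothetical; only the payload fields and the b_transport call itself come from the OSCI standard.

// Sketch of a TLM-2.0 initiator issuing one bus write (hypothetical target).
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

using namespace sc_core;

struct CpuStub : sc_module {
  tlm_utils::simple_initiator_socket<CpuStub> socket;

  SC_CTOR(CpuStub) : socket("socket") { SC_THREAD(run); }

  void run() {
    unsigned int word = 0xCAFEF00D;                 // example data to write
    tlm::tlm_generic_payload trans;                 // standard bus payload
    sc_time delay = SC_ZERO_TIME;

    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x1000);                      // hypothetical register
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&word));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    trans.set_byte_enable_ptr(nullptr);
    trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

    socket->b_transport(trans, delay);              // blocking transport call
    if (trans.is_response_error())
      SC_REPORT_ERROR("CpuStub", "bus write failed");
    wait(delay);                                    // honor annotated timing
  }
};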
Cline believes that virtual platforms and HLS are separated because of the gap between verification engineers and implementers. "People look at it from two perspectives. You are either a verifier (from a virtual platform perspective) or an implementer who designs hardware in SystemC," Cline said. "Now, verifiers have realized that TLM represents a viable path to get to the implementation without rewriting. At the same time, implementers are starting to understand how to integrate some things into system models that run very fast."
Schirrmeister said that in the past, most vendors and users have used virtual platforms and/or HLS in isolation. "TLM 2.0 was created to help early software development and high-performance simulation, but it didn't consider HLS much. In fact, we are pushing the TLM 2.0 standard to address HLS requirements with the synthesizable subset. This is the direction the industry needs to deal with," he said.
Hardware generation
An important component of transaction-based verification is hardware that, with the help of TLM, can deliver very high-speed verification. Recently, Synopsys launched its HAPS-60 series of rapid prototyping systems as part of its Confirma platform. Built on Xilinx Virtex-6 FPGAs, the HAPS-60 systems are the latest answer to the "build or buy" decision that design teams have long faced with rapid prototyping technology.
The series includes three models: the HAPS-61 (single FPGA, up to 4.5 million gates), the HAPS-62 (dual FPGA, 9 million gates), and the HAPS-64 (quad FPGA, 18 million gates). In addition to doubling the capacity of the previous-generation HAPS-50 series, the HAPS-60 systems run at clock frequencies of up to 200 MHz.
A high-level overview of the components in the Confirma rapid prototyping system (Figure 1) starts with the RTL design files, which run through synthesis. The design is then partitioned onto the rapid prototyping board. The system's Confirma software performs this partitioning and understands that its target is a HAPS board. Users can then instantiate the interfaces the prototype requires and link the design into the other environments needed for co-simulation and transaction-based verification.
Figure 1: A Confirma rapid prototyping system starts with RTL design files, proceeds to synthesis and then partitions the design.
Early rapid prototyping systems ran afoul of bandwidth limitations caused by the inability of FPGA pin counts to keep up with design size and speed requirements. In the past, the solution to this problem was interconnect multiplexing, which was a workaround but ultimately limited the overall performance of the system.
The HAPS-60 system avoids these bandwidth limitations with automatic high-speed time-division multiplexing. The system's software automatically inserts the time-division multiplexing logic (rather than forcing the user to do it manually) (Figure 2, left). "The old way would have required a deep dive into the RTL design files," said Doug Amos, business development manager for solutions marketing at Synopsys.
Figure 2: The HAPS-60 system avoids bandwidth limitations by using automatic high-speed time-division multiplexing; the system's software inserts the time-division-multiplexing logic automatically rather than forcing the user to do it manually.
This automated approach achieves a 1-Gbit/s data rate coupled with automatic timing synchronization, which equates to a 7x improvement in pin-bandwidth efficiency and a 30% improvement in average system performance (Figure 2, right).
The inclusion of the UMRbus architecture makes the HAPS-60 system particularly well suited to transaction-based verification (Figure 3). The UMRbus is a high-performance, low-latency communication bus that provides connectivity to all onboard FPGAs, memories, registers, and other resources.
“UMRbus is used for overall board control,” Amos said. It supports remote access to the entire system for configuration and monitoring. Many design interaction and monitoring features (Figure 3, right) are included. “The user can control the design, access the design, add to the design, read back memory and debug,” Amos said.
UMRbus also supports several advanced modes, including transaction-based verification and co-simulation (Figure 3, left). "Users can write programs to implement various design interaction and monitoring functions," Amos said. The system includes many host-based debugging modes that were traditionally associated with simulation.
When it comes to transaction-based verification, the HAPS-60 system can significantly reduce verification time by using the SCE-MI 2.0 transaction interface (Figure 4). "This is exactly what SCE-MI 2.0 was developed for," Amos said. "The SCE-MI interface allows us to do transactions in software, pass the transactions to the hardware, and have the hardware regenerate the transactions. This technology is used in emulator-type environments to mimic real-world behavior."
Now, the HAPS-60 system makes this emulator-style methodology possible on a rapid prototyping system. SCE-MI allows these advanced concepts to be used on the prototype side. “This system blurs the line between prototype and emulator, and SCE-MI is the enabler that makes it possible,” Amos said. The result is a simplified testbench that, running on HAPS hardware, can be 10,000 times faster than simulation.
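The sketch below illustrates only the concept Amos describes, not the actual SCE-MI 2.0 API: the software side expresses intent as whole transactions, and a hardware-resident transactor (stubbed out here as an ordinary function) would regenerate the cycle-by-cycle bus activity. All names are hypothetical.

#include <cstdint>
#include <cstdio>
#include <vector>

// Software side: describe intent as a whole transaction, not pin activity.
struct BusWrite {
  uint32_t address;
  std::vector<uint8_t> data;
};

// Stand-in for the software-to-hardware channel. In a real flow this would be
// the SCE-MI 2.0 infrastructure, and a synthesizable transactor (BFM) in the
// FPGA would expand the message into cycle-by-cycle bus activity.
void send_to_hw(const BusWrite& tx) {
  std::printf("send: addr=0x%08x, %zu bytes\n",
              static_cast<unsigned>(tx.address), tx.data.size());
}

int main() {
  // One message-level call replaces many clocked pin events, which is where
  // the speedup over signal-level co-simulation comes from.
  send_to_hw({0x4000, {0xDE, 0xAD, 0xBE, 0xEF}});
  return 0;
}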
Supporting TLM 2.0
Another hardware vendor providing support for transaction-based verification, EVE, recently added support for the TLM 2.0 standard to its ZeBu emulation platform product line. TLM 2.0 is the Open SystemC Initiative's (OSCI's) interface standard for SystemC model interoperability and reuse. "Given that we are bringing an emulator into this context, for us this is more like transaction-based co-emulation," said Lauro Rizzatti, general manager of EVE-USA.
EVE has implemented support for TLM 2.0 through a transaction adapter (Figure 5) that supports multiple targets and initiators; blocking and non-blocking transport interfaces; and the loosely timed (LT), loosely timed decoupled (LTD), and approximately timed (AT) coding styles.
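To show what the loosely timed, temporally decoupled style means in plain OSCI TLM-2.0 terms (this is generic code, not EVE's adapter), the sketch below has an initiator run ahead of simulation time within a quantum and synchronize only when the quantum expires. The module name, addresses, and loop count are illustrative.

#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>

using namespace sc_core;

struct LtInitiator : sc_module {
  tlm_utils::simple_initiator_socket<LtInitiator> socket;
  tlm_utils::tlm_quantumkeeper qk;                  // local-time bookkeeping

  SC_CTOR(LtInitiator) : socket("socket") {
    tlm_utils::tlm_quantumkeeper::set_global_quantum(sc_time(1, SC_US));
    qk.reset();
    SC_THREAD(run);
  }

  void run() {
    unsigned int word = 0;
    for (unsigned i = 0; i < 1000; ++i) {
      tlm::tlm_generic_payload trans;
      sc_time delay = qk.get_local_time();          // run ahead of sim time
      trans.set_command(tlm::TLM_READ_COMMAND);
      trans.set_address(i * 4);                     // hypothetical addresses
      trans.set_data_ptr(reinterpret_cast<unsigned char*>(&word));
      trans.set_data_length(4);
      trans.set_streaming_width(4);
      trans.set_byte_enable_ptr(nullptr);
      trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

      socket->b_transport(trans, delay);
      qk.set(delay);                                // absorb target's delay
      if (qk.need_sync()) qk.sync();                // yield only at quantum
    }                                               // boundaries
  }
};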
At the system level, users can integrate virtual platforms, TLM 2.0 transaction adapters, and advanced SystemVerilog hardware verification environments. At the emulator level, the ZeBu TLM-2.0 transaction adapter is an open framework that enables interoperability with other ZeBu transactors, whether taken from EVE's transactor catalog or generated by ZEMI-3. ZEMI-3 is EVE's behavioral SystemVerilog compiler for transactor bus-functional models (BFMs); it makes it easy to write cycle-accurate BFMs that exchange information with C++ or SystemVerilog testbenches.
According to Rizzatti, 70 to 80 percent of EVE customers use ZeBu in transaction-based mode. "They may also use it in a simple C-based loop mode (not transaction mode). But even if they do that, they still use transaction mode because of the benefits," he said.
Ron Choi, EVE's marketing director, said support for TLM 2.0 takes EVE's emulator to the next level in terms of interoperability. "For many years, we had a transaction-level interface, but it had to be implemented through a proprietary API. It was a very useful methodology, but now there is a stronger demand for a standards-based approach," he said.
The TLM 2.0 transaction adapter solves the problem of designers having to write different code to bridge different products. "Typically, ESL tools have always had the ability to connect to RTL simulators through programming language interfaces (PLIs) and to emulators through APIs calling C/C++ functions," said Rizzatti. "That forces them to write a separate peripheral application for each interface; they have to code the interoperability themselves. A better approach is to use TLM 2.0, which defines an interoperability layer that frees users from having to worry about the underlying implementation. In this way, it doesn't matter whether they use SystemC models or not."