An exploration of SoC-based system design


Debugging a complex electronic system has never been easy, but it used to be doable. You found a symptom. With your trusty oscilloscope you traced it back through the analog circuitry into the digital domain. Then you wrote a small test program to exercise the drivers and peripherals, attached some logic probes, and worked your way back through the peripheral controller and the CPU bus until you found the source of the problem. Of course, this often meant borrowing some code from someone else.

System-on-chip (SoC) integration has fundamentally changed all of this. Today the microprocessor, buses, peripheral controllers, and much of the memory and analog circuitry are sealed inside a single package. That package may be an ASSP, an advanced microcontroller, an FPGA, or an ASIC of your own design. Whatever the SoC is, the fact remains that unless the chip design team is willing to help you, you cannot get a good look inside.

A debug core embedded in the CPU provides traditional debugging features such as breakpoints and near-real-time trace to help you along. On its own, however, a debug core gives you only a CPU-centric view of your system. If an event cannot be defined purely in terms of CPU state, you may have to write a short diagnostic routine that halts the system when the event occurs and reads the relevant data back through the CPU. At best this process wastes your time; at worst it perturbs the system enough that it never actually catches the problem, which is very inefficient.
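As a rough illustration, here is a minimal sketch of the kind of throwaway diagnostic routine described above, for a hypothetical memory-mapped peripheral. The register names and addresses are purely illustrative assumptions, not those of any real part:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical memory-mapped peripheral registers; the addresses are
 * illustrative only, not taken from any real device. */
#define DMA_STATUS_REG   ((volatile uint32_t *)0x40001000u)
#define DMA_COUNT_REG    ((volatile uint32_t *)0x40001004u)
#define UART_FIFO_LEVEL  ((volatile uint32_t *)0x40002008u)

/* Throwaway diagnostic: called from a breakpoint or error hook, it
 * snapshots the interesting peripheral state into plain variables the
 * CPU debugger can read, then parks so the debugger can inspect them. */
void diag_snapshot(void)
{
    uint32_t dma_status = *DMA_STATUS_REG;   /* snapshot, not a live view */
    uint32_t dma_count  = *DMA_COUNT_REG;
    uint32_t uart_level = *UART_FIFO_LEVEL;

    printf("DMA status=0x%08lx count=%lu uart_fifo=%lu\n",
           (unsigned long)dma_status, (unsigned long)dma_count,
           (unsigned long)uart_level);

    for (;;) { /* park here; attach the debugger and inspect */ }
}
```

Note that merely running such a routine changes bus traffic and timing, which is exactly why this approach can fail to reproduce the problem it is hunting.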

Silicon intellectual-property (IP) vendors have addressed this challenge by providing increasingly sophisticated on-chip debug instrumentation. Today's offerings, however, are proprietary rather than standardized. Decisions about debug instrumentation are made early in chip design, often by a team in another company whose goals are tape-out schedule and die area, not whether your system will be easy to debug. Yet chip designers, software developers, and system designers still have one thing in common: the need to work together to find problems at the system level.

Plan from the beginning

Whether you can successfully find problems in your system may actually have been decided long ago, perhaps two years back, during development of the SoC you are now using. Brad Quinton, chief architect for embedded instrumentation at Tektronix, believes that planning early in the chip design process is critical, both for deciding what debug hardware goes into the chip and for making the chip fully observable. Such planning, however, is not common.

Chip design teams do put instrumentation structures into their chips up front, but for other reasons. Design-for-test and built-in self-test hardware goes in to support IC production testing; these resources yield very little diagnostic information and are generally not usable for debugging. Designers also build tools specifically for the silicon bring-up team, but those tools may have only internal documentation and are often disabled once the silicon ships. There may be boundary scan on high-speed serial ports, and even quite sophisticated instrumentation functions that must be tuned in-system. These, however, exist to establish and verify board-level connectivity, not to support system debugging.

Quinton argues that, useful as these structures are, chip designers should be asking broader questions. "Think about it at the system level. Where are the key interfaces, the high-level state machines? What information do you need in order to know which subsystem is working?" Quinton said.

These considerations have led some IP vendors to develop a new class of modules: instruments and controllers designed to be built into an SoC and used not only by the IC team designing the chip, but also by the system design teams that use it, as shown in Figure 1.

Figure 1. On-chip debugging circuitry can become a practical design in its own right.

Two vendors, working from different directions, illustrate this trend. ARM has expanded its CoreSight CPU debug architecture to cover most of the blocks in a multicore SoC, while Quinton's Veridae, now part of Tektronix, started with instrumentation modules and trigger/trace controllers and grew to encompass CPU debug cores. Both approaches are invaluable to system debuggers, and both provide important concepts for extending observability in an SoC-based design.

Data Source

While you may want to immediately start sprinkling data-acquisition points across your system block diagram, Quinton begins with some basic questions. Who will use the debug capability: application programmers, analog designers, or mechanical engineers? At what level of abstraction will they define events: function calls, signal-to-noise ratios, torque readings? What will these users be trying to do: find hot spots in their code, hunt down a noise transient, or understand why a drive rod failed? Only when you understand the problems can you identify the data that will solve them.

The trick, Quinton said, is then to determine where to collect that data. The obvious first step is to capture data at its sources: A/D converter (ADC) outputs, status registers, network interfaces, and so on. Some information you do want as close to the source as possible, such as the state of the physical device being controlled.

In other cases, however, thinking ahead about where to collect data can reduce measurement overhead and the amount of data that must be analyzed afterward. "Find key places to observe the system," Quinton advises. "You might be surprised at the amount of system state that passes through a single point: for example, the interface of the CPU to the system bus."
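To make the choke-point idea concrete, here is a hypothetical layout for one capture record from a monitor sitting on the CPU's system-bus interface. The fields and widths are assumptions for illustration, not any vendor's actual format:

```c
#include <stdint.h>

/* Hypothetical capture record from a monitor on the CPU-to-bus
 * interface: one record per transaction; the layout is illustrative. */
typedef struct {
    uint64_t timestamp;   /* local cycle count when sampled        */
    uint32_t address;     /* target of the transaction             */
    uint32_t data;        /* write data or returned read data      */
    uint8_t  is_write;    /* 1 = write, 0 = read                   */
    uint8_t  master_id;   /* which bus master issued the access    */
    uint8_t  resp;        /* OKAY/ERROR-style response code        */
} bus_capture_t;

/* From this single stream you can recover code fetch patterns,
 * peripheral programming sequences, DMA descriptor traffic, and more,
 * which is why one well-chosen observation point goes a long way. */
```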

Where to sample data also depends on the level of abstraction you need. Consider, for example, a system exchanging information over a PCI Express (PCIe) bus. Probing the serializer/deserializer (SERDES) and PCIe controller gives you detail about how the lower protocol layers are behaving, which matters when you are debugging the bus interface itself. But if you trust the bus interface and simply want to watch the flow of information, you are better off monitoring the message buffers in main memory and ignoring the bus controller altogether.
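A minimal sketch of that higher-abstraction alternative follows: instead of probing the SERDES, poll the buffers the PCIe device writes into main memory. The ring-buffer layout and names here are assumptions made for illustration, not any real driver's structures:

```c
#include <stdint.h>
#include <stdio.h>

#define RING_SLOTS 64

/* Hypothetical ring buffer that a PCIe device DMAs received messages
 * into; the layout is illustrative only. */
typedef struct {
    volatile uint32_t seq;        /* bumped by the device per fill */
    uint32_t          length;
    uint8_t           payload[256];
} ring_slot_t;

/* Stand-in for the mapped DMA region in a real system. */
static ring_slot_t rx_ring[RING_SLOTS];

/* Watch traffic at the application level: log each new message as it
 * lands in memory, ignoring the link and transaction layers entirely. */
void watch_ring(void)
{
    uint32_t last_seq[RING_SLOTS] = {0};

    for (;;) {
        for (int i = 0; i < RING_SLOTS; i++) {
            uint32_t s = rx_ring[i].seq;
            if (s != last_seq[i]) {
                printf("slot %d: seq=%lu len=%lu\n", i,
                       (unsigned long)s,
                       (unsigned long)rx_ring[i].length);
                last_seq[i] = s;
            }
        }
    }
}
```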

The relativity problem

Once you have identified the data you need and determined where in the system to extract it, you still have to collect it, correlate it in time, detect trigger patterns, capture the data you want to keep, move it off the system, and get it into analysis tools. In a discrete system this process is relatively simple: everything lands in a logic analyzer, which provides a unified time base. In an SoC-based system, you will likely want to bring all the data back to a central module on the SoC, as shown in Figure 2. The good news is that IP from Tektronix and ARM simplifies this process.

Figure 2. A complete on-chip debug system combines a traditional CPU debug core with data-collection stations and message routing, and provides a way to get the data off the chip.

Using such IP, however, raises new issues. The latency between a collection point on the far side of the die and the central module can be dozens of clock cycles. A chip may also contain dozens of clock domains, and every clock-domain crossing adds further delay. How do you know whether two pieces of data are actually simultaneous?

If you are building your own debug tools, this is a hard problem. You can estimate the propagation delay between each data-acquisition module and the central controller and post-process the data streams to align them, but that approach copes poorly with the non-deterministic delays of clock-domain crossings. You can distribute a master clock and use it to timestamp acquired data, but that costs significant circuit overhead. Commercial solutions such as Tektronix's use both hardware IP and software algorithms to handle all of this automatically, presenting events from different clock domains and different physical locations on the SoC in a single time-correlated view, one that often reveals unexpected system behavior.
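As a toy version of the post-processing alignment step, the sketch below assumes each capture station stamps events with its own cycle counter and that a per-station offset to a common time base has already been estimated. The structures and offset values are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>

/* One event as it arrives at the central controller, stamped with the
 * capturing station's local cycle count. */
typedef struct {
    int      station;     /* which acquisition module saw it   */
    uint64_t local_time;  /* timestamp in that station's clock */
    uint32_t value;
} raw_event_t;

/* Estimated offset of each station's clock onto the common time base,
 * including its propagation delay to the controller. The
 * non-deterministic part of each clock-domain crossing makes these
 * estimates approximate, which is the limitation noted above. */
static const int64_t station_offset[3] = { 0, +17, -42 };

static uint64_t to_global(const raw_event_t *e)
{
    return (uint64_t)((int64_t)e->local_time + station_offset[e->station]);
}

int main(void)
{
    raw_event_t trace[] = {
        { 0, 1000, 0xA1 }, { 1, 990, 0xB2 }, { 2, 1055, 0xC3 },
    };

    /* Map every event onto one time base before comparing ordering. */
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        printf("station %d value 0x%X at global t=%llu\n",
               trace[i].station, trace[i].value,
               (unsigned long long)to_global(&trace[i]));
    return 0;
}
```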

Even with correlation handled automatically, challenges remain. What if the trigger you need is the simultaneous occurrence of small events in different parts of the system, say, an ADC output equal to zero just as a CPU core enters an interrupt service routine? What if the events span not only different places but different levels of abstraction, such as a stack overflow that follows the closing of the serial port's receive eye? Quinton calls triggering on events across such domains the "holy grail" of system triggering. As the metaphor suggests, ready-made solutions are hard to find. But collecting enough data, and thinking carefully about how to build local triggers that capture the pieces of these complex events, will often get you there.
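Here is a sketch of the "build local triggers and combine them" idea, expressed in software terms: two local detectors each raise a sticky flag, and a composite trigger fires when both have occurred within a window. The event choices mirror the example above (ADC output equals zero, CPU enters an interrupt service routine); all names and the window size are hypothetical:

```c
#include <stdint.h>
#include <stdbool.h>

/* Sticky flags set by two local trigger detectors in different parts
 * of the system. In hardware these would be one-bit registers in the
 * instrumentation fabric; the names are illustrative. */
static volatile bool     adc_zero_seen;
static volatile bool     isr_entry_seen;
static volatile uint64_t adc_zero_time, isr_entry_time;

#define WINDOW_CYCLES 128   /* how close "simultaneous" has to be */

/* Local trigger 1: called from the ADC sample path. */
void on_adc_sample(int32_t sample, uint64_t now)
{
    if (sample == 0) { adc_zero_seen = true; adc_zero_time = now; }
}

/* Local trigger 2: called at interrupt-service-routine entry. */
void on_isr_entry(uint64_t now)
{
    isr_entry_seen = true;
    isr_entry_time = now;
}

/* Composite trigger: fire when both local events occurred within the
 * window, in either order. This is the cross-domain condition that is
 * hard to express directly in any single trigger engine. */
bool composite_trigger(void)
{
    if (!(adc_zero_seen && isr_entry_seen))
        return false;
    uint64_t a = adc_zero_time, b = isr_entry_time;
    uint64_t diff = (a > b) ? a - b : b - a;
    return diff <= WINDOW_CYCLES;
}
```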

From Chip to System

We have discussed chip-level debug capabilities in some detail, and they are a huge step up from a simple CPU debug core. For system designers developing their own SoCs, these capabilities are directly available. But what about everyone else, the designers using someone else's chip? Many of the same concepts still apply.

The most important point is still to plan ahead: identify the users and their environments, develop a test strategy that answers the questions those users will ask, and plan data collection to support that strategy. The biggest difference is that chip designers ask these questions and then build structures into their SoCs, while system designers ask the questions and must answer them through the tools, and the support, that their SoC vendor provides.

Accordingly, system design teams should ask their SoC vendors at least three kinds of hard questions. First, does the chip vendor provide a debug workbench, for example a Tektronix or ARM host software package, to control the SoC's debug hardware? Does that package fit well into your existing system debug environment?

Second, what points in the SoC does the debug hardware actually reach? Do you get trigger/trace capability for just one CPU core, or does the chip provide extended acquisition, trace, and cross-triggering for the CPUs, accelerators, buses, and peripheral controllers? Third, what means does the debug subsystem provide for observing the state of other chips and devices in the system?

The answers to these questions determine which external measurement equipment the system team can connect and the workbench on which their system debug plan can be carried out. As with so much of system engineering, it is vital to start planning for debug as early as possible, during the architectural phase of the project. Trying to debug an SoC-based system without adequate data can quickly become untenable.
