What can FPGA-based prototyping do for you?

Publisher: 岭南布衣 | Last updated: 2011-09-27

As advocates of FPGA-based prototyping, we might be suspected of seeing only the advantages of the method while turning a blind eye to its shortcomings. That is not our intention. Our goal in this book, The Handbook of FPGA-Based Prototyping, is to expose fully the pros and cons of FPGA-based prototyping, because ultimately we do not want anyone to embark on this long journey when another method (such as SystemC-based virtual prototyping) would have achieved their goals better.

Let's take a closer look at the purpose and limitations of FPGA-based prototyping methods and their suitability for system-level verification and other purposes. Keeping the focus on the purpose of the prototyping project makes it easier to make decisions on platforms, IP usage, design export, debugging, and other design aspects. In this way, we can learn from other prototyping teams around the world by analyzing their projects.

FPGA-based prototypes can meet different needs

Prototyping is not a process that can be completed by pressing a few buttons. It requires careful attention and thought at different stages. In addition to explaining the work and expertise involved in this process, we should also explain why prototyping should (or should not) be performed in SoC projects. In talking to prototypers over the years, one of the most common questions we are asked is "Why do you do this?" The answers vary, and we have summarized them into a few common reasons in Table 1. For example, "real-world data effects" may refer to a team's work to use prototyping to get a model of a system running at full speed and connect it to other systems or peripherals, perhaps to test compliance with a new interface standard. Their general reason for prototyping is "interfacing with the real world", and prototyping does provide the fastest and most accurate way to achieve this goal before the actual silicon device is available.

Table 1 Common purposes and reasons for using FPGA-based prototypes

Having a systematic understanding of the purpose of these projects and why we are prototyping will help us determine if FPGA-based prototyping can help us with our next project.

So let's explore the goals described in Table 1 and how an FPGA-based prototyping approach can help achieve them. In many cases real-world examples are given, and we would like to thank in advance those who have contributed their experience to guide others to success.

High performance and accuracy

Only an FPGA-based prototype can provide the speed and accuracy required to test every aspect of a design properly. We put this reason first because, while a project may have many stated goals, this may be the most fundamental reason of all for a team that needs a prototype. For example, the team's goal may be to verify an SoC's embedded software and see it running at full speed on real hardware, but the underlying reason for using a prototype is to ensure high performance and accuracy. We could verify this software at even higher performance in a virtual system, but we could not achieve the accuracy that real RTL gives us.

Real-time data streaming

One of the reasons why it is difficult to verify an SoC is because its state depends on many variables, including its previous state, the order of its inputs, and the broader system effects of the SoC output (and possible feedback). Connecting the SoC design to the rest of the system and running it at real-time speeds allows us to immediately observe the effects of changes in real-time conditions, inputs, and system feedback.
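As a toy illustration of this state-dependence (not from the original text), consider a saturating accumulator whose output depends on both its previous state and the order of its inputs; identical stimulus in a different order produces entirely different behavior:

```python
# Toy module whose output depends on its previous state and on the
# order of its inputs -- the property that makes exhaustive offline
# stimulus so hard to construct for a real SoC.

class ToyAccumulator:
    """Saturating accumulator with 'add' and 'clear' commands."""
    def __init__(self, limit=10):
        self.limit = limit
        self.state = 0

    def step(self, cmd, value=0):
        if cmd == "add":
            self.state = min(self.state + value, self.limit)  # saturate
        elif cmd == "clear":
            self.state = 0
        return self.state

seq1 = [("add", 7), ("add", 7), ("clear", 0)]  # saturates, then clears
seq2 = [("add", 7), ("clear", 0), ("add", 7)]  # same commands, new order

a = ToyAccumulator()
out1 = [a.step(c, v) for c, v in seq1]
b = ToyAccumulator()
out2 = [b.step(c, v) for c, v in seq2]
print(out1)  # [7, 10, 0]
print(out2)  # [7, 0, 7]
```

The two runs apply the same three commands yet visit different states, which is why observing the design under live, ordered, real-time stimulus reveals behavior that canned test vectors may never reach.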

A good example of this is the real-time data streaming in the HDMI prototype developed by the IP team at Synopsys in Porto, Portugal. In this example, high-definition (HD) media data flows through the prototype's processing core and out to an HD display, as shown in the block diagram of Figure 1. Note that at the bottom of the block diagram, real-time audio and HD video streams are received (from an external source), flow through the prototype, and are output to the real-time HDMI PHY connected to an external monitor. Using the pre-silicon prototype, we can immediately see and hear the effect of different HD data on our design, and vice versa. This real-time data streaming is only possible with an FPGA-based prototyping approach, and it is a great benefit not only in this kind of multimedia application but in many other applications that must respond to incoming data streams in real time.

Figure 1 HDMI prototype block diagram

Software and hardware integration

In the example above, the reader may have noticed that the prototype includes a small MicroBlaze™ CPU with peripherals and memory, reflecting all the common blocks of an SoC. In this design, the software running on the CPU mainly loads and controls the A/V processing. In many SoC designs, however, software consumes the greatest share of the development effort.

Given that software has become the dominant part of SoC development work, it is increasingly common for software work to sit on the critical path of the project schedule. Even when the SoC hardware is effectively ready for mass production, it is the software development and verification work that determines the project's actual completion date. How, then, can a system development team make software development and verification more efficient? To answer that question, we need to look at where the software team spends its time.

Modeling SoCs for software development

Software, by its very nature, is difficult to perfect. We are all accustomed to the software upgrades, service packs, and bug fixes we encounter in daily computer use. This endless approach to software improvement hits a snag, however, when the software is embedded in an SoC. On the other hand, the usage patterns and environmental conditions under which a system interacts with embedded software are easier to pin down than those of general-purpose computer software. Moreover, embedded software developed for simpler systems can itself be simpler, and therefore easier to verify fully.


For example, an SoC that controls a vehicle subsystem or an electronic toy is easier to test thoroughly than a smartphone running many applications and processes on a real-time operating system (RTOS).

If we look more closely at the software running on such a smartphone, such as the Android software shown in Figure 2, we see a multi-layered arrangement called a software stack. (This diagram is based on an original by software designer Frank Ableson in his book Unlocking Android.)

Figure 2 Android software stack

Looking at the software stack, we find that the lowest levels, those closest to the hardware, exist mainly to map the software onto the SoC hardware. This requires a detailed understanding of the hardware, down to addresses and clock cycles. Designers working at the lowest levels of the stack often call themselves platform engineers, and their job is to describe the hardware accurately so that higher levels of the stack can recognize and reuse it. Some RTOS vendors call this description a board support package (BSP); it is similar to the basic input/output system (BIOS) of the PCs we use every day.
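As an illustration of the kind of information such a hardware description captures, here is a minimal register-map sketch in Python. The peripheral name, addresses, and bit fields are invented for illustration; a real BSP is device- and RTOS-specific:

```python
# Hypothetical register map for an invented UART peripheral -- the sort
# of address/bit-field knowledge a platform engineer encodes in a BSP.

UART0_BASE = 0x4000_1000  # assumed base address (illustrative only)

REGS = {
    "CTRL":   {"offset": 0x00, "fields": {"ENABLE": 0, "IRQ_EN": 1}},
    "STATUS": {"offset": 0x04, "fields": {"TX_READY": 0, "RX_VALID": 1}},
    "DATA":   {"offset": 0x08, "fields": {}},
}

def reg_addr(name):
    """Absolute address of a named register."""
    return UART0_BASE + REGS[name]["offset"]

def field_mask(reg, field):
    """Single-bit mask for a named field."""
    return 1 << REGS[reg]["fields"][field]

print(hex(reg_addr("DATA")))                  # 0x40001008
print(hex(field_mask("STATUS", "RX_VALID")))  # 0x2
```

Code at this level of the stack is wrong the moment an address or bit position is wrong, which is why platform engineers and driver developers need a model that is accurate down to the hardware's actual layout.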

The next layer up contains the kernel of the RTOS and the drivers that connect higher-level software to the hardware described below. At these lowest levels of the stack, platform engineers and driver developers need to verify their code on a real SoC or on a fully accurate model of one; software developers at this level need to understand the behavior of their code on each clock cycle.

At the other end of the spectrum, at the top of the stack, we find user space, where multiple applications can run simultaneously: in a smartphone, for example, a contact manager, a video player, an Internet browser, and the phone subsystem that actually makes calls. None of these applications accesses the SoC hardware directly; indeed, they are to a large extent insulated from all hardware considerations. They rely on software running at lower levels of the stack to communicate with the SoC hardware and the rest of the system on their behalf.

We can summarize this as follows: at each layer of the stack, software developers need only a model accurate enough that their code believes it is running on the target SoC. Any more accuracy than that only makes the model run slower. In fact, to verify software at any given level, we need to model the hardware and the stack below that level; ideally, we should require only as much accuracy as is necessary, in exchange for the highest possible performance.

For example, application developers at the top of the stack can test their code on a real SoC or on a model of one. In that case, the model only needs to be accurate enough for the application to believe it is running on a real SoC; it need not be clock-cycle accurate, nor does it require detailed knowledge of the hardware structure. Speed, however, matters greatly, because in many cases multiple applications run simultaneously and interface with real-world data.

This approach of modeling with "just enough accuracy" for each software layer gives different software developers a variety of modeling environments to use at different stages of an SoC project. Using a language such as SystemC, we can model at the transaction level and create a simulator model of low accuracy but high enough speed to run many applications simultaneously. If real-time processing of real data is not important, a virtual prototyping approach may well be the better choice.

However, an FPGA-based prototyping approach is best suited when the entire software stack must be run in its entirety or when data must be processed in a real-world environment.

Only with FPGA-based prototyping can we break the inherent trade-off between accuracy and performance in modeling approaches. With FPGAs, we can achieve real-time speeds while modeling with full RTL cycle accuracy. This allows a single prototype to be used for both accurate models required for low-level software verification and high-speed models required by high-level application developers. In fact, the entire SoC software stack can be modeled on a single FPGA-based prototype. A good example of using FPGAs to verify software is a project by Scott Constable and his team in the Mobile Products Division at Freescale Semiconductor in Austin, Texas.

Freescale is keen to speed up its SoC development process because the short product life cycles of the mobile phone market demand that products reach market as soon as possible, both to beat the competition and to avoid rapid obsolescence. By analyzing the most time-consuming steps in the process, Freescale found that the greatest gain would come from speeding up the 3G protocol testing of its phones. If that testing could be completed before tape-out, Freescale could cut months from the project, which matters greatly given product life cycles of typically only one to two years.

Protocol testing is a complex process that takes about a day to complete even at real-time speed. In RTL simulation it would take years, and even on a faster emulator it would take weeks; neither is practical. FPGAs were used because they are the only way to reach the clock speeds needed to complete the testing in a timely manner.
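The arithmetic behind this is easy to sketch. The numbers below are illustrative assumptions, not Freescale's actual figures: assuming a 26 MHz real-time clock and typical model speeds, a one-day real-time test stretches to centuries in RTL simulation but stays practical on an FPGA:

```python
# Back-of-the-envelope wall-clock estimate for a 1-day real-time test
# replayed on slower models. All speeds are assumed, order-of-magnitude
# figures for illustration only.

REAL_TIME_HZ = 26e6                    # assumed baseband clock
SECONDS_PER_DAY = 86400
test_cycles = REAL_TIME_HZ * SECONDS_PER_DAY   # cycles in 1 day of real time

model_speeds_hz = {
    "RTL simulator":     100,     # ~100 cycles/s for a full-SoC simulation
    "hardware emulator": 500e3,   # ~0.5 MHz
    "FPGA prototype":    10e6,    # ~10 MHz
}

for name, hz in model_speeds_hz.items():
    days = test_cycles / hz / SECONDS_PER_DAY
    print(f"{name:18s}: {days:,.1f} days")
```

With these assumptions the simulator needs roughly 260,000 days (centuries), the emulator about 52 days, and the FPGA prototype under three days, which matches the years/weeks/timely contrast described above.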

Protocol testing requires developing much of the product's software, including hardware drivers, operating systems, and protocol stack code. Although the main purpose was protocol testing, using FPGAs allowed all of this software development work to be completed before tape-out, greatly accelerating the development of the various end products.

Freescale built a prototype of a multi-chip system that includes a dual-core MXC2 baseband processor and the digital portion of an RF transceiver chip. The baseband processor includes a Freescale StarCore DSP core for modem processing, an ARM®926 core for user application processing, and more than 60 peripherals.

The prototype was implemented on a Synopsys HAPS-54 prototyping board (see Figure 3). The baseband processor comprises more than 5 million ASIC gates, and Scott's team used the Synopsys Certify tool to partition it across three of the board's Xilinx Virtex-5 FPGAs, placing the digital RF design in a fourth FPGA. Freescale decided not to prototype the analog portion, instead feeding mobile network data in digital form directly from an Anritsu protocol test box.

Figure 3 Freescale's SoC design partitioned on the HAPS-54 prototyping board

Some of the design techniques used in the older cores worked well for ASICs but less well for FPGAs. In addition, some of the RTL had been generated automatically from system-level design code and, with its overly complex clock networks, was not very FPGA-friendly. Some adjustments therefore had to be made to make the RTL more FPGA-compatible, and this paid off significantly.

In addition to speeding up protocol testing, by the time first silicon arrived the Freescale engineers had been able to:
• release debugger software that needed no major modification on silicon;
• complete the driver software;
• boot the SoC to an operating system prompt;
• implement modem camping and registration.

Just one month after first silicon, the Freescale team made the first mobile phone call from the system, a milestone that shaved more than six months off the product development schedule.

As Scott Constable put it, "In addition to accelerating our stated protocol-testing goal, our FPGA system prototype proved its value again by accelerating the project in a number of other areas. Perhaps most importantly, it brought inestimable benefits to the developers: it allowed engineers to join the project early, so that all the development teams, from design to software to verification to applications, had a thorough understanding of the product six months before the chip was completed. This acceleration in building product expertise is difficult to measure on a Gantt chart, but it may be the most beneficial effect of all."

“Given these advantages, it was a natural progression to accelerate ASIC development with an FPGA-based prototyping solution. We subsequently introduced this approach to Freescale’s Networking and Microcontroller Divisions, where we are also using it for new IP verification, driver development, debugger development, and customer demonstrations.”

This example shows how an FPGA-based prototyping approach can provide a value-added tool to software development teams and bring significant returns in terms of product quality and project progress.

Interface Advantage: Testing Data Effects Under Real Conditions

It is difficult to imagine an SoC design that does not follow the basic structure of inputting data, processing it, and producing output data. Indeed, if we drill down into an SoC design we find countless sub-modules following the same structure, down to the individual gate. To verify correct processing at each of these levels, we need to supply a complete set of input data and observe whether the processed output is correct. That task is simple for a single gate and feasible for small RTL modules. But as systems grow more complex, it becomes statistically impossible to cover the complete set of input data and initial conditions, especially once software is running on one or more processors.

A great deal of research and investment has gone into improving the efficiency and coverage of traditional verification methods to cope with this complexity. At the full-SoC level, we need several different verification methods to cover the input combinations, including extreme corner cases.

This last point is important because unpredictable input data can disrupt any SoC, even a carefully designed, critical one. Given the huge number of possible previous states of the SoC, new input data, or an unusual combination or sequence of inputs, may put the SoC into a state that was never verified. That is not necessarily a problem: the SoC may recover without intervention from the rest of the system and without the user ever noticing.

However, unverified states must be avoided in the final chip, so we need methods to test the design as thoroughly as possible. During functional simulation, verification engineers use powerful methods such as constrained-random stimulus and advanced test tools to run many tests, aiming for acceptable coverage. Completeness, however, is still limited by the directions and constraints the verification engineer chooses, and by the time available to run simulations.
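To make "constrained-random stimulus" concrete, here is a minimal sketch of the idea in Python, not any particular verification tool's API: transactions are drawn at random, but only from a legal, constrained space, with a deliberate bias toward one corner of that space:

```python
# Minimal constrained-random stimulus generator (illustrative sketch).
# Constraints: legal burst lengths only, word-aligned addresses, and no
# burst may run past the end of a 64 KiB address space. 20% of draws
# are biased to the very top of the space to hit the corner case.

import random

random.seed(0)  # reproducible stimulus

ADDR_SPACE = 2**16  # 64 KiB

def gen_transaction():
    burst_len = random.choice([1, 4, 8, 16])                  # legal bursts
    addr = random.randrange(0, ADDR_SPACE - 4 * burst_len, 4)  # aligned, in range
    if random.random() < 0.2:                                 # corner-case bias
        addr = ADDR_SPACE - 4 * burst_len
    return {"addr": addr, "burst_len": burst_len}

stimulus = [gen_transaction() for _ in range(1000)]

# Every generated transaction satisfies the constraints.
assert all(t["addr"] % 4 == 0 for t in stimulus)
assert all(t["addr"] + 4 * t["burst_len"] <= ADDR_SPACE for t in stimulus)
```

Real constraint solvers (e.g. in SystemVerilog testbenches) are far more sophisticated, but the principle is the same: randomness for breadth, constraints for legality, and bias for the corners.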

The results, while never exhaustive, can greatly increase our confidence that the design has been exercised over a wide range of input combinations, including corner cases.

To screen for corner case combinations, we can complement our verification efforts by observing the design running under real-world conditions on an FPGA-based prototype. By embedding the SoC design into a prototype, we can run the design at speeds and accuracy comparable to the final silicon, thus “immersing” the design in the final environmental data as it would be run on the final silicon.

A good example is how DS2, of Valencia, Spain, used FPGA-based prototyping to immerse its SoC designs in a real-world environment.

Example: Immersing in Real-World Data

Broadband over Power Line (BPL) technology typically uses undetectable signals to send and receive information over power lines. A typical application of BPL is to transmit high-definition video from a receiver through the power lines to any display in the room, as shown in Figure 4.

Figure 4 Broadband over Power Line (BPL) technology used in a WiFi range extender

At the heart of DS2's BPL design are complex hardware and embedded software algorithms that encode and retrieve high-speed transmission signals that are sent to and from the power lines. These power lines can operate in extremely noisy electrical environments, so a key part of the development effort was validating these algorithms under a variety of real-world conditions.

Javier Jimenez, ASIC Design Manager at DS2, explains how FPGA-based prototyping has been useful to them: “It is important to use robust verification techniques to develop reliable high-speed communications. It requires a lot of experimentation with different channel and noise models, and only FPGA-based prototyping allows us to fully test our algorithms and run the designed embedded software on the prototype. In addition, we can take the prototypes out of the lab and perform extensive field testing. We can place multiple prototypes in real home and office environments, some of which are very harsh electrically. We could not consider using an emulator system for this purpose because it is very expensive and not portable.” This use of FPGA-based prototyping outside the lab was very instructive because we knew that creating a reliable, portable platform was important to our success.

Advantages for lab feasibility experiments

In the initial stages of a project, basic decisions must be made about the chip's topology, performance, power consumption, and on-chip communication structure. Some of these decisions can be explored well with algorithmic or system-level modeling tools, but additional experiments can be run in FPGAs. Is this true FPGA-based prototyping? We are prototyping a concept in FPGAs, but it differs from using algorithmic or mathematical tools in that we need some RTL, which may itself be generated by those high-level tools. Once in the FPGA, early information can be captured to help drive optimization of the algorithm and of the eventual SoC architecture. The advantage of FPGA-based prototyping at this stage of a project is that more accurate models can be used, and these models run very fast and can interact with real-time inputs.

These types of experimental prototypes are worth mentioning because they are another way to use FPGA-based prototyping hardware and tools in full-scale SoC projects, which can provide a higher return on our investment.

Using prototypes outside the lab

One truly unique aspect of using FPGA-based prototyping to validate SoC designs is its ability to operate stand-alone. Because the FPGA can be configured from a Flash EEPROM card or other self-contained media, no host PC is needed to manage it. As a result, the prototype can not only operate independently, but can also be used to test the SoC design in environments completely different from those offered by other modeling techniques (such as simulation, which requires a host).

In extreme cases, the prototype can be completely taken out of the lab and used in real-world conditions, such as by installing the prototype in a moving vehicle to study the design's dependence on changes in external noise, movement, antenna field strength, etc. For example, the author of this article installed a prototype baseband for a mobile phone in a vehicle and made calls on the move over the public GSM network.

Chip architects and other product experts need to interact with early customers to demonstrate important features of their algorithms. FPGA-based prototyping can be a critical advantage at this very early stage of a project, but this approach is slightly different from mainstream SoC prototyping.

Another very common use of FPGA-based prototypes outside the lab is pre-manufacturing demonstrations of new product functionality at trade shows. Let's examine a case study of the use of FPGA-based prototypes outside the lab and at trade shows by the R&D department of the BBC in the UK.

Example: Real-world prototypes

The power of FPGAs operating independently has been demonstrated in a BBC R&D project in the UK to promote DVB-T2, the latest industry-leading open standard that enables high-definition television to be delivered via terrestrial transmitters.

Reasons for using FPGA-based prototypes

Like most international standards, the DVB-T2 specification took several years, and some 30,000 engineering hours from researchers and technical experts around the world, to perfect. Only FPGAs offered the flexibility to keep up with the changing requirements during development. The specification was finalized in March 2008 and published three months later, on 26 June, as the "DVB Blue Book".

Because the BBC was already using FPGA-based prototypes while the specification was being developed, the BBC implementation team, led by Justin Mitchell of BBC R&D, was able to develop a hardware-based modulator and demodulator for DVB-T2. The modulator is based on a Synopsys HAPS-51 card with a Xilinx Virtex-5 FPGA. The card connects to a daughter card designed by BBC R&D. The daughter card provides an ASI interface to receive the incoming transport stream. The incoming transport stream is then passed to the FPGA, encoded to the DVB-T2 standard, and then passed back to the daughter card for direct up-conversion to UHF.

The modulator was used to transmit the industry’s first DVB-T2 standard signal from a live TV transmitter on the same day the specification was released. The demodulator also used HAPS as the basis for another FPGA prototype, completing the end-to-end working chain, which was demonstrated at the 2008 IBC show in Amsterdam, three months after the specification was finalized. This was an extraordinary achievement and helped build confidence for the system to go live in 2009.

BBC R&D has also been involved in other important parts of the DVB-T2 project, including the very successful Plug and Play event in Turin in March 2009. At this event, five different modulators and six different demodulators were demonstrated, working together in various modes. The robust portable construction of the BBC prototype made it a highlight of the Plug and Play event.

Justin Mitchell commented on the FPGA-based prototype: "One of the biggest advantages of FPGAs is the ability to track changes in the specification from the early stages right through to launched transmissions. The ability to adapt the modulator quickly to changes in the specification was very important. It is hard to think of another technology that could have developed the modulator and demodulator so quickly while also being portable enough for them to be used stand-alone in live transmitters and at public exhibitions."

What are the drawbacks of FPGA-based prototyping?

Our goal in this article is to take a fair look at the advantages and limitations of FPGA-based prototyping, so after discussing the advantages above, we will discuss some of the limitations below.

First and foremost, an FPGA prototype is not an RTL simulator. If our goal is to write some RTL and implement it in an FPGA as quickly as possible just to see whether it works, we should reconsider what we would be giving up. A simulator has two basic components, which can be thought of as an engine and a dashboard. The engine's job is to stimulate the model and record the results; the dashboard's job is to help us examine those results. We can run the simulator in small increments and steer it from the dashboard, and we may apply some very complex stimulus: handling all of this is essentially the simulator's job. Can an FPGA-based prototype do the same thing? Of course not.

An FPGA is indeed a faster engine for running an RTL "model", but once the effort of setting up that model is taken into account, the speed advantage is much reduced. In addition, the dashboard portion of a simulator provides complete control over the stimulus and full visibility into the results. We should look for ways to instrument the FPGA to gain insight into the design's behavior, but even the most thoroughly instrumented prototype yields only a fraction of the information readily available from an RTL simulator's dashboard. The simulator is therefore the better environment for iteratively writing and evaluating RTL code, and we should wait until simulation is largely complete and the RTL fairly mature before handing it off to the FPGA prototyping team.
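The "instrument the FPGA" idea can be sketched in miniature: an embedded trace block typically records only the last N samples of a few chosen signals into a ring buffer, giving a window into the design rather than a simulator's full visibility. Here is a toy Python model of such a capture buffer (names and depths invented for illustration):

```python
# Toy model of an embedded trace buffer: a fixed-depth ring buffer that
# keeps only the most recent samples of selected signals, the way an
# on-chip logic-analyzer block gives partial visibility into a running
# prototype.

from collections import deque

class TraceBuffer:
    def __init__(self, depth=8):
        self.samples = deque(maxlen=depth)  # oldest entries fall off

    def capture(self, cycle, signals):
        self.samples.append((cycle, dict(signals)))

    def dump(self):
        return list(self.samples)

tb = TraceBuffer(depth=4)
for cycle in range(10):
    tb.capture(cycle, {"valid": cycle % 2, "data": cycle * 3})

# Only the last 4 cycles survive -- a window, not full visibility.
assert [c for c, _ in tb.dump()] == [6, 7, 8, 9]
```

A simulator, by contrast, can in principle record every signal on every cycle; the trade-off is exactly the engine-versus-dashboard distinction drawn above.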

FPGA-based prototyping is not ESL

Electronic system-level (ESL) or algorithmic tools, such as Synopsys' Innovator or Synphony, allow designs to be captured in SystemC or built from predefined model libraries. We can then not only simulate these designs in the same tools, but also gain insight into their system-level performance, including running software, in order to make hardware-software trade-offs early in the project.

With FPGA-based prototyping we need RTL, so it is not well suited to exploring algorithms or architectures, which are not usually expressed in RTL. The advantage of FPGA prototyping for software is that once the RTL is mature enough to build a hardware platform, the software can run in a more accurate and realistic environment. For blue-sky ideas, a small amount of RTL can be written and run in an FPGA as a feasibility study; this is a rare but important use of FPGA prototyping, and it should not be confused with system-level or algorithmic exploration of the whole SoC.

Continuity is key

Good engineers choose the right tool for the job, but there should always be a way to hand work in progress to the next person in the flow. We should be able to move a design from an ESL simulation to an FPGA-based prototype with minimal extra work. In addition, some ESL tools can implement a design through high-level synthesis, generating RTL for the SoC project as a whole; an FPGA-based prototype can take this RTL and run it on the board with full cycle accuracy. But again, we need to wait until the RTL is relatively stable, which means waiting until the project's hardware-software partitioning and architecture exploration phase is complete.

Why use FPGAs for prototyping?

Today, SoCs are the product of the work of many experts, from algorithm researchers to hardware designers, software engineers, and chip layout teams. As projects continue to evolve, each type of expert has its own needs. The success of an SoC project depends largely on the hardware verification, hardware and software co-verification, and software verification methods used by these experts. FPGA-based prototyping can bring different advantages to each type of expert.

For hardware teams, the speed of the verification tools has a huge impact on verification throughput. In most SoC developments it is necessary to run many simulations and to repeat regression tests as the project matures. Emulators and simulators are the most common platforms for this kind of RTL verification. However, even with TLM-based modeling, some interactions within the RTL, or between the RTL and external stimulus, cannot be recreated in emulation or simulation because the runtimes would be too long. Some teams therefore use FPGA-based prototypes as a higher-performance platform for this hardware testing. For example, we can run an entire operating-system boot in near real time, saving the days of simulation it would take to achieve the same result.

For software development teams, FPGA-based prototyping provides a unique pre-silicon model of the target chip, enabling high-speed and highly accurate software debugging near the end of development.

The critical phase of an SoC project for the whole team is when the hardware and software first come together. The final software will exercise the hardware in ways that no hardware-only verification approach can foresee, and new hardware issues will surface. This is especially prevalent in multicore systems and in systems running simultaneous real-time applications. If hardware-software integration waits until the first devices are manufactured, it is no exaggeration to say that discovering new defects at that point is a most unpleasant experience.

FPGA-based prototyping helps bring software onto a high-speed, cycle-accurate hardware model early. SoC teams often tell us that the biggest advantage of FPGA prototyping is that the system and software are ready to run on the very day the first devices become available.
