1 Introduction to MultiGen Creator and Vega Software
1.1 Creator modeling software
MultiGen Creator is an interactive tool developed by MultiGen-Paradigm for creating and editing visual simulation databases. As the leading real-time 3D database generation system, it provides a complete interactive real-time 3D modeling environment, and a wide range of options extend its features and functions. It is used to generate highly optimized, high-fidelity real-time 3D content, and can create, edit, and view visual databases for complex scenes such as battlefield simulation, urban simulation, and computational visualization. This technology is supported by powerful integrated options, including automated large-area terrain and 3D cultural feature generators, road generators, and more.
Within real-time constraints, MultiGen Creator can generate large-area scenes with good realism oriented toward simulation. It provides modeling tools for more than 25 different types of image generators, and its OpenFlight format has become the most popular format in the real-time 3D field and the de facto standard of the simulation industry.
1.2 Vega Real-time Simulation
Vega is a software environment from MultiGen-Paradigm used in the fields of real-time visual simulation, sound simulation, and virtual reality. It combines advanced simulation functions with easy-to-use tools, offering a simple yet flexible architecture for creating, editing, and running high-performance real-time applications. Vega uses the Lynx interface to define and preview applications, and it contains all the APIs necessary to build an application; simple applications can be implemented with Lynx alone. Lynx is a point-and-click graphical environment based on X/Motif technology. With only a mouse, users can drive objects in the scene and adjust controls in real time, configuring an application without writing any source code. Vega also includes a complete C-language application interface, giving software developers maximum control and flexibility.
2 Building a virtual training ground
Taking the 3D reconstruction of a comprehensive training ground as an example, virtual reality technology is further discussed. The system development process is shown in Figure 1.
2.1 Acquisition and processing of modeling data
For the virtual comprehensive training ground scene, the modeling data to be acquired mainly comprises the parameters of the various pieces of equipment and facilities in the training ground, information on the surrounding buildings, the layout of the entire training ground, and texture information for the environmental landscape.
The acquired information, mainly photographic texture data, must then be processed. Photos are first taken with a digital camera and then cropped, corrected, and scaled with Photoshop or the texture-processing tools provided by Creator. Although Creator places few restrictions on texture format and size, Vega has relatively strict requirements on texture data, and textures that do not meet them cannot be displayed correctly, so the textures used in Creator must be edited accordingly. Since the terrain of the comprehensive training ground is relatively flat, the Delaunay algorithm can be used to convert the terrain data in DED format.
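One common requirement of real-time image generators is that texture dimensions be powers of two, which is one reason textures must be edited before use. As a minimal sketch of that normalization step (illustrative only, not a Creator or Vega API; the helper names and the 1024-pixel cap are hypothetical):

```python
def next_pow2(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p <<= 1
    return p

def normalize_texture_size(width, height, max_size=1024):
    """Round texture dimensions up to powers of two and clamp
    them to a hypothetical hardware limit."""
    w = min(next_pow2(width), max_size)
    h = min(next_pow2(height), max_size)
    return w, h
```

For example, a 600 x 450 photo crop would be resampled to 1024 x 512 before being applied as a texture.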
Research on the design of virtual training ground based on MultiGen Creator and Vega
2.2 Three-dimensional modeling
According to the acquired modeling data, Creator can be used to build a 3D scene model, including various equipment and facilities of the training ground, nearby buildings, flowers, trees, roads, etc. The hierarchical structure of the training ground scene model is shown in Figure 2. The scene model of the comprehensive training ground is mainly divided into static entities and dynamic entities.
(1) Static entity modeling. Static entities mainly include fixed training equipment and facilities, roads, trees, and buildings. Because the training ground contains many facilities, the amount of data to be modeled is large, and it is impractical to model everything directly within the full scene. Each piece of training equipment can therefore be modeled separately, and all the models are finally integrated into the overall training ground scene.
For a single fixed training equipment, you can decompose the structure of the training equipment, use Creator's various geometric tools to construct it, and then combine it to form a complete single entity.
Since there are relatively few buildings around the training ground and their shapes are relatively regular, it is easy to perform geometric modeling and apply textures. The billboard in Creator allows the model to always face the viewpoint during the simulation. It is usually used to create symmetrical entities such as street lamps or trees in the scene. The method is to apply a transparent texture representing the object to the surface of the model. Then, at runtime, the model will automatically rotate and always face the viewpoint. We can use this method to model models such as trees and street lamps in the training ground, thereby reducing the number of polygons in the model and improving simulation efficiency.
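The billboard behavior described above amounts to rotating the textured quad about its vertical axis each frame so that it faces the eye point. A minimal sketch of that computation (illustrative only, not Creator's internal implementation):

```python
import math

def billboard_yaw(model_pos, eye_pos):
    """Yaw angle (radians) that rotates a billboard quad about the
    vertical axis so its face points toward the viewpoint.
    Positions are (x, y, z) tuples; only the horizontal plane matters."""
    dx = eye_pos[0] - model_pos[0]
    dy = eye_pos[1] - model_pos[1]
    return math.atan2(dy, dx)
```

Because only one textured polygon is drawn per tree or street lamp, the polygon count stays low no matter where the observer moves.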
To meet the requirements of real-time roaming, the sky can be modeled with a cylindrical or hemispherical model, using texture mapping to render the sky background.
(2) Dynamic entity modeling. The red flag in the training ground is a dynamic entity, and modeling it in Creator relies mainly on texture switching. Taking the flag fluttering in the wind as an example: first process the flag texture in Photoshop and prepare several images to be displayed in a loop. Create several child nodes under the root node, create a face under each child node, and apply one of the processed flag textures to each face. Set the time at which each image is displayed in turn, and finally hide all nodes except the first child node. In this way the fluttering red flag is completed.
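The frame-switching scheme above amounts to a time-driven index into the prepared texture sequence. A minimal sketch, assuming a hypothetical fixed per-frame display time:

```python
def flag_frame(t, n_frames, frame_time=0.25):
    """Index of the flag texture to show at simulation time t (seconds),
    cycling through n_frames images every frame_time seconds."""
    return int(t / frame_time) % n_frames
```

At runtime, only the child node whose index matches the current frame is made visible; the rest stay hidden, giving the fluttering effect.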
2.3 Optimizing the model database
The ultimate goal of using Creator to model is to use it in simulation programs. Under the premise of not affecting the realism of the scene, in order to maintain the smoothness of program operation and improve the real-time performance of system operation, the model should be optimized as much as possible in the later stage of model making. In the process of building a virtual training scene, the following optimization methods are mainly used:
(1) Delete unnecessary polygons. The polygon count can be reduced by deleting polygons that are never visible from within the view frustum, such as polygons inside the model, detail polygons hidden behind other polygons, and polygons on the underside of the model.
(2) Use multi-level detail models. Without affecting the realism of the model, the number of LOD layers, the switching distance between each layer, the size and fineness of the texture, etc. can be reasonably set. Although this will increase the workload, it will save system resources and improve the system operation speed.
(3) Process models with instancing. Instancing is typically applied to object entities that appear repeatedly in a 3D scene database, such as the trees along both sides of a road. Its main advantages are significant savings in disk space and greater convenience when creating, editing, and modifying models.
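The LOD mechanism in item (2) boils down to choosing one child model according to the eye-to-model distance and each level's switch-in range. A minimal sketch of that selection logic (illustrative only; the switch distances are hypothetical):

```python
def select_lod(distance, switch_in_distances):
    """Return the index of the LOD child whose switch-in range contains
    the eye-to-model distance. switch_in_distances is sorted ascending,
    e.g. [50, 200, 1000]: LOD 0 (finest) within 50 m, LOD 1 within
    200 m, LOD 2 within 1000 m; beyond that the model is culled."""
    for i, d in enumerate(switch_in_distances):
        if distance <= d:
            return i
    return None  # beyond the last range: not drawn
```

Tuning these distances trades a small amount of visual fidelity at range for a large reduction in rendered polygons.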
3 Scene Tour
After all the training ground scene models are built, they must be driven further to realize real-time scene roaming. First, the Lynx parameters are set, and functional modules such as observers, motion models, and environmental effects are configured. Specifically: set all the object models an observer can see in Scenes; define observers in Observers; assign motion models to observers in Motion Models; and set sky and cloud effects in Environments and Environment Effects. Then configure two types of collision detection: collision with the ground, so that the observer's viewpoint height always follows the terrain, and collision with training facilities and buildings, to prevent the observer from passing through walls.
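The two collision checks can be illustrated with a simple sketch: ground-following keeps the eye a fixed height above the terrain sample under the viewpoint, while a bounding-box test rejects moves that would enter a building. This is illustrative only, not Vega's actual collision API; the eye height and box extents are hypothetical.

```python
def ground_follow(terrain_height, eye_height=1.8):
    """Z coordinate that keeps the observer eye_height (m) above
    the terrain sampled beneath the viewpoint."""
    return terrain_height + eye_height

def blocked_by_box(new_xy, box_min, box_max):
    """True if a proposed horizontal position (x, y) lies inside a
    building's bounding box, in which case the move is rejected
    so the observer cannot pass through the wall."""
    x, y = new_xy
    return box_min[0] <= x <= box_max[0] and box_min[1] <= y <= box_max[1]
```

Each frame, the motion model first tests the proposed position against the buildings, then clamps the accepted position to the terrain.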
Vega supports both keyboard-controlled roaming and automatic roaming along a fixed path; this article mainly uses keyboard control. Specifically, select the Drive motion model in Vega's Motion Models panel. Finally, the executable file is generated by building the application against the Vega function libraries in VC.
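A drive-style motion model like the one selected above advances the viewpoint along its current heading under keyboard speed and turn inputs. A minimal sketch of one update step (illustrative only, not Vega's Drive implementation; the parameter names are hypothetical):

```python
import math

def drive_step(x, y, heading, speed, turn_rate, dt):
    """One frame of a simple drive motion model: move forward along
    the current heading (radians), then apply the turn input.
    speed is m/s, turn_rate rad/s, dt the frame time in seconds."""
    x += speed * dt * math.cos(heading)
    y += speed * dt * math.sin(heading)
    heading += turn_rate * dt
    return x, y, heading
```

Keyboard handling simply maps the arrow keys to the speed and turn_rate inputs each frame.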
4 Conclusion
This paper applies virtual reality technology to the 3D reconstruction of a virtual comprehensive training ground scene: Creator is used to build realistic models of the training facilities, environment, and landforms, the models are optimized, and the roaming function is then developed with Vega, basically completing the reconstruction of the virtual training ground. Users can interact with it in a natural, 3D visual way with a genuine sense of immersion. The realization of the virtual comprehensive training ground provides a feasible method for establishing other types of training scenes and has definite application value.