Research on the design of virtual training ground based on MultiGen Creator and Vega


1 Introduction to MultiGen Creator and Vega Software

1.1 Creator modeling software

MultiGen Creator is an interactive tool developed by MultiGen-Paradigm for creating and editing visualization system databases, and it is a leading real-time 3D database generation system. It provides a complete interactive real-time 3D modeling environment, and a wide range of options extend its features and functions. It is used to generate highly optimized, high-precision real-time 3D content and can create, edit and view visual databases for complex scenes such as battlefield simulation, urban simulation and computational visualization. This technology is backed by powerful integrated options, including automated large-area terrain and 3D cultural landscape generators, road generators, and more.

While meeting real-time requirements, MultiGen Creator can generate large-area scenes that are realistic and well suited to simulation. It provides modeling tools for more than 25 different types of image generators, and its OpenFlight format has become the most popular image format in the real-time 3D field and the de facto standard in the simulation industry.

1.2 Vega Real-time Simulation

Vega is MultiGen-Paradigm's software environment for real-time visual simulation, sound simulation and virtual reality. It combines advanced simulation functionality with easy-to-use tools in a simple yet flexible architecture for creating, editing and running high-performance real-time applications. Vega uses the LynX interface to define and preview applications and contains all the APIs necessary to build an application, although simple applications can be implemented with LynX alone. LynX is a point-and-click graphical environment based on X/Motif technology: using only the mouse, users can drive objects in the scene and real-time controls in animations, and configure an application without writing any source code. Vega also includes a complete C-language application programming interface, giving software developers maximum control and flexibility.

2 Building a virtual training ground

Taking the 3D reconstruction of a comprehensive training ground as an example, the application of virtual reality technology is discussed further below. The system development process is shown in Figure 1.

2.1 Acquisition and processing of modeling data

For the virtual comprehensive training ground scene, the modeling data to be acquired mainly consists of the parameters of the various items of equipment and facilities in the training ground, information about the surrounding buildings, the layout of the entire training ground, and texture information for the environmental landscape.

The acquired information, mainly photographic texture data, then has to be processed. Photographs are first taken with a digital camera and then cropped, corrected and scaled with Photoshop or the texture-processing tools provided by Creator. Although Creator places few restrictions on texture format and size, Vega has relatively strict requirements on texture data, and textures that do not meet these requirements cannot be displayed correctly, so the textures used in Creator must be edited accordingly. Since the terrain of the comprehensive training ground is relatively flat, the Delaunay algorithm can be used to convert the terrain data in DED format.
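As an illustration of the kind of constraint involved, many real-time image generators of this period require texture dimensions to be powers of two; the sketch below is a generic C check written under that assumption, not a statement of Vega's documented texture specification.

```c
/* Minimal sketch, assuming the common power-of-two constraint on texture
 * dimensions in real-time image generators; an illustration only, not
 * Vega's documented texture specification. */
#include <stdio.h>

static int is_power_of_two(unsigned n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

/* Returns 1 if a width x height texture satisfies the assumed constraint. */
static int texture_size_ok(unsigned width, unsigned height)
{
    return is_power_of_two(width) && is_power_of_two(height);
}

int main(void)
{
    printf("256x256: %s\n", texture_size_ok(256, 256) ? "ok" : "needs editing");
    printf("300x200: %s\n", texture_size_ok(300, 200) ? "ok" : "needs editing");
    return 0;
}
```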


2.2 Three-dimensional modeling

According to the acquired modeling data, Creator can be used to build a 3D scene model, including various equipment and facilities of the training ground, nearby buildings, flowers, trees, roads, etc. The hierarchical structure of the training ground scene model is shown in Figure 2. The scene model of the comprehensive training ground is mainly divided into static entities and dynamic entities.

(1) Static entity modeling. Static entities mainly include fixed training equipment and facilities, roads, trees, buildings, and so on. Because the training ground contains many training facilities, the amount of data to be modeled is large, and it is impractical to build every model directly inside the overall training ground scene. Each item of training equipment is therefore modeled separately, and all of the models are finally integrated into the complete training ground scene.

For a single item of fixed training equipment, its structure can be decomposed, each part constructed with Creator's various geometric tools, and the parts then combined into a complete entity.

Since there are relatively few buildings around the training ground and their shapes are fairly regular, geometric modeling and texture mapping are straightforward for them. The billboard feature in Creator keeps a model facing the viewpoint throughout the simulation and is usually used for roughly symmetrical entities such as street lamps and trees: a transparent texture representing the object is applied to a single face, and at runtime the face automatically rotates so that it always faces the viewpoint. Trees, street lamps and similar objects in the training ground can be modeled this way, which reduces the polygon count of the models and improves simulation efficiency.
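As an illustration of what the billboard flag does at run time, the sketch below computes the yaw-only rotation that keeps a textured face pointed at the eye. It is a generic C example, not Creator or Vega API code; the Z-up convention and the positions are assumptions.

```c
#include <math.h>
#include <stdio.h>

#define DEG_PER_RAD (180.0 / 3.14159265358979323846)

/* Yaw (about the vertical axis) from the billboarded object toward the eye. */
static double billboard_yaw_deg(double obj_x, double obj_y,
                                double eye_x, double eye_y)
{
    return atan2(eye_y - obj_y, eye_x - obj_x) * DEG_PER_RAD;
}

int main(void)
{
    /* A tree at the origin, the observer 10 m east and 10 m north of it. */
    printf("face heading: %.1f degrees\n", billboard_yaw_deg(0, 0, 10, 10));
    return 0;
}
```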

To meet the requirements of real-time roaming, the sky can be modeled as a cylindrical or hemispherical dome, with texture mapping used to represent the sky background.

(2) Dynamic entity modeling. The red flag in the training ground is a dynamic entity, and modeling it in Creator is achieved mainly by switching textures. Taking a red flag fluttering in the wind as an example: first, Photoshop is used to prepare several flag textures to be displayed in a loop. Several child nodes are then created under the root node, a face is created under each child node, and one of the processed flag textures is applied to each face. The time for which each picture is displayed in turn is set, and finally all nodes except the first child node are hidden. In this way, a red flag fluttering in the wind is obtained; the underlying frame-cycling logic is sketched below.
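The texture-switching animation amounts to cycling an index through the prepared frames at a fixed period. The sketch below shows that logic as generic C (in Creator the equivalent is configured on the nodes rather than coded); the frame count and period are assumed example values.

```c
/* Minimal sketch (not the Creator node API): the frame-cycling logic behind
 * the flag animation. NUM_FRAMES and FRAME_PERIOD are hypothetical values. */
#include <stdio.h>

#define NUM_FRAMES   4      /* number of flag textures prepared in Photoshop */
#define FRAME_PERIOD 0.25   /* seconds each texture stays visible            */

/* Returns which child node (texture frame) should be visible at time t. */
static int flag_frame(double elapsed_seconds)
{
    return (int)(elapsed_seconds / FRAME_PERIOD) % NUM_FRAMES;
}

int main(void)
{
    for (double t = 0.0; t < 2.0; t += 0.5)
        printf("t=%.1fs -> show frame %d\n", t, flag_frame(t));
    return 0;
}
```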

2.3 Optimizing the model database

The ultimate purpose of modeling with Creator is to use the models in simulation programs. To keep the program running smoothly and improve the real-time performance of the system without compromising scene realism, the models should be optimized as much as possible in the later stage of model making. The following optimization methods were mainly used when building the virtual training scene:

(1) Delete unnecessary polygons. The polygon count can be reduced by deleting polygons that are never visible within the viewing frustum, including polygons inside a model, detail polygons hidden behind other polygons, and polygons on the underside of a model.

(2) Use multiple levels of detail (LOD). Without affecting the realism of a model, the number of LOD levels, the switching distance between levels, and the size and fineness of the textures can be set appropriately (see the distance-switching sketch after this list). Although this increases the modeling workload, it saves system resources and improves the running speed of the system.

(3) Process models by instancing. Instancing is typically applied to object entities that appear repeatedly in the 3D scene database, such as the trees along both sides of a road. Its main advantages are that it significantly saves disk space and makes models easy to create, edit and modify.
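As an illustration of how an LOD node behaves at run time, the following generic C sketch selects a detail level from the eye-to-model distance; the level names and switch-in ranges are hypothetical example values, not data from the training ground database.

```c
/* Minimal sketch (not the Creator LOD node format): distance-based switching
 * between detail levels, as an LOD node does at runtime. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    const char *model;      /* which version of the model to draw              */
    double      switch_in;  /* draw this level when distance <= range (meters) */
} LodLevel;

static const LodLevel levels[] = {
    { "equipment_high.flt",   50.0 },   /* full detail near the viewpoint    */
    { "equipment_medium.flt", 200.0 },  /* reduced polygon count             */
    { "equipment_low.flt",    1000.0 }, /* coarse silhouette in the distance */
};

/* Pick the first level whose range covers the eye-to-model distance. */
static const char *select_lod(double distance)
{
    for (size_t i = 0; i < sizeof levels / sizeof levels[0]; ++i)
        if (distance <= levels[i].switch_in)
            return levels[i].model;
    return NULL;   /* beyond the last range: cull the model entirely */
}

int main(void)
{
    printf("at  30 m: %s\n", select_lod(30.0));
    printf("at 500 m: %s\n", select_lod(500.0));
    return 0;
}
```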

3 Scene Tour

After all the training ground scene models are built, they need to be driven further to realize real-time roaming of the scene. First, the LynX parameters are set, and functional modules such as observers, motion models and environmental effects are configured initially. Specifically: all object models that an observer can see are set in Scenes; the observer is set in Observers; a motion model is assigned to the observer in Motion Models; and sky and cloud effects are set in Environments and Environment Effects. Two types of collision detection are then set up in the system: collision detection against the ground, so that the observer's viewpoint height always follows changes in the terrain (sketched below), and collision detection against training facilities and buildings, to prevent the observer from passing through walls.
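The ground collision detection amounts to re-evaluating the terrain height under the observer each frame and keeping the viewpoint a fixed eye height above it. The sketch below illustrates that logic in generic C; the terrain query and the eye-height value are hypothetical stand-ins, not the Vega isector API configured in LynX.

```c
/* Minimal sketch (not the Vega isector API): the effect of ground collision
 * detection, i.e. keeping the observer's eye height tied to the terrain. */
#include <stdio.h>

typedef struct { double x, y, z; } Position;

#define EYE_HEIGHT 1.8   /* meters the viewpoint stays above the ground */

/* Hypothetical terrain query: a gentle slope standing in for real elevation data. */
static double terrain_height_at(double x, double y)
{
    (void)y;
    return 0.05 * x;
}

/* Move the observer's viewpoint so it follows the terrain surface. */
static void follow_terrain(Position *observer)
{
    double ground = terrain_height_at(observer->x, observer->y);
    observer->z = ground + EYE_HEIGHT;
}

int main(void)
{
    Position obs = { 40.0, 10.0, 0.0 };
    follow_terrain(&obs);
    printf("viewpoint height: %.2f m\n", obs.z);   /* 0.05*40 + 1.8 = 3.80 */
    return 0;
}
```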

Vega supports both keyboard-controlled roaming and automatic roaming along a fixed path; this article mainly uses keyboard control, which is obtained by selecting the Drive motion model in Vega's Motion Models panel. Finally, an executable file is generated by calling the Vega function library from a VC application, following the skeleton sketched below.
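A minimal sketch of such a Vega application skeleton is shown below, assuming the classic Vega C API calls (vgInitSys, vgDefineSys, vgConfigSys, vgSyncFrame, vgFrame); the ADF file name is a hypothetical placeholder for the configuration saved from LynX.

```c
#include <vg.h>

int main(void)
{
    vgInitSys();                     /* initialize the Vega system                */
    vgDefineSys("trainground.adf");  /* load the application definition from LynX */
    vgConfigSys();                   /* configure Vega with those settings        */

    for (;;) {                       /* real-time roaming loop                    */
        vgSyncFrame();               /* wait for the next frame boundary          */
        vgFrame();                   /* update and draw one frame                 */
    }
    return 0;
}
```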

4 Conclusion

This paper applies virtual reality technology to the 3D reconstruction of a virtual comprehensive training ground scene. Creator is used to build realistic models of the training facilities, environment and terrain, the models are optimized, and the roaming function is then developed with the Vega toolset, so that the reconstruction of the virtual training ground is essentially complete. Users can interact with the scene in a natural, three-dimensional visual way with a real sense of immersion. The realization of the virtual comprehensive training ground provides a feasible approach for establishing other forms of training scenes and has definite application value.
