Abstract: This paper introduces the application, design, and management of database replication in distributed database systems. It presents the design steps and the factors to be considered, analyzes the conflict problems that arise in design and operation together with their solutions, and describes the maintenance and management of a database replication environment.
Keywords: database, distributed system, replication, management
1 Principle of database replication
Database replication is a widely used technology in distributed database systems. A distributed database system consists of multiple identical or different databases connected through local or wide area networks, with closely related data spread across them. Applications can access both local and remote databases, using a client-server architecture to process information requests. The physical databases are connected by database links, so clients can access them as if they were a single logical database. The databases in such a system cooperate, but each node remains an independent database that can be managed and upgraded by a different department, and the failure of one database does not affect the use of the others. Each independent database in a distributed database system is generally called a site, or database site.
In applications, database replication is used because information needs to be disseminated regularly or information needs to be obtained regularly from a certain place.
The basic principle of database replication can be explained by taking the Oracle database as an example. Oracle implements replication automatically through database background processes; the number of processes responsible for replication and the times at which they are activated are determined by database parameters. These background processes are managed by the system and perform their scheduled work at set intervals, transmitting data from the source site to the target site on a regular basis. Oracle offers two replication methods: basic and advanced. In basic replication, the data generated at the replication site from the master table is read-only; the user cannot modify it. This replicated entity is called a snapshot. A snapshot is refreshed from the master table at the interval defined during design, which can be chosen according to how current the application needs the information to be. Advanced replication, by contrast, allows each replication site to modify the data: modifications made at any site are propagated to the others, so every site sees the data entered or changed elsewhere. Data consistency is therefore an issue that must be fully considered when using advanced replication.
Database replication has a wide range of uses, and different applications have different requirements. The replication methods provided by Oracle meet most needs, but some practical problems, such as replication combined with statistical summarization, can only be solved by writing your own replication software along the lines Oracle provides, combining replication and summarization in one step. Once the basic principles of replication are understood, developing such software is straightforward.
2 Purpose of database replication
Replication is a method of obtaining information from multiple databases in a distributed database system. In a military database system, for example, it lets superior authorities obtain the status of weapons, ammunition, and supplies from subordinate units. It is also useful where sites cannot stay connected continuously, and for distributed use of information, information aggregation, performance improvement, backup, application load balancing, and large-scale data transfer. For example, it is important for the head office of a chain store to set retail prices uniformly and publish them to each store promptly; to achieve this, each store's database holds a copy of the price list, updated from the head office's master table every night. This is a typical example of distributed use of information. Large-scale data transfer is mainly used to import data from other database systems, such as copying information from an online transaction processing system into a decision support system.
Generally, applications access only the local database and limit access to remote data, which reduces network load and improves performance; moreover, if important data has been replicated to other servers, the system remains available when the local server fails. Replication technology can thus improve the performance of database access and strengthen data security.
The use of database replication is conducive to timely obtaining the latest information, reducing the duplication of user work, and maintaining the consistency of the database in a timely manner.
3 Basic steps in database replication design
The basic working method of database replication is to define a query and use the database's job management mechanism to automatically and regularly update data from the master table.
3.1 Basic replication
The design of basic replication mainly includes the following steps:
(1) Establish a replication environment, determine which master tables will be replicated and to which sites' databases;
(2) Establish replication users and replication database links at each replication site;
(3) Establish update logs for all master tables in the replicated database. The update log is the key to supporting fast updates;
(4) Use database links to establish snapshots at each replication site;
(5) Establish snapshot update groups at each replication site, define update times and update intervals, and add snapshots that need to be updated simultaneously to an update group;
(6) Grant users who need to use snapshots the right to query snapshots.
3.1.1 Replication environment
When designing a distributed database system, determine which master tables will be replicated, check each table participating in replication, and confirm that each table has a primary key. Determine which information in each table is replicated to which site's database in what form. Group the master tables to be replicated according to their data relationships, application relationships, and application requirements for real-time data.
It is recommended that the tables to be replicated in different servers are located under the same user name in different databases, and the corresponding users use the same password for easy operation. The structure of tables with data replication relationships using advanced replication should be consistent in each site.
3.1.2 Establish user and database links
Create corresponding users at the master site and at each replication site to own the replicated objects. Create database links from the database holding the master table to each replication site. Although replication is generally performed between identically named users in different databases, the username and password must be specified explicitly when creating a database link. Do not use public database links; in general, create a private link owned by the replication user that makes the connection.
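As a sketch, a private database link owned by the replication user might be created as follows (the user name REPUSER, its password, the TNS alias HQ_DB, and the table name are all assumptions for illustration):

```sql
-- Connect as the replication user so the link is private to it.
CONNECT repuser/repuser_pwd

-- Private database link to the master site, with explicit credentials.
CREATE DATABASE LINK hq_db
  CONNECT TO repuser IDENTIFIED BY repuser_pwd
  USING 'HQ_DB';

-- Verify the link works before building snapshots on it:
SELECT COUNT(*) FROM price_list@hq_db;
```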
3.1.3 Create a log
The log is used to record the input, update, and deletion operations of the master table data. With the log, the system can identify which data is newly generated and needs to be replicated, and which data has been replicated. Therefore, each update only needs to replicate the latest information to reduce the amount of data replication, reduce network traffic, and shorten the replication time.
Create an update log for all master tables in the replicated database. The update log only needs to be created once to support replication from one site to multiple sites.
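A minimal sketch of the log creation, in Oracle 8-era snapshot syntax (the master table name is an assumption). The log records which rows changed, so that a fast refresh ships only the deltas:

```sql
-- One snapshot log per master table; created once, it serves
-- fast refreshes from any number of replication sites.
CREATE SNAPSHOT LOG ON price_list
  WITH PRIMARY KEY;
```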
3.1.4 Create a snapshot
At each replication site, create the required snapshots. The logical structure of a snapshot on the replication side is defined by an SQL query over one or more master tables; this query also determines which part of the master table's information the snapshot contains. For fast refresh, the defining query cannot contain aggregate functions, grouping, joins, or set operations.
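A sketch of a read-only snapshot defined at a replication site (the snapshot name, column names, link name, and daily interval are assumptions):

```sql
-- Snapshot over the master table through the database link.
-- For fast refresh the query must avoid aggregates, GROUP BY,
-- joins, and set operators.
CREATE SNAPSHOT price_list_snap
  REFRESH FAST
  START WITH SYSDATE
  NEXT SYSDATE + 1          -- refresh once a day
  AS SELECT item_id, item_name, retail_price
       FROM price_list@hq_db;
```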
3.1.5 Update
According to the grouping of the master table, create an update group for the snapshot at each replication site, define the update time and update interval as needed, and add the snapshots that need to be updated at the same time to an update group.
It is necessary to set how and when to update the data on the replication side. You can choose full update, fast update, manual update, and automatic update methods. The update time should be set in accordance with the following principles:
·The next update time is in the future;
·The update interval is greater than the time required to complete an update;
·A specific time can be specified for manual update.
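The update group of step (5) can be sketched with the DBMS_REFRESH package; the group name, member snapshot names, and the hourly interval below are assumptions:

```sql
-- Snapshots that must stay mutually consistent go into one
-- refresh group and are refreshed together.
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'repuser.price_grp',
    list      => 'price_list_snap, discount_snap',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');  -- re-evaluated after each run;
                                     -- must exceed the refresh duration
END;
/
```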
When a network problem or system error occurs, the update will be temporarily interrupted. Once the problem is resolved, the update will resume normally.
3.1.6 Permissions
For snapshots generated using the simple replication method, only query permissions need to be granted to users, because snapshots are read-only. For replicas generated using the advanced replication method, different permissions can be set as needed. However, since the greater the permissions, the more operations the user can perform and the greater the possibility of replication conflicts, permissions must be granted with restrictions.
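A sketch of the two cases (account and object names are assumptions):

```sql
-- Basic replication: snapshots are read-only, so query
-- permission is all a user needs.
GRANT SELECT ON price_list_snap TO report_user;

-- Advanced replication: grant DML rights selectively, since
-- wider permissions increase the chance of replication conflicts.
GRANT SELECT, UPDATE ON price_list TO branch_clerk;
```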
3.2 Advanced replication
Advanced replication is divided into multi-site replication and updateable snapshot replication. Its design principles and steps are similar to basic replication and are briefly described as follows. Multi-site replication includes the following steps:
(1) Design and establish a replication environment, determine which sites will serve as advanced replication sites and which master tables will participate in replication;
(2) Use the replication manager to define the sites participating in replication; at each replication site, create the user who owns the replicated objects and the replication database links, create a replication administration user, and configure the data update schedule;
(3) Establish a master replication group. Different entities that need to participate in replication can be added to different groups;
(4) Grant appropriate permissions to users who need to use replication information.
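The master replication group of step (3) can be sketched with Oracle's DBMS_REPCAT package (group, schema, and table names are assumptions):

```sql
BEGIN
  -- Create the group, add a master table to it, generate the
  -- supporting replication objects, then start replication.
  DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'price_grp');
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    sname => 'repuser', oname => 'price_list',
    type  => 'TABLE',   gname => 'price_grp');
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
    sname => 'repuser', oname => 'price_list', type => 'TABLE');
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'price_grp');
END;
/
```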
Updatable snapshot replication includes the following steps:
(1) Design and establish a replication environment, determine which sites will serve as replication masters, which master tables and entities will participate in replication, and determine which sites will be used to set up updatable snapshots;
(2) Establish a snapshot administration user at each replication site, create the user who owns the updatable snapshot objects and the replication database links, and configure the time and interval for data updates;
(3) Establish snapshot logs at the master site;
(4) Establish necessary update groups at the replication point;
(5) Establish snapshot groups, which can include entities such as tables, stored procedures, packages, functions, synonyms, and views;
(6) Grant appropriate permissions to users who need to use replication information.
4 Design of database replication
4.1 Distribution of databases
How to distribute and design databases depends on the geographical distribution, organizational relationships, data volume, funding, and data security of specific engineering applications.
The design of database replication should be as simple as possible, and data relationships should be as simple as possible, as long as they meet the requirements. Minimize the workload of maintenance and management.
Data that the application accesses frequently should reside in the local database as far as possible. If such data is managed by a remote database, consider replicating it to the local database; this simplifies application design, improves data security, reduces network traffic, and improves performance. If the application must access local and remote databases in the same transaction, it must commit or roll back at both simultaneously, and this consistency requirement must be addressed during design. If advanced replication is used, multiple sites may modify or enter the same data at the same time, so data conflicts are the first issue to consider. Conflicts should be avoided by design wherever possible; those that cannot be avoided should be identified during design and handled by configuring the replication environment, so that when a conflict occurs at runtime the database can resolve it promptly without disrupting the replication process.
4.2 Resolution of data conflicts
Data conflicts are the main problem that designers solve when designing database replication for distributed database systems. Data conflicts mainly occur in advanced replication environments.
4.2.1 Unique constraint conflict
A unique constraint conflict occurs when replicated data would violate an integrity constraint, for example when transactions at different sites insert records with the same primary key into a table participating in replication. The replica detects the conflict when applying an insert or update that violates the constraint.
When replicating at multiple sites, the conflict caused by users at different databases on the network modifying or deleting the same record at the same time or inserting records with the same primary key at the same time should be fully considered during the design. The following methods can be used to avoid the conflict:
(1) Minimize the number of sites that need to update data at the same time;
(2) Create sequence generators with different value ranges to generate primary keys to avoid the conflict of duplicate primary keys. However, this method is not suitable for situations with many sites. Another way to avoid duplicate primary keys is to use the same full-range sequence generator at each site and combine it with the site's unique identifier as a composite primary key;
(3) Do not allow modification of primary keys in the application.
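Method (2) above can be sketched as follows (the ranges, table, and column names are assumptions for a two-site system):

```sql
-- Disjoint sequence ranges per site:
-- Site A:
CREATE SEQUENCE order_seq START WITH 1        INCREMENT BY 1 MAXVALUE 9999999;
-- Site B:
CREATE SEQUENCE order_seq START WITH 10000000 INCREMENT BY 1;

-- Alternative for many sites: the same full-range sequence everywhere,
-- combined with the site identifier in a composite primary key.
CREATE TABLE orders (
  site_id  VARCHAR2(8) NOT NULL,
  order_no NUMBER      NOT NULL,
  amount   NUMBER,
  PRIMARY KEY (site_id, order_no));
```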
4.2.2 Delete conflict
If the primary key does not exist when the replica performs a delete or update operation, the replica will detect a delete conflict. If a transaction attempts to delete a record that has been deleted by another user or is being modified by another user, a deletion conflict will occur. Delete conflicts must be avoided in a replication environment. An asynchronous deletion method can be used, that is, instead of using the delete command in the application, the record to be deleted is marked, and then the system periodically and uniformly performs the deletion operation to avoid deletion conflicts.
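The asynchronous-delete idea can be sketched like this (the flag column, table name, and bind variable are assumptions):

```sql
-- The application "deletes" by marking the row, never by DELETE:
UPDATE orders
   SET deleted_flag = 'Y'
 WHERE order_no = :n;

-- One periodic system job performs the real deletes, after the flag
-- has had time to replicate to every site:
DELETE FROM orders WHERE deleted_flag = 'Y';
```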
4.2.3 Update conflicts
When transactions at different sites modify the same record at almost the same time, an update conflict will occur.
Update conflicts should be avoided as much as possible during design, but they cannot be completely eliminated. Conflicts that cannot be avoided during design should be sorted out during design, and the conflict resolution methods provided by the database should be configured for different situations. However, the built-in conflict resolution function of the database cannot resolve deletion conflicts, modifications to primary keys, violations of reference consistency, and other conflicts. These need to be resolved through application design.
4.3 Parameter settings
To enable replication to run automatically and reliably, two initialization parameters, JOB_QUEUE_PROCESSES and JOB_QUEUE_INTERVAL, need to be set appropriately in the parameter file.
They determine the number of replication background processes and the interval at which those processes are started, and their settings must be coordinated with the number and intervals of the replication jobs.
JOB_QUEUE_PROCESSES is generally set to 2 and must be greater than 0; if many jobs are configured in the database and several are refreshed at the same time, it should be increased. The corresponding background processes are numbered 0~9 and A~Z, so up to 36 can run.
JOB_QUEUE_INTERVAL, in seconds, defines the interval at which the server background processes wake up; the default is 60.
If the two settings are not coordinated, the database will fail to refresh data on time, or the background processes may deadlock.
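As a sketch, the corresponding fragment of the parameter file (init.ora) might look like this, assuming the Oracle 8-era job queue parameters; the values are illustrative:

```
job_queue_processes = 2    # number of SNP background processes
job_queue_interval  = 60   # seconds between wake-ups of those processes
```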
5 Management of database replication
The design of database replication is mainly completed by the developer, and its management mainly refers to the daily management work after the system is put into use, which is mainly completed by the database administrator of the user.
5.1 Modification of parameters
During design, the designer sets the parameters after weighing various factors, but as the user's business grows and data volume increases, the database administrator should adjust the JOB_QUEUE_PROCESSES and JOB_QUEUE_INTERVAL parameters in time, before the database runs into problems. If the parameters no longer meet the application's needs, data replication will fail; this can be seen from the snapshots and the database log files. When an error occurs it is recorded in the log and a core dump file is generated. The database can recover its background processes by itself, but the core files must be deleted promptly by the administrator, otherwise their rapid growth will soon fill the entire disk.
5.2 Change the interval between replication updates
The time when replication jobs are executed and the time interval when they are triggered are also factors that administrators often need to adjust.
Database administrators can change the replication interval as needed to meet changes in demand. Each user can only change the execution interval of their own jobs.
The job interval setting must not be less than the time required to perform a data update.
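Changing a job's interval can be sketched with the DBMS_JOB package (the job number 21 and the new half-hour interval are assumptions; only the job's owner may change it):

```sql
BEGIN
  DBMS_JOB.INTERVAL(21, 'SYSDATE + 1/48');  -- run every 30 minutes
END;
/
```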
5.3 Cancel replication jobs
When the replication operation does not need to be continued for some reason, the replication job can be canceled. There are two options: temporary termination and complete deletion. Temporarily terminated replication jobs can be resumed when needed in the future, while completely deleted replication jobs cannot be resumed and can only be rebuilt.
Users can only stop or cancel their own replication jobs.
5.4 Possible problems
Database replication cannot be performed normally in the following cases: lack of background processes, network failure, database routine failure, and errors in the replication program.
If a replication job cannot execute normally for some reason, the database retries it at intervals. If it still fails after a set number of attempts, the job is automatically marked as broken. Once the underlying problem is solved, the broken job can be run manually (after a successful run its status changes back automatically), or its status can be reset so that the database executes it automatically again. If the problem is solved before the retry limit is reached, the job resumes normally without ever being marked broken.
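A sketch of reviving a broken job with the DBMS_JOB package (the job number is an assumption; the USER_JOBS view shows the actual numbers):

```sql
-- Find the job and its status:
SELECT job, broken, failures, what FROM user_jobs;

-- Either run it once by hand (success clears the broken flag) ...
BEGIN
  DBMS_JOB.RUN(21);
END;
/

-- ... or clear the flag and let the job queue resume automatic execution:
BEGIN
  DBMS_JOB.BROKEN(21, FALSE);
END;
/
```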
If a replica does not receive its data update at the scheduled time, the background process may be deadlocked. There are three remedies:
(1) manually execute the replication job to copy the data;
(2) if the updated data is not needed urgently, simply wait: Oracle's system monitor process will kill the dead replication process and start a new background process, after which replication resumes automatically;
(3) close and reopen the database, after which replication resumes automatically.
Each user can only manually execute his own job.
5.5 Problems with expanding sites
If a new snapshot must be created in an established replication environment to meet new requirements, the operation will fail when the following three conditions hold at the same time:
(1) The new snapshot must be updated using the fast update method;
(2) The new snapshot and an existing snapshot are based on the same master table;
(3) The existing snapshot can be quickly updated when a new snapshot is created.
The solution to this problem is to use the full update method for the new snapshot, or to use the following method to avoid the problem: before creating a new snapshot, create a virtual snapshot based on the same master table to avoid data updates when the snapshot is created, then create the required snapshot using the fast update method, and finally delete the virtual snapshot.
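The workaround can be sketched as follows (all object names are assumptions; the throwaway snapshot shields the creation of the real one from refreshes of the existing snapshots on the same master table):

```sql
-- 1. Throwaway snapshot on the same master table:
CREATE SNAPSHOT dummy_snap
  REFRESH FAST
  AS SELECT item_id FROM price_list@hq_db;

-- 2. The snapshot actually required, with fast refresh:
CREATE SNAPSHOT price_list_snap2
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1
  AS SELECT item_id, retail_price FROM price_list@hq_db;

-- 3. Remove the throwaway snapshot:
DROP SNAPSHOT dummy_snap;
```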
6 Conclusion
Database replication is a very flexible technology. Proper application of this technology will bring unexpected benefits to users and designers. However, the application of replication often brings a lot of trouble to the management of the database system. Therefore, when designing a database replication environment, it should be as simple and practical as possible to improve the availability of the system. At the same time, the database administrator of a distributed database system must have comprehensive database knowledge to ensure that the system works normally and stably.