Same-City Active-Active: How to Synchronize Data Between Data Centers?

2024.10.15

In the early stages of a business, many companies run out of a single data center to keep costs down. As traffic grows, however, so do the demands on response time and availability, and deploying services across multiple locations becomes necessary to deliver a good user experience. Almost every Internet company passes through this stage as its traffic scales.

A company I previously worked for saw traffic grow for three consecutive years. One day, the data center's external network connection suddenly went down, taking the entire online service offline, and the network provider could not be reached. With no backup site, we spent three days of emergency coordination and re-cabling to restore service. The incident was severe and cost the company tens of millions of yuan. Having learned that lesson, we migrated the service to a larger facility and decided to build dual same-city data centers to improve availability. That way, when one site fails, users can quickly be switched to the healthy one via the HttpDNS interface.

To guarantee that either site can absorb all traffic when the other fails, we provisioned hardware for the two data centers at a 1:1 ratio. But leaving one site in cold standby for long periods wastes those resources, so we wanted both sites to serve traffic simultaneously — same-city active-active. The key problem with an active-active design is how to synchronize the databases between the two data centers.

The core data center solution

Because the database uses a master-slave architecture, there can be only one master across the whole deployment that accepts writes. The master must therefore live in one data center, which then replicates its data to the backup sites. Although the two sites are connected by a dedicated line, that network cannot be assumed to be perfectly stable; if it fails, we must be able to resume replication quickly once connectivity returns.

Some readers may think a distributed database would solve this outright. But reworking an existing service stack and fully migrating to a distributed database takes a long time and costs a great deal, which is impractical for most companies. Our goal here is therefore to adapt the existing system to achieve database synchronization across a same-city active-active deployment.

The core data center solution is a common approach, but it is only suitable when the data centers are no more than about 50 kilometers apart.

In this solution, the master database is deployed in a core data center, and the databases in the other data centers act as slaves. Every write goes to the master in the core site first, and the updated data is then propagated to the slaves in the backup sites through master-slave replication.

Since users usually read from the cache, the backup data center writes updated data directly into its local cache to mask replication lag. At the same time, the client records the timestamp of its last data modification locally (or the current time if it has none). When the client sends a request, the server compares the update time of the cached data against the client's recorded modification time. If the cached copy is older than what the client has seen, the server triggers a synchronization: it first looks for the latest data in the slave, and if the slave does not yet have it, fetches it from the master and refreshes the local cache.
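The freshness check described above can be sketched as follows. This is a minimal illustration, not production code: the `FreshnessChecker` name, the `(value, updated_at)` tuple shape, and the `replica`/`primary` callables are all assumptions made for the sketch.

```python
class FreshnessChecker:
    """Sketch of the server-side check: compare a cache entry's update
    time with the client's last-seen modification time, and fall back to
    the replica, then the primary, when the cache is stale."""

    def __init__(self, cache, replica, primary):
        self.cache = cache        # dict: key -> (value, updated_at)
        self.replica = replica    # callable: key -> (value, updated_at) or None
        self.primary = primary    # callable: key -> (value, updated_at)

    def read(self, key, client_mtime):
        entry = self.cache.get(key)
        if entry is not None and entry[1] >= client_mtime:
            return entry[0]                      # cache is fresh enough
        row = self.replica(key)                  # try the local slave first
        if row is None or row[1] < client_mtime:
            row = self.primary(key)              # last resort: the master
        self.cache[key] = row                    # refresh the local cache
        return row[0]
```

The important property is that the client's timestamp, not the server's clock, decides whether the cached copy is acceptable, so a user who just wrote data never reads a stale copy from the backup site.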

This keeps cross-data-center update lag largely invisible to users, who see their latest data promptly.

In addition, the client calls a scheduling interface that pins a user to the same data center for a period of time, so that users bouncing between sites do not modify the same data in two places at once and create merge conflicts. Overall, the design is simple, but it has some obvious drawbacks.

For example, if the core data center fails, the other sites cannot perform any writes. During the outage, the master/slave configuration of each proxy must be switched by hand to restore service, and after recovery, manual intervention is again needed to re-establish replication. Moreover, because replication always lags somewhat, freshly written data is briefly invisible in the backup site, so the business logic has to handle that window explicitly, which is tedious and adds implementation complexity.

Here are some common network latency figures for reference:

- Server in the same data center: 0.1 ms
- Server in the same city (within 100 km): 1 ms (10× same-data-center)
- Beijing to Shanghai: 38 ms (380× same-data-center)
- Beijing to Guangzhou: 53 ms (530× same-data-center)

Note that these figures are for a single round trip (RTT), while replication between data centers involves multiple requests executed one after another, so their latencies add up. For large-scale updates, master-slave replication lag becomes even more pronounced. The data volume in this active-active design therefore cannot be too large, and business data cannot be updated too frequently. Furthermore, if a service requires strong consistency — every operation must execute remotely on the master — replication delay grows further still.
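To make the stacking effect concrete, here is a back-of-the-envelope estimate using the reference latencies above. The function and the batch size of 100 are illustrative assumptions, not measurements.

```python
def sync_delay_ms(rtt_ms, sequential_ops):
    """Lower bound on total delay when each operation must finish a full
    round trip before the next one starts (no pipelining)."""
    return rtt_ms * sequential_ops

# 100 sequential operations, using the latency table above:
same_city = sync_delay_ms(1, 100)      # 100 ms within the same city
cross_region = sync_delay_ms(38, 100)  # 3800 ms Beijing -> Shanghai
```

Even a modest batch that is tolerable within one city becomes a multi-second stall across regions, which is why this solution constrains both data volume and update frequency.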

Beyond these problems, the dedicated line between the two sites occasionally fails. I once experienced a two-hour outage of the dedicated line, during which we could only keep replication running over the public Internet. Public-network replication was unstable, with latency fluctuating between 10 ms and 500 ms, and master-slave lag grew to over a minute. Fortunately, the user-center service mostly serves long-lived cached data, so the main business flows were not badly affected, though user profile updates became very slow.

Replication between the two sites can also break outright from time to time, so it is worth setting up an alerting mechanism: when replication stops, a notification should go to the incident channel immediately so a DBA can repair it by hand. I have also seen auto-increment IDs collide when users registered in both sites during a replication gap, causing primary-key conflicts. For that reason, I recommend replacing auto-increment IDs with IDs generated by the Snowflake algorithm to reduce the risk of primary-key conflicts.
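A minimal Snowflake-style generator looks roughly like this (a sketch of the classic 41/10/12 bit layout, not a hardened implementation — clock-rollback handling is omitted). Giving each data center a distinct `worker_id` is what makes IDs collision-free even while replication is interrupted.

```python
import threading
import time

class Snowflake:
    """Sketch of a Snowflake-style ID generator:
    41-bit millisecond timestamp | 10-bit worker id | 12-bit sequence.
    Assign each data center a distinct worker_id so concurrently
    generated IDs never collide across sites."""

    EPOCH = 1288834974657  # Twitter's custom epoch; any fixed past instant works

    def __init__(self, worker_id):
        assert 0 <= worker_id < 1024
        self.worker_id = worker_id
        self.sequence = 0
        self.last_ts = -1
        self.lock = threading.Lock()

    def next_id(self):
        with self.lock:
            ts = int(time.time() * 1000)
            if ts == self.last_ts:
                self.sequence = (self.sequence + 1) & 0xFFF
                if self.sequence == 0:          # 4096 IDs in one ms: wait for the next ms
                    while ts <= self.last_ts:
                        ts = int(time.time() * 1000)
            else:
                self.sequence = 0
            self.last_ts = ts
            return ((ts - self.EPOCH) << 22) | (self.worker_id << 12) | self.sequence
```

Because the worker id occupies dedicated bits, two sites can mint IDs independently during a replication gap and still merge cleanly afterwards, unlike colliding auto-increment counters.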

In short, although this centralized core-database solution does achieve same-city active-active, its labor cost is very high: DBAs must maintain replication by hand, recovering a broken replication link is slow and painful, and developers must stay constantly aware of replication lag. For these reasons, I recommend another approach: the database synchronization tool Otter.

Cross-data center synchronization tool: Otter

Otter is a database synchronization tool developed by Alibaba. It can synchronize data quickly across data centers, cities, and even countries. As shown in the figure below, its core mechanism is to use Canal to tail the row-format binlog of the MySQL master and replay the changes, in parallel, into the MySQL instances in other data centers.

Because we want active-active across two same-city data centers, we use Otter here in a dual-master configuration (note: dual-master is not a universal fit and is not recommended for services with strong consistency requirements). The two active sites then replicate to each other in both directions:

As the figure above shows, each data center has its own master and slave databases. The cache can be replicated across data centers or kept master-slave locally, depending on the business. Otter uses Canal to capture changes from the local master and deliver them to an Otter Node; after passing through Otter's S/E/T/L stages (Select, Extract, Transform, Load), the changes are forwarded to the Node in the other data center and applied there, giving bidirectional synchronization between the two sites.
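Otter itself is a Java system; purely as an illustration of the S/E/T/L flow described above, the four stages can be pictured as a small pipeline. The event dictionaries, the `table_map` rename step, and the dict-backed target are assumptions of this sketch, not Otter's actual API, and deletes are simplified to writing `None`.

```python
def select(binlog_events):
    """S: pick up the row-change events emitted by Canal."""
    return [e for e in binlog_events if e["type"] in ("insert", "update", "delete")]

def extract(events):
    """E: pull out the changed rows as (table, primary key, new values)."""
    return [(e["table"], e["pk"], e.get("row")) for e in events]

def transform(rows, table_map):
    """T: map source table names to target table names."""
    return [(table_map.get(t, t), pk, row) for t, pk, row in rows]

def load(rows, target):
    """L: apply the changes to the peer site's store (a dict stands in
    for the remote MySQL here)."""
    for table, pk, row in rows:
        target.setdefault(table, {})[pk] = row
```

The stages are deliberately independent, which is what lets Otter parallelize the Load step across the cross-site link.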

This brings us to how Otter resolves conflicts when both data centers modify the same data at the same time. Otter classifies conflicts into two kinds: row conflicts and field conflicts. A row conflict can be resolved by comparing modification timestamps, or by a back-source query that re-reads the source and overwrites the target database. A field conflict can be resolved by timestamp-based overwrite, or by merging the concurrent modifications: for example, if data center A and data center B each apply a -1 to the same field, the merged result applies -2 in total, giving the data eventual consistency.
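The two resolution strategies above can be sketched in a few lines. This is a simplification for illustration — the function names and the `mtime` field are assumptions, and real systems must also handle timestamp ties and clock skew.

```python
def merge_field(base, delta_a, delta_b):
    """Field-merge strategy: treat the two concurrent writes as deltas
    against the shared base value and sum them, instead of letting one
    overwrite the other (last-writer-wins would lose an update)."""
    return base + delta_a + delta_b

def resolve_row(local_row, remote_row):
    """Row-level strategy: keep whichever version was modified later,
    falling back to the remote copy on a tie (back-source overwrite)."""
    return local_row if local_row["mtime"] > remote_row["mtime"] else remote_row
```

With `merge_field(10, -1, -1)` the stored value ends at 8 — both decrements survive — which is exactly the -2 net effect described above.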

Note, however, that this merging strategy is not suitable for inventory management, where it can lead to overselling. For such requirements, it is better to rely on long-lived caching to avoid the inconsistency caused by concurrent modification.

Summary

Data synchronization between data centers has long been a hard problem in the industry because it is expensive to get right. Without active-active, one site's 1:1 hardware inevitably sits idle, and there is no guarantee a cold-standby site can actually take traffic the moment a failure occurs. Active-active is not cheap to operate either: cross-site replication is frequently interrupted by network latency or data conflicts, which ultimately leaves the two sites holding inconsistent data.

Fortunately, Otter takes multiple measures to protect synchronization and can preserve data integrity in most cases, which greatly lowers the bar for same-city active-active. Even so, we still need to organize business flows so that multiple data centers rarely modify the same data at the same time. HttpDNS-based scheduling can keep a user active in a single data center for a period of time, reducing the chance of conflicts. And for frequently modified, highly contended data, the complete transaction should run locally within one data center, avoiding the synchronization errors that come from concurrent cross-site modification.