Zhang Xianguo of DYXnet: Breaking New Ground with AI-Native Hyper-Interconnection

In today's era of digital intelligence, AI and large-model applications have become key technologies for driving enterprise innovation and sustainable development. As the technology matures, the private deployment of large models is becoming an increasingly clear trend. On the one hand, customizing models for specific business scenarios better meets enterprises' actual needs and drives business growth. On the other hand, deploying large models privately keeps data in-house, strengthening data security and privacy protection and reducing the risk of leakage.

However, as enterprises balance traditional business with the deployment of privatized large-model services, they face challenges in computing power, networking, data security, and other areas, forcing them to seek new technologies and solutions. As an "AI + cloud-network-security" service provider, DYXnet is keenly aware of this trend and is committed to building an AI-native hyper-interconnection architecture to meet enterprises' needs throughout their digital and intelligent transformation.

In a recent interview with 51CTO, Zhang Xianguo, director of DYXnet's intelligent computing network technology department, noted that in the past, data simply sat in business databases and was retrieved on demand, and much unstructured data never realized its value. Now, with the power of AI, that value can be unlocked. For enterprises, the key is to maximize the value of private data while keeping it protected, rather than letting it become merely a part of some larger platform; this, he argued, is the real value of enterprise transformation going forward.

With the era of large models now fully underway, DYXnet has responded to enterprises' higher requirements for security, AI, and computing power by building an AI-native hyper-interconnection infrastructure through technological innovation and ecosystem cooperation. The architecture deeply integrates computing power and the network, providing enterprises with a trusted computing space that combines AI computing power, private-domain data, models, and privacy-preserving computation, and supporting their innovation in AI-native applications.

Responding to a changing landscape: evolving from network empowerment to an "AI + cloud-network-security" service provider

In a complex and fiercely competitive market, innovation is the foundation on which enterprises win competitive advantage and achieve sustainable development. Those familiar with DYXnet know that the company, founded in 1999, has adapted to every shift in the industry while maintaining a high-growth trajectory. From its origins as a carrier-neutral network service provider to its position today as a leading global "AI + cloud-network-security" service provider, DYXnet has stayed at the forefront of innovation, continually surpassing itself and helping drive the industry forward.

Zhang Xianguo told 51CTO that over 25 years of development, DYXnet has always adhered to the principle of "service innovation." From providing MPLS VPN services to launching the industry's first SD-WAN network, the company has continuously upgraded its service quality, supported close collaboration across enterprises' global digital application scenarios, and built core network service capabilities covering more than 700 cities worldwide.

In the era of AI and large models, more and more enterprises have begun exploring large-model applications in their business scenarios, and the private deployment of large-model applications has become a core technology for driving rapid enterprise growth.

Zhang Xianguo said that as large models have matured, more and more enterprises have embarked on digital and intelligent upgrades, and the private deployment of large models is becoming an increasingly clear trend. Only by integrating private data, domain expertise, and industry experience into large models to create AI-native business forms, generating content that matches enterprises' actual and professional needs, can enterprises obtain more targeted decision support and solutions and thereby gain a stronger competitive advantage.

During the interview, Zhang Xianguo used the financial industry as an example to illustrate the role large models play in driving enterprise innovation. Financial institutions, he said, can train dedicated financial models for business scenarios such as risk assessment and investment decision-making, enabling them to handle complex business problems more accurately. However, training dedicated large models raises the bar for data security, networking, and computing power: it requires integrating computing power, large models, and private data, while ensuring the new architecture provides strong privacy and security guarantees.

Zhang Xianguo then went on to detail the many challenges enterprises face when deploying privatized large-model applications.

First is the challenge posed by the explosive growth in demand for computing power. In the large-model training stage, the sheer number of parameters and the size of training datasets impose extremely high computing requirements. In the financial industry, for example, the data processed when training a dedicated financial model can reach tens of terabytes or even petabytes. Yet in the current market, GPU resources are tight and computing costs are high, so using computing resources effectively has become a major problem for enterprises.

Second is the dual pressure of network bandwidth and network security. AI-native business intelligence is built on enterprises' private data and applications, and the volume of data to be transmitted is enormous. Internal enterprise networks must support high-speed interconnection from the data side to the computing side, so demand for network bandwidth is surging. At the same time, as cyber-attack techniques keep escalating, enterprises face unprecedented security pressure. Ensuring data security while maintaining transmission efficiency has become an urgent problem to solve.

Finally, there is the challenge of data privacy protection. Data is a core enterprise asset, yet data leaks and privacy violations have occurred frequently in recent years, making enterprises increasingly concerned about privacy protection. Ensuring data privacy and security during large-model training has become an issue enterprises must confront.

Zhang Xianguo emphasized that, driven by this new trend, DYXnet is committed to bringing together innovative technologies and AI computing resources, building an AI-native hyper-interconnection architecture, and supporting enterprises' comprehensive upgrade to AI-native operations.

Riding the trend: empowering enterprise transformation with an AI-native hyper-interconnection architecture

Based on deep insight into enterprise needs, and drawing on its technical accumulation and industry experience, DYXnet has built an AI-native hyper-interconnection architecture and launched a series of innovative products and solutions to help enterprises address the challenges of computing power, networking, and security.

According to Zhang Xianguo, DYXnet's AI-native hyper-interconnection architecture provides high-speed, secure direct channels to computing power between campuses and buildings, addressing tight computing resources and surging data-transmission demand, and helping enterprises achieve a higher level of data security while training large models on private data.

In terms of AI computing power, DYXnet has built a service foundation backed by multiple sources of computing power by aggregating diverse AI computing resources.

At the network level, DYXnet has built an innovative AI-native hyper-interconnection architecture based on RDMA (Remote Direct Memory Access) and other technologies, and uses dynamic network slicing to switch networks on demand. When a customer needs large amounts of computing power, the architecture can quickly allocate a high-bandwidth, secure, private direct-connect network and provision more than 100 Gbps of bandwidth, ensuring fast and efficient interconnection between massive data and computing power, and automatically releasing the resources once the task is complete.
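To make that allocate-use-release lifecycle concrete, here is a minimal, hypothetical sketch in Python. The SliceController and NetworkSlice names are illustrative assumptions rather than DYXnet's actual API; the point is only that a high-bandwidth private slice is requested for a job and automatically returned to the pool when the job ends.

```python
# A minimal sketch of the on-demand slice lifecycle described above.
# SliceController, NetworkSlice, allocate, and release are hypothetical
# names for illustration, not DYXnet's actual API.
from contextlib import contextmanager
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    slice_id: str
    bandwidth_gbps: int
    encrypted: bool

class SliceController:
    """Toy controller that hands out private, high-bandwidth slices on demand."""
    def __init__(self):
        self._next_id = 0
        self._active: dict[str, NetworkSlice] = {}

    def allocate(self, bandwidth_gbps: int = 100, encrypted: bool = True) -> NetworkSlice:
        self._next_id += 1
        s = NetworkSlice(f"slice-{self._next_id}", bandwidth_gbps, encrypted)
        self._active[s.slice_id] = s
        return s

    def release(self, s: NetworkSlice) -> None:
        # Resources go back to the pool once the training/transfer job ends.
        self._active.pop(s.slice_id, None)

@contextmanager
def on_demand_slice(controller: SliceController, bandwidth_gbps: int = 100):
    s = controller.allocate(bandwidth_gbps=bandwidth_gbps)
    try:
        yield s
    finally:
        controller.release(s)  # automatic release after the task completes

if __name__ == "__main__":
    ctrl = SliceController()
    with on_demand_slice(ctrl, bandwidth_gbps=100) as s:
        print(f"Transferring training data over {s.slice_id} at {s.bandwidth_gbps} Gbps")
```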

At the data-security level, DYXnet has built a privacy-preserving computing platform with end-to-end encryption plus post-computation release. First, DYXnet has strengthened real-time, high-bandwidth encryption in the network to ensure that data transmitted between the client and the computing server is encrypted throughout. When an enterprise establishes a connection to the computing server, it is carried by a computing network that combines performance and security, while also ensuring that data is never transmitted to outside parties.
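As a rough illustration of what "encrypted throughout the process" can mean at the application layer, the sketch below uses AES-GCM from the Python cryptography package to authenticate and encrypt data chunks in transit. It assumes the key is exchanged securely out of band, and it is not a description of DYXnet's actual wire-level encryption, which may well be implemented in network hardware.

```python
# Minimal illustration of authenticated encryption for data in transit,
# using AES-GCM from the `cryptography` package. Conceptual sketch only;
# it does not represent DYXnet's actual encryption implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In practice the key would be negotiated or exchanged securely out of band.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_chunk(plaintext: bytes, associated_data: bytes = b"job-42") -> bytes:
    nonce = os.urandom(12)                    # unique nonce per message
    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
    return nonce + ciphertext                 # ship the nonce alongside the ciphertext

def decrypt_chunk(blob: bytes, associated_data: bytes = b"job-42") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, associated_data)

if __name__ == "__main__":
    sample = b"private training batch"
    assert decrypt_chunk(encrypt_chunk(sample)) == sample
```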

Second, the computing servers have only memory and no hard disks, enabling diskless privacy-preserving computation in which private data is cleared after the job finishes. Results are sent back once the calculations are complete, and the related data is then wiped. The next time another customer uses the service, a new dynamic private connection is established and a fresh computing service is launched in memory, completing the calculation without leaving any traces or stored copies. In this way, data exists only within the current private computing space, and users can use the computing services with confidence.
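The diskless, memory-only model can be pictured with a small sketch like the one below: private data is held only in RAM for the duration of a session, only the result leaves the session, and the buffer is wiped when the session closes. The names are hypothetical, and in a garbage-collected language such wiping is best-effort; the sketch only illustrates the "compute, return the result, clear the data" flow.

```python
# Conceptual sketch of a "diskless" compute session: data lives only in
# memory, the result is returned to the caller, and the buffer is cleared
# when the session ends. Names are illustrative, not a real API.
from contextlib import contextmanager
from typing import Callable

@contextmanager
def ephemeral_session(payload: bytes):
    buffer = bytearray(payload)          # private data held in RAM only, never written to disk
    try:
        yield buffer
    finally:
        for i in range(len(buffer)):     # best-effort wipe before releasing memory
            buffer[i] = 0

def run_private_job(payload: bytes, compute: Callable[[bytes], bytes]) -> bytes:
    with ephemeral_session(payload) as buf:
        result = compute(bytes(buf))     # compute on the in-memory copy
    return result                        # only the result leaves the session

if __name__ == "__main__":
    out = run_private_job(b"customer documents", lambda data: bytes(reversed(data)))
    print(out)
```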

During the interview, Zhang Xianguo also described in detail, through a customer case, how DYXnet helps enterprises deploy privatized large models.

Taking an education company as an example, DYXnet built a new private domain for the customer to store data and provided private-network services for computing power. The data was organized into a knowledge base in the form of a private vector database and connected to the privacy-preserving computing power provided by DYXnet, which was then used for fine-tuning. Throughout training, all data was processed within the private domain to ensure it was never transmitted externally. Finally, the newly trained AI model was deployed on the public network for the education company's users.
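Conceptually, the private-domain knowledge base described here can be sketched as documents embedded into vectors and queried by similarity, with everything kept in the customer's own store. The toy embedding function and PrivateVectorStore below are stand-ins for a real embedding model and vector database; they only illustrate the data flow in which raw documents never leave the private domain.

```python
# Minimal sketch of a private-domain knowledge base: documents are embedded
# and kept in an in-memory vector index, then queried by cosine similarity.
# The embedding function is a deterministic stand-in for illustration; a real
# deployment would use a proper embedding model and vector database.
import hashlib
import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    # Hash-seeded pseudo-embedding, normalized to unit length (illustration only).
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class PrivateVectorStore:
    """Toy in-memory vector store; documents never leave this object."""
    def __init__(self):
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(toy_embed(text))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = toy_embed(query)
        scores = np.array([float(v @ q) for v in self.vectors])
        top = np.argsort(-scores)[:k]
        return [self.texts[i] for i in top]

if __name__ == "__main__":
    store = PrivateVectorStore()
    for doc in ["course syllabus", "grading policy", "enrollment FAQ"]:
        store.add(doc)
    print(store.search("how do I enroll?"))
```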

Today, DYXnet's ICT services have evolved from network empowerment into a full-stack "AI + cloud-network-security" service system, empowering more than 2,000 enterprise customers.

Seizing the momentum: continuing to explore AINet integration and upgrades

Discussing DYXnet's overall development plan, Zhang Xianguo said that in the face of the AI trend, DYXnet is committed to upgrading toward AI applications and large-model services plus AI-native infrastructure services. Within AI-native infrastructure services, one of DYXnet's strategic directions is to build AINet, and it is pursuing this strategy along two routes:

The first route is innovation at the traditional business level, covering SD-WAN/MPLS VPN, the core network architecture, and SASE security. DYXnet is currently combining these technologies with AI to achieve the following: intelligent traffic prediction through SD-WAN/MPLS VPN + AI (a simple sketch of such prediction follows below); an InsightNet intelligent network built from the core network + AI; and an intelligent security protection system created by integrating SASE's AI capabilities into the SASE POPs at the edge of the core network.

The second route is to build an AI-native hyper-interconnection architecture to meet the needs of enterprises' future AI-native upgrades.

AINet, formed by the convergence of these two routes, will deliver services through on-demand networking and dual-network linkage, providing an intelligent foundation for enterprise AI workloads such as training and inference.
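As a very rough illustration of the traffic-prediction idea mentioned in the first route, the sketch below forecasts next-interval link utilization with simple exponential smoothing so that bandwidth could be pre-provisioned. A production system would rely on real telemetry and far richer models; the function and figures here are assumptions for illustration only.

```python
# Illustrative traffic-prediction sketch: forecast next-interval link
# utilization with exponential smoothing. Conceptual only; not DYXnet's model.
from typing import Iterable

def exponential_smoothing_forecast(samples: Iterable[float], alpha: float = 0.3) -> float:
    """Return a one-step-ahead forecast of link utilization (in Gbps)."""
    forecast = None
    for x in samples:
        forecast = x if forecast is None else alpha * x + (1 - alpha) * forecast
    if forecast is None:
        raise ValueError("need at least one sample")
    return forecast

if __name__ == "__main__":
    recent_utilization_gbps = [42.0, 55.3, 61.8, 70.4, 68.9, 75.1]  # hypothetical telemetry
    predicted = exponential_smoothing_forecast(recent_utilization_gbps)
    # Pre-provision extra headroom if predicted demand approaches link capacity.
    print(f"predicted next-interval utilization: {predicted:.1f} Gbps")
```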

At the end of the interview, Zhang Xianguo said that DYXnet will continue to explore and advance the integration of AI capabilities with cloud-network-security services, building emerging capabilities such as intelligent infrastructure services and intelligent application services, and working with enterprises to realize the value of their data.

Editor-in-Chief's Perspective:

The many challenges brought by the private deployment of large models have given rise to new technologies and solutions. With keen industry insight and strong technical capability, DYXnet has created an AI-native hyper-interconnection architecture through its ongoing transformation and upgrading. It helps enterprises solve the problems they encounter when privatizing large models, meets the needs of traditional business, and makes it easier to deploy privatized large-model applications, laying a solid foundation for continuous business innovation and development.

Looking ahead, as AI technology continues to advance and application scenarios keep expanding, DYXnet is expected to deepen its presence in ICT services, continue upgrading toward AI applications and large-model services plus AI-native infrastructure services, and deliver more and better solutions and services to enterprises.