The MTT KUAE is Moore Threads' full-stack solution for artificial intelligence data centers, built on the MTT S4000 GPU and the dual 8-card GPU server MCCX D800. It addresses the construction and operational management challenges of large-scale GPU computing power through an integrated delivery approach.
Quick Delivery
Cluster construction in 30 days
Best Practice
Comprehensive optimization of computation, storage, and networking
Ready-to-Use
A complete set of tools and software stack
High Performance
Distributed training support for 100-billion-parameter models
Hardware-Software Integration, Ready-to-Use
The MTT KUAE full-stack solution, built on Moore Threads' full-featured GPUs, integrates hardware and software. Centered on the KUAE computing cluster, it is complemented by the KUAE Platform cluster management platform and the KUAE ModelStudio model services. This end-to-end solution tackles the complex challenges of building and operating large-scale GPU computing power through an integrated delivery approach.
Core Features
The MTT KUAE full-stack solution fully leverages the advantages of Moore Threads GPUs.
Product Portfolio
MTT KUAE Core Components
MTT KUAE Platform
An integrated hardware and software platform for AI large-model training, distributed graphics rendering, streaming-media processing, and scientific computing. It deeply integrates full-featured GPU computing, networking, and storage to provide highly reliable, high-performance computing services.
Through this platform, users can flexibly manage computing resources across multiple data centers and clusters, with integrated multi-dimensional monitoring, alerting, and logging. This enables artificial intelligence data centers to automate operations and maintenance.
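To make the monitoring-and-alerting idea concrete, here is a minimal sketch of threshold-based alert evaluation of the kind a cluster management platform performs. The metric names, node names, and thresholds are illustrative assumptions, not the KUAE Platform's actual API.

```python
# Hypothetical sketch: evaluate per-node metrics against alert thresholds.
# Metric names and limits below are placeholders, not a real KUAE interface.

def evaluate_alerts(node_metrics, thresholds):
    """Return (node, metric, value) tuples for every threshold breach."""
    alerts = []
    for node, metrics in node_metrics.items():
        for metric, value in metrics.items():
            limit = thresholds.get(metric)
            if limit is not None and value > limit:
                alerts.append((node, metric, value))
    return alerts

# Example: two nodes reporting GPU temperature and memory utilization.
metrics = {
    "node-01": {"gpu_temp_c": 92, "mem_util": 0.71},
    "node-02": {"gpu_temp_c": 78, "mem_util": 0.97},
}
thresholds = {"gpu_temp_c": 85, "mem_util": 0.95}
print(evaluate_alerts(metrics, thresholds))
```

A real platform would feed such rules from a time-series store and route breaches to an on-call system; the core loop, however, is this comparison of observed metrics against configured limits.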
MTT KUAE ModelStudio
Covers pre-training, fine-tuning, and inference for all major open-source large models.
Using the MUSIFY tool, developers can easily adapt existing GPU applications to the MUSA architecture and deploy large language model services with one-click containerization.
Offers large model lifecycle management with a user-friendly interface, enabling easy workflow organization and lowering the barrier to using large models.
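The lifecycle workflow described above (pre-training, then fine-tuning, then inference deployment) can be sketched as an ordered pipeline of stages. The runner, stage names, and actions below are illustrative assumptions, not ModelStudio's actual interface.

```python
# Hedged sketch of a staged model lifecycle; this is NOT ModelStudio's API.

def run_lifecycle(model_name, stages):
    """Run lifecycle stages in order, returning the completed stage names."""
    history = []
    for stage, action in stages:
        action(model_name)      # e.g. launch a containerized training job
        history.append(stage)
    return history

log = []  # records what each (placeholder) stage action did
stages = [
    ("pretrain", lambda m: log.append(f"pretraining {m}")),
    ("finetune", lambda m: log.append(f"fine-tuning {m}")),
    ("deploy",   lambda m: log.append(f"deploying {m} as a service")),
]
print(run_lifecycle("demo-llm", stages))  # ['pretrain', 'finetune', 'deploy']
```

A lifecycle-management UI essentially exposes this ordering with retries, artifacts, and permissions layered on top, which is what lowers the barrier for non-expert users.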
Key Issues MTT KUAE Addresses
Modular design and flexible deployment for large-scale GPU computing power construction
Optimization of linear speedup across large-scale GPU computing power
Construction of a high-speed parameter transmission network
Construction and scheduling of heterogeneous computing clusters
Design and construction of a computational power service support system
Cloud-native, elastic scheduling of GPU cluster computing power
Reliability and security of computation and storage
Highly reliable automatic fault diagnosis and recovery
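Among the issues listed above, elastic compute scheduling is the most mechanical: replicas scale with the job backlog, clamped to cluster capacity. The following is a minimal sketch under assumed names and thresholds; it is not the KUAE Platform's scheduler.

```python
# Hedged sketch of elastic scaling: size worker replicas to the pending-job
# backlog, clamped between 1 and the cluster's capacity. Illustrative only.

def desired_replicas(pending_jobs, jobs_per_replica, max_replicas):
    """Scale up with backlog pressure; never exceed capacity or drop below 1."""
    needed = -(-pending_jobs // jobs_per_replica)  # ceiling division
    return max(1, min(max_replicas, needed))

print(desired_replicas(pending_jobs=17, jobs_per_replica=4, max_replicas=8))  # 5
```

A production autoscaler would add hysteresis and cooldown windows so replica counts do not thrash, but the core decision is this clamped ratio of demand to per-replica throughput.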