
We helped found the Open Compute Project in 2011 because we believe open source hardware makes data center infrastructure more efficient, scalable, and sustainable. As the scale of AI grows, OCP's open source mission is more important than ever.
This week, OCP held a global summit themed "Leading the Future of AI," where we brought attention to the OCP Open Data Center Initiative, unveiled new hardware and networking architecture to support our AI workloads, and outlined new standards for sustainable infrastructure.
Promoting Open Infrastructure
We believe the future of AI requires a new level of collaboration across the data center industry. To keep pace with the growth of demand and maximize the benefit of AI to society, the data center industry must standardize its approach to building physical infrastructure in a way that encourages interoperability while still allowing for differentiation and innovation.
That’s why we joined industry peers in supporting Open Compute Project’s Open Data Center Initiative, which proposes common infrastructure standards for data center power, cooling, mechanical structure, and telemetry. Read the OCP open letter to learn more about our call to action for industry stakeholders.
Our Open Hardware Innovations
We announced the next generation of network fabrics for our AI training clusters, with a focus on open hardware designs to benefit companies across the industry, including new switches that integrate NVIDIA's Spectrum Ethernet into our networking architecture. This new generation of AI fabric is purpose-built for AI workloads, giving engineers the freedom to reimagine networking hardware at unprecedented scale. In addition to these networking innovations, we've become initiating members of Ethernet for Scale-Up Networking (ESUN), OCP's new Ethernet workstream, which aims to improve connectivity as AI systems scale up.
We also introduced specifications for the Open Rack Wide (ORW) form factor, a new open source rack standard designed for AI data centers. The ORW specifications address the power, cooling, and efficiency needs of the next generation of AI systems and mark a major leap forward in open infrastructure innovation. In support of the new specifications, AMD announced Helios, its most advanced AI rack yet, designed on our ORW open standards. Helios and our ORW form factor represent a fundamental move toward standardized, interoperable, and scalable data center hardware design across the industry.
In addition to Helios, we unveiled a range of next-generation AI hardware platforms that support new and emerging AI use cases. These platforms are designed to deliver significant improvements in performance, reliability, and serviceability for large-scale generative AI training and inference workloads.
Driving Sustainability in Infrastructure
At this week’s summit, we presented Design for Sustainability, a new set of principles for reducing IT hardware emissions. These technical design principles aim to help hardware designers reduce the environmental impact of IT racks by integrating design strategies such as modularity, reuse, retrofitting, dematerialization, and extending hardware lifecycles.
We also shared details of the methodology we created to track emissions from millions of hardware components in our data centers, and we used our own Llama AI models to optimize the database that tracks these emissions.
At Meta, we're focused on reaching our sustainability goals, and we're inviting the wider industry to adopt the strategies and frameworks outlined here to help reach theirs.
Hardware innovation will be essential to meeting the challenges of the future of AI. We’re excited about the progress we’ve already made, and look forward to continuing to drive openness and collaborating with industry partners as the complexity of AI systems grows.
The post Open Hardware is the Future of AI Data Center Infrastructure appeared first on Meta Newsroom.