
Intel, DOE Announce First-Ever Exascale Supercomputer 'Aurora'

Intel and the DOE have announced what is expected to be the first exascale supercomputer to be deployed. Codenamed Aurora, the system should be ready by 2021.
By Joel Hruska

Intel and the Department of Energy have announced plans to deploy the first supercomputer with a sustained performance of one exaflop by 2021. That's a bit of a slip compared with earlier exascale roadmaps -- in fact, that 2021 delivery date means Horst Simon should win the bet he made in 2013 that supercomputers wouldn't hit exascale performance until after 2020.

“Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO. “The convergence of AI and high-performance computing is an enormous opportunity to address some of the world’s biggest challenges and an important catalyst for economic opportunity.”

Aurora will deploy a future Intel Xeon CPU, Intel's Optane DC Persistent Memory (we've covered how DC PM could change the high-performance computing market before), Intel's Xe compute architecture, and Intel's oneAPI software. In short, this is an Intel win, top to bottom. That's actually fairly surprising -- in the last few years, we've seen far more companies opt for hybrid Intel-Nvidia deployments rather than go all-in with Intel alone. Betting on Intel Xe before consumer or HPC hardware is even on the market implies Intel showed off some impressive projected performance figures.

Aurora is intended to push the envelope in a number of areas, including simulation-based computational science, machine learning, and cosmological simulation. The system will be built in partnership with Cray, using that company's next-generation supercomputing hardware platform, codenamed Shasta, and its high-performance scalable interconnect, codenamed Slingshot.

Exascale has more than symbolic importance. The compute capability of the human brain at the neural level has been estimated to be in the ballpark of exascale, though I can't stress strongly enough that such estimates vastly oversimplify the differences between how the human brain performs computations and how computers do. And simply hitting exascale (that's 10^18 FLOPS) doesn't actually help us use those transistors to build a working model of a human brain. Having the theoretical computational capability of a brain doesn't equal a working whole-brain computer model, any more than having a huge heap of concrete, steel, and enriched uranium is equivalent to a functional nuclear reactor. It's how you put the thing together that dictates its function, and we're a long way from that.
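
To put 10^18 operations per second in perspective, here's a quick back-of-the-envelope sketch in Python. The population figure is a round 2019 estimate used purely for illustration; none of the numbers below are Aurora specifications:

```python
# Back-of-the-envelope arithmetic for what "exascale" means.
EXAFLOP = 10**18          # floating-point operations per second
world_population = 7.7e9  # rough 2019 estimate, for illustration only

# If every person on Earth did one calculation per second,
# how long would it take to match one second of exascale compute?
seconds = EXAFLOP / world_population
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years of whole-planet arithmetic per exascale-second")
# -> roughly 4 years
```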

But hitting exascale computing levels is one critical component of how we get to the point of running those simulations, and running them at scale. This is not to downplay the difficulty of simulating a human brain -- open-source projects like OpenWorm are still working on a worm, C. elegans, with only about a thousand cells in its entire body. Booting up a digital human consciousness is quite some ways away. But with exascale computers, we're moving into new frontiers of complexity -- and new discoveries undoubtedly await.
