It takes an exascale supercomputer to drive carbon capture
Here on Earth, we bury our problems and simulate our way out of them later
Over the course of four decades, global carbon dioxide emissions have increased by 90 percent, and it goes without saying, especially this summer week, that the impact is keenly felt.
It would be convenient to bury these facts and the CO2 somewhere out of sight, which is exactly what carbon capture efforts aim to do.
The goal of carbon capture is to grab CO2 at the emission source, transport it to a facility, and isolate it underground, keeping it from ever entering the atmosphere. The process itself, however, comes with some nasty byproducts, which also need to be stored and managed.
There are a number of capture methods, depending on the type of reactor, but one of the most promising is still on the horizon. It could be revolutionary because it eliminates nitrous oxide and other offshoots of capture reactions. The trouble is that doing this at meaningful scale has been difficult: as the size of a reactor goes up, so too does the complexity of the problem.
The irony is that it takes CO2-generating supercomputer powerhouses to start cracking the CO2 capture problem. In this case, it’s America's first exascale supercomputer, the 21-megawatt Frontier, at Oak Ridge National Laboratory, although to be fair, this system is one of the few hydro-powered HPC giants.
Jordan Musser, a scientist at the National Energy Technology Laboratory (NETL) in the US, is leading an effort to use the entire Frontier supercomputer later this year to model the feasibility of moving clean carbon capture from a small-scale lab experiment to much larger scale.
Only about 20 projects outside the NETL group's work are queued up that can gobble most of the cores on the exascale machine. The group's code for modeling the new approach to carbon capture needs to track billions of particles individually to simulate gas-solid interactions over defined time scales. As one might imagine, this is more than a little computationally intensive.
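To get a feel for why tracking every particle individually is so expensive, here is a minimal, hypothetical sketch of the kind of per-particle update a gas-solid simulation performs on every time step. This is not NETL's MFIX code; the Stokes-style drag model, the drag time scale, and the uniform gas flow are illustrative placeholders.

```python
import numpy as np

def step_particles(pos, vel, gas_vel, dt, tau_p=0.05, g=9.81):
    """Advance every particle one time step.

    Each particle relaxes toward the local gas velocity on a drag
    time scale tau_p (a placeholder drag model) and falls under
    gravity. Production codes also handle particle collisions,
    heat transfer, and chemistry for every particle, every step.
    """
    drag = (gas_vel - vel) / tau_p           # drag acceleration
    accel = drag + np.array([0.0, 0.0, -g])  # add gravity
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

# Even this bare-bones update touches every particle on every step:
# a billion particles stepped a million times means ~1e15 updates
# before collisions or chemistry are even considered.
n = 100_000  # tiny compared with the billions in a real run
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, (n, 3))
vel = np.zeros((n, 3))
gas = np.array([0.0, 0.0, 1.0])  # toy uniform upward gas flow
pos, vel = step_particles(pos, vel, gas, dt=1e-3)
```

The work per step scales linearly with particle count, which is why resolving billions of individual particles is only realistic on a machine of Frontier's size.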
"We are using a metal oxide to provide oxygen for the reaction so there's no nitrogen available, therefore when the reaction occurs with the fossil energy source, there's no nitrous oxide or other byproduct produced. Further, the only resulting gases are carbon dioxide and water vapor so it's possible to condense water vapor and get a pure CO2 stream for use or storage," Musser explained.
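The scheme Musser describes is a chemical-looping approach: the fuel never meets air, because a metal-oxide carrier delivers the oxygen. As an illustration only (the article does not name the carrier or fuel), with methane and a nickel-oxide carrier the fuel-side reaction would be CH4 + 4NiO → CO2 + 2H2O + 4Ni, leaving exactly the CO2-plus-water-vapor stream he mentions. A quick sanity check that this example equation balances:

```python
from collections import Counter

def atoms(side):
    """Tally atoms on one side of a reaction.

    `side` is a list of (atom-count dict, stoichiometric
    coefficient) pairs for each species.
    """
    total = Counter()
    for formula, coeff in side:
        for atom, n in formula.items():
            total[atom] += coeff * n
    return total

# CH4 + 4 NiO -> CO2 + 2 H2O + 4 Ni  (illustrative carrier/fuel)
lhs = atoms([({"C": 1, "H": 4}, 1), ({"Ni": 1, "O": 1}, 4)])
rhs = atoms([({"C": 1, "O": 2}, 1),
             ({"H": 2, "O": 1}, 2),
             ({"Ni": 1}, 4)])
assert lhs == rhs  # both sides: 1 C, 4 H, 4 O, 4 Ni
```

With no air in the fuel reactor there is no nitrogen present, which is why no nitrous oxide or other nitrogen byproducts can form.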
NETL's small experimental carbon capture system using this approach is already functional, but "as you make the reactors bigger, the particle sizes remain the same but it changes all the flow conditions. You get different mixing behaviors, different amounts of contact between gas and solid, so this changes the overall performance of the unit," he added. Changes therefore have to be made to the geometry and flow behavior to get the right amount of mixing for heat transfer, chemical reactions, and other processes that have to fit into a particular window of time.
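The arithmetic behind that scale-up problem is easy to sketch: if the particle size stays fixed, the particle count grows with reactor volume, so a modest increase in reactor dimensions multiplies the simulation load enormously. A rough back-of-envelope calculation, using made-up dimensions, particle size, and packing fraction rather than NETL's actual figures:

```python
import math

def particle_count(diameter_m, height_m, particle_d_m=300e-6,
                   solids_fraction=0.3):
    """Rough particle count for a cylindrical reactor.

    Assumes a fixed particle diameter and solids volume fraction;
    all values here are illustrative, not NETL's.
    """
    reactor_vol = math.pi * (diameter_m / 2) ** 2 * height_m
    particle_vol = math.pi / 6 * particle_d_m ** 3
    return solids_fraction * reactor_vol / particle_vol

lab = particle_count(0.1, 1.0)     # bench-scale unit
pilot = particle_count(1.0, 10.0)  # ~10x larger in every dimension
print(f"{pilot / lab:.0f}x more particles")  # 1000x
```

A tenfold increase in linear size means a thousandfold increase in particle count, and with it a thousandfold increase in the cost of tracking each particle individually.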
"The advantage of having exascale capabilities is we can look at larger systems in much higher resolution," Musser said. "With limited computing we'd do coarsening of approximations of the system. Now, we can look at mid-to-large-scale units, which takes us into the demo pilot range for these to provide insight into operational conditions or potential problems."
Just as the hop from a small experimental device to a much larger one isn't straightforward, scaling the software isn't linear either. Being able to consume most or all of the compute on an exascale machine is far from trivial. Musser and team had to completely rewrite the physical models from their legacy MFIX code, port them to GPUs, and test them out.
The code "allows us to look inside the corrosive environment of these reactors and see how the process is behaving," Musser said. "An extension of the legacy MFIX primarily used for lab-scale devices will allow a ramp-up of problem size, speed, and accuracy on exascale computers, like Frontier, over the next decade." ®