Quo Vadis, Magnetic Dynamo?

by Julius Donnert

Recently a colleague asked me: “Why are you actually doing this? We have lots of cluster simulations, also with MHD. This push for resolution and fidelity, what will we get out of it?” This is a very valid question; after all, our work costs taxpayer money.

My answer went somewhere along the following lines. We are seeing an incredible flood of radio observations, from instruments that each represent hundreds of millions of euros or dollars of public investment. JVLA, MWA and LOFAR are producing absolutely amazing data: very sensitive (micro-Jy), high-S/N, high-resolution (arcsec) observations of large parts of the sky. These observations show incredible detail and complexity. If we don’t model the data, we will never understand them.

My personal poster child is the Sausage relic, because I have worked extensively on the object. Let’s have a look at the current data: in the image below you first see an “old” GMRT map of the relic at 610 MHz (van Weeren et al. 2010), above more recent maps with the JVLA (di Gennaro et al. 2018) and LOFAR (Hoang et al. 2017).

The Sausage Relic. Top: at 610 MHz with the GMRT (van Weeren et al. 2010). Middle: with the JVLA at 1.5 GHz (di Gennaro et al. 2018). Bottom: with LOFAR HBA at 150 MHz (Hoang et al. 2017). Notice how the relic gets “thicker” with decreasing frequency, because the lifetime of cosmic-ray electrons is longer at lower energies.

Isn’t that amazing? This is a 2 Mpc shock monster at the virial radius of a galaxy cluster, resolved at 5 kpc! The GMRT observation 8 years ago was already very good, but the JVLA and LOFAR maps are even better. That dot in the white box in the middle image is actually the JVLA beam. And more data is incoming: Dr. Andra Stroe (CfA) has a JVLA project running on the Sausage to get data above 10 GHz - absolutely exciting stuff!
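As a quick sanity check on the caption's point (the relic getting thicker at lower frequency because the emitting electrons live longer), here is a back-of-the-envelope sketch in Python. It uses the standard synchrotron plus inverse-Compton lifetime estimate from the relic literature and advects the aging electrons downstream; the field strength, redshift and downstream speed are my assumed numbers, not values measured from these maps:

```python
import numpy as np

def t_cool_gyr(nu_mhz, b_mug=5.0, z=0.19):
    """Approximate radiative lifetime (synchrotron + inverse Compton) of the
    electrons emitting at observed frequency nu_mhz [MHz], in Gyr.
    B in micro-Gauss; standard estimate from the radio-relic literature."""
    b_cmb = 3.25 * (1.0 + z)**2                       # CMB-equivalent field [muG]
    t_yr = 3.2e10 * np.sqrt(b_mug) / (b_mug**2 + b_cmb**2) \
           / np.sqrt(nu_mhz * (1.0 + z))
    return t_yr / 1e9

v_downstream = 1000.0        # km/s, assumed post-shock advection speed
kpc_per_kms_gyr = 1.02       # 1 km/s sustained over 1 Gyr ~ 1.02 kpc

for nu in (150.0, 610.0, 1500.0):   # LOFAR, GMRT, JVLA frequencies [MHz]
    width = v_downstream * t_cool_gyr(nu) * kpc_per_kms_gyr
    print(f"{nu:6.0f} MHz : downstream cooling width ~ {width:4.0f} kpc")
```

With these assumptions the emitting region is ~40 kpc wide at 1.5 GHz and ~120 kpc wide at 150 MHz, which is the trend the three maps show.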

In the JVLA data the relic dissolves into filaments: are these flux tubes? Or is the shock intrinsically irregular? Is this just MHD we are seeing, or do we need more plasma physics?

Using a cosmological simulation, we could find out! But we need a simulation that actually matches the performance of the radio interferometers: shocks resolved to <5 kpc, i.e. an effective resolution of 1 kpc at the cluster outskirts. Nothing we have comes close at the moment, even though computers are fast enough and can handle the storage and memory requirements.
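To put that requirement in numbers, here is a rough estimate of what a non-adaptive run at this resolution costs in memory. The box size, variable count and precision below are my assumptions for illustration, not a WOMBAT specification:

```python
# Back-of-envelope memory footprint of a uniform-grid ideal-MHD run at 1 kpc.
box_size_mpc  = 10.0    # assumed comoving box around one cluster
dx_kpc        = 1.0     # target resolution out to the cluster outskirts
n_vars        = 8       # density, 3x momentum, energy, 3x magnetic field
bytes_per_var = 8       # double precision

n_cells_1d = int(box_size_mpc * 1000 / dx_kpc)   # 10,000 cells per side
n_cells    = n_cells_1d**3                       # 1e12 cells
mem_tb     = n_cells * n_vars * bytes_per_var / 1e12

print(f"{n_cells:.1e} cells -> ~{mem_tb:.0f} TB for one copy of the MHD state")
```

That is ~64 TB just to hold one copy of the fluid state, and PBytes once you write a few dozen snapshots - within reach of the largest current machines, but not of anything smaller.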

But the data will get even better, because SKA is coming in strong for rotation measures. Its precursors ASKAP and MeerKAT, together with the VLASS survey, will improve the data from ~10 to ~40 RM sources in nearby clusters. SKA itself will observe a few hundred sources in nearby clusters (Bonafede et al. 2015). SKA will likely observe RMs in filaments (Giovannini et al. 2015) and even polarization in radio haloes, allowing Faraday tomography studies to resolve the Alfvén scale, where the magnetic cascade turns over (Govoni et al. 2015). These data directly test the amplification mechanism for large-scale magnetic fields, the turbulent magnetic dynamo. Plasma theorists would be thrilled to see the Alfvén scale measured in clusters, because it can rule out a basic model for the ICM plasma.
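For scale, the rotation measure these surveys go after is RM = 0.812 ∫ n_e B_∥ dl rad/m² (n_e in cm⁻³, B in µG, dl in pc). Here is a minimal sketch of the RM scatter expected through a tangled cluster field; the density, field strength, coherence length and path length are assumed, illustrative numbers:

```python
import numpy as np

# RM scatter through a tangled field: a random walk over N = L / Lambda_c cells,
# each contributing ~ 0.812 * n_e * (B / sqrt(3)) * Lambda_c rad/m^2,
# where B / sqrt(3) is the rms line-of-sight component of an isotropic field.
n_e      = 1e-3     # cm^-3, assumed electron density in the cluster outskirts
B        = 1.0      # muG, assumed rms field strength
lambda_c = 10e3     # pc, assumed field coherence length (10 kpc)
L        = 500e3    # pc, assumed path length through the ICM (500 kpc)

sigma_rm = 0.812 * n_e * (B / np.sqrt(3)) * np.sqrt(lambda_c * L)
print(f"sigma_RM ~ {sigma_rm:.0f} rad/m^2")    # ~30 rad/m^2 with these numbers
```

Going from ~10 to hundreds of such sightlines per cluster is what turns these order-of-magnitude numbers into actual profiles of the field strength and its coherence length.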

This is extremely hard to simulate accurately. Not necessarily because the simulations have to match observations reaching kpc resolution, but because turbulence and magnetic field growth need accurate, expensive MHD algorithms to be resolved close to the grid scale. Second-order MHD algorithms behave like honey for weak motions near the grid resolution and dissipate velocity and magnetic energy into heat. So we have to do better.
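To make that “honey” statement a bit more quantitative: if you treat a grid code as an implicit large-eddy simulation that damps motions below a dissipation scale of a few cells, Kolmogorov scaling gives an effective Reynolds number Re_eff ~ (L / l_diss)^(4/3). The cell counts for the dissipation scale below are rough, assumed numbers, just to show the trend:

```python
# Effective Reynolds number of a grid code viewed as an implicit LES:
# motions below l_diss = (a few cells) are dissipated numerically, so
# Re_eff ~ (L / l_diss)^(4/3) from Kolmogorov scaling.
def re_eff(n_cells_1d, cells_per_diss_scale):
    return (n_cells_1d / cells_per_diss_scale) ** (4.0 / 3.0)

for n in (256, 1024, 4096):
    second = re_eff(n, 10.0)   # assumed ~10-cell dissipation scale (2nd order)
    high   = re_eff(n, 4.0)    # assumed ~4-cell dissipation scale (high order)
    print(f"N = {n:4d}: Re_eff ~ {second:6.0f} (2nd order) vs ~ {high:6.0f} (high order)")
```

In this picture a high-order scheme buys you roughly the turbulence of a second-order run at twice the linear resolution, at a fraction of the cost.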

To make matters worse, the growth rate, i.e. the speed with which the turbulent dynamo amplifies magnetic fields, depends on resolution as well. It is set by the smallest eddy turnover time in the system (Beresnyak et al. 2016). Resolution-adaptive simulations might therefore bias magnetic field growth toward high-density regions and can introduce a density dependence into the field distribution. Existing fancy galaxy formation techniques might get the field intrinsically wrong, because they are all adaptive. I am writing “might”, because at the moment there is no unbiased Eulerian simulation to compare to, so we can’t be sure! We are not even sure how Lagrangian galaxy formation codes behave in idealized dynamo simulations …
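Here is a minimal sketch of that resolution dependence, assuming Kolmogorov scaling for the eddy turnover time and that the smallest resolved eddies span a few cells (the injection scale, outer turnover time and cell counts below are assumed, illustrative numbers):

```python
# Kinematic dynamo growth in an implicit-LES picture: the growth rate tracks
# the turnover rate of the smallest resolved eddies,
#   Gamma ~ 1 / tau_eddy(l_min),  with  tau_eddy(l) = tau_L * (l / L)^(2/3),
# so Gamma depends on the local cell size dx through l_min ~ a few * dx.
def growth_rate_per_gyr(dx_kpc, L_kpc=300.0, tau_L_gyr=1.0, cells_per_eddy=5.0):
    l_min = cells_per_eddy * dx_kpc
    return 1.0 / (tau_L_gyr * (l_min / L_kpc) ** (2.0 / 3.0))

for dx in (16.0, 4.0, 1.0):    # kpc: coarse outskirts vs refined core
    print(f"dx = {dx:4.1f} kpc : growth rate ~ {growth_rate_per_gyr(dx):4.1f} / Gyr")
```

If dx follows density, as it does in adaptive codes, the growth rate inherits that density dependence - which is exactly the bias I am worried about.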

If you’re interested in the nitty-gritty details, I’ve led a review article about magnetic field growth in clusters and the large-scale structure, where we discuss these issues at length: Donnert, Vazza, Brueggen, zuHone, SSRv 2018.

Finally, this is why high-order cosmological simulations with WOMBAT are the obvious next step. We have the problem, we have the computers for it; all we need are the codes. If you look at the current lineup of supercomputers, we can do these simulations. Performance will reach PFlops and the data are going to be in the PByte regime, just like the LOFAR survey, which produces a PByte a year (Shimwell priv. comm.).

So let’s go for it; the simulation community will have to rewrite all their codes for the exa-scale anyway!

UPDATE: Our latest paper was featured on HPCwire for MHD at the exa-scale.