## 1 Introduction

## 2 Generative adversarial networks

## 3 N-body simulations data

\(^{3}\) and \(1024^{3}\) particles, respectively. We used L-PICOLA (Howlett et al. 2015) to create 10 independent simulation boxes for each box size. The cosmological model was ΛCDM (Λ cold dark matter) with Hubble constant \(H_{0} = 100\,h\) km s\(^{-1}\) Mpc\(^{-1}\), \(h = 0.7\), dark energy density \(\varOmega_{\varLambda} = 0.72\), and matter density \(\varOmega_{m} = 0.28\). We used the particle distribution at redshift \(z = 0\).

We cut the boxes into thin slices to create grayscale, two-dimensional images of the cosmic web. The x-coordinates were divided into 1000 uniform intervals, of which 500 non-consecutive slices were selected; repeating this process for the y and z axes gave 1500 samples from each of the 10 realizations, for a total of \(15{,}000\) samples in our training dataset. Each slice was pixelised into a \(256 \times 256\) pixel image, with the value at each pixel equal to its particle count. After pixelisation, the images were smoothed with a Gaussian kernel with a standard deviation of one pixel to reduce particle shot noise.
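The slicing, pixelisation, and smoothing steps above can be sketched in a few lines of numpy. The function names and the pure-numpy Gaussian stand-in are ours for illustration; the paper does not specify its implementation:

```python
import numpy as np

def gaussian_smooth(img, sigma=1.0, radius=4):
    """Separable Gaussian smoothing with a normalised kernel, standing in
    for a library call such as scipy.ndimage.gaussian_filter."""
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    img = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    img = np.apply_along_axis(np.convolve, 0, img, kernel, mode="same")
    return img

def slice_to_image(positions, box_size, axis=0, slice_index=0,
                   n_slices=1000, n_pix=256):
    """Bin the particles of one thin slab onto a 2D pixel grid.

    The box is divided into `n_slices` uniform slabs along `axis`; the slab
    `slice_index` is projected onto the remaining two axes, pixelised into
    an n_pix x n_pix particle-count map, and smoothed with a one-pixel
    Gaussian kernel to suppress particle shot noise.
    """
    width = box_size / n_slices
    coord = positions[:, axis]
    mask = (coord >= slice_index * width) & (coord < (slice_index + 1) * width)
    other = [a for a in range(3) if a != axis]
    counts, _, _ = np.histogram2d(
        positions[mask, other[0]], positions[mask, other[1]],
        bins=n_pix, range=[[0.0, box_size], [0.0, box_size]])
    return gaussian_smooth(counts, sigma=1.0)
```

Selecting 500 such slices per axis, per realization, reproduces the sampling scheme described above.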

## 4 Implementation and training

Layer | Operation | Activation | Output dimension |
---|---|---|---|
**Discriminator** | | | |
X | input | – | m × 256 × 256 × 1 |
\(h_{0}\) | conv | LeakyReLU–BatchNorm | m × 128 × 128 × 64 |
\(h_{1}\) | conv | LeakyReLU–BatchNorm | m × 64 × 64 × 128 |
\(h_{2}\) | conv | LeakyReLU–BatchNorm | m × 32 × 32 × 256 |
\(h_{3}\) | conv | LeakyReLU–BatchNorm | m × 16 × 16 × 512 |
\(h_{4}\) | linear | sigmoid (identity) | m × 1 |
**Generator** | | | |
z | input | – | m × 200 (m × 100) |
\(h_{0}\) | linear | ReLU–BatchNorm | m × 16 × 16 × 512 |
\(h_{1}\) | deconv | ReLU–BatchNorm | m × 32 × 32 × 256 |
\(h_{2}\) | deconv | ReLU–BatchNorm | m × 64 × 64 × 128 |
\(h_{3}\) | deconv | ReLU–BatchNorm | m × 128 × 128 × 64 |
\(h_{4}\) | deconv | tanh | m × 256 × 256 × 1 |

Values in parentheses refer to the Wasserstein-1 variant.
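The stack follows the standard DCGAN pattern: each discriminator layer halves both spatial dimensions while doubling the channel count, and the generator mirrors this in reverse. A minimal sketch of that shape bookkeeping (the function name and the stride-2 convention are our assumptions, not taken from the paper):

```python
def dcgan_shapes(n_pix=256, base_channels=64, n_layers=4):
    """Walk the stride-2 convolution stack of the discriminator:
    every layer halves the spatial size and doubles the channels,
    matching the dimensions listed in the table (ignoring batch size m)."""
    shapes = [(n_pix, n_pix, 1)]  # input image X
    size, channels = n_pix, base_channels
    for _ in range(n_layers):
        size //= 2
        shapes.append((size, size, channels))
        channels *= 2
    return shapes
```

Reversing this list (and swapping conv for deconv) yields the generator's layer dimensions.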

Hyperparameter | Standard GAN | Wasserstein-1 | Description |
---|---|---|---|
Batch size | 16 | 16 | Number of training samples used to compute the gradient at each update |
z dimension | 200 | 100 | Dimension of the Gaussian prior distribution |
Learning rate D | \(1 \cdot 10^{-5}\) | \(1 \cdot 10^{-5}\) | Discriminator learning rate used by the Adam optimizer |
\(\beta_{1}\) | 0.5 | 0.5 | Exponential decay rate of the first-moment estimates in the Adam optimizer |
\(\beta_{2}\) | 0.999 | 0.999 | Exponential decay rate of the second-moment estimates in the Adam optimizer |
Learning rate G | \(1 \cdot 10^{-5}\) | \(1 \cdot 10^{-8}\) | Generator learning rate used by the Adam optimizer |
Gradient penalty | – | 1000 | Gradient penalty coefficient applied for Wasserstein-1 |
a | 4 | 4 | Parameter in s(x) used to obtain the scaled images |
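The last row refers to a scaling function s(x) with parameter a = 4, which maps non-negative pixel counts into the value range of the generator's tanh output. The exact form of s(x) is defined in the paper's text rather than in this table; one form consistent with that role, shown here purely as an illustrative assumption, is \(s(x) = 2x/(x+a) - 1\):

```python
import numpy as np

A = 4.0  # parameter a from the hyperparameter table

def scale(x, a=A):
    """Assumed scaling: map counts x >= 0 into [-1, 1), matching the
    tanh output range of the generator. Not necessarily the paper's s(x)."""
    return 2.0 * x / (x + a) - 1.0

def unscale(s, a=A):
    """Inverse of the assumed scaling, recovering counts from scaled images."""
    return a * (1.0 + s) / (1.0 - s)
```

Whatever its exact form, an invertible transform of this kind lets generated images be mapped back to physical particle counts for the diagnostics below.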

## 5 Diagnostics

## 6 Results

### 6.1 Large images of size 500 Mpc

### 6.2 Small images of size 100 Mpc

## 7 Conclusion

\(^{1}\) and LSST\(^{2}\) projects. The need for fast simulations will be amplified further by the emergence of new analysis methods, which may be based on advanced statistics (Petri et al. 2013) or deep learning (Schmelzle et al. 2017). These methods aim to extract more information from cosmological data and often require large simulation datasets. While we demonstrated the performance of GANs on 2D images trained on a single GPU, the approach extends naturally to generating 3D mass distributions (Ravanbakhsh et al. 2016) for estimating cosmological parameters from dark matter simulations.