Abstract — Digital rock imaging is constrained by detector hardware, forcing a trade-off between the image field of view (FOV) and image resolution. This can be compensated for with super resolution (SR) techniques that take a wide-FOV, low-resolution (LR) image and super resolve it into a high-resolution (HR) image. The Enhanced Deep Super Resolution Generative Adversarial Network (EDSRGAN) is trained on the Deep Learning Digital Rock Super Resolution Dataset, a diverse compilation of raw and processed uCT images in 2D and 3D. The 2D- and 3D-trained networks show comparable performance, with a 50\% to 70\% reduction in relative error over bicubic interpolation. In recovering texture, the GAN shows superior visual similarity compared to SRCNN and other methods. Difference maps indicate that the SRCNN section of the SRGAN network recovers large-scale edge features (grain boundaries), while the GAN section regenerates perceptually indistinguishable high-frequency texture. The physical accuracy of the SR reconstruction is measured by permeability and phase topology on consistently segmented images, with the SRGAN results achieving the closest match. Network performance is generalised with augmentation, showing high adaptability to noise and blur. HR images are fed into the network to generate HR-SR images, extrapolating network performance to sub-resolution features present in the HR images themselves. Results show that sub-resolution features are regenerated even though the network operates outside its trained specifications. Comparison with SEM images shows that the regenerated details are consistent with the underlying geometry. Recovery of texture benefits the characterisation of digital rocks with a high proportion of sub-resolution micro-porous features. Images that would otherwise be constrained by the mineralogy of the rock, by fast transient imaging, or by the energy of the source can be super resolved accurately for further analysis downstream.