Unsupervised anomaly detection has gained tremendous momentum in medical applications, with Generative Adversarial Networks (GANs) playing a pivotal role in deep anomaly detection. However, GAN-based methods often struggle to detect anomalies accurately at the pixel level, where finer features are necessary for precise localization. In this paper, we propose F-UNetGAN, a novel GAN-based fast residual attention network for fine-grained anomaly detection and localization in a fully unsupervised manner. First, we introduce a novel U-Net-based discriminator architecture that enables the model to learn finer details of the input image by extracting low-level features, enhancing its ability to output both global and local information; we define four variants of this U-Net discriminator. Additionally, we incorporate an encoder network into the GAN model to enable fast mapping from images to the latent space. Moreover, we propose new cost functions tailored to the new discriminator architecture to ensure fine-grained anomaly localization. Specifically, we introduce a per-pixel consistency regularization technique based on Mixup, which sharpens pixel-level details by leveraging feedback from the U-Net discriminator. Furthermore, we integrate attention modules that capture spatial and channel-specific features, improving the identification of important regions and the extraction of more intricate features. We evaluate our method on a COVID-19 dataset and validate its generalization ability on four benchmark synthetic and medical datasets. Experimental results demonstrate that the proposed method achieves more accurate anomaly localization than other state-of-the-art methods.
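
To make the per-pixel consistency regularization concrete, the following is a minimal sketch (not the authors' implementation) of a Mixup-based consistency term for a U-Net-style discriminator. It assumes a discriminator `d_pixel` that maps an image batch of shape (N, C, H, W) to per-pixel logits of shape (N, 1, H, W); the function and variable names, the Beta-distributed mixing coefficient, and the MSE form of the penalty are illustrative assumptions.

```python
# Sketch of Mixup per-pixel consistency regularization for a U-Net discriminator.
# Assumption: d_pixel(x) returns per-pixel real/fake logits of shape (N, 1, H, W).
import torch
import torch.nn.functional as F

def mixup_consistency_loss(d_pixel, real, fake, alpha=1.0):
    """Encourage the discriminator's per-pixel output on a mixed image to match
    the same convex mix of its outputs on the real and fake images."""
    # Sample a mixing coefficient lambda ~ Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample().to(real.device)
    mixed = lam * real + (1.0 - lam) * fake  # Mixup in image space
    with torch.no_grad():
        # Target: the mix of the per-pixel predictions on real and fake inputs.
        target = lam * torch.sigmoid(d_pixel(real)) + \
                 (1.0 - lam) * torch.sigmoid(d_pixel(fake))
    pred = torch.sigmoid(d_pixel(mixed))
    # Per-pixel consistency penalty added to the discriminator loss.
    return F.mse_loss(pred, target)

# Toy usage with a stand-in "discriminator" (a 1x1 conv) just to show shapes.
if __name__ == "__main__":
    d_pixel = torch.nn.Conv2d(3, 1, kernel_size=1)
    real = torch.rand(4, 3, 64, 64)
    fake = torch.rand(4, 3, 64, 64)
    print(mixup_consistency_loss(d_pixel, real, fake).item())
```

In practice this term would be added to the discriminator's objective alongside its adversarial losses, so that the pixel-level decoder branch of the U-Net discriminator is penalized for inconsistent per-pixel decisions on interpolated images.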