Fish feeding intensity assessment (FFIA) aims to evaluate changes in fish appetite during the feeding process, which is potentially useful in industrial aquaculture. Previous methods are mainly based on computer vision techniques, but these are limited by water refraction and uneven illumination. In this paper, we introduce a new audio-based approach to FFIA. We create a new audio dataset for FFIA, namely AFFIA3K, which contains 3000 labelled audio clips covering four levels of fish feeding intensity (None, Weak, Medium, Strong). We present a deep learning framework for FFIA in which the audio signal is first transformed into an acoustic feature, i.e., the mel spectrogram, and a convolutional neural network (CNN)-based model is then used to classify the fish feeding intensity. Experimental results show that our approach achieves an mAP of 0.74 on the test set of AFFIA3K, considerably outperforming baseline systems. This indicates the potential of our proposed approach in aquaculture applications.
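The front end of the pipeline described above (raw audio to log-mel spectrogram) can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the sample rate, FFT size, hop length, and number of mel bins below are assumed values for demonstration, and the CNN classifier stage is omitted.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, centre):
            if centre > left:
                fb[i - 1, k] = (k - left) / (centre - left)
        for k in range(centre, right):
            if right > centre:
                fb[i - 1, k] = (right - k) / (right - centre)
    return fb

def log_mel_spectrogram(y, sr=16000, n_fft=512, hop=256, n_mels=64):
    # Frame the signal, apply a Hann window, take the FFT power spectrum,
    # then project each frame onto the mel filterbank and take the log.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mel + 1e-10)

# Example: a 2-second synthetic clip at 16 kHz -> (time frames, mel bins),
# the 2-D input a CNN classifier would consume.
y = np.random.default_rng(0).standard_normal(32000)
spec = log_mel_spectrogram(y)
print(spec.shape)  # -> (124, 64)
```

In practice a library such as librosa or torchaudio would typically be used for this feature extraction; the sketch only shows the shape of data flowing into the classifier.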