Deep neural networks have revolutionized image compressed sensing (CS) by delivering unprecedented performance gains. Despite these achievements, further development and practical application are hindered by the limited flexibility and adaptability of deep network models, including non-content-aware sampling, non-context-aware feature representation, and weak generalization across different sampling modes. To address these issues, many emerging techniques have been proposed. The first trend is adaptive sensing, in which the sampling matrix itself is trainable and can even support adaptive rate allocation. The second is adaptive feature learning, which exploits the relationships among image features, blocks, and network stages. The third is model adaptation, achieved through a series of scalable schemes. This review summarizes these techniques under the theme of adaptive learning for image CS and traces their development. We first review the inverse imaging problem and the traditional sparse models and optimization algorithms encountered in CS research, and then introduce the basic deep learning frameworks for image CS. We organize the development of deep learning-based image CS along these three directions and present each in turn. Finally, drawing on previous studies, we discuss current limitations and suggest possible directions for future research.
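To make the first trend concrete, the sketch below illustrates one common way a trainable sampling matrix can be realized: the block-based sampling operator is an ordinary linear layer, so it is learned jointly with the reconstruction network rather than fixed as a random matrix. This is a minimal illustration assuming a PyTorch environment; the class and parameter names (`AdaptiveSampler`, `block_size`, `ratio`) are hypothetical and do not refer to any specific method surveyed in this review.

```python
import torch
import torch.nn as nn


class AdaptiveSampler(nn.Module):
    """Illustrative block-based CS model with a trainable sampling matrix."""

    def __init__(self, block_size: int = 32, ratio: float = 0.25):
        super().__init__()
        n = block_size * block_size               # pixels per block
        m = max(1, int(round(ratio * n)))         # measurements per block at the given rate
        self.block_size = block_size
        # Trainable sampling matrix Phi (m x n) and a linear initial reconstruction.
        self.sample = nn.Linear(n, m, bias=False)
        self.init_recon = nn.Linear(m, n, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s = self.block_size
        # Split the image into non-overlapping blocks and flatten each block.
        blocks = x.unfold(2, s, s).unfold(3, s, s)          # (b, c, h/s, w/s, s, s)
        blocks = blocks.contiguous().view(b, c, -1, s * s)  # (b, c, nblocks, n)
        y = self.sample(blocks)                              # compressed measurements
        x0 = self.init_recon(y)                              # initial block estimates
        # Reassemble the blocks into an image; a deeper network would refine x0 further.
        x0 = x0.view(b, c, h // s, w // s, s, s).permute(0, 1, 2, 4, 3, 5)
        return x0.contiguous().view(b, c, h, w)


if __name__ == "__main__":
    net = AdaptiveSampler()
    img = torch.randn(1, 1, 96, 96)
    out = net(img)
    print(out.shape)  # torch.Size([1, 1, 96, 96])
```

Because the sampling layer participates in backpropagation, training the end-to-end model tailors the measurements to image content; adaptive rate allocation can be layered on top of such a design, for example by selecting how many rows of the learned matrix to apply per block.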