The conventional von Neumann architecture struggles to handle data-intensive artificial intelligence tasks efficiently because of the massive data movement between physically separated computing and storage units. The emerging computing-in-memory (CIM) architecture performs data processing and storage in the same physical location, and can therefore be far more energy-efficient than state-of-the-art von Neumann architectures. Compared with CIM counterparts based on other memory technologies, resistive random-access memory (RRAM)-based CIM systems can consume considerably less power and area when processing the same amount of data. In this paper, we first introduce the principles of, and challenges facing, RRAM-based CIM systems. We then review recent circuit- and macro-level works on RRAM-CIM systems to highlight the trends and challenges in this field.