Brain-machine interfaces (BMIs) allow people with motor disabilities to use their neural signals to control external devices that carry out their movement intentions. However, the brain keeps adapting as it interacts with the environment: neural activity changes over time at both the ensemble and single-cell levels, which makes it difficult to maintain stable decoding performance with a fixed model. The key is to model the nonstationary neural signals quantitatively at different scales and to develop adaptive decoders that co-adapt with the plastic brain. This chapter provides a comprehensive overview of the challenges in BMI decoder design, the effect of nonstationary neural activity on decoder performance, and the development of adaptive models. At the neural ensemble level, reinforcement learning (RL)-based decoders explore neural-action mappings through trial and error. A series of RL methods for decoder design is introduced that explores the large state-action space more efficiently, adapts faster, and maintains stable performance. At the single-cell level, the point process model statistically describes how neural spike timing relates to the spiking history, concurrent ensemble activity, and extrinsic stimuli or behavior. The main developments of point process methods are presented, from linear to nonlinear models and from open-loop to closed-loop adaptation. The decoding results of the different models are compared on real neural data. Both RL algorithms and point process modeling provide computational tools to describe neural adaptation at multiple scales in BMIs, helping subjects better control their neuroprostheses.
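
As a minimal sketch of the single-cell framework (the notation below is assumed for exposition and is not taken from the chapter), the point process approach commonly summarizes a neuron's instantaneous firing probability with a conditional intensity function of the form

\[
\lambda_c(t \mid H_t) = \exp\!\Big( \beta_0 + \sum_{j} \beta_j\, n_c(t-j) + \sum_{c' \neq c} \sum_{j} \gamma_{c',j}\, n_{c'}(t-j) + \boldsymbol{\theta}^{\top} \mathbf{x}(t) \Big),
\]

where \(n_c(t-j)\) denotes the neuron's own spiking history, \(n_{c'}(t-j)\) the concurrent ensemble activity, and \(\mathbf{x}(t)\) the extrinsic stimulus or behavioral covariates, so that the probability of a spike in a small bin of width \(\Delta\) is approximately \(\lambda_c(t \mid H_t)\,\Delta\).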