Learning-based control methods have gained considerable interest in human-coupled robot control, as increasingly complex cooperative scenarios are being considered. Most learning methods are employed to handle physical human-robot interaction (pHRI) in such cooperative tasks. However, the pHRI in a lower-limb exoskeleton varies with different pilots and walking patterns, which requires the controller to be learned online to adapt to the changing interaction. This paper presents a novel control strategy based on a Hierarchical Interactive Learning (HIL) framework, which aims to handle such varying interaction dynamics. The proposed HIL control strategy contains two learning hierarchies. In the high-level motion learning hierarchy, motion trajectories are modeled with Dynamic Movement Primitives (DMPs) and learned with the Locally Weighted Regression (LWR) method. In the low-level controller learning hierarchy, a Reinforcement Learning (RL) method is utilized to learn the model-based controller. The proposed HIL control strategy is demonstrated on both a single-DOF platform and a human-powered augmentation lower-limb exoskeleton. Experimental results indicate that the proposed control strategy is able to handle varying interaction dynamics and achieves better performance than traditional model-based control algorithms.
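As an illustrative sketch of the high-level hierarchy (the exact notation used in this work may differ), a single-DOF trajectory $y$ can be encoded by the standard discrete DMP of Ijspeert et al.,
\[
\tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x), \qquad
\tau \dot{y} = z, \qquad
\tau \dot{x} = -\alpha_x x,
\]
with the forcing term
\[
f(x) = \frac{\sum_{i=1}^{N} \psi_i(x)\, w_i}{\sum_{i=1}^{N} \psi_i(x)}\, x \,(g - y_0), \qquad
\psi_i(x) = \exp\!\bigl(-h_i (x - c_i)^2\bigr).
\]
Under this standard formulation, LWR fits each weight independently from a demonstrated trajectory by regressing onto the target forcing term $f_{\mathrm{target}} = \tau^2 \ddot{y}_{\mathrm{demo}} - \alpha_z\bigl(\beta_z (g - y_{\mathrm{demo}}) - \tau \dot{y}_{\mathrm{demo}}\bigr)$, yielding the closed-form solution $w_i = \mathbf{s}^{\top} \boldsymbol{\Psi}_i \mathbf{f}_{\mathrm{target}} / (\mathbf{s}^{\top} \boldsymbol{\Psi}_i \mathbf{s})$, where $s_t = x_t (g - y_0)$ and $\boldsymbol{\Psi}_i = \mathrm{diag}\bigl(\psi_i(x_t)\bigr)$.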