The goal of visual object navigation is for an agent to locate target objects accurately. Recent works mainly focus on feature embedding, attempting to learn better representations through different variants such as object distributions and graph representations. However, some typical navigation problems in complex environments, such as partial observability and obstacle avoidance, are not effectively addressed by previous feature embedding methods. In this paper, we propose a framework with a long-short objective policy, where hidden states are classified according to the navigation objective at each moment and rewarded separately. Specifically, we consider two objectives: the long-term objective is to move closer to the target, and the short-term objective is to avoid obstacles and explore. To alleviate the effect of alternating between the long-term and short-term objectives, we maintain a state memory and propose an adjustment gate to update it. Finally, an action-boosting gate reweights and combines all past hidden states for action prediction. Experimental results on RoboTHOR show that the proposed method significantly outperforms state-of-the-art methods.
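To make the two gating mechanisms concrete, the following is a minimal sketch, not the authors' implementation, of how a gated state memory and a reweighting of past hidden states could be wired into a recurrent navigation policy. It assumes a PyTorch setting, and all names (LongShortPolicySketch, adjust_gate, boost_gate, hidden_dim, memory_len) are hypothetical illustrations rather than details from the paper.

```python
import torch
import torch.nn as nn


class LongShortPolicySketch(nn.Module):
    """Illustrative sketch: gated state-memory update + reweighted combination
    of past hidden states for action prediction (names are hypothetical)."""

    def __init__(self, hidden_dim=512, num_actions=6, memory_len=32):
        super().__init__()
        self.memory_len = memory_len
        # Adjustment gate: controls how much the current hidden state
        # overwrites the most recent state-memory slot (GRU-style update gate).
        self.adjust_gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.Sigmoid()
        )
        # Action-boosting gate: scores each stored hidden state so that
        # all past states can be reweighted and combined for the policy head.
        self.boost_gate = nn.Linear(hidden_dim, 1)
        self.actor = nn.Linear(hidden_dim, num_actions)

    def forward(self, hidden, memory):
        # hidden: (B, H) current hidden state; memory: (B, T, H) past states.
        last = memory[:, -1]                                      # latest memory slot
        g = self.adjust_gate(torch.cat([hidden, last], dim=-1))   # (B, H) gate values
        updated = g * hidden + (1.0 - g) * last                   # gated memory update
        memory = torch.cat([memory, updated.unsqueeze(1)], dim=1)[:, -self.memory_len:]

        # Reweight all stored hidden states and combine them for action prediction.
        weights = torch.softmax(self.boost_gate(memory).squeeze(-1), dim=1)  # (B, T)
        combined = (weights.unsqueeze(-1) * memory).sum(dim=1)               # (B, H)
        return self.actor(combined), memory
```

In this sketch the adjustment gate plays the role of smoothing transitions when the agent switches between the long-term (approach the target) and short-term (avoid obstacles, explore) objectives, while the softmax reweighting stands in for the action-boosting gate over past hidden states; the actual reward assignment per objective described in the abstract is not modeled here.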