Robotic grasping is a fundamental manipulation skill required by many robotic tasks and is of significant research interest. Grasping in cluttered environments is challenging due to occlusion and stacking of objects. We propose an attention-based deep Q-learning network for robotic grasping, assisted by pushing actions trained with non-sparse rewards. The attention module improves the performance of the deep Q-learning network by weighting feature channels. The robot uses pushing actions to disperse densely packed objects and create space for grasping. Pushing and grasping policies are learned by trial and error in a self-supervised manner. To evaluate grasping performance, we present an overall performance metric comprising three evaluation factors: task completion rate, grasping success rate, and action efficiency. The experimental environment is built in the V-REP simulator to validate the proposed model. The results show that our pushing strategy not only improves grasping performance but also avoids unnecessary pushing actions, thereby increasing action efficiency. Ablation studies further confirm the effectiveness of the attention mechanism. The proposed method achieves an overall performance of 82.33% for robotic grasping.
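
As a minimal illustration of the channel-weighting idea mentioned above, the sketch below shows a squeeze-and-excitation-style attention block that reweights feature channels before the Q-value prediction. It is an assumption-laden example, not the paper's implementation: the reduction ratio, layer sizes, and where the block sits inside the Q-network are hypothetical.

```python
# Hypothetical channel-attention block (squeeze-and-excitation style).
# Reduction ratio and placement inside the deep Q-network are assumptions
# made only for illustration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each channel to one value.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP produces a weight per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        # Reweight the feature channels before they feed the Q-value heads.
        return x * w


if __name__ == "__main__":
    feats = torch.randn(4, 64, 28, 28)      # a batch of feature maps
    attn = ChannelAttention(channels=64)
    print(attn(feats).shape)                # torch.Size([4, 64, 28, 28])
```

The same feature maps go in and come out with identical shape, so a block like this can be dropped between convolutional stages of a Q-network without changing the rest of the architecture.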