Large language models (LLMs) have proven effective across a wide range of natural language processing tasks, yet their application to the legal domain, particularly legal retrieval, remains largely unexplored. This paper introduces a novel framework that harnesses the capabilities of LLMs for statute law retrieval. First, we employ an encoder language model fine-tuned on legal data to retrieve a high-recall set of candidate articles. We then use LLMs to refine these candidates, improving precision through in-context learning and self-generated explanations. Our experiments on the statute law retrieval task of COLIEE 2023 demonstrate the effectiveness of the framework, which achieves a new state-of-the-art result with a 2.9% higher F2 score than the previous best system.
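To make the two-stage pipeline concrete, the sketch below outlines a retrieve-then-refine flow of the kind described above: a dense encoder produces a high-recall candidate list, and an LLM filters it for precision using a relevance prompt that asks for an explanation before a final YES/NO verdict. This is a minimal illustration under our own assumptions, not the authors' exact implementation; names such as `llm_generate`, the prompt wording, and the use of `sentence-transformers` for the encoder stage are illustrative choices.

```python
# Minimal sketch of a retrieve-then-refine pipeline (illustrative, not the paper's code).
from sentence_transformers import SentenceTransformer, util


def retrieve_candidates(query: str, articles: list[str],
                        encoder: SentenceTransformer, top_k: int = 10) -> list[str]:
    """Stage 1: high-recall retrieval with an encoder (e.g., fine-tuned on legal data)."""
    query_emb = encoder.encode(query, convert_to_tensor=True)
    article_embs = encoder.encode(articles, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, article_embs)[0]
    top_idx = scores.topk(min(top_k, len(articles))).indices.tolist()
    return [articles[i] for i in top_idx]


def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call (plug in any chat-completion API here)."""
    raise NotImplementedError


def refine_with_llm(query: str, candidates: list[str]) -> list[str]:
    """Stage 2: ask the LLM to explain its judgment, then keep only the
    candidates it deems relevant (a precision-oriented filter)."""
    kept = []
    for article in candidates:
        prompt = (
            f"Question: {query}\n"
            f"Article: {article}\n"
            "Is this article relevant to answering the question? "
            "Explain your reasoning, then answer YES or NO on the last line."
        )
        answer = llm_generate(prompt)
        verdict = answer.strip().splitlines()[-1].strip().upper()
        if verdict.startswith("YES"):
            kept.append(article)
    return kept
```

In this sketch the first stage deliberately over-retrieves to protect recall, and the LLM stage trades some of that recall for precision, mirroring the high-recall-then-refine division of labor described in the abstract.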