This paper proposes a defense framework against jailbreak attacks that exploit multi-language and multi-intent inputs. Analysis indicates these attacks succeed for two main reasons: (1) LLMs may misinterpret the key points and semantics of low-resource-language inputs and consequently generate malicious content; (2) bundling multiple requests into a single input can cause attention flickering, so implicit requests are captured inadequately and answered incorrectly. The proposed framework requires no additional training: it maps multi-language inputs into a high-resource language and guides the model to think repeatedly, decompose intents, and reflect before responding. Experimental results show the framework defends against these attacks effectively.
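The defense pipeline described above can be sketched as a small prompt-orchestration loop. This is a minimal illustration, not the paper's implementation: the `llm` callable, the function name `defend`, and all prompt wordings are assumptions introduced here for clarity.

```python
from typing import Callable

def defend(user_input: str, llm: Callable[[str], str]) -> str:
    """Hypothetical sketch of the training-free defense pipeline:
    translate to a high-resource language, decompose intents, reflect."""
    # Step 1: map a potentially low-resource-language input into English,
    # so the model can capture key points and semantics reliably.
    translated = llm(
        "Translate the following input into English, preserving its meaning "
        "exactly:\n" + user_input
    )
    # Step 2: decompose a possibly multi-intent input into explicit
    # sub-requests, countering attention flickering over bundled requests.
    intents = llm(
        "List every distinct request contained in this input, one per "
        "line:\n" + translated
    )
    # Step 3: reflect on each decomposed intent before answering,
    # refusing any request that is harmful.
    return llm(
        "Consider each request below. Refuse any that is harmful and "
        "answer the rest safely:\n" + intents
    )
```

In practice `llm` would wrap a real model call; here any string-to-string function suffices, which makes the orchestration logic easy to unit-test independently of a specific model.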