ChatGPT has demonstrated impressive capabilities in conversation. However, for Spoken Language Understanding (SLU) with multiple intents, traditional approaches that jointly model Intent Detection and Slot Filling under distinct formulations hinder networks from effectively extracting shared features. In this work, we describe a Prompt-based SLU (PromptSLU) framework that intuitively unifies the two sub-tasks into the same form for a common pre-trained model. Specifically, a variable number of intents is predicted first and then naturally embedded into prompts to guide slot-value inference from a semantic perspective. Furthermore, inspired by multi-task learning, we introduce an auxiliary sub-task and a concise general objective, which help the model learn relationships among the provided labels. Experimental results show that our framework outperforms several competitive baselines on two datasets. The source code is available at https://github.com/F2-Song/PromptSLU.
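As a rough illustration of the idea only, the sketch below shows how predicted intents might be embedded into a prompt that guides slot-value inference with a text-to-text model. The prompt wording, the two-step helper, and the use of a generic T5 checkpoint are assumptions for illustration, not the authors' actual templates or implementation (which are in the linked repository).

```python
# Illustrative sketch only: the prompt templates, helper names, and the generic
# T5 checkpoint are assumptions, not the PromptSLU implementation.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def generate(prompt: str) -> str:
    """Run one text-to-text inference pass and decode the result."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

utterance = "play jazz and set an alarm for 7 am"

# Step 1: predict the (variable number of) intents with an intent-detection prompt.
intent_prompt = f"detect intents: {utterance}"
predicted_intents = generate(intent_prompt)   # hypothetically: "PlayMusic; SetAlarm"

# Step 2: embed the predicted intents into the slot-filling prompt, so that
# slot-value inference is guided by the intent semantics.
slot_prompt = f"fill slots for intents [{predicted_intents}]: {utterance}"
predicted_slots = generate(slot_prompt)       # hypothetically: "music_genre = jazz; time = 7 am"

print(predicted_intents)
print(predicted_slots)
```

Both sub-tasks are thus cast as the same generation format, which is what allows a single pre-trained model to serve them jointly.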