Abstractive summarization is a central approach to automatic text summarization. However, existing generative models typically rely on beam search decoding, whose large search space often yields suboptimal summaries. To address this, we propose a multi-task learning framework for candidate re-ordering. Specifically, we introduce a multi-task learning model that re-orders the generated candidates according to different evaluation metrics in order to select the best summary. Furthermore, we replace the gating network in mixture-of-experts models with a smooth gating control method to alleviate the non-smooth parameter problem. To strengthen the model's ability to extract semantic information, we incorporate TextCNN into the base encoder. Experiments on the Chinese long-text summarization datasets CLTS and CLTS+ show that our method significantly outperforms the best models reported in prior work, demonstrating the efficacy of the multi-task re-ordering framework and the smooth gating control. Ablation studies analyze the impact of different decoding methods and beam sizes, as well as the contribution of each re-ordering method integrated into the framework.
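To make the two main components concrete, the sketches below illustrate the general shape of (i) multi-task candidate re-ordering and (ii) a smoothed mixture-of-experts gate. Both are minimal PyTorch sketches written under our own assumptions, not the paper's released implementation; every name, shape, and hyperparameter in them (`MultiTaskReorderer`, `SmoothGateMoE`, the per-metric heads, the `temperature` and `floor` values) is illustrative.

```python
# Minimal sketch (illustrative, not the paper's code): multi-task re-ordering
# of beam-search candidates. A shared encoder is assumed to have produced one
# pooled vector per candidate; one scoring head per evaluation metric
# (e.g., ROUGE-1/2/L) scores each candidate, and the candidate with the
# highest combined score is selected.
import torch
import torch.nn as nn

class MultiTaskReorderer(nn.Module):
    def __init__(self, d_model: int, num_metrics: int):
        super().__init__()
        # One linear scoring head per target metric.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, 1) for _ in range(num_metrics)]
        )

    def forward(self, cand_repr: torch.Tensor) -> torch.Tensor:
        # cand_repr: (num_candidates, d_model) pooled candidate encodings.
        scores = torch.cat([h(cand_repr) for h in self.heads], dim=-1)  # (C, M)
        return scores.mean(dim=-1)  # combined score per candidate, shape (C,)

# Usage: pick the best of C=8 candidates under M=3 metric-specific heads.
reorderer = MultiTaskReorderer(d_model=512, num_metrics=3)
candidate_encodings = torch.randn(8, 512)   # stand-in for encoder output
best_index = reorderer(candidate_encodings).argmax().item()
```

The second sketch shows one simple way a hard top-1 mixture-of-experts gate could be replaced by a smooth gate: a temperature-scaled softmax mixed with a uniform floor, so every expert receives gradient and gate weights change gradually rather than switching abruptly. This is one plausible reading of "smooth gating control", stated as an assumption.

```python
# Minimal sketch (assumed design, not the paper's implementation): an MoE
# layer whose hard argmax/top-k gate is replaced by a smoothed softmax gate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmoothGateMoE(nn.Module):
    def __init__(self, d_model: int, num_experts: int,
                 temperature: float = 1.0, floor: float = 0.05):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(num_experts)]
        )
        self.gate = nn.Linear(d_model, num_experts)
        self.temperature = temperature  # >1 flattens the gate distribution
        self.floor = floor              # guarantees each expert a minimum weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft gate: temperature-scaled softmax instead of a hard selection,
        # so all experts receive gradient and weights evolve smoothly.
        logits = self.gate(x)                                   # (B, T, E)
        weights = F.softmax(logits / self.temperature, dim=-1)
        # Smoothing floor: mix with the uniform distribution over experts.
        uniform = torch.full_like(weights, 1.0 / weights.size(-1))
        weights = (1.0 - self.floor) * weights + self.floor * uniform
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, D, E)
        return torch.einsum("btde,bte->btd", expert_out, weights)

# Usage: a batch of 2 sequences of length 16 through 4 softly gated experts.
moe = SmoothGateMoE(d_model=512, num_experts=4)
y = moe(torch.randn(2, 16, 512))  # (2, 16, 512)
```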