Three-dimensional organ and cancer segmentation based on multi-sequence MRI is crucial for assisting clinical diagnosis. However, current automated segmentation methods are typically tailored to specific sequences, organs, and cancers, i.e., they lack generality. To address this issue, we propose a universal segmentation network for multi-sequence MRI (UniMRISegNet) that can segment multiple organs and cancers. UniMRISegNet features a shared encoder-decoder architecture equipped with contextual prompt generation (CPG) and prompt-conditioned dynamic convolution (PCDC) modules. The CPG module encodes sequence-specific, position-specific, and organ/cancer-specific text prompts as prior information that tells UniMRISegNet which task to perform. The PCDC module adaptively generates model weights from the assigned prompts, enhancing the segmentation capability of UniMRISegNet on specific tasks. To mitigate discrepancies between different sequences of the same organ while capturing similarities between related sequences, we design a novel loss function, the Semantic-Aware Cosine Similarity Loss (SACSL), which incorporates the cosine similarity of text embeddings to balance inter-sequence discrepancies and similarities. We constructed a large-scale annotated multi-sequence, multi-organ, and multi-cancer segmentation benchmark (MSOCS) and demonstrated that UniMRISegNet outperforms other universal networks and single-task networks on MSOCS. Furthermore, the universal weights learned on MSOCS can be transferred to previously unseen downstream tasks, achieving superior performance compared with training from scratch.
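The abstract does not spell out the exact form of SACSL, so the following is only a minimal PyTorch sketch of one plausible formulation, assuming per-sequence pooled image features and text-prompt embeddings of matching dimension; the function name `sacsl_loss` and its arguments are hypothetical, not the authors' API.

```python
import torch
import torch.nn.functional as F


def sacsl_loss(feat_a: torch.Tensor,
               feat_b: torch.Tensor,
               text_emb_a: torch.Tensor,
               text_emb_b: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a semantic-aware cosine similarity loss.

    feat_a, feat_b:         pooled image features for two MRI sequences of the
                            same organ, shape (B, D).
    text_emb_a, text_emb_b: text-prompt embeddings for the two sequences,
                            shape (B, D).
    """
    # Cosine similarity of the text prompts serves as a semantic target:
    # closely related sequences should yield closely aligned image features,
    # while dissimilar sequences are allowed to diverge.
    target = F.cosine_similarity(text_emb_a, text_emb_b, dim=-1)    # (B,)
    observed = F.cosine_similarity(feat_a, feat_b, dim=-1)          # (B,)
    # Penalize the gap between the observed feature similarity and the
    # text-derived semantic similarity.
    return F.mse_loss(observed, target)


# Usage sketch: two sequences of the same organ from one training batch.
feat_t1, feat_t2 = torch.randn(4, 256), torch.randn(4, 256)
emb_t1, emb_t2 = torch.randn(4, 256), torch.randn(4, 256)
loss = sacsl_loss(feat_t1, feat_t2, emb_t1, emb_t2)
```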