Abstract

With the decreasing cost of high-throughput sequencing, more and more datasets providing omics profiles of cancer patients are becoming available. Consequently, novel survival analysis approaches are being developed that integrate these differently sized and heterogeneous molecular and clinical groups of variables. Owing to the difficulty of multi-omics data integration, the Cox Proportional-Hazards (PH) model using clinical data has remained one of the best-performing techniques, barely outperformed by models using molecular data modalities. There is therefore a need for methods that can successfully perform multi-omics integration in survival analysis and outperform the clinical Cox PH model. Moreover, while certain deep learning methods have been shown to provide state-of-the-art accuracy in cancer survival prediction, most of them show no benefit, or even a decline in performance, when integrating a larger number of modalities, further motivating the need to investigate how modality-specific representations should be integrated when using neural networks for multi-omics integration. We benchmarked multiple integration techniques for a neural network architecture and found that hierarchical autoencoder-based integration of modality-specific representations outperformed other methods, such as max-pooling, and was comparable with state-of-the-art statistical approaches for multi-omics integration. Furthermore, we showed that hierarchical autoencoder-based integration achieved its increased performance through a soft modality selection mechanism, focusing on the most informative modalities for each cancer. We thus framed multi-omics integration as a partial group-wise feature selection problem, highlighting that only those models performed well that could adequately weight important modalities in the presence of the high noise imposed by less important modalities.
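To make the hierarchical integration scheme concrete, the following is a minimal sketch, not the authors' exact architecture: each modality is first compressed by its own encoder, the modality-specific representations are concatenated, and a second, shared autoencoder produces the integrated representation at its bottleneck. All class names, layer sizes, and dimensions below are hypothetical.

```python
# Hypothetical sketch of hierarchical autoencoder-based integration
# (illustrative only; layer sizes and names are assumptions, not the
# authors' exact architecture).
import torch
import torch.nn as nn

class HierarchicalIntegrator(nn.Module):
    def __init__(self, modality_dims, modality_latent=64, shared_latent=32):
        super().__init__()
        # Stage 1: one encoder per modality (e.g. mRNA, methylation, clinical).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, modality_latent), nn.ReLU())
            for d in modality_dims
        )
        # Stage 2: a shared autoencoder over the concatenated representations;
        # its bottleneck serves as the integrated multi-omics representation.
        concat_dim = modality_latent * len(modality_dims)
        self.shared_encoder = nn.Sequential(
            nn.Linear(concat_dim, shared_latent), nn.ReLU()
        )
        self.shared_decoder = nn.Linear(shared_latent, concat_dim)

    def forward(self, modalities):
        # Encode each modality separately, then concatenate.
        reps = torch.cat(
            [enc(x) for enc, x in zip(self.encoders, modalities)], dim=1
        )
        z = self.shared_encoder(reps)   # integrated representation
        recon = self.shared_decoder(z)  # reconstruction for the AE loss
        return z, recon, reps

# Usage: three hypothetical modalities with different feature counts.
model = HierarchicalIntegrator([1000, 500, 20])
batch = [torch.randn(8, 1000), torch.randn(8, 500), torch.randn(8, 20)]
z, recon, reps = model(batch)
loss = nn.functional.mse_loss(recon, reps)  # reconstruction objective
```

A max-pooling baseline, by contrast, would take the element-wise maximum over the modality-specific representations instead of concatenating them, which discards the per-modality weighting that the reconstruction objective can learn.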