In this manuscript, multimodal emotion recognition using decision-level fusion and feature-level fusion approaches is proposed. The first approach is decision-level (late) fusion, in which a fine-tuned model is developed for each modality. Here, the input text from the IEMOCAP database is tokenized to a length of 128 tokens and fed to a transformer-based BERT model. The second approach is feature-level (early) fusion, in which features from each modality are combined and then fed to an attention-based LSTM. The input is again taken from the IEMOCAP database, which contains three modalities: text, speech, and video. Text features are extracted with a CNN model, speech features are extracted using the openSMILE toolkit, and video features are extracted using a 3D-CNN architecture. The proposed approaches are simulated in Python, and performance metrics such as accuracy, sensitivity, specificity, precision, and recall are evaluated. The performance of the first approach is then compared with that of the second. The simulation results show that the second approach achieves a higher accuracy of 0.98, a higher sensitivity of 0.96, and a higher specificity of 0.75 than the first approach.
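As a concrete illustration of the text branch in the decision-level (late fusion) setup, the sketch below tokenizes an utterance to a fixed length of 128 tokens and passes it to a BERT classifier. It is a minimal sketch assuming the Hugging Face transformers library; the model checkpoint, number of emotion labels, and example utterance are illustrative assumptions, not details taken from the manuscript.

```python
# Minimal sketch of the text branch for late fusion (assumed Hugging Face API).
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)  # e.g. angry, happy, sad, neutral (assumed)

utterance = "I can't believe this is happening."  # placeholder IEMOCAP-style utterance
# Tokenize to a fixed length of 128 tokens, padding or truncating as needed
inputs = tokenizer(utterance, max_length=128, padding="max_length",
                   truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
text_probs = torch.softmax(logits, dim=-1)  # per-modality decision used in late fusion
```

In a late-fusion scheme, the per-modality probability vectors produced this way would subsequently be combined (for example, by averaging or voting) to obtain the final emotion decision.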
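For the feature-level (early fusion) approach, the following sketch concatenates per-modality feature vectors (e.g., CNN text features, openSMILE speech features, 3D-CNN video features) at each time step and feeds the fused sequence to an LSTM with a simple additive attention layer. The feature dimensions, hidden size, and number of classes are assumptions chosen only for illustration.

```python
# Minimal sketch of early fusion with an attention-based LSTM (assumed dimensions).
import torch
import torch.nn as nn

class AttentionLSTMFusion(nn.Module):
    def __init__(self, text_dim=300, speech_dim=88, video_dim=512,
                 hidden_dim=128, num_classes=4):
        super().__init__()
        fused_dim = text_dim + speech_dim + video_dim
        self.lstm = nn.LSTM(fused_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)        # scores each time step
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text_feat, speech_feat, video_feat):
        # Early fusion: concatenate modality features along the feature axis
        fused = torch.cat([text_feat, speech_feat, video_feat], dim=-1)
        outputs, _ = self.lstm(fused)               # (batch, time, hidden)
        weights = torch.softmax(self.attn(outputs), dim=1)
        context = (weights * outputs).sum(dim=1)    # attention-weighted summary
        return self.classifier(context)

# Example with random features: a batch of 2 utterances, 10 time steps each
model = AttentionLSTMFusion()
logits = model(torch.randn(2, 10, 300), torch.randn(2, 10, 88),
               torch.randn(2, 10, 512))
```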
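The evaluation step can be sketched with scikit-learn as below; the labels and predictions are placeholders rather than results from the paper. Sensitivity is computed as recall, and specificity is derived from the confusion matrix.

```python
# Minimal sketch of the metric evaluation (placeholder labels, binary case assumed).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, confusion_matrix)

y_true = np.array([0, 1, 1, 0, 1])   # placeholder ground-truth labels
y_pred = np.array([0, 1, 0, 0, 1])   # placeholder model predictions

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)          # sensitivity == recall
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)
```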