B Chaitanya, P Naga Lakshmi Devi, Sorabh Lakhanpal, B. Rohini, Q. Mohammad, B. T. Geetha
Identifier
DOI:10.1109/icaiihi57871.2023.10489205
Abstract
This study presents a framework for multimodal medical image fusion that uses cloud-based deep learning to improve diagnostic precision. A descriptive design with additional data gathering is employed, following a deductive approach and an interpretivist perspective. The proposed convolutional neural network-based model is assessed in terms of its scalability, effectiveness, and cloud-hosted computational efficiency. Compared with existing techniques, the results demonstrate higher diagnostic precision. Interpretation of the model and its clinical utility highlights its potential impact on healthcare. Limitations are addressed through critical analysis, and recommendations include enhancing the model, investigating edge computing, and accounting for ethical concerns. Subsequent work should concentrate on refining the model, growing the dataset, and guaranteeing the model's interpretability.
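The abstract names a convolutional neural network as the fusion backbone but does not publish architectural details. The snippet below is a minimal illustrative sketch of CNN-based two-modality image fusion, assuming a two-branch encoder with feature-level concatenation; the layer sizes, modality names (MRI/CT), and the PyTorch framework are assumptions for illustration, not the authors' model.

```python
# Illustrative sketch only: the two-branch design, layer sizes, and modality
# names below are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn


class SimpleFusionCNN(nn.Module):
    """Encode each modality separately, then fuse features into one image."""

    def __init__(self):
        super().__init__()
        # One small convolutional encoder per imaging modality (assumed design).
        self.enc_a = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # Fuse concatenated features and reconstruct a single fused image.
        self.fuse = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img_a, img_b):
        feats = torch.cat([self.enc_a(img_a), self.enc_b(img_b)], dim=1)
        return self.fuse(feats)


if __name__ == "__main__":
    model = SimpleFusionCNN()
    mri = torch.rand(1, 1, 128, 128)  # placeholder modality A (e.g., an MRI slice)
    ct = torch.rand(1, 1, 128, 128)   # placeholder modality B (e.g., a CT slice)
    fused = model(mri, ct)
    print(fused.shape)  # torch.Size([1, 1, 128, 128])
```

In a cloud-based deployment such as the one the abstract describes, a model of this kind would typically be trained and served on remote infrastructure, with the fused output returned to the clinical client for interpretation.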