AMIN-CNN: Enhancing Brain Tumor Segmentation through Modality-Aware Normalization and Deep Learning
Abstract
Accurate brain tumor segmentation is essential for reliable detection, early diagnosis, and treatment, which helps to increase patient survival rates. However, the inherent variability in tumor shape, size, and intensity across different MRI modalities makes automated segmentation a challenging task. Traditional deep learning approaches, such as U-Net and its variants, provide robust results but often struggle with modality-specific inconsistencies and with generalization across diverse datasets. This research presents AMIN-CNN, an adaptive multimodal invariant normalization scheme incorporated into a novel 3D convolutional neural network, to improve brain tumor segmentation across various MRI modalities. Through adaptive normalization, AMIN-CNN handles modality-specific differences more effectively than a basic CNN or U-Net, leading to improved integration of multimodal MRI input data. The model maintains strong learning performance, with only minimal overfitting beyond epoch 50, which regularization techniques can further reduce. AMIN-CNN stands out with the best Dice scores (about 0.92 for the whole tumor (WT), 0.87 for the enhancing tumor (ET), and 0.89 for the tumor core (TC)), a precision of 0.3, and an accuracy of 93.2%, while reducing false positives. Its lower sensitivity reflects that it delineates smaller but more accurate tumor regions, making it the more precise model. Compared with traditional methods, AMIN-CNN achieves competitive or better segmentation results while maintaining computational efficiency. The model also demonstrates strong boundary accuracy, with a Hausdorff distance of 20 compared to roughly 100 for the other models. On these results, AMIN-CNN is the most effective and clinically reliable of the compared architectures, owing mainly to its high precision and its ability to measure tumors accurately.
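The abstract does not reproduce the AMIN layer itself. As a rough illustration only, the following PyTorch sketch shows one common way to realize modality-aware normalization: each MRI modality channel (e.g., T1, T1ce, T2, FLAIR) is normalized per volume and then rescaled with modality-specific learnable parameters. The class and parameter names here are hypothetical and are not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveModalityNorm(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's exact AMIN layer):
    normalize each modality channel of a multimodal MRI volume to zero mean
    and unit variance, then apply a learnable per-modality affine rescaling."""

    def __init__(self, num_modalities: int = 4, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # One learnable (gamma, beta) pair per modality channel.
        self.gamma = nn.Parameter(torch.ones(num_modalities))
        self.beta = nn.Parameter(torch.zeros(num_modalities))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, modalities, D, H, W) multimodal MRI volume.
        mean = x.mean(dim=(2, 3, 4), keepdim=True)
        var = x.var(dim=(2, 3, 4), keepdim=True, unbiased=False)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        g = self.gamma.view(1, -1, 1, 1, 1)
        b = self.beta.view(1, -1, 1, 1, 1)
        return g * x_hat + b

# Usage: normalize a batch of 4-modality 3D volumes before a 3D CNN.
vol = torch.randn(2, 4, 64, 64, 64)
out = AdaptiveModalityNorm(num_modalities=4)(vol)
print(out.shape)  # torch.Size([2, 4, 64, 64, 64])
```

Normalizing per volume and per modality, rather than with dataset-wide statistics, is what makes such a layer robust to scanner- and modality-specific intensity shifts.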
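For readers unfamiliar with the reported metrics, the sketch below shows how the Dice coefficient and the symmetric Hausdorff distance are conventionally computed on binary segmentation masks. This is a generic illustration, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks P and T."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hausdorff(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two mask point sets (voxels)."""
    p, t = np.argwhere(pred), np.argwhere(target)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Toy example: two overlapping 4x4 squares on an 8x8 grid.
p = np.zeros((8, 8), bool); p[2:6, 2:6] = True
t = np.zeros((8, 8), bool); t[3:7, 3:7] = True
print(round(dice_score(p, t), 3))  # 0.562 (9 shared voxels of 16 + 16)
print(hausdorff(p, t))             # 1.414... (masks offset by one voxel diagonally)
```

Dice rewards volumetric overlap, while the Hausdorff distance penalizes the worst boundary error, which is why the two metrics together characterize both tumor volume and contour accuracy.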