IJSMT Journal

International Journal of Science, Strategic Management and Technology

An International, Peer-Reviewed, Open Access Scholarly Journal. Indexed in recognized academic databases · DOI via Crossref. The journal adheres to established scholarly publishing, peer-review, and research ethics guidelines set by the UGC.

ISSN: 3108-1762 (Online)


AUTOMATED MYOCARDIAL INFARCTION CLASSIFICATION FROM CARDIAC MRI USING VISION TRANSFORMER ARCHITECTURE

AUTHORS:
R. PRADEEPA
C. USHARANI
Mentors
Dr. E. MARIAPPAN, Dr. M. KALIAPPAN
Affiliation
Dept. of Artificial Intelligence and Data Science, Ramco Institute of Technology, Rajapalayam, India
CC BY 4.0 License:
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

Myocardial infarction, commonly known as a heart attack, is a sudden reduction in blood flow to the heart and one of the leading causes of death worldwide. Early identification of the condition significantly increases survival rates. Because MRI reveals the heart’s internal structure with high clarity, it can detect damaged heart muscle that standard tests may miss. However, manually analyzing large numbers of MRI images is time-consuming and heavily dependent on the observer’s expertise. In this work, an automated system for myocardial infarction detection from cardiac MRI images is proposed using a Vision Transformer (ViT)-based deep learning model. The study uses the EMIDEC cardiac MRI dataset, which includes both healthy subjects and patients with myocardial infarction. To make key cardiac structures more visible, the MRI volumes are first converted into two-dimensional slices and then preprocessed using noise reduction, contrast enhancement, scaling, and normalization. The system achieves a high test accuracy of 97.63%, indicating that most MRI slices were correctly classified, and further visual analysis with ROC curves and confidence-based assessment supports the system’s effectiveness and stability. These results show that Vision Transformer-based models can be used effectively for automatic myocardial infarction detection from cardiac MRI images, providing doctors with a useful decision-support tool and facilitating prompt and precise diagnosis.
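The preprocessing pipeline described above (noise reduction, contrast enhancement, scaling, and normalization of each 2D slice) can be sketched as follows. This is an illustrative sketch only: the filter sizes, percentile thresholds, the 224×224 output size, and the function name are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def preprocess_slice(slice_2d: np.ndarray, out_size: int = 224) -> np.ndarray:
    """Denoise, contrast-stretch, resize, and normalize one MRI slice."""
    img = slice_2d.astype(np.float32)

    # Noise reduction: a simple 3x3 mean filter stands in for the
    # (unspecified) denoising step.
    padded = np.pad(img, 1, mode="edge")
    img = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0

    # Contrast enhancement: percentile-based intensity stretching to [0, 1].
    lo, hi = np.percentile(img, (1, 99))
    img = np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)

    # Scaling: nearest-neighbour resize to the assumed ViT input resolution.
    rows = np.arange(out_size) * img.shape[0] // out_size
    cols = np.arange(out_size) * img.shape[1] // out_size
    img = img[np.ix_(rows, cols)]

    # Normalization: zero mean, unit variance, as transformer inputs
    # typically expect.
    return (img - img.mean()) / (img.std() + 1e-8)
```

Each normalized slice would then be split into fixed-size patches and fed to the ViT classifier; the choice of per-slice (rather than per-volume) classification matches the abstract's statement that accuracy is reported over MRI slices.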

HOW TO CITE
APA

MLA

Chicago


Pradeepa, R., & Usharani, C. (2026). Automated Myocardial Infarction Classification from Cardiac MRI using Vision Transformer Architecture. International Journal of Science, Strategic Management and Technology, 2(3). https://doi.org/10.55041/ijsmt.v2i3.248

Pradeepa, R., and C. Usharani. "Automated Myocardial Infarction Classification from Cardiac MRI using Vision Transformer Architecture." International Journal of Science, Strategic Management and Technology, vol. 2, no. 3, 2026, doi:10.55041/ijsmt.v2i3.248.

Pradeepa, R., and C. Usharani. "Automated Myocardial Infarction Classification from Cardiac MRI using Vision Transformer Architecture." International Journal of Science, Strategic Management and Technology 2, no. 3 (2026). https://doi.org/10.55041/ijsmt.v2i3.248.

Ethics and Compliance
✓ All ethical standards met
This article has undergone plagiarism screening and double-blind peer review. Editorial policies have been followed. Authors retain copyright under the CC BY 4.0 license. The research complies with ethical standards and institutional guidelines.