PERFORMANCE ANALYSIS OF SVM AND KNN
Machine learning is one of the fastest-growing fields in computer science; it enables systems to learn automatically from data and make intelligent decisions without being explicitly programmed. It plays a major role in real-world applications such as healthcare, banking, fraud detection, education, and recommendation systems. The primary objective of machine learning is to identify meaningful patterns and relationships in large datasets and to use them to predict future outcomes. Among the different learning approaches, supervised learning is widely used because it trains models on labeled data, and classification is one of its most important tasks. Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) are two commonly used supervised learning algorithms for classification and prediction problems. SVM works by finding an optimal hyperplane that separates the classes with maximum margin, while KNN classifies new instances by the majority class among the nearest neighbors under a chosen distance measure. This review paper presents an overview of machine learning concepts, discusses the importance of classification, explains the working principles of SVM and KNN, and compares their performance using evaluation metrics such as accuracy, precision, recall, F1-score, training time, testing time, scalability, and memory usage. The study concludes that SVM is more suitable for high-dimensional and complex datasets, whereas KNN is effective on smaller datasets owing to its simplicity and ease of implementation.
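The comparison the abstract describes can be sketched in code. The following is a minimal illustration, assuming scikit-learn is available; the synthetic dataset, kernel choice (RBF), and k=5 are illustrative assumptions, not parameters from the paper. It trains both classifiers and reports the accuracy, precision, recall, F1-score, and training/testing time metrics mentioned above.

```python
# Illustrative sketch only: dataset, kernel, and k are assumptions,
# not settings taken from the reviewed study.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic binary classification problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Both algorithms are margin/distance based, so feature scaling matters.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)      # SVM: solve for the max-margin hyperplane;
    train_t = time.perf_counter() - t0  # KNN: essentially just stores the data.

    t0 = time.perf_counter()
    y_pred = clf.predict(X_test)   # KNN does its real work here (neighbor search).
    test_t = time.perf_counter() - t0

    print(f"{name}: acc={accuracy_score(y_test, y_pred):.3f} "
          f"prec={precision_score(y_test, y_pred):.3f} "
          f"rec={recall_score(y_test, y_pred):.3f} "
          f"f1={f1_score(y_test, y_pred):.3f} "
          f"train={train_t:.4f}s test={test_t:.4f}s")
```

The timing contrast reflects the trade-off noted in the abstract: SVM pays its cost at training time (fitting the hyperplane), while KNN is nearly free to train but does its computation at prediction time.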
v, S. S. K. (2026). Performance Analysis of SVM and KNN. International Journal of Science, Strategic Management and Technology, 02(03). https://doi.org/10.55041/ijsmt.v2i3.304