MULTIMODAL HUMAN DIGITAL TWIN AI FOR REAL TIME PRODUCTIVITY AND FATIGUE INTELLIGENCE
This project presents a Multimodal Human Digital Twin system designed to continuously model human cognitive, emotional, and physical states through integrated vision, audio, and behavioral data. The system captures real-time facial expressions, voice characteristics, and interaction patterns to construct a comprehensive, dynamic digital representation of an individual. Deep learning techniques extract meaningful features from each modality, while multimodal fusion methods combine these features to capture complex interdependencies across human behaviors. Transformer-based sequence-modeling architectures analyze temporal patterns to estimate productivity, fatigue, and overall well-being. The framework supports both short-term monitoring and long-term trend analysis, enabling predictive assessment of performance degradation and health risks. Real-time insights, fatigue alerts, and personalized recommendations are generated to support proactive interventions. An interactive visualization dashboard presents interpretable analytics, facilitating informed decision-making for individuals and organizations. By enabling continuous, accurate human-state modeling, the proposed system aims to enhance productivity, improve well-being, and promote sustainable work practices through intelligent, data-driven support.
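The fusion-and-prediction pipeline described above can be sketched minimally as follows. This is an illustrative toy, not the paper's actual model: the embedding dimension, the random stand-in features, the scalar attention weights, and the two-output linear head are all placeholder assumptions chosen only to show the shape of attention-weighted late fusion feeding a fatigue/productivity head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality extractors would each emit a fixed-size
# embedding; random vectors stand in for them here (assumption).
D = 16                        # shared embedding dimension (placeholder)
face = rng.normal(size=D)     # facial-expression embedding
voice = rng.normal(size=D)    # voice-characteristics embedding
behav = rng.normal(size=D)    # interaction-pattern embedding

def attention_fusion(modalities, w):
    """Score each modality embedding with a learned vector w (random
    here), softmax the scores, and return the weighted sum -- a minimal
    attention-based late-fusion scheme."""
    X = np.stack(modalities)                        # (M, D)
    scores = X @ w                                  # (M,) per-modality relevance
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over modalities
    return alpha @ X                                # (D,) fused representation

w_attn = rng.normal(size=D)
fused = attention_fusion([face, voice, behav], w_attn)

# A linear head with a sigmoid maps the fused state to two
# illustrative scores in (0, 1); weights are untrained placeholders.
W_head = rng.normal(size=(2, D))
productivity, fatigue = 1.0 / (1.0 + np.exp(-(W_head @ fused)))
print(f"productivity={productivity:.2f}  fatigue={fatigue:.2f}")
```

A trained system would replace the random vectors with learned per-modality encoders and feed a sequence of fused states into a temporal (e.g. transformer) model; this sketch covers only the single-timestep fusion step.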
B, R. (2026). Multimodal Human Digital Twin AI for Real Time Productivity and Fatigue Intelligence. International Journal of Science, Strategic Management and Technology, 02(03). https://doi.org/10.55041/ijsmt.v2i3.206