
AI Researcher Mahfuz Islam Khan Jabed Advances Ethical, Explainable AI for Workforce and Healthcare Applications

This research initiative highlights the importance of ethical and explainable artificial intelligence in workforce analytics and medical diagnostics.

“Transparency is essential when AI informs workforce and healthcare decisions.”
— Mahfuz Islam Khan Jabed
WOODBRIDGE, VA, UNITED STATES, March 2, 2026 /EINPresswire.com/ -- Mahfuz Islam Khan Jabed, an AI and machine learning researcher and graduate student at Washington University of Science and Technology, is advancing research focused on ethical and explainable artificial intelligence (XAI) across workforce analytics and healthcare applications.
This research effort emphasizes transparent model design to help stakeholders understand how predictions are produced, an approach widely viewed as important when AI outputs may influence decisions in hiring and retention planning, as well as clinical screening support.

Jabed’s research profile includes work in time-series forecasting for financial modeling, explainable predictive analytics for employee turnover, and interpretable deep learning approaches for diabetic retinopathy detection. Across these domains, his focus is on balancing predictive performance with interpretability to support responsible adoption of AI systems.

In finance-oriented research, Jabed published “Stock Market Price Prediction Using Machine Learning Techniques” in February 2024, describing the application of machine learning approaches including LSTM and Prophet for forecasting using market data.
The publication is listed with a DOI and citation/usage metrics on the publisher page.
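Forecasters such as the LSTM models referenced above are typically trained on sliding windows of past prices. The paper's exact setup is not described here; the following minimal sketch, using synthetic data and a hypothetical window length, shows only the common windowing step that such pipelines share:

```python
import numpy as np

def make_windows(series, window=5):
    """Turn a 1-D price series into supervised (X, y) pairs:
    each row of X holds `window` past values, y the next value."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Synthetic closing prices, for illustration only.
prices = np.linspace(100.0, 110.0, 20)
X, y = make_windows(prices, window=5)
print(X.shape, y.shape)  # (15, 5) (15,)
```

The resulting (X, y) arrays are the kind of input an LSTM or other sequence model would be fitted on; model architecture and training details are deliberately omitted.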

In workforce intelligence and HR analytics, Jabed developed a stacking ensemble framework for predicting employee turnover with explainability features using SHAP to support interpretability of model outputs. His profile notes that this work was accepted at the 2025 IEEE 2nd International Conference on Computing, Applications and Systems (COMPAS 2025). The COMPAS conference site states that accepted papers are submitted for inclusion in IEEE Xplore, subject to IEEE Xplore’s quality standards (and, per conference guidance, inclusion may depend on presentation and publication requirements).
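A stacking ensemble of the kind described combines several base classifiers through a meta-learner. The release names SHAP for interpretability; since SHAP is a separate package, this hypothetical sketch substitutes scikit-learn's built-in permutation importance as a stand-in feature-ranking step, on synthetic data standing in for a turnover dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an employee-turnover dataset.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

# Stacking: two base models feed a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("dt", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
stack.fit(X, y)

# Rank features by how much shuffling each one hurts accuracy.
imp = permutation_importance(stack, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("feature ranking:", ranking)
```

In a SHAP-based workflow, the ranking step would instead use per-prediction Shapley values, which additionally explain individual model outputs rather than only global feature importance.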

In medical AI, Jabed’s profile includes an interpretable hybrid CNN–Vision Transformer framework for diabetic retinopathy detection accepted at ICCIAA 2026. The ICCIAA conference focuses on computational intelligence approaches and applications, with the 2026 edition scheduled to be hosted at the University of Petra in Amman, Jordan.

“Responsible AI depends on more than accuracy; it depends on clarity,” said Mahfuz Islam Khan Jabed. “When models are interpretable, it becomes easier for decision-makers to assess reliability, fairness considerations, and limitations before using outputs in real-world settings.”

Beyond research, Jabed has gained hands-on machine learning experience through internships and training roles involving supervised modeling, performance assessment, and interpretability techniques. His technical skills include Python and popular machine learning frameworks and tools such as TensorFlow, PyTorch, scikit-learn, and SHAP-based interpretability methods.

Jabed’s research interests include explainable AI, workforce intelligence and HR analytics, medical AI applications for public health diagnostics, and predictive modeling for finance and risk forecasting. He is currently pursuing a Master of Science in Information Technology (expected April 2027) after earning a Bachelor of Science in Computer Science and Engineering in January 2024.

As AI systems become increasingly integrated into economic, healthcare, and workforce infrastructures, the emphasis on transparency and ethical deployment continues to grow. Through ongoing graduate research and interdisciplinary collaboration, Jabed seeks to contribute to the development of AI models that balance technical performance with accountability, ensuring practical applicability in real-world environments.

His research and practical work emphasize responsible AI development for economic resilience, labor optimization, and healthcare innovation. By incorporating interpretability techniques into predictive models, his work aims to promote more transparent, accountable, and data-driven decision-making in both public and private sector contexts.

About Mahfuz Islam Khan Jabed
Mahfuz Islam Khan Jabed is an AI and machine learning researcher based in Woodbridge, Virginia. His research focuses on explainable AI, workforce intelligence and HR analytics, medical AI applications for public health diagnostics, and predictive modeling for finance and risk forecasting. He has written and presented work on forecasting, explainability, and interpretable deep learning techniques, and is currently a Master of Science in Information Technology student at Washington University of Science and Technology.

Mahfuz Islam Khan Jabed
LeadersUniverse
mahfuzislamkhanjabed@gmail.com

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
