AWS-Machine Learning University AI/ML Research Symposium Student Poster Session
AWS-Machine Learning University
This showcase represents a powerful collaboration between AWS-Machine Learning University and diverse academic institutions, highlighting the innovative research happening at HBCUs and other universities across the country.
Discover groundbreaking artificial intelligence and machine learning research presented by emerging scholars. This poster session highlights student projects spanning:
- Healthcare and Medical AI
- Advanced AI & Computing Systems
- Natural Language Processing & Communication
- Ethics, Bias, and Responsible AI
- Industrial and Environmental Applications
Explore the lightning round video and listen to student researchers explain their research in under 40 seconds! https://youtu.be/Z_wbqxCIyUo
Enhancing Sports Operations with AI and Machine Learning
Bryce Coleman and Oscar Palacios
Predictive Modeling Based on Player Archetypes (Football)
Santiago Soto, Patrick Lucey
Optimizing Performance: Essential Physical Quantities for Track Runners from the Starting Line
Joel D., Oluwabusolami S., Ziporia I., Alyssa S., Licaiya E., Frank S., Justin C., Kendall J., Alexandria D., Ariana T., M. Shimizu, Dawit Hailu
AI-Driven Autonomous Swarms for Proactive Hydrogen Safety: A New Paradigm in Leak Detection and Response
Dr. Raziq Yaqub
Vision Transformer-Based Multi-Modal MRI Segmentation for Accurate Brain Tumor Detection
Aalia Bello
Deep Learning Application to Smart Manufacturing Using Sound Data
Austin Gibson, Eunseob Kim, Mandoye Ndoye, Firas Akasheh, Ali Shakouri
Getting the Right Attention: Body-Part Aware Prompting for Person Re-Identification
Priti Gurung, Jiang Li, Danda B. Rawat
Advancing Synthetic Aperture Radar Target Recognition through Self-supervised Learning: A Vision for Enhanced Automatic Target Recognition Systems
Md Al Siam, Dewan Fahim Noor, and Moath Alsafasfeh
Smart Zoning Vision-Based Approach for Securing Cyber-Physical Systems
Wilson Samuels, Moath Alsafasfeh, Dewan Fahim Noor
Generative AI for Equity: Opportunities, Risks, and Pathways to Inclusion
Malcolm Coley
The goal of this research is twofold. The first is to explore how generative AI can be used to support learning, job growth, and entrepreneurship in communities that need greater access. The second is to look at the ethical challenges, such as bias, misinformation, loss of privacy, and job displacement.
The study uses a mixed approach. We reviewed existing research to understand the current debates about AI ethics. We also conducted interviews with teachers, community leaders, and small business owners from underrepresented groups. These conversations provided real-life insights into both the promise and risks of generative AI.
Preliminary results show that when used fairly, generative AI can give people new tools to learn faster, build businesses, and express creativity in ways they could not before. At the same time, many participants pointed out challenges such as limited digital literacy, a lack of trust in AI systems, and the absence of training data that reflects their communities and cultures.
The larger impact of this work is to shift the focus from seeing AI only as a technical breakthrough to also seeing it as a social one. If built and shared responsibly, generative AI can reduce gaps in access and opportunity. But if fairness and inclusion are ignored, it may widen those gaps. The challenge is to make sure this technology empowers everyone.
Automated Deep Learning Segmentation of Age-Related Tissue Changes in Thigh CT Images Using nnU-NetV2
Trey McVey
We used nnU-Net V2, a state-of-the-art medical image segmentation framework, to automatically segment five tissue classes in 2D thigh CT images: subcutaneous adipose tissue (SAT), muscle, cortical bone (CBONE), trabecular bone (TBONE), and intermuscular adipose tissue (IMAT). Our dataset comprised 197 CT image-label pairs with a training/validation/test split of 158/21/18.
The model achieved strong performance, with an overall Dice score (DSC) of 0.95 on test images, exceeding standard benchmarks and our previous CNN-based result of 0.89. These results demonstrate that automated deep learning segmentation can match manual annotation quality while dramatically reducing processing time and repetitive work for clinical professionals.
This advancement enables healthcare providers and researchers to efficiently quantify age-related changes in muscle mass, fat infiltration, and bone density, which are key signs of sarcopenia and osteoporosis. By automating this previously manual process, our approach facilitates large-scale population studies and provides clinical professionals with significantly faster and more accurate results, while also providing tools for patient monitoring and treatment planning, ultimately improving care for aging populations.
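For reference, the Dice score reported above is twice the mask overlap divided by the summed mask sizes. A minimal sketch in plain Python, with invented toy masks (not the study's data):

```python
def dice_score(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks,
    given as flat 0/1 lists of equal length."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / total if total else 1.0

# Toy masks for one tissue class (e.g. a few pixels of IMAT).
pred   = [1, 1, 1, 0, 0, 0]
target = [1, 1, 0, 0, 0, 0]
print(dice_score(pred, target))  # 2*2 / (3+2) = 0.8
```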
Graph Neural Network-based Division-Aware Cell Tracking in Time Lapse Microscopy
Jian Zhao, Olaitan E. Oluwadare, Nagasoujanya V Annasamudram, and Sokratis Makrogiannis
Comparative Evaluation of Advanced Deep Neural Networks for Pediatric Wrist Fracture Classification Using the GRAZPEDWRI Dataset
Dhiwahar Adhithya Kennady¹², Lingling Liu¹, Chelsea Harris¹, Sokratis Makrogiannis¹
To ensure stable convergence and effective transfer learning, we employ a two-phase training protocol. In the initial warm-up stage, only the classification head is trained with frozen backbones; in the fine-tuning stage, the full network is unfrozen to adapt backbone representations while preserving transferable features. Standardized hyperparameters and consistent dataset splits are applied across all models. Performance is measured using accuracy, true positive rate (TPR), and true negative rate (TNR), metrics that jointly capture sensitivity and specificity—two critical factors in clinical diagnostics.
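The freeze-then-unfreeze idea can be sketched schematically as follows; this is not the authors' training code, and the toy parameters, gradients, and learning rate are invented for illustration:

```python
def sgd_step(params, grads, frozen, lr=0.1):
    """One gradient-descent step that skips any parameter named in `frozen`."""
    return {
        name: (value if name in frozen else value - lr * grads[name])
        for name, value in params.items()
    }

# Toy "model": one backbone weight and one classification-head weight.
params = {"backbone.w": 1.0, "head.w": 1.0}
grads  = {"backbone.w": 0.5, "head.w": 0.5}

# Phase 1 (warm-up): backbone frozen, only the head is updated.
params = sgd_step(params, grads, frozen={"backbone.w"})

# Phase 2 (fine-tuning): everything unfrozen to adapt the backbone.
params = sgd_step(params, grads, frozen=set())
print(params)
```

In a deep learning framework the same effect is typically achieved by disabling gradient tracking on the backbone's parameters during the warm-up phase.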
Among the tested backbones, MambaVision achieved the highest accuracy at 94%, surpassing both transformer-based and convolutional models. To better understand model behavior, interpretability analyses were conducted using Grad-CAM, Grad-CAM++, ScoreCAM, Occlusion Sensitivity, and SmoothGrad.
These complementary methods reveal decision pathways: gradient-based maps localize class-discriminative regions, perturbation-based occlusion highlights features whose removal alters predictions, and noise-averaged saliency methods reduce artifacts to emphasize consistent cues. Collectively, they provide insight into where and why models focus on fracture-relevant regions, enhancing transparency and clinical trust.
This study establishes a robust comparative baseline of advanced architectures on the GRAZPEDWRI dataset. By combining strong predictive performance—led by state-space modeling—with interpretable outputs, the findings offer guidance for selecting and deploying AI-driven diagnostic systems that are both accurate and explainable in pediatric radiology.
Breast Cancer Classification with Deep Learning: Performance and Explainability Across Multiple Models
Lingling Liu, Dhiwahar Adhithya Kennady, Chelsea Harris, Sokratis Makrogiannis
Training was initially performed on the large-scale VinDr-Mammo dataset, which provided a diverse foundation for learning breast density features. In this setting, our best-performing model, Swin V2, achieved a classification accuracy of 0.934 when both training and testing were conducted on VinDr-Mammo. To further evaluate cross-dataset adaptability, we employed a transfer strategy in which models pretrained on VinDr-Mammo were fine-tuned on the smaller INbreast dataset. This transfer not only stabilized convergence but also improved performance, with Swin V2 achieving a best accuracy of 0.950 on INbreast.
These results underscore the generalizability of our approach. The ability to maintain and even improve performance in a cross-dataset case demonstrates the robustness of our method and its potential for broader clinical applicability.
To unveil the “black box” of deep learning models, we applied explainable AI (XAI) techniques such as Grad-CAM and SmoothGrad to reveal class-relevant regions and decision-driving features. These methods provide transparency, enabling a deeper evaluation of the model’s reliability and decision-making process in addition to accuracy.
The significant findings of our study provide a comprehensive comparison of model performance and interpretability, offering valuable insights for the design and application of deep learning methods in breast cancer risk assessment. Our work supports the advancement of computer-aided diagnosis systems that are not only accurate but also reliable and trustworthy for clinical practice.
Developing and Evaluating CNN and Transformer-based Deep Learning Techniques for Thigh Muscle Segmentation in MRI
Lingling Liu, Mohammed N. Ibrahim, Nagasoujanya Annasumudram, Sokratis Makrogiannis
In this work, we evaluate the performance of several deep learning architectures for thigh muscle segmentation. Specifically, we implemented a 3D U-Net and transformer-based models including UNETR, Swin UNETR, and Swin UNETR V2. Swin UNETR V2 is an architecture that integrates hierarchical Swin Transformer blocks with a U-Net-style encoder-decoder, enabling it to capture both local spatial detail and long-range contextual information. Its design allows efficient representation learning across multiple scales, which is particularly advantageous for segmenting small and irregular muscle groups.
Experiments were conducted on two thigh MRI datasets, using five-fold cross-validation for performance assessment. All models achieved high Dice scores, with visual inspection confirming anatomically accurate and smooth predicted maps. Moreover, Swin UNETR V2 demonstrated significant improvements in boundary delineation and handling of class imbalance, outperforming conventional CNN-based methods with an average Dice score of 0.881 across all five folds and muscle groups.
Overall, our findings demonstrate that advanced transformer-based models such as Swin UNETR V2 hold great promise for reliable, automated segmentation of thigh muscle groups in MRI, thereby accelerating quantitative research and supporting clinical applications in aging and age-related diseases.
Building a Phylogenetic Pipeline: Using Python and Biopython to Retrieve, Filter, and Align DNA Sequences for Evolutionary Analysis
Yasmeen Olass, Dr. Rita Hayford, Dr. Chase Stratton, Dr. Gulni Ozbay
Reducing Inefficiencies in Black Women’s Reproductive Health Through AI-Powered Business Intelligence
Jordan Mason
Key Cellular Measurements for Accurate Breast Tumor Classification
Jared Bryant
Building trust through accountability: Governance and Policy Frameworks for Responsible AI
James Jordan III
Improving Machine Translation With Context-Aware Entity-Only Pre-translations Using GPT-4o
Isaac Adjei, Jabez Agyemang-Prempeh, Saurav Aryal, PhD
Investigating Racial Bias in ML-Based Masked Face Recognition Systems
Joycelyn Rouse, Dr. Mustafa Atay
We selected 50 subjects (25 White and 25 Black) from the Chicago Face Database, each with unmasked and masked images. Masked images were synthetically generated using MaskTheFace software. For each subject, we used 9 images for training and 1 for testing. We created multiple experimental datasets: White-only, Black-only, Balanced (25/25), White-Dominant (35 White/15 Black), and Black-Dominant (35 Black/15 White). All datasets were evaluated in both masked and unmasked conditions. Local Binary Patterns (LBP) were used to extract texture-based facial features, and classification was performed using XGBoost, Linear Discriminant Analysis, Logistic Regression, Random Forest, and LightGBM. Evaluation metrics included accuracy, precision, recall, F1 score, and miss rate.
Preliminary results show that racial bias is prevalent in each dataset, with White subjects consistently achieving higher accuracy than Black subjects. Masking further amplifies these disparities. These findings indicate that Black subjects are at an inherent disadvantage in face recognition systems and that masking widens this gap; the bias persists across all dataset splits. Our work emphasizes the importance of pursuing demographic fairness through effective feature extraction, hyperparameter tuning, and dataset design, especially when conventional approaches are deployed in real-world applications. This research contributes to advancing social justice and fairness by exposing and addressing racial biases in masked face recognition, ultimately supporting the development of more reliable and ethically responsible AI applications. Future work will explore deep learning models, larger datasets, and advanced mitigation techniques to improve fairness across demographic groups.
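The LBP feature-extraction step can be sketched as below: a minimal 8-neighbor variant in plain Python, not the authors' implementation, with a made-up toy image. A histogram of these codes is the texture feature vector that would feed a classifier such as XGBoost or Random Forest.

```python
def lbp_codes(image):
    """Basic 8-neighbor Local Binary Pattern codes for interior pixels.

    `image` is a 2D list of grayscale intensities. Each interior pixel is
    compared against its 8 neighbors; a neighbor >= the center sets a
    1-bit. Production systems usually use rotation-invariant or uniform
    LBP variants; this is a simplified sketch.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(image), len(image[0])
    codes = []
    for r in range(1, h - 1):
        row = []
        for c in range(1, w - 1):
            center = image[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if image[r + dr][c + dc] >= center:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_codes(img))  # [[120]]
```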
The Physics of Speed: Analyzing Sha'Carri Richardson's Sprint Using Motion Tracking
Natalya Armenta, Onyekachi Adigwe, Melisa Anacius, Jasmine Arrey, Janiya Davis, Taylor Smith-Beach, Elena Jordan, Dawit Hailu
Quantum Corners: Improving Emergency Response with Smart Traffic Management and Quantum Sensors
Cameron Lewis, Ayron Fears
Combining Zero-Shot Claim Extraction and KNN-Based Classification for Cross-lingual Claim Matching
Suprabhat Rijal, Rahual Rai, Saurav K. Aryal, PhD
The impact of social and structural supports and social responsibilities on anxiety among academic health professionals
Anietie Andy¹, Natasha Tonge², Praise-EL Michaels¹, Legand Burge¹, Marina K. Holz³
Mutual Affective Synchrony in Parent-Child Conversations Using Machine Learning
Saksham Kapoor, Selin Zeytinoglu
Word Embedding Technology – From One-Hot Encoding to Transformer Technology
Wendon Doswell
Using Retrieval-Augmented Generation in Large Language Models Optimization
Sakina Shrestha, Dr. Paul Wang
Forcespun Bacterial Cellulose Composite Fibers with Essential Oil for Potential Food Packaging Applications
Erin-Nicole Scott, Dr. Maria Calhoun, Dr. Vijay Rangari
Clustering Raman Spectral Data Using K-Means Algorithm and Dimension Reduction
Z. Bass, A. Brown, L. Alexander, M. Boumedine, and H. Boukari
The authors acknowledge support from NSF award #2219731, a collaboration between Delaware State University and the University of the Virgin Islands, and from NSF award #1955664.
A Study of Gender Bias in Facial Emotion Classification via Deep Learning Models
Abdul-Qadir Mohammed and Mustafa Atay
Facial Emotion Recognition (FER) systems can be vulnerable to a wide range of biases, leading to unequal performance across demographic groups. This study presents a large-scale analysis of gender disparities, evaluating five state-of-the-art architectures (Sequential CNN, ResNet-50, DenseNet, MobileNetV2, and InceptionV3) across seven emotion classes (anger, disgust, fear, happiness, sadness, surprise, neutral). Bias was rigorously assessed through ten experimental scenarios, generated by training and testing each model on all combinations of male-only (M), female-only (F), male-dominant (MD), female-dominant (FD), and balanced (B) subsets from the KDEF dataset, with performance gaps measured via F1-score delta (ΔF1). Key findings reveal significant and systematic performance gaps. Investigations into bias origins pointed to underlying dataset imbalances, evidenced by a sharp performance decline in cross-gender tests, and to architectural limitations in feature processing, evidenced by cross-gender performance variations that imply divergent feature-processing strategies. To mitigate this bias, we introduce a novel feature fusion framework that combines raw pixel data with invariant geometric facial landmarks. This approach demonstrably reduced aggregate gender bias by nearly half across all tested emotions, achieving near-parity performance for key classes such as happiness and fear.
Our work provides three core contributions:
A standardized bias-evaluation protocol for FER
An actionable multi-modal mitigation strategy via feature fusion
Evidence that architectural adjustments enhance fairness alongside balanced data
These findings form a foundation for building more equitable FER systems. Future work is directed toward intersectional demographics, including age, ethnicity, and race, and their intersection with gender.
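The ΔF1 gap metric used above reduces to a simple computation; the confusion counts below are hypothetical, chosen only to illustrate the bookkeeping:

```python
def f1(tp, fp, fn):
    """F1 score from confusion counts (true/false positives, false negatives)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-group counts for one emotion class.
f1_male   = f1(tp=80, fp=10, fn=10)
f1_female = f1(tp=70, fp=20, fn=20)

# The gender-bias gap for this class is the absolute F1 difference.
delta_f1 = abs(f1_male - f1_female)
print(round(delta_f1, 3))  # ≈ 0.111
```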
From Static Scans to Adaptive Intelligence: Deep Reinforcement Learning for Network Mapping
Jaden Johnson
Evaluating Q-Learning Rewards in Noisy Cloud-Based Cyber-Physical Systems
Md. Imran Jahid Khan, Mohammad Rahman
Advancing ADE Detection in Clinical Texts with Sentence-Level Prompting and Fine-Tuned Language Models
Howard Prioleau, Santiago Romero-Brufau, and Saurav K. Aryal
Differences in Emotional Expression Across Ethnicity and Gender in Reddit Mental Health Forums
Praise-EL Michaels, Esau Hutcherson, Anietie Andy
Enhancing Multimodal Brain Tumor Segmentation with 3D U-Net
Erika M. Suber-Bey and Fatima Boukari
Autonomous Robots with OpenAI Integration: A Technical Case Study Using NAO6
Aayush Shrestha, Paul Wang
DeepTabCoder: Code-based Retrieval and In-context Learning for Question-Answering over Tabular Data
Saharsha Tiwari, Saujanya Thapaliya, Saurav K. Aryal, PhD
Bee and Pollen Monitoring: A Deep Dive with Machine Learning
Kshitij Pingle
The Halting Problem and Ambiguity in Natural Language Processing (NLP)
Doron Reid
Ibom NLP: A Step Toward Inclusive Natural Language Processing for Nigeria’s Minority Languages
Oluwadara Kalejaiye, Luel Hagos Beyene, David Ifeoluwa Adelani, Mmekut-Mfon Gabriel Edet, Aniefon Daniel Akpan, Eno-Abasi Urua, Anietie Andy
Synthetic ID Fraud in Credit Card Applications
Bryson Stringer, Ikeoluwa Gbolagun, Naima Haidara, Helen Reumann, Latreil Wimberly
Development of a Predictive Tool for Traumatic Diaphragmatic Injury Using Machine Learning and National Trauma Data
Paria Rezaei BS, Siobhan O. Nnorom MD, Anietie Andy PhD, Quyen Chu MD.
RIG AI: Enhancing Creativity without Replacing Creatives
Calvin Briggs Jr.
AI and Computer Interface for Human Occupational Health and Safety Evaluation
Sam Nwaneri
Advancing Urban Water Prediction using Machine Learning
Yogesh Bhattarai, Sanjib Sharma
A Python program that demonstrates Deep Q-Learning in CartPole environment
Antwain M. Sparks and Vojislav Stojkovic
Trustworthy AI Outputs through a Secure Data Protocol
Tarique Cummings and Vojislav Stojkovic
The proposed framework is implemented in Python. It combines three complementary cryptographic techniques: commitment schemes with hash functions, digital signatures, and zero-knowledge proofs.
Commitment schemes with hash functions guarantee that once an output is generated, it cannot be altered without detection.
Digital signatures provide strong authentication, verifying that the content originates from a trusted source and has not been manipulated.
Zero-knowledge proofs enable third parties to verify the correctness of outputs without revealing sensitive data or internal model details, preserving confidentiality while enabling transparency.
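A minimal sketch of the first two mechanisms using only the Python standard library. The HMAC tag stands in for a true digital signature (a real deployment would use an asymmetric scheme such as Ed25519), and all names here are illustrative, not the authors' prototype:

```python
import hashlib
import hmac
import secrets

def commit(message: bytes):
    """Hash commitment: publish the digest now, reveal (nonce, message) later."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + message).hexdigest()
    return digest, nonce

def verify_commitment(digest: str, nonce: bytes, message: bytes) -> bool:
    """Any change to the committed message makes verification fail."""
    recomputed = hashlib.sha256(nonce + message).hexdigest()
    return hmac.compare_digest(digest, recomputed)

# HMAC tag over the output, keyed by the trusted source.
KEY = secrets.token_bytes(32)
def sign(message: bytes) -> str:
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

output = b"model output"
digest, nonce = commit(output)   # published at generation time
tag = sign(output)               # authenticates the source

assert verify_commitment(digest, nonce, output)
assert not verify_commitment(digest, nonce, b"tampered output")
assert hmac.compare_digest(tag, sign(output))
```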
By integrating these mechanisms into a secure and verifiable pipeline, the framework protects against both external and insider threats. It secures outputs against forgery and unauthorized modification and ensures that confidential information is shielded from unnecessary disclosure during the verification process. The result is a practical and flexible solution that supports trustworthy AI deployment across multiple contexts.
This contribution advances both theory and practice by offering a conceptual model and a working prototype. Potential applications include secure AI-powered APIs; trusted outputs for legal, medical, and academic systems; and verifiable content delivery in decentralized, blockchain-based infrastructures.