AWS-Machine Learning University AI/ML Research Symposium Student Poster Session


This showcase represents a powerful collaboration between AWS-Machine Learning University and diverse academic institutions, highlighting the innovative research happening at HBCUs and other universities across the country.


Discover groundbreaking artificial intelligence and machine learning research presented by emerging scholars. This poster session highlights student projects spanning:

  • Healthcare and Medical AI
  • Advanced AI & Computing Systems
  • Natural Language Processing & Communication
  • Ethics, Bias, and Responsible AI
  • Industrial and Environmental Applications


Explore the lightning round video and listen to student researchers explain their research in under 40 seconds! https://youtu.be/Z_wbqxCIyUo 





Enhancing Sports Operations with AI and Machine Learning

Bryce Coleman and Oscar Palacios

Abstract
Presented by
Bryce Coleman
Institution
Morehouse College, Department of Computer Science
Keywords

Predictive Modeling Based on Player Archetypes (Football)

Santiago Soto, Patrick Lucey

Abstract
This research paper presents a predictive modeling approach for football players. Player statistics from six leagues were used to create player archetypes. A ranking system for depth charts was then built, with predictive accuracy peaking at 67.89% using logistic regression. A key challenge was that current player categories are too restrictive, and the initial clusters lacked discriminative power due to the high number of features. Future work will focus on using additional player tracking and fitness data to identify and develop distinct player clusters, and to establish data-driven player valuation models.
Presented by
Santiago Soto
Institution
Morehouse College
Keywords

Optimizing Performance: Essential Physical Quantities for Track Runners from the Starting Line

Joel D., Oluwabusolami S., Ziporia I., Alyssa S., Licaiya E., Frank S., Justin C., Kendall J., Alexandria D., Ariana T., M. Shimizu, Dawit Hailu

Abstract
Presented by
Joel Duah
Institution
Bowie State University
Keywords

AI-Driven Autonomous Swarms for Proactive Hydrogen Safety: A New Paradigm in Leak Detection and Response

Dr. Raziq Yaqub

Abstract
We propose an autonomous drone-swarm framework that integrates onboard sensors with centralized AI for 3D hydrogen leak detection, localization, severity assessment, and autonomous mitigation planning. This system delivers faster and more accurate detection than fixed sensors, ensuring resilient and continuous monitoring of hydrogen infrastructure.
Presented by
Romail Arif <romail.arif@bulldogs.aamu.edu>
Institution
Alabama A&M University
Keywords
Autonomous, Drone, Swarm, AI, Hydrogen, Leak Detection

Vision Transformer-Based Multi-Modal MRI Segmentation for Accurate Brain Tumor Detection

Aalia Bello

Abstract
Accurate brain tumor segmentation in magnetic resonance imaging (MRI) is critical for clinical decision-making, influencing rapid, accurate diagnosis, treatment planning, and prognosis. Manual segmentation is time-consuming and prone to inter-observer variability, necessitating the development of automated, reliable alternatives. This research presents a transformer-based segmentation pipeline utilizing the Vision Transformer (ViT) architecture, optimized for multimodal MRI sequences (T1, T1CE, T2, and FLAIR). Unlike traditional methods such as thresholding, region growing, and CNN-based models like U-Net, transformer models effectively capture global contextual information and handle complex, multi-channel medical data with reduced reliance on handcrafted features. In the preprocessing stage, we used SimpleITK for skull stripping and intensity normalization on a benchmark of publicly available datasets. For data augmentation, we applied deformation techniques and noise injection. Model training and evaluation were performed using PyTorch and Hugging Face Transformers, adapted for multi-modal inputs. Segmentation performance is quantitatively assessed using the Dice Similarity Coefficient and Intersection over Union (IoU), key metrics for evaluating medical segmentation accuracy. Preliminary results indicate that transformer-based models outperform traditional approaches in both accuracy and generalizability across datasets. Accomplishments to date include the full data pipeline, the preprocessing and augmentation techniques, the ViT architecture, and a preliminary evaluation setup. Future work will involve hyperparameter tuning, benchmarking against state-of-the-art models, and exploring 3D transformer variants to further enhance spatial context understanding.
This project presents a comprehensive framework for automated brain tumor segmentation, emphasizing multi-modal MRI processing and transformer-based architectures. It contributes to the development of scalable and accurate segmentation tools with significant clinical potential.
Presented by
Aalia Bello
Institution
Delaware State University
Keywords

Deep Learning Application to Smart Manufacturing Using Sound Data

Austin Gibson, Eunseob Kim, Mandoye Ndoye, Firas Akasheh, Ali Shakouri

Abstract
The dominant paradigm in manufacturing in the late 20th century was lean manufacturing, which emphasized minimizing costs through the elimination of waste. Since then, the invention and advancement of the internet of things (IoT), big data and data analytics, and artificial intelligence (AI) models have laid the foundation for the rise of Industry 4.0 and smart manufacturing, where data generated in real time is continually assessed by AI models to minimize costs and maximize operational efficiency. With these modern technologies, the condition of machines can be continually assessed to determine the ideal time to perform maintenance, an approach called predictive maintenance. Effective predictive maintenance can improve equipment longevity, reduce time wasted in needless inspections, and avoid unnecessary expenditures by proactively repairing machines before they break. Sound and temperature data from three vacuum pumps in the Birck Nanotechnology Center at Purdue University were recorded over four years with low-cost sensors attached to the pumps. The data was curated into an organized dataset and analyzed to build a deep learning model to detect pump failure. The model of choice was an autoencoder, trained on the curated sound data. The model readily detected pump failure from the spike in reconstruction error. The model's ability to detect gradual changes in sound, and thus to predict pump failure preventatively, remains for further investigation.
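The failure-detection logic described in this abstract, a spike in an autoencoder's reconstruction error, can be sketched roughly as follows. This is a minimal illustration, not the authors' model: the function names, the mean-squared-error choice, and the mean-plus-k-sigma threshold are all assumptions.

```python
import numpy as np

def reconstruction_errors(model, windows):
    """Mean squared reconstruction error per input window.

    `model` is any callable mapping an array of windows to reconstructions,
    e.g. a trained autoencoder's forward pass.
    """
    recon = model(windows)
    return np.mean((windows - recon) ** 2, axis=1)

def flag_failures(errors, baseline_errors, k=3.0):
    """Flag windows whose error exceeds mean + k*std of a healthy baseline.

    `baseline_errors` are reconstruction errors measured on known-healthy
    operation; a spike well above them suggests pump failure.
    """
    threshold = baseline_errors.mean() + k * baseline_errors.std()
    return errors > threshold
```

In practice the threshold would be tuned on held-out healthy data, and the windows would be spectrogram frames of the recorded pump sound.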
Presented by
Austin Gibson <agibson1912@tuskegee.edu>
Institution
Tuskegee University, College of Engineering
Keywords
smart manufacturing, deep learning, internet of things, artificial intelligence

Getting the Right Attention: Body-Part Aware Prompting for Person Re-Identification

Priti Gurung, Jiang Li, Danda B. Rawat

Abstract
Person re-identification (ReID) aims to match the same individual across different cameras or moments, a task made challenging by variations in pose, illumination, and background clutter. Models like CLIP do well at aligning images and text globally, but they often miss the fine details that matter most for distinguishing people. While FG-CLIP improves region-level feature understanding and fine-grained detail capture in general object recognition tasks, it still lacks the person-specific feature specialization and attention distribution balance needed for optimal performance in ReID scenarios. In this work, we propose fine-tuning FG-CLIP with body-part-specific text prompts to guide attention toward critical body parts. We first evaluate how different prompts, ranging from general person-level prompts to body-part-specific prompts such as “head,” “torso,” and “legs,” affect attention distribution. Using these insights, we fine-tune FG-CLIP to strengthen underrepresented regions and optimize the attention balance across body parts, particularly between shoulders and torso. Experiments on standard ReID benchmarks show that our approach improves fine-grained feature extraction and boosts ReID accuracy. More broadly, our study demonstrates how prompt-guided fine-tuning can make vision-language models more effective in tasks that require focused attention on important image regions.
Presented by
Priti Gurung
Institution
Howard University
Keywords

Advancing Synthetic Aperture Radar Target Recognition through Self-supervised Learning: A Vision for Enhanced Automatic Target Recognition Systems

Md Al Siam, Dewan Fahim Noor, and Moath Alsafasfeh

Abstract
Automatic Target Recognition (ATR) faces critical challenges from inadequately labeled datasets and substantial performance gaps between simulated and real-world imagery. This research at Tuskegee University's College of Engineering introduces a self-supervised learning methodology that eliminates synthetic data dependency while establishing new performance benchmarks through multi-task representation learning and comprehensive classifier validation. Our framework implements nine strategically designed pretext tasks encompassing geometric invariance, signal robustness, and multi-scale analysis. This comprehensive approach captures the inherent structural properties of radar imagery without requiring external supervision, enabling robust feature extraction from measured Synthetic Aperture Radar (SAR) imagery. We conduct extensive evaluation across diverse computational paradigms using the SAMPLE dataset, spanning traditional machine learning, deep neural architectures, and generative models. Our systematic assessment of varying data-availability scenarios, from 5% to 100% of the training data, yields crucial insights for operational deployment strategies. Results demonstrate exceptional performance: Support Vector Machines reach 99.63% classification accuracy, ResNet18 attains 97.40%, and Random Forest achieves 99.26%. Remarkably, traditional machine learning approaches show superior robustness under data-constrained conditions, maintaining 51.95% accuracy even at 5% data availability, while deep learning methods show greater sensitivity to data scarcity. Cross-validation experiments validate framework generalizability across independent data partitions, ensuring reliable performance estimates. Computational analysis reveals efficient timing characteristics, with total processing under 16 ms per image, enabling real-time operational deployment.
Per-class ROC analysis demonstrates consistent discrimination across all target types, with area under curve values exceeding 0.99 for most target classes. This work establishes a foundation toward measured-data-only SAR ATR, eliminating synthetic augmentation while achieving state-of-the-art performance, advancing domain-specific representation learning in remote sensing applications.
Presented by
Md Al Siam
Institution
Tuskegee University, ECE Department
Keywords
self-supervised learning, remote sensing, computer vision, synthetic aperture radar (SAR), automatic target recognition (ATR)

Smart Zoning Vision-Based Approach for Securing Cyber-Physical Systems

Wilson Samuels, Moath Alsafasfeh, Dewan Fahim Noor

Abstract
Cyber-Physical Systems (CPS) play a critical role in modern infrastructure, manufacturing, and defense environments, where physical processes are tightly integrated with computational control. However, these systems are vulnerable to unauthorized access or object intrusion, especially in zones near critical equipment or restricted facilities. Traditional security solutions often rely on physical barriers or sensor networks that are expensive, rigid, or poorly suited for adaptable deployments. In this project, a vision-based zoning approach for securing CPS environments using deep learning and interactive user input is presented. The system enables operators to manually define Safe, Warning, and Restricted zones directly on a live video feed using a simple click-and-drag interface. A trained YOLOv8 object detection model is integrated to identify and classify tagged objects in real time. The system continuously monitors the position of each object relative to the defined zones. If an unauthorized object enters a restricted zone and remains beyond a predefined time threshold, an alert is triggered and logged for review. The proposed method was implemented and tested on both live and recorded video streams. The system successfully tracked unauthorized objects, recorded zone entry and exit events, and activated alerts when necessary. All events were logged to CSV files, and visual feedback was provided through an on-screen dashboard. This low-cost, adaptable approach has strong potential applications in smart manufacturing, military installations, robotic system monitoring, and critical infrastructure protection, where flexible zoning and automated alerts are essential. The current implementation supports static zone selection. Future work will extend the system to include dynamic zoning, multi-camera integration, and 3D spatial tracking, enhancing its capability to secure more complex or large-scale CPS environments.
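The alerting rule this abstract describes, an unauthorized object remaining in a restricted zone beyond a time threshold, can be sketched as below. This is an illustrative simplification, not the authors' implementation: the real system consumes YOLOv8 detections and operator-drawn zones, while the `ZoneMonitor` class, its parameters, and the axis-aligned rectangle representation are assumptions made here.

```python
import time

def point_in_rect(pt, rect):
    """rect = (x1, y1, x2, y2) as drawn by the operator; pt = (x, y) object center."""
    x, y = pt
    x1, y1, x2, y2 = rect
    return x1 <= x <= x2 and y1 <= y <= y2

class ZoneMonitor:
    """Raise an alert when an object dwells in a restricted zone past a threshold."""

    def __init__(self, restricted_zone, dwell_seconds=5.0):
        self.zone = restricted_zone
        self.dwell = dwell_seconds
        self.entered = {}  # object id -> timestamp of zone entry

    def update(self, obj_id, center, now=None):
        """Feed one tracked detection per frame; returns True if an alert fires."""
        now = time.monotonic() if now is None else now
        if point_in_rect(center, self.zone):
            self.entered.setdefault(obj_id, now)       # record first entry
            return now - self.entered[obj_id] >= self.dwell
        self.entered.pop(obj_id, None)                 # object left the zone
        return False
```

Each frame, detections from the object detector would be passed through `update`; a `True` return would trigger the on-screen alert and a CSV log entry.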
Presented by
Wilson Samuels
Institution
Tuskegee University
Keywords
Cyber-Physical Systems, Vision-Based Zoning, unauthorized object tracking, Real-Time Surveillance

Generative AI for Equity: Opportunities, Risks, and Pathways to Inclusion.

Malcolm Coley

Abstract
Generative artificial intelligence (AI) is a powerful technology that can create text, images, music, and even computer code. It has the potential to open new doors in education, careers, and creativity. At the same time, people who do not have equal access to technology or resources often face the greatest risk of being left out or negatively affected by this rapid change.

The goal of this research is twofold. The first is to explore how generative AI can be used to support learning, job growth, and entrepreneurship in communities that need greater access. The second is to look at the ethical challenges, such as bias, misinformation, loss of privacy, and job displacement.

The study uses a mixed approach. We reviewed existing research to understand the current debates about AI ethics. We also conducted interviews with teachers, community leaders, and small business owners from underrepresented groups. These conversations provided real-life insights into both the promise and risks of generative AI.

Preliminary results show that when used fairly, generative AI can give people new tools to learn faster, build businesses, and express creativity in ways they could not before. At the same time, many participants pointed out challenges such as limited digital literacy, a lack of trust in AI systems, and the absence of training data that reflects their communities and cultures.

The larger impact of this work is to shift the focus from seeing AI only as a technical breakthrough to also seeing it as a social one. If built and shared responsibly, generative AI can reduce gaps in access and opportunity. But if fairness and inclusion are ignored, it may widen those gaps. The challenge is to make sure this technology empowers everyone.

Presented by
Malcolm Coley
Institution
Delaware State University
Keywords

Automated Deep Learning Segmentation of Age-Related Tissue Changes in Thigh CT Images Using nnU-NetV2

Trey McVey

Abstract
Age-related deterioration of the thigh region and declining bone density pose a critical health concern requiring accurate quantitative assessment. Manual segmentation of CT images is labor-intensive and time-consuming, impeding clinical workflows and research. This work demonstrates the application of deep learning for automated segmentation of thigh CT images to identify key tissue types associated with aging and sarcopenia.

We used nnU-Net V2, a state-of-the-art medical image segmentation framework, to automatically segment five tissue classes in 2D thigh CT images: subcutaneous adipose tissue (SAT), muscle, cortical bone (CBONE), trabecular bone (TBONE), and intermuscular adipose tissue (IMAT). Our dataset comprised 197 CT image-label pairs with a training/validation/test split of 158/21/18.

The model achieved an overall Dice score (DSC) of 0.95 on test images, exceeding standard benchmarks and our previous CNN-based result of 0.89. These results demonstrate that automated deep learning segmentation can match manual annotation quality while dramatically reducing processing time and tedious work for clinical professionals.

This advancement enables healthcare providers and researchers to efficiently quantify age-related changes in muscle mass, fat infiltration, and bone density, key indicators of sarcopenia and osteoporosis. By automating this previously manual process, our approach facilitates large-scale population studies and provides clinical professionals with significantly faster and more accurate results, while also offering tools for patient monitoring and treatment planning, ultimately improving care for aging populations.
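The Dice score (DSC) used to evaluate this segmentation work is a standard overlap metric. For a pair of binary masks it can be computed as in this minimal sketch, which is not the authors' evaluation code; the `eps` smoothing term that guards against empty masks is an assumption.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred ∩ target| / (|pred| + |target|), in [0, 1],
    where 1 means perfect overlap.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

For the multi-class setting reported above (SAT, muscle, CBONE, TBONE, IMAT), the per-class scores would typically be averaged into the overall DSC.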
Presented by
Trey McVey
Institution
Delaware State University
Keywords
Segmentation, CT Images, nnU-Net, Deep Learning

Graph Neural Network-based Division-Aware Cell Tracking in Time Lapse Microscopy

Jian Zhao, Olaitan E. Oluwadare, Nagasoujanya V Annasamudram, and Sokratis Makrogiannis

Abstract
Presented by
Jian Zhao
Institution
Delaware State University
Keywords

Comparative Evaluation of Advanced Deep Neural Networks for Pediatric Wrist Fracture Classification Using the GRAZPEDWRI Dataset

Dhiwahar Adhithya Kennady¹², Lingling Liu¹, Chelsea Harris¹, Sokratis Makrogiannis¹

Abstract
Accurate detection of pediatric wrist fractures is a challenge in musculoskeletal radiology due to anatomical variability and subtle fracture patterns. This study presents a comparative evaluation of five advanced neural architectures—SwinV2-Tiny, Vision Transformer (ViT-B/16), DenseNet-121, MambaVision, and Vision Mamba—on the GRAZPEDWRI dataset for pediatric wrist fracture classification. These models span convolutional, transformer-based, and state-space paradigms, enabling a broad investigation into modern medical image analysis strategies.

To ensure stable convergence and effective transfer learning, we employ a two-phase training protocol. In the initial warm-up stage, only the classification head is trained with frozen backbones; in the fine-tuning stage, the full network is unfrozen to adapt backbone representations while preserving transferable features. Standardized hyperparameters and consistent dataset splits are applied across all models. Performance is measured using accuracy, true positive rate (TPR), and true negative rate (TNR), metrics that jointly capture sensitivity and specificity—two critical factors in clinical diagnostics.

Among the tested backbones, MambaVision achieved the highest accuracy at 94%, surpassing both transformer-based and convolutional models. To better understand model behavior, interpretability analyses were conducted using Grad-CAM, Grad-CAM++, ScoreCAM, Occlusion Sensitivity, and SmoothGrad.

These complementary methods reveal decision pathways: gradient-based maps localize class-discriminative regions, perturbation-based occlusion highlights features whose removal alters predictions, and noise-averaged saliency methods reduce artifacts to emphasize consistent cues. Collectively, they provide insight into where and why models focus on fracture-relevant regions, enhancing transparency and clinical trust.

This study establishes a robust comparative baseline of advanced architectures on the GRAZPEDWRI dataset. By combining strong predictive performance—led by state-space modeling—with interpretable outputs, the findings offer guidance for selecting and deploying AI-driven diagnostic systems that are both accurate and explainable in pediatric radiology.
Presented by
Dhiwahar Adhithya Kennady
Institution
¹Delaware State University, Dover, DE 19904, USA. ²New York University, New York, NY 10003, USA
Keywords
disease diagnostics, interpretable AI, deep learning

Breast Cancer Classification with Deep Learning: Performance and Explainability Across Multiple Models

Lingling Liu, Dhiwahar Adhithya Kennady, Chelsea Harris, Sokratis Makrogiannis

Abstract
According to the World Health Organization, breast cancer is the most common cancer among women, and it is one of the leading causes of cancer-related deaths. Early detection is therefore essential. Among various risk factors, breast density has emerged as a significant indicator closely associated with the likelihood of developing breast cancer. This study investigates the use of advanced deep learning models for breast density classification, including architectures such as SE-ResNet50, DenseNet121, SwinV2, Vision Transformer (ViT), and MambaVision.

Training was initially performed on the large-scale VinDr-Mammo dataset, which provided a diverse foundation for learning breast density features. In this setting, our best-performing model, SwinV2, achieved a classification accuracy of 0.934 when both training and testing were conducted on VinDr-Mammo. To further evaluate cross-dataset adaptability, we employed a fine-tuning strategy in which models pretrained on VinDr-Mammo were fine-tuned on the smaller INbreast dataset. This transfer not only stabilized convergence but also improved performance, with SwinV2 achieving a best accuracy of 0.950 on INbreast.

These results underscore the generalizability of our approach. The ability to maintain and even improve performance in a cross-dataset case demonstrates the robustness of our method and its potential for broader clinical applicability.

To unveil the “black box” of deep learning models, we applied explainable AI (XAI) techniques such as Grad-CAM and SmoothGrad to reveal class-relevant regions and decision-driving features. These methods provide transparency, enabling a deeper evaluation of the model’s reliability and decision-making process in addition to accuracy.

The significant findings of our study provide a comprehensive comparison of model performance and interpretability, offering valuable insights for the design and application of deep learning methods in breast cancer risk assessment. Our work supports the advancement of computer-aided diagnosis systems that are not only accurate but also reliable and trustworthy for clinical practice.
Presented by
Lingling Liu
Institution
Delaware State University
Keywords
XAI, Breast density

Developing and Evaluating CNN and Transformer-based Deep Learning Techniques for Thigh Muscle Segmentation in MRI​

Lingling Liu, Mohammed N. Ibrahim, Nagasoujanya Annasumudram, Sokratis Makrogiannis

Abstract
Segmentation of anatomical structures in magnetic resonance imaging (MRI) is a fundamental task for tissue quantification in studies of aging and age-related diseases. It also plays a critical role in developing diagnostic workflows and monitoring therapeutic interventions. Thigh muscle groups are particularly challenging to segment automatically because they vary in size, lie in close anatomical proximity, and are often separated by ambiguous or faint boundaries in MRI scans. Manual delineation is therefore tedious and error-prone, especially under diseased conditions. Moreover, variability in muscle size leads to class imbalance, as smaller muscles are under-represented compared to larger groups. These challenges highlight the need for robust automated methods capable of producing accurate and consistent segmentations.

In this work, we evaluate the performance of several deep learning architectures for thigh muscle segmentation. Specifically, we implemented 3D UNET and transformer-based models including UNETR, Swin UNETR, and Swin UNETRV2. Swin UNETRV2 is an architecture that integrates hierarchical Swin Transformer blocks with a U-Net style encoder-decoder, enabling it to capture both local spatial detail and long-range contextual information. Its design allows efficient representation learning across multiple scales, which is particularly advantageous for segmenting small and irregular muscle groups.

Experiments were conducted on two thigh MRI datasets, using five-fold cross-validation for performance assessment. All models achieved high Dice scores, with visual inspection confirming anatomically accurate and smooth predicted maps. Moreover, Swin UNETRV2 demonstrated significant improvements in boundary delineation and handling of class imbalance, outperforming conventional CNN-based methods with an average Dice score of 0.881 across all five folds and muscle groups.

Overall, our findings demonstrate that advanced transformer-based models such as Swin UNETRV2 hold great promise for reliable, automated segmentation of thigh muscle groups in MRI, thereby accelerating quantitative research and supporting clinical applications in aging and age-related diseases.

Presented by
Lingling Liu
Institution
Delaware State University
Keywords
thigh MRI, muscle segmentation, 3D UNET, Transformer

Building a Phylogenetic Pipeline: Using Python and Biopython to Retrieve, Filter, and Align DNA Sequences for Evolutionary Analysis.

Yasmeen Olass, Dr. Rita Hayford, Dr. Chase Stratton, Dr. Gulni Ozbay

Abstract
As genomic databases explode in size, reproducible software pipelines have become essential for evolutionary biology. During Delaware State University’s Undergraduate Phylogenetics Boot Camp, I built an end-to-end workflow in Python that retrieves, filters, and aligns DNA barcode sequences. Using Biopython modules (Entrez, SeqIO, AlignIO) and MAFFT, I downloaded COI and 16S genes, removed low-quality records (< 600 bp or > 5% ambiguous bases), and generated high-quality multiple-sequence alignments. Time-boxing (Pomodoro) and GTD task tracking kept coding and analysis on schedule. Recent studies have applied similar pipelines to questions ranging from insect host-plant adaptation to the domestication of crop phytochemicals. Building on that template, I will scale the workflow to a curated set of medicinal and aromatic plants to test whether closely related species share characteristic metabolite profiles. By merging phylogenies with phytochemical and ecological datasets, the project links evolutionary history to functional traits, with applications in conservation, agriculture, and natural-product discovery.
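The quality filter this abstract describes (drop records shorter than 600 bp or with more than 5% ambiguous bases) can be sketched in plain Python as below. This is an illustrative simplification: in the actual pipeline the sequences would come from `Bio.SeqIO` records, and the function name and IUPAC ambiguity-code set used here are assumptions.

```python
# IUPAC nucleotide ambiguity codes (everything except the unambiguous A, C, G, T).
AMBIGUOUS = set("NRYSWKMBDHV")

def passes_filter(seq, min_len=600, max_ambiguous_frac=0.05):
    """Keep a DNA barcode sequence only if it is long enough and mostly unambiguous."""
    seq = seq.upper()
    if len(seq) < min_len:
        return False
    ambiguous = sum(1 for base in seq if base in AMBIGUOUS)
    return ambiguous / len(seq) <= max_ambiguous_frac
```

In a Biopython workflow, this predicate would sit between `Entrez`/`SeqIO` retrieval and the MAFFT alignment step, keeping only records whose `str(record.seq)` passes.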
Presented by
Yasmeen Olass
Institution
Delaware State University
Keywords

Reducing Inefficiencies in Black Women’s Reproductive Health Through AI-Powered Business Intelligence

Jordan Mason

Abstract
Rising healthcare costs, along with limited access to healthcare services due to systemic inequities, have been shown to create challenges in the reproductive health of Black women. Obstacles such as high rates of maternal complications, delayed diagnoses, and inefficient care not only increase financial strain on our healthcare systems but also negatively impact patient outcomes. This research proposes the use of AI-powered business intelligence (BI) methods to identify and reduce inefficiencies in reproductive healthcare services for Black women, with the goal of optimizing both cost-effectiveness and quality of care. The objective is to integrate BI into electronic health records to detect patterns of testing that would be flagged as repetitive in areas serving predominantly Black women. The BI would also target treatment delays and resources that may be misallocated for female patients. Algorithms will be applied to detect inefficiencies and quickly generate insights for healthcare providers. To ensure the reliability of the research, early tests on healthcare data will be used to demonstrate that BI systems can reduce redundant testing by at least 15% and shorten treatment delays by 20% compared to baseline reporting methods. These findings would suggest that the analytics not only help cut costs but can also increase patient satisfaction by delivering more timely care. The impact of this research is to strengthen financial sustainability and improve reproductive health outcomes for Black women through methods that are more efficient and affordable and that place patient care at the highest importance. Ethical safeguards are considered, including patient data privacy protocols and bias monitoring.
By combining technological innovation with a focus on underrepresented populations, this research demonstrates the potential of AI-powered BI to transform reproductive healthcare into a more inclusive and effective system.
Presented by
Jordan Mason
Institution
Delaware State University
Keywords
Reproductive Healthcare, Black Women’s Health, Maternal Health Disparities, Cost Reduction

Key Cellular Measurements for Accurate Breast Tumor Classification

Jared Bryant

Abstract
Breast cancer remains one of the most common cancers worldwide, making early and accurate diagnosis critically important. In this project, we investigated whether a machine learning approach could effectively distinguish malignant from benign breast tumors and identify which tumor characteristics most strongly drive those predictions. Using the Wisconsin Diagnostic Breast Cancer dataset, we trained and tuned a Random Forest classifier. The model achieved high accuracy in classifying tumors, while feature importance analysis highlighted that tumor area, texture, and concave points were among the most influential factors. These findings suggest that machine learning models like Random Forests not only achieve strong predictive performance but can also provide interpretable insights into the biological features that matter most for diagnosis. This dual capability of accuracy and interpretability shows the promise of machine learning in supporting clinical decision-making and guiding future diagnostic tools.
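As an illustration of this workflow, a minimal sketch using scikit-learn's bundled copy of the Wisconsin Diagnostic dataset might look as follows; the split and hyperparameters here are placeholders, not the study's:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()  # the Wisconsin Diagnostic Breast Cancer dataset
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)

# Rank features by importance to see which measurements drive predictions
ranked = sorted(zip(data.feature_names, clf.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
top_features = [name for name, _ in ranked[:5]]
```

The same `feature_importances_` attribute underlies the interpretability analysis described in the abstract.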
Presented by
Jared Bryant
Institution
Delaware State University
Keywords
Random Forest, Cancer, Dataset

Building trust through accountability: Governance and Policy Frameworks for Responsible AI

James Jordan III

Abstract
Artificial intelligence (AI) is increasingly used in sensitive domains like healthcare, hiring, and finance. While it can improve efficiency, rapid deployment without strong governance has led to bias, opacity, and mistrust. This study compares the EU AI Act, the U.S. NIST AI Risk Management Framework, and Kenya’s Draft AI Policy to identify key elements of effective governance: enforceable accountability, stakeholder inclusion, and adaptability to risk. Findings suggest that policies combining ethical oversight with technical guidance enhance transparency and trust while supporting innovation.
Presented by
James Jordan III
Institution
Delaware State University
Keywords
Responsible AI, AI Governance, Ethics in AI, Accountability, Transparency, Bias Mitigation, AI Policy, Trustworthy AI

Improving Machine Translation with Context-Aware Entity-Only Pre-translations Using GPT-4o

Isaac Adjei, Jabez Agyemang-Prempeh, Saurav Aryal, PhD

Abstract
Machine Translation (MT) often struggles with named entities such as people, places, and organizations, particularly in low-resource languages or ambiguous contexts. Errors in handling these entities can distort meaning and lead to culturally significant mistakes. We propose a three-step GPT-based translation pipeline that improves accuracy by combining natural language processing (NLP) techniques with external knowledge. Our method first uses Named Entity Recognition (NER) to identify entities in the source text, then queries Wikidata to retrieve canonical forms and translations, resolving ambiguities such as distinguishing between homonymous places or names. Finally, we integrate these entity-aware translations into a GPT-driven context-aware prompt to generate the final output. Evaluations across multiple languages demonstrate that this approach consistently improves translation quality compared to baseline GPT outputs, with the largest gains observed in complex scripts such as Arabic and Japanese. By explicitly detecting and disambiguating named entities before translation, our pipeline ensures higher accuracy, cultural appropriateness, and contextual correctness, paving the way for more robust MT systems in diverse linguistic settings.
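A skeleton of the three-step pipeline might look like the following; the NER tagger, Wikidata lookup, and GPT-4o call are stubbed out, and all function names and the sample glossary entry are our own illustrations:

```python
def find_entities(text):
    # Stand-in for a real NER model (step 1)
    return [w for w in text.split() if w.istitle()]

def lookup_canonical(entity):
    # Stand-in for a Wikidata query returning a canonical, disambiguated form (step 2)
    return {"Paris": "Paris (capital of France)"}.get(entity, entity)

def translate(text, target_lang):
    # Step 3: fold entity-aware renderings into a context-aware prompt
    glossary = {e: lookup_canonical(e) for e in find_entities(text)}
    prompt = (f"Translate to {target_lang}. Use these entity renderings: "
              f"{glossary}\n\n{text}")
    return prompt  # in the real pipeline this prompt would go to GPT-4o

prompt = translate("Paris hosted the games.", "Japanese")
```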
Presented by
Isaac Adjei
Institution
Howard University
Keywords
Machine Translation (MT), Named Entity Recognition (NER), Natural Language Processing (NLP), Wikidata, Entity Disambiguation, Low-Resource Languages, Context-Aware Translation, GPT-based Models, Knowledge-Augmented Translation, Multilingual NLP
Chat with Presenter
Available September 26th, 3:15 PM - 4:00 PM

Investigating Racial Bias in ML-Based Masked Face Recognition Systems

Joycelyn Rouse, Dr. Mustafa Atay

Abstract
Face recognition systems have become increasingly prevalent in security, authentication, and surveillance applications. However, concerns about racial bias and performance disparities across demographic groups present significant technical challenges. This study investigates how dataset composition and facial masks influence racial bias in face recognition systems using conventional machine learning algorithms and Local Binary Pattern (LBP) feature extraction.

We selected 50 subjects (25 White and 25 Black) from the Chicago Face Database, each with unmasked and masked images. Masked images are synthetically generated using MaskTheFace software. For each subject, we used 9 images for training and 1 for testing. We created multiple experimental datasets which are White-only, Black-only, Balanced (25/25), White-Dominant (35 White/15 Black), and Black-Dominant (35 Black/15 White). All datasets were evaluated in both masked and unmasked conditions. LBP was used to extract texture-based facial features, and classification was performed using XGBoost, Linear Discriminant Analysis, Logistic Regression, Random Forest, and LightGBM. Evaluation metrics included accuracy, precision, recall, F1 score, and miss rate.

Preliminary results show that racial bias is prevalent in each dataset: White subjects consistently achieve higher accuracy than Black subjects, and masking further amplifies these disparities. These findings highlight that Black subjects are at a disadvantage in face recognition systems and that masks widen this gap; the bias persists across the various dataset splits. Our work emphasizes the importance of pursuing demographic fairness through effective feature extraction, hyperparameter tuning, and dataset design to mitigate racial bias, especially when conventional approaches are deployed in real-world applications. This research contributes to advancing social justice and fairness by exposing and addressing racial biases in masked face recognition, ultimately supporting the development of more reliable and ethically responsible AI applications. Future work will explore deep learning models, larger datasets, and advanced mitigation techniques to improve fairness across demographic groups.
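For reference, the LBP features at the heart of this pipeline can be sketched in a few lines of NumPy; this is a simplified 8-neighbor variant, not necessarily the exact configuration used in the study:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbor Local Binary Pattern histogram (simplified, illustrative)."""
    img = np.asarray(img)
    c = img[1:-1, 1:-1]  # center pixels
    # 8 neighbors, clockwise from the top-left, each contributing one bit
    neighbors = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                 img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                 img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(neighbors):
        codes |= (n >= c).astype(int) << bit  # set bit if neighbor >= center
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalized 256-bin texture descriptor
```

The normalized histogram then serves as the feature vector fed to classifiers such as XGBoost or Random Forest.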
Presented by
Joycelyn Rouse
Institution
Winston-Salem State University
Keywords
Facial recognition, racial bias, fairness, masked faces, machine learning

The Physics of Speed: Analyzing Sha'Carri Richardson's Sprint Using Motion Tracking

Natalya Armenta, Onyekachi Adigwe, Melisa Anacius, Jasmine Arrey, Janiya Davis, Taylor Smith-Beach, Elena Jordan, Dawit Hailu

Abstract
This analysis explores the physics behind Sha'Carri Richardson’s 100m sprint using motion tracking. Frame-by-frame video analysis provided displacement, velocity, and acceleration data, revealing rapid acceleration, a peak velocity of ~10.8 m/s, and efficient biomechanics, including high stride frequency and short ground contact time. Estimated peak force output was ~250 N. The results illustrate how Richardson’s form aligns with sprint physics models and highlight key factors contributing to elite speed performance.
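The finite-difference step behind such motion-tracking analysis can be sketched as follows; the displacement samples below are invented for illustration, shaped so the peak velocity lands near the reported ~10.8 m/s:

```python
import numpy as np

# Hypothetical displacement samples (m) every 0.5 s, loosely shaped like a
# sprint profile; these are NOT the actual tracking data from the poster.
t = np.arange(0, 6.0, 0.5)
x = np.array([0.0, 0.9, 3.0, 6.2, 10.2, 14.9,
              20.0, 25.3, 30.7, 36.1, 41.5, 46.9])

v = np.gradient(x, t)  # central-difference velocity (m/s)
a = np.gradient(v, t)  # acceleration (m/s^2)
peak_velocity = v.max()
```

Frame-by-frame tracking yields exactly this kind of displacement series, from which velocity and acceleration curves follow by numerical differentiation.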
Presented by
Natalya Armenta
Institution
Bowie State University
Keywords
Motion tracking, Analysis, Sports

Quantum Corners: Improving Emergency Response with Smart Traffic Management and Quantum Sensors

Cameron Lewis, Ayron Fears

Abstract
Presented by
Cameron Lewis
Institution
Howard University
Keywords

Combining Zero-Shot Claim Extraction and KNN-Based Classification for Cross-lingual Claim Matching

Suprabhat Rijal, Rahual Rai, Saurav K. Aryal PhD

Abstract
Presented by
Rahual Rai <rahual.rai@bison.howard.edu>
Institution
Howard University, Department of Electrical Engineering and Computer Science, CEA
Keywords
Cross-lingual fact-checking, Zero-shot classification, K-nearest neighbors, Multilingual claim retrieval, Social media misinformation

The impact of social and structural supports and social responsibilities on anxiety among academic health professionals

Anietie Andy¹, Natasha Tonge², Praise-EL Michaels¹, Legand Burge¹, Marina K. Holz³

Abstract
Academic faculty in the health professions are especially susceptible to anxiety-inducing factors as a consequence of their demanding professional lives. The COVID-19 pandemic placed special attention on the experiences of anxiety and stress among health professionals and academics; however, there is a lack of studies on the impact of specific types of social supports and social responsibilities on anxiety in health professions faculty. To address this critical gap, we administered the Generalized Anxiety Disorder (GAD-7) questionnaire to 549 self-identified academic faculty in the health professions. Of those surveyed, 8% reported severe or worse anxiety and 16.7% reported moderate or worse anxiety. Our results revealed that academic rank, specifically being non-tenure-track or an adjunct or part-time faculty member, was associated with lower levels of anxiety relative to other academic ranks. Having at least one parent in academia or having close family relationships were both associated with significantly lower anxiety. Social responsibilities such as having children or a larger family size did not significantly impact levels of anxiety. In summary, our results showed that post-pandemic anxiety levels are still notable and potentially impairing among academic health professionals, and that structural and social supports can act as buffers against anxiety. Our study can inform future efforts by institutions to support the mental health of academic health professions faculty.
Presented by
Praise-EL Michaels
Institution
Howard University
Keywords
Anxiety, health professions, academia, faculty

Mutual Affective Synchrony in Parent-Child Conversations Using Machine Learning

Saksham Kapoor, Selin Zeytinoglu

Abstract
Understanding the dynamics of parent-child synchrony—the coordination of physiological and behavioral processes within dyads—is critical for elucidating its role in children’s learning and development. Although parental communication contributes to children's social learning and affective outcomes (Zeytinoglu et al., 2025), little is known about the moment-to-moment synchrony in the emotional valence of parent-child conversations. We applied a novel machine-learning-based sentiment analysis approach to examine whether the emotional tone of parental speech predicts children’s subsequent speech valence, and whether children’s speech valence, in turn, predicts parental emotional tone. Parent-child dyads (N = 100; child age: 9–12; 51% female; 20% Hispanic; 50% non-Hispanic White, 15% Hispanic White, 12% Asian, 9% Black, 13% Biracial) participated in a study examining mother-child dynamics in anxiety transmission. Dyads completed a semi-structured conversation task discussing hypothetical peers. Speech data (~8 minutes/dyad) were transcribed using Whisper Large-V3. Sentiment analysis was conducted in Python using RoBERTa, yielding positive, negative, and neutral scores for each conversational turn. Linear mixed-effects models showed that parent sentiment predicted child sentiment in both valences (positive: β = .23, negative: β = .29, p < .001). Child sentiment also predicted subsequent parent sentiment (positive: β = .09, negative: β = .19, p < .01). Findings suggest bidirectional emotional contagion during parent-child conversations, with stronger parent-to-child effects. By quantifying moment-to-moment verbal affective synchrony using machine learning, this study provides a scalable approach for capturing emotional dynamics in naturalistic conversations. Future work can examine how affective synchrony relates to physiological synchrony and long-term emotional outcomes.
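The core turn-lagged association can be sketched with synthetic data standing in for the RoBERTa valence scores; the study itself used linear mixed-effects models rather than this simple regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic turn-level valence: each child turn follows one parent turn.
# The 0.25 coupling is an invented effect size, not the study's estimate.
parent = rng.normal(size=200)                            # parent turn valence
child = 0.25 * parent + rng.normal(scale=0.5, size=200)  # child reply valence

# Regress child valence on the preceding parent turn's valence
slope, intercept = np.polyfit(parent, child, 1)
```

A positive slope corresponds to the parent-to-child emotional contagion the abstract reports; swapping the roles of the two series gives the child-to-parent direction.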
Presented by
Saksham Kapoor <sakshamk@terpmail.umd.edu>
Institution
University of Maryland, College Park
Keywords
Natural language processing (NLP), Sentiment analysis (RoBERTa), Automatic Speech Recognition (OpenAI Whisper), Conversational AI, Human-centered AI, Emotion recognition

Word Embedding Technology – From One-Hot Encoding to Transformer Technology

Wendon Doswell

Abstract
Building on earlier work with word embeddings, this project explores transformer-based models through Amazon MLU resources. We examined BERT's architecture, including embeddings, attention, encoders, and decoders, and fine-tuned DistilBERT on 2,000 Amazon reviews for sentiment analysis. The work included hyperparameter tuning and evaluation using accuracy, precision, recall, and F1 score. Our focus thus shifts from traditional embeddings to neural-network-driven transformer models.
Presented by
Wendon Doswell
Institution
Norfolk State University
Keywords
NLP, BERT, Transformers, Neural Networks, Text Classification

Using Retrieval-Augmented Generation in Large Language Models Optimization

Sakina Shrestha, Dr. Paul Wang

Abstract
Large Language Models (LLMs) have revolutionized natural language processing but remain prone to inaccurate contextual understanding, hallucination, and a lack of access to real-time or domain-specific knowledge. This study explores how LLM performance can be optimized by adding Retrieval-Augmented Generation (RAG) to an academic support chatbot prototype for Morgan State University's Computer Science Department. RAG improves generation quality by injecting relevant external knowledge retrieved in real time. Our system employs a FastAPI backend and a Vite/React frontend, coupled with a Pinecone vector database to perform semantic similarity search over carefully curated academic sources. The backend is containerized using Docker and deployed on AWS EC2 to achieve scalability, reproducibility, and fault tolerance, and MySQL is used to store user data. Initial testing with simulated student queries demonstrates a clear reduction in average response time, from 2.1 seconds with an LLM-only baseline to 1.3 seconds with the RAG-augmented pipeline.
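The retrieval step of such a RAG pipeline can be sketched as below; a toy bag-of-words vectorizer stands in for the embedding model and an in-memory list stands in for the Pinecone index, so the documents and every name here are illustrative:

```python
import numpy as np

DOCS = [
    "CS advising office hours are Monday through Thursday.",
    "The capstone project proposal is due in the fall semester.",
    "Transfer credits require department approval.",
]

# Toy vectorizer: a shared bag-of-words vocabulary replaces the embedding model
vocab = {w: i for i, w in enumerate(
    sorted({w for d in DOCS for w in d.lower().split()}))}

def embed(text):
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# In-memory (document, vector) pairs play the role of the Pinecone index
index = [(d, embed(d)) for d in DOCS]

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda pair: -float(q @ pair[1]))
    return [doc for doc, _ in ranked[:k]]

context = retrieve("When is the capstone proposal due?")[0]
# In the full pipeline, `context` is injected into the LLM prompt before
# generation, grounding the answer in the retrieved knowledge.
```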
Presented by
Sakina Shrestha
Institution
Morgan State University, Department of Computer Science
Keywords
RAG, Pinecone, Docker, AWS EC2
Chat with Presenter
Available September 26th 3:15pm - 4:00pm
Watch Presentation

Forcespun Bacterial Cellulose Composite Fibers with Essential Oil for Potential Food Packaging Applications

Erin-Nicole Scott, Dr. Maria Calhoun, Dr. Vijay Rangari

Abstract
Today, most food packaging materials are made from petroleum-based plastics. Most of these plastics are harmful to the environment and human health. The need for sustainable solutions has encouraged exploration of bio-based materials to remedy these issues. Cellulose is a widely used bio-based material, usually derived from plant sources and isolated into cellulose nanomaterials via acid hydrolysis. Bacterial cellulose (BC) is produced by bacterial proliferation and has a chemical structure similar to plant-derived cellulose. There is currently a focus on using agricultural waste to reduce the costs associated with producing this cellulose; therefore, in this study banana peels are used in the production of bacterial cellulose. Poly(lactic acid) (PLA) is a widely studied polymer that is biodegradable and can be used in a variety of applications. Many essential oils are known for their antimicrobial properties and are also bio-derived and environmentally safe. These materials will be combined to make porous fibers via forcespinning. The goal of this research is to provide a basis for absorbent materials that collect excess moisture within food packaging. The fibers will be characterized by their optical, thermal, hydrophilic, and antimicrobial properties.
Presented by
Erin-Nicole Scott <erin.scott.3721@gmail.com>
Institution
Tuskegee University, Materials Science and Engineering Department
Keywords
Bacterial Cellulose, Sustainability, Recyclability

Clustering Raman Spectral Data Using K-Means Algorithm and Dimension Reduction

Z. Bass, A. Brown, L Alexander, M. Boumedine, and H. Boukari

Abstract
Raman spectroscopy captures the electromagnetic radiation emitted by energy shifts when a sample is struck with a laser, providing insights into molecular vibrations. It has become a versatile analytical tool with applications in chemistry, medicine, and molecular biology, as the technique captures unique structural fingerprints of chemical components. This study is designed as a data science use case: grouping, or clustering, molecule spectra according to similar fingerprints. After extracting the most relevant features, we use the K-means clustering algorithm to group the sample-set molecules based on behavioral similarities within the spectra, using peak locations at a set interval of 50 cm⁻¹. The extracted features are currently being tested for molecule classification tasks.

The authors acknowledge support from NSF award #2219731, a collaboration between Delaware State University and the University of the Virgin Islands, and NSF award #1955664.
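The clustering workflow described above can be sketched with synthetic spectra standing in for real Raman data; the peak positions, noise level, and PCA dimensionality below are illustrative choices, not the study's:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
shift = np.linspace(100, 3000, 300)  # Raman shift axis (cm^-1)

def spectrum(peak_center):
    """Synthetic spectrum: one Gaussian peak plus noise."""
    return (np.exp(-((shift - peak_center) / 40.0) ** 2)
            + rng.normal(scale=0.02, size=shift.size))

# Two molecule families with peaks at different shift locations
spectra = np.array([spectrum(800) for _ in range(10)]
                   + [spectrum(1600) for _ in range(10)])

# Reduce dimension before clustering, then group by spectral similarity
features = PCA(n_components=5, random_state=1).fit_transform(spectra)
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(features)
```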

Presented by
Zeidan Bass
Institution
University of the Virgin Islands and Delaware State University
Keywords
Spectra Analysis, Data Science Life Cycle, Clustering K-means Algorithm

A Study of Gender Bias in Facial Emotion Classification via Deep Learning Models

Abdul-Qadir Mohammed and Mustafa Atay

Abstract
Facial Emotion Recognition (FER) systems can be vulnerable to a wide range of biases, leading to unequal performance across demographic groups. This study presents a large-scale analysis of gender disparities, evaluating five state-of-the-art architectures—Sequential CNN, ResNet-50, DenseNet, MobileNetV2, and InceptionV3—across seven emotion classes (anger, disgust, fear, happiness, sadness, surprise, neutral). Bias was rigorously assessed through ten experimental scenarios, generated by training and testing each model on all combinations of male-only (M), female-only (F), male-dominant (MD), female-dominant (FD), and balanced (B) subsets from the KDEF dataset, with performance gaps measured via F1-score delta (ΔF1). Key findings reveal significant and systematic performance gaps. Investigations into bias origins pointed to underlying dataset imbalances, evidenced by a sharp performance decline in cross-gender tests, and to fundamental architectural limitations, evidenced by cross-gender performance variations that imply divergent feature processing strategies. To mitigate this bias, we introduce a novel feature fusion framework that combines raw pixel data with invariant geometric facial landmarks. This approach demonstrably reduced aggregate gender bias by nearly half across all tested emotions, achieving near-parity performance for key classes such as happiness and fear.

Our work provides three core contributions:

A standardized bias-evaluation protocol for FER

An actionable multi-modal mitigation strategy via feature fusion

Evidence that architectural adjustments enhance fairness alongside balanced data

These findings form a foundation for building more equitable FER systems. The future work is directed toward intersectional demographics including age, ethnicity, race and their intersection with gender.
Presented by
Abdul-Qadir Mohammed <abdulmohammed033@gmail.com>
Institution
Winston-Salem State University
Keywords
Facial Expression Recognition, Gender Bias, Fairness, Responsible AI, Deep Learning
Chat with Presenter
Available September 28, 2:00-3:00 PM
Watch Presentation

From Static Scans to Adaptive Intelligence: Deep Reinforcement Learning for Network Mapping

Jaden Johnson

Abstract
Effective network mapping is critical for cybersecurity, as it provides a detailed view of network structures, uncovers vulnerabilities, and identifies potential attack vectors. This project examines the use of deep reinforcement learning (DRL) to improve autonomous network mapping, enabling agents to dynamically learn and adapt for greater accuracy and efficiency. Unlike traditional approaches, DRL introduces an intelligent, adaptive method of analyzing networks that addresses limitations in scalability and responsiveness. Autonomous network mapping is particularly important as cyber threats grow increasingly sophisticated, demanding adaptive defense mechanisms. DRL-equipped agents can continuously monitor vulnerabilities, optimize resource allocation, and enhance overall network resilience. Prior studies emphasize DRL’s promise in advancing cybersecurity, but also highlight challenges such as model complexity, scalability, and the need for cooperative decision-making across agents. The research methodology begins with a literature review of network mapping techniques, their limitations, and DRL’s role in cybersecurity. Network topology data is gathered from repositories such as the Internet Topology Zoo and simulated in environments like Mininet and NS-3. The DRL-based framework is evaluated against established methods, including Nmap and traceroute, with metrics focused on accuracy, efficiency, and scalability. Visualization tools such as NetworkX and Matplotlib highlight the interconnected and evolving nature of modern infrastructures, underscoring the necessity of adaptive mapping approaches. Real-time data collection and state representation further strengthen resilience by allowing continuous adaptation. Findings indicate that DRL-based network mapping outperforms traditional methods by improving efficiency, security, and intelligent management of complex environments. 
The results demonstrate DRL’s ability to handle large-scale, dynamic systems and reinforce its potential as a foundation for autonomous cybersecurity strategies. Future research should expand on multi-agent DRL systems and real-time adaptive mapping to enhance scalability and responsiveness, ensuring stronger protection against emerging threats.
Presented by
Jaden Johnson
Institution
Norfolk State University, Department of Computer Science
Keywords
Deep Reinforcement Learning, Network Mapping, Cybersecurity

Evaluating Q-Learning Rewards in Noisy Cloud-Based Cyber-Physical Systems

Md. Imran Jahid Khan, Mohammad Rahman

Abstract
This research investigates the performance of Q-learning, a reinforcement learning approach, in a cloud-based cyber-physical system using an inverted pendulum as the physical model. We evaluate learning under varying levels of Gaussian observation noise and compare two reward structures: a standard step-based reward and a cosine-based reward designed to encourage upright balance. Experimental results across multiple noise levels show that observation noise degrades performance with the standard reward, whereas the cosine-based reward enhances robustness, stability, and cumulative returns. These findings contribute to a better understanding of reinforcement learning under uncertainty and underscore its potential for real-world cyber-physical systems where noise and variability are inherent.
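The two reward structures compared above can be sketched as follows for a pendulum whose observed state includes the pole angle theta (radians, 0 = upright); the exact threshold and shaping may differ from the study's:

```python
import math

def step_reward(theta, limit=0.21):
    """CartPole-style standard reward: +1 while the pole stays within bounds."""
    return 1.0 if abs(theta) < limit else 0.0

def cosine_reward(theta):
    """Cosine shaping: maximal when upright and degrading smoothly with
    deviation, so noisy angle observations near the boundary are penalized
    gradually rather than all-or-nothing."""
    return math.cos(theta)
```

The smooth gradient of the cosine reward is one plausible reason it tolerates Gaussian observation noise better than the step reward.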
Presented by
Md. Imran Jahid Khan <mkhan7037@gmail.com>
Institution
Tuskegee University, Department of Computer Science
Keywords
Machine Learning, Reinforcement Learning, Q-Learning

Advancing ADE Detection in Clinical Texts with Sentence-Level Prompting and Fine-Tuned Language Models

Howard Prioleau, Santiago Romero-Brufau, and Saurav K. Aryal

Abstract
Adverse Drug Events (ADEs) represent a major public health concern, contributing to significant morbidity, mortality, and financial costs. While most prior detection approaches have relied on sequence models, the application of large language models (LLMs) to ADE extraction from unstructured clinical notes remains underexplored. In this work, we propose a sentence-level prompting and fine-tuning strategy to improve ADE detection. Using the n2c2 dataset of annotated MIMIC-III discharge summaries, we split notes into sentences with Stanza and reformulated ADE identification as sentence-level classification via structured prompts. We evaluated both commercial LLMs (Gemini, LLaMA) and a fine-tuned smaller model (Qwen2 500M). Results show that fine-tuned Qwen2 outperforms larger LLMs, achieving strong macro F1 scores across ADE-related tasks and competitive accuracy in drug–ADE pair extraction. These findings demonstrate that compact, fine-tuned models can effectively scale ADE detection while minimizing computational overhead, with potential to enrich annotation frameworks and enhance clinical NLP pipelines.
Presented by
Howard Prioleau
Institution
Howard University
Keywords
Adverse Drug Events (ADEs), Clinical Text Mining, Large Language Models (LLMs), Sentence-Level Prompting, Fine-Tuning

Differences in Emotional Expression Across Ethnicity and Gender in Reddit Mental Health Forums

Praise-EL Michaels, Esau Hutcherson, Anietie Andy

Abstract
Users of mental health online forums express different emotions in posts published in these forums. The ethnicity and gender of these users can influence how and what emotions they express in these posts. Prior work determined that on social media platforms (a) individuals with different cultural backgrounds express mental health support needs differently and (b) the same applies to individuals belonging to different genders. However, prior work did not study whether individuals belonging to different ethnic groups and genders express themselves differently on online health forums; for example, do Asian female users tend to express themselves differently from White male users on mental health online forums? This work aims to address the following research question: do the ethnicity and gender of users influence the kinds of emotions they express in posts published on mental health online forums? To do this, we identify users who self-declare their ethnicity and gender in posts published on the various subreddits on Reddit. From this set of users, we identify those who had published posts in one or more mental health-related forums on Reddit and we collect their posts published on these forums. Using a RoBERTa model fine-tuned on GoEmotions, we extract the 27 emotions, excluding neutral, expressed in each of these posts. With this dataset, we conduct statistical analysis using a 2-way ANOVA and the Tukey HSD (Honestly Significant Difference) test to determine whether there are differences in the way users belonging to different ethnic groups and genders express each of these emotions. We discuss these findings and their implications in the discussion section. The findings from this work can inform the design of mental health online interventions.
Presented by
Praise-EL Michaels
Institution
Howard University
Keywords
Ethnicity, Gender differences, Emotional expression, Online mental health forums, Reddit
Chat with Presenter
Available September 26th - 12-5pm
Watch Presentation

Enhancing Multimodal Brain Tumor Segmentation with 3D U-Net

Erika M. Suber-Bey and Fatima Boukari

Abstract
Brain metastases, or secondary tumors, are the most common type of brain malignancy in adults. They develop when cancer cells from primary tumors (e.g., lung, breast, skin) migrate and create secondary growths in the brain. Accurate segmentation of brain metastases is crucial for diagnosis, treatment planning, and monitoring. Unfortunately, this task is a significant challenge due to tumor variety and similarity to healthy tissue, which can lead to inaccurate delineation and unreliable predictions of tumor growth. While deep learning models show promise, most existing research focuses on 2D MRI scans, with limited exploration of 3D scans. For 3D models, it is a challenge to accurately segment small metastases and lesions near blood vessels, with trade-offs between maximizing tumor detection and minimizing false positives. To address these gaps, we propose a 3D U-Net model that can accurately segment even the smallest brain tumors from MRI volumes. Our methodology includes the selective use of T1, T1ce, and FLAIR modalities, intensity normalization, spatial cropping, and a model architecture composed of double convolutional layers with dropout. Training uses a combination of Dice and focal loss functions with the Adam optimizer. Comparison with previous studies confirms that our modified U-Net model outperforms existing approaches, highlighting the potential of our method to support earlier and more effective clinical interventions through improved modeling of metastatic tumors.
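The Dice component of the training objective can be sketched in NumPy; the study combines it with focal loss and would implement both inside a deep learning framework, so this is illustrative only:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps: 0 at perfect overlap, approaching
    1 when the predicted and target masks are disjoint. The epsilon keeps the
    ratio defined for empty masks."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Because Dice is driven by overlap rather than per-voxel accuracy, it remains informative even when small metastases occupy a tiny fraction of the volume.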
Presented by
Erika Suber-Bey <emsuberbey23@students.desu.edu>
Institution
Delaware State University
Keywords
Deep Learning, Multi-modal, U-Net, Optimization, brain metastasis, cancer, AI

Autonomous Robots with OpenAI Integration: A Technical Case Study Using NAO6

Aayush Shrestha, Paul Wang

Abstract
Humanoid robots offer great research and educational potential, but in most instances their flexibility is restricted by scripted programming and limited adaptability. The NAO6 humanoid robot, despite enhanced sensors, expressive movement, and multimodal input, shares these limitations. This study presents the development and implementation of a system that connects NAO6 with OpenAI’s GPT models, enabling real-time conversational autonomy beyond fixed behaviors. The system design consists of two components. The first is an onboard client developed in Python 2.7 with NAOqi APIs for speech-to-text, wake words, and text-to-speech. The second is an independent Python 3 Flask server that accepts input from NAO, processes it via GPT, and sends back a generated response. Applications such as Choregraphe, PuTTY, and WinSCP enabled motion programming, file transfer, and remote deployment, while error handling, retry logic, and server-side optimization minimized latency, avoided versioning restrictions, and stabilized communication. Evidence indicates that the AI-equipped NAO6 is now capable of engaging in dynamic, open-ended conversation with clear explanations, natural rhythm, and continuity across multiple turns. The robot can sustain conversations without scripted responses, demonstrating that AI can revive legacy robotic systems. Future work will enhance expressiveness by linking speech with gestures, LED patterns, and motion, and will add adaptive memory to store names, preferences, and previous conversations. Emotion-sensitive features, such as face recognition and voice-tone analysis, will enable empathetic responses based on user state. This intelligent robot can be used in applications including education, tutoring, exercise training, interactive presentations, and public outreach, with NAO serving as a human-like, responsive assistant.
This project demonstrates how integrating robotics with large language models can turn script-based robots into autonomous systems capable of reasoning, decision-making, and adapting in the real world.
Presented by
Aayush Shrestha
Institution
Department of Computer Science, Morgan State University
Keywords
Humanoid robot, NAO6, artificial intelligence, GPT models, conversational AI, human-robot interaction, machine learning, education technology

DeepTabCoder: Code-based Retrieval and In-context Learning for Question-Answering over Tabular Data

Saharsha Tiwari, Saujanya Thapaliya, Saurav K. Aryal PhD

Abstract
This research introduces DeepTabCoder, a method for leveraging large language models (LLMs) in question-answering over tabular data. DeepTabCoder leverages a code-based retrieval system combined with in-context learning to generate and execute Python code for answering queries over structured datasets. By utilizing DeepSeek-V3 for code generation, DeepTabCoder effectively integrates dataset-specific metadata into tailored prompts, enabling the model to reason over complex tabular structures without directly exposing full table contents. Our approach follows a three-step process: first, dataset-specific schema information is extracted and integrated into the prompt; second, in-context learning is employed to generate executable code for retrieving relevant answers; finally, the generated code is executed in a controlled environment to ensure correctness. This modular framework enhances generalization across diverse datasets while minimizing hallucinations in query responses. Results demonstrate the effectiveness of DeepTabCoder in handling various question types, including Boolean, categorical, numerical, and list-based queries. Our model achieves an accuracy of 81.42% on the DataBench dataset and 80.46% on the DataBench Lite dataset, significantly outperforming the baseline model, which achieves 26% and 27% accuracy, respectively. Notably, our approach excels in Boolean reasoning and numerical queries, though challenges remain in handling complex aggregation tasks requiring multi-hop reasoning. These findings reveal the potential of code-based retrieval and execution in tabular question-answering tasks. Future work will focus on advanced prompt engineering and execution strategies to improve performance on more complex queries, such as leveraging smaller LLMs to verify code execution for multi-hop reasoning and broader generalization.
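The schema-into-prompt and controlled-execution steps can be sketched as follows; the table, prompt template, and "generated" code are illustrative, since in DeepTabCoder the code would come from DeepSeek-V3:

```python
# A tiny stand-in table; the real system works over DataBench datasets
rows = [
    {"city": "Accra", "population": 2_500_000},
    {"city": "Kumasi", "population": 3_300_000},
]

# Step 1: extract dataset-specific schema metadata (not the full contents)
schema = {col: type(rows[0][col]).__name__ for col in rows[0]}

# Step 2: fold the schema into a tailored prompt for code generation
prompt = (
    "You are given a table `rows` with columns "
    f"{schema}. Write Python that sets `answer` "
    "to the city with the largest population."
)

# Stand-in for the LLM's response to the prompt above
generated_code = "answer = max(rows, key=lambda r: r['population'])['city']"

# Step 3: execute the generated code in a controlled namespace
namespace = {"rows": rows}  # only the table is exposed
exec(generated_code, namespace)
answer = namespace["answer"]
```

Keeping the table out of the prompt and exposing it only at execution time is what lets the model reason over large tables without seeing their full contents.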
Presented by
Saujanya Thapaliya <saujanya.thapaliya@gmail.com>
Institution
Howard University, AI4PC Lab, Institute for Human Centered Artificial Intelligence
Keywords
Large Language Model (LLM), Tabular Question Answering (Tabular QA), Code Generation, Schema-aware Prompting

Bee and Pollen Monitoring: A Deep Dive with Machine Learning

Kshitij Pingle

Abstract
Honey bees are among the most important pollinators for many flowering plant species and are critical to maintaining ecosystems. Furthermore, honey bees pollinate many crops essential for human food production and the economy. Bees use pollen to meet the protein requirements of their diets, which is essential for egg laying and the continuation of a beehive. Keeping accurate records of pollen collection by honey bees provides valuable insights into environmental health. In addition, pollen estimates can help the hundreds of thousands of individuals who suffer from pollen allergies each year. However, keeping accurate records of pollen collection without harmful methods is slow, tedious, and labor-intensive. An effective technique for analyzing honey bees' pollen collection data is machine learning. With a camera system attached to the side of a beehive, it is possible to record videos of the hive entrance, including bees entering the hive with collected pollen. Applying machine learning techniques for object detection and bee tracking allows us to estimate the pollen entering a beehive, offering critical information about the beehive's health and the surrounding environment. Such a system will allow for steady 24/7 tracking of pollen to create appropriate datasets for tracking bee health, and will assist in calculating the amount of pollen a colony must collect over a month to survive a winter season. Deep learning and ensemble methods will be used to create the proposed object detection models. Hence, we propose creating a machine-learning-based model capable of detecting bees carrying pollen and a second model to estimate the quantity of pollen collected.
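The counting step downstream of detection and tracking can be sketched as follows. The data format here is a hypothetical assumption: an upstream detector/tracker (not shown) is assumed to emit per-frame `(track_id, has_pollen)` pairs, and each pollen-carrying bee is counted once across its track.

```python
# Hedged sketch: count distinct pollen-carrying bees from per-frame
# detections. The (track_id, has_pollen) format is an illustrative
# assumption about the upstream detector/tracker's output.

def count_pollen_bees(frames):
    seen = set()
    for detections in frames:
        for track_id, has_pollen in detections:
            if has_pollen and track_id not in seen:
                seen.add(track_id)  # count each pollen-bearing track once
    return len(seen)

# Three illustrative frames: tracks 1 and 3 carry pollen, track 2 does not.
frames = [
    [(1, True), (2, False)],
    [(1, True), (3, True)],
    [(3, True)],
]
```

Deduplicating by track identity is what distinguishes this from naive per-frame counting, which would count the same bee in every frame it appears.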
Presented by
Kshitij Samir Pingle
Institution
California State University, Fullerton, Department of Computer Science
Keywords
Honey Bee, Pollen, Object Detection, Object Tracking, Deep Learning

The Halting Problem and Ambiguity in Natural Language Processing (NLP)

Doron Reid

Abstract
The halting problem is a fundamental concept in computability theory, establishing that no general algorithm can determine whether a given program will eventually terminate or run indefinitely. This problem demonstrates the existence of undecidable problems, for which there is no universal algorithmic solution. With the ever-evolving field of Natural Language Processing (NLP), a critical question arises: does natural language understanding face similar undecidability constraints? NLP enables computers to deal with the complexities of human language, including lexical, syntactic, and semantic ambiguities, which often require infinite context or external knowledge to resolve. This paper explores whether natural language understanding is an undecidable problem by drawing parallels between undecidability in computation and the limitations of NLP models. Based on our findings, we can conclude whether these NLP tasks are solvable or inherently lack general algorithmic solutions, resembling the halting problem. Despite these theoretical limitations, modern AI systems such as Apple's Siri, OpenAI's ChatGPT, and Google's Gemini have sidestepped strict formal limitations. These systems rely on statistical methods, heuristics, and deep learning architectures such as transformers. This investigation examines the correlation between NLP tasks and the halting problem, highlighting the implications for the future of language-based AI systems and their potential limits.
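The diagonal argument behind the halting problem can be sketched directly in code: given any claimed halting oracle, one can construct a program that does the opposite of whatever the oracle predicts about it. This is a standard textbook construction, shown here as a small illustration rather than anything specific to the poster's method.

```python
# Sketch of the diagonalization behind the halting problem: for ANY claimed
# oracle `halts(f) -> bool`, build a program g that contradicts it on itself.

def build_counterexample(halts):
    def g():
        if halts(g):
            while True:      # oracle says "g halts" -> g loops forever
                pass
        # oracle says "g loops" -> g returns immediately

    predicted = halts(g)
    actually_halts = not predicted      # by construction of g (never executed)
    return predicted != actually_halts  # the oracle is always wrong about g

# Whatever the oracle answers, it is wrong on its own counterexample:
trivially_yes = build_counterexample(lambda f: True)
trivially_no = build_counterexample(lambda f: False)
```

Note that `g` is never actually run (doing so could loop forever); the contradiction is derived from its construction, which is exactly the flavor of argument the abstract transfers to ambiguity resolution in NLP.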
Presented by
Doron Reid
Institution
Howard University, College of Engineering and Architecture
Keywords
Ambiguity, Halting Problem, Undecidability, Natural Language Processing (NLP)

Ibom NLP: A Step Toward Inclusive Natural Language Processing for Nigeria’s Minority Languages

Oluwadara Kalejaiye, Luel Hagos Beyene, David Ifeoluwa Adelani, Mmekut-Mfon Gabriel Edet, Aniefon Daniel Akpan, Eno-Abasi Urua, Anietie Andy

Abstract
Presented by
Oluwadara Kalejaiye
Institution
Howard University
Keywords

Synthetic ID Fraud in Credit Card Applications

Bryson Stringer, Ikeoluwa Gbolagun, Naima Haidara, Helen Reumann, Latreil Wimberly

Abstract
The rise of synthetic identity fraud creates unique challenges for financial institutions, as fraudsters increasingly mix real and fabricated personal information to bypass traditional detection models. This project asked: what features could potentially go into a model used to detect synthetic identity fraud? To explore this, the research team examined a range of articles, industry reports, and case studies. From this research, patterns were identified in the methods fraudsters employ, including the manipulation of Social Security numbers, minimal but carefully built identities, and the use of multiple outlets to gain credit. By reviewing these observations, the study generated a list of potential features, grouped into themes, relevant for AI-driven fraud detection models. These include data discrepancies, sudden credit acquisition, frequent address changes, and bust-out periods. The project emphasizes the importance of documenting these commonalities as a first step toward developing AI models capable of flagging potential synthetic identities with greater precision.
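The themes above could feed a detection model as a flat feature vector. The sketch below is purely illustrative: every field name and threshold is a hypothetical assumption, not a feature the study actually specified.

```python
# Hedged sketch: hypothetical features derived from the four themes
# (data discrepancies, sudden credit acquisition, frequent address
# changes, bust-out periods). All field names are assumptions.

def application_features(app: dict) -> list:
    return [
        # data discrepancy: SSN issued long after an adult applicant's birth year
        int(app["ssn_issue_year"] > app["birth_year"] + 18),
        # sudden credit acquisition: new tradelines opened in the last 90 days
        app["new_tradelines_90d"],
        # frequent address changes over the last 24 months
        app["address_changes_24mo"],
        # bust-out signal: utilization spike alongside repeated limit increases
        int(app["utilization_spike"] and app["recent_limit_increases"] >= 2),
    ]
```

A downstream classifier would consume such vectors; the point here is only how qualitative themes translate into concrete model inputs.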
Presented by
Bryson Stringer
Institution
Bowie State University
Keywords
Synthetic ID Fraud, AI, Modeling
Chat with Presenter
Available September 26th, 3:00-4:00pm

Development of a Predictive Tool for Traumatic Diaphragmatic Injury Using Machine Learning and National Trauma Data

Paria Rezaei BS, Siobhan O. Nnorom MD, Anietie Andy PhD, Quyen Chu MD.

Abstract
Background: Traumatic diaphragmatic injury (TDI) is an uncommon but high-risk condition that often presents with subtle or delayed symptoms, making timely diagnosis challenging. Missed diagnoses can lead to serious complications, while false positives may result in unnecessary exploratory surgery. This study aimed to develop a predictive model to estimate the likelihood of TDI using data from the National Trauma Data Bank (NTDB) and modern machine learning techniques.

Methods: We used NTDB records from 2010 to 2014 to develop a machine learning model to predict TDI, including patients with complete clinical and injury information. Features included demographics (age, gender, race, ethnicity), injury severity score (ISS), injury mechanism, and specific organ injuries (e.g., lung, heart, bowel, liver). The final dataset included 8,988,697 patients, 27,916 of whom were diagnosed with TDI. We trained multiple machine learning classifiers (logistic regression, decision tree, gradient boosting, LightGBM, and random forest) to predict the presence of TDI. Random forest gave the highest accuracy and was selected as the final model.

Results: The random forest model achieved an overall accuracy of 97%, with a recall of 57% and an F1-score of 10% for TDI detection. Analysis showed that ISS, age, and mechanism of injury were the top predictors, and SHAP analysis further confirmed these findings. The SHAP interaction plot demonstrated that both age and gender influence the model's predictions; for example, higher age combined with specific gender patterns contributed to an increased predicted risk of TDI, showing a meaningful interaction between these features.

Calculator Tool: We also built an interactive risk calculator to estimate the probability of TDI for individual patients. Below are examples of the calculator output for different patients:
• Example 1: 34-year-old Black Hispanic male with ISS 32, penetrating injury, hemothorax, heart injury, respiratory failure, CNS complication, sepsis, urinary complication, pulmonary insufficiency, spleen injury, bowel injury, rib fracture. Predicted risk of TDI: 0.77
• Example 2: 68-year-old Asian Hispanic female with ISS 56, blunt injury, hemothorax, heart injury, respiratory failure, CNS complication, sepsis, urinary complication, pulmonary insufficiency, spleen injury, bladder injury, bowel injury. Predicted risk of TDI: 0.96
• Example 3: 71-year-old Asian Hispanic female with ISS 10, blunt injury, hemothorax, lung injury, heart injury, respiratory failure, bladder injury, abdominal aorta injury. Predicted risk of TDI: 0.12
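The calculator's input step can be sketched as a feature encoder. This is an illustrative assumption of how patient inputs might map to the vector a fitted random forest would consume; the field names and flag set are hypothetical, not the study's actual NTDB coding.

```python
# Hedged sketch: encode one patient into a fixed-length feature vector.
# All field names, categories, and flags are illustrative assumptions.

MECHANISMS = ["blunt", "penetrating"]
ORGAN_FLAGS = ["hemothorax", "lung_injury", "heart_injury",
               "spleen_injury", "bowel_injury"]

def encode_patient(p: dict) -> list:
    row = [p["age"], p["iss"], int(p["gender"] == "female")]
    row += [int(p["mechanism"] == m) for m in MECHANISMS]   # one-hot mechanism
    row += [int(f in p["injuries"]) for f in ORGAN_FLAGS]   # binary organ flags
    return row

# A fitted classifier would then score the row, e.g.
# risk = model.predict_proba([encode_patient(p)])[0][1]
```

A consistent encoder like this is what lets the interactive tool and the training pipeline share one feature space.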
Presented by
Paria Rezaei
Institution
Howard University
Keywords

RIG AI: Enhancing Creativity without Replacing Creatives

Calvin Briggs Jr.

Abstract
This study examines how AI “rigs” enhance film pre-production by automating routine tasks, accelerating idea development, and improving team collaboration. Evidence shows that AI tools can cut preparation time by nearly 50%, reduce costs by 5–20%, and streamline alignment through rapid visualization. Using a mixed-methods approach, the research highlights both the efficiency gains and the democratizing potential of AI rigs, while also pointing to future implications for creative roles, industry economics, and ethical considerations in storytelling.
Presented by
Calvin Briggs Jr
Institution
Clark Atlanta University
Keywords

AI and Computer Interface for Human Occupational Health and Safety Evaluation

Sam Nwaneri

Abstract
This study addresses workplace injury, health, and safety associated with human/machine interactions, which cost organizations substantial sums annually. Occupational safety and health associations regulate workplace practices to enforce compliance. The severity of recorded workplace injuries indicates that humans are less adaptive to machine interactions. Clinical identification and diagnosis of workplace injuries are accepted safety practices, but humans have not yet normalized machine interactions such as AI practices. Human safety in machine and AI interactions was therefore investigated for injuries. The statement of the problem is the risk arising from the human/AI interface in occupational health and safety. The health and safety of an individual in the workplace was compartmentalized into a human practice economy and, for AI presence, a machine platform economy. Most significantly, these economies are characterized as risk stressors at known locations on the human body, suggesting direct attacks on the head, trunk, and legs with resultant injuries in the three workstation stressor segments. The purpose of the study is to introduce adjacency expectation interaction pressure as the health and safety interaction between the two economies for the determination of injuries. The stressors determined management safety priorities, for example, how a head injury is adjacent to the human practice economy. The objective used adjacency stressors to identify body hazards and to protect the human practice economy from hazardous AI behaviors such as data-labeling errors, machine and deep learning errors, and AI hallucinations. The head was examined for mental/psychological interactions, the trunk for energy dispersion, and the legs for locomotor interactions. The data collected were configured into adjacency matrices to determine AI "voluntary injuries" relative to human "captive" injuries.
The results show that feasible management expectations prevent workplace injuries and validate and protect workplace opportunities as the promise of technology. However, captive and voluntary injuries make opportunity risky, vulnerable, and possibly a nuisance in the workplace because they scramble opportunity between AI and the rest of humanity.
Presented by
Madison McDonald and Taylor Brown
Institution
Alcorn State University
Keywords
Adjacency Stressor, Hazard Recognition, Machine and Human Practice Economy, Facility Definition, Voluntary and Captive Injuries

Advancing Urban Water Prediction using Machine Learning

Yogesh Bhattarai, Sanjib Sharma

Abstract
Enhancing urban resilience to environmental extremes poses nontrivial scientific challenges. Expanding urban development, aging water management infrastructure, and intensifying climate change will amplify flood and drought risk. A sound understanding of water hazards and risk dynamics is crucial for designing sustainable risk management strategies. The key objective of this study is to leverage recent advances in geospatial data, high performance computing, and machine learning to understand key physical processes, improve models and analyses, and quantify predictive uncertainty for rapid high-resolution prediction of flooding and water quality in an urban environment. Our findings provide insights into the key responses, mechanics, and interactions driving floods and water quality in an urban environment. We further demonstrate diverse scientific machine learning approaches that can be leveraged to enhance urban water predictions. Improved predictive models and datasets can help enhance community resilience to environmental extremes.
Presented by
Yogesh Bhattarai
Institution
Howard University, Department of Civil and Environmental Engineering
Keywords
urban resilience, machine learning

A Python Program that Demonstrates Deep Q-Learning in the CartPole Environment

Antwain M. Sparks and Vojislav Stojkovic

Abstract
Reinforcement learning (RL) is a branch of machine learning (ML) where an agent learns to make decisions by interacting with an environment. Q-learning (QL) is a reinforcement learning algorithm that trains an agent to assign values to its possible actions based on its current state, without requiring a model of the environment (model-free). Deep Q-learning (DQL) is an advanced reinforcement learning technique that combines Q-learning with deep neural networks to handle complex environments with large state spaces. It is particularly effective in scenarios like video games, robotics, and other high-dimensional problems where traditional Q-learning struggles. The CartPole environment is a widely used benchmark in reinforcement learning research and education. It models a classic control problem in which a cart moves along a track while balancing an inverted pendulum attached by a hinge. The agent's objective is to maintain the pole in an upright position by applying discrete left or right forces to the cart. The system is defined by a four-dimensional state space, consisting of cart position, cart velocity, pole angle, and pole angular velocity. The reward structure assigns a value of +1 for each step that the pole remains balanced, with episodes terminating if the pole deviates beyond a critical angle, the cart exceeds positional boundaries, or a maximum step limit is reached. The CartPole environment presents a non-trivial control challenge that makes it highly suitable for testing and comparing reinforcement learning algorithms, including Q-learning and Deep Q-learning. Due to its accessibility and interpretability, CartPole is often regarded as the "Hello World" of reinforcement learning experiments, providing a foundational platform for evaluating algorithmic performance before advancing to more complex tasks. We built a Python program that performs Deep Q-learning in the CartPole environment.
The program has applications in reinforcement learning research and education.
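The Q-learning update that DQL extends with a neural network can be shown in a self-contained sketch. A toy one-dimensional chain stands in for CartPole's four-dimensional state space (the environment and hyperparameters here are illustrative assumptions, not the poster's program):

```python
import random

# Hedged sketch: tabular Q-learning on a 5-state chain (states 0..4,
# reward +1 on reaching state 4). DQL replaces the Q table below with a
# neural network over continuous states such as CartPole's.
random.seed(0)
GOAL = 4
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in (0, 1)}  # a: 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.1                           # illustrative values

for _ in range(200):                      # training episodes
    s, steps = 0, 0
    while s != GOAL and steps < 100:      # step cap keeps episodes finite
        steps += 1
        # epsilon-greedy action selection (random on ties)
        if random.random() < eps or Q[(s, 0)] == Q[(s, 1)]:
            a = random.choice((0, 1))
        else:
            a = 0 if Q[(s, 0)] > Q[(s, 1)] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2
```

After training, the greedy policy moves right from every state, exactly the behavior DQL learns for CartPole with a network in place of the table.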
Presented by
Antwain M. Sparks <anspa2@morgan.edu>
Institution
Morgan State University, Department of Computer Science
Keywords
Reinforcement learning, Q-learning, Deep Q-learning, CartPole

Trustworthy AI Outputs through a Secure Data Protocol

Tarique Cummings and Vojislav Stojkovic

Abstract
Artificial intelligence (AI) systems are rapidly becoming central to decision-making, communication, and automation. However, the reliability of their outputs remains a critical concern, particularly in high-stakes domains such as finance, healthcare, and governance. Without safeguards, AI-generated content is vulnerable to tampering, forgery, and unauthorized use, which undermines trust and hinders adoption. This work addresses these challenges by introducing a cryptographic framework designed to ensure the integrity, authenticity, and confidentiality of AI-generated outputs.

The proposed framework is implemented in Python. It combines three complementary cryptographic techniques: commitment schemes with hash functions, digital signatures, and zero-knowledge proofs.

Commitment schemes with hash functions guarantee that once an output is generated, it cannot be altered without detection.

Digital signatures provide strong authentication, verifying that the content originates from a trusted source and has not been manipulated.

Zero-knowledge proofs enable third parties to verify the correctness of outputs without revealing sensitive data or internal model details, preserving confidentiality while enabling transparency.
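The first of these mechanisms, a hash-based commitment, can be sketched with the standard library alone (signatures and zero-knowledge proofs are omitted here; this is an illustrative sketch, not the framework's implementation):

```python
import hashlib
import hmac
import secrets

# Hedged sketch of a hash commitment over an AI output: publish the digest,
# reveal (nonce, output) later; any tampering changes the digest.

def commit(output: bytes):
    nonce = secrets.token_bytes(32)                   # hiding randomness
    digest = hashlib.sha256(nonce + output).hexdigest()
    return digest, nonce                              # publish digest, keep nonce

def verify(digest: str, nonce: bytes, output: bytes) -> bool:
    expected = hashlib.sha256(nonce + output).hexdigest()
    return hmac.compare_digest(expected, digest)      # constant-time comparison

digest, nonce = commit(b"model output")
```

Verification succeeds only for the exact committed bytes, which is the tamper-evidence property the framework relies on before layering signatures and proofs on top.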

By integrating these mechanisms into a secure and verifiable pipeline, the framework protects against both external and insider threats. It secures outputs against forgery and unauthorized modification and ensures that confidential information is shielded from unnecessary disclosure during the verification process. The result is a practical and flexible solution that supports trustworthy AI deployment across multiple contexts.

This contribution advances both theory and practice by offering a conceptual model and a working prototype. Potential applications include secure AI-powered APIs, trusted outputs for legal, medical, academic, and other systems, and verifiable content delivery in decentralized, blockchain-based infrastructures.
Presented by
Tarique Cummings
Institution
Morgan State University, Department of Computer Science
Keywords
Artificial intelligence, cryptography, integrity, digital signatures, zero-knowledge proofs