Our Research Projects

Brain Health Assessment via Multi-Modal Retinal Imaging with Deep Learning and Large Language Models

This is an active project currently making use of HPC facilities at Newcastle University.

Project Contacts

For further information about this project, please contact:


Project Description

This project aims to investigate the relationship between retinal imaging biomarkers and brain health by integrating multi-modal ophthalmic imaging data (OCT, OCTA, and fundus images) with advanced deep learning and large language models. The research focuses on developing foundation-model-based segmentation and representation learning frameworks to extract retinal structural and vascular features associated with neurodegenerative and cerebrovascular conditions. The project will involve training and evaluating large-scale neural networks for retinal layer segmentation, vessel extraction, feature embedding, and cross-modal analysis, with the goal of building predictive models for early brain health assessment.
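
To illustrate the overall pipeline shape described above, the minimal PyTorch sketch below pairs a frozen image encoder with a small prediction head. This is a hedged sketch only: the `RetinalBrainHealthModel` class, the stand-in encoder, and the dimensions are hypothetical illustrations, not the project's actual models.

```python
import torch
import torch.nn as nn


class RetinalBrainHealthModel(nn.Module):
    """Toy two-stage pipeline: a frozen image encoder yields a retinal
    feature embedding; a small head maps it to a brain-health score."""

    def __init__(self, encoder: nn.Module, embed_dim: int = 768):
        super().__init__()
        self.encoder = encoder              # e.g. a pretrained ViT backbone
        for p in self.encoder.parameters():
            p.requires_grad = False         # freeze the foundation model
        self.head = nn.Sequential(          # lightweight task-specific head
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),              # single assessment score
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.encoder(images)     # (batch, embed_dim) embedding
        return self.head(features)          # (batch, 1) prediction


# Stand-in backbone so the sketch runs; a real pipeline would use a
# pretrained vision foundation model instead.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 768))
model = RetinalBrainHealthModel(encoder)
scores = model(torch.randn(4, 3, 224, 224))  # four dummy fundus images
print(scores.shape)  # torch.Size([4, 1])
```

Freezing the encoder and training only a small head is one common way to reuse a foundation model's representations while keeping the trainable parameter count, and hence the compute budget, small.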


Software or Compute Methods

The project requires GPU-accelerated computation to train and evaluate deep learning models, including the Segment Anything Model (SAM/SAM2), Vision Transformers, and multimodal architectures that combine imaging and LLM-generated text embeddings. The software stack includes Python, PyTorch, CUDA-enabled libraries, MONAI, and additional machine learning toolkits for medical image processing. Workflows involve large-scale image preprocessing, model fine-tuning using LoRA, multi-modal feature extraction, and running extensive experiments on OCT/OCTA datasets. Training these models demands high-performance GPUs with sufficient VRAM, fast parallel processing, and large storage capacity for intermediate model checkpoints and experiment outputs.
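
As a concrete, hedged example of the parameter-efficient fine-tuning step, the sketch below applies LoRA adapters to a pretrained Vision Transformer via Hugging Face PEFT. The `google/vit-base-patch16-224` checkpoint and the choice of target modules are illustrative assumptions, not the project's confirmed configuration (the actual work targets SAM/SAM2 and multimodal architectures).

```python
import torch
from transformers import ViTModel
from peft import LoraConfig, get_peft_model

# Pretrained Vision Transformer backbone (a stand-in for the project's
# actual SAM/SAM2 or ViT checkpoints).
backbone = ViTModel.from_pretrained("google/vit-base-patch16-224")

# LoRA trains only small low-rank matrices injected into the attention
# projections, so the frozen backbone's weights are never updated.
lora_config = LoraConfig(
    r=8,                                # rank of the low-rank update
    lora_alpha=16,                      # scaling factor for the update
    target_modules=["query", "value"],  # attention projections in HF ViT
    lora_dropout=0.1,
)
model = get_peft_model(backbone, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the backbone

# Forward pass on a dummy batch standing in for preprocessed OCT slices.
outputs = model(pixel_values=torch.randn(2, 3, 224, 224))
print(outputs.last_hidden_state.shape)  # torch.Size([2, 197, 768])
```

Because only the injected adapter weights receive gradients, LoRA keeps optimiser state and VRAM requirements far below full fine-tuning of a foundation model, which is why it suits the shared GPU resources described above.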