This is an active project currently making use of HPC facilities at Newcastle University.
For further information about this project, please contact:
This project aims to develop and evaluate machine‑learning and AI workflows relevant to current and upcoming faculty research. Participants will run controlled experiments on HPC resources to benchmark algorithms, test data‑processing pipelines, and assess model‑training strategies across a range of bioinformatics research domains. The activity will build skills that support faculty research and produce validated workflows and best‑practice guidelines to support future research projects across the institution.
The project will make use of common machine‑learning and data‑science tools, including Python, PyTorch, TensorFlow, scikit‑learn, JAX, and associated data‑processing libraries. Workloads will include GPU‑accelerated model training, hyperparameter optimisation, distributed training experiments, and benchmarking of parallel data‑processing pipelines. The project will also evaluate workflow‑management tools such as Slurm job arrays, containerised environments (e.g., Singularity/Apptainer), and reproducible ML pipelines.
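As a minimal sketch of how a Slurm job array can drive a hyperparameter sweep of the kind described above, the snippet below maps each array task onto one configuration from a small grid. The grid values, the environment-variable fallback, and the script structure are illustrative assumptions, not project settings; only `SLURM_ARRAY_TASK_ID` is a real variable that Slurm sets for each task in a job array.

```python
import itertools
import os

# Hypothetical hyperparameter grid for a sweep; each Slurm array task
# trains one configuration. Values are illustrative, not project settings.
LEARNING_RATES = [1e-4, 1e-3, 1e-2]
BATCH_SIZES = [32, 64]

# Full Cartesian product of the grid, in a stable order so that an
# array index maps deterministically onto one configuration.
GRID = list(itertools.product(LEARNING_RATES, BATCH_SIZES))


def config_for_task(task_id: int) -> dict:
    """Map a SLURM_ARRAY_TASK_ID onto one hyperparameter configuration."""
    lr, batch_size = GRID[task_id % len(GRID)]
    return {"lr": lr, "batch_size": batch_size}


if __name__ == "__main__":
    # Slurm sets SLURM_ARRAY_TASK_ID for each task in a job array,
    # e.g. a sweep submitted with: sbatch --array=0-5 train.sh
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))
    cfg = config_for_task(task_id)
    print(f"task {task_id}: training with {cfg}")
```

In practice the script body would launch the actual training run (for example a PyTorch training loop) with the selected configuration; keeping the index-to-configuration mapping deterministic makes individual array tasks easy to rerun and compare.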