====== External HPC Facilities ======

^ Facility Name ^ Facility Type ^ Link ^ Help Guides ^ Facility Summary ^ CPU Cores (architecture) ^ GPU Cards (model) ^ RAM (min-max) ^ Notes ^
| ARCHER2 | Tier 1 | https://www.archer2.ac.uk/ | [[https://docs.archer2.ac.uk/|ARCHER2 documentation]] | Cray / HPE | 750,000 (x86_64) | | 256GB - 512GB | Operated by EPCC / Edinburgh University. |
| Bede | Tier 2 | https://n8cir.org.uk/bede/ | [[https://bede-documentation.readthedocs.io/en/latest/index.html|Bede documentation]] | IBM Power Architecture + Nvidia GPU, ARM + Nvidia GPU | 1,184 (Power), 360 (ARM) | 154 (V100/T4), 5 (H100) | 256GB - 512GB | Operated and funded by the N8 CIR, hosted by Durham University. |
| DiRAC | | https://dirac.ac.uk/ | [[https://dirac.ac.uk/data-intensivecambridge-diac/|DIAC]] | Data intensive, heterogeneous architecture for\\ complex simulation and modelling | 30,412 (x86_64) | 746,496 GPU cores | 157TB | Access to DiRAC resources is managed through the STFC's independent Resource Allocation Committee (RAC). Four different services hosted at Cambridge, Leicester, Durham and Edinburgh. |
| ::: | ::: | ::: | [[https://dirac.ac.uk/data-intensive-leicester/|DIAL]] | ::: | 25,600 (x86_64) | - | 102TB | ::: |
| ::: | ::: | ::: | [[https://dirac.ac.uk/memory-intensive-durham/|Cosma at Durham]] | Memory intensive, large-scale cosmological simulations | 25,600 (x86_64) | 7 (A100, V100, MI200, MI100) | 102TB | ::: |
| ::: | ::: | ::: | [[https://dirac.ac.uk/extreme-scaling-service-edinburgh/|Tursa at Edinburgh]] | Extreme scaling, optimised for particle physics | 4,272 (x86_64) | 712 (A100) | 178TB | ::: |

----

===== ARCHER2 =====

ARCHER2 is the UK National Supercomputing Service, a world-class advanced computing resource for UK researchers. ARCHER2 is provided by UKRI, EPCC, HPE Cray and the University of Edinburgh: [[https://www.archer2.ac.uk/|ARCHER2 Website]].

Extensive free online training materials for HPC are available at https://www.archer2.ac.uk/training/materials/

UK researchers must [[https://www.archer2.ac.uk/support-access/access.html|apply for time on ARCHER2]], but it is possible to gain temporary access with free CPU or GPU credits by passing the [[https://www.archer2.ac.uk/training/driving-test.html|ARCHER2 driving test]].

Funding for the service is regularly reviewed and the current ARCHER2 service end date is 21st November 2026. Please check [[https://www.archer2.ac.uk/|News & Announcements]] for updates.

===== Bede =====

Bede has had support extended to **March 2026**. It is currently unknown whether the service will continue past that date, so please plan any use of the Bede HPC facility accordingly.

Newcastle is a member of the N8 Centre of Excellence in Computationally Intensive Research (N8 CIR). Through this membership Newcastle researchers have access to a GPU-focused HPC machine. Bede comprises 32 IBM Power 9 dual-CPU nodes, each with 4 NVIDIA V100 GPUs and a high-performance interconnect. This is the same architecture as the US government's SUMMIT and SIERRA supercomputers, which occupied the top two places in a recently published list of the world's fastest supercomputers. Bede is the first supercomputer in the UK to use IBM's Power IC922 server, adding 6 nodes with NVIDIA T4 Tensor Core GPU accelerators to improve AI inference. More information on Bede can be found on the [[https://n8cir.org.uk/bede/|N8 CIR website]].
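
Jobs on Bede are managed by the Slurm scheduler. As a rough illustration only, a single-GPU batch job might look something like the sketch below; the project account code, partition name and module name are assumptions rather than confirmed Bede-specific values, so check the Bede documentation before adapting it.

<code bash>
#!/bin/bash
# Minimal single-GPU Slurm batch script sketch for Bede.
# The account code, partition name and module below are placeholders /
# assumptions - consult the Bede documentation for the correct values.
#SBATCH --account=my-project        # hypothetical project account code
#SBATCH --partition=gpu             # assumed name of the V100 GPU partition
#SBATCH --gres=gpu:1                # request one GPU
#SBATCH --time=01:00:00             # one hour wall time
#SBATCH --job-name=bede-example

# Load a CUDA toolkit module (exact module name/version is an assumption).
module load cuda

# Report the GPU allocated to this job.
nvidia-smi
</code>

Submit the script with ''sbatch job.sh'' and check its progress with ''squeue -u $USER''.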
If you want to make use of Bede's supercomputing capabilities, have a look at their [[https://bede-documentation.readthedocs.io/en/latest/usage/index.html|documentation]] page and find out how to get [[https://n8cir.org.uk/bede/rse-support-bede/|support]]. Local support at Newcastle University for Bede users is provided by the Research Software Engineering team.

----

===== JADE / JADE2 =====

The JADE2 national Tier 2 HPC GPU facility __closed__ in **late 2024**. Further access is no longer possible.

JADE2 was an EPSRC-funded Tier 2 regional High-Performance Computing cluster based on GPUs. It was intended to support Artificial Intelligence research only. The computing nodes were based on the NVIDIA DGX MAX-Q Deep Learning System platform. The cluster had 63 servers, each containing 8 NVIDIA Tesla V100 GPUs linked by NVIDIA's NVLink interconnect technology. Newcastle University was a member of the consortium of institutions sharing this resource.

----

[[:advanced:index|Back to Advanced Topics]]