====== Getting Started ======
**June 2025**
Dear users, we are now in the acceptance-testing phase of the £2.5M Comet HPC system. During this critical period the RSE-HPC team must prioritise the testing work, and we apologise that this may mean we cannot deal with your queries as quickly as we would like. Please bear with us: the introduction of Comet will bring a great improvement to the HPC service we provide over the coming months.
===== Do I Need HPC? =====
Not every type of computation or workflow benefits from HPC facilities. This section describes the main compute facilities you can access at Newcastle University, and which of them are best suited to different types of computational work.
* [[:started:do_i_need_hpc|What Are My Compute Options?]]
----
===== Registering =====
How to request a new HPC project, and how funded and unfunded projects differ.
* [[:started:register|How Do I Register?]]
* [[:started:paying|Do I Have to Pay?]]
**Rocket will be replaced by Comet in 2025**
**All** users of University HPC facilities must take and pass the [[https://hpc.researchcomputing.ncl.ac.uk/quiz/|HPC Driving Test]] before accessing the **Comet** HPC service. If you have previously used Rocket __you must still take this test__.
== HPC Replacement ==
Please be aware that Rocket, our current High Performance Computing cluster, is at end of life and will be replaced in Summer 2025 by a new cluster named Comet. If you start working on Rocket now, you will need to migrate your workloads to the new system.
== Data Storage During Replacement ==
The entire Rocket /nobackup filesystem should currently be considered at risk. It is beyond end of life and should hold only code and data that are actively in use by running jobs. There is a very real possibility that this filesystem will fail, so be prepared to lose code and data at any point. For research data, we strongly recommend applying for storage on [[https://services.ncl.ac.uk/itservice/core-services/filestore/research/|RDW]]: the first 5TB are free, and the storage is mounted on the Rocket login nodes. [[https://nuservice.ncl.ac.uk/HEAT/Modules/SelfService/#serviceCatalog/request/88B64AB01D354037AB940E0608F34E4B|Application Form]]
== Code Storage ==
All code you write to run on the HPC, including batch job scripts and processing scripts, should be version controlled with git. (//Not sure about git?// Sign up for a git workshop or study the materials on our [[training:index|Training & Workshops]] page.) [[https://github.com/|GitHub]] provides backup, enables collaboration and transfer to other HPC systems, and allows you to recover from mistakes. The University has a subscription to GitHub Enterprise; you should register your account using your University email address (@newcastle.ac.uk). Develop code on your local machine (PC or laptop), push it to a remote repository on GitHub, and pull it into a clone of the repository inside ''/nobackup/proj/''.
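The push-and-pull cycle described above can be sketched as follows, with a local bare repository standing in for GitHub (all paths, names and the email address here are illustrative, and on the cluster the clone would live under ''/nobackup/proj/''):

```shell
# A bare repository standing in for your GitHub remote
git init --bare -b main /tmp/demo-remote.git

# "Local machine": clone the repository, commit a script, push it
git clone /tmp/demo-remote.git /tmp/demo-local
cd /tmp/demo-local
git config user.email "demo@example.com"   # illustrative identity
git config user.name "Demo User"
git checkout -B main
echo 'echo "hello from HPC"' > job.sh
git add job.sh
git commit -m "Add job script"
git push origin main

# "HPC login node": clone into the project space and pull updates
git clone /tmp/demo-remote.git /tmp/demo-hpc
cd /tmp/demo-hpc
git pull
```

With a real GitHub remote, only the clone/commit/push/pull commands change their URL; the workflow is identical.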
//NB: GitHub is for versioning text files. It is NOT a full backup solution and is not suitable for storing data or binary files.//
----
===== Connecting =====
In addition to the normal methods of connecting shown below, the **Comet** HPC facility offers the ability to connect to interactive applications such as RStudio, Jupyter and MATLAB (among others).
For more information see our [[:advanced:interactive|Open On-Demand and interactive applications]] guide in the [[:advanced:index|Advanced Topics]] section.
* [[:started:connecting_onsite|Connecting to HPC - On Campus]]
* [[:started:connecting_offsite|Connecting to HPC - Off Campus]]
* [[:started:data_transfer|Transferring Data]]
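In brief, you log in over SSH and move data with tools such as ''rsync''; the username and hostname below are placeholders, not the real login node address (see the guides above for the actual details):

```shell
# Log in over SSH (username and hostname are illustrative)
ssh n1234567@hpc.example.ncl.ac.uk

# Transfer a directory of input data to your project space
rsync -av inputs/ n1234567@hpc.example.ncl.ac.uk:/nobackup/proj/myproject/inputs/
```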
----
===== Basic HPC Concepts =====
This section explores basic HPC concepts, explains how they differ from desktop computing environments, introduces the common software tools, and shows how HPC can benefit different types of compute requirement.
Understanding the concepts presented in this section will allow you to make more effective use of our HPC resources.
* [[:started:concepts|Basic HPC Concepts]] - Why is it different? What can it do? What can it not do?
* [[:started:module_basics|Introduction to software modules]] - Loading software on HPC
* [[:started:slurm_basics|Introduction to Slurm]] - What is Slurm and how do I use it to run jobs?
* [[:started:filesystems|Data and HPC Filesystems]] - Where should I work? Where should I store my data?
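The module system covered above follows a simple pattern, sketched below (the package name and version are illustrative and the available modules differ between Rocket and Comet):

```shell
module avail            # list the software modules available on the cluster
module load R/4.3.0     # load a specific package and version (illustrative)
module list             # show the modules currently loaded
module purge            # unload everything, back to a clean environment
```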
----
===== Your First HPC Job =====
If you have never used Slurm before, this section will show you how to write a //very basic// batch job. You will want to understand the basics in this section first, before you move on to more advanced Slurm job types.
* [[:started:first_job|Writing Your First HPC Job]]
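As a preview of what that guide covers, a minimal batch script looks something like this (the job name, resource values and file name are illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=first-job    # name shown in squeue output
#SBATCH --ntasks=1              # a single task
#SBATCH --mem=1G                # memory requested (illustrative)
#SBATCH --time=00:10:00         # wall-clock limit, hh:mm:ss

# The commands below run on the compute node that Slurm allocates
echo "Hello from $(hostname)"
```

You would submit this with ''sbatch first-job.sh'' and monitor it with ''squeue -u $USER''; by default the output is written to a ''slurm-<jobid>.out'' file in the submission directory.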
----
===== HPC Resources & Jobs =====
* Available HPC Resources
* [[:started:resource_overview|Resources and Partitions]] - How are resources organised?
* [[:started:rocket_resources|Resources and Partitions - Rocket]]
* [[:started:comet_resources|Resources and Partitions - Comet]]
* Typical HPC Job Types
* [[:started:job_simple|Simple sequential job]]
* [[:started:job_ram|Single large-memory (RAM) job]]
* [[:started:job_long|Long-running serial job]]
* [[:started:job_parallel|Task array job]]
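Of the job types above, the task array pattern deserves a quick sketch: Slurm runs the same script once per index, and each copy reads its index from ''SLURM_ARRAY_TASK_ID'' (the array range and file names here are illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=array-demo
#SBATCH --array=1-10            # run ten independent copies of this script
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

# Each array task picks its own input file using its index
INPUT="input_${SLURM_ARRAY_TASK_ID}.dat"
echo "Task ${SLURM_ARRAY_TASK_ID} processing ${INPUT}"
```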
For more types of job customisation, see our [[:advanced:slurm|Advanced Slurm Job Optimisation]] page under the [[:advanced:index|Advanced Topics]] section.
----
===== Further Help =====
If you have read this far, you should be able to log in to our HPC facilities, navigate the various filesystems, and launch the most common types of Slurm job.
For further information, including optimising your Slurm settings, guidance on specific software packages, containers and interactive desktop sessions, please see our [[:advanced:index|Advanced Topics]] section:
* [[:advanced:index|Advanced Topics]]
----
=== Recent Changes ===
These pages have recently been **changed** or **updated** within the //Getting Started// section:
{{changes>ns=started&count=15&type=edit&render=pagelist}}