The University of Sydney provides access to various research computing services, and we help connect researchers to external resources.
Computing resources (at the University and elsewhere) come in different shapes and types; we hope to simplify your access and get your workloads onto the most suitable service. For more information, or to discuss your needs, please contact us.
- Acknowledging use of research computing services
- University of Sydney Computing Infrastructure
- National Computational Infrastructure
- Commercial Providers
Acknowledging the use of Research Computing services
The authors acknowledge the scientific and/or technical assistance of (name of staff) of the Sydney Informatics Hub and access to the high performance computing facility, Artemis (and/or the Virtual Research Desktop service, Argus), at the University of Sydney.
The authors acknowledge the Sydney Informatics Hub and the University of Sydney’s high performance computing cluster Artemis (and/or Virtual Research Desktop service, Argus) which have contributed to the results reported in this paper.
The authors acknowledge the use of the (insert National or commercial computation or data infrastructure) facilitated by the Sydney Informatics Hub, which has contributed to the results reported in this paper.
University of Sydney Resources
Artemis High Performance Computing Cluster
High Performance Computing (HPC) involves the use of supercomputers, parallel computing and/or computer clusters for advanced computing tasks including modelling, batch data processing and analysis.
The University runs a local cluster called Artemis. The Sydney Informatics Hub provides complementary support, training, and consultation for conducting your research on Artemis.
Artemis was upgraded in 2018 and hosts:
- 7636 compute cores
- 108 NVIDIA V100 GPUs
- 56 Gbps FDR Infiniband networking
- A high-performance Lustre filesystem
- Three high memory nodes with 6 terabytes of RAM per node
To get access to Artemis you simply need to fill out a Research Data Management Plan and request HPC access.
Find out more about the Artemis service, including hardware specifications, in the Artemis Online User Guide.
Further information about using the Artemis service can be found here:
- Artemis Cheat Sheet
- Artemis Live Statistics
- Artemis Software List
- Artemis FAQs
- Artemis technical support, e.g. login issues, software installation and versions
- Support with using or doing research on Artemis, e.g. developing/optimising code for Artemis and best practices for using computing platforms
Argus Virtual Research Desktops
The University’s ‘Argus’ Virtual Research Desktops (VRDs) deliver on-demand computing resources. They are designed for graphical processing and visualisation within a graphical user interface.
Argus offers virtual machines with the following operating systems: Windows 7, Windows 10, RedHat 7, Ubuntu 16.04 LTS. The University provides small, medium and large Virtual Machines suited to computationally-intensive applications.
You can get access to a small dedicated virtual machine (VM) computing environment for various research and research support needs. See the help page for more information.
We work closely with ICT to provide additional tools, platforms, and services to help support and optimise your research experience. These include eNotebooks, REDCap, Office365, DropBox, GitHub, Matlab, CLC Genomics, and IPA, and we are adding more tools and software for researchers all the time. For details about many of these services, see here and consult the ICT Knowledge Base.
Research Data Store
The University offers secure unlimited storage for all your research data. To get access simply fill out a Research Data Management Plan and request RDS access.
Training on the use of these platforms and more is covered by our regular workshops hosted at the University.
National Computational Infrastructure
Australia’s National Computational Infrastructure (NCI) facility is free for academic and research users and provides complementary computing services for your research. Access to NCI is generally through a merit-based application once a year, via NCMAS.
- HPC (Raijin)
- GPU computing (Raijin)
- Cloud computing (Tenjin)
- Remote Visualisation (Massive)
- Data Storage
Pawsey Supercomputing Centre
- HPC (Magnus, Galaxy)
- Cloud computing (Nimbus)
- Remote Visualisation (Zeus)
- Data Storage
Australian Research Data Commons
- Cloud computing
- Virtual Labs
AARNet Cloudstor is built on the AARNet backbone in Australia, providing fast and secure connections between research institutes. It can also be used to share data with external collaborators. It is free for users with AAF credentials (i.e. your UniKey).
- Data Storage and transfer
The University has collaborations and affiliations with many commercial computing services. Most services offer some free-tier level of compute. Many are looking for research workloads to test their facilities, and you can always purchase additional time and resources to scale rapidly for your needs. Some common providers we can assist you with are:
Intersect offers several avenues for access to computing facilities and services: a mix of paid, merit-based, and free services for USyd researchers. See their website for more details.
- Cloud computing
- Training workshops (these are regularly scheduled on campus).
Access Microsoft Azure through:
- AI for Earth, AI for Good, AI for Accessibility, and Humanitarian Action grants
- Project by project: Auda Eltahla
Access AWS through:
Access GCP through:
- GCP Research Credits
For more information about accessing Commercial Cloud please contact firstname.lastname@example.org.
HPC - High Performance Computing. This term generally describes computational tasks that can’t be run on your local laptop/desktop. Computing jobs may be long-running, use many CPUs, require large amounts of RAM, read and write files intensively, and use large amounts of data storage (terabytes), and they are run non-interactively via “batch” processing rather than through direct interaction with the program. The terms HTC (high-throughput computing), supercomputing, parallel computing, and mainframe computing describe specific forms of HPC.
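As a sketch of what “batch” processing looks like in practice, a minimal job script for a PBS-style scheduler (the kind used on many HPC clusters, including Artemis) might look like the following. The project name, resource requests, and program name are placeholders, not a real configuration:

```shell
#!/bin/bash
# Minimal PBS-style job script (a sketch; project, resources, and the
# program being run are hypothetical placeholders).
#PBS -P MyProject                   # project/allocation to charge
#PBS -l select=1:ncpus=4:mem=8GB   # request 1 node, 4 CPUs, 8 GB RAM
#PBS -l walltime=02:00:00          # maximum run time (2 hours)
#PBS -j oe                         # merge stdout and stderr into one log

cd "$PBS_O_WORKDIR"                # start in the directory the job was submitted from
./my_analysis input.dat > results.out   # the actual (hypothetical) workload
```

The script would typically be submitted with `qsub job.pbs`, after which it queues and runs unattended; `qstat` reports its status. This hands-off submit-and-wait cycle is what distinguishes batch processing from interactive use.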
Cloud Computing - The cloud typically refers to any computation done in a remote location, with interfacing from your local laptop/desktop via the internet. Clouds do in fact have a physical location: datacenters around the world filled with the computers that make up the cloud. When you connect to the cloud, you are using one (or many) of these computers. The benefit is dedicated hardware maintained by someone else. Cloud computing is well suited to workloads that grow and shrink over the course of your research, since you only need to allocate computational resources (adding or removing RAM, storage, CPUs or GPUs) as you require them. You can also install bespoke software, pipelines, and operating systems with administrator privileges.
Virtual Machine - A virtual machine is normally a shared part of a computer hosted in the cloud. These are good for hosting things like websites, where you need 24/7 availability but the work is usually not too computationally intensive. They are one part of cloud computing. You can also run a virtual machine on your own local machine when you need to isolate specific versions of software or use a custom operating system.