Research Computing

The University of Sydney provides access to various research computing services and we help connect researchers to external resources.

Computing resources (at the University and elsewhere) come in many shapes and types; we aim to simplify your access and get your workloads onto the most suitable service. For more information, or to discuss your needs, please contact us.


Acknowledging the use of Research Computing services

All University of Sydney resources are available to Sydney researchers free of charge. The use of SIH services, including the Artemis HPC and its associated support and training, warrants acknowledgement in any publications, conference proceedings or posters describing work facilitated by these services. Continued acknowledgement of the use of SIH facilities helps ensure the sustainability of our services. Suggested acknowledgements and the terms of use appear below.

The authors acknowledge the scientific and/or technical assistance of (name of staff) of the Sydney Informatics Hub and access to the high performance computing facility, Artemis (and/or the Virtual Research Desktop service, Argus), at the University of Sydney.

The authors acknowledge the Sydney Informatics Hub and the University of Sydney’s high performance computing cluster Artemis (and/or Virtual Research Desktop service, Argus) which have contributed to the results reported in this paper.

The authors acknowledge the use of the (insert National or commercial computation or data infrastructure) facilitated by the Sydney Informatics Hub, which has contributed to the results reported in this paper.

Research Project Support

Modelling, Simulation, Scientific Visualisation, and Informatics

We assist with data-intensive and computationally-intensive modelling, simulation, visualisation, and informatics executed on National HPC facilities, the University of Sydney’s Artemis HPC and Argus VRDs, and Commercial Cloud. This includes support for accelerating, parallelising and scaling up algorithms; developing portable and reusable code; workflow management and automation; and optimising code for, and transitioning code between, HPC and Cloud services. Analyses we assist with include computational chemistry, molecular dynamics, computational fluid dynamics, finite element analysis, mathematical modelling, machine learning, computer vision, image analysis, scientific visualisation, and GPU programming.

We assist with preparing and reviewing project proposals for computing resources grants, such as the National Computational Merit Allocation Scheme, Intersect HPC Allocation Scheme, and Commercial Cloud research grants. We assist with resource benchmarking and demonstrating code scalability as required by these schemes.

To request assistance, submit the SIH assistance form. We will be in touch to arrange an initial meeting to discuss your project.

University of Sydney Resources

Artemis High Performance Computing Cluster

High Performance Computing (HPC) involves the use of supercomputers, parallel computing and/or computer clusters for advanced computing tasks including modelling, batch data processing and analysis.

The University runs a local cluster called Artemis. The Sydney Informatics Hub provides complementary support, training, and consultation for conducting your research on the Artemis HPC.

Artemis was upgraded in 2018 and hosts:

  • 7636 compute cores
  • 108 NVIDIA V100 GPUs
  • 56 Gbps FDR Infiniband networking
  • A high-performance Lustre filesystem
  • Three high memory nodes with 6 terabytes of RAM per node
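Work on a cluster like Artemis is typically submitted as a batch job through a scheduler rather than run interactively. As an illustrative sketch only (the project code, resource requests, and module name below are placeholders, not Artemis-specific values), a job script for a PBS-style scheduler might look like:

```shell
#!/bin/bash
# Illustrative PBS-style batch job script. The project code, resource
# requests and module name are placeholders, not Artemis specifics.
#PBS -P MYPROJECT                  # project/allocation code
#PBS -N example_job                # job name
#PBS -l select=1:ncpus=4:mem=16GB  # 1 node, 4 cores, 16 GB RAM
#PBS -l walltime=02:00:00          # maximum run time

cd "$PBS_O_WORKDIR"                # start in the submission directory
module load python                 # load site-provided software
python analysis.py > results.txt   # the actual computation
```

A script like this would be submitted with a command such as `qsub job.pbs` and monitored with `qstat`; consult the Artemis Online User Guide for the actual project codes, queues, and resource limits.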

Artemis Terms of Use

Please carefully read the Artemis HPC Terms of Use before you use Artemis. In using and continuing to use the HPC Service, you agree to be bound by these Terms. If you do not accept these Terms, you should stop using the HPC Service.

Artemis Documentation

To get access to Artemis you simply need to fill out a Research Data Management Plan and request HPC access.

Find out more about the Artemis service, including hardware specifications, in the Artemis Online User Guide.


Argus Virtual Research Desktops

The University’s ‘Argus’ Virtual Research Desktop (VRD) service delivers on-demand computing resources, designed for graphical processing and visualisation within a graphical user interface.

Argus offers virtual machines with the following operating systems: Windows 7, Windows 10, RedHat 7, and Ubuntu 16.04 LTS. The University provides small, medium and large virtual machines suited to computationally-intensive applications:

  Size     CPU cores   RAM      GPU memory (GB)
  Small    4           16 GB    2
  Medium   8           32 GB    4
  Large    16          126 GB   8

Argus documentation

Contact us for information on how to gain access to a machine. For detailed information see the Argus user guide.

Virtual Machines

You can get access to a small dedicated virtual machine (VM) computing environment useful for hosting websites, applications, or databases. See the help page for more information.


We work closely with ICT to provide additional tools, platforms, and services to support and optimise your research experience. These include eNotebooks, REDCap, Office365, DropBox, GitHub, Matlab, CLC Genomics, and IPA, and we are adding more tools and software for researchers all the time. For details about many of these services, consult the ICT Knowledge Base.

Research Data Store

The University offers secure unlimited storage for all your research data. To get access simply fill out a Research Data Management Plan and request RDS access.

Training Workshops

Training on the use of these platforms, and more, is covered by our regular workshops hosted at the University.

National Computational Infrastructure


Australia’s National Computational Infrastructure (NCI) facility is free for academic and research users, and provides computing services complementary to those at the University. Access to NCI is generally through a merit-based application once a year, via the National Computational Merit Allocation Scheme (NCMAS).

NCI facilities:

  • HPC (Raijin)
  • GPU computing (Raijin)
  • Cloud computing (Tenjin)
  • Remote Visualisation (Massive)
  • Data Storage

Pawsey Supercomputing

The Pawsey Supercomputing Facility is free for academic research users. Access is gained via NCMAS, and additional time can be obtained through various access schemes.

Pawsey facilities:

  • HPC (Magnus, Galaxy)
  • Cloud computing (Nimbus)
  • Remote Visualisation (Zeus)
  • Data Storage

Australian Research Data Commons

The ARDC NeCTAR research cloud is free for users with AAF credentials (i.e. your unikey) for a limited trial period. You can apply for more resources at any time.

Nectar facilities:

  • Cloud computing
  • Virtual Labs


AARNet Cloudstor

AARNet Cloudstor is built on the AARNet backbone in Australia, providing fast and secure connections between research institutes. It can also be used to share data with external collaborators. It is free for users with AAF credentials (i.e. your unikey).

Cloudstor facilities:

  • Data Storage and transfer

Commercial Providers

The University has collaborations and affiliations with many commercial computing services. Most offer some free tier of compute, many are looking for research workloads to test their facilities, and you can always purchase additional time and resources to scale rapidly as your needs grow. Some common providers we can assist you with are:


Intersect

Intersect offers several avenues for access to computing facilities and services: a mix of paid, merit-based, and free services for University of Sydney researchers. See their website for more details.

Intersect facilities:

Microsoft Azure

Access Microsoft Azure through:

  • AI for Earth, Good, Accessibility, and Humanitarian Action Grants
  • Project by project: Auda Eltahla


Amazon Web Services

Access AWS through:

Google Cloud

Access GCP through:

  • GCP Research Credits


For more information about accessing Commercial Cloud, please contact us.


Glossary

HPC - High Performance Computing. This term generally describes computational tasks that can’t be run on your local laptop or desktop. Jobs may be long-running, use many CPUs, require large amounts of RAM, read and write files intensively, and use large amounts of data storage (terabytes); they run without interaction with the program, via “batch” processing. The terms HTC (high throughput computing), supercomputing, parallel computing, and mainframe computing describe specific parts of HPC.

Cloud Computing - The cloud typically refers to any computing done in a remote location, accessed from your local laptop or desktop via the internet. Clouds do have a physical location: datacentres around the world, filled with the computers that make up the cloud. When you connect to the cloud, you are using one (or many) of these computers, with the benefit of dedicated hardware maintained by someone else. Cloud computing suits workloads that grow and shrink over the course of your research, since you allocate computational resources (adding or removing RAM, storage, CPUs, and GPUs) only as you require them. You can also install bespoke software, pipelines, and operating systems with administrator privileges.

Virtual Machine - A virtual machine (VM) is normally a shared part of a computer hosted in the cloud. VMs are good for hosting things like websites, which need 24/7 availability but are usually not computationally intensive; they are one part of cloud computing. You can also run a virtual machine on your own local machine when you need to isolate specific versions of software or use custom operating systems.