The Danish National Life Science Supercomputing Center, Computerome, is an HPC facility specialized for life science. Users include research groups from all Danish universities, large international research consortia, and users from industry and the public health care sector. They all benefit from the fast, flexible, and secure infrastructure and the ability to combine different types of sensitive data and perform analyses on them. Computerome is physically installed at the DTU Risø campus and managed by a strong team of specialists from DTU. Computerome is the official supercomputer of ELIXIR Denmark, a member of ELIXIR, the European infrastructure for biological information.

Life science research places special demands on the amount of data being processed, on the transfer time between storage and computing resources, and on the size of local storage on the nodes. Computerome fulfills all of those demands. It also meets statutory requirements and the highest security levels necessary when working with sensitive data. It debuted in November 2014 at #121 on the TOP500 Supercomputing Sites list within life science, and it is constantly expanding.

Using Cloud
Computerome uses cloud technology as a delivery mechanism for HPC. This gives some unique advantages in terms of security, and it hides the underlying hardware/software complexities. Read more about our Cloud solution.

Technical set-up
Computerome's present compute resources consist of 16,048 CPU cores with 92 terabytes of memory, connected to 3 petabytes of high-performance storage, with a total peak performance of more than 483 teraFLOPS (483 million million floating-point operations per second).
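The relationship between core count and peak performance can be sketched with a back-of-envelope calculation: theoretical peak FLOPS is roughly cores × clock rate × floating-point operations per cycle. The per-core figures below are illustrative assumptions chosen to land near the quoted 483 teraFLOPS, not Computerome's actual hardware specification.

```python
# Back-of-envelope peak performance estimate for a CPU cluster.
# Peak FLOPS ≈ cores × clock rate × FLOPs per cycle per core.
cores = 16048              # core count quoted for Computerome
clock_hz = 2.5e9           # assumed 2.5 GHz clock (illustrative)
flops_per_cycle = 12       # assumed FLOPs/cycle (illustrative, e.g. vector FMA units)

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.0f} TFLOPS")  # → 481 TFLOPS, in the ballpark of the quoted 483
```

Real systems sustain well below this theoretical peak; the TOP500 ranking is based on measured LINPACK performance, not the peak figure.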

The computer hardware is funded with grants from the Technical University of Denmark (DTU), University of Copenhagen (KU) and the Danish e-infrastructure Cooperation (DeiC).

Pilot Projects
Researchers from any Danish university may submit an expression of interest for a project to become a DeiC National eScience Pilot Project. Read more about pilot projects.

Computerome vs. traditional HPC

Traditional HPC: Optimized for small data sets, large amounts of CPU power, and specific tasks like simulations.
Computerome: Optimized for big data analysis with parallelism, intelligent handling of multiple data types, and diverse tasks (imaging, DNA assembly).

Traditional HPC: Built for speed.
Computerome: Built for fast data access and processing.

Traditional HPC: Data in and out via network.
Computerome: Long-term storage of sensitive data with maximum security.

Traditional HPC: For best performance, jobs are adapted to fit the machine's capabilities.
Computerome: The system automatically adapts to the job through applications.

Traditional HPC: Access through a queueing system.
Computerome: No waiting time; fast access, scalable to shifting demands.

Traditional HPC: Technical skills required of users; mastery of scripts.
Computerome: Easy to get started and use for non-technical users through web applications.

Computerome: Cloud as a delivery mechanism gives all the advantages of cloud together with HPC.
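To illustrate the "mastery of scripts" point above: on a traditional HPC system, even a simple analysis is typically wrapped in a batch script submitted to the queueing system. The sketch below is a generic PBS-style example; the directive names, module name, and file names are illustrative assumptions, and other schedulers use different syntax.

```shell
#!/bin/bash
#PBS -N align_sample        # job name (illustrative)
#PBS -l nodes=1:ppn=8       # request 8 cores on one node
#PBS -l walltime=04:00:00   # maximum run time before the job is killed

cd "$PBS_O_WORKDIR"         # run from the directory the job was submitted from
module load bwa             # load an aligner (module name is illustrative)

# Align sequencing reads against a reference genome using the 8 requested cores.
bwa mem -t 8 ref.fa reads.fq > aln.sam
```

The job waits in the queue until resources free up, which is the contrast the table draws with on-demand, web-based access.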