The Fabian Services

Fabian's overall aim is to provide a high-quality central service that lets researchers carry out compute-intensive tasks and process large datasets with ease.

Fabian provides a High Performance Computing (HPC) Cluster with the following key features:

  • A mixture of compute nodes providing high core counts and large memory on which users can run applications.
  • High capacity local disk storage on which to work with large datasets and to process results.
  • A suite of applications optimised for the high performance computing hardware.
  • High-throughput job scheduler configured to manage users' workloads (an example job script appears below).
  • High-speed connections into the LSE networks, LSE research systems and other academic networks.
  • A range of access methods, from command-line interfaces through to web-based graphical and desktop access.

The cluster is available for all users to:

  • Use a variety of applications.
  • Build their own bespoke applications and custom application modules.
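
The scheduler's exact syntax is not described here; as a minimal sketch, assuming a SLURM-style batch system (the directive values and script names are illustrative), a simple job script looks like this:

    #!/bin/bash
    #SBATCH --job-name=example     # name shown in the queue
    #SBATCH --ntasks=1             # a single task
    #SBATCH --cpus-per-task=1      # using one core
    #SBATCH --mem=4G               # memory for the job
    #SBATCH --time=01:00:00        # wall-clock limit

    module load R                  # pick up an application module
    Rscript analysis.R             # run the workload

Such a script would be submitted with sbatch and monitored with squeue.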

Upcoming Services

We have a number of services that will pilot in the next few months and become available to all users once the terms of the service are agreed and testing is complete.

Database Service (DBaaS)

We will deliver a database service, initially for use solely within the HPC environment. The first database product will be Oracle 12c, as the school has a campus license that we can use at no cost.

If you are interested in becoming an early adopter of this service and shaping the form it will take please contact fabian@lse.ac.uk.

Git Version Control

In order that researchers can share and version-control their code, a centrally managed GitLab service will be installed. This service has been used successfully across the school in both research and administrative contexts, and will benefit researchers by providing a history of changes and easier deployment and sharing of code.
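
As a brief illustration of the workflow such a service enables (the repository URL below is hypothetical):

    git clone https://gitlab.lse.ac.uk/myproject/analysis.git   # URL illustrative
    cd analysis
    git add analysis.R                     # stage a changed file
    git commit -m "Describe the change"    # record it in the history
    git push                               # share it with collaborators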

Hardware Configuration

The Fabian Cluster service provides access to

  • 15 nodes totaling 360 CPU cores
  • 2TByte memory across the cluster (128GByte per node)
  • Soon there will be nodes providing 1TByte memory 

provided by the following hardware arrangement:

Login Node
Usage: Provides access to the compute nodes, storage and scheduler through various command-line and graphical interfaces.
Configuration: HPE ProLiant DL80 2U server

  • 8 CPU cores
  • 14GByte memory

15 Compute Nodes
Usage: Running user applications.
Configuration: HPE Apollo 6000

  • Intel v3 Haswell processors
  • 24 CPU cores
  • 128GByte memory
  • 1TByte local storage

2 Large Compute Nodes
Usage: Running user applications.
Configuration: HPE Apollo 6000

  • Intel v4 Broadwell processors
  • 28 CPU cores
  • 1TByte memory
  • 1TByte local storage

Storage
Usage: Holds your data while you are working on Fabian, whether file, block or object.
Configuration: HPE ProLiant DL80 2U server

  • 96TByte shared storage

2 Database Servers
Usage: Provide a relational database service.
Configuration: HPE BL460c blade servers

  • Intel v4 Broadwell processors
  • 128GByte memory
  • Scalable SAN storage

Software Supported

Fabian uses a modules environment to allow users to choose between, and run, multiple versions of software and the libraries they depend on.
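
As a minimal sketch of a typical session (the module names and versions below are illustrative, not Fabian's actual list):

    module avail                 # list the software modules on offer
    module load R/3.3.1          # load a specific version of R
    module list                  # confirm what is currently loaded
    module unload R/3.3.1        # swap it out when finished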

The Fabian environment aims to offer a range of standard software as used here at the LSE. The current list includes the following:

Compilers, Languages and Libraries

  • R (2 versions) with MPI and other libraries
  • Python (3 standard versions and 2 Anaconda distributions)
  • C++ (2 versions)

Statistical Applications

  • Matlab 2015a SP1
  • Stata 14 MP1, MP2 and MP24

General Utilities

  • StatTransfer v13

If you require other software, libraries or modules to be installed, please request these via email to fabian@lse.ac.uk and we will install them on the cluster for all users to access. Alternatively, you can install software or modules yourself into your home directory. If you wish to run multiple versions of software yourself, we will provide examples and assistance in doing so.
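
As a minimal sketch of a user-space install into your home directory (the package names are illustrative):

    # install a Python package under your home directory
    pip install --user numpy

    # install an R package into a personal library under $HOME
    mkdir -p ~/R/library
    Rscript -e 'install.packages("data.table", lib = "~/R/library", repos = "https://cran.r-project.org")'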

For more information about specialist software available at LSE, please see here.

Cluster Service Principles

The Fabian Cluster service has been designed to accommodate a range of user requirements.

A variety of use cases

  • Embarrassingly parallel / single-threaded jobs
  • SMP / multi-threaded, single-node jobs
  • MPI / parallel multi-node jobs

This is aimed at supporting the current activities of researchers across campus, and at allowing them to make increasing use of HPC as their research needs grow and their experience develops.
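
As a sketch of how these three styles differ at submission time (assuming a SLURM-style scheduler; the directives and program names are illustrative):

    # Embarrassingly parallel: an array of independent single-core tasks
    #SBATCH --array=1-100
    #SBATCH --ntasks=1

    # SMP / multi-threaded: one task using many cores on one node
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=24

    # MPI / multi-node: many tasks spread across several nodes
    #SBATCH --ntasks=96
    srun ./my_mpi_program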

Scalable architecture

  • Separate login node providing connections from the LSE networks.
  • Multiple compute nodes for different jobs (e.g. standard or high memory)
  • Dedicated storage nodes and connections to LSE storage.
  • An internal high speed network connecting the compute and storage nodes. 

The separation of tasks described above allows each component to be tuned for its specific task, delivering better performance to the user. It also allows the system to grow to reflect the overall capacity and capability requirements of users. If researchers have a specific requirement, they can add their own hardware to the system, taking advantage of the existing software and hardware resources within an integrated service.

Multiple storage tiers for user data

  • 500GB local scratch disk on every node
  • 30TB user home-directory space
  • Larger project based directory spaces

This allows users to store datasets in an optimal location while they are working with the data on Fabian.
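
As a minimal sketch of staging data through these tiers (the path names are illustrative, not Fabian's actual layout):

    # copy input from home to the fast local scratch disk
    cp ~/project/input.csv /scratch/$USER/

    # work against the local copy for speed
    Rscript analysis.R /scratch/$USER/input.csv

    # copy results to the larger project space and tidy up
    cp results.rds /projects/myproject/
    rm /scratch/$USER/input.csv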

Multiple user environments

  • A command line interface
  • A graphical/desktop interface

This is aimed at easing the transition between users' own working environments and the Fabian service, allowing them to focus on the research challenges of analysing the larger datasets, or simulating the more complex problems, that HPC enables.
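
For example, command-line access is typically over SSH, with X11 forwarding for graphical sessions (the hostname below is hypothetical):

    ssh username@fabian.lse.ac.uk        # command-line session
    ssh -X username@fabian.lse.ac.uk     # with X11 forwarding for graphical tools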
