Compute Cluster Information

What is BioSim?

BioSim is a computing cluster composed of 4 rack-mounted Dell R815 servers. Each server houses 4 x 16-core 2.3 GHz AMD Opteron processors (64 cores per machine, 256 total), 512 GB of RAM (2 TB total), and 7.2 TB of workspace hard disk (28.8 TB total), and runs the Debian 11 (Bullseye) OS. It is used primarily in computational and data science courses, as well as in research projects in the Phillips Lab on biomolecular simulation and machine learning. All computing resources on the cluster are managed as container-based deployments orchestrated with Kubernetes.

Accessing BioSim

While many of the resources below are open to the campus community, please contact Dr. Phillips before using them to ensure these systems can continue to serve the CS department and current courses which depend on them.


JupyterHub

JupyterHub provides containerized Jupyter Lab/Notebook services. After logging in, users are provided with an Ubuntu 22.04 Linux container on one of the BioSim compute nodes for running applications via either the interactive notebook environment or the terminal. No software installation is needed beyond a web browser (Chrome or Firefox preferred). All authentication is handled via MTSU AzureAD OAuth (the same as PipelineMT/D2L/etc.), so you may use your PipelineMT credentials to access the service. All other resources on the cluster are available only after logging in via MTSU AzureAD OAuth.

JupyterHub Login URL:

SLURM HPC Cluster (OpenMPI + Singularity)

Users may also use the SLURM-based cluster, with OpenMPI and Singularity enabled, for batch computing jobs. Access is available via ssh from a terminal provided in the JupyterLab interface.

Log in via JupyterLab terminal:

ssh username@login.hpc.svc.cluster.local
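Once logged in, batch jobs on a SLURM cluster of this kind are typically submitted with sbatch. The sketch below is a minimal job script for an MPI program run inside a Singularity container; the node/task counts, image name, and program name are illustrative placeholders, not actual BioSim values.

```shell
#!/bin/bash
#SBATCH --job-name=mpi-demo       # job name shown in squeue
#SBATCH --nodes=2                 # number of compute nodes (placeholder)
#SBATCH --ntasks-per-node=16      # MPI ranks per node (placeholder)
#SBATCH --time=00:10:00           # wall-clock limit

# Launch an MPI program inside a Singularity container.
# my_image.sif and ./my_mpi_program are placeholders.
mpirun singularity exec my_image.sif ./my_mpi_program
```

Saved as job.sh, such a script would be submitted with `sbatch job.sh` and monitored with `squeue`.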

Dask-Gateway Cluster Manager

Users may create Python dask clusters for parallel computing using the Dask Gateway provided on the cluster. Please contact Dr. Phillips for additional information on how to connect to and utilize the Dask Gateway.
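A typical Dask-Gateway session looks like the sketch below, which assumes the gateway address and proxy are discovered from the cluster's Dask-Gateway configuration (e.g. environment variables set in the JupyterLab container); the worker count is an illustrative value, and actual connection details should come from Dr. Phillips.

```python
def connect_dask_cluster(n_workers=2):
    """Create a Dask cluster via Dask Gateway and return a client.

    Assumes the gateway address is provided by the environment's
    Dask-Gateway configuration; n_workers is an illustrative value.
    """
    # Import deferred so this sketch loads even without dask-gateway installed.
    from dask_gateway import Gateway

    gateway = Gateway()              # uses the configured gateway address
    cluster = gateway.new_cluster()  # request a new cluster from the gateway
    cluster.scale(n_workers)         # ask for worker pods
    return cluster.get_client()      # dask.distributed.Client for computations
```

The returned client can then be used as with any Dask deployment, e.g. `client.submit(sum, [1, 2, 3])`.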

Stanford CoreNLP Server

Users may utilize the Stanford CoreNLP server installation on the cluster for performing natural language processing. The Natural Language Toolkit (NLTK) is installed in the JupyterLab container image, and the server may be contacted via Python or the terminal.

Stanford CoreNLP server URL: http://server.corenlp.svc.cluster.local
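From Python, NLTK's CoreNLP client can talk to this server. The sketch below assumes the server listens on CoreNLP's default port 9000 (the document gives only the hostname) and requires network access from inside the cluster.

```python
def pos_tag(tokens, url="http://server.corenlp.svc.cluster.local:9000"):
    """Part-of-speech tag a list of tokens via the cluster's CoreNLP server.

    The :9000 port is an assumption (CoreNLP's default); only the
    hostname is documented.
    """
    # Import deferred so this sketch loads even where NLTK is absent.
    from nltk.parse.corenlp import CoreNLPParser

    tagger = CoreNLPParser(url=url, tagtype="pos")
    return list(tagger.tag(tokens))

# Example call (only works from inside the cluster):
# pos_tag(["BioSim", "runs", "CoreNLP", "."])
```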