# How to Create an EC2 instance for HCP Pipeline Processing

This guide was originally written for participants in the course Exploring the Human Connectome 2017 and is based on similar materials prepared for the 2015 and 2016 editions of the course.

During the course, an optional practical session titled Cloud-based Processing Using HCP Pipelines and Amazon Web Services included a demonstration of Creating an EC2 instance for HCP Pipeline Processing. The demonstration was intended to give participants a general feel for what is necessary to create such an instance; participants were not expected to actually execute the process during the session.

This guide is for participants who wish to actually create their own Amazon EC2 instance for HCP Pipeline Processing by following the steps that were carried out during the demonstration. The steps documented here are not the only way an EC2 instance can be created for running HCP Pipeline processing.

This material may also be useful for non-participants.

This guide assumes an understanding of the Amazon Web Services concepts covered in the course lecture ConnectomeDB, Connectome Coordination Facility, Amazon Web Services.

This guide assumes that you already have credentials for accessing the HCP OpenAccess S3 Bucket. If you do not have those credentials, please follow the instructions at How to Get Access to the HCP OpenAccess Amazon S3 Bucket.

This guide assumes that you already have an AWS account. If you do not have an AWS account, please follow the instructions at How to Create an Amazon Web Services Account.

Some of the steps in this guide use AWS services that are not in the AWS Free Tier. Carrying out these steps may therefore result in charges from Amazon to the credit card associated with your AWS account.

## Step 1a: Log in to AWS

Figure 1: Amazon AWS Console

## Step 1b: Start Creating an Instance

Figure 2: Selecting the EC2 service

Figure 3: EC2 Dashboard

## Step 2: Launch an EC2 Instance

Figure 4: Choose an AMI page

## Step 3: Find and Select an AMI

Figure 5: Result of searching for NITRC within the AWS Marketplace AMIs
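If you prefer the command line, a similar search can be sketched with the AWS CLI (assuming the CLI is installed and configured with your credentials; the filter below is an illustration, not the exact Marketplace search the console performs):

```
# List AWS Marketplace AMIs whose name contains "NITRC"
aws ec2 describe-images \
    --owners aws-marketplace \
    --filters "Name=name,Values=*NITRC*" \
    --query 'Images[].[ImageId,Name]' \
    --output table
```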

## Step 4: Select an Instance Type

Figure 6: Choosing the m3.xlarge Instance Type
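The m3.xlarge type used in the demonstration provides 4 vCPUs and 15 GiB of RAM. As a rough sanity check you can query an instance type's specifications with the AWS CLI (a sketch; m3 is a previous-generation family, so it may not be offered in every region):

```
# Show the vCPU and memory specifications for m3.xlarge
aws ec2 describe-instance-types \
    --instance-types m3.xlarge \
    --query 'InstanceTypes[0].{vCPUs:VCpuInfo.DefaultVCpus,MemoryMiB:MemoryInfo.SizeInMiB}'
```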

## Step 5: Add Storage

Figure 7: Add Storage page

Figure 8: Configuration of “external” volume
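The Add Storage page creates and attaches this "external" volume for you at launch time. If you later need a similar volume on an already-running instance, an equivalent AWS CLI sketch would look like the following (the volume ID, instance ID, and availability zone are placeholders; the availability zone must match your instance's):

```
# Create a 1000 GiB EBS volume in the instance's availability zone
aws ec2 create-volume --size 1000 --volume-type gp2 --availability-zone us-east-1a

# Attach it to the instance; /dev/sdb typically appears as /dev/xvdb inside Ubuntu
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdb
```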

## Step 6: Tag Your Instance

Figure 9: Tagging your instance

## Step 7: Configure Security Group
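The security group must allow inbound SSH for Step 15 and inbound HTTP/HTTPS for the web-based NITRC-HCP-CE configuration and Guacamole sessions in Steps 11-14. If you manage the group from the AWS CLI rather than the console, the rules would look something like this (the group ID is a placeholder; 0.0.0.0/0 opens a port to the entire internet, so restrict the CIDR range where you can):

```
# Allow inbound SSH (port 22), HTTP (80), and HTTPS (443)
for port in 22 80 443; do
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```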

## Step 8: Review your instance

## Step 9: Create access keys

Figure 10: Creating a new key pair
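The console generates the key pair and downloads the private key as a .pem file. A CLI equivalent is sketched below (the key name is a placeholder); in either case, tighten the file's permissions or ssh will refuse to use the key:

```
# Create a key pair and save the private key locally
aws ec2 create-key-pair \
    --key-name hcp-pipeline-key \
    --query 'KeyMaterial' --output text > hcp-pipeline-key.pem

# ssh requires that the private key not be readable by other users
chmod 400 hcp-pipeline-key.pem
```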

## Step 10: Gather information

Figure 11: Instance ID

Figure 12: Table entry for a running instance
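The instance ID and public DNS name shown in Figures 11 and 12 can also be retrieved with the AWS CLI (the instance ID below is a placeholder):

```
# Look up the instance ID, public DNS name, and public IP of a running instance
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].[InstanceId,PublicDnsName,PublicIpAddress]' \
    --output text
```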

## Step 11: NITRC-HCP-CE Configuration - Part 1

Figure 13: Main NITRC-CE Configuration page

Figure 14: NITRC-HCP-CE account successfully configured

## Step 12: NITRC-HCP-CE Configuration - Part 2

Figure 15: NITRC-HCP-CE Control Panel

## Step 13: NITRC-HCP-CE Configuration - Part 3

Figure 16: After applying AWS credentials

Figure 17: Successful mounting of HCP S3 bucket
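Once the control panel reports a successful mount, you can verify it from a shell on the instance (a quick check; the /s3/hcp mount point matches the df output shown later in Step 16):

```
# The HCP OpenAccess bucket should appear as a very large read-only filesystem
df -h /s3/hcp

# List a few of the subject directories in the bucket
ls /s3/hcp | head
```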

## Step 14: Access your instance via VNC/Guacamole

Figure 18: Guacamole connections page

Figure 19: Connected to a VNC server session

## Step 15: Access your instance via SSH

```
ssh -Y hcpuser@<your-public-dns>
```
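The -Y option enables trusted X11 forwarding so that graphical programs run on the instance can display on your local machine, and `<your-public-dns>` is the public DNS name gathered in Step 10. If the hcpuser account on your instance is set up for key-based rather than password authentication, you would also point ssh at the private key from Step 9 (the key file name below is the placeholder used earlier):

```
# Key-based variant: supply the private key downloaded in Step 9
ssh -Y -i hcp-pipeline-key.pem hcpuser@<your-public-dns>
```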

## Step 16: Mounting your "external" EBS drive - Part 1

* Check the current filesystems and attached block devices, and confirm that the "external" volume (/dev/xvdb) does not yet contain a filesystem:

```
hcpuser@nitrcce:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.4G   12K  7.4G   1% /dev
tmpfs           1.5G  820K  1.5G   1% /run
/dev/xvda1       99G   22G   73G  23% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            7.4G  144K  7.4G   1% /run/shm
none            100M   32K  100M   1% /run/user
s3fs            256T     0  256T   0% /s3/hcp
hcpuser@nitrcce:~$
hcpuser@nitrcce:~$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   100G  0 disk
└─xvda1 202:1    0   100G  0 part /
xvdb    202:16   0  1000G  0 disk
hcpuser@nitrcce:~$
hcpuser@nitrcce:~$ sudo file -s /dev/xvdb
[sudo] password for hcpuser:
/dev/xvdb: data
hcpuser@nitrcce:~$
```

* The `data` result from `file -s` means the volume has no filesystem on it yet, so create one (ext4 here):

```
hcpuser@nitrcce:~$ sudo mkfs -t ext4 /dev/xvdb
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

hcpuser@nitrcce:~$
```
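After formatting, you can repeat the `file -s` check from above; instead of plain `data`, it should now report an ext4 filesystem signature (output abbreviated):

```
hcpuser@nitrcce:~$ sudo file -s /dev/xvdb
/dev/xvdb: Linux rev 1.0 ext4 filesystem data, UUID=...
hcpuser@nitrcce:~$
```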


## Step 17: Mounting your "external" EBS drive - Part 2

* Make a mount point and mount the device:

```
hcpuser@nitrcce:~$ sudo mkdir /data
hcpuser@nitrcce:~$ sudo mount /dev/xvdb /data
hcpuser@nitrcce:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.4G   12K  7.4G   1% /dev
tmpfs           1.5G  820K  1.5G   1% /run
/dev/xvda1       99G   22G   73G  23% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            7.4G  144K  7.4G   1% /run/shm
none            100M   32K  100M   1% /run/user
s3fs            256T     0  256T   0% /s3/hcp
/dev/xvdb       985G   72M  935G   1% /data
hcpuser@nitrcce:~$ sudo chmod 777 /data
hcpuser@nitrcce:~$
```

* Notice that /dev/xvdb is now mounted at /data with 935 GB of free space.
* You will want the volume to be mounted automatically on every reboot.
* Edit the file /etc/fstab and add a line that looks like the following:

```
/dev/xvdb   /data  ext4  defaults,nofail  0 2
```

* **There are <TAB> characters between the fields, except between the 0 and the 2, where there is a single space character.**
* **You must be very careful here. Editing this file incorrectly can make your instance unusable.**
* You will need to be "root" (superuser) to edit the file, so your command to edit the file using the nano editor would look something like:

```
hcpuser@nitrcce:~$ sudo nano /etc/fstab
```
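Before relying on a reboot, it is worth checking that the new /etc/fstab entry parses correctly, since a bad entry can leave the instance unbootable. One common check (not part of the original demonstration) is to unmount the volume and let `mount -a` remount everything listed in fstab:

```
# Unmount the volume, then remount everything declared in /etc/fstab
hcpuser@nitrcce:~$ sudo umount /data
hcpuser@nitrcce:~$ sudo mount -a
hcpuser@nitrcce:~$ df -h /data
```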

## Step 18: Shutting down/Restarting your instance
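You can stop and later restart the instance from the Instances page of the EC2 console, or with the AWS CLI as sketched below (the instance ID is a placeholder). Stopping preserves both EBS volumes but releases the public DNS name, so expect a new address after each restart; terminating the instance, by contrast, deletes it. A stopped instance no longer accrues compute charges, but EBS storage charges continue.

```
# Stop the instance (compute billing pauses; EBS storage charges continue)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Restart it later; note it will receive a new public DNS name
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```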

Related pages:

* How to Get Access to the HCP OpenAccess Amazon S3 Bucket
* How to Create an Amazon Web Services Account