Getting started
Although use of the Cloud for CloVR-Microbe, as for all other CloVR pipelines, is entirely optional, it is recommended for this pipeline, as several steps of the CloVR-Microbe pipeline are computationally intensive and local executions can be very time-consuming. The Celera assembly step requires RAM in excess of 4 GB. BLAST and HMMER searches of protein sequences during the annotation process benefit significantly from parallelization across multiple processors on the Cloud.
If you want to use the Cloud to run CloVR-Microbe, you must obtain credentials from your Cloud provider, and CloVR must be configured to use these credentials. If you want to use the Amazon Elastic Compute Cloud (EC2), be sure to have configured your Amazon EC2 credentials. Usage on Amazon EC2 is charged per hour, so care must be taken to terminate instances after a protocol has completed; see the vp-terminate-cluster command below.
Input Data Set
File: CloVR-Microbe454 Example Dataset
In preparation for the CloVR-Microbe454 pipeline run that will be described below, the example data set should be downloaded and extracted to the shared folder in the extracted VM directory to allow easy access when working from within the CloVR VM. From within the VM, the shared folder is accessible as /mnt/.
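For example, if the example data set is downloaded as a gzipped tar archive (the archive name and shared-folder path below are illustrative and may differ on your system), it could be extracted into the shared folder from the host machine as follows:
tar -xzf clovr_microbe454_example.tar.gz -C /path/to/shared_folder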
At this point, you should be able to access the following file from within your VM:
/mnt/clovr_acinetobacter_example.sff
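To confirm that the shared folder is mounted and the file is visible, you can list it from a terminal inside the VM:
ls -l /mnt/clovr_acinetobacter_example.sff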
If you are using Virtual Box and are having problems accessing your data in the shared folder, check Troubleshooting on Virtual Box.
Tagging Input
To specify input for the pipeline, it must first be tagged. See the Configuration File section for more information on what types of input the pipeline can accept.
In our example above there is only one input SFF which can be tagged as follows:
vp-add-dataset -o --tag-name acinetobacter_sff /mnt/clovr_acinetobacter_example.sff
Configuration File
File: CloVR-Microbe454 configuration file
A configuration file is used when running the Microbe454 pipeline to define parameters for the various components, as well as inputs, outputs, log files, and many other options that can be fine-tuned to control the pipeline. The configuration file detailed below can be found in the link above.
## Configuration file for clovr_microbe_454
#########################################################
## Input information.
## Configuration options for the pipeline.
#########################################################
[input]
# Input Tag
# The input tag for this pipeline
INPUT_SFF_TAG=acinetobacter_sff
CloVR pipelines use a tagging system that assigns unique names to data being uploaded and downloaded. These unique names are used throughout the system during many steps of the pipeline process. In this pipeline, the input tag INPUT_SFF_TAG must match the tag you used with the vp-add-dataset command above.
[params]
# Output prefix for the organism
# Organisms have a prefix on them
OUTPUT_PREFIX=asmbl
# Organism
# Genus and species of the organism. Must be two words in the form of: Genus species
ORGANISM=Acinetobacter baylii
An OUTPUT_PREFIX should be provided that will be used in naming all intermediate and output files. An ORGANISM name must be provided for use when generating the output GenBank files.
## sff_to_CA options
##
## trim can be one of the following values:
## none, soft, hard, chop
TRIM=chop
##
## clear can be one of the following values:
## all, 454, none, n, pair-of-n, discard-n
CLEAR=454
# Possible values: titanium flx
LINKER=titanium
# Insert size, must be two numbers separated by a space (ex '8000 1000')
INSERT_SIZE=8000 1000
The remaining parameters control the assembly of the reads in the SFF file. The TRIM and CLEAR parameters control what portions of the reads are considered biologically relevant; both denote which portions of the reads are technical and therefore should not be included in the assembly. LINKER and INSERT_SIZE are used when dealing with a 454 paired-end run. A Titanium or FLX linker can be specified alongside the insert size between mates, given as two numbers i and d, meaning mates are on average i ± d bp apart.
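For example, an FLX paired-end run with a 3 kb insert library might be configured as follows (the values are hypothetical and only illustrate the format):
# hypothetical settings for an FLX paired-end run with a 3 kb insert library
LINKER=flx
# mates are on average 3000 bp apart, +/- 300 bp
INSERT_SIZE=3000 300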
## celera assembler options
SPEC_FILE=/dev/null
SKIP_BANK=0
A valid Celera Assembler spec file containing additional configuration parameters for the assembly software can be provided to the pipeline. The SKIP_BANK flag can be set to 1 to also generate an AMOS bank file that can be viewed in the visualization software Hawkeye.
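As an illustration only, a minimal spec file might override a few assembler settings. The option names below follow the Celera Assembler (runCA) spec-file convention, but the exact options supported depend on the assembler version bundled with CloVR, so treat this as a sketch rather than a recommended configuration:
# /mnt/my_assembly.spec -- hypothetical Celera Assembler spec file
useGrid = 0          # do not submit assembler jobs to a grid
ovlThreads = 2       # number of threads for the overlapper
merylMemory = 2048   # memory (in MB) for meryl k-mer counting
The configuration would then point to this file with SPEC_FILE=/mnt/my_assembly.spec.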
[cluster]
# Cluster Name
# Cluster name to run this on, shouldn't need to specify manually
CLUSTER_NAME=local
# Credential
# Credential to use to make the cluster
CLUSTER_CREDENTIAL=local
The cluster info section specifies which cluster to use if an existing cluster is running, or what type of cluster should be created if a new cluster is necessary. Just as with the pipeline tag and input tags, a cluster is given a unique identifier, defined by the CLUSTER_NAME option. The CLUSTER_CREDENTIAL parameter should match the Amazon EC2 credential created earlier (see Getting started above).
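For example, to run on an EC2 cluster instead of locally, the [cluster] section might look like the following sketch; the cluster name is arbitrary, and the credential name must match whatever was chosen when the EC2 credential was configured (both values below are hypothetical):
[cluster]
CLUSTER_NAME=clovr-microbe454-cluster
CLUSTER_CREDENTIAL=my_ec2_credential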
[output]
# Output Directory
# Directory to download output to, this should be located
# in /mnt somewhere
OUTPUT_DIR=/mnt/output
Placement of output files from the pipeline is controlled in the output info section of the configuration. Here the OUTPUT_DIR parameter can be set to any location within the VM. It is recommended that this option be left as is to avoid complications that can occur if the VM runs out of space. The /mnt/ directory refers to the shared directory, which uses the file system of the host computer the VM is running on and will most likely have more space than is allotted to the VM's own file system. If the output directory is changed to a location on the VM's file system, running out of space becomes a possibility.
[pipeline]
# Pipeline Name
PIPELINE_NAME=ReplaceWithYourPipelineName
Each pipeline run requires a unique name (PIPELINE_NAME) so that the CloVR system can download the correct set of output after the pipeline has finished. It is especially important to change this parameter if multiple pipelines are running on the same cluster.
There are other options present in the configuration file which do not need to be changed and are used internally by the pipeline.
Running and Monitoring the Microbe454 pipeline
Running the CloVR-Microbe454 pipeline is as easy as executing the following from the command-line:
clovrMicrobe /mnt/clovr-microbe-454.config
This command automatically initiates a cluster, uploads the tagged data to that cluster, and starts the pipeline. To view a list of all available clusters you are running, execute:
vp-describe-cluster --list
Here is an example of the output returned:
CLUSTER local
CLUSTER clovr-microbe454-cluster
Identify the cluster that the pipeline is running on and use the --name option with vp-describe-cluster:
vp-describe-cluster --name clovr-microbe454-cluster
MASTER  i-6459fb09   some-instance.compute-1.amazonaws.com   running
EXEC    i-94f855f9   some-exec.compute-1.amazonaws.com       running
GANGLIA http://some-instance.compute-1.amazonaws.com/ganglia
ERGATIS http://some-instance.compute-1.amazonaws.com/ergatis
SSH     ssh -oNoneSwitch=yes -oNoneEnabled=yes -o PasswordAuthentication=no -o ConnectTimeout=30 \
        -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o UserKnownHostsFile=/dev/null \
        -q -i /mnt/keys/devel1.pem root@some-instance.compute-1.amazonaws.com
This tells you that there is one master node and one exec node in ‘running’ status. It also gives you links to Ganglia and Ergatis. Visiting the Ergatis link gives an overview of the pipeline status and shows whether any elements of the pipeline have failed. The Ganglia link displays the status of the cluster, including the number of nodes and processes, available memory, and data transfers over the network.
Output
The output for the pipeline will automatically be downloaded onto your local VM in the directory specified in the OUTPUT_DIR parameter.
The output for the CloVR-Microbe454 pipeline will include the assembly scaffolds, an assembly QC file, a polypeptide FASTA file, a CDS FASTA file, and annotation files (in GenBank and sqn formats).
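To inspect the downloaded results from inside the VM, assuming OUTPUT_DIR was left at its default of /mnt/output, simply list the output directory:
ls /mnt/output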
Terminating a cluster
When utilizing a cluster on EC2, you must terminate the cluster after the pipeline and the download have completed. To terminate a cluster, enter your cluster name:
vp-terminate-cluster --cluster=cluster_name
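For the example cluster used above, this would be:
vp-terminate-cluster --cluster=clovr-microbe454-cluster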
Interrupting a pipeline
If the execution of CloVR-Microbe is not going well for some reason, or you realize you have made a mistake, you can interrupt the pipeline by visiting the Ergatis link describing the running pipeline and clicking the “kill” button at the top of the page. This will cause the pipeline to stop, although it may take a minute to take effect. See the section below on restarting a pipeline.
Recovering from error and restarting the pipeline
If the execution of CloVR-Microbe fails and the pipeline has to be restarted, CloVR will attempt to resume the previous run if the same command is used. In order to start the pipeline from scratch, PIPELINE_NAME should be changed in the config file to a different name (see the example below). Also, note that if you have made any changes to the input data, you will need to re-tag it using vp-add-dataset.
# Name of pipeline
PIPELINE_NAME=clovr_microbe454-2