Merge pull request #160 from itrujnara/dev
Move documentation from old branch
itrujnara authored Jan 8, 2025
2 parents 78be50c + ddf2ce7 commit 29e50b1
Showing 4 changed files with 181 additions and 67 deletions.
56 changes: 55 additions & 1 deletion CHANGELOG.md
@@ -3,14 +3,68 @@
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## v1.0.0dev - [date]
## v1.0.0dev

**The content below is the unaltered changelog of the unreleased 2020 version of the pipeline.**

## v0.1.0dev - [date]

Initial release of nf-core/kmermaid, created with the [nf-core](https://nf-co.re/) template.

### `Added`

- Add option to use Dayhoff encoding for sourmash.
- Add `bam2fasta` process to kmermaid pipeline and flags involved.
- Add `extract_coding` and `peptide_bloom_filter` process and flags involved.
- Add `track_abundance` feature to keep track of hashed kmer frequency.
- Add social preview image.
- Add `fastp` process for trimming reads.
- Add option to use compressed `.tgz` file containing output from 10X Genomics' `cellranger count` outputs, including `possorted_genome_bam.bam` and `barcodes.tsv` files.
- Add `samtools_fastq_unaligned` and `samtools_fastq_aligned` processes for converting BAMs to per-cell-barcode fastqs.
- Add version printing for `sencha`, `bam2fasta`, and `sourmash` in the Dockerfile, and update versions in `environment.yml`.
- Set `cpus = 1` for the `translate` and `sourmash compute` processes, as they are serial-only ([#107](https://github.com/nf-core/kmermaid/pull/107)).
- Add `sourmash sig merge` for aligned/unaligned signatures from bam files, and add `--skip_sig_merge` option to turn it off.
- Add `--protein_fastas` option for creating sketches of already-translated protein sequences.
- Add `--skip_compare` option to skip the `sourmash_compare_sketches` process (see the combined example after this list).
- Add merging of aligned/unaligned parts of single-cell data ([#117](https://github.com/nf-core/kmermaid/pull/117)).
- Add the renamed package dependency `orpheum` (formerly known as `sencha`).

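Several of the options above can be combined in a single run. The sketch below is purely illustrative; flag spellings follow the entries above, and the output directory and fasta paths are placeholders.

```bash
# Hypothetical example combining options added in this release: sketch
# already-translated protein fastas and skip the all-vs-all comparison step.
nextflow run nf-core/kmermaid \
    --outdir results \
    --protein_fastas 'proteins/*.fasta' \
    --skip_compare
```
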
### `Fixed`

#### Resources

- Increase CPUs in `high_memory_long` profile from 1 to 10.

#### Naming

- Rename splitkmer to `split_kmer`.

#### Per-cell fastqs and bams

- Remove the `one_signature_per_record` flag and add `bam2fasta count_umis_percell` and `make_fastqs_percell` in place of the `bam2fasta` sharding method.
- Use `ripgrep` instead of `bam2fasta` to make per-cell fastqs, which should make resuming long-running pipelines on BAMs much faster (a rough sketch of the idea follows this list).
- Make sure `samtools_fastq_aligned` outputs ALL aligned reads, regardless of mapping quality or primary alignment status.

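The per-cell splitting idea can be sketched in a few lines of shell. This is illustrative only, not the pipeline's exact command; it assumes the cell barcode tag (`CB:Z:`) has been carried into the read headers, and the barcode value and file names are placeholders.

```bash
# Illustrative only: extract the 4-line fastq records whose header line carries
# a given cell barcode tag. Barcode and file names are placeholders.
barcode="AAACCTGAGAAACCAT"
rg --no-context-separator -A 3 "CB:Z:${barcode}" all_reads.fastq \
    > "cell_${barcode}.fastq"
```
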
#### Sourmash

- Add `--skip_compute` option to skip `sourmash_compute_sketch_*`.
- Use `.combine()` instead of `each` to form the Cartesian product of all possible molecules, ksizes, and sketch values.
- Run `sourmash compute` on all input ksizes and all peptide molecule types at once to reduce disk reading/writing.

#### Translate

- Update to `sencha=1.0.3` to fix memory errors likely caused by the numpy array of unique filenames ([PR #96 on orpheum](https://github.com/czbiohub/orpheum/pull/96)).
- Add option to write non-coding nucleotide sequences to fasta files during `sencha translate`.
- Don't save translate CSVs and JSONs by default; add separate `--save_translate_json` and `--save_translate_csv` flags.
- Update `sencha translate` default parameters to `--ksize 8 --jaccard-threshold 0.05`, as these were the most successful (an example invocation follows this list).
- Rename `khtools` commands to `sencha`.

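For orientation, a `sencha translate` call with these defaults might look roughly like the following. The positional arguments (peptide reference, then reads) and the output redirection are assumptions rather than the pipeline's exact command; check `sencha translate --help` for the actual interface.

```bash
# Hypothetical invocation with the new default parameters; argument order and
# output handling are assumptions.
sencha translate \
    --ksize 8 \
    --jaccard-threshold 0.05 \
    reference_peptides.fa.gz \
    sample_R1.fastq.gz \
    > sample_coding_peptides.fasta
```
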
#### MultiQC

- Fix the `skip_multiqc` flag by checking it in an `if` condition rather than a `when` block.

### `Dependencies`

### `Deprecated`

- Removed ability to specify multiple `--scaled` or `--num-hashes` values to enable merging of signatures.
85 changes: 49 additions & 36 deletions README.md
@@ -19,67 +19,80 @@

## Introduction

**nf-core/kmermaid** is a bioinformatics pipeline that ...

<!-- TODO nf-core:
Complete this sentence with a 2-3 sentence summary of what types of data the pipeline ingests, a brief overview of the
major pipeline sections and the types of output it produces. You're giving an overview to someone new
to nf-core here, in 15-20 seconds. For an example, see https://github.com/nf-core/rnaseq/blob/master/README.md#introduction
-->
**nf-core/kmermaid** is a bioinformatics pipeline that performs comparative analysis of \*omes using k-mer based methods. It supports various reference and sequencing input formats, provides pre-processing methods for reads and alignments, and outputs statistics files along with a MultiQC report.

<!-- TODO nf-core: Include a figure that guides the user through the major workflow steps. Many nf-core
workflows use the "tube map" design for that. See https://nf-co.re/docs/contributing/design_guidelines#examples for examples. -->
<!-- TODO nf-core: Fill in short bullet-pointed list of the default steps in the pipeline -->

1. Read QC ([`FastQC`](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/))
2. Present QC for raw reads ([`MultiQC`](http://multiqc.info/))
In the outline below, every step except for the main analysis is optional and might be input-dependent.

## Usage
**Optional – BAM preprocessing**

1. Extract BAM from 10X archive (`tar`)
2. Extract FASTQ reads ([`samtools`](http://www.htslib.org/))
3. Split reads per cell (`grep`)
4. Count UMIs per cell ([`pbtk`](https://github.com/PacificBiosciences/pbtk))

5. Download SRA experiment () [optional]

**Optional – read preprocessing**

> [!NOTE]
> If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how to set-up Nextflow. Make sure to [test your setup](https://nf-co.re/docs/usage/introduction#how-to-run-a-pipeline) with `-profile test` before running the workflow on actual data.
6. Trim reads ([`fastp`](https://github.com/OpenGene/fastp))
7. Read QC ([`FastQC`](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/))
8. Remove rRNA ([`sortmerna`](https://github.com/sortmerna/sortmerna))
9. Translate to protein ([`orpheum`](https://github.com/czbiohub-sf/orpheum))

<!-- TODO nf-core: Describe the minimum required steps to execute the pipeline, e.g. how to prepare samplesheets.
Explain what rows and columns represent. For instance (please edit as appropriate):
**k-mer analysis per method**

First, prepare a samplesheet with your input data that looks as follows:
10. Create sketch
11. Calculate distances

`samplesheet.csv`:
12. Present the results ([`MultiQC`](http://multiqc.info/))

```csv
sample,fastq_1,fastq_2
CONTROL_REP1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz
## Usage

### With a samples.csv file

```bash
nextflow run nf-core/kmermaid --outdir s3://bucket/sub-bucket --samples samples.csv
```

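For reference, `samples.csv` lists one sample per row together with the paths to its reads. The column names below are an assumption based on earlier versions of the pipeline; check the parameter documentation for the exact schema.

```bash
# Hypothetical samples.csv; column names are an assumption, adjust them to the
# schema described in the parameter documentation.
cat > samples.csv <<'EOF'
sample_id,read1,read2
sample1,s3://bucket/sub-bucket/sample1_R1.fastq.gz,s3://bucket/sub-bucket/sample1_R2.fastq.gz
sample2,s3://bucket/sub-bucket/sample2_R1.fastq.gz,s3://bucket/sub-bucket/sample2_R2.fastq.gz
EOF
```
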
Each row represents a fastq file (single-end) or a pair of fastq files (paired end).
### With R1, R2 read pairs

-->
```bash
nextflow run nf-core/kmermaid --outdir s3://bucket/sub-bucket \
--read_pairs 's3://bucket/sub-bucket/*{R1,R2}*.fastq.gz,s3://bucket/sub-bucket2/*{1,2}.fastq.gz'
```

### With SRA IDs

Now, you can run the pipeline using:
```bash
nextflow run nf-core/kmermaid --outdir s3://bucket/sub-bucket --sra SRP016501
```

<!-- TODO nf-core: update the following command to include all required parameters for a minimal example -->
### With fasta files

```bash
nextflow run nf-core/kmermaid \
-profile <docker/singularity/.../institute> \
--input samplesheet.csv \
--outdir <OUTDIR>
nextflow run nf-core/kmermaid --outdir s3://bucket/sub-bucket \
--fastas '*.fasta'
```

> [!WARNING]
> Please provide pipeline parameters via the CLI or Nextflow `-params-file` option. Custom config files including those provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_; see [docs](https://nf-co.re/docs/usage/getting_started/configuration#custom-configuration-files).
### With bam file

For more details and further functionality, please refer to the [usage documentation](https://nf-co.re/kmermaid/usage) and the [parameter documentation](https://nf-co.re/kmermaid/parameters).
```bash
nextflow run nf-core/kmermaid --outdir s3://bucket/sub-bucket \
--bam 'possorted_genome_bam.bam'
```

## Pipeline output
### With split kmer

To see the results of an example test run with a full size dataset refer to the [results](https://nf-co.re/kmermaid/results) tab on the nf-core website pipeline page.
For more details about the output files and reports, please refer to the
[output documentation](https://nf-co.re/kmermaid/output).
```bash
nextflow run nf-core/kmermaid --outdir s3://bucket/sub-bucket --samples samples.csv --split_kmer --subsample 1000
```

## Credits

nf-core/kmermaid was originally written by olgabot, itrujnara.
nf-core/kmermaid was originally written by Olga Botvinnik. The DSL2 port was done by Igor Trujnara.

We thank the following people for their extensive assistance in the development of this pipeline:

106 changes: 77 additions & 29 deletions docs/output.md
@@ -1,61 +1,109 @@
# nf-core/kmermaid: Output

## :warning: Please read this documentation on the nf-core website: [https://nf-co.re/kmermaid/output](https://nf-co.re/kmermaid/output)

> _Documentation of pipeline parameters is generated automatically from the pipeline schema and can no longer be found in markdown files._
## Introduction

This document describes the output produced by the pipeline. Most of the plots are taken from the MultiQC report, which summarises results at the end of the pipeline.

The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.

<!-- TODO nf-core: Write this documentation describing your workflow's output -->

## Pipeline overview

The pipeline is built using [Nextflow](https://www.nextflow.io/) and processes data using the following steps:
The pipeline is built using [Nextflow](https://www.nextflow.io/)
and processes data using the following steps:

- [FastQC](#fastqc) - Raw read QC
- [MultiQC](#multiqc) - Aggregate report describing results and QC from the whole pipeline
- [FastQC](#fastqc) - read quality control
- [MultiQC](#multiqc) - Aggregate report describing results from the whole pipeline
- [Pipeline information](#pipeline-information) - Report metrics generated during the workflow execution
- [Sourmash Sketch](#sourmash-sketch) - Compute a k-mer sketch of each sample
- [Sourmash Compare](#sourmash-compare) - Compare all samples on k-mer sketches

## FastQC

### FastQC
[FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) gives general quality metrics about your sequenced reads. It provides information about the quality score distribution across your reads, per base sequence content (%A/T/G/C), adapter contamination and overrepresented sequences.

<details markdown="1">
<summary>Output files</summary>
For further reading and documentation see the [FastQC help pages](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/Help/).

> **NB:** The FastQC plots displayed in the MultiQC report show _untrimmed_ reads. They may contain adapter sequence and potentially regions with low quality. To see how your reads look after trimming, look at the FastQC reports in the `fastp` directory.
**Output files:**

- `fastqc/`
- `*_fastqc.html`: FastQC report containing quality metrics.
- `*_fastqc.html`: FastQC report containing quality metrics for your untrimmed raw fastq files.
- `fastqc/zips/`
- `*_fastqc.zip`: Zip archive containing the FastQC report, tab-delimited data file and plot images.

</details>
> **NB:** The FastQC plots displayed in the MultiQC report show _untrimmed_ reads. They may contain adapter sequence and potentially regions with low quality.
## Sourmash Sketch

[Sourmash](https://sourmash.readthedocs.io/en/latest/) is a tool to compute MinHash sketches on nucleotide (DNA/RNA) and protein sequences. It allows for fast comparisons of sequences based on their nucleotide content.

**Output directory: `results/sourmash/sketches`**

For each sample and each combination of the provided `molecules`, `ksizes` and `sketch_num_hashes_log2` values, a file is created:

- `sample_molecule-${molecule}__ksize-${ksize}__${sketch_value}__track_abundance-${track_abundance}.sig`

For example:

```bash
SRR4050379_molecule-dayhoff_ksize-3_sketch_num_hashes_log2-2.sig
SRR4050379_molecule-dayhoff_ksize-3_sketch_num_hashes_log2-4.sig
SRR4050379_molecule-dayhoff_ksize-9_sketch_num_hashes_log2-2.sig
SRR4050379_molecule-dayhoff_ksize-9_sketch_num_hashes_log2-4.sig
SRR4050379_molecule-dna_ksize-3_sketch_num_hashes_log2-2.sig
SRR4050379_molecule-dna_ksize-3_sketch_num_hashes_log2-4.sig
SRR4050379_molecule-dna_ksize-9_sketch_num_hashes_log2-2.sig
SRR4050379_molecule-dna_ksize-9_sketch_num_hashes_log2-4.sig
SRR4050379_molecule-protein_ksize-3_sketch_num_hashes_log2-2.sig
SRR4050379_molecule-protein_ksize-3_sketch_num_hashes_log2-4.sig
SRR4050379_molecule-protein_ksize-9_sketch_num_hashes_log2-2.sig
SRR4050379_molecule-protein_ksize-9_sketch_num_hashes_log2-4.sig
```

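Individual signatures can be inspected with sourmash itself. A minimal sketch, assuming a reasonably recent sourmash release is on your `PATH`:

```bash
# Print the molecule type, k-size and number of hashes stored in one signature.
sourmash sig describe \
    results/sourmash/sketches/SRR4050379_molecule-dna_ksize-9_sketch_num_hashes_log2-4.sig
```
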
[FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) gives general quality metrics about your sequenced reads. It provides information about the quality score distribution across your reads, per base sequence content (%A/T/G/C), adapter contamination and overrepresented sequences. For further reading and documentation see the [FastQC help pages](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/Help/).
## Sourmash Compare

### MultiQC
**Output directory: `results/compare_sketches`**

<details markdown="1">
<summary>Output files</summary>
For each combination of the provided `molecules`, `ksizes` and `sketch_num_hashes_log2` values, a file is created containing a symmetric matrix of pairwise similarities between all samples, written as a comma-separated values (CSV) file:

- `similarities_molecule-${molecule}__ksize-${ksize}__${sketch_value}__track_abundance-${track_abundance}.csv`
For example:

```bash
similarities_molecule-dna_ksize-9_sketch_num_hashes_log2-4.csv
similarities_molecule-protein_ksize-3_sketch_num_hashes_log2-2.csv
similarities_molecule-protein_ksize-3_sketch_num_hashes_log2-4.csv
similarities_molecule-protein_ksize-9_sketch_num_hashes_log2-2.csv
similarities_molecule-protein_ksize-9_sketch_num_hashes_log2-4.csv
```

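Since the matrix is plain CSV, it can be previewed directly from the shell, for example (file name taken from the list above):

```bash
# Show the first few rows of the similarity matrix as an aligned table.
head -n 5 results/compare_sketches/similarities_molecule-dna_ksize-9_sketch_num_hashes_log2-4.csv \
    | column -s, -t
```
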
## MultiQC

[MultiQC](http://multiqc.info) is a visualization tool that generates a single HTML report summarizing all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.

The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability.

For more information about how to use MultiQC reports, see [https://multiqc.info](https://multiqc.info).

**Output files:**

- `multiqc/`
- `multiqc_report.html`: a standalone HTML file that can be viewed in your web browser.
- `multiqc_data/`: directory containing parsed statistics from the different tools used in the pipeline.
- `multiqc_plots/`: directory containing static images from the report in various formats.

</details>
## Pipeline information

[MultiQC](http://multiqc.info) is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.

Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see <http://multiqc.info>.

### Pipeline information
[Nextflow](https://www.nextflow.io/docs/latest/tracing.html) provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.

<details markdown="1">
<summary>Output files</summary>
**Output files:**

- `pipeline_info/`
- Reports generated by Nextflow: `execution_report.html`, `execution_timeline.html`, `execution_trace.txt` and `pipeline_dag.dot`/`pipeline_dag.svg`.
  - Reports generated by the pipeline: `pipeline_report.html`, `pipeline_report.txt` and `software_versions.yml`. The `pipeline_report*` files will only be present if the `--email` / `--email_on_fail` parameters are used when running the pipeline.
- Reformatted samplesheet files used as input to the pipeline: `samplesheet.valid.csv`.
- Parameters used by the pipeline run: `params.json`.

</details>

[Nextflow](https://www.nextflow.io/docs/latest/tracing.html) provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.
- Reports generated by the pipeline: `pipeline_report.html`, `pipeline_report.txt` and `software_versions.csv`.
- Documentation for interpretation of results in HTML format: `results_description.html`.
1 change: 0 additions & 1 deletion nextflow.config
@@ -14,7 +14,6 @@ params {
input = null



// MultiQC options
multiqc_config = null
multiqc_title = null
