Creating an OTU table (reposted)

Author: 代号北极能 | Published 2019-06-28 22:06

    Source: https://www.drive5.com/usearch/manual/pipe_otutab.html


    Creating an OTU table

    In my approach, OTUs are sequences selected from the reads. The goal is to identify a set of correct biological sequences; see defining and interpreting OTUs for discussion.

    An OTU table is a matrix that gives the number of reads per sample per OTU. One entry in the table is usually a number of reads, also called a "count", or a frequency in the range 0.0 to 1.0.  Counts are calculated by mapping reads to OTUs. Note that read abundance has very low correlation with species abundance, so the biological interpretation of these counts is unclear.

    An OTU table is created by the otutab command. Input should be reads before quality filtering and before discarding low-abundance unique sequences, e.g. singletons. This will improve sensitivity, because many low-quality and low-abundance reads will map successfully to OTUs. Reads must have sample identifiers in the labels. The search database is the OTU (or ZOTU) sequences, which must have valid OTU identifiers in the labels.
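    For example, read labels might carry sample identifiers using the sample=NAME annotation convention (a minimal sketch; the read and sample names here are hypothetical):

    @read1;sample=gut1;
    @read2;sample=gut2;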

    I suggest making OTU tables for both 97% OTUs and denoised sequences (ZOTUs). Most biological conclusions, e.g. about diversity, should be the same for both tables. If they are not, this suggests that the analysis is not reliable.

    Normalizing the table

    After generating the table, you should use the otutab_rare command to normalize all samples to the same number of reads.

    Example

    usearch -otutab reads.fq -otus otus.fa -otutabout otutab.txt -mapout map.txt

    usearch -otutab reads.fq -zotus zotus.fa -otutabout zotutab.txt -mapout zmap.txt

    usearch -otutab_norm otutab.txt -sample_size 5000 -output otutab_norm.txt

    usearch -otutab_norm zotutab.txt -sample_size 5000 -output zotutab_norm.txt
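    Note: the normalization text above mentions the otutab_rare command, while this example uses otutab_norm. Assuming your USEARCH version provides otutab_rare with the same options, an analogous call would be:

    usearch -otutab_rare otutab.txt -sample_size 5000 -output otutab_rare.txt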


    Defining and interpreting OTUs

    Sneath-Sokal OTUs

    The concept of an Operational Taxonomic Unit (OTU) was introduced by Peter Sneath and Robert Sokal in the 1960s through a series of books and articles which founded the field of numerical taxonomy (see e.g. Sneath & Sokal: Numerical Taxonomy, W.H. Freeman, 1973). Their goal was to develop a quantitative strategy for classifying organisms into groups based on observed characters, creating a hierarchical classification reflecting the evolutionary relationships between the organisms as faithfully as possible. Sneath-Sokal OTUs were constructed by making a table of observed traits which could be described by numerical values, e.g. 1 = present, 0 = absent. A hierarchy (tree) was built by repeatedly merging the most similar groups according to the number of traits they had in common. This is an early example of an agglomerative clustering algorithm applied to biological classification. Sokal and Sneath were working before DNA sequences were available, and before the development of more accurate trait- and sequence-based phylogenetic tree construction algorithms such as neighbor-joining and maximum likelihood.

    Historical 97% identity threshold

    In 16S sequencing, OTUs are typically constructed using an identity threshold of 97%. To the best of my knowledge, the first mention of this threshold is in (Stackebrandt and Goebel 1994). If you know of an earlier reference for the magic number 97, please let me know. Stackebrandt and Goebel found that 97% similarity of 16S sequences corresponded approximately to a DNA reassociation value of 70%, which had previously been accepted as a working definition for bacterial species (Wayne et al. 1987).

    Modern OTU identity thresholds

    An analysis done in 2014 found that the optimal threshold for approximating species by clustering full-length 16S sequences was 98.5% (Yarza et al. 2014). However, this study was done using CD-HIT (Li and Godzik 2006), which has a problematic non-standard definition of sequence identity, and it relied on taxonomy predictions in the SILVA database, of which roughly one in five are wrong (Edgar 2018a). In a more recent paper, I showed that the optimal clustering threshold is ~99% identity for full-length sequences and ~100% for V4 (Edgar 2018b).

    OTUs constructed by clustering 16S sequences

    The earliest software program I'm aware of for clustering 16S sequences is FastGroup (Seguritan and Rohwer 2001). FastGroup used a greedy clustering algorithm similar to UCLUST. The identity threshold was 97%, for which the authors cite Stackebrandt and Goebel. DOTUR (Schloss and Handelsman 2005), the forerunner of mothur (Schloss et al. 2009), used agglomerative clustering with options to use maximum, minimum or average linkage. DOTUR and mothur generate OTUs at a range of identity thresholds, noting (DOTUR paper) that "[s]equences with greater than 97% identity are typically assigned to the same species, those with 95% identity are typically assigned to the same genus, and those with 80% identity are typically assigned to the same phylum, although these distinctions are controversial". QIIME (Caporaso et al. 2010) uses the 97% threshold almost exclusively. Early versions of QIIME used CD-HIT as the default OTU clustering method, which was later replaced by UCLUST (Edgar 2013). I have not found any description of the intended interpretation of QIIME OTUs in their papers or documentation (please let me know if I missed it).

    OTUs as a working definition of species

    Imagine it is the 1990s, and we have 16S sequences obtained by PCR and Sanger sequencing. At that time, relatively few 16S sequences were available in public repositories, so most could not be identified by database search. A pragmatic alternative is to make clusters at 97% identity. These clusters can be treated as Sneath-Sokal OTUs which tentatively assign sequences to species and higher-level groups. This is a reasonable approach if, but only if, experimental errors in the sequences are not significant. In the early literature, errors are rarely (never?) considered. Presumably, this is because Sanger sequencing has a very low error rate, and also because the importance of errors due to PCR, especially chimeras, was not widely known in the '90s (Ashelford et al. 2005).

    OTUs with next-generation reads

    Next-generation sequencing (NGS) emerged in the late 1990s (Wikipedia, DNA sequencing) and revolutionized microbiology by enabling low-cost, high-throughput sequencing of the 16S gene. These advantages came at the cost of shorter read lengths and higher error rates compared to Sanger sequencing. OTU clustering methods were applied to NGS reads, but it gradually became clear that these methods were generating many spurious OTUs due to experimental error, causing inflated estimates of diversity (see e.g. Huse et al. 2010). Some widely-used methods continue to generate large numbers of spurious OTUs at the time of writing (mid-2017), notably QIIME, which reports hundreds or thousands of OTUs from Illumina reads of mock communities with ~20 species using recommended procedures.

    Westcott and Schloss OTUs

    In a recent paper, Westcott and Schloss proposed a definition of OTUs based on the Matthews Correlation Coefficient (MCC). However, their definition has problems in practice.

    Goals for making OTUs as clusters of reads

    Here, I suggest some goals for making OTUs.

    Operational definition of species-like groups

    A Sneath-Sokal classification of the sequences; this is the historical motivation for making 16S OTUs.

    Merging variation within a single strain into one cluster

    Bacterial chromosomes often contain multiple copies of the 16S gene (paralogs). Paralogs often have small sequence differences. Clustering tends to merge paralogs into a single OTU. This is a good thing if the goal is to get one OTU per phenotype, which makes sense for most analyses.

    Merging variation between strains of a single species into one cluster

    Different strains of a given species, say E. coli, often have differences in their 16S sequences. This adds a level of variation within a species on top of the variation that may occur within a single strain due to paralogs. Clustering tends to merge both levels of variation to create a single OTU, or at least fewer OTUs, for the species. This is not such a good thing if the goal is to get one OTU per phenotype, because different strains can have important differences in phenotype (e.g., pathogenic or not). Also, while the concept of a strain is reasonably well-defined (minimal variation in the complete genome), the concept of a species is problematic for bacteria (Gevers et al. 2005; Doolittle and Papke 2006), so the goal of having one OTU per species is not so well-defined or useful.

    Merging variation due to experimental error into one cluster

    Errors due to PCR and sequencing cause sequence variation in the reads which is not biological in origin. Clustering tends to merge bad sequences with correct biological sequences. To be effective, this requires that sequences with >3% errors are rare, because they always induce spurious OTUs (see e.g. Edgar and Flyvbjerg 2014). A sequence with <3% errors can also cause a spurious OTU, depending on whether the corresponding correct sequence is close to the center of the cluster (a spurious OTU is then unlikely) or close to the edge (more likely).

    Interpreting results obtained using OTUs

    To do meaningful biology with OTUs, you need to know which goals are intended (those I suggest above? others?), and how well those goals are achieved in practice. This can be problematic, because authors are often vague about their goals and sloppy in validating their methods. For example, one approach found in the literature is to test the method on artificial ("mock") communities with known composition, and measure alpha diversity / richness from the number of OTUs reported. If the number of OTUs is approximately the same as the number of species, the method is declared to be accurate.

    Some of these papers are deeply flawed. Algorithm parameters may be tuned on the mock community data used for testing, which could result in over-fitting. The right number of OTUs could be obtained for the wrong reasons; for example, because some of the OTUs have several species lumped into the same cluster, while others are spurious, e.g. chimeric. In other cases, authors attempt to use mock community diversity for validation, find that far too many OTUs are generated, then ignore, downplay or obfuscate the result without investigating where the spurious OTUs come from.

    If sources of error are not understood and accounted for, then biological inferences are unreliable and naive estimates of statistical significance can be misleading, because low P-values can be obtained for false hypotheses (Taylor 1997).

    My definition of OTUs

    In USEARCH, my algorithms are designed to report OTUs which are correct biological sequences. By this definition, the connection to biology is clear and an OTU can be objectively verified or falsified, especially in mock community tests where the biological sequences are known.

    UPARSE OTUs

    In the case of the UPARSE-OTU algorithm (cluster_otus command), the goal is to report a subset of the correct biological sequences such that (a) all pairs of OTUs are <97% identical to each other, and (b) the OTU sequence is the most abundant in its neighborhood. These can be interpreted as Sneath-Sokal OTUs giving an operational approximation to species.
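    A minimal sketch of the corresponding commands (the file names are placeholders; uniques.fa is assumed to hold the unique sequences with size annotations):

    usearch -fastx_uniques filtered.fa -fastaout uniques.fa -sizeout
    usearch -cluster_otus uniques.fa -otus otus.fa -relabel Otu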

    Denoised OTUs (ZOTUs)

    With the UNOISE algorithm (unoise3 command), the goal is to report all correct biological sequences in the reads. These are called zero-radius OTUs, or ZOTUs. It is expected that some species may be split over several ZOTUs due to intra-species variation (differences between paralogs and differences between strains). The advantage of ZOTUs is that they enable resolution of closely related strains with potentially different phenotypes that would be lumped into the same cluster by 97% OTUs.
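    A minimal sketch of the corresponding command (again with placeholder file names):

    usearch -unoise3 uniques.fa -zotus zotus.fa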

    OTU abundances

    By my definition, OTUs are sequences, not clusters. Traditional OTU analysis is usually based on an OTU table, which requires an abundance or frequency for each OTU in each sample. I calculate abundances by counting reads, while emphasizing that read abundance has very low correlation with species abundance, and therefore the biological interpretation of these abundances is unclear.

    For denoised sequences, the abundance of a ZOTU is intended to be the number of reads derived from that (one, unique) biological template sequence, counting correct reads and reads with errors.

    With 97% OTUs (UPARSE), the abundance of an OTU is intended to be the number of reads derived from all biological sequences that are >=97% identical to the OTU sequence, counting correct reads and reads with errors. This definition is not quite enough, because a read can match two or more OTUs. If there is more than one choice, the read is assigned to the OTU with highest identity (not highest abundance) because the OTU with highest identity is more likely to belong to the same species. If the choice is still ambiguous because two or more OTUs are tied for highest identity, the tie is broken systematically by choosing the first OTU in database file order. This procedure ensures that reads for a given strain should be consistently assigned to the same OTU.

    In practice, for both OTUs and ZOTUs, abundances are calculated by aligning reads to OTUs at a 97% identity threshold using the otutab command. A read is assigned to the most similar OTU sequence, breaking ties systematically. In the case of 97% OTUs, the motivation for using a 97% identity threshold is self-explanatory. With ZOTUs, the motivation is to allow up to 3% errors due to sequencing and PCR. It is expected that the ZOTUs already contain the true biological variation and the most similar ZOTU will almost always be the correct biological sequence for a read with errors.


    Mapping reads to OTUs

    The otutab command maps a read to an OTU by finding the OTU sequence with highest identity above a given threshold (usually 97%). If there is a tie, the tie is broken by choosing the first OTU in database file order.
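    For example, repeating the pipeline command from above (adding the -id option, assuming your USEARCH version supports it for otutab, to make the default 97% threshold explicit):

    usearch -otutab reads.fq -otus otus.fa -otutabout otutab.txt -mapout map.txt -id 0.97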

    A read sequence can match two or more OTUs with >=97% identity. It has been suggested (Ye et al. 2015) that the read should be assigned to the OTU with highest abundance rather than highest identity. I disagree, because high identity is a better signal that the sequence is from the same species.

    Suppose a sequence (S) matches two OTU sequences: A with 97% identity and B with 98% identity. The unique sequence A has abundance 1,000 and B has abundance 100. Should we assign S to A or B? A has higher abundance but lower identity, and vice versa for B.

    There are three possible reasons why S does not exactly match an OTU sequence: (1) it is a correct biological sequence, (2) it has sequence errors due to PCR or sequencing, or (3) it is chimeric.

    (1) S is a correct biological sequence

    Here, either S is a paralog of A or B derived from the same genome, or S is from a different species.

    (1a) If S is a paralog, we would prefer to assign it to the same OTU as the other paralog(s) from the species. This is more likely to be B because paralogs tend to have high identity. Paralogs in a given species almost always have higher identity to each other than to genes in another species. (The same argument applies to intra-species variation.)

    (1b) If S belongs to a different species, then we are lumping two species into the same OTU and there is no reason to prefer A or B.

    Conclusion: if S is a correct biological sequence, it is better to choose the OTU with highest identity, because that OTU is the most likely to belong to the same species; so we should assign S to B.

    (2) S has sequencing errors.

    Here, either S is a bad read of A or B, or S is a bad read of some other correct biological sequence which is within the identity threshold of an existing OTU and therefore does not have its own OTU.

    (2a) If we know that S is a bad read of an OTU sequence then we should again choose the highest identity match because this is much more likely to be the correct sequence. Suppose S is a bad read of A. Adding errors to A will probably reduce identity to both A and B. A bad read of A which has higher identity to B must have base call errors that reproduce letters in B by chance; this is very unlikely.

    (2b) If S is a bad read of a biological sequence which is not A or B then this case is similar to (1) and we should therefore prefer the highest identity match.

    Conclusion: if S has sequencing errors, we should assign it to the OTU with highest identity, because this is much more likely to be the correct sequence; so we should assign S to B.

    (3) S is an undetected chimera.

    This scenario is less common than (1) or (2) because chimeras are rare as a fraction of the reads. If S is chimeric, there is no reason to prefer A or B.
