A Deep Learning Library for Compound and Protein Modeling: DTI, Drug Property, PPI, DDI, Protein Function Prediction
A general-purpose framework, developed by the paper's authors, for drug- and protein-related prediction tasks.
1. Data
Inputs are drug SMILES strings and protein amino acid sequences:
- Drug: ['Cc1ccc(CNS(=O)(=O)c2ccc(s2)S(N)(=O)=O)cc1', ...]
- Protein: ['MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDIDTH...', ...]
2. Biological tasks addressed:
- DTI (Drug-Target Interaction): predict whether a drug and a protein target interact.
- Drug Property Prediction: can be posed as either a regression or a classification problem.
- DDI (Drug-Drug Interaction Prediction)
- Protein-Protein Interaction Prediction
- Protein Function Prediction
- Drug repurposing, for example:
Antiviral drug repurposing for SARS-CoV-2 3CLPro: given a new target sequence (e.g. the SARS-CoV-2 3CL protease), retrieve a list of repurposing candidates from a curated library of 81 antiviral drugs.
Given a new target sequence (e.g. the SARS-CoV 3CL protease), train on new data (the AID1706 bioassay), then retrieve a list of repurposing candidates from a proprietary library (e.g. antiviral drugs). The model can be trained from scratch or fine-tuned from a pretrained checkpoint.
The authors provide demo data and usage examples for all of the tasks above.
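As a hedged illustration of how these demos are typically driven, here is a minimal DTI training sketch in the style of the DeepPurpose quick-start (the library this post covers). The function names (`dataset.load_process_DAVIS`, `utils.data_process`, `utils.generate_config`, `models.model_initialize`) follow the library's public README, but exact signatures may differ between versions.

```python
# A minimal DTI training sketch in the style of the DeepPurpose quick-start.
# Assumes `pip install DeepPurpose`; API names follow the library's README
# and may differ slightly between versions.
from DeepPurpose import utils, dataset
from DeepPurpose import DTI as models

# Load a benchmark DTI dataset (DAVIS kinase binding affinities).
X_drugs, X_targets, y = dataset.load_process_DAVIS(path='./data', binary=False)

# Choose one drug encoder and one protein encoder (see Section 3 below).
drug_encoding, target_encoding = 'CNN', 'CNN'
train, val, test = utils.data_process(X_drugs, X_targets, y,
                                      drug_encoding, target_encoding,
                                      split_method='random', frac=[0.7, 0.1, 0.2])

config = utils.generate_config(drug_encoding=drug_encoding,
                               target_encoding=target_encoding,
                               train_epoch=5)
model = models.model_initialize(**config)
model.train(train, val, test)
```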
3. Model
(framework overview figure: m_btaa1005f1.png)
(1) Input encoding:
- Drug: eight encoders:
- Multi-Layer Perceptron (MLP) on Morgan fingerprints;
Morgan fingerprint [1] is a 1024-bit vector that encodes circular radius-2 substructures. A multi-layer perceptron is then applied to the binary fingerprint vector (see the sketch after this list).
- PubChem;
PubChem [2] is an 881-bit vector where each bit corresponds to a hand-crafted important substructure. A multi-layer perceptron is then applied on top of the vector.
- Daylight;
Daylight is a 2048-length vector that encodes path-based substructures. A multi-layer perceptron is then applied on top of the vector.
- RDKit 2D Fingerprint;
RDKit-2D is a 200-length vector of global pharmacophore descriptors. It is normalized so that all features share the same scale, using a cumulative density function fit on a sample of molecules.
- Convolutional Neural Network (CNN) on SMILES strings;
CNN [3] is a multi-layer 1D convolutional neural network. The SMILES characters are first encoded with an embedding layer and then fed into the CNN convolutions. A global max pooling layer is then attached, and a latent vector describing the compound is generated.
- Recurrent Neural Network (RNN) on top of CNN;
CNN+RNN [4,5] attaches a bidirectional recurrent neural network (GRU or LSTM) on top of the 1D CNN output to capture the more global, order-dependent structure of the compound. The input is again the SMILES character embedding.
- Transformer encoders on substructure fingerprints;
Transformer [6] uses a self-attention-based transformer encoder that operates on the substructure partition fingerprint.
- Message-passing graph neural network (MPNN) on the molecular graph;
MPNN [8] is a message-passing graph neural network that operates on the compound's molecular graph. It transmits latent information among the atoms and edges, where the input features combine atom/edge-level chemical descriptors with the connectivity messages. After obtaining an embedding vector for each atom and edge, a readout function (mean/sum) produces a (molecular) graph-level embedding vector.
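To make the fingerprint-based encoders concrete, here is a minimal sketch (not the library's own code) of the first option: a radius-2, 1024-bit Morgan fingerprint fed through an MLP, using RDKit and PyTorch. The hidden widths and the 128-dim output are illustrative assumptions, not the library's defaults.

```python
# A sketch of the Morgan-fingerprint + MLP drug encoder (RDKit + PyTorch).
# Hidden/output sizes are illustrative assumptions, not the library's defaults.
import torch
import torch.nn as nn
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_fp(smiles, n_bits=1024, radius=2):
    """Encode a SMILES string as a 1024-bit radius-2 Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return torch.tensor(list(fp), dtype=torch.float32)

mlp = nn.Sequential(           # MLP applied on top of the binary fingerprint
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 128),       # 128-dim latent drug embedding
)

z = mlp(morgan_fp('Cc1ccc(CNS(=O)(=O)c2ccc(s2)S(N)(=O)=O)cc1'))
print(z.shape)  # torch.Size([128])
```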
- Protein: seven encoders:
- AAC;
AAC is an 8420-length vector where each position corresponds to an amino acid k-mer, with k up to 3.
- PseAAC;
PseAAC includes the protein's hydrophobicity and hydrophilicity patterns in addition to the amino acid composition.
- Conjoint Triad;
Conjoint Triad uses the frequency distribution of contiguous three-amino-acid triads over a hand-crafted 7-letter alphabet.
- Quasi Sequence;
Quasi Sequence accounts for the sequence-order effect using a set of sequence-order-coupling numbers.
- CNN;
CNN is a multi-layer 1D convolutional neural network. The target amino acid sequence is decomposed into individual characters, encoded with an embedding layer, and fed into the CNN convolutions, followed by a global max pooling layer (see the sketch after this list).
- CNN+RNN;
CNN+RNN attaches a bidirectional recurrent neural network (GRU or LSTM) on top of the 1D CNN output to leverage the sequence-order information.
- Transformer;
Transformer uses a self-attention-based transformer encoder that operates on the substructure partition fingerprint [7] of proteins. Since a transformer's computation time and memory are quadratic in the input length, it is computationally infeasible to treat each amino acid symbol as a token. The partition fingerprint instead decomposes the amino acid sequence into moderately sized protein substructures, such as motifs; each partition is then treated as a token and fed into the model.
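For the CNN protein encoder described above, a minimal PyTorch sketch is shown below; the 20-letter vocabulary, embedding size, kernel widths, and output dimension are assumptions for illustration, not the library's exact hyperparameters.

```python
# A sketch of a character-level CNN encoder for amino acid sequences (PyTorch).
# Vocabulary, embedding size, and channel widths are illustrative assumptions.
import torch
import torch.nn as nn

AMINO_ACIDS = 'ACDEFGHIKLMNPQRSTVWY'
char2idx = {c: i + 1 for i, c in enumerate(AMINO_ACIDS)}  # 0 = padding

class ProteinCNN(nn.Module):
    def __init__(self, vocab_size=21, emb_dim=64, hidden=96, out_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=7), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7), nn.ReLU(),
        )
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, seq_idx):                 # (batch, length)
        x = self.emb(seq_idx).transpose(1, 2)   # (batch, emb_dim, length)
        x = self.conv(x)
        x = x.max(dim=2).values                 # global max pooling over length
        return self.proj(x)                     # latent protein vector

seq = 'MSHHWGYGKHNGPEHWHKDFPIAK'
idx = torch.tensor([[char2idx[c] for c in seq]])
z = ProteinCNN()(idx)
print(z.shape)  # torch.Size([1, 128])
```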
(2) Loss
For binding-affinity score prediction (a regression task), mean squared error (MSE) loss is used.
For binary interaction prediction, binary cross-entropy (BCE) loss is used.
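A minimal sketch of the two loss choices in PyTorch (the numbers are made up for illustration):

```python
# Loss selection depending on the prediction task (PyTorch).
import torch
import torch.nn as nn

# Regression: binding-affinity scores -> mean squared error.
pred_affinity = torch.tensor([6.2, 7.9])
true_affinity = torch.tensor([6.5, 8.1])
mse = nn.MSELoss()(pred_affinity, true_affinity)

# Binary classification: interaction yes/no -> binary cross-entropy.
# (BCEWithLogitsLoss applies the sigmoid internally for numerical stability.)
pred_logits = torch.tensor([0.3, -1.2])
true_labels = torch.tensor([1.0, 0.0])
bce = nn.BCEWithLogitsLoss()(pred_logits, true_labels)
```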