More information about the second set of benchmarking data sets
In our benchmarking paper, we focused on simulating one of the classic virtual screening use cases: you have a set of diverse actives (e.g. from an HTS experiment) and want to identify the next compounds that should be tested. When we did the model fusion paper we wanted to also simulate a second use case: you have a small set of similar actives (e.g. from a paper or patent) and want to identify the next compounds that should be tested. To accomplish this we added a second type of dataset to the benchmarking platform, for want of a better term we called these "Data sets II" (I was advocating for "leave one paper out"; it's probably good that Nikolas and Sereina prevailed).
In the paper itself we didn't have the space to do a great job of describing these datasets and why they are interesting. I'm going to try and at least partially remedy that here. It also gives me a chance to play around with an idea for pulling the scaffold out of a set of compounds.
To build datasets II, Sereina started with the 50 ChEMBL targets we had identified as being difficult enough to be interesting for the machine-learning exercise. For each of those targets, here are the steps:
- Find all ChEMBL documents that contain an activity value measured against that target.
- For each document, count the active molecules. We define actives as compounds with a potency < 10 uM.
- If a document has fewer than ten actives, remove it.
- If a target has fewer than four documents remaining, remove it.
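The filtering steps above can be sketched in standalone Python. The data here is invented for illustration, and the function name is my own; only the cutoffs (10 uM for actives, ten actives per document, four documents per target) come from the text:

```python
# Sketch of the document-filtering steps, on made-up data.
ACTIVITY_CUTOFF_UM = 10.0   # actives: potency < 10 uM
MIN_ACTIVES_PER_DOC = 10    # drop documents with fewer than ten actives
MIN_DOCS_PER_TARGET = 4     # drop targets with fewer than four documents

def filter_target(docs):
    """docs: {doc_id: [potency in uM, ...]} -> surviving docs, or None if
    the whole target is removed."""
    kept = {}
    for doc_id, potencies in docs.items():
        n_active = sum(1 for p in potencies if p < ACTIVITY_CUTOFF_UM)
        if n_active >= MIN_ACTIVES_PER_DOC:
            kept[doc_id] = potencies
    if len(kept) < MIN_DOCS_PER_TARGET:
        return None
    return kept

# toy example: one document has only 3 actives, so this target ends up
# with 3 documents and is removed entirely
docs = {
    1: [0.5] * 12,            # 12 actives
    2: [1.0] * 10,            # 10 actives
    3: [2.0] * 15,            # 15 actives
    4: [0.1] * 3 + [50] * 9,  # only 3 actives -> document dropped
}
print(filter_target(docs))  # -> None
```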
This process left us with 37 targets. The targets had from 4-37 papers each and the papers had 10-112 actives.
Given that the documents in ChEMBL are mainly from med chem papers, and knowing the content of the typical med chem paper, we asserted that each paper that makes up these datasets is likely to contain data about one or two chemical series along with a small number of reference compounds. I'll show here that, at least for a random selection of sets, this is actually true.
The approach is a simple one: pull the compounds and calculate the MCS that covers at least 80% of the compounds in the dataset. This arbitrary cutoff allows the reference compound(s) to be ignored. To allow for the small changes that are sometimes made to scaffolds, I use generic atom types for the MCS-finding and then, in a post-processing step, assign the atom types that are preserved across all compounds matching the MCS.
The datasets are in github: https://github.com/rdkit/benchmarking_platform
NOTE: This is another example where the post is a bit too big for blogger to handle. You can find the full post here in the nbviewer.
On to the analysis
import pandas as pd
import numpy   # used below in MCS_Report
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw
Draw.DrawingOptions.elemDict[0]=(0.,0.,0.) # draw dummy atoms in black
from rdkit.Chem import PandasTools
from rdkit.Chem import AllChem as Chem
from rdkit.Chem import DataStructs
from rdkit.Chem import MCS
import cPickle
from rdkit import rdBase
print(rdBase.rdkitVersion)
import time
print time.asctime()
Grab the data
import glob
pkls = glob.glob('/home/glandrum/Code/benchmarking_platform/compounds/ChEMBL_II/*.pkl')
print len(pkls)
Get target information from our local ChEMBL copy:
%load_ext sql
%config SqlMagic.feedback = False
import os
tgts={}
for pkl in pkls:
    fn = os.path.split(pkl)[-1]
    basen = os.path.splitext(fn)[0]
    tgtn = int(basen.split('_')[-1])
    data = %sql postgresql://localhost/chembl_16 \
        select tid,pref_name,organism from target_dictionary where tid=:tgtn ;
    tgts[data[0][0]] = (pkl, data[0][1], data[0][2])
for idx in sorted(tgts.keys()):
    pkl, tgt, species = tgts[idx]
    print idx, tgt, species
sets = cPickle.load(file(tgts[11631][0]))
The data is organized in a dictionary with one entry per paper.
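The layout of that dictionary, inferred from how it is unpacked in the cells below, is doc_id -> list of (compound_id, SMILES) pairs. A minimal sketch with invented ids and SMILES:

```python
# Assumed pickle layout: {doc_id: [(compound_id, smiles), ...]}.
# The doc_id, compound ids, and SMILES here are invented for illustration.
sets = {
    11111: [('CHEMBL0000001', 'c1ccccc1O'),
            ('CHEMBL0000002', 'c1ccccc1N')],
}

# pull out one paper's compounds, the same way the notebook does
docid, cmpds = next(iter(sets.items()))
ids, smis = zip(*cmpds)
print(docid, ids, smis)
```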
docid,cmpds = sets.iteritems().next()
data = %sql postgresql://localhost/chembl_16 \
select journal,volume,first_page,year from docs where doc_id=:docid ;
print ', '.join(str(x) for x in data[0])
ids,smis= zip(*cmpds)
mols = [Chem.MolFromSmiles(x) for x in smis]
Draw.MolsToGridImage(mols,molsPerRow=5,legends=ids)
Define an MCS-based approach to find the scaffold from the paper:
def MCS_Report(ms,printSmarts=True,atomCompare='any',**kwargs):
    """
    the "convert to specific" algorithm used isn't perfect because it doesn't deal correctly
    with possible symmetries in the MCS-molecule match, but it's at least a start.
    """
    mcs = MCS.FindMCS(ms,atomCompare=atomCompare,timeout=60,**kwargs)
    nAts = numpy.array([x.GetNumAtoms() for x in ms])
    print 'Mean nAts %.1f, mcs nAts: %d'%(nAts.mean(),mcs.numAtoms)
    if printSmarts:
        print mcs.smarts
    mcsM = Chem.MolFromSmarts(mcs.smarts)
    # find the atom types that are preserved across all matching compounds
    if atomCompare == 'any':
        atNums = [-1]*mcsM.GetNumAtoms()
        for m in ms:
            match = m.GetSubstructMatch(mcsM)
            if not match:
                continue
            for qidx,midx in enumerate(match):
                anum = m.GetAtomWithIdx(midx).GetAtomicNum()
                if atNums[qidx]==-1:
                    # first matching molecule: record the atomic number
                    atNums[qidx]=anum
                elif anum!=atNums[qidx]:
                    # atom type varies between molecules: leave it generic
                    atNums[qidx]=0
        for idx,atnum in enumerate(atNums):
            if atnum>0:
                mcsM.GetAtomWithIdx(idx).SetAtomicNum(atnum)
    mcsM.UpdatePropertyCache(False)
    Chem.SetHybridization(mcsM)
    img = Draw.MolToImage(mcsM,kekulize=False)
    return mcsM,img
The function prints out the mean number of atoms per molecule in the input along with the size of the MCS. This is intended to help assess whether or not the MCS is actually a scaffold.
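To make that assessment concrete, one could compare the MCS size to the mean molecule size. This helper and its 0.5 cutoff are my own illustrative choices, not something from the post:

```python
# Rough heuristic (my own, not from the post): if the MCS covers at least
# half of an average molecule, it plausibly represents the series scaffold.
def looks_like_scaffold(mol_sizes, mcs_size, min_fraction=0.5):
    mean_size = sum(mol_sizes) / float(len(mol_sizes))
    return mcs_size / mean_size >= min_fraction

print(looks_like_scaffold([30, 32, 28], 21))  # -> True  (21/30 = 0.7)
print(looks_like_scaffold([30, 32, 28], 9))   # -> False (9/30 = 0.3)
```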
mcs,img = MCS_Report(mols,threshold=0.8)
img
Now let's look at all the papers
def processTarget(tid):
    pkl,target,species = tgts[tid]
    print '-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*'
    print '\t',target,species
    sets = cPickle.load(file(pkl))
    print '\t\t %d papers'%(len(sets))
    alldata=[]
    for docid,cmpds in sets.iteritems():
        data = %sql postgresql://localhost/chembl_16 \
            select journal,volume,first_page,year from docs where doc_id=:docid ;
        ref = ', '.join(str(x) for x in data[0])
        print '----------------'
        print ref
        ids,smis = zip(*cmpds)
        mols = [Chem.MolFromSmiles(x) for x in smis]
        mcs,img = MCS_Report(mols,printSmarts=False,threshold=0.8,atomCompare='any')
        alldata.append([docid,ref,mcs,img,mols])
    return alldata
alldata=processTarget(11631)
Draw.MolsToGridImage([x[2] for x in alldata],molsPerRow=5,kekulize=False)
alldata=processTarget(13001)
Draw.MolsToGridImage([x[2] for x in alldata],molsPerRow=5,kekulize=False)