Kun:LabNotes/MONOD/2013-11-22
MONOD round 1
Target identification
- Data used:
- Cancer data:
- GBM: U87 (ENCODE RRBS Hudson Alpha; ENCODE 450k)
- Pancreatic cancer: PANC1 (ENCODE RRBS UW & Hudson Alpha; ENCODE 450k)
- Whole blood data: GSE30253 RRBS data; GSE31263 WGBS data
- DMSs and DMS clusters identification:
- I wrote a simple script that takes the average of all existing data for each CpG site and reports the sites at which the methylation difference between cancer and whole blood is greater than 0.8.
./find_DMS_MONOD_v1.pl > MONOD_v1_DMS.txt
- These sites were then grouped into DMS clusters (both steps are sketched below).
./extract_clusters.pl MONOD_v1_DMS.txt > MONOD_v1_DMS_clusters.txt
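For reference, here is a minimal Perl sketch of both steps, under assumed formats: the methylation table arrives on STDIN as tab-separated chr/pos/mean-cancer/mean-blood rows, and adjacent DMSs within a hypothetical 200-bp window are grouped into one cluster (the actual window used by extract_clusters.pl is not recorded here).
#!/usr/bin/perl
# Sketch only: the input columns and the 200-bp clustering window are assumptions.
use strict; use warnings;
my ($last_chr, $last_pos, @cluster) = ('', -1);
while (<STDIN>) {
    chomp;
    my ($chr, $pos, $cancer, $blood) = split /\t/;
    next unless abs($cancer - $blood) > 0.8;    # the DMS cutoff described above
    if ($chr eq $last_chr && $pos - $last_pos <= 200) {
        push @cluster, $pos;                    # extend the current cluster
    } else {
        print join("\t", $last_chr, $cluster[0], $cluster[-1], scalar @cluster), "\n" if @cluster;
        @cluster = ($pos);                      # start a new cluster
    }
    ($last_chr, $last_pos) = ($chr, $pos);
}
print join("\t", $last_chr, $cluster[0], $cluster[-1], scalar @cluster), "\n" if @cluster;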
Probe design
- Dinh went through one round of design using ppDesigner v1.1. However, upon close inspection, I found that some capturing arms contained long stretches of poly-T or poly-G.
- I modified get_probes.pl to filter out any candidate capturing arm containing a run of more than 6 identical bases (A/T/G/C), as sketched below.
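A minimal sketch of that filter (the exact regex in get_probes.pl may differ):
# Sketch: reject any candidate arm containing a run of more than 6 identical bases.
use strict; use warnings;
sub has_long_homopolymer {
    my ($arm) = @_;
    return $arm =~ /(A{7,}|T{7,}|G{7,}|C{7,})/i;
}
# Example: an 8-base poly-T stretch is rejected.
print has_long_homopolymer("ACGTTTTTTTTAC") ? "filtered\n" : "kept\n";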
- I repeated the probe design.
- PANC_GBM_DMS: Target list
- Set I: gap size = 400-450bp, flanking len = 250bp; ppDesigner Output + probe assembly script V6 = Probe sequences: 3,514 probes
- Set II: gap size = 125-175bp, flanking len = 125bp; ppDesigner Output + probe assembly script V4 = Probe sequences: 6,960 probes
- CRC_DMS: Target list.
- Set I: gap size = 400-450bp, flanking len = 300bp; ppDesigner Output + probe assembly script V6 = Probe sequences: 148 probes
- Set II: gap size = 125-175bp, flanking len = 175bp; ppDesigner Output + probe assembly script V4 = Probe sequences: 149 probes
- MONOD v1 oligo pool ordered from Custom Array.
BSPP capture
- Alan performed the first experiment on five cancer cell lines and three whole blood samples. Note that Alan also referred to this probe set as GP1.
Data analysis
Mapping summary
- All 16 sequencing libraries were sequenced in a PE150bp MiSeq run: /home/kunzhang/seqStore/131227_MiSeq_GP1
- Pre-processing of sequencing reads: the 6-bp UMI in each Read 1 is extracted and placed in the read name, then 27 bp at the 5' ends (corresponding to H1/H2) are trimmed off from Read 1 and Read 2.
./extract_UMI_PE.pl
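A minimal sketch of the Read 1 processing (Read 2 would get only the 27-bp trim; the FASTQ handling here is an illustration, not the actual extract_UMI_PE.pl code):
# Sketch: move the 6-bp UMI from the start of Read 1 into the read name,
# then trim the 27-bp H1/H2 arm. Reads FASTQ on STDIN, four lines per record.
use strict; use warnings;
while (my $name = <STDIN>) {
    my $seq  = <STDIN>; my $plus = <STDIN>; my $qual = <STDIN>;
    chomp($name, $seq, $qual);
    my $umi = substr($seq, 0, 6);          # 6-bp UMI at the 5' end of Read 1
    $name =~ s/^(\S+)/$1:$umi/;            # stash the UMI in the read name
    print "$name\n", substr($seq, 6 + 27), "\n+\n", substr($qual, 6 + 27), "\n";
}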
- The reads were then mapped with bisReadMapperPE19.pl, and the bam files were used for haplotype analysis. The 400-bp probe set had very poor capture, so the following analyses focus on the data generated with the 150-bp probe set (GP1_V4).
Sample | raw reads | mapped reads | on-target reads | on-target rate (on-target/raw) | specificity (on-target/mapped) | # CpGs called |
BE2C_V4 | 1,386,590 | 915,834 | 862,623 | 62.21% | 94.19% | 31,853 |
BE2C_V6 | 2,933,904 | 489,918 | 38,994 | 1.33% | 7.96% | 28,819 |
BXPC3_V4 | 1,805,276 | 1,256,196 | 1,232,955 | 68.30% | 98.15% | 36,528 |
BXPC3_V6 | 2,753,274 | 376,020 | 24,332 | 0.88% | 6.47% | 18,885 |
PANC1_V4 | 1,264,618 | 807,262 | 763,920 | 60.41% | 94.63% | 32,712 |
PANC1_V6 | 2,522,344 | 386,410 | 29,667 | 1.18% | 7.68% | 22,154 |
T98G_V4 | 1,803,782 | 1,125,066 | 1,107,621 | 61.41% | 98.45% | 37,906 |
T98G_V6 | 3,151,530 | 478,980 | 30,172 | 0.96% | 6.30% | 27,995 |
U87MG_V4 | 1,586,144 | 1,047,246 | 1,011,039 | 63.74% | 96.54% | 34,562 |
U87MG_V6 | 2,856,720 | 419,040 | 40,911 | 1.43% | 9.76% | 26,848 |
UCLA-SZ_B1_V4 | 1,427,496 | 774,492 | 713,916 | 50.01% | 92.18% | 29,093 |
UCLA-SZ_B1_V6 | 2,173,094 | 374,268 | 21,959 | 1.01% | 5.87% | 21,690 |
UCLA-SZ_D1_V4 | 1,409,960 | 523,588 | 472,779 | 33.53% | 90.30% | 26,859 |
UCLA-SZ_D1_V6 | 1,125,166 | 136,234 | 5,584 | 0.50% | 4.10% | 8,815 |
UCLA-SZ_H11_V4 | 1,574,628 | 643,542 | 583,033 | 37.03% | 90.60% | 29,864 |
UCLA-SZ_H11_V6 | 2,165,268 | 268,476 | 15,822 | 0.73% | 5.89% | 16,206 |
I used our standard pipeline to create a methylation matrix and performed hierarchical clustering. File:131227 MiSeq 8 sample V4 clustering.png Clearly, all samples cluster as expected: the three blood samples form one group, the two pancreatic cancer lines (BXPC3, PANC1) form the second, and the three GBM lines (U87, T98G, BE2C) form the third.
Obtaining haplotypes
- I wrote the bam2hapInfo.pl script to extract all haplotypes from a pair of bam files within all targeted regions.
../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt BE2C_V4_R1_UMI_001.fastq.sorted.fwd.bam BE2C_V4_R1_UMI_001.fastq.sorted.rev.bam >BE2C_V4_hapInfo.txt
../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt PANC1_V4_R1_UMI_001.fastq.sorted.fwd.bam PANC1_V4_R1_UMI_001.fastq.sorted.rev.bam >PANC1_V4_hapInfo.txt
../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt T98G_V4_R1_UMI_001.fastq.sorted.fwd.bam T98G_V4_R1_UMI_001.fastq.sorted.rev.bam >T98G_V4_hapInfo.txt
../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt U87MG_V4_R1_UMI_001.fastq.sorted.fwd.bam U87MG_V4_R1_UMI_001.fastq.sorted.rev.bam >U87MG_V4_hapInfo.txt
../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt UCLA-SZ_B1_V4_R1_UMI_001.fastq.sorted.fwd.bam UCLA-SZ_B1_V4_R1_UMI_001.fastq.sorted.rev.bam >UCLA-SZ_B1_V4_hapInfo.txt
../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt UCLA-SZ_D1_V4_R1_UMI_001.fastq.sorted.fwd.bam UCLA-SZ_D1_V4_R1_UMI_001.fastq.sorted.rev.bam >UCLA-SZ_D1_V4_hapInfo.txt
../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt UCLA-SZ_H11_V4_R1_UMI_001.fastq.sorted.fwd.bam UCLA-SZ_H11_V4_R1_UMI_001.fastq.sorted.rev.bam >UCLA-SZ_H11_V4_hapInfo.txt
Sample | # on-target reads | # unique reads | clonal rate (1 - unique/on-target) |
BE2C_V4 | 862,623 | 442,154 | 48.7% |
BXPC3_V4 | 1,232,955 | 709,284 | 42.5% |
PANC1_V4 | 763,920 | 268,728 | 64.8% |
T98G_V4 | 1,107,621 | 305,041 | 72.5% |
U87MG_V4 | 1,011,039 | 333,677 | 67.0% |
UCLA-SZ_B1_V4 | 713,916 | 396,377 | 44.5% |
UCLA-SZ_D1_V4 | 472,779 | 86,148 | 81.8% |
UCLA-SZ_H11_V4 | 583,033 | 186,801 | 68.0% |
Capture efficiency & bias
- Check the capture efficiencies for all probes using bam2probeEfficiency.pl (batch processing script: get_probeEfficiency_all_files.sh), then merge all files into a matrix:
./get_probeEfficiency_matrix.pl > 131227_MiSeq_V4_probeEfficiency_matrix_UMI_20Mar14.txt
Some probes seemed to have very different capture efficiencies between cancer and whole blood. This could be because two probe versions were synthesized for H1/H2 arms that contain CpG sites, and the "C" versions may have higher annealing efficiencies. I plotted the t-statistic (cancer versus whole blood) against the number of CpG sites within the capture arms H1/H2, or within the insert. There was a positive correlation with the number of CpGs in H1/H2, but it explains only ~10% of the variability; another major factor appears to remain unknown. A sketch of the per-probe test follows the plots below.
File:MONOD V1 Capture bias vs CG H1H2.png File:MONOD V1 Capture bias vs CG insert.png
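For reference, a sketch of the per-probe test (Welch's t-statistic between the cancer-sample and blood-sample efficiencies; the matrix parsing is omitted, and the numbers below are made up):
# Sketch: Welch's t-statistic for one probe's capture efficiencies,
# cancer samples versus blood samples.
use strict; use warnings;
sub mean_var {
    my @x = @_;
    my $m = 0; $m += $_ for @x; $m /= @x;
    my $v = 0; $v += ($_ - $m) ** 2 for @x; $v /= (@x - 1);
    return ($m, $v);
}
sub welch_t {
    my ($a, $b) = @_;                       # array refs of efficiencies
    my ($ma, $va) = mean_var(@$a);
    my ($mb, $vb) = mean_var(@$b);
    return ($ma - $mb) / sqrt($va / @$a + $vb / @$b);
}
# Example: five cancer lines versus three blood samples (made-up values).
printf "t = %.2f\n", welch_t([0.90, 0.80, 0.85, 0.95, 0.90], [0.20, 0.25, 0.30]);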
- I decided to focus the analysis on a subset of "stable probes" that have similar capture efficiencies between cancer and whole blood.
Haplotype-based deconvolution
- One key concept of MONOD is to identify tissue-specific (i.e., tumor versus blood) haplotypes and use them to deconvolute the test samples. We are looking for two types of answers: (i) whether a test sample contains a low level of DNA from one tissue type (tumor) among the majority of DNA from another tissue type (blood); these results should be reported as p-values; (ii) if the answer to (i) is yes, what the level of tumor DNA is; these results are point estimates with confidence intervals.
- Before getting into deconvolution, we need to define how to do comparisons at the haplotype level. Here are a number of considerations:
- The raw haplotypes might have missing alleles due to sequencing quality or other technical reasons, so we might need to remove CpG sites that have too many missing alleles. We might also need to trim in the other direction, because low-quality sequencing reads can lead to multiple missing alleles within the same haplotype. Finally, after trimming CpG sites, some haplotypes might be too short to be useful for haplotype analysis, so haplotypes below a certain length limit should be removed. I chose a minimum of 5 CpG sites per haplotype (tried 3, 4, 5, 6, 7; 5 was the best). For the trimming, I first remove any read/haplotype with fewer than 5 good alleles; then I remove all CpG sites with more than 10% missing alleles; after this second trimming, any haplotype with fewer than 5 CpG sites is removed. A sketch of this procedure follows.
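A minimal sketch of the two-pass trimming, with haplotypes encoded as strings over C/T/N and N marking a missing allele (the encoding is an assumption):
use strict; use warnings;
# Toy haplotypes over one region; N marks a missing allele.
my @haps = qw(CCTTC CTNNN CTCTC NTCTC CCCCC CCTTT);
# Pass 1: drop haplotypes with fewer than 5 called alleles.
@haps = grep { ($_ =~ tr/CT//) >= 5 } @haps;
# Pass 2: drop CpG sites (columns) with >10% missing alleles.
my @keep = grep {
    my $i = $_;
    my $missing = grep { substr($_, $i, 1) eq 'N' } @haps;
    @haps && $missing / @haps <= 0.10;
} 0 .. length($haps[0]) - 1;
@haps = map { my $h = $_; join '', map { substr($h, $_, 1) } @keep } @haps;
# Pass 3: drop haplotypes left with fewer than 5 CpG sites.
@haps = grep { ($_ =~ tr/CT//) >= 5 } @haps;
print "$_\n" for @haps;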
- Each "true" haplotype might have different variants due to sequencing errors. In addition, several haplotypes that are different on one or two CpG sites might be in the same epigenetic state. So before comparing haplotypes, we want to collapse similar haplotypes into clusters, and perform the rest of analyses at the cluster level. In deciding how to identify clusters, we first need to come up with a similarity measurement, and a way to decide whether two haplotypes should be in the same clusters or not. I define an overall "error rate" at 0.04, which include sequencing errors, incomplete bisulfite conversion. In addition, for the CpG site that have an intermediate level of methylation at the population level, each individual molecule has certain probability to be methylated or unmethylated. I simply set this probability based on the average methylation level. With that, we can find out the number of allelic difference (Hemming distance) between any two haplotypes, and ask whether it can be explained by the error and the probability of acquiring methylation at the single molecule level. I used a p-value cutoff of 0.01 to assign two haplotypes into two different clusters. To systematically group raw haplotypes into clusters, I first ranked all haplotypes for a target region based on their abundance. Then I started from the most abundant haplotype, called it as a dominant haplotype of a cluster. This dominant haplotype was then used to recruit similar haplotypes in the list, based on the statistical cutoff mentioned about. All similar haplotypes were extracted from the list to form one haplotype cluster. The same procedure is repeated on the remaining haplotypes in the list, until all haplotypes are assigned to one of the clusters. Each cluster has at least one haplotype. If the cluster size is two or more, the most abundant one is called the dominant haplotype.
- In comparing two haplotype clusters, the distance between them is the Hamming distance between their dominant haplotypes (I tried directly comparing individual haplotypes within the clusters, but it seemed to make things worse). To test whether two clusters are similar, I calculated a z-score based on the "error rate" mentioned above; a z-score cutoff of 3.1 was used (equivalent to 99.9% confidence), as sketched below.
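A sketch of that test (normal approximation to the binomial under the 4% error rate):
use strict; use warnings;
# z-score for the Hamming distance $d between two dominant haplotypes of
# length $len, under a per-allele error rate $err.
sub cluster_z {
    my ($d, $len, $err) = @_;
    my $mu    = $len * $err;                    # expected mismatches from error alone
    my $sigma = sqrt($len * $err * (1 - $err));
    return ($d - $mu) / $sigma;
}
# Example: 3 mismatches over 10 CpG sites; z > 3.1 means different clusters.
printf "z = %.2f\n", cluster_z(3, 10, 0.04);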
- To test the idea of haplotype-based deconvolution, I created synthetic mixtures by mixing sequencing reads from cancer and whole blood at different ratios. I first found the number of usable reads in each bam file and calculated the down-sampling rates for making synthetic data sets at different tumor/blood ratios: 131227_MiSeq_down_sample_info.txt. I then used a script (make_synthetic_mixtures.pl) to create a series of bam files and hapInfo files.
../make_synthetic_mixtures.pl < 131227_MiSeq_down_sample_info.txt
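A sketch of how the down-sampling rates can be derived (the exact arithmetic in make_synthetic_mixtures.pl may differ):
use strict; use warnings;
# Given usable read counts for the tumor and blood bam files, a target
# mixture size, and a desired tumor fraction, return per-file sampling rates.
sub downsample_rates {
    my ($tumor_reads, $blood_reads, $total, $tumor_frac) = @_;
    return ($total * $tumor_frac / $tumor_reads,
            $total * (1 - $tumor_frac) / $blood_reads);
}
# Example: a 5% BE2C-in-UCLA-SZ_B1 mixture of 500,000 reads, using the
# on-target read counts from the mapping table above.
my ($rt, $rb) = downsample_rates(862_623, 713_916, 500_000, 0.05);
printf "tumor rate = %.4f, blood rate = %.4f\n", $rt, $rb;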
- I then wrote another script (search_informative_probes.pl) to identify a subset of probes that are informative for distinguishing tumor DNA from whole blood. I ran this script on data from all five tumor cell lines and combined all the informative probes (5_cancer_line_informative_probes.txt). I also tried to trim the list further based on consistent capture of both tumor and blood DNA (t-test value between -1 and 1), but didn't find any improvement.
../../search_informative_probes.pl UCLA-SZ_B1_V4_hapInfo.txt BE2C_V4_hapInfo.txt UCLA-SZ_B1_V4_BE2C_V4_ds_5pct.hapInfo.txt > UCLA-SZ_B1_V4_BE2C_V4_informative_probes.txt
../../search_informative_probes.pl UCLA-SZ_B1_V4_hapInfo.txt BXPC3_V4_hapInfo.txt UCLA-SZ_B1_V4_BXPC3_V4_ds_5pct.hapInfo.txt > UCLA-SZ_B1_V4_BXPC3_V4_informative_probes.txt
../../search_informative_probes.pl UCLA-SZ_B1_V4_hapInfo.txt PANC1_V4_hapInfo.txt UCLA-SZ_B1_V4_PANC1_V4_ds_5pct.hapInfo.txt > UCLA-SZ_B1_V4_PANC1_V4_informative_probes.txt
../../search_informative_probes.pl UCLA-SZ_B1_V4_hapInfo.txt T98G_V4_hapInfo.txt UCLA-SZ_B1_V4_T98G_V4_ds_5pct.hapInfo.txt >UCLA-SZ_B1_V4_T98G_V4_informative_probes.txt
../../search_informative_probes.pl UCLA-SZ_B1_V4_hapInfo.txt U87MG_V4_hapInfo.txt UCLA-SZ_B1_V4_U87MG_V4_ds_5pct.hapInfo.txt > UCLA-SZ_B1_V4_U87MG_V4_informative_probes.txt
awk '{print $1}' *informative* | sort | uniq > 5_cancer_line_informative_probes.txt
- Here is a key script (mixMethHapAnalysis.pl) that I wrote for deconvolution. It takes three hapInfo files: one for pure blood, a second for the cancer reference, and a third for the mixed DNA. The fourth argument is optional: a file containing a list of probes to focus on. For each haplotype cluster in the mixture, the script tries to assign it to either blood or cancer based on similarity. If a haplotype cluster has similar clusters in both blood and cancer, the haplotype counts in the mixture are split between blood and cancer in proportion to the methylation level in the two references (the splitting step is sketched below). I then performed a simple ANOVA analysis across all haplotypes used, which reports the tumor and blood fractions as well as the t-test statistics.
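A sketch of the splitting step; the exact weighting inside mixMethHapAnalysis.pl is assumed here to be the cluster's abundance in each reference:
use strict; use warnings;
# Divide the mixture read count of a shared haplotype cluster between
# blood and cancer, proportional to its abundance in each reference.
sub split_counts {
    my ($mix_count, $freq_blood, $freq_cancer) = @_;
    my $total = $freq_blood + $freq_cancer;
    return (0, 0) unless $total > 0;
    return ($mix_count * $freq_blood  / $total,
            $mix_count * $freq_cancer / $total);
}
# Example: 100 mixture reads of a cluster seen at frequency 0.30 in the
# blood reference and 0.05 in the cancer reference (made-up numbers).
my ($to_blood, $to_cancer) = split_counts(100, 0.30, 0.05);
printf "blood %.1f reads, cancer %.1f reads\n", $to_blood, $to_cancer;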
- With the data from this sequencing run, I first performed "self-deconvolution", meaning that the mixture was created from one tumor and one blood sample, and the same tumor and blood samples were used as the references for deconvolution. This is an easy start: every haplotype in the mixture should be present in the references, and no sample-to-sample variability is considered. I wrote a master script (deconvolute_synthetic_mixtures_self.pl) to do self-deconvolution for all synthetic DNA mixtures.
./deconvolute_synthetic_mixtures_self.pl <../131227_MiSeq_down_sample_info.txt > deconvolute_synthetic_mixtures_self_results.txt
File:MONOD V1 detection sensitivity self 03Apr14.png File:MONOD V1 detection confidence self 03Apr14.png
From these two plots, it appears that MONOD probe set V1 can detect 2% tumor DNA at a reasonable level of confidence. This is very encouraging, and there is clearly room to improve.
File:MONOD V1 detection sensitivity self self stable 03Apr14.png File:MONOD V1 detection confidence self self stable 03Apr14.png
I then switched to the stable probes (the ones with similar capture efficiencies for tumor and blood DNA, -0.05<t-value<1) and saw a subtle improvement in sensitivity, down to ~1% tumor DNA in blood.
- Upon carefully inspecting my script, I realized that PE reads were not handled properly during UMI processing. Ideally, a pair of properly mapped reads from the same molecule should be combined into one haplotype to avoid UMI double counting. I therefore modified the bam2hapInfo.pl script, but that turned out to give similar sensitivity, so I went back and examined the script again.
- I tried to be more careful in dealing with UMIs. Previously, when multiple reads had the same UMI, I simply used the first one (sometimes more than one, to deal with barcode collisions) and discarded the rest. In fact, we can use the multiple reads sharing the same UMI to correct sequencing errors and derive a consensus sequence (or haplotype), as sketched below. Doing so makes barcode collisions harder to handle, but I now think that issue is secondary, since the chance of collision will be much lower once the random barcode length is increased from 6 to 8. The other consideration is that PE sequencing data can be treated as either SE reads or PE reads. Each haplotype would be longer in PE mode, but there would likely be fewer data points for the regression. It is hard to say whether PE or SE mode is better, so we should have the flexibility to process data in both modes. bam2hapInfo.pl
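A sketch of the per-UMI consensus step (majority vote per CpG position; the data layout is an assumption):
use strict; use warnings;
# Reads sharing a UMI are collapsed to one haplotype by majority vote at
# each position, so a sequencing error in one read is outvoted.
my %by_umi = (ACGTAGCA => [qw(CCTTC CCTTC CCTCC)]);   # one molecule, three reads
for my $umi (sort keys %by_umi) {
    my @reads = @{ $by_umi{$umi} };
    my $consensus = '';
    for my $i (0 .. length($reads[0]) - 1) {
        my %votes;
        $votes{ substr($_, $i, 1) }++ for @reads;
        my ($best) = sort { $votes{$b} <=> $votes{$a} } keys %votes;
        $consensus .= $best;
    }
    print "$umi\t$consensus\n";    # prints CCTTC: the single-read error is outvoted
}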
Self-validation: File:MONOD V1 detection sensitivity self self all probes 07Apr14.png File:MONOD V1 detection confidence self self all probes 07Apr14.png
Cross-validation: synthetic mixtures are from UCLA-SZ_B1 and one of the cancer lines; the references are UCLA-SZ_D1 and UCLA-SZ_H11 combined, plus a different cancer cell line. File:MONOD V1 detection sensitivity different all probes 07Apr14.png File:MONOD V1 detection confidence different all probes 07Apr14.png
- The detection sensitivity and confidence are much lower with cross-validation, which is expected: in this case there is sample-to-sample variability, so the performance depends greatly on how comprehensively the reference data sets capture that variability. As we collect more data, the performance should improve.
- I further played around with the parameter settings for mixMethHapAnalysis.pl, and found a combination that leads to a much higher confidence.
$minHapLen = 4; # if this number is too high, fewer haplotypes will be used, and the power is lower
next if ($refB_mean_methylation - $refA_mean_methylation < 0.4); # exclude regions with similar methylation between blood and cancer; 0.4 seems to be a sweet spot
# In addition, I previously required a haplotype to have a differential methylation level (>0.1) to be included in the regression; I just found that this requirement actually decreases the power.
Self-validation (SE mode): File:MONOD V1 detection sensitivity self self all probes 07Apr14 2.png File:MONOD V1 detection confidence self self all probes 07Apr14 2.png
Self-validation (PE mode): File:MONOD V1 detection sensitivity self self all probes PE 07Apr14 2.png File:MONOD V1 detection confidence self all probes PE 07Apr14 2.png
Cross-validation (SE mode): File:MONOD V1 detection sensitivity different all probes 07Apr14 2.png File:MONOD V1 detection confidence different all probes 07Apr14 2.png
Cross-validation (PE mode): File:MONOD V1 detection sensitivity different all probes 07Apr14 PE2.png File:MONOD V1 detection confidence different all probes 07Apr14 PE2.png
- With all this tuning of algorithms and parameters, MONOD looks very promising at this point. With self-validation, it can clearly detect cancer DNA at 0.2-0.5% with a high level of confidence. For cross-validation, even with the very small data set generated in the first experiment, it can approach the ~2% level. The SE mode has a very small increase in sensitivity due to the larger N in the regression, but the PE mode seems to be more robust. We can continue to evaluate these two options, but my feeling is that PE mode is the way to go.