Kun:LabNotes/MONOD/2013-11-22

MONOD round 1

Target identification

  • Data used:
    • Cancer data:
      • GBM: U87 (ENCODE RRBS Hudson Alpha; ENCODE 450k)
      • Pancreatic cancer: PANC1 (ENCODE RRBS UW & Hudson Alpha; ENCODE 450k)
    • Whole blood data: GSE30253 RRBS data; GSE31263 WGBS data
  • DMS and DMS cluster identification:
    • I wrote a simple script that takes the average of all available data at each CpG site and reports the sites where the methylation difference between cancer and whole blood is greater than 0.8 (a sketch of the site-selection step appears after this list).
  ./find_DMS_MONOD_v1.pl > MONOD_v1_DMS.txt
    • These sites were then grouped into DMS clusters.
  ./extract_clusters.pl MONOD_v1_DMS.txt > MONOD_v1_DMS_clusters.txt
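
find_DMS_MONOD_v1.pl is not reproduced on this page; below is a minimal sketch of the site-selection step, assuming a hypothetical tab-delimited input with one CpG per line (chromosome, position, mean cancer methylation, mean whole-blood methylation). The column layout and input handling are assumptions for illustration, not the actual script interface.

  #!/usr/bin/perl -w
  # Sketch of the DMS-calling step: report CpG sites where the averaged cancer
  # and whole-blood methylation fractions differ by more than 0.8.
  # Assumed input columns: chr, position, mean_cancer, mean_blood.
  use strict;

  my $min_diff = 0.8;
  while (my $line = <STDIN>) {
      chomp $line;
      my ($chr, $pos, $mC_cancer, $mC_blood) = split /\t/, $line;
      next unless defined $mC_blood;
      my $diff = $mC_cancer - $mC_blood;
      # Keep both hyper- and hypo-methylated sites relative to whole blood.
      print join("\t", $chr, $pos, $mC_cancer, $mC_blood, $diff), "\n" if abs($diff) > $min_diff;
  }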

Probe design

BSPP capture

Data analysis

Mapping summary

  • All 16 sequencing libraries were sequenced in a PE150bp MiSeq run: /home/kunzhang/seqStore/131227_MiSeq_GP1
  • Pre-processing of sequencing reads: the 6bp UMI in each Read 1 is extracted and placed in the read name, then 27bp at the 5'-end (corresponding to H1 & H2) is trimmed off Read 1 and Read 2 (see the sketch after this list).
  ./extract_UMI_PE.pl
  • The reads were then mapped by bisReadMapperPE19.pl, and the bam files were used for haplotype analysis. The 400bp probe set had very poor capture, so the following analysis focuses on the data generated with the 150bp probe set (GP1_V4).
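
extract_UMI_PE.pl is not shown here; the following is a minimal sketch of the pre-processing for one pair of FASTQ files. The exact read layout is an assumption: the 6bp UMI is taken from the start of Read 1, and 27bp of arm sequence is then removed from the 5'-end of each read; the real script may differ in details such as where the UMI sits and how the read name is rewritten.

  #!/usr/bin/perl -w
  # Sketch of the read pre-processing described above (assumed layout, not the
  # actual extract_UMI_PE.pl behavior): move the 6bp UMI from the start of Read 1
  # into both read names, then trim 27bp of H1/H2 arm sequence off the 5'-end of
  # each read (quality strings are trimmed the same way).
  use strict;

  my ($r1_in, $r2_in) = @ARGV;
  open(my $R1, "<", $r1_in) or die "$r1_in: $!";
  open(my $R2, "<", $r2_in) or die "$r2_in: $!";
  open(my $O1, ">", "$r1_in.trimmed.fq") or die $!;
  open(my $O2, ">", "$r2_in.trimmed.fq") or die $!;

  while (my $h1 = <$R1>) {
      my ($s1, $p1, $q1) = (scalar <$R1>, scalar <$R1>, scalar <$R1>);
      my ($h2, $s2, $p2, $q2) = (scalar <$R2>, scalar <$R2>, scalar <$R2>, scalar <$R2>);
      chomp($h1, $s1, $q1, $h2, $s2, $q2);
      my $umi = substr($s1, 0, 6);                  # 6bp UMI at the start of Read 1 (assumed)
      for ($h1, $h2) { s/^(\S+)/$1:UMI:$umi/; }     # record the UMI in both read names
      # Drop the UMI plus 27bp of H1 arm from Read 1, and 27bp of H2 arm from Read 2.
      print $O1 "$h1\n", substr($s1, 6 + 27), "\n+\n", substr($q1, 6 + 27), "\n";
      print $O2 "$h2\n", substr($s2, 27), "\n+\n", substr($q2, 27), "\n";
  }
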
Sample          | Raw reads  | Mapped reads | On-target reads | Mapping rate | Specificity | # CpG called
BE2C_V4         | 1,386,590  | 915,834      | 862,623         | 62.21%       | 94.19%      | 31,853
BE2C_V6         | 2,933,904  | 489,918      | 38,994          | 1.33%        | 7.96%       | 28,819
BXPC3_V4        | 1,805,276  | 1,256,196    | 1,232,955       | 68.30%       | 98.15%      | 36,528
BXPC3_V6        | 2,753,274  | 376,020      | 24,332          | 0.88%        | 6.47%       | 18,885
PANC1_V4        | 1,264,618  | 807,262      | 763,920         | 60.41%       | 94.63%      | 32,712
PANC1_V6        | 2,522,344  | 386,410      | 29,667          | 1.18%        | 7.68%       | 22,154
T98G_V4         | 1,803,782  | 1,125,066    | 1,107,621       | 61.41%       | 98.45%      | 37,906
T98G_V6         | 3,151,530  | 478,980      | 30,172          | 0.96%        | 6.30%       | 27,995
U87MG_V4        | 1,586,144  | 1,047,246    | 1,011,039       | 63.74%       | 96.54%      | 34,562
U87MG_V6        | 2,856,720  | 419,040      | 40,911          | 1.43%        | 9.76%       | 26,848
UCLA-SZ_B1_V4   | 1,427,496  | 774,492      | 713,916         | 50.01%       | 92.18%      | 29,093
UCLA-SZ_B1_V6   | 2,173,094  | 374,268      | 21,959          | 1.01%        | 5.87%       | 21,690
UCLA-SZ_D1_V4   | 1,409,960  | 523,588      | 472,779         | 33.53%       | 90.30%      | 26,859
UCLA-SZ_D1_V6   | 1,125,166  | 136,234      | 5,584           | 0.50%        | 4.10%       | 8,815
UCLA-SZ_H11_V4  | 1,574,628  | 643,542      | 583,033         | 37.03%       | 90.60%      | 29,864
UCLA-SZ_H11_V6  | 2,165,268  | 268,476      | 15,822          | 0.73%        | 5.89%       | 16,206

Obtaining haplotypes

  • I wrote the bam2hapInfo.pl script to extract all haplotypes within the targeted regions from a pair of bam files (a sketch of the extraction logic appears after the table below).
  ../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt BE2C_V4_R1_UMI_001.fastq.sorted.fwd.bam BE2C_V4_R1_UMI_001.fastq.sorted.rev.bam >BE2C_V4_hapInfo.txt
  ../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt PANC1_V4_R1_UMI_001.fastq.sorted.fwd.bam PANC1_V4_R1_UMI_001.fastq.sorted.rev.bam >PANC1_V4_hapInfo.txt
  ../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt T98G_V4_R1_UMI_001.fastq.sorted.fwd.bam T98G_V4_R1_UMI_001.fastq.sorted.rev.bam >T98G_V4_hapInfo.txt
  ../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt U87MG_V4_R1_UMI_001.fastq.sorted.fwd.bam U87MG_V4_R1_UMI_001.fastq.sorted.rev.bam >U87MG_V4_hapInfo.txt
  ../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt UCLA-SZ_B1_V4_R1_UMI_001.fastq.sorted.fwd.bam UCLA-SZ_B1_V4_R1_UMI_001.fastq.sorted.rev.bam >UCLA-SZ_B1_V4_hapInfo.txt
  ../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt UCLA-SZ_D1_V4_R1_UMI_001.fastq.sorted.fwd.bam UCLA-SZ_D1_V4_R1_UMI_001.fastq.sorted.rev.bam >UCLA-SZ_D1_V4_hapInfo.txt
  ../bam2hapInfo.pl /home/kunzhang/CpgMIP/MONOD/GBM_PC_CRC_V4_capture_regions_hg19.txt UCLA-SZ_H11_V4_R1_UMI_001.fastq.sorted.fwd.bam UCLA-SZ_H11_V4_R1_UMI_001.fastq.sorted.rev.bam >UCLA-SZ_H11_V4_hapInfo.txt
Sample          | # on-target reads | # unique reads | Clonal rate
BE2C_V4         | 862,623           | 442,154        | 48.7%
BXPC3_V4        | 1,232,955         | 709,284        | 42.5%
PANC1_V4        | 763,920           | 268,728        | 64.8%
T98G_V4         | 1,107,621         | 305,041        | 72.5%
U87MG_V4        | 1,011,039         | 333,677        | 67.0%
UCLA-SZ_B1_V4   | 713,916           | 396,377        | 44.5%
UCLA-SZ_D1_V4   | 472,779           | 86,148         | 81.8%
UCLA-SZ_H11_V4  | 583,033           | 186,801        | 68.0%
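
bam2hapInfo.pl is not reproduced here; this is a rough sketch of the core idea (one methylation haplotype per read over the CpG positions inside each target region). Several things are assumptions for illustration only: the region file is taken to list chr, start, end and a comma-separated list of CpG positions per line, the bam is coordinate-sorted and indexed for samtools view, reads are treated as bisulfite top-strand with no indels (CIGAR ignored), and UMI de-duplication and read-pair merging are omitted.

  #!/usr/bin/perl -w
  # Rough sketch of per-read methylation haplotype extraction from an aligned
  # bisulfite bam file (see the assumptions stated above).
  use strict;

  my ($region_file, $bam) = @ARGV;
  open(my $REG, "<", $region_file) or die "$region_file: $!";
  while (my $line = <$REG>) {
      chomp $line;
      my ($chr, $start, $end, $cpg_list) = split /\t/, $line;
      my @cpg_pos = split /,/, $cpg_list;
      my %hap_counts;
      open(my $SAM, "-|", "samtools view $bam $chr:$start-$end") or die "samtools: $!";
      while (my $read = <$SAM>) {
          my @f = split /\t/, $read;
          my ($pos, $seq) = ($f[3], $f[9]);
          my $hap = "";
          foreach my $cg (@cpg_pos) {
              my $offset = $cg - $pos;
              if ($offset < 0 || $offset >= length($seq)) { $hap .= "N"; next; }
              my $base = substr($seq, $offset, 1);
              # On the bisulfite top strand: C = methylated, T = unmethylated.
              $hap .= $base eq "C" ? "C" : $base eq "T" ? "T" : "N";
          }
          $hap_counts{$hap}++ if $hap =~ /[CT]/;
      }
      close($SAM);
      print join("\t", "$chr:$start-$end", $_, $hap_counts{$_}), "\n" for sort keys %hap_counts;
  }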

Capture efficiency & bias

 ./get_probeEfficiency_matrix.pl > 131227_MiSeq_V4_probeEfficiency_matrix_UMI_20Mar14.txt

Some probes seemed to have very different capture efficiencies between cancer and whole blood. This could be because two probe versions were synthesized whenever H1/H2 contained CpG sites, and the "C" version may have a higher annealing efficiency. I plotted the T-statistic (cancer versus whole blood) against the number of CpG sites within the capture arms H1/H2, or within the insert. There was a positive correlation with the number of CpGs in H1/H2, but it only explains ~10% of the variability; another major factor remains unknown.

  File:MONOD V1 Capture bias vs CG H1H2.png
  File:MONOD V1 Capture bias vs CG insert.png
  • I decided to focus the analysis on a subset of "stable probes" that have similar capture efficiency between cancer and whole blood (see the sketch below).
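
The probe-efficiency matrix produced above could be used to pick such "stable probes"; here is a minimal sketch that computes a Welch-style t-statistic per probe between the cancer and whole-blood libraries and keeps probes with a small |t|. The matrix layout (probe ID in column 1, cancer libraries first, blood libraries after) and the |t| < 1 cutoff are assumptions for illustration, not the actual selection procedure.

  #!/usr/bin/perl -w
  # Sketch of "stable probe" selection from a probe-efficiency matrix:
  # per probe, compute a Welch-style t-statistic between cancer and blood
  # libraries and keep probes whose |t| is below a cutoff.
  use strict;

  my $n_cancer = 5;   # assumed number of cancer columns in the matrix
  my $t_cutoff = 1;

  sub mean_var {
      my @x = @_;
      my $n = scalar @x;
      my $m = 0; $m += $_ for @x; $m /= $n;
      return ($m, 0, $n) if $n < 2;
      my $v = 0; $v += ($_ - $m) ** 2 for @x; $v /= ($n - 1);
      return ($m, $v, $n);
  }

  my $header = <STDIN>;
  while (my $line = <STDIN>) {
      chomp $line;
      my ($probe, @eff) = split /\t/, $line;
      my ($m1, $v1, $n1) = mean_var(@eff[0 .. $n_cancer - 1]);
      my ($m2, $v2, $n2) = mean_var(@eff[$n_cancer .. $#eff]);
      my $se = sqrt($v1 / $n1 + $v2 / $n2);
      next unless $se > 0;
      my $t = ($m1 - $m2) / $se;
      print "$probe\t$t\n" if abs($t) < $t_cutoff;   # keep only the "stable" probes
  }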

Haplotype-based deconvolution

  • One key concept of MONOD is to identify tissue-specific (i.e. tumor versus blood) haplotypes and use them to de-convolute the test samples. We are looking for two types of answers: (i) whether a test sample contains a low level of DNA from one tissue type (tumor) among the majority of DNA from another tissue type (blood); these results should be reported as p-values. (ii) If the answer to (i) is yes, what is the level of tumor DNA? Here the results are point estimates with confidence intervals.
  • Before getting into de-convolution, we need to define how to do the comparison at the haplotype level. Here are a number of considerations:
    • The raw haplotypes might have missing alleles due to sequencing quality or other technical reasons, so we might need to remove CpG sites that have too many missing alleles. We might also need to trim in the other direction, because a low-quality sequencing read can leave multiple missing alleles within the same haplotype. Finally, after trimming CpG sites, some haplotypes might be too short to be useful, so haplotypes below a certain length limit should be removed. I chose a minimum of 5 CpG sites per haplotype (tried 3, 4, 5, 6 and 7; 5 was the best). For the trimming, I first remove any read/haplotype with fewer than 5 good alleles, then remove all CpG sites with more than 10% missing alleles; after this second trimming, any haplotype with fewer than 5 CpG sites is removed. (A sketch of this filtering appears after this list.)
    • Each "true" haplotype might have different variants due to sequencing errors. In addition, several haplotypes that are different on one or two CpG sites might be in the same epigenetic state. So before comparing haplotypes, we want to collapse similar haplotypes into clusters, and perform the rest of analyses at the cluster level. In deciding how to identify clusters, we first need to come up with a similarity measurement, and a way to decide whether two haplotypes should be in the same clusters or not. I define an overall "error rate" at 0.04, which include sequencing errors, incomplete bisulfite conversion. In addition, for the CpG site that have an intermediate level of methylation at the population level, each individual molecule has certain probability to be methylated or unmethylated. I simply set this probability based on the average methylation level. With that, we can find out the number of allelic difference (Hemming distance) between any two haplotypes, and ask whether it can be explained by the error and the probability of acquiring methylation at the single molecule level. I used a p-value cutoff of 0.01 to assign two haplotypes into two different clusters. To systematically group raw haplotypes into clusters, I first ranked all haplotypes for a target region based on their abundance. Then I started from the most abundant haplotype, called it as a dominant haplotype of a cluster. This dominant haplotype was then used to recruit similar haplotypes in the list, based on the statistical cutoff mentioned about. All similar haplotypes were extracted from the list to form one haplotype cluster. The same procedure is repeated on the remaining haplotypes in the list, until all haplotypes are assigned to one of the clusters. Each cluster has at least one haplotype. If the cluster size is two or more, the most abundant one is called the dominant haplotype.
    • In comparing two haplotype clusters, the distance between the clusters is the Hamming distance between their dominant haplotypes (I tried directly comparing individual haplotypes within the clusters, but it seemed to make things worse). To test whether two clusters are similar, I calculated a z-score based on the "error rate" mentioned above; a z-score cutoff of 3.1 was used (equivalent to 99.9% confidence).
  • To test the idea of haplotype-based deconvolution, I created synthetic mixtures by mixing sequencing reads from cancer and whole blood at different ratios. I first found the number of usable reads in each bam file and calculated the down-sampling rates for making synthetic data sets at different tumor/blood ratios (131227_MiSeq_down_sample_info.txt), then used a script (make_synthetic_mixtures.pl) to create a series of bam files and hapInfo files. (A small example of the rate calculation appears after this list.)
  • I then wrote another script (search_informative_probes.pl) to identify a subset of probes that are informative for distinguishing tumor DNA from whole blood. I ran this script on data from all five tumor cell lines and combined all the informative probes. I also tried to further trim down the list based on consistent capture of both tumor and blood DNA (T-test value between -1 and 1), but didn't find any improvement.
  • Here is a key script that I wrote for deconvolution. It takes three hapInfo files: one for pure blood, a second for the cancer reference, and a third for the mixed DNA. An optional fourth argument is a file containing a list of probes to focus on.
  • TO BE CONTINUED.
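
Referring to the haplotype trimming rules above, here is a minimal sketch. The input format is an assumption (one haplotype string per line, C/T = called allele, N = missing, all from the same target region so the strings have equal length); the real hapInfo files are richer than this.

  #!/usr/bin/perl -w
  # Sketch of the haplotype trimming rules described above:
  #   1) drop haplotypes with fewer than 5 good (non-missing) alleles;
  #   2) drop CpG positions with >10% missing alleles across the remaining haplotypes;
  #   3) drop haplotypes left with fewer than 5 CpG sites.
  use strict;

  my $min_sites        = 5;
  my $max_missing_frac = 0.10;

  my @haps = grep { ($_ =~ tr/CT//) >= $min_sites } map { chomp; $_ } <STDIN>;   # rule 1
  exit unless @haps;

  # Rule 2: keep only CpG positions (columns) with an acceptable fraction of missing alleles.
  my @keep_cols;
  for my $i (0 .. length($haps[0]) - 1) {
      my $missing = grep { substr($_, $i, 1) eq "N" } @haps;
      push @keep_cols, $i if $missing / scalar(@haps) <= $max_missing_frac;
  }

  # Rule 3: rebuild haplotypes on the kept columns and drop those that are now too short.
  foreach my $hap (@haps) {
      my $trimmed = join "", map { substr($hap, $_, 1) } @keep_cols;
      print "$trimmed\n" if ($trimmed =~ tr/CT//) >= $min_sites;
  }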
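
And here is a sketch of the greedy, abundance-ranked clustering described above. Two simplifications relative to the description: the per-site mismatch probability is a flat 0.04 error rate (the methylation-level-dependent term is omitted), and a plain binomial tail is used for the p-value. Haplotypes within a region are assumed to have equal length, and the input is assumed to be a two-column haplotype/count file. The cluster-versus-cluster comparison (Hamming distance of the dominant haplotypes, z-score cutoff of 3.1) is not shown.

  #!/usr/bin/perl -w
  # Sketch of the greedy clustering: rank haplotypes by abundance, take the most
  # abundant one as the dominant haplotype of a new cluster, recruit all haplotypes
  # whose Hamming distance to it is consistent with the per-site error rate
  # (binomial tail p-value >= 0.01), and repeat on the remainder.
  use strict;

  my $err      = 0.04;
  my $p_cutoff = 0.01;

  sub hamming {                               # number of mismatching called alleles
      my ($a, $b) = @_;
      my $d = 0;
      for my $i (0 .. length($a) - 1) {
          my ($x, $y) = (substr($a, $i, 1), substr($b, $i, 1));
          $d++ if $x ne "N" && $y ne "N" && $x ne $y;
      }
      return $d;
  }

  sub binom_tail {                            # P(X >= k) for X ~ Binomial(n, p)
      my ($k, $n, $p) = @_;
      my $tail = 0;
      for my $i ($k .. $n) {
          my $logc = 0;
          $logc += log($n - $_ + 1) - log($_) for 1 .. $i;
          $tail += exp($logc + $i * log($p) + ($n - $i) * log(1 - $p));
      }
      return $tail;
  }

  my %counts;                                 # haplotype string => abundance
  while (<STDIN>) { chomp; my ($h, $c) = split /\t/; $counts{$h} += $c; }

  my @sorted = sort { $counts{$b} <=> $counts{$a} } keys %counts;
  while (@sorted) {
      my $dominant = shift @sorted;
      my @members = ($dominant);
      my @rest;
      foreach my $h (@sorted) {
          my $d = hamming($dominant, $h);
          my $p = binom_tail($d, length($dominant), $err);
          if ($p >= $p_cutoff) { push @members, $h } else { push @rest, $h }
      }
      @sorted = @rest;
      print join("\t", $dominant, scalar(@members)), "\n";   # dominant haplotype and cluster size
  }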
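
For the synthetic mixtures, here is a small example of how the down-sampling rates could be derived from the usable read counts; the read counts, target library size and tumor fractions below are illustrative only and do not come from 131227_MiSeq_down_sample_info.txt.

  #!/usr/bin/perl -w
  # Sketch: given the usable read counts of a tumor and a blood library, a target
  # library size and a desired tumor fraction, compute the fraction of reads to
  # keep from each bam (e.g. the subsampling fraction passed to "samtools view -s").
  use strict;

  sub down_sample_rates {
      my ($n_tumor, $n_blood, $target_total, $tumor_frac) = @_;
      my $rate_tumor = $tumor_frac * $target_total / $n_tumor;
      my $rate_blood = (1 - $tumor_frac) * $target_total / $n_blood;
      return ($rate_tumor, $rate_blood);
  }

  my ($n_tumor, $n_blood) = (1_000_000, 1_000_000);   # usable reads per library (example values)
  foreach my $frac (0.5, 0.1, 0.01, 0.001) {
      my ($rt, $rb) = down_sample_rates($n_tumor, $n_blood, 500_000, $frac);
      printf "tumor fraction %-6s keep %.4f of tumor reads, %.4f of blood reads\n", $frac, $rt, $rb;
  }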