Kun:LabNotes/MONOD/2015-3-11

Data analysis: defining bins based on methylation haplotype blocks

  • I wanted to be more systematic in analyzing the combined data from RRBS, SeqCap and BSPP experiments.

Define informative bins for haplotype analysis

  • A key is to define a set of bins (or windows) across the genome for haplotype extraction.
    • My previous approaches were somewhat ad hoc, mostly based on read coverage in each group of data sets. I did try a number of different ways to split bigger windows into smaller ones for RRBS and SeqCap data, but none was systematic.
    • The idea here is that we want to define a bin with some internal methylation haplotype structure. In other words, every CpG site should be linked to the other sites in the same bin. So if we use the concept of methylation LD, the pairwise LD for all sites in a bin should be above a certain threshold. Formally, we can partition all CpG sites in the entire genome into methylation LD blocks. Each block would be a bin for MONOD. I wrote a script hapInfo2mld_blocks.pl for this purpose (a minimal sketch of the idea appears after the bedtools commands at the end of this section).
    • The next question is what data we should use to define the bins. Ideally, to be completely unbiased, we would use WGBS data. We have the N37 WGBS data from ten human tissues, plus the Heyn2013 whole-blood WGBS data; all of these have been mapped. The Epigenomics Roadmap project just released a large number of WGBS data sets, but Dinh hasn't completed the mapping. These are all from non-cancerous tissues. I think we need to include some cancer data, as the cancer epigenome is highly screwed up and not represented by any normal tissue. So I would include our own primary tumor data generated by RRBS and SeqCap.
    • To obtain methylation haplotype blocks with hapInfo2mld_blocks.pl, we need haplotypes in the hapInfo format, which requires some sort of target definition in the first place. A reasonable set of targets to start with could be the uniquely mappable and sequenceable regions in the human genome; in other words, the genome is already partitioned into segments by the repetitive regions. For this purpose, I simply took the N37 WGBS data, reported the read coverage, and identified regions of at least 80 bp in size with RD>=10 (Batch processing script).
    • Partitioning methylation haplotypes and generating summary statistics (Batch processing script).
Chromosome Total_block_size(bp) Average_block_size(bp) Number_of_blocks
chr1 7,104,372 95 74,446
chr2 6,151,111 94 65,456
chr3 4,181,003 98 42,678
chr4 3,500,168 91 38,258
chr5 4,015,791 95 42,447
chr6 4,286,579 98 43,779
chr7 4,417,448 89 49,439
chr8 3,779,783 93 40,809
chr9 3,564,824 92 38,769
chr10 4,299,677 92 46,511
chr11 4,115,304 95 43,243
chr12 3,860,323 95 40,823
chr13 2,129,337 91 23,421
chr14 2,638,407 95 27,662
chr15 2,659,525 97 27,537
chr16 3,893,757 88 44,005
chr17 4,351,313 92 47,328
chr18 1,925,501 91 21,247
chr19 4,001,607 89 44,728
chr20 2,631,521 93 28,358
chr21 1,279,904 87 14,638
chr22 2,187,548 87 25,080
Total 80,974,803 2,038 870,662
    • Identifying subsets of blocks overlapping with SeqCap (100,829) and RRBS (128,492) targets.
  /home/kunzhang/softwares/bedtools-2.17.0/bin/bedtools intersect -wa -a N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks.bed -b ~/CpgMIP/MONOD/Data/141112_HiSeqRapidRun/OID42096_hg19_UMR_v1_capture_targets.bed > N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks_seqCap_subset.bed
  /home/kunzhang/softwares/bedtools-2.17.0/bin/bedtools intersect -wa -a N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks.bed -b /home/kunzhang/CpgMIP/Data/MONOD/1407-combined_RRBS/latest_organized_data/target_files/Primary_tumor_ALL.genomecov.RD50_80UP.merged.bed > N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks_RRBS_subset.bed
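
To make the partitioning criterion concrete, here is a minimal sketch of the greedy idea, assuming pairwise r2 is computed from binary methylation calls of reads covering both sites. This is not the actual hapInfo2mld_blocks.pl; the r2() and partition_blocks() helpers are hypothetical names.

  #!/usr/bin/perl
  # Sketch only: extend the current block while every new CpG site keeps
  # pairwise r2 >= cutoff with all sites already in the block; otherwise
  # close the block and start a new one.
  use strict;
  use warnings;

  my $R2_CUTOFF = 0.5;    # 0.1 in the first pass, 0.5 in the stringent pass

  # Squared Pearson correlation between two CpG sites, each an array ref of
  # binary methylation calls (1 = methylated) from reads covering both sites.
  sub r2 {
      my ($a, $b) = @_;
      my $n = scalar @$a;
      return 0 if $n < 2;
      my ($sa, $sb, $sab, $saa, $sbb) = (0) x 5;
      for my $i (0 .. $n - 1) {
          $sa  += $a->[$i];
          $sb  += $b->[$i];
          $sab += $a->[$i] * $b->[$i];
          $saa += $a->[$i]**2;
          $sbb += $b->[$i]**2;
      }
      my $cov = $sab / $n - ($sa / $n) * ($sb / $n);
      my $va  = $saa / $n - ($sa / $n)**2;
      my $vb  = $sbb / $n - ($sb / $n)**2;
      return 0 unless $va > 0 && $vb > 0;
      return $cov**2 / ($va * $vb);
  }

  # $sites: array ref with one entry per CpG position, each an array ref of
  # calls; identical read order across sites is assumed in this toy example.
  sub partition_blocks {
      my ($sites) = @_;
      my (@blocks, @current);
      for my $j (0 .. $#$sites) {
          my $linked = 1;
          for my $k (@current) {
              if (r2($sites->[$k], $sites->[$j]) < $R2_CUTOFF) {
                  $linked = 0;
                  last;
              }
          }
          if ($linked) { push @current, $j }
          else         { push @blocks, [@current]; @current = ($j) }
      }
      push @blocks, [@current] if @current;
      return \@blocks;    # each block is a list of CpG site indices
  }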

Extract haplotypes into hapInfo files based on the informative bins.
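
For reference, the sketch below shows how a hapInfo record might be parsed downstream. The four-column layout (target bin, C/T haplotype string, read count, CpG positions) is my assumption about the format, not a spec.

  #!/usr/bin/perl
  # Illustrative hapInfo reader; the tab-delimited column layout is assumed:
  # target bin, haplotype string (C = methylated CpG, T = unmethylated CpG),
  # read count, and comma-separated CpG coordinates.
  use strict;
  use warnings;

  my %haps;    # target bin -> [ [hapString, count], ... ]
  while (my $line = <STDIN>) {
      chomp $line;
      my ($target, $hap, $count, $positions) = split /\t/, $line;
      next unless defined $count && $hap =~ /^[CT]+$/;
      push @{ $haps{$target} }, [$hap, $count];
  }
  printf "%s: %d distinct haplotypes\n", $_, scalar @{ $haps{$_} }
      for sort keys %haps;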

Increase the stringency for defining informative bins.

  • When manually inspecting the mld_blocks for the promoter regions of a few cancer genes, such as SEPT9 and SDC2, I found that mld_blocks can be quite large, containing ~100 CpG sites. Having such large blocks makes it difficult to compare haplotypes, because different haplotypes in a block might not overlap at all.
  • I tested several cutoff values for r2 (0.3, 0.5 and 0.7) on chr4 and manually inspected the partitioning of the SDC2 promoter. I found that 0.5 is probably the most appropriate cutoff.
  • Therefore, I used r2>=0.5 to repeat the partitioning of mld_blocks (Batch processing script). I ended up with close to 300k blocks covering ~1% of the genome. Compared with the blocks obtained with r2>=0.1, the average block size is similar, but the total number of blocks was reduced to ~30% of the original count.
Chromosome Total_block_size(bp) Average_block_size(bp) Number_of_blocks
chr1 2,331,103 87 26,650
chr2 1,889,469 85 22,164
chr3 1,442,988 90 15,995
chr4 1,095,979 84 12,973
chr5 1,265,930 87 14,607
chr6 1,470,508 91 16,232
chr7 1,307,725 81 16,050
chr8 1,107,303 85 13,085
chr9 1,083,570 85 12,723
chr10 1,248,282 84 14,900
chr11 1,299,293 87 14,851
chr12 1,263,831 89 14,183
chr13 623,564 83 7,533
chr14 851,644 87 9,754
chr15 848,814 86 9,893
chr16 1,070,257 83 12,836
chr17 1,350,521 83 16,342
chr18 538,930 80 6,719
chr19 1,319,181 85 15,527
chr20 708,615 84 8,481
chr21 326,002 77 4,238
chr22 575,079 81 7,123
Total 25,018,588 292,859

Calculate MHL values and construct matrices.

  • I wrote a script to calculate the "methylated haplotype load (MHL)" for all targets in a hapInfo file, and to construct MHL matrices for all files in a folder: get_methHapLoad_matrix.pl (a toy sketch of the idea follows this list).
  • Six MHL matrices were generated:
    • 1407-combined_RRBS_mld_blocks_stringent_mhl_matrix.txt: All RRBS data sets generated in July 2014
    • 140917_dRRBS_mld_blocks_stringent_mhl_matrix.txt: Additional dRRBS data sets generated in Sept 2014. Noi used two enzymes (MspI+TaqI) for the digestion.
    • 141216_SeqCap_mld_blocks_stringent_mhl_matrix.txt: NimbleGen SeqCap data on WGBS libraries; this data set has a low enrichment factor due to sub-optimal capture conditions. The WGBS libraries were good, so the data are still useful.
    • 150209_SeqCap_mld_blocks_stringent_mhl_matrix.txt: NimbleGen SeqCap data on WGBS libraries, generated in Feb 2015. Both WGBS libraries and capture experiments were good.
    • 150209_BSPP_mld_blocks_stringent_mhl_matrix.txt: BSPP capture data on WGBS libraries, generated in Feb 2015.
    • N37_10_tissue_pool_mhl_matrix.txt: Low-coverage WGBS data from ten human tissues.
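
To make the MHL idea concrete, here is a toy sketch of one plausible definition: a length-weighted average, over substring lengths i = 1..L, of the fraction of fully methylated sub-haplotypes of length i. The actual definition lives in get_methHapLoad_matrix.pl; the mhl() helper below is illustrative only.

  #!/usr/bin/perl
  # Toy MHL: for each substring length i, compute the read-count-weighted
  # fraction of fully methylated substrings of length i, then average these
  # fractions with weight w_i = i. Sketch only, not the production script.
  use strict;
  use warnings;

  # @haps: [haplotype string, read count] pairs for one target bin,
  # with C = methylated CpG and T = unmethylated CpG.
  sub mhl {
      my (@haps) = @_;
      my ($max_len) = sort { $b <=> $a } map { length $_->[0] } @haps;
      return 0 unless $max_len;
      my ($num, $den) = (0, 0);
      for my $i (1 .. $max_len) {
          my ($meth, $total) = (0, 0);
          for my $h (@haps) {
              my ($hap, $count) = @$h;
              next if length($hap) < $i;
              for my $s (0 .. length($hap) - $i) {
                  $total += $count;
                  $meth  += $count if substr($hap, $s, $i) =~ /^C+$/;
              }
          }
          next unless $total;
          $num += $i * $meth / $total;    # weight w_i = i
          $den += $i;
      }
      return $den ? $num / $den : 0;
  }

  # Toy example: two fully methylated reads and one unmethylated read
  # over three CpG sites; prints MHL = 0.667.
  printf "MHL = %.3f\n", mhl(["CCC", 2], ["TTT", 1]);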