Data analysis: defining bins based on methylation haplotype blocks[edit]
- I wanted to be more systematic in analyzing the combined data from the RRBS, SeqCap and BSPP experiments.
Summary of data[edit]
- RRBS data
- July 2014 data: the libraries were sequenced in two runs. Dinh combined the data and did the mapping.
- Bam folder: /media/Ext12T/DD_Ext12T/RRBS_MONOD/Bam_Merged/
- Mappable_bin_hapInfo folder: /home/kunzhang/CpgMIP/MONOD/Data/1407-combined_RRBS/mappable_bin_hapInfo
- Mld_block_hapInfo folder: /home/kunzhang/CpgMIP/MONOD/Data/1407-combined_RRBS/mld_blocks_stringent_hapInfo
- Sept 2014 data: this batch of libraries was generated with the dRRBS protocol (MspI & TaqI digestion)
- Bam folder: /media/Ext12T/DD_Ext12T/RRBS_MONOD/140917_dRRBS/BAMfiles
- Mappable_bin_hapInfo folder:
- Mld_block_hapInfo folder: /home/kunzhang/CpgMIP/MONOD/Data/140917_dRRBS/mld_block_stringent_hapInfo
- WGBS-SeqCap data
- Dec 2014 data: Good libraries, poor enrichment (still usable, just lower coverage)
- Bam folder: /media/Ext12T/DD_Ext12T/MONOD/141216_HiSeqRapidRun/BAMfiles
- Mappable_bin_hapInfo folder: /home/kunzhang/CpgMIP/MONOD/Data/141216_HiSeqRapidRunSeqCap/mappable_bin_hapInfo
- Mld_block_hapInfo folder: /home/kunzhang/CpgMIP/MONOD/Data/141216_HiSeqRapidRunSeqCap/mld_blocks_stringent
- Feb 2015 data: Good libraries and good enrichment
- Bam folder: /media/Ext12T/DD_Ext12T/MONOD/150209_SN216/SeqCap/BAMfiles/
- Mld_block_hapInfo folder: /home/kunzhang/CpgMIP/MONOD/Data/150209_SN216/SeqCap/mld_blocks_stringent
- WGBS-BSPP data
- Feb 2015 data: same WGBS libraries as the Feb 2015 SeqCap data.
- Bam folder: /media/Ext12T/DD_Ext12T/MONOD/150209_SN216/BSPP/BAMfiles
- Mld_block_hapInfo folder: /home/kunzhang/CpgMIP/MONOD/Data/150209_SN216/BSPP/mld_block_stringent_hapInfo
Define informative bins for haplotype analysis[edit]
- A key step is to define a set of bins (or windows) across the genome for haplotype extraction.
- My previous approaches were somewhat ad hoc, mostly based on read coverage in each group of data sets. I did try a number of different ways to split bigger windows into smaller ones for RRBS and SeqCap data, but none was systematic.
- The idea here is that we want each bin to have some internal methylation haplotype structure: every CpG site should be linked to the other sites in the same bin. In other words, if we use the concept of methylation LD, the pairwise LD for all sites in a bin should be above a certain threshold. Formally, we can partition all CpG sites in the entire genome into methylation LD blocks, and each block becomes a bin for MONOD. I wrote a script hapInfo2mld_blocks.pl for this purpose (a sketch of the underlying r2 computation appears in the stringency section below).
- The next question is what data to use to define the bins. Ideally, to be completely unbiased, we would use WGBS data. We have the N37 WGBS data from ten human tissues, plus the Heyn2013 whole-blood WGBS data, all of which have been mapped. The Roadmap Epigenomics project just released a large number of WGBS data sets, but Dinh hasn't completed the mapping. These are all from non-cancerous tissues; I think we also need to include some cancer data, as the cancer epigenome is highly aberrant and not represented by any normal tissue. So I would include our own primary tumor data generated by RRBS and SeqCap.
- To obtain methylation haplotype blocks with hapInfo2mld_blocks.pl, we need haplotypes in the hapInfo format, which requires some sort of target definition in the first place. A reasonable set of targets to start with is the uniquely mappable and sequenceable regions of the human genome; in other words, the genome is already partitioned into segments by repetitive regions. For this purpose, I simply took the N37 WGBS data, computed the read coverage, and identified regions at least 80bp in size with RD>=10 (batch processing script; a sketch of this filtering step follows the coverage summary below).
Data folder: /home/kunzhang/CpgMIP/MONOD/Data/WGBS_data/N37_WGBS/WGBS_merged_bams_by_chrs
Get summary: cat N37_10_tissue_pooled.autosomes.RD10_80up.genomecov.bed | awk '{sum+=$4} END { print "N = ", NR, "Sum = ", sum, " Average = ",sum/NR}'
N = 1072789 Sum = 2.52332e+09 Average = 2352.11
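As a rough sketch of that filtering step (not the actual batch script; it assumes a sorted BED-like genomecov output with chrom/start/end/depth columns, e.g. from bedtools genomecov -bga):
#!/usr/bin/perl -w
# Sketch: collapse per-interval read-depth records into candidate regions.
# Assumes sorted, contiguous BED-like input (chrom, start, end, depth);
# keeps merged runs of depth >= 10 that span >= 80bp.
use strict;

my ($min_depth, $min_len) = (10, 80);
my ($cur_chr, $cur_start, $cur_end);

while (<>) {
    chomp;
    my ($chr, $start, $end, $depth) = split /\t/;
    if (defined $cur_chr && $chr eq $cur_chr && $start <= $cur_end && $depth >= $min_depth) {
        $cur_end = $end;                 # extend the current high-coverage run
    } else {
        report();                        # close out the previous run, if any
        if ($depth >= $min_depth) {
            ($cur_chr, $cur_start, $cur_end) = ($chr, $start, $end);
        } else {
            undef $cur_chr;
        }
    }
}
report();

sub report {
    return unless defined $cur_chr;
    print join("\t", $cur_chr, $cur_start, $cur_end), "\n"
        if $cur_end - $cur_start >= $min_len;
}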
Chromosome | Total_block_size(bp) | Average_block_size(bp) | Number_of_blocks
chr1 | 7,104,372 | 95 | 74,446
chr2 | 6,151,111 | 94 | 65,456
chr3 | 4,181,003 | 98 | 42,678
chr4 | 3,500,168 | 91 | 38,258
chr5 | 4,015,791 | 95 | 42,447
chr6 | 4,286,579 | 98 | 43,779
chr7 | 4,417,448 | 89 | 49,439
chr8 | 3,779,783 | 93 | 40,809
chr9 | 3,564,824 | 92 | 38,769
chr10 | 4,299,677 | 92 | 46,511
chr11 | 4,115,304 | 95 | 43,243
chr12 | 3,860,323 | 95 | 40,823
chr13 | 2,129,337 | 91 | 23,421
chr14 | 2,638,407 | 95 | 27,662
chr15 | 2,659,525 | 97 | 27,537
chr16 | 3,893,757 | 88 | 44,005
chr17 | 4,351,313 | 92 | 47,328
chr18 | 1,925,501 | 91 | 21,247
chr19 | 4,001,607 | 89 | 44,728
chr20 | 2,631,521 | 93 | 28,358
chr21 | 1,279,904 | 87 | 14,638
chr22 | 2,187,548 | 87 | 25,080
Total | 80,974,803 | 93 | 870,662
- Identify the subsets of blocks overlapping the SeqCap targets (100,829 blocks) and the RRBS targets (128,492 blocks):
/home/kunzhang/softwares/bedtools-2.17.0/bin/bedtools intersect -wa -a N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks.bed -b ~/CpgMIP/MONOD/Data/141112_HiSeqRapidRun/OID42096_hg19_UMR_v1_capture_targets.bed > N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks_seqCap_subset.bed
/home/kunzhang/softwares/bedtools-2.17.0/bin/bedtools intersect -wa -a N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks.bed -b /home/kunzhang/CpgMIP/Data/MONOD/1407-combined_RRBS/latest_organized_data/target_files/Primary_tumor_ALL.genomecov.RD50_80UP.merged.bed > N37_WGBS_tumor_seqCap_RRBS_tumor_NC_all_chrs_RD10_80up.mld_blocks_RRBS_subset.bed
Increase the stringency for defining informative bins[edit]
- When manually inspecting the mld_blocks at the promoter regions of a few cancer genes, such as SEPT9 and SDC2, I found that mld_blocks can be quite large, containing ~100 CpG sites. Such large blocks make it difficult to compare haplotypes, because different haplotypes within a block might not overlap at all.
- I tested several r2 cutoff values (0.3, 0.5 and 0.7) on chr4 and manually inspected the resulting partitioning of the SDC2 promoter; 0.5 appeared to be the most appropriate cutoff.
- Therefore, I repeated the mld_block partitioning with r2>=0.5 (batch processing script). I ended up with close to 300k blocks covering ~1% of the genome. Compared with the r2>=0.1 blocks, the average block size is similar (85 vs. 93 bp), while the total number of blocks dropped to about a third (292,859 vs. 870,662). A sketch of the pairwise r2 computation behind this partitioning follows the table below.
Chromosome | Total_block_size(bp) | Average_block_size(bp) | Number_of_blocks
chr1 | 2,331,103 | 87 | 26,650
chr2 | 1,889,469 | 85 | 22,164
chr3 | 1,442,988 | 90 | 15,995
chr4 | 1,095,979 | 84 | 12,973
chr5 | 1,265,930 | 87 | 14,607
chr6 | 1,470,508 | 91 | 16,232
chr7 | 1,307,725 | 81 | 16,050
chr8 | 1,107,303 | 85 | 13,085
chr9 | 1,083,570 | 85 | 12,723
chr10 | 1,248,282 | 84 | 14,900
chr11 | 1,299,293 | 87 | 14,851
chr12 | 1,263,831 | 89 | 14,183
chr13 | 623,564 | 83 | 7,533
chr14 | 851,644 | 87 | 9,754
chr15 | 848,814 | 86 | 9,893
chr16 | 1,070,257 | 83 | 12,836
chr17 | 1,350,521 | 83 | 16,342
chr18 | 538,930 | 80 | 6,719
chr19 | 1,319,181 | 85 | 15,527
chr20 | 708,615 | 84 | 8,481
chr21 | 326,002 | 77 | 4,238
chr22 | 575,079 | 81 | 7,123
Total | 25,018,588 | 85 | 292,859
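For reference, here is a minimal sketch of the pairwise methylation-LD computation this partitioning relies on (my reading of the approach, not the actual hapInfo2mld_blocks.pl code; encoding haplotypes as strings over C/T, with C = methylated, is an assumption):
#!/usr/bin/perl -w
# Sketch: pairwise methylation LD (r^2) between two CpG sites, given reads
# that cover both. Haplotypes are assumed to be strings over {C,T}, where
# C = methylated and T = unmethylated at each CpG position.
use strict;

# r^2 between CpG positions $i and $j across an array of haplotype strings.
sub meth_r2 {
    my ($i, $j, @haps) = @_;
    my ($n, $pA, $pB, $pAB) = (0, 0, 0, 0);
    foreach my $h (@haps) {
        my ($a, $b) = (substr($h, $i, 1), substr($h, $j, 1));
        next unless $a =~ /[CT]/ && $b =~ /[CT]/;    # skip uncalled sites
        $n++;
        $pA  += ($a eq 'C');
        $pB  += ($b eq 'C');
        $pAB += ($a eq 'C' && $b eq 'C');
    }
    return 0 unless $n;
    ($pA, $pB, $pAB) = ($pA / $n, $pB / $n, $pAB / $n);
    my $denom = $pA * (1 - $pA) * $pB * (1 - $pB);
    return 0 unless $denom > 0;                      # monomorphic site
    my $D = $pAB - $pA * $pB;                        # disequilibrium
    return $D * $D / $denom;                         # r^2
}

# Example: cut a block wherever adjacent sites have r^2 below the cutoff.
my @haps   = qw(CCC CCT TTT TTC CCC TTT);
my $cutoff = 0.5;
for my $i (0 .. 1) {
    my $r2 = meth_r2($i, $i + 1, @haps);
    printf "sites %d-%d: r^2 = %.2f%s\n", $i, $i + 1, $r2,
           $r2 < $cutoff ? "  -> block boundary" : "";
}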
Calculate methylated haplotype loads (MHL) and construct matrices[edit]
- I wrote a script, get_methHapLoad_matrix.pl, to calculate the "methylated haplotype load (MHL)" for all targets in a hapInfo file and to construct MHL matrices for all files in a folder (a sketch of the MHL calculation follows the matrix list below).
- Seven MHL matrices were generated:
- 1407-combined_RRBS_mld_blocks_stringent_mhl_matrix.txt: All RRBS data sets generated in July 2014
- 140917_dRRBS_mld_blocks_stringent_mhl_matrix.txt: Additional dRRBS data sets generated in Sept 2014. Noi used two enzymes (MspI+TaqI) for the digestion.
- 141216_SeqCap_mld_blocks_stringent_mhl_matrix.txt: NimbleGen SeqCap data on WGBS libraries. This data set has a low enrichment factor due to sub-optimal capture conditions, but the WGBS libraries were good, so the data are still useful.
- 150209_SeqCap_mld_blocks_stringent_mhl_matrix.txt: NimbleGen SeqCap data on WGBS libraries, generated in Feb 2015. Both WGBS libraries and capture experiments were good.
- 150209_BSPP_mld_blocks_stringent_mhl_matrix.txt: BSPP capture data on WGBS libraries, generated in Feb 2015.
- N37_10_tissue_pool_mhl_matrix.txt: Low-coverage WGBS data from ten human tissues.
- N37_10_tissue_pool_WB_WGBS_mld_blocks_stringent_mhl_matrix.txt: Low-coverage WGBS data from ten human tissues, plus whole blood WGBS data.
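For reference, a minimal sketch of how MHL can be computed for one target (my reading of the metric, not the actual get_methHapLoad_matrix.pl code): over all sub-haplotype lengths i = 1..L, take the fraction of fully methylated substrings of length i, and average these fractions with weight w_i = i, so that longer fully methylated stretches contribute more.
#!/usr/bin/perl -w
# Sketch: methylated haplotype load (MHL) for one target region.
# Haplotypes are strings over {C,T} (C = methylated); each comes with a
# read count. MHL = sum_i( i * frac_fully_methylated_substrings_of_len_i )
#                 / sum_i( i ), for i = 1 .. longest haplotype length.
use strict;

sub mhl {
    my %hap_counts = @_;                 # haplotype string => read count
    my $max_len = 0;
    for my $h (keys %hap_counts) {
        $max_len = length $h if length($h) > $max_len;
    }
    my ($num, $denom) = (0, 0);
    for my $i (1 .. $max_len) {
        my ($meth, $total) = (0, 0);
        while (my ($hap, $count) = each %hap_counts) {
            for my $pos (0 .. length($hap) - $i) {   # all substrings of length $i
                my $sub = substr($hap, $pos, $i);
                $total += $count;
                $meth  += $count if $sub !~ /T/;     # fully methylated
            }
        }
        next unless $total;
        $num   += $i * ($meth / $total);             # weight w_i = i
        $denom += $i;
    }
    return $denom ? $num / $denom : 0;
}

# Example: 3 fully methylated reads, 2 unmethylated, 1 mixed.
printf "MHL = %.3f\n", mhl('CCC' => 3, 'TTT' => 2, 'CTC' => 1);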
- Merge these matrices into two bigger matrices (a sketch of the merging logic follows the commands):
../merge_mhl_matrix.pl N37_10_tissue_pool_WB_WGBS_mld_blocks_stringent_mhl_matrix.txt 141216_SeqCap_mld_blocks_stringent_mhl_matrix.txt 150209_SeqCap_mld_blocks_stringent_mhl_matrix.txt 150209_BSPP_mld_blocks_stringent_mhl_matrix.txt > N37_WB_WGBS_SeqCap_BSPP_merged__mld_blocks_stringent_mhl_matrix.txt
../merge_mhl_matrix.pl N37_10_tissue_pool_WB_WGBS_mld_blocks_stringent_mhl_matrix.txt 1407-combined_RRBS_mld_blocks_stringent_mhl_matrix.txt 140917_dRRBS_mld_blocks_stringent_mhl_matrix.txt > N37_WB_WGBS_RRBS_dRRBS_merged__mld_blocks_stringent_mhl_matrix.txt
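For reference, a minimal sketch of the kind of merge this performs (an assumption about merge_mhl_matrix.pl, not its actual code): each input matrix is tab-delimited with a header row of sample IDs and one row per mld_block; the merged matrix takes the union of blocks and concatenates the sample columns, filling NA where a block is absent from an input.
#!/usr/bin/perl -w
# Sketch: merge several MHL matrices (tab-delimited; first column = region ID,
# remaining columns = one MHL value per sample) into one wide matrix.
# Regions missing from a matrix get NA in that matrix's sample columns.
# The 'region_id' header label below is arbitrary.
use strict;

my (@samples, %rows, @widths);
foreach my $file (@ARGV) {
    open my $fh, '<', $file or die "$file: $!";
    chomp(my $header = <$fh>);
    my @cols = split /\t/, $header;
    shift @cols;                          # drop the region-ID header cell
    push @samples, @cols;
    push @widths,  scalar @cols;
    my $k = $#widths;                     # index of this matrix
    while (<$fh>) {
        chomp;
        my ($region, @vals) = split /\t/;
        $rows{$region}[$k] = \@vals;
    }
    close $fh;
}

print join("\t", 'region_id', @samples), "\n";
foreach my $region (sort keys %rows) {
    my @out;
    for my $k (0 .. $#widths) {
        my $vals = $rows{$region}[$k];
        push @out, $vals ? @$vals : ('NA') x $widths[$k];
    }
    print join("\t", $region, @out), "\n";
}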