Daniel:Notebook/Haplotyping/2014-8-14
Microfluidic Data
Variant Comparison: BacPools (From 8/13/2014)
It took a while to combine the .vcfs, but I now have all of the .vcfs from the BAC pool data merged into a single .vcf (a sketch of the merge step is included after the command below). This merged call set will now be compared against the microfluidic data; ideally, we should see high concordance between the two. To compare, I ran the following VariantEval command in GATK:
java -Xmx2g -jar /home/kunzhang/softwares/GenomeAnalysisTK-latest/GenomeAnalysisTK.jar \
   -T VariantEval \
   -R /media/Ext12T/GenomeDB/HsGenome/resources/human_g1k_v37.fasta \
   -o output.eval.pgp1.bacpools.report \
   --eval:Indx09 Indx09/Indx09.snp.raw.vcf \
   --eval:Indx10 Indx10/Indx10.snp.raw.vcf \
   --eval:Indx11 Indx11/Indx11.snp.raw.vcf \
   --eval:Indx12 Indx12/Indx12.snp.raw.vcf \
   --eval:Indx13 Indx13/Indx13.snp.raw.vcf \
   --eval:Indx14 Indx14/Indx14.snp.raw.vcf \
   --eval:Indx15 Indx15/Indx15.snp.raw.vcf \
   --eval:Indx16 Indx16/Indx16.snp.raw.vcf \
   --eval:Indx17 Indx17/Indx17.snp.raw.vcf \
   --eval:Indx18 Indx18/Indx18.snp.raw.vcf \
   --eval:Indx19 Indx19/Indx19.snp.raw.vcf \
   --eval:Indx20 Indx20/Indx20.snp.raw.vcf \
   --eval:Indx21 Indx21/Indx21.snp.raw.vcf \
   --eval:Indx22 Indx22/Indx22.snp.raw.vcf \
   --eval:Indx23 Indx23/Indx23.snp.raw.vcf \
   --eval:Indx24 Indx24/Indx24.snp.raw.vcf \
   --eval:Indx25 Indx25/Indx25.snp.raw.vcf \
   --eval:Indx26 Indx26/Indx26.snp.raw.vcf \
   --eval:Indx27 Indx27/Indx27.snp.raw.vcf \
   --eval:Indx28 Indx28/Indx28.snp.raw.vcf \
   --eval:Indx29 Indx29/Indx29.snp.raw.vcf \
   --eval:Indx30 Indx30/Indx30.snp.raw.vcf \
   --eval:Indx31 Indx31/Indx31.snp.raw.vcf \
   --eval:Indx32 Indx32/Indx32.snp.raw.vcf \
   --comp pgp1_variants_bac_all.vcf \
   --mergeEvals
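For reference, a merge like the one mentioned above can be done with GATK's CombineVariants walker. This is only a sketch: the per-pool input file names are placeholders, and it may not be exactly how pgp1_variants_bac_all.vcf was produced.

java -Xmx2g -jar /home/kunzhang/softwares/GenomeAnalysisTK-latest/GenomeAnalysisTK.jar \
   -T CombineVariants \
   -R /media/Ext12T/GenomeDB/HsGenome/resources/human_g1k_v37.fasta \
   --variant bacpool_A.snp.raw.vcf \
   --variant bacpool_B.snp.raw.vcf \
   -o pgp1_variants_bac_all.vcf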
Results
There were 43269 total variants evaluated, 14824 (34.3%) of which were novel. Of the 28445 variants also present in the BAC pool call set, 28423 (99.92%) were concordant.
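These numbers come from the CompOverlap section of the VariantEval report. A quick way to pull that section out of the GATKReport (the table name and layout are assumed from the GATK 3 report format and may differ by version):

grep -A 40 "CompOverlap" output.eval.pgp1.bacpools.report | head -n 45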
Variant Comparison: BAC Pools and Complete Genomics
This comparison seems pertinent as a control for the microfluidic data. The microfluidic data had a lot of novel variants (around 33% for both comparison sets), so we'll see how the comparison sets stack up against each other.
java -Xmx2g -jar /home/kunzhang/softwares/GenomeAnalysisTK-latest/GenomeAnalysisTK.jar \
   -T VariantEval \
   -R /media/Ext12T/GenomeDB/HsGenome/resources/human_g1k_v37.fasta \
   -o output.eval.pgp1.cg.bacpools.report \
   --eval:bacpools pgp1_variants_bac_all.vcf \
   --comp pgp1.vcf
Results
There were 3088900 variants evaluated, 411138 (13.31%) of which were novel. Of the 2677762 variants also present in the Complete Genomics call set, 2676669 (99.96%) were concordant.
Variant Comparison: Analysis
It would appear that Eric's microfluidic data has a considerably higher fraction of novel variants (~34%) than the Complete Genomics/BAC pool comparison (~13%). This is probably because the microfluidic variants are unfiltered. As such, I'm going to have to run VQSR on the microfluidic variants.
Variant Recalibration
Running VQSR on the microfluidic data. First I'm just running Index 09, and if that works I'll repeat it for the other chambers. A walkthrough of the arguments:
- The bac resource is flagged known=false and truth=false (i.e., lower confidence in its calls), but it is used as the training set because its .vcf carries the annotations needed for clustering.
- The cgenomics resource is flagged known=true and truth=true, but since its .vcf contains almost no annotation metadata, it is not used for training. My understanding is that the bac training calls are cross-referenced against the cgenomics truth set during model training.
- The -an annotation flags are based on the annotations present in the Indx09 .vcf (a quick way to check which annotations a .vcf carries is shown after this list). FS may not provide much information, as it appeared to be 0 across the data.
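To double-check which annotations a given .vcf actually carries (and therefore which -an flags are usable), the ##INFO lines of the header can be listed; for example, for the Indx09 chamber and the Complete Genomics call set:

grep "^##INFO" Indx09/Indx09.snp.raw.vcf
grep "^##INFO" pgp1.vcf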
java -Xmx2g -jar /home/kunzhang/softwares/GenomeAnalysisTK-latest/GenomeAnalysisTK.jar \
   -T VariantRecalibrator \
   -R /media/Ext12T/GenomeDB/HsGenome/resources/human_g1k_v37.fasta \
   -input Indx09/Indx09.snp.raw.vcf \
   -resource:bac,known=false,training=true,truth=false,prior=12.0 pgp1_variants_bac_all.vcf \
   -resource:cgenomics,known=true,training=false,truth=true,prior=15.0 pgp1.vcf \
   -an MQ -an HaplotypeScore -an QD -an FS \
   -mode SNP \
   -recalFile recal.pgp21.recal \
   -tranchesFile tranches.pgp21.tranches \
   -rscriptFile rscript.pgp21.plots.R
Received an error:
ERROR MESSAGE: Bad input: Error during negative model training. Minimum number of variants to use in training is larger than the whole call set. One can attempt to lower the --minNumBadVariants argument but this is unsafe.
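A possible follow-up, suggested by the error text itself, is to rerun VariantRecalibrator with a lower --minNumBadVariants so the negative-model training threshold fits inside the Indx09 call set. GATK explicitly flags this as unsafe, and the value below is an arbitrary placeholder; recalibrating a larger, merged call set instead may be the safer fix.

java -Xmx2g -jar /home/kunzhang/softwares/GenomeAnalysisTK-latest/GenomeAnalysisTK.jar \
   -T VariantRecalibrator \
   -R /media/Ext12T/GenomeDB/HsGenome/resources/human_g1k_v37.fasta \
   -input Indx09/Indx09.snp.raw.vcf \
   -resource:bac,known=false,training=true,truth=false,prior=12.0 pgp1_variants_bac_all.vcf \
   -resource:cgenomics,known=true,training=false,truth=true,prior=15.0 pgp1.vcf \
   -an MQ -an HaplotypeScore -an QD -an FS \
   -mode SNP \
   --minNumBadVariants 100 \
   -recalFile recal.pgp21.recal \
   -tranchesFile tranches.pgp21.tranches \
   -rscriptFile rscript.pgp21.plots.R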