Takes two data frames with model fits computed on two graphs for the same populations and tests whether the scores of one graph are significantly better than those of the other.

compare_fits4(fit1, fit2, f2_blocks, f2_blocks_test, boot = FALSE, seed = NULL)

Arguments

fit1

The fit of the first graph

fit2

The fit of the second graph

f2_blocks

f2 blocks used for fitting fit1 and fit2. Used in combination with f2_blocks_test to compute the f-statistics covariance matrix.

f2_blocks_test

f2 blocks that were not used for fitting fit1 and fit2.

boot

If FALSE (the default), block-jackknife resampling will be used to compute standard errors. Otherwise, block-bootstrap resampling will be used to compute standard errors. If boot is an integer, it specifies the number of bootstrap resamplings. If boot = TRUE, the number of bootstrap resamplings will be equal to the number of SNP blocks. If bootstrap resampling is enabled, empirical p-values (p_emp) and 95% confidence intervals (ci_low and ci_high) will be reported (see the second example below).

seed

Random seed used if bootstrap resampling is enabled. It does not need to match the seed used in fitting the models.

Examples

if (FALSE) {
# Split the SNP blocks into a random training half and a held-out test half
nblocks = dim(example_f2_blocks)[3]
train = sample(1:nblocks, round(nblocks/2))
# Fit both graphs on the training blocks only
fit1 = qpgraph(example_f2_blocks[,,train], graph1)
fit2 = qpgraph(example_f2_blocks[,,train], graph2)
# Compare the fits, using the held-out blocks to evaluate the score difference
compare_fits4(fit1, fit2, example_f2_blocks[,,train], example_f2_blocks[,,-train])
}
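
The variation below is only a sketch of a bootstrap-based comparison: it reuses fit1, fit2, and train from the example above, and boot = 100 and seed = 123 are arbitrary illustrative values for the arguments described above. With bootstrap resampling enabled, the output additionally reports p_emp, ci_low, and ci_high.

if (FALSE) {
# Same comparison, but with block-bootstrap standard errors (100 resamplings)
# and a fixed seed so the resampling is reproducible
compare_fits4(fit1, fit2, example_f2_blocks[,,train], example_f2_blocks[,,-train],
              boot = 100, seed = 123)
}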