Permuting `colonies` above scrambles the response variable with respect to the predictors (in this case, the “field” vs. “forest” location).

A picture of the results:

```r
hist(res, col="gray", las=1, main="")
```

Since there aren’t actually that many possible outcomes, we could plot them this way instead of using a histogram:

```r
par(las=1, bty="l")
```

If `x` is a logical vector (such as `res>=obs`), then `mean(x)` first converts `FALSE` values to 0 and `TRUE` values to 1, then computes the mean; this calculates the proportion of the values that are `TRUE`. (It’s equivalent to `sum(x==TRUE)/length(x)`.)

If we want a two-tailed test, we have to decide whether we are doubling the observed value or counting the area in both tails:

```r
2*mean(res>=obs)          # doubling (as suggested by JD)
## 0.0466
mean(abs(res)>=abs(obs))  # count both tails: matches lmPerm
## 0.0374
```

Some alternative recipes for computing the difference in the means: (1) base R with `aggregate()` … either of these could be substituted for the `mean(bdat.)` line in the code above. If you want to be very fancy/tidyverse-ish, check out `purrr::map_dbl(1:n, ~ . %>% pull(colonies))` (`pull()` extracts a single column).

Instead of computing the difference between means, we could use the test statistic from a standard statistical test. Although we’re using the same test statistic, we’re not assuming that the values of the test statistic are \(t\)-distributed, which would require the assumptions we want to avoid.
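The permutation loop these fragments refer to isn’t shown above. Here is a minimal self-contained sketch of the whole procedure, using simulated data in place of the original ant-colony data set (the data values, `nsim`, and the simulated `bdat` are assumptions for illustration; `res` and `obs` follow the naming in the text):

```r
set.seed(101)

## simulated stand-in for the original data:
## colony counts at two site types (made-up values)
bdat <- data.frame(
  place    = rep(c("field", "forest"), each = 10),
  colonies = c(rpois(10, lambda = 6), rpois(10, lambda = 9))
)

## observed test statistic: difference between group means
obs <- mean(bdat$colonies[bdat$place == "forest"]) -
       mean(bdat$colonies[bdat$place == "field"])

## permutation distribution: scramble the response
## relative to the predictor and recompute the statistic
nsim <- 5000
res <- numeric(nsim)
for (i in seq_len(nsim)) {
  perm <- sample(bdat$colonies)   # shuffle colony counts across sites
  res[i] <- mean(perm[bdat$place == "forest"]) -
            mean(perm[bdat$place == "field"])
}

## two-tailed p-values, both conventions from the text
p_double <- 2 * mean(res >= obs)          # double one tail
p_both   <- mean(abs(res) >= abs(obs))    # count both tails

## alternative statistic: the t statistic from t.test();
## the same permutation logic applies, and no t distribution
## is assumed when computing the permutation p-value
tstat <- t.test(colonies ~ place, data = bdat)$statistic
```

The same loop works for any choice of test statistic: only the line that computes `res[i]` (and `obs`) changes.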