The correlation between the fraction dispersing and the mean dispersal distance of the dispersers could mean one of two things:
- Simulated rain that makes more seeds disperse also sends them further in all directions
- Simulated rain that makes more seeds disperse is pushing them preferentially in the direction of the receiving runway
This matters because in case 2, the dispersal kernel in the negative direction will, presumably, have its absolute mean reduced.
If case 2 is true, then replicates with a low dispersal fraction and a low mean dispersal distance should have “lost” proportionally more seeds in the backward direction (which weren’t counted). Thus, we would expect replicates with positive residuals around the density-dependent seed production function to have higher-than-average dispersal fractions (and conversely).
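One way to make that concrete (a sketch only, not run here, assuming the same Ler_dispersal_stats data frame used below): fit the density dependence on its own, then ask whether its residuals co-vary with the dispersal fraction; case 2 predicts a positive association.

# Residuals of the density-only fit vs. dispersal fraction (case 2 predicts r > 0)
dd_fit <- lm(log(Total_seeds / Density) ~ log(Density), data = Ler_dispersal_stats)
cor.test(resid(dd_fit), Ler_dispersal_stats$Dispersal_fraction)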
First, let’s look at density dependence in the dispersal data:
ggplot(aes(y = Total_seeds/Density, x = Density, colour = Dispersal_fraction),
       data = Ler_dispersal_stats) +
  geom_smooth(method = "lm") + geom_point() + scale_x_log10() + scale_y_log10()
There is certainly no obvious pattern there. Let’s look at a linear model:
summary(lm(log(Total_seeds / Density) ~ log(Density) + Dispersal_fraction,
           data = Ler_dispersal_stats))
Call:
lm(formula = log(Total_seeds/Density) ~ log(Density) + Dispersal_fraction,
data = Ler_dispersal_stats)
Residuals:
Min 1Q Median 3Q Max
-1.77595 -0.27330 0.07538 0.37723 1.10417
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.02941 0.32197 15.621 <2e-16 ***
log(Density) -0.71999 0.04811 -14.965 <2e-16 ***
Dispersal_fraction 0.96664 0.55644 1.737 0.0911 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6433 on 35 degrees of freedom
Multiple R-squared: 0.8774, Adjusted R-squared: 0.8704
F-statistic: 125.3 on 2 and 35 DF, p-value: < 2.2e-16
So there is a weak effect in the direction of case 2. Let’s try the mean dispersal distance instead:
summary(lm(log(Total_seeds / Density) ~ log(Density) + mulog,
           data = Ler_dispersal_stats))
Call:
lm(formula = log(Total_seeds/Density) ~ log(Density) + mulog,
data = Ler_dispersal_stats)
Residuals:
Min 1Q Median 3Q Max
-1.56341 -0.34943 0.03147 0.39377 1.05593
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.67123 0.80143 3.333 0.002038 **
log(Density) -0.71075 0.04251 -16.719 < 2e-16 ***
mulog 1.49835 0.41713 3.592 0.000998 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.5731 on 35 degrees of freedom
Multiple R-squared: 0.9027, Adjusted R-squared: 0.8972
F-statistic: 162.4 on 2 and 35 DF, p-value: < 2.2e-16
Oh wow, that’s a strong effect! Let’s look at a plot:
ggplot(aes(y = Total_seeds/Density, x = Density, colour = mulog),
       data = Ler_dispersal_stats) +
  scale_color_distiller(palette = "YlOrRd") +
  geom_smooth(method = "lm") + geom_point() + scale_x_log10() + scale_y_log10()
The plot is not hugely convincing to me, though. And what we really want is the score on the first principal axis, combining the dispersal fraction and the mean dispersal distance:
Ler_dispersal_stats$PCA1 <- princomp(~ Dispersal_fraction + mulog,
                                     data = Ler_dispersal_stats)$scores[, 1]
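(An aside: Dispersal_fraction and mulog are on quite different scales, so princomp on the covariance matrix will mostly weight whichever variable varies more. A scale-free alternative, purely as a check, would use the correlation matrix; the PCA1_scaled name below is just for illustration and isn’t used in what follows.)

# Same idea on the correlation matrix, so both variables get equal weight
pca_scaled <- princomp(~ Dispersal_fraction + mulog,
                       data = Ler_dispersal_stats, cor = TRUE)
Ler_dispersal_stats$PCA1_scaled <- pca_scaled$scores[, 1]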
ggplot(aes(y = Total_seeds/Density, x = Density, colour = PCA1),
       data = Ler_dispersal_stats) +
  scale_color_distiller(palette = "YlOrRd") +
  geom_smooth(method = "lm") + geom_point() + scale_x_log10() + scale_y_log10()
summary(lm(log(Total_seeds / Density) ~ log(Density) + PCA1,
           data = Ler_dispersal_stats))
Call:
lm(formula = log(Total_seeds/Density) ~ log(Density) + PCA1,
data = Ler_dispersal_stats)
Residuals:
Min 1Q Median 3Q Max
-1.68288 -0.36527 0.04046 0.35219 1.15619
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.38924 0.16948 31.799 < 2e-16 ***
log(Density) -0.70679 0.04391 -16.095 < 2e-16 ***
PCA1 1.20744 0.37008 3.263 0.00247 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.5871 on 35 degrees of freedom
Multiple R-squared: 0.8979, Adjusted R-squared: 0.8921
F-statistic: 153.9 on 2 and 35 DF, p-value: < 2.2e-16
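For completeness, one way to compare the three candidate predictors head-to-head is AIC on refits of the same response (a sketch; the model names m0 to m3 are mine, not from the analysis above):

m0 <- lm(log(Total_seeds / Density) ~ log(Density), data = Ler_dispersal_stats)
m1 <- update(m0, . ~ . + Dispersal_fraction)  # dispersal fraction
m2 <- update(m0, . ~ . + mulog)               # mean dispersal distance
m3 <- update(m0, . ~ . + PCA1)                # first principal axis
AIC(m0, m1, m2, m3)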
Yeah, I dunno. It sure looks like there’s a lot of noise, and adding PCA1 doesn’t improve the \(R^2\) much. Furthermore, backward dispersal is unlikely to be all that important (a robust sensitivity check would be to set it to zero!). So I’m inclined to ignore it.
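If we did want that sensitivity check, the simplest version (purely illustrative; the displacement grid and kernel values below are made up, not the fitted ones) would be to zero out the backward half of a discretized kernel and renormalize before re-running things:

# Hypothetical example: truncate a discretized 1-D kernel at zero displacement
zero_backward <- function(displacement, kernel) {
  kernel[displacement < 0] <- 0
  kernel / sum(kernel)   # renormalize so probabilities still sum to 1
}

x <- seq(-5, 20, by = 0.5)        # made-up displacement grid (pot positions)
k <- dnorm(x, mean = 3, sd = 4)   # made-up kernel for illustration
k_trunc <- zero_backward(x, k / sum(k))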