Lecture 39—Wednesday, April 18, 2012

Outline of lecture

Hypothesis testing in GAMs revisited

In Part 2 of the final exam I ask you to fit a GAM using all the predictors and then try to simplify it.

setwd("C:/Users/jmweiss/Documents/ecol 562")
elfin <- read.csv('elfin.csv')
library(mgcv)
gam1 <- gam(Occupied_Unoccupied~s(TotalCanopy)+ s(WildIndigoSize)+ s(NearestWildIndigo)+ s(DistanceNearestTree)+ s(DirectionTree,k=8)+ s(Slope)+ s(SlopeAspect,k=8), data=elfin, method="ML", family=binomial)
summary(gam1)
Family: binomial
Link function: logit

Formula:
Occupied_Unoccupied ~ s(TotalCanopy) + s(WildIndigoSize) + s(NearestWildIndigo) +
    s(DistanceNearestTree) + s(DirectionTree, k = 8) + s(Slope) +
    s(SlopeAspect, k = 8)

Parametric coefficients:
            Estimate Std. Error z value Pr(>|z|) 
(Intercept)  -0.6010     0.3566  -1.685   0.0919 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                         edf Ref.df Chi.sq  p-value   
s(TotalCanopy)         3.409  4.249 31.697 2.96e-06 ***
s(WildIndigoSize)      1.000  1.000 32.884 9.79e-09 ***
s(NearestWildIndigo)   1.000  1.000  9.586  0.00196 **
s(DistanceNearestTree) 2.091  2.629  4.592  0.16224   
s(DirectionTree)       1.000  1.000  3.928  0.04749 * 
s(Slope)               1.000  1.000  0.054  0.81608   
s(SlopeAspect)         1.000  1.000  0.667  0.41416   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =   0.58   Deviance explained = 53.1%
ML score = 116.83  Scale est. = 1         n = 328

The summary table suggests that Slope and SlopeAspect are not significant, but we know that the p-values of the Wald tests for GAMs can be quite unreliable. The different reduced models we might fit and compare against this one are not all equivalent because the variable SlopeAspect has a large number of missing values.

#check for missing values
apply(elfin,2,function(x) sum(is.na(x)))
Occupied_Unoccupied         TotalCanopy      WildIndigoSize   NearestWildIndigo
                  0                   0                   0                   0
DistanceNearestTree       DirectionTree               Slope         SlopeAspect
                  0                   0                   0                 126

The gam function silently removes the observations with missing values whenever SlopeAspect appears as a predictor in the model. Models that are fit to different sets of observations cannot be compared using significance tests or AIC. Fortunately the anova function stops with an error if we try to compare models fit to different data sets.

gam2 <- gam(Occupied_Unoccupied~s(TotalCanopy)+ s(WildIndigoSize)+ s(NearestWildIndigo)+ s(DistanceNearestTree)+ s(DirectionTree,k=8)+ s(Slope), data=elfin, method="ML", family=binomial)
#anova does not run because models fit to different data sets
anova(gam2,gam1)
Error in anova.glmlist(c(list(object), dotargs), dispersion = dispersion,  :
  models were not all fitted to the same size of dataset

It makes sense to test SlopeAspect first because if it can be dropped then the remaining variables can be examined using the full data set. To make models with and without SlopeAspect comparable I need to explicitly remove the observations that have missing values for SlopeAspect before fitting the model without SlopeAspect.

#remove observations with missing values of SlopeAspect
elfin2 <- elfin[!is.na(elfin$SlopeAspect),]
gam2 <- gam(Occupied_Unoccupied~s(TotalCanopy)+ s(WildIndigoSize)+ s(NearestWildIndigo)+ s(DistanceNearestTree)+ s(DirectionTree,k=8)+ s(Slope), data=elfin2, method="ML", family=binomial)

If we try to use the anova function to compare this model against the full model we get a surprise. Depending on the order in which the models are entered, we get either a negative change in deviance or a negative change in degrees of freedom, and in neither case is a p-value reported.

#negative change in deviance and no p-value!
anova(gam2, gam1, test='Chisq')
Analysis of Deviance Table

Model 1: Occupied_Unoccupied ~ s(TotalCanopy) + s(WildIndigoSize) + s(NearestWildIndigo) +
    s(DistanceNearestTree) + s(DirectionTree, k = 8) + s(Slope)
Model 2: Occupied_Unoccupied ~ s(TotalCanopy) + s(WildIndigoSize) + s(NearestWildIndigo) +
    s(DistanceNearestTree) + s(DirectionTree, k = 8) + s(Slope) +
    s(SlopeAspect, k = 8)
  Resid. Df Resid. Dev      Df Deviance Pr(>Chi)
1     317.2     211.64                         
2     316.5     212.64 0.70243  -1.0034        

#negative change in df and no p-value
anova(gam1, gam2, test='Chisq')
Analysis of Deviance Table

Model 1: Occupied_Unoccupied ~ s(TotalCanopy) + s(WildIndigoSize) + s(NearestWildIndigo) +
    s(DistanceNearestTree) + s(DirectionTree, k = 8) + s(Slope) +
    s(SlopeAspect, k = 8)
Model 2: Occupied_Unoccupied ~ s(TotalCanopy) + s(WildIndigoSize) + s(NearestWildIndigo) +
    s(DistanceNearestTree) + s(DirectionTree, k = 8) + s(Slope)
  Resid. Df Resid. Dev       Df Deviance Pr(>Chi)
1     316.5     212.64                          
2     317.2     211.64 -0.70243   1.0034      

If we examine the log-likelihoods and AIC we see that we have apparently obtained the impossible. The more complicated of the two nested models has a lower log-likelihood.

#simpler model has better log-likelihood. Say what?
logLik(gam1)
'log Lik.' -106.3202 (df=11.49988)
logLik(gam2)
'log Lik.' -105.8185 (df=10.79745)
#reported AIC is also impossible: AIC(gam1) > AIC(gam2) + 2
AIC(gam1,gam2)
           df      AIC
gam1 11.49988 235.6402
gam2 10.79745 233.2319

The explanation for this is that the models we're comparing are not truly nested. Each model has its own set of smoothing parameters and these changed when the SlopeAspect variable was removed.

#examine summary table of smooths
summary(gam1)$s.table
                            edf   Ref.df      Chi.sq      p-value
s(TotalCanopy)         3.409316 4.249312 31.69727965 2.961970e-06
s(WildIndigoSize)      1.000013 1.000026 32.88357833 9.785235e-09
s(NearestWildIndigo)   1.000001 1.000002  9.58645567 1.960186e-03
s(DistanceNearestTree) 2.090525 2.628994  4.59219736 1.622430e-01
s(DirectionTree)       1.000006 1.000012  3.92812699 4.748563e-02
s(Slope)               1.000014 1.000029  0.05410182 8.160832e-01
s(SlopeAspect)         1.000006 1.000012  0.66684499 4.141582e-01
summary(gam2)$s.table
                            edf   Ref.df       Chi.sq      p-value
s(TotalCanopy)         3.379264 4.212707 31.097803031 3.754913e-06
s(WildIndigoSize)      1.362175 1.637155 43.979340925 1.395259e-10
s(NearestWildIndigo)   1.000001 1.000002  9.634214405 1.909866e-03
s(DistanceNearestTree) 2.055991 2.585283  4.648182579 1.536022e-01
s(DirectionTree)       1.000008 1.000016  3.880011753 4.886535e-02
s(Slope)               1.000014 1.000028  0.002752108 9.581656e-01

Apparently the presence of SlopeAspect put a constraint on the smoothing parameters of the other smooths that was lifted when SlopeAspect was excluded from the model. There are two ways to deal with this so that we can validly compare the models.

  1. Increase the number of knots beyond the default value of 10 so that the smoothing parameters are not constrained by the presence or absence of SlopeAspect. Doing so will not make the models truly nested, but it will make them approximately nested.
  2. Force the second model to use the smoothing parameters of the first model. With this choice the models will be truly nested and the likelihood ratio test will be valid.

Method 1: Increase the number of knots

To determine how many knots to use, I fit both models over a range of knot settings, record the log-likelihoods and the change in deviance that result, and determine at what point we start getting sensible results.

#function to fit both models with a given number of knots for the continuous smooths
my.gam <- function(k) {
gam1 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope)+ s(SlopeAspect,k=8), data=elfin2, method="ML", family=binomial)
gam2 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope), data=elfin2, method="ML", family=binomial)
#return the deviances, residual df, log-likelihoods, and their differences
c(gam1$deviance, gam1$df.residual, logLik(gam1), gam2$deviance, gam2$df.residual, logLik(gam2), gam2$deviance-gam1$deviance, gam2$df.residual-gam1$df.residual, k)
}
#try knots = 10, 15, 20, and 25
gam.results <- sapply(c(10,15,20,25),my.gam)
gam.results <- t(gam.results)
colnames(gam.results) <- c('gam1.dev', 'gam1.df', 'gam1.LL', 'gam2.dev', 'gam2.df', 'gam2.LL', 'dev change', 'df change', 'knots')
round(gam.results,3)
     gam1.dev gam1.df  gam1.LL gam2.dev gam2.df  gam2.LL dev change df change knots
[1,]  212.640 316.500 -106.320  211.637 317.203 -105.818     -1.003     0.702    10
[2,]  212.538 316.429 -106.269  211.875 317.209 -105.937     -0.664     0.780    15
[3,]  212.525 316.419 -106.263  213.438 317.460 -106.719      0.913     1.041    20
[4,]  212.508 316.412 -106.254  213.423 317.454 -106.712      0.915     1.042    25

With the choice of 20 knots the change in deviance is positive and the log-likelihood of the larger model exceeds the log-likelihood of the simpler model. Increasing the number of knots to 25 has only a small additional effect. I refit both models using k = 20 for all of the smooths where it is possible to do so. Slope is clearly linear so I leave it alone.

#refit models with k=20
k <- 20
gam1 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope)+ s(SlopeAspect,k=8), data=elfin2, method="ML", family=binomial)
gam2 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope), data=elfin2, method="ML", family=binomial)
#now test is correct
anova(gam2, gam1, test='Chisq')
Analysis of Deviance Table

Model 1: Occupied_Unoccupied ~ s(TotalCanopy, k = k) + s(WildIndigoSize,
    k = k) + s(NearestWildIndigo, k = k) + s(DistanceNearestTree,
    k = k) + s(DirectionTree, k = 8) + s(Slope)
Model 2: Occupied_Unoccupied ~ s(TotalCanopy, k = k) + s(WildIndigoSize,
    k = k) + s(NearestWildIndigo, k = k) + s(DistanceNearestTree,
    k = k) + s(DirectionTree, k = 8) + s(Slope) + s(SlopeAspect,
    k = 8)
  Resid. Df Resid. Dev     Df Deviance Pr(>Chi)
1    317.46     213.44                        
2    316.42     212.53 1.0411   0.9129   0.3532

The results are now reasonable and we can conclude that SlopeAspect should be dropped.

Method 2: Use the same smoothing parameters in both models

We can make the models truly nested by forcing both models to use the same values of the smoothing parameters. The smoothing parameters are stored in the $sp component of the model object.

k <- 10
gam1 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope)+ s(SlopeAspect,k=8), data=elfin2, method="ML", family=binomial)
gam1$sp
        s(TotalCanopy)      s(WildIndigoSize)   s(NearestWildIndigo) s(DistanceNearestTree)
          2.139258e-03           3.643231e+03           2.401480e+04           1.521201e-03
      s(DirectionTree)               s(Slope)         s(SlopeAspect)
          1.365879e+03           2.227771e+03           1.357435e+03

What's reported is the value of λ in the formula for smoothing splines.

$$\min_f \; \sum_{i=1}^{n}\left(y_i - f(x_i)\right)^2 + \lambda \int \left[f''(t)\right]^2 dt$$

Large values of λ correspond to smooths whose graphs are nearly straight lines. We can force the reduced model to use the smoothing parameters of the full model by specifying them in the sp argument of gam. Because SlopeAspect has been removed from the simpler model, I need to supply only the first six smoothing parameters of the full model. The likelihood ratio test now returns a positive change in deviance, and both the log-likelihoods and AICs change in the correct direction.

gam2 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope), sp=gam1$sp[1:6], data=elfin2, method="ML", family=binomial)
anova(gam2, gam1, test='Chisq')
Analysis of Deviance Table

Model 1: Occupied_Unoccupied ~ s(TotalCanopy, k = k) + s(WildIndigoSize,
    k = k) + s(NearestWildIndigo, k = k) + s(DistanceNearestTree,
    k = k) + s(DirectionTree, k = 8) + s(Slope)
Model 2: Occupied_Unoccupied ~ s(TotalCanopy, k = k) + s(WildIndigoSize,
    k = k) + s(NearestWildIndigo, k = k) + s(DistanceNearestTree,
    k = k) + s(DirectionTree, k = 8) + s(Slope) + s(SlopeAspect,
    k = 8)
  Resid. Df Resid. Dev      Df Deviance Pr(>Chi)
1    317.49     213.44                         
2    316.50     212.64 0.99023  0.79671   0.3686

logLik(gam1)
'log Lik.' -106.3202 (df=11.49988)
logLik(gam2)
'log Lik.' -106.7186 (df=10.50965)
AIC(gam1,gam2)
           df      AIC
gam1 11.49988 235.6402
gam2 10.50965 234.4564

In practice it is probably a good idea to use these two approaches in combination. The number of knots should be set large enough that the smooths are not being constrained, and a common set of smoothing parameters should be used so that the models being compared are truly nested.
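
As a rough sketch of this combined strategy using the objects defined above (the choice k = 20 is just the value that worked here, not a general recommendation):

#combine Method 1 (more knots) with Method 2 (shared smoothing parameters)
k <- 20
gam1 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope)+ s(SlopeAspect,k=8), data=elfin2, method="ML", family=binomial)
#reduced model inherits the first six smoothing parameters of the full model
gam2 <- gam(Occupied_Unoccupied~s(TotalCanopy,k=k)+ s(WildIndigoSize,k=k)+ s(NearestWildIndigo,k=k)+ s(DistanceNearestTree,k=k)+ s(DirectionTree,k=8)+ s(Slope), sp=gam1$sp[1:6], data=elfin2, method="ML", family=binomial)
anova(gam2, gam1, test='Chisq')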

Ordinal logistic regression

Ordinal logistic regression is a regression model for ordinal multinomial data that tries to take into account the natural order of the categories. Ordinal data can be analyzed using the baseline category logit model but at the expense of losing information contained in the order of the categories. Historically ordinal data have been treated as weakly numeric data. Scores were assigned to individual categories and a mean score calculated for each multinomial observation. The mean scores were then modeled as a function of predictors.

As an example Likert scales used in course evaluations often have numeric values associated with the categories (Table 1).

Table 1   Likert scale

Category            Score
strongly disagree     1
disagree              2
neutral               3
agree                 4
strongly agree        5

Associating numeric scores with category labels encourages raters to assign ratings that are commensurate with the numeric values. Numeric scores for use in a regression analysis can also be developed post hoc. There are two problems with treating ordinal data as if they were continuous.

  1. Results can differ if a different scoring system is used. This places a burden on the analyst to demonstrate that the obtained results are score independent.
  2. A scoring system imposes a distance metric on the categories that may not correspond to the metric that was used by the raters.

Modern approaches account for the order of ordinal data but don't impose a distance metric on the categories. They all tend to be modifications of the baseline category logit model in which two adjustments are made to take advantage of the ordinal nature of the data.

  1. Odds are reformulated so that they are true odds, not "odds-like" quantities. These odds are created to capture the directionality of the categories.
  2. Ordinal data make it possible to describe the effects of predictors more parsimoniously by using the same regression coefficients in different odds comparisons. This is called the uniform association, proportional odds, or parallelism assumption.

Different odds formulations for ordinal data can be used with or without the assumption of parallelism, so these two adjustments are more or less independent of each other. Ordinal regression models are classified by the kind of odds they model. The choice depends on selecting which event comparisons are considered interesting. Once this choice is made, true odds are calculated as the ratio of the probability in favor of the event to the probability against the event.
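
In symbols, if E denotes the event of interest, the odds in favor of E are

$$\text{odds}(E) = \frac{P(E)}{1 - P(E)}$$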

A taxonomy of ordinal regression models

Although a number of R packages exist that can fit one or more types of ordinal regression models, the most comprehensive among these is the VGAM package (Yee 2010). With VGAM all of the standard ordinal regression models can be fit using a common syntax. Table 2 lists the basic ordinal regression models, the probabilities and odds they choose to model, and the corresponding VGAM syntax needed to fit these models.

Table 2   Ordinal logistic regression models

Model                 Probability                  Odds                      VGAM family
Cumulative odds       P(Y ≥ j)                     P(Y ≥ j) / P(Y < j)       cumulative(reverse=T)
                      P(Y ≤ j)                     P(Y ≤ j) / P(Y > j)       cumulative
Continuation ratio    P(Y > j | Y ≥ j)             P(Y > j) / P(Y = j)       cratio
                      P(Y = j | Y ≥ j)             P(Y = j) / P(Y > j)       sratio
Adjacent category     P(Y = j+1 | Y ∈ {j, j+1})    P(Y = j+1) / P(Y = j)     acat
  1. In the cumulative odds models we collapse the multinomial categories into two groups in such a way that the order of the original categories is preserved. There are two ways to formulate the odds: as the probability of reaching or exceeding a given category divided by the probability of falling below it, or as what is essentially its reciprocal. For J categories there are J – 1 such comparisons one can make. For instance, if Y = 1, 2, 3, or 4, then we could make the following three comparisons: Y ≤ 1 versus Y > 1, Y ≤ 2 versus Y > 2, and Y ≤ 3 versus Y > 3.
  2. The continuation ratio formulation yields a quantity that is analogous to the hazard function from survival analysis. In the version shown in Table 2 we restrict the set of interest to those categories that equal or exceed the category of interest, so the reference group corresponds to those who do as well as or better than you. If we think of the categories as indicating progress, then the continuation ratio per se (VGAM cratio) yields the odds of progressing (moving on) versus staying where you are, while the VGAM sratio (stopping ratio) yields the odds of not progressing given that we've made it this far. The continuation ratio formulation is popular in educational research where, e.g., learning level is the ordinal categorical variable and the goal is to estimate the probability of progressing beyond one's current level.
  3. The adjacent category formulation focuses on the transition between adjacent categories. The regression model attempts to find predictors that affect this transition.
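
As an illustration of the common syntax, here is a sketch fitting each of the four models in Table 2. The data frame dat, ordered factor response y, and predictor x are hypothetical placeholders, not objects from this lecture.

library(VGAM)
#y is an ordered factor with J levels; x is a predictor (both hypothetical)
fit.cum  <- vglm(y ~ x, family=cumulative(parallel=TRUE), data=dat)   #cumulative odds
fit.crat <- vglm(y ~ x, family=cratio(parallel=TRUE), data=dat)       #continuation ratio
fit.srat <- vglm(y ~ x, family=sratio(parallel=TRUE), data=dat)       #stopping ratio
fit.acat <- vglm(y ~ x, family=acat(parallel=TRUE), data=dat)         #adjacent categories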

The proportional odds assumption

When we fit any of the ordinal models described in Table 2 to a response variable with J categories, we get J – 1 regression coefficients for each predictor. This is as many regression coefficients as we would get had we instead fit the baseline category logit model. The log odds for category j in a cumulative odds regression model with two predictors would be written as follows.

$$\log\frac{P(Y \le j)}{P(Y > j)} = \alpha_j + \beta_{1j}\,x + \beta_{2j}\,z, \qquad j = 1, \ldots, J-1$$

This yields J – 1 different intercepts, J – 1 different coefficients for x, and J – 1 different coefficients of z, one for each of the J – 1 odds comparisons we're making.

The usual choice in ordinal logistic regression is to retain the J – 1 different intercepts, but to have just single coefficients, β1 and β2, for the predictors x and z. This is called the uniform association, proportional odds, or parallel hypothesis because it assumes the logit curves for the different category transitions are parallel.

$$\log\frac{P(Y \le j)}{P(Y > j)} = \alpha_j + \beta_1\,x + \beta_2\,z, \qquad j = 1, \ldots, J-1$$

The proportional odds model is directly analogous to the ordinary regression model. With it we can speak of "the" effect of x and "the" effect of z on the response variable rather than having a different effect at each category transition. When someone reports that they carried out ordinal logistic regression they almost always mean they fit a cumulative odds logit model in which the assumption of parallelism was made.

It is important to recognize that the uniform association hypothesis is an assumption that needs to be examined. Generally the more predictors there are in an ordinal logistic regression model, the more likely it is that the assumption is being violated by at least one of the predictors. In very simple regression models with a single dichotomous predictor an easy way to check the uniform association hypothesis is to calculate the raw empirical odds ratios to see if they are approximately the same for different category transitions.

As an illustration consider the following toy example of an ordinal response variable Y with three categories and a dichotomous predictor X. The tabulated data are shown below.

[2 × 3 table of counts of X (rows: 0, 1) by Y (columns: 0, 1, 2); the counts appear only in the original figure]

Suppose we consider probabilities of the form P(Y ≤ j), j = 0, 1, with the goal of fitting a cumulative odds logit model with the parallelism assumption. Without the parallelism hypothesis our model is the following.

$$\log\frac{P(Y \le 0 \mid X)}{P(Y > 0 \mid X)} = \alpha_1 + \beta_{11}X, \qquad \log\frac{P(Y \le 1 \mid X)}{P(Y > 1 \mid X)} = \alpha_2 + \beta_{12}X$$

which yields two different odds ratios for X, exp(β11) for the transition from Y = 0 to Y = 1 and exp(β12) for the transition from Y = 1 to Y = 2. When j = 0 in the cumulative logit model we calculate the odds of Y ≤ 0 versus Y > 0. To obtain this the categories 1 and 2 of the contingency table are combined.

[collapsed 2 × 2 table: Y = 0 versus Y > 0 (categories 1 and 2 combined); counts shown only in the original figure]

The odds ratio X = 1 versus X = 0 is calculated as follows.

$$\widehat{OR}_0 = \frac{\hat{P}(Y \le 0 \mid X=1)\,/\,\hat{P}(Y > 0 \mid X=1)}{\hat{P}(Y \le 0 \mid X=0)\,/\,\hat{P}(Y > 0 \mid X=0)}$$

[numeric value, computed from the tabulated counts, shown only in the original figure]

When j = 1, we calculate the odds of Y ≤ 1 versus Y > 1. This time categories 0 and 1 of the contingency table are collapsed.

[collapsed 2 × 2 table: Y ≤ 1 (categories 0 and 1 combined) versus Y = 2; counts shown only in the original figure]

The odds ratio for X = 1 versus X = 0 is calculated as follows.

$$\widehat{OR}_1 = \frac{\hat{P}(Y \le 1 \mid X=1)\,/\,\hat{P}(Y > 1 \mid X=1)}{\hat{P}(Y \le 1 \mid X=0)\,/\,\hat{P}(Y > 1 \mid X=0)}$$

[numeric value, computed from the tabulated counts, shown only in the original figure]

We see that the two raw odds ratios are approximately the same, suggesting that exp(β11) ≈ exp(β12) and hence that the separate coefficients β11 and β12 can be replaced by a single value β1. This provides empirical evidence for the parallelism hypothesis for these data.
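
Because the tables above survive only as figures, here is a sketch of the calculation with made-up counts; the function computes the raw odds ratio at each cumulative cut point.

#hypothetical 2 x 3 table of counts: rows X = 0, 1; columns Y = 0, 1, 2
tab <- matrix(c(20, 10, 15, 12, 5, 18), nrow=2, dimnames=list(X=0:1, Y=0:2))
#raw odds ratio (X = 1 versus X = 0) for the comparison Y <= j versus Y > j
cum.or <- function(tab, j) {
 lo <- rowSums(tab[, 1:(j+1), drop=FALSE])   #counts with Y <= j
 hi <- rowSums(tab) - lo                     #counts with Y > j
 (lo[2]/hi[2]) / (lo[1]/hi[1])
}
sapply(0:1, function(j) cum.or(tab, j))      #roughly equal values would support parallelism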

In the VGAM package we can test the parallelism hypothesis directly by comparing two ordinal regression models.

  1. An ordinal regression model in which a common regression parameter for a predictor x is used for all categories of Y (parallel = T).
  2. An ordinal regression model in which separate regression parameters for a predictor x are used for the different categories of Y (parallel = F).

These two models can be compared with a likelihood ratio test; if the test statistic is significant, the parallelism hypothesis is rejected. The general consensus is that this test tends to be anti-conservative, rejecting the parallelism assumption too often. Given that the more flexible polytomous logit model will often fit the data somewhat better, it's probably prudent to examine the separate coefficients to see whether the differences between them are substantive enough to abandon parallelism.
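
A sketch of this comparison in VGAM, reusing the hypothetical dat, y, and x from above:

#fit cumulative odds models with and without the parallelism constraint
fit.p  <- vglm(y ~ x, family=cumulative(parallel=TRUE), data=dat)
fit.np <- vglm(y ~ x, family=cumulative(parallel=FALSE), data=dat)
#likelihood ratio test of the parallelism hypothesis
lrtest(fit.np, fit.p)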

In the end parallelism may be rejected for some predictors in a regression model but not all. In such a case it is possible to fit a model in which some predictors have a single coefficient for all categories of the response while other predictors have different coefficients. Such a model is called a partial proportional odds model and it too can be fit with the VGAM package.
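
In VGAM the parallel argument also accepts a formula, which is how a partial proportional odds model is specified. A sketch with hypothetical predictors x1 and x2 where, as I understand the syntax, parallelism is imposed on every term except x2:

#partial proportional odds: single coefficient for x1, separate coefficients for x2
fit.ppo <- vglm(y ~ x1 + x2, family=cumulative(parallel = FALSE ~ x2), data=dat)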

Discrete choice models

In the typical discrete choice model we have a number of subjects making choices, along with predictors that characterize the subjects as well as predictors that characterize the choices. The choice sets may vary across subjects, so we don't have a fixed set of multinomial categories. One place discrete choice models have been used in ecology is in animal resource selection studies. Although different animals will have different resources from which to choose, they are linked by a common set of predictors that characterize the resources and a common set of predictors that characterize the choosers. The notation for discrete choice models is given in Table 3.

Table 3   Notation for the discrete choice model

Term    Definition
C_i     the set of choices available to subject i
x_ij    the vector of values of the p explanatory variables describing subject i and choice j
π_ij    the probability that subject i chooses choice j

The discrete choice model uses the following expression for π_ij.

$$\pi_{ij} = \frac{\exp\left(\mathbf{x}_{ij}'\boldsymbol{\beta}\right)}{\sum_{k \in C_i} \exp\left(\mathbf{x}_{ik}'\boldsymbol{\beta}\right)}$$

This expression has its origin in utility theory in economics. Notice that it has the same form as the Cox partial likelihood from survival analysis. Instead of summing over the risk set in the denominator we sum over the choice set. Because of the close similarities between the discrete choice model and the Cox model, we can use survival analysis software to specify the discrete choice likelihood and obtain parameter estimates. It's important to realize that this is just a device for obtaining parameter estimates. There is no survival function interpretation for the results.

For example, suppose we have two subjects making choices. Subject 1 has a choice set {A, B, C} and subject 2 has a choice set {B, C, D}. Predictors x1 and x2 provide information about the choices and/or subjects. Suppose subject 1 chooses A and subject 2 chooses C. To determine how predictors x1 and x2 influence these choices we fit a discrete choice model in which we organize the data as shown in Table 4.

Table 4   Data organization for fitting the discrete choice model

ID   Subject   Options   x1   x2   Choose   Bout   Time
 1      1         A       1   10      1       1      1
 2      1         B       2   20      0       1      2
 3      1         C       3   25      0       1      2
 4      2         B       2   20      0       2      2
 5      2         C       3   25      1       2      1
 6      2         D       4   27      0       2      2

The variable Choose indicates the option that was selected by the subject, Choose = 1, and the options that were rejected, Choose = 0. This variable plays the role of the censoring variable in survival analysis. The option that is chosen is assigned Time = 1, while all the options not selected are assigned Time = 2. This forces all of the remaining choices to be in the risk set at the "time" the choice was made. To ensure that the probabilities and risk sets are calculated separately by subject, we make each subject a stratum, here denoted by the variable Bout. This discrete choice model can be fit with the coxph function of the survival package as follows.

coxph(Surv(Time, Choose) ~ x1 + x2 + strata(Bout))
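
For concreteness, here is a minimal sketch that enters the Table 4 data and runs this call; the data frame name dc is my own choice, not part of the original.

library(survival)
#enter the data exactly as organized in Table 4
dc <- data.frame(Subject=c(1,1,1,2,2,2), Options=c('A','B','C','B','C','D'),
 x1=c(1,2,3,2,3,4), x2=c(10,20,25,20,25,27), Choose=c(1,0,0,0,1,0),
 Bout=c(1,1,1,2,2,2), Time=c(1,2,2,2,1,2))
#chosen option "fails" at Time = 1 with the rejected options still in the risk set
coxph(Surv(Time, Choose) ~ x1 + x2 + strata(Bout), data=dc)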

A paper that introduces discrete choice modeling to ecologists is Cooper and Millspaugh (1999).

Cited references

Cooper, A. B. and J. J. Millspaugh. 1999. The application of discrete choice models to wildlife resource selection studies. Ecology 80: 566–575.

Yee, T. W. 2010. The VGAM package for categorical data analysis. Journal of Statistical Software 32(10): 1–34.