Function for estimating a Random Effects Poisson-Lindley regression model

Usage

poisLind.re(
  formula,
  group_var,
  data,
  method = "NM",
  max.iters = 1000,
  print.level = 0,
  bootstraps = NULL,
  offset = NULL
)

Arguments

formula

an R formula.

group_var

the grouping variable(s) indicating random effects (e.g., individual ID).

data

a data frame containing all of the variables in the formula.

method

the optimization method to use for maximum likelihood estimation. For the available options, see maxLik. Note that "BHHH" is not available for this function because of how the random effects are implemented.

max.iters

the maximum number of iterations allowed for the optimization method.

print.level

an integer specifying the verbosity of output during optimization.

bootstraps

an optional integer specifying the number of bootstrap samples used to estimate standard errors. If not specified, no bootstrapping is performed.

offset

an optional offset term provided as a string; see the illustrative sketch below.
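
A minimal, hypothetical sketch of supplying the offset (the data frame panel_df, its columns, and the column name "log_exposure" are illustrative assumptions, not objects provided by the package):

fit <- poisLind.re(
  y ~ x1 + x2,
  data      = panel_df,
  group_var = "id",
  offset    = "log_exposure"  # assumed: a string naming a column in `data`
)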

Value

An object of class `countreg`, which is a list with the following components:

  • model: the fitted model object.

  • data: the data frame used to fit the model.

  • call: the matched call.

  • formula: the formula used to fit the model.
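
For instance, with a fitted object fit as returned by poisLind.re() (a minimal sketch of accessing the components listed above):

fit$call        # the matched call
fit$formula     # the formula used to fit the model
fit$model       # the fitted model object
head(fit$data)  # the data frame used to fit the model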

Details

The function poisLind.re estimates a Random Effects Poisson-Lindley regression model by maximum likelihood. It is similar to poisLind, but takes an additional argument, group_var, specifying the grouping variable for the random effects, and its likelihood includes additional terms that account for those effects.

The Random Effects Poisson-Lindley model is useful for panel data and assumes that the multiplicative random effects follow a Lindley distribution, scaled to have mean one. The joint probability mass function for the observations in group i is $$ f(y_{i1},\ldots,y_{in_i}\,|\,\mu_{i1},\ldots,\mu_{in_i},\theta)=\frac{\theta^2}{\theta+1} \prod_{t=1}^{n_i}\frac{\left(\mu_{it}\frac{\theta(\theta+1)}{\theta+2}\right)^{y_{it}}}{y_{it}!} \cdot \frac{ \left(\sum_{t=1}^{n_i}y_{it}\right)! \left(\sum_{t=1}^{n_i}\mu_{it}\frac{\theta(\theta+1)}{\theta+2} + \theta + \sum_{t=1}^{n_i}y_{it} + 1\right) }{ \left(\sum_{t=1}^{n_i}\mu_{it}\frac{\theta(\theta+1)}{\theta+2} + \theta\right)^{\sum_{t=1}^{n_i}y_{it}+2} } $$

The corresponding log-likelihood contribution of group i is: $$ LL_i = 2\log(\theta) - \log(\theta+1) + \sum_{t=1}^{n_i} y_{it}\log(\mu_{it}) + \sum_{t=1}^{n_i} y_{it}\log\!\left( \frac{\theta(\theta+1)}{\theta+2} \right) - \sum_{t=1}^{n_i}\log(y_{it}!) + \log\!\left( \left(\sum_{t=1}^{n_i}y_{it}\right)! \right) + \log\!\left( \sum_{t=1}^{n_i}\mu_{it}\frac{\theta(\theta+1)}{\theta+2} + \theta + \sum_{t=1}^{n_i}y_{it} + 1 \right) - \left(\sum_{t=1}^{n_i}y_{it} + 2\right) \log\!\left( \sum_{t=1}^{n_i}\mu_{it}\frac{\theta(\theta+1)}{\theta+2} + \theta \right) $$
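
A minimal R sketch of this group-level contribution, written directly from the formula above (the helper name pl_re_loglik_i is made up for this sketch; it is an illustration, not the package's internal implementation):

# y: vector of counts for one group; mu: fitted means for that group; theta > 0
pl_re_loglik_i <- function(y, mu, theta) {
  k <- theta * (theta + 1) / (theta + 2)  # recurring scaling factor
  A <- sum(mu * k) + theta                # term appearing in the last two pieces
  2 * log(theta) - log(theta + 1) +
    sum(y * log(mu)) + sum(y) * log(k) -
    sum(lfactorial(y)) + lfactorial(sum(y)) +
    log(A + sum(y) + 1) -
    (sum(y) + 2) * log(A)
}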

The mean and variance are: $$\mu_{it}=\exp(X_{it} \beta)$$ $$ V(\mu_{it})=\mu_{it}+ \left(1-\frac{2}{(\theta+2)^2}\right)\mu_{it}^2 $$
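
In code, given a design matrix X, coefficient vector beta, and the estimated theta (again a sketch of the relationships above; X, beta, and theta are assumed to be available):

mu <- drop(exp(X %*% beta))                # conditional mean
v  <- mu + (1 - 2 / (theta + 2)^2) * mu^2  # variance implied by the model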

Examples

# \donttest{
data("washington_roads")
washington_roads$AADTover10k <-
  ifelse(washington_roads$AADT > 10000, 1, 0)

poislind.mod <- poisLind.re(
  Animal ~ lnaadt + lnlength + speed50 +
    ShouldWidth04 + AADTover10k,
  data      = washington_roads,
  group_var = "ID",
  method    = "NM",
  max.iters = 1000
)
#> Warning: NaNs produced
summary(poislind.mod)
#> Call:
#>  Animal ~ lnaadt + lnlength + speed50 + ShouldWidth04 + AADTover10k 
#> 
#>  Method:  poisLindRE 
#> Iterations:  1002 
#> Convergence:  iteration limit exceeded  
#> Log-likelihood:  91004.96 
#> 
#> Parameter Estimates:
#> # A tibble: 7 × 7
#>   parameter       coeff `Std. Err.` `t-stat` `p-value` `lower CI` `upper CI`
#>   <chr>           <dbl>       <dbl>    <dbl>     <dbl>      <dbl>      <dbl>
#> 1 (Intercept)    -5.55        0.151   -36.6          0     -5.84      -5.25 
#> 2 lnaadt          0.175       0.018     9.44         0      0.138      0.211
#> 3 lnlength        8.67        0.262    33.1          0      8.16       9.19 
#> 4 speed50         4.26        0.131    32.5          0      4.00       4.52 
#> 5 ShouldWidth04  -2.92        0.262   -11.1          0     -3.43      -2.40 
#> 6 AADTover10k    11.8         0.262    45.2          0     11.3       12.4  
#> 7 ln(theta)     -46.6        NA        NA           NA     NA         NA    
# }
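
Bootstrap standard errors can be requested through the bootstraps argument. A hedged variant of the call above (not run here; 200 replicates is an arbitrary choice for illustration):

# Not run:
# poislind.boot <- poisLind.re(
#   Animal ~ lnaadt + lnlength + speed50 + ShouldWidth04 + AADTover10k,
#   data       = washington_roads,
#   group_var  = "ID",
#   method     = "NM",
#   bootstraps = 200
# )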