# LQMM – FAQ

Query: Number of levels

I saw that in the lqmm package you can perform a quantile regression, and that it can handle a 2-level nested model. I wanted to ask if it can handle a 3-level nested model as well?

lqmm can deal with 2 levels only.

Query: Convergence, tolerance and iterations

I have successfully been able to  apply your package lqmm to fit some models and it appears to be working ok with my data. I am using a call like this:

lqmm(y ~ x + covariates, random = ~ 1, group = subjectID, iota = c(0.25, 0.50, 0.75), …)

However, I am getting these optimization errors every time:

Warning message: In errorHandling(OPTIMIZATION$low_loop, "low", control$LP_max_iter, : Lower loop did not converge in: lqmm. Try increasing max number of iterations (500) or tolerance (1e-05)

I have increased the maximum number of iterations to 1000 (and changed the tolerance) as instructed, using control = list(LP_tol_ll = 1e-10, LP_max_iter = 1000, verbose = F), but I have not been successful. A run takes a very long time to execute, and when I attempt to run summary(object) it takes even longer.

In general, the warning message is there to tell you that, given the optimization parameters, the algorithm has not converged. Whether this is concerning or not needs to be assessed case by case. It could be that the likelihood is flat in the region of the maximum, so the algorithm needs more iterations to converge; or that the data are sparse and there is not sufficient information; or… the list goes on. First, you should use verbose = T in control and monitor how the likelihood changes. If the change in the objective function (and in the parameter values) is small when using max_iter = 500 or 1000, then the message shouldn't be a source of concern.

In your code you have *decreased* the tolerance from 1e-05 to 1e-10, making the criterion stricter and convergence difficult to achieve even in 1000 iterations (increasing the tolerance means 'be more tolerant', that is, use a number for LP_tol_ll larger than the value used previously). Try max_iter = 1000 with LP_tol_ll = 1e-05, 1e-04 or 1e-03.

It is an obvious consequence that with stricter criteria the algorithm takes a long time, and the optimization is then repeated in all bootstrap samples (which is how summary calculates the standard errors of the coefficients).
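As a minimal sketch of the corrected settings (the model below mirrors the call in the question; dat, subjectID and the covariates are placeholders for your own data):

```r
## Looser tolerance and more iterations: LP_tol_ll = 1e-4 is *larger*
## than the previous 1e-5, so the stopping criterion is easier to meet.
ctrl <- list(LP_max_iter = 1000, LP_tol_ll = 1e-4, verbose = TRUE)

## The refitted call would then look like (requires the lqmm package):
## fit <- lqmm(y ~ x + covariates, random = ~ 1, group = subjectID,
##             iota = c(0.25, 0.50, 0.75), data = dat, control = ctrl)
```

If summary remains slow, reducing the number of bootstrap replications (if your version of summary allows passing it) will cut computing time at the price of noisier standard errors.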

Query: Stata package?

I found some slides on the internet about the lqmm package for R, where it was stated that you intended to develop it for Stata as well. I am writing to ask whether you have completed the lqmm package for Stata, as I cannot find it, or whether you intend to in the near future.

To-do lists and wishful thinking lists are sometimes merged together. Maybe there will be a Stata package, but the project has not progressed.

Query: Fixed or random?

I have been exploring the use of lqmm() for panel data quantile analysis. In my attempt to understand the procedure I ran into some questions. For a given panel data quantile regression problem with fixed effects (see e.g. below), is it possible to make the lqmm() output exactly (or at least closely) match the output from rqpd()? [rqpd, as I understand it, implements the work of Koenker (2004).]

The intuition that L1-norm penalized fixed effects models have a strict relationship with LQMMs was suggested by Geraci and Bottai (2007, p. 146). For lambda equal to the ratio of the scale parameters, the estimates of the median regression coefficients should be similar. However, this says nothing about the predicted random effects and the estimated fixed effects for the clusters/subjects, which I would not expect to be necessarily close since LQMM is based on best linear prediction (Geraci and Bottai, 2013). Also, it is not clear what happens when estimating other quantiles.
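For reference, a single-$\tau$ version of the penalized panel estimator the question refers to (Koenker, 2004) minimizes, with $\rho_\tau$ the check function and $\alpha_i$ the subject fixed effects,

$$\min_{\alpha, \beta}\; \sum_{i=1}^{M} \sum_{j=1}^{n_i} \rho_\tau\!\left( y_{ij} - \alpha_i - x_{ij}^{\top} \beta \right) \;+\; \lambda \sum_{i=1}^{M} |\alpha_i|,$$

so larger $\lambda$ shrinks the $\alpha_i$ toward zero, which is the sense in which $\lambda$ plays the role of the ratio of the scale parameters mentioned above.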

I doubt that the comparison between fixed and random effects QR models can be done easily outside the median case and for parameters other than the population coefficients.

Query: Instrumental variables and weights

I am interested in using lqmm for an analysis; however, I have an instrumental variable that I would like to use to correct for potential biases. Abadie, Angrist, and Imbens (2002) introduced a set of weights for quantile regression based upon the instrumental variable. Frolich and Melly (2008, 2010) extended the approach to a modified set of weights based upon the instrumental variables that is more applicable to an unconditional model that does not include any variable other than the treatment/exposure variable of interest. My question is: would the weights be applicable in lqmm, such that the instrumental variable approach will recover the true effects (under the standard assumptions for instrumental variables)?

I don’t have an answer to your question as I haven’t yet had the opportunity to think about how specific methods commonly applied in a regression setting would apply to lqmm. However, the technical description of how weights are used in lqmm might give you a starting point. Equations 5 and 6 (Geraci and Bottai, 2013) define the integrated log-likelihood. This is not a ‘true’ likelihood since the asymmetric Laplace is used for computational purposes to estimate the fixed effects $\theta_x$. The assumption of normal or double-exponential random effects corresponds to using an $L_2$ or $L_1$ penalty. The weights vector specified in lqmm needs to be of the same length as $y$, but it is assumed that the weight is constant within each cluster.

```r
# Test example
set.seed(123)
M <- 50
n <- 10
test <- data.frame(x = runif(n * M, 0, 1), group = rep(1:M, each = n))
test$y <- 10 * test$x + rep(rnorm(M, 0, 2), each = n) + rchisq(n * M, 3)
w <- rep(runif(M), each = n)

fit.lqmm <- lqmm(fixed = y ~ x, random = ~ 1, group = group, data = test,
                 weights = w, iota = 0.5, nK = 11, type = "normal")
```

In the algorithm, $w_i$ multiplies the *integrated* log-likelihood (equations 5 and 6), i.e. $\sum_{i=1}^{M} w_{i} \log L_{i}$. These weights are basically used as frequencies to simplify bootstrap calculations (I also suggest using weights in equation 13 for meta-analysis).
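A toy base-R illustration of the frequency interpretation (the normal log-density here is just a stand-in for the cluster log-likelihood $\log L_i$): weighting cluster $i$ by an integer $w_i$ gives the same total log-likelihood as replicating that cluster $w_i$ times.

```r
set.seed(1)
M <- 5; n <- 4
y <- matrix(rnorm(M * n), nrow = M)        # M clusters, n observations each
logL_i <- rowSums(dnorm(y, log = TRUE))    # stand-in cluster log-likelihoods
w <- c(1, 2, 1, 3, 1)                      # integer frequency weights

weighted   <- sum(w * logL_i)              # sum_i w_i * log L_i
replicated <- sum(rep(logL_i, times = w))  # clusters replicated w_i times

all.equal(weighted, replicated)            # TRUE
```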

Query: Optimization algorithms

I have attached a small sample dataset and R script to help understand the questions.

We found that the results of the rq method from the quantreg package are significantly different from the lqm method. Is this to be expected based simply on the fact that both packages use a different optimization routine?

I suppose you sent me a representative sample. If this is the case, it’s quite apparent that there’s a problem of quantile crossing at higher quantiles. So either the model is misspecified or the data are too sparse at higher values of $X$. I’m not surprised that lqm and rq give different results for some quantiles as they use different optimization approaches. When the solution is not unique, even br (Barrodale-Roberts) and fn (Frisch-Newton) will give different answers.
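A minimal base-R sketch of the kind of check meant here (the fitted values are fabricated for illustration): with rows as observations and columns ordered by increasing quantile level, the fitted quantiles should be non-decreasing across each row, and any violation signals crossing.

```r
## Fabricated fitted values at taus 0.1, 0.5, 0.9 (columns in increasing
## order of tau); the third row exhibits quantile crossing.
pred <- cbind(q10 = c(1.0, 2.0, 3.0),
              q50 = c(2.0, 3.0, 2.5),
              q90 = c(3.0, 4.0, 2.8))

## A row "crosses" if its fitted quantiles are not monotone in tau
crossed <- apply(pred, 1, function(z) any(diff(z) < 0))
which(crossed)   # rows where the fitted quantiles cross
```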

Query: Scale parameter

What is the interpretation of the scale parameter $\sigma$ reported by lqmm?

The short answer is that the scale parameter $\sigma$ is not relevant when interpreting the output of lqmm. The parameters of interest are the fixed effects $\theta_{x}$ and the covariance matrix of the random effects $\Psi$ (i.e., $\sigma$ is a nuisance). The long answer can be found in Geraci and Bottai (2007, 2013). A possible interpretation of $\sigma$ is related to the shrinkage factor of the random effects (2007, p.146). In general, $\sigma$ is related to the residual variability (2013, p. 12) and in specific conditions can be used in a likelihood ratio test (Koenker, Quantile Regression, 2005, p. 92).
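For context, the working likelihood in which $\sigma$ appears is built from the asymmetric Laplace density (standard form, with $\rho_\tau$ the check function):

$$p(y \mid \mu, \sigma, \tau) \;=\; \frac{\tau(1-\tau)}{\sigma}\, \exp\!\left\{ -\rho_\tau\!\left( \frac{y-\mu}{\sigma} \right) \right\}, \qquad \rho_\tau(u) = u\,\{\tau - I(u < 0)\},$$

so $\sigma$ enters only as a residual scale: it affects the value of the working likelihood but not the location $\mu$ being estimated, which is why it is treated as a nuisance parameter.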