Mixing of model[["coefficients"]] and coef(model) #91

Closed
lukesonnet opened this Issue Mar 19, 2018 · 2 comments


lukesonnet commented Mar 19, 2018

Please specify whether your issue is about:

  • a possible bug
  • a question about package functionality
  • a suggested code or documentation change, improvement to the code, or feature request

Hi @leeper. Thanks again for your work on this. In general, I think expecting "coefficients" to be an element of the model object should be deprecated in favor of using S3 methods like coef(). You have done this in some places in your package (since the last CRAN submission).

However, in a crucial location you still overwrite model[["coefficients"]] and then expect coef(model) to extract that object. For example, see the following lines in get_effect_variances.R:

FUN <- gradient_factory(data = data,
                        model = model,
                        variables = variables,
                        type = type,
                        weights = weights,
                        eps = eps,
                        varslist = varslist,
                        ...)
# get jacobian
jacobian <- jacobian(FUN, coef(model)[names(coef(model)) %in% c("(Intercept)", colnames(vcov))], weights = weights, eps = eps)

Now, in gradient_factory.default, you do the following:

model[["coefficients"]][names(coefs)] <- coefs

So while inside the gradient factory you think you are clobbering model[["coefficients"]], the very next step in get_effect_variances() calls coef(model), which won't pick up those new coefficients for any class whose coef() method does not simply return model$coefficients.
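To make the failure mode concrete, here is a minimal, purely hypothetical example (the class and coef() method below are invented for illustration and are not part of margins):

# Hypothetical class whose coef() method does not read object$coefficients
fit <- list(coefficients = c(`(Intercept)` = 1, x = 2),
            beta = c(`(Intercept)` = 1, x = 2))
class(fit) <- "myclass"
coef.myclass <- function(object, ...) object$beta

# What gradient_factory.default effectively does:
fit[["coefficients"]]["x"] <- 99

# But downstream code that relies on coef() never sees the change:
coef(fit)["x"]  # still 2, not 99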

Owner

leeper commented Apr 2, 2018

This is a good point but one I need to think about a bit more. The challenge is not all model classes implement a coef() method and even more, understandably, do not implement a coef<-() method. I suspect the best solution is to write a new generic (something like set_coef()) and then test that it behaves correctly on all classes.
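For reference, a rough sketch of what such a generic might look like (this is only an illustration of the idea, not the package's eventual implementation):

# Illustrative sketch of a coefficient-setting generic
set_coef <- function(model, coefs, ...) {
  UseMethod("set_coef")
}

# Default method: assume the estimates live in model[["coefficients"]]
set_coef.default <- function(model, coefs, ...) {
  model[["coefficients"]][names(coefs)] <- coefs
  model
}

# Classes that store coefficients elsewhere would get their own methods,
# e.g. set_coef.betareg, set_coef.polr, and so on.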

Owner

leeper commented May 11, 2018

Thanks, again, for raising this issue. I've made two changes in response:

  1. The addition of a reset_coefs() generic with methods that modify the relevant part of the object, which is now used inside gradient_factory().
  2. The addition of a test suite that assesses both that reset_coefs() narrowly succeeds in modifying the model object (such that the coef() method returns what was put into the object) and - more importantly - that the predictions that come out of the modified object are correct.

This should ensure that the clobbering at least produces the correct behavior. I can imagine there might be better ways to do this, but the tests will at least ensure that behavior is correct.
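A test along the following lines (written here with testthat and lm() purely as an illustration; the package's actual tests may differ, and the behavior of reset_coefs() on lm objects is assumed) would exercise both properties for a given class:

library(testthat)

test_that("reset_coefs() round-trips through coef() and changes predictions", {
  fit <- lm(mpg ~ wt, data = mtcars)
  new_coefs <- c(`(Intercept)` = 30, wt = -4)

  fit2 <- reset_coefs(fit, new_coefs)

  # coef() must return exactly what was put into the object
  expect_equal(coef(fit2)[names(new_coefs)], new_coefs)

  # predictions from the modified object must reflect the new coefficients
  expect_equal(unname(predict(fit2, newdata = mtcars[1:3, ])),
               30 - 4 * mtcars$wt[1:3])
})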

leeper closed this Aug 1, 2018
