Per the ask from @ccao-jardine, it could be fun to try to make the linear model really good by testing some polynomial terms and removing some categoricals. Let's give the recipe a good once-over.
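A minimal sketch of the kind of changes proposed, shown in Python with scikit-learn rather than the pipeline's actual recipe. The feature names (`sqft`, `age`, `town`) and the synthetic data are hypothetical, purely to illustrate adding polynomial terms for numerics while keeping only one categorical:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures

# Hypothetical features; the real recipe's columns are not shown here.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "sqft": rng.normal(size=200),
    "age": rng.normal(size=200),
    "town": rng.choice(["a", "b", "c"], size=200),
})
# Synthetic target with a quadratic term the polynomial step can capture.
y = 0.5 * X["sqft"] ** 2 + X["age"] + (X["town"] == "b").astype(float)

# Degree-2 polynomial terms for the numerics; keep one categorical,
# drop everything else (the "removing some categoricals" idea).
pre = ColumnTransformer([
    ("poly", PolynomialFeatures(degree=2, include_bias=False), ["sqft", "age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["town"]),
])
model = Pipeline([("pre", pre), ("lm", LinearRegression())]).fit(X, y)
print(f"in-sample R^2: {model.score(X, y):.3f}")
```

Because the synthetic target is an exact degree-2 function of the kept features, the expanded linear model fits it essentially perfectly; the point is only the recipe shape, not the score.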
> it could be fun to try make the linear model really good
Is this actually the goal? If the aim is prediction, there is little point in using a linear model. Is there an assigned inference task? If so, please open an inferential-model issue and I will take it.
For inference, I also highly recommend a Bayesian approach like lace: https://github.com/promised-ai/lace. Joint priors will be critical in this housing context, and efforts to linearize this model would verge on the procrustean.
Still, it is not clear to me why linear models would be pursued in the first place: fiddling with feature engineering does not move the needle for prediction, and I have not seen any inferential issues filed.
The linear model included in the pipeline is purely a reference: it is only used for comparison against the boosted tree model. Improving its specification is just a low-priority training task for our junior employees.
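The reference-baseline idea can be illustrated with a toy comparison on synthetic data (this is not the actual pipeline or its metrics; model choices and data are assumptions for illustration). A target with a nonlinear term shows why the linear model is kept only as a yardstick for the boosted tree:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data with a quadratic term a plain linear model cannot capture.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
y = X[:, 0] ** 2 + X[:, 2] + rng.normal(scale=0.1, size=300)

# Cross-validated R^2 for the reference linear model vs. a boosted tree.
lin_score = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
gb_score = cross_val_score(
    GradientBoostingRegressor(random_state=0), X, y, cv=5, scoring="r2"
).mean()
print(f"linear baseline R^2: {lin_score:.2f}")
print(f"boosted tree R^2:    {gb_score:.2f}")
```

On data like this the boosted tree should clearly outperform the linear baseline, which is exactly the gap the reference model exists to measure.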