1. Exquisite Tweets from @ryxcommar, @yversary, @quantian1, @ColinJMcAuliffe, @RandallSPQR, @AlanFeder

Collected by Woody_WongE

    This is my ode to linear regression, which doesn't get enough love.

    It should be a useful and pragmatic educational complement for people who didn't learn linear regression in some grad school stats/ML course, or just people looking to brush up. ryxcommar.com/2019/09/06/som…

    ryxcommar

👻 p = 0.06 👻

    For example, did you know you can use simple linear regression to estimate all of these "nonlinear" models?

    ryxcommar

👻 p = 0.06 👻
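(The code screenshots from the original tweet aren't preserved in this archive. The idea can be sketched in a few lines of numpy, with made-up data and coefficients: OLS only requires the model to be linear in the *coefficients*, not in x.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 200)
y = 2.0 + 0.5 * x**2 + rng.normal(0, 1.0, x.size)  # a "nonlinear" (quadratic) signal

# Fixed transformations of x are perfectly valid regressors,
# because the model stays linear in beta:
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta should land near the true values (2.0, 0.0, 0.5)
```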

    So, which one of these is better? Gonna have to read the blog post to find out the answer.

    ryxcommar

👻 p = 0.06 👻

  2. All functions can be expanded as a Taylor series therefore all models are linear QED

    yversary

    yarrriv
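(The joke has a kernel of truth: a truncated Taylor expansion is a polynomial, and a polynomial is linear in its coefficients, so OLS can approximate a smooth function to any chosen order. A throwaway illustration, with the function and degree picked arbitrarily:)

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 400)
y = np.sin(x)  # noise-free, just to show the approximation

# Degree-5 "Taylor-style" polynomial basis; the model is linear in beta
X = np.column_stack([x**k for k in range(6)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
max_err = np.abs(X @ beta - y).max()  # small over the whole interval
```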

  3. ryxcommar

    πŸ‘» p = 0.06 πŸ‘»

  4. do not do linear regressions on nonlinearly transformed data and expect good results, this is legal advice

    quantian1

    Quantian
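(To make the warning concrete, a minimal numpy sketch with invented numbers: if the noise is additive in real space, taking logs both distorts the error structure and silently throws away non-positive observations.)

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1, 5, 500)
y = np.exp(1.0 + 0.5 * x) + rng.normal(0, 5.0, x.size)  # additive real-space noise

# Taking logs assumes *multiplicative* noise, and log() forces us to
# drop non-positive observations, which biases the fit:
keep = y > 0
X = np.column_stack([np.ones(keep.sum()), x[keep]])
coef, *_ = np.linalg.lstsq(X, np.log(y[keep]), rcond=None)
n_dropped = (~keep).sum()  # observations the log transform destroyed
```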

  5. No regression should ever be left unpenalized

    (here's why that's the case for inference and not just prediction)

    austinrochford.com/posts/2013-11-…

    ColinJMcAuliffe

    Colin McAuliffe
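(For readers who don't follow the link: the post argues for penalization. A minimal closed-form ridge sketch, with toy data and an arbitrarily chosen λ, shows the shrinkage effect under near-collinearity:)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, n)          # nearly collinear copy of x1
y = x1 + x2 + rng.normal(0, 1.0, n)

X = np.column_stack([x1, x2])
lam = 1.0
ols = np.linalg.solve(X.T @ X, X.T @ y)   # unstable: X'X is near-singular
ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
# the ridge coefficients are shrunk relative to OLS
```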

Wow, this blew up. I see all the feedback in the comments, and also from someone who DMed me to say nobody has done linear regression in matlab in the last 10 years (matlab over the last 10 years? the jokes write themselves). It's all appreciated. I'll get to it soon!

    ryxcommar

👻 p = 0.06 👻

    I'm not saying these approaches are better than NLS, but you clearly didn't do it right from the OLS perspective:

    ryxcommar

👻 p = 0.06 👻

  7. Fantastic Post. I thought you might go there with Frisch-Waugh but you should add a section on AR and ARMA models. Maybe a second post.

    RandallSPQR

Randall 🌍💻🐍

  8. AR models are a whole "thing," could spend forever talking about them.

    ryxcommar

👻 p = 0.06 👻

    the bayesians are personally attacking me in my mentions

    ryxcommar

👻 p = 0.06 👻

I'll admit that the first one is a decent fit even if it gives spurious results for large/small x, but the second one assumes you know the exact functional form of the data already; we're fitting y = A x^b exp(c/x) and you gave it the exact correct values in advance.

    quantian1

    Quantian

ok, so I'll admit I actually did it wrong (I used semi-log instead of log-linear), and you can get an exact fit with OLS. So actually, OLS > NLS if you know the functional form. Sorry buddy.

    ryxcommar

👻 p = 0.06 👻
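(The corrected regression can be sketched as follows, with invented coefficients and noise level: taking logs of y = A x^b exp(c/x) gives log y = log A + b·log x + c·(1/x), which is linear in the parameters (log A, b, c).)

```python
import numpy as np

rng = np.random.default_rng(3)
A, b, c = 2.0, 1.5, -0.7                     # made-up "true" values
x = np.linspace(0.5, 5, 300)
y = A * x**b * np.exp(c / x) * np.exp(rng.normal(0, 0.05, x.size))

# log y = log A + b*log(x) + c*(1/x): linear in the parameters
X = np.column_stack([np.ones_like(x), np.log(x), 1 / x])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
A_hat, b_hat, c_hat = np.exp(coef[0]), coef[1], coef[2]
```

Note this only recovers the parameters cleanly because the simulated noise is multiplicative (log-normal), which is exactly the objection raised downthread.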

You fed it the correct coefficients in advance! Of course it performs well; all your code does is regress y against a*f(x) and conclude that a = 1 fits perfectly.

    quantian1

    Quantian

  12. sigh. here. I added your a,b,c to the model. note that a is an intercept so you don't need to explicitly include it. Same story. This is exactly the same as the NLS version you provided.

    ryxcommar

👻 p = 0.06 👻

  13. Except your measurement errors are in log space not real space so they can’t go negative, which is cheating.

    quantian1

    Quantian
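(The distinction is easy to see by simulation, with arbitrary numbers: additive real-space errors can push observations below zero, while log-space, i.e. multiplicative, errors never can.)

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.full(10_000, 5.0)                                  # a positive "true" signal
additive = mu + rng.normal(0, 3.0, mu.size)                # real-space errors: can go negative
multiplicative = mu * np.exp(rng.normal(0, 1.0, mu.size))  # log-space errors: strictly positive
```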

Exactly. That section on causal inference where BLUE > prediction is a huge 'thing', but you nailed it in ~250 words.

    RandallSPQR

Randall 🌍💻🐍

  15. Thanks! But I don't know how to explain all the fussy stuff that comes up once you start working with AR.

Also the Bayesians are attacking me for that section and I'mma be honest, I don't know how to respond to their critiques. ¯\_(ツ)_/¯

    ryxcommar

👻 p = 0.06 👻

  16. Do you consider GLMs linear models or non-linear models? I remember playing with logging/exponentiating variables vs. using a log-link/Poisson GLM, and preferring the glm...

    AlanFeder

    AlanFeder

  17. they're linear models but with other things going on. Logit being a good example of that. I can't make any guarantees about your error terms though if you transform your variables.

    ryxcommar

👻 p = 0.06 👻
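(As a footnote on the GLM question: the predictor in a GLM is still linear, η = Xβ; only the link function is nonlinear. A bare-bones Poisson/log-link fit by Newton's method (IRLS) makes that concrete; the data and coefficients below are simulated/invented, not from the thread.)

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 2, 500)
X = np.column_stack([np.ones_like(x), x])
beta_true = np.array([0.5, 1.0])                 # invented for the simulation
y = rng.poisson(np.exp(X @ beta_true))

# Warm start from OLS on log(y+1), then IRLS. The "linear" in GLM is the
# predictor X @ beta; for Poisson with log link, Var(y) = mean, so the
# working weights are mu itself.
beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]
for _ in range(25):
    mu = np.exp(X @ beta)
    beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
```

This is the sense in which a log-link GLM differs from logging y and running OLS: the GLM models log E[y] with errors in real space, while logged OLS models E[log y] with errors in log space.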