My own preference, when trying to interpret interactions in logistic regression, is to look at the predicted probabilities for each combination of categorical variables.
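For instance, here is a minimal sketch of that approach (the data and the variable names `a`, `b`, and `y` are invented for illustration), using statsmodels to fit a logit model with an interaction and then tabulating the predicted probability for every cell of the two categorical predictors:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: two binary categorical predictors with an interaction.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "a": rng.integers(0, 2, n),
    "b": rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.8 * df["a"] + 0.5 * df["b"] + 1.2 * df["a"] * df["b"]
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("y ~ a * b", data=df).fit(disp=False)

# Predicted probability for every combination of the categorical variables.
grid = pd.DataFrame([(a, b) for a in (0, 1) for b in (0, 1)], columns=["a", "b"])
grid["p_hat"] = model.predict(grid)
print(grid)
```

Reading off the four predicted probabilities is usually far easier than reasoning about the interaction term on the log-odds scale.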
What is a suppressor variable in multiple regression, and what might be ways to display the suppression effect visually (its mechanics or its evidence in results)? I'd like to invite everybody who has a thought to share.
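One way to see the mechanics in results (rather than in a plot) is a small simulation. In the hypothetical setup below, `x2` is nearly uncorrelated with `y` but soaks up the nuisance variance in `x1`, so adding it raises both `x1`'s coefficient and the $R^2$; all names and numbers are made up for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
t = rng.normal(size=n)              # signal that actually drives y
e = rng.normal(size=n)              # nuisance variance in x1
x1 = t + e                          # predictor contaminated by e
x2 = e + 0.1 * rng.normal(size=n)   # suppressor: tracks x1's noise, not y
y = t + rng.normal(size=n)

simple = sm.OLS(y, sm.add_constant(x1)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

# Evidence of suppression: x1's coefficient and the model R^2 both rise
# when the suppressor x2 (itself nearly uncorrelated with y) is added.
print("x1 alone: b1 = %.3f, R^2 = %.3f" % (simple.params[1], simple.rsquared))
print("x1 + x2:  b1 = %.3f, R^2 = %.3f" % (full.params[1], full.rsquared))
print("corr(x2, y) = %.3f" % np.corrcoef(x2, y)[0, 1])
```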
Multivariable regression is any regression model with more than one explanatory variable, and for this reason it is often simply known as "multiple regression". In the simple case of just one explanatory variable, it is sometimes called univariable regression. Unfortunately, multivariable regression is often mistakenly called multivariate regression, and vice versa; multivariate regression properly refers to a model with more than one outcome variable.
In some studies I have seen people use lags of the independent variables, and sometimes a lag of the outcome variable as an additional control. Can I ask what the mechanics of using lagged variables are?
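As a toy illustration of what lagging looks like in practice (the frame and column names are hypothetical), pandas' `shift` creates both kinds of lag:

```python
import pandas as pd

# Hypothetical time-series frame with an outcome y and a regressor x.
df = pd.DataFrame({
    "y": [2.0, 2.3, 2.1, 2.8, 3.0, 3.4],
    "x": [1.0, 1.2, 0.9, 1.5, 1.6, 1.9],
})

# Lag of an independent variable: x measured one period earlier.
df["x_lag1"] = df["x"].shift(1)
# Lag of the outcome used as an additional control (a dynamic model).
df["y_lag1"] = df["y"].shift(1)

# Rows lost to lagging are dropped before fitting, e.g.
# smf.ols("y ~ x_lag1 + y_lag1", data=df.dropna()).fit()
print(df)
```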
Brief Summary: Why is it more common for logistic regression (with odds ratios) to be used in cohort studies with binary outcomes, as opposed to Poisson regression (with relative risks)? Background: ...
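For context, one common alternative to logistic regression here is the "modified Poisson" approach: a Poisson GLM with a robust (sandwich) variance fit to the binary outcome, whose exponentiated coefficient is a risk ratio. A rough sketch with simulated cohort data (the variable names and the true risk ratio of 1.5 are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated cohort with a binary exposure and a common binary outcome.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({"exposed": rng.integers(0, 2, n)})
p = np.where(df["exposed"] == 1, 0.30, 0.20)   # true risk ratio = 1.5
df["event"] = rng.binomial(1, p)

# Logistic regression: exp(coef) is an odds ratio.
logit = smf.logit("event ~ exposed", data=df).fit(disp=False)
print("odds ratio:", np.exp(logit.params["exposed"]))

# Modified Poisson (log link, robust variance): exp(coef) is a risk ratio.
pois = smf.glm("event ~ exposed", data=df,
               family=sm.families.Poisson()).fit(cov_type="HC0")
print("risk ratio:", np.exp(pois.params["exposed"]))
```

With a common outcome like this, the odds ratio noticeably overstates the risk ratio, which is part of why the question matters.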
For a variable that is involved in interactions, the "main-effect" regression coefficient -- that is, the regression coefficient of the variable by itself -- is the slope of the regression surface in the direction of that variable when all other variables that interact with that variable have values of zero, and the significance test of the ...
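To see why, consider a hypothetical two-predictor model with an interaction:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2.$$

The slope in the $x_1$ direction is $\partial y / \partial x_1 = \beta_1 + \beta_3 x_2$, which reduces to the main-effect coefficient $\beta_1$ only at $x_2 = 0$. The usual $t$-test of $\beta_1$ therefore tests the $x_1$ slope at that specific point, not an overall effect of $x_1$.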
Linear regression can use the same kernels used in SVR, and SVR can also use the linear kernel. Given only the coefficients from such models, it would be impossible to distinguish between them in the general case (with SVR you might get sparse coefficients, depending on the penalization, due to the $\epsilon$-insensitive loss).
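A quick sketch of that point (simulated data, arbitrary hyperparameters): with a linear kernel and an $\epsilon$ that is small relative to the noise, the fitted coefficient vectors come out nearly identical, so nothing about the coefficients alone reveals which model produced them:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

ols = LinearRegression().fit(X, y)
svr = SVR(kernel="linear", C=10.0, epsilon=0.01).fit(X, y)

# Both coefficient vectors estimate the same linear map.
print("OLS coef:", ols.coef_)
print("SVR coef:", svr.coef_.ravel())
```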
The coefficients of an OLS regression are just simple descriptive statistics; you can compute them on any data, without having to make any assumption whatsoever, just as you could compute the mean of any dataset.
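Concretely, the coefficients are just the least-squares solution to the normal equations, computable for any numbers whatsoever; a minimal sketch with arbitrary (here uniform, unrelated) data:

```python
import numpy as np

# Arbitrary data: no distributional assumptions are needed just to
# compute the OLS coefficients, exactly as none are needed for a mean.
rng = np.random.default_rng(7)
X = rng.uniform(size=(50, 2))
y = rng.uniform(size=50)

# Least-squares solution of X_aug b = y (intercept column prepended).
X_aug = np.column_stack([np.ones(len(X)), X])
b, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
print("coefficients:", b)  # purely descriptive; assumptions matter only for inference
```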
Are there other situations where inverse regression actually outperforms? Maybe there are other loss functions it does well with, or it's more robust to violations of assumptions, or it has higher variance but lower bias. But so far it looks like inverse regression produces better point estimates, and I'm going to bootstrap my confidence intervals in any case.
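For that last step, a case-resampling (pairs) bootstrap of a regression slope might look like the following sketch; the data-generating process here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

def slope(x, y):
    # OLS slope for a single predictor (no intercept adjustment needed
    # beyond centering, which cov/var handle implicitly).
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Resample (x, y) pairs with replacement and recompute the slope each time.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = slope(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print("point estimate: %.3f, 95%% percentile CI: (%.3f, %.3f)"
      % (slope(x, y), lo, hi))
```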
Can I interpret logistic regression coefficients and their p-values even if model performance is bad?