Re: Introduction to ANOVA, REGRESSION,
> Tomorrow I am going to take a 3-day Statistics I course, Introduction
>to ANOVA, REGRESSION, and Logistic Regression, with SAS Institute.
> The prerequisites require some SAS skills as well as ….. an undergraduate
>course in statistics covering p-values, hypothesis testing, analysis of
>variance, and regression.
> While I have some SAS skills and have used some statistical PROCs in SAS, I
>never took an undergraduate course in statistics. Instead I have tried to
>compensate with extensive reading over about a month. However, reviewing
>today what I have already read about analysis of variance, I found that,
>unlike the p-value concept, I do not clearly understand the F concept,
>the F-value, or the F ratio. What I got from my reading is that the
>F-value comes from a distribution, and once I have the F-value I can
>produce a p-value.
> Does that mean that in some cases one can produce a p-value without an
>F-value, while in other cases one cannot? If my assumption is correct,
>then …in which cases is it needed?
> So… I feel a little nervous about tomorrow…. Could anybody give me a
>short and simple explanation of the concept?
> Any help would be greatly appreciated!!!
I hate to say it, but this is not good. You really need the background
in stats so that you can follow along with everyone else. Otherwise,
you'll be unable to follow some of the material. Maybe a lot of the
material. If there is too much material that you cannot follow, then
you will have wasted the substantial cost of the training. In cases I
know of, this also ruins the training for everyone else, as the trainer
either has to go too slowly over the material to catch you up, or else
has to enforce 'no questions whatsoever until end of class' type of
guidelines that may interfere with class learning.
Here's the bottom line. You cannot have a p-value without some
probability distribution that the 'p' goes with. In linear models, that
distribution is likely to be a z, a t, a chi-squared, or an F, depending
on the hypothesis.
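To make the point concrete, here is a minimal sketch in Python (rather than SAS) of a p-value as a tail area under a reference distribution, using a z statistic as the example. The observed value 1.96 is just an illustrative number, not anything from the post:

```python
import math

# Hypothetical observed z statistic (illustrative value only).
z = 1.96

# A p-value is always a tail area under some reference distribution.
# Here the reference distribution is the standard normal, so the
# two-sided p-value is the probability of a value at least |z| away
# from 0, computed via the complementary error function.
p = math.erfc(abs(z) / math.sqrt(2))
print(round(p, 4))  # ~0.05, the familiar threshold value
```

Swap in a different reference distribution (t, chi-squared, F) and the tail-area calculation changes, but the idea does not: no distribution, no p-value.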
An F is simple: you have two independent variances (with some
extra assumptions you can look up), and you take their ratio. You
expect, under your null hypothesis, that the two variances will be
similar, so the ratio should be near 1. The farther away from 1 you
get, the farther out in the tails of the F you are, so the smaller
your p-value.
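The "ratio near 1 under the null" behavior is easy to see by simulation. The sketch below (my own illustration; the function name f_ratio and all sample sizes are made up for the example) draws two samples from the same normal distribution, so the null hypothesis holds, and forms the variance ratio many times:

```python
import random
import statistics

random.seed(42)

def f_ratio(n1=30, n2=30, mu=0.0, sigma=1.0):
    """Variance ratio of two independent samples drawn from the
    SAME normal distribution, i.e. under the null hypothesis."""
    s1 = [random.gauss(mu, sigma) for _ in range(n1)]
    s2 = [random.gauss(mu, sigma) for _ in range(n2)]
    return statistics.variance(s1) / statistics.variance(s2)

# Simulate many null-hypothesis F ratios. Their typical value
# sits close to 1; large departures from 1 are rare, which is
# exactly why a ratio far from 1 earns a small p-value.
ratios = [f_ratio() for _ in range(5000)]
print(statistics.median(ratios))  # close to 1
```

When the two variances come from groups that really do differ, the ratio drifts away from 1, out into the tail of the F distribution, and the p-value shrinks accordingly.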
David L. Cassell
3115 NW Norwood Pl.
Corvallis OR 97330
| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Re: Introduction to ANOVA, REGRESSION, | Irin later | Newsgroup comp.soft-sys.sas | 1 | 05-28-2006 07:04 PM |
| Re: Introduction to ANOVA, REGRESSION, | Kevin Roland Viel | Newsgroup comp.soft-sys.sas | 0 | 05-22-2006 04:42 PM |
| Re: Introduction to ANOVA, REGRESSION, | SAS_learner | Newsgroup comp.soft-sys.sas | 0 | 05-22-2006 04:30 PM |
| Re: Introduction to ANOVA, REGRESSION, | Arthur Tabachneck | Newsgroup comp.soft-sys.sas | 0 | 05-22-2006 04:17 PM |
| Introduction to ANOVA, REGRESSION, | Irin later | Newsgroup comp.soft-sys.sas | 0 | 05-22-2006 03:59 PM |