Resolved: Sample Dataset 2 Standard Error
I don't agree with the formula used for the standard error of Sample Dataset 2. The denominator n must be the sample size (i.e. the number of observations within a sample), not the number of samples. The sample mean is supposed to get more accurate as the sample size increases, not as you pick more samples.
Therefore this question needs one more input: the sample size.
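To make the point concrete, here is a minimal simulation sketch (the population mean of 50, sigma of 10, and the 5,000 samples are all made-up numbers, not from the question): holding the number of samples fixed while growing the sample size is what tightens the sample means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed number of samples (5000), varying sample size n.
# The spread of the sample means -- the standard error -- shrinks as the
# sample size grows, not as the number of samples grows.
for n in (10, 100, 1000):
    means = [rng.normal(50, 10, n).mean() for _ in range(5000)]
    print(f"n={n:4d}  sd of means={np.std(means):.3f}  sigma/sqrt(n)={10 / np.sqrt(n):.3f}")
```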
I also do not get it. The lecture gives one meaning for the "n" in the formula, and then the practice exam asks you to use a different meaning of that "n". Was the exam actually reviewed by the lecture authors?
I think I get it now. The standard error is in fact the standard deviation of the sampling distribution. So the question here is simply asking us to compute the standard deviation of the provided data (the sample means).
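If that reading is right, the whole computation is one line; the values below are placeholders, not the actual dataset:

```python
import statistics

# Hypothetical stand-in for the sample means provided in Sample Dataset 2.
sample_means = [49.2, 51.1, 50.4, 48.7, 50.9, 49.8]

# The standard error is simply the standard deviation of these means.
# statistics.stdev uses the n-1 denominator, like Excel's STDEV.S.
standard_error = statistics.stdev(sample_means)
print(standard_error)
```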
I didn't understand it at first either; but we do agree that the "standard error" we're looking for is the standard deviation of the sample means already provided in Sample Dataset 2. So the only thing needed to get the standard error here is to calculate the standard deviation of Sample Dataset 2! Am I right, @EBE ALEX?
@Seyed Amir, you are completely right.
The standard error of the sampling distribution of the mean (which is what this data is) is the sample standard deviation (use STDEV.S rather than STDEV.P, since this is a sample) divided by the square root of the sample size n. As n increases, we are dividing by a larger number, so our standard error (a measure of uncertainty/spread) decreases.
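For anyone checking the STDEV.S vs STDEV.P distinction outside Excel, the equivalent in NumPy is the ddof argument (the data values here are made up):

```python
import numpy as np

data = np.array([49.2, 51.1, 50.4, 48.7, 50.9, 49.8])  # hypothetical values

stdev_s = data.std(ddof=1)  # n-1 denominator, equivalent to Excel's STDEV.S
stdev_p = data.std(ddof=0)  # n denominator, equivalent to Excel's STDEV.P
print(stdev_s, stdev_p)     # STDEV.S is always a bit larger on the same data
```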