# Statistics Flashcards

Cards: 88

Our statistics flashcards focus on measures of central tendency and inferential statistics and explain data types such as discrete, continuous, and quantitative. Master essential terms like ratio, interval, population, and sample, establishing a solid base for applying basic statistical concepts in math and science. Develop a core understanding of statistical analysis, employing fundamental principles and methods to analyze data. We offer a simple, effective way to break down complex concepts into manageable insights.

As you advance, you'll grasp the significance of random and representative sampling for accurately reflecting a population's true nature. The flashcards also deepen your understanding of variables and data types, enabling precise data categorization and measurement. This knowledge helps you distinguish between categorical, numerical, discrete, and continuous data and understand the nominal and ratio measurement levels.

The deck delves into constructing and interpreting frequency distribution tables and absolute, relative, and cumulative frequencies. It also covers visual data representations, including Pareto diagrams and histograms, and teaches you to analyze relationships through scatter plots and cross tables.

Next, the flashcards explore measures of central tendency like the mean, median, and mode and examine how skewness affects them. They also scrutinize measures of variability, including variance and standard deviation, to help you comprehend data dispersion. Understanding the coefficient of variation is critical for comparing variability across datasets. Diving into correlations, you'll learn to assess the strength and direction of relationships between variables using the correlation coefficient and covariance. The deck also addresses distributions, particularly the normal distribution and its importance in the central limit theorem and sampling distributions.

Additionally, you'll learn hypothesis testing, including how to formulate null and alternative hypotheses, interpret test results via p-values and error types, and understand a test's power and the impact of sample size on it. Upon completing the deck, you'll have mastered key statistical concepts and be able to perform data analysis and confidently interpret results. This essential knowledge applies across fields, from business to science, facilitating informed decisions through quantitative data analysis. Learn statistics from scratch with this wealth of valuable material. Don't delay: begin studying now!

## Explore the Flashcards:

1 of 88

Population

The entire set of items or individuals of interest in a study. Denoted by N.

2 of 88

Sample

A subset selected from the larger population. Denoted by n.

3 of 88

Parameter

A numerical value that describes a characteristic of the entire population. It is the counterpart of a statistic.

4 of 88

Statistic

A numerical value that describes a characteristic of a sample and is used to estimate a population parameter. It is the counterpart of a parameter.

5 of 88

Random Sample

A sample in which every member of the population has an equal chance of being selected.

6 of 88

Representative Sample

A sample that accurately mirrors the characteristics of the larger population.

7 of 88

Variable

A characteristic or attribute that can take on different values or categories, e.g. height, occupation, age.

8 of 88

Type of Data

The classification of data based on its nature. There are two types of data - categorical and numerical.

9 of 88

Categorical Data

Data that represents categories or labels without inherent numerical value.

10 of 88

Numerical Data

Data that represents quantifiable amounts or values. Can be further classified into discrete and continuous.

11 of 88

Discrete Data

Numerical data that can only take on specific, distinct values. Opposite of continuous.

12 of 88

Continuous Data

Numerical data that can take any value within a range, making the possible values impossible to count. Opposite of discrete.

13 of 88

Levels of Measurement

A way to classify data. There are two groups of levels of measurement - qualitative and quantitative.

14 of 88

Qualitative Data

A subgroup of levels of measurement. There are two types of qualitative data - nominal and ordinal.

15 of 88

Quantitative Data

A subgroup of levels of measurement. There are two types of quantitative data - ratio and interval.

16 of 88

Nominal Level of Measurement

Nominal level of measurement refers to variables that describe different categories or names. These categories cannot be put in any specific order.

17 of 88

Ordinal Level of Measurement

Ordinal level of measurement refers to variables that describe different categories, and they can be ordered.

18 of 88

Ratio Level of Measurement

Ratio level of measurement refers to numerical variables with a unique and unambiguous zero point, whether whole numbers or fractions. For example, temperature in Kelvin is a ratio variable.

19 of 88

Interval Level of Measurement

An interval variable represents a number or an interval. There is no unique and unambiguous zero point. For example, temperatures in Celsius and Fahrenheit are interval variables.

20 of 88

Frequency Distribution Table

A table showing the frequency of each value of a variable.

21 of 88

Frequency

The number of times a particular value or category occurs in a dataset.

22 of 88

Absolute Frequency

Measures the number of occurrences of a variable.

23 of 88

Relative Frequency

Measures the relative number of occurrences of a variable. Usually expressed in percentages.

24 of 88

Cumulative Frequency

The sum of the frequencies of all members in a dataset up to a certain point. When relative frequencies are accumulated, the cumulative frequency of all members is 100% or 1.
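The three frequency types on the cards above can be sketched in plain Python; the color dataset below is purely illustrative:

```python
from collections import Counter

data = ["red", "blue", "red", "green", "red", "blue"]
n = len(data)

absolute = Counter(data)                            # occurrences per value
relative = {k: v / n for k, v in absolute.items()}  # share of the total

# cumulative relative frequency, accumulated from the most frequent value down
cumulative, running = {}, 0.0
for value, count in absolute.most_common():
    running += count / n
    cumulative[value] = running

print(absolute["red"], relative["red"])  # 3 0.5
print(round(cumulative["green"], 2))     # 1.0 (the last cumulative entry)
```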

25 of 88

Pareto Diagram

A type of bar chart where frequencies are shown in descending order. There is an additional line on the chart, showing the cumulative frequency.

26 of 88

Histogram

A type of bar chart that represents numerical data. It is divided into intervals (or bins) that are not overlapping and span from the first observation to the last. The intervals (bins) are adjacent - where one stops, the other starts.

27 of 88

Cross Table (Contingency Table)

A table in a matrix format that displays the joint frequency distribution of two variables.

28 of 88

Bins (Histogram)

The intervals that are represented in a histogram.

29 of 88

Scatter Plot

A plot that represents numerical data. Graphically, each observation looks like a point on the scatter plot.

30 of 88

Measures of Central Tendency

Measures of central tendency are statistical values that represent the center or typical value of a dataset. The most common are the mean, median and mode.

31 of 88

Mean

The arithmetic average of all data points in a dataset.

32 of 88

Median

The middle number in a dataset sorted in ascending or descending order. With an even number of values, it is the average of the two middle numbers.

33 of 88

Mode

The value that occurs most frequently in the dataset. A dataset can have one mode (unimodal), more than one mode (multimodal), or no mode at all.
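The three measures of central tendency can be sketched with Python's standard `statistics` module (the numbers are illustrative):

```python
import statistics

data = [2, 3, 3, 5, 7, 10]

mean = statistics.mean(data)      # arithmetic average: 30 / 6
median = statistics.median(data)  # average of the two middle values here
mode = statistics.mode(data)      # most frequent value

print(mean, median, mode)  # 5 4.0 3
```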

34 of 88

Skewness

A measure which indicates whether the observations in a dataset are concentrated on one side of the distribution.

35 of 88

Sample Formula

A formula that is calculated on a sample. The value obtained is a statistic.

36 of 88

Population Formula

A formula that is calculated on a population. The value obtained is a parameter.

37 of 88

Measures of Variability

Measures that describe the data through the level of dispersion (variability). The most common ones are variance and standard deviation.

38 of 88

Variance

Measures the dispersion of the dataset around its mean. It is measured in units squared. Denoted $$\sigma^2$$ for a population and $$s^2$$ for a sample.

39 of 88

Standard Deviation

Measures the dispersion of the dataset around its mean. It is measured in original units. Denoted $$\sigma$$ for a population and $$s$$ for a sample.

40 of 88

Coefficient of Variation

Measures the dispersion of the dataset relative to its mean, computed as the standard deviation divided by the mean. The coefficient of variation is unitless. Therefore, it is useful when comparing the dispersion across different datasets that have different units of measurement.
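The three dispersion measures from these cards, sketched with the `statistics` module on an illustrative sample:

```python
import statistics

data = [4, 8, 6, 5, 3, 7]

var = statistics.variance(data)   # sample variance s^2, in units squared
std = statistics.stdev(data)      # sample standard deviation s, original units
cv = std / statistics.mean(data)  # coefficient of variation: unitless ratio

print(var, round(std, 3), round(cv, 3))  # 3.5 1.871 0.34
```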

41 of 88

Univariate Measure

A measure that refers to a single variable, summarizing one characteristic of a dataset at a time. Opposite of a multivariate measure.

42 of 88

Multivariate Measure

A measure which refers to multiple variables.

43 of 88

Covariance

A statistical measure that quantifies the degree to which two random variables in a dataset change together. Usually, because of its scale of measurement, covariance is not directly interpretable.

44 of 88

Linear Correlation Coefficient

A measure of the strength and direction of a linear relationship between two variables. Very useful for direct interpretation, as it takes on values in [-1, 1]. Denoted $$\rho_{xy}$$ for a population and $$r_{xy}$$ for a sample.

45 of 88

Correlation

A statistical measure that describes the extent to which two variables change together. There are several ways to compute it, the most common being the linear correlation coefficient.
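Covariance and the linear correlation coefficient can be computed directly from their definitions in plain Python (the data below is illustrative):

```python
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# sample covariance: average co-movement around the means (n - 1 denominator)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# correlation coefficient r: covariance rescaled into [-1, 1]
sx = math.sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
sy = math.sqrt(sum((b - my) ** 2 for b in y) / (n - 1))
r = cov / (sx * sy)

print(cov, round(r, 3))  # 1.5 0.775
```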

46 of 88

Distribution

A function that shows the possible values for a variable and the probability of their occurrence.

47 of 88

Bell Curve

A common name for the normal distribution.

48 of 88

Normal Distribution

A continuous, symmetric probability distribution that is completely described by its mean and its variance. Also known as the Gaussian distribution or bell curve.

49 of 88

Gaussian Distribution

The original name of the normal distribution. Named after the famous mathematician Gauss, who was the first to explore it through his work on the Gaussian function.

50 of 88

Standard Normal Distribution

A normal distribution with a mean of 0 and a standard deviation of 1.

51 of 88

z-statistic

A standardized test statistic that follows the standard normal distribution. It measures how many standard errors an observed statistic lies from its hypothesized value.

52 of 88

Standardized Variable

A variable which has been standardized using the z-score formula - by first subtracting the mean and then dividing by the standard deviation.
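Standardization as described on this card, sketched in Python on illustrative data; the resulting variable always has mean 0 and standard deviation 1:

```python
import statistics

data = [10, 12, 14, 16, 18]

mu = statistics.mean(data)       # mean of the dataset
sigma = statistics.pstdev(data)  # population standard deviation

# z-score: subtract the mean, then divide by the standard deviation
z = [(x - mu) / sigma for x in data]

print(round(statistics.mean(z), 10), round(statistics.pstdev(z), 10))  # 0.0 1.0
```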

53 of 88

What does the Central Limit Theorem state?

The sampling distribution will approximate a normal distribution as the sample size increases. In general, a sample of at least 30 is often considered sufficient for the theorem to hold.
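A small simulation, under illustrative assumptions (a uniform population, samples of size 30), showing the theorem at work: the sample means cluster around the population mean with spread close to sigma / sqrt(n):

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def sample_mean(n):
    # one sample of size n from a uniform (decidedly non-normal) population
    return statistics.mean(random.random() for _ in range(n))

means = [sample_mean(30) for _ in range(2000)]

# population mean is 0.5; sigma / sqrt(30) = sqrt(1/12) / sqrt(30) ≈ 0.053
print(round(statistics.mean(means), 2))   # 0.5
print(round(statistics.stdev(means), 2))  # 0.05
```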

54 of 88

Sampling Distribution

The probability distribution of a given statistic (like the mean or variance) based on all possible samples of a fixed size from a population.

55 of 88

Standard Error

The standard deviation of the sampling distribution, which reflects the variability of sample means. It accounts for the sample size, with larger samples generally having smaller standard errors.

56 of 88

Estimator

A function or rule according to which we make estimations of a population parameter.

57 of 88

Estimate

The particular value obtained through an estimator.

58 of 88

Bias

The difference between an estimator's expected value and the true population parameter. An unbiased estimator has an expected value equal to the population parameter.

59 of 88

Efficiency (in Estimators)

Refers to an estimator's variability. An efficient estimator has minimal variability compared to others.

60 of 88

Point Estimator

A function or a rule, according to which we make estimations that will result in a single number.

61 of 88

Point Estimate

The specific numerical value obtained from a point estimator.

62 of 88

Interval Estimator

A function or a rule, according to which we make estimations that will result in an interval. This deck considers only confidence intervals; another kind, not covered here, is the credible interval (Bayesian statistics).

63 of 88

Interval Estimate

The specific interval obtained from an interval estimator - for example, a confidence interval computed from a given sample.

64 of 88

Confidence Interval

A confidence interval is the range within which you expect the population parameter to lie. The probability that it contains the parameter equals the level of confidence, 1 - α.

65 of 88

Reliability Factor

The critical value (taken from the z-table, t-table, etc.) used in constructing a confidence interval. It is multiplied by the standard error to obtain the margin of error.

66 of 88

Level of Confidence

The probability that the population parameter lies within a given confidence interval. Denoted 1 - α. Example: 95% confidence level means that in 95% of the cases, the population parameter will fall into the specified interval.
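The pieces on the last few cards (point estimate, standard error, reliability factor, level of confidence) combine into a confidence interval. A minimal sketch, assuming a 95% level and the z reliability factor 1.96; the sample is illustrative:

```python
import math
import statistics

sample = [102, 98, 110, 105, 95, 100, 107, 103, 99, 101]
n = len(sample)

xbar = statistics.mean(sample)                 # point estimate of the mean
se = statistics.stdev(sample) / math.sqrt(n)   # standard error

z = 1.96          # reliability factor for a 95% level of confidence (z-table)
margin = z * se   # margin of error

ci = (xbar - margin, xbar + margin)
print(tuple(round(v, 2) for v in ci))  # (99.24, 104.76)
```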

67 of 88

Critical Value

A threshold value from a statistical table (z, t, F, etc.) associated with a chosen significance level.

68 of 88

z-table

A table showing values of the Z-statistic for various probabilities under the standard normal distribution.

69 of 88

t-statistic

A statistic that is generally associated with the Student's t distribution, in the same way the z-statistic is associated with the normal distribution.

70 of 88

t-table

A table showing t-statistic values for given probabilities and degrees of freedom.

71 of 88

Degrees of Freedom

The number of values in a statistical calculation that are free to vary without violating the data's constraints.

72 of 88

Margin of Error

Half the width of a confidence interval, obtained by multiplying the reliability factor by the standard error. It quantifies the uncertainty associated with a sample estimate.

73 of 88

Hypothesis

A testable proposition or assumption about a population parameter.

74 of 88

Hypothesis Test

A procedure that uses sample data to assess whether there is enough evidence to reject a hypothesis about a population.

75 of 88

Null Hypothesis

A default hypothesis for testing. Whenever we are conducting a test, we are trying to reject the null hypothesis.

76 of 88

Alternative Hypothesis

The hypothesis that contradicts the null hypothesis. It represents the researcher's claim.

77 of 88

To Accept a Hypothesis

The statistical evidence is insufficient to reject the hypothesis, so it is treated as likely true. More precisely, we fail to reject it.

78 of 88

To Reject a Hypothesis

The statistical evidence shows that the hypothesis is likely to be false.

79 of 88

One-Tailed (One-Sided) Test

A test that examines if a parameter is greater than or less than a specified value. In a one-tailed test, the alternative hypothesis points in a single direction (strictly greater than, or strictly less than).

80 of 88

Two-Tailed (Two-Sided) Test

A test that examines if a parameter is different from a specified value. A two-tailed test considers the possibility of a difference in either direction from the null hypothesis.

81 of 88

Significance Level

The probability of rejecting the null hypothesis when it is true. Denoted α. You choose the significance level. All else equal, a lower level makes the test more stringent, reducing the chance of a Type I error at the cost of a higher chance of a Type II error.

82 of 88

Rejection Region

The part of the distribution, for which we would reject the null hypothesis.

83 of 88

Type I Error (False Positive)

Rejecting a null hypothesis that is true. The probability of committing it is α, the significance level.

84 of 88

Type II Error (False Negative)

Accepting a null hypothesis that is false. The probability of committing it is β.

85 of 88

Power of the Test

The probability of correctly rejecting a false null hypothesis (the researcher's goal). Denoted by 1 - β.

86 of 88

z-score

A value indicating how many standard deviations an element is from the mean.

87 of 88

$\mu_0$

The hypothesized value of the population mean stated in the null hypothesis.

88 of 88

p-value

The smallest significance level at which the null hypothesis can be rejected based on the observed data.
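The testing vocabulary in this deck can be tied together in one sketch: a two-tailed test of H0: mu = mu_0 using the z-statistic and the standard normal CDF. The sample is illustrative; with samples this small, the t-statistic would be the stricter choice:

```python
import math
import statistics

sample = [104, 108, 97, 105, 110, 102, 99, 107, 103, 106]
mu_0 = 100  # hypothesized population mean under the null hypothesis

n = len(sample)
xbar = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # standard error

z = (xbar - mu_0) / se                        # test statistic

# two-tailed p-value: probability of a result at least this extreme under H0
cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
p_value = 2 * (1 - cdf)

alpha = 0.05  # chosen significance level
print(round(z, 2), round(p_value, 4))
print("reject H0" if p_value < alpha else "fail to reject H0")  # reject H0
```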