Machine Learning Flashcards

Author: Ivan Kitov
Cards: 94

Are you aiming to build a career in AI and big data? If so, learn ML to take a crucial leap towards realizing that goal. Machine learning transforms our interaction with data, automates decisions, and predicts outcomes. Our machine learning flashcards are essential for newcomers to data science, analysts looking to enhance their skills, and professionals wanting a quick review. Master machine learning and confidently explore its core concepts and techniques.

You'll begin by understanding the fundamentals—differentiating supervised, unsupervised, and reinforcement learning, along with the importance of populations in research. Our machine learning flashcards cover essential statistical measures for core ML concepts—including the linear correlation coefficient, hypothesis testing, null and alternative hypotheses, and interpreting p-values. The causation versus correlation concept underscores the significance of discerning the impact of one variable on another. This is especially vital in regression analysis—a fundamental technique for predictive modeling in machine learning. You'll delve into cards dedicated to linear regression models, covering such key elements as dependent and independent variables, coefficients, and the regression equation. Advanced topics cover ANOVA, different sum of squares measures, and understanding regression tables. You'll also explore assumptions underlying regression models, like linearity and homoscedasticity, and address such issues as multicollinearity, omitted variable bias, and autocorrelation. Our machine learning flashcards include essential models like logistic regression—teaching you how to evaluate model performance with confusion matrices and accuracy metrics. They also cover cluster analysis, classification, and techniques like K-means and hierarchical clustering, offering a comprehensive guide to grouping data points and category prediction.

To learn data science, you'll need to navigate artificial intelligence and machine learning in some way. Utilizing our machine learning flashcards gives you a firm understanding of the theory and practical skills. Kick off your first study session today—invaluable insights await!

Explore the Flashcards:

1 of 94

Machine Learning

An area of artificial intelligence that focuses on the development of algorithms that can learn patterns from data without being explicitly programmed.

2 of 94

Supervised Learning

Involves training a model using a dataset where the input comes paired with the correct output.

3 of 94

Unsupervised Learning

Involves training a model without explicit instructions, using data that isn't labeled.

4 of 94

Reinforcement Learning

A type of machine learning where an agent learns to behave in an environment by performing actions and receiving rewards for them.

5 of 94

Population

The entire set of items or individuals of interest in a study. Denoted by N.

6 of 94

Linear Correlation Coefficient

A measure of the strength and direction of the linear relationship between two variables. Very useful for direct interpretation, as it takes values in [-1, 1]. Denoted ρxy for a population and rxy for a sample.
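
As a quick illustration, here is a minimal sketch of computing a sample correlation coefficient with NumPy; the data values are invented for the example:

```python
import numpy as np

# Invented sample data: hours studied vs. exam score
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([52, 58, 61, 70, 75], dtype=float)

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# entry is the sample linear correlation coefficient r_xy
r_xy = np.corrcoef(x, y)[0, 1]
print(round(r_xy, 3))  # close to 1: strong positive linear relationship
```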

7 of 94

Correlation

A statistical measure that describes the extent to which two variables change together. There are several ways to compute it, the most common being the linear correlation coefficient.

8 of 94

Critical Value

A threshold value from a statistical table (z, t, F, etc.) associated with a chosen significance level.

9 of 94

Degrees of Freedom

The number of values in a statistical calculation that are free to vary without violating the data's constraints.

10 of 94

Hypothesis

A testable proposition or assumption about a population parameter.

11 of 94

Null Hypothesis

A default hypothesis for testing. Whenever we are conducting a test, we are trying to reject the null hypothesis.

12 of 94

Alternative Hypothesis

The hypothesis that contradicts the null hypothesis. It represents the researcher's claim.

13 of 94

Significance Level

The probability of rejecting the null hypothesis when it's true. Denoted α. You choose the significance level. All else equal, the lower the level, the better the test.

14 of 94

Rejection Region

The part of the distribution for which we would reject the null hypothesis.

15 of 94

P-Value

The smallest significance level at which the null hypothesis can be rejected based on the observed data.
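
A hedged sketch of how a p-value shows up in practice, using SciPy's one-sample t-test on invented data (testing whether the population mean differs from 50):

```python
import numpy as np
from scipy import stats

# Invented sample; null hypothesis: the population mean is 50
sample = np.array([51.2, 49.8, 52.4, 50.9, 53.1, 48.7, 52.0])

t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# Reject the null at significance level alpha if p_value < alpha
alpha = 0.05
print(p_value, p_value < alpha)
```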

16 of 94

Causation

Causation refers to a causal relationship between two variables. When one variable changes, the other changes accordingly. When we have causality, variable A affects variable B, but it is not required that B causes a change in A.

17 of 94

Regression Analysis

A method to model and analyze the relationships between variables. Usually, it is used for building predictive models.

18 of 94

Linear Regression Model

A model that describes a linear relationship between two or more variables.

19 of 94

Dependent Variable ( ŷ )

The outcome variable being predicted or explained. It 'depends' on the other variables. Usually denoted y.

20 of 94

Independent Variable ( xi )

The variable(s) used to predict or explain variations in the dependent variable. This is the observed data (your sample data). Usually denoted x1, x2, …, xk.

21 of 94

Coefficient ( βi )

A factor that quantifies the relationship between an independent variable and the dependent variable.

22 of 94

Constant ( β0 )

A constant value that does not depend on any independent variable but affects the dependent variable in a constant manner.

23 of 94

Epsilon ( ε )

The error of prediction: the difference between the observed value and the (unobservable) true value.

24 of 94

Regression Equation

An equation representing the relationship between variables, with coefficients estimated from data. Think of it as an estimator of the linear regression model.
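
In the notation of the surrounding cards, the underlying linear regression model and the regression equation that estimates it can be written as:

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + \varepsilon
\hat{y} = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_k x_k
```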

25 of 94

b0, b1,…, bk

Estimates of the coefficients β0, β1, …, βk.

26 of 94

Regression Line

The best-fitting line through the data points.

27 of 94

Residual ( e )

The difference between the observed value and the value estimated by the regression line. A point estimate of the error ( ε ).

28 of 94

b0

The intercept of the regression line with the y-axis for a simple linear regression.

29 of 94

b1

The slope of the regression line for a simple linear regression.

30 of 94

ANOVA

Abbreviation of 'analysis of variance'. A statistical framework for analyzing differences among means by partitioning variance.

31 of 94

SST

Sum of squares total. SST is the sum of the squared differences between the observed dependent variable and its mean.

32 of 94

SSR

Sum of squares regression. SSR is the sum of the squared differences between the predicted value and the mean of the dependent variable. This is the variability explained by the regression model.

33 of 94

SSE

Sum of squares error. SSE is the sum of the squared differences between the observed value and the predicted value. This is the variability that is NOT explained by the model.

34 of 94

R-Squared ( R2 )

A measure ranging from 0 to 1 that shows how much of the total variability of the dataset is explained by the regression model.
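
In terms of the sums of squares defined on the previous cards:

```latex
R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}, \qquad \text{where } SST = SSR + SSE
```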

35 of 94

OLS

An abbreviation of 'ordinary least squares'. A method to estimate the coefficients of a regression model by minimizing the sum of squared residuals.
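
A minimal sketch of OLS with plain NumPy, assuming invented data for a simple linear regression; np.linalg.lstsq minimizes the sum of squared residuals directly:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 1.5 * x + rng.normal(0, 1, size=50)  # true model: y = 2 + 1.5x + noise

# Design matrix with a column of ones for the intercept b0
X = np.column_stack([np.ones_like(x), x])

# lstsq minimizes the sum of squared residuals ||y - Xb||^2
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
print(b0, b1)  # estimates should land near 2.0 and 1.5
```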

36 of 94

Regression Tables

Tables summarizing the results of a regression analysis.

37 of 94

Multivariate Linear Regression

Also known as multiple linear regression. A regression model with multiple independent variables.

38 of 94

Adjusted R-Squared

A version of R-squared adjusted for the number of predictors in the model. It penalizes the excessive use of independent variables.

39 of 94

F-Statistic

A statistic used to test the overall significance of a model. The F-statistic is connected with the F-distribution in the same way the z-statistic is related to the Normal distribution.

40 of 94

F-Test

A test for the overall significance of the model.

41 of 94

Assumptions

Preconditions required for the validity of statistical techniques, like linear regression.

42 of 94

Linearity

The assumption that the relationship between variables is linear.

43 of 94

Homoscedasticity

The assumption that the variance of residuals is constant across all levels of the independent variables.

44 of 94

Endogeneity

A situation in which an independent variable is correlated with the error term.

45 of 94

Autocorrelation

The correlation of a variable with itself over successive time intervals.

46 of 94

Multicollinearity

A situation where two or more independent variables are highly correlated, making it difficult to isolate the effect of individual predictors.

47 of 94

Omitted Variable Bias

Bias introduced when a relevant variable is left out of a regression model.

48 of 94

Heteroscedasticity

The presence of non-constant variance in the residuals of a regression model.

49 of 94

Log Transformation

Applying the logarithm function to a variable to linearize relationships or stabilize variances.

50 of 94

Semi-Log Model

A regression model where either the dependent or independent variable is logarithmically transformed.

51 of 94

Log-Log Model

A regression model where both the dependent and independent variables are logarithmically transformed.

52 of 94

Serial Correlation

Another term for autocorrelation.

53 of 94

Cross-Sectional Data

Data collected at a single point in time.

54 of 94

Time Series Data

Data collected at regular intervals over time (e.g. stock prices).

55 of 94

Day of the Week Effect

A well-known phenomenon in finance: returns tend to be disproportionately high on Fridays and low on Mondays.

56 of 94

Durbin-Watson Test

A test for detecting autocorrelation (a violation of the fourth OLS assumption).
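
A small sketch using the durbin_watson helper from statsmodels on a residual series (values invented); a statistic near 2 suggests no first-order autocorrelation:

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

# Invented residuals from some fitted regression
residuals = np.array([0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, -0.1])

dw = durbin_watson(residuals)
print(dw)  # ranges roughly 0..4; ~2 means little evidence of autocorrelation
```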

57 of 94

Total Variability = ? + ?

Total Variability = Explained variability + Unexplained variability.

58 of 94

Clustering

A technique used to group similar data points together based on certain features, without having predefined categories.

59 of 94

Classification

An algorithmic approach to determining which category an input belongs to out of a set of categories.

60 of 94

Decision Tree

A decision support tool that uses a tree-like model of decisions and their potential consequences.

61 of 94

Categorical Data

Data that represents categories or labels without inherent numerical value.

62 of 94

Dummy Variables

Also known as indicator variables. They're used to represent categorical data as a series of binary values to include in statistical models.
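
A sketch with pandas (the column names are invented); drop_first=True keeps one fewer dummy per category to avoid perfect multicollinearity (the 'dummy variable trap'):

```python
import pandas as pd

df = pd.DataFrame({"city": ["London", "Paris", "London", "Berlin"],
                   "salary": [50, 60, 55, 52]})

# One binary column per category, dropping the first to avoid the dummy trap
dummies = pd.get_dummies(df, columns=["city"], drop_first=True)
print(dummies)
```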

63 of 94

Overfitting Model

When a model captures noise in the data and is too complex. It will perform exceptionally well on training data but poorly on unseen data.

64 of 94

Underfitting Model

When a model is too simple to capture the underlying trends in the data, resulting in poor performance on both the training and testing sets.

65 of 94

Training Dataset

The set of data used to train a machine learning model.

66 of 94

Testing Dataset

After training, this dataset is used to evaluate how well a model performs on data it hasn't seen before.
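
A common way to produce the training and testing datasets is a random split, sketched here with scikit-learn (the 80/20 ratio is just a typical choice):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # invented feature matrix
y = np.arange(10)                 # invented targets

# 80% of rows for training, 20% held out for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```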

67 of 94

Logistic Regression Model

A statistical method for predicting binary outcomes. It's used when the dependent variable is categorical and binary.
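
A minimal scikit-learn sketch on synthetic binary data; the model outputs probabilities, which are then thresholded into the two classes:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

model = LogisticRegression()
model.fit(X, y)

print(model.predict_proba(X[:3]))  # P(class 0), P(class 1) per row
print(model.predict(X[:3]))        # hard 0/1 predictions
```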

68 of 94

Logit Regression Model

Another term for logistic regression. It models the log odds of the probability of the event occurring.

69 of 94

MLE Method

A method to estimate the parameters of a model. It chooses the parameter values that maximize the likelihood of the observed data given the model.

70 of 94

Likelihood Function

A function that estimates how likely it is that the model at hand describes the real underlying relationship between the variables.

71 of 94

LL-Null

Log likelihood-null: the log-likelihood of a model that has no independent variables.

72 of 94

LLR P-Value

Log likelihood ratio: measures whether our model is statistically different from the LL-null model, i.e., a model with no explanatory power.

73 of 94

Confusion Matrix

A table used in classification problems where the accuracy of a model's predictions is summarized. Typically a 2x2 matrix for binary classification problems.
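
A sketch with scikit-learn on invented labels; for binary problems with labels 0/1, the rows are the true classes and the columns the predicted ones:

```python
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

cm = confusion_matrix(y_true, y_pred)  # [[TN, FP], [FN, TP]]
tn, fp, fn, tp = cm.ravel()
print(tn, fp, fn, tp)
print(accuracy_score(y_true, y_pred))  # (TP + TN) / total observations
```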

74 of 94

True Positives (TP)

Correctly predicted positive observations.

75 of 94

False Positives (FP)

Instances falsely predicted as positive (Type I error).

76 of 94

True Negatives (TN)

Correctly predicted negative observations.

77 of 94

False Negatives (FN)

Instances falsely predicted as negative (Type II error).

78 of 94

Pseudo R-squared

A counterpart of the R-squared from linear regression used in logistic regression; it provides a measure of how well the model explains the variance in the dependent variable.

79 of 94

AIC

A measure used to compare models that also accounts for model complexity. A lower AIC indicates a better model.

80 of 94

BIC

Similar to AIC, but has a higher penalty for models with more parameters.

81 of 94

McFadden’s R-squared

A measure used in logistic regression to indicate the goodness of fit of the model compared to a model with no predictors.

82 of 94

Accuracy of Model

The ratio of correctly predicted observations to the total number of observations.

83 of 94

Cluster Analysis

A multivariate statistical technique that groups observations on the basis of the features or variables that describe them.

84 of 94

Classification

The process of predicting the class or category of given data points based on certain features.

85 of 94

Euclidean Distance

The 'ordinary' distance between two points in space, calculated using the Pythagorean theorem.
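
In NumPy this is a single call (the points are invented):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])

# sqrt((4-1)^2 + (6-2)^2) = sqrt(9 + 16) = 5.0
print(np.linalg.norm(a - b))
```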

86 of 94

K-Means Clustering

An iterative algorithm that tries to partition the dataset into K pre-defined distinct non-overlapping subgroups or clusters.
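
A minimal scikit-learn sketch on invented 2D points, partitioning them into K = 2 clusters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two invented blobs of 2D points
X = np.array([[1, 1], [1.5, 2], [1, 1.5],
              [8, 8], [8.5, 9], [9, 8]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster assignment per point
print(km.cluster_centers_)  # centroid coordinates
```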

87 of 94

WCSS (Within-Cluster Sum of Squares)

WCSS is a measure used in clustering algorithms that represents the total distance between each point and the centroid of the cluster it belongs to.

88 of 94

Elbow Method

A method in K-means clustering to identify the optimal number of clusters by locating the "elbow" point in a plot of WCSS against cluster count. This point reflects the most effective balance between precision and computation.
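
A sketch of the elbow method with scikit-learn on invented points: fit K-means for several values of K and look for where the WCSS (exposed as inertia_) stops dropping sharply:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 1], [1.5, 2], [1, 1.5],
              [8, 8], [8.5, 9], [9, 8]])  # invented 2D points

wcss = []
for k in range(1, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss.append(km.inertia_)  # within-cluster sum of squares for this k

# Plotting wcss against k reveals the 'elbow' at the optimal cluster count
print(wcss)
```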

89 of 94

Standardized Variable

A variable that has been standardized using the z-score formula: first subtracting the mean and then dividing by the standard deviation.
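
In code, the z-score transformation is a one-liner (data invented):

```python
import numpy as np

x = np.array([10.0, 12.0, 9.0, 15.0, 14.0])

# z-score: subtract the mean, then divide by the standard deviation
z = (x - x.mean()) / x.std()
print(z.mean(), z.std())  # ~0 and 1.0 after standardization
```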

90 of 94

Flat Clustering

A method where the number of clusters is defined in advance, and the dataset is partitioned into the specified number of clusters.

91 of 94

Hierarchical Clustering

Involves creating clusters that have a predetermined ordering from top to bottom.

92 of 94

Divisive (Top-Down) Clustering

Begins with all data points in a single cluster and recursively divides it into smaller clusters.

93 of 94

Agglomerative (Bottom-Up) Clustering

Starts with each data point as an individual cluster and merges them into larger clusters based on similarity.

94 of 94

Dendrogram

A tree-like diagram that records the sequences of merges or splits in hierarchical clustering.
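
A sketch with SciPy on invented points; linkage performs agglomerative clustering and dendrogram draws the recorded merge tree (matplotlib is needed for display):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.array([[1, 1], [1.5, 2], [1, 1.5],
              [8, 8], [8.5, 9], [9, 8]])  # invented 2D points

Z = linkage(X, method="ward")  # agglomerative merges under Ward's criterion
dendrogram(Z)                  # tree of the merge sequence
plt.show()
```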