Top 18 Probability and Statistics Interview Questions for Data Scientists

Eugenia Anello 23 Apr 2024 15 min read

Are you trying to prepare for a data science interview but don’t know where to start? It’s not just you; this task can be overwhelming. A data science interview may involve questions about anything from statistics and mathematics to deep learning and artificial intelligence.

So, it’s best to begin with the basics and gradually move on to more complex topics.

This article explains the foundational concepts behind machine learning and data analysis. More specifically, we present 18 common probability and statistics interview questions and answers to help you prepare for your data science or analytics job interview. We also offer a knowledge checklist to ensure you have a strong grounding in core statistics and probability concepts, as well as general interview tips to aid in your preparation.


Statistics and Probability Knowledge Checklist

Before diving into statistics and probability questions, make sure you are familiar with all the concepts you might encounter in an interview by reviewing this knowledge checklist.

Master Descriptive Statistics: Understand key statistics like mean, median, mode, variance, and standard deviation.

For instance, a salary dataset can reveal central tendency and dispersion. Your prep should involve reviewing formulas, practicing calculations, and applying these concepts to varied datasets.

Understand Probability Distributions: Master distributions like normal, binomial, and Poisson.

Each distribution features unique characteristics and applications. For instance, the normal distribution is crucial to the Central Limit Theorem. Enhance your understanding by studying these properties and applying them to practical situations.
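
As a quick refresher, here’s a minimal NumPy sketch of drawing samples from each of these distributions (the parameter values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Normal: continuous and symmetric; parameterized by mean and std. deviation
heights = rng.normal(loc=170, scale=10, size=1_000)

# Binomial: discrete; number of successes in n independent yes/no trials
conversions = rng.binomial(n=100, p=0.05, size=1_000)

# Poisson: discrete; count of events per fixed interval at average rate lam
support_calls = rng.poisson(lam=3, size=1_000)

print(heights.mean(), conversions.mean(), support_calls.mean())
```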

Adopt Bayesian Thinking: Master Bayesian statistics and refine your beliefs with new evidence.

Understand priors, likelihoods, and posteriors. For example, update the probability of an event (such as disease presence) based on new data (test results). Study Bayes’ theorem and apply Bayesian inference in predictive modeling.
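
For instance, here’s a minimal sketch of Bayes’ theorem applied to the disease-testing example above; the prevalence and test-accuracy figures are made up for illustration:

```python
# P(disease | positive test) via Bayes' theorem
# Illustrative inputs: 1% prevalence, 95% sensitivity, 90% specificity
prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.10   # P(positive | no disease) = 1 - specificity

# Total probability of testing positive (the evidence)
evidence = sensitivity * prior + false_positive * (1 - prior)

posterior = sensitivity * prior / evidence
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.088, despite a sensitive test
```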

Grasp Hypothesis Testing: Learn how to make inferences about populations using sample data.

Define null and alternative hypotheses, select appropriate tests, calculate test statistics, and interpret p-values. Prepare by understanding different test types and their assumptions.

Learn Regression Analysis: Understand how regression models explain relationships between variables.

Learn how to interpret coefficients, assess model fit, and verify assumptions, for example when forecasting housing prices based on size and location. Develop skills by constructing and analyzing models using real datasets.

Design Experiments Effectively: Learn to design experiments to test hypotheses or measure effects.

Grasp control groups, randomization, and outcome definition, for example when configuring an A/B test to assess marketing strategies. Master these design principles and employ them to build solid experiments, as sketched below.
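
As a rough sketch of that A/B test workflow, the snippet below simulates two randomized groups and checks whether their conversion rates differ, using the two-proportion z-test from statsmodels (all numbers are simulated for illustration):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(seed=0)

# Randomly assign 10,000 users to control and treatment (5,000 each)
n = 5_000
control = rng.binomial(1, 0.10, size=n)    # 10% baseline conversion rate
treatment = rng.binomial(1, 0.12, size=n)  # 12% under the new strategy

# Two-proportion z-test on the conversion counts
stat, p_value = proportions_ztest(
    count=[control.sum(), treatment.sum()], nobs=[n, n]
)
print(f"z = {stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference
```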

Avoid Statistical Paradoxes and Fallacies: Understand paradoxes like Simpson’s Paradox to prevent misleading data interpretations.

Examine these paradoxes and use your insights to prevent incorrect conclusions.

Master Sampling and Estimation: Comprehend sampling techniques, biases, and estimation accuracy.

Practice with examples such as predicting election outcomes through polls, and implement sampling methods in real-world situations.

Communicate Results Effectively: Learn to communicate statistical findings to non-technical audiences.

This requires clear communication, impactful visualizations, and relevant interpretation, such as explaining a statistical model's implications for business strategy. Sharpen your summarization, visualization, and presentation skills.

Code for Statistics: Develop proficiency in statistical programming (Python/R) for data manipulation, analysis, and modeling.

Utilize statistical methods, create visualizations, and decode results with coding, such as performing logistic regression in Python. Refine programming skills, analyze datasets statistically, and develop projects to demonstrate your expertise.
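
For example, a minimal logistic regression sketch with scikit-learn might look like this (the data is simulated purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)

# Simulated data: two numeric features and a binary outcome
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LogisticRegression().fit(X_train, y_train)
print("Coefficients:", model.coef_)  # estimated feature effects
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```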

Top 18 Probability and Statistics Interview Questions for Data Scientists

Once you feel comfortable with the topics listed above, practice applying that knowledge to land a job using these top probability and statistics interview questions.

1. What Is the Difference Between Descriptive and Inferential Statistics?

Descriptive and inferential statistics are two different branches of the field. The former summarizes the characteristics and distribution of a dataset, such as mean, median, variance, etc. You can present those using tables and data visualization methods like box plots and histograms.

In contrast, inferential statistics allows you to formulate and test hypotheses for a sample and generalize the results to a broader population. Using confidence intervals, you can estimate the population parameters.

You must be able to explain the mechanisms behind these concepts because entry-level statistics questions for a data analyst interview often revolve around sampling, the generalizability of results, etc.

2. What Are the Main Measures Used to Describe the Central Tendency of Data?

Centrality measures are essential for exploratory data analysis. They indicate the center of the data distribution but yield different results. You must understand the difference between the main types to interpret and use them in analyses.

During a statistics job interview, you might need to explain the meaning of each measure of centrality, including mean, median, and mode:

  • Mean (or average) is the sum of all observations divided by the total number of participants or cases (n).
  • Median is the middle value in a dataset ordered from smallest to largest. When n is odd, it’s the value in position (n+1)/2; with an even number of data points, it’s the average of the two middle values, in positions n/2 and n/2 + 1.
  • Mode is the most frequently appearing data point. It is a valuable measure when working with categorical variables.
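
A quick way to internalize these definitions is to compute all three on a small dataset, for instance with pandas (the salary figures below are made up):

```python
import pandas as pd

salaries = pd.Series([42_000, 45_000, 45_000, 50_000, 58_000, 120_000])

print("Mean:  ", salaries.mean())     # 60,000 - pulled up by the 120k outlier
print("Median:", salaries.median())   # 47,500 - middle of the ordered values
print("Mode:  ", salaries.mode()[0])  # 45,000 - the most frequent value
```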

3. What Are the Main Measures of Variability?

Variability measures are also crucial in describing data distribution. They show how spread-out data points are and how far away they are from the mean.

Some basic questions during a statistics interview might require you to explain the meaning and usage of variability measures. Here’s your cheat sheet:

  • Variance measures the average squared distance of data points from the mean. A small variance corresponds to a narrow spread of the values, while a big variance implies that data points are far from the mean.
  • Standard deviation is the square root of the variance. It shows the amount of variation of values in a dataset.
  • Range is the difference between the maximum and minimum data value. It’s a good indicator of variability when there are no outliers in a dataset, but when there are, it can be misleading.
  • Interquartile range (IQR) measures the spread of the middle part of a dataset. It’s essentially the difference between the third and the first quartile.
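
Here’s a minimal NumPy sketch of all four measures on a toy dataset; note that ddof=1 yields the sample (rather than population) variance:

```python
import numpy as np

data = np.array([4, 7, 9, 10, 12, 15, 40])  # note the outlier at 40

print("Variance:", data.var(ddof=1))         # average squared distance from mean
print("Std dev: ", data.std(ddof=1))         # square root of the variance
print("Range:   ", data.max() - data.min())  # 36 - inflated by the outlier

q1, q3 = np.percentile(data, [25, 75])
print("IQR:     ", q3 - q1)                  # spread of the middle 50%, robust
```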

4. What Are Skewness and Kurtosis?

Next on our list of statistics questions for a data science interview are the measures of the shape of data distribution: skewness and kurtosis.

Let’s start with the former.

Skewness measures the symmetry of a distribution and indicates whether values are more likely to fall in one tail than the other. With a symmetrical distribution, the mean and median coincide. If the data distribution isn’t symmetrical, it’s skewed.

There are two types of skewness:

  • Positive is when the right tail is longer. Most values are clustered around the left tail, and the median is smaller than the mean.
  • Negative is when the left tail is longer. Most values are clustered around the right tail, and the median is greater than the mean.

[Figure: three graphs showing a symmetrical distribution, a positive skew, and a negative skew, with the position of the mean, median, and mode in each case.]

Kurtosis, on the other hand, reveals how heavy- or light-tailed data is compared to the normal distribution. There are three types of kurtosis:

  • Mesokurtic distributions approximate a normal distribution.
  • Leptokurtic distributions have a pointy shape and heavy tails, indicating a high probability of extreme events occurring.
  • Platykurtic distributions have a flat shape and light tails. They reveal a low probability of the occurrence of extreme events.
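
In Python, scipy.stats provides both measures directly. As a sketch, note that scipy reports excess kurtosis by default, so a value near 0 corresponds to a mesokurtic, normal-like distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

normal_data = rng.normal(size=10_000)       # symmetric, mesokurtic
skewed_data = rng.exponential(size=10_000)  # right-skewed, heavy right tail

print("Normal: skew =", round(stats.skew(normal_data), 2),
      "| excess kurtosis =", round(stats.kurtosis(normal_data), 2))
print("Skewed: skew =", round(stats.skew(skewed_data), 2),
      "| excess kurtosis =", round(stats.kurtosis(skewed_data), 2))
```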

Knowing the meaning and calculations of these measures may be enough for an entry-level job. But statistics interview questions for advanced data science positions may revolve around using these concepts in practice.

If you wish to prepare for more advanced positions, try the 365 Data Scientist Career Track. It starts with the basics of statistics and probability, builds your knowledge of programming languages used in machine learning and AI, such as SQL, and ends with portfolio, resume, and interview preparation lessons.

5. Describe the Difference Between Correlation and Autocorrelation

These two concepts tend to be confused, which makes it a good trick question for a statistics interview. To avoid surprises, we’ll explain the difference.

A correlation measures the linear relationship between two or more variables. It ranges between -1 and 1. It’s positive if the variables increase or decrease together. If it’s negative, one variable decreases while the other increases. When the value is 0, the variables aren’t related.

The following scatterplot illustrates the different types of correlation:

[Figure: a scatterplot panel showing two positive correlations (r = 0.7 and r = 0.3), two negative correlations (r = -0.7 and r = -0.3), and no correlation (r = 0).]

In contrast, autocorrelation measures the linear relationship between values of the same variable. Just like correlation, it can be positive or negative. Typically, we use it when dealing with a time series, i.e., repeated observations of the same variable over time.
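
A short sketch with NumPy and pandas makes the distinction concrete (the series are simulated; Series.autocorr computes the autocorrelation at a given lag):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=3)

# Correlation: linear relationship between two different variables
x = rng.normal(size=200)
y = 0.7 * x + rng.normal(scale=0.5, size=200)
print("Correlation:", np.corrcoef(x, y)[0, 1])

# Autocorrelation: a variable's relationship with its own past values,
# typical of time series (here, a random walk)
ts = pd.Series(np.cumsum(rng.normal(size=200)))
print("Lag-1 autocorrelation:", ts.autocorr(lag=1))
```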

6. Explain the Difference Between Probability Distribution and Sampling Distribution

As noted, you may be asked various statistics interview questions regarding sampling and the generalizability of results. The difference between probability and sampling distribution is just one example.

A probability distribution is a function used to calculate the probability of a random variable X taking different values. There are two main types depending on the variable: discrete and continuous. Examples of the former are the binomial and Poisson distributions, and of the latter: normal and uniform distributions.

A sampling distribution is the probability distribution of a statistic based on a range of random samples from a population. The definition sounds confusing, but it’s encountered often in practice.

For example, imagine you’re a clinical data analyst working on developing a new treatment for patients with Alzheimer’s. You’ll likely be working with samples from the entire population of individuals with the disease. So, you’ll use the sampling distribution during the data analysis.

7. What Is the Normal Distribution?


Normal distribution is a central concept in mathematics and data analysis. As such, it often appears in statistics interview questions.

The normal (or Gaussian) distribution is the most important probability distribution in statistics. It’s often called a “bell curve” because of its shape—tall in the middle, flat toward the ends.

A key characteristic of the normal distribution is that the mean and the median coincide. In its standardized form, the mean is 0 and the standard deviation is 1. With this information, we can calculate the following:

  • 68.27% of the data falls within +/-1 standard deviation of the mean.
  • 95.45% of the data falls within +/-2 standard deviations of the mean.
  • 99.73% of the data falls within +/-3 standard deviations of the mean.

This is known as the empirical rule.

[Figure: a normal distribution curve with the percentage of data in each standard-deviation segment under the curve.]

But what is so special about it?

Many naturally occurring phenomena approximately follow a normal distribution. As such, we often use it in data analysis to determine the probability of a data point falling above or below a given value, or of a sample mean falling above or below the population mean.
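
You can verify the empirical rule numerically with scipy.stats; here’s a minimal sketch for the standard normal distribution:

```python
from scipy import stats

# Probability mass within k standard deviations of the mean
for k in (1, 2, 3):
    prob = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(f"Within +/-{k} sd: {prob:.4f}")
# Prints 0.6827, 0.9545, and 0.9973 - the empirical rule
```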

8. What Are the Assumptions of Linear Regression?

Next, we move on from basic to intermediate probability and statistics interview questions. To further advance your knowledge on these topics, check out 365's Statistics course for data scientists.

But for now, let’s continue with linear regression, which is the basis of predictive analysis.

It investigates the relationship between one or more independent variables (predictors) and a dependent variable (outcome). More concretely, it examines the extent to which the independent variables are good predictors of the result.

The residual (or error term) equals the actual observed value minus the value predicted by the model. Linear regression models aim to find the “line of best fit” with minimal error.

The typical statistics interview questions for a data analyst job might involve the above definitions or the following four main assumptions that must be met to conduct linear regression analysis.

  1. Linear relationship: A linear relationship exists between the predictors and the dependent variable.
  2. Normality: The dependent variable has a normal distribution for any fixed value of the predictor.
  3. Homoscedasticity: The variance of the error term is constant for every value of the independent variable.
  4. Independence: All observations are independent—meaning there is no autocorrelation between the residuals.
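
Here’s a sketch of fitting such a model and spot-checking the assumptions with statsmodels (the data is simulated to satisfy them by construction; in practice, you’d also plot the residuals):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(seed=4)

# Simulated data that satisfies the assumptions by construction
X = rng.normal(size=(200, 2))
y = 3 + 2 * X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=200)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())  # coefficients, R-squared, diagnostics

# Residuals should be uncorrelated; a Durbin-Watson statistic near 2
# indicates no autocorrelation
print("Durbin-Watson:", durbin_watson(model.resid))
```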

9. What Is Hypothesis Testing?

We’ve already touched on this topic with some of the previous statistics and probability interview questions. But since it’s a fundamental part of data analysis, we wish to cover it in more detail.

Hypothesis testing allows us to evaluate a hypothesis about the population based on sample data. How do we conduct it?

First, we formulate a null hypothesis (or H0)—assuming no difference or relationship between the variables. For each null hypothesis, there’s an alternative one considering the opposite. If H0 is rejected, the alternative hypothesis is supported.

We need to choose an appropriate statistical test to determine whether the data supports a particular hypothesis. If the probability of observing the data under the null hypothesis (the p-value) is below a predetermined significance level, we can reject it.

On that note, statistics questions for a data analyst interview may also regard different types of statistical tests. To help you prepare, we cover the basic ones.

10. What Are the Most Common Statistical Tests Used?

There are numerous statistical tests, each one serving a different purpose. Note the following common ones:

  • The Shapiro-Wilk test is a statistical tool testing if a data distribution is normal.
  • A t-test assesses whether the difference between two groups is statistically significant.
  • Analysis of Variance (ANOVA) tests whether the differences among the means of more than two groups are statistically significant.
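
All three are available in scipy.stats. A minimal sketch on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=5)
a = rng.normal(loc=10, scale=2, size=50)
b = rng.normal(loc=11, scale=2, size=50)
c = rng.normal(loc=12, scale=2, size=50)

print("Shapiro-Wilk:", stats.shapiro(a))         # is `a` normally distributed?
print("t-test:      ", stats.ttest_ind(a, b))    # do groups a and b differ?
print("ANOVA:       ", stats.f_oneway(a, b, c))  # do the three groups differ?
```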

11. What Is the p-Value and How Do I Interpret It?

A p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is correct. To reject the null hypothesis, the p-value must be lower than a predetermined significance level α.

The most used significance level is 0.05. If the p-value is below 0.05, we can reject the null hypothesis and accept the alternative one.

In that case, the results are statistically significant.

This is a fundamental part of data analysis; therefore, a common statistics interview question.

12. What Is the Confidence Interval?

The confidence interval is the range we expect to contain the true population parameter with a given level of confidence: if we repeated the experiment many times, that share of the computed intervals would capture the true value.

The interval’s center coincides with the point estimate (e.g., the sample mean), and its width is determined by the standard error of the estimate. The most common confidence level is 95%.
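
As a sketch, here’s one way to compute a 95% confidence interval for a sample mean using scipy’s t distribution (the sample is simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=6)
sample = rng.normal(loc=100, scale=15, size=40)

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean

# 95% CI from the t distribution with n - 1 degrees of freedom
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"95% CI: [{low:.2f}, {high:.2f}]")
```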

13. What Are the Main Ideas of the Law of Large Numbers?

The Law of Large Numbers is a key theorem in probability and statistics with many practical applications in finance, business, etc. It states that if an experiment is repeated independently multiple times, the mean of all results will approximate the expected value.

A classic example is coin flipping. We know the probability (p) of getting tails is 50%. If X is the number of tails after 100 trials, then the expected value is E(X) = n × p = 100 × 0.5 = 50.

Let’s suppose we repeat the experiment multiple times.

The first time, we get X1= 65 tails; the second, X2 = 50 tails, and so on. Ultimately, we calculate the mean of all trials by adding the random variables (X1, X2, …, Xn) and dividing the sum by the number of experiments. Following the Law of Large Numbers, the mean of these results will approximate the expected value E(X) = 50.
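
A quick simulation illustrates this convergence; in the NumPy sketch below, the running mean of simulated 100-flip experiments drifts toward the expected value of 50:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Each experiment: number of tails in 100 fair coin flips, so E(X) = 50
counts = rng.binomial(n=100, p=0.5, size=100_000)

for n_experiments in (10, 100, 1_000, 100_000):
    running_mean = counts[:n_experiments].mean()
    print(f"Mean of first {n_experiments:>7,} experiments: {running_mean:.2f}")
# The running mean approaches the expected value of 50
```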

This is a fundamental theorem in statistics with applications in machine learning; you can expect questions about it during a job interview.

14. What Is the Central Limit Theorem?

The Central Limit Theorem states that the distribution of sample means starts to resemble a normal distribution as the size of the sample increases. Interestingly, this happens even when the underlying population doesn’t have a Gaussian distribution.

[Figure: two graphs illustrating the Central Limit Theorem. On the left is a non-normal population distribution; on the right is the sampling distribution of the mean, which is normal.]

On the right, we see that, regardless of the population distribution, the sample means form a symmetrical, bell-shaped distribution as the sample size increases. A sample size of 30 or more is typically considered large enough for the Central Limit Theorem to apply.
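
A short simulation illustrates the theorem: even when we draw from a strongly skewed exponential population, the distribution of sample means loses most of that skew (sketch with simulated data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=8)

# Draw 10,000 samples of size 30 from a right-skewed exponential population
samples = rng.exponential(scale=1.0, size=(10_000, 30))
sample_means = samples.mean(axis=1)

print("Skew of raw draws:   ", stats.skew(samples.ravel()))  # ~2: heavily skewed
print("Skew of sample means:", stats.skew(sample_means))     # much closer to 0
```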

15. What’s the Difference Between Population and Sample in Data Analysis?

A population represents the entire set of items or individuals who are the focus of a study, typically symbolized by an uppercase ‘N.’ The numerical values derived from a population are referred to as parameters.

Conversely, a sample is a subset drawn from the population and is denoted by a lowercase ‘n.’ The numerical values generated from a sample are known as statistics.

This distinction is fundamental to statistical analysis. But it often prompts further exploration of why statisticians typically work with samples rather than populations and the various types of samples that can be used.

In a nutshell, samples are more practical and less costly than populations. With the correct statistical tests, a mere 30 sample observations may be sufficient for making data-driven decisions.

Importantly, samples exhibit two properties: randomness and representativeness. A sample may possess one, both, or neither of these properties. But your sample must be random and representative to conduct statistical tests and produce usable results.

Consider a Simplified Scenario

Imagine you work in a company divided into four equal-sized departments: IT, Marketing, HR, and Sales—each with 1000 employees. You need to gauge the overall attitude towards relocating to a new, bigger office elsewhere in the city.

Surveying all 4,000 employees seems excessive, but a sample of 100 employees seems reasonable. Given the equal distribution across departments, you'd expect 25 representatives from each in your sample.

  1. You randomly select 100 employees from the total 4,000 and find that your sample includes 30 from IT, 30 from Marketing, 30 from HR, and only 10 from Sales. Despite the sample being random, it’s not representative because it underrepresents the Sales department.
  2. Having worked in the company for some time, you have friends across all departments. You decide to source your sample from these friends, choosing 25 from each department. This sample is representative but not random.

In the first case, the sample is biased due to the underrepresentation of one group. In the second situation, the selection is biased towards a specific circle of people rather than reflecting the overall employee base.

To ensure both randomness and representativeness, you could randomly select 25 individuals from each department. This way, all groups are fairly represented, and the selection is random.
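
Here’s a minimal pandas sketch of that stratified random sampling approach, using the department sizes from the scenario above:

```python
import pandas as pd

# 4,000 employees split evenly across four departments
employees = pd.DataFrame({
    "id": range(4_000),
    "department": ["IT", "Marketing", "HR", "Sales"] * 1_000,
})

# Randomly pick 25 employees per department: random AND representative
sample = employees.groupby("department").sample(n=25, random_state=42)
print(sample["department"].value_counts())  # 25 from each department
```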

You may choose to elaborate or condense this explanation as needed. Alternatively, you could ask your audience whether they'd like a more in-depth topic exploration before offering this detailed understanding.

16. What Is the Difference Between Probability and Likelihood?

For the last technical question, we cover one of the fundamental principles of Bayesian statistics because data science interview questions may include that subject.

The difference between probability and likelihood is subtle but critical. Probability is the chance of a particular outcome occurring, given fixed parameter values. When calculating it, we take the parameters as given and trustworthy.

In contrast, likelihood asks how plausible a model’s parameters are, given the results we actually obtained. In other words, we evaluate how well a candidate parameter value explains the observed measurements.
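
A coin-flip sketch makes the contrast concrete: probability fixes the parameter and asks about the data, while likelihood fixes the observed data and asks about the parameter (the numbers are illustrative):

```python
from scipy import stats

# Probability: with a fair coin (p = 0.5), how likely are 60 heads in 100 flips?
print("P(60 heads | p = 0.5):", stats.binom.pmf(60, n=100, p=0.5))

# Likelihood: given the observed 60 heads, how plausible is each parameter value?
for p in (0.4, 0.5, 0.6):
    print(f"L(p = {p} | 60 heads):", stats.binom.pmf(60, n=100, p=p))
# The likelihood peaks near p = 0.6, the observed proportion of heads
```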

17. What's Your Knowledge of Statistics, and How Have You Used It as a Data Analyst?

Turning now to a more general question, you may simply be asked about your experience with statistics in data analysis.

As a data analyst, you must be on good terms with basic statistics. This means you should feel at ease finding the mean, median, and mode and be up for some significance testing. Plus, it's your job to make sense of these numbers in the grand scheme of the business. If the job calls for a deeper dive into statistics, it'll be spelled out in the job description. You should be able to explain how statistics has played a role in your experience as a data analyst. 

Response Example

In my work, I've frequently used fundamental statistics, primarily by calculating averages and standard deviations and conducting significance tests. These tests were instrumental in assessing the statistical significance of discrepancies in measurements between two distinct population groups for a specific project. I've also examined the connection between two variables within a dataset, using correlation coefficients in the process.

18. How Do Data Scientists Use Statistics?

This is another question about the uses of statistics, this time pertaining to data science.

This question often tests not just for an answer but for your ability to organize and communicate your thoughts logically. Stay calm and formulate a coherent, well-organized response.

Response Example

(You may want to further condense it to keep your interviewer engaged.)

Data science is rooted in mathematics and includes statistics, economics, programming, visualization, machine learning, and modeling. Programming turns abstract concepts into solutions, and economics addresses the business side of problem-solving. But the crux of data science lies in statistical analysis.

Machine learning, often considered a separate field, efficiently applies statistics. Statisticians develop models like linear regression and logistic regression, and their predictions are statistical inferences.

Deep learning also leans on statistics, for instance, in the commonly used 'Stochastic gradient descent.'

Data visualizations, depicting variable distributions or interrelations, align with descriptive statistics.

Data preprocessing, primarily a programming task, doesn't typically require statistical expertise. But statistical data preprocessing, like creating dummy variables or feature scaling, calls for a solid understanding of statistics.

Put briefly, statistics is at the core of almost all aspects of data science.

General Interview Tips & Tricks

  1. Body Language: Effective body language communicates confidence and attentiveness.
  • Sitting up straight and facing the interviewer shows engagement.
  • Open arms signify openness and receptiveness, while crossed arms can suggest defensiveness.
  • Steady eye contact demonstrates confidence, but it's crucial to balance it to avoid staring or appearing disengaged.
  • Smiling—particularly during positive discussions—can create a friendly atmosphere.

Additionally, controlling nervous habits is essential; avoid fidgeting or repetitive movements that can distract or convey anxiety. Preparation and practice can significantly improve your body language—making you appear more composed and self-assured during the interview (Big Interview).

  2. Prepare Smart Questions: Asking insightful questions demonstrates your interest in the role and understanding of the company. Inquire about daily tasks, team dynamics, and performance metrics to show you're thinking about how you can contribute and succeed.

Questions about collaboration between departments can reveal your team-oriented mindset. Tailoring your questions to the company and role indicates you've done your homework and are seriously considering how you can fit in and add value. This approach impresses interviewers and gives them crucial information to make informed decisions (Indeed).

Regarding Data and Tools:

"Can you describe the typical data sets I'd be working with in this role? Are there any proprietary tools or technologies the team uses for data analysis?"

"How does the company approach data management and version control, especially when dealing with large datasets?"

Team and Collaboration:

"How does the data science team interact with other departments, such as engineering or product development, to implement insights and models?"

"Can you share an example of a recent project in which the data science team played a crucial role, and what was the impact?"

Modeling and Analysis:

"What types of models are most commonly used in your projects? Are there any unique challenges you face in model selection or validation in your industry?"

"How does the team stay updated with the latest advancements in machine learning and AI? Are there opportunities for ongoing learning and professional development?"

Project Lifecycle:

"Can you walk me through the lifecycle of a typical data science project here, from ideation to deployment?"

"How does the team measure the success and impact of a data science project? What metrics or KPIs are typically used?"

Culture and Growth:

"How does the company support innovation and experimentation within the data science team?"

"What opportunities are there for career advancement and skill development in the data science team?"

Challenges and Opportunities:

"What are some of the biggest challenges the data science team currently faces, and how are they being addressed?"

"Looking forward, what major goals or projects is the data science team aiming to achieve next year?"

  3. Use the STAR or PAR Method: Structuring responses with the STAR (Situation, Task, Action, Result) or PAR (Problem, Action, Result) method ensures clarity and impact. These frameworks help you articulate your experiences compellingly, highlighting your problem-solving and critical-thinking skills.

Start by setting the context (Situation or Problem), then describe your role (Task), what you did (Action), and the outcome (Result). This method helps interviewers understand your approach to challenges and your ability to drive results, providing a complete picture of your capabilities and achievements (The Muse).

Interview Question: "Can you tell us about a time when you used statistical analysis to solve a business problem?"

STAR Response:

Situation: In my previous role at XYZ Corp, we noticed a significant drop in sales during the third quarter, which was unusual given our historically stable performance. The senior management was concerned about the potential ongoing impact on the company's revenue.

Task: As a data scientist, my task was to analyze sales data, identify patterns or anomalies, and pinpoint potential causes for this decline. My goal was to provide actionable insights that could help reverse this trend.

Action: I started by conducting a thorough exploratory data analysis to understand the trends and outliers in our sales data. I applied various statistical techniques, including time series analysis to understand seasonal patterns and regression analysis to identify any significant variables correlated with the sales drop.

I also performed a cohort analysis to see if specific customer segments contributed more to the decline.

Result: My analysis revealed that a recent change in our pricing strategy had negatively impacted a key customer segment, leading to a sales decline. I presented these findings with visualizations and a detailed report to the management team.

Based on my insights, we quickly adjusted our pricing strategy, resulting in a significant recovery in sales over the next quarter. The company acknowledged my contribution, and I received an award for my analytical impact, demonstrating the value of data-driven decision-making in addressing business challenges.

  4. Suggest a Favorable Interview Time: Timing can influence the mood and attention of your interviewers. Mid-week and mid-morning slots are generally optimal because they avoid the rush of Monday beginnings and the winding down of Friday afternoons.

An interview scheduled before lunch can ensure you and the interviewer are not distracted by hunger, enhancing focus and engagement. While you may not always control the timing, suggesting your preference can contribute to a more favorable interview environment (Zety).

  5. Dress Appropriately: Dressing appropriately signals professionalism and respect for the company's culture. It's about aligning with the expected dress code while slightly exceeding it to show seriousness about the role. Overdressing slightly is better than underdressing, as it conveys a strong work ethic and attention to detail.

Your attire should not distract from your qualifications; instead, it should complement your professional demeanor. Remember, your appearance is often your first impression, so ensuring it reflects your competence and suitability for the role is crucial (Jobscan).

  6. Bring Necessary Items: Bringing essential items like extra resumes, references, a notebook, and a pen demonstrates your preparedness and organizational skills. Having your resume handy can facilitate discussions about your experience, while notes can help you ask informed questions or follow up on interview topics.

This level of preparedness shows you're meticulous and serious about the opportunity—traits highly valued in any professional setting (Jobscan).

  7. Arrive Early: Showing up early allows you to acclimate to the environment, observe workplace dynamics, and prepare mentally for the interview. But arriving too early might inconvenience or pressure the interviewer.

Arriving 10-15 minutes beforehand strikes a balance, giving you enough time to settle in without causing any awkwardness. This punctuality reflects your time management skills and respect for the interviewer's schedule (Jobscan).

  8. Smile and Make Eye Contact: Smiling and maintaining appropriate eye contact are fundamental aspects of positive communication. They convey confidence, warmth, and engagement, helping establish a rapport with the interviewer.

Balance, however, is critical—overdoing either can be off-putting. These non-verbal cues complement your verbal communication, reinforcing your enthusiasm and interest in the position. They're simple yet powerful ways to make a positive impression and demonstrate your interpersonal skills (Jobscan).

  9. Practice Mock Interviews: Mock interviews provide a rehearsal space to refine your answers, body language, and overall strategy. They can simulate the interview environment, reducing anxiety and improving your performance on the actual day.

Feedback from these sessions can highlight areas for improvement, whether in content, delivery, or demeanor. Regular practice can enhance confidence—making you more articulate and composed during the actual interview (Interview Guys).

  10. Connect Before Diving In: Building a rapport before the formal interview can set a positive tone and differentiate you from other candidates. Engaging in light conversation demonstrates your interpersonal skills and ability to build relationships—crucial qualities in any work environment.

This initial connection can make the interview more like a dialogue than an interrogation, facilitating a more natural and productive exchange. It's a strategic way to show your personality and fit for the company.

Probability and Statistics Interview Questions for Data Scientists: Next Steps

This concludes our list of probability and statistics interview questions and answers. We covered fundamental concepts to help you prepare for an interview and understand more complex data science and analytics topics.

If you'd like to deepen your knowledge, try the 365 Data Science Program, which offers self-paced courses led by renowned industry experts. You'll move from the basics to advanced specializations through many practical exercises and real-world business cases. If you wish to see how the training works, sign up below and access a selection of free lessons.

Eugenia Anello

Research Fellow at University of Padova

Eugenia Anello is a Research Fellow at the University of Padova with a Master's degree in Data Science. Collaborating with the startup Statwolf, her research focuses on Continual Learning with applications to anomaly detection tasks. She also loves to write posts on data science topics in a simple and understandable way and share them on Medium.
