There are two errors that often rear their heads when you are learning about hypothesis testing – the *false positive* and the *false negative*, technically referred to as *type I error* and *type II error*, respectively.

At first, I was not a huge fan of the concepts; I couldn’t fathom how they could be at all useful. Over the years, though, I had a change of heart. The more I understood and encountered these errors, the more they started to excite and interest me. Seeing their real-world applications helped me go from an uninterested student to an enthusiastic teacher.

You know those teachers who frantically talk about a subject that nobody understands or wants to understand? Yeah, that’s me now! And it’s great, so I want to bring you to my level of excitement with this article by showing you how these two errors have practical implications in different and *interesting* real-life settings. Then hopefully, after reading it, you will be itching to tell your loved ones all about *type I error* and *type II error*. Lucky them!

Disclaimer: This article is not here to teach you how to distinguish between the two. If you would like to get an understanding of how to do that, we have made an explainer video on the subject **here**.

**Make Errors Your Friend**

Which error would you say is more serious?

A **false positive** *(type I error)* – when you reject a *true* null hypothesis – or a **false negative** *(type II error)* – when you accept a *false* null hypothesis?
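If it helps to see the two definitions laid out side by side, here is a minimal sketch that maps a test decision against reality. The function and its labels are my own illustration, not standard library code:

```python
def classify_outcome(null_is_true: bool, rejected_null: bool) -> str:
    """Name the outcome of a hypothesis test decision."""
    if rejected_null and null_is_true:
        return "false positive (type I error)"
    if not rejected_null and not null_is_true:
        return "false negative (type II error)"
    return "correct decision"

# Rejecting a true null hypothesis:
print(classify_outcome(null_is_true=True, rejected_null=True))
# Accepting a false null hypothesis:
print(classify_outcome(null_is_true=False, rejected_null=False))
```

The two mistakes are mirror images of each other: each one pairs the wrong decision with the opposite state of reality.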

I have read in many places that the answer to this question is: **a false positive**. I don’t believe this to be 100% true. That’s further backed by the fact that a frequently asked question at data science interviews is: ‘Give examples of situations where a false negative is the bigger problem.’

The proper scientific approach is to form a null hypothesis in a way that makes you try to reject it. So, let’s say I want to see if this particular article is performing better than the average of the other articles I have posted.

With this in mind, the null hypothesis I will choose is:

“The number of times my article is read will be *less than or equal* to the average number of reads of similar articles I have posted.”

If I reject the null hypothesis, this means one of two things.

1. *This article performed above average – great! There’s my positive.*

2. *I have made a type I error. I rejected a null hypothesis that was true. My test showed that I performed above average, but in fact, I did not. I got a false positive.*

Yes, here my false positive has a bad outcome: I will inevitably think my article is better than it is and, from now on, write all my articles in the same style, ultimately hurting my blog traffic. This will no doubt affect my career and self-esteem in a negative way.
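As a side note, a decision like ‘this article performs above average’ could come from a one-sided, one-sample t-test. Here is a minimal sketch – the daily read counts, the benchmark average, and the 1.833 cutoff (the 5% critical value for 9 degrees of freedom) are all made-up illustration, not real blog data:

```python
import math
from statistics import mean, stdev

def one_sided_t(sample, mu0):
    """t statistic for H0: population mean <= mu0 (reject for large t)."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# Hypothetical daily read counts for the new article, and the average
# daily reads of my earlier articles (both invented for illustration):
reads = [120, 135, 110, 150, 140, 125, 160, 130, 145, 155]
avg_reads_of_old_articles = 118

t = one_sided_t(reads, avg_reads_of_old_articles)
# 1.833 is the 5% critical value of the t distribution with 9 degrees of freedom
print(round(t, 2), "reject H0" if t > 1.833 else "fail to reject H0")
# prints: 3.75 reject H0
```

Rejecting here is where the risk of a type I error lives: even with a true null hypothesis, a sample this extreme will occur about 5% of the time by chance alone.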

**What about the type II error – the false negative?**

This would occur if, say, this article was a masterpiece of blog writing, but my test showed that it is not even mediocre. Of course, I would not attempt to write articles in this style any time soon. However, I am a driven person who learns from his ‘mistakes’, so I would instead try different techniques and potentially create even better writing.

This is not the best outcome – I may have missed an opportunity – but it is in no way as devastating as the **false positive**, where I believe I have done something to improve my career but have instead potentially ruined it.

Now, this is a case where the worse situation is the **false positive**. However, a crucial fact is that I STATED the null hypothesis in a specific way. Had I swapped the null and alternative hypotheses, the errors would have been swapped, too.

Let me show you.

**New null hypothesis**

“The number of times my article is read will be *more than* the average number of reads of similar articles I have posted.”

In a **false** **positive** situation, I would reject a null hypothesis that is true. So, the test would show that my masterpiece is actually mediocre or worse. Remember this phrase? That was the **false** **negative** from the previous example.

What this shows is that the two errors are interchangeable. Therefore, it is all about the design of your study; you can change things to help you avoid the bigger problem.

**Finding the positive in the… positive**

When applying for a job in the data science industry, an interview question that often crops up is:

“Can you provide examples of situations when a **false** **positive** has a better outcome than a **false** **negative?**” (and vice versa)

Of course, you *could* use the above example; however, some academics don’t particularly like the idea of swapping hypotheses. I just wanted to prove the point that everything is not so black and white when it comes to this concept.

Plus, I have plenty more examples for you that you can lay on your potential employer and show them that you really know your stuff. You’ll win them over in no time!

Some of these examples have hypotheses that cannot be switched due to science or law (see: not so black and white). They do, however, give us situations where having a **false negative** is not ideal. Of course, we’re still being a bit rebellious, but we’re doing so within science and law, so who can stop us!

**1. Pregnancy test.**

When you take a pregnancy test, you are asking: “Am I pregnant?”

In hypothesis testing, however, you have your null hypothesis:

“I am not pregnant”

Rejecting the hypothesis gives you a ‘+’. Congratulations! You are pregnant!

Accepting the hypothesis gives you a ‘–’. Sorry, better luck next time!

Biology determines this one, so no switching, I’m afraid. Tests can malfunction, though, and both errors do occur. In this case, a **false positive** would be that little ‘+’ when you are, in fact, not pregnant. A **false negative**, of course, would be the ‘–’ when you’ve got a little baby growing inside you.

This is a good example because which error is worse depends entirely on your circumstances!

Imagine someone has been trying for a child for a long time, and then, by some miracle, their pregnancy test comes back positive. They mentally prepare themselves for having a baby and, after a short period of ecstasy, somehow find out that they are, in fact, not pregnant!

This is a terrible outcome!

A false negative for someone who really does not want a child and is not ready for one – someone who, reassured by the negative result, continues to drink and take drugs – can be incredibly damaging for her, her family, and her baby.

Swap these women’s situations, however, and you have outcomes that, while not ideal, are much better.

### Trivia time

*Pregnancy tests have advanced to minimize the chances of a false negative. This improves the test: while you would be unlikely to go to a doctor to confirm a negative result, it would be sensible to do so with a positive one. There are a number of medical reasons to get a false positive, but false negatives appear only due to faulty execution of the test.*

**2. AIDS test**

Here is a more clear-cut example.

Imagine a patient taking an HIV test.

The null hypothesis is:

“The patient doesn’t have the HIV virus.”

The ramifications of a **false positive** would at first be heartbreaking for the patient; having to deal with the trauma of facing this news and telling your family and friends is not a situation you would wish upon anyone. But after going in for treatment, the doctors will find out that she does not have the virus. Again, this would not be a particularly pleasant experience, but not having HIV is ultimately a good thing.

On the other hand, a **false negative** would mean that the patient has HIV but the test shows a negative result. The implications of this are terrifying: the patient would miss out on crucial treatment and runs a high risk of spreading the virus to others.

Without much doubt, the **false** **negative** here is the bigger problem. Both for the person and for society.

### Trivia time

*Many doctors call HIV test results ‘reactive’, rather than positive, because of false positives. Before a patient is definitively declared HIV positive, a series of tests is carried out. It is not all based on a single blood sample.*

**3. Presumption of innocence**

In many countries, the law states that a suspect in a criminal case is: “Innocent until proven guilty”.

This comes from the Latin

*‘Ei incumbit probatio, qui dicit, non qui negat; cum per rerum naturam factum negantis probatio nulla sit’*.

Which translates to: “The proof lies upon him who affirms, not upon him who denies; since, by the nature of things, he who denies a fact cannot produce any proof.”*

Therefore, the null hypothesis is:

“The suspect is innocent.”

So simply enough, a **false positive **would result in an innocent party being found guilty, while a **false negative **would produce an innocent verdict for a guilty person.

If there is a lack of evidence, accepting the null hypothesis is much more likely to occur than rejecting it. Therefore, if the law were that the suspect is “guilty until proven innocent”, with the null hypothesis being “The suspect is guilty”, accepting the null hypothesis when it is false would result in many innocent people being imprisoned.

So, protecting one innocent person at the risk of letting five guilty people go free seems worth it for many people.

With the law the way it is, the general consensus is that the **false positive** would be the bigger problem. The idea of putting an innocent person behind bars is unsettling, as proving they are, in fact, innocent once convicted is not simple. While a **false negative** would result in a guilty party going free, the case could be reopened or, if the person is a serial offender, they will likely be convicted at a later date anyway.

### Trivia time

*Until recently, Mexico used the ‘guilty unless proven innocent’ system. As a result, judges would not even open most criminal cases, for fear of putting too many innocent people in jail. Since 2008, Mexico’s criminal justice system has been transitioning to ‘innocent unless proven guilty’.*

**4. Breath alcohol test**

Breathalyzer tests are a necessary nuisance. Nobody wants to be stopped for a breath alcohol test, but then nobody wants to be killed by a drunk driver either. Swings and roundabouts.

The null hypothesis:

“You are below the alcohol limit.”

Again, simply enough, a **false positive **would show that you are over the limit when you haven’t even touched an alcoholic drink. A **false negative **would register you as sober when you are drunk, or at least over the limit.

Both problems do occur, due to various factors that can influence breath alcohol samples. To counteract the problems of **false positives** (losing your license, receiving fines, or jail time), the law states that one can provide a blood or urine sample to prove their innocence (if they are innocent, that is).

With this in mind, a **false negative** is clearly the bigger problem. Allowing drunk drivers to continue driving while believing they are sober is obviously dangerous to them and others around them. Losing a few hours of your day is a small price to pay if it helps keep people who are over the limit off the road.

### Trivia time

*Common alcohol limits at which people are considered legally impaired for driving range from 0.00% to 0.08%. The most common benchmarks around the world are 0.00%, also known as zero tolerance, and 0.05%. The limit is highest in the Cayman Islands, standing at 0.1%. This doesn’t imply a higher tolerance for drunk driving, though, so before hitting the road after a bottle of Jack Daniel’s, keep in mind that the local police really do enforce the laws with frequent checks.*

**5. SPAM**

The final thing I want to talk about is SPAM emails.

Many websites will tell you something along the lines of: “Please, check your SPAM folder. The email that we just sent you may end up there.”

Email providers increasingly use data mining algorithms to filter SPAM from what is wanted. This is a topic that deserves an article of its own. However, we are talking about when emails get misplaced.

I was flabbergasted when, some weeks ago, I sent an email to my sister and her email provider marked it as SPAM! How dare they! The only explanation I could come up with was that I used my personal mailbox to email my sister’s company email address. So, the algorithm saw no evidence that my email would be desired by my sister (maybe it knows something I don’t…). Therefore, it accepted the null hypothesis:

“This email is SPAM.”

If the algorithm rejects the null hypothesis, the email goes through. A **false positive** would mean your inbox having the odd email from Nigerian princes looking to marry you, or long-lost relatives asking for your bank details so they can send you the large inheritance from your great-grandmother’s cousin’s stepdaughter’s cat.

A false negative could very well be the bigger problem. You may miss out on an invitation to an interview or the holiday snaps from your sibling, just because they are lost within the copious amounts of SPAM that you half-heartedly skim through before deleting.

This is down to personal preference, though. Some people are so infuriated by a notification going off on their phone, only to see a pointless email, that a couple of misplaced personal emails are a small price to pay.
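To make the bookkeeping concrete, here is a minimal sketch of tallying the two errors over a batch of emails, keeping the framing above where the null hypothesis is ‘this email is SPAM’. The emails and the helper function are invented for illustration:

```python
def spam_filter_errors(emails):
    """Tally errors for the null hypothesis 'this email is SPAM'.

    Each email is a pair (is_actually_spam, filter_let_it_through).
    Letting the email through (rejecting the null) when it IS spam
    is the false positive; keeping a genuine email in the SPAM
    folder (accepting a false null) is the false negative.
    """
    false_positives = sum(1 for is_spam, passed in emails if is_spam and passed)
    false_negatives = sum(1 for is_spam, passed in emails if not is_spam and not passed)
    return false_positives, false_negatives

inbox = [
    (True, True),    # spam that slipped into the inbox  -> false positive
    (True, False),   # spam correctly filtered out
    (False, True),   # genuine email correctly delivered
    (False, False),  # my email to my sister, lost in SPAM -> false negative
]
print(spam_filter_errors(inbox))  # prints: (1, 1)
```

A real filter tunes the trade-off between these two counts; tightening the filter lowers one tally while raising the other.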

### Trivia time

*Over 95% of the friend requests you send on Facebook are accepted, as you usually reach out to people you know. This is not true for SPAM accounts, and this is one of the ways Facebook detects them. Recently, however, bots have adopted a strategy of pretending to be attractive females and targeting male users as their victims. Because male users, on average, accept these friend invitations, it takes much longer to detect the bots.*

These are a few common examples of when you can encounter **false positives and false negatives**. As you can see, which error is preferable really depends on the situation itself, your personal preference, and how the study has been designed (and remember that you can change the hypothesis if need be). So, I hope you won’t follow the general assumption that **false positives** always result in bigger problems, and that you are now better equipped to produce solid examples to back that up.

Now, I bet your interest in **false positives and false negatives** has gone from 0% to 100% after reading this article. So, while the information is still fresh in your mind, check out our tutorials on hypothesis testing, sum of squares, point estimates, and types of data.

Or, if you want to mix things up, head for our Python, SQL, and Tableau tutorials.

Have a great day.

*\*F. Nan Wagoner (1917-06-01). “Wagoner’s Legal Quotes web page”. Wagonerlaw.com. Retrieved 2017-05-02.*
