Machine learning is a vast and complex field that has inherited many of its terms from across the mathematical sciences.
It can sometimes be challenging to get your head around all the different terminologies, never mind trying to understand how everything comes together.
In this blog post, we will focus on one particular concept: the hypothesis.
While you may think this is simple, there is a little caveat in machine learning: the term means something different on the statistics side than it does on the learning side.
Don’t worry; we’ll do a full breakdown below.
You’ll learn the following:
- What Is a Hypothesis in Machine Learning?
- Is this any different from the hypothesis in statistics?
- What is the difference between the alternative hypothesis and the null?
- Why do we restrict hypothesis space in artificial intelligence?
- Example code performing hypothesis testing in machine learning
What Is a Hypothesis in Machine Learning?
In machine learning, the term ‘hypothesis’ can refer to two things.
First, it can refer to the hypothesis space: the set of all candidate functions or rules an algorithm can choose from when predicting or answering a new instance.
Second, it can refer to the traditional null and alternative hypotheses from statistics.
Since machine learning works so closely with statistics, most of the time, when someone references the hypothesis, they're referencing hypothesis tests from statistics.
Is This Any Different From The Hypothesis In Statistics?
In statistics, the hypothesis is an assumption made about a population parameter.
The statistician’s goal is to gather evidence that either rejects this assumption or fails to reject it; a hypothesis is never truly “proven” true.
This will take the form of two different hypotheses, one called the null, and one called the alternative.
Usually, you’ll establish your null hypothesis as the assumption that a population parameter equals some specific value.
For example, in Welch’s t-test of unequal variance, our null hypothesis is that the two population means we are testing are equal.
We run our statistical test, and if the p-value falls below our chosen cut-off, we reject the null hypothesis.
This would mean we have evidence that the population means of the two samples you are testing are unequal.
Usually, statisticians will use a significance level of .05 (accepting a 5% risk of rejecting a null hypothesis that is actually true) as the p-value cut-off.
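To make that 5% risk concrete, here’s a quick sketch (not from the original post) that simulates many tests where the null hypothesis is true by construction, since both samples come from the same population. Roughly 5% of the tests should still get rejected:

```python
import numpy as np
from scipy import stats

# Simulation: draw many pairs of samples from the SAME population
# (so the null hypothesis is true) and count how often we would
# wrongly reject it at the .05 level.
rng = np.random.default_rng(42)
rejections = 0
n_trials = 10_000

for _ in range(n_trials):
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    _, p_value = stats.ttest_ind(a, b, equal_var=False)
    if p_value < .05:
        rejections += 1

# Prints something close to 0.05 -- the 5% false-rejection risk
print(rejections / n_trials)
```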
What Is The Difference Between The Alternative Hypothesis And The Null?
The null hypothesis is our default assumption; we don’t try to prove it correct, we look for evidence strong enough to reject it.
The alternate hypothesis is usually the opposite of our null and is much broader in scope.
For most statistical tests, the null and alternative hypotheses are already defined.
You are then just trying to find “significant” evidence you can use to reject the null hypothesis.
These two hypotheses are easy to spot by their specific notation. The null hypothesis is usually denoted by H₀, while H₁ denotes the alternative hypothesis. For Welch’s t-test above, that would be H₀: μ₁ = μ₂ versus H₁: μ₁ ≠ μ₂.
Example Code Performing Hypothesis Testing In Machine Learning
Since there are many different hypothesis tests in machine learning and data science, we will focus on one of my favorites.
This test is Welch’s T-Test Of Unequal Variance, where we are trying to determine if the population means of these two samples are different.
There are a couple of assumptions for this test, but we will ignore those for now and show the code.
You can read more about this here in our other post, Welch’s T-Test of Unequal Variance.
```python
from scipy import stats

def welchsttest(M1, M2):
    # Welch's t-test: equal_var=False because we do not assume equal variance
    T, p_value = stats.ttest_ind(M1, M2, equal_var=False)
    print(f'T value {T},\n\np-value {round(p_value, 5)}\n')
    if p_value < .05:
        print('Reject Null Hypothesis')
    else:
        print('Fail To Reject Null')

# df is assumed to be a pandas DataFrame with 'price' and 'sqft' columns
welchsttest(df['price'], df['sqft'])
```
We see that our p-value is very low, and we reject the null hypothesis.
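If you don’t have a DataFrame handy, here’s a minimal sketch that re-uses the welchsttest function above with synthetic data (the column names and numbers are invented purely for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical data: two columns drawn from populations with
# very different means and variances
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'price': rng.normal(loc=300_000, scale=50_000, size=200),
    'sqft': rng.normal(loc=1_800, scale=400, size=200),
})

welchsttest(df['price'], df['sqft'])  # tiny p-value -> reject the null
```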
What Is The Difference Between The Biased And Unbiased Hypothesis Spaces?
The difference between the biased and unbiased hypothesis spaces is how many possible hypotheses your algorithm has available when making predictions.
The unbiased space contains every possible combination, while the biased space is limited to the training examples you’ve supplied.
Since neither of these is optimal (one is too small, the other much too big), your algorithm creates generalized rules (inductive learning) to be able to handle examples it hasn’t seen before.
Here’s an example of each:
Example of The Biased Hypothesis Space In Machine Learning
The biased hypothesis space in machine learning is a restricted subspace: your algorithm does not consider every possible combination when making predictions, only the training examples it has been given.
This is easiest to see with an example.
Let’s say you have the following data:
Happy and Sunny and Stomach Full = True
Whenever your algorithm sees those three together in the biased hypothesis space, it’ll automatically default to true.
This means when your algorithm sees:
Sad and Sunny and Stomach Full = False
It’ll automatically default to False since it didn’t appear in our subspace.
This is a greedy approach, but it has some practical applications.
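As a toy sketch of that greedy behaviour (the feature names below are just the ones from our made-up example), a biased “model” can be as simple as a lookup table that defaults to False for anything unseen:

```python
# Toy sketch of a biased hypothesis space: the model only "knows" the
# exact combinations it was trained on and defaults to False otherwise.
training_examples = {
    ('Happy', 'Sunny', 'Stomach Full'): True,
}

def biased_predict(mood, weather, stomach):
    # Unseen combinations automatically default to False
    return training_examples.get((mood, weather, stomach), False)

print(biased_predict('Happy', 'Sunny', 'Stomach Full'))  # True
print(biased_predict('Sad', 'Sunny', 'Stomach Full'))    # False -- never seen
```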
Example of the Unbiased Hypothesis Space In Machine Learning
The unbiased hypothesis space is a space where all combinations are stored.
We can re-use our example above:
Happy and Sunny and Stomach Full = True
This would start to break down as:
Happy = True
Happy and Sunny = True
Happy and Stomach Full = True
… etc
Let’s say you have four options for each of the three choices.
4 × 4 × 4 = 64
That gives 64 distinct instances, and the unbiased hypothesis space contains every possible subset of them: 2⁶⁴ hypotheses (roughly 1.8 × 10¹⁹) just for our little three-attribute problem.
This is practically impossible; the space becomes enormous.
So while it would be highly accurate, this approach has no scalability.
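The combinatorics are easy to verify with a couple of lines (assuming, as above, 3 attributes with 4 possible values each):

```python
# 3 attributes, 4 possible values each -> 4**3 = 64 distinct instances.
# The unbiased hypothesis space contains every subset of those instances.
n_instances = 4 ** 3
n_hypotheses = 2 ** n_instances
print(n_instances)   # 64
print(n_hypotheses)  # 18446744073709551616 (about 1.8e19)
```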
More reading on this idea can be found in our post, Inductive Bias In Machine Learning.
Why Do We Restrict Hypothesis Space In Artificial Intelligence?
We have to restrict the hypothesis space in machine learning. Without any restrictions, our domain becomes much too large, and we lose any form of scalability.
This is why our algorithm creates generalized rules: so it can handle examples it has never seen before, including the ones it meets in production.
This gives our algorithms a generalized approach that can handle any new example in the same format.
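One concrete way to see this restriction in practice is that many libraries expose it directly as a model setting. Here’s a hedged sketch using scikit-learn’s decision tree, where the max_depth parameter (an arbitrary choice here, and the iris dataset is just a stand-in) caps which trees the algorithm is allowed to consider:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# max_depth restricts the hypothesis space: instead of every possible
# tree, the algorithm may only consider trees at most 3 levels deep.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)
print(model.score(X, y))  # training accuracy of the restricted model
```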
Other Quick Machine Learning Tutorials
At EML, we have a ton of cool data science tutorials that break things down so anyone can understand them.
Below we’ve listed a few that are similar to this guide:
- Instance-Based Learning in Machine Learning
- Types of Data For Machine Learning
- Verbose in Machine Learning
- Generalization In Machine Learning
- Epoch In Machine Learning
- Inductive Bias in Machine Learning
- Understanding The Hypothesis In Machine Learning
- Zip Codes In Machine Learning
- get_dummies() in Machine Learning
- Bootstrapping In Machine Learning
- X and Y in Machine Learning
- F1 Score in Machine Learning