### Definition
> Field of study that gives computers the ability to learn without being explicitly programmed. - Arthur Samuel, 1959
- He wrote a program that played **checkers**; eventually, the program played better than he did.
### Algorithm Types
#### 1. Supervised Learning
- Learns from a given sample of inputs paired with the **right answers** (labels).
- The **most widely used** type in real-life applications, and the one that has seen the most rapid development.
Examples:
![[Pasted image 20250316190734.png]]
---
Algorithms:
1. **Regression**: fit a line or curve (a function) to a set of data points in order to *predict* a continuous value.
2. **Classification**: assign an input to one of a limited set of possible outcomes (categories).
> [!tip]
> In *regression* the output is a **continuous** value, whereas in *classification* it is one of a finite set of **categories**.
>[!info]
> Both regression and classification can take **multiple input values** (features), as the sketch below shows.
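
To make the distinction concrete, here is a minimal sketch, assuming scikit-learn and made-up toy data (neither is prescribed by the course). Both models take multiple input features; the regressor predicts a continuous number, the classifier a discrete label.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy data: two input features per example (e.g. house size, room count)
X = np.array([[50, 2], [80, 3], [120, 4], [200, 5]])

# Regression: the target is a continuous value (e.g. price)
y_price = np.array([150.0, 230.0, 310.0, 480.0])
reg = LinearRegression().fit(X, y_price)
print(reg.predict([[100, 3]]))    # a continuous prediction

# Classification: the target is one of a finite set of categories
y_label = np.array([0, 0, 1, 1])  # 0 = cheap, 1 = expensive
clf = LogisticRegression().fit(X, y_label)
print(clf.predict([[100, 3]]))    # a discrete class: 0 or 1
```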
#### 2. Unsupervised Learning
- Input data is **not** labeled in unsupervised learning.
- We look for anything interesting in the input data, such as **structure** and **patterns**.
1. **Clustering**: group similar data points together.
e.g. Google News: groups similar news stories together.
e.g. clustering genes from DNA microarray data.
![[Pasted image 20250316192608.png]]
- Classification vs. clustering: classification assigns inputs to *known* labels, while clustering discovers groups in *unlabeled* data.
2. **Anomaly Detection**: find unusual data points, e.g. fraud detection.
3. **Dimensionality reduction**: compress data by representing each point with fewer numbers. (All three are sketched in code after this list.)
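
A minimal sketch of all three techniques, again assuming scikit-learn and synthetic data (my choices, not the course's). The key point: `fit` receives only the inputs `X`, never labels.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic unlabeled data: two well-separated blobs in 5 dimensions
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(5, 1, (50, 5))])

# 1. Clustering: discover the two groups without any labels
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 2. Anomaly detection: flag points that look unlike the rest (-1 = anomaly)
anomalies = IsolationForest(random_state=0).fit_predict(X)

# 3. Dimensionality reduction: compress 5 numbers per point down to 2
X_2d = PCA(n_components=2).fit_transform(X)

print(clusters[:5], anomalies[:5], X_2d.shape)  # shape: (100, 2)
```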