Commit 20daa52 (1 parent: 2158b5b): 6 changed files with 82 additions and 2 deletions.
@@ -0,0 +1,66 @@
# Hierarchical Clustering

![](./img/hc.png)

The hierarchical clustering algorithm is conceptually very simple. Its main idea is to treat every data point in the dataset as its own cluster and then gradually merge these clusters until all data points belong to a single cluster.

> 💡 Try the algorithm yourself: <https://jydelort.appspot.com/resources/figue/demo.html>

The algorithm proceeds as follows (a minimal code sketch is given after the figure below):

1. Start with every point as its own cluster.
2. Compute the distance between every pair of clusters.
3. Merge the two closest clusters.
4. Repeat until all points belong to a single cluster.

![](./img/hc-output.png)
[https://jydelort.appspot.com/resources/figue/demo.html]

The final output is a dendrogram similar to the one shown above.
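
To make the procedure concrete, here is a minimal Python sketch of the naive agglomerative loop. This is not the demo's implementation; the names `agglomerative_cluster` and `cluster_distance` are made up for illustration, and the cluster-to-cluster distance is passed in as a function because how to define it is exactly the question addressed next.

```python
import numpy as np

def agglomerative_cluster(points, cluster_distance):
    """Naive agglomerative clustering.

    points           : (N, d) NumPy array of data points.
    cluster_distance : function taking two arrays of points and returning
                       a single distance between the two clusters.
    Returns the merge history, from which a dendrogram could be drawn.
    """
    # 1. Start with every point as its own cluster.
    clusters = [[i] for i in range(len(points))]
    merges = []

    # 4. Repeat until all points belong to a single cluster.
    while len(clusters) > 1:
        best = None
        # 2. Compute the distance between every pair of clusters.
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = cluster_distance(points[clusters[a]], points[clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        # 3. Merge the two closest clusters.
        d, a, b = best
        merges.append((clusters[a], clusters[b], d))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges

# Example usage with single linkage (defined formally below).
single_linkage = lambda U, V: min(np.linalg.norm(u - v) for u in U for v in V)
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
print(agglomerative_cluster(X, single_linkage))
```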

The algorithm above has one obvious problem: if a cluster contains many points, how should we compute its distance to another cluster?

There are three common strategies, and the choice depends on the situation:

![](./img/hc-linkage.png)
[https://www.semanticscholar.org/paper/Statistical-and-machine-learning-methods-to-analyze-The/27822318f2c8dbf5f92a4bd31d395bcca7db45cb]

## Single-Linkage (SL)

The SL strategy takes the distance between the closest pair of points, one from each cluster, as the distance between the two clusters. Its formula can be written as

$$
\text{Dist}_{SL}(U, V)=\min_{u \in U, v \in V} \text{dist}(u, v)
$$
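
As a small, made-up illustration: take $U = \{(0,0), (1,0)\}$ and $V = \{(3,0)\}$ with Euclidean distance, so the two cross-cluster distances are $3$ and $2$. Then

$$
\text{Dist}_{SL}(U, V) = \min(3, 2) = 2
$$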

## Complete-Linkage (CL)

The CL strategy takes the distance between the farthest pair of points, one from each cluster, as the distance between the two clusters. Its formula can be written as

$$
\text{Dist}_{CL}(U, V)=\max_{u \in U, v \in V} \text{dist}(u, v)
$$
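
For the same illustrative clusters:

$$
\text{Dist}_{CL}(U, V) = \max(3, 2) = 3
$$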

## Group Average

The Group Average strategy takes the average of the distances between all pairs of points, one from each cluster, as the distance between the two clusters. Its formula can be written as

$$
\text{Dist}_{GA}(U, V)=\frac{1}{|U||V|}\sum_{u \in U}\sum_{v \in V} \text{dist}(u, v)
$$
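
For the same illustrative clusters, $|U| = 2$ and $|V| = 1$, so

$$
\text{Dist}_{GA}(U, V) = \frac{3 + 2}{2 \times 1} = 2.5
$$

Since the group average is a mean over all cross-cluster pairs, it always lies between the single-linkage and complete-linkage distances.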

Among these, Group Average is the most commonly used strategy: because it averages over all cross-cluster pairs, it is more robust to noisy points than relying on a single minimum or maximum distance.
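
In practice a library implementation is usually preferred over the naive loop sketched earlier. As an illustration, SciPy's `scipy.cluster.hierarchy` module provides these linkage criteria (its `method="average"` option corresponds to the group-average rule); the data below is made up.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Made-up 2-D data: two loose blobs around (0, 0) and (5, 5).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, size=(10, 2)),
               rng.normal(5.0, 0.5, size=(10, 2))])

# Build the merge tree using group-average linkage
# ("single" and "complete" are the other two criteria above).
Z = linkage(X, method="average")

# Cut the tree into two flat clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```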

## Strengths, Weaknesses, and Caveats

**Strengths**
- Produces deterministic results
- Does not require the number of clusters to be specified in advance
- Can form clusters of arbitrary shape

**Weaknesses**
- Does not scale to large datasets; the time complexity is at least $O(N^2)$

**Caveats**
- Different similarity measures (distance formulas) lead to different results
- The algorithm imposes a hierarchical structure on the data, even when such a structure does not fit the data
@@ -1 +1,15 @@
# Clustering Algorithms

Clustering is a task that aims to divide data into different clusters (or groups) by some means, such that:
- similarity within a cluster is high
- similarity across clusters is low

Loosely speaking, it looks for natural groupings among objects.

Clustering can be done from many perspectives. For a group of people, for example, we could cluster by occupation or by gender; these are different clustering perspectives.

~~A clustering algorithm is an algorithm that solves the clustering task.~~ Generally speaking, clustering is unsupervised learning.

Typically, given a dataset $\mathcal{D} = \{\mathbf{x}_1, \mathbf{x}_2,..., \mathbf{x}_N\}$ and a distance $\text{dist}(\mathbf{x}, \mathbf{z})$ between data points (i.e., a measure of how similar two different data points are), our goal is to partition the data points into $K$ groups.
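
A common concrete choice for $\text{dist}(\mathbf{x}, \mathbf{z})$ is the Euclidean distance. A minimal sketch, assuming the data points are NumPy vectors:

```python
import numpy as np

def dist(x, z):
    """Euclidean distance between two data points (one common choice)."""
    return np.linalg.norm(x - z)

# Two made-up points for illustration.
print(dist(np.array([0.0, 0.0]), np.array([3.0, 4.0])))  # -> 5.0
```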