K-means Clustering Notes
Clustering is unsupervised learning: unlike supervised methods such as SVM, there are no class labels. The only inputs are the sample features and the desired number of clusters. K-means is one of the simplest clustering algorithms; it alternates between an Expectation (E) step and a Maximization (M) step:
Pick K points as the initial centers.
do:
1. Assign each point to its nearest center, forming K groups (E step).
2. Recompute the center of each of the K groups (M step).
until: the centers no longer change, or the maximum number of iterations is reached.
If we view clustering a data set as the evolution of a physical system, then under stable external conditions the system eventually settles into equilibrium. For example, shake a bottle of oil and water until they are well mixed, then set it aside: after a while the oil and water separate. While the bottle sits, the "center" of the water and the "center" of the oil keep adjusting until they stabilize. This is a vivid picture of how K-means works.
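The E/M loop above can be sketched compactly in NumPy. This is a minimal illustration of the idea, not the MATLAB implementation below; the function name `kmeans` and the random-point initialization (rather than the evenly spread initialization used later) are choices made here for brevity.

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers by picking K distinct sample points.
    m = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        # E step: squared distance from every point to every center (N x K),
        # then assign each point to its nearest center.
        d = ((X[:, None, :] - m[None, :, :]) ** 2).sum(axis=2)
        L = d.argmin(axis=1)
        # M step: recompute each center as the mean of its assigned points
        # (keep the old center if a cluster ends up empty).
        new_m = np.array([X[L == k].mean(axis=0) if np.any(L == k) else m[k]
                          for k in range(K)])
        if np.allclose(new_m, m):   # stop when the centers no longer move
            break
        m = new_m
    return L, m

# Two well-separated Gaussian blobs, as in the demo script further down.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (200, 2)), rng.normal(-2, 1, (200, 2))])
L, m = kmeans(X, K=2)
```

With blobs this far apart, the two recovered centers land close to (2, 2) and (-2, -2) regardless of which points are drawn as the initial centers.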
Code:

```matlab
function [L,m,v] = kmean(X,K,C)
% --------------------------------
% [L,m,v] = kmean(X,K,C)
% --------------------------------
% Inputs
%   X - N*P matrix of N P-dimensional feature vectors
%   K - number of clusters
%   C - number of iterations
% Outputs
%   L - N*1 cluster label of each feature vector
%   m - K*P centers of the K clusters
%   v - K*P per-dimension RMS deviation of each cluster
[N,P] = size(X);
if N < K
    L = []; m = []; v = [];   % not enough samples to form K clusters
    return
end
m = zeros(K,P);
Xmax = zeros(1,P);
Xmin = zeros(1,P);
for i = 1:P
    Xmax(i) = max(X(:,i));
    Xmin(i) = min(X(:,i));
    d = (Xmax(i)-Xmin(i))/(K-1);
    m(:,i) = Xmin(i):d:Xmax(i);   % spread initial centers evenly along each dimension
end
D = zeros(N,K);
L = zeros(N,1);
for i = 1:C
    L = Estep(N,K,D,L,X,m);
    m = Mstep(K,P,N,X,L,m);
end
v = zeros(K,P);
for k = 1:K
    for p = 1:P
        s = 0;
        c = 0;
        for i = 1:N
            if L(i) == k
                s = s + ( X(i,p) - m(k,p) )*( X(i,p) - m(k,p) );
                c = c + 1;
            end
        end
        v(k,p) = sqrt(s/c);
    end
end
end

% E - STEP
function L = Estep(N,K,D,L,X,m)
for i = 1:N
    for j = 1:K
        % squared Euclidean distance (note the transpose on the second factor)
        D(i,j) = ( X(i,:) - m(j,:) )*( X(i,:) - m(j,:) )';
    end
    [~,L(i)] = min(D(i,:));
end
end

% M - STEP
function m = Mstep(K,P,N,X,L,m)
for k = 1:K
    for p = 1:P
        s = 0;
        c = 0;
        for i = 1:N
            if L(i) == k
                s = s + X(i,p);
                c = c + 1;
            end
        end
        m(k,p) = s/c;
    end
end
end
```
Script:

```matlab
X = [randn(200,2) + 2*ones(200,2); randn(200,2) - 2*ones(200,2)];
[N,M] = size(X);
K = 2;
[L,m,v] = kmean(X,K,50);
figure;
plot(X(:,1),X(:,2),'k*');
figure;
subplot(121);
plot(X(:,1),X(:,2),'k*');
subplot(122);
for i = 1:N
    if L(i) == 1
        plot(X(i,1),X(i,2),'r.')
    end
    hold on
    if L(i) == 2
        plot(X(i,1),X(i,2),'b.')
    end
end
plot(m(:,1),m(:,2),'ko','MarkerFaceColor','k');
t = 0:pi/100:2*pi;
for i = 1:K
    % ellipse of one RMS deviation around each cluster center
    x = v(i,1)*cos(t) + m(i,1);
    y = v(i,2)*sin(t) + m(i,2);
    plot(x,y,'k')
end
```
Output: scatter plots of the raw data and of the two recovered clusters, with the cluster centers and per-dimension RMS ellipses (figures not preserved here).