Machine Learning in Action: the kNN Algorithm

From the column 經管人學數據分析

1 Algorithm Overview

Broadly speaking, the first machine learning algorithm most people encounter is k-nearest neighbors (kNN). It works as follows: there is a collection of sample data, called the training set, in which every record carries a label, so we know which class each sample belongs to. When a new, unlabeled record arrives, we compare each of its features against the corresponding features of the samples in the training set, and the algorithm extracts the class labels of the most similar samples (the nearest neighbors). Typically we consider only the k most similar samples in the training set, which is where the k in k-nearest neighbors comes from; k is usually an integer no greater than 20. Finally, the class that appears most often among those k samples is assigned to the new record.
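To make the idea concrete, here is a minimal sketch with an invented four-point training set (the points, labels, and the knn_predict helper are illustrative, not part of the article's code): measure the distance from the new point to every labeled sample, keep the k closest, and let them vote.

from collections import Counter

# Toy training set: two features per point, with known labels (hypothetical data).
samples = [((1.0, 1.1), 'A'), ((1.0, 1.0), 'A'), ((0.0, 0.0), 'B'), ((0.0, 0.1), 'B')]

def knn_predict(new_point, samples, k=3):
    # Sort samples by squared Euclidean distance to the new point.
    nearest = sorted(samples, key=lambda s: sum((x - y) ** 2 for x, y in zip(s[0], new_point)))
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((0.9, 0.9), samples))  # -> 'A'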

2 Algorithm Workflow

The pseudocode for the k-nearest neighbors algorithm is as follows (a short implementation sketch follows the list):

For each point in the dataset whose class is unknown, perform the following operations in turn:

(1) compute the distance between the current point and every point in the dataset with known class labels;

(2) sort those points in order of increasing distance;

(3) select the k points closest to the current point;

(4) determine the frequency of each class among those k points;

(5) return the class that appears most frequently among the k points as the predicted class of the current point.
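A compact NumPy sketch of exactly these five steps might look like the following. The function name classify0 is borrowed from the book, and Euclidean distance matches the full code in section 4; treat this as an illustration rather than the article's own implementation:

import numpy as np

def classify0(inX, dataSet, labels, k):
    # Step 1: Euclidean distance from inX to every known point.
    diff = dataSet - inX                      # broadcasting: (m, n) - (n,)
    distances = np.sqrt((diff ** 2).sum(axis=1))
    # Steps 2-3: sort by distance and take the indices of the k nearest.
    nearest = distances.argsort()[:k]
    # Step 4: count the class frequencies among those k points.
    counts = {}
    for idx in nearest:
        counts[labels[idx]] = counts.get(labels[idx], 0) + 1
    # Step 5: return the most frequent class.
    return max(counts, key=counts.get)

For the dating data below, inX would be one normalized test row, dataSet the normalized training rows, and labels the numeric class column.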

3 Case Study: Improving Match Suggestions on a Dating Site with the k-Nearest Neighbors Algorithm

See Chapter 2 of Machine Learning in Action; the data set comes from manning.com/books/machi

4 Code

"""Created on Dec 10, 2017kNN: k Nearest NeighborsInput: inX: vector to compare to existing dataset (1xN) dataSet: size m data set of known vectors (NxM) labels: data set labels (1xM vector) k: number of neighbors to use for comparison (should be an odd number)Output: the most popular class label"""import numpy as npdef file2matrix(filename): # 整理數據集 fr = open(filename) arraylines = fr.readlines() numberlines = len(arraylines) returnmat = np.zeros((numberlines, 3), dtype=float) classtable = [] classcol = np.zeros((numberlines, 1), dtype=int) index = 0 for line in arraylines: line = line.strip(
) everyline = line.split( ) returnmat[index, :] = everyline[0:3] classtable.append(everyline[-1]) if classtable[index] == smallDoses: classcol[index] = 2 elif classtable[index] == largeDoses: classcol[index] = 3 else: classcol[index] = 1 index = index + 1 return returnmat, classcoldef norm(features): # 歸一化特徵向量 normarray = np.zeros((features.shape[0], features.shape[1])) for i in range(features.shape[1]): maxvals = np.max(features[:, i]) minvals = np.min(features[:, i]) dist = maxvals-minvals normarray[:, i] = (features[:, i]-minvals)/dist return normarraydef classify(features, datatest, classlable, k): # kNN演算法實踐 normtrain = norm(features) normtest = norm(datatest) a = features.shape[0] b = features.shape[1] c = datatest.shape[0] votelable = [] group = {} testlable = np.zeros((c, 1)) diffmat = np.zeros((a, b)) totaldist = np.zeros((a, 1)) for i in range(c): for j in range(b): diffmat[:, j] = (normtest[i, j] - normtrain[:, j])**2 totaldist[:, 0] = np.sqrt(np.sum(diffmat, axis=1)) sortdist = np.argsort(totaldist, axis=0) for n in range(k): votelable.append(classlable[sortdist[n]][0][0]) voteset = set(votelable) for item in voteset: group[item] = votelable.count(item) lastclass = max(zip(group.values(), group.keys())) print(group) testlable[i] = lastclass[1] votelable = [] group = {} return testlableif __name__ == __main__: filepath = Ch02datingTestSet.txt returnmat, classcol = file2matrix(filepath) normarray = norm(returnmat) features, datatest = normarray[0:900, :], normarray[900:1000, :] testlable = classify(features, datatest, classcol[0:900], 10) # 計算正確率 e = 0 for y in range(100): if testlable[y] == classcol[900+y]: e = e+1 print(e/100)
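Section 1 notes that k is usually an integer no larger than 20, so a natural follow-up is to sweep several values of k and compare hold-out accuracy. Here is a short sketch reusing file2matrix, norm, and classify from above; the particular k values are an arbitrary choice, and the 900/100 split repeats the one already used in the script:

# Sweep k and report accuracy on the 100 held-out samples.
returnmat, classcol = file2matrix('Ch02/datingTestSet.txt')
normarray = norm(returnmat)
features, datatest = normarray[0:900, :], normarray[900:1000, :]
for k in (3, 5, 10, 15, 20):
    pred = classify(features, datatest, classcol[0:900], k)
    correct = sum(int(pred[y, 0]) == int(classcol[900 + y, 0]) for y in range(100))
    print(k, correct / 100)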

