Python Data Analysis and Visualization Examples: Titanic Survival Prediction (23)

Series index: Python數據分析及可視化實例目錄


1. Project Background:

Titanic is probably the most popular competition on Kaggle. Contestants build a model from the passenger data and survival outcomes in the training set, then use it to predict whether each passenger in the test set survived. There are 11 passenger features in total:

PassengerId = passenger ID

Pclass = cabin class (1st/2nd/3rd)

Name = passenger name

Sex = sex

Age = age

SibSp = number of siblings/spouses aboard

Parch = number of parents/children aboard

Ticket = ticket number

Fare = ticket fare

Cabin = cabin number

Embarked = port of embarkation
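
Before any cleaning, it is worth checking which of these columns actually contain gaps. A minimal sketch, assuming the same titanic_train.csv file loaded in the source code of section 4 (in the standard Kaggle training set, Age, Cabin and Embarked are the incomplete columns):

import pandas

titanic = pandas.read_csv("titanic_train.csv")
# Count the missing values in each column to see what needs filling
print(titanic.isnull().sum())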

Compared with other competitions, the Titanic dataset is tiny: the training and test sets together hold only 891 + 418 = 1309 rows. With so little data it is easy to overfit, so for algorithms such as gradient-boosted trees the number of trees cannot be too large, and extra care is needed during parameter tuning.
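
To make that overfitting warning concrete, one can cross-validate a gradient-boosted model at a small and a large tree count and compare the fold-averaged accuracy. A minimal sketch, again assuming titanic_train.csv; the quick preprocessing here only mirrors what section 4 does more carefully:

import pandas
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

titanic = pandas.read_csv("titanic_train.csv")
# rough numeric preprocessing, just enough for the comparison
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
titanic["Sex"] = (titanic["Sex"] == "female").astype(int)
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]

# With only 891 training rows, a large ensemble can start memorizing noise;
# watch how the cross-validated score behaves as the tree count grows.
for n_trees in (25, 500):
    model = GradientBoostingClassifier(n_estimators=n_trees, max_depth=3, random_state=1)
    scores = cross_val_score(model, titanic[features], titanic["Survived"], cv=3)
    print(n_trees, round(scores.mean(), 3))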

2. Analysis Steps:

1. Data Cleaning

2. Exploratory Visualization

3. Feature Engineering

4. Basic Modeling & Evaluation

5. Hyperparameter Tuning (see the grid-search sketch after this list)

6. Ensemble Methods
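
Step 5 is done by hand in the source code below, where two random-forest settings are simply compared; a systematic alternative is an exhaustive grid search over the same knobs. A minimal sketch, assuming the preprocessed `titanic` frame and `predictors` list built in section 4:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Grid over the same hyperparameters the source code adjusts manually
param_grid = {
    "n_estimators": [10, 50, 100],
    "min_samples_split": [2, 4, 8],
    "min_samples_leaf": [1, 2, 4],
}
search = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
search.fit(titanic[predictors], titanic["Survived"])  # assumes section 4's variables
print(search.best_params_, round(search.best_score_, 3))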

3. Analysis Results:

The results here are the cross-validation accuracies printed by the code in section 4, together with the Bokeh chart of per-feature scores it produces.

4. Source Code (WeChat public account: 海豹戰隊):

# coding: utf-8

# Dear reader: reposting this implies helping promote the WeChat account 海豹戰隊 :-)
# For the data files, follow the account 海豹戰隊 and reply: 數據

# In[1]:

import pandas

titanic = pandas.read_csv("titanic_train.csv")  # data source: search online, or add WeChat: nemoon
titanic.head(5)
# print(titanic.describe())  # summary statistics
# print(titanic.info())      # column dtypes and sizes

# In[2]:

# Fill missing ages with the median age
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
# print(titanic.describe())

# In[3]:

# print(titanic["Sex"].unique())  # only "male" and "female" here; a NaN would show up otherwise

# Encode sex numerically (male = 0, female = 1); text columns must be
# encoded before pandas statistics or any downstream modeling.
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1

# In[4]:

print(titanic["Embarked"].unique())  # the port of embarkation is unknown for a few passengers
titanic["Embarked"] = titanic["Embarked"].fillna("S")  # fill with the most common port
titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 0
titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 1
titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 2

# In[5]:

# Linear regression, used here as a simple scorer
from sklearn.linear_model import LinearRegression
# K-fold cross-validation (the original sklearn.cross_validation module was
# removed in scikit-learn 0.20; model_selection is its replacement)
from sklearn.model_selection import KFold

# Hand-picked features; the raw ticket number has little bearing on survival, so it is left out
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
# Instantiate the estimator
alg = LinearRegression()
# 3-fold split; n_splits is the number of folds. Row order is preserved
# (no shuffling), so the per-fold predictions can be stitched back together below.
kf = KFold(n_splits=3)
predictions = []
for train, test in kf.split(titanic):
    # training features
    train_predictors = titanic[predictors].iloc[train, :]
    # training labels
    train_target = titanic["Survived"].iloc[train]
    # fit on the training folds
    alg.fit(train_predictors, train_target)
    # predict on the held-out fold
    test_predictions = alg.predict(titanic[predictors].iloc[test, :])
    predictions.append(test_predictions)
# print(predictions)

# In[6]:

import numpy as np

# The three folds yield three separate arrays; stitch them back together in row order
predictions = np.concatenate(predictions, axis=0)
# Threshold at 0.5: predicted value > .5 means survived, otherwise not
predictions[predictions > .5] = 1
predictions[predictions <= .5] = 0
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
# print(accuracy)

# In[7]:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

alg = LogisticRegression(random_state=1)
# cross_val_score computes the per-fold accuracy directly and averages the three
# folds; the result differs slightly from the manual loop above
scores = cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
# print(scores.mean())

# #### Prediction

# In[8]:

titanic_test = pandas.read_csv("test.csv")
titanic_test["Age"] = titanic_test["Age"].fillna(titanic["Age"].median())
titanic_test["Fare"] = titanic_test["Fare"].fillna(titanic_test["Fare"].median())
titanic_test.loc[titanic_test["Sex"] == "male", "Sex"] = 0
titanic_test.loc[titanic_test["Sex"] == "female", "Sex"] = 1
titanic_test["Embarked"] = titanic_test["Embarked"].fillna("S")
titanic_test.loc[titanic_test["Embarked"] == "S", "Embarked"] = 0
titanic_test.loc[titanic_test["Embarked"] == "C", "Embarked"] = 1
titanic_test.loc[titanic_test["Embarked"] == "Q", "Embarked"] = 2

# In[9]:

from sklearn.ensemble import RandomForestClassifier  # random forest

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
# random_state: seed; n_estimators: number of trees;
# min_samples_split: minimum samples needed to split an internal node;
# min_samples_leaf: minimum samples required at a leaf node
alg = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2, min_samples_leaf=1)
kf = KFold(n_splits=3)
scores = cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)
# mean accuracy over the folds
print(scores.mean())

# In[10]:

# Tune the parameters: more trees, more conservative splits
alg = RandomForestClassifier(random_state=1, n_estimators=100, min_samples_split=4, min_samples_leaf=2)
kf = KFold(n_splits=3)
scores = cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)
print(scores.mean())

# In[11]:

# Add family size and name length as new features
titanic["FamilySize"] = titanic["SibSp"] + titanic["Parch"]
titanic["NameLength"] = titanic["Name"].apply(lambda x: len(x))
# titanic["NameLength"].head()

# In[12]:

import re

# Extract the title (Mr, Mrs, Miss, ...) from a passenger's name
def get_title(name):
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    if title_search:
        return title_search.group(1)
    return ""

# Frequency of each title
titles = titanic["Name"].apply(get_title)
print(titles.value_counts())
# Map the main titles to integers
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6,
                 "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 9, "Lady": 10,
                 "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2}
for k, v in title_mapping.items():
    titles[titles == k] = v
# Sanity-check the mapping
print(titles.value_counts())
# Add a Title column
titanic["Title"] = titles

# In[13]:

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked",
              "FamilySize", "Title", "NameLength"]

# Select the K best features by univariate F-test
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])

# Turn each feature's p-value into a score
scores = -np.log10(selector.pvalues_)

# Keep only the four strongest features
# predictors = ["Pclass", "Sex", "Fare", "Title"]
# alg = RandomForestClassifier(random_state=1, n_estimators=50, min_samples_split=8, min_samples_leaf=4)

# Bokeh

# In[14]:

# Which features are most predictive of survival?
from bokeh.io import output_notebook, show
output_notebook()
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, FactorRange

# In[15]:

source = ColumnDataSource({"predictors": predictors, "scores": scores})
source

# In[16]:

p = figure(title="Titanic passenger features vs. survival",
           y_range=FactorRange(factors=predictors), x_range=(0, 100), tools="save")
p.grid.grid_line_color = None
p.hbar(left=0, right="scores", y="predictors", height=0.5, color="seagreen", source=source)
show(p)

# In[17]:

from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

# Ensemble: gradient-boosted decision trees plus logistic regression
algorithms = [
    [GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3),
     ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title"]],
    [LogisticRegression(random_state=1),
     ["Pclass", "Sex", "Fare", "FamilySize", "Title", "Age", "Embarked"]]
]
kf = KFold(n_splits=3)
predictions = []
for train, test in kf.split(titanic):
    train_target = titanic["Survived"].iloc[train]
    full_test_predictions = []
    for alg, predictors in algorithms:
        # fit on the training folds
        alg.fit(titanic[predictors].iloc[train, :], train_target)
        # predict survival probabilities on the held-out fold
        test_predictions = alg.predict_proba(titanic[predictors].iloc[test, :].astype(float))[:, 1]
        full_test_predictions.append(test_predictions)
    # average the two models' probabilities, then threshold at 0.5
    test_predictions = (full_test_predictions[0] + full_test_predictions[1]) / 2
    test_predictions[test_predictions <= .5] = 0
    test_predictions[test_predictions > .5] = 1
    predictions.append(test_predictions)

predictions = np.concatenate(predictions, axis=0)
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
print(accuracy, len(predictions))
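
The listing stops at cross-validated accuracy on the training set and never writes out predictions for the Kaggle test set. A minimal continuation, sketched under the assumption that `titanic`, `titanic_test` and `RandomForestClassifier` are exactly those built above (the PassengerId/Survived column pair is Kaggle's required submission format; the output file name is arbitrary):

# Refit the tuned random forest on the full training set, then score the test set
alg = RandomForestClassifier(random_state=1, n_estimators=100,
                             min_samples_split=4, min_samples_leaf=2)
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
alg.fit(titanic[predictors], titanic["Survived"])
test_predictions = alg.predict(titanic_test[predictors])

# Kaggle expects two columns: PassengerId and the 0/1 Survived prediction
submission = pandas.DataFrame({
    "PassengerId": titanic_test["PassengerId"],
    "Survived": test_predictions,
})
submission.to_csv("kaggle_submission.csv", index=False)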

The glue language runs deep and wide;

this author can only blaze a trail or two for newcomers.

Old hands can head over to another column: Python中文社區

Newcomers can browse the series index:

Python數據分析及可視化實例目錄


Finally, please don't just bookmark this without following the account!

