A Keras (TensorFlow) Implementation of NVIDIA's Self-Driving Algorithm

NVIDIA's End-to-End deep learning algorithm for self-driving cars

A previous post mentioned implementing this with TensorFlow, but the code I wrote back then has gone out of date, so I rewrote it in Keras, still using TensorFlow as the backend.

This post uses Udacity's open-source self-driving simulator to generate training data and test the result:

udacity/self-driving-car-sim

For how to use the simulator, see:

udacity/CarND-Behavioral-Cloning-P3

First, the model architecture diagram:
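The diagram aside, the layer output sizes can also be worked out by hand. Here is a small sketch (the `conv_out` helper is mine, not part of the script), assuming Keras's default `'valid'` padding and the cropping layer left commented out, as in the code below:

```python
def conv_out(size, kernel, stride=1):
    """Output size of a convolution with 'valid' padding (no padding)."""
    return (size - kernel) // stride + 1

# Input frames are 160x320; trace the five conv layers of the network
h, w = 160, 320
for kernel, stride in [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1)]:
    h = conv_out(h, kernel, stride)
    w = conv_out(w, kernel, stride)
    print(h, w)  # 78 158 -> 37 77 -> 17 37 -> 15 35 -> 13 33

# The last conv layer has 64 filters, so Flatten() produces
# 64 * h * w features feeding the Dense(100) layer.
print(64 * h * w)  # 27456
```

This also explains why the network is trained at a fixed input size: the flattened feature count is baked into the first fully connected layer.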

Honestly, the code is even clearer than the diagram. I have to say Keras is a wonderful tool; models written with it come out very clean and tidy.

APIs used:

Guide to the Sequential model

Convolutional Layers conv2d - Keras Documentation

Convolutional Layers cropping2d - Keras Documentation

Core Layers flatten - Keras Documentation

Core Layers dense - Keras Documentation

Core Layers lambda - Keras Documentation

Core Layers dropout - Keras Documentation

```python
# Data-loading dependencies
import csv
import cv2
import numpy as np

# Model-building dependencies
from keras.models import Sequential
from keras.layers import Conv2D, Cropping2D
from keras.layers import Flatten, Dense, Lambda, Dropout

# Data-handling dependencies
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split

# Load the training data
LINES = []

# Correction value used to adjust the steering angle
# for the left/right camera images
CORRECTION = 0.2

# Read the required fields from the dataset
with open('./data/driving_log.csv') as csvfile:
    READER = csv.reader(csvfile)
    for line in READER:
        LINES.append(line)

# Split into a training set and a validation set
TRAIN_SAMPLES, VALIDATION_SAMPLES = train_test_split(LINES, test_size=0.2)

# Feed the training data through a generator to avoid running out of memory
def generator(samples, batch_size=32):
    """
    Generate training samples.
    """
    num_samples = len(samples)
    while 1:  # Loop forever so the generator never terminates
        shuffle(samples)
        for offset in range(0, num_samples, batch_size):
            batch_samples = samples[offset:offset + batch_size]

            images = []
            measurements = []
            for batch_sample in batch_samples:
                # Use all 3 cameras (columns 0-2: center, left, right)
                measurement = float(batch_sample[3])
                measurement_left = measurement + CORRECTION
                measurement_right = measurement - CORRECTION

                for i in range(3):
                    source_path = batch_sample[i]
                    filename = source_path.split('/')[-1]
                    current_path = './data/IMG/' + filename
                    image_bgr = cv2.imread(current_path)
                    # Big OpenCV pitfall: imread returns BGR, so convert to RGB
                    image = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
                    # image = cv2.resize(image, (80, 160), cv2.INTER_NEAREST)
                    images.append(image)
                    # Augment with a horizontally flipped copy
                    images.append(cv2.flip(image, 1))

                # Flipping the image negates the steering angle
                measurements.extend([measurement,
                                     measurement * -1,
                                     measurement_left,
                                     measurement_left * -1,
                                     measurement_right,
                                     measurement_right * -1])

            x_train = np.array(images)
            y_train = np.array(measurements)

            yield shuffle(x_train, y_train)

TRAIN_GENERATOR = generator(TRAIN_SAMPLES, batch_size=32)
VALIDATION_GENERATOR = generator(VALIDATION_SAMPLES, batch_size=32)

# Input frame dimensions
ROW, COL, CH = 160, 320, 3

# Model
MODEL = Sequential()
# Normalize the images
MODEL.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(ROW, COL, CH)))
# Crop the images, keeping only the road-related portion
# MODEL.add(Cropping2D(cropping=((60, 20), (0, 0))))
# Convolutional layers for feature extraction
MODEL.add(Conv2D(24, 5, strides=(2, 2), activation='relu'))
MODEL.add(Dropout(0.7))
MODEL.add(Conv2D(36, 5, strides=(2, 2), activation='relu'))
MODEL.add(Conv2D(48, 5, strides=(2, 2), activation='relu'))
MODEL.add(Conv2D(64, 3, activation='relu'))
MODEL.add(Conv2D(64, 3, activation='relu'))
# Add Dropout here if the model overfits
# MODEL.add(Dropout(0.8))
# Fully connected layers
MODEL.add(Flatten())
MODEL.add(Dense(100))
MODEL.add(Dense(50))
MODEL.add(Dense(10))
MODEL.add(Dense(1))

MODEL.compile(loss='mse', optimizer='adam')
# steps_per_epoch counts batches, not samples, so divide by the batch size
MODEL.fit_generator(TRAIN_GENERATOR,
                    steps_per_epoch=len(TRAIN_SAMPLES) // 32,
                    validation_data=VALIDATION_GENERATOR,
                    validation_steps=len(VALIDATION_SAMPLES) // 32,
                    epochs=3)

# Save the model
MODEL.save('model.h5')
```
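The per-row augmentation inside the generator can be isolated into a small sketch (the `augment_labels` helper name is mine, not part of the script). It relies on the fact that mirroring a frame left-right negates the correct steering angle, which doubles the data from each of the three cameras:

```python
import numpy as np

CORRECTION = 0.2  # same left/right camera correction as in the script

def augment_labels(measurement, correction=CORRECTION):
    """Six steering labels per CSV row: center, left and right cameras,
    each paired with a horizontally flipped copy of its image."""
    left = measurement + correction
    right = measurement - correction
    return [measurement, -measurement, left, -left, right, -right]

# cv2.flip(image, 1) is a horizontal mirror, i.e. it reverses the
# column axis of the image array:
image = np.arange(12).reshape(2, 2, 3)
assert np.array_equal(image[:, ::-1, :][0, 0], image[0, 1])

print(augment_labels(0.1))
```

So every line of `driving_log.csv` yields six training pairs, which is why `measurements` grows six entries per sample while the camera loop appends two images per iteration.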

Then the car can drive itself; connect the saved model to the simulator with the drive.py script from the CarND-Behavioral-Cloning-P3 repo above (`python drive.py model.h5`):

Tips for better training results:

  1. When generating data in the simulator, use a joystick or mouse if possible; the resulting steering data is much smoother.
  2. Keep the car in the middle of the lane as much as possible; if your driving skills aren't great, just drive slowly.
  3. Note that OpenCV reads images in BGR channel order. This is a big pitfall: at first the trained car kept steering into the water, and only later did I discover the colors were reversed.
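The channel-order pitfall from tip 3 is easy to demonstrate with pure NumPy; reversing the channel axis is, for this purpose, what `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does:

```python
import numpy as np

# One "pixel" as OpenCV's imread stores it: channels in B, G, R order.
bgr_pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)  # pure blue in BGR

# Converting BGR to RGB amounts to reversing the channel axis.
# Skip this step and the network is trained on blue-for-red imagery.
rgb_pixel = bgr_pixel[..., ::-1]

print(rgb_pixel[0, 0].tolist())  # [0, 0, 255]
```

Forgetting the conversion means training on images where the blue lake and the brown road have swapped hues, which is exactly why the car kept heading for the water.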

Recommended reading:

Docker: a one-stop solution for configuring a deep learning environment
Time-series forecasting and analysis with the most popular framework, TensorFlow
Caffe study notes: how to create a custom Layer
How to use batch normalization in TensorFlow?
