Andrew Ng — "Neural Networks and Deep Learning", Week 4

From the column 青銅演算法工程師日記 (Bronze Algorithm Engineer's Diary)

  • When implementing a deep neural network, a common way to check the code for bugs is to write out and walk through the matrix dimensions of every layer.

  • Forward and backward propagation of a neural network, with a block representing each layer.

  • The forward- and backward-propagation formulas for layer l: on the left for a single example, on the right the vectorized form.

  • A list of parameters versus hyperparameters.
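The dimension walkthrough mentioned above can be mechanized as a small shape check. This is a minimal sketch: the `layer_dims` example and the `W1`/`b1` key naming are assumptions here, though they follow the course's convention.

```python
import numpy as np

def check_dims(parameters, layer_dims):
    """Assert that every W[l] and b[l] has the shape the formulas require:
    W[l] is (n[l], n[l-1]) and b[l] is (n[l], 1)."""
    L = len(layer_dims) - 1  # number of layers, excluding the input layer
    for l in range(1, L + 1):
        assert parameters["W" + str(l)].shape == (layer_dims[l], layer_dims[l - 1])
        assert parameters["b" + str(l)].shape == (layer_dims[l], 1)

# Hypothetical network: input size 4, hidden layers of 5 and 3, output size 1
layer_dims = [4, 5, 3, 1]
params = {}
for l in range(1, len(layer_dims)):
    params["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
    params["b" + str(l)] = np.zeros((layer_dims[l], 1))
check_dims(params, layer_dims)  # passes silently when all shapes are consistent
```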

Assignment:

  • Initialize the parameters for a two-layer network and for an L-layer neural network.
  • Implement the forward propagation module (shown in purple in the figure below).
    • Complete the LINEAR part of a layer's forward propagation step (resulting in Z[l]).
    • We give you the ACTIVATION function (relu/sigmoid).
    • Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
    • Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer L). This gives you a new L_model_forward function.
  • Compute the loss.
  • Implement the backward propagation module (denoted in red in the figure below).
    • Complete the LINEAR part of a layer's backward propagation step.
    • We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward).
    • Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
    • Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function.
  • Finally update the parameters.
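The final update step is plain gradient descent on every W[l] and b[l]. A minimal sketch, assuming the grads dictionary uses the assignment's dW1/db1 key convention; the toy numbers are made up:

```python
import numpy as np

def update_parameters(parameters, grads, learning_rate):
    """Gradient descent: W[l] := W[l] - alpha * dW[l], and likewise for b[l]."""
    L = len(parameters) // 2  # two entries (W, b) per layer
    for l in range(1, L + 1):
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)]
    return parameters

# Toy example with one layer and alpha = 0.1
p = {"W1": np.array([[1.0]]), "b1": np.array([[0.0]])}
g = {"dW1": np.array([[2.0]]), "db1": np.array([[1.0]])}
p = update_parameters(p, g, 0.1)  # W1 -> 1.0 - 0.1*2.0 = 0.8, b1 -> -0.1
```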

initialize_parameters(n_x, n_h, n_y)
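A sketch of what this two-layer initializer is expected to return. The 0.01 weight scaling and zero biases follow the course convention; the fixed seed is only for reproducibility and is an assumption here.

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    """Two-layer network: input size n_x, hidden size n_h, output size n_y."""
    np.random.seed(1)  # reproducibility only
    return {
        "W1": np.random.randn(n_h, n_x) * 0.01,  # small random weights
        "b1": np.zeros((n_h, 1)),                # zero biases
        "W2": np.random.randn(n_y, n_h) * 0.01,
        "b2": np.zeros((n_y, 1)),
    }

params = initialize_parameters(3, 2, 1)  # W1: (2, 3), b1: (2, 1), W2: (1, 2)
```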

initialize_parameters_deep(layer_dims)
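The L-layer version generalizes the above by looping over `layer_dims`, where `layer_dims[0]` is the input size. A sketch under the same conventions:

```python
import numpy as np

def initialize_parameters_deep(layer_dims):
    """Initialize an L-layer network; layer_dims[0] is the input size."""
    parameters = {}
    for l in range(1, len(layer_dims)):
        # W[l]: (n[l], n[l-1]), b[l]: (n[l], 1)
        parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

params = initialize_parameters_deep([5, 4, 3])  # W1: (4, 5), W2: (3, 4)
```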

linear_forward(A, W, b)
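The LINEAR step computes Z[l] = W[l] A[l-1] + b[l] and caches its inputs for backpropagation. A minimal sketch with a made-up toy check:

```python
import numpy as np

def linear_forward(A, W, b):
    """Z[l] = W[l] A[l-1] + b[l]; the cache is reused by linear_backward."""
    Z = W @ A + b
    cache = (A, W, b)
    return Z, cache

# Toy check: W = [[1, 2]], A = [[1], [1]], b = [[0.5]] -> Z = 1*1 + 2*1 + 0.5
Z, _ = linear_forward(np.array([[1.0], [1.0]]),
                      np.array([[1.0, 2.0]]),
                      np.array([[0.5]]))  # Z = [[3.5]]
```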

linear_activation_forward(A_prev, W, b, activation)
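This function chains LINEAR with an activation into one [LINEAR->ACTIVATION] step. The `relu`/`sigmoid` helpers below are minimal stand-ins for the ones the assignment provides, so this is a sketch rather than the assignment's exact code:

```python
import numpy as np

def linear_forward(A, W, b):
    Z = W @ A + b
    return Z, (A, W, b)

def relu(Z):
    return np.maximum(0, Z), Z       # activation output plus cache

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z)), Z

def linear_activation_forward(A_prev, W, b, activation):
    """One [LINEAR -> ACTIVATION] step; activation is 'relu' or 'sigmoid'."""
    Z, linear_cache = linear_forward(A_prev, W, b)
    A, activation_cache = relu(Z) if activation == "relu" else sigmoid(Z)
    return A, (linear_cache, activation_cache)

# relu(2 * -1 + 0) = relu(-2) = 0
A, _ = linear_activation_forward(np.array([[-1.0]]),
                                 np.array([[2.0]]),
                                 np.array([[0.0]]), "relu")
```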
