Feedforward and Backward Propagation in Gated Recurrent Unit (GRU)
In this post, I’ll discuss how to implement a simple Recurrent Neural Network (RNN), specifically the Gated Recurrent Unit (GRU). I’ll present the feedforward propagation of a GRU cell at a single time step and then derive the formulas for the parameter gradients using Backpropagation Through Time (BPTT).
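As a preview, here is a minimal NumPy sketch of a single-time-step GRU forward pass. The function name, the layout of the parameters (weights applied to the concatenation of the previous hidden state and the input), and the gating convention $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$ are assumptions on my part; the notation in the derivations below may differ slightly, but the structure of the computation is the same.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell_forward(x_t, h_prev, params):
    """Single-time-step forward pass of a standard GRU cell.

    x_t:    input vector at time t, shape (n_x, 1)
    h_prev: hidden state from the previous step, shape (n_h, 1)
    params: dict with weights W_z, W_r, W_h of shape (n_h, n_h + n_x)
            and biases b_z, b_r, b_h of shape (n_h, 1)
            (hypothetical names, chosen for this sketch)
    """
    concat = np.vstack((h_prev, x_t))                        # [h_{t-1}; x_t]

    z_t = sigmoid(params["W_z"] @ concat + params["b_z"])    # update gate
    r_t = sigmoid(params["W_r"] @ concat + params["b_r"])    # reset gate

    # candidate hidden state uses the reset-gated previous state
    concat_r = np.vstack((r_t * h_prev, x_t))
    h_tilde = np.tanh(params["W_h"] @ concat_r + params["b_h"])

    # interpolate between the previous state and the candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde

    # cache intermediate values; they are reused during BPTT
    cache = (z_t, r_t, h_tilde, concat, concat_r)
    return h_t, cache
```

The cached gate activations and concatenated inputs are exactly the quantities that reappear when we derive the gradients with BPTT, which is why the forward pass returns them alongside the new hidden state.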