The Unreasonable Effectiveness of Recurrent Neural Networks

There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training, my first baby model (with rather arbitrarily chosen hyperparameters) started to generate very nice-looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom held that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
