Too Long; Didn't Read
Since <a href="https://medium.com/@samim" target="_blank">samim</a> published all those awesome and fun posts on using a Recurrent Neural Network to generate text (see: <a href="https://medium.com/@samim/zen-rrnn-on-meditation-machines-bbeb92aa62d3#.vn9ox6zb8" target="_blank">Zen-RNN</a>, <a href="https://medium.com/@samim/ted-rnn-machine-generated-ted-talks-3dd682b894c0" target="_blank">TED-RNN</a>, <a href="https://medium.com/@samim/obama-rnn-machine-generated-political-speeches-c8abd18a2ea0" target="_blank">Obama-RNN</a>), I’ve been looking for an opportunity to try the <a href="https://github.com/karpathy/char-rnn" target="_blank">char-rnn</a> library myself. That opportunity came up when all of the papers from this year’s Neural Information Processing Systems Conference (<a href="https://nips.cc/" target="_blank">NIPS</a> 2015) appeared online. What could be more suitable for playing around with* an RNN than a bunch of papers that talk a lot about RNNs?