Too Long; Didn't Read
<em>In </em><a href="http://fast.ai" target="_blank"><em>fast.ai</em></a><em>’s </em><a href="https://www.usfca.edu/data-institute/certificates/deep-learning-part-two" target="_blank"><em>Deep Learning Part 2</em></a><em> with </em><a href="https://twitter.com/jeremyphoward" target="_blank"><em>@jeremyphoward</em></a><em> and </em><a href="https://twitter.com/math_rachel" target="_blank"><em>@math_rachel</em></a><em>, we’ve been learning about generative models, which inspired me to experiment in some new directions. Here I’ll show you the results of those experiments and describe how they were done. To follow along fully, you should be familiar with CNNs, loss functions, and the like; if you’re not, check out </em><a href="http://course.fast.ai/" target="_blank"><em>Practical Deep Learning For Coders, Part 1</em></a><em> (the MOOC version of part 1 of this course).</em>