A technical but interesting view of the current status. We have experimented with GANs for some design-type applications, emphasizing the 'collaborative' style of such models. The excerpt below is a bit fragmented, but still instructive, so I am passing it along because I know particular readers are interested.
Generative Adversarial Networks - The Story So Far from FloydHub
Summarizing every single improvement to the 2014 vanilla GANs is about as hard as watching season 8 of Game of Thrones on repeat. So instead, I’m going to recap the key ideas behind some of the coolest results in GAN research over the years.
I’m not going to explain concepts like transposed convolutions and Wasserstein distance in detail. Instead, I’ll provide links to some of the best resources you can use to quickly learn about these concepts so that you can see how they fit into the big picture.
If you’re still reading, I’m going to assume that you know the basics of deep learning and that you know how convolutional neural networks work.
With that said, here’s the map of the GAN landscape: .....
Way back in 2014, Ian Goodfellow proposed a revolutionary idea — make two neural networks compete (or collaborate, it’s a matter of perspective) with each other.
One neural network tries to generate realistic data (note that GANs can be used to model any data distribution, but are mainly used for images these days), and the other network tries to discriminate between real data and data generated by the generator network.
The generator network uses the discriminator as a loss function and updates its parameters to generate data that looks progressively more realistic. ....
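The two-player loop the excerpt describes can be sketched on a 1-D toy problem. This is a minimal illustration under my own assumptions, not the article's code: real data drawn from a Gaussian, an affine generator G(z) = a*z + b, a logistic discriminator D(x) = sigmoid(w*x + c), and hand-derived gradients with the non-saturating generator loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not from the article): real data ~ N(4, 1),
# generator G(z) = a*z + b with z ~ N(0, 1),
# discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the non-saturating loss),
    # using the discriminator's verdict as the training signal.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's offset b should have drifted
# toward the real mean of 4, i.e. the fakes start to look "real".
```

On this toy problem the discriminator first learns to separate the two distributions, which gives the generator a gradient pushing its output mean toward the real data; at equilibrium the discriminator can no longer tell them apart, which is exactly the competition (or collaboration) described above.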
Monday, August 05, 2019