We experimented with similar ideas, using simulation and evolutionary methods to rate alternative neural-network designs. There have been considerable improvements since then, but network architecture still involves a good deal of art. The 'art' we talk about here can, at least, be readily rated by performance. In Wired:
Google’s Dueling Neural Networks Spar to Get Smarter, No Humans Required
The day Richard Feynman died, the blackboard in his classroom read: “What I cannot create, I do not understand.”
When Ian Goodfellow explains the research he’s doing at Google Brain, the central artificial intelligence lab at the internet’s most powerful company, he points to this aphorism from the iconic physicist, Caltech professor, and best-selling author. But Goodfellow isn’t referring to himself—or any other human being inside Google. He’s talking about the machines: “What an AI cannot create, it does not understand.”
Goodfellow is among the world’s most important AI researchers, and after a brief stint at OpenAI—the Google Brain competitor bootstrapped by Elon Musk and Sam Altman—he has returned to Google, building a new research group that explores “generative models.” These are systems that create photos, sounds, and other representations of the real world. Nodding to Feynman, Goodfellow describes this effort as an important path to all sorts of artificial intelligence. ...
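The "dueling" generative models in the headline are Goodfellow's generative adversarial networks (GANs): a generator that fabricates samples and a discriminator that tries to tell them from real data, each trained against the other. The article gives no details, so the following is only a toy sketch of that adversarial loop under deliberately simple assumptions: 1-D data, a linear generator, a logistic discriminator, and hand-derived gradients. Every name and hyperparameter here is an illustrative choice, not something from the article.

```python
import numpy as np

# Toy GAN sketch: generator g(z) = a*z + b vs. discriminator
# D(x) = sigmoid(w*x + c), sparring over 1-D Gaussian data.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN, REAL_STD = 4.0, 1.25   # the "real" data distribution (assumed)
a, b = 1.0, 0.0                   # generator parameters
w, c = 0.0, 0.0                   # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # Discriminator step: raise D on real samples, lower it on fakes.
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: move fakes toward where D scores them as real
    # (gradient of the non-saturating loss -log D(g(z))).
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After the two models spar, generated samples should roughly
# match the real mean, with no human labeling involved.
fake_mean = float(np.mean(a * rng.normal(size=10_000) + b))
print(round(fake_mean, 2))
```

This collapses the idea to its skeleton: neither "network" here has hidden layers, and real GANs use deep networks and automatic differentiation, but the alternating update pattern — discriminator step, then generator step — is the same game the article describes.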