The Asimov Institute has put out a library/ontology of neural networks. Interesting: it shows many kinds of nodes I had never heard of, but also many I have. It does not really specify how the methods work or what they are for, but it does show you the potential complexity. A good thing to put away for reference and bring out when you need it. Worth an initial browse to understand the breadth of the zoo.
(I see I reported on this previously last year; I assume it has been updated since.)