A fascinating post-training analysis of how individual neurons capture specific features. Deep, technical research, but perhaps a way to squeeze new kinds of information out of deep learning: beyond analytics to embedded knowledge? Good detailed links in the article below.
Putting neural networks under the microscope
Researchers pinpoint the “neurons” in machine-learning systems that capture specific linguistic features during language-processing tasks.
By Rob Matheson | MIT News Office
Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope.
In a study that sheds light on how these systems manage to translate text from one language to another, the researchers developed a method that pinpoints individual nodes, or “neurons,” in the networks that capture specific linguistic features.
Neural networks learn to perform computational tasks by processing huge sets of training data. In machine translation, a network crunches language data annotated by humans, and presumably "learns" linguistic features, such as word morphology, sentence structure, and word meaning. Given new text, the network matches these learned features from one language to another and produces a translation. ...
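The article does not spell out the method's implementation, but pinpointing neurons of this kind is commonly done by training a simple linear "probe" on a network's activations and ranking neurons by the magnitude of their probe weights. The sketch below is a minimal, hypothetical illustration of that idea (the synthetic data, the feature "past tense", and all names are assumptions, not the researchers' actual code):

```python
import numpy as np

# Synthetic setup: 200 examples, 10 "neurons" (activation values).
rng = np.random.default_rng(0)
n_samples, n_neurons = 200, 10
activations = rng.normal(size=(n_samples, n_neurons))

# Pretend neuron 3 encodes a binary linguistic feature (e.g. past tense).
labels = (activations[:, 3] > 0).astype(float)

# Train a logistic-regression probe by plain gradient descent.
w = np.zeros(n_neurons)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(activations @ w + b)))  # predicted probability
    w -= 0.5 * (activations.T @ (p - labels)) / n_samples
    b -= 0.5 * np.mean(p - labels)

# Rank neurons by absolute probe weight: the top-ranked neuron is the
# best single-neuron candidate for "capturing" the feature.
ranking = np.argsort(-np.abs(w))
print(ranking[0])  # neuron 3 should dominate
```

In practice the activations would come from a trained translation model rather than random noise, and the labels from human linguistic annotations, but the ranking step works the same way.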
Friday, February 01, 2019