
Saturday, December 03, 2022

Mechanical Neural Networks

Quite technical, but worth a link for later study.... 

Mechanical neural networks: Architected materials that learn behaviors  ... 

BY RYAN H. LEE, ERWIN A. B. MULDER, ET AL.


Abstract

Aside from some living tissues, few materials can autonomously learn to exhibit desired behaviors as a consequence of prolonged exposure to unanticipated ambient loading scenarios. Still fewer materials can continue to exhibit previously learned behaviors in the midst of changing conditions (e.g., rising levels of internal damage, varying fixturing scenarios, and fluctuating external loads) while also acquiring new behaviors best suited for the situation at hand. Here, we describe a class of architected materials, called mechanical neural networks (MNNs), that achieve such learning capabilities by tuning the stiffness of their constituent beams similar to how artificial neural networks (ANNs) tune their weights. An example lattice was fabricated to demonstrate its ability to learn multiple mechanical behaviors simultaneously, and a study was conducted to determine the effect of lattice size, packing configuration, algorithm type, behavior number, and linear-versus-nonlinear stiffness tunability on MNN learning as proposed. Thus, this work lays the foundation for artificial-intelligent (AI) materials that can learn behaviors and properties. ...
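The abstract's core idea is that tunable beam stiffnesses play the role of ANN weights: adjust them until the lattice's mechanical response matches a target behavior. A minimal sketch of that analogy, using a hypothetical one-dimensional toy (two springs in series, tuned by gradient-free random search) rather than the paper's actual 2-D lattice or training algorithms:

```python
import random

# Toy analogy (NOT the paper's lattice): two tunable springs in series.
# Combined stiffness obeys 1/k_eff = 1/k1 + 1/k2, so displacement x = F / k_eff.
def displacement(F, k1, k2):
    k_eff = 1.0 / (1.0 / k1 + 1.0 / k2)
    return F / k_eff

def learn_stiffness(F, target_x, steps=5000, seed=0):
    """Tune spring stiffnesses so the displacement under load F hits target_x.

    Uses simple random perturbation search, standing in for the
    gradient-free optimization an MNN would perform on its beams.
    """
    rng = random.Random(seed)
    best_err = abs(displacement(F, 1.0, 1.0) - target_x)
    best_k1, best_k2 = 1.0, 1.0
    for _ in range(steps):
        # Perturb the current best stiffnesses (kept positive)
        k1 = max(0.1, best_k1 + rng.uniform(-0.1, 0.1))
        k2 = max(0.1, best_k2 + rng.uniform(-0.1, 0.1))
        err = abs(displacement(F, k1, k2) - target_x)
        if err < best_err:  # keep only improvements
            best_err, best_k1, best_k2 = err, k1, k2
    return best_err, best_k1, best_k2

err, k1, k2 = learn_stiffness(F=2.0, target_x=0.5)
```

The "learned" stiffnesses persist in the material itself, which is what lets an MNN keep exhibiting a trained behavior while conditions change.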

INTRODUCTION

Scientists have been inspired by the interconnected network of neurons that constitute biological brains and enable complex learning with unmatched speed and energy efficiency. Consequently, many have sought to leverage a variety of interconnected networks to mimic natural learning for numerous artificial-intelligent (AI) applications (1–3).

Some of the first networks developed for AI purposes were purely mathematical in form. The concepts underlying these mathematical networks, called artificial neural networks (ANNs) (Fig. 1A), were first introduced by McCulloch and Pitts (4) but were later matured by Rosenblatt (5). The mathematical formulation underlying ANNs can be diagrammed using interconnected lines, shown in blue in Fig. 1A, that represent scalar values, called weights (6), which are multiplied by input numbers that are fed into multiple layers of activation functions (6), called neurons, which ultimately produce output values. If the ANN is provided with a set of known input and output values, then the network can be trained by tuning its weights so that it accurately predicts previously unknown output values that result for any desired input values. Hornik et al. (7) proved the true AI potential of ANNs by demonstrating that, with sufficiently large numbers of neurons and layers, ANNs could learn to model almost anything by accurately mapping any number of inputs to any number of outputs. Tuning the weights of sizeable ANNs, however, proved to consume large amounts of computational time and energy using traditional digital computers. ...
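The weighted-sum-plus-activation structure described above can be sketched in a few lines. This is a generic two-layer forward pass with a sigmoid activation, with illustrative made-up weight values; it is not the specific network from the paper:

```python
import math

def neuron(x, weights, bias):
    # Weighted sum of inputs, passed through a sigmoid activation function
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def tiny_ann(x):
    """Two inputs -> two hidden neurons -> one output neuron."""
    h1 = neuron(x, [0.5, -0.3], 0.1)    # hidden neuron 1
    h2 = neuron(x, [0.8, 0.2], -0.4)    # hidden neuron 2
    return neuron([h1, h2], [1.0, -1.0], 0.0)  # output neuron

y = tiny_ann([1.0, 2.0])  # output lies in (0, 1) due to the sigmoid
```

Training, as the paragraph notes, means adjusting those weight values until known inputs map to known outputs; doing this for large networks is what historically strained digital computers.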
