Showing posts with label Neuromorphic.

Saturday, March 04, 2023

Fungal Motherboards?

Chemical or living-tissue computers that can be linked to electronic components? The ultimate example of biochemical collaboration. Fungi computing? Embedded regeneration? Unconventional, to be sure.

ACM TECHNEWS

Inside the Lab Growing Mushroom Computers  By Popular Science, March 3, 2023

A mushroom motherboard.

With fungal computers, mycelia — the branching, web-like root structure of the fungus — act as conductors, as well as the electronic components of a computer.

The Unconventional Computing Laboratory (UCL) of the U.K.'s University of the West of England focuses on the development of chemical or living computers that can interface with hardware and software.

Examples include fungal computers that utilize mycelium as electronics and conductors in order to enable new forms of information processing and analysis.

The researchers found mycelium with different geometrical arrangements can compute different logical functions and can map circuits based on received electrical responses; UCL's Andrew Adamatzky suggested this could lead to neuromorphic circuits.

Fungal computers' self-regenerative abilities could improve fault tolerance, reconfigurability, and energy efficiency, despite their inability to match the speeds of current computers.

From Popular Science   https://www.popsci.com/technology/unconventional-computing-lab-mushroom/ 

Friday, December 02, 2022

We Will See a Completely New Type of Computer, Says AI Pioneer Hinton

Hinton predicts a 'mortal' neuromorphic computer? Ready to sign up and test. When?

We Will See a Completely New Type of Computer, Says AI Pioneer Hinton

In ZDNet, Tiernan Ray, December 1, 2022

Artificial intelligence pioneer and 2018 ACM A.M. Turing award recipient Geoffrey Hinton envisions a "mortal" neuromorphic computer combining hardware and software. Speaking at the Neural Information Processing Systems conference, Hinton said mortal computation means "the knowledge that the system has learned and the hardware, are inseparable." Hinton said such computers could be grown, forgoing costly chip fabrication, and he imagines they will be "used for putting something like GPT-3 in your toaster for $1, so running on a few watts, you can have a conversation with your toaster." He suggested a forward-forward neural network model, eliminating the backpropagation common to most neural networks, might suit mortal computation hardware.

Article
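
For the curious, the forward-forward idea can be sketched in a few lines of code: each layer gets two forward passes, one on real ("positive") data and one on fabricated ("negative") data, and nudges its weights so a local "goodness" score (here, the sum of squared activations) rises for the former and falls for the latter, with no backpropagation through the stack. Below is a minimal NumPy sketch of that idea; the goodness measure, hyperparameters, and toy data are my own simplifications, not Hinton's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with the forward-forward recipe: raise 'goodness'
    (sum of squared ReLU activations) for positive data, lower it for negative."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr, self.threshold = lr, threshold

    def _activations(self, x):
        # Normalize the input so only the pattern of activity, not its length,
        # is passed into the layer.
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return xn, np.maximum(0.0, xn @ self.W)

    def forward(self, x):
        return self._activations(x)[1]

    def train_step(self, x_pos, x_neg):
        for x, label in ((x_pos, 1.0), (x_neg, 0.0)):
            xn, h = self._activations(x)
            goodness = (h ** 2).sum(axis=1)
            # Probability the layer assigns to "this data is positive".
            p = 1.0 / (1.0 + np.exp(-(goodness - self.threshold)))
            grad_g = label - p                       # push goodness up or down
            # d(goodness)/dW = 2 * xn^T h, weighted per example by grad_g.
            self.W += self.lr * xn.T @ (2.0 * h * grad_g[:, None]) / len(x)

# Toy usage: two Gaussian blobs play the roles of positive and negative data.
layer = FFLayer(8, 16)
x_pos = rng.normal(+0.5, 1.0, (64, 8))
x_neg = rng.normal(-0.5, 1.0, (64, 8))
for _ in range(300):
    layer.train_step(x_pos, x_neg)
g_pos = (layer.forward(x_pos) ** 2).sum(axis=1).mean()
g_neg = (layer.forward(x_neg) ** 2).sum(axis=1).mean()
print(f"mean goodness  positive: {g_pos:.2f}   negative: {g_neg:.2f}")
```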

Wednesday, August 24, 2022

A Neuromorphic Chip for AI on the Edge

Chips for AI

 A Neuromorphic Chip for AI on the Edge

UC San Diego News Center

By Ioana Patringenaru, August 17, 2022

An international team of researchers created the NeuRRAM neuromorphic chip to compute directly in memory and run artificial intelligence (AI) applications with twice the energy efficiency of platforms for general-purpose AI computing. The chip moves AI closer to running on edge devices, untethered from the cloud; it also produces results as accurate as conventional digital chips, and supports many neural network models and architectures. "The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility," said former University of California, San Diego researcher Weier Wan. ... '
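
The "compute directly in memory" part is easiest to picture as a crossbar: weights sit as analog conductances at the row-column crossings, input voltages drive the rows, and each column wire sums the resulting currents (Ohm's law plus Kirchhoff's current law), so a whole matrix-vector multiply happens where the weights are stored. A toy numerical sketch of that picture follows; the conductance values and the noise term standing in for analog imperfection are invented for illustration, not NeuRRAM's actual circuit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Weights stored as conductances G (siemens) at the crossbar junctions:
# 4 input rows x 3 output columns.
G = rng.uniform(1e-6, 1e-5, size=(4, 3))

def crossbar_mvm(G, v_in, noise_std=0.02):
    """Analog matrix-vector multiply: apply voltages to the rows and read the
    summed column currents I = G^T v (Kirchhoff's current law). A multiplicative
    noise term stands in for device variation and read noise."""
    I_ideal = G.T @ v_in
    return I_ideal * (1.0 + noise_std * rng.standard_normal(I_ideal.shape))

v = np.array([0.2, 0.0, 0.5, 0.1])   # input voltages (volts)
print("ideal column currents :", G.T @ v)
print("analog column currents:", crossbar_mvm(G, v))
```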

Saturday, March 12, 2022

Lego Robot with an Organic 'Brain' Learns to Navigate a Maze

Mention of Carver Mead, whom I followed for some time.

Lego Robot with an Organic 'Brain' Learns to Navigate a Maze  By Scientific American, January 28, 2022

In the winter of 1997 Carver Mead lectured on an unusual topic for a computer scientist: the nervous systems of animals, such as the humble fly. Mead, a researcher at the California Institute of Technology, described his earlier idea for an electronic problem-solving system inspired by nerve cells, a technique he had dubbed "neuromorphic" computing. A quarter-century later, researchers have designed a carbon-based neuromorphic computing device—essentially an organic robot brain—that can learn to navigate a maze.

A neuromorphic chip memorizes information similarly to the way an animal does. When a brain learns something new, a group of its neurons rearrange their connections so they can communicate more quickly and easily. As a common saying in neuroscience goes, "Neurons that fire together wire together." When a neuromorphic chip learns, it rewires its electric circuits to save the new behavior like a brain does to save a memory.

The idea of brainlike computation has been around for a while. But Paschalis Gkoupidenis of the Max Planck Institute for Polymer Research in Mainz, Germany, and his neuromorphic research team are pioneers in crafting this technology from organic materials. To build their chip, the researchers used long chains of carbon-based molecules called polymers, which are soft and, in some ways, behave similarly to living tissues. In order to let their material carry an electric charge like real neurons, which are energy-efficient and operate in a watery medium, the scientists coated the organic material with an ion-rich gel. This provided "more degrees of freedom to mimic biological processes," Gkoupidenis says.

From Scientific American
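
The "fire together, wire together" rule quoted above has a simple mathematical form: strengthen a connection in proportion to the product of pre- and postsynaptic activity. Here is a textbook-style sketch, with a small decay term to keep the weights bounded; this is the generic Hebbian rule, not the specific plasticity mechanism of the polymer device.

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.01, decay=0.001):
    """Hebbian learning: strengthen W[i, j] when presynaptic unit i and
    postsynaptic unit j are active together; a small decay keeps weights bounded."""
    return W + lr * np.outer(pre, post) - decay * W

rng = np.random.default_rng(1)
W = np.zeros((4, 3))
for _ in range(500):
    pre = (rng.random(4) < 0.3).astype(float)    # random presynaptic spikes
    post = (rng.random(3) < 0.3).astype(float)   # random postsynaptic spikes
    pre[0] = post[0] = 1.0                       # these two always fire together
    W = hebbian_update(W, pre, post)

print(np.round(W, 2))   # W[0, 0] ends up much stronger than the other weights
```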

Thursday, February 17, 2022

Advances in Brain Inspired Computing

Computers taking hints from brain designs? Neuromorphic. Note that spiking neural networks are new to me; I will examine them.

AI Overcomes Stumbling Block on Brain-Inspired Hardware

Allison Whitten, Contributing Writer,   Quanta Magazine

Algorithms that use the brain’s communication signal can now work on analog neuromorphic chips, which closely mimic our energy-efficient brains.

Today’s most successful artificial intelligence algorithms, artificial neural networks, are loosely based on the intricate webs of real neural networks in our brains. But unlike our highly efficient brains, running these algorithms on computers guzzles shocking amounts of energy: The biggest models consume nearly as much power as five cars over their lifetimes.

Enter neuromorphic computing, a closer match to the design principles and physics of our brains that could become the energy-saving future of AI. Instead of shuttling data over long distances between a central processing unit and memory chips, neuromorphic designs imitate the architecture of the jelly-like mass in our heads, with computing units (neurons) placed next to memory (stored in the synapses that connect neurons). To make them even more brain-like, researchers combine neuromorphic chips with analog computing, which can process continuous signals, just like real neurons. The resulting chips are vastly different from the current architecture and computing mode of digital-only computers that rely on binary signal processing of 0s and 1s.

With the brain as their guide, neuromorphic chips promise to one day demolish the energy consumption of data-heavy computing tasks like AI. Unfortunately, AI algorithms haven’t played well with the analog versions of these chips because of a problem known as device mismatch: On the chip, tiny components within the analog neurons are mismatched in size due to the manufacturing process. Because individual chips aren’t sophisticated enough to run the latest training procedures, the algorithms must first be trained digitally on computers. But then, when the algorithms are transferred to the chip, their performance breaks down once they encounter the mismatch on the analog hardware.

Now, a paper published last month in the Proceedings of the National Academy of Sciences has finally revealed a way to bypass this problem. A team of researchers led by Friedemann Zenke at the Friedrich Miescher Institute for Biomedical Research and Johannes Schemmel at Heidelberg University showed that an AI algorithm known as a spiking neural network — which uses the distinctive communication signal of the brain, known as a spike — could work with the chip to learn how to compensate for device mismatch. The paper is a significant step toward analog neuromorphic computing with AI.  ... "
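
The "spike" referred to here is usually modeled with something like a leaky integrate-and-fire (LIF) neuron: a membrane potential integrates input current, leaks back toward rest, and emits a binary spike when it crosses a threshold, then resets. Below is a minimal discrete-time sketch with arbitrary constants; on the analog hardware discussed above, each neuron's effective leak and threshold would deviate slightly from nominal, which is exactly the device mismatch the training method has to absorb.

```python
import numpy as np

def lif_run(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron; return its binary spike train."""
    v, spikes = 0.0, []
    for I in input_current:
        v += dt * (-v / tau + I)        # leak toward rest plus input drive
        if v >= v_thresh:               # threshold crossing: emit a spike
            spikes.append(1)
            v = v_reset                 # reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(3)
drive = 0.06 + 0.02 * rng.standard_normal(200)   # noisy, roughly constant input
print("spikes in 200 steps:", lif_run(drive).sum())
```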

Saturday, September 05, 2020

Intelligent Sensing

More examples of integrating sensors for learning.  Adapting. 

Intelligent Sensing Abilities for Robots to Carry Out Complex Tasks
National University of Singapore
July 15, 2020

Researchers at the National University of Singapore (NUS) have developed a sensory integrated artificial brain system that mimics biological neural networks in order to make robots smarter and more intuitive. The system features an artificial skin sensor that can identify an object's shape, texture, and hardness 10 times faster than the blink of an eye. Intel's Loihi neuromorphic research chip processes sensory data from the artificial skin. The researchers used a robotic hand equipped with the artificial skin to read Braille; tactile data was passed to the Loihi chip, which was more than 92% accurate in classifying the Braille letters. By combining both vision and touch data in a spiking neural network, the robot was able to classify objects and detect object slippage. Said NUS' Harold Soh, "[A] neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception." ... '

Monday, August 10, 2020

Optimizing Neural Networks on a Brain-Inspired Computer

Considerable challenge.   Will the biomimicry provide enough value to adjust our methods to what we now know about the brain?

How to Optimize Neural Networks on a Brain-Inspired Computer
HPCwire
July 28, 2020

A study by scientists at Germany’s Heidelberg University and the Max Planck Institute for Dynamics and Self-Organization reveals how "critical states" can be used to optimize artificial neural networks running on brain-inspired neuromorphic hardware. Critical states are the points at which systems can quickly and fundamentally change their overall characteristics. Although they are widely assumed to be optimal for computation in recurrent neural networks, the researchers found that criticality is not beneficial for every task. In an experiment performed on a prototype of the analog neuromorphic BrainScaleS-2 chip, the researchers found that changing input strength permits easy adjustment of the distance to criticality. They also showed a clear relationship between criticality and task performance, finding that only complex, memory-intensive tasks benefited from criticality. ... "
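
A standard toy picture of "distance to criticality" is a branching process: each active unit at one time step triggers on average m new ones at the next, plus external input h. The point m = 1 is critical, and, echoing the result above, changing the input strength changes how far the measured dynamics sit from it. The sketch below is only that toy picture, not the BrainScaleS-2 experiment.

```python
import numpy as np

def branching_activity(m, h, steps=2000, seed=4):
    """Activity a[t+1] ~ Poisson(m * a[t] + h): m < 1 is subcritical (activity
    decays quickly), m close to 1 is near-critical (long-lived fluctuations)."""
    rng = np.random.default_rng(seed)
    a = np.zeros(steps)
    for t in range(steps - 1):
        a[t + 1] = rng.poisson(m * a[t] + h)
    return a

for m in (0.6, 0.9, 0.99):
    a = branching_activity(m, h=1.0)
    print(f"m={m:<5} mean activity={a.mean():7.2f}  "
          f"(stationary value h/(1-m)={1.0 / (1 - m):7.2f})")
```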

Friday, July 24, 2020

Intel Building Artificial Skin

Modeling chips more closely to biological neurons.

Researchers use Intel’s neuromorphic chip to build artificial skin  By  Maria Deutscher in SiliconAngle

Intel Corp. today revealed that researchers are using its neuromorphic chips to develop artificial skin for robots, in a project representing one of the first practical applications of the technology.

Intel, the leading maker of central processing units, is researching alternative chip architectures to help it maintain its long-term competitive advantage. Neuromorphic computing is one of the areas where the company is active. The term refers to an emerging class of chips that have transistors modeled after neurons to help them run artificial intelligence models faster.  ... " 

See also:
Neuromorphic Chips Take Shape  By Samuel Greengard
Communications of the ACM, August 2020, Vol. 63, No. 8, Pages 9-11
DOI: 10.1145/3403960 ...

Monday, July 13, 2020

Better Neuromorphic Computing Leading to Intelligence?

A look at the history of neuromorphic computing, or the use of some 'forms' of the biological brain to provide 'intelligence'. Artificial neural network methods already do this, but relatively weakly. Mentioned is Terry Sejnowski and his current work at the Salk Institute. We connected with Sejnowski when he was closely looking at diagnostic systems in the 80s. Quite a general overview here, and you have to sign in for the full article. See also an outline of Sejnowski's work at Salk; worth following, as I do.

Neuromorphic computing finds new life in machine learning     by 7wData

Efforts have been underway for forty years to build computers that might emulate some of the structure of the brain in the way they solve problems. To date, they have shown few practical successes. But hope for so-called neuromorphic computing springs eternal, and lately, the endeavor has gained some surprising champions.

The research lab of Terry Sejnowski at The Salk Institute in La Jolla this year proposed a new way to train "spiking" neurons using standard forms of machine learning, called "recurrent neural networks," or "RNNs."

And Hava Siegelmann, who has been doing pioneering work on alternative computer designs for decades, proposed along with colleagues a system of spiking neurons that would perform what's called "unsupervised" learning.

Neuromorphic computing is an umbrella term given to a variety of efforts to build computation that resembles some aspect of the way the brain is formed. The term goes back to work by legendary computing pioneer Carver Mead in the early 1980s, who was interested in how the increasingly dense collections of transistors on a chip could best communicate. Mead's insight was that the wires between transistors would have to achieve some of the efficiency of the brain's neural wiring. ... "

Saturday, June 13, 2020

Aiming to put AI in Your Pocket

Mentioned this previously, now more in Singularity Hub on the topic:

MIT’s Tiny New Brain Chip Aims for AI in Your Pocket
By Jason Dorrier - Jun 11, 2020 in Singularity Hub

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.

In a recent paper in Nature Nanotechnology, a team of MIT scientists say their tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s Shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.” ...   " 

Tuesday, June 09, 2020

Memristor Chips Emerge for Local Intelligence

The idea has been around for a while. Getting closer to real synapses in real brains, but still a considerable way to go. As mentioned, a means to produce IoT capabilities on local devices. Paper and technical detail pointed to. Requires considerable materials innovation.

Engineers put tens of thousands of artificial brain synapses on a single chip
The design could advance the development of small, portable AI devices.

Jennifer Chu | MIT News Office
June 8, 2020

MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.

The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.

Their results, published today in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”  ... " 
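
As a cartoon of what "remembering" and reproducing an image in a memristor array means: each pixel is written as a high or low conductance state, and reading the array back recovers the image up to read noise. The numbers below are invented for illustration and say nothing about the alloyed-memristor physics in the MIT work.

```python
import numpy as np

rng = np.random.default_rng(5)
G_LOW, G_HIGH = 1e-6, 1e-4      # conductance states used for pixel = 0 / 1 (siemens)

def write_image(img):
    """Program a binary image into an array of memristor conductances."""
    return np.where(img > 0, G_HIGH, G_LOW)

def read_image(G, read_noise=0.05):
    """Read each conductance back (with noise) and threshold to recover pixels."""
    measured = G * (1.0 + read_noise * rng.standard_normal(G.shape))
    return (measured > np.sqrt(G_LOW * G_HIGH)).astype(int)

image = (rng.random((8, 8)) > 0.5).astype(int)
recovered = read_image(write_image(image))
print("fraction of pixels recovered correctly:", (recovered == image).mean())
```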

Thursday, March 19, 2020

Detecting Odors

I often mention research in this area because we spent considerable time looking at the idea of an 'artificial nose' to effectively test quality in products, especially coffee, but in other areas as well. Here is more in the area. Note the use of 'neuromorphic', or brain-inspired, chips to address the problem.

Intel Trains Neuromorphic Chip to Detect Odors
in VentureBeat
By Kyle Wiggers

Intel and Cornell University researchers have trained Intel's Loihi neuromorphic processor to identify 10 materials from their odors, demonstrating how neuromorphic computing could be applied to detect precursor smells and potentially find explosives and narcotics, diagnose diseases, and notice signs of smoke and carbon monoxide. The chip was trained by configuring the circuit schematic of biological olfaction, using a dataset compiling the activity of 72 chemical sensors in response to various scents. The researchers said the method kept Loihi's memory of the scents intact, and the chip has "superior" recognition accuracy compared with conventional techniques. Said Intel's Nabil Imam, "This work is a prime example of contemporary research at the crossroads of neuroscience and artificial intelligence and demonstrates Loihi's potential to provide important sensing capabilities that could benefit various industries." ... '
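
As a rough stand-in for what "identifying materials from their odors" looks like as a computation (not the spiking circuit Intel actually configured on Loihi): treat each sniff as a vector of 72 sensor readings and assign it to the nearest learned odor template. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
N_SENSORS, N_ODORS = 72, 10

# Hypothetical ground truth: each odor has a characteristic 72-sensor response.
odor_profiles = rng.uniform(0.0, 1.0, (N_ODORS, N_SENSORS))

def sniff(odor_id, noise=0.1):
    """One noisy reading of all 72 sensors for a given material."""
    return odor_profiles[odor_id] + noise * rng.standard_normal(N_SENSORS)

# "Training": average a handful of sniffs of each material into a template.
templates = np.stack([np.mean([sniff(k) for _ in range(5)], axis=0)
                      for k in range(N_ODORS)])

def classify(reading):
    """Assign a sensor reading to the nearest odor template."""
    return int(np.argmin(np.linalg.norm(templates - reading, axis=1)))

trials = [(k, classify(sniff(k))) for k in range(N_ODORS) for _ in range(20)]
accuracy = np.mean([true == pred for true, pred in trials])
print(f"toy classification accuracy over {len(trials)} sniffs: {accuracy:.0%}")
```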

Wednesday, January 15, 2020

Future of Computer Tech?

Points again to the 'brain as a computing model' conundrum. We have lots of brains we can study, observe, dissect. But we still don't know their operational specs. Algorithms 'work' because their input and output are simple. Brains are not. But it's good we can get closer to the biomimicry of the brain for any hope of replicating its approach.

Brain-Like Device May Forecast Future of Computer Technology
By The Daily Bruin,  January 14, 2020

Researchers at UCLA and Japan's National Institute for Materials Science developed a device with the ability to mimic certain characteristics of the brain. The work is described in "Emergent Dynamics of Neuromorphic Nanowire Networks," published in Scientific Reports.

James Gimzewski, a chemistry professor at UCLA, and Adam Stieg, associate director of the California NanoSystems Institute at UCLA, helped create the device, which spans 10 square millimeters.

The device's small size is possible because of the number of networks within the device, all of which are made from silver nanowires, which have an average diameter of 360 nanometers.

"Because [the networks] self-assemble on the nanoscale, we can achieve a very high density of synaptic-like connections that wouldn't be achievable using normal computer chip technology," Gimzewski says.

Kelsey Scharnhorst, who assisted in research for this project, says the growing popularity of machine learning has led to a scramble to develop an option that can produce computational outputs in a timely, more-energy-efficient fashion.

"Instead of doing everything with algorithms . . . [computations] can be sped up an insane amount [with machine learning], and that can make a huge difference for a ton of technology," Scharnhorst says.

From The Daily Bruin  Full article at the link.

Monday, November 04, 2019

Artificial Networks Shed Light on Human Face Recognition

From what we know about how the brain works, to artificial neural nets (ANNs), which we know are very simplistic neural models, and then back to the actual operation of the brain?

Artificial Networks Shed Light on Human Face Recognition  By Weizmann Institute of Science

A pair of face images that elicited dissimilar neuronal activation patterns.

Researchers at the Weizmann Institute of Science in Israel have gained new insights on humans' ability to recognize faces using deep neural networks.

The researchers compared brain activity to these networks by analyzing data from 33 epileptics with implanted brain electrodes as they were shown a series of faces, each of which triggered a unique neuronal activation pattern.

The neural network was shown the same images, to determine whether it would exhibit activation patterns similar to the human brain.  There were striking similarities, especially in the network's middle layers, which represent the actual pictorial appearance of the faces.

Weizmann's Shany Grossman said, "These findings can help advance our understanding of how face perception and recognition are encoded in the human brain [and] ... may also help to further improve the performance of neural networks, by tweaking them so as to bring them closer to the observed brain response patterns."

From Weizmann Institute of Science    View Full Article
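
A common way to make this kind of brain-versus-network comparison concrete is representational similarity analysis: for both the recordings and a network layer, compute the matrix of pairwise dissimilarities between responses to the same face images, then correlate the two matrices. The sketch below shows that general approach on synthetic data; it is not necessarily the Weizmann team's exact analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

# Hypothetical responses to the same 30 face images:
#   'brain': 30 images x 80 recorded units; 'layer': 30 images x 512 activations,
#   constructed here to be partially related to the brain responses.
brain = rng.standard_normal((30, 80))
layer = (brain[:, :40] @ rng.standard_normal((40, 512)) * 0.5
         + rng.standard_normal((30, 512)))

def rdm(responses):
    """Representational dissimilarity: 1 - correlation between the response
    patterns for every pair of images (returned in condensed vector form)."""
    return pdist(responses, metric="correlation")

rho, _ = spearmanr(rdm(brain), rdm(layer))
print(f"brain-vs-layer representational similarity (Spearman rho): {rho:.2f}")
```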

Monday, October 21, 2019

More Neuromorphic Computing and Sensors

We worked with ORNL, where more sensor-based computing would have been useful in reasoning about complex processes. It's not clear what just getting closer to 'neuromorphic' computing, that is, computing that uses models much closer to biological brains than the artificial neural networks (ANNs) currently in use, will buy us. But the attempt is an exciting one.

Bio-Circuitry Mimics Synapses and Neurons in a Step Toward Sensory Computing
Oak Ridge National Laboratory     By Ashley C. Huff

Researchers at Oak Ridge National Laboratory, the University of Tennessee, and Texas A&M University have demonstrated bio-inspired devices that bring us closer to neuromorphic, or brain-like, computing. The breakthrough is the first example of a lipid-based "memcapacitor," a charge storage component with memory that processes information much like the way synapses do in the brain. This discovery could lead to the emergence of computing networks modeled on biology for a sensory approach to machine learning. The new method uses soft materials to mimic biomembranes and simulate the way nerve cells communicate with one another. Said ORNL researcher Pat Collier, "Incorporating biology—using biomembranes that sense bioelectrochemical information—is key to developing the functionality of neuromorphic computing." ...

Tuesday, July 16, 2019

Intel's Neuromorphic Chips

Meaning chips that are closer in structure to neural networks, which themselves are just considerable simplifications of networks of biological neurons. The result is being able to do such neural computation, key to deep learning, much faster. No quantum computing here. Below is what Intel Corp. writes about this, and then a piece by Technology Review.

Intel's Neuromorphic Computing

The emergent capabilities in artificial intelligence being driven by Intel Labs have more in common with human cognition than with conventional computer logic.

HIGHLIGHTS:
Neuromorphic computing research emulates the neural structure of the human brain.

The Loihi research chip includes 130,000 neurons optimized for spiking neural networks.

Intel Labs is making Loihi-based systems available to the global research community.

Probabilistic computing addresses the fundamental uncertainty and noise of natural data.

Collaborations on next-generation AI extend to worldwide industry and academic researchers.

What Is Neuromorphic Computing?
The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame.

A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the so-called “brittleness” of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Next-generation AI must be able to address novel situations and abstraction to automate ordinary human activities.

Intel Labs is driving computer-science research that contributes to this third generation of AI. Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, as well as probabilistic computing, which creates algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction in the natural world.  .... " 


Also from Technology Review:
Intel’s new AI chips can crunch data 1,000 times faster than normal ones ... "

Wednesday, November 28, 2018

ACM Tech Talk: From Media to Meaning: Classic Machine Learning

Good piece. With frightening implications from the combination of surveillance, neural methods, and the use of high-speed video generation to fake anything we want. Notable explanation of generative adversarial networks (GANs).

Watch First ACM Tech Talk: “From Media to Meaning: Classic Machine Learning” with Blaise Agüera y Arcas

Blaise Agüera y Arcas is a Distinguished Scientist at Google AI, where he leads a team that works on intersections of neural nets and neuromorphic AI. In this talk, Blaise examines the recent revolution in deep networks which has enabled the use of classic machine learning techniques to go from media to meaning. He covers neural nets, generative adversarial techniques, and the ethical implications of these new technologies.  ... "

Tuesday, October 25, 2016

Analog Methods for AI

Came into computing at the very end of analog computing systems being taught. I remember asking why, and got the answer that they were being replaced by digital methods. Essentially, a digital system could simulate any analog system and was more flexible and programmable. Is the world changing for AI applications? In IEEE Spectrum, a guest article on the topic: How Analog and Neuromorphic Chips Will Rule the Robotic Age, by Shahin Farshchi.

Tuesday, March 15, 2016

EU's Robotic Nose for Aroma Sensory Data

A long-time challenge: a system that can detect and recognize aroma. We experimented in the area of coffee bean and blend classification, linked to optimizing machine learning. But the existing systems could not capture the ppm variances involved.

At the time it was suggested that such a system could 'sniff out' changes in blends, or even manufacturing results or emerging issues, in real time. At least the system could gather large quantities of data that could be mined for subtle, or not-so-subtle, changes. We have done that in the dimension of imagery; now how about scent? Would this system take us closer to that idea?

See also  Inhalio.com

In the CACM: 
The European Union's BIOMACHINELEARNING project has created a neuromorphic network for odor recognition, running on neuromorphic hardware, which can receive real-time input from electrical gas sensors.

The project's researchers say the technology could lead to the development of a cost-effective, portable, and fully functional robotic nose.

When studying how to improve the accuracy and speed of odor detection and identification, the researchers found they could use bio-inspired signal processing to enhance the signals from sensors and resolve variations in gas concentrations resulting from a phenomenon called "turbulence." Rapid concentration changes associated with turbulence can be resolved with inexpensive, off-the-shelf gas sensors and appropriate signal processing, according to the researchers. ... "
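
One simple illustration of the kind of signal processing meant here, in the spirit of the work rather than the project's published pipeline: instead of waiting for a slow gas sensor to settle, look for rapid upward fluctuations ("bouts") in the smoothed derivative of the sensor signal, which turbulence produces whenever an odor packet hits the sensor. The trace and thresholds below are synthetic.

```python
import numpy as np

def count_bouts(signal, dt=0.1, tau=0.5, threshold=0.3):
    """Count rapid concentration rises ("bouts") in a gas-sensor trace:
    exponentially smooth the signal, differentiate it, and count upward
    threshold crossings of the derivative."""
    alpha = dt / (tau + dt)
    smoothed = np.empty_like(signal)
    smoothed[0] = signal[0]
    for i in range(1, len(signal)):
        smoothed[i] = smoothed[i - 1] + alpha * (signal[i] - smoothed[i - 1])
    deriv = np.diff(smoothed) / dt
    rising = deriv > threshold
    return int(np.count_nonzero(rising[1:] & ~rising[:-1]))   # rising edges only

# Synthetic trace: slow baseline, sensor noise, and 12 turbulent odor packets.
rng = np.random.default_rng(8)
t = np.arange(0.0, 60.0, 0.1)
trace = 0.2 + 0.05 * rng.standard_normal(len(t))
for onset in rng.uniform(0.0, 55.0, size=12):
    trace += 0.5 * np.exp(-((t - onset) ** 2) / 0.5) * (t > onset)

print("detected odor bouts:", count_bouts(trace))
```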