
Bio-inspired computing

Article snapshot taken from Wikipedia with creative commons attribution-sharealike license. Give it a read and then ask your questions in the chat. We can research this topic together.

This article is about using biology as an inspiration in computing. For computers composed of biological parts, see Biological computing. For data analysis and mathematical modeling in biology, see Computational biology.

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It draws on connectionism, social behavior, and emergence. Within computer science, bio-inspired computing is closely related to artificial intelligence and machine learning, and it is a major subset of natural computation.

History

Early Ideas

The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, now known as a Turing machine. Turing described the abstract construct in biological terms, imagining a mathematician with three important attributes: he always has a pencil with an eraser, an unlimited supply of paper, and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper, while the pencil allows him to write and erase any symbols he wants; the unlimited paper lets him store anything he wants, serving as his memory. Using these ideas Turing described an abstraction of the modern digital computer. He noted, however, that anything able to perform these functions can be considered such a machine, and he even argued that electricity should not be required to describe digital computation and machine thinking in general.
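As a minimal illustration of the abstract machine just described, the following Python sketch simulates a tiny Turing machine; the state names, tape alphabet and transition table are invented for the example and are not taken from Turing's paper.

    # Minimal Turing machine simulator: the dictionary "cells" plays the role of the
    # unlimited paper, writing plays the role of the pencil and eraser, and reading
    # plays the role of the eyes.
    def run_turing_machine(tape, transitions, start_state="scan", halt_state="halt", blank="_"):
        """Run a one-tape Turing machine and return the final tape contents."""
        cells = dict(enumerate(tape))            # sparse tape: position -> symbol
        head, state = 0, start_state
        while state != halt_state:
            symbol = cells.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            cells[head] = write                  # write (or erase) a symbol
            head += 1 if move == "R" else -1     # move along the paper
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Example transition table: invert a binary string, then halt at the first blank.
    flip = {
        ("scan", "0"): ("1", "R", "scan"),
        ("scan", "1"): ("0", "R", "scan"),
        ("scan", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine("10110", flip))     # prints 01001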

Neural Networks

First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms. McCulloch and Pitts showed mathematically that a system of simple neurons could implement basic logical operations such as logical conjunction, disjunction and negation, and further that such networks can carry out any calculation that requires only finite memory. Around 1970 research on neural networks slowed down, and many consider a 1969 book by Marvin Minsky and Seymour Papert to be the main cause. Their book showed that the neural network models of the time could only represent systems based on Boolean functions that become true above a certain threshold value, known as threshold functions, and that a large number of systems cannot be represented this way and therefore cannot be modeled by such networks. A 1986 book by David Rumelhart and James McClelland brought neural networks back into the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not suffer from those limits.
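To make the notion of a threshold function concrete, here is a small Python sketch of McCulloch-Pitts style neurons computing AND, OR and NOT; the particular weights and thresholds are chosen for illustration rather than taken from the 1943 paper.

    # A McCulloch-Pitts style neuron fires (returns 1) when the weighted sum of its
    # binary inputs reaches its threshold, and stays silent (returns 0) otherwise.
    def threshold_neuron(inputs, weights, threshold):
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    def AND(a, b):   # both inputs are needed to reach the threshold of 2
        return threshold_neuron([a, b], [1, 1], threshold=2)

    def OR(a, b):    # a single active input already reaches the threshold of 1
        return threshold_neuron([a, b], [1, 1], threshold=1)

    def NOT(a):      # an inhibitory (negative) weight silences the neuron when the input is active
        return threshold_neuron([a], [-1], threshold=0)

    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")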

Ant Colonies

In 1979, Douglas Hofstadter described the idea of a biological system capable of performing intelligent computations even though the individuals comprising it are not themselves intelligent. More specifically, he gave the example of an ant colony that can carry out intelligent tasks together even though no individual ant can, a phenomenon known as "emergent behavior." In 2009, Azimi et al. showed that an "ant colony" clustering algorithm inspired by this behavior is able to determine the number of clusters on its own and to produce final clusters highly competitive with those of traditional algorithms. Finally, Wilson and Sober argued that an ant colony has evolved to function as a single "superorganism". This is an important result, since it suggests that group-selection evolutionary algorithms, coupled with algorithms similar to the "ant colony" approach, could potentially be used to develop more powerful algorithms.
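The emergent effect described above can be illustrated with a toy "double bridge" simulation in Python; the path lengths, pheromone rules and parameters below are invented for the example, and this is not the clustering algorithm of Azimi et al.

    # Toy ant-colony simulation: each ant picks one of two paths with probability
    # proportional to the pheromone on it and deposits pheromone inversely
    # proportional to the path length. No single ant knows which path is shorter,
    # yet the colony as a whole converges on the short path.
    import random

    path_lengths = {"short": 1.0, "long": 2.0}   # example values
    pheromone = {"short": 1.0, "long": 1.0}      # start with no preference
    EVAPORATION, N_ANTS, N_ROUNDS = 0.02, 50, 200

    for _ in range(N_ROUNDS):
        deposits = {"short": 0.0, "long": 0.0}
        for _ant in range(N_ANTS):
            path = random.choices(list(pheromone), weights=list(pheromone.values()))[0]
            deposits[path] += 1.0 / path_lengths[path]
        for path in pheromone:
            pheromone[path] = (1 - EVAPORATION) * pheromone[path] + deposits[path]

    share = pheromone["short"] / sum(pheromone.values())
    print(f"share of pheromone on the short path: {share:.2f}")  # typically well above 0.9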

Areas of research

Some areas of study in biologically inspired computing, and their biological counterparts:

Bio-inspired computing topic ↔ Biological inspiration
Genetic algorithms ↔ Evolution
Biodegradability prediction ↔ Biodegradation
Cellular automata ↔ Life
Emergence ↔ Ants, termites, bees, wasps
Artificial neural networks ↔ Biological neural networks
Artificial life ↔ Life
Artificial immune system ↔ Immune system
Rendering (computer graphics) ↔ Patterning and rendering of animal skins, bird feathers, mollusk shells and bacterial colonies
Lindenmayer systems ↔ Plant structures
Communication networks and communication protocols ↔ Epidemiology
Membrane computers ↔ Intra-membrane molecular processes in the living cell
Excitable media ↔ Forest fires, "the wave", heart conditions, axons
Sensor networks ↔ Sensory organs
Learning classifier systems ↔ Cognition, evolution
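As a concrete instance of one row of the table, the sketch below advances Conway's Game of Life, the classic cellular automaton inspired by "life", by one generation; it is a minimal illustrative implementation, not code from any of the works cited here.

    # One step of Conway's Game of Life on a small toroidal grid: a live cell
    # survives with 2 or 3 live neighbours, a dead cell becomes alive with exactly 3.
    def life_step(grid):
        rows, cols = len(grid), len(grid[0])
        def live_neighbours(r, c):
            return sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
        return [[1 if (grid[r][c] and live_neighbours(r, c) in (2, 3))
                      or (not grid[r][c] and live_neighbours(r, c) == 3) else 0
                 for c in range(cols)] for r in range(rows)]

    # A "blinker": three live cells in a vertical line flip to a horizontal line.
    grid = [[0, 0, 0, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0]]
    for row in life_step(grid):
        print(row)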

Artificial intelligence

Bio-inspired computing can be distinguished from traditional artificial intelligence (AI) by its approach to computer learning. Bio-inspired computing uses an evolutionary approach, while traditional AI uses a 'creationist' approach. Bio-inspired computing begins with a set of simple rules and simple organisms which adhere to those rules; over time, these organisms evolve within simple constraints. This method could be considered bottom-up or decentralized. In traditional artificial intelligence, by contrast, intelligence is programmed from above: the programmer is the creator, who makes something and imbues it with intelligence.

Virtual Insect Example

Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate unknown terrain and find food using six simple rules (a code sketch of these rules follows the list):

  • turn right for target-and-obstacle left;
  • turn left for target-and-obstacle right;
  • turn left for target-left-obstacle-right;
  • turn right for target-right-obstacle-left;
  • turn left for target-left without obstacle;
  • turn right for target-right without obstacle.
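The six rules above can be written down directly as a small decision function; the sensor encoding and names below are assumptions made for illustration, whereas the cited work trains a spiking neural network to produce this behaviour rather than hand-coding it.

    # Hand-coded version of the six navigation rules, for illustration only.
    # target_side:   'left' or 'right'        -- side on which the food target lies
    # obstacle_side: 'left', 'right' or None  -- side on which an obstacle is sensed
    def steer(target_side, obstacle_side):
        """Return 'left' or 'right' according to the six rules listed above."""
        if obstacle_side is None:               # rules 5 and 6: no obstacle, turn toward the target
            return target_side
        if target_side == obstacle_side:        # rules 1 and 2: target and obstacle on the same side
            return "right" if target_side == "left" else "left"
        return target_side                      # rules 3 and 4: obstacle on the opposite side

    print(steer("left", "left"))    # rule 1: turn right
    print(steer("right", None))     # rule 6: turn right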

The virtual insect controlled by the trained spiking neural network can find food in any unknown terrain after training. After several generations of rule application it is usually the case that some forms of complex behaviour emerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive relative to what the original rules would be expected to produce (see complex systems). For this reason, when modeling the neural network, it is necessary to accurately model an in vivo network, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.

Natural evolution is a good analogy to this method–the rules of evolution (selection, recombination/reproduction, mutation and more recently transposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. A similar technique is used in genetic algorithms.
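A compact illustration of this analogy is the genetic algorithm sketched below, which evolves random bit strings toward an all-ones target using selection, recombination and mutation; the fitness function and parameters are invented for the example.

    # Minimal genetic algorithm: selection keeps the fitter half of the population,
    # recombination (single-point crossover) mixes parent genomes, and mutation
    # occasionally flips bits.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

    def fitness(genome):
        return sum(genome)                      # the "environment" rewards 1s

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")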

Brain-inspired computing

Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable machines to realize various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and ultimately to reach or exceed human-level intelligence.

Research

Artificial intelligence researchers are now aware of the benefits of learning from the brain's information processing mechanisms, and progress in brain science and neuroscience provides the necessary basis for artificial intelligence to do so. Brain and neuroscience researchers, in turn, are trying to apply their understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information technology and smart technology, and in return brain science and neuroscience will inspire the next generation of information technology.

The influence of brain science on brain-inspired computing

Advances in brain science and neuroscience, especially those enabled by new technologies and new equipment, allow researchers to obtain multi-scale, multi-type biological evidence about the brain through different experimental methods, and to begin revealing the structural and functional basis of biological intelligence from different angles. From microscopic neurons and the working mechanisms and characteristics of synapses, through mesoscopic network connection models, to the links between macroscopic brain regions and their synergy, the multi-scale structural and functional mechanisms of the brain derived from these experimental and mechanistic studies will provide important inspiration for building future brain-inspired computing models.

Brain-inspired chip

Broadly speaking, a brain-inspired chip is a chip designed with reference to the structure of human brain neurons and the cognitive mode of the human brain. The "neuromorphic chip", whose design focuses on a model of human brain neurons and their tissue structure, represents a major direction of brain-inspired chip research. Along with the rise and development of "brain projects" in various countries, a large number of research results on neuromorphic chips have emerged, receiving extensive international attention and becoming well known to academia and industry. Examples include the EU-backed SpiNNaker and BrainScaleS, Stanford's Neurogrid, IBM's TrueNorth, and Qualcomm's Zeroth.
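Chips such as these implement spiking ("pulsed") neurons in hardware. The Python sketch below shows a leaky integrate-and-fire neuron, one of the simplest spiking neuron models; the time constants and threshold are arbitrary illustrative values and do not describe any particular chip.

    # Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
    # integrates incoming current, and emits a spike when it crosses a threshold.
    def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        """Return the list of time steps at which the neuron spikes."""
        v, spikes = v_rest, []
        for t, current in enumerate(input_current):
            v += dt * (-(v - v_rest) + current) / tau   # leak toward rest and integrate input
            if v >= v_threshold:
                spikes.append(t)                        # emit a spike ...
                v = v_reset                             # ... and reset the membrane potential
        return spikes

    # A constant suprathreshold input makes the neuron fire at a regular rate.
    print(simulate_lif([1.5] * 100))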

TrueNorth is a brain-inspired chip that IBM developed over nearly 10 years. The US DARPA program has funded IBM to develop spiking neural network chips for intelligent processing since 2008. In 2011, IBM first produced two cognitive silicon prototypes that, by simulating brain structures, could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released the second-generation brain-inspired chip, "TrueNorth". Compared with the first generation, its performance increased dramatically: the number of neurons grew from 256 to 1 million, the number of programmable synapses from 262,144 to 256 million, and it performs synaptic operations with a total power consumption of 70 mW, about 20 mW per square centimeter, while its processing cores are only 1/15 the size of those in the first-generation chip. IBM has since built a prototype neuron computer that uses 16 TrueNorth chips and has real-time video processing capabilities. The chip's exceptionally high specifications caused a great stir in the academic community when it was released.

In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French institute Inria collaborated to develop the "Cambrian" (DianNao) chip, the world's first chip to support a deep neural network processor architecture. The work won best-paper recognition at ASPLOS and MICRO, the top international conferences in the field of computer architecture, and its design method and performance have been recognized internationally. The chip stands as an outstanding representative of research on brain-inspired chips.

Challenges in Brain-Inspired Computing

Unclear understanding of brain mechanisms

The human brain is a product of evolution. Although its structure and information processing mechanisms are constantly being optimized, compromises in the evolutionary process are inevitable. The cranial nervous system is a multi-scale structure, and several important problems remain open about the mechanism of information processing at each scale, such as the fine connection structure at the scale of individual neurons and the feedback mechanisms at the scale of the whole brain. Therefore, even simulations that comprehensively account for the numbers of neurons and synapses reach only about 1/1000 of the scale of the human brain, and such studies remain very difficult at the current level of scientific research. Recent advances in brain simulation linked individual variability in human cognitive processing speed and fluid intelligence to the balance of excitation and inhibition in structural brain networks, functional connectivity, winner-take-all decision-making and attractor working memory.

Unclear brain-inspired computational models and algorithms

Future research on cognitive brain computing models will need to model the brain's information processing system on the basis of multi-scale analyses of brain neural system data, construct brain-inspired multi-scale neural network computing models, and simulate the brain's multi-modal intelligent behavioral abilities, such as perception, self-learning and memory, and decision-making, across multiple scales. Current machine learning algorithms are not flexible and require large amounts of high-quality, manually labeled sample data, and training the resulting models carries a large computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.

Constrained computational architecture and capabilities

Most existing brain-inspired chips are still based on the von Neumann architecture, and most are manufactured from conventional semiconductor materials. Such neural chips borrow only the most basic unit of brain information processing: mechanisms such as the fusion of storage and computation, spike-based signaling, the connection mechanisms between neurons, and the interactions between information-processing units at different scales have not yet been integrated into the study of brain-inspired computing architectures. An important international trend is now to develop neural computing components such as memristors, memcapacitors, and sensory elements based on new materials such as nanomaterials, thereby supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain-like computing systems built on brain-inspired chips will also require a corresponding software environment to support their wide application.


References

  1. Turing, Alan (1936). On computable numbers: with an application to the Entscheidungsproblem. Mathematical Society. OCLC 18386775.
  2. Turing, Alan (2004-09-09), "Computing Machinery and Intelligence (1950)", The Essential Turing, Oxford University Press, pp. 433–464, doi:10.1093/oso/9780198250791.003.0017, ISBN 978-0-19-825079-1, retrieved 2022-05-05
  3. McCulloch, Warren; Pitts, Walter (2021-02-02), "A Logical Calculus of the Ideas Immanent in Nervous Activity (1943)", Ideas That Created the Future, The MIT Press, pp. 79–88, doi:10.7551/mitpress/12274.003.0011, ISBN 9780262363174, S2CID 262231397, retrieved 2022-05-05
  4. Minsky, Marvin (1988). Perceptrons : an introduction to computational geometry. The MIT Press. ISBN 978-0-262-34392-3. OCLC 1047885158.
  5. "History: The Past". userweb.ucs.louisiana.edu. Retrieved 2022-05-05.
  6. McClelland, James L.; Rumelhart, David E. (1999). Parallel distributed processing : explorations in the microstructure of cognition. MIT Press. ISBN 0-262-18120-7. OCLC 916899323.
  7. Hofstadter, Douglas R. (1979). Gödel, Escher, Bach : an eternal golden braid. Basic Books. ISBN 0-465-02656-7. OCLC 750541259.
  8. Azimi, Javad; Cull, Paul; Fern, Xiaoli (2009), "Clustering Ensembles Using Ants Algorithm", Methods and Models in Artificial and Natural Computation. A Homage to Professor Mira’s Scientific Legacy, Lecture Notes in Computer Science, vol. 5601, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 295–304, doi:10.1007/978-3-642-02264-7_31, ISBN 978-3-642-02263-0, retrieved 2022-05-05
  9. Wilson, David Sloan; Sober, Elliott (1989). "Reviving the superorganism". Journal of Theoretical Biology. 136 (3): 337–356. Bibcode:1989JThBi.136..337W. doi:10.1016/s0022-5193(89)80169-9. ISSN 0022-5193. PMID 2811397.
  10. Xu Z; Ziye X; Craig H; Silvia F (Dec 2013). "Spike-based indirect training of a spiking neural network-controlled virtual insect". 52nd IEEE Conference on Decision and Control. pp. 6798–6805. CiteSeerX 10.1.1.671.6351. doi:10.1109/CDC.2013.6760966. ISBN 978-1-4673-5717-3. S2CID 13992150.
  11. Joshua E. Mendoza. ""Smart Vaccines" – The Shape of Things to Come". Research Interests. Archived from the original on November 14, 2012.
  12. Xu Bo, Liu Chenglin, Zeng Yi. 类脑智能研究现状与发展思考 [Research status and prospects of brain-inspired intelligence]. Bulletin of the Chinese Academy of Sciences, 2016, 31(7): 793–802.
  13. "美国类脑芯片发展历程". Electronic Engineering & Product World.
  14. Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (2014). "Dian Nao". ACM SIGARCH Computer Architecture News. 42: 269–284. doi:10.1145/2654822.2541967.
  15. Markram, Henry; Muller, Eilif; Ramaswamy, Srikanth (2015). "Reconstruction and simulation of neocortical microcircuitry". Cell. 163 (2): 456–492. PubMed.
  16. Schirner, Michael; Deco, Gustavo; Ritter, Petra (2023). "Learning how network structure shapes decision-making for bio-inspired computing". Nature Communications. 14 (2963): 2963. Bibcode:2023NatCo..14.2963S. doi:10.1038/s41467-023-38626-y. PMC 10206104. PMID 37221168.

Further reading

(the following are presented in ascending order of complexity and depth, with those new to the field suggested to start from the top)

  • Dr. Dobb's Journal, Apr. 1991 (issue theme: Biocomputing).
  • Ridge, E.; Kudenko, D.; Kazakov, D.; Curry, E. (2005). "Moving Nature-Inspired Algorithms to Parallel, Asynchronous and Decentralised Environments". Self-Organization and Autonomic Informatics (I). 135: 35–49.
  • Michael G. Hinchey, Roy Sterritt, and Chris Rouff, Swarms and Swarm Intelligence.
  • Nancy Forbes, Imitation of Life: How Biology is Inspiring Computing, MIT Press, Cambridge, MA, 2004.
  • M. Blowers and A. Sisti, Evolutionary and Bio-inspired Computation: Theory and Applications, SPIE Press, 2007.
  • X. S. Yang, Z. H. Cui, R. B. Xiao, A. H. Gandomi, M. Karamanoglu, Swarm Intelligence and Bio-Inspired Computation: Theory and Applications, Elsevier, 2013.
  • The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data, Phil. Trans. R. Soc. Lond. B 356 (2001), 1259–1276.
  • Neumann, Frank; Witt, Carsten (2010). Bioinspired computation in combinatorial optimization: algorithms and their computational complexity. Natural Computing Series. Berlin: Springer. ISBN 978-3-642-16543-6.
  • Brabazon, Anthony; O'Neill, Michael (2006). Biologically inspired algorithms for financial modelling. Natural Computing Series. Berlin: Springer. ISBN 978-3-540-26252-7.
