Thought identification refers to the empirically verified use of technology to read people's minds in some sense. Recent research using neuroimaging has provided early demonstrations of the technology's potential to recognize high-order patterns in the brain; in some cases, this provides meaningful (and controversial) information to investigators.
Barbara Sahakian, a professor of neuropsychology, qualifies this: "A lot of neuroscientists in the field are very cautious and say we can't talk about reading individuals' minds, and right now that is very true, but we're moving ahead so rapidly, it's not going to be that long before we will be able to tell whether someone's making up a story, or whether someone intended to do a crime with a certain degree of certainty."
Examples
Identifying thoughts
When humans think of an object, such as a screwdriver, many different areas of the brain activate. Psychologist Marcel Just and his colleague Tom Mitchell have used fMRI brain scans to teach a computer to identify the various parts of the brain associated with specific thoughts.
This technology also yielded a discovery: the brain activity associated with a given thought is surprisingly similar across different people. To illustrate this, Just and Mitchell used their computer to predict, based on nothing but fMRI data, which of several images a volunteer was thinking about. The computer was 100% accurate, but so far the machine can only distinguish between 10 images.
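A minimal sketch of the kind of pattern classification involved, assuming voxel activation values have already been extracted from the scans, could look like the following (the estimator, data layout, and placeholder values are illustrative and are not Just and Mitchell's actual pipeline):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    # X: one row of voxel activation values per scan; y: which of the 10
    # pictured objects the volunteer was thinking about during that scan.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))    # placeholder for real fMRI data
    y = rng.integers(0, 10, size=200)  # placeholder object labels (0-9)

    # Train a linear classifier and estimate how well it tells the ten
    # object categories apart on held-out scans.
    clf = LinearSVC(dual=False)
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean cross-validated accuracy:", scores.mean())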
Psychologist John-Dylan Haynes states that fMRI can also be used to identify recognition in the brain. He provides the example of a criminal being interrogated about whether he recognizes the scene of the crime or murder weapons. Just and Mitchell also claim they are beginning to be able to identify kindness, hypocrisy, and love in the brain.
In 2010, IBM applied for a patent on a method of extracting mental images of human faces from the human brain. The proposed method uses a feedback loop based on measurements of the fusiform gyrus, a brain area whose activity increases in proportion to the degree of facial recognition.
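The patent describes a feedback loop rather than a specific algorithm; a purely illustrative sketch of such a loop, assuming a hypothetical measure_fusiform_response() function that returns a recognition signal for a candidate image, might look like this (the hill-climbing update rule below is an assumption, not taken from the patent):

    import numpy as np

    def reconstruct_face(measure_fusiform_response, n_iterations=200, image_size=(64, 64)):
        # measure_fusiform_response is a hypothetical callable returning a
        # scalar recognition signal for a candidate face image (higher =
        # more strongly recognized); it stands in for the brain measurement.
        rng = np.random.default_rng(0)
        image = rng.random(image_size)                 # start from a random image
        best_score = measure_fusiform_response(image)
        for _ in range(n_iterations):
            candidate = image + 0.05 * rng.normal(size=image_size)  # small random tweak
            score = measure_fusiform_response(candidate)
            if score > best_score:                     # keep tweaks that raise the signal
                image, best_score = candidate, score
        return image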
In 2011, a team led by Shinji Nishimoto used only brain recordings to partially reconstruct what volunteers were seeing. The researchers applied a new model of how information about moving objects is processed in the human brain while volunteers watched clips from several videos. An algorithm then searched through thousands of hours of external YouTube footage (none of it identical to the videos the volunteers watched) to select the clips that were most similar. The authors have uploaded demos comparing the watched and the computer-estimated videos.
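A rough sketch of the clip-selection step, assuming an already-fitted encoding model predict_brain_response() that maps a clip to its expected brain response (the function names and correlation-based ranking here are illustrative, not the study's published code):

    import numpy as np

    def select_best_clips(measured_activity, candidate_clips, predict_brain_response, k=10):
        # Rank candidate clips by how well the model-predicted brain response
        # correlates with the activity actually recorded from the volunteer.
        scores = []
        for clip in candidate_clips:
            predicted = predict_brain_response(clip)  # expected response for this clip
            scores.append(np.corrcoef(predicted, measured_activity)[0, 1])
        order = np.argsort(scores)[::-1]              # highest correlation first
        return [candidate_clips[i] for i in order[:k]]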
In 2000, Lockheed Martin neuroengineer John Norseen speculated that future brain mapping technology could be used so that "a pilot could fly a plane by merely thinking" or an airport screening system could "identify a terrorist's mental profile". Norseen reports that early research includes simple interactions with test subjects, such as identifying and cataloging which parts of the brain are active when the subject is asked to think of a number. Norseen says that a database of such "brainprints" could be used as part of an airport security system, speculating that a "single electrode" or a "dome above your head" could be employed to collect the data.
Predicting intentions
See also: Neuroscience of free will

In 2008, some researchers were able to predict, with 60% accuracy, whether a subject was going to push a button with their left or right hand. This is notable not just because the accuracy is better than chance, but also because the scientists were able to make these predictions up to 10 seconds before the subject acted, well before the subject felt they had decided. This finding is even more striking in light of other research suggesting that the decision to move, and possibly the ability to cancel that movement at the last second, may be the result of unconscious processing.
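A simplified, hypothetical sketch of such time-resolved decoding, using placeholder data and an off-the-shelf classifier rather than the original study's methods:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # X_time: trials x time points x features (brain signals recorded in the
    # seconds leading up to the button press); y: hand used (0 = left, 1 = right).
    rng = np.random.default_rng(0)
    X_time = rng.normal(size=(120, 20, 50))  # placeholder data
    y = rng.integers(0, 2, size=120)

    # Train a separate classifier at each time point before the movement and
    # report its cross-validated accuracy, showing when (if at all) the
    # upcoming choice becomes predictable above the 50% chance level.
    for t in range(X_time.shape[1]):
        acc = cross_val_score(LogisticRegression(max_iter=1000), X_time[:, t, :], y, cv=5).mean()
        print(f"time point {t}: accuracy {acc:.2f}")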
John-Dylan Haynes has also demonstrated that fMRI can be used to identify whether a volunteer is about to add or subtract two numbers in their head.
Brain as input device
Emotiv Systems, an Australian electronics company, has demonstrated a headset that can be trained to recognize a user's thought patterns for different commands. Tan Le demonstrated the headset's ability to manipulate virtual objects on screen and discussed various future applications for such brain-computer interface devices, from powering wheelchairs to replacing the mouse and keyboard.
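A hypothetical sketch of how such a headset's training and command dispatch could be wired up in software, using generic machine-learning tools rather than Emotiv's actual API:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    COMMANDS = ["push", "pull", "lift", "rest"]  # example mental commands

    # X: one row of EEG-derived features per training trial; y: index of the
    # command the user was asked to imagine during that trial.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 64))
    y = rng.integers(0, len(COMMANDS), size=400)

    # Train once on labelled trials, then classify incoming feature vectors and
    # hand the decoded command to the application (e.g. a cursor or wheelchair
    # controller).
    model = make_pipeline(StandardScaler(), SVC())
    model.fit(X, y)

    def on_new_sample(features):
        command = COMMANDS[model.predict(features.reshape(1, -1))[0]]
        print("decoded command:", command)

    on_new_sample(rng.normal(size=64))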
Decoding brain activity to reconstruct words
On January 31, 2012, Brian Pasley and colleagues at the University of California, Berkeley published a paper in PLoS Biology in which subjects' internal neural processing of auditory information was decoded and reconstructed as sound on a computer, by gathering and analyzing electrical signals recorded directly from the subjects' brains. The research team focused on the superior temporal gyrus, a region of the brain involved in the higher-order neural processing that makes semantic sense of auditory information. The team used a computational model to analyze which parts of the brain might be involved in neural firing while processing auditory signals. Using this model, the scientists were able to identify the brain activity involved in processing auditory information when subjects were presented with recordings of individual words. The model was then used to reconstruct some of the words as sound from the subjects' neural activity. However, the reconstructed sounds were of poor quality and could be recognized only when their audio waveforms were visually matched against the waveforms of the original sounds played to the subjects. Nevertheless, this research marks a step towards more precise identification of the neural activity involved in cognition.
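A simplified sketch of the decoding idea, assuming time-aligned cortical recordings and speech spectrograms are available; the ridge regression below stands in for the study's actual reconstruction model:

    import numpy as np
    from sklearn.linear_model import Ridge

    # neural: time points x electrodes (signals recorded over auditory cortex);
    # spectrogram: time points x frequency bins of the speech that was heard.
    rng = np.random.default_rng(0)
    neural = rng.normal(size=(1000, 32))  # placeholder recordings
    spectrogram = rng.random((1000, 40))  # placeholder target spectrogram

    # Fit a linear mapping from neural activity to the speech spectrogram on
    # training data, then estimate the spectrogram of held-out words from new
    # neural recordings. Converting the estimated spectrogram back into a
    # waveform is a separate, lossy step, which is one reason the
    # reconstructions were hard to recognize by ear.
    decoder = Ridge(alpha=1.0)
    decoder.fit(neural[:800], spectrogram[:800])
    reconstructed = decoder.predict(neural[800:])
    print("reconstructed spectrogram shape:", reconstructed.shape)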
Ethical issues
With brain scanning technology becoming increasingly accurate, experts predict important debates over how and when it should be used. One potential area of application is criminal law. Haynes states that simply refusing to use brain scans on suspects also prevents the wrongly accused from proving their innocence.
References
- ^ The Guardian, "The brain scan that can read people's intentions"
- ^ 60 Minutes "Technology that can read your mind"
- IBM Patent Application: Retrieving mental images of faces from the human brain
- Nishimoto, Shinji; Vu, An T.; Naselaris, Thomas; Benjamini, Yuval; Yu, Bin; Gallant, Jack L. (2011), "Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies", Current Biology, 21 (19): 1641–1646, doi:10.1016/j.cub.2011.08.031
- Scientific American blog, "Breakthrough Could Enable Others to Watch Your Dreams and Memories [Video]", Philip Yam
- Nishimoto et al. uploaded video, "Nishimoto.etal.2011.3Subjects.mpeg" on YouTube
- U.S. News and World Report, "Buck Rogers, meet John Norseen", http://www.usnews.com/usnews/culture/articles/000103/archive_033992.htm
- SIGNAL Magazine, "Decoding Minds, Foiling Adversaries", http://www.afcea.org/signal/archives/content/Oct01/
- Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11, 543–545 (PMID 18408715).
- Kühn, S., & Brass, M. (2009). Retrospective construction of the judgment of free choice. Consciousness and Cognition, 18, 12–21.
- Matsuhashi, M., & Hallett, M. (2008). The timing of the conscious intention to move. European Journal of Neuroscience, 28, 2344–2351.
- Tan Le: A headset that reads your brainwaves
- Pasley BN, David SV, Mesgarani N, Flinker A, Shamma SA, et al. (2012) Reconstructing Speech from Human Auditory Cortex. PLoS Biol 10(1): e1001251. doi:10.1371/journal.pbio.1001251
- "Science decodes 'internal voices'", BBC News, 31 January 2012
- ^ "Secrets of the inner voice unlocked", 1 February 2012