Talk:Brain in a vat
This article was nominated for deletion on 23 February 2007. The result of the discussion was Keep (merge at discretion).
This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 2 February 2021 and 17 March 2021. Further details are available on the course page. Student editor(s): Mmarinkovic5678. Peer reviewers: Jmm26.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 18:18, 17 January 2022 (UTC)
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 23 August 2021 and 8 October 2021. Further details are available on the course page. Student editor(s): Nootiebeans. Peer reviewers: Alabaw25, Justin Spinach.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 18:18, 17 January 2022 (UTC)
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 6 September 2020 and 6 December 2020. Further details are available on the course page. Student editor(s): CaptainJoseph.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 16:11, 16 January 2022 (UTC)
Simulated world the same as in scientist's lab?
Is the brain in the simulated world the same as the one in the scientist's lab?
previous comment unsigned
I think the answer is ........ maybe! 68.144.80.168 (talk) 15:38, 24 June 2008 (UTC)
Explain consciousness and why I experience this person and you can prove all this wrong, but since that is not possible it's just stupid. I need a beer now.
Lem story
There is also a pretty old story on the same theme by Stanislaw Lem. It belongs to his Ijon Tichy cycle, but unfortunately I don't have it at hand to check the exact name. Another, though not so close, reference was used in the French movie City of Lost Children (La Cité des enfants perdus), where one of the characters actually was a brain in an aquarium, using its ears to swim around. --Oop 17:13, Jan 21, 2005 (UTC)
- Roald Dahl has also written a short story dealing with the brain in a vat. I forget what it is called though. 86.20.233.151 17:43, 29 May 2006 (UTC)
- The story is from 1961, called in Polish "Skrzynie Profesora Corcorana", published in English in 1982 in "Star Diaries", in "Memoirs of a Space Traveler: Further Reminiscences of Ijon Tichy", as "Professor Corcoran" or "Further Reminiscences of Ijon Tichy I". It does not actually contain brains; rather, it seems that his "boxes" contain AIs. But other than that, everything fits: the AIs think they are living persons living in a real world, while all of their sensory stimuli are fed from a machine. Corcoran at the end states (somewhat ambiguously) that he believes he is also inside a box in someone's laboratory. Not sure whether it belongs here (after all, it's not about a real brain in a vat) -- szopen — Preceding unsigned comment added by 31.1.195.123 (talk) 21:56, 26 August 2017 (UTC)
Cartoon example
I've seen several cartoons -- Futurama and Krang and RoboCop come to mind -- that depict living brains floating in a (clear glass) jar. (Was one in a scene in Ghost in the Shell, or am I mis-remembering?) Since they can see outside, it's not the total sensory deprivation/replacement that the main brain-in-a-vat article discusses. Is it a common enough cultural icon to warrant a paragraph or two? --DavidCary 01:13, 7 September 2005 (UTC)
Needs work
Having read some of the primary works in philosophy on this topic, I would like to point out that this page needs work. The terminology is confused and the arguments lack precision. -Collingsworth 02:05, 1 December 2005 (UTC)
:I agree. This page kind of caught me off guard; I was looking for an article about Hilary Putnam's essay about intentionality Brains in a Vat. Neither she nor her essay are mentioned in this article, although her husband is. I'll try to add something if I can, but it looks as though this piece may be in need of some general cleanup first. Shaggorama 11:03, 5 February 2006 (UTC)
:-->Wait, I think I had putnam confused with churchland. Are there even two Putnams or is this article poorly referencing Hilary, because it does sound kind of like her work.Shaggorama 11:06, 5 February 2006 (UTC)
- wow, scratch that whole bit. It's 3am, and I totally forgot that Putnam was a man. I'm just going to go to sleep now and stop bothering you nice people.Shaggorama 11:10, 5 February 2006 (UTC)
This line is incorrect:
Likewise, whatever we can mean by "brain" and "vat" must be such that we obviously are not brains in vats (the way to tell is to look in a mirror).
Since the brain can't 'look' in a mirror and see a vat, it will see whatever the virtual reality would decide should be seen in a mirror. Is this line attributable to Hilary or to some editor?
This bit is correct:
'Since the brain can't 'look' in a mirror and see a vat, it will see whatever the virtual reality would decide should be seen in a mirror.'
But, yes. The rest seems a little...incorrect
Importance of thought experiment
Back to the point - The 'brain in a vat' thought experiment is important because it reveals a significant fallacy - we may think we are experiencing the physical world, but clearly all we are experiencing is information. The physical world is a theory that is consistent with the information. However, the existence of the physical world is only one of many, many consistent theories. Clearly, from this, it is impossible to establish any universal physical laws.
[MS] That depends on what you mean by "universal physical laws". Despite what the average layman believes, physical science never "proves" anything. Physical science is a matter of collecting observations and creating generalizations from observations. The rule is that the generalization must not include extraneous constructs and must be consistent with all previous observations (though not future observations!). Generalizations must also provide testable hypotheses. First, there is no systematic way to check consistency, so the process takes time. Also, the observations must be repeatable under a wide variety of circumstances.
So the "laws" of science are only the current `best` generalizations of repeatable observation - little more. With this in mind, we NEVER have the opportunity to establish any "universal physical laws". Even if there is no supercomputer between you and reality.
Physical laws are quite unlike social laws in that whenever there is a violation of a physical law, we fix the law rather than punish the violator. Classical mechanics laid down a nice set of "laws" of motion and force, but new observations of motion at the speed of light caused a revision of these so-called universal laws. No doubt future observation in new and different conditions will cause further revisions. Universal, schmuniversal - these are just a snapshot of current thinking & observations.
Heisenberg noted that the closer you try to observe a (tiny) physical phenomenon, the more you affect the observation, and he put a physical number to the limit of the accuracy of any observation. This is just the sort of sneaky feature we should expect a mad scientist with a supercomputer to apply to his brain-in-the-vat experiment. It's a bit like noting that we each have computer screens in front of us with nice resolution, but if we get out the magnifying glass we will see individual pixels, and there is fundamentally no way to sub-divide the pixel to gain additional information. This limits the amount of computation needed, so the mad scientists can then use one of those cheap eSupercomputers.
I can say at least one other thing about this supercomputer - the OS must be by Microsoft. My experience is that at intervals of approximately 24 hours the system crashes, sensory inputs fail, and this is followed by a period of approximately 8 hours in which the super-computation is clearly inconsistent and sometimes incoherent. Sometimes gravity selectively does not apply to me. The most egregious violation is when Christina Aguilera shows up in a particularly skanky outfit for a hug. Yes, all manner of "universal physical laws" are violated while the supercomputer is "down". Somehow there is a clear-cut sense that the reboot is complete, indicated by a sudden urge for coffee and a shower. Then the reality provided by the supercomputer is again consistent with the previous day - or at least consistent with how I recall the previous day (which memory is in some doubt).
OK - my point is that we really can't touch the hypothetical reality we speak of by observation with our feeble and distorting senses. We can't trust our brains not to manufacture the experience - they *seem* to have been designed for some other monkey-like purpose. Also, some evil supercomputer may be at the other end of the thin cord that connects each of us to experience. Still, the one and only process that seems to be effective in predicting future outcomes based on past recollections of experience is the scientific method; that is, it leads to better 'apparent' results (extrapolations) based on 'apparent' observations than any other. Yet using that scientific method on the meta-theory of experience should cause us to reject the brain-in-vat theory as containing extraneous constructs which are not in evidence.
OK - please bear with me for a recap. We may or may not experience anything correlated with the "external reality"; the question is moot, as no evidence can be provided either way. Whatever we observe and whatever we recall *seems* to form a pattern which can form the basis of a generalization which can then be applied, with some more limited uncertainty, to expectations of the future. That, in our potentially faulty experience and recollection, the sun rose today and the day before provides no proof that the sun will rise tomorrow; yet such patterns of past experience extrapolated to future expectation are quite successful. A semi-formal system of time extrapolation called "the scientific method" appears to be the most successful at this game of selecting the relevant observations and projecting them into future occurrences. Despite the potential existence of my brain in a vat, or the manipulation of my recollections, this method still 'appears' to be successful at determining patterns in the future based on these potentially cloudy observations and recollections and the potentially manipulated future experiences. The scientific method suggests removing extraneous and unobservable constructs (constructs which generate no testable hypotheses) from all theories. If applied to the meta-theory about experience, we should remove the extraneous and unobservable mad-scientist/supercomputer parts based on Occam's razor or the parsimony principle.
There is no "proof" that the minimalist theory with extraneous elements removed is correct. We should actually expect it to require revision. But since it, like all scientific conclusions is a tenative statement subject to revision as new observations appear - then the correct minimalist theory which produces a testable hypothesis is, "what we experience, we experience". That much is testable and repeatable.
In more detail we can state that there is great apparent consistency in what we experience - that the speed of light is observed always to be a constant, that mass/energy is observed always to be conserved through the relation E = M*c^2, that masses always fall at the same rate in a constant gravity field, that magnetism is always observed to be a relativistic effect of electrostatics. That the Sun rises each day is also an observation describing all past days. These observations can readily be extrapolated into the future, but with some limited peril.
Picture
If someone could upload the picture of people in a vat in the film The Matrix, that would be good. Also, there needs to be a short summary up top, instead of one huge section. 165.230.46.138 17:41, 27 April 2006 (UTC)
Irrelevant examples
Many of the examples given refer to disembodied heads or brains which experience their external reality by artificial means. This is quite different from the concept discussed, wherein the brain experiences an artificial reality. -Ahruman 09:48, 12 May 2006 (UTC)
- Agreed. Maybe they should all be edited out, or moved to a separate article? Luvcraft 21:49, 12 August 2006 (UTC)
- Done. Split off irrelevant examples to Media featuring brains in jars. Luvcraft 21:59, 12 August 2006 (UTC)
Death speculation
Just a speculation, but if the "brain in a vat" had an experience of death, would the brain actually die in the vat, because it believes it has died? Or would the brain come to no physical harm in the "real" world because it is only experiencing death in a false virtual reality? Stovetopcookies 06:47, 23 June 2006 (UTC)
I was wondering the same thing. Suppose you are a brain in a vat. I imagine if you shot yourself in the head fatally that you would die in the virtual world, so the real brain no longer receives any external information. No sound, touch, taste, sight, nothing. The real brain is alive, but it is interpreting no information. Assume the computer can make the real brain unconscious and you basically have a brain in a vat in a coma induced by the computer. Simulated death. Once again, the brain in a vat will not know that it is a brain in a vat, nor will it know that it is actually alive unless this coma state is reversed by the computer. Sabar 04:37, 7 September 2006 (UTC)
- We need not take such dangerous methods to test whether we are living in a vat. Take a drink of alcohol. If you feel the effects of the alcohol, then you are either not a brain in a vat, or the system is capable of delivering alcohol to your brain. The brain in a vat scenario can deliver sensation to your brain, but could not cause internal brain states (such as being intoxicated) unless it was vastly more complex. If you were a brain in a vat, if you experienced a situation where you expected to die, you would simply find out that it hurt, but you would not die, unless the computer killed your brain.--RLent (talk) 17:51, 24 March 2008 (UTC)
- The problems you put forward - alcohol, death, several more, including chemically based emotions - could all be solved by simulating the brain and the vat inside another computer. In such a case, the vat could be equipped with any imaginable technology. Another possibility: what if our *real* brain were an entirely different technology than a human brain? In that case, we cannot be sure that alcohol or death cannot be triggered through stimulus. 68.144.80.168 (talk) 16:02, 24 June 2008 (UTC)
- If the brain is simulated, it's not a brain in a vat. While similar, the idea that we are a computer program is distinct from being a brain connected to the computer. It does seem that the more we look into the scenario, the more fantastic it has to be to be plausible.--RLent (talk) 19:05, 1 July 2008 (UTC)
What makes you think alcohol even exists in the "real world" (assuming this is true)? Maybe our concept of drugs is just part of the simulation. Maybe the drugs of the real world were manufactured by people, something made to affect chemicals in the brain to make yourself relaxed or more energetic or maybe even more creative (depending on the substance); it might explain why such things would be found literally growing out of the ground sometimes. Maybe I just don't understand the world, but it always seemed odd that a plant would naturally affect the way the human brain works... when smoked. What kind of survival mechanism is that? Maybe these "plants" and other drugs are just creations of the simulation meant to mirror the effects of whatever drug they have in the non-simulated world (or maybe they just plug themselves into a simulation to feel these effects). 174.42.206.246 (talk) 16:02, 18 February 2010 (UTC)
Movie occurrence(s)
In the movie Saturn 3, there was a large vat of blank brain tissue that was used as memory storage for the robot. Stovetopcookies 06:50, 23 June 2006 (UTC)
Dark Star
A very small detail: The article says "After Dolittle attempts to reason the bomb to drop from the bomb bay and destroy the nearby planet". If I remember the film correctly, this is wrong. The bomb is malfunctioning due to some energy surge, and it wrongly believes it received orders to detonate. The crew tries to explain this to the bomb to make it go *back into* the bomb bay and *not* detonate. When this fails they use a Descartes type of reasoning (that it cannot be certain about anything in the real world) which sends Bomb #20 into a philosophical mode which ends in "delusions of grandeur". :) Kricke 19:25, 2 August 2006 (UTC)
Hilary Putnam is inconsistent:
[edit]"Putnam claims that the thought experiment is inconsistent on the grounds that a brain in a vat could not have the sort of history and interaction with the world that would allow its thoughts or words to be about the vat that it is in. In other words, if a brain in a vat stated "I am a brain in a vat," it would always be stating a falsehood."
Bullshit. All the mad scientist has to do is disengage the supercomputer, duh.
I mean this is an integral component of the experiment, right?
That there IS , in fact, a "real" HUMAN and UNAMPUTATED mad scientist in a "real" world WITH a "real" super computer?
right?
and that this "real" person grabbed another "real" full grown person with "real" experiences in the "real" world, sedated this person and then tricked him or her into believing that reality was never interrupted? meanwhile, as this person's brain is minding its own business in the "super computer" world, the person's body is being amputated...
THEN! the mad scientist (proving his madness) lifts the veil of the super computer, sending the brain into a downward spiral of fuzzy logic. Voilà! The impotent brain is horrified at being stricken powerless and doesn't know what to think, leading to immense entertainment on a scale comparable to that of monster truck rallies.
What a fucking retard.
OF COURSE A brain is capable of knowing it has been reduced to merely a brain in a vat. --
Of course Hilary is placing arbitrary limitations on the capabilities of the evil supercomputer. Why can't an evil supercomputer cause the brain to believe that it is in a vat, and even believe it is stating that it is in a vat? Worse, the sentence, "In other words, if a brain in a vat stated "I am a brain in a vat," it would always be stating a falsehood.", is clearly wrong. When an object of category A states "I am an A", it is a tautologically true statement.
--- I'm not sure that you understood Putnam correctly. What Putnam was trying to get at, I believe, is that if all we know are virtual things and virtual logic, then we pretty much have no intellectual right to make any assumptions about the real world's own things and logic. All that we can logically gather from a virtual existence are other virtual things. For all we know, brains and vats are constructs of the computer, as well as the concept of 'in'. The purpose that I get out of the BIV thought experiment is that it lets one try to figure out what things must be true even if we are a brain in a vat, being controlled by an evil demon, dreaming, or being tested by a deity.
- Right. Imagine that in the virtual world of the brain there is a kind of thing that it calls "vap". The word "vap" refers to, when used by the brain, that sort of object in the virtual world. That's a natural assumption, since it's the coming across them and needing to talk or think about them that is the reason for the term "vap"'s existence. Now imagine that that object is by pure coincidence identical to our vats. The word "vap" still however refers to vaps in the virtual world, it's just a coincidence that vaps happen to look like vats. Now imagine that they were actually called "vats" by the brain instead of "vaps." Same word as our word "vat," referring to objects that look the same to the brain as vats look to us. But even so, the brain's vat-word does not refer to the vats in our world. It can't and it's incoherent to think so. If the brain has a word "vat" referring to objects that look like our vats, then that is just a weird coincidence. So if it claims to be a brain in a vat, we can be sure that it's not talking about the brain that we are seeing or the vat that we see it submersed in. Nevertheless, if the moral here is that it makes no in principle perceivable difference whether we are brains in vats or not, why should we care? A difference that makes no difference is no difference. Kronocide 20:29, 9 November 2007 (UTC)
The Whisperer in Darkness
The Lovecraft story The Whisperer in Darkness has to be one of the earliest references to a brain in a vat. If not, does anyone know where the idea first appeared?
Article disproves real world?
By Putnam's reasoning, if a brain in a vat stated "I am a brain in a vat," it would always be stating a falsehood. If the brain making this statement lives in the "real" world, then it is not a brain in a vat. On the other hand, if the brain making this statement is really just a brain in a vat, then by stating "I am a brain in a vat" what the brain is really stating is "I am what nerve stimuli have convinced me is a 'brain,' and I reside in an image that I have been convinced is called a 'vat'."
But if this is true, then the converse is also true: If a brain in the "real" world stated "I am a brain in the real world," it too would always be stating a falsehood. If the brain making this statement lives in a vat, then it is not a brain in the "real" world. On the other hand, if the brain making this statement is in the "real" world, then by stating "I am a brain in the 'real' world" what the brain is really stating is "I am what nerve stimuli have convinced me is a 'brain,' and I reside in an image that I have been convinced is called a 'real world'."
This means neither brain "lives" in the real world. It also means that neither brain "lives" in a vat. Both brains "live" in their own world-models, based on the data they're given. Eyes, for example, do not detect color. Color does not exist in the "real" world. Color exists only as an idea in a brain.
Organic machines such as our eyes and ears can lie to our brains just as a computer could, affecting what our model of the real world looks, sounds, smells, feels, and tastes like.
The real world ends at the optic nerve.
ChiaHerbGarden 06:53, 4 February 2007 (UTC)Chia
---I think you're absolutely correct. The truth of the matter is that everyone really is a brain in a vat. The vat is a skull, and the supercomputer is our senses. Having said that, it seems unimportant to pursue figuring out what exact type of vat we're in, whether plastic or bone, and unimportant also to figure out what type of supercomputer we're in, whether metal and electric or organic. The important thing to know is that, yes, we are in fact a brain in a vat with a supercomputer.
Perhaps the real question being pondered here is about the role of the computer. If we're comparing the senses to a supercomputer, as I am, then the difference between the BIV scenario and real life is that the supercomputer in the BIV scenario was responsible for creating a virtual world based on 1) how the computer is programmed, and 2) how the brain responds. Thus the senses are truly playing the same exact role as the computer in the BIV scenario: sending the necessary images and thoughts to the brain as dictated by Evolution or God for whatever reason. You are a brain in a vat; is it really so disappointing that the supercomputer isn't metal?
-- The real world doesn't just end at the optic nerve. The brain also generates its own virtual reality. It's been an established fact since at least 1931 that colours like purple, pink and magenta are not created by a single wavelength of light. Yet the human brain usually perceives these combinations as distinct from the rest of the visual spectrum. [1]
That means that the experience of "seeing the colour magenta" is not only a result of light striking our retina. It is also a result of the way each brain uniquely processes sensory input. Therefore, according to the brain-in-a-vat argument, saying "I see a magenta-coloured tree" is more or less speaking in "vat-English" and cannot truthfully refer to anything in the external world. Nor can it truthfully refer to the subjective experience of another person, even if that person tells you that they also see "a magenta-coloured tree" themselves. Basically, the colour pink is proof that everybody is wrong about everything, all the time. Even if everybody agrees with you. 2001:56A:7665:1800:E8A5:DD53:E4A4:8A47 (talk) 00:07, 2 May 2021 (UTC)
WikiProject
Um...why does no one bring proposed deletions to the attention of the relevant WikiProject? That seems like an appropriate courtesy. KSchutte 00:17, 11 March 2007 (UTC)
Article is excellent. Brief and correct.
?
Who provided the vat? And who programmed the 'computer' providing the input to the brain in the vat?
Apart from that, doesn't The Matrix (movie) borrow from this - as it borrowed from Hindu theology?
--76.174.222.187 08:47, 9 October 2007 (UTC)
>It's an allegory, it's not perfect. You can't really provide a complete description for something that is ultimately unknowable.
One of the reasons for the merge request at the top of the article is that the evil genius could be said to be the one to provide the vat. --RossF18 10:09, 9 October 2007 (UTC)
Hmmm...I dunno about that. I think (or my brain in a vat thinks) this article works better as a stand-alone article. If anything, I'd say evil genius should be merged here since to a brain in a vat, the evil genius is just another 'figment of the imagination' that is as valid/invalid as the walking in the sun scenario in the article's illustration.
Also, the evil genius is only one, narrow, potential 'answer' to the brain in a vat 'dilemma': if you can be sure of nothing except for what's in your pickled brain, the "evil genius" scenario can't possibly be accepted as a basis for the underlying Solipsist 'argument' without question.
--Piepie 06:18, 10 October 2007 (UTC)
- Interesting. Except that the brain in a vat is a derived/contemporary rethinking of the evil genius concept. Descartes talked about an evil genius or demon, and brain in a vat derives from that as a philosophical concept. --RossF18 01:33, 5 November 2007 (UTC)
References in popular culture
Note that these are all references to the "brain in a vat" thought experiment, as described above.
Simon Wright in Captain Future.
The movie They Saved Hitler's Brain (1966).
It appears in The Man With Two Brains, a Warner Brothers movie starring Steve Martin.
The use of similar ideas in movies is not infrequent, as in Vanilla Sky and The Matrix (a clear reference to both Plato's Allegory of the Cave and the brain-in-a-vat theory, though in that case entire bodies were preserved, rather than just brains). A similar sense of indistinction between virtual reality and reality appears in eXistenZ, a David Cronenberg film.
The Tom the Dancing Bug comic strip by Ruben Bolling has a recurring Brain in a Beaker story, detailing the influence of minor events in the real world on the virtual inhabitants of a disembodied brain.
The cartoon series Codename: Kids Next Door features an episode in which Numbuh One is trapped in a body-vat a la The Matrix and is made to believe he is on a dream island. He breaks out by tapping his heels together, which in the "real world" activates his jet boots.
The game Sid Meier's Alpha Centauri involves the base facility "Bioenhancement Center", the construction of which plays the quote:
- "We are all aware that the senses can be deceived, the eyes fooled. But how can we be sure our senses are not being deceived at any particular time, or even all the time? Might I just be a brain in a tank somewhere, tricked all my life into believing in the events of this world by some insane computer? And does my life gain or lose meaning based on my reaction to such solipsism?" This is then ironically revealed to be the thought of an individual in a literal brain in a vat project and is followed by the statement "termination of specimen advised."
This vocal sample has also been included in the Prometheus track "O.K. Computer".
In the film Dark Star by John Carpenter, a planet-destroyer bomb has repeatedly been activated and deactivated due to various malfunctions. Dolittle reasons with the bomb's computer, inducing it to question whether its activation signal was real, and Bomb #20 takes up the philosophy of Descartes (I think, therefore I am) and returns to the bomb bay, commenting, "I must think on this further." Later, Pinback attempts to command the bomb to drop from the bomb bay and destroy the nearby planet (as per the mission) but Bomb #20 responds "You are false data." Then, reasoning that all stimuli Bomb #20 is experiencing are not true, the bomb assumes itself to be God, declares, "Let there be light", and explodes.
In a short story by Julio Cortazar titled "La noche boca arriba" ("The night, face-up"), the reader follows the story of a motorcyclist who has just been involved in an accident and a young native "Moteca" man who is fleeing his own sacrifice. The imagery parallels in the story lead us to question who is dreaming of whom. The reader initially believes that the motorcyclist is dreaming of an Aztec boy following his traffic accident. In the end, it is revealed that the Aztec boy has been dreaming of the motorcyclist in his state of fear and delusion.
The computer game The Infinite Ocean by Jonas Kyratzes deals with a sentient computer that ponders the same issue. Here the question is rephrased to "how can you know whether you are a human being or just a sentient computer dreaming it is one?" The game A Mind Forever Voyaging by Steve Meretzky of Infocom involves a similar theme, but the sentient computer is informed of his/its nature a short time before the start of play.
The idea of a person's brain or even more abstractly, consciousness, being removed from the body appears in some of Stanisław Lem's novels. A related topic, an artificial mind being fed artificial stimuli by its mad-scientist creator, also appears there.
The cartoon show Batman: The Animated Series featured an episode, Perchance to Dream, where Batman was incapacitated by the Mad Hatter, and connected to a machine that simulated reality based on existing brain functions (dreams). He realizes the situation, and to escape, throws himself out of a bell tower; the logic being that when dreaming of falling, one always wakes up before hitting the ground.
The comic Green Lantern's Infinite Crisis crossover featured Green Lantern and Green Arrow being attached to the 'Black Mercy', a fictional symbiotic plant that creates an artificial 'perfect world' for the host, who will have no memory of his previous life. The 'Black Mercy' first appeared in Alan Moore's Superman story, For the Man Who Has Everything.
In an episode of Stargate SG-1 (The Gamekeeper) the team visits an alien planet where they are imprisoned inside capsules that cause them to experience a simulated world controlled by a "gamekeeper". When they discover what has happened, the gamekeeper causes them to experience leaving the capsules again so that they will think that they have left the artificial world. They then experience an artificial version of their own world nearly indistinguishable from reality, but are eventually able to determine that they are still inside the machines and escape.
- PLEASE NOTE. I was unable to verify the bulk of this material, much of which is dubious and all of which lacks any sort of reliable sources. As a compromise measure I have moved it to the talk page so that it is not "lost" in the edit history shuffle. Please do not re-insert this material without providing the reliable sources deemed necessary by WP:V policy. Thank you for your understanding. Burntsauce 16:59, 10 October 2007 (UTC)
- I don't see these as vandalism. Instead these are possibly a product (or perhaps an unrelated coincidence) of a just-closed discussion in WP:AfD. There was a suggestion that some of the non-notable fiction concerning disembodied brains should be merged here or at Isolated brain. These seem to be more "brain in a vat" than "isolated brain", so they may well be independent, but I'm inclined to spin them off into "Disembodied Brains in Fiction" or something along those lines, with a reference to the new article from here. Still, consensus should dominate. - Bilby (talk) 02:12, 11 March 2008 (UTC)
- My mistake - I thought the issue was the above being posted, not the above being removed from a talk page. As such, it did constitute vandalism. - Bilby (talk) 06:24, 11 March 2008 (UTC)
- I also suggest the inclusion of the novel "Caverns of Socrates", by Dennis McKiernan, which revolves around just such a plot device.76.178.235.206 (talk) 07:14, 9 September 2008 (UTC)
- Removed the anime show Sword Art Online from the list. While it has apparent similarities, it does not deal with this issue in any way.
Brain in a Vat - verification of Alpha Centauri's quote
I can verify that Sid Meier's Alpha Centauri does produce the quote you removed upon the player's creation of the facility "Bioenhancement Center." I can produce the audio if necessary.
I listened to the quote about five minutes ago. Let me know if more is needed.
I cannot verify the statement about OK Computer.
- Shumake 13:07, 17 October 2007 (UTC)
Berkeleyan Idealism and the Brain in a Vat
Is there any reference in the relevant literature linking the Brain in a Vat scenario to George Berkeley's idealism, whereby all we can know of reality is our perceptions? It seems that both have similar implications, even if they are presented in a different form.--Pariah (talk) 23:17, 14 December 2007 (UTC)
Extra detail
With particular relevance to solipsism, it is not the case that the brain in the vat cannot know anything; it can still know everything about math with certainty. This is not just a theory, it's a strong, well-developed theory. 68.144.80.168 (talk) 16:18, 24 June 2008 (UTC)
Brain in a vat?
http://technology.newscientist.com/article/mg19926696.100 It seems a brain in a vat has been constructed. Granted, it's not very clever. But still... —Preceding unsigned comment added by 82.33.119.96 (talk) 22:05, 15 August 2008 (UTC)
Merge Discussion
Could someone archive the merge discussion? The whole merge discussion went on for a while and was then deleted without any comment. Find it here please: Wikipedia talk:Philosophy/Notice Board
Too much Hilary Putnam
Too much of the article discusses Putnam's objections over a brain that has ALWAYS been in a vat. I see no reason why this special case should be canonical. More effort should be made to find the earliest published references to the "brain in a vat," and the arguments presented there. The "brain in vat" obviously could apply to adult brains transferred into a vat, and the quotes of Putnam don't address this situation at all. Tumacama (talk) 15:38, 3 October 2009 (UTC)
What if the brain tries to do some self-examination?
As long as the supercomputer can only interact with the brain by impulses, it cannot control the brain's inner physical structure. What if the brain tries, for example, to 'beat' its own brain, expecting some 'physical' unconsciousness?
Yes, you may say that the computer can cheat the brain's feelings, or control the brain's life-sustaining supply. But what if the brain tries to do some more subtle self-research? E.g., kill some nerve cells and see if he/she forgets something...
In summary, any brain can do some self-research, and believe some self-action may result in some internal changes which do not come from impulses. By this, the brain can tell if it is in a vat!
Except that the computer has full control of the brain. Pond918 (talk) 06:52, 13 August 2010 (UTC) —Preceding unsigned comment added by Pond918 (talk • contribs) 06:31, 13 August 2010 (UTC)
Ok, the machine (M) has full control of the brain in the vat (BiV). So BiV is included in the part E of M+BiV over which M+BiV has full control. If E=M+BiV then M+BiV has full self-control and thus it can decide for itself to be fully uncontrollable (if it can't decide for itself at least one thing, it has no full self-control). Thus we have a vicious circle. If on the other hand BiV<E<M+BiV then the uncontrollable part, say U, of M+BiV is essentially an isolated chamber inside M, hence we may assume that M-U+BiV has full self-control and the vicious circle is repeated. Thus M has no full control of BiV. --karatsobanis (talk) 20:14, 24 July 2011 (UTC)
- Even if the BiV has complete "freedom of decision", it has effectively no "freedom of action" (since all the activities and consequences that it initiates are played out as a simulation supplied to it by the computer). Therefore all perceived consequences of "actions" by the brain - including experiments - are up to the computer to allocate. Similarly, the origin of the computer is irrelevant: it might be a product of evolution, the creation of a supernatural being, or the product of some other unknown process. What matters for verification is the predicament of the brain: it might construct abstract models of its situation - including eg the origin of the computer - based on the concepts it forms in the simulation, but it can never test or verify them, because the consequences of all tests are governed by the computer. Therefore it can have no access to verification. And, as noted above, whether we like it or not, this predicament is essentially ours.Orbitalforam (talk) 08:41, 3 August 2011 (UTC)
- Interestingly, verification would also seem to be a problem for any kind of intelligence, even an infinitely intelligent, apparently omniscient one. It could never be sure that it was not a simulation run by another intelligence. This would be a "Deity in a Vat" (DiV) problem.131.111.41.167 (talk) 10:21, 5 August 2011 (UTC)
Can Source Code (2011 film) be added as a popular culture example? 89.204.153.146 (talk) 23:05, 27 April 2011 (UTC)
And another shot: Perhaps the Brain-in-a-Vat hypothesis is irrefutable. However, such a model of the world is trivial. Hence it all comes down to a matter of taste, preference and choice. But this very existence of freedom of choice between models of the world indicates that the experimenter is not omnipotent over his own experiment. The rest follows. — Preceding unsigned comment added by Karatsobanis (talk • contribs) 10:09, 4 January 2013 (UTC)
Refutation for it:
After I spent almost a month tortured by this hypothesis, I finally found a rebuttal to it: Every person has their good memories. These memories can be triggered if the individual sees something, hears something, hears some music, sees some film and so on. If you, for example, hear a song not heard in a long time, you have searched for this song. Note: YOU have searched for it, and not any stimulus designed by a supercomputer. No mad scientist could make you search for it, hear it and feel happy about it. The scientist could materialize all around, but you will never be able to get past their incentives and find them. That would be impossible. Think about it. Thank you very much - Eduardo Sellan III (talk) 02:14, 14 December 2011 (UTC)
PKD on brains in vats
In a 1978 interview with The Aquarian (reprinted on page 9 of PKD Otaku 4), Philip K Dick had the following to say about brains in vats:
AQUARIAN: Then what is the major influence on your work?
DICK: Philosophy and philosophical inquiry.
I studied philosophy during my brief career at the University of California at Berkeley. I'm what they call an "acosmic panentheist," which means that I don't believe that the universe exists. I believe that the only thing that exists is God and he is more than the universe. The universe is an extension of God into space and time.
That's the premise I start from in my work, that so-called "reality" is a mass delusion that we've all been required to believe for reasons totally obscure.
Bishop Berkeley believed that the world doesn't exist, that God directly impinges on our minds the sensation that the world exists. The Russian science fiction writer Stanislaw Lem poses that if there was a brain being fed a simulated world, is there any way the brain could tell it was a simulated world? The answer, of course, is no. Not unless there was a technological foul-up.
Imagine a brain floating in a tank with millions and millions of electrodes attached to specific nerve centers. Now imagine these electrodes being selectively stimulated by a computer to cause the brain to believe that it was walking down Hollywood Boulevard chomping on a hamburger and checking out the chicks.
Now, if there was a technological foul-up, or if the tapes got jumbled, the brain would suddenly see Jesus Christ pass by down Hollywood Boulevard on his way to Golgotha, pursued by a crowd of angry people, being whipped along by seven Roman Centurions.
The brain would say, "Now hold on there!" And suddenly the entire image would go "pop" and disappear.
I've always had this funny feeling about reality. It just seems very feeble to me sometimes. It doesn't seem to have the substantiality that it's supposed to have.
I look at reality the way a rustic looks at a shell game when he comes into town to visit the fair. A little voice inside me says, "now wait just a second there..."
-- noosphere 09:10, 13 December 2012 (UTC)
Arguable
[edit]"Though the world does not appear wholly logical or consistent, enough of it is logical and consistent to suggest that there is a substantial probability that I am (or the reader, assuming he exists, is) part of a real universe. So, the cost-benefit analysis favors belief in the external world's existence."
Since there is no way to know the power and/or limits of such a "computer", saying the probability of reality being true is high just because reality seems "consistent" (whatever a consistent reality means to us, brains in vats) is simply nonsense. — Preceding unsigned comment added by 189.121.107.141 (talk) 16:13, 10 February 2014 (UTC)
Article seems devoid of key information
It's all very well mentioning similar or antecedent ideas, but who was the first person to write about a brain in a vat? Who has elaborated on the idea? Which books mention it? Saint91 (talk) 23:36, 20 August 2014 (UTC)
Answer by Roni:
According to Joachim Eberhardt: Gehirne in Tanks - Warum die skeptische Frage offen bleibt. Zeitschrift für philosophische Forschung 58 (2004) 4, p. 560 [a PDF of said article can be found here: http://opus4.kobv.de/opus4-fau/frontdoor/index/index/docId/241], the first mention of the idea of a brain in a vat occurred in
Harman, Gilbert 1973: Thought, Princeton/NJ, p.5. — Preceding unsigned comment added by Ronibert (talk • contribs) 10:47, 22 August 2014 (UTC)
Pascal's wager
[edit]"However, if one behaves as though the world is not real and it is, one's selfish decisions can cause serious harm to the happiness of large numbers of people. This claim is logically flawed, just like Pascal's wager. E.g. it might turn out that harming others in the imaginary world actually makes people happier in the real world and vice versa.
TheWanderer1357 (talk) 12:34, 8 November 2014 (UTC)
Brain in a Vat hypothesis
Imagine a mad scientist has taken your brain from your body and put it in a life-sustaining liquid. Electrodes are inserted into the brain and connected to a machine that produces images and sensations via a computer. Because all the information you get about the world is handled through your brain, the computer has the ability to simulate your daily experience. If this is indeed possible, how do you prove that the world around you is real, rather than an environment generated by a computer simulation? Personally, I believe that there is this mysterious connection, but not because I'm a skeptic. Just think: if a person A is in the vat, and the evil scientist then lets another person B into the brain-in-a-vat world, then clearly, after entering, B can talk meaningfully about the reality of the vat and the brain; and as long as B and A converse for a while, some contact will begin to exist between A and B. This means that A can also talk correctly about the reality of the vat and the brain. If this is the case, then even if B had not entered the brain in a vat at the beginning, we would only need the computer to adjust its parameters so as to supply the electronic signals corresponding to B entering the brain in a vat and chatting, and A could still talk correctly about the reality of the vat and the brain. — Preceding unsigned comment added by 2605:E000:5B14:3E00:45D2:9A83:AC73:D5E4 (talk) 03:57, 25 February 2016 (UTC)