Talk:Memory-prediction framework

From Wikipedia, the free encyclopedia

I have expanded fairly significantly on the framework; hopefully not too long now? Will need to create a few linked articles (on Grossberg, Mountcastle, Edelman & their relevant theories) - but before I do so, I want to see if this version survives. Ebarnard 13:15, 28 Mar 2005 (UTC)

Like I'm gonna mess with your post...

I knew this was gonna be a huge page. I haven't read Mountcastle, but if I do, I'll let you know how it goes. Have you tried getting your 'colleagues' to check out this page? Grad students, even? Wikipedia is actually built on word of mouth, not just immensely important scientific breakthroughs. It takes time for serious discourse to build up, especially at high levels of 'functioning'. You probably have smart friends, so invite them. Or, you can just start cutting links into such popular Wikipedia sites as "Brain/Human" and "Neurology" and, hell, even just post something in "Thought." They'll probably take it down, but who cares? Maybe Mountcastle himself is on Wikipedia right now! He could be looking for YOU! That is, unless he died some time in the late 70's. I wouldn't know because for some reason his Wikipedia link is still blank.

I'll start linking to more popular pages, and try to attract interest. Don't worry, you won't have to deal with people like me once intelligent people start leaving intelligent comments like, "Do you think there might be a laterally shifting signal between columns which might allow a rapid harmonic cohesion to occur from a relatively weak, but steady bottom-up signal? Could this somehow account for amplification, or the Varela/Maturana theories on phase-locking?"

Ya. People like that.

Who were you and who were you talking to? ---- CharlesGillingham (talk) 16:21, 5 June 2008 (UTC)[reply]

prediction

The word "prediction" is used in two different ways in the article. The section called "Explanatory successes and predictions" uses the word "prediction" in a conventional way. The "Memory-prediction framework" uses the word "prediction" in a new way that I think can be best described as being part of the jargon that has grown up around the work of Jeff Hawkins. I suggest that the Memory-prediction framework article include a definition of the word "prediction" as it is used in the theory. This should be done very close to the top of the article. As far as I can tell (from just reading the article), "prediction" is used in Hawkins' theory in the same contexts where other neurobiologists only use terms like "pattern recognition". Sensory inputs activate memories in such a way that pattern recognition takes place, and this pattern recognition process feeds back to influence the on-going processing of sensory inputs. In a sense, the activated memory of a pattern can be said to make "predictions" that can be compared to on-going sensory inputs. Alternatively the activated memory of a pattern can be said to represent "expectations", and these "expectations" are what lead to "predictions". The article should better distinguish between "expectations" and "predictions", and if these are really different elements of the theory, the article should say what brain events are proposed to correspond to each. --JWSchmidt 16:53, 16 October 2005 (UTC)[reply]

Another word which could be used in place of "expectation" might be inference, as the word "expectation" already has an established meaning in the statistics vocabulary. In the case of the visual system, the expected location of some feature, for example an eyebrow on a face, based solely on a few looks in a saccade of the observer's eye, might be said to be an inference based on the observer's prior recognition of the head and facial features of a supposed person in the visual scene. In the article itself, it is probably safer to build upon the vocabulary of the framework, if one is to question usage of the word "prediction" in the framework. Perhaps it will be possible to modify each usage of the word with an appropriate adjective. Hawkins uses the phrase "temporally constant invariant", and implies that finding these invariants is part of the processing of a "prediction" in the cortex. But according to the picture he is painting, it's only a part of the process. Ancheta Wis 17:54, 16 October 2005 (UTC)[reply]

How Many Bytes in Human Memory?

Today it is commonplace to compare the human brain to a computer, and the human mind to a program running on that computer. Once seen as just a poetic metaphor, this viewpoint is now supported by most philosophers of human consciousness and most researchers in artificial intelligence. If we take this view literally, then just as we can ask how many megabytes of RAM a PC has we should be able to ask how many megabytes (or gigabytes, or terabytes, or whatever) of memory the human brain has.

Several approximations to this number have already appeared in the literature based on "hardware" considerations (though in the case of the human brain perhaps the term "wetware" is more appropriate). One estimate of 10^20 bits is actually an early estimate (by Von Neumann in The Computer and the Brain) of all the neural impulses conducted in the brain during a lifetime. This number is almost certainly larger than the true answer. Another method is to estimate the total number of synapses, and then presume that each synapse can hold a few bits. Estimates of the number of synapses have been made in the range from 10^13 to 10^15, with corresponding estimates of memory capacity.
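The synapse-counting arithmetic can be sketched in a few lines. The synapse counts are the ranges quoted above; taking "a few bits" to mean 2 bits per synapse is an assumption made here purely for concreteness:

```python
# Back-of-the-envelope "hardware" capacity estimate from synapse counts.
# 2 bits per synapse is an assumed value, not a figure from the literature.
BITS_PER_SYNAPSE = 2

for synapses in (1e13, 1e15):
    bits = synapses * BITS_PER_SYNAPSE
    bytes_ = bits / 8
    print(f"{synapses:.0e} synapses -> {bits:.0e} bits (~{bytes_:.0e} bytes)")
```

Even the low end of this range (terabytes) dwarfs Landauer's functional estimate discussed below, which is the point of the paragraph: raw hardware counts say little about usable capacity.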

A fundamental problem with these approaches is that they rely on rather poor estimates of the raw hardware in the system. The brain is highly redundant and not well understood: the mere fact that a great mass of synapses exists does not imply that they are in fact all contributing to memory capacity. This makes the work of Thomas K. Landauer very interesting, for he has entirely avoided this hardware guessing game by measuring the actual functional capacity of human memory directly (See "How Much Do People Remember? Some Estimates of the Quantity of Learned Information in Long-term Memory", in Cognitive Science 10, 477-493, 1986).

Landauer works at Bell Communications Research--closely affiliated with Bell Labs where the modern study of information theory was begun by C. E. Shannon to analyze the information carrying capacity of telephone lines (a subject of great interest to a telephone company). Landauer naturally used these tools by viewing human memory as a novel "telephone line" that carries information from the past to the future. The capacity of this "telephone line" can be determined by measuring the information that goes in and the information that comes out, and then applying the great power of modern information theory.

Landauer reviewed and quantitatively analyzed experiments by himself and others in which people were asked to read text, look at pictures, and hear words, short passages of music, sentences, and nonsense syllables. After delays ranging from minutes to days the subjects were tested to determine how much they had retained. The tests were quite sensitive--they did not merely ask "What do you remember?" but often used true/false or multiple choice questions, in which even a vague memory of the material would allow selection of the correct choice. Often, the differential abilities of a group that had been exposed to the material and another group that had not been exposed to the material were used. The difference in the scores between the two groups was used to estimate the amount actually remembered (to control for the number of correct answers an intelligent human could guess without ever having seen the material). Because experiments by many different experimenters were summarized and analyzed, the results of the analysis are fairly robust; they are insensitive to fine details or specific conditions of one or another experiment. Finally, the amount remembered was divided by the time allotted to memorization to determine the number of bits remembered per second.
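The differential-scoring logic can be illustrated with a toy calculation. This is a sketch of the idea only, not Landauer's actual analysis; the scores, item count, and study time below are made-up numbers, and the information measure (treating each test item as a binary symmetric channel) is an assumed simplification:

```python
import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical true/false test: the exposed group scores 85% correct,
# while an unexposed control group manages 60% by informed guessing.
p_exposed, p_control = 0.85, 0.60

# Bits per item conveyed by each group's answers, modeled as the
# capacity of a binary symmetric channel with that error rate:
info_exposed = 1 - binary_entropy(1 - p_exposed)
info_control = 1 - binary_entropy(1 - p_control)

# The difference is attributed to memory of the studied material.
bits_per_item = info_exposed - info_control

items, study_seconds = 100, 60  # made-up experiment size
rate = bits_per_item * items / study_seconds
print(f"~{bits_per_item:.2f} bits/item, ~{rate:.2f} bits/second")
```

Subtracting the control group's score is what removes the "free" information a subject could guess without ever seeing the material.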

The remarkable result of this work was that human beings remembered very nearly two bits per second under all the experimental conditions. Visual, verbal, musical, or whatever--two bits per second. Continued over a lifetime, this rate of memorization would produce somewhat over 10^9 bits, or a few hundred megabytes.
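The lifetime figure is easy to reproduce. The 70-year span and 16 waking hours per day below are assumed round numbers chosen here for illustration, not values taken from Landauer's paper:

```python
RATE_BITS_PER_SEC = 2        # Landauer's measured rate, per the text above
YEARS = 70                   # assumed lifespan, for illustration
WAKING_HOURS_PER_DAY = 16    # assumed waking time

seconds = YEARS * 365 * WAKING_HOURS_PER_DAY * 3600
bits = RATE_BITS_PER_SEC * seconds
megabytes = bits / 8 / 1e6
print(f"{bits:.1e} bits ~= {megabytes:.0f} MB")
```

With these assumptions the total comes out a little under 3 x 10^9 bits, i.e. a few hundred megabytes, consistent with the figure quoted above.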

While this estimate is probably only accurate to within an order of magnitude, Landauer says "We need answers at this level of accuracy to think about such questions as: What sort of storage and retrieval capacities will computers need to mimic human performance? What sort of physical unit should we expect to constitute the elements of information storage in the brain: molecular parts, synaptic junctions, whole cells, or cell-circuits? What kinds of coding and storage methods are reasonable to postulate for the neural support of human capabilities? In modeling or mimicking human intelligence, what size of memory and what efficiencies of use should we imagine we are copying? How much would a robot need to know to match a person?"

What is interesting about Landauer's estimate is its small size. Perhaps more interesting is the trend--from Von Neumann's early and very high estimate, to the high estimates based on rough synapse counts, to a better supported and more modest estimate based on information theoretic considerations. While Landauer doesn't measure everything (he did not measure, for example, the bit rate in learning to ride a bicycle, nor does his estimate even consider the size of "working memory") his estimate of memory capacity suggests that the capabilities of the human brain are more approachable than we had thought. While this might come as a blow to our egos, it suggests that we could build a device with the skills and abilities of a human being with little more hardware than we now have--if only we knew the correct way to organize that hardware. —Preceding unsigned comment added by 59.96.172.31 (talk) 13:54, 22 April 2008 (UTC)[reply]

missing critical view

There are some critical reviews of the book by scientists. That critical view should be part of the article too. 89.196.9.188 (talk) 11:56, 10 January 2011 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified 3 external links on Memory-prediction framework. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 14:43, 9 September 2017 (UTC)[reply]