Research on the Cognitive Neuroscience of Memory
- How and where memory occurs in the brain, particularly memory acquired through practice
- How experience shapes action, perception and thought through pervasive mechanisms of plasticity throughout the human brain
- Implicit and explicit memory contributions to perceptual-motor skill learning
- Implicit and explicit memory in visual category learning
- How general cognitive ability can be improved through cognitive practice
- Repetitive training of working memory span to improve cognition
- In both younger adults and as a remediation for age-related cognitive decline
- Perceptual-motor skill learning using the SISL task
- Working memory training using the SeVi-WM task
- Using computational modeling and functional neuroimaging to study interactions among the brain’s memory systems
- Investigating memory system interactions and intuitive decision making using visual category learning
Check the Presentations link in the right sidebar to see the most recent ideas and reports, presented as posters and talks at recent conferences.
309 Cresap Laboratory
Department of Psychology
2029 Sheridan Road
Evanston, IL 60201
Phone: (847) 467-5779
I think I’m not even going to explain why this is interesting to me beyond the obvious title and the fact that the senior author, Liz Brannon, is a childhood friend and now a distinguished researcher and Professor at Duke.
Implicit learning involves picking up information from the environment without explicit instruction or conscious awareness of the learning process. In nonhuman animals, conscious awareness is impossible to assess, so we define implicit learning as occurring when animals acquire information beyond what is required for successful task performance. While implicit learning has been documented in some nonhuman species, it has not been explored in prosimian primates. Here we ask whether ring-tailed lemurs (Lemur catta) learn sequential information implicitly. We tested lemurs in a modified version of the serial reaction time task on a touch screen computer. Lemurs were required to respond to any picture within a 2 × 2 grid of pictures immediately after its surrounding border flickered. Over 20 training sessions, both the locations and the identities of the images remained constant and response times gradually decreased. Subsequently, the locations and/or the identities of the images were disrupted. Response times indicated that the lemurs had learned the physical location sequence required in original training but did not learn the identity of the images. Our results reveal that ring-tailed lemurs can implicitly learn spatial sequences, and raise questions about which scenarios and evolutionary pressures give rise to perceptual versus motor-implicit sequence learning.
I do wonder about their definition of “implicit” in lemurs, though…
I ran across this link to Paul Krugman being insightful and thoughtful about the general questions of “What is a Model?” and “What do we use them for in Science?”
It’s about economics and specifically models of development economics, but the general questions of methodology apply to social sciences more broadly.
It is in a way unfortunate that for many of us the image of a successful field of scientific endeavor is basic physics. The objective of the most basic physics is a complete description of what happens. In principle and apparently in practice, quantum mechanics gives a complete account of what goes on inside, say, a hydrogen atom. But most things we want to analyze, even in physical science, cannot be dealt with at that level of completeness. The only exact model of the global weather system is that system itself. Any model of that system is therefore to some degree a falsification: it leaves out some (many) aspects of reality.
How, then, does the meteorological researcher decide what to put into his model? And how does he decide whether his model is a good one? The answer to the first question is that the choice of model represents a mixture of judgement and compromise. The model must be something you know how to make — that is, you are constrained by your modeling techniques. And the model must be something you can construct given your resources — time, money, and patience are not unlimited. There may be a wide variety of models possible given those constraints; which one or ones you choose actually to build depends on educated guessing.
And how do you know that the model is good? It will never be right in the way that quantum electrodynamics is right. At a certain point you may be good enough at predicting that your results can be put to repeated practical use, like the giant weather-forecasting models that run on today’s supercomputers; in that case predictive success can be measured in terms of dollars and cents, and the improvement of models becomes a quantifiable matter. In the early stages of a complex science, however, the criterion for a good model is more subjective: it is a good model if it succeeds in explaining or rationalizing some of what you see in the world in a way that you might not have expected.
There is also a nice description of a “Dishpan model” by David Fultz as an example of a hyper-simplified model that illustrated some emergent properties useful for meteorology.
What resonates with me about Krugman’s description is a common interest in building the simplest, descriptive models that we hope illuminate underlying principles in complex processes. In Economics, particularly Macro, the scientific goal is to understand systems of unmanageable complexity (interactions among all the people and institutions that produce economic activity). In Neuroscience and Psychology, we attempt to understand the human brain, also a system of unmanageable complexity.
I also prefer simple models with a small handful of parameters to illustrate concepts, while having a lot of admiration and respect for modelers who take on the complexity of building up from individual neurons (each themselves having nearly unmanageable complexity, fwiw). The simple models also cannot be “right” in the same sense Krugman describes above, but they can account for some useful fraction of the variance we aim to explain and hopefully expose some deeper principles that might even eventually direct neural-level modeling.
There’s a good question on the other end of the complexity spectrum as well, about why it is worth even building simple models with a few parameters over and above simply making theoretical statements like “changing x causes a change in y.” Such theoretical statements are the bread and butter of standard approaches to Psychological Science, especially experimental work, but I’ll leave the answer as an exercise, perhaps to be tackled in my graduate seminar next time I teach modeling (hints: quantification and prediction are important).
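To make the quantification-and-prediction hint a bit more concrete, here is a purely hypothetical sketch (not from the post; the power-law form and the simulated data are my own choices for illustration). A verbal statement like “practice reduces response time” says nothing about how much or when, while even a two-parameter model such as the classic power law of practice, RT(n) = a·n^(−b), can be fit to data and then scored against trials it has never seen:

```python
import math

# Hypothetical illustration: fit the power law of practice, RT(n) = a * n**(-b),
# to simulated response-time data, then predict an unseen trial. The data and
# parameter values are invented for the example.

def fit_power_law(trials, rts):
    """Least-squares fit of log(RT) = log(a) - b * log(n)."""
    xs = [math.log(n) for n in trials]
    ys = [math.log(rt) for rt in rts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my + b * mx)
    return a, b

def predict_rt(a, b, n):
    """Predicted response time on trial n under the fitted power law."""
    return a * n ** (-b)

# Simulated practice data: response times (ms) shrink with trial number.
trials = [1, 2, 4, 8, 16, 32]
rts = [1000, 840, 710, 600, 505, 425]

a, b = fit_power_law(trials, rts)
print(f"fitted a = {a:.0f} ms, b = {b:.2f}")
print(f"predicted RT at trial 64: {predict_rt(a, b, 64):.0f} ms")
```

The verbal claim and the model agree in direction, but only the model makes a quantitative prediction that the next block of trials can confirm or falsify, which is exactly the point of the hints.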
The following ad should appear in the Cognitive Neuroscience Newsletter soon:
Postdoctoral Positions at Northwestern University
Memory Systems, Intuition and Modeling
Department of Psychology
Laboratories of Paul Reber & Ken Paller
Multiple postdoctoral openings are currently available on two new projects that apply memory systems theory to accelerate the development of expertise through training. One project will develop methods to improve the use of intuition in decision making. The second project will use targeted memory reactivation to enhance consolidation processes and speed learning. Both projects reflect collaborative research between the laboratories of Professor Paul Reber (http://reberlab.psych.northwestern.edu/) and Professor Ken Paller (http://pallerlab.psych.northwestern.edu/). Also see http://CogNS.northwestern.edu for further information on the local cognitive neuroscience environment.
We are searching for postdoctoral candidates with a strong interest in human memory research and with expertise in some of the following areas: memory systems research, experimental behavioral methods, computational simulation modeling, multivariate pattern analysis, EEG recording and analysis.
Interested candidates can send inquiries and application materials to Susan Florczak <email@example.com>. Applications will be evaluated when received and hiring decisions made on a rolling basis. Multiple two-year appointments are currently available. Applications should include a cover letter, CV, and names of at least three references.
We are also looking to hire a new Research Assistant for the lab. Applications for the RA position should go through NU Human Resources.
I got another request to comment on yet another media claim that technology is bad for our brains. It’s actually also a good example of really poor science reporting in the media, so I won’t link it, but the topic seems generally of interest and it appears to be based on a curious underlying (folk) model of cognition worth thinking about.
How would this work? How could technology make us less smart? The core idea is that by looking things up, we memorize less and are therefore less smart than we would be otherwise. But this misses the issue of substitution: if you aren’t memorizing something you can look up, do you learn something else instead?
To me, the interesting underlying idea is: Memory doesn’t have an “off switch”
We are constantly recording experiences from our environment. Of course, not everything gets remembered, so maybe we focus too much on the memory failures. But we aren’t consciously turning our memory on and off through the day. So if we are trying to memorize arbitrary facts that we could look up on Google instead, during that time we aren’t doing something else that could have left a useful memory trace. Note that I’m describing this as an attention/perception bottleneck, but it could be a bottleneck at the level of memory consolidation as well (which is probably the actual constraint that keeps us from remembering everything we experience).
The only way for this argument to really make sense is to have a strong theory that everything we would have memorized (instead of relying on Google) is more valuable to our internal knowledge state than everything we learn instead. I think that is going to be a hard case to make. And it won’t really be about technology.
There’s another way to make a possible ‘technology hurts the brain’ case based on skill learning/strengthening. If memory is a skill that can be improved by intensive practice, then concentrated attempts to memorize arbitrary information could theoretically make you better at remembering (and over time, you’d just get smarter). But there is no evidence anywhere that long-term memory can be strengthened this way — and many people have tried to do this.
Working memory looks to be trainable, but if anything, technology that makes you hold a question in mind while putting in the search terms to look it up is going to expand your WM rather than causing it to atrophy.
So no, technology is not going to make us less smart. It’s almost certain to be overwhelmingly in the other direction: the access the internet provides to incredibly rich and diverse kinds of information means the knowledge content of the average human brain in the 21st century is far greater than in the 20th or any prior time.
I was asked to answer some questions from a middle school student doing a research project on video games. Since I am interested in the topic generally, I should probably figure out how to answer these kinds of questions at an age-appropriate level. My attempt:
1. Do video games affect the human brain? Do video games affect the way of thinking? Do video games damage the thinking part of the brain?
Yes, video games can affect your brain, like anything else that you do a lot of. However, these changes can sometimes be for the better. There is recent evidence of improvements in “visuospatial attention” (how you see the world) following video game play. There may also be changes for the worse, like increasing aggression, but these are not yet well understood.
2. Can video games improve people’s knowledge? Can they help people’s grades get better in school? Or can the[y] get bad grades?
Video games probably won’t help you in school very much. They can cause problems in schoolwork when kids play too many games and don’t keep up with homework and assignments. If you are getting your homework done, playing games won’t hurt and may actually help a little bit.
3. Can video games make people lose time? With friends and family? Time outside?
If you spend too much time on games and do not make time for friends, family, proper exercise and sleep, then that will very likely cause problems.
4. Can video games make people sick? Gain weight? Headaches or a tumor?
Some people report dizziness and nausea (upset stomach) from games that give you first person perspective. This is very likely related to the kind of motion sickness you can get when riding in a car. In rare cases, some people may react badly to flashing lights/sounds in video games. In general, games won’t make you sick. If you eat in an unhealthy way when playing videogames, that can lead to weight gain and other health problems.
5. Can video games make people addicted to what their mainly about? How do they do this? Why do people get addicted?
Gaming addiction is not well understood. Games aren’t addictive the way other things are (like cigarettes). However, there are certainly some people who have problems like in (2) and (3) above. They seem to play so much that it messes up a lot of other things in their life. That looks a lot like being addicted. It also can look like a lot of other problems that teenagers often run into — mood swings, depression, difficulty in relating to others. I do not think it is well known whether games can cause those problems or whether kids having those kinds of problems for another reason sometimes like to play a lot of videogames.
Thank you very much for your help.
You are welcome, Jose.