Today’s thought exercise
This is a very interesting piece on the philosophy of science and popular understandings of science:
How our botched understanding of ‘science’ ruins everything
http://theweek.com/article/index/268360/how-our-botched-understanding-of-science-ruins-everything
As an exercise for the reader, explain what is wrong with his complaint that what most people think of as science is actually the opposite of science.
Some helpful ideas for this might be found here (Noahpinion) and here (Crooked Timber, where I first found the links).
Seems like a topic we should be discussing in 205. I think it’s the right level of ‘meta’ for a class on experimental design.
Forgetting names
For some reason, I’ve been getting a lot of requests lately to explain why we are bad at remembering people’s names. An email exchange on this with an Atlantic reporter got summarized online here:
Curiously, it then also got picked up on another site, Lifehacker:
http://lifehacker.com/why-its-so-hard-to-remember-peoples-names-1620881563
And then earlier this week I did a short phone interview with a radio show, Newstalk, in Ireland, with host Sean Moncrieff.
All the conversations went well, although I’m not sure I had much to say beyond the basics that names are hard and arbitrary, unlike other facts you tend to learn about people you meet.
A more interesting idea is that I suspect there is a “reverse Dunning-Kruger” effect for name memory. Dunning-Kruger effects are cases where people, especially the least skilled, overestimate their own ability. For names, my sense is that most people think they are below average. I would guess they aren’t actually below average; it’s just that most of us are bad at names. In theory, it wouldn’t be very hard to test this, but I don’t think anybody has even run a real experiment.
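The predicted pattern (most people rating themselves below the group mean even though actual ability is fairly uniform) is easy to sketch in a toy simulation. All the numbers below are made up purely for illustration, not from any real data:

```python
import random

random.seed(1)

N = 1000
# Toy model: everyone's true name-memory accuracy clusters around 40%
true_accuracy = [random.gauss(0.40, 0.05) for _ in range(N)]
mean_acc = sum(true_accuracy) / N

# Hypothetical self-ratings: people judge themselves against an inflated
# folk standard ("I *should* remember names"), shifting ratings downward
self_rating = [a - 0.15 + random.gauss(0, 0.03) for a in true_accuracy]

# Fraction of people who rate themselves below the actual group mean
below_average = sum(r < mean_acc for r in self_rating) / N
print(f"{below_average:.0%} rate themselves below the actual group mean")
```

Under these made-up parameters, nearly everyone self-rates below average even though true ability barely varies, which is the signature the real experiment would look for.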
Cognition at high speed
I’m a big fan of Jerry, who posts videos of his online chess games to YouTube as ChessNetwork. One of the things he does regularly is play online speed chess — ultra-rapid, “bullet” chess where each player has ~1m for the whole game.
Chess is a different game when you have 60 seconds for all of your moves in a whole game. I find it compelling because it exposes the absence of calculation in very high-level chess play. At 1-2 seconds/move, it is almost purely pattern matching, habit, and processes we would have to call intuition. There is no time for anything but the most rudimentary calculation. And yet the level of play is pretty sharp.
Jerry is particularly entertaining because he keeps up a verbal stream of consciousness patter while playing. He notes positional principles that guide some move selection and his voice gives away his excitement audibly when he senses a tactical play coming.
Understanding how this type of cognitive process is accomplished would tell us a lot about human cognitive function. What he is doing here is not really hard for any chess player with decent playing experience (I am decent at bullet chess — nothing like Jerry, but I can play). And relevant to the old post about AI & Hofstadter, the fact that computers are unequivocally dominant at chess has nothing to do with understanding how humans play bullet chess.
https://www.youtube.com/watch?v=VSkMH9uBPmI
I’ve spoken with chess professionals about speed chess in the past and the general sense is that playing speed will not make you better at chess. But studying and playing chess slow will make you better at speed chess. Perhaps a principle of training intuition in complex tasks can be derived from that.
The Man Who Would Teach Machines to Think
Good article on Cognitive Science versus Artificial Intelligence in the Atlantic from a few weeks ago.
Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we’ve lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
This is the key point, in my opinion:
“I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”
I’ve had the chance recently to tell the story of how I came to Cognitive Neuroscience from originally studying Computer Science and this captures the main idea quite well.
Especially the last part of the quote — I really don’t understand why more people don’t think this way. Ever since Deep Blue beat the best chess players in the world, I’ve wondered why nobody organizes competitions for genuinely intelligent chess-playing programs that aren’t allowed to brute-force search billions of positions. I guess there just aren’t enough of us who think that is an interesting problem. Or maybe among those of us who do, there aren’t any who have the time to work on it, since there are so many other interesting problems in trying to study human intelligence.
Neuroscience and video game skill learning
I wrote a short piece for a gaming-oriented online magazine, GLHF (Good Luck, Have Fun!), talking about the neuroscience of skill learning and how it applies to getting better even at things like video games. The magazine is generally focused on StarCraft 2 and the professional e-sports scene around StarCraft (although I think they want to get into Dota 2 as well).
I clipped images of the piece below, but you can access it directly either via the main magazine url: http://glhfmag.com/
Or you can go directly to the relevant issue via: http://issuu.com/glhfmag/docs/glhf_magazine_6_issuu_single_page?e=5965119/4641972
Brain training by Starcraft
Can’t believe I didn’t Randomness this one already…
Real-Time Strategy Game Training: Emergence of a Cognitive Flexibility Trait
- Brian D. Glass, W. Todd Maddox, & Bradley C. Love
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0070350
The main finding: increased cognitive flexibility after 40 hours of playing Starcraft. Of note, the assessment of cognitive flexibility was done via a meta-analytic Bayes factor across a wide array of assessments. That’s very creative and maybe the right way to approach measurement of subtle transfer effects. If the transfer effect is in a process that is partly represented across a variety of measures, you’d need some way of combining the measures and partialing out the target process. Also of note, the participants were all female because they wanted non-gamers (defined as <2 hours/week) and there weren’t any male non-gamers at UT Austin.
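A minimal sketch of the combining idea, using the BIC approximation to the Bayes factor for a t statistic (Wagenmakers, 2007) and assuming the measures are independent. This is not the paper’s actual analysis, and the t values below are invented; it just illustrates how per-measure evidence can be pooled into one number:

```python
import math

def bf10_from_t(t, n):
    """BIC-approximation Bayes factor (Wagenmakers, 2007) for a t statistic
    with n observations: BF01 ~ sqrt(n) * (1 + t^2/(n-1))^(-n/2).
    Returns BF10, the evidence for an effect over the null."""
    bf01 = math.sqrt(n) * (1 + t**2 / (n - 1)) ** (-n / 2)
    return 1.0 / bf01

def combined_bf10(t_stats, n):
    """Pool evidence across measures by multiplying per-measure Bayes
    factors -- valid only under the (strong) independence assumption."""
    bf = 1.0
    for t in t_stats:
        bf *= bf10_from_t(t, n)
    return bf

# Hypothetical t statistics from three flexibility measures, n = 20
pooled = combined_bf10([1.2, 0.8, 2.1], n=20)
print(f"combined BF10 = {pooled:.2f}")
```

One nice property is that weak, inconsistent evidence across measures (BF10 near or below 1 on each) pulls the pooled Bayes factor toward the null, while consistent moderate effects compound, which is exactly the behavior you want for detecting a subtle shared transfer effect.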
Brain Training by BrainAge
Brain Training in PLoS One: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0055518
Brain Training Game Boosts Executive Functions, Working Memory and Processing Speed in the Young Adults: A Randomized Controlled Trial
Rui Nouchi, Yasuyuki Taki, Hikaru Takeuchi, Hiroshi Hashizume,Takayuki Nozawa, Toshimune Kambara, Atsushi Sekiguchi, Carlos Makoto Miyauchi, Yuka Kotozaki, Haruka Nouchi, Ryuta Kawashima
The title accurately summarizes the results of the experiment. Training was Nintendo BrainAge versus Tetris in a randomized controlled trial design. Working memory and processing speed improved much more in participants who did BrainAge than in those who did Tetris (simple reaction time improved in the Tetris group). Participants were young, mean age 20, n=16 per group. Training was 15m/day, 5 days/week, for 4 weeks (5 hours total). Somewhat surprisingly strong results for so few total hours of training in younger adults. Recruiting, compliance, and retention look very strong, though. I guess you could worry about expectancy effects, but everything else looks very solid. A big, elaborate assessment battery was used. I haven’t looked at every piece of it, but Raven’s (RAPM) curiously went up a lot in both groups.
Learning a short, timed, motor sequence
I stumbled across the “cup song” by Anna Kendrick (from the movie Pitch Perfect, also performed by her on Letterman, and originally learned from a “viral video” that traces back to a homemade YouTube video by Lulu and the Lampshades). The trick is singing a short song while tapping out a short percussion sequence using a cup on a table. For good singers, it sounds pretty good.
The sequence has 11 or 14 elements depending on how you count (is “clap twice” one thing or two?). Either way, it’s pretty short. It’s also quick to execute. The sequence is repeated a number of times during the song, which is only ~1m long total.
This video, a 5m tutorial on how to do the cup-tapping sequence, is interesting from a learning and memory perspective. It is taught by explicit description, then repeated demonstration, in very short chunks, 3-4 steps at a time.
http://www.youtube.com/watch?v=qa4BUtsATsg
I think it’s a good example of how we learn sequences, but raises some interesting questions. Why is it so hard to memorize? It’s just 14 steps. Why not use bigger chunks? Why so much repetition even by the tutor? Is remembering a timed motor sequence really that hard? And it’s pretty obvious that verbally memorizing the steps isn’t going to let you perform it fluidly. Something very important is happening in repeated practice.
Maybe the difficulty of learning a relatively short sequence this way is why it is so surprising that we can see 30-80 item sequence learning in SISL. If we depend on going through explicit memory to guide performance during initial practice, we’re going to mainly learn pretty short sequences. The implicit skill learning system apparently isn’t so constrained, though, and pulls structure out of those sequences pretty quickly (~50 reps, regardless of length) if you can get the motor system to go through the steps.
Paranormal activity
I did a short interview with an Australian radio show called Ghosts of Oz, hosted by Danni Remali, on Saturday night. The show focuses on paranormal topics and they wanted to talk about deja vu — they found me via the Scientific American AskTheBrains column. They assured me that they wanted a real scientific perspective and some information about the brain. Of course, I was a little worried about pseudoscience, etc., but decided to give it a shot anyway.
I figured I could make the case for the experience of deja vu as a benign misfiring of the brain’s familiarity signal — a sudden feeling of familiarity even during a new experience. The idea is based on neuropsychological research by Martin Conway with patients who experience this phenomenon constantly (due to FTD). That seemed to go over ok; I just needed to be careful to say it was a normal experience for healthy people and did not imply anything wrong with the brain. It’s more prevalent in younger people as well, for what it’s worth.
On the fly, I realized that there is a nice point to make about implicit learning and awareness. The fact that we have knowledge and processing happening outside of awareness could very easily create the sense that we have some additional, mysterious cognitive capacity. A flash of intuition based on implicitly learned environmental patterns is functionally indistinguishable from a precognitive event. From this, I tried to make the case that some experiences that get called “paranormal” may simply reflect the fact that the brain is more complex than we realize — we really are smarter than we know.
It felt like a nice way to make contact with the more positive aspects of the show/host’s approach to “spirituality” without endorsing any pseudoscience. Overall, I think the conversation went well.
#overlyhonestmethods
So apparently a new hashtag, #overlyhonestmethods, is burning up the twitterverse. It appears to be driven by students, technicians, and post-docs in science labs blowing off steam about the challenges of doing research. It’s funny and probably a good thing in the overall sociology of science — I think. It is a good thing if it helps people find the appropriate level of “healthy skepticism” with which to approach science reporting. Sometimes people are a bit too quick to accept statements like “Science found X to be true” without considering how the conclusion was drawn. On the other hand, it isn’t a good thing if it further strengthens the anti-science crowd (e.g., on climate change).
Fans of neuroimaging may find this blog post on that topic particularly entertaining:
http://blog.ketyov.com/2013/01/overlyhonestmethods-for-neuroimaging.html
Amusing excerpt:
Data were preprocessed 8 different times, tweaking lots of settings because Author One kept screwing up or the data kept “looking weird” with significant activation in the ventricles and stuff, which can’t be right.
and one that we hope won’t hit too close to home:
The methods section is difficult to write because the data on which this paper is based were collected 4 years ago by a roton (Author Two) and dumped on Author One because he didn’t have any other idea what to do for his PhD and this is, quite frankly, his last hope.