A friend of mine sent me this the other day. It's really powerful; Jill Bolte Taylor is the most emotive speaker I think I've ever seen in a TED talk. For a lot of it there's a strange and very uncomfortable dynamic going on, a sort of attraction/repulsion: for me, it was so attractive to hear about this immediate and powerfully felt connection to the larger universe, and yet so horrifyingly compromised by her utter helplessness. A profoundly left brain/right brain story. Which, it turns out, is being adapted by Ron Howard for a film that may star Jodie Foster. I probably shouldn't use my left brain to think about that.
Researchers from the University of Montreal, conducting a study comparing the views of men in their 20s who had never been exposed to pornography with those of regular users, stumbled at the first hurdle when they couldn't find a man who hadn't watched porn. "We started our research seeking men in their 20s who had never consumed pornography," said Professor Simon Louis Lajeunesse. "We couldn't find any." On average, the men they interviewed first started watching it when they were 10 years old. They also found that 90% of them used the internet to access porn. None of which is that surprising. What is a little surprising is that they made a point of saying that none of their sample group had a pathological sexuality (the sample group was 20 people).
Via The Telegraph
“Language for exact number is a cultural invention rather than a linguistic universal… number words do not change our underlying representations of number but instead are a cognitive technology for keeping track of the cardinality of large sets across time, space, and changes in modality…” So says this report on the language of the Amazonian Pirahã tribe, who have no number words in their vocabulary. Numbers are a cognitive technology for memory, or at least that's what they evolved out of. Years ago one of the authors of the report, Daniel Everett, tried to teach members of the tribe to count. In an article from a few years ago in Spiegel, he explains what happened:
Over a period of eight months, he tried in vain to teach them the Portuguese numbers used by the Brazilians — um, dois, tres. “In the end, not a single person could count to ten,” the researcher says.
Eventually Everett came up with a surprising explanation for the peculiarities of the Pirahã idiom. “The language is created by the culture,” says the linguist. He explains the core of Pirahã culture with a simple formula: “Live here and now.” The only thing of importance that is worth communicating to others is what is being experienced at that very moment. “All experience is anchored in the present,” says Everett, who believes this carpe-diem culture doesn’t allow for abstract thought or complicated connections to the past — limiting the language accordingly.
Living in the now also fits with the fact that the Pirahã don’t appear to have a creation myth explaining existence. When asked, they simply reply: “Everything is the same, things always are.” The mothers also don’t tell their children fairy tales — actually nobody tells any kind of stories. No one paints and there is no art.
Another excellent piece in the New York Times Magazine: “Understanding the Anxious Mind”. It’s an area that fascinates me, innate temperament. In particular I’m interested in how far genetic predisposition is “switched on” by different circumstances, and how the same predisposition can lead to vastly different expression. The piece focuses on work by Jerome Kagan, “one of the most influential developmental psychologists of the 20th century”. There’s some truly great stuff here, for example:
Four significant long-term longitudinal studies are now under way: two at Harvard that Kagan initiated, two more at the University of Maryland under the direction of Nathan Fox, a former graduate student of Kagan’s. With slight variations, they all have reached similar conclusions: that babies differ according to inborn temperament; that 15 to 20 percent of them will react strongly to novel people or situations; and that strongly reactive babies are more likely to grow up to be anxious. They have also shown that while temperament persists, the behavior associated with it doesn’t always. Kagan often talks about the three ways to identify an emotion: the physiological brain state, the way an individual describes the feeling and the behavior the feeling leads to. Not every brain state sparks the same subjective experience; one person might describe a hyperaroused brain in a negative way, as feeling anxious or tense, while another might enjoy the sensation and instead use a positive word like “alert.” Nor does every brain state spark the same behavior: some might repress the bad feelings and act normally; others might withdraw. But while the behavior and the subjective experience associated with an emotion like anxiety might be in a person’s conscious control, physiology usually is not.
At 15, about two-thirds of those who had been high-reactors in infancy behaved pretty much like everybody else. One such person was Mary, now a 21-year-old junior at Harvard, who was in the high-reactive group as a baby and was moderately fearful at ages 1 and 2. She didn’t think of herself as anxious, just dutiful. “I don’t stray from the rules too much,” she said when we spoke by telephone not long ago. “But it’s natural for me — I never felt troubled about it. I was definitely the kid who worked really hard to get good grades, who got all my homework done before I watched TV.” Mary also was an accomplished ballet dancer as a child, which gave her a way to work off energy and to find a niche in which she excelled. That talent, plus being raised in what Kagan called a “benevolent home environment,” might have helped shift Mary’s innate inhibition to something more constructive. If Mary’s high-reactive temperament is evident now, it comes out in the form of conscientiousness and self-control.
Newsweek reports on a study by Sam Harris (author of The End of Faith: Religion, Terror and the Future of Reason) on the way the brain processes facts and beliefs:
What Harris, his fellow researcher Jonas Kaplan, and the other authors of the study want to address is the idea, which has been floating around in both scientific and religious circles, that our brains are doing something special when we believe in God—that religious belief is, neurologically speaking, an entirely different process from believing in things that are empirically and verifiably true (things that Harris endearingly refers to as “tables and chairs”). He says his results “cut against the quite prevalent notion that there’s something else entirely going on in the case of religious belief.” Our believing brains make no qualitative distinctions between the kinds of things you learn in a math textbook and the kinds of things you learn in Sunday school. Though the existence of God will never be proved—or disproved—by an fMRI scan, science can study a thing or two about the neurological mechanisms of belief. What Harris’s study shows is that when a conservative Christian says he believes in the Second Coming as an undeniable fact, he isn’t lying or exaggerating or employing any other rhetorical maneuver. If a believer’s brain regards the Second Coming the way it does every other fact, then debates about the veracity of faith would seem—to the committed believer, at least—to be rather pointless.
He also noted, marking it with asterisks for significance, what he called the “blasphemy reaction”: that when atheists disagreed with a Christian belief, or when Christians affirmed one, their pleasure centers lit up—proof that the combatants in the faith-versus-reason wars really do enjoy the fight, equally.
Both the New York Times Magazine (Is Happiness Catching?) and Wired (The Buddy System: How Medical Data Revealed Secret to Health and Happiness) have reported on the work of Nicholas Christakis and James Fowler on the Framingham Heart Study, suggesting that behaviors are “contagious”. The New York Times piece includes some thoughtful discussion on the usefulness and limitations of social network analysis:
Social-network science ultimately offers a new perspective on an age-old question: to what extent are we autonomous individuals? “If someone does a good thing merely because they’re copying others, or they do something bad merely because they’re copying others, what credit do they deserve, or what blame do they deserve?” Christakis asks. “If I quit smoking because everyone around me quits smoking, what credit do I get for demonstrating self-control?” If you’re one of the people who are partly driven by his DNA to hang out on the periphery of society, well, that’s also where the smokers are, which means you are also more likely to pick up their habit.
To look at society as a social network — instead of a collection of individuals — can lead to some thorny conclusions. In a column published last fall in The British Medical Journal, Christakis wrote that a strictly utilitarian point of view would suggest we should give better medical care to well-connected individuals, because they’re the ones more likely to pass on the benefits contagiously to others. “This conclusion,” Christakis wrote, “makes me uneasy.”
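The contagion idea behind Christakis's uneasy conclusion is easy to play with. Here is a minimal sketch (not their actual model; the graph, the transmission probability `p`, and the function names are all my own invented illustration) of a behaviour spreading along edges, comparing what happens when it starts at a well-connected "hub" versus at the periphery:

```python
import random

def spread(adjacency, seed_node, p=0.3, steps=5, trials=2000, rng_seed=1):
    """Average number of nodes eventually 'infected' when a behaviour
    starts at seed_node and passes along each edge with probability p."""
    rng = random.Random(rng_seed)
    total = 0
    for _ in range(trials):
        infected = {seed_node}
        frontier = {seed_node}
        for _ in range(steps):
            newly = set()
            for node in frontier:
                for neighbour in adjacency[node]:
                    if neighbour not in infected and rng.random() < p:
                        infected.add(neighbour)
                        newly.add(neighbour)
            frontier = newly
        total += len(infected)
    return total / trials

# A hub (node 0) connected to everyone, plus a sparse pair on the periphery.
adjacency = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0],
             3: [0], 4: [0, 5], 5: [0, 4]}
print("seed at hub:", spread(adjacency, 0))
print("seed at periphery:", spread(adjacency, 5))
```

Seeding at the hub reliably reaches more people on average, which is exactly the (uncomfortable) utilitarian logic of prioritising well-connected individuals.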
Reminds me of the Oscar Wilde quote: “Most people are other people. Their thoughts are someone else’s opinions, their lives a mimicry, their passions a quotation”. Although the idea that we can pick up internal states like happiness through mirror neurons, essentially mimicking those around us, seems a little less dark.
Check out this article, “Disorderly genius: How chaos drives the brain”, from the New Scientist. It describes how the brain operates on the edge of chaos and has much in common with physical systems such as piles of sand, earthquakes and forest fires:
Though much of the time it runs in an orderly and stable way, every now and again it suddenly and unpredictably lurches into a blizzard of noise.
Neuroscientists have long suspected as much. Only recently, however, have they come up with proof that brains work this way. Now they are trying to work out why. Some believe that near-chaotic states may be crucial to memory, and could explain why some people are smarter than others.
In technical terms, systems on the edge of chaos are said to be in a state of “self-organized criticality”. These systems are right on the boundary between stable, orderly behaviour – such as a swinging pendulum – and the unpredictable world of chaos, as exemplified by turbulence.
The quintessential example of self-organized criticality is a growing sand pile. As grains build up, the pile grows in a predictable way until, suddenly and without warning, it hits a critical point and collapses. These “sand avalanches” occur spontaneously and are almost impossible to predict, so the system is said to be both critical and self-organizing. Earthquakes, avalanches and wildfires are also thought to behave like this, with periods of stability followed by catastrophic periods of instability that rearrange the system into a new, temporarily stable state.
It goes on to say that the more unstable the brain, the more intelligent: the less time spent in the stable phase-locked state, and the more time spent in the unstable phase-shift state, the greater the IQ. For some reason this story makes me think of the book “Solaris” by Stanislaw Lem. Forget about the films; if you haven’t read the book, it’s a must-read. What makes it so amazing is its depiction of a sentient ocean, a natural system that is self-aware and has absolutely no concept of what humanity is. I wonder how far the type of analysis in the New Scientist article, of brain states and consciousness, could eventually describe such things as identity and self-awareness as naturally occurring patterns. There are also interesting insights into adaptability and inspiration: “Self-organized criticality also appears to allow the brain to adapt to new situations, by quickly rearranging which neurons are synchronized to a particular frequency. ‘The closer we get to the boundary of instability, the more quickly a particular stimulus will send the brain into a new state.’” Analogies with the physical systems they mention already exist in our descriptions of inspiration, the spread of ideas, and so on.
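The sand-pile behaviour the article uses as its quintessential example is simple enough to simulate. Below is a minimal sketch of the classic Bak–Tang–Wiesenfeld sandpile (the grid size, grain count and function names are my own choices, not anything from the article): long stretches of quiet accumulation punctuated by sudden avalanches of wildly varying size.

```python
import random

def simulate_sandpile(size=10, grains=2000, seed=0):
    """Bak-Tang-Wiesenfeld abelian sandpile on a size x size grid.

    Drop grains one at a time at random cells; any cell holding 4 or
    more grains 'topples', sending one grain to each neighbour (grains
    falling off the edge are lost). Returns the avalanche size (number
    of topplings) triggered by each dropped grain.
    """
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(grains):
        r, c = rng.randrange(size), rng.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topples += 1
            if grid[i][j] >= 4:          # can still be unstable after one topple
                unstable.append((i, j))
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = simulate_sandpile()
print("largest avalanche:", max(sizes))
print("fraction of drops causing no topple:", sizes.count(0) / len(sizes))
```

Most drops do nothing at all, while a few trigger system-wide cascades — the “periods of stability followed by catastrophic periods of instability” the article describes.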
I’m looking at Paul Ekman’s FACS system at the moment, the pictures that he uses to illustrate the system are pretty amusing. This, for example, is from an article in The Guardian asking “Can You Decode These Emotions?”
It’s a bit like Japanese rockabilly or something. It all looks right but there’s something missing. The girl’s face is composed in the right way, the muscle movements are right, and yet her eyes are completely devoid of the emotion. It strikes me that the test is somehow a little compromised as a result. A few months ago I did an interview for a news channel and the journalist interviewed an academic about Ekman’s work for the piece. The academic claimed that Ekman has been largely discredited and that it is generally accepted that the purpose of facial expressions is as communication. Therefore they don’t reflect the internal state of the subject. Which, just about anyone can tell you, is complete nonsense.
If something has evolved for a specific purpose, it doesn’t mean that it works on a conscious level. Walking, for example, is a learned trait, unlike facial expressions, and yet we don’t think about putting one foot in front of the other. We only think about expression when we want to use our body for communication on a conscious level. And a lot of the time we aren’t very good at faking internal states. If someone is playing a role in a social situation, it’s often expected of them, but much of the time we aren’t fooled by the performance. Which I think is one of the reasons why great actors are fascinating. A good performance is really a sort of willed self-delusion aided by research, observation and hard work.
This is brilliant: a Times Online article about Richard Bentall, whose book “Doctoring the Mind” is out in September. He rages against the biomedical model and the failure of psychiatry to work with patients in psychotherapy, relying instead on drugs. A familiar story, but there’s some astonishing stuff here. Not least his conviction that it would be better to be treated in Nigeria than in London, because “In Nigeria, people with severe mental illness tend to be looked after in an extended family system or by supportive religious leaders, who tell them not to worry about hearing voices.” Hence the recovery rate is better. Or that while the studies published by drug companies about SSRI drugs seem to show that they work, the companies suppress data which, when added to the mix under the Freedom of Information Act, reveals that the drugs work only slightly better than placebos. Anyway, if you know anyone who’s on this shit, and if you live in New York then chances are you do, point them to this article.
A few weeks ago Seed magazine ran a short interview with Alison Gopnik, a developmental psychologist who has recently published a book called “The Philosophical Baby”. “Why do children exist at all? It doesn’t make tremendous evolutionary sense to have these creatures that can’t even keep themselves alive and require an enormous investment of time on the part of adults. That period of dependence is longer for us than it is for any other species, and historically that period has become longer and longer,” she asks. She goes on to make the case that during this period of uselessness, the child is free to explore the physical world as well as possible worlds through imaginative play, and that new ways of looking at the world spring from childhood:
Both Piaget and Freud thought that the reason children produced so much fantastic, unreal play was that they couldn’t tell the difference between imagination and reality. But a lot of the more recent work in children’s theory of mind has shown quite the contrary. Children have a very good idea of how to distinguish between fantasies and realities. It’s just they are equally interested in exploring both. The picture we used to have of children was that they spent all of this time doing pretend play because they had these very limited minds, but in fact what we’ve now discovered is that children have more powerful learning abilities than we do as adults. A lot of their characteristic traits, like their pretend play, are signs of how powerful their imaginative abilities are.
Imagination isn’t just something we develop for our amusement; it seems to be something innate and connected to how we understand the causal structure of the real world. In fact, the new computational model of development we’ve created — using what computer scientists call Bayesian networks — shows systematically how understanding causation lets you imagine new possibilities. If children are computing in this way, then we’d expect imagination and learning to go hand in hand.
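The link Gopnik draws between understanding causation and imagining possibilities is exactly what a Bayesian network buys you: once you have the causal structure, you can compute what *would* happen under an intervention you've never observed. Here's a minimal sketch on a toy three-variable model (the variables, probabilities and function names are all my own invented illustration, not Gopnik's actual model), contrasting merely observing a cause with imagining setting it:

```python
# Tiny causal model with a hidden common cause U:
#   U -> A, U -> B, and A -> B.
P_U = {1: 0.5, 0: 0.5}
P_A_given_U = {1: 0.9, 0: 0.1}             # P(A=1 | U=u)
P_B_given_AU = {(1, 1): 0.9, (1, 0): 0.6,
                (0, 1): 0.5, (0, 0): 0.1}  # P(B=1 | A=a, U=u)

def p_b_observed(a):
    """P(B=1 | A=a): condition on seeing A=a, so U stays correlated with A."""
    num = den = 0.0
    for u in (0, 1):
        pa = P_A_given_U[u] if a == 1 else 1 - P_A_given_U[u]
        joint = P_U[u] * pa
        num += joint * P_B_given_AU[(a, u)]
        den += joint
    return num / den

def p_b_do(a):
    """P(B=1 | do(A=a)): set A by intervention, cutting the U -> A arrow."""
    return sum(P_U[u] * P_B_given_AU[(a, u)] for u in (0, 1))

print("observed:", round(p_b_observed(1), 3))
print("imagined intervention:", round(p_b_do(1), 3))
```

The two numbers differ because the hidden common cause inflates the observed correlation; distinguishing them is the kind of counterfactual reasoning the causal structure makes possible.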
Ignoring the fact that they’ve developed a computer model of development (?!): in February I posted about an article called “Delusional Realities” in which Shaun Gallagher argued that “in the spirit of embodied, situated and phenomenological views of cognition,” the delusional subject “does not live in the one unified world of meaning that is defined objectively (in a view from nowhere), but in multiple realities, sub-universes or finite provinces of meaning.” The interviewer asked him: “This is a philosophical approach to answering a psychiatric question, but do you also think it has implications for other areas of philosophy? For instance, does showing that people can experience alternate realities suggest that to some extent aspects of objective reality are just shared beliefs about the world, and does this suggest that our ideas about the world are more constructed than we normally think of them as being?” Shaun Gallagher replied: “I think one of the big questions underneath all of this is, you know, how do we actually think about the mind – what is the mind?…And everybody agrees, it’s not a Cartesian substance, but then there’s a huge amount of talk among a lot of philosophers about beliefs and desires and mental states, and then there’s a huge amount of talk about brain states.” Which brings us back to Alison Gopnik and her study of babies:
Increasingly, modern philosophers say that we can learn about the big questions by looking at science. But science, especially developmental psychology, can also tell us about philosophy; it can tell us about what we start with, what we learn, and what the basic facets of human nature are. The kind of picture you often get from scientifically oriented philosophy is often very much in the vein of evolutionary psychology, with everything innate and genetically determined. But one of the more important things that has come out of developmental work is that there’s also a powerful capacity for change. And we’re starting to understand how that change takes place at a very detailed neurological and computational level.