Friday, December 30, 2011

"That's Not Fair": Sushi, Notions of Justice, and Student Grades

Since human beings first developed the ability to speak, one of the most frequently uttered phrases has to be “Not fair!” Fairness is such a basic concept, even small children (and people who just behave like small children) discuss it at length. Like many topics I discuss on this blog, notions of what is fair and unfair are largely shaped by our experiences. In fact, even the idea of “fair” is a social one. There’s really no way to divorce fairness from the social context - it is a core component in all social interactions. Even so, people have different ideas of what is (and is not) fair. I was thinking about this the other day, as my husband and I were sharing a sushi roll.

This very special roll - dubbed the 2012 roll (Happy New Year, by the way!) - contained yellow tail, asparagus, “special” seaweed paper, and various sauces, and each of the 8 pieces was topped with one of 4 kinds of fish roe: masago, red tobiko, black tobiko (my favorite), and green tobiko. I figured: 4 kinds of roe, 8 pieces - enough for us to each have 4, one with each kind of roe. So when my husband just starts picking up pieces without regard to the fact that he just grabbed a second red one, my first thought was, “Hey, not fair!”

And right after I think this - and say, “Dude, you already had a red one.” - I stop and consider, “Why is this idea of equality so important to me? It’s just sushi.”

While different people can evaluate justice differently, depending on their perspective, research shows there is some consensus in how those evaluations are made. There are two (possibly three) overarching perspectives that people might use at different times, each tied to a different theoretical framework: distributive justice, procedural justice, and (possibly) interactional justice.

J. Stacy Adams’s equity theory, rooted in social exchange theory and the inspiration for some of the earliest research on distributive justice, states that people evaluate the fairness of an outcome by creating a ratio of outcomes to inputs (basically, how much I got in return relative to how much work I had to do), then comparing that ratio to the ratio of another’s outcomes to inputs. If the evaluator’s ratio is equal to the comparison ratio, then the outcome is fair. Adams noted, however, that this process is still subjective, and that many biases can influence the values in one’s own ratio and in the comparison other’s ratio, especially because we may have really skewed ideas of how much work another person puts in.
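To make that ratio comparison concrete, here's a minimal sketch in Python. The effort and outcome numbers (and the function itself) are made up purely for illustration, not taken from any of the studies mentioned here:

```python
def equity_ratio(outcome, effort):
    """Outcome-to-input ratio at the heart of the equity comparison."""
    return outcome / effort

# Hypothetical numbers, purely for illustration:
# I placed the order (2 "units" of effort) and got 4 pieces;
# my husband drove us there (2 units) and also got 4 pieces.
mine = equity_ratio(outcome=4, effort=2)
his = equity_ratio(outcome=4, effort=2)

if mine == his:
    print("Feels fair: our ratios match.")
elif mine < his:
    print("Feels unfair to me: I'm under-rewarded relative to my input.")
else:
    print("Feels unfair to him: I'm over-rewarded relative to my input.")
```

The point of the sketch is just that fairness, on this view, is a comparison of ratios, not of raw amounts - which is also why skewed estimates of the other person's effort can skew the whole judgment.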

Find a funnier cartoon?!  I don't have time for anything but Google Image search.
 In general, distributive justice deals with what types of distributions of outcomes will be perceived to be fair. There are three different rules that can be applied when distributing outcomes among parties. Fairness of the outcome is determined by the rule applied. The first is equality, in which each party receives an equal share. The second is equity, where an individual party’s share is based on the amount of input from that party; this is sometimes referred to as the merit rule, and is based on social exchange theory. Finally, the last rule is need, where share is based on whether the party has a deficit or has been slighted in some other distribution.
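To see how the three rules produce different splits, here is a small hypothetical sketch. The contribution and need numbers, and the function itself, are invented purely to illustrate the rules:

```python
def allocate(total, contributions, needs, rule):
    """Split `total` among parties under one of the three distributive-justice rules."""
    parties = list(contributions)
    if rule == "equality":   # everyone gets the same share
        return {p: total / len(parties) for p in parties}
    if rule == "equity":     # share proportional to input (the merit rule)
        total_input = sum(contributions.values())
        return {p: total * contributions[p] / total_input for p in parties}
    if rule == "need":       # share proportional to each party's deficit
        total_need = sum(needs.values())
        return {p: total * needs[p] / total_need for p in parties}
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical example: dividing the 8-piece roll.
contributions = {"me": 1, "husband": 3}   # say, who paid for dinner
needs = {"me": 3, "husband": 1}           # say, who skipped lunch
for rule in ("equality", "equity", "need"):
    print(rule, allocate(8, contributions, needs, rule))
```

Same 8 pieces, three very different "fair" splits, depending entirely on which rule you think applies.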

The preferred rule is influenced by many things, including the goals of the distribution. Equality is often chosen when group harmony is the goal. Equity rules are preferred when the goal is to maximize contributions. Need rules are used when group welfare is a concern, or when resources are limited. The context of the distribution can also have an influence. If outcomes are distributed publicly, the equality rule is usually preferred, while outcomes distributed privately often lead to use of the equity rule. The degree to which the perceiver contributes (or believes he contributes) can also influence preference; high contributors prefer equity and low contributors prefer equality.

Distributive justice was the major justice construct until the mid-1970s, when procedural justice was introduced. Fans of procedural justice argue that people prefer fair procedures because they believe those procedures will lead to fair outcomes, and that as long as people believe the process of allocating resources was fair, they’ll be fine with the outcome. Any teachers out there can probably tell you this is not always the case.

"I showed up.  That should be worth at least a B, right?"
In fact, not all researchers agree that procedural justice is more important in evaluating justice. Hegtvedt (2006 - an excellent book chapter on justice frameworks, which you can read here) argues that procedural justice may only appear to be more highly valued because 1) procedures are easily interpreted and 2) individuals may lack the information necessary to compare outcomes across group members. When outcome information is available, people will focus more on that information.

Some people have identified another type of justice called interactional justice. Interactional justice is made up of two parts: interpersonal justice is the degree of respect and dignity demonstrated in the procedure, and informational justice is the sharing of information about the process and the distribution of outcomes. Bies and Shapiro (1987 - abstract here), for example, refer to the journal peer review process as an example of how a lack of interactional justice can influence whether the outcome is perceived to be fair. Such processes are often lengthy and the responses from reviewers, condescending (see previous post). Some people call the process, and the outcome, unfair, while others do not. The authors argue that the difference in responses could be due to the explanations offered by editors; if editors provide a good reason for the delay and for the reviewers’ responses, authors may believe they have been treated fairly. Of course, the reason some argue there are only two justice perspectives is that they believe interactional justice is simply a subdivision of procedural justice.

Anyway, my husband and I ended up evenly dividing the sushi, though he told me later he would have happily traded two black tobiko pieces for two red. Bartering, hmm? Another post, another day.

Thoughtfully yours,
~Sara

Thursday, November 24, 2011

On Thanksgiving and Gratitude

Every year on Thanksgiving, we are called to give thanks for the good things we have... and of course, gorge on lots of food. I'll resist the urge to write about overeating and focus instead on what the holiday is really about: gratitude.

Though the field of psychology has historically focused on maladaptive behavior, the expansion of its subfields beyond clinical psychology has led to the study of a wider variety of behaviors, both good and bad - a focus not only on the things that make us mentally ill, but also on the things that make us healthy, happy, and fulfilled. The study of gratitude is one area studied by so-called positive psychologists.

There are certainly individual differences in ability to feel gratitude; some people are simply more grateful than others. (You can find out more about your "trait gratitude" by taking this measure). But social situations, like Thanksgiving, can also influence your minute-to-minute levels of gratitude (or "state gratitude").

Gratitude, unsurprisingly, is strongly associated with psychological well-being, happiness, and life satisfaction (find two full articles about this here and here). Feeling gratitude reduces stress and promotes positive coping; these benefits are observed even when people are randomly assigned to an intervention meant to increase their gratitude (read one such experiment here), meaning that they can be reaped by anyone, not just people who are "naturally grateful".

Being the target of gratitude is also beneficial. Being told "thank you" makes a person more likely to repeat the behavior in the future, probably because it functions as a reward (and as I've said before, if a behavior is rewarded, it's more likely to occur again). So even if you feel like someone is "just doing their job", saying "thank you" can make him or her feel more motivated to repeat that behavior in the future and will likely improve your future interactions with that person, as well.

So keep feeling that gratitude, today and every day - it's good for you. Happy Thanksgiving everyone!

Thoughtfully yours,
Sara

Wednesday, November 16, 2011

Your Brain on Smells: Memory, Emotion, and Scent

In my approximately 30 years on Earth, I have developed many allergies. Some I've had since the beginning (e.g., lactose intolerance), others I discovered much later (e.g., aspartame, the chemical name of Nutrasweet). While I would love to explore what the heck is up with all these crazy allergies, I'm instead writing about what happened as a result of my latest allergy discovery. I recently learned that I'm allergic to an ingredient in a product I use pretty regularly (for the sake of brevity, I won't go into detail); this ingredient is so commonly used in this product that to get a product free of this stuff, I had to go to Whole Foods.

First of all, never go grocery shopping hungry. I've been told this before, but had to break my rule this time because of scheduling constraints. Second - and this rule is even more important than the first - never go to Whole Foods hungry - ever! Going to my regular grocery store hungry is bad enough; everything looks so appealing and tasty. Whole Foods is something else. Not only is the store very visually appealing, it smells how I think Heaven will smell. When you walk through produce, you smell the vegetables. The fish smells like fish (the good, fresh kind - the way fish is supposed to smell). The cheese section... need I go on?

Not only did I want to eat everything in sight, I savored the smells so much that I think I fell in love. Yes, I might have fallen in love with Whole Foods.

This, of course, got me thinking about psychology. But then, everything makes me think of psychology, so perhaps we should be more concerned if I walked out of Whole Foods thinking nothing more than, "I'm in love."

Our brains are fascinating. I really mean it. Our brains are just about the coolest invention ever. Not only are they highly efficient processing machines (that definitely make important, but predictable, errors), but many of their systems are also interconnected in really amazing ways. The connection among smells, memory, and emotions is one example.

To really briefly summarize, the lowest parts of our brains are the parts that developed (evolutionarily) first. They handle the basic functions: breathing, sleeping/waking, etc. These very basic functions are handled by parts of the brain directly connected to the brain stem. As you get farther up in the brain and away from the brain stem, you get to the higher-functioning systems that developed last. Our olfactory bulb, which is involved in the perception of smells, sits on the underside of the brain, close to the nose. So it's one of the earlier systems to develop, but slightly higher up the chain than breathing.

The olfactory bulb is the yellow structure above the nasal cavity.
Because of the location of the olfactory bulb, it is closely tied into the limbic system, a region in the middle of your brain that contains (among other structures) the hippocampus (involved in storage of memories) and the amygdala (involved in emotion) - the reward pathway I discussed in my very first blog post resides in this region.

It should come as no surprise then that emotions, memory, and smells are closely related, and that stimulation of one of these systems (such as the one for memory) can activate another system (such as emotion). Certainly, memories elicit emotions (you remember an event that made you happy, and you feel happy again), and emotions can elicit memories.

But what about smells? Ever smell something and suddenly find yourself thinking of an event from childhood? Pumpkin pie, turkey, certain candies - these all remind me of holidays at home and feeling happy. Certain flowers, particularly those in my bridal bouquet, remind me of my wedding day.

Which is probably why I felt this strong feeling of love. As I was entering Whole Foods, I smelled the exact flowers from my bouquet. And of course, being a foodie, the other fantastic food smells certainly gave me something to savor. In the words of Jim Gaffigan, "I like food... a lot." All of these wonderful emotions, memories, and smells combined to make me think I love Whole Foods.

Wait, you mean I'm not actually in love with Whole Foods? What am I going to do with all these love poems?!

Thoughtfully yours,
Sara

Friday, November 4, 2011

On Publishing, Perishing, and Post-Docing: A Reaction to Diederik Stapel's Confession

One reason I started this blog was as an outlet for my writing. I've always loved writing, and often considered it as a career (in those fleeting moments when I was really proud of something I had written and thought, "Yeah, I can do this forever"). I was constantly penning short stories, creating characters and writing notes to my friends in which they played a prominent role (or sometimes were the authors of the notes themselves). I've written many plays: one-acts, two-acts, and I even have the outline of a three-act modern tragedy that I still think of going back to - my Citizen Kane or Death of a Salesman (yes, I know I'm being overly dramatic: as a former theatre person, I have a flair for drama, and as a psychology person, I'm painfully self-aware of that and all my other traits).

Of course, I changed my major in the middle of my first semester at college, from theatre to psychology, not realizing that, if I thought getting published as a fiction writer was tough, it was nothing compared to getting published as a psychology researcher. Publish or perish is the expression in my field, and it is accurate. Getting the best jobs, getting research funding, it all depends on having a strong publication record. And with more people earning higher degrees now, there's even more competition. This is one reason the number of PhDs going into post-doc positions has also increased recently; grad school alone is no longer enough to prepare most people for the most attractive research and academic positions.

My number one goal in my post-doc is to publish as much as I possibly can. I even submitted a paper today. But I can't rest on my laurels, because I've got 5 other papers in various stages of preparation. Though my most recent reviews may still sting (and I'm not alone - there's actually a group on Facebook devoted to Reviewer 2, often the most matter-of-fact and even rude of the bunch), I can't let them traumatize me for too long, because there are more studies to perform, more data to analyze, more papers to write.

That's why, when I read an article in the New York Times about a prominent psychology researcher who admitted that he massaged data, made up findings, and even wrote up studies that were never actually performed, and published it all in prominent journals, I was a bit annoyed. Am I bitter that while I was dealing with snide reviewers insulting my intelligence, research methods knowledge, and mother, this guy was fabricating data, falsifying methodology, and just plain making whole studies up (and getting rewarded for it, albeit unwittingly on the rewarders' part)? In a word: yes. But no matter how tough the publishing world was, doing what this guy did was never even an option. It's not that I thought this sort of thing doesn't happen; we all know it does, just as we know there are students who hire people to take the SATs or write their theses for them.

I know I'm not the only one who can say that this wouldn't be one of my answers to the difficulty of publishing in this field, and it's not because of a lack of creativity. Whenever we write research proposals, we already have to write the introduction/background and methodology sections; we sometimes have to write an expected results section. Make that "expected" part disappear, add some statistics, illustrative quotes, whatever, then finish with a discussion/conclusion and voila! Made-up study. And if you're in a field or at an institution where it's normal for someone to conduct and write up a study entirely alone, who will ever find out?

Well, apparently someone did, because this guy was caught and confessed, and the whole thing was written up in the New York Times. You can perhaps understand his motivation, and there are surely countless other researchers who have done the same thing and never got caught. And if you're a bit sly about it, your chances of getting caught will likely go down further. So what makes the people who would never do such a thing different?

Anyone who has taken an introductory philosophy class - or who has seen the movie Election - can tell you the difference between morals and ethics. For those who fall in neither of those groups: Morals are notions about what is right and what is wrong. Ethics often refers to the moral code of a particular group, and it sometimes is used to describe what is considered right and wrong at someone's job or within a certain field. That is, if we say a study was conducted ethically, we mean generally that it was performed in a way to minimize unnecessary harm, but more specifically, we mean that an overseeing body examined it and decided it abided by the rules established by some even higher-up overseeing body. Psychological ethics clearly say that falsifying data is wrong; it's unambiguously stated. Stapel can't plead ignorance here.

Sorry, my moral compass appears to be broken today.  I'll have to get back to you tomorrow.
But not everyone avoids doing something because it's wrong. People are at different stages in their moral development; for some, the possibility of getting caught is their only deterrent. One of the most well-known theorists on moral reasoning is Kohlberg, who (while at the University of Chicago) began developing a taxonomy of six developmental stages. The first two stages apply mostly to children: in the first stage, people are motivated by seeking pleasure and avoiding punishment, and judge an action's morality by its direct consequences for themselves. Similarly, stage 2 individuals are driven by self-interest and by actions that further their own goals, needs, etc.; these people behave morally toward others when it also benefits them.

As we move into adolescence and adulthood, we also move into stages 3 and 4. In stage 3, people begin fulfilling societal roles, and behave in ways that conform to others' expectations; it seems the motivating principle here is "they", as in "what would they think?" In stage 4, morality is based on legality. Finally, some lucky few move to stages 5 and 6, which Kohlberg considered the highest levels of morality. These individuals are no longer motivated by pleasing others, what is legal/illegal, or even self-interest; instead, they develop universal principles of what is right and wrong, and seek to uphold those principles, even if it means breaking the law or sacrificing their own needs.

But perhaps what it really comes down to is why one became a scientist at all. I like to think I went into this field because I was good at it, but then there are other things I'm good at (perhaps things I'm even better at than this), some that I could have potentially built a career around. I find the field to be challenging, but once again, there are other interesting and challenging fields I could have pursued. As cheesy as it sounds, I really want to make the world a better place and I see my field as one approach to reaching that goal. I'm sure Diederik Stapel had similar reasons for going into this field. Somewhere along the way, that motivation got lost, or at least overpowered by the drive to publish (or perish).

How can we keep people from getting to this point? How can we reward scientific integrity, even if it means fewer publications and a less attractive CV? And most importantly, how can we verify a researcher's findings are valid?

Thoughtfully yours,
Sara

Saturday, October 22, 2011

Still Too Pretty to Write this Blog Post: Gender and the STEM Fields Revisited

Just a quick post to revisit a topic I've covered before. One of my past blog posts was about two articles, one covering a controversial t-shirt marketed to young girls and the other discussing a study of men's and women's spatial ability in two cultures.

I just read an article about women in science that also provides some support for the idea that women are just as capable as men if given the right environment in which to thrive. It's interesting, though, that the professor in charge of the lab discussed in the article worries that cultivating an all-women lab (even unintentionally) may be as negative as the "old boys" labs of the past.

Some research has found that single-gender classrooms are actually beneficial for both male and female students. Of course, at what point should integration happen (because it will have to happen eventually, unless you plan on the workplace also being divided)? And are there any long-term negative consequences associated with single-gender classes? Does it make it difficult when the student finally encounters a member of the opposite gender in an academic setting? Or do these students, because of the lack of gender variability in their classrooms, never pick up the notion that gender is supposedly related to academic skills?

It seems, though, that Professor Harbron has every reason to be concerned. After all, even though you could argue, "Male students just don't seem to be interested in joining her lab, so why should we force them?", that argument has been used for a while to rationalize doing nothing to deal with many female students' lack of interest in the STEM fields.

I'm all for encouraging people to "follow your dreams", but at some point, we have to recognize the powerful outside forces that can influence those dreams. As someone who discovered a love of math later in life, I wish I had had someone to help me through my struggles and push me to keep trying. In fact, it seems to me that the best way to encourage students to follow their dreams is to get them to try everything and hold off on deciding what they want to do as their career until it is absolutely necessary for them to decide.

Yes, I know that sounds kind of counter-intuitive, but hear me out. In many other countries, students are tested early on to discover what they're good at. At some point, educators determine what Joe Student is good at, and begin training Joe in that discipline. Sure, our system is not as structured as that. Even if Joe is good at a certain thing, Joe can choose to go into another discipline all on his own. Still, once Joe has decided what he likes, we direct him toward activities and classes that will get him to his goal. And, if we think Joe is making a bad choice, we may try to direct him toward something else. But, if instead, we give Joe a taste of all his options without influencing him toward one field over another, and keep doing that until it's time for him to decide, who knows?

You may be saying, "We already do that." But do we? If Jane Student expresses an interest in math, do we encourage her with the same vigor as we do Joe? Do we place equal value on all the different options students have, or do we make casual statements that direct students to one option over another (by suggesting one option is better than others)? If we can get rid of preconceived notions about who is suited for a certain field (and who is not), we can create an environment where students thrive. Perhaps Professor Harbron is right that her lab is no more ideal a set-up than the all-male labs from before. But by examining her lab, and other educational environments, maybe we can discover the best approach.

Thoughtfully yours,
Sara

Sunday, October 9, 2011

The Need to Personalize: Why Consumer Data is More Important Now than Ever Before

As a long-time researcher, my answer to many questions is, "What do the data say?" I consider myself to be a very empirical person, so having data to support views or my approach to life is very important to me. Even in parts of my life where no data are available, I continue asking questions. And, like most people, I constantly observe other people and draw inferences from their behavior. So when I read about some of the cutting edge work the Obama campaign is doing with supporter data, I wondered, "Why aren't more politicians doing this?" and more importantly, "Why aren't more people doing this?"

I'll be the first to say that too much personalization is not necessarily a good thing. For one, I'm really not a fan of design-your-own-major programs and would be happy to go into the "whys" of that sometime. But when it comes to marketing or informing people about causes they can join, personalization is an excellent idea. In fact, it's the logical continuation of what market researchers have been doing for years.

When a company creates a new product, designs a new ad campaign, etc., they want to get consumer perspectives. They do this through things like focus groups, where a group of similar people are brought in to try out a new product or view a new ad and discuss their thoughts as a group (I also frequently use focus groups in my research - you can get a lot of really useful information and they're fun to do!), and survey research, where people may (as an example) hear a new product slogan and rate it on a scale.

Market researchers also often collect demographic information from their participants, things like age, gender, race & ethnicity, and education, to see how these factors relate to responses to the product. This gives some basic information on who is likely to buy your product, and what approaches those groups respond to. A company that wants to appeal to many demographic groups may develop more than one ad campaign and put certain ads in places likely to be seen by certain groups. If you want to see some basic personalized marketing in action, grab a couple of magazines, say a fashion magazine and a sports magazine. Take a look at the ads in each - the ads in the fashion magazine may be for different products than the ads in the sports magazine. Not only that, you may notice that even ads for a product found in both magazines differ in color scheme, layout, and writeup. You'll probably even notice different people featured in the ads.

The same is true for advertising during certain shows. Market researchers know what kinds of things their target demographic likes to watch on television and will buy ad space during that time.

Of course, I call this "basic" because it's not really personalized to one specific person; it's aimed at a specific group who have some feature or features in common. But advances in technology have made it even easier to gather information about a specific person, and in some cases, deliver that personalized advertising directly to that one individual. Google has been doing this for years. Facebook is also doing more and more of this targeted marketing. Using data from people's search terms or status updates, specific ads are selected from a database and displayed to that person.

Why is this personalization such a good idea? People respond to (that is, process) personally relevant things more quickly. Research shows, for instance, that people show stronger preferences for the letters in their own names, probably because these letters are more familiar and therefore more fluent (easier to process - discussed in a previous blog post). When we're feeling especially miserly with our cognitive resources and are operating on auto-pilot, such highly personalized information can still "get in" and be noticed and processed by the brain.

Personally relevant information also appeals to our sense of self, our identity (also discussed in a previous blog post). We view the world through the lens of our personal experiences and details; some people may be better at considering another person's viewpoint than others, but we can never be separate from the self, so our view of the world is always going to be self-centered (and I use that term in a descriptive, rather than judgmental, sense).

Even in theories developed in fields like psychology, we recognize that the perspective of the theorist carries a lot of weight (this is why following the scientific method - developing testable and falsifiable hypotheses and gathering empirical data - is so important; it's also one reason why theories are built from many studies on a subject, hopefully performed by more than one individual, and no single study is the end-all, be-all on the topic).

I remember reading a quote in college from a great psychologist about Freud; after much searching, I could not uncover the quote or its source, but the person (I want to say Gordon Allport, who met Freud briefly when he was 22 and became disillusioned with psychoanalysis as a result) essentially asked of Freud's theory, "What kind of little boy would want to be so possessive of his mother?" - that is, he suggested that Freud's theory, specifically its emphasis on the Oedipus complex, said more about Freud himself than about normal human development.

These days, individuals are doing things with computers that were once only reserved for the nerdiest of super-nerds sitting in front of a computer the size of a small house.

And if we leave it running all night, we could have our checkbook balanced by tomorrow morning! Who has the punch cards?
The people who are embracing today's technology to personalize content are the ones who will be sticking around as the market becomes more and more saturated. That's why I would argue that this kind of personalization is so important - there are almost 6.8 billion people on this planet, nearly all of whom will make their livelihood off of selling something (whether that something is concrete, like a product, or abstract, like a skill). As the population continues to grow, getting even a minuscule percentage of that total to see what you have to offer and show an interest in buying it, is going to take some serious data-crunching ability.

If you sell a product, any kind of product, you should be collecting data on your customers and using their information to make important decisions. And if you've got any kind of computational and statistical know-how, work those skills, because you are going to be sorely needed as the market continues moving in this direction.

True, some people are just born to sell things - they can convince you that you need this product, that your very life depends on having this thing. We can't all be Steve Jobs, walking into a room and presenting the product with such flair and resonance that we all suddenly wonder how our life was ever complete before iPod. (Years from now, when people look at the products Jobs was responsible for and think, "Oh, a personal music player, how quaint", they're still going to be watching and dissecting his presentations to find that magic formula.  If only it were that easy.)

And perhaps part of Jobs's genius was that he could do some of this data-crunching in his head, examining how people are responding to his presentation in real-time and making subtle shifts to bring the message home and get people on board. Few people possess that ability.  But for the rest of us, we can perhaps get by with a little help from our data.

Tuesday, October 4, 2011

Whatever They Offer You, Don't Feed the Trolls

You may have noticed that I talk a lot about the media on this blog. The media is one of my main research interests - one that I've retained despite respecializing as part of my post-doc. I find it fascinating. Media information is everywhere and, as people become more and more connected through the Internet and mobile devices, its influence is only likely to grow. Though media research has been conducted since the early 20th century, one area that has really taken off is research on Internet media. This is not only because the Internet is becoming people's main source of information, but also because the Internet is inherently different from other forms of media and is constantly in flux.

Even with reality television taking off like it has, it's not easy to get onto television. Movies, music, and similar forms of media are not as easy to get into either. You need things like talent, good looks, and connections (unless you're Shia LaBeouf - in that case, your celebrity is inexplicable). The Internet, however, is one big free-for-all. Anyone can get online and create a webpage, post a music video on YouTube, start a blog ;). Thanks to the Web 2.0 movement, the tools are available to allow anyone, regardless of technical know-how, to get his or her message out there. Because of the ease with which individuals can add content to the ever-growing World Wide Web, the Internet is constantly changing. New information is becoming available, and new tools are being created to allow individuals even more control over content.

Obviously, there are many aspects of the Internet that are worthy of further exploration, but today, I'd like to write about an Internet phenomenon that has been around probably as long as the Internet itself, and is only becoming worse thanks to Web 2.0: trolling.

They may look cute and cuddly, but it's best to ignore them.
Last month, BBC news featured a story about trolling and a few cases in which people were arrested and jailed for trolling. In these cases, the trolling was really over-the-top bad: for example, a young man posting really thoughtless remarks on a memorial website for a young woman who was killed. Still, websites are cracking down on trolling in a variety of ways, such as by requiring comments to be approved before appearing on the site. Some argue that simply requiring people to register should be sufficient, because people are no longer anonymous.

The argument is that people troll because they are "deindividuated" in a place like the Internet: they can shed their usual identity and adopt a new persona, which research suggests can lead to bullying and outbursts of violence. This is the phenomenon behind "mob mentality", where people in a large crowd can begin engaging in antisocial behavior, such as vandalism, physical assault, etc. So take away the opportunity to hide one's identity, and problem solved, right?

Yes, I tend to ask questions right before I'm about to argue the opposite. I'm totally predictable. :)

Let's be honest, other than my friends who read this blog (and perhaps my entire readership is made up of my friends, but hey, that makes you guys great friends :D), do you honestly know me? I'm not anonymous here; you can find out my name, my general location, my education and background. I drop autobiographical statements here and there. Still, your entire experience with me is online. I could be totally different here than I am in real life.

So what's to stop me from adopting a new personality entirely for my online interactions, one that differs markedly from the "real" me? And what's to stop me from adopting the persona of a thoughtless jerk who trolls message boards trying to get a rise out of people? (This is just hypothetical, BTW; I have never, say, implied that a singer sucked in the comments of their YouTube video… or anything like that.) Honestly, even on a site like Facebook, where it's super-easy to figure out who I am (and getting easier each day), I'm more apt to call someone out in a way I would never do in person.

I suppose if someone says something truly awful, requiring registration would make it easy to track them down for disciplinary (and even legal) action. But just like other forms of bullying and harassment, there is always a gray area where the behavior, though repugnant, is not punishable. Even online behavior that has led to a person's death went unpunished for fear that it would lead to a precedent that could make creating false online identities illegal. And as this case showed us, even requiring a person to register doesn't guarantee they are who they say they are. And requiring comments to be approved before appearing leaves too much room for bias; can't the moderator simply choose to accept the comments with which he/she agrees and reject the rest?

Perhaps the issue, then, is not deindividuation, but distance from the target.  Stanley Milgram, who conducted one of the most unethical (aka: most awesome) social psychology experiments ever, found that people were more likely to follow the experimenter's instructions to shock another participant when they were farther away from the person getting shocked.  On the other hand, if people were in the same room as the participant being shocked, they were much less likely to follow the experimenter's orders.

If the issue really is distance from the target, then we'll always have this issue in Internet-mediated communication.  In fact, as people spend more and more time communicating with people via the Internet, the problem is only likely to worsen.  Can we ever get away from the trolls? Other than "not feeding them" - and seriously, DON'T FEED THE TROLLS - what can we do to prevent trolling?

Thoughtfully yours,
Sara

Tuesday, September 27, 2011

When Thinking Feels Hard: New Layouts, New Features, and New Thought Patterns

It feels like the Internet landscape is changing.  Recently, Facebook unveiled its new look (as well as some additional features that have led more than a few users to express concerns about privacy).  As with any new Facebook roll-out, people are complaining.  The news feed has been replaced with Top Stories and Not So Top Stories (okay, that's not the actual term, but that's what it comes down to).  On the side, users not only have the chat list that has been available for some time, but also a Twitter-style feed (dubbed the ADD bar) giving real-time updates from friends, friends of friends, and the occasional random friend of a friend of a friend.

As people have pointed out, the people who complain about a Facebook update are likely the same people who complained about the update before that, and before that, perhaps suggesting that some people like to complain.  But just like Dr. Gregory House thinks everyone lies, I say, "Everyone complains", at some time or another.  I think there's more to these layout changes than predisposition to complain.  That's right, ladies and gents, I'm talking about the situational influences - notice a theme here? :)

You may also have noticed the look of this blog has changed.  'Tis the season.  It seems like as a child, I would always want to reinvent myself in the Fall.  Perhaps the same is true for websites.  But how might these changes influence our perceptions?

A few years ago, I had the pleasure of listening to researcher Norbert Schwarz give a talk at my grad school alma mater, Loyola University Chicago.  If you've never checked out his research, you definitely should; visit his homepage here.  Not only is he incredibly friendly and funny, his research, while definitely theory-driven, is incredibly applicable to a variety of social situations.  Or maybe it's that he's really good at taking theories and applying them to a variety of situations; either way, I would love to be able to do that.  Theory is not my strong point.

One area Schwarz has studied a great deal is metacognitive experiences, basically thinking about thinking, and how we use cues from our thinking to influence the way we think.  Wow, that made so much more sense in my head.  Okay, how about a concrete example?  Let's say I show you an ad for a car, then ask you to come up with a list of 10 reasons why you should buy that car.

Go ahead and get started on your list; I can wait.
Unless you know a lot about the car or I offered a really great option to consider (Batmobile anyone?), you probably had a lot of trouble coming up with a list of 10 items.  You might use that cue, "Wow, thinking of 10 items was really hard" to tell you something about whether you really want to buy the car.  That is, because thinking felt difficult, you took that as a cue to mean the thing you were considering was not that good.  Schwarz refers to this perceived ease/difficulty as "processing fluency".

Schwarz has shown that processing fluency can be manipulated in many ways, such as by using a hard-to-read font or by asking participants to recall a large number of very specific personal events (such as 12 times you behaved assertively).  Another way is familiarity; more familiar things are easier to process.

Now obviously, we don't always need thinking to feel easy.  Sometimes, we encounter things to which we want to devote our full cognitive effort.  But as I mentioned in a previous blog post, we're cognitive misers.  We're choosy about how we spend our cognitive resources.  If we're asked to learn a new software package for work, for example, we might be willing to devote the effort (there are a lot of other variables operating, but this is just a for-instance).  Facebook, on the other hand, is a leisure-time activity, and many people who aren't high need-for-cognition folks would rather be able to have fun without thinking too hard.

But people continue to use Facebook, and though some users have likely split recently, Facebook currently has 750 million members (according to Google population data, the Earth's population is currently 6,775,235,700, so that means about 1 of every 9 people uses Facebook).  Perhaps processing fluency is not the only issue at work here; the very nature of the social networking site is, well, it's social.  Your friends are there, and in some cases, it might be your only opportunity for interaction.  That might make some people unlikely to leave (of course, since Google+ is now open to the public, the landscape may continue to shift).

For those who left Facebook, I'd love to hear your reasons (in comments below), even if you left long before the recent update.  For those who stuck around, don't worry; eventually you'll get used to the new look and thinking won't feel so difficult... just in time for the next update.

Thoughtfully yours,
Sara

Friday, September 23, 2011

Pachelbel and Queen: The Psychology of Music Cravings

Cravings: everyone gets them at one time or another. Food cravings make sense (well, some of them). Your body often craves what it needs more of, though apparently some cravings for one thing (e.g., sugar) are actually a sign your body is lacking something else (e.g., magnesium). And while your taste buds may be fooled (Mmm, thanks for that Diet Coke, sweet nectar of life), your body is not (Artificial sweetener? Nice try, I still want sugar) (which some research suggests is why drinking diet soda doesn't actually help if you're trying to lose weight).

But today, I experienced another craving, one I've had before but never really considered until now. I found myself craving certain music. As I was listening to one album (Rachmaninoff's All Night Vigil, admittedly, one I've been listening to a lot recently because my choir will be performing it) I found myself craving (for lack of a better word) another work, Carmina Burana, different in style, instrumental support, and subject matter: While Rachmaninoff is a series of a cappella songs based on chants from the Eastern Orthodox tradition (read: quite religious), Carmina Burana is a celebration of sex, drugs, and rock n' roll (well, perhaps not rock n' roll, but the other two most definitely), and of course, fortune. One of the most well-known works from Carmina Burana is "O Fortuna", a piece you've definitely heard; it's often used in advertisements, even though the lyrics "O fortune, like the moon, you are changeable…" are perhaps not well-suited for the car, movie, football game, etc., ads I constantly see it used in. But people keep using it because it sounds epic - like you're talking about something really important and awesome. Well, it does if you don't understand Latin. But I digress.

Read a list of popular culture uses of O Fortuna.

Obviously, when you crave food, it fulfills a bodily need; even if the food you crave isn't all that good for you (And really, are any food cravings for things that are actually healthy? I don't know about you, but I rarely crave carrots.), it still provides calories your body needs to run all of its systems. In fact, pretty much any craving you can think of fulfills a survival need of some kind. But what need does music fulfill? Apparently, I'm not the only one who has considered this question. There are textbooks, articles, and even whole scholarly journals devoted entirely to the psychology of music. Researchers have examined how music preferences relate to personality, the factors that explain why someone likes a particular piece of music, and how music affects mood, to name a few.

Even researchers outside of music psychology recognize the power of music to influence your mood. One of the most popular manipulations for studies on the effect of mood is to have participants listen to a piece of music known to elicit certain feelings: happiness, sadness, etc. But research directly on music is far rarer (though it's becoming more common thanks to all these great publication outlets). Peter Rentfrow and Sam Gosling, two personality researchers who also frequently examine music in their work, noted in this article that of the 11,000 articles published in top social and personality psychology journals between 1965 and 2002, music was an index or subject term in only 7. The few studies on these topics find that music is related to personality (traits such as sensation-seeking - also implicated in enjoyment of horror movies - as well as extraversion, warmth, conservatism, and psychoticism), social identity (something I've also blogged about before), and physiological arousal (though Rentfrow and Gosling's brief review of this subject still touches a lot on personality).

Of course, preference refers to what types of music you enjoy. Personally, I enjoy many different styles of music, everything from orchestral and choral music (what many refer to as "classical") to piano-driven pop to classic rock to blues (my background music while writing most of this blog post was Stevie Ray Vaughan). But at certain times, I may prefer to listen to one style of music, or one particular artist, or even one particular song. What determines that minute-to-minute preference? One study recently published in Psychology of Music (Ter Bogt, Mulder, Raaijmakers, & Gabhainn, 2011, abstract here) may offer some explanation. They categorized their participants by self-rated importance of and involvement with music: high-involved, medium-involved, and low-involved listeners. High-involved listeners were the eclectic types - they liked a broad range of music styles, experienced a great deal of positive affect while listening, and reported that music served a variety of purposes in their lives: dealing with stress, constructing their identity, relating to others, and enhancing their mood. The other two groups showed narrower music preferences and considered music to be a less integral part of their lives.

Relatedly, Saarikallio (2011 - abstract here) argued that music was important for emotional self-regulation, and performed interviews with people from a variety of age groups and levels of involvement with music, finding that reasons for listening to music included mood maintenance, distraction, solace, revival, and psyching up; reasons were quite consistent across age groups.

Though these studies don't tackle the topic directly, they suggest that people may select music from their internal list that they think will serve whatever purpose they're addressing (such as coping with stress or maintaining a happy mood). High-involved listeners have a larger internal list, and also get a great deal out of listening to music, so they probably do it regularly and may have certain songs in mind for particular needs. For example, I worked for a year in downtown Chicago, commuting from the north side. Though I had the El (that's the Elevated Train for non-Chicagoans) and didn't have to deal with Chicago drivers (shudder), the El was often crowded early in the morning and people were none too friendly. (I spent part of this time commuting with a broken arm and did anyone offer me a seat? No, but that's a blog post for another day.)  My solution for all the negativity: sublime choral music, especially Bruckner's Mass in e minor; just out-of-this-world gorgeous.  Made my commute so much more bearable.

There are countless other examples of studies examining the relationship between traits and music preference, whether those traits are described in personality-theory terms (extraversion, openness to experience, etc.) or ability terms (musical ability, etc.). In the course of reading this literature, I've had my ego stroked (you like lots of types of music and use music for emotional needs because you're a good musician - yay!) but also knocked down a bit (people who use music in those ways tend to have lower IQs - aw, man). But what about this notion of need? What need does music fulfill, and can that explain these musical cravings?

So who do I turn to when I want to examine human needs? Maslow, of course! Why didn't I think of it sooner? (See aforementioned finding about IQ.) Most people who have taken introductory psychology know Maslow as the guy who created the hierarchy of needs.



Look familiar? According to Maslow, human needs fit into one of five levels of the pyramid. Needs at the bottom of the pyramid are most important - without them, we don't really consider the needs higher up on the pyramid, because we're busy trying to fulfill those basic needs. So in order of importance, our needs are Physiological, Safety, Love/Belonging, Esteem, and Self-Actualization.

Arguably, music would be one of the Self-Actualization needs, falling perhaps under the sub-need of creativity. How then can we explain the ubiquity of music in so many human cultures, even cultures that struggle to meet the needs at the base of the pyramid? Of course, even though music is present in nearly all human cultures, there are certainly individual differences in the importance of music (as the studies above showed). Perhaps for some people, then, music falls on a lower, more integral part of the pyramid. For instance, Ter Bogt et al. (above) found that high-involved music-lovers considered music to be an important part of their identity (both individually and socially). These individuals might place music, then, with Esteem or even Love/Belonging. In fact, music might fall in more than one place, and be used to fulfill a variety of levels of needs; that might even explain why certain music is more appealing at certain times.

I'm sure I'm not the only one who experiences these musical cravings. Even so, despite services that introduce you to new music based on other music preferences, there doesn't seem to be anyone trying to measure and use these minute-to-minute variations to make a buck. Perhaps because when you get right down to it, no one completely understands it beyond knowing it exists.

Thoughtfully yours,
Sara

Wednesday, September 21, 2011

Keep Telling Yourself It's Only a Movie: The Psychology of Horror Movie Enjoyment, Part 2

As I said in my last post, I love horror movies. I set out to explore some reasons why I (and many other people) might love movies that others might find disturbing. Little did I know, this search would turn up so much information, in addition to my personal notes on the subject, that 1) my first blog post was quite long and 2) I still had to cut it off and add the ever-annoying "To Be Continued". Who knew that something so trivial could bring up so much relevant information in the scientific community? Well, I did, but that was what we call a rhetorical question, dear reader.

So what is it that sets horror movie lovers apart from others? Last time, I wrote about the sensation-seeking personality, which includes love of horror movies in a long list of traits held by people who seek out thrills in all the wrong places and suffer from an insatiable case of neophilia (and if you thought, "Ew, they like dead people?!" dear reader, read that word again). As I said, though, I hesitate to accept something so clinical. As a social psychologist, I try to look for situational explanations as well. And as a recovering radical behaviorist (hi, my name is Sara, and I'm a Skinner-holic), I try to think of how learning and patterns of reinforcement & punishment might have shaped a behavior.

Love of horror movies, for instance, seems to be correlated with gender. Men are more likely to enjoy them than women. Of course, you could make the case that there are some innate, biological differences between men and women that make them respond differently to these images, but behavior shaping and reinforcement could also explain some of these differences. When children start growing up, they begin going through what is called gender socialization: they learn how to be little boys and little girls.

One way this happens is through selective reinforcement. We reinforce (through our verbal and behavioral responses) when little girls play with dolls and little boys play with trucks; we may not necessarily reinforce when little boys play with dolls and little girls play with trucks. We often reinforce when little boys play rough, but sometimes even punish little girls when they play rough.

Gender stereotypes can become so internalized, we even get the kids to do the dirty work for us:


Even parents who insist they don't want to reinforce gender stereotypes may do so inadvertently; a parent may not "have a problem" with Jr. playing with dolls, but they may not say anything or join him during that kind of play, while responding positively and joining in when Jr. plays with a truck.

As a behaviorist (oops, I mean recovering behaviorist), I love observing these kinds of interactions; people often fail to realize what kinds of behaviors they're rewarding.

So you could argue that gender socialization influences movie choice. When children pick out a movie to watch at the movie theater, at home, etc., parents always have the option of saying, "No, not that one. How about something else?" Do that often enough, and with certain movie selections, and over time, children learn what sorts of movies they should be watching (and what they should avoid – or at least, what movies they should be watching when the parents are around; people constantly test to see what they can “get away with”).

The problem with explaining a behavior through reinforcement and punishment is that the definition is circular, and it's difficult to get to the root cause. What is a reinforcer? Something that reinforces, that is, makes a behavior more likely to occur again. The thing is defined by the effect it has on behavior, and if it doesn't have that effect on behavior, it was never that thing to begin with (do you see why I say recovering behaviorist? - I love this subfield of psychology but it definitely gives me a headache). You can never get away from individual differences here, because something that is reinforcing for one person may be neutral or punishing for another. Individual differences are fine, of course (I love including some of these variables in my studies), but where do they come from? Biology, perhaps? Very early experiences? Start thinking too much about this, and Watson's insistence that "I can shape anyone to be anything I want, mwa ha ha" and Skinner's "Innate, schminnate" attitude start to unravel, and the only way to repair them is to weave in – gasp – cognitive psychology, evolutionary psychology, and psychodynamics. Oh, the horror.

Of course, timing is key when it comes to reinforcement and punishment. We know that for a reinforcer to be truly reinforcing, and a punisher to be truly punishing, it should come quickly after the target behavior; this is one of the basic tenets of behaviorism. To take this one step further, something may only work if it is delivered at a certain time. Perhaps one has to be in a certain state of mind for a horror movie to be enjoyable, and if you see a movie during that proper window, you’re more likely to seek it out again. Your mind is shaped to enjoy these images (there is evidence that experiences actually change your brain physically, and that early stimulation may have long-term implications for brain development and, therefore, things like intelligence), and you become, over time, a horror movie fiend.

If you first view a movie during one of those off times, your response may be disgust, a response that strengthens over time. This is similar to something I’m exploring for another blog post (coming soon!) about cravings. This argument still depends very much on inner states, like mood, but considering that other research has established that things like mood influence our decision-making and processing of information, it stands to reason that mood could influence whether something is reinforcing or not. And since we know that physiological states also influence whether something is reinforcing or not (a cookie isn’t very rewarding if you’re not at all hungry), it doesn’t seem like a huge leap to weave these two lines of thinking together.

Hard to believe I once spouted Skinner constantly – Skinner, the man who said, “So far as I’m concerned, cognitive science is the creationism of psychology. It is an effort to reinstate that inner initiating or originating creative self or mind that, in a scientific analysis, simply does not exist.” Ah, Skinner, you were a brilliant man, but just because you can’t take something out and dissect it, or put it in an operant chamber and train it to press a bar, doesn’t mean it isn’t real or important. Not to mention, with advances in technology and scientific methods, something that was once unobservable, and therefore untestable, unfalsifiable, and unscientific, often becomes the subject of routine scientific study.

If we keep moving forward through the 20th-century history of behaviorism, we come to Albert Bandura and his work on vicarious learning, imitation, and modeling. We learn a lot by watching others and choose role models to imitate (something Thorndike began studying more than half a century before Bandura, but he found little to no evidence in his studies of animals and concluded that such learning did not exist). It’s possible, then, that love (or hatred) of horror movies develops in part from who we aspire to be and from observing the responses of others. This could even explain why behaviors not in line with gender stereotypes get shaped and reinforced; a little girl may learn to behave in a “boyish” way by observing and imitating little boys (perhaps one reason that little girls with lots of brothers become “tomboys” themselves – I use the quotes because I personally hate these terms, but sometimes there is no better way to explain something).

To use a personal example, I grew up with a brother and mostly male cousins, so perhaps my early models were mostly male. And though I did engage in some “girly” play behaviors, I would also play with action figures and train sets with my brother, and we would often watch TV together: lots of He-Man, Justice League, and even WWF. (I was traumatized when I learned pro wrestling was fake. Hulk Hogan, I still remember that sobbing letter I wrote to you after you “broke your back”. Hope you had a nice vacay, you liar!)

Once again, we come back to the original dilemma, and to play devil’s advocate, you may be asking, “Okay, but where did the behavior originally come from? If we’re imitating, where does the imitated behavior come from? What makes parents want to reinforce gendered behavior? Which came first, the chicken or the egg? And you really wrote a get-well letter to Hulk Hogan? Dork.” Yes, yes, I know. I suppose we can never truly get away from the cognitive, personality, and clinical arguments. But depending on them entirely seems as misguided as depending entirely on contingencies of reinforcement as the sole explanation for behavior (sorry Skinner). Taken together, I think we… well, I think we perhaps created more questions than we started with, but isn’t that what good science is all about? What do you think?

Thoughtfully yours,
Sara

Friday, September 16, 2011

Keep Telling Yourself It's Only a Movie: The Psychology of Horror Movie Enjoyment, Part 1

The weather is turning colder and fall is just around the corner. There are many things to love about fall. As much as people love summer activities, the heat begins to wear on many of us; the cooler days of fall are usually a welcome change by summer's end. Fall clothes are also some of my favorites; say goodbye to shorts and flip-flops and hello to scarves, sweaters, jackets - there's just something cozy about the layers, the neutral colors. Speaking of colors, fall leaves... need I say more? Yes, fall is wonderful for many reasons, including one more: Halloween.

Why Halloween? For one: It's the one time of year that it is acceptable to watch horror movies (I watch them pretty much year-round, but this is the one time that it's totally acceptable to talk about these movies and invite people over to watch them with you). I love horror movies. Despite (or perhaps because of) being slightly traumatized watching a scene from Children of the Corn when I was 4 or so years old, a scene I remember quite vividly, I grew up to be a horror movie fiend. When my family made its visit to the local video store, I'd want to check out movies from the A Nightmare on Elm Street series (I've seen them all, even, shudder, Freddy vs. Jason; Part 3: Dream Warriors is my favorite for blending scary with funny – generally on purpose). On rainy days when we couldn't play outside, I'd turn off all the lights, block out the windows, and watch The Exorcist in the dark. Alone, because no one in my family wanted to join me. And it wasn't just movies: I was a voracious reader, and though I would read pretty much anything from the fiction or nonfiction sections of the library, Stephen King was (and still is) one of my favorite authors.

Many believe that people who love scary movies don't find them scary. I don't know about others, but I find them terrifying. I'll sometimes have trouble sleeping after a really good one, like the first time I saw Poltergeist, and, especially when I'm alone, I'll find myself imagining all kinds of creepy things and explanations for weird noises. I probably find them just as scary as people who refuse to watch them (and I know a lot of people who fall into this camp, including my family, especially my mom).

But still, I love them. I own many - it's one of the first things people notice when looking through my movie collection. I talk about them to anyone who will listen; yes, I'm that person who still goes on about that awesome scene in that one movie that came out the year I was born but that I didn't see until I was 10 (aka: The Thing). When someone talks about zombies, I feel we have to clarify, "Are these slow-moving Night of the Living Dead zombies? Or crazy fast 28 Days Later zombies? Or somewhere-in-the-middle Walking Dead zombies?" - because it’s an important distinction. And yes, I’m also that obnoxious person who thrives on horror movie trivia: did you know that A Nightmare on Elm Street (movie 1) took place in Springwood, California, but suddenly in later movies, they reveal they’re in Springwood, Ohio? That’s movie magic for you; they moved an entire city across the country.

So what explains my love of movies that would leave my mom insisting she sleep with the lights on surrounded by crosses, garlic, and silver? I started doing some research on this, and it seems there are many psychologists and communication experts who have sought to explain this very thing. There's actually a lot here, and a lot of commentary that I think is necessary, so this is the first of two parts on this topic.

One explanation is the notion of "sensation-seeking". Some people, for instance, are high sensation seekers. According to Jonathan Roberti, clinical psychologist and expert in sensation-seeking (read a review of his work here), these individuals thrive on experiences that leave them emotionally and physically high. Not only would high sensation seekers be more interested in seeking out thrills and other emotional highs through the media (they apparently enjoy horror movies), they are also at increased risk for other "thrilling" activities, like drug use, excessive gambling, and casual sex. (Hmm, this isn't really sounding like me, but let's continue exploring.) Furthermore, high sensation seekers thrive on novelty, always seeking out new experiences, and are willing - and perhaps prefer - to take risks to achieve these thrills; they're likely to select careers that allow them to take risks and experience new things - forensic identification, aka profiling, is one career they tend to express an interest in - and are prone to boredom.

Some research suggests that people have different brain responses to stimuli, and that some people, called "high stimulation seekers" (sounds like high sensation seekers to me, but different terms for different folks) experience activation in areas of the brain associated with reinforcement and arousal when exposed to intense stimuli, whereas others experience activation in emotional areas of the brain. While the researchers did not measure these brain reactions in response to horror movies, it may be that my brain responses are what differentiate me from my mom. While these images are reinforcing and psychologically arousing to horror movie lovers, they're upsetting to others.

But I hesitate to leave it at that. This explanation is a more clinical one, treating enjoyment of horror movies as part of a broader diagnosis or, at the very least, a personality type. I hesitate to accept this explanation alone, because it clinicalizes (if that is in fact a word - and if not, it should be) something that could be considered normal. Additionally, though this research suggests there is a correlation between enjoyment of horror movies and these other behaviors, the one thing that makes horror movie viewing different from those other activities is the notion of risk. There are risks of bodily injury or death involved in those behaviors, even something as commonplace as riding a roller coaster (though the odds are very, very low). The only risk involved with watching horror movies is that you might get a little freaked out, which could be quite traumatizing for people who dislike horror movies but probably not for people who seek them out (either way, remember the advice given to viewers of Last House on the Left: "to keep from fainting, keep telling yourself 'it’s only a movie'"). As Roberti points out, however, not all of high sensation seekers' behaviors are related to risk; they also enjoy trying new things in general - art, music, sports - and they score high on the personality dimension "Openness to Experience".

Still, there are many good reasons to avoid clinical explanations when other explanations are just as likely if not better. One early example of the dangers of over-clinicalizing is a study by David Rosenhan, aptly titled "On Being Sane in Insane Places". In this study, Rosenhan and 7 associates got themselves admitted to different mental hospitals by each meeting with psychiatrists and informing them they were hearing voices saying the words "Empty", "Hollow", and "Thud". All but one were diagnosed with schizophrenia (the final patient was diagnosed with bipolar disorder, or “manic depression” as it was called then) and admitted. Once on the inpatient psychiatric ward, these "pseudopatients" stopped faking any symptoms and began acting as they normally would to see how long it would take for facility staff to notice. Their stays ranged from 7 to 52 days, with an average of 19 days.

So, they had time to kill, and thought, "Let's have some fun". What's a social psychologist's idea of fun? Doing research. While they were inpatients, they had an interesting opportunity to observe the various happenings: patients' behaviors, providers' behaviors, and most importantly, providers' assessments of patients' behaviors. They found that once a patient had been labeled with a clinical diagnosis, his or her behavior was often interpreted in line with that diagnosis, even when situational explanations were perhaps better. For example, patients spent a lot of time hanging out in the cafeteria waiting for meal-time, which providers attributed to things like "oral fixation", but which the researchers thought was more likely because there's not much to do in a mental hospital, and eating is one regular activity that breaks up the monotony. Even the researchers' note-taking behavior was attributed to their diagnoses. [Interesting side note: while the providers never caught on that the pseudopatients were not actually mentally ill, 35 of the 118 other patients caught on rather quickly, and would ask the researchers things like, "Why are you here? You're not crazy."]

As a social psychologist, I think we should at least consider situational explanations for these phenomena. That's for next time. To be continued...

Thoughtfully yours,
Sara

Sunday, September 11, 2011

Forgiveness and the 10-Year Anniversary of 9/11

Today, we remember the 10th anniversary of the attacks of September 11th. In church today, the message was one of forgiveness. There are many religious and spiritual arguments for the importance of forgiveness that I won't go into. Psychologists also have explored this concept, and have discovered how forgiveness (and its converse, unforgiveness) influences an individual's mental and physical health.

Forgiveness is defined in many ways, but all of these definitions add up to one thing: forgiveness is something a wronged person offers to the one (or ones) who perpetrated the wrongdoing. It is generally viewed as a process that the forgiver works toward through many emotions and behaviors. Forgiveness is also often viewed as a personality trait; some people are simply more forgiving than others.

A lot of evidence suggests that being in a state of unforgiveness is damaging to both your mental and physical health (read one of many reviews here). Conversely, forgiveness is associated with better mental and physical health. Forgiveness is something you do, in part for the other person, but also for yourself. Refusing to forgive and continuing to hold a grudge is, for lack of a better word, toxic to your well-being.

This is because, in refusing to forgive, we often dwell on the wrongdoing. Psychologists refer to this constant dwelling on the negative as "rumination", and refer to rumination about perceived wrongdoing as "vengeful rumination". Research on rumination in general finds negative effects. Rumination is negatively correlated with sleep quality (abstract), and is associated with alcohol misuse, disordered eating, and self-harm (full paper). It makes focusing attention and problem solving difficult, because ruminators tend to be less confident in their problem solving abilities (full paper), and also because rumination uses working memory that could be devoted to the problem (full paper). Ruminators generate more biased interpretations of negative events, are more pessimistic about the future, and are poorer at solving interpersonal problems, as well (full paper).

Rumination is also associated with poor physical health. High ruminators show physiological stress markers, such as increased salivary cortisol (full paper here and abstract here) and immune system activity (full paper). People who ruminate also take longer for their heart rate and blood pressure to return to normal after being made to feel angry, which can put them at risk for organ damage over time (full paper).

Forgiveness is not "letting someone off the hook". It is not the same as condoning or absolving someone of wrong-doing. The old adage of "forgive and forget" doesn't necessarily lead to better outcomes, mainly because of the forgetting part. It is good to forgive, but not necessarily good to forget. Forgetting means failing to learn a lesson - a lesson that may be very important for you later on. This leaves the "forgiver" in a rather difficult position: one in which he or she must remember the wrongdoing without holding a grudge.

How, then, do you forgive? And how do you think about the act in such a way that you can find forgiveness without simply ruminating on the event? Rumination has one key component - it is dwelling on the negative without trying to find a solution for the negative. You're stuck in the mud and simply spinning your wheels without really getting anywhere. Reflection, on the other hand, involves thinking through an event and trying to find closure. Reflective thought leads to a change in the thinker.

The review I linked to above (linked again here) discussed some of the reflection “forgivers” engage in. One cognitive process is empathy, in which the forgiver puts him- or herself in the other’s shoes, and attempts to experience the same emotional state. Not only do “trait forgivers” experience more empathy, but people who are randomly assigned to engage in empathy are also able to experience forgiveness. This provides some evidence that anyone, even people who are not naturally empathetic, can use this experience to forgive.

Forgivers are more generous in their appraisals of the one(s) to be forgiven, seeing them as more likable or having more likable traits. They are also better at understanding another person’s explanation for the behavior. In essence, they try to see the situation from the other person’s point-of-view. You don’t have to accept another person’s explanation, but rather, try to understand where they’re coming from. At the very least, this understanding can aid in finding a solution or determining a path to reconciliation. People often have a very self-centered view of the world in that they have difficulty recognizing that other people do not see things in the same way or have the same knowledge (a good blog post for another day).

Of course, one thing that may make forgiveness such a difficult process in the case of 9/11 is the severity of the wrong as well as the fact that the group responsible has such different worldviews. In a previous blog entry, I talked about stereotypes and ingroup/outgroup dynamics, all of which are definitely relevant here. Our tendency to dehumanize the outgroup makes forgiveness complicated, because forgiveness is a between-human experience. Forgiveness in this case is not impossible, but would have to involve an even greater degree of understanding and attempts to characterize the other group’s point-of-view.

Forgiveness is a process. Even 10 years later, the emotions are still very raw, but we can still continue moving forward.

Thoughtfully yours,
Sara

Wednesday, September 7, 2011

Too Pretty to Write this Blog Post: On Math Performance, Stereotypes, and Bad T-Shirt Choices

I know, I haven't posted in a while. Mea culpa, etc. etc. I'll try to be better about this from now on :)

Recently, two stories came across my desk (well, desktop, but whatever) that I've been meaning to blog about. It seemed like a perfect opportunity to talk about some of my early research interests. I've always been interested in the media: how it shapes our perceptions, and how it influences our behavior, performance on various tasks, and even our health (something I've done some research on and should definitely post about one of these days). One of the first studies I worked on - for my master's thesis - was about how the media influences women's math performance.

The most recent pop culture reference to women and academic ability was on a t-shirt marketed toward girls 7 to 16 at the major retailer JC Penney. Though JC Penney eventually pulled the shirt, many argued that this t-shirt perpetuated negative stereotypes about girls and women. This is not the only instance of stereotypes about women popping up in the media. Abercrombie and Fitch prompted a "girlcott" in 2005 when they released a t-shirt with the phrase "Who needs brains when you have these?" printed across the bustline. T-shirts aren't the only outlets for these messages. Perhaps you remember the "Math is hard" Barbie? Teen Talk Barbie was the cause of controversy when one of the many phrases she uttered was "Math class is tough!" And advertising is constantly filled with gender stereotypes (both of men and women).

But wait a minute - do these messages really affect women's math performance? Research (and not just my own) finds that they can. To avoid looking biased toward my own research, I won't even discuss my thesis.

But first, some background information. Much of this research is influenced by the work of Claude Steele and his colleagues, and specifically by Steele's concept of "stereotype threat". Essentially, stereotype threat occurs when a negative stereotype about a group's performance on a task makes a member of that group underperform on that task. To give a concrete example, Steele first explored this concept in a study on standardized test performance by African-American students. He and his colleague Aronson found that making African-American students conscious of racial stereotypes resulted in underperformance on a standardized test. Even asking them to report their race on the test lowered performance. When racial stereotypes were made irrelevant through test instructions, African-American students performed at the same level as White students. (Read the original article here.)

Stereotype threat became an important area of study because it provided evidence to counter claims of genetic inferiority that have been used to explain past race differences in standardized test scores. Steele and his colleagues went on to apply this phenomenon to a variety of groups, and used it to explain another group difference: women’s scores in math testing. In a conceptual replication of their study on African-American students' test performance, they found that when some women were told a math test had shown gender differences in the past, they performed worse than men; when they were told the math test they were taking had not shown gender differences, they performed at the same levels as men. (Full paper available here).

Prior to Steele and his colleagues' research, biological explanations were applied to gender differences in math and the sciences. Many people believed that women were just naturally bad at math. These opinions continue even today; early in 2005, Harvard University President Lawrence H. Summers presented a variety of explanations for women’s under-representation in math and science, including the possibility of innate differences in aptitude (he resigned not long after, though this speech was just one of many marks against him).

Steele established three conditions that must be satisfied in order for stereotype threat to occur: 1) there must be awareness of the stereotype by society at large, that is, the stereotype that women are bad at math must be well-known; 2) the individual must be identified with the domain of interest, that is, a woman must be math-identified in order for stereotype threat to take place; and 3) the negative stereotype must be relevant to the individual during the domain-specific situation, that is, the stereotype that women are bad at math must be relevant to the individual when she encounters a math test. Though some people have called proposition 2 into question, and have even shown that stereotype threat can operate among women who don't really care about their math ability/performance, the other two propositions are necessary.

What these propositions mean, however, is that instructions from an experimenter aren't necessary for stereotype threat to occur. Simply having a high awareness of the stereotype (abstract here), being strongly woman-identified (full paper), or believing that gender stereotypes are correct (full papers here and here) can lead women to underperform in math even in the absence of any stereotype cues from the experimenter. It's not even necessary that the information made salient be related to math. Being the only woman in a group of men is one way (abstract, full paper). Receiving unwanted homework help from your parents is another (abstract here).

But it was work by Paul Davies and his colleagues that found stereotypical television commercials could lower math performance in women and impact their interest in math. In one study, some participants saw two stereotypical commercials: one of a teenage girl jumping for joy about a new acne medication and another of a woman drooling over a new brownie mix. Others saw counter-stereotypical ads: one in which a woman impressed a man with her knowledge of automotive engineering and another of a woman discussing health care concerns. Men and women who saw the counter-stereotypical ads did not differ on math performance. Men and women who saw the stereotypical ads did, with women receiving significantly lower scores. In a second study, women exposed to stereotypical ads avoided math questions in favor of verbal questions more frequently than women who saw neutral ads or men in either condition. A third study found that exposure to stereotypical television ads lowered women's interest in quantitative careers.
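
As an aside for readers who like to see the numbers: when researchers say one group scored "significantly lower", the size of that gap is usually summarized with an effect size such as Cohen's d (the difference in means divided by the pooled standard deviation). Here's a minimal sketch of that computation in Python; the scores below are entirely made up for illustration - they are not data from Davies and colleagues or any other study.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a) +
                  (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return mean_diff / pooled_var ** 0.5

# Made-up scores purely for illustration -- NOT data from any actual study.
men_stereotypical_ad   = [12, 15, 9, 14, 11, 13, 10, 16]
women_stereotypical_ad = [11, 14, 8, 13, 10, 12, 9, 15]
print(f"d = {cohens_d(men_stereotypical_ad, women_stereotypical_ad):.2f}")
# With these invented numbers, d comes out around 0.4 -- "small to moderate"
# by the usual rough benchmarks (0.2 small, 0.5 medium, 0.8 large).
```

That "small to moderate" territory is something I'll come back to at the end of this post.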

None of these television commercials said anything about women's math ability. Why, then, would they influence math performance? It has been argued that these messages create an "atmosphere of stereotype threat", where the messages are ubiquitous and inescapable. No special instructions are needed to make women question their math ability; messages present in their environment are enough. Some have argued that the controversy over JC Penney's t-shirt is misplaced; after all, it's just a t-shirt. But is it? Based on what we know about stereotype threat and how similar messages have influenced women's math performance, can we really shrug this off as "just a t-shirt"?

So what was the second story that came across my desktop? An exploration not of math performance, but of a related concept: spatial reasoning. In this really creative natural experiment, researchers examined two tribes in India, the Karbi and the Khasi. These two tribes are very similar in lifestyle, diet, even DNA (they likely share some common ancestors). The main difference is in gender roles. In the Karbi society, land is passed to male children when the parents die; women rarely own land. In the Khasi society, women control the land and goods; men can't own land and their earnings are turned over to their nearest female relative. The researchers gave a spatial reasoning task to members of these two tribes (about 1,300 people in total) and found some striking results: in the Karbi society, men outperformed women, but in the Khasi society, no differences were found.

There are still people who argue that stereotype threat doesn't explain these differences, or that, if it does, it is only one variable among many. There is definitely some validity to that argument. The effect sizes in these studies are small to moderate at best, meaning there is more than stereotype threat pushing math scores around. Furthermore, the t-shirt that prompted this blog post is about "doing homework" in general - not just math homework. There aren't many stereotypes that women are bad at school in general - in fact, in some cases, the opposite stereotype exists - so would this t-shirt really have the effects I discuss above? What do you think?

Thoughtfully yours,
Sara

Thursday, August 18, 2011

Celebrities and Weight Management

I didn't think, when I started this blog, that I would even bother responding to celebrity quotes. True, I could probably blog forever and a day about the things celebrities utter in interviews, on their Twitter page, etc. - in fact, there are many successful blogs devoted to just that topic. In a recent interview, however, Mila Kunis talked about weight loss. Since weight management is one of my areas of research, I felt I needed to respond -- plus, I was looking for a good reason to talk about weight management research on here.

Essentially, Mila said that people who are “trying to lose weight” and are unsuccessful are simply not trying hard enough. What prompted her to reach this conclusion was the fact that she was able to lose 20 pounds for her role in Black Swan - a substantial amount, considering she is normally very thin. Of course, what Mila said is problematic for a few reasons.

Even when an individual is successful at losing weight through a program, weight gain in the time following the program is very common; most will gain back two-thirds of the weight within a year, and nearly all of it within 5 years. Why? Because sudden and drastic changes are difficult to maintain. That’s one reason you’ll find that, for many people, losing weight - especially with “fad diets” - is easy, but maintaining weight loss is difficult. The approaches celebrities often take to lose weight for a role definitely work over the short-term, but are rarely sustainable. Look at celebrities who did not lose weight for a role, but who did so because their weight was unhealthy – for example, Oprah Winfrey (whose weight has often been the target of comedians) lost 67 pounds on a liquid diet, and unveiled her new look on her show while pulling a wagon of fat… only to regain much of that weight later. In fact, within a week of going off the diet, she had gained 10 pounds. Such low calorie diets cannot be maintained for very long, and there’s a good reason for that. In fact, even in cases where a doctor has prescribed a very low calorie diet (an approach taken only for patients who are very obese), the patient has to (or is supposed to) undergo intensive medical supervision.
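
To put some rough numbers on why crash approaches don't stick, here's a back-of-the-envelope sketch. I'm using the old "3,500 kcal per pound of fat" figure, which is a crude rule of thumb rather than a precise physiological constant, and I'm assuming a 10-week timeline purely for illustration - that's my assumption, not anything Mila has actually described.

```python
# Back-of-the-envelope only. The 3,500 kcal/pound figure is a rough rule of
# thumb, not a precise physiological constant, and the 10-week timeline is an
# assumption made for illustration.
POUNDS_TO_LOSE = 20
KCAL_PER_POUND = 3500
WEEKS = 10

daily_deficit = POUNDS_TO_LOSE * KCAL_PER_POUND / (WEEKS * 7)
print(f"Average daily deficit needed: about {daily_deficit:.0f} kcal/day")
# ~1,000 kcal/day below maintenance -- achievable on a crash diet for a role,
# but very hard to keep up once the role (or the program) ends.
```

A sustained deficit of roughly 1,000 kcal a day is the kind of thing you can white-knuckle for a role or a program, but it's not a way most people can eat indefinitely - which is exactly why the weight tends to come back.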

Whatever changes you make to lose weight, whether it is diet, exercise, or some combination of both, they have to be changes you’re willing to maintain over the long-term, or your chances of regaining the weight are high.

My predominant concern when I see celebrities losing large amounts of weight, and talking about how easy it is and “anyone can do this”, is that it creates unrealistic expectations. In fact, a lot of research shows that people entering weight loss programs come in with really unrealistic expectations.

Furthermore, telling people they’re going about weight loss in the “wrong way” and to “try harder” doesn’t instruct them on how to lose weight effectively and in a healthy way. This is probably one reason that media coverage of celebrity weight loss and the constant messages about what people are “supposed to look like” can lead to disordered eating and other maladaptive behaviors. Figuring out how to lose weight is pretty intuitive – cut down on calorie intake and/or increase physical activity – but the approaches one needs to take to lose weight healthily are definitely not intuitive. Even if someone decides to do some research into losing weight, there are many sources of information, some teaching really unhealthy approaches. A lot of people don’t even realize what disordered eating means, thinking that, as long as they aren’t starving themselves or forcing themselves to vomit after eating, their behaviors (like “fasting” after large meals or exercising to the point of exhaustion) are perfectly normal and even healthy.

I’m certainly not attacking celebrities. I know that Mila probably meant “no, you can do it” as an attempt to boost people’s confidence in themselves and their ability to reach their goals (a concept psychologists call “self-efficacy”). It’s definitely a noble goal, because research suggests that people starting weight management programs often have low self-efficacy.

Even so, boosting confidence may lead people to try something to lose weight, but not necessarily the right thing, so increasing self-efficacy needs to be done in concert with teaching healthy weight management approaches. This is one of the many reasons that people trying to lose weight on their own are not very effective.

Just once, rather than hearing a celebrity go on and on about how much he loves to eat fast food or how she is able to keep thin simply by “playing with her kids”, I would love to hear a celebrity say, “You know what, keeping thin is hard work! Here are all the things I do…” Okay, not as great a sound-bite, I know (and arguably not the celebrity's responsibility), but it might help to balance out some of the other celebrity sound-bites that I fear do more harm than good.

Of course, celebrities are not the only ones sending the wrong, or at least, incomplete messages. Proposed policy to outlaw Happy Meals or add additional taxes to “junk food” is just as bad as simply saying, “What you’re doing is wrong” – it doesn’t teach what people should be doing instead. Rather than punishing people for making the “wrong” choices, we need to incentivize them to make the “right” choices.

Thoughtfully yours,
Sara