Owning it

What happened when the authors of studies linking candidate gene polymorphisms to drug response tried to replicate their own research?

As many of you know, the saga of replication problems continues unabated in social and personality psychology, the most recent dust-up being over the ability of some researchers to replicate Dijksterhuis’ professor-prime studies and the ensuing arguments over those attempts.

While social and personality psychologists “discuss” the adequacy of the replication attempts in our field, a truly remarkable paper was published in Neuropsychopharmacology (Hart, de Wit, & Palmer, 2013).  The second and third authors have a long collaborative history working on the genetics of drug addiction.  In fact, they have published 12 studies linking variations in candidate genes, such as BDNF, DRD2, and COMT, to intermediary phenotypes related to drug addiction.  As they note in the introduction to their paper, these studies have been cited hundreds of times and would lead one to believe that single SNPs or variations in specific genes are strongly linked to the way people react to amphetamines.

The 12 original studies all relied on a really nice experimental paradigm.  The participants received placebos and varying doses of amphetamines across several sessions, and both the experimenters and the participants were blind to which dose was administered.  The order of drug administration was counterbalanced.  After taking the drugs, the participants rated their drug-related experience over the few hours that they stayed in the lab.  The authors, their postdocs, and their graduate students published 12 studies linking the genetic polymorphisms to outcomes like feelings of anxiety, elation, vigor, and positive mood, and even to concrete outcomes such as heart rate and blood pressure.

While the experimental paradigm had rather robust experimental fidelity and validity, the studies themselves were modestly powered (Ns = 84 to 162).  Sound familiar?  It is exactly the same situation we face in many areas of psychological research now—a history of statistically significant effects discovered using modestly powered studies.

As these 12 studies were being published (over a 5-year period), the science of genetics was making strides in identifying the appropriate underlying model of genotype-phenotype associations.  The prevailing model moved from the common-variant model to the rare-variant or infinitesimal model.  The import of the latter two models was that it would be highly unlikely to find any candidate gene linked to any phenotype, whether an endophenotype, an intermediate phenotype, or a subjective or objective phenotype, because the effect of any single candidate gene polymorphism would be so small.  The conclusion would be that the findings published by this team should be called into question, with the remote possibility that they had been lucky enough to find one of the few polymorphisms that might have a big effect, like APOE.

So what did the authors do?  They kept on assessing people using their exemplary methods and also kept on collecting DNA.  When they reached a much larger sample size (N = 398), they decided to stop and try to replicate their previously published work.  So, at least in terms of our ongoing conversations about how to conduct a replication, the authors did what we all want replicators to do—they used the exact same method and gathered a replication sample that had more power than the original study.

What did they find?  None of the 12 studies replicated.  Zip, zero, zilch.

What did they do?  Did they bury the results?  No, they published them.  And, in their report they go through each and every previous study in painful, sordid detail and show how the findings failed to replicate—every one of them.

Wow.

Think about it.  Publishing your own paper showing that your previous papers were wrong.  What a remarkably noble and honorable thing to do–putting the truth ahead of your own career.

Sanjay Srivastava proposed the Pottery Barn Rule for journals–if a journal publishes a paper that other researchers fail to replicate, then the journal is obliged to publish the failure to replicate.  The Hart et al. (2013) paper seems to go one step further.  Call it the “clean up your own mess” rule or the “own it” rule—if you bothered to publish the original finding, then you should be the first to try to directly replicate the finding and publish the results regardless of their statistical significance.

We are several years into our p-hacking, replication-lacking nadir in social and personality psychology and have yet to see a similar paper.  Wouldn’t it be remarkable if we owned our own findings well enough to try to directly replicate them ourselves without being prodded by others?  One can only hope.


Schadenfreude

This week in PIG-IE we discussed the just-published paper by an all-star team of “skeptical” researchers that examined the reliability of neuroscience research.  It was a chance to take a break from our self-flagellation to see whether some of our colleagues suffer from similarly problematic research practices.

Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience.

If you’d like to skip the particulars and go directly to an excellent overview of the paper, head over to Ed Yong’s blog.

There are too many gems in this little paper to ignore, so I’m going to highlight a few features that we thought were invaluable.  First, the opening paragraph is an almost poetic introduction to all of the integrity issues facing science, not only psychological science.  So, I quote verbatim:

“It has been claimed and demonstrated that many (and possibly most) of the conclusions drawn from biomedical research are probably false. A central cause for this important problem is that researchers must publish in order to succeed, and publishing is a highly competitive enterprise, with certain kinds of findings more likely to be published than others. Research that produces novel results, statistically significant results (that is, typically p < 0.05) and seemingly ‘clean’ results is more likely to be published. As a consequence, researchers have strong incentives to engage in research practices that make their findings publishable quickly, even if those practices reduce the likelihood that the findings reflect a true (that is, non-null) effect. Such practices include using flexible study designs and flexible statistical analyses and running small studies with low statistical power. A simulation of genetic association studies showed that a typical dataset would generate at least one false positive result almost 97% of the time, and two efforts to replicate promising findings in biomedicine reveal replication rates of 25% or less. Given that these publishing biases are pervasive across scientific practice, it is possible that false positives heavily contaminate the neuroscience literature as well, and this problem may affect at least as much, if not even more so, the most prominent journals.”

The authors go on to show that the average power of neuroscience research is an abysmal 21%.  Of course, “neuroscience” includes animal and human studies.  When broken out separately, the human fMRI studies had an average statistical power of 8%.  That’s right, 8%.  Might we suggest that the new Brain Initiative money be spent by going back and replicating the last ten years of fMRI research so we know which findings are reliable?  Heck, we gripe about our “coin flip” powered studies in social and personality psychology (50% power).  Compared to 8% power, we rock.

Here are some additional concepts, thoughts, and conclusions from their study worth noting:

1.  Excess Significance: “The phenomenon whereby the published literature has an excess of statistically significant results that are due to biases in reporting.”

2. Positive predictive value:  What the p-rep was supposed to be; “the probability that a positive research finding reflects a true effect (as in a replicable effect).”  They even provide a sensible formula for computing it (a quick numerical sketch appears just after this list).

3.  Proteus phenomenon: “The first published study is often the most biased towards an extreme result.”  This seems to be our legacy: unreliable but “breathtaking” findings that are untrue, yet can’t be discarded because we seldom, if ever, publish the failures to replicate.

4.  Vibration of effects:  “low-powered studies are more likely to provide a wide range of estimates of the magnitude of an effect”

Vibration effects are really, really important because there are some in our tribe who believe that using smaller sample sizes “protects” one from reporting spuriously small effects.  In reality, the authors describe how using small samples increases the likelihood of Type I and Type II errors.  Underpowered studies are simply bad news.

 


When effect sizes matter: The internal (in?)coherence of much of social psychology

This is a guest post by Lee Jussim.  It was originally posted as a comment to the Beginning of History Effect, but it seemed too important to leave as a comment. It has been slightly edited to help it stand alone.

Effect sizes may matter in some but not all situations, and reasonable people may disagree.

This post is about one class of situations where: 1) they clearly do matter; and 2) they are largely ignored. That situation: when scientific articles, theories, and writing make explicit or implicit claims about the relative power of various phenomena (see also David F’s comments on ordinal effect sizes).

If you DO NOT care about effect sizes, that is fine. But, then, please do not make claims about the “unbearable automaticity of being.” I suppose automaticity could be an itsy bitsy teenie weenie effect size that is unbearable (like a splinter of glass in your foot), but that is not my reading of those claims. And it is not just about absolute effect sizes. It would be about the relative effects of conscious versus unconscious processes, something almost never compared empirically.

If you do not care about relative effect sizes, please do not declare that “social beliefs may create reality more than reality creates social beliefs” (or the equivalent), as have lots of social psychologists.

If you do not care about at least relative effect sizes, please do not declare stereotypes to be some extraordinarily difficult-to-override “default” basis of person perception and argue that only under extraordinary conditions do people rely on individuating information (relative effect sizes of stereotypes versus individuating information in person perception are r’s=.10, .70, respectively).

If you do not care about at least relative effect sizes, please do not make claims about error and bias dominating social perception, without comparing such effects to accuracy, agreement, and rationality.

If one is making claims about the power and pervasiveness of some phenomenon — which social psychologists apparently often seem to want to do — one needs effect sizes.

Two concrete examples:
Rosenhan’s famous “being sane in insane places” study:
CLAIMED that the “insane were indistinguishable from the sane.” The diagnostic label was supposedly extraordinarily powerful. In fact, his own data showed that the psychiatrists and staff were over 90% accurate in their judgments.

Hastorf & Cantril’s famous “they saw a game” study:
This was interpreted both by the original authors and by pretty much everyone who has ever cited their study thereafter as demonstrating the power of subjective, “constructive” processes in social perception. It actually found far — and I do mean FAR — more evidence of agreement than of bias.

Both of these examples — and many more — can be found in my book (you can get the first chapter, along with abstracts and excerpts, here: http://www.rci.rutgers.edu/~jussim/TOC.html). It is very expensive, so, if you are interested, I cannot in good faith recommend buying it, but there is always the library.

If (and I mean this metaphorically, to refer to all subsequent social psychological research, and not just these two studies) all Rosenhan and Hastorf & Cantril want to claim is “bias happens” then they do not need effect sizes. If they want to claim that labels and ingroup biases dominate perception and judgment — which they seemed very much to want to do — they need not only an effect size, but to compare effect sizes for bias to those for accuracy, agreement, rationality, and unbiased responding.

Lee Jussim


A Case in Point

In the post about the Beginning of History Effect, I used candidate gene research as a case in point to illustrate how unreplicable research can get lodged in the scientific literature and how difficult it then is to dislodge.  A case in point emerged with perfect timing this weekend in the New York Times Magazine.  In a horribly sourced story on “Why some kids handle pressure, while others fall apart,” the authors claim that “One particular gene, referred to as the COMT gene, could to a large degree explain why one child is more prone to be a worrier, while another may be unflappable, or in the memorable phrasing of David Goldman, a geneticist at the National Institutes of Health, more of a warrior.”

Just shoot me.

Keep in mind that there is not only a meta-analysis of existing studies showing that the relation between the COMT polymorphism and cognitive functioning is indistinguishable from zero (Barnett, Scoriels, & Munafo, 2009), but also a comprehensive review showing that all the existing associations between gene polymorphisms and cognitive functioning are zero (Chabris et al., 2012).  No, the authors can’t let science stand in the way of selling their new book, nor can the NY Times be bothered to actually check the veracity of the claims made in the article–we need to sell ad space, after all.  No, it is far more important to misinform thousands, if not millions, of students and their parents that their test anxiety can be attributed to one genetic polymorphism, even if it is not true.

This is one illustration of the way unreplicable research gets instantiated in our field, in the minds of the public, and, inevitably, in granting agencies.  The original article and/or idea is compelling.  It fits a broad world view that some groups of researchers/people want to believe.  All subsequent disconfirmations of the effect are rationalized or ignored.  Then, we spend 20 years or so wasting our time because the majority of follow-up research fails to replicate the original effect and the original idea just fades away.  Also keep in mind that the genetics literature is far better than most of our paradigms in psychology because the failed replications are often published–that is, if people are motivated to write them up.  Think of the far more insidious situation where a finding is p-hacked to prominence in JPSP and few if any disconfirmations are published, because we “don’t publish replications.”  Those fake results can live forever as they infiltrate textbooks and theoretical reviews in psychology.

At some point we have to decide whether we are doing science or propaganda.  If we are going to do science, we need to care more about having robust ideas than robust careers.

Brent W. Roberts


Does (effect) Size Matter?

Does (effect) Size Matter?.

A highly related and important post from David Funder


The Beginning of History Effect

Doctor, my eyes have seen the years
And the slow parade of fears without crying
Now I want to understand
I have done all that I could
To see the evil and the good without hiding
You must help me if you can
Doctor, my eyes
Tell me what is wrong
Was I unwise to leave them open for so long
                              Jackson Browne
 

I’m having a hard time reading scientific journal articles lately.  No, not because I’m getting old, or because my sight is failing, though both are true.  No, I’m having trouble reading journals like JPSP and Psychological Science because I don’t believe, can’t believe the research results that I find there.

Mind you, nothing has changed in the journals. You find tightly tuned articles that portray a series of statistically significant findings testing subtle ideas using sample sizes that are barely capable of detecting whether men weigh more than women (Simmons, Nelson, & Simonsohn, 2013). Or, in our new and improved publication motif, you find single, underpowered studies with huge effects that are presented without replication (e.g., short reports).  What’s more, if you bother to delve into our history and examine any given “phenomenon” that we are famous for in social and personality psychology, you will find a history littered with similar stories; publication after publication with troublingly small sample sizes and commensurate, unbelievably large effect sizes.  As we now know, in order to have a statistically significant finding when you employ the typical sample sizes found in our research (n = 50), the effect size must not only be large, but also overestimated.  Couple that with the fact that the average power to detect even the unbelievably large effect sizes that we do report is 50%, and you arrive at the inevitable conclusion that our current and past research simply does not hold up to scrutiny. Thus, much of the history of our field is unbelievable.  Or, to be a bit less hyperbolic, some unknown proportion of our findings can’t be trusted.  That is to say, we have no history, or at least no history we can trust.

This was brought home for me recently when a reporter asked me to weigh in on a well-known psychological phenomenon that he was writing about.  I poked around the literature and found a disconcertingly large number of “supportive” studies using remarkably small sample sizes and netting (without telling, of course) amazingly large effect sizes, despite the fact that the effect was supposed to be delicate.  I mentioned this in passing to a colleague who was more of an expert on the topic and he said, “well, the real effect for that phenomenon is much smaller.”  His comment reflected the fact that he, unlike the reporter, or the textbook writer, or the graduate student, or the scholar from another field, or me, knew about all the failed studies that had never been published.  However, if you took the history lodged in our most elite journals at face value, you would have to come to a different conclusion—the effect size was huge in the published literature.  If you bother to look at many of our most prized ideas, you will find a similar pattern.

The Beginning of History Effect is, of course, a play on the End of History idea put forward by Fukuyama: that the end of the overt and subtle battles of the Cold War and the transition to almost universal liberal democracy would essentially end the tension requisite for the narrative of history to continue.  The Beginning of History Effect (no, unfortunately, it is not an illusion) is an attempt to put a positive spin on the fact that we can’t rely on our own scientific history. The most positive take on this situation is that we have the chance of making history from here on out by conducting more reliable research.  I guess the most telling question is whether there is any reason to be optimistic that we will begin our history anew, or whether we will continue to fight for ideas and questionable methods that have left us little empirical edifice on which to rest our weary bones.

To bring the point home, and to illustrate just how difficult it will be to begin our history over again, I thought it would be instructive to highlight a set of personality findings that are evidently untrue, but still get published in our top journals.  Specifically, any study that has been published showing a statistically significant link between a candidate gene and any personality phenotype is demonstrably wrong.  How do I know?  If you spend a little time examining these studies, you will find a very consistent profile.  The original study will have what we think is a relatively large sample—hundreds of people—and no replication.  Ever.  If you go to the supporting literature to find replications, you find none, or the typical “inconsistent” pattern.  More tellingly, if you go to the genome-wide association studies, you will find that they have never, ever replicated any of the candidate gene studies that litter the history of personality psychology, despite the fact that they contain tens of thousands of participants.

What this means, in the terminology of the current replication crisis in the field of social and personality psychology, is that the effect sizes associated with any given candidate gene polymorphism are so small that they cannot be detected without a sample size in the tens of thousands (if not hundreds of thousands).  It is the same low-power issue plaguing experimental psychology, just playing out on a different scale.  This should caution against any blanket prescriptions for a priori acceptable sample sizes for any kind of research.  The sample size you need is dictated by your effect size, and that can’t always be known beforehand.  Who would have known that the correct sample size for candidate gene research was 50K?  Many people still don’t know, including reviewers at our top journals.

The interesting, and appalling, thing about the genetics research in personality psychology is that the geneticists knew all along that the modal approach used in our research linking candidate genes or even GWAS data to phenotypes was wrong from the beginning (Terwilliger & Goring, 2000).  In fact, the current arguments in genetics revolve around whether the right genetic model is a “rare variant” or an “infinitesimal” model (Gibson, 2012).  Either model accepts the fact that there are almost no common genetic variants that have a notable relation to any phenotype of interest, in personality, in psychology, or otherwise.  And by notable, I mean “an effect size that is detectable using standard operating procedures in personality psychology (e.g., an N of 100 to 500).”

What this means in practical terms is that a bunch of research, some done by close friends and colleagues, is patently wrong.  And by close friends, I mean really close friends—award winning close friends.  What are we going to do about that?  What am I supposed to do about that? Simply ignore it?  Talk about it in the hallways of conferences and over drinks at the bar?  Tell people quietly that they shouldn’t really do that type of research?

Multiply this dilemma across our subfields and you see the problem we face.  So, maybe we should hit the reboot button and start our history over again.  At the very least, we need to confront our history.  Our current approach to the replication crisis is to either deny it or recommend changes to the way we do our current research.  Given our history of conducting unreliable research, we need to do more.  In other essays I’ve called for a USADA of psychology to monitor our ongoing use of improper research methods.  I think we need something more like a Consumer Reports for Psychology.  We need a group of researchers to go back and redo our key findings to show that they are reliable—to evaluate the sturdiness and value of our various concepts year in and year out.  Brian Nosek’s Reproducibility Project has started in this direction, but we need more.  We need to vet our legacy; otherwise, our research findings are of unknown value, at best.

Brent W. Roberts


A Lay Theory of the Successful Graduate Student/Academic

Just recently, the Chronicle of Higher Education ran a piece on what it takes to be successful in an academic career.  It was a pleasant essay, which emphasized some of the usual suspects, like industriousness, but it felt a little off to me.  The thing that seemed lacking was the disparate, often conflicting nature of the profile of qualities that I believe (read: I don’t have much data to back these opinions up) are found in successful academics.  To put it in psychometric jargon, the attributes necessary for success are orthogonal—they don’t all come neatly bundled in each person.  This makes finding the combination in any given individual, which is often necessary for success, a rare occurrence.

I thought it might be constructive to start a provisional list of qualities that appear to lead to success in academic careers from my idiosyncratic vantage point.  Keep in mind that this is a theory in search of data.  The list can be used in several ways.  First, it can be used in a study to see if I’m right.  Second, it can be used as an aid in selection—self or otherwise.  Or, if you are already in the career track, the list can be used for development.  I do not possess all of these qualities—very few people ever do—but I have worked hard to develop them. Alternatively, if you don’t want to change, a brilliant option is to team up with someone who complements your strengths and weaknesses.  You can be the idea person and your colleague the quant jock, or vice versa.

Here is the provisional list:

  1. Curiosity.  It is difficult to be creative (see below) if you aren’t curious about how things work in the first place.  Are you the kid who never grew out of the “why” stage? Good.  Come to the Academy.  One reason this is so important is that, more than most jobs, an academic career entails that you structure your own work.  You have to be engaged in the world to do so and being curious is the first step to being engaged.
  2. Creativity.  Our job is to create new knowledge.  Creating new knowledge can come about through several strategies (see below).  Being creative, by coming up with new ideas or new ways of thinking about things is one of the best and most constructive strategies.  It is also rare and difficult.  It is something to aspire to and be thankful for if you achieve it.
  3. Tolerance of Ambiguity or Ambiguity Tolerance (economists call it the latter).  The world of science has so many unstructured situations that it is difficult to emphasize the importance of this quality enough.  At the most fundamental level, science is not about finding the answer, but about finding a provisional answer until a better one comes along.  Moreover, many of our ideas fail—and that is a good thing, because failure tells us which ideas to ignore.  Also, the act of being engaged in the attempt to generate knowledge brings into stark relief the huge swath of things we don’t know.  Finally, as noted above, scientists structure their own world.  We seldom have supervisors setting our agendas.  We set our own.  That can be rather disconcerting if you like to have things well defined.  If you don’t need closure, you’ll probably do better or at least have a nicer time in the academic world.
  4. Analytical interest.  Science is often like solving puzzles.  Much of our work goes awry and it is critical that one have the ability to analyze the situation and figure out how to do things differently next time.
  5. Technological skill or obsession.  Science is often the blind application of technology—new statistical models (e.g., latent growth modeling), new methods (e.g., EMA), and new technologies (fMRI, genetics).  One can make a significant contribution to science by learning how to apply new techniques to old ideas.  Being a tech nerd is a good thing.
  6. Persistence or “Fire in the Belly”.  This isn’t just a willingness to work hard.  This is the willingness to persist and even redouble your efforts despite negative feedback, barriers, and failures.  One of my colleagues famously had a manuscript rejected 10 different times over a 3-year period before finally getting it published.  This doesn’t count the revisions he did along the way.  Another eminent scientist had his first 12 papers out of graduate school rejected before he got a hit.  There are several ways this attribute manifests itself:  1) Being interested in and/or passionate about what you are doing.  If you are passionate about your ideas, then it is easy to persist against the withering wind of contempt.  Find your passion and you may have found your scientific career.  2) Fear of unemployment.  The stress of getting a job and then getting tenure is a psychological gift.  Harness the fear of impending unemployment to redouble your efforts. 3) An unresolved Oedipal complex.  Got a chip on your shoulder and still need to show “father” that you are worthy?  Good. Use that chip to your benefit by using it to respond to reviewers.  Respond to “no” with, “I will show you why you are wrong.” 4) Being obtuse.  If criticism washes over you like water off a duck’s back, great.  When combined with passion, this is a formidable combination.
  7. The willingness to write.  You don’t have to enjoy it or be good at it, but you need to  write.  Of course, it helps if you enjoy it and are good at it.  An anecdote might help.  I had a very good writing teacher in high school.  He gave up on the career of writing because in order to practice the art of writing he had to lock himself in a room with a carton of cigarettes and a bottle of scotch and smoke and drink the story out of himself.  After a few years of this, he was unwilling to write, for good reason.  He wasn’t going to last very long going about it that way.  Hopefully writing will come easier to you.  If not, there are many, many careers that do not rely so heavily on writing.
  8. To believe that you can improve your writing.  Fundamentally, we are all storytellers.  In scientific writing, we are simply telling the story of our research.  Writing well is key to being a good storyteller.  That said, the belief that you are a good writer can get in the way of good writing, as those who believe they are good writers often don’t take criticism well.  I’d rather have a student who is a poor writer and who is willing to work on his or her writing than a “good” writer who isn’t open to improving his or her writing.  The problem here is that “good” writing at the undergraduate level is not the same thing as writing for a scientific audience.  Moreover, students typically don’t get the opportunity to test and or demonstrate their scientific writing skills before arriving in grad school.  Writing for science and a scientific audience is much more than proper sentence structure.
  9. Adequate oral communication skills.  You need to teach and give presentations.  Doing this well is a reflection of many of the qualities listed above combined with a desire to transmit one’s ideas to others in a consumable fashion.  Your audiences, be they students or colleagues, will appreciate it.
  10. Rudimentary social skills.  We tolerate a huge diversity of quirky behavior in the Academy.  That said, if you have a choice between being the southbound side of a northbound mule and being somewhere on the autistic spectrum, choose the latter.  We’d rather give a job or tenure to someone a bit odd than to someone who is going to be a pain in the ass for our entire future—even if they are the next “big thing.”  I’ve seen more people derail because they think being an ass is the right way of doing things.  These individuals fail to understand the ecology of the academic career, in which your colleagues make a concrete decision to embrace you in perpetuity with a tenure decision.  Or, to put it another way, if you are going to be an ass, you better be a really, really good one.
  11. Reliability.  When called upon to do a task, do it well.  This doesn’t mean that you have to be the paragon of conscientiousness.  What it means is that when you have to perform a task that has serious ramifications for yourself and others—analyzing data for a paper, making up a test for a class, organizing the materials for your research—you work hard enough to pull it off, no excuses.  Being a professor is one of the few careers in the labor market that does not require conventional levels of conscientiousness.  In fact, too much conscientiousness can undermine some of the other qualities on this list (e.g. curiosity and creativity).  That said, if you can’t get the details right on your work and therefore can’t be relied upon, you simply won’t succeed.
  12. Oh, and lest I forget, according to R. Chris Fraley the most important key to success is the obsessive and somewhat maladjusted love of coffee.

It should also be pointed out that there are many qualities not on this list, such as:

1.  Good grades in your grad courses.  To a disconcerting degree, graduate students continue to obsess over classroom performance.  One of my colleagues actually goes so far as to give grad students with perfect A’s in their first year in grad school poor performance evaluations.  In her opinion, it is a sign that her students have the wrong values.  Whether you agree with her or not, the fact of the matter is that no one ever looks at your grad GPA and no one cares.  Get over it.

2.  Smarts.  I can’t tell you how many times I’ve had the conversation about failed grad students or colleagues that has started with the comment “but he was really smart.”  Guess what?  Everyone is smart at this level, even the failures.  This should come as no surprise as we select on GRE scores and GPA.  Some of my most brilliant friends and colleagues are failed academics[1].  Brilliance is only one ingredient of success and sometimes not the most important.  Really, really smart people often don’t get Ph.D.s.  They go off and create fields that other people get their Ph.D.s in.  Of course, if you don’t have the basic smarts to represent your field as an expert that is a problem, but a very rare problem given the way the system is structured.

3.  Psychological adjustment.  Depressed? Anxious? Insecure?  Don’t worry about it.  As long as you produce good science, you don’t have to be happy about it.  We’ll be happy for you.  Remember Freud, who said that the optimal level of adjustment was ambivalence.

There are two qualities that I personally value that are clearly not systematically valued in the field.  In fact, we may be living in a time where we’ve inadvertently rewarded people for the inverse of these qualities.

1.  Skepticism.  For some reason, we have become a science of confirmation.  We tell stories with our data that corroborate our hypotheses rather than testing or confronting them.  Moreover, like most people, we tend to believe disproportionately in our own ideas.  Be skeptical, even of your own work.  While this may not make you famous, I do believe that it will make your contributions more lasting.

2.  Integrity.  Don’t lie, cheat, steal, or obfuscate your way to a scientific product.  Don’t accept the fact that others lie, cheat, steal and bullshit their way to fame and fortune.  Cheating your way to a publication may help you in the short run, but will undermine your reputation and the field in the long run.

I’m sure there are more qualities necessary and many more that are not.  That said, the list is rather heterogeneous.  That is, not many people will possess all of the qualities that contribute to success in graduate school and beyond.  That is okay.

Brent W. Roberts


[1] Some may take umbrage with the term “failed”.  Keep in mind that many of my friends who “failed” went on to spectacularly successful careers elsewhere because they were very talented people.  Failure is a good thing.  Embrace it.


Replicability: The good, the really good, and the ugly

The good:

Our European Journal of Personality paper proposing ways to improve the replicability of research in psychology is in press.  You can find a copy here.  The EJP editor is soliciting comments and the entire package should be published soon.

The really good:

A slew of papers on replicability in psychological science is now available in the November issue of Perspectives on Psychological Science.  The entire issue is a must read.

Greg Francis has another paper in Psychonomic Bulletin and Review that provides a most brilliant analysis of why even simple, direct replication is not an answer to our problems.

And, as many of you have seen, the editors at Psychological Science circulated a letter outlining some reasonable, if modest proposals for changes to the journal that are described succinctly by Sanjay Srivastava on his blog, The Hardest Science.

For the first time in a long while, things might actually be moving in the right direction. I’m sure Nate Silver could have predicted that.


It is sunny, and I’m still depressed

For this week’s PIG-IE session we have the opportunity to read a draft of a paper from Richard Lucas (MSU; richard.e.lucas@gmail.com) and Nicole Lawless (U of Oregon) testing the relation between the weather and life satisfaction–in 1 million people.

The paper is still under review, so if you have pertinent comments, like “Did you do a power analysis?” or “Have you considered replicating that in a smaller sample?” I’m sure Rich will be grateful (well, maybe not for those questions).

And, whatever you do, don’t revise your intro psych lecture about mood and weather.  We can’t trust findings based on so many people.


The Black Cloud of Genetics and Personality

Some of you know that some of us (namely Brent) are more than skeptical about “gene for” studies linking candidate genes to personality and/or any other phenotype.  What you may not know is that Brent is now enthusiastic (for the moment) about gene expression research–the product of genes (i.e., RNA), not the genes themselves.

So, this week Brent re-presented a presentation he gave last week to a group of social scientists and biologists telling the tale of how he went from gene-by-environment enthusiast, to gene-by-environment skeptic, to gene expression enthusiast.  If you are interested in the presentation you can find it here.

If you are interested in reading some of the background literature, Gibson’s recent paper gives you the scoop on common versus rare versus infinitesimal models of gene functioning, and Munafo & Flint give you the sad news that none of the existing gene association studies in personality psychology should be paid much attention.  Ouch.

On the flip side, Steve Cole gives you the overview of Stress Genomics, which is the new kid on the block, and Miller et al. show you how gene expression works.  Let’s hope their fate is unlike that of the previous 20 years of “gene for” studies, which can now be used to line that old bird cage.
