We end the semester with a paper on an old-school idea: personal scripts and how they relate to consistency.
Okay, so this week our discussion was less conceptual and more in line with a “post-publication review.”
Deflationary expectations
We love qualitative and idiographic approaches, at least in the abstract. In practice, though, we often find ourselves less than satisfied. Ironically, “scientific” work on qualitative material seems to end up reducing that material to a small number of rating scales, often fewer than five. Maybe we should give up trying to shoehorn qualitative material into the standard model? I remember Ken Craik (RIP) describing architectural journals in which a heavily quantitative study would be followed by a simple case study of a beautiful new home. We study people. People are intrinsically interesting. Why can’t we do the same thing in our journals? It would give qualitative material a home and help inspire new insights into human functioning.
Conceptual myopia
Tomkins’s work is wonderful and underappreciated, but it is not the only conceptual frame that could organize script-like, idiographic material. That is to say, readers in our group questioned whether there was really any alternative hypothesis in this work. Pitting McAdams, Singer, or Cervone against Tomkins might have led to a more dynamic set of hypotheses.
Methods, methods everywhere
1. There were clarity issues with Study 1 that left us less than confident in the findings. First, the nature of the prompts in Study 1 was vague. Were the participants fed their own scripts? If so, the findings are neither surprising nor interesting. Second, the response latencies were “z-scored.” Some of us interpreted this as centering within person; others read it literally, as a between-person transformation that would do nothing meaningful to the data (see the sketch after this list). Either way, reporting how things looked in both the raw and the transformed data would have been preferable.
2. The lack of independence within the Big Five is starting to bug us (Study 2). We see the same thing in many different types of samples: intercorrelations above .5 are too high for dimensions that are supposed to be independent. Either we clean up our measures or we walk back the myth that there are five independent dimensions.
3. Fifty participants in each study? Really? Based on what justification? A power analysis? The low power causes problems across the board. There appears to be a desire to argue that the structure of scripts in this study generalizes, or is comprehensive, but we really have to stop generalizing from small groups of college students to anything. Moreover, a good number of the most interesting correlations in Study 2 are ignored because there was too little power to detect them (conscientiousness and trauma-fear, for example; see the power sketch below). We understand that conducting qualitative research can be time-consuming, but that is no excuse for running underpowered studies.
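To make the z-scoring ambiguity in point 1 concrete, here is a minimal sketch in Python. The toy latency data are invented purely for illustration; we have no idea what the paper’s actual pipeline looked like. It contrasts the two readings of “z-scored”: a single grand-mean standardization (between person) versus standardizing each participant against their own mean and SD (within person).

```python
import numpy as np
import pandas as pd

# Toy latency data: three hypothetical participants x four trials.
# Values are invented for illustration only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": np.repeat(["p1", "p2", "p3"], 4),
    "latency_ms": rng.normal(loc=[600] * 4 + [900] * 4 + [750] * 4, scale=50),
})

# (a) Literal z-scoring: one grand mean and SD across ALL observations.
# This is a between-person transformation; every contrast in the data,
# within or between people, is preserved up to one linear rescaling,
# so correlational analyses come out the same as on the raw data.
df["z_between"] = (df["latency_ms"] - df["latency_ms"].mean()) / df["latency_ms"].std()

# (b) Within-person centering/standardizing: each participant's latencies
# are scaled relative to their OWN mean and SD, which removes baseline
# speed differences between participants before trials are compared.
df["z_within"] = df.groupby("participant")["latency_ms"].transform(
    lambda x: (x - x.mean()) / x.std()
)

print(df)
```

Running this makes the difference obvious: the slow participant’s fastest trial still looks “slow” under (a) but looks fast under (b). Which transformation the authors used changes what the latency effects mean, which is why we wanted the raw numbers.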
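And as a back-of-the-envelope check on the sample-size complaint in point 3, here is a small sketch (our own illustration, not anything from the paper) of the approximate power to detect a nonzero Pearson correlation with n = 50, using the standard Fisher z approximation.

```python
from math import atanh, sqrt
from scipy.stats import norm

def corr_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test that a Pearson correlation
    of true size r differs from zero, via the Fisher z transformation."""
    z_crit = norm.ppf(1 - alpha / 2)
    # Fisher z of the sample correlation is ~ Normal(atanh(r), 1/(n - 3)).
    z_effect = atanh(r) * sqrt(n - 3)
    # Ignores the negligible chance of rejecting in the wrong direction.
    return norm.cdf(z_effect - z_crit)

for r in (0.1, 0.2, 0.3, 0.4):
    print(f"r = {r:.1f}, n = 50: power ~ {corr_power(r, 50):.2f}")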