Joseph Simmons
  • Professor of Operations, Information, and Decisions

Contact Information

  • Office Address:

    3730 Walnut St.
    JMHH 551
    Philadelphia, PA 19104

Research Interests: judgment and decision making, experimental methods, consumer behavior

Links: CV, Data Colada Blog, Easy Pre-registration

Overview

Joe Simmons is a Professor at the Wharton School of the University of Pennsylvania, where he teaches a course on Managerial Decision Making. He has two primary areas of research. The first explores the psychology of judgment and decision-making, with an emphasis on understanding and fixing the errors and biases that plague people’s judgments, predictions, and choices. The second area focuses on identifying and promoting easy-to-adopt research practices that improve the integrity of published findings. Joe is also an author of Data Colada, an online resource that attempts to improve our understanding of scientific methods, evidence, and human behavior, and a co-founder of AsPredicted.org, a website that makes it easy for researchers to pre-register their studies.


Research

  • Joachim Vosgerau, Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), 99% Impossible: A Valid, or Falsifiable, Internal Meta-Analysis, Journal of Experimental Psychology: General, Forthcoming.

    Abstract: Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper to assess their overall evidential value. Advocates of internal meta-analysis argue that it provides an efficient approach to increasing statistical power and solving the file-drawer problem. Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (1) all conducted studies were included (i.e., an empty file drawer), and (2) for each included study, exactly one analysis was attempted (i.e., there was no p-hacking). We show that even very small doses of selective reporting invalidate internal meta-analysis. For example, the kind of minimal p-hacking that increases the false-positive rate of one study to just 8% increases the false-positive rate of a 10-study internal meta-analysis to 83%. If selective reporting is approximately zero, but not exactly zero, then internal meta-analysis is invalid. To be valid, (1) an internal meta-analysis would need to exclusively contain studies that were properly pre-registered, (2) those pre-registrations would have to be followed in all essential aspects, and (3) the decision of whether to include a given study in an internal meta-analysis would have to be made before any of those studies are run.
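
    The compounding described in this abstract can be illustrated with a short simulation. The sketch below is not the authors' code: the sample sizes, the choice between two correlated dependent variables as the form of p-hacking, and the Stouffer-style pooling are illustrative assumptions, chosen only to show how a small per-study bias adds up when ten null studies are aggregated.

        # Illustrative simulation (assumed parameters, not from the paper): a mild
        # form of selective reporting in each study compounds when the studies are
        # pooled in an internal meta-analysis.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_per_cell = 25        # participants per condition in each study (assumption)
        n_studies = 10         # studies pooled in the internal meta-analysis
        dv_correlation = 0.5   # correlation between the two candidate DVs (assumption)
        n_simulations = 10_000

        def run_phacked_study():
            # One null-effect study with two correlated DVs; report whichever
            # DV yields the more favorable test statistic (mild p-hacking).
            cov = [[1.0, dv_correlation], [dv_correlation, 1.0]]
            control = rng.multivariate_normal([0.0, 0.0], cov, n_per_cell)
            treatment = rng.multivariate_normal([0.0, 0.0], cov, n_per_cell)
            t_stats = [stats.ttest_ind(treatment[:, dv], control[:, dv])[0]
                       for dv in range(2)]
            return max(t_stats)  # selectively report the better-looking analysis

        false_positives = 0
        for _ in range(n_simulations):
            study_stats = [run_phacked_study() for _ in range(n_studies)]
            # Stouffer-style pooling (t with 48 df treated as approximately normal)
            pooled_z = sum(study_stats) / np.sqrt(n_studies)
            if pooled_z > stats.norm.ppf(0.95):  # one-tailed alpha = .05
                false_positives += 1

        print(f"Pooled false-positive rate: {false_positives / n_simulations:.1%}")

    With no selective reporting (a single pre-specified DV per study), the pooled rate stays near the nominal 5%; with even this mild selection, it is several times higher. The exact numbers depend on the assumed form of p-hacking, so the sketch illustrates the direction of the effect rather than the specific figures reported in the paper.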

  • Joshua Lewis and Joseph Simmons (Draft), The Directional Anchoring Bias.

    Abstract: When people estimate an unknown quantity after previously considering a high candidate value (or “anchor”), they estimate higher values than they would have done after considering a low anchor. In explaining this effect, previous anchoring research has emphasized the distance between the anchor and the estimate. However, across 5 studies (N = 5,662), we find a directional anchoring bias: people disproportionately estimate values that are higher than high anchors and lower than low anchors, and this bias accounts for between 10% and 20% of the total anchoring effect (Study 1). The bias seems to result from people expressing their intuitions about estimation quantities. For example, when estimating an intuitively high quantity (such as the weight of an elephant), people tend to express their intuition that the quantity is “high” by adjusting their estimates upwards from the anchor. When anchors are higher, a decision to adjust upwards necessitates a higher estimate, so higher anchors lead to higher estimates. Consistent with this mechanism, we find that participants’ intuitions about the stimuli moderate the directional anchoring bias (Studies 2-5). In addition, we demonstrate the adverse effects of this bias for estimation accuracy (Study 3) and consumer choice (Studies 4 & 5).

  • Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016), PLoS ONE, 14(3), e0213454.

    Abstract: P-curve, the distribution of significant p-values, can be analyzed to assess whether the findings have evidential value, that is, whether p-hacking and file-drawering can be ruled out as the sole explanations for them. Bruns and Ioannidis (2016) have proposed that p-curve cannot examine evidential value with observational data. Their discussion confuses false-positive findings with confounded ones, failing to distinguish correlation from causation. We demonstrate this important distinction by showing that a confounded but real, and hence replicable, association (gun ownership and number of sexual partners) leads to a right-skewed p-curve, while a false-positive one (respondent ID number and trust in the Supreme Court) leads to a flat p-curve. P-curve can distinguish between replicable and non-replicable findings. The observational nature of the data is not consequential.
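
    The contrast the abstract draws, a right-skewed p-curve for a real (even if confounded) association versus a flat one for a false positive, can be seen in a short simulation. The sketch below is only an illustration: the effect size, sample size, and use of simple correlation tests are assumptions, not the analyses reported in the comment.

        # Illustrative p-curve simulation (assumed effect and sample sizes):
        # significant p-values from studies of a real association pile up near
        # zero (right skew), while those from studies of a null association are
        # roughly uniform (flat).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 100              # observations per simulated study (assumption)
        n_studies = 20_000   # simulated studies per scenario

        def share_of_small_pvalues(true_r):
            # Among significant results (p < .05), what share fall below .01?
            significant = []
            for _ in range(n_studies):
                x = rng.normal(size=n)
                y = true_r * x + np.sqrt(1 - true_r ** 2) * rng.normal(size=n)
                p = stats.pearsonr(x, y)[1]
                if p < 0.05:
                    significant.append(p)
            return np.mean(np.array(significant) < 0.01)

        print(f"Real association (r = .3): {share_of_small_pvalues(0.3):.0%} of significant p-values below .01")
        print(f"Null association (r = 0):  {share_of_small_pvalues(0.0):.0%} of significant p-values below .01")

    With these assumed parameters, the real association yields mostly p-values below .01 (right skew), whereas the null association yields roughly 20% below .01, the flat pattern that the abstract associates with a false positive.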

  • Joshua Lewis and Joseph Simmons (Under Revision), Prospective Outcome Bias: Investing to Succeed When Success Is Already Likely.

    Abstract: How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we developed a theory called prospective outcome bias. According to this theory, people make decisions that they expect to feel good about after the outcome has been realized. Importantly, people expect to feel best about decisions that are followed by successes – even when the decisions did not cause the successes. Consequently, they are most inclined to incur costs to increase their likelihood of success when success is already likely (e.g., people are more inclined to increase their probability of winning a prize from 80% to 90% than from 10% to 20%). We find evidence for this effect, and for prospective outcome bias, in nine experiments. In Study 1, we establish that people expect to evaluate decisions that precede successes more favorably than decisions that precede failures, even when the decisions did not cause the success or failure. Then, we document that people are more motivated to increase higher chances of success. Study 2 establishes this effect in an incentive-compatible laboratory setting, and Studies 3-5 generalize the effect to different kinds of decisions. Studies 6-8 establish that prospective outcome bias drives the effect (rather than regret aversion, waste aversion, or probability weighting). Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: holding expected value constant, people prefer small increases in the probability of a large reward to large increases in the probability of a small reward.

  • Joshua Lewis, Joseph Simmons, Uri Simonsohn, Alex Rees-Jones (Under Review), Diminishing Sensitivity to Outcomes: What Prospect Theory Gets Wrong About Diminishing Sensitivity to Price.

    Abstract: Prospect Theory assumes that decision makers are diminishingly sensitive to the magnitude of gains and losses. A well-known demonstration of this phenomenon involves people being more willing to travel across town to save $5 off a $15 purchase than to save $5 off a $125 purchase (i.e., the “Jacket/Calculator” scenario). In this paper, we present evidence that diminishing sensitivity to price is separate from, different from, and arguably inconsistent with Prospect Theory. Across four studies, we find that people exhibit diminishing sensitivity with respect to outcomes that do not align with their evaluations of gains and losses. Specifically, a reference point determines whether a price is coded as a gain or a loss, but whatever that reference point, people are diminishingly sensitive to the absolute magnitudes of the amounts considered.

  • Joshua Lewis, Celia Gaertig, Joseph Simmons (2019), Extremeness Aversion Is a Cause of Anchoring.

    Abstract: When estimating unknown quantities, people insufficiently adjust from values they have previously considered, a phenomenon known as anchoring. We suggest that anchoring is at least partially caused by a desire to avoid making extreme adjustments. In seven studies (N = 5,279), we found that transparently irrelevant cues of extremeness influenced people’s adjustments from anchors. In Studies 1-6, participants were less likely to adjust beyond a particular amount when that amount was closer to the maximum allowable adjustment. For example, in Study 5, participants were less likely to adjust by at least 6 units when they were allowed to adjust by a maximum of 6 units than by a maximum of 15 units. In Study 7, participants adjusted less after considering whether an outcome would be within a smaller distance of the anchor. These results suggest that anchoring effects may reflect a desire to avoid adjustments that feel too extreme.

  • Celia Gaertig and Joseph Simmons (2018), Do People Inherently Dislike Uncertain Advice?, Psychological Science, 29 (4), pp. 504-520.

    Abstract: Research suggests that people prefer confident to uncertain advisors. But do people dislike uncertain advice itself? In eleven studies (N = 4,806), participants forecasted an uncertain event after receiving advice, and then rated the quality of the advice (Studies 1-7, S1-S2) or chose between two advisors (Studies 8-9). Replicating previous research, confident advisors were judged more favorably than advisors who were “not sure.” Importantly, however, participants were not more likely to prefer certain advice: They did not dislike advisors who expressed uncertainty by providing ranges of outcomes or numerical probabilities, or by saying that one event is “more likely” than another. Additionally, when faced with an explicit choice, participants were more likely to choose an advisor who provided uncertain advice over an advisor who provided certain advice. Our findings suggest that people do not inherently dislike uncertain advice. Advisors benefit from expressing themselves with confidence, but not from communicating false certainty.

  • Celia Gaertig and Joseph Simmons (Under Review), The Psychology of Second Guesses: Implications for the Wisdom of the Inner Crowd.

    Abstract: Prior research suggests that averaging two guesses from the same person can improve quantitative judgments, an effect dubbed the “wisdom of the inner crowd.” In this article, we suggest that this effect hinges on whether people (1) resample their second guess from a distribution similar to the one their first guess was sampled from (what we call a Resampling Process), or (2) explicitly decide in which direction their first guess had erred before making their second guess (what we call a Choice Process). We report the results from seven studies (N = 5,768) in which we manipulated whether we asked participants to explicitly indicate, right before they made their second guess, whether their first guess was too high or too low, thereby inducing a Choice Process. We found that asking participants to decide whether their first guess was too high or too low before they made a second guess increased their likelihood of making a more extreme second guess. When the correct answer was not very extreme (as was often the case), this reduced people’s likelihood of making a second guess in the right direction and diminished the benefits of averaging, thus rendering the inner crowd less wise. When the correct answer was very extreme, asking participants to indicate whether their first guess was too high or too low improved the wisdom of the inner crowd. Our findings suggest that the wisdom-of-the-inner-crowd effect is not inevitable, but rather that it hinges on the process by which people generate their second guesses.

  • Celia Gaertig and Joseph Simmons (Under Review), Why (and When) Are Uncertain Price Promotions More Effective Than Equivalent Sure Discounts?.

    Abstract: Past research suggests that offering customers an uncertain promotion, such as an X% chance to get a product for free, is always more effective than providing a sure discount of equal expected value. In seven studies (N = 11,238), we find that uncertain price promotions are more effective than equivalent sure discounts only when those sure discounts are or seem small. Specifically, we find that uncertain promotions are relatively more effective when the sure discounts are actually smaller, when the sure discounts are made to feel smaller by presenting them alongside a larger discount, and when the sure discounts are made to feel smaller by framing them as a percentage discount rather than a dollar amount. These findings are inconsistent with two leading explanations of consumers’ preferences for uncertain over certain promotions – diminishing sensitivity and the overweighting of small probabilities – and suggest that people’s preferences for uncertainty are more strongly tethered to their perceptions of the size of the sure outcome than they are to their perceptions of the probability of getting the uncertain reward.

  • Leif D. Nelson, Joseph Simmons, Uri Simonsohn (2018), Psychology’s Renaissance, Annual Review of Psychology, 69, pp. 511-534.

    Abstract: In 2010-2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call “psychology’s renaissance.” We begin by describing how psychology’s concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and pre-registration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that “meta-analytical thinking” increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.

Teaching

Current Courses

  • OIDD299 - Judgment and Decision Making Research Immersion

    This class provides a high-level introduction to the field of judgment and decision making (JDM) and in-depth exposure to the process of doing research in this area. Throughout the semester you will gain hands-on experience with several different JDM research projects. You will be paired with a PhD student or faculty mentor who is working on a variety of different research studies. Each week you will be given assignments that are central to one or more of these studies, and you will be given detailed descriptions of the research projects you are contributing to and how your assignments relate to the successful completion of these projects. To complement your hands-on research experience, throughout the semester you will be assigned readings from the book Nudge by Thaler and Sunstein, which summarizes key recent ideas in the JDM literature. You will also meet as a group for an hour once every three weeks with the class's faculty supervisor and all of his or her PhD students to discuss the projects you are working on, the class readings, and your own research ideas stimulated by getting involved in various projects. Date and time to be mutually agreed upon by supervising faculty and students. The 1 CU version of this course will involve approx. 10 hours of research immersion per week and a 10-page final paper. The 0.5 CU version of this course will involve approx. 5 hours of research immersion per week and a 5-page final paper. Please contact Maurice Schweitzer if you are interested in enrolling in the course: schweitzer@wharton.upenn.edu.

    Sections: OIDD299005, OIDD299006

Past Courses

  • MGMT690 - Managerial Decision Making

    This course is cross-listed with OIDD 690; see OIDD 690 for the full description.

  • OIDD290 - Decision Processes

    This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include the cognitive psychology of human problem-solving, judgment, and choice; theories of rational judgment and decision; and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 101 or the equivalent, although not required, is strongly recommended.

  • OIDD299 - Judgment and Decision Making Research Immersion

    This class provides a high-level introduction to the field of judgment and decision making (JDM) and in-depth exposure to the process of doing research in this area. Throughout the semester you will gain hands-on experience with several different JDM research projects. You will be paired with a PhD student or faculty mentor who is working on a variety of different research studies. Each week you will be given assignments that are central to one or more of these studies, and you will be given detailed descriptions of the research projects you are contributing to and how your assignments relate to the successful completion of these projects. To complement your hands-on research experience, throughout the semester you will be assigned readings from the book Nudge by Thaler and Sunstein, which summarizes key recent ideas in the JDM literature. You will also meet as a group for an hour once every three weeks with the class's faculty supervisor and all of his or her PhD students to discuss the projects you are working on, the class readings, and your own research ideas stimulated by getting involved in various projects. Date and time to be mutually agreed upon by supervising faculty and students. The 1 CU version of this course will involve approx. 10 hours of research immersion per week and a 10-page final paper. The 0.5 CU version of this course will involve approx. 5 hours of research immersion per week and a 5-page final paper. Please contact Maurice Schweitzer if you are interested in enrolling in the course: schweitzer@wharton.upenn.edu.

  • OIDD690 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

Awards and Honors

  • MBA Excellence in Teaching Award, 2014
  • MBA Excellence in Teaching Award, 2013
  • Winner of the Helen Kardon Moss Anvil Award, awarded to the one Wharton faculty member “who has exemplified outstanding teaching quality during the last year”, 2013
  • One of ten faculty nominated by the MBA student body for the Helen Kardon Moss Anvil Award, 2012
  • MBA Excellence in Teaching Award, 2012
  • Wharton Excellence in Teaching Award, Undergraduate Division, 2011

In the News

  • Why Humans Distrust Algorithms – and How That Can Change (Knowledge @ Wharton, 2017/02/13)

    Many people are averse to using algorithms when making decisions, preferring to rely on their instincts. New Wharton research says a simple adjustment can help them feel differently.