How do recruiters distinguish promising candidates? Through experience, they have developed an intuitive theory of star employees—the traits, dispositions, beliefs, and motivations that underpin excellence. People try to infer these unseen traits from sparse and noisy signals, like tea leaves strewn across a LinkedIn profile. I create formal models of intuitive theories like these, honing them by testing their predictions with behavioral experiments.
Formal models of intuitive theories (and cognition more generally) can provide richer insights and predictive power than simpler data analyses. Modeling the actual causal processes underlying behavior allows for intelligent interventions (and helps avoid cargo-cult interventions based on irrelevant correlations).
I model the conceptual structures underpinning social cognition (and human behavior more generally) as probabilistic generative models that encode how a person's attitudes, emotions, and beliefs cause their behavior. Capturing the complex causal structure underpinning people's flexible social reasoning often requires representational tools more sophisticated than causal networks. My research therefore employs probabilistic programs, which can capture arbitrarily complex causal structures (e.g., Church and WebPPL).
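To make the idea concrete, here is a minimal Python sketch of a probabilistic generative model in which a hidden attitude causes observable behavior, with inference by enumeration. The scenario, priors, and likelihoods are invented for illustration; my actual models are written in Church/WebPPL.

```python
# Toy generative model (illustrative numbers): a person's hidden attitude
# ("friendly" or "aloof") causes whether they greet you.
prior = {"friendly": 0.7, "aloof": 0.3}      # P(attitude)
p_greet = {"friendly": 0.9, "aloof": 0.2}    # P(greet | attitude)

def posterior(observed_greet: bool) -> dict:
    """Invert the generative model by enumeration: P(attitude | behavior)."""
    unnorm = {
        a: prior[a] * (p_greet[a] if observed_greet else 1 - p_greet[a])
        for a in prior
    }
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

print(posterior(True))  # observing a greeting raises P(friendly)
```

Probabilistic programming languages make the same move—write the causal (generative) direction, let the inference engine run it backward—for models far too complex to enumerate by hand.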
The recent development of these representational tools, coupled with algorithms that make inference over them tractable, enables powerful models of the complex and flexible social inferences people make: for example, recursive reasoning ("I think that you think that I think...") and non-literal language interpretation such as irony.
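One standard way such recursive reasoning is formalized is the Rational Speech Acts (RSA) framework: a pragmatic listener reasons about a speaker who reasons about a literal listener. Below is a minimal Python sketch for the classic scalar-implicature case ("some" pragmatically implies "not all"); the two-state, two-utterance setup and uniform priors are simplifying assumptions for illustration.

```python
# Minimal RSA sketch (illustrative): why "some" gets read as "some but not all".
states = ["some_not_all", "all"]
utterances = ["some", "all"]
# Literal semantics: "some" is true in both states; "all" only when all.
meaning = {("some", "some_not_all"): 1, ("some", "all"): 1,
           ("all", "some_not_all"): 0, ("all", "all"): 1}

def L0(u):
    """Literal listener: condition a uniform prior on the literal meaning."""
    scores = {s: meaning[(u, s)] for s in states}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

def S1(s, alpha=1.0):
    """Speaker: soft-maximizes the literal listener's accuracy."""
    scores = {u: L0(u)[s] ** alpha for u in utterances}
    z = sum(scores.values())
    return {u: v / z for u, v in scores.items()}

def L1(u):
    """Pragmatic listener: reasons about the speaker (uniform state prior)."""
    scores = {s: S1(s)[u] for s in states}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

print(L1("some"))  # "some" now favors the "some but not all" state
```

Each level of the L0/S1/L1 stack is one step of the "I think that you think..." recursion; probabilistic programs let the recursion go arbitrarily deep.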
Using these tools, I developed a model of how people integrate advice with other forms of evidence. For example, if a stock's fundamentals make it look poised for a breakout, but Warren Buffett sells the stock, what should you do, and what do people actually do? I formalized people's conceptual model of the adviser and showed that, given their understanding of the adviser, they rationally incorporate the advice. Using this model, we can see what makes a good adviser and make quantitative predictions about how manipulating the adviser's attributes would affect their influence (a live version of the model can be found here and a thorough written treatment here).
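As an illustration of the kind of computation involved—not the actual model, whose details are in the linked treatment—here is a Python sketch that combines direct evidence (fundamentals) with an adviser's action via Bayes' rule. All priors, likelihoods, and the single "knowledge" parameter are invented for this example.

```python
# Illustrative evidence-integration sketch (all numbers invented).
prior_good = 0.5                      # P(stock is good) before any evidence
p_fund = {"good": 0.8, "bad": 0.3}    # P(strong fundamentals | state)

def adviser_sell_lik(state, knowledge=0.9):
    """P(adviser sells | state): an informed adviser sells bad stocks;
    with prob (1 - knowledge) they act at chance."""
    informed = 0.9 if state == "bad" else 0.1
    return knowledge * informed + (1 - knowledge) * 0.5

def p_good_given(fundamentals_strong, adviser_sells):
    """Posterior P(good) after seeing fundamentals and the adviser's action."""
    unnorm = {}
    for state, p in [("good", prior_good), ("bad", 1 - prior_good)]:
        p *= p_fund[state] if fundamentals_strong else 1 - p_fund[state]
        if adviser_sells:
            p *= adviser_sell_lik(state)
        unnorm[state] = p
    return unnorm["good"] / (unnorm["good"] + unnorm["bad"])
```

With these numbers, a knowledgeable adviser's sale outweighs strong fundamentals; turning the `knowledge` knob down shrinks the adviser's influence, which is the kind of quantitative manipulation the real model predicts.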
Testing complex cognitive models requires careful data analysis. I have employed a number of techniques to arbitrate between competing models and to gain insight into my behavioral experiments (using statistical languages like R and probabilistic programming languages like WebPPL). I have used linear mixed-effects models to analyze within-subjects designs; (nonparametric) resampling methods for robust tests and cross-validated model selection; and Bayesian data analysis to estimate the posterior parameter values of my cognitive models (examples: Bayesian data analysis, basic exploration, and basic exploration (verbose)).
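As a small example of the resampling approach, here is a nonparametric percentile bootstrap in Python (my analyses use R and WebPPL; this standalone sketch just shows the technique):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for a statistic (mean by default).

    Resample the data with replacement n_boot times, compute the statistic
    on each resample, and read the CI off the sorted bootstrap distribution.
    """
    boots = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Because it makes no distributional assumptions, the same machinery doubles for robust hypothesis tests and for resampling-based model selection.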
I have striven to write literate programs throughout my graduate career to ensure my research is transparent and reproducible. Because my cognitive modeling spans several programming languages, I use the literate-programming tool Org-mode/Babel, which allows probabilistic programming languages and more statistics-focused languages (like R) to be woven into a single document.
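To make this concrete, here is a minimal Org-mode/Babel fragment showing how an R analysis and a WebPPL model run can live in one woven document. The headings, data, and file name are illustrative, not taken from my actual research documents:

```org
#+TITLE: Example analysis (illustrative)
#+PROPERTY: header-args :exports both

* Behavioral data
#+begin_src R :session :results output
  # R block: summarize the behavioral data (illustrative data frame)
  summary(lm(rt ~ condition, data = trials))
#+end_src

* Cognitive model
#+begin_src sh :results output
  # Shell block: run a WebPPL model from the same document
  # (model file name is hypothetical)
  webppl adviser-model.wppl
#+end_src
```

Exporting the file weaves prose, code, and results into a single reproducible report.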
Arbitrating between competing cognitive models requires precise behavioral experiments. Using intuition and more formal tools like Optimal Experiment Design (OED), I ran experiments that cleanly tested the novel predictions of my models. Some recent examples include tests of how direct evidence and advice are integrated (here), how we think about others' hyperbolic discounting (here), whether people think that others are "wishful thinkers" (here), and how an adviser's bias affects their influence (here).
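The core OED idea can be sketched in a few lines: score each candidate experiment by the expected information gained about which model is true, and run the highest-scoring one. The two-model, binary-response setup below is a deliberately minimal illustration, not any of the designs linked above.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction over models from one binary response.

    prior[m] = P(model m); likelihoods[m] = P(response=1 | m, experiment).
    """
    gain = 0.0
    for r in (0, 1):
        p_r = sum(prior[m] * (likelihoods[m] if r else 1 - likelihoods[m])
                  for m in range(len(prior)))
        if p_r == 0:
            continue
        post = [prior[m] * (likelihoods[m] if r else 1 - likelihoods[m]) / p_r
                for m in range(len(prior))]
        gain += p_r * (entropy(prior) - entropy(post))
    return gain

prior = [0.5, 0.5]
print(expected_info_gain(prior, [0.5, 0.5]))  # models agree: uninformative
print(expected_info_gain(prior, [0.9, 0.1]))  # models disagree: informative
```

An experiment where the models make identical predictions can never tell them apart; OED formalizes the hunt for designs where their predictions diverge most.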
Feel free to ask me more about my research and software and ask for slides from my presentations at Stanford, Berkeley, Brown, Göttingen, and MIT.