Prediction

In many ways, one core goal of this commentary is to assert that, in recognition of their broad accomplishments thus far, the models presented in this issue – particularly the dual-systems model of Shulman and colleagues (2016), as well as related models such as the imbalance (Casey, 2015) and triadic (Ernst et al., 2006) models – are ready to be challenged to make more precise predictions. In other words, we believe that there is sufficient verification in the published literature to warrant a reorientation toward falsification in this subfield; with respect to the social reorientation model of Nelson and colleagues (2016), we suspect that the rapidly growing research interest in this subfield means a similar level of verification is not far behind. But for a theory to be falsifiable, it must make precise predictions. We note that terms like imbalance and mismatch are unfortunately quite imprecise in this regard, requiring the null hypothesis to be either that the systems are precisely matched or in balance during adolescence (which, as Meehl (1978, 1990a, 1990b, 1997) was fond of pointing out, is almost certainly never true), or that there is no difference in the degree of imbalance or mismatch among children, adolescents, and adults. Additionally, the precision with which we define “adolescents” and the “variable” in question is also critical. In other words, even if individual models make directional predictions (as many do), this precision frequently gets lost in translation via imprecise definitions of adolescent development and/or neural mechanisms (as discussed in more detail below). Shulman and colleagues (2016) identify several refinements of the dual-systems model that we endorse, especially the prediction that social contexts (particularly peer contexts) may be significant moderators of many developmental effects. There are other changes to the models that concern us, however.
One trend that has caught our attention is the gradual expansion of constructs and processes attributed to one of the systems, without a similar inclusivity with respect to neural circuitry. Early conceptualizations of one system in these models focused primarily on motivational processes like reward sensitivity, subserved by the ventral striatum (Steinberg, 2008; Casey et al., 2008), despite frequent mention of broader affective and emotional factors (but see the triadic model for an approach that has always strongly differentiated these two; Ernst et al., 2006). As acknowledged by Shulman and colleagues (2016), there is now enhanced emphasis on the contributions of social context and social cognition. But instead of considering how this expands the relevant neural circuitry, which is significantly (Blakemore, 2008) but not entirely (Nelson et al., 2016) distinct from the key regions and networks of interest in these models at present, these moderating and mediating processes are generally subsumed into a “socioemotional,” “affective,” or “motivational” system. If we are to be precise about networks and processes, and want to make more precise predictions, we believe these constructs should be unmerged. The original description of the social information processing network (Nelson et al., 2005) attempted to distinguish these constructs in some ways, but the updated review in this issue largely did not revisit what we now know a decade later about their differential functions and the relevant regions or networks of interest. In our view, compelling models must differentiate among social, affective, and motivational factors (and their sub-components) and their associated neural networks, each of which is likely to have specific reciprocal relationships with the lateral frontoparietal circuitry supporting cognitive functions such as attention and control.
Experimental design

An additional benefit of the emphasis on precise experimental design is the attention it draws to construct validity in our tasks. What precise constructs or processes (e.g., neurocognitive, affective, interpersonal) do our tasks assess? For example, in a recent systematic review of fMRI paradigms used to study reward in adolescents versus adults, the authors concluded that the various tasks produced mixed results and that it is “difficult to clearly map the role of specific neural mechanisms onto… developmental changes” (p. 988; Richards et al., 2013). Furthermore, they conclude that “we lack knowledge in the multiple ways that different variables, including subject characteristics, experimental factors, and environmental contexts, can influence the neural systems underlying reward-related behavior” (p. 988). This is just one example of an area where a variety of tasks that putatively tap a coherent underlying construct (i.e., functioning of the reward system) can show strongly dissociated effects across different studies of that construct. Indeed, rarely (if ever) is more than one of these tasks used in a single neuroimaging study, so we actually have little information on the degree to which performance on these tasks covaries. Nevertheless, we often interpret these tasks as if they all tap one construct. This highlights the importance of very careful attention to construct validity in our experimental tasks. In fact, it would serve us well to return to the rich and venerable psychometric literature on construct validity.
Cronbach and Meehl (1955) originally presented the concept of a “nomological network” as a method of evaluating the construct validity of psychological tests. They proposed that, to demonstrate that a measure has construct validity, researchers must develop a nomological network consisting of both a theoretical framework and an empirical framework regarding methods of measurement, along with specification of the links among and between these two frameworks. Perhaps because of the expense and complexity of neuroimaging relative to psychometric measures, very few imaging tasks have been rigorously subjected to this kind of evaluation; yet the absence of such an analysis can be a critical barrier to progress if studies that are putatively investigating a particular construct are not in fact doing so.