Exposure to Pre- and Perinatal Risk Factors Partially Explains Mean Differences in Self-Regulation between Races

Objectives

To examine whether differential exposure to pre- and perinatal risk factors explained differences in levels of self-regulation between children of different races (White, Black, Hispanic, Asian, and Other).

Methods

Multiple regression models based on data from the Early Childhood Longitudinal Study, Birth Cohort (n ≈ 9,850) were used to analyze the impact of pre- and perinatal risk factors on the development of self-regulation at age 2 years.

Results

Racial differences in levels of self-regulation were observed. Racial differences were also observed for 9 of the 12 pre-/perinatal risk factors. Multiple regression analyses revealed that a portion of the racial differences in self-regulation was explained by differential exposure to several of the pre-/perinatal risk factors. Specifically, maternal age at childbirth, gestational timing, and the family’s socioeconomic status were significantly related to the child’s level of self-regulation. These factors accounted for a statistically significant portion of the racial differences observed in self-regulation.

Conclusions

The findings indicate that racial differences in self-regulation may be at least partially explained by racial differences in exposure to pre- and perinatal risk factors.

Self-regulation—which can be defined as the regulation of the self by the self [1]—is a human phenotype that has a pronounced influence on a wide range of outcomes across the entire life course. The inability to regulate one’s attention in early childhood is a harbinger of maladaptive and problematic outcomes later in life [2–6]. Children who have problems with self-regulation are, for example, more likely to develop and manifest behavioral problems, to display signs of conduct disorder, and to have difficulties in forging social relationships [3, 7–8]. Children and adolescents who lack self-control—a phenotype that is closely related to self-regulation—are at risk for engaging in delinquent acts, for using and abusing drugs and alcohol, and for performing poorly in school [9–10]. Moreover, problems with self-control also affect economic success, overall health, and the probability of coming into contact with the criminal justice system in adulthood [4, 11]. Taken together, the available literature suggests self-regulation is an important trait that has consistent and wide-sweeping effects on a number of complex human traits.

Research has revealed that individual differences in self-regulation emerge within the first few years of life [12–15] and remain relatively stable throughout adolescence and adulthood [16]. As a result, there has been a significant amount of research devoted to uncovering the etiological origins of self-regulation. This rapidly expanding literature has revealed that a range of factors, including genetic/biological influences [17–19], cultural/social forces [18, 20], and school-based elements [21] influence the development of self-regulation during the first two decades of the life course. Although a number of disciplinary perspectives have been employed to explain the development of self-regulation, one perspective in particular that has generated some empirical support is the public health approach. This approach has centered on examining an array of factors, especially pre- and perinatal risk factors, and how they affect the development of self-regulation and related phenotypes [22–25].

Importantly, scholars have noted that mean levels of self-regulation differ across racial categories, with Black respondents tending to score higher on measures of impulsivity compared to Asians and Whites [6, 26–27]. Additionally, there is evidence of differences in a range of temperament scores across samples of Asian and American respondents [28–29]. In studies examining related traits—such as general intelligence (which is associated with long-term planning, problem-solving ability, increased prosocial behavior, and increased self-regulation)—similar race-graded patterns have emerged such that Blacks and Hispanics tend to evince lower scores than Whites and Asians [26–27, 30–31].

Given these findings, an intriguing question that has yet to be fully addressed concerns the degree to which exposure to pre- and perinatal risk factors explains racial disparities in measures of self-regulation (see, generally [32–34]). While few studies have addressed this question, the literature base offers two general explanations of any observed racial differences in self-regulation [35–36]. First, racial differences in mean levels of self-regulation may be due, in part, to differential exposure to risk factors (i.e., an exposure-level hypothesis). If one group is more likely to experience trait-relevant risk factors than another group, then the former should exhibit lower levels of self-regulation on average. We refer to this explanation as Hypothesis 1.
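The exposure-level logic of Hypothesis 1 can be illustrated with a pair of nested regressions: if a group-level gap in the outcome is driven by differential exposure to a risk factor, the group coefficient should attenuate toward zero once the risk factor is added as a covariate. The sketch below uses simulated data and invented variable names (it is not the authors' model or the ECLS-B data); ordinary least squares is fit directly with NumPy.

```python
# Hypothetical illustration of Hypothesis 1 (exposure-level explanation)
# using simulated data. All names and effect sizes here are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 0/1 indicator for two groups
# Group 1 has higher average exposure to a prenatal risk factor
risk = rng.normal(loc=0.5 * group, scale=1.0)
# Self-regulation depends only on the risk factor, not on group per se
self_reg = -0.8 * risk + rng.normal(scale=1.0, size=n)

def ols(y, cols):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_group_only = ols(self_reg, [group])[1]       # group gap, risk omitted
b_adjusted = ols(self_reg, [group, risk])[1]   # group gap, risk controlled

# Under Hypothesis 1 the group gap shrinks once exposure is controlled
print(round(b_group_only, 3), round(b_adjusted, 3))
```

Because the simulated outcome depends on group membership only through exposure, the adjusted group coefficient collapses toward zero; a partial attenuation, as reported in the Results above, would correspond to exposure explaining only part of the gap.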

A second explanation also highlights the importance of risk factors, but suggests that groups differ in their susceptibility to risk. Thus, racial groups may exhibit mean differences in self-regulation because one group is differentially more or less vulnerable to the effects of the risk factors. This moderating explanation posits a statistical interaction between risk factors and race in the prediction of self-regulation. We refer to this explanation as Hypothesis 2. It is also important to note that Hypothesis 2—the moderation explanation—is consistent with the argument that certain experiences (e.g., education) qualitatively differ across racial lines. If this were the case, we might expect some factors to matter more for one group compared to another [37]. (An anonymous reviewer deserves credit for raising the possibility that Hypothesis 2 may be supported if interpretations of the experiences differ across racial groups.)
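Hypothesis 2's moderation claim corresponds to a product term in the regression: a nonzero group × risk coefficient means the risk factor's effect differs across groups. The sketch below, again on simulated data with invented names (not the authors' specification), shows how such an interaction is estimated.

```python
# Hypothetical sketch of the Hypothesis 2 (differential-vulnerability)
# test: a group x risk interaction term. Simulated data, invented values.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)
risk = rng.normal(size=n)
# Here the same risk exposure harms group 1 twice as much as group 0
self_reg = -0.5 * risk - 0.5 * risk * group + rng.normal(size=n)

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), group, risk, group * risk])
beta, *_ = np.linalg.lstsq(X, self_reg, rcond=None)

# beta[3] estimates the interaction; a nonzero value indicates that the
# risk factor's effect on self-regulation differs by group
print(round(beta[3], 3))
```

A significant interaction coefficient would support Hypothesis 2 even if average exposure levels were identical across groups, which is what distinguishes it from the exposure-level account.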

While relatively little research has considered Hypothesis 2, there is some evidence to support Hypothesis 1; that is, that racial groups differ in their level of exposure to risk factors [32–33, 36]. Indeed, Lynch [38] reported a wide range of racial disparities in early child health and development using data from the Early Childhood Longitudinal Study, Birth Cohort (the same data that will be analyzed here). Moreover, national statistics show that of the four million births in 2001, approximately 83 percent received prenatal care during the first trimester, meaning that roughly 17 percent did not receive such medical attention. Breaking down this statistic by racial categories revealed that approximately 89 percent of White mothers received prenatal care during the first trimester while approximately 75 percent of Black mothers, 69 percent of American Indian mothers, 84 percent of Asian or Pacific Islander mothers, and 76 percent of Hispanic mothers received such care [39]. (The analysis presented below utilizes data drawn from the year 2001, so we present national statistics from that same year.) This finding leaves open the possibility that minority mothers are more likely to experience pregnancy and/or birth complications than White mothers due to their lesser access to, or utilization of, prenatal care. Racial differences were also observed for other indicators such as maternal age at childbirth, a purported risk factor for the child’s development [40], and length of the gestation period (e.g., Black children tended to be born earlier than other children [39]; see also [36]).
