(1) Color-concept associations. People continually form and update associations between colors and concepts as they experience the world.
(2) Color-concept association network. Color-concept associations can be represented in a network, with weights on the edges that represent association strengths. For example, a strong weight might connect the concept banana and a saturated yellow, whereas a weight near zero would connect banana and saturated blue. This network structure helps conceptualize how people form coherent judgments under conflicts that arise from the lack of one-to-one correspondence between colors and concepts in the world. Weights vary across the network depending on the relevance of particular colors and concepts for the judgment at hand. Thus, the network structure can adjust to fit the given perceptual and conceptual context.
(3) Color inferences. People make inferences using the color-concept association network to produce judgments. So far we have identified three types of inference operations that result in three types of judgments: pooling results in judgments about preferences for colors, transmitting results in judgments about preferences for entities, and assigning results in interpretations of the meaning of colors in visual encoding systems (e.g., information visualizations).
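The network described above can be pictured as weighted edges between concepts and colors. The minimal sketch below illustrates that structure; the color names, concepts, and weights are made up for illustration, not measured values:

```python
# Tiny illustration of a color-concept association network represented as
# weighted edges between concepts and colors. All weights here are made up.
association = {
    "banana":    {"saturated_yellow": 0.92, "saturated_blue": 0.03},
    "blueberry": {"saturated_yellow": 0.05, "saturated_blue": 0.88},
}

def weight(concept, color):
    """Association strength between a concept and a color (0 = unlinked)."""
    return association.get(concept, {}).get(color, 0.0)
```

For example, `weight("banana", "saturated_yellow")` returns a strong weight (0.92), whereas `weight("banana", "saturated_blue")` is near zero, matching the banana example in the text.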
Our current research focuses on understanding how color-concept associations are formed and how they are used to interpret information visualizations. Our focus is on color, but our findings should extend to other perceptual features insofar as people have systematic associations between those features and concepts.
Associations between visual features and concepts are at the core of visual reasoning. Evidence suggests that color-concept associations are the basis on which people (1) evaluate preferences for colors, (2) evaluate preferences for entities, and (3) interpret meanings of colors in information visualizations. The link between color-concept associations and these three seemingly different types of judgments can be understood within the Color Inference Framework (Schloss, 2018). The framework posits that people continually form and update their associations between colors and concepts through color-related experiences in the world. These associations can be represented in a network that stores associations between all possible colors and concepts. Different kinds of inference operations are computed on the color-concept association network to produce different kinds of judgments: pooling produces preferences for colors, transmitting influences preferences for entities, and assigning determines interpretations of the meanings of colors in visual encoding systems.
In this line of research we have two key objectives. First, we aim to understand how people form color-concept associations through their experiences in the world. Second, we aim to develop efficient ways of quantifying color-concept associations by leveraging image databases and computational modeling. With good estimates of color-concept associations, we will be able to produce more comprehensive predictions about how these associations contribute to visual reasoning.
Automatically estimating color-concept associations
Quantifying color-concept associations is a central part of our research, but obtaining those judgments from human participants is costly in time and effort. Building on prior work using large-scale databases, we are working on new ways to automatically estimate color-concept associations. So far, we have developed a hybrid approach using image statistics and human judgments. We trained and tested models using human ratings on a specific set of colors, and once the models were trained, they could be used to estimate color-concept associations for new concepts and new colors without humans in the loop. The most effective model used features that were relevant to human perception and cognition, aligning with perceptual dimensions of color space and extrapolating within color categories (Rathore, Leggon, Lessard, & Schloss, 2020).
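The training-then-extrapolation idea can be sketched in miniature. The code below is a hypothetical linear-regression toy with synthetic features and ratings, not the published model (which used perceptually grounded image features; see Rathore et al., 2020); it only illustrates how a model fit on rated colors can estimate associations for new colors without further human judgments:

```python
import numpy as np

# Hypothetical sketch (not the published model): learn a linear map from
# perceptual color features to human association ratings, then reuse the
# fitted weights to estimate associations for unrated colors.
rng = np.random.default_rng(0)

# Toy training data: rows = (color, concept) pairs, columns = stand-in
# perceptual features; ratings are simulated from known weights plus noise.
X = rng.uniform(-1, 1, size=(60, 3))
true_w = np.array([0.7, -0.2, 0.4])
y = X @ true_w + rng.normal(0, 0.05, size=60)

# Ordinary least squares fit on the "human-rated" training colors.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Once fitted, associations for a new, unrated color can be estimated
# without humans in the loop.
new_color_features = np.array([0.5, 0.1, -0.3])
estimate = new_color_features @ w
```

With enough training pairs, the fitted weights `w` recover the generating weights closely, which is what lets the model generalize to colors it was never trained on.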
These methods will advance the field’s understanding of visual reasoning for visual communication in a few ways. First, they will help make visual reasoning research more efficient by circumventing the need to collect human ratings each time we need to quantify color-concept associations for experiments on interpreting information visualizations. Second, these models provide insights into how humans form color-concept associations. Third, they will help achieve a long-term goal of automatically producing optimal color palettes for semantically interpretable visualizations.
Understanding how color-concept associations map onto dimensions in color space
Both in the scientific literature and in popular culture, it is common to find claims like color x means y (e.g., red means anger). However, color-concept associations are not so discrete or unitary. Color-concept associations are graded and continuous across color space, not all-or-none. Researchers can quantify the associations between a given concept and all possible colors, and represent them in what we call a “color-concept association space”. We contend that every concept has a color-concept association space, which means every color is associated with every concept to some degree, even if that degree is near zero.
From this perspective, we can approach characterizing color-concept associations by quantifying how they map onto dimensions within color space (e.g., lightness, chroma, redness vs. greenness, and yellowness vs. blueness). This approach has been shown to dispel commonly held notions about color-concept associations. In particular, it is commonly held that yellow hues are associated with happiness whereas blue hues are associated with sadness. We found that happiness/sadness of colors was dominated by lightness and chroma. When lightness and chroma were controlled statistically or colorimetrically, yellow hues were no happier than blue hues, and in some cases blue hues were happier (Schloss, Witzel, & Lai, 2020). Although the origin of these color-emotion associations is still unclear, having a more accurate description of the phenomena will help constrain possible accounts of where color-emotion associations come from and why they exist.
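As a sketch of what “controlling statistically” can mean here, the toy regression below partials lightness (L*) and chroma (C*) out of emotion ratings before comparing hues. All data, and the binary yellow/blue coding, are fabricated for illustration; they are constructed so that ratings depend only on lightness and chroma, in which case the hue difference vanishes in the residuals:

```python
import numpy as np

# Sketch of statistical control: regress emotion ratings on L* and C*,
# then ask whether hue explains anything in the residuals. Synthetic data.
rng = np.random.default_rng(1)
n = 200
L = rng.uniform(20, 90, n)             # lightness L*
C = rng.uniform(0, 80, n)              # chroma C*
hue_is_yellow = rng.integers(0, 2, n)  # toy coding: 1 = yellow, 0 = blue

# Synthetic ratings driven only by lightness and chroma, not hue.
rating = 0.6 * L + 0.3 * C + rng.normal(0, 2, n)

# Fit rating ~ intercept + L* + C*, then take residuals.
X = np.column_stack([np.ones(n), L, C])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
resid = rating - X @ beta

# With L* and C* partialed out, yellow vs. blue residual means are similar.
diff = resid[hue_is_yellow == 1].mean() - resid[hue_is_yellow == 0].mean()
```

Because the simulated ratings contain no hue effect, `diff` is near zero once lightness and chroma are removed, mirroring the paper’s finding that the apparent yellow-happy/blue-sad pattern can be carried by lightness and chroma rather than hue.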
One possible source of color-emotion associations involves perceiving colors concurrently with human expressions of emotion (e.g., angry faces becoming redder builds associations between anger and redness). In ongoing work, we are leveraging such social color-emotion associations to investigate how color can be used as a tool to enhance the emotion-expressive capability of artificial social agents (i.e., social robots).
To interpret information visualizations, people use visual reasoning to determine how visual features map onto concepts. For example, to interpret the colors in weather maps, neuroimaging figures, bar graphs, and recycling bin signs, people must determine which colors in the visualization map onto the different quantities or categories represented in the visualization. People have expectations, or “inferred mappings”, for how visual features will map onto concepts, and they have an easier time interpreting visualizations that match those expectations. The challenge is understanding what determines people’s inferred mappings. Addressing this challenge will advance knowledge about how visual reasoning works, and will translate to designing effective and efficient information visualizations.
More details coming soon!
Explaining color preferences
Why do people have color preferences? How are color preferences formed, and why do they change over time? According to the Ecological Valence Theory (EVT), preference for a given color is influenced by preferences for all concepts associated with the color (Palmer & Schloss, 2010). People like colors that, on average, remind them of things they like, and they dislike colors that remind them, on average, of things they dislike. Within the Color Inference Framework (Schloss, 2018), the EVT account of color preferences can be described as a “pooling” operation on the color-concept association network. Preferences for all concepts associated with a color are pooled to produce a summary preference, which is used to determine preference for the color.
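One natural way to formalize pooling is as an association-weighted average of concept valences. The sketch below takes that reading; the concepts, association strengths, and valences are hypothetical numbers chosen for illustration, not measured data:

```python
# Sketch of the EVT "pooling" operation: preference for a color is modeled
# here as the association-weighted average of the valences of its
# associated concepts. All numbers below are illustrative.

def pooled_preference(associations, valences):
    """associations: concept -> association strength with the color (0-1).
    valences: concept -> how much the person likes that concept (-1 to 1)."""
    total_weight = sum(associations.values())
    if total_weight == 0:
        return 0.0
    return sum(w * valences[c] for c, w in associations.items()) / total_weight

# A saturated yellow associated with mostly liked (and one disliked) things.
assoc = {"banana": 0.9, "sunshine": 0.8, "caution_sign": 0.4}
val = {"banana": 0.7, "sunshine": 0.9, "caution_sign": -0.5}
pref = pooled_preference(assoc, val)
```

With these toy numbers the pooled preference is about 0.55: the color is liked overall because most strongly associated concepts are liked, which is the EVT claim in miniature. Changing which concepts are associated, their valences, or their activation weights changes the result, matching the three hypotheses described below.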
The EVT provides a unified account for why color preferences differ between individuals and change over time (Schloss & Palmer, 2017). Preference for a given color is calculated for a given individual at a given moment in time depending on (a) which concepts are associated with that color (differential object-association hypothesis), (b) the relative preferences for concepts associated with that color (differential valence hypothesis), and (c) the degree to which different concepts are activated in the individual’s mind (differential activation hypothesis). Evidence supporting these hypotheses comes from studies on individual differences, cultural differences, priming effects, and seasonal changes.
Describing and predicting patterns of color preferences
What are effective ways to describe patterns of color preferences? How can we predict people’s preferences for colors they haven’t judged? The EVT is helpful for explaining the origins of color preferences, but other methods are more efficient for describing and predicting patterns of color preferences. We explored different color space metrics for constructing models of color preferences, based on cone contrasts (Hurlbert & Ling, 2007) and higher-level color appearance spaces using Euclidean and cylindrical coordinates (Schloss, Lessard, Racey, & Hurlbert, 2018). Using multiple linear regression with cross-validation to avoid overfitting, we found that the most effective model used cylindrical coordinates in CIELAB space with 1st and 2nd harmonics. This approach is effective for describing and predicting color preferences at the group and individual levels.
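A minimal sketch of this kind of model is shown below: preference is regressed on lightness, chroma, and 1st and 2nd harmonics of hue angle. The data are synthetic stand-ins for human ratings, and the coefficients are invented; the actual fitted model comes from Schloss et al. (2018):

```python
import numpy as np

# Sketch: predict color preference from cylindrical CIELAB (LCh)
# coordinates with 1st and 2nd hue harmonics. Data are synthetic.
rng = np.random.default_rng(2)
n = 120
L = rng.uniform(20, 95, n)          # lightness L*
C = rng.uniform(5, 90, n)           # chroma C*
h = rng.uniform(0, 2 * np.pi, n)    # hue angle in radians

def design(L, C, h):
    """Design matrix: intercept, L*, C*, and two hue harmonics."""
    return np.column_stack([
        np.ones_like(L), L, C,
        np.sin(h), np.cos(h),          # 1st harmonic of hue
        np.sin(2 * h), np.cos(2 * h),  # 2nd harmonic of hue
    ])

# Simulated preference ratings from invented coefficients plus noise.
true_beta = np.array([10, 0.3, 0.2, 5, -3, 2, 1])
pref = design(L, C, h) @ true_beta + rng.normal(0, 1, n)

# Multiple linear regression fit.
beta, *_ = np.linalg.lstsq(design(L, C, h), pref, rcond=None)
```

The sinusoidal hue terms capture the fact that hue is circular: a linear function of hue angle would wrongly treat 359° and 1° as far apart, whereas harmonics of sin and cos do not. (For brevity, this sketch omits the cross-validation step mentioned above.)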
Preferences for color combinations
Preferences for individual colors only weakly predict preferences for color pairs, as preferences for color pairs tend to depend on relational factors among the colors, including hue similarity and lightness contrast (Schloss & Palmer, 2011). Fortunately, once those relational components are characterized in color pairs, pair preferences can be used to predict preferences for higher-order color combinations. We leveraged this property in creating Colorgorical, a color palette generation tool for information visualizations (Gramazio, Laidlaw, & Schloss, 2017). Colorgorical allows users to specify the number of colors in the palette and use sliders to specify the relative importance of aesthetic preference, perceptual discriminability, and name difference. It also enables users to specify seed colors to include in palettes. Evaluations demonstrated that, on average, Colorgorical produces palettes that are as perceptually discriminable as, and more preferable than, benchmark palettes from professional designers. Developing Colorgorical was an initial stage in an ongoing project to understand how to automate the design of effective color palettes for visual communication.
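The slider-weighted trade-off can be sketched as a simple greedy scorer. This is a deliberately simplified illustration, not Colorgorical’s actual algorithm: it scores each candidate as a weighted sum of its aesthetic preference and its CIELAB distance from colors already in the palette (a stand-in for perceptual discriminability), and omits the name-difference term for brevity. All colors and preference values are made up:

```python
import math

# Simplified Colorgorical-style trade-off (not the actual algorithm):
# greedily pick colors that balance aesthetic preference against
# discriminability from colors already chosen.

def cielab_distance(c1, c2):
    """Euclidean distance in CIELAB as a stand-in for discriminability."""
    return math.dist(c1, c2)

def score(candidate, palette, pref, w_pref=0.5, w_disc=0.5):
    # Distance to the nearest palette color; large default when palette empty.
    disc = min((cielab_distance(candidate, p) for p in palette), default=100.0)
    return w_pref * pref[candidate] + w_disc * disc

def build_palette(candidates, pref, k, **weights):
    palette = []
    pool = list(candidates)
    for _ in range(k):
        best = max(pool, key=lambda c: score(c, palette, pref, **weights))
        palette.append(best)
        pool.remove(best)
    return palette

# Toy candidates as (L*, a*, b*) tuples with made-up preference scores.
colors = [(60, 40, 30), (60, 42, 28), (30, -20, 50)]
pref = {colors[0]: 0.9, colors[1]: 0.85, colors[2]: 0.4}
palette = build_palette(colors, pref, 2)
```

Here the second pick is the less-preferred but highly discriminable third color rather than the well-liked near-duplicate of the first, showing how the weights trade aesthetics against discriminability. The sliders in Colorgorical play the role of `w_pref` and `w_disc` (plus a name-difference weight not modeled here).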
To demonstrate the potential of VR-based learning, we have developed two lesson plans that can be downloaded below: The Virtual Visual System™ and Virtual Auditory System™. The lesson plans immerse people in a model of the brain based on real brain scans, allowing them to follow the path from sensory input to cortex. Information stations along the way describe key topics at each stage of neural processing.
Our perspective on VR education is that VR is a lens, analogous to a microscope or telescope, through which students experience content that would otherwise be difficult to see. We believe the future of VR in the classroom is to provide enriched experiences that are integrated within the larger course structure, rather than to supplant traditional education. Just as students do not spend entire classes with microscopes or telescopes pressed to their faces, they need not spend entire classes wearing VR headsets. VR acts as a springboard to facilitate class discussion and activities, rather than isolating students from each other and the instructor. Thus, the UW Virtual Brain Project™ lessons are brief (about 5 min.) and can be built into regular lessons on neural structure and function.
In the lab. The UW Virtual Brain Project™ team is conducting research to demonstrate the efficacy of VR-based education and to identify the aspects of VR that are especially beneficial to learning outcomes.
UW Virtual Brain Project™ team: Karen Schloss • Bas Rokers • Chris Racey • Simon Smith • Ross Treddinick • Nathaniel Miller • Melissa Schoenlein • Chris Castro