
Visual Literacy in Bloom: Using Bloom’s Taxonomy to Support Visual Learning Skills

Published Online: https://doi.org/10.1187/cbe.17-08-0178

    Abstract

    Vision and Change identifies science communication as one of the core competencies in undergraduate biology. Visual representations are an integral part of science communication, allowing ideas to be shared among and between scientists and the public. As such, development of scientific visual literacy should be a desired outcome of undergraduate instruction. We developed the Visualization Blooming Tool (VBT), an adaptation of Bloom’s taxonomy specifically focused on visual representations, to aid instructors in designing instruction and assessments to target scientific visual literacy in undergraduate instruction. In this article, we identify the need for the VBT, describe its development, and provide concrete examples of its application to a curriculum redesign effort in undergraduate biochemistry.

    INTRODUCTION

    Visualization is a critical component of science, as all scientific disciplines are rife with visual representations (e.g., graphs, computer models, chemical formulae). Visual representations bridge the gap between unobservable phenomena (e.g., molecular processes) and/or scientific theories and the observable world, and they aid the viewer in better understanding the ideas represented (Gershon et al., 1998; Tibell and Rundgren, 2010). Visual representations can convey complex messages. Accordingly, scientists use them to model hypotheses, identify meaningful patterns in data, and communicate ideas within the scientific community, with the general public, and for the education of future scientists. The messages contained within visual representations can only be delivered if both the creator and the audience of the representation have developed sufficient visual literacy (Trumbo, 1999), and the success of scientists is dependent on their capacity for generating and interpreting visual representations (Tibell and Rundgren, 2010).

    Current efforts to integrate authentic practices of scientists into undergraduate curricula (e.g., American Association for the Advancement of Science, 2011) include explicitly targeting communication skills that are translatable across multiple disciplines (Merkel, 2012; American Society for Biochemistry and Molecular Biology [ASBMB], 2017). Visual representations, as described earlier, are an essential aspect of scientific communication (Trumbo, 1999). Guides for reforming curricula emphasize scientific communication, as evidenced by recommendations that students be able to analyze and interpret results provided to them visually and to represent data in appropriate and meaningful ways (Merkel, 2012; ASBMB, 2017). Thus, the development of scientific visual literacy should be an explicit outcome of undergraduate science courses (Blummer, 2015; Mnguni et al., 2016). We define visual literacy as the achievement of fluency in the disciplinary discourse (Airey and Linder, 2009) scientists use when engaging in activities such as 1) decoding and interpreting visual representations, 2) encoding and creating visual representations, and 3) generating mental models (Offerdahl et al., 2017).

    Curricula and courses designed in support of visual literacy should employ the principles of backward design, namely, assessments and activities should be aligned with stated learning objectives so they provide practice with and reinforcement of the skills underpinning visual literacy (Wiggins and McTighe, 2005). The first step in backward design is articulating clear and measurable learning objectives, that is, identifying what it is students should know and be able to do as a result of instruction. Bloom’s taxonomy (Bloom et al., 1956) has historically been used in K–12 education to articulate and classify learning objectives hierarchically within three domains (cognitive, affective, and sensory; e.g., Kunen et al., 1981; Imrie, 1995). More recently, Bloom’s taxonomy has been applied in undergraduate science education to characterize the learning outcomes commonly targeted and assessed in undergraduate science, technology, engineering, and mathematics education (e.g., Allen and Tanner, 2002; Bissell and Lemons, 2006). With regard to visual literacy in particular, Trumbo (1999) used Bloom’s taxonomy to describe the visual learning skills involved in developing familiarity with visual representations and gaining competency with the process of extracting meaning from a visual representation. Mnguni and colleagues (2016) also used Bloom’s taxonomy to characterize cognitive skills associated with the process of visualization while seeking to measure students’ visual literacy level.

    In its inception, Bloom’s taxonomy was designed as a tool for educators to make explicit the targets of instruction (Bloom et al., 1956; Anderson et al., 2001). But there is evidence that Bloom’s taxonomy can also be used as a tool to increase student reflection and learning (Crowe et al., 2008). Both applications of Bloom’s taxonomy have been exemplified in undergraduate biology. Crowe and colleagues (2008) created the Blooming Biology Tool (BBT), which articulated examples of biology learning outcomes at each level. Instructors then used the BBT to categorize exam questions to determine whether their exams aligned with intended learning outcomes. Paired with performance results, the BBT allowed the instructor of an inquiry-based lab to determine which skills were most difficult for students to apply. Altering the instructional design to allow students additional practice with these skills resulted in increased performance on items requiring those skills (Crowe et al., 2008). Similarly, students were trained to use the BBT to categorize assessment questions and to generate their own questions about assigned readings at each level. Students applied the BBT to course assessments and calculated their performance at each cognitive level, thereby promoting reflection on performance and self-diagnosis of skills needing additional practice. Students were then given the Bloom’s-based Learning Activities for Students (BLASt), which suggests activities designed for practicing each cognitive skill. In this manner, combined use of the BBT and BLASt facilitated greater reflection on, and approaches to, learning. The work of Crowe and colleagues (2008) demonstrated the utility of Bloom’s taxonomy as a way to not only identify desired learning outcomes and assess student learning, but to generate targeted and actionable feedback for both student and instructor to support student learning across cognitive levels.

We extend this work by applying a Bloom’s-based approach to scaffold visual literacy in an undergraduate biochemistry classroom. Specifically, we used the BBT as a starting point to systematically reflect on the literature related to scientific visual literacy (e.g., Trumbo, 1999; Schönborn and Anderson, 2006, 2010) and create the Visualization Blooming Tool (VBT), an articulation of the visual learning skills underpinning visual literacy. We then applied the VBT to 1) examine the degree to which visual learning skills were targeted and reinforced through assessment in an undergraduate biochemistry course, 2) guide the creation of additional practice and assessment items that aligned with those skills, and 3) investigate the degree to which increasing practice with and assessment of visual learning skills throughout a semester affected student performance on visual-based exam questions at the conclusion of the course. In this paper, we detail the development of the VBT, highlight examples of its utility, and discuss how the findings of our study indicate a need for better understanding of how adding visual representations to assessments impacts the limited capacity of students’ working memory.

    DEVELOPING THE VBT

    The VBT was designed to assist instructors and students alike in the practice and assessment of visual learning skills. Development of the VBT included using the BBT (Crowe et al., 2008) to generate an initial prototype focused specifically on visual literacy, submitting the initial prototype for expert review and feedback, and iteratively refining the instrument until it could be applied with high interrater reliability.

    Stage 1

    We began by adapting Bloom’s taxonomy to articulate the visual learning skills underpinning visual literacy, resulting in the development of the VBT (Table 1). To this end, we started with the BBT (Crowe et al., 2008) as a guide and then used the cognitive process dimension of the revised Bloom’s taxonomy (Anderson et al., 2001) to better characterize the skills and types of tasks associated with each of the cognitive levels of the VBT. To direct the cognitive taxonomy toward visual literacy, we drew upon literature describing types of visualization skills individuals must demonstrate to be successful scientists (e.g., Trumbo, 1999; Schönborn and Anderson, 2006, 2010). Specifically, we extracted commonly agreed upon cognitive skills related to visualization (as opposed to visuospatial skills, for instance) and mapped them onto Bloom’s taxonomy.

    We established face validity of the VBT by circulating it for review. First, we provided the VBT to groups of undergraduate science students with little or no knowledge of Bloom’s taxonomy. They were asked to use the VBT to categorize sample assessments from their respective disciplines—biology, biochemistry, chemistry, and physics—and provide feedback regarding the utility of the instrument. Subsequently, the VBT was sent out for review by authors of two of the most-cited papers that use Bloom’s taxonomy as a framework in life sciences education (Crowe et al., 2008; Momsen et al., 2010).

    TABLE 1. The Visualization Blooming Tool

Knowledge
Skills: memorize, recognize, recall, retrieve
General characteristics: Items require students only to remember facts or information.
Example visualization tasks:
• Label components of the image.
• Identify experimental process or conventional method that would yield the representation.
• List ordered steps in a schematic.
• Define abbreviations or symbols used.
• State the formula or equation.
• Identify structures or features.

Comprehension
Skills: understand, interpret, infer, exemplify, classify
General characteristics:
• Items consist of familiar scenarios and are often focused on surface features or on the representation itself.
• Students are required to construct meaning from provided representation.
Example visualization tasks:
• Make predictions in situations that have already been explicitly covered.
• Compare between images based on visible features of the representation.
• Summarize what is represented.
• Categorize representations based on surface features (find patterns).

Application
Skills: execute, implement, apply
General characteristics:
• Students must carry out a procedure or process to solve a problem.
• Lower-order (execution) items are familiar, repetitive tasks that explicitly prompt the use of a specific procedure and do not require conceptual understanding. A fixed series of steps is used to find a single possible answer.
• Higher-order (implementation) items are tasks that involve conceptual understanding and require selection, modification, or manufacture of a procedure to fit the situation. The solving process may contain decision points or may result in more than one possible solution.
Example visualization tasks:
• Calculate a solution.
• Sketch graph from provided data.
• Draw expected pattern of results.
• Predict what could happen if a single variable were changed.
• Translate information from one form of representation into another.

Analysis
Skills: differentiate, discriminate, organize, integrate, deconstruct, attribute
General characteristics:
• Items may appear similar to comprehension items, but require additional contextualizing from the student. (Consider how many steps students need to make to answer the question.)
• Students must discriminate relevant information, determine how elements fit into overall structure, build connections, or determine underlying purpose of the representation.
Example visualization tasks:
• Make determination regarding a concept by comparing representations.
• Infer biological/biochemical implications.
• Predict how representation would change if multiple properties were altered.
• Determine purpose/intent of representation.
• Differentiate between relevant and irrelevant information to solve unfamiliar problem.

Evaluation
Skills: check, coordinate, critique, judge, test
General characteristics:
• Items require students to make judgments based on criteria or standards, including quality, effectiveness, efficiency, or consistency.
• Evaluation may involve testing for inconsistencies within a process or representation or making a judgment based on noted positive and negative features of a product.
Example visualization tasks:
• Decide whether data support/disprove the hypothesis or conclusions.
• Determine whether a single representation contains parts that contradict one another or whether multiple provided representations contradict each other.
• Judge which convention or type of abstraction should be used to convey information.
• Discern which of two methods is the best way to solve a problem.
• Critique existing representation based on biochemical/molecular principles.
• Assess effectiveness of a representation.

Synthesis
Skills: generate, hypothesize, plan, design, construct, reorganize, produce
General characteristics:
• Items require students to put elements together to form a functional product or to reorganize elements into new pattern or structure.
• This may involve generating a variety of possible solutions, devising a solution method and plan of action, or executing a plan and constructing the solution.
Example visualization tasks:
• Generate hypotheses from multiple representations.
• Structure evidence into argument for or against a conclusion/hypothesis.
• Design plan to collect evidence to support scientific argument.
• Reorganize information or integrate multiple concepts to construct new representations/models.
• Develop plan to solve problems by selecting appropriate equations, variables, etc.
• Generate alternative ways to represent data/information.

    Stage 2

The feedback received in stage 1 was used to clarify the features that would allow for a cleaner distinction to be made between comprehension and analysis, as these categories include similar skills and tasks. For instance, comparison tasks could fall under either category, depending on the depth of the comparison students are being asked to make. Comprehension-level questions typically probe only surface-level reasoning, while analysis tasks prompt students to reason at a deeper level (Anderson et al., 2001). As seen in Figure 1, assessment items using the same graphical representation can be classified differently based on the nature of the prompt. The first task, in which students must determine which curve represents the DNA strand with the highest melting point, can be accomplished by students at this educational level solely by using the features of the representation itself and would thus be considered a comprehension question. In contrast, the second task requires that students have conceptual knowledge regarding hydrogen bonding and the relative stability of base pairing.

FIGURE 1. Comprehension and analysis tasks may appear similar; however, comprehension typically involves surface features of the representation, while analysis requires deeper, more conceptual knowledge.

When assigning Bloom’s categories to visualization tasks, we found it useful to break down assessment items and list everything students would have to do or know to successfully respond to the prompt. This was especially important when considering the “gray areas” mentioned earlier, and it helped eliminate some of the familiarity bias that may lead content experts to code tasks at either a lower or higher cognitive level than is realistic for what the students were required to do. For instance, in Figure 1, the comprehension question requires that students identify the melting point of each curve and then compare them. In the analysis question, students must not only be able to identify and compare melting points, but must also infer which curve represents a DNA strand with a particular base composition by linking their conceptual knowledge of base pairing, stability, and energy to temperature. Further breakdown of these tasks can be found in Supplemental Figure S1.
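The conceptual link the analysis version demands, namely that G–C pairs contribute three hydrogen bonds and therefore raise the melting temperature, can be illustrated with a back-of-the-envelope estimate. The sketch below uses the Wallace rule for short oligonucleotides; the sequences are hypothetical and the rule is only a coarse approximation, not part of the original assessment item.

```python
def wallace_tm(sequence):
    """Rough melting temperature (deg C) for a short DNA oligo:
    Tm ~ 2*(A+T) + 4*(G+C) (Wallace rule; a coarse approximation)."""
    seq = sequence.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# Two hypothetical 20-mers of equal length but different base composition.
at_rich = "ATATTAATTAAATTTAATAT"   # 0% GC
gc_rich = "GCGGCCGCGGCCGGCGCCGC"   # 100% GC

print(wallace_tm(at_rich))  # 40 -> lower melting point
print(wallace_tm(gc_rich))  # 80 -> higher melting point (more H-bonds per pair)
```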

After this stage of refinement, two coders (J.A. and colleague) tested the reliability of the VBT by using it to characterize all assessment items (n = 169) from a single semester of undergraduate biochemistry. Following independent coding, interrater agreement was first estimated by simple percent agreement to be 89% before discussion. Fleiss’s kappa, which can be used to compare the observed agreement of two or more coders with the expected agreement while accounting for the likelihood that a category was coded by chance (Fleiss, 1971), was also calculated (κ = 0.85). According to guidelines published by Fleiss (1981), a kappa value greater than or equal to 0.75 is considered excellent agreement.
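For readers who wish to reproduce these agreement statistics, the following is a minimal sketch of simple percent agreement and Fleiss’s (1971) kappa for coders who have assigned VBT levels to the same set of items. The function names and the toy codes at the bottom are illustrative assumptions, not the actual coding data from this study.

```python
from collections import Counter

# VBT cognitive levels used as coding categories (per Table 1).
CATEGORIES = ["knowledge", "comprehension", "application",
              "analysis", "evaluation", "synthesis"]

def percent_agreement(codes_a, codes_b):
    """Proportion of items on which two coders assigned the same level."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def fleiss_kappa(codes_by_rater, categories):
    """Fleiss's (1971) kappa for n raters coding the same items.

    codes_by_rater: list of equal-length lists, one per rater.
    """
    n_raters = len(codes_by_rater)
    n_items = len(codes_by_rater[0])

    # n_ij: number of raters assigning item i to category j.
    counts = [Counter(rater[i] for rater in codes_by_rater)
              for i in range(n_items)]

    # Per-item agreement P_i and overall category proportions p_j.
    p_i = [(sum(c[cat] ** 2 for cat in categories) - n_raters)
           / (n_raters * (n_raters - 1)) for c in counts]
    p_j = [sum(c[cat] for c in counts) / (n_items * n_raters)
           for cat in categories]

    p_bar = sum(p_i) / n_items          # observed agreement
    p_e = sum(p ** 2 for p in p_j)      # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Toy example: two coders, six items (illustrative only).
coder_1 = ["knowledge", "application", "analysis",
           "comprehension", "synthesis", "evaluation"]
coder_2 = ["knowledge", "application", "analysis",
           "comprehension", "synthesis", "synthesis"]

print(f"Percent agreement: {percent_agreement(coder_1, coder_2):.2f}")
print(f"Fleiss's kappa:    {fleiss_kappa([coder_1, coder_2], CATEGORIES):.2f}")
```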

    Stage 3

The VBT is not meant to be limited to biochemistry visual literacy, but should be generally applicable in science courses. Thus, we tested its utility in coding exam questions from four courses: general chemistry, introductory biology, cellular biology, and biochemistry. The cognitive level of an assessment item depends, in part, on familiarity with the content. For instance, if students complete an analysis-level explanation in class and are subsequently given the same question on a quiz, the quiz question would be categorized as being at the comprehension level. When assigning Bloom’s levels to questions from each course, we considered the amount of prior exposure based on course syllabi and other course artifacts and took into consideration our knowledge of instruction in these courses. When contextual information was insufficient, we followed the example of Crowe et al. (2008) and erred toward Blooming at the next highest level.

While coding these questions, we noticed variability between courses in the cognitive demand of questions falling under application. For instance, general chemistry exams included a large number of application-level questions; however, these questions assessed the rote memorization and use of a formula with variables that were explicitly provided. This contrasts sharply with application-level questions on exams in biochemistry, for example, for which students must apply some conceptual understanding to select an appropriate formula and determine the relevant variables in what is often a multistep procedure. These two types of questions provide different levels of evidence of student learning. Indeed, the application category has previously been considered a transition between lower-order (LOCS) and higher-order cognitive skills (HOCS; Crowe et al., 2008). To determine whether a course provides opportunities for students to practice with both lower- and higher-order application items, or to examine how students perform on lower-order or higher-order visualization tasks, it is necessary to distinguish where the transition from LOCS to HOCS occurs. To elucidate the differences between lower- and higher-order application tasks, we used the descriptions of the two cognitive processes associated with the application level: 1) executing, which solely involves procedural knowledge to solve a familiar prompt; and 2) implementing, which requires both conceptual and procedural knowledge to respond to an unfamiliar problem (Anderson et al., 2001). This allowed us to create two distinct levels in the VBT. An example of this can be seen in Figure 2, where the lower-order application question requires students to plug the provided values into the provided equation to solve for Kmapp and then plot that reaction on the graph. In the higher-order application version, however, students must have conceptual knowledge of the type of inhibitor, based on its binding, to recognize that only Km is affected by the inhibitor. Students must then select which equation is necessary to solve for Kmapp and then plot the resulting reaction. This final revision of the VBT was tested by independent coders (J.A., E.O., and colleague) on assessment items from an introductory biology course (n = 76), and interrater reliability was again found to be excellent (κ = 0.86).
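To make the arithmetic behind the two versions of the Figure 2 task concrete, the sketch below computes an apparent Km and Michaelis–Menten velocities for a competitive inhibitor, the case in which only Km is affected. The equations are standard enzyme kinetics, but the parameter values are illustrative assumptions and are not taken from the actual exam item.

```python
def apparent_km(km, inhibitor_conc, ki):
    """Apparent Km for a competitive inhibitor: Km_app = Km * (1 + [I]/Ki)."""
    return km * (1 + inhibitor_conc / ki)

def velocity(vmax, km, substrate_conc):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * substrate_conc / (km + substrate_conc)

# Illustrative values only (not the values used in Figure 2).
km, vmax = 2.0, 100.0          # mM, arbitrary rate units
inhibitor_conc, ki = 5.0, 2.5  # mM

km_app = apparent_km(km, inhibitor_conc, ki)  # 2.0 * (1 + 5.0/2.5) = 6.0 mM
print(f"Km_app = {km_app:.1f} mM")

# Vmax is unchanged, so the inhibited curve approaches the same plateau
# but rises more slowly (lower v at any subsaturating [S]).
for s in (1.0, 5.0, 25.0):
    print(s, velocity(vmax, km, s), velocity(vmax, km_app, s))
```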

FIGURE 2. Lower-order application questions only require students to apply procedural knowledge, while higher-order application questions require students to apply conceptual knowledge to select the appropriate procedure.

THE VBT WAS USED TO EXAMINE THE DEGREE TO WHICH VISUAL LEARNING SKILLS WERE TARGETED AND REINFORCED

    Achieving competency in the disciplinary practices of science requires that students be provided ample opportunities to practice and receive feedback about their progress toward mastery. Mastery of the skills underpinning visual literacy is supported by course designs that follow the principles of backward design, with assessments and activities created to align with the established learning objectives so they provide practice with and assessment of visual learning skills at all levels (Wiggins and McTighe, 2005). To this end, in an effort to more explicitly target development of visual learning skills, we looked to structure our course such that students were 1) provided opportunities to practice with visual representations across all cognitive levels and 2) assessed on the visual cognitive skills they had practiced.

    Course transformation began by using the VBT to determine the degree to which the existing course curriculum provided opportunities for practice and assessment of visual learning skills. In-class activities and course assessments were collected and categorized with the VBT, thereby establishing a baseline measure of the 1) opportunities for practice and 2) reinforcement through assessment (Figure 3, 2012 data). We defined opportunities for practice as preclass assignments (e.g., reading quizzes), in-class activities, low-stakes assessments, clicker questions, and quizzes. These opportunities were then compared with the summative assessments within each instructional unit. In Fall 2012, students were provided practice with visual representations during the first two units of the course, but practice—particularly at the higher cognitive levels—diminished during the final third of the semester (Figure 3, top left). The importance of visual literacy was reinforced by each of the unit exams, with more than 80% of the points requiring the use of visual representations in some manner (Figure 3, top right). However, while students were provided some opportunities for practice at the evaluation and synthesis levels during the first unit, they were not assessed on a synthesis task until the final exam, and no evaluation tasks appeared on any unit exam in 2012.
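This baseline characterization amounts to tallying, for each instructional unit, the share of practice points and exam points coded at each VBT level. A minimal sketch of that bookkeeping is given below; the item records, point values, and level labels are hypothetical placeholders rather than the actual Fall 2012 data.

```python
from collections import Counter

VBT_LEVELS = ["knowledge", "comprehension", "application_low",
              "application_high", "analysis", "evaluation", "synthesis"]

# Hypothetical coded items: (unit, item type, VBT level, points).
coded_items = [
    (1, "practice", "comprehension", 2),
    (1, "practice", "synthesis", 5),
    (1, "exam", "comprehension", 4),
    (1, "exam", "analysis", 6),
    (2, "practice", "application_low", 3),
    (2, "exam", "application_high", 8),
]

def level_distribution(items, unit, item_type):
    """Percentage of points at each VBT level for one unit and item type."""
    points = Counter()
    for u, kind, level, pts in items:
        if u == unit and kind == item_type:
            points[level] += pts
    total = sum(points.values()) or 1
    return {lvl: 100 * points[lvl] / total for lvl in VBT_LEVELS if points[lvl]}

for unit in (1, 2):
    print(unit, "practice:", level_distribution(coded_items, unit, "practice"))
    print(unit, "exam:    ", level_distribution(coded_items, unit, "exam"))
```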

FIGURE 3. All visual-based tasks in each semester were classified according to the VBT to determine alignment between practice (low-stakes assessments) and assessment (exams) in each exam unit.

    THE VBT GUIDED CREATION OF ADDITIONAL ITEMS TO IMPROVE ALIGNMENT WITH VISUAL LITERACY SKILLS

In Fall 2014, the VBT was used to generate new visual-based practice and assessment items to increase practice with visual representations, particularly on HOCS, and to improve the alignment between practice and assessment. Second-unit content (e.g., enzyme functions, kinetics, and mechanisms) builds directly upon the basic principles of thermodynamics and protein structure covered in the first unit, so we shifted practice opportunities more toward HOCS in unit 2 (54%, as compared with 38% in 2012; Figure 3), as students should have more background knowledge to reason at higher levels. The final unit mainly comprised basic principles of the other macromolecules (i.e., nucleic acids, carbohydrates, and lipids), which had not been previously covered in the semester, so we increased opportunities for practice on LOCS as well as on HOCS in unit 3 (Figure 3, bottom left). Turning to assessment in Fall 2014, we continued to stress the importance of visual literacy in biochemistry, with roughly 80% or more of the exam points associated with a visual representation (Figure 3, bottom right). We also attempted to reduce some of the alignment issues that occurred in Fall 2012 by placing more emphasis on HOCS than on comprehension.

    EFFECTS OF INCREASED PRACTICE ON STUDENT PERFORMANCE

    Given the greater practice with visual representations, particularly on HOCS, and the improved alignment in Fall 2014, we initially predicted students would perform better on visual exam items. We also predicted the increased use of visual representations might lead to a greater general understanding of the content, because the cognitive theory of multimedia learning (CTML) suggests using visual images in addition to text allows for processing of a larger amount of information (Mayer, 2001).

    To test these hypotheses, we looked at measures of student performance at the end of the semester. The course final exam was used to measure students’ ability to reason with visual representations, while student scores on the Introductory Molecular and Cell Biology Assessment (IMCA; Shi et al., 2010) and final course grades were used as measurements of general content understanding. Final exams, which were cumulative and consisted of multiple-choice and short-answer questions, were similar in structure, concepts covered, and types of representations used. The final exam in 2014, as shown in Figure 3 (right panels), did have evaluation and synthesis tasks that were not included in 2012.

Despite increased practice with visual representations and improved alignment in Fall 2014, performance on the final exam decreased significantly (p < 0.001) from Fall 2012 to Fall 2014, with mean scores of 67.2% and 62.8%, respectively. This difference was seen on visual items, which made up the bulk of the exam in both semesters (as indicated in Figure 3, right panels). Looking solely at the visual-based questions, we see that the decrease in performance was limited to the HOCS questions; students performed similarly on LOCS in both semesters (Figure 4). This effect was seen on both the multiple-choice and short-answer sections. Regarding general content understanding, there was no significant difference between semesters on overall course grades or IMCA scores. Examples of HOCS assessment items that were challenging for students can be found in Supplemental Figures S2 and S3.
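The article reports the semester comparison only as a significance level; the specific statistical test is not stated. As one hedged illustration, a two-sample comparison of per-student final-exam percentages could be carried out as follows (the score lists are placeholders, not the actual data, and the original analysis may have used a different test):

```python
from scipy import stats

# Placeholder per-student final-exam percentages; not the actual data.
fall_2012_scores = [72, 65, 70, 61, 68, 74, 63, 69]
fall_2014_scores = [60, 58, 66, 55, 64, 62, 59, 67]

# Welch's two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(fall_2012_scores, fall_2014_scores,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```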

FIGURE 4. Performance on higher-order visual-based exam questions (measured as mean percentage of points earned per question) decreased significantly (*, p = 0.01) in Fall 2014 compared with performance on higher-order questions in 2012 and lower-order questions in 2014.

    DISCUSSION

Students are unlikely to develop mastery without practice, and they are unlikely to practice skills that are not evaluated on high-stakes assessments (Scouller, 1998). Misalignment of practice with assessment can have negative effects on student performance and understanding. An example of misalignment can be seen in the Fall 2012 data; while students were given an opportunity to practice with tasks at the evaluation and synthesis levels during the first unit, these skills were not assessed until the final exam—or not at all, in the case of evaluation (Figure 3, top). Given the gap in time between practice and assessment, students were likely ill-prepared to respond to the synthesis-level question on the final exam. Furthermore, students in Fall 2012 did not receive many opportunities to practice any HOCS during the final unit, possibly sending an implicit message to students that HOCS are less important; assessments, even low stakes, communicate to students what is valued in a course (Black and Wiliam, 2009; Momsen et al., 2013; Offerdahl and Montplaisir, 2014). If students did indeed perceive LOCS to be more important to prepare for on the exam, we hypothesize this perception could have contributed to lower performance on HOCS (though HOCS performance was not shown to be significantly lower than LOCS performance).

We had predicted that providing students with more practice using visual representations and improving alignment between practice and assessment would lead to increased performance. Comparison of the final exams, however, indicated otherwise, as students in Fall 2014 performed significantly lower on visual HOCS. This result should be interpreted with caution, however. The final exams in the two semesters assessed higher cognitive skills to differing degrees, as the Fall 2014 exam did test evaluation and synthesis (Figure 3, right). This could be further complicated by the fact that students in 2012 could avoid the synthesis task included in the short-answer section, but students in 2014 were forced to answer a multipart short-answer question that included either synthesis or evaluation tasks (many students chose to complete both). The decrease in student performance could be attributed to an increase in cognitive level or may have been affected by the inclusion of a visual representation.

    Given the lack of higher-order questions without a visual component in either semester, our data do not support one interpretation over another with regard to the effect of adding a visual component on student performance. The CTML would suggest addition of a visual component would ultimately increase performance, due to an overall reduction of total cognitive load. The CTML posits that, when information is presented in a multimedia manner, the information can be processed through both a verbal channel and a visual channel, which increases the amount of information processed at once and better scaffolds the formation of mental schema (Mayer, 2001). The exam question with the lowest success rate—with only 13 of 238 students answering correctly—required students to reason about reducing ends in glycogen without providing a visual representation of the described molecule (Figure 5). The CTML allows us to hypothesize that adding a representation of glycogen would have allowed some of the information to be processed through the visual channel, increasing the likelihood that students would be able to successfully answer the conceptual question.

FIGURE 5. Students struggled to reason about reducing ends on glycogen when a visual representation of the molecule was not provided.

    In contrast, cognitive load theory (Sweller, 1994) suggests the inclusion of visual representations could negatively impact performance, as it adds to the cognitive demand of the task, namely, that interpreting the image in addition to reasoning through the assessment prompt will overwhelm working memory, resulting in lower performance. Cognitive load theory may help explain the performance result depicted in Figure 6. In the first question, students were reasonably successful (191 of 240 students answered correctly) in selecting the graph representing an inhibitor that increases Km. In contrast, the success rate was significantly lower (90 of 240 students answered correctly) for the second question, in which students should have been able to reason conceptually that an inhibitor, which inhibits enzyme activity, would not increase the velocity of an enzyme reaction. In this question, the presence of the graphs may have increased the cognitive load of the task to such a point that it hindered the students’ ability to integrate their knowledge of what an inhibitor does. Repeated practice over time is a suggested method for reducing the amount of cognitive processing needed to successfully complete a task. While we did increase practice, especially on HOCS, a single semester may be an inadequate amount of time to build enough familiarity with elements, particularly if students have not been given opportunities to practice or be assessed similarly before the course.

FIGURE 6. Students were able to compare graphs to determine which represented an increase in Km. The inclusion of graphs, however, may have hindered students’ ability to reason conceptually about the impact of inhibitors on Vmax.

    CONCLUSION

    Similar to the BBT (Crowe et al., 2008), the VBT is intended to be an assessment tool that can be used by both instructors and students to augment teaching and learning. Specifically, the VBT can assist instructors in classifying visual-based questions based on their cognitive level and in building assessments and activities better aligned with their learning objectives related to visual literacy. Using the VBT to craft visual-based assessment items at each cognitive level would provide instructors with a way to assess student mastery of visual learning skills, identify the areas that students find most challenging, and adapt their instructional practices to enhance focus on those aspects of scientific visual literacy. More broadly, the VBT could be used to evaluate the extent to which visual learning skills are assessed or scaffolded across a curriculum.

    The VBT is also a tool for students as they transition from novice to more expert-like visualization skills. Novice learners tend to focus on the surface features of representations and, as a result, may find it difficult to extract meaningful trends or more complex encoded messages (National Research Council, 2012). Using the VBT to intentionally practice and master basic decoding skills (LOCS) will make their application more automatic, and students will increasingly be able to distinguish the surface features of a representation from deeper structure. Students can also use the VBT as a tool to be more metacognitive about their scientific visual literacy and strategize about how they can improve their skills. Making visual learning skills more explicit in instruction through use of the VBT will likely help students to effectively interpret and create visual representations and develop a better understanding of the content and skills needed to be successful scientists.

    The VBT could also aid researchers interested in investigating instructional strategies for increasing the visual literacy of science students. The data we present here suggest increasing the number of opportunities for practice with visual representations within a semester is not enough to increase student mastery of visual learning skills. Increasing practice without providing explicit feedback may be detrimental to student performance on tasks requiring HOCS, perhaps due to increased demand on the working memory. We cannot definitively conclude, however, whether this effect is due to the increased presence of visual representations or merely an artifact of the increased prevalence of higher-order questions. A better understanding of how the addition of a visual representation to an assessment item impacts the cognitive load associated with responding to the item is needed. Once we understand this, we can move forward to look at how the additional cognitive load may be minimized, should that be the case, or at how best to take advantage of the reduced cognitive load of visual-based tasks, should the alternative hold true.

    ACKNOWLEDGMENTS

    This material is based on work supported by the National Science Foundation (NSF) Graduate Research Fellowship Program under grant no. DUE-1156974. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. Thank you to Alison Crowe, Jennifer Momsen, and Jon Dees for reviewing the VBT, and to Miriam Ziegler for generously sharing some of her HOCS exam questions.

    REFERENCES

• Airey, J., & Linder, C. (2009). A disciplinary discourse perspective on university science learning: Achieving fluency in a critical constellation of modes. Journal of Research in Science Teaching, 46(1), 27–49.
• Allen, D., & Tanner, K. (2002). Approaches to cell biology teaching: Questions about questions. Cell Biology Education, 1(3), 63–67.
• American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC.
• American Society for Biochemistry and Molecular Biology. (2017). Accreditation Program for Bachelor’s Degrees in Biochemistry and Molecular Biology. Retrieved August 12, 2017, from www.asbmb.org/uploadedFiles/Accreditation/application/App%20Guide_032817.pdf
• Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.
• Bissell, A. N., & Lemons, P. P. (2006). A new method for assessing critical thinking in the classroom. BioScience, 56(1), 66–72.
• Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5.
• Bloom, B. S., Krathwohl, D. R., & Masia, B. B. (1956). Taxonomy of educational objectives: The classification of educational goals. New York: McKay.
• Blummer, B. (2015). Some visual literacy initiatives in academic institutions: A literature review from 1999 to the present. Journal of Visual Literacy, 34(1), 1–34.
• Crowe, A., Dirks, C., & Wenderoth, M. P. (2008). Biology in Bloom: Implementing Bloom’s taxonomy to enhance student learning in biology. CBE—Life Sciences Education, 7(4), 368–381.
• Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382.
• Fleiss, J. L. (1981). Statistical methods for rates and proportions (2nd ed.). New York: Wiley.
• Gershon, N., Eick, S. G., & Card, S. (1998). Information visualization. Interactions, 5(2), 9–15.
• Imrie, B. W. (1995). Assessment for learning: Quality and taxonomies. Assessment & Evaluation in Higher Education, 20(2), 175–189.
• Kunen, S., Cohen, R., & Solman, R. (1981). A levels-of-processing analysis of Bloom’s taxonomy. Journal of Educational Psychology, 73(2), 202–211.
• Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
• Merkel, S. (2012). The development of curricular guidelines for introductory microbiology that focus on understanding. Journal of Microbiology & Biology Education, 13(1), 32–38.
• Mnguni, L., Schönborn, K., & Anderson, T. (2016). Assessment of visualisation skills in biochemistry students. South African Journal of Science, 112(9/10). doi:10.17159/sajs.2016/20150412
• Momsen, J., Offerdahl, E., Kryjevskaia, M., Montplaisir, L., Anderson, E., & Grosz, N. (2013). Using assessments to investigate and compare the nature of learning in undergraduate science courses. CBE—Life Sciences Education, 12(2), 239–249.
• Momsen, J. L., Long, T. M., Wyse, S. A., & Ebert-May, D. (2010). Just the facts? Introductory undergraduate biology courses focus on low-level cognitive skills. CBE—Life Sciences Education, 9(4), 435–440.
• National Research Council. (2012). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering. Washington, DC: National Academies Press.
• Offerdahl, E. G., Arneson, J. B., & Byrne, N. (2017). Lighten the load: Scaffolding visual literacy in biochemistry and molecular biology. CBE—Life Sciences Education, 16(1), es1.
• Offerdahl, E. G., & Montplaisir, L. (2014). Student-generated reading questions: Diagnosing student thinking with diverse formative assessments. Biochemistry and Molecular Biology Education, 42(1), 29–38.
• Schönborn, K. J., & Anderson, T. R. (2006). The importance of visual literacy in the education of biochemists. Biochemistry and Molecular Biology Education, 34(2), 94–102.
• Schönborn, K. J., & Anderson, T. R. (2010). Bridging the educational research–teaching practice gap. Biochemistry and Molecular Biology Education, 38(5), 347–354.
• Scouller, K. (1998). The influence of assessment method on students’ learning approaches: Multiple choice question examination versus assignment essay. Higher Education, 35(4), 453–472.
• Shi, J., Wood, W. B., Martin, J. M., Guild, N. A., Vicens, Q., & Knight, J. K. (2010). A diagnostic assessment for introductory molecular and cell biology. CBE—Life Sciences Education, 9(4), 453–461.
• Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295–312.
• Tibell, L. A. E., & Rundgren, C. J. (2010). Educational challenges of molecular life science: Characteristics and implications for education and research. CBE—Life Sciences Education, 9(1), 25–33.
• Trumbo, J. (1999). Visual literacy and science communication. Science Communication, 20(4), 409–425.
• Wiggins, G. P., & McTighe, J. (2005). Understanding by design. Danvers, MA: Association for Supervision and Curriculum Development.