Article

Applied Behavior Analysis (ABA) as a Footprint for Tutoring Systems: A Model of ABA Approach Applied to Olfactory Learning

1 Natural and Artificial Cognition Lab, University of Naples “Federico II”, Via Porta di Massa 1, 80133 Naples, Italy
2 Irfid-Neapolisanit, Via Funari, 80044 Ottaviano NA, Italy
* Author to whom correspondence should be addressed.
Submission received: 19 March 2020 / Revised: 3 April 2020 / Accepted: 7 April 2020 / Published: 9 April 2020

Abstract

Applied Behavior Analysis (ABA) belongs to the family of behavior analysis techniques introduced by the theorists of behaviorism in psychology. It applies behaviorist principles to guide the learning process and can serve as a footprint for building artificial tutoring systems for specific learning processes. In this paper, we delineate the pathway to build an artificial tutoring system that follows ABA footprints, named the ABA tutor, whose implementation reproduces the techniques of ABA. The paper also describes how to use this tutor to favor olfactory learning. In more detail, the ABA tutor is embedded in SNIFF, a system that combines a software and a hardware side to assess and train the sense of smell through gamification. A first experiment involving 90 participants indicated that an artificial tutoring system based on ABA principles can effectively promote olfactory learning. The implications of this approach are discussed.

1. Introduction

Applied Behavior Analysis, known by the acronym ABA, is a widespread set of methods and practices that have been used in the field of psychological intervention since the 1950s. It focuses on translating, in practical and operational terms, the findings and results obtained by the psychologists of behaviorism. Its goal is to guide learning processes in different contexts. It can, therefore, be considered the applicative side of the Analysis of Behavior; it pursues the objective of finding the connections between different behaviors of organisms—the dependent variables—and whatever can have an effect on them—the independent variables (Cooper et al. 2007). The Analysis of Behavior, with its methodology, has notably advanced the study of learning processes (Mazur 2015).
ABA has gained a relevant role in psychological intervention, as it offers the opportunity to address and stimulate new behavior and actions. This process is led by a professional operator: the ABA therapist. Fundamentally, ABA can be defined as a structured and formally identified program that addresses and leads learning processes so as to achieve new, more adaptive behaviors, exploiting the law of effect.
This law, the cornerstone of behaviorism, can be stated as follows: an organism—for example, an animal—will tend to repeat behaviors that lead to satisfying consequences, whereas it will tend to abandon any behavior that is followed by unpleasant outcomes (Thorndike 1927, 1933). This law is simple but powerful, as it can describe a wide range of behaviors.
The traditional experimental setting conceived by Thorndike to demonstrate the law of effect can be pictured this way: in a cage, there is an animal—for example, a cat—that is relatively free to move around and explore this limited environment. During the exploration, if, by chance, it presses a lever, the door of the cage opens and the cat frees itself. The environment offers many elements, among which the caged cat can identify the one that is functional to getting away and select the action that is appropriate to reach the goal (pressing the lever).
In this experimental setting, the main elements are the stimuli in the environment, the actions that can be performed, the consequences of these actions, and the effect of the consequences on the probability of future actions. After more than a century of scientific effort following this finding, almost every aspect related to the law of effect has been explained.
Indeed, in recent years, we have come to understand the law of effect at the neural and molecular level (Kandel et al. 1986), alongside the cognitive side explored by psychologists, which covers processes such as operant conditioning (Skinner 1938, 1953, 1963), latent learning (Seward 1949), and trial and error learning (Boswell 1947; Thorndike 1933). In human beings, the law of effect is not the main learning mechanism (Gamez and Rosas 2005, 2007), but it covers a central role in a wide class of learning and adaptation processes at different degrees of complexity. In fact, the organism, while actively adapting to its environment within the constraints posed by its own sensory and motor features, selects which action can be carried out and, depending on the effects it produces, positive or negative, will repeat or not repeat this action in subsequent similar conditions. The law of effect can indeed be applied to some human behaviors—in particular, to the part of the behavioral repertoire where associative processes play a central role.
In the context of this rich theoretical framework, ABA has established itself as an approach with a well-defined set of methodologies and techniques. According to these methodologies and techniques, a professional operator, the ABA therapist, follows a person, usually with problems of adaptation, in the process of acquiring crucial behaviors that can enhance her/his quality of life, promoting her/his well-being at various levels, including the psychological and social levels. Thanks to this capacity to guide the acquisition of adaptive behaviors, ABA has achieved great success in interventions for children with Autism Spectrum Disorder (Shook 2005; Virués-Ortega 2010), as it permits the reduction of the frequency of problematic actions and favors the emergence of language with communicative goals.
ABA can be applied to the field of Intelligent Tutoring Systems (see Anderson et al. 1985), complementing other approaches that involve artificial neural networks (Fenza et al. 2017) or agent-based systems (Ponticorvo et al. 2017a). In this paper, we propose to exploit the ABA approach to design and implement an artificial tutoring system to be applied in the context of associative learning.
This approach is apparently new, but its basic rationale can be traced back to the problem of dynamically adapting prompt frequency, which is an active debate in Intelligent Tutoring Systems research. Bouchet et al. (2016) examined whether an ITS employing strategies at a meta-cognitive level could benefit from modifying its prompts depending on the self-regulation behaviors displayed by the learner. The results reported by the authors showed that prompt frequency had a notable effect in favoring learning, as also indicated by other studies by the same authors (Bouchet et al. 2013), by Harley et al. (2017), and by Kinnebrew et al. (2015).
As will become evident in the rest of this paper, the ABA approach to ITS (Ponticorvo et al. 2018b) has prompts at its core and focuses on the association of stimuli, giving less importance to more complex cognitive processes, including meta-cognitive analysis. Moreover, this approach is devoted to a particular case of associative learning that is quite a basic form of learning.
Therefore, we describe our effort to conceive and realize a new category of tutoring systems, inspired by ABA, that models the interaction between the ABA professional operator and the learner. In particular, the goal of the present work is to propose and describe a methodology to introduce the principles of ABA into an artificial tutoring system and to verify whether it is effective in favoring associative learning. In this sense, the present paper is meant to contribute to the study of prompt frequency variation in Intelligent Tutoring Systems research.
The paper is structured as follows: in the next section, we introduce the ABA fundamentals to clarify the starting point for the ABA tutor, which is detailed in the following section. Then, the first implementation of the ABA tutor is introduced—a system to train the sense of smell, for which associative learning is essential—together with the experiments we performed on 90 people to verify the effectiveness of the ABA tutor.

2. ABA Fundamentals

The ABA approach, which is the starting point for the ABA tutor, is based on very well-defined rules and guidelines that direct the ABA technician's work. Very concisely, we can say that technicians follow this procedure:
  • they organize the learning environment in such a way that the learner can identify the relevant stimuli: this process moves from easier conditions, where identification is almost immediate, to more difficult ones;
  • they include some elements (objects or events) that, if manipulated or selected by the learner, produce clear and evident consequences;
  • they arrange a reinforcement program with a specific reward supply pattern, so that the learner, while acting in the learning environment, can evaluate, consciously or unconsciously, the consequences as neutral, positive, or negative.
It is, therefore, clear that nothing is left to chance and every aspect must be accurately planned. This planning, related to ABA methodology, starts with a clear and detailed definition of the elements of the environment where the intervention takes place. First of all, the target stimulus must be identified and must be connected to the desired response. In other words, the stimulus–response chain must be decided, so that if the behavioral response is emitted, it will be reinforced and rewarded.
If we translate the traditional experimental setting by Thorndike into ABA terminology, the target stimulus is the lever, whereas the response to be rewarded is pressing the lever. The law of effect, together with operant conditioning, is one of the bases that ABA is built upon: the behavior is represented as operant rather than respondent because it produces effects, outcomes, and consequences and, at the same time, in a circular reaction, it changes in response to these outcomes. It is evident that these principles are strongly related to the core ideas of behaviorism and its concepts: reinforcement, stimuli, and generalization (Granpeesheh et al. 2009). ABA applies these concepts through four main procedures (Ricci et al. 2014): prompting, fading, shaping, and chaining.
We will now briefly describe how these procedures work. Prompting is based on hints, indeed named prompts, that are given to the learner so that she/he can more easily identify the target stimulus. In the experimental condition described in the introduction, the experimenter can facilitate the cat's recognition of the target stimulus—for example, she/he can lead the cat's paw close to the lever and press it: the action that will open the cage door. Fading exploits actions that reinforce the desired responses while the target stimuli undergo slight modifications. Again, translating the Thorndike experimental condition into ABA terms, the researcher can insert more than one lever into the cage, which starts the exit mechanism at least partially, and give a food reward for lever pressing (Wolery and Gast 1984). The procedure that makes it possible to acquire a behavior that is not present in the usual repertoire of a certain organism is named shaping. Obviously, it is impossible to give a reward and reinforce a certain action when it is never displayed: in this case, the procedure involves rewarding a response that is shown at least occasionally and shares some features with the target one. Chaining is used when the goal is to teach long behavioral sequences that can be split into smaller sequences or single behaviors (Lindsley 1996). These principles (Cooper et al. 2007) have been incorporated into the ABA tutor (Figure 1), which is the focus of the next section.

3. The Tutor Based on ABA Principles: The ABA Tutor

The artificial ABA tutor we describe in this paper is based on the two principles introduced in the previous section: the law of effect and prompting. In more detail, the artificial ABA tutor treats the law of effect and prompting as two modules that can be active or not, thus producing three different kinds of tutors, which we named:
  • P tutor: in the P tutor, only the prompting module is active and it determines if prompts are included or not;
  • S tutor: in the S tutor, only the law of effect affects the interaction between the tutor and the learner modifying the exposure to stimuli;
  • SP tutor: in the SP tutor, both the stimuli exposure based on the law of effect and prompting modules are active.
For the goals of the present study, we also considered the condition with no active module as a base-line. In Figure 1, the ABA tutor architecture, including the two described modules, is depicted.
On the left side of Figure 1, the first module of the ABA tutor is represented: the prompting module, whose role is to give hints to the participants. These hints favor the process of correctly associating stimulus and response. If the module is inactive, no prompt is given and, therefore, no hint is provided to the participant. On the right, we find the other module: the stimuli exposure module. Its function is to determine which stimuli are presented to the participants. To perform this selection, the stimuli exposure module can operate randomly, or it can follow a pre-defined rule—for example, a probabilistic rule. This probabilistic rule is governed by a probabilistic engine, pictured in Figure 2. Table 1 summarizes how the ABA tutors derive from the activation of the modules.
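To make the configuration logic concrete, the sketch below shows how switching the two modules on or off yields the four experimental conditions; the class and attribute names (ABATutorConfig, prompting_active, stimuli_exposure_active) are ours and not part of the published system.

```python
from dataclasses import dataclass

@dataclass
class ABATutorConfig:
    """Configuration of the ABA tutor: each module can be switched on or off."""
    prompting_active: bool          # prompting module: gives hints to the learner
    stimuli_exposure_active: bool   # law-of-effect module: adapts stimulus selection

    @property
    def label(self) -> str:
        """Return the tutor label corresponding to the active modules (see Table 1)."""
        if self.prompting_active and self.stimuli_exposure_active:
            return "SP tutor"
        if self.stimuli_exposure_active:
            return "S tutor"
        if self.prompting_active:
            return "P tutor"
        return "base-line (no active module)"

# The four configurations used as experimental conditions in the study.
conditions = [
    ABATutorConfig(prompting_active=False, stimuli_exposure_active=False),  # base-line
    ABATutorConfig(prompting_active=False, stimuli_exposure_active=True),   # S
    ABATutorConfig(prompting_active=True,  stimuli_exposure_active=False),  # P
    ABATutorConfig(prompting_active=True,  stimuli_exposure_active=True),   # SP
]
```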
The probabilistic engine, represented in Figure 2, is based on four categories into which the stimuli can be placed. Each category hosts stimuli and associates them with a defined probability of being selected. The first group, named A, contains the stimuli not yet presented or presented but not recognized; for this group, the probability of exposure is 60%. The second group, B, includes the stimuli that the participant was able to identify at least once; their probability of being presented again is 25%. Group C includes stimuli that the participant identified at least twice; they are associated with a probability of 10%. Group D hosts the highly recognized stimuli (recognized at least three times); their chance of being shown again is set to 5%.
When the learning session begins, all stimuli belong to Group A; as the trials go by, the stimuli move from one group to another, following the arrows in the figure. Each group is associated with a probability rate that does not change, but each stimulus can move from one group to another. When the session ends, if it has been successful, all stimuli should have moved to Category D. From this description, it is evident that the core of this mechanism resides in the association between stimuli and responses. For this reason, we implemented the ABA tutor in a domain where associative learning is fundamental: olfactory learning. As olfactory learning is a somewhat unexplored issue, we devote the next paragraph to sketching its main features. Then, in Section 4, we introduce in detail the tool where the ABA tutor is applied to olfactory learning.
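A minimal sketch of this stimuli exposure engine is given below. It assumes that a group is first drawn with the fixed weights (renormalized over the non-empty groups) and that a stimulus is then picked uniformly within the chosen group; neither detail is specified in the text, so both are our assumptions, as are all identifiers.

```python
import random
from collections import defaultdict

# Fixed exposure probabilities of the four groups, as described in the text.
GROUP_WEIGHTS = {"A": 0.60, "B": 0.25, "C": 0.10, "D": 0.05}

class StimuliExposureEngine:
    """Sketch of the probabilistic engine: stimuli move from Group A towards
    Group D as they are recognized, and groups are sampled with fixed weights."""

    def __init__(self, stimuli):
        # Every stimulus starts in Group A (never presented or never recognized).
        self.recognitions = {s: 0 for s in stimuli}

    def group_of(self, stimulus) -> str:
        # A: 0 recognitions, B: 1, C: 2, D: 3 or more.
        return "ABCD"[min(self.recognitions[stimulus], 3)]

    def next_stimulus(self):
        """Draw a group (renormalizing over non-empty groups), then a stimulus."""
        groups = defaultdict(list)
        for s in self.recognitions:
            groups[self.group_of(s)].append(s)
        labels = [g for g in GROUP_WEIGHTS if groups[g]]
        weights = [GROUP_WEIGHTS[g] for g in labels]
        chosen = random.choices(labels, weights=weights, k=1)[0]
        return random.choice(groups[chosen])  # uniform pick within the group (assumption)

    def record_response(self, stimulus, recognized: bool):
        # A correct recognition moves the stimulus one group further (A -> B -> C -> D).
        if recognized:
            self.recognitions[stimulus] += 1
```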

3.1. Why Olfaction

In human beings, the most valued senses are no doubt sight and hearing, but olfaction played an important role in evolutionary history and is still important for cognitive functions. Olfaction has the most direct connection with the cerebral areas involved in emotions and memory and it has, therefore, particular links with these functions.
For the goals of this paper, the link between olfaction and memory is particularly relevant, as olfactory memory, in comparison with memory in other sensory modalities, is ruled by specific and distinct mechanisms. Moreover, it is possible to identify two forms of olfactory memory depending on stimulus elaboration: the first is based on familiarity and is more perceptual; the second is based on memory and is, therefore, contextual. When talking about olfactory memory, we refer both to the memory of odors and to memory evoked by odors (Herz et al. 2004). Koster (2002) has defined some peculiarities of olfactory memory: it is nominal, episodic and non-semantic, emotional, and particularly durable. It can be defined as nominal because it gives hints about the qualitative differences between odors, even if it is difficult to determine differences in intensity and to name them; this probably derives from the adaptive value of olfaction, where it is crucial to distinguish between familiar and non-familiar stimuli. It is episodic rather than semantic, as it is easier to remember the context and the moment when a certain odor was perceived than to name it. Another important feature is that olfactory learning cannot rely on techniques or tricks based on semantics, in contrast with other mnemonic processes. These peculiarities make olfaction and olfactory learning an adequate example of associative learning that is suitable for testing the ABA tutor. In the “Materials and Method” section, we describe the experiment we ran to answer the following research questions:
- Is the ABA tutor, as a whole, effective in promoting associative learning?
- What are the effects of the ABA tutor modules?

4. Materials and Method

4.1. The Tool for Olfactory Learning: SNIFF

The tutor based on ABA principles was implemented in SNIFF, a tool that integrates digital and physical elements and can be used to test and develop the sense of smell. SNIFF is represented in Figure 3 (Di Fuccio et al. 2016; Ponticorvo and Miglino 2018; Ponticorvo et al. 2017a, 2017b, 2018a). This tool takes inspiration from the pedagogical framework derived from the Montessori studies (Montessori 2013) and, more specifically, from the smelling bottles or jars. The latter are little bottles that contain fragrances and are usually found in the sensorial area of a Montessori learning environment. In line with the Montessori smelling activity, SNIFF adopts a gamified approach with numerous trials. At each step, the player has to make an association between a smell in a little jar and the corresponding picture shown on the screen. The visual stimuli to be proposed—which the player must associate with the corresponding smell—reside in the SNIFF database, and their selection is determined by the ABA tutor modules described above. Running this activity can increase olfactory abilities. We use 30 smelling bottles (described above and represented in Figure 3), each with an RFID (Radio Frequency IDentification) tag attached. These tags can be detected by an antenna, thus connecting the digital and the physical side: the smelling jars become augmented materials.
The SNIFF tool is developed with Smart Technologies to Enhance Learning and Teaching (STELT) by Miglino et al. (2013), a platform that allows the design and implementation of hybrid educational materials. By hybrid, we mean that these materials have both a digital and a physical side. Moreover, this platform offers the opportunity to include artificial intelligence modules (Di Fuccio et al. 2015; Ferrara et al. 2016). With STELT, it is possible to design learning scenarios, to record the player's interactions, and to use tutoring system functionalities, such as delivering feedback.
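As a rough illustration of how the RFID tags bridge the physical jars and the digital game, the snippet below maps tag identifiers to jars and checks a read against the current target odor; the tag codes, the TAG_TO_JAR table, and the handle_tag_read function are hypothetical, since the paper does not document STELT's actual reading interface.

```python
# Hypothetical mapping from RFID tag identifiers to smelling jars; the real tag
# codes and the STELT reading interface are not documented in the paper.
TAG_TO_JAR = {
    "04A1B2C3": {"odor": "honey",  "color": "yellow"},
    "04D4E5F6": {"odor": "orange", "color": "green"},
    # ... one entry per smelling bottle (30 in SNIFF)
}

def handle_tag_read(tag_id: str, target_odor: str) -> bool:
    """Called when the antenna detects a jar: compare its odor with the target."""
    jar = TAG_TO_JAR.get(tag_id)
    if jar is None:
        return False  # unknown tag: ignore the read
    return jar["odor"] == target_odor
```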

4.2. The ABA Tutor in SNIFF

The tutoring system integrated in SNIFF is the ABA tutor with the two modules described in the previous section. The first module governs the prompting function. In more detail, the jars are associated with a color that distinguishes five categories, each formed by six smells. There is no semantic connection between the color and the smell. SNIFF can ask the player to select an odor within a category (e.g., “look for this odor in the green or yellow jars”): the prompts are offered by the colored jars (Figure 3). The second module governs the exposure of stimuli: it allows an adaptation to the individual player and her/his ability level. If the player makes mistakes, the probability that a certain odor will be presented again increases; if she/he succeeds, it decreases.
Moreover, the adaptation also involves the first module: in fact, if the player makes mistakes, SNIFF gives more precise hints (e.g., “look for this odor in the green jars”). If she/he succeeds, it will give less precise hints (e.g., “look for this odor in the green, blue, or yellow jars”).
Imagine a player starting from the intermediate level: she/he is asked to identify an odor within two categories (“find the honey smell. It is in a yellow or blue jar”). If she/he makes a mistake, in the subsequent iteration, she/he will have to look in one single category (“it is in a blue jar”); if she/he is able to identify the odor, she/he moves to a more difficult level (“find the orange smell. It is in a yellow, blue, or green jar”).
The player receives timely feedback: if the selection is correct, SNIFF shows a little anecdote about the odor on the screen; if it is incorrect, SNIFF indicates the odor to find and suggests searching again. In the SNIFF game, the player has 50 trials, in which the 30 stimuli are proposed one or more times, depending on the stimuli exposure engine and its adaptation to the player's level.
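The hint adaptation just described can be sketched as follows: the precision of a hint is modeled as the number of color categories it names, shrinking after a mistake and growing after a success. The class, the starting level, and the list of five colors (only green, yellow, and blue are named in the text) are our assumptions.

```python
import random

# Five jar-color categories; only green, yellow, and blue are named in the paper,
# so the remaining colors are placeholders.
COLORS = ["yellow", "blue", "green", "red", "white"]

class PromptingModule:
    """Sketch of the adaptive hint: the number of color categories named in the
    hint grows after a correct answer (less precise) and shrinks after a mistake."""

    def __init__(self, start_level: int = 2):
        # level = how many color categories the hint spans (1 = most precise).
        self.level = start_level

    def hint(self, target_color: str) -> str:
        # The hint always includes the target jar's color plus (level - 1) distractors.
        distractors = random.sample([c for c in COLORS if c != target_color], self.level - 1)
        span = [target_color] + distractors
        random.shuffle(span)
        return "look for this odor in the " + " or ".join(span) + " jars"

    def update(self, correct: bool):
        if correct:
            self.level = min(self.level + 1, len(COLORS))  # broaden the hint
        else:
            self.level = max(self.level - 1, 1)            # narrow the hint
```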

4.3. Participants

The experiment involved 90 participants: 45 males and 45 females. Thirty-three were non-smokers (36.67%) and 57 were smokers (63.33%). The sample's average age was 22.65 years. Participants were recruited through a call among the students of the department where the experiment took place. Of the people who volunteered, 90 were randomly selected. As the study focused on olfactory behavior, it was important to have participants of about the same age, with a balanced gender ratio, and without any olfactory deficit. None of the participants had cognitive problems or neurological impairments. Participants were randomly assigned to the four experimental conditions, as described in the next subsection.
The goal of the experiment described here was to test the ABA tutors represented in Table 1, in order to understand the effects of the ABA tutor as a whole and of its two modules. Considering the different implementations of the ABA tutor and the base-line condition (where no tutor is active), we therefore had four experimental conditions: the P tutor (P), with only the prompting module active; the S tutor (S), with only the stimuli exposure module active; the SP tutor, with both modules active; and the base-line condition, with both modules inactive.
We used the following procedure: the researcher explained to the participant how the experimental session worked, introducing SNIFF as a game to improve olfactory abilities. The SNIFF tool showed a picture of a fruit or a flower on the screen, and the participant had to identify the jar containing the corresponding odor. The researcher did not give any help or hint to the participant. At the end of all sessions, participants underwent a brief interview to obtain information on the usability of the tool.

5. Results

Here, we describe the results both for the 30 stimuli and for the 50 trials foreseen by the SNIFF game, so as to distinguish the improvement in odor recognition from overall game performance. Of the 30 stimuli, participants recognized an average of 20.03 (SD: 5.79). There was a gender difference in odor recognition, as reported in the literature (Brand and Millot 2001): women performed better in olfactory recognition, with an average of 21.36 (SD: 0.81) versus 19.05 (SD: 0.73) for men (see Figure 4).
Considering the experimental conditions, the comparison between averages, performed with a one-way ANOVA, indicated a statistically detectable difference both for the 30 stimuli (F = 13.416; p = 0.000) and for the 50 trials (F = 7.593; p = 0.000).
The post-hoc comparison on the 30 stimuli (Table 2), performed with the Bonferroni method, revealed statistically detectable differences between the following conditions: base-line and S; base-line and P; base-line and SP; S and SP; P and SP.
For the 50 trials (Table 3), the post-hoc comparison with the Bonferroni method revealed significant differences between the following conditions: base-line and P; base-line and SP; S and P; and S and SP.
To investigate the effects of the stimuli selection and of the prompts, we performed a univariate analysis of variance, which detected a significant effect of the stimuli exposure (F = 20.186; p = 0.000) and prompts (F = 21.152; p = 0.000) variables on the 30 stimuli, with no interaction with the gender and smoker variables. Over the 50 trials, women recognized, on average, 31.06 stimuli (SD: 1.22), and men recognized, on average, 27.09 stimuli (SD: 1.09), as shown in Figure 4. For the 50 trials, the ANOVA revealed no significant effect of the stimuli exposure variable (F = 0.30; p = 0.720) and a significant effect of prompts (F = 22.585; p = 0.000), with no interaction with the gender and smoker variables.
To further investigate this result, we divided the participants' responses into three blocks: trials 1 to 15, 16 to 30, and 31 to the last. In this way, we identified three moments along the game: start, middle, and end. In particular, we expected the last block to show the effect of the ABA tutor on game performance (see Table 4). We then categorized the participants according to their ability: if, in a block, a participant had replied correctly between 33% and 66% of the time, she/he was categorized as medium; if lower, as low; if higher, as good (a behavioral categorization). We placed the participants' replies in the corresponding block, considered the last time window, and counted how many people fell into each category. We then focused on good participants at the end of the game for the four experimental conditions, as reported in Table 4. In the SP condition, the presence of both modules corresponded to the highest number of good players in comparison with the other groups. The S and P conditions, where only one module was active, led to an intermediate number of good players, higher than in the base-line condition, where no module was active. The SP condition thus had the highest number of “good” participants, with a significant difference (Chi2 = 0.999; p < 0.05).
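A short sketch of this behavioral categorization, under the assumption that each response is recorded as a Boolean (True = correct) and that the thresholds are applied to the proportion of correct replies in each block, is given below; all function names are ours.

```python
def categorize_block(responses: list) -> str:
    """Categorize a block of Boolean responses (True = correct) as low/medium/good."""
    accuracy = sum(responses) / len(responses)
    if accuracy < 0.33:
        return "low"
    if accuracy <= 0.66:
        return "medium"
    return "good"

def categorize_participant(responses: list) -> dict:
    """Split the 50 trials into start (1-15), middle (16-30), and end (31-last)."""
    blocks = {"start": responses[:15], "middle": responses[15:30], "end": responses[30:]}
    return {name: categorize_block(block) for name, block in blocks.items()}

# Example: count the "good" participants at the end of the game for one condition,
# where all_responses is a list of per-participant response lists (hypothetical data).
# good_at_end = sum(categorize_participant(r)["end"] == "good" for r in all_responses)
```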
The results of this test indicated that both modules of the ABA tutor had positive effects on olfactory performance: the effects are significant for stimulus recognition, which measures the improvement in odor recognition. For overall game performance, the effect emerged for prompts, whereas the effect of stimuli exposure became evident when we focused on the participants' ability in the final phase of the game.

6. Conclusions

The ABA approach has become widespread in interventions for children with atypical development—in particular, children with ASD (Autism Spectrum Disorder)—because it offers the opportunity to modify maladaptive behaviors and to acquire new adaptive ones. On the other hand, intelligent tutoring systems have become an essential tool to support learning and teaching in many different contexts. The proposal of an intelligent tutoring system inspired by ABA aims at offering a new perspective on ITS, focused on associative learning, and at providing ABA professionals with tools and methodologies that can be integrated into their daily routine.
This study has some limitations: we have addressed mainly olfactory learning, but this approach should be widened to include other kinds of associative behavior, and also non-associative behavior, to be more meaningful. In fact, olfactory skills are an example of associative learning, but other kinds of knowledge and skills may be tutored using the ABA approach.
Moreover, in this paper, we have described ABA as a set of methods and techniques, but it is worth underlining that the ABA methodology also has a strong generative aspect. We use this adjective to stress that ABA can favor the emergence—and lead to the stabilization—of one or more behaviors already present in the learner's behavioral repertoire, thus helping children to display them in the appropriate context. These behaviors do not come from above, imposed by the therapist, but emerge from the child.
To use a metaphor, the ABA therapist works like the Greek philosopher Socrates, who helped the people he talked with to draw out their own thoughts. In the same way, the ABA therapist promotes the externalization of behaviors that belong to the child's repertoire and can be adaptive.
This makes it evident that the ABA therapist cannot disappear and be replaced by the artificial ABA tutor; rather, the tutor is a crutch, a useful helping tool for the technician, who can focus on the maieutic process. The ABA tutor can support the technical aspects (recording data and offering personal profiles) that represent the child's situation in terms of challenges and opportunities. With the present paper, we have shown that the ABA tutor can be effective in promoting some kinds of associative learning; in future work, we will investigate how to apply it to other contexts, with different ages and with children with typical and atypical development. This will allow us to verify its effectiveness in wider contexts where associative learning is important.

Author Contributions

Conceptualization, M.P. and O.M.; methodology, M.P. and O.M.; software, M.P. and O.M.; validation, M.P. and A.R.; formal analysis, M.P. and A.R.; investigation, O.M. and M.P.; resources, A.R. and O.M.; data curation, M.P.; writing—original draft preparation, M.P.; writing—review and editing, M.P., A.R., and O.M.; supervision, O.M.; project administration, O.M. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the Master's Thesis students who helped to collect the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anderson, John R., C. Franklin Boyle, and Brian J. Reiser. 1985. Intelligent tutoring systems. Science 228: 456–62. [Google Scholar] [CrossRef] [PubMed]
  2. Boswell, Foster P. 1947. Trial and error learning. Psychological Review 54: 282. [Google Scholar] [CrossRef] [PubMed]
  3. Bouchet, Francois, Jason M. Harley, and Roger Azevedo. 2013. Impact of different pedagogical agents adaptive self-regulated prompting strategies on learning with MetaTutor. In International Conference on Artificial Intelligence in Education. Berlin: Springer, pp. 815–19. [Google Scholar]
  4. Bouchet, Francois, Jason M. Harley, and Roger Azevedo. 2016. Can adaptive pedagogical agents prompting strategies improve students learning and self-regulation? In Intelligent Tutoring Systems. Paper presented at 13th International Conference, Kunming, China, June 24–26. Edited by A. Micarelli, J. Stamper and K. Panourgia. Zagreb: Springer International Publishing, pp. 368–74. [Google Scholar]
  5. Brand, Gerard, and Jean-Louis Millot. 2001. Sex differences in human olfaction: Between evidence and enigma. The Quarterly Journal of Experimental Psychology Section B 54: 259–70. [Google Scholar] [CrossRef] [PubMed]
  6. Cooper, John O., Timothy E. Heron, and William L. Heward. 2007. Applied Behavior Analysis. Columbus: Merrill Publishing Company. [Google Scholar]
  7. Di Fuccio, Raffaele, Michela Ponticorvo, Andrea Di Ferdinando, and Orazio Miglino. 2015. Towards Hyper Activity Books for Children. Connecting Activity Books and Montessori-like Educational Materials. In Design for Teaching and Learning in a Networked World. Cham: Springer, pp. 401–6. [Google Scholar]
  8. Di Fuccio, Raffaele, Michela Ponticorvo, Fabrizio Ferrara, and Orazio Miglino. 2016. Digital and multisensory storytelling: Narration with smell, taste and touch. In European Conference on Technology Enhanced Learning. Cham: Springer, pp. 509–12. [Google Scholar]
  9. Fenza, Giuseppe, Francesco Orciuoli, and Demetrios G. Sampson. 2017. Building Adaptive Tutoring Model Using Artificial Neural Networks and Reinforcement Learning. Paper presented at IEEE 17th International Conference on Advanced Learning Technologies, ICALT 2017, Timisoara, Romania, July 3–7; art. no. 8001832. pp. 460–62. [Google Scholar]
  10. Ferrara, Fabrizio, Michela Ponticorvo, Andrea Di Ferdinando, and Orazio Miglino. 2016. Tangible interfaces for cognitive assessment and training in children: LogicART. In Smart Education and e-Learning 2016. Cham: Springer, pp. 329–38. [Google Scholar]
  11. Gamez, A. Matias, and Juan M. Rosas. 2005. Transfer of stimulus control across instrumental responses is attenuated by extinction in human instrumental conditioning. International Journal of Psychology and Psychological Therapy 5: 207–22. [Google Scholar]
  12. Gamez, A. Matias, and Juan M. Rosas. 2007. Associations in human instrumental conditioning. Learning and Motivation 38: 242–61. [Google Scholar] [CrossRef]
  13. Granpeesheh, Doreen, Dennis R. Dixon, Jonathan Tarbox, Andrew M. Kaplan, and Arthur E. Wilke. 2009. The effects of age and treatment intensity on behavioral intervention outcomes for children with autism spectrum disorders. Research in Autism Spectrum Disorders 3: 1014–22. [Google Scholar] [CrossRef]
  14. Harley, Jason M., Michelle Taub, Roger Azevedo, and Francois Bouchet. 2017. Let’s Set Up Some Subgoals: Understanding Human-Pedagogical Agent Collaborations and Their Implications for Learning and Prompt and Feedback Compliance. IEEE Transactions on Learning Technologies 11: 54–66. [Google Scholar] [CrossRef]
  15. Herz, Rachel S., Corrente Schankler, and Sophia Beland. 2004. Olfaction, emotion and associative learning: Effects on motivated behavior. Motivation and Emotion 28: 363–83. [Google Scholar] [CrossRef]
  16. Kandel, Eric R., Marc Klein, Vincent F. Castellucci, Samuel Schacher, and Philip Goelet. 1986. Some principles emerging from the study of short-and long-term memory. Neuroscience Research 3: 498–520. [Google Scholar] [CrossRef]
  17. Kinnebrew, John S., Brian C. Gauch, James R. Segedy, and Gautam Biswas. 2015. Studying student use of self-regulated learning tools in an open-ended learning environment. In International Conference on Artificial Intelligence in Education. Cham: Springer, pp. 185–94. [Google Scholar]
  18. Koster, Egon P. 2002. The specific characteristics of the sense of smell. In Olfaction, Taste and Cognition. Cambridge: Cambridge University Press, pp. 27–43. [Google Scholar]
  19. Lindsley, Ogden R. 1996. Is fluency free-operant response-response chaining? The Behavior Analyst 19: 211–24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Mazur, James E. 2015. Learning and Behavior: Instructor’s Review Copy. London: Psychology Press Routledge. [Google Scholar]
  21. Miglino, Orazio, Andrea Di Ferdinando, Massimiliano Schembri, Massimiliano Caretti, Angelo Rega, and Carlo Ricci. 2013. STELT (Smart Technologies to Enhance Learning and Teaching): Una piattaforma per realizzare ambienti di realtà aumentata per apprendere, insegnare e giocare. Sistemi Intelligenti 25: 397–404. [Google Scholar]
  22. Montessori, Maria. 2013. The Montessori Method. Piscataway: Transaction Publishers. [Google Scholar]
  23. Ponticorvo, Michela, and Orazio Miglino. 2018. Hyper activity books for children: How technology can open books to multisensory learning, narration and assessment. Qwerty-Open and Interdisciplinary Journal of Technology, Culture and Education 13: 1. [Google Scholar]
  24. Ponticorvo, Michela, Raffaele Di Fuccio, Andrea Di Ferdinando, and Orazio Miglino. 2017a. An agent-based modelling approach to build up educational digital games for kindergarten and primary schools. Expert Systems 34: e12196. [Google Scholar] [CrossRef]
  25. Ponticorvo, Michela, Fabrizio Ferrara, Raffaele Di Fuccio, Andrea Di Ferdinando, and Orazio Miglino. 2017b. SNIFF: A game-based assessment and training tool for the sense of smell. In International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning. Cham: Springer, pp. 126–33. [Google Scholar]
  26. Ponticorvo, Michela, Raffaele Di Fuccio, Fabrizio Ferrara, Angelo Rega, and Orazio Miglino. 2018a. Multisensory educational materials: Five senses to learn. In International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning. Cham: Springer, pp. 45–52. [Google Scholar]
  27. Ponticorvo, Michela, Angelo Rega, and Orazio Miglino. 2018b. Toward tutoring systems inspired by applied behavioral analysis. In International Conference on Intelligent Tutoring Systems. Cham: Springer, pp. 160–69. [Google Scholar]
  28. Ricci, Carlo, Chiara Magaudda, Giorgia Carradori, Delia Bellifemine, and Alberta Romeo. 2014. Il manuale ABAVB-Applied Behavior Analysis and Verbal Behavior: Fondamenti, tecniche e programmi di intervento. Rome: Edizioni Centro Studi Erickson. [Google Scholar]
  29. Seward, John P. 1949. An experimental analysis of latent learning. Journal of Experimental Psychology 39: 177. [Google Scholar] [CrossRef] [PubMed]
  30. Shook, Gerald L. 2005. An Examination of the Integrity and Future of the Behavior Analyst Certification Board R Credentials. Behavior Modification 29: 562–74. [Google Scholar] [CrossRef] [PubMed]
  31. Skinner, Burrhus F. 1938. The Behavior of Organisms. New York: Appleton-Century-Crofts. [Google Scholar]
  32. Skinner, Burrhus F. 1953. Science and Human Behavior. New York: Free Press. [Google Scholar]
  33. Skinner, Burrhus F. 1963. Operant behavior. American Psychologist 18: 503–15. [Google Scholar] [CrossRef]
  34. Thorndike, Edward L. 1927. The law of effect. The American Journal of Psychology 39: 212–22. [Google Scholar] [CrossRef]
  35. Thorndike, Edward L. 1933. A proof of the law of effect. Science 77: 173–75. [Google Scholar] [CrossRef] [PubMed]
  36. Virués-Ortega, Javier. 2010. Applied behavior analytic intervention for autism in early childhood: Meta-analysis, meta-regression and dose–response meta-analysis of multiple outcomes. Clinical Psychology Review 30: 387–99. [Google Scholar] [CrossRef] [PubMed]
  37. Wolery, Mark, and David L. Gast. 1984. Effective and Efficient Procedures for the Transfer of Stimulus Control. Topics in Early Childhood Special Education 4: 52–77. [Google Scholar] [CrossRef]
Figure 1. ABA tutor architecture with two modules.
Figure 2. The stimuli exposure module: the probabilistic core for selecting stimuli.
Figure 3. SNIFF hardware and software system: on the left side, an interaction with a child; on the right, the colored jars with the smells.
Figure 4. Average of recognized odors according to gender.
Table 1. ABA modules and resulting tutors.

Law of Effect    Prompting     Tutor
Active           Active        SP
Active           Non-Active    S
Non-Active       Active        P
Table 2. One-way ANOVA results on 30 stimuli, post-hoc comparisons. The average difference (absolute value) and the associated probability are in parentheses. Statistically detectable differences are in bold.

            Base-Line        S                P                SP
Base-line   -
S           4.159 (0.029)    -
P           4.453 (0.017)    0.293 (≈1)       -
SP          9.409 (0.000)    5.250 (0.003)    4.957 (0.007)    -
Table 3. One-way ANOVA results in the 50 trials, post-hoc comparisons. The average difference (absolute value) and the associated probability are in parentheses. Statistically detectable differences are in bold.

            Base-Line        S                P                SP
Base-line   -
S           0.197 (≈1)       -
P           7.158 (0.010)    7.355 (0.06)     -
SP          7.541 (0.007)    7.738 (0.004)    0.383 (≈1)       -
Table 4. Number of good participants for each condition.

SP    S     P     Base-Line
21    16    16    5
