Article

The Next Generation of Edutainment Applications for Young Children—A Proposal

by Adriana-Mihaela Guran *,†, Grigoreta-Sofia Cojocar and Laura-Silvia Dioşan
Department of Computer Science, Babeş-Bolyai University, 400085 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 23 December 2021 / Revised: 9 February 2022 / Accepted: 16 February 2022 / Published: 19 February 2022

Abstract

Edutainment applications are a type of software designed to be entertaining while also being educational. In the current COVID-19 pandemic context, when children have to stay home due to the social distancing rules, edutainment applications for young children are used more and more each day. However, are these applications ready to take the place of an in-person teacher? In this paper, we propose a new generation of edutainment applications that are more suitable for preschoolers (aged 3–6 years old in our country) and closer to the in-person student–teacher interaction: emotion-aware edutainment applications. We discuss the most important challenges that must be overcome in developing this kind of application (i.e., recognizing children’s emotions, enhancing the edutainment application with emotion awareness, and adapting the interaction flow) and the first steps that we have taken toward developing them.

1. Introduction

The term edutainment is a blend of education and entertainment [1,2,3]. Edutainment applications are a type of software designed to be entertaining while also being educational. Various studies have analyzed the impact that the use of edutainment applications has on the learning outcome [4,5]. Their results show that using edutainment applications in classrooms positively influences the learning outcome. Edutainment applications for very young children (aged 3–6 years) need to integrate the learning goals with young children’s main activity, play. These applications are meant to help educators teach and consolidate new knowledge, especially now, when society is facing multiple challenges due to the COVID-19 pandemic and most educational activities are performed using digital approaches (online or offline). Modern educational approaches include a broad range of technology-enhanced strategies to support digital learning. Existing digital technology can assist students in learning and can play a crucial role in education, but it is not capable of replacing teachers. A teacher is not just a facilitator of knowledge, but also a guide, a mentor and an inspiration for students. Teachers do more than the one-way task of instructing students. They can recognize social cues that would be impossible for a machine to identify, especially non-verbal or invisible (natural) interactions that affect the learning experience. These cues help them recognize students’ difficulties that might be more personal or emotional in nature. Teachers also help contextualize lessons in real time, something a piece of technology might not be able to do. Human interaction cannot be replaced by computers, and human skills like decision-making or time management cannot be taught by technology. Technology can by no means replace teachers, but it can be used effectively to enhance the learning process.
Because, in the current context of society, technology not only supports the learning process but sometimes even replaces the in-person student–teacher interaction, we should try to empower technology with some of the skills that teachers naturally have. Recognizing emotions during the learning process is an ability that teachers have, but one that interactive applications usually do not possess, even though emotions have a high impact on the results of the learning process [6,7,8].
The current trend in education is to include technology. Moreover, due to the pandemic context, the educational system has been forced to replace physical interaction with remote learning supported by technology. While solutions to continue the learning process have been found for school children, for preschoolers the lack of digital resources and of digital competencies has caused difficulties. Digital applications developed for preschoolers should provide the learning content, but they should also be able to mediate the children’s interaction with technology by being aware, at least, of the children’s emotional state. In this paper, we propose a new type of edutainment application for young children, one that takes into consideration the emotions of the young user during the interaction and adapts when a negative emotion is identified.
The idea of enhancing learning support tools with emotion awareness and adapting their interaction flow based on the emotions of the learner is not new, and various studies are available in the field. Feidakis provides a summary of emotion-aware systems designed for e-learning in virtual settings in [9]. Another study, by Ruiz et al. [10], suggested a method for assessing students’ mood based on a model of twelve positive and negative emotions, using self-reports and observed interactions with teachers. However, the existing studies and approaches have been validated through case studies on university students and in the context of e-learning systems, and most of them do not automatically identify the learner’s emotions while interacting with the application. Usually, the user’s emotions are identified based on questionnaires or other kinds of input from the user. Our proposal is different because it addresses young children, aged 3–6 years old, whose emotional state changes quickly. The very young age of these users makes automatic emotion recognition difficult. Furthermore, other existing emotion recognition methods, like self-reporting, cannot be applied to this type of user.
The main contribution of this paper is to propose a new generation of edutainment applications for young children (aged 3–6 years old): emotion-aware edutainment applications. We present the need for such applications, the advantages that they bring, the challenges that must be overcome in order to develop them, a possible architecture, a three-phase development process, and a very simple prototype.
The rest of the paper is structured as follows. Section 2 presents the concept of edutainment applications and the challenges in designing them for young children. In Section 3, we present our proposal for the next generation of edutainment applications: emotion-aware edutainment applications. We describe the existing challenges and possible solutions. A proof of concept prototype is presented in Section 4. The discussion is given in Section 5. The paper ends with further work (Section 6).

2. Edutainment Applications for Young Children

Educational entertainment, or edutainment, has been used to present teaching content in an entertaining context. Edutainment blends games with learning and provides a fun and enjoyable way of acquiring new knowledge. However, designing edutainment applications for young children involves many challenges like wrapping the educational content in games, deciding the appropriate interaction for young children, providing a balance between learning and fun, handling errors in interaction, or addressing failures when an incorrect answer is provided.
In general, designing applications for young children is a challenging task, and the existing guidelines treat children aged between 0 and 8 years as a single kind of user [11,12]. However, there are significant differences among children in various age groups. Children aged 3 to 6 years old cannot read or write, so interaction using written messages is not recommended; they need adult guidance and monitoring; and, to keep them focused, they need rewards for their actions. Designing edutainment applications for such young children requires a game-based learning strategy with new constraints on interaction (for example, no written input/output), appropriate feedback based on the child’s performance, and appropriate rewards based on the obtained results.
Presently, an edutainment application for young children consists of a predefined set of tasks $\{T_1, T_2, \ldots, T_n\}$ that a child must execute in order to gain new knowledge or new skills. Each task has a difficulty level and a type. Usually, the difficulty level is easy, medium or complex, and the type of the task is a quiz, memory game, riddle or puzzle. Currently, edutainment applications follow a sequential flow (which we will call the normal flow), where the content and the tasks are presented either in a predefined order (meaning that if we run the application multiple times, the tasks will always be presented in the same order) or in a random order (meaning that if we run the application multiple times, some tasks will be presented in a different order). If a child cannot perform task $T_i$, then the application proposes task $T_{i+1}$, abandoning $T_i$. Some applications replay the skipped tasks at the end of the predefined set of tasks (after $T_n$). As these edutainment applications are meant for young children, aged 3–6 years old, the entire execution time of such an application does not exceed 10 min. A minimal sketch of this task model is given below.
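The following Python sketch makes the task model and the normal flow concrete; all names and the simulated play outcome are illustrative assumptions, not components taken from an actual application.

```python
import random
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    difficulty: str  # "easy" | "medium" | "complex"
    kind: str        # "quiz" | "memory-game" | "riddle" | "puzzle"

def play(task: Task) -> bool:
    """Stub standing in for the real game: present the task, report success."""
    print(f"Playing {task.kind} '{task.name}' ({task.difficulty})")
    return random.random() > 0.3  # simulated outcome of the child's attempt

def normal_flow(tasks, randomize=False):
    """Predefined or random order; failed tasks are abandoned, then replayed at the end."""
    order = random.sample(tasks, len(tasks)) if randomize else list(tasks)
    skipped = [t for t in order if not play(t)]  # abandon T_i, move on to T_{i+1}
    for t in skipped:                            # some apps replay skipped tasks after T_n
        play(t)

normal_flow([Task("Colors", "easy", "quiz"), Task("Shapes", "medium", "puzzle")])
```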
For these applications, challenges occur when a child does not perform the task correctly or when the child decides to quit interacting with the edutainment application. In the case of wrong answers, the edutainment application should provide hints to help the child perform the task. New questions arise in this situation: how long to wait for the child’s answer, when to provide hints, how to provide them, and how many times hints should be presented before deciding to move on with the interaction. In the classical (in-person) teaching-learning scenario, these challenges are gently solved by the educational experts by providing encouraging feedback or hints for task accomplishment, or by proposing a different task that can be successfully accomplished by the child. However, compensating for the teacher’s absence when difficulties occur is almost impossible if there is no additional information about a child’s actions or reactions.

3. Next Generation of Edutainment Applications for Young Children

The current generation of edutainment applications for young children focuses on providing good entertainment and education, but the next generation should also consider a child’s emotional state during interaction. It is important to start developing emotion-aware edutainment applications for young children, as the children’s emotional state influences the learning outcome. Furthermore, emotion-aware edutainment applications can be used to support learning at a child’s own pace.
In our vision, the next generation of edutainment applications should allow a customized approach for task selection when difficulty in performing a task is encountered or when the child’s emotional state changes to a negative emotion, such as frustration or anger. However, enhancing edutainment applications with emotion awareness is not an easy task, and various challenging aspects must be tackled. Presently, the most important challenges are as follows:
  • How to automatically identify a child’s emotions?
  • How to integrate emotions recognition within an edutainment application?
  • How to adapt the edutainment application’s interaction flow based on the identified emotions?
In the following subsections, we address each of these challenges. First, we describe what emotions are, how they can affect the learning process, and how they can be automatically detected. Then, we describe the architecture and development process that we propose for enhancing edutainment applications with emotion recognition. Afterwards, we describe the algorithms that we propose for adapting the interaction based on the identified emotions.

3.1. Young Children’s Emotions

In the literature, various definitions of emotions have been given. Any brief episodes of coordinated changes (brain, autonomic, and behavioral) that facilitate a reaction to an important event are classified as emotions. Emotions are also targeted, according to Frijda, and require a relationship between the individual experiencing the feeling and the emotion’s object [13,14]. Davou defines emotions as the organism’s reaction to any disturbance of the perceptual environment [15]. Among all emotions, there are some basic emotions which are patterns of physiological reactions and which can be easily recognized universally. A few examples of basic emotions are fear, anger and happiness [16,17,18,19].
Understanding emotions, managing emotions and empathizing with others are all important skills in nonverbal communication and social integration. Emotions are also components of school readiness and academic success [6,7]. Researchers have found statistically significant associations between social-emotional skills measured in kindergarten and key young adult outcomes across multiple domains: education, mental health, employment, substance use and criminal activity. Non-cognitive skills interact with cognitive skills to enable success in school and at the workplace. Pekrun [20] identified the so-called academic emotions and discovered that a good mood encourages comprehensive, creative thinking (Figure 1). Negative emotions like anger, sadness, fear or boredom are negatively associated with the learning process and outcomes, whereas positive emotions like enjoyment and hope are positively related to them. In many cases, negative emotions are also detrimental to motivation, performance and learning [21,22,23].
Children communicate their emotions through multiple channels like gestures, vocalization, body posture, body movements and facial expressions [24]. Frustration is a common emotion in young children that occurs when they cannot achieve a specified goal. Frustration is a healthy and normal feeling that can help a child learn more effectively: it indicates that the child should find another solution to the problem encountered. Still, frustration must be handled before it turns into anger or a tantrum. For the next generation of edutainment applications for young children, we are interested in identifying negative emotions like fear, anger or boredom that appear during interaction and that could negatively impact the learning process. Other (positive) emotions, like happiness or surprise, are also important, as they can be used to assess the satisfaction of the child while interacting with the application.

3.2. Automatic Child Emotion Recognition

Children and adults express their feelings through facial expressions, through their body, their behavior, their words and gestures. Due to the varying ways of expressing emotions, it is not easy to automatically identify a person’s emotions. Most research has focused on the automatic identification of emotions from facial expressions, which involves two steps: face detection and emotion recognition. Because automatic face localization is a required stage of facial image processing for many applications, face detection research is very advanced [25,26], reaching 99% accuracy [27], but emotion recognition (or identification) from facial expressions still remains an open problem. Machine learning (ML) approaches have achieved only around 75% accuracy for facial emotion classification [28,29].
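To make the two-step pipeline concrete, here is a minimal Python sketch using OpenCV’s standard Haar-cascade face detector for step one; `emotion_model` and the label list stand in for any trained classifier and are assumptions made for illustration, not components described in this paper.

```python
import cv2
import numpy as np

# Standard OpenCV Haar-cascade face detector (step 1 of the pipeline).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_emotions(frame, emotion_model, labels):
    """Detect faces, then classify each face crop (step 2 of the pipeline)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    emotions = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0  # FER-style input
        probs = emotion_model.predict(face[np.newaxis, :, :, np.newaxis], verbose=0)
        emotions.append(labels[int(np.argmax(probs))])
    return emotions
```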
Datasets with children’s faces are challenging to create, since image gathering requires parental authorization, and finding and documenting the children takes time and effort. There are only a few datasets available in this context, and they are not evenly distributed in terms of emotions. The most used datasets for emotion recognition from faces are CAFE [30], CK+ [31], FER [32] and JAFFE [33]. However, only CAFE and FER contain images of children, while the other datasets contain only images of adults. Even in these datasets, some aspects negatively impact the accuracy of the results, like the small number of children’s images, the lack of natural expressions (the datasets contain only posed images), and the imbalanced representation of some emotions. In these datasets, the predominant emotions are neutral, sad and happy.
Automatic identification of children’s emotions from facial expressions is even more challenging due to additional factors, like the lack of available datasets with children’s faces and the ways children react when they know pictures are being taken of them. In [34], we conducted a pilot study to determine the appropriateness of several ML algorithms for the automatic identification of children’s emotions from their facial expressions, using different datasets (composed of adults’ and children’s faces). In different projects, several teams of 3–5 students from our faculty implemented various methods for emotion recognition in images and videos. Five different projects used Convolutional Neural Networks (CNN) [35], but with different network architectures. In order to determine the best hyper-parameters of the emotion classifier, each CNN architecture was trained in a cross-validation framework (by a random division of the training data into learning and validation parts). Four projects used the same dataset, FER, and the same type of photos to train and test the classifier (images of adults). Different training–testing scenarios were used: some teams trained the classifier on images of adults and tested it on the same type of images or on a mixed dataset (adults and kids), one team trained and tested the classifier on images of kids, and others trained and tested the classifier on mixed images (adults and kids). The obtained accuracy differs for each project, ranging from 44% to 81%. The different performances could have been caused by the distinct training setups. The results show that an emotion classifier trained on images of adults can be successfully applied to other images of adults (obtaining the best accuracy of 81%), but its accuracy decreases to 50% when it is applied to mixed images (adults and children). The project that trained and tested the CNN only on children’s images, taken from the CAFE dataset, obtained an accuracy of 68%.
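A sketch of the cross-validation scheme described above (random division of the training data into learning and validation parts to select hyper-parameters) might look as follows; `build_cnn` and the configuration dictionaries are hypothetical names introduced for illustration, not the students’ actual code.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_hyperparams(X, y, candidate_configs, build_cnn, n_splits=5):
    """Pick the configuration with the best mean validation accuracy."""
    best_cfg, best_acc = None, 0.0
    for cfg in candidate_configs:  # e.g. [{"epochs": 20, ...}, {"epochs": 40, ...}]
        scores = []
        for train_idx, val_idx in KFold(n_splits, shuffle=True).split(X):
            model = build_cnn(cfg)  # hypothetical factory returning a compiled model
            model.fit(X[train_idx], y[train_idx], epochs=cfg["epochs"], verbose=0)
            scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0)[1])
        if np.mean(scores) > best_acc:
            best_cfg, best_acc = cfg, float(np.mean(scores))
    return best_cfg, best_acc
```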

3.3. Integrating Emotions Recognition into an Edutainment Application

In order to enhance edutainment applications with emotion recognition capabilities, we have to add new modules to them. In our opinion, the enhanced edutainment application should contain at least the following modules:
  • the edutainment module—still responsible for presenting the learning content and the tasks that aid the comprehension of the new knowledge;
  • an emotion recognition module—responsible for identifying the emotional state of the user;
  • a coordinator module—responsible for coordinating the other modules;
  • a dataset building module—responsible for dataset creation and management (for example, adding images of young children and annotating them with the corresponding emotion).
The edutainment and emotion recognition modules should run in parallel, and they should communicate via the coordinator. There are at least two possible scenarios for when information should be exchanged between these modules:
  • The edutainment module, at predefined moments in the tasks’ execution flow, sends requests about the emotional state of the user (e.g., when a task is finished, when a long time has passed without interaction, when the user has difficulties completing a task, etc.) and adapts the interaction based on the received response.
  • The emotion recognition module continuously sends information about the identified emotions to the edutainment module. Based on the received information, the edutainment module filters the negative emotions and their context, and triggers its interaction adaptation.
In Figure 2, a high level view of an edutainment application enhanced with emotion recognition capabilities considering the second scenario is shown.
Both scenarios have advantages and disadvantages. The first scenario is more efficient, as the information exchange takes place only at predefined moments, but it may miss important emotional changes that occur between two successive moments at which the emotions are recognized. The second scenario is more resource-intensive, as the emotion recognition module needs to continuously identify emotions and send them to the edutainment module. In this scenario, the edutainment module has to both play the exposed content and, at the same time, check the received information. In this regard, it is important to mention that the emotional state of a child may change very fast, which is why the stream of information sent from the emotion recognition module to the edutainment module may lead to bottlenecks.
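As an illustration of the first (polling) scenario, the following minimal Python sketch shows an edutainment module that queries the coordinator after each task; all class and method names are assumptions introduced for this sketch.

```python
class Coordinator:
    """Obtains input data (e.g. a camera image) and asks the recognizer for an emotion."""
    def __init__(self, recognizer, camera):
        self.recognizer, self.camera = recognizer, camera

    def current_emotional_state(self) -> str:
        return self.recognizer.identify(self.camera.capture())  # e.g. "happy", "angry"

class EdutainmentModule:
    NEGATIVE = {"angry", "frustrated", "bored"}

    def __init__(self, tasks, coordinator):
        self.tasks, self.coordinator = tasks, coordinator

    def run(self):
        for task in self.tasks:
            task.play()
            # Predefined moment: after each task, ask for the emotional state.
            if self.coordinator.current_emotional_state() in self.NEGATIVE:
                self.adapt_interaction()

    def adapt_interaction(self):
        print("Adapting the interaction flow...")  # placeholder
```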
For this proposal, we have decided to use a variation of the first scenario, as these applications will very often be used on computers with limited resources. We propose the following phases for developing emotion-recognition-enhanced edutainment applications:
  • Phase 1—Development of the edutainment module. In this phase, the tasks to be included in the edutainment module, their difficulty level, their type, and the normal interaction flow should be decided together with the other stakeholders (educational experts, kindergarten teachers, etc.). The idea of each application may be decided by the early childhood educators. Each application should address one or multiple domains from the curricula and should be composed of a learning part, where new content is presented, and a practical part that contains tasks to support knowledge consolidation. One possible approach that can be used for the edutainment module development is described in [36].
  • Phase 2—Development of the emotion recognition module and dataset building. After the edutainment module has been developed, a child emotion recognition approach must be selected and validated. If the accuracy of the selected approach is not the desired one, new datasets with data about children interacting with the edutainment module could be created in order to improve it. These datasets should be annotated by human experts. If sufficiently large datasets already exist, or if the accuracy of the emotion recognition approach is good enough for this type of application, then the dataset building step may be skipped.
  • Phase 3—Integration of emotion awareness into the edutainment module. The edutainment module should be modified to also include the interaction flow adaptation feature.
In this approach, each of the proposed modules should provide at least the following services (presented in Figure 3; a minimal interface sketch in code is given after the list):
  • The IdentifyEmotion Service, part of the emotion recognition module, that is responsible for identifying the emotional state from the data sent to it (images, etc.).
  • The AddData Service, part of the dataset building module that is responsible for adding new data to the datasets used for training and validation of the selected emotion recognition approach.
  • The Annotation Service, part of the dataset building module that is used to annotate the data from the datasets.
  • The StartMonitoring Service, part of the coordinator module that is responsible for initiating the monitoring activity of the emotional state of the child.
  • The EndMonitoring Service, part of the coordinator module that is responsible for ending the monitoring activity, started by the StartMonitoring Service.
  • The CurrentEmotionalState Service, part of the coordinator module that is responsible for obtaining the necessary input data for the emotion recognition approach and sending it to the IdentifyEmotion Service in order to obtain the current emotional state of the child.
  • The AdaptInteraction Service, part of the edutainment module that is responsible for initiating adaptation of the interaction flow, in order to change the child’s emotional state.
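A hedged Python sketch of these service interfaces follows; the signatures are assumptions introduced for illustration, since the paper specifies the services only at the architectural level.

```python
from abc import ABC, abstractmethod

class EmotionRecognitionModule(ABC):
    @abstractmethod
    def identify_emotion(self, data) -> str: ...            # IdentifyEmotion Service

class DatasetBuildingModule(ABC):
    @abstractmethod
    def add_data(self, data) -> None: ...                   # AddData Service
    @abstractmethod
    def annotate(self, item, emotion: str) -> None: ...     # Annotation Service

class CoordinatorModule(ABC):
    @abstractmethod
    def start_monitoring(self) -> None: ...                 # StartMonitoring Service
    @abstractmethod
    def end_monitoring(self) -> None: ...                   # EndMonitoring Service
    @abstractmethod
    def current_emotional_state(self) -> str: ...           # CurrentEmotionalState Service

class EdutainmentModule(ABC):
    @abstractmethod
    def adapt_interaction(self, emotion: str) -> None: ...  # AdaptInteraction Service
```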
In the following we describe how these services should collaborate in order to add emotion awareness to the edutainment module. The UML activity diagram from Figure 4 shows how the proposed services collaborate, after the application starts.
Before starting any interaction with the child, the edutainment module should call the CurrentEmotionalState Service from the coordinator in order to obtain the child’s emotional state. If the identified emotion is positive, then the edutainment module should call the StartMonitoring Service from the coordinator module. When this service is called, the coordinator module should start gathering the necessary data regarding the interaction with the edutainment module (take images of the child, collect other required data), and from time to time (every 5 or 10 s, for example) it should send this data to the AddData Service and/or the IdentifyEmotion Service. If the development process is in the second phase, the gathered data should be sent only to the AddData Service from the dataset building module. If the development process is in the third phase, it should be sent to the IdentifyEmotion Service from the emotion recognition module. In the third development phase, the data could be sent to both services if we want to keep adding data to the existing datasets for further analysis and use. After receiving the identified emotion from the IdentifyEmotion Service, the coordinator must decide whether the obtained result requires interaction adaptation of the edutainment module (for example, if the emotional state of the child is a negative one, like frustration or boredom). If it does, then the coordinator module calls the AdaptInteraction Service from the edutainment module. When the AdaptInteraction Service is called with the identified emotion, the interaction flow running in the edutainment module should be modified according to the current emotional state of the child. One possible approach for adapting the interaction flow is described in more detail in the next section.

3.4. Adapting the Interaction Flow

An important aspect that must be considered for the next generation of edutainment applications is how the interaction flow of the edutainment module should be modified based on the information received from the emotion recognition module. Even in the classic interaction flow of an edutainment application, challenges are encountered and decisions must be taken. For example, when a child does not succeed in accomplishing a task, what should the edutainment application do? Should it show hints, and how many? Should it let the child try again, and how many times? Should it just move on to the next task? Usually, the answers to these questions are found by discussing with teachers and educators and, very often, the decisions taken are empirical. In our previous work on edutainment application development for preschool children, we decided, together with a kindergarten teacher, that if a child gives a wrong answer to a task requirement, the application should provide hints [36,37]. However, in some cases, the decisions to be taken can be more complex. For example, when a child fails to accomplish a task by giving a wrong answer, different situations may appear after receiving a hint:
  • the child successfully accomplishes the task;
  • the child fails again to perform the task;
  • the child does not perform any interaction action (maybe the child leaves the computer, abandoning the interaction altogether).
For our previously developed edutainment applications, the approach used was to move on to the next task if a child failed twice to accomplish the current task. Afterwards, we considered presenting the failed tasks again if the child desires to give them another try. Furthermore, if a child does not interact with the edutainment application for 45 s, the application automatically moves to the next task. However, in this situation, it is also possible that the child has already moved away from the computer, in which case the interaction should not continue.
In an emotion-aware edutainment application, decisions regarding the interaction flow must also be taken based on the identified emotions. In this scenario, it is very important to have high emotion recognition accuracy, but also a real-time answer regarding the emotion identification. A solution that decides the tasks’ flow based on a child’s emotional state would provide a better approach to support socio-emotional learning and, also, young children’s progress in learning.
The identification of negative emotions such as frustration, anger or boredom triggers the interaction flow adaptation decisions. Interaction with edutainment applications should only occur when children are in a positive emotional state. As a consequence, an emotion-aware edutainment application should assess children’s emotions and should play the normal interaction flow only if the child is in a positive emotional state. Otherwise, the edutainment application should suggest activities that will improve the child’s emotional state, by sending encouraging or kind messages, or by suggesting fun physical activities (for example, imitating birds’ flying) or relaxing activities (taking deep breaths). In the following, we will call a timeout the interruption of the interaction flow to propose such entertaining activities.
After the child is ready to begin the learning process (meaning that the identified emotion is not anger or frustration), the application should start presenting the content (usually designed as a story or a game with tasks presented as challenges). If the child’s emotional state does not change, the normal interaction flow is presented. If the child’s emotional state changes, an analysis of the situation is performed as follows. If the child becomes angry, the interaction should stop and a timeout should be given to the child to overcome the anger. The application should propose some relaxing physical activities, and the interaction should start again only if the child is in a positive mood. If the child becomes bored, the application should switch to more complex tasks, change the objects in the task context, or completely change the task type. If the child becomes frustrated, the application should try to identify the cause of frustration. The cause might be physical, in the sense that the young child cannot perform the required interaction task (for example, a drag and drop action). In this case, the application should switch to simpler interaction methods (using only clicks, for example). If a child is frustrated because she/he does not know how to solve the task (for example, when the child does not perform any action on the interface), then some cues should be presented to support task accomplishment. If the child still cannot solve the task, the application should skip the current task and continue the interaction flow with encouraging messages. Still, it is difficult to automatically identify the cause of frustration, but if the child continues interacting with the interface we may suppose that he/she intends to solve the task but cannot perform the required actions.
If a child needs more than a predefined number of timeouts during the interaction, we consider that the interaction flow should stop to avoid amplification of the child’s negative emotions. The experts consider that if the child requires more than two timeouts during the interaction, then the child’s emotional state is not suitable for learning.
The solution that we propose for the interaction flow adaptation consists of two parts: an adaptation of the algorithm corresponding to the normal interaction flow and a newly added algorithm for when the AdaptInteraction Service is called by the coordinator. Algorithm 1 outlines the normal interaction flow algorithm, modified to consider the emotional state of the child when the application starts.
Before starting any interaction, the algorithm gets the child’s emotional state from the coordinator. If the state is a negative one, namely anger, the application gives the child a timeout and proposes some entertainment activities in order to change the child’s emotional state. After the execution of the entertainment activities, the emotional state is obtained again. If it is still negative, the previous steps are executed again; otherwise, the normal interaction flow starts. Before starting this flow, the emotional state monitoring activity is started by calling the StartMonitoring Service from the coordinator. If, after executing the relaxing physical activities a predefined number of times (called limit), the state of the child is still negative, then the interaction stops. We consider that, in this case, the child’s emotional state does not facilitate learning.
Algorithm 1: Normal interaction flow algorithm.
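The original listing is provided as an image in the paper; the following Python sketch reconstructs the flow from the description above. The helper stub propose_entertainment_activities and the play callback are assumptions made for illustration.

```python
def propose_entertainment_activities():
    """Stub: relaxing physical activities, e.g. taking deep breaths, imitating birds."""
    print("Timeout: let's take a deep breath and imitate a bird flying!")

def normal_interaction_flow(tasks, coordinator, play, limit=2):
    """Check the starting emotional state, give up to `limit` timeouts, then play."""
    timeouts = 0
    while coordinator.current_emotional_state() == "angry":  # negative starting state
        if timeouts >= limit:  # still negative after `limit` timeouts:
            return             # the emotional state does not facilitate learning
        propose_entertainment_activities()
        timeouts += 1
    coordinator.start_monitoring()  # StartMonitoring Service
    for task in tasks:              # play the normal interaction flow
        play(task)
    coordinator.end_monitoring()    # EndMonitoring Service
```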
Algorithm 2 outlines the algorithm executed each time the AdaptInteraction service is called by the coordinator. The coordinator calls this service when a negative emotion of interest is identified by the emotion recognition module during the monitoring activity. The experts consider frustration and boredom to be the negative emotions of interest from the learning point of view, so the interaction flow is adapted only when these emotions are identified by the emotion recognition module. Whenever the coordinator module gets a negative emotion of interest while in monitoring mode, it calls the AdaptInteraction service. When the AdaptInteraction service is called, the edutainment module checks the kind of negative emotion and, based on this, decides how to adapt the interaction:
  • If the emotional state is frustration, then the edutainment application suspends the execution of the interaction flow, stops the monitoring activity, increases the number of timeOuts given, and proposes relaxing physical tasks in order to change the child’s emotional state. If, after the execution of these tasks, the emotional state of the child becomes positive, then the monitoring activity restarts and the edutainment module continues playing the currentTask. Otherwise, the interaction is stopped.
  • If the emotional state is boredom, then the edutainment application increases the difficulty level or changes the type of the next task to be executed.
Algorithm 2: Interaction adaptation algorithm.
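The original Algorithm 2 listing is likewise an image; this Python sketch reconstructs the adaptation logic from the description above, reusing the stub from the Algorithm 1 sketch. The `state` dictionary and its keys are illustrative assumptions.

```python
NEGATIVE_STATES = {"angry", "frustration"}
MAX_TIMEOUTS = 2  # experts' threshold on timeouts during one interaction

def adapt_interaction(emotion, coordinator, state):
    """`state` is a dict-like interaction state holding timeOuts, currentTask, etc."""
    if emotion == "frustration":
        coordinator.end_monitoring()        # suspend the flow and the monitoring
        state["timeOuts"] += 1
        if state["timeOuts"] > MAX_TIMEOUTS:
            state["stopped"] = True         # too many timeouts: stop the interaction
            return
        propose_entertainment_activities()  # relaxing physical tasks (stub above)
        if coordinator.current_emotional_state() not in NEGATIVE_STATES:
            coordinator.start_monitoring()  # restart monitoring and
            state["resume"] = True          # continue playing currentTask
        else:
            state["stopped"] = True
    elif emotion == "boredom":
        state["raiseDifficulty"] = True     # increase difficulty or change task type
```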

4. Prototype

In order to test our proposal for the next generation of edutainment applications, we conducted a preliminary study in which we modified a simple edutainment application based on the results of a facial expression emotion recognizer. A proof of concept prototype application was developed by a team of computer science master students. The prototype adds an effect to a selected edutainment material based on the automatically identified emotions of the viewer. In this study, we focused on recognizing the following emotions: happiness, sadness, anger, disgust and surprise, and we applied the following effects when one of the emotions of interest is identified (an illustrative implementation sketch is given after the list):
  • a bright effect when happiness is identified;
  • a sepia effect when sadness is identified;
  • a distorted effect when anger is identified;
  • a blurred effect when disgust is identified;
  • a black and white effect when surprise is identified.
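An illustrative OpenCV implementation of this emotion-to-effect mapping might look as follows; the exact filters used by the prototype are not specified in the paper, so the concrete parameters here are assumptions.

```python
import cv2
import numpy as np

SEPIA = np.array([[0.272, 0.534, 0.131],    # approximate sepia kernel
                  [0.349, 0.686, 0.168],
                  [0.393, 0.769, 0.189]])

def apply_effect(frame, emotion):
    if emotion == "happiness":               # bright
        return cv2.convertScaleAbs(frame, alpha=1.0, beta=60)
    if emotion == "sadness":                 # sepia
        return cv2.transform(frame, SEPIA)
    if emotion == "anger":                   # distorted (simple wave remap)
        h, w = frame.shape[:2]
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        return cv2.remap(frame, xs + 8 * np.sin(ys / 12), ys, cv2.INTER_LINEAR)
    if emotion == "disgust":                 # blurred
        return cv2.GaussianBlur(frame, (21, 21), 0)
    if emotion == "surprise":                # black and white
        return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return frame
```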
In our investigation, we started with two deep learning models for detecting emotions from adult faces. The first model was organised as a 6-layer Convolutional Neural Network, while the second was organised as a 17-layer Convolutional Neural Network (both implemented in the Keras framework). These models were trained on small (48 × 48 px) grayscale images from the Facial Expression Recognition (FER) dataset [32], which contains 28,709 training images and 3589 testing images. Unfortunately, this dataset does not contain any images of children.
The learning settings were characterised by a varying number of epochs, the Adam optimizer (derived from Adaptive Moment Estimation) [38], and a categorical cross-entropy loss function. For weight initialization, we used the default settings: each layer has its own default initializer and, for most layers, the default kernel initializer is the Glorot uniform initialiser [39].
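A minimal Keras sketch consistent with this training setup (48 × 48 grayscale FER inputs, Adam, categorical cross-entropy, Glorot uniform initialization as the Keras default) is given below; the architecture is illustrative, not the authors’ exact 6- or 17-layer network.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes=7):
    """Small CNN for 48x48 grayscale FER images; Glorot uniform is the Keras default."""
    model = keras.Sequential([
        keras.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",                 # Adam [38]
                  loss="categorical_crossentropy",  # loss used in the study
                  metrics=["accuracy"])
    return model

# model.fit(x_train, y_train, epochs=30, validation_split=0.1) would then train
# on one-hot encoded FER labels.
```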
Because, in our use case, we were interested in recognizing emotions as accurately as possible, the most important criterion used for evaluating the models’ quality was accuracy. With the best model, trained on adults’ images, we obtained an accuracy of 84.42%. The loss evolution during the training process is depicted in Figure 5.
We also investigated the performance of our models on children’s faces. The Child Affective Facial Expression (CAFE) set [30] was used for this purpose. This dataset contains 1192 photographs of 154 children. In this scenario, our second model scored the best accuracy, with an average of 75.73% (see the confusion matrix in Figure 6), and resolved some of our problems with the confusion between classes. However, a liability of this model is that the loss remained high, and the confusion between disgust and anger, and between surprise and fear, persisted.
We noticed several characteristics that influence the recognition process in the case of children. The age of the children affects the results, possibly because some facial transitions are much more pronounced in younger children (such as the cheeks). Emotions that have similar effects on the face are confused (for example, anger and disgust, both characterized by frowning and partial or total closure of the eyes). Due to the natural shape of their face, some children are recognized as happy or sad even when their face is in a neutral position.
Because the accuracy of emotion recognition is higher for adults than the accuracy obtained by our approach on children’s images, we used adults as test subjects. Figure 7, Figure 8, Figure 9 and Figure 10 show how the edutainment material changes based on the viewer’s emotions (anger, happiness, disgust and surprise).

5. Discussion

The results of our study show that it is feasible to develop emotion-aware edutainment applications, but some aspects must be improved. In the following, we discuss the advantages and disadvantages of developing such applications, and what must be further improved in order to obtain applications that can be used in a real context. The advantages that this type of edutainment application brings are as follows:
  • The children could safely use these applications outside the formal education system, especially when in-person interaction is not possible due to social distancing rules.
  • The learning process of the child will be personalized, adapted to his/her own pace.
  • Interacting with safe edutainment applications, the young child will also develop basic digital competences.
  • The proposed architecture allows the emotion recognition and adaptation modules to be easily plugged in and out.
The disadvantages that using this type of edutainment application may bring are as follows:
  • Such applications could have an increased response time, affecting negatively the interaction.
  • Some researchers consider that the gathered information about the children’s state of mind could be improperly used to influence subconscious processes. In our proposal, the identified emotion is used only to avoid amplifying negative emotions while interacting with the application. If negative emotions are identified repeatedly, the application should stop executing.
Our preliminary research indicates that developing emotion-aware edutainment applications for young children is feasible, but some improvements are required. First, the accuracy of the emotion recognizer based on facial expressions must be improved. More datasets containing children’s images are needed in order to improve the existing emotion recognizers’ results. Second, other sources of information for the emotion recognizer should be considered, like the children’s posture and motion or the children’s voice. Third, the time needed to identify the emotions and the time needed to adapt the interaction flow must be carefully analyzed. An aspect revealed by our study is the delayed adaptation: it takes 2–3 s until the added effect becomes visible to the viewer. In a real context, the edutainment application should not take this long to react to a change in the child’s emotional state.

6. Conclusions and Further Work

In this paper, we have presented our proposal for the next generation of edutainment applications for young children, namely, emotion-aware edutainment applications. By enhancing edutainment applications with emotion awareness, we can provide a better learning context for young children. In the future, we intend to
  • validate our proposal on real and more complex case studies (implementation of the proposed approach);
  • use multiple channels (body posture, voice, sensors) to extract information for the automatic emotion recognition module;
  • consider the situations in which negative emotions occur frequently for different children (in this case it may also mean that changes in the design of the edutainment module should be made); and
  • use emotion awareness to also evaluate the satisfaction of the young users. Identifying frustration during learning activities with an edutainment application could also provide hints on interaction flow design.

Author Contributions

Conceptualization, A.-M.G., G.-S.C. and L.-S.D.; methodology, A.-M.G., G.-S.C. and L.-S.D.; software, A.-M.G. and L.-S.D.; validation, A.-M.G. and L.-S.D.; investigation, A.-M.G., G.-S.C. and L.-S.D.; data curation, L.-S.D.; writing—original draft preparation, A.-M.G., G.-S.C. and L.-S.D.; writing—review and editing, A.-M.G., G.-S.C. and L.-S.D.; visualization, L.-S.D.; supervision, G.-S.C.; project administration, G.-S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the educational experts who provided support in understanding their work, the students involved in the case studies implementation, the children who participated in our activities, and their parents for their consent.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Disney, W. Educational Values in Factual Nature Pictures. Educ. Horizons 1954, 33, 82–84. [Google Scholar]
  2. Rapeepisarn, K.; Wong, K.W.; Fung, C.C.; Depickere, A. Similarities and Differences between “Learn through Play” and “Edutainment”. In Proceedings of the 3rd Australasian Conference on Interactive Entertainment, Perth, Australia, 4–6 December 2006; Murdoch University: Murdoch, Australia, 2006; pp. 28–32. [Google Scholar]
  3. Nemec, J.; Trna, J. Edutainment or Entertainment Education Possibilities of Didactic Games in Science Education. The Evolution of Children Play-24. In Proceedings of the ICCP World Play Conference, Brno, Czech Republic, September 2007; pp. 55–64. [Google Scholar]
  4. Mat Zin, H.; Mohamed Zain, N.Z. The effects of edutainment towards students’ achievements. Reg. Conf. Knowl. Integr. ICT 2010, 129, 2865. [Google Scholar]
  5. Kara, Y.; Yeşilyurt, S. Comparing the Impacts of Tutorial and Edutainment Software Programs on Students’ Achievements, Misconceptions, and Attitudes towards Biology. J. Sci. Educ. Technol. 2008, 17, 32–41. [Google Scholar] [CrossRef]
  6. Denham, S.A.; Bassett, H.H.; Thayer, S.K.; Mincic, M.S.; Sirotkin, Y.S.; Zinsser, K. Observing preschoolers’ social-emotional behavior: Structure, foundations, and prediction of early school success. J. Genet. Psychol. 2012, 173, 246–278. [Google Scholar] [CrossRef]
  7. Hyson, M. The Emotional Development of Young Children: Building an Emotion-Centered Curriculum; Teachers College Press: New York, NY, USA, 2004. [Google Scholar]
  8. Kostelnik, M.; Soderman, A.; Whiren, A.; Rupiper, M.L. Guiding Children’s Social Development and Learning: Theory and Skills; Cengage Learning: Boston, MA, USA, 2016. [Google Scholar]
  9. Feidakis, M. Chapter 11—A Review of Emotion-Aware Systems for e-Learning in Virtual Environments. In Formative Assessment, Learning Data Analytics and Gamification; Caballé, S., Clarisó, R., Eds.; Intelligent Data-Centric Systems; Academic Press: Boston, MA, USA, 2016; pp. 217–242. [Google Scholar] [CrossRef]
  10. Ruiz, S.; Urretavizcaya, M.; Fernández-Castro, I.; López-Gil, J.M. Visualizing Students’ Performance in the Classroom: Towards Effective F2F Interaction Modelling. In Design for Teaching and Learning in a Networked World; Conole, G., Klobučar, T., Rensing, C., Konert, J., Lavoué, E., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 630–633. [Google Scholar]
  11. Druin, A.; Solomon, C. Designing Multimedia Environments for Children: Computers, Creativity, and Kids; Wiley: Hoboken, NJ, USA, 1996. [Google Scholar]
  12. Markopoulos, P.; Bekker, M. How to compare usability testing methods with children participants. In Interaction Design and Children; Now Publishers Inc.: Hanover, PA, USA, 2002; Volume 2, pp. 153–158. [Google Scholar]
  13. Frijda, N.H. Appraisal and Beyond: The Issue of Cognitive Determinants of Emotion; Lawrence Erlbaum Associates Ltd.: Hove, UK, 1993. [Google Scholar]
  14. Frijda, N.H. Varieties of affect: Emotions and episodes, moods and sentiments. In The Nature of Emotion: Fundamental Questions; Ekman, P., Davidson, R., Eds.; Oxford University Press: New York, NY, USA, 1994; pp. 59–64. [Google Scholar]
  15. Davou, B. Thought Processes in the Era of Information: Issues on Cognitive Psychology and Communication; Papazissis Publishers: Athens, Greece, 2000. [Google Scholar]
  16. Damasio, A.R. Descartes’ Error. Emotion, Reason and the Human Brain; Avon Books: New York, NY, USA, 1994. [Google Scholar]
  17. Ekman, P.; Friesen, W. Facial Action Coding System: A Technique for the Measurement of Facial Movement; Consulting Psychologists Press: Palo Alto, CA, USA, 1978. [Google Scholar]
  18. Ortony, A.; Clore, G.L.; Collins, A. The Cognitive Structure of Emotions; Cambridge University Press: Cambridge, UK, 1988. [Google Scholar] [CrossRef] [Green Version]
  19. Parrott, W.G. Emotions in Social Psychology: Key Readings; Psychology Press: Oxfordshire, UK, 2000. [Google Scholar]
  20. Pekrun, R. The Impact of Emotions on Learning and Achievement: Towards a Theory of Cognitive/Motivational Mediators. Appl. Psychol. 1992, 41, 359–376. [Google Scholar] [CrossRef]
  21. Pekrun, R.; Lichtenfeld, S.; Marsh, H.W.; Murayama, K.; Goetz, T. Achievement Emotions and Academic Performance: Longitudinal Models of Reciprocal Effects. Child Dev. 2017, 88, 1653–1670. [Google Scholar] [CrossRef]
  22. Rowe, A.D.; Fitness, J. Understanding the Role of Negative Emotions in Adult Learning and Achievement: A Social Functional Perspective. Behav. Sci. 2018, 8, 27. [Google Scholar] [CrossRef] [Green Version]
  23. Manwaring, K.C. Emotional and Cognitive Engagement in Higher Education Classrooms. Ph.D. Thesis, Brigham Young University, Provo, UT, USA, 2017. Available online: https://scholarsarchive.byu.edu/etd/6636 (accessed on 22 December 2021).
  24. Halberstadt, A.G.; Eaton, K.L. A meta-analysis of family expressiveness and children’s emotion expressiveness and understanding. Marriage Fam. Rev. 2002, 34, 35–62. [Google Scholar] [CrossRef]
  25. Taskiran, M.; Kahraman, N.; Erdem, C.E. Face recognition: Past, present and future (a review). Digit. Signal Process. 2020, 106, 102809. [Google Scholar] [CrossRef]
  26. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188. [Google Scholar] [CrossRef]
  27. Deng, J.; Guo, J.; Zhang, D.; Deng, Y.; Lu, X.; Shi, S. Lightweight face recognition challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
  28. Ko, B.C. A brief review of facial emotion recognition based on visual information. Sensors 2018, 18, 401. [Google Scholar] [CrossRef]
  29. Lopes, A.T.; de Aguiar, E.; De Souza, A.F.; Oliveira-Santos, T. Facial expression recognition with convolutional neural networks: Coping with few data and the training sample order. Pattern Recognit. 2017, 61, 610–628. [Google Scholar] [CrossRef]
  30. LoBue, V.; Thrasher, C. The Child Affective Facial Expression (CAFE) set: Validity and reliability from untrained adults. Front. Psychol. 2015, 5, 1532. [Google Scholar] [CrossRef]
  31. Lucey, P.; Cohn, J.F.; Kanade, T.; Saragih, J.; Ambadar, Z.; Matthews, I. The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, San Francisco, CA, USA, 13–18 June 2010; IEEE: New York, NY, USA, 2010; pp. 94–101. [Google Scholar]
  32. Goodfellow, I.J.; Erhan, D.; Carrier, P.L.; Courville, A.; Mirza, M.; Hamner, B.; Cukierski, W.; Tang, Y.; Thaler, D.; Lee, D.H.; et al. Challenges in representation learning: A report on three machine learning contests. In International Conference on Neural Information Processing; Springer Publishing: New York, NY, USA, 2013; pp. 117–124. [Google Scholar]
  33. Lyons, M.; Akamatsu, S.; Kamachi, M.; Gyoba, J. Coding facial expressions with gabor wavelets. In Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan, 14–16 April 1998; IEEE: New York, NY, USA, 1998; pp. 200–205. [Google Scholar]
  34. Guran, A.M.; Cojocar, G.S.; Diosan, L. A Step Towards Preschoolers’ Satisfaction Assessment Support by Facial Expression Emotions Identification. Knowledge-Based and Intelligent Information & Engineering Systems. In Proceedings of the 24th International Conference KES-2020, Virtual Event, Online, 16–18 September 2020; pp. 632–641. [Google Scholar] [CrossRef]
  35. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  36. Guran, A.M.; Cojocar, G.S.; Moldovan, A. Designing edutainment software for digital skills nurturing of preschoolers: A method proposal. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Software Engineering in Society, ICSE-SEIS ’20, Seoul, Korea, 27 June–19 July 2020; pp. 63–70. [Google Scholar] [CrossRef]
  37. Guran, A.M.; Cojocar, G.S.; Moldovan, A. A User Centered Approach in Designing Computer Aided Assessment Applications for Preschoolers. In Proceedings of the 15th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2020, Prague, Czech Republic, 5–6 May 2020; pp. 506–513. [Google Scholar] [CrossRef]
  38. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  39. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; Chia Laguna Resort. Teh, Y.W., Titterington, M., Eds.; JMLR, Inc. and Microtome Publishing: Brookline, MA, USA, 2010; Volume 9, pp. 249–256. [Google Scholar]
Figure 1. Academic emotions [20].
Figure 2. High-level view of an edutainment application enhanced with emotion recognition.
Figure 3. Proposed modules and services.
Figure 4. Activity diagram for services collaboration.
Figure 5. Loss evolution for the model trained on adults’ faces.
Figure 6. Real emotions versus predicted emotions in children’s images.
Figure 7. Anger emotion and distorted filter.
Figure 8. Happy emotion and bright filter.
Figure 9. Disgust emotion and blurred filter.
Figure 10. Surprise emotion and black & white filter.