Editorial

Artificial Intelligence and Ambient Intelligence

Department of Intelligent Systems, Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
*
Author to whom correspondence should be addressed.
Submission received: 8 April 2021 / Accepted: 9 April 2021 / Published: 15 April 2021
(This article belongs to the Special Issue Artificial Intelligence and Ambient Intelligence)

1. Introduction

Artificial intelligence (AI) and its sister ambient intelligence (AmI) have in recent years become one of the main contributors to the progress of digital society and human civilization. For example, breakthroughs have been achieved in image processing [1,2,3,4], natural language processing [5,6,7], and reinforcement learning [8,9]. All of this affects practically every aspect of our lives, be it search engines such as Google, autonomous vehicles, robots, or smart healthcare. The relation to electronics is particularly interesting. While the exponential progress of electronics, expressed through Moore’s Law [10] or Keck’s Law, enabled the progress of the information society and AI, the design of new chips already depends to some extent on the successful application of AI methods, and will likely do so even more in the future.
Several questions arise in relation to the above research and development fields. Are there major possibilities for improvement by connecting software (SW), AI, and AmI methods directly to the chips? Is it possible to integrate the flexibility of SW with the speed of electronic hardware (HW) and vastly improve cognitive and computing power? Will AmI benefit from this progress, since it is intrinsically devoted to connecting devices and humans?
However, the future is anything but certain, as the COVID-19 crisis demonstrates. It might be that, after the fast exponential growth, we are already facing a slow but steady decline in the progress of electronic components. In addition, AI is notorious for its wild ups and downs, similar to computer generations: after a period of hype, a major disappointment is proclaimed worldwide when human-level intelligence seems as far away as before [11]. However, like a phoenix, AI rises again and again, and unlike the well-known physical limitations of hardware, there is no major well-defined limit to the progress of AI. Indeed, it seems that superintelligence and super ambient intelligence are just decades away [12]. They will bring major technological and societal changes, hopefully for the best.
The objective of this Special Issue is to collect technical and overview contributions on AI, AmI, the information society and electronics. In addition, the papers deal with:
  • Mobile/wearable intelligence
  • Robotics applied to smart tasks
  • Applications of combined pervasive/ubiquitous/cognitive computing with AI
  • Use of mobile, wireless, visual, and multi-modal sensor networks in intelligent systems
  • Intelligent handling of privacy, security and trust

2. Artificial Intelligence and Ambient Intelligence

In the review paper “Relations between Electronics, Artificial Intelligence and Information Society through Information Society Rules” [13], Matjaž Gams et al. present the relations between the information society (IS), electronics and artificial intelligence mainly through twenty-four IS laws. The laws constitute a novel collection, not presented in the literature before, describing major properties of the mentioned fields and the way they influence progress. The laws mainly describe exponential growth in a particular area such as processing, storage or transmission capabilities, with related references for further study. Each law bears the name of its inventor. Rules such as Moore’s Law are reasonably well known even to the general public; however, the majority of the rules are not taught in university education around the world. There probably exist tens of similar rules, but the authors picked the most relevant ones to present the fields comprehensibly. Not all rules are technical; some relate to production prices and human interaction, while others capture human cognitive issues. An analysis is devoted to the time dependencies of the rules, and the final part of the paper describes the progress, state of the art and potential further progress of AI. AI already occasionally exceeds human capabilities and will do so even more in the future. In some areas where AI was presumed to be incapable of performing even at a modest level, such as the production of art or programming software, AI is making progress that can sometimes reflect true human skills, as shown by programs like GPT-3.
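As an illustration of the doubling-law form that many of these IS rules share, the short sketch below (our own illustrative example, not taken from the review) evaluates Moore’s Law under an assumed two-year doubling period:

```python
def doubling_law(initial, years, doubling_period_years=2.0):
    """Exponential growth of the form N(t) = N0 * 2**(t / T_double)."""
    return initial * 2 ** (years / doubling_period_years)

# Example: starting from the ~2300 transistors of the Intel 4004 (1971), 40 years
# of doubling every two years gives roughly 2.4 billion transistors, the right
# order of magnitude for high-end CPUs around 2011.
print(f"{doubling_law(2300, 40):,.0f}")
```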
The review paper is followed by seven research papers.
Jaakko Tervonen et al. [14] address the issue of human cognitive abilities under pressure in the information society in “Ultra-Short Window Length and Feature Importance Analysis for Cognitive Load Detection from Wearable Sensors”. Cognitive load detection is beneficial in several applications of human–computer interaction, for example in autonomous driving. The paper concentrates on accurate and real-time biosignal-based cognitive load detection. More specifically, it addresses the problem of data segmentation by analyzing the optimal and minimal window length. A comparative analysis is presented in which ultra-short (30 s or less) window lengths were used for cognitive load detection with a wrist-worn device that provides heart rate, heart rate variability, galvanic skin response, and skin temperature. These biosignal data are used to extract features at six different window lengths. The extracted features are then used to train an Extreme Gradient Boosting classifier to detect high vs. low cognitive load. The results indicate that longer intervals in general achieve higher accuracy, with the 25 s window performing best (67.6%); the lowest performance (60.0%) is obtained with the 5 s window. The relationship between the different biosignal features, the classification performance and the most useful features was also investigated. The results with wearables seem as reliable as those obtained with other, more expensive and obtrusive sensors.
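As a rough illustration of this kind of pipeline (a minimal sketch with synthetic signals and simple statistical features, not the authors’ code or data), the snippet below windows two wearable channels, extracts per-window features, and trains a gradient-boosting classifier:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 4                        # assumed sampling rate (Hz) of the wrist-worn device
window_s = 25                 # window length in seconds (best-performing in the paper)
n_windows = 400

def window_features(signal):
    """Simple per-window statistics: mean, std, min, max and linear trend."""
    t = np.arange(len(signal))
    slope = np.polyfit(t, signal, 1)[0]
    return [signal.mean(), signal.std(), signal.min(), signal.max(), slope]

X, y = [], []
for i in range(n_windows):
    label = i % 2                                              # 0 = low, 1 = high load
    hr = 70 + 10 * label + rng.normal(0, 3, fs * window_s)     # heart-rate channel
    gsr = 1.0 + 0.3 * label + rng.normal(0, 0.1, fs * window_s)  # skin-response channel
    X.append(window_features(hr) + window_features(gsr))
    y.append(label)

clf = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
print("5-fold CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```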
The article “A One-Dimensional Non-Intrusive and Privacy-Preserving Identification System for Households” by Tomaž Kompara et al. [15] introduces a novel indoor identification system based on a network of laser sensors, each mounted above a room entrance. Many ambient-intelligence applications, including intelligent homes and cities, require awareness of an inhabitant’s presence and identity, with two major concerns: cost and non-intrusiveness. The system should be seamless for the user and preserve the user’s privacy as much as possible. The proposed solution is based on a one-dimensional depth sensor, mounted on top of a doorway and facing towards the entrance at an angle. This position allows acquiring the user’s body shape, i.e., silhouette, while the user is crossing the doorway. The sensor data coupled with classical machine learning methods are used for user identification. The system is non-intrusive and preserves privacy. This is achieved by omitting user-sensitive information such as activity, facial expression or clothing. Additionally, the system does not use video or audio data. The system is based on the statistical observation that a typical household is shared by only a small number of physically quite different inhabitants. This hypothesis was tested on a publicly available database of anthropometric measurements of nearly 4000 people. The analysis of the relationships among accuracy, measured data and the number of residents revealed good accuracy for up to 10 inhabitants. In addition, the system was evaluated in a real-world scenario on 18 subjects entering a door under a variety of conditions (e.g., different objects and different clothing). A 10-fold cross-validation showed 98.4% accuracy for all subjects, and 99.1% for groups of five subjects. These results indicate that a network of one-dimensional depth sensors might be suitable for identification tasks with purposes such as non-obtrusive surveillance for security and ambient-intelligence comfort.
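The following is a minimal sketch of the underlying idea, assuming entirely synthetic door-crossing profiles and a generic classifier rather than the authors’ sensor data or models:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
heights = {0: 1.85, 1: 1.72, 2: 1.60}       # three hypothetical inhabitants (m)
crossings_per_person, n_points = 60, 50     # door crossings, depth samples per crossing

X, y = [], []
for person, h in heights.items():
    for _ in range(crossings_per_person):
        # A crude body-shape profile: a smooth silhouette scaled by the person's
        # height plus sensor noise; the real system derives this from a 1D depth sensor.
        base = np.exp(-np.linspace(-2.0, 2.0, n_points) ** 2)
        X.append(h * base + rng.normal(0, 0.03, n_points))
        y.append(person)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:",
      cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```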
In “Device-Free Crowd Counting Using Multi-Link Wi-Fi CSI Descriptors in Doppler Spectrum” [16], Ramon F. Brena et al. tackle the problem of measuring the number of people in a given space. This information is relevant in many applications, ranging from marketing to safety. The approach is based on measuring crowd size with inexpensive Wi-Fi equipment, taking advantage of the fact that Wi-Fi signals get distorted by people’s presence. By identifying these distortion patterns, the method estimates the number of people in a given space. Using machine learning classifiers and channel state information (CSI), the method estimates the number of people placed between a Wi-Fi transmitter and a receiver. The method achieved better results than the compared single-link and averaging approaches. The advantage comes from taking individual channel information into consideration instead of averaging the information across all channels. The experiments demonstrated improvements from 44% accuracy with one link to 99% with six links. Additionally, the paper details how the addition of each link of information influences the accuracy of the prediction.
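The contrived sketch below (synthetic features, not the authors’ CSI pipeline) illustrates the design point of keeping per-link descriptors instead of averaging them before classification:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_links, n_feats, n_samples = 6, 8, 600
y = rng.integers(0, 5, n_samples)                      # crowd size: 0..4 people

# Each transmitter-receiver link "sees" the crowd differently; averaging the links
# can cancel out exactly the per-link detail that helps the classifier.
link_weights = rng.normal(0.0, 1.0, (n_links, n_feats))
X_links = np.stack(
    [y[:, None] * w + rng.normal(0, 1.0, (n_samples, n_feats)) for w in link_weights],
    axis=1)                                            # shape: (samples, links, feats)

X_multi = X_links.reshape(n_samples, -1)               # concatenated per-link descriptors
X_avg = X_links.mean(axis=1)                           # averaged over links (baseline)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("multi-link CV accuracy:", cross_val_score(clf, X_multi, y, cv=5).mean())
print("averaged   CV accuracy:", cross_val_score(clf, X_avg, y, cv=5).mean())
```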
In “Constructing Emotional Machines: A Case of a Smartphone-Based Emotion System” by Hao-Chiang Koong Lin et al. [17], the emphasis is on an emotion system (an emotional machine) developed and deployed on smartphones. The objective of this study is to explore the factors that developers focus on when developing emotional machines. More specifically, user attitudes toward emotional messages sent by machines and the effects of emotion systems on user behavior were investigated in detail. A two-week study was performed with 124 individuals who had each used a smartphone for more than one year. The participants used the system at will and freely interacted with the system agent. The smartphones generated 11,264 crucial notifications in total, among which 76% were viewed by the participants; 68.1% enabled the participants to resolve unfavorable smartphone conditions in a timely manner and allowed the system agent to provide users with positive emotional feedback. The majority of the participants were pleased by the emotional messages, took them into account, and were convinced that the developed system enabled their smartphone to exhibit emotions. Additionally, the study revealed that an emotion system triggers certain patterns and behaviors in users, and that the degree of attention paid to emotional messages corresponds to the quality of the emotion system.
In “Gaining a Sense of Touch. Object Stiffness Estimation Using a Soft Gripper and Neural Networks” [18], Michał Bednarek et al. deal with soft gripping. The objective is to manipulate an elastic, soft and unstructured object that is vulnerable to deformation. To perform such a task successfully, it is necessary to estimate the physical parameters of the squeezed object and adjust the manipulation procedure accordingly. While humans perform the task using a large volume of knowledge acquired from childhood onward, robots lack that type of knowledge and must rely on other approaches. The chosen approach estimates the physical parameters with deep learning algorithms applied to measurements from direct interaction with objects using a robotic gripper. The interaction of the gripper with the object generates signals that are used to calculate the object’s stiffness coefficient. Physical experiments were performed with the Yale OpenHand soft gripper, based on readings from inertial measurement units (IMUs) attached to the fingers of the gripper. The results indicate that the approach can reliably estimate the parameters of the object, thus enabling smooth grasping and handling. The work also produced three datasets of IMU readings gathered while squeezing objects, two from experiments in a simulation environment and one from real-life experiments. The datasets are publicly available to the scientific community to enable further testing of new approaches in the growing field of soft manipulation.
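A minimal stand-in for this idea is sketched below, assuming synthetic single-channel IMU traces and a small feed-forward regressor rather than the authors’ deep models or datasets:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_grasps, n_samples = 500, 60                    # squeezes and IMU readings per squeeze
stiffness = rng.uniform(0.1, 1.0, n_grasps)      # ground-truth stiffness coefficient

# Stiffer objects deflect the fingers less: emulate a single IMU angle channel
# recorded while the gripper closes on the object.
t = np.linspace(0.0, 1.0, n_samples)
X = np.stack([(1.0 - k) * np.sin(np.pi * t) + rng.normal(0, 0.02, n_samples)
              for k in stiffness])

X_tr, X_te, y_tr, y_te = train_test_split(X, stiffness, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out grasps:", round(model.score(X_te, y_te), 3))
```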
The paper “On Robustness of Multi-Modal Fusion—Robotics Perspective” [19] by Michał Bednarek et al. deals with a robotic perception system that needs to integrate information from several data streams. Multi-modal fusion of heterogeneous data streams is a crucial ability that enables robustness to noise. Related approaches often rely on an application-specific, manually designed fusion system to handle multi-modal data. As the volume and dimensionality of sensory feedback have increased in recent years, it is beneficial to use other approaches. Multi-modal machine learning is one of the emerging fields for this task, with a focus mainly on vision and audio input. Robots, however, often use haptic sensors when interacting with an environment, an example being gripping an object and handling it in a particular way. The experiments described in the paper involved three tasks: (i) grasp outcome classification, (ii) texture recognition, and (iii) multi-label classification of haptic adjectives based on haptic and visual data. Four learning-based multi-modal fusion methods were compared on three publicly available datasets containing haptic signals, images, and robot poses. Each method was analyzed in terms of task performance and robustness against data degradation. The latter issue is rarely considered in research papers, whereas it is quite common in real life, where degradation of sensory feedback often occurs during a robot’s interaction with its environment, e.g., under varying lighting conditions.
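The sketch below (synthetic data and a deliberately simplified feature-level fusion, not one of the four methods compared in the paper) illustrates the robustness question: what happens to a fused classifier when one modality degrades at test time?

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 1000
y = rng.integers(0, 2, n)                                  # e.g., grasp success / failure
vision = y[:, None] + rng.normal(0, 1.0, (n, 10))          # visual features
haptic = y[:, None] + rng.normal(0, 1.0, (n, 6))           # haptic features

X = np.hstack([vision, haptic])                            # simple feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean test accuracy:     ", clf.score(X_te, y_te))

# Degrade the haptic stream at test time (e.g., sensor dropout) and re-evaluate,
# which is the kind of degradation scenario the paper examines systematically.
X_te_degraded = X_te.copy()
X_te_degraded[:, 10:] = 0.0
print("degraded-haptic accuracy:", clf.score(X_te_degraded, y_te))
```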
In “PUT-Hand—Hybrid Industrial and Biomimetic Gripper for Elastic Object Manipulation” [20], Tomasz Mańkowski et al. present an approach for the manipulation of elastic objects using an anthropomorphic gripper built from off-the-shelf and 3D-printed components. The gripper has five digits: three fully actuated fingers for precise manipulation and two tendon-driven digits for secure power grasping. The gripper is equipped with an on-board controller circuit and firmware, enabling full joint control and observation via resistive position and angle sensors in each joint. Additionally, the sensory system of the hand includes tri-axial optical force sensors placed on the fingertips of the fully actuated fingers for reaction force measurement. Motor control is performed from a PC over a USB communication protocol, and a Robot Operating System (ROS) driver is provided. To analyze the performance of the gripper, several experiments were performed and are reported in the paper. The design files, source code and results are available online under CC BY-NC 4.0 and MIT licenses.

3. Conclusions

We would like to take this opportunity to thank all the authors for submitting papers to this Special Issue. We also hope that readers will find new and useful information on artificial intelligence and ambient intelligence as these fields continue to progress at an amazing speed.

Acknowledgments

We would like to thank all the researchers who submitted articles to this Special Issue. We are also grateful to all the reviewers who helped in the evaluation of the manuscripts and made very valuable suggestions to improve the quality of the contributions. We would like to acknowledge the editorial board of Electronics, who invited us to guest edit this Special Issue. We are also grateful to the Electronics Editorial Office staff, who worked diligently to maintain the rigorous peer-review schedule and timely publication.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
  2. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284.
  3. Chaib, S.; Yao, H.; Gu, Y.; Amrani, M. Deep feature extraction and combination for remote sensing image classification based on pre-trained CNN models. In Proceedings of the Ninth International Conference on Digital Image Processing (ICDIP 2017), Hong Kong, China, 21 July 2017; p. 104203D.
  4. Vandal, T.; Kodra, E.; Ganguly, S.; Michaelis, A.; Nemani, R.; Ganguly, A.R. DeepSD: Generating high resolution climate change projections through single image super-resolution. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 9 August 2017; pp. 1663–1672.
  5. Young, T.; Hazarika, D.; Poria, S.; Cambria, E. Recent trends in deep learning based natural language processing. IEEE Comput. Intell. Mag. 2018, 13, 55–75.
  6. Bengio, Y.; Ducharme, R.; Vincent, P.; Jauvin, C. A neural probabilistic language model. J. Mach. Learn. Res. 2003, 3, 1137–1155.
  7. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
  8. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Driessche, G.V.D.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
  9. Vinyals, O.; Babuschkin, I.; Czarnecki, W.M.; Mathieu, M.; Dudzik, A.; Chung, J.; Choi, D.H.; Powell, R.; Ewalds, T.; Georgiev, P.; et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 2019, 575, 350–354.
  10. Moore, G. Cramming More Components onto Integrated Circuits (1965). In Ideas That Created the Future; The MIT Press: Cambridge, MA, USA, 2021; Volume 38, pp. 261–266.
  11. Gams, M.; Gu, I.Y.-H.; Härmä, A.; Muñoz, A.; Tam, V. Artificial intelligence and ambient intelligence. J. Ambient Intell. Smart Environ. 2019, 11, 71–86.
  12. Yampolskiy, R.V. Artificial Superintelligence: A Futuristic Approach, 1st ed.; CRC Press: Boca Raton, FL, USA, 2015.
  13. Gams, M.; Kolenik, T. Relations between Electronics, Artificial Intelligence and Information Society through Information Society Rules. Electronics 2021, 10, 514.
  14. Tervonen, J.; Pettersson, K.; Mäntyjärvi, J. Ultra-Short Window Length and Feature Importance Analysis for Cognitive Load Detection from Wearable Sensors. Electronics 2021, 10, 613.
  15. Kompara, T.; Perš, J.; Susič, D.; Gams, M. A One-Dimensional Non-Intrusive and Privacy-Preserving Identification System for Households. Electronics 2021, 10, 559.
  16. Brena, R.; Escudero, E.; Vargas-Rosales, C.; Galvan-Tejada, C.; Munoz, D. Device-Free Crowd Counting Using Multi-Link Wi-Fi CSI Descriptors in Doppler Spectrum. Electronics 2021, 10, 315.
  17. Lin, H.-C.K.; Ma, Y.-C.; Lee, M. Constructing Emotional Machines: A Case of a Smartphone-Based Emotion System. Electronics 2021, 10, 306.
  18. Bednarek, M.; Kicki, P.; Bednarek, J.; Walas, K. Gaining a Sense of Touch. Object Stiffness Estimation Using a Soft Gripper and Neural Networks. Electronics 2021, 10, 96.
  19. Bednarek, M.; Kicki, P.; Walas, K. On Robustness of Multi-Modal Fusion—Robotics Perspective. Electronics 2020, 9, 1152.
  20. Mańkowski, T.; Tomczyński, J.; Walas, K.; Belter, D. PUT-Hand—Hybrid Industrial and Biomimetic Gripper for Elastic Object Manipulation. Electronics 2020, 9, 1147.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
