Review

Computer Vision in Autonomous Unmanned Aerial Vehicles—A Systematic Mapping Study

1 Escuela Técnica Superior de Ingenieros Industriales, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
2 Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
3 Departamento de Ingeniería Eléctrica, Electrónica, Automática y Comunicaciones, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
* Author to whom correspondence should be addressed.
Submission received: 29 May 2019 / Revised: 18 July 2019 / Accepted: 22 July 2019 / Published: 5 August 2019
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Personal assistant robots provide novel technological solutions in order to monitor people’s activities, helping them in their daily lives. In this sense, unmanned aerial vehicles (UAVs) can also serve as a present and future model of assistant robot. To develop aerial assistants, it is necessary to address the issue of autonomous navigation based on visual cues. Indeed, navigating autonomously is still a challenge in which computer vision technologies tend to play an outstanding role. Thus, the design of vision systems and algorithms for autonomous UAV navigation and flight control has become a prominent research field in the last few years. In this paper, a systematic mapping study is carried out in order to obtain a general view of this subject. The study provides an extensive analysis of papers that address computer vision as regards the following autonomous UAV vision-based tasks: (1) navigation, (2) control, (3) tracking or guidance, and (4) sense-and-avoid. The works considered in the mapping study (a total of 144 papers from an initial set of 2081) have been classified under the four categories above. Moreover, the type of UAV, the features of the vision systems employed, and the validation procedures are also analyzed. The results make it possible to draw conclusions about the research focus, which UAV platforms are most used in each category, which vision systems are most frequently employed, and which types of tests are usually performed to validate the proposed solutions. The results of this systematic mapping study demonstrate the scientific community’s growing interest in the development of vision-based solutions for autonomous UAVs. Moreover, they will make it possible to study the feasibility and characteristics of future UAVs taking the role of personal assistants.

1. Introduction

The use of unmanned aerial vehicles (UAVs) has increased significantly in recent years. These aircraft are mainly characterized by the fact that they allow access to remote places without the direct intervention of a human operator aboard. Such places are generally difficult to access and/or have unfavorable conditions. These abilities permit the use of UAVs in manifold applications, such as remote sensing, support in emergency situations, inspection of infrastructures, logistics systems, professional photography and video, and precision agriculture spray systems, among others [1]. An emerging domain is flying assistance robotics, where UAVs offer a present and future model of fully autonomous personal monitoring. Some examples are Aire, a self-flying robotic assistant for the home [2], Fleye, a personal flying robot [3], and CIMON and Astrobee, flying assistant robots in the space station [4,5].
Personal assistant robots are principally based on monitoring people’s activities in order to provide them with help and support in their daily lives. Our current research interest is framed in this field. One of our major objectives is to design an autonomous aerial vehicle to assist dependent people [6,7]. To this end, a vision system must capture images of the dependent person, which are analyzed in order to determine the assistance required at each moment. It is therefore also necessary to address the problem of autonomous navigation in both indoor and outdoor environments; in other words, the UAV must perform flights in a completely autonomous manner. Given the need for a vision system to monitor the assisted person, both issues may be intertwined.
In this respect, a series of complex operations must still be solved in order to achieve fully autonomous UAVs. Fully autonomous navigation means flying in an environment without requiring the control of an operator on the ground. It should be noted that, to fly autonomously, a UAV requires sensors to (a) measure the aircraft’s state, (b) sense the environment, (c) detect landmarks and targets (in tracking missions), and even (d) detect both static and dynamic obstacles, among other tasks. Moreover, it is necessary to integrate this information into a control system so as to ensure that the movement of the aircraft is accurate and safe.
Vision systems play an outstanding role in performing all these complex and interrelated tasks, since the images captured by a camera contain a vast amount of data concerning the flight environment. This information is extracted and analyzed using computer vision algorithms to obtain useful information for navigation and flight control. As a result, the number of works focused on the development of computer vision solutions for autonomous UAVs has grown notably in the last few years. We therefore present a systematic mapping of the literature in order to summarize and analyze the research carried out on this topic.
A systematic mapping is a method employed to review, classify and structure documents focused on a specific research topic. Systematic mapping studies were initially used principally in the field of medical research, but they are now also being applied to other areas related to engineering and new technologies, such as web development [8], mobile devices [9], socio-technical congruence [10], and unmanned aerial systems in smart cities [11]. With regard to the topic of computer vision systems in mobile robots, reviews and meta-analyses have been presented for both ground [12] and aerial vehicles [13,14,15]. However, a systematic mapping study that provides an objective procedure with which to identify the nature and extent of the research on this topic has not been conducted to date. This is, therefore, to the best of our knowledge, the first systematic mapping study focused on how computer vision is being used in autonomous flying robots. Its purpose is to provide a global view of those papers that introduce computer vision-based solutions for autonomous UAVs.
The paper is organized as follows. Section 2 describes the research method used, including the research questions and classification scheme. Section 3 presents the results obtained from the systematic mapping study. Finally, Section 4 provides the main findings of the study and Section 5 the most relevant conclusions.

2. Methods

A systematic mapping study “maps” a research area by classifying papers in order to identify which topics are well studied and which need additional study. It therefore aims to obtain an overview of a certain research area and of the extent to which it has been covered, using methods from information retrieval and statistical analysis. The present systematic mapping study has been performed on the basis of the guidelines provided by the Template for a Mapping Study Protocol [16]. Figure 1 illustrates the methodology used. It consists of four main steps: (1) the definition of the research questions in order to determine the search strategy, (2) performing the search to obtain an initial set of documents, (3) the screening of papers to select the most relevant ones, and (4) the classification and mapping process carried out in order to eventually obtain the results. The research questions, search strategy, inclusion and exclusion criteria, selection of studies and classification scheme are detailed below.

2.1. Research Questions

As stated before, the goal of this systematic mapping study is to provide an overview of the current research on the topic of computer vision in autonomous UAVs. This way, we will obtain a broad view of navigation and flight control operations in which computer vision is relevant. Consequently, it will be possible to address future in-depth studies for the development of autonomous aerial robots, and, particularly, assistant UAVs. This overall approach has led to the following research questions:
  • RQ1: How many papers have been published on computer vision in autonomous UAVs?
    This first question aims to discover the number of works about the use of computer vision in autonomous UAVs that have been published and the forums in which these papers are most frequently published.
  • RQ2: How is computer vision used in autonomous UAVs?
    The second question concerns the idea of grouping articles according to the type of operation for which computer vision is used. Four main categories related to navigation, stability and maneuver control, guidance or tracking, and obstacle detection and avoidance are considered. This enables us to know on what areas the research is principally focused, and allows us to carry out a more in-depth analysis of the remaining questions.
  • RQ3: What types of UAVs have been used in the proposed solutions?
    One of the most determining aspects is to discover for what type of UAV platform each proposal has been designed and validated. This is precisely the goal of the third question. In addition, by considering the first classification on the use of vision, we can get an idea of which type of UAV is most frequently used in each operation.
  • RQ4: What are the characteristics of the vision systems?
    The number, localization, and orientation of the cameras will be analyzed at this point in order to draw conclusions about the main features of the vision systems commonly used in each navigation and flight control area.
  • RQ5: What is the validation process for each proposed solution?
    The last research question is related to the kind of tests carried out to validate the proposals; flight, experimental, and simulation trials, or a combination of them. The idea is to perform an analysis of the kind of tests most frequently performed for each main category considered.

2.2. Search Strategy

A search string was defined in order to automatically find articles that use computer vision technologies for the autonomous navigation and control of UAVs by using terms related to the two main concepts: UAV and vision. The search string was, therefore, composed of two parts. The first part contained words related to autonomous aircraft, while the second was defined by employing terms related to computer vision. Table 1 shows the definition of the search string, in which the Boolean OR was used to join terms and synonyms, and the Boolean AND was used to join the two main parts.
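As an illustration only (the actual string used is the one reported in Table 1), a two-part query of this kind takes a form such as TITLE-ABS-KEY(("UAV" OR "unmanned aerial vehicle" OR "drone") AND ("computer vision" OR "vision-based" OR "visual navigation")), where the first parenthesized group gathers the aircraft-related terms and the second the vision-related ones.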

2.3. Inclusion and Exclusion Criteria

Inclusion and exclusion criteria were formulated in close relation to our current research interests in aerial assistant robots. In this sense, we focused only on works concerning technological solutions and not on their acceptance. Accordingly, only papers published in Engineering and Computer Science on autonomous vision-based navigation and flight control solutions for individual UAVs (not swarms or multi-UAV systems) were considered. The inclusion and exclusion criteria were defined as follows:
  • Inclusion criteria:
    I1. Papers focused on computer vision for autonomous UAVs.
    I2. Papers whose subject area is Engineering or Computer Science.
    I3. Papers written in English.
    I4. Papers published until 31 December 2017.
  • Exclusion criteria:
    E1. Review papers.
    E2. Papers describing missions with multiple UAVs.
    E3. Papers focused only on recording and/or enhancing aerial images.
    E4. Papers focused on designing UAV simulators or hardware.
    E5. Papers not focused on UAV navigation or flight control.

2.4. Selection of Studies

This section shows details of the results obtained during the process of selecting the papers (see Figure 2).
The Scopus database was selected with the objective of finding papers containing the words related to UAVs and vision defined by the search string (see Table 1) in their title, abstract or keywords. Additionally, some options of the Scopus search engine were used to limit the results to papers published in the areas of Engineering or Computer Science (I2), written in English (I3), and published until December 2017 (I4). The complete research string can be consulted in Figure 2.
The search took place in June 2018 and resulted in a total of 2081 papers. After collecting the papers, eight duplicated works were removed and the remaining 2073 articles were passed through a screening process composed of three filters. Firstly, a manual screening was carried out to analyze them. In this stage, we observed that many papers referred to vision systems in remotely controlled (not autonomous) UAVs, or that the flight control system did not depend on visual information. In papers mainly focused on applications such as remote sensing, the vision system is therefore not used for the navigation and/or flight control of an autonomous UAV, meaning that inclusion criterion I1 was not completely met. An automatic filter (see details in Figure 2) was consequently applied to ensure the effectiveness of this criterion.
The idea was that papers containing at least one term (or a pair of terms) on UAVs and at least one vision term in the title, abstract and keywords would attain a certainty percentage of 100%; when the presence of these terms in any of these fields decreased, the percentage also decreased. The automatic filter excluded a total of 1815 articles whose certainty percentage was lower than 95%, and two manual filtering processes were then performed on the remaining papers.
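The exact weighting applied within Scopus is not reproduced here; purely as an illustration, the idea behind the certainty percentage can be sketched as follows, with hypothetical term lists and an equal weight per field (title, abstract and keywords):

```python
# Minimal sketch of the term-presence scoring behind the automatic filter.
# Term lists, per-field weights and the 95% threshold are illustrative assumptions.
UAV_TERMS = ("uav", "unmanned aerial vehicle", "drone", "quadrotor")
VISION_TERMS = ("vision", "visual", "camera", "image")

def certainty(title, abstract, keywords):
    fields = (title.lower(), abstract.lower(), keywords.lower())
    per_field = 100.0 / len(fields)          # each field contributes equally
    score = 0.0
    for field in fields:
        has_uav = any(t in field for t in UAV_TERMS)
        has_vision = any(t in field for t in VISION_TERMS)
        score += per_field * ((has_uav + has_vision) / 2)
    return score

keep = certainty("Vision-based quadrotor navigation",
                 "We present a monocular camera approach for UAV localization.",
                 "uav; computer vision") >= 95.0
print(keep)  # True: both term groups appear in all three fields
```

In this toy scoring, a paper mentioning terms of both groups in all three fields reaches 100%, and the score decreases as fields stop containing them, mirroring the behavior described above.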
During the first manual screening, each record retrieved from the automated filter was evaluated by reading its title, abstract and keywords to decide whether or not it should be excluded. As a result, 97 of the 258 remaining papers were excluded because their title, abstract or keywords met at least one of the exclusion criteria E1 to E5. After removing these papers, a last filter was applied using the same exclusion criteria, but this time screening the full text. In this step, after reading the complete text of the remaining 161 articles, 17 works were not included in the final selection because they satisfied at least one of the exclusion criteria E3 to E5. After carrying out the selection process, a total of 144 articles were considered for the systematic mapping study.

2.5. Classification

The papers were classified according to four properties that were used to answer the research questions (see Figure 3): the Task for which computer vision has been used, the Class of UAV for which the vision-based solution has been designed, the Vision System employed, and the Validation Process used to test the solution.

2.5.1. Vision-Based Task

With regard to RQ2 (Use of Computer Vision), each paper was classified in one of the following main categories:
Visual Navigation: The navigation task includes determining the aircraft’s position and orientation (e.g., [17,18,19]). Sensors are, therefore, required to measure the aircraft’s state and to sense the flight environment. Here, the main feature compared with other solutions is the use of a vision system as the main sensor, which may be complemented by other sensors (e.g., inertial measurement units, global positioning systems) in order to correct the visual estimation. Two examples are shown in Figure 4: a quadrotor using artificial visual markers to estimate its position in an indoor environment (see a.1), and a vision-based terrain-referenced navigation approach, in which aerial images are compared with a digital terrain elevation database to estimate the UAV’s position (see a.2).
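As an illustration of the marker-based case (a sketch only, not the method of any particular paper reviewed), once the corners of an artificial marker with known world coordinates have been detected in the image, the camera (and hence UAV) pose can be recovered with a standard Perspective-n-Point solver; the marker size, intrinsics and pixel coordinates below are hypothetical values:

```python
import numpy as np
import cv2

# Marker corners in the world frame (metres) for a 0.2 m square marker on the floor.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.2, 0.0],
                          [0.0, 0.2, 0.0]], dtype=np.float32)
# Detected corners in the image (pixels); assumed to come from a marker detector.
image_points = np.array([[310.0, 240.0],
                         [390.0, 242.0],
                         [388.0, 320.0],
                         [308.0, 318.0]], dtype=np.float32)
K = np.array([[525.0, 0.0, 320.0],   # hypothetical pinhole intrinsics
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_position_world = -R.T @ tvec  # camera (UAV) position in the world frame
print(camera_position_world.ravel())
```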
Vision-Based Control: The aircraft’s position and orientation are controlled on the basis of the information captured by vision sensors and subsequently processed by computer vision algorithms (e.g., [20,21,22]). The vision-based control of UAVs began in the 1990s. Since then, several solutions have been proposed in order to address operations such as stabilization to maintain a constant altitude and/or a straight flight (see b.1), and maneuver control to guide a UAV so that it performs a specific and precise movement, such as positioning with respect to a visual mark (see b.2). When the information obtained by a vision system is integrated into the control loop directly, this is called visual servoing. In this case, the control law depends directly on the error signal obtained from visual information.
Vision-Based Tracking/Guidance: In vision-based tracking or guidance, the UAV control system is designed to perform a flight based on relative navigation with respect to a target (usually in motion) or a flight path defined by a series of visual references or features (e.g., [23,24,25]). A vision-based system must, therefore, be able to detect the target or visual reference, estimate its position and determine the actions required to control the UAV’s flight path. Two examples are presented in Figure 4: a mobile target being tracked by a fixed-wing UAV (see c.1), and a quadrotor following a visual path defined by images of a previously known environment (see c.2).
Vision-Based Sense-and-Avoid (SAA): Completely autonomous navigation requires the ability to detect and avoid both static and dynamic obstacles. When these tasks are performed on the basis of the information captured and processed by a vision system, we refer to vision-based sense-and-avoid (e.g., [26,27,28,29]). The main objective is, therefore, to use one or more cameras to detect possible collisions with objects or even with other aircraft, and to determine the control actions required to achieve a collision-free flight. Two examples of vision-based SAA operations are illustrated in Figure 4: a quadrotor detecting the approach to an obstacle in its flight path (see d.1), and a mapping-based autonomous flight of a quadrotor in a coal mine (see d.2).

2.5.2. Class of UAV

With regard to RQ3 (UAV Platform), each paper was classified into one of the following classes of UAV according to the aircraft’s configuration [36,37]:
Fixed-Wing: This aircraft consists basically of a rigid wing that has a predetermined airfoil, which makes flight possible by generating lift due to the UAV’s forward airspeed. This airspeed is generated by the forward thrust produced usually by a propeller turned by an electric motor. It is mainly characterized by its high cruise speed and long endurance and is mostly used for long distance, long range and high altitude missions (e.g., [38,39,40,41]). Two models of fixed-wing UAVs are illustrated in Figure 5 (see a.1 and a.2).
Rotary-Wing: These aircraft have rotors composed of blades in constant motion, which produce the airflow required to generate lift. These flying machines, also called vertical take-off and landing (VTOL) rotorcraft, are normally used for missions that require hovering flight. They allow a heavier payload, easier take-off and landing, and better maneuvering than fixed-wing UAVs (e.g., [42,43,44,45]). The most common models are helicopters and quadrotors, multi-rotor aircraft with four rotors that are widely used at present [46]. Figure 5 displays two such models, a helicopter (see b.1) and a quadrotor, also named quadcopter (see b.2).
Flapping-Wing: This class of micro-UAV reproduces the flight of birds or insects [47,48,49]. It has an extremely low payload capability and low endurance owing to its reduced size. However, flapping-wing UAVs have low power consumption and can perform vertical take-offs and landings. Despite these advantages, the difficulties related to their construction and set-up are still relevant today. Two designs of these novel aircraft are illustrated in Figure 5 (see c.1 and c.2).
Airship: An airship or dirigible is a “lighter-than-air” aircraft that is steered and propelled through the air by using rudders and propellers or other thrust. These aerostatic aircraft stay aloft by filling a large cavity, like a balloon, with a lifting gas. The major types of airship are non-rigid (or blimps), semi-rigid and rigid. A blimp (technically a “pressure airship”) is a powered, steerable, lighter-than-air vehicle whose shape is maintained by the pressure of the gases within its envelope [50]. Since no energy is expended to lift this aircraft, the saving can be used as a power source for displacement actuators, thus enabling long-duration flights. In addition, this air vehicle can work safely at low levels, close to people and buildings [51,52]. Figure 5 shows two examples of airship, namely a blimp UAV (see d.1) and an innovative spherically shaped blimp UAV (see d.2).

2.5.3. Vision System

With regard to RQ4 (Vision System), each paper was first classified by considering the number of cameras as:
Monocular: A vision system with just one camera is used as a vision-based solution (e.g., [56,57,58,59]). An example of a monocular system is displayed in Figure 6 (a.1).
Multi-camera: A vision system with two or more cameras is employed in the proposed approach. Here, the cameras are separate or have different orientations (e.g., [60,61,62]). A multi-camera configuration is illustrated in Figure 6 (a.2).
Stereo: A special and widely used case of a multi-camera system, in which two cameras in a stereo configuration (at the same location and with the same orientation) or a stereo camera (two vision sensors) are employed in the vision-based proposal (e.g., [63,64,65,66]). A stereo vision system is presented in Figure 6 (a.3).
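The practical appeal of the stereo configuration is that depth can be recovered directly from the disparity between the two views: as general background (the standard pinhole relation, not a formulation taken from the papers analyzed), a point observed with disparity d by a calibrated rig of focal length f (in pixels) and baseline B lies at depth Z = fB/d, so a longer baseline or a higher image resolution improves the depth accuracy at long range.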
In addition to the number of cameras, their location, either on board the UAV or on the ground, and the orientation of the system, that is, where the vision sensors look (in the direction of the flight, toward the ground, or toward the UAV itself), were also considered to complete the classification according to the vision system used. Figure 6 shows several configurations considering the location and orientation of the cameras (see b.1, b.2 and b.3).

2.5.4. Type of Validation

With regard to RQ5 (Validation Tests), each paper was classified into one or more of the following types of trials (see Table 2):
Flight: In this case, the proposed solution has been tested during a real flight of a UAV in outdoor and/or indoor environments (e.g., [68,71,72]). The UAV may sometimes be controlled by an operator (remotely or manually) during the flight.
Experimental (e.g., [73,74,75]): Two kinds of experiments were considered: offline tests with which to validate the proposed solution by using data and images recorded on a previously performed flight, and lab tests in which the proposal is tested by using a laboratory platform that simulates the dynamic behavior of a real aircraft (for instance, Twin Rotor MIMO System [76,77,78]).
Simulation: In this kind of trial, the proposed solution is validated by means of simulation tests, such as hardware-in-the-loop simulations (HILS), image-in-loop simulations (IILS), virtual reality environments, and numerical simulations. In all these cases, the main feature is that the flight of a model or virtual aircraft is simulated (e.g., [79,80,81,82]). With respect to the camera employed, it may be a real (physical) device, a virtual camera in a 3D environment, or even a model that represents its behavior by providing the theoretical measurements that would be obtained with computer vision.

3. Results

In this section, we respond to the five research questions described previously in accordance with the literature selected.

3.1. RQ1: How Many Papers Have Been Published on Computer Vision in Autonomous UAVs?

According to the classification and mapping process, a total of 144 papers on computer vision for autonomous UAVs were published during the period studied (until December 2017). The annual trend of the papers is shown in Figure 7, which illustrates the evolution of the number of works focused on computer vision for the navigation and control of UAVs since 1999. Despite slow growth in the early years, a boom started in 2007, when the amount of research began to grow remarkably. Since that year, the growth has been continuous and the number of papers published in recent years has been very significant, demonstrating that the subject matter of this review is of great interest to the scientific community.
The 144 papers were published in 68 journals. As shown in Table 3, more than half of the papers originate from 13 top venues. Almost a third of the total works belong to the top five venues, which include five or more papers each. It is noteworthy that the Journal of Intelligent & Robotic Systems contains 23 papers (≈16% of the total). The full list of papers from the 68 journals can be consulted in Appendix A.
Considering the Journal Citation Reports (JCR), some conclusions can be drawn. According to data from the year 2017, most of the 68 journals have a notable impact factor (IF) in categories such as ‘Engineering, Aerospace’, ‘Robotics’, ‘Automation & Control Systems’, ‘Instruments & Instrumentation’ and ‘Computer Science, Artificial Intelligence’. Figure 8 represents the number of journals (and the papers published) within each JCR quartile. In this respect, 124 of the 144 papers (over 86%) were published in indexed journals. These numbers highlight the scientific relevance of the papers considered in this systematic mapping study. They also show that the development of a fully autonomous UAV capable of flying without the control of a human operator is a challenging and multidisciplinary problem, and are a remarkable sign of the complexity involved in the development of computer vision solutions for UAV navigation and flight control.

3.2. RQ2: How Is Computer Vision Used in Autonomous UAVs?

Depending on the task for which computer vision has been employed, each paper has first been classified into one of four main categories: 1. Visual Navigation, 2. Vision-Based Control, 3. Vision-Based Tracking/Guidance and 4. Vision-Based Sense-and-Avoid (SAA). It should be noted that solutions presented in more than one paper have been considered only once, and papers that deal with different operations have been classified only according to the main task addressed.

3.2.1. Distribution and Annual Trend per Category

The way in which the 144 papers are distributed among the four main categories is shown in Figure 9. The graph shows the number of papers in each main category and also details the classification carried out considering the specific operation addressed. The results show that practically two of every three papers belong to the first two categories: 41 are focused on the design of visual solutions for UAV navigation (cat. 1) and 55 present control algorithms based on the information provided by a vision system (cat. 2). The remaining papers are divided between the other two categories: 33 papers focus on developing solutions for the visual guidance of a UAV (cat. 3) and 15 on solutions that allow a UAV to detect and avoid obstacles during its flight (cat. 4).
The number of works published annually within each main category can be consulted in Figure 10. In general, an upward trend, which is remarkable for the second and third categories, can be appreciated. When considering the publication year, we can highlight that the earlier works belong to the first two categories, while papers presenting vision-based solutions for guidance or for the detection and avoidance of obstacles are more recent. In our opinion, this is a clear consequence of the fact that navigation and control are basic (essential) tasks to be dealt with before confronting more complex tasks, such as target/obstacle tracking and avoidance. Flying autonomously and safely requires the design of more complex and robust control systems. Another noteworthy fact is that, except in the first years (2000–2001), works on computer vision for UAVs have appeared annually, and, since 2013, the works have been distributed across all four categories. These data highlight the scientific community’s current interest in the development of computer vision systems for different navigation and flight control operations.
The data for the classification according to the specific operation can be consulted in Table 4. It shows the complete list of the 144 papers grouped into twelve subcategories (three for each main category). The operations considered and the results obtained are discussed below.

3.2.2. Vision-Based Task

Category 1. Visual Navigation

Firstly, it should be kept in mind that the flight control of a UAV depends principally on knowing where the UAV is and where it wants to go. Knowledge of the aircraft’s state is, therefore, essential, and it seems logical that navigation solutions have been the focus of much of the research carried out to date. The subcategories considered within the first category are: 1.1. Self-Localization, for papers that use computer vision to determine the aircraft’s position within the flight environment (19 papers); 1.2. State Estimation, for papers focused on solutions that reconstruct the aircraft’s state from visual information (based principally on visual odometry) (10 papers); and 1.3. Based on SLAM, to classify vision-based solutions that simultaneously determine the aircraft’s localization and build a map or model of the flight environment (12 papers). At this point, it is important to note that other sensors are frequently used to complement and correct the visual ones; the most common are global positioning systems (only for outdoors), inertial measurement units, and altimeters.
Unfortunately, this information is not clearly detailed in some of the papers analyzed, especially those in which the validation trials do not consist of real flights. For this reason, the study of complementary sensors has not been considered in this mapping study. However, it would be interesting to address this issue in future, more in-depth reviews.
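To make the visual-odometry-based state estimation of subcategory 1.2 more concrete, the following minimal sketch (an OpenCV example written for illustration, not taken from any of the papers reviewed) estimates the relative camera pose between two consecutive frames; with a single camera the absolute scale of the translation is not observable and must come from another sensor:

```python
import numpy as np
import cv2

def relative_pose(prev_gray, curr_gray, K):
    """Estimate the rotation and (unit-norm) translation between two grey frames."""
    orb = cv2.ORB_create(1000)                          # detect and describe features
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)                 # brute-force feature matching
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Robustly estimate the essential matrix and decompose it into R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Chaining such relative poses over consecutive frames yields an up-to-scale trajectory, which is why the papers in this subcategory typically fuse the result with inertial or GPS measurements.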

Category 2. Vision-Based Control

This is the category in which the greatest number of papers has been found, and its tendency toward annual growth is highly notable. As mentioned above, vision-based control consists of determining the actions required to control the aircraft’s position and orientation by using visual information as a reference. In this second category, the papers have been classified, according to their control purpose, into: 2.1. Stabilization, when visual information is integrated into the control system of the aircraft to stabilize its flight, principally using the skyline as a reference or using algorithms for the detection and compensation of movement; 2.2. Control of Maneuvers, when the objective is to regulate the UAV’s position in order to perform accurate movements, such as positioning in relation to a visual landmark; and 2.3. Autonomous Landing, when the information provided by vision sensors is integrated into the landing control system (papers focused on the detection of the landing site and the consequent estimation of the aircraft’s pose to assist in landing have also been considered here). The results show that stabilization is, without any doubt, the least frequent topic, with only four papers, while the control of maneuvers and landing (a specific type of maneuver) are the subcategories with the greatest numbers of papers (24 and 27, respectively).

Visual Servoing

At this point, it is important to introduce visual servoing [88], in which the motion of a UAV (or a robot, in general) is controlled with respect to a visual target defined by features or artificial landmarks. In this widely used control approach, visual information is directly integrated into the feedback control. Three main methods are considered [163]: (a) position-based visual servoing (PBVS), or 3D visual servoing, which involves the reconstruction of the target pose with respect to the camera; (b) image-based visual servoing (IBVS), or 2D visual servoing, which aims to control the dynamics of features directly in the image plane; and (c) hybrid visual servoing, or 2 1/2D visual servoing, which combines visual features obtained directly from the image with features expressed in the Euclidean space.
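As general background on the image-based variant (the standard formulation found in the visual servoing literature, not a scheme reproduced from any single paper in this study), IBVS regulates a vector of image features s toward its reference value s* by commanding the camera velocity according to

\[ \mathbf{e} = \mathbf{s} - \mathbf{s}^{*}, \qquad \mathbf{v}_{c} = -\lambda\, \widehat{\mathbf{L}}_{e}^{+}\, \mathbf{e}, \]

where λ > 0 is a control gain and the matrix multiplying the error is the pseudo-inverse of an estimate of the interaction (image Jacobian) matrix relating feature motion to camera motion. PBVS has the same structure, with the error expressed in pose space rather than in the image plane.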
Of these three methods, the most commonly used is IBVS, in which the control action is determined directly from the error signal obtained by comparing the actual image captured by the camera with a reference image. This method has been used principally in maneuvers for rotary-wing UAVs and in landing for fixed-wing UAVs. Furthermore, the visual servoing scheme is not limited to operations classified under vision-based control. In fact, it has been widely used in target-following missions (generally with a moving target), one of the operations considered in the vision-based tracking/guidance category discussed below. In relation to visual servoing, motion capture systems like VICON are used as complementary sensors (see, for instance, articles [109,145]) that estimate the velocity of the UAV while the UAV looks toward the visual landmark. Unfortunately, motion capture systems are still limited to a specific small volume inside research labs [178].

Category 3: Vision-Based Tracking/Guidance

Continuing with the third category, the papers have been classified into the following groups: 3.1. Target Tracking, when the proposed solution is designed so that the UAV detects and follows a specific target (generally in motion); 3.2. Features/Path Tracking, for papers in which a UAV is visually guided to follow a path defined by images, or is guided through visual features that it must detect and follow, such as lines on a road; and, finally, 3.3. Autonomous Aerial Refueling (AAR), which groups those papers in which one UAV follows another in order to solve the problem of autonomous resupply. It should be noted that some of the papers classified here address only the first stage of the problem, that is, the pose estimation between the two UAVs involved in the refueling process (tanker and receiver). Of these subcategories, the most popular is target tracking, with 17 papers, followed by AAR with 10 papers and, finally, the tracking of routes or features with only six works.
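As a simple illustration of subcategory 3.1 (a sketch under our own simplifying assumptions, not the scheme of any specific paper reviewed), the pixel error between the detected target and the image centre can be mapped to body-rate commands with proportional gains:

```python
IMG_W, IMG_H = 640, 480        # hypothetical image resolution
K_YAW, K_PITCH = 1.2, 0.8      # hypothetical proportional gains (rad/s per unit error)

def follow_target(bbox_center):
    """Map the pixel error between the target centre and the image centre to rate commands."""
    u, v = bbox_center
    err_x = (u - IMG_W / 2) / (IMG_W / 2)   # normalized horizontal error in [-1, 1]
    err_y = (v - IMG_H / 2) / (IMG_H / 2)   # normalized vertical error in [-1, 1]
    # Sign conventions depend on the chosen body-frame definition.
    yaw_rate_cmd = -K_YAW * err_x           # turn toward the target
    pitch_rate_cmd = -K_PITCH * err_y       # re-centre the target vertically
    return yaw_rate_cmd, pitch_rate_cmd

print(follow_target((400, 210)))            # target to the right of and above the centre
```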

Category 4: Vision-Based Sense-and-Avoid (SAA)

Lastly, the following subcategories have been considered for the fourth category: 4.1. Detection and Avoidance of Intruders, which includes solutions to avoid collisions with other aircraft, primarily through evasive maneuvers; 4.2. Mapping-Based Obstacle Avoidance, for strategies that avoid obstacles on the basis of a mapping process and subsequent obstacle-free path planning; and, finally, 4.3. Autonomous Flight and Obstacle Avoidance, in which autonomous navigation in complex environments is addressed. In this last case, the UAV generally uses a strategy based on finding which path to follow in order to avoid colliding with obstacles (e.g., navigation between rows of trees in an orchard). According to the results obtained, the subcategory with the highest number of papers is that related to mapping-based solutions, with six papers, followed very closely by the avoidance of other aircraft and autonomous flight, with five and four papers, respectively. All these operations are very close to the concept of fully autonomous UAVs, and it is, therefore, foreseeable that research will continue to focus on them in the near future.
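To illustrate the mapping-based strategy of subcategory 4.2 (again a minimal sketch with hypothetical parameters, not a reviewed implementation), obstacle points expressed in the world frame can be accumulated in an occupancy grid against which candidate paths are checked:

```python
import numpy as np

GRID_SIZE, CELL = 100, 0.25                  # 25 m x 25 m grid with 0.25 m cells (hypothetical)
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=bool)

def mark_obstacles(points_xy):
    """Mark grid cells occupied by detected obstacle points (x, y in metres)."""
    idx = (points_xy / CELL).astype(int)
    valid = (idx >= 0).all(axis=1) & (idx < GRID_SIZE).all(axis=1)
    grid[idx[valid, 0], idx[valid, 1]] = True

def path_is_free(waypoints_xy, margin_cells=1):
    """Check that no waypoint cell (or its immediate neighbourhood) is occupied."""
    for x, y in (waypoints_xy / CELL).astype(int):
        lo_x, hi_x = max(x - margin_cells, 0), min(x + margin_cells + 1, GRID_SIZE)
        lo_y, hi_y = max(y - margin_cells, 0), min(y + margin_cells + 1, GRID_SIZE)
        if grid[lo_x:hi_x, lo_y:hi_y].any():
            return False
    return True

mark_obstacles(np.array([[5.0, 5.0], [5.2, 5.1]]))
print(path_is_free(np.array([[1.0, 1.0], [5.1, 5.0], [9.0, 9.0]])))  # False: path crosses an occupied cell
```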

3.3. RQ3: What Types of UAVs Have Been Used in the Proposed Solutions?

The papers have been grouped into Fixed-Wing, Rotary-Wing, Flapping-Wing and Airship, according to the aircraft’s configuration. Moreover, an additional Not Specified group has been considered in order to classify those papers that do not detail the class of UAV.

3.3.1. Distribution and Annual Trend per UAV Class

The UAV class distribution over the total of 144 papers is shown in Figure 11. This graph shows that, without any doubt, the most widely used UAV platform corresponds to the rotary-wing class (102 papers, 71%), followed by the fixed-wing class (37 papers, 25%). Only one paper uses an airship UAV, and three papers do not specify the type of UAV. The only class of UAV not present in the systematic mapping study is, therefore, the flapping-wing aircraft. Consequently, there is an interesting gap on which future research could be focused. Moreover, considering the characteristics of flapping-wing UAVs, they could be very useful for indoor applications and/or approaches requiring a small aircraft.
The above data are also corroborated by the annual trend per UAV class (see Figure 12). This graph presents the annual evolution of the number of works and again provides evidence of the greater use of rotary-wing UAVs. Another noteworthy fact is that the use of rotary-wing aircraft has continued since 1999, also showing a clear trend of growth, which can be considered almost exponential in recent years. However, the other UAV classes have a discontinuous evolution over time. With the exception of fixed-wing UAVs, for which some evolution can be appreciated, no conclusions can be drawn for the remaining cases, given the small number of works.

3.3.2. Concerning the Vision-Based Task and Type of UAV

If we consider the first classification, regarding the computer vision task in each paper, the distribution again shows that the most frequent type of UAV belongs to the rotary-wing class. Rotary-wing UAVs are the majority in all categories, and more clearly so for the first two, focused on navigation and control. The papers that do not specify the type, or that employ a less frequent type of UAV, belong to categories 1. Visual Navigation and 4. Vision-Based Sense-and-Avoid (SAA); these papers are usually validated by means of tests using images obtained from databases or satellite maps. Furthermore, a more in-depth analysis has been performed considering the specific types of aircraft within each class. Figure 13 shows the UAV class distribution over the four categories, detailing the type of UAV. This information is provided as “Class: Type of UAV”; for example, Rotary-Wing: Helicopter. When the type of UAV is not available, only the name of its class is indicated.
Starting with the most common class, Rotary-Wing, the types of UAV found in the analysis are Helicopter, Quadrotor, Multirotor and Sit-On-Tail VTOL. Quadrotors are clearly the most widely used, with a notable 18 papers in the control-of-maneuvers subcategory. In this respect, quadcopters are vehicles with four rotors that are relatively simple to control, with high maneuverability and the ability to hover. All this, together with the wide range of low-cost quadrotors on the market, has contributed significantly to their clear predominance in the research developed to date. The second most frequent type of rotary vehicle is the helicopter, distantly followed in third place by multirotors. A Sit-On-Tail VTOL aircraft, used in one paper on the control of maneuvers, is in the last position.
With respect to the second most common class, Fixed-Wing, we should point out that it is usual not to provide details of which model of fixed-wing aircraft has been used. In fact, only three papers provide this information. Two of them address the process of autonomous landing by using a Blended Wing-Body aircraft, whereas a Shadow model is used in one paper on the detection and avoidance of an intruder aircraft. Furthermore, the only UAV grouped in Airship is a Blimp, which is used in a paper concerning visual navigation based on SLAM.
An additional fact is that these graphs once again show the predominance of rotary-wing air vehicles over the fixed-wing category, not only as regards the number of articles, but also because this type of air vehicle is present in all the subcategories considered, while fixed-wing air vehicles are used in only some of them. However, it should be highlighted that fixed-wing UAVs are used more in operations related to high-altitude flight and large distances, owing to the features of such airplanes. This is the case, for instance, of autonomous aerial refueling operations, whose purpose is precisely to increase flight autonomy (the ability to perform the complete flight without stopping to refuel on the ground). Fixed-wing UAVs are also frequently used for the stabilization of the UAV’s flight with respect to the skyline, and for the detection and avoidance of other aircraft.
Finally, in order to conclude this extensive analysis of UAV platforms, the complete list of the 144 papers considered in this mapping study, classified according to the class of UAV and also detailing the type of aircraft, is shown in Table 5.

3.4. RQ4: What Are the Characteristics of the Vision System?

The 144 papers considered in this systematic mapping study have been classified with regard to the vision system, in terms of the number of cameras, as Monocular (one camera), Multi-camera (2) (two cameras), Multi-camera (4) (four cameras), and, finally, a special type of multi-camera system, Stereo (two vision sensors in a stereo configuration, i.e., at the same location and oriented toward the same point).
Concerning the vision systems, their location, that is, where they are installed, and their orientation, that is, where they point, have also been analyzed. Other aspects, such as the camera technology used and the images recorded (color or gray scale), have not been considered because this information is not always detailed in the papers analyzed. In addition, this question makes no sense in the case of those proposals that are validated using tests that do not require a physical camera device (see Table 2), such as virtual reality or numerical simulations. Moreover, further information on the technical aspects of the vision systems is beyond the scope of a systematic mapping study. Nevertheless, we believe that a more in-depth analysis of the cameras would be interesting for future detailed reviews, particularly those focused on solutions validated during a UAV’s real flight.

3.4.1. Distribution and Annual Trend per Vision System

The distribution of these types of vision systems across the database is shown in Figure 14, in which the predominance of monocular systems is clear, with a total of 116 papers representing 80% of the total. Secondly, stereo systems are present in 20 papers, representing 14%. The remaining 6% is divided between multi-camera (2) and multi-camera (4) systems, with seven papers and one paper, respectively.
Figure 15 shows the annual trend of papers considering the vision system employed. A clear predominance of monocular vision systems can again be observed. Moreover, the graph shows a remarkable growth trend for these monocular systems. In the case of stereo systems, the growth is not as clear, although some increase can be observed in recent years (since 2014). These data may suggest that the lower use of stereo systems is due to their later development. Regarding the rest of the multi-camera configurations, it is not possible to draw relevant conclusions, given the small number of papers focused on these systems.

3.4.2. Concerning the Vision-Based Task, Localization, and Orientation

With regard to the influence of the computer vision task at hand, the predominant system is again the monocular one for all the categories, followed at a distance by the stereo system, which is also present in all the categories. The case of multi-camera systems is different: a 2-camera system is used in tasks related to control, tracking and sense-and-avoid (cat. 2, 3 and 4), while a 4-camera system is employed only for navigation (cat. 1). The remaining details concerning the vision system distribution in each main category are provided in Figure 16.
Table 6 presents the list of 144 papers grouped by the number, localization, and orientation of the cameras. The location is either On Board the UAV or On the Ground, or a combination of both in multi-camera systems. With regard to the orientation, when the vision system is on board, the most frequent setups are pointing Downward (inclined towards the ground) and Forward (in the direction of flight). In the case of papers that address autonomous aerial refueling, a camera pointing to the other aircraft involved in the process has also been considered; here, the system points to the UAV that acts as the receiver (Toward the Receiver UAV) or to the aircraft with the fuel tank (Toward the Tanker UAV). However, in the case of ground vision systems, their orientation is always directed Toward the UAV. Finally, in a few cases, the vision system’s orientation is modified (shown as Depending on the test).
Lastly, some conclusions are presented on the data related to the vision systems used. (a) The results show a predominance of on-board monocular systems. This is mainly owing to the lower complexity of the computer vision algorithms, and also to the ease of installing a single camera on a UAV, with a resulting decrease in the weight added to the aircraft. (b) With regard to camera orientation, this depends to a large extent on the task for which the proposal has been designed. Visual navigation systems generally use cameras pointing toward the ground (especially in geolocation systems). Such an orientation is also necessary in missions involving the tracking of targets on the ground. (c) The orientation pointing in the direction of flight is mostly used in systems for the detection and avoidance of obstacles, and when the visual reference is placed in front of the UAV. In the particular case of aerial refueling, the camera, which is installed on board the receiver or tanker UAV, points in the direction of the other aircraft in order to measure the distance between both. (d) Multi-camera systems are also interesting solutions, especially stereo systems, which have been used in a good number of papers; these systems provide depth information, which is a clear advantage despite their greater complexity. (e) Stereo systems are usually installed aboard the UAV, but some solutions also use stereo systems installed on the ground, mainly to guide fixed-wing UAVs during their landing maneuvers. In this case, the orientation of the cameras is obviously toward the UAV itself.

3.5. RQ5: What Is the Validation Process for Each Proposed Solution?

Lastly, the processes employed to validate the proposed solutions described in the papers have been addressed. Figure 17 enables us to appreciate the different validation processes (Flight, Experimental and Simulation) and the number of papers that make use of them. The figure also shows the distribution over the four main categories regarding the navigation and flight control task performed. In this figure, please note that: (1) experimental and simulation tests have been indicated by means of the terms EXP and SIM, respectively, and (2) the symbol & indicates that the validation process is composed of different types of tests.
Upon observing the results, it is possible to state that the most widely used validation process is flight testing, with a total of 56 papers (39%). When the validation processes that combine flight with other types of simulation and/or experimentation trials are also considered, the percentage rises to more than 54%, which is really remarkable. On the contrary, a significant number of the papers (around 25%; 38 papers) use only simulation tests for validation, while another 27 papers (close to 20%) conduct experimental tests. In this respect, some proposals have been validated by means of offline tests, i.e., trials that employ flight data and images recorded during previous navigation. These tests, together with tests carried out in laboratories, have been considered experimental tests, which are less relevant than flight tests; however, they are also necessary as a preliminary step to verify the performance of a proposed solution.
If we analyze the data considering the four main categories of operations, flight testing is still predominant and stands out notably in the cases of visual navigation and vision-based control. However, the weight of simulation tests is very important in some categories, especially in vision-based control and tracking/guidance. Thus, a considerable number of papers used only simulation tests, which could be considered a negative point. However, not only numerical simulations (usually in visual servoing approaches) but also new paradigms as interesting as virtual reality have been employed. We consider that this kind of technology, which can simulate complex tasks such as aerial refueling involving two aircraft, is one of the most promising areas for the development of improved UAVs. In this respect, it is possible to perform realistic simulations, which may be very useful to test and improve new control algorithms without the need to acquire expensive aircraft.
The total of 144 papers, grouped according to their validation process, is shown in Figure 18. Three large circles have been drawn to represent the three types of tests (Flight, Experimental and Simulation). The intersections of the circles represent those validation processes that are composed of several types of tests.
In addition, in Figure 19, all the papers that use flight tests for validation are classified into Outdoor and Indoor flights. These two groups are again represented using circles, with the intersection representing flight tests in both indoor and outdoor environments (5 papers). The results show that outdoor flights are slightly more frequent than indoor ones.
Finally, in relation to precision or accuracy, it must be highlighted that the scarce information provided in the articles was decisive in the decision not to include this parameter in the systematic mapping study. In fact, there is a very limited number of papers addressing this topic in detail. Most papers approach precision only from a qualitative point of view, not providing numerical values, or showing graphical figures that are difficult to measure and, consequently, to compare with other works. Several of the solutions presented state that they are accurate, but they are difficult to analyze, as they have been tested with distinct platforms, sensors and cameras. In this sense, the difference between flight, experimental, and simulation tests also hinders a fair comparison of different solutions. It should also be considered that the analyzed papers address a wide range of flight operations, whose objectives are very dissimilar, so a comparison in terms of precision could be misleading.

4. Discussion

  • Number and Relevance of the Papers:
    (i) The number of research works focused on the use of computer vision for autonomous aerial vehicles has not stopped growing in recent years, which confirms the scientific community’s interest in this topic. (ii) More than 86% of the papers analyzed in this study were published in journals indexed in JCR in such outstanding areas as Aerospace Engineering, Robotics, Automation, and Artificial Intelligence.
  • Computer-Vision Tasks
    (i) The vast majority of the research analyzed proposes solutions that tackle only some specific task, not a complete solution for a fully autonomous vision-based UAV. This fact, which has allowed us to structure the results into different categories, can be considered negative, since the aerial vehicles described fail to perform completely autonomous flights in real environments. However, it is foreseeable that completely autonomous UAVs based on computer vision will become common in the coming years, considering the rapid development and annual trend of this research field. (ii) In this sense, the focus of research is shifting from relatively simple operations, such as localization and stabilization, toward more complex solutions that are closer to the concept of an autonomous UAV, such as following moving targets or trajectories defined by visual references, as well as solutions for the detection and avoidance of both static and dynamic obstacles, so as to carry out a collision-free flight.
  • Type of UAV, Vision System and Validation Process
    (i) The most widely used UAVs belong to the rotary-wing class. With regard to this type of aerial vehicle, which is principally characterized by permitting vertical take-off and landing, hovering flight, and greater maneuverability than fixed-wing vehicles, the number of papers presenting solutions specifically designed for or tested on them has not stopped growing in this century. Quadrotors and (to a lesser extent) helicopters are the most frequent vehicles within this class of UAV. (ii) The most widely used vision system is a single camera installed on board the UAV. This configuration is present in almost 80% of the works analyzed. Secondly, stereo systems have also been used in a significant number of cases. Again, the annual trend shows the predominance of mono-camera vision systems over multi-camera configurations. (iii) More than 54% of the works analyzed have included flight tests in their validation process. However, a non-negligible number (around 25%) have validated their proposals only through the use of simulation tests. Nevertheless, in this case, not only have numerical simulations been employed, but also virtual reality environments, in addition to hardware- and image-in-the-loop simulations.
After these general conclusions concerning the systematic mapping, we should state that we believe that the study carried out here is of great value since it has allowed the analysis of practically two decades of research. It should, therefore, be of great help to researchers who are starting to develop solutions for UAVs using computer vision, and for experts who wish to consult previously presented solutions. In addition, the classification structure introduced in this paper could be of great help for future reviews focused on the most recent years or certain types of operations, UAV classes, or validation processes carried out during real flights. These future studies may address technical issues in greater depth that are beyond the scope of a systematic mapping study.
Lastly, it is necessary to center the discussion on the concept of assistant aerial robots, the personal research interest that led us to perform this systematic mapping study. Considering the information analyzed in this study, a series of conclusions have been drawn to focus our future developments. These conclusions, therefore, define our guidelines for advancing in the development of an autonomous UAV based on computer vision to assist dependent persons, and are summarized below:
  • Our personal interest in assistant aerial robots requires us to focus future studies principally on vision-based target tracking and obstacle avoidance, so that the UAV can follow the person in order to perform monitoring tasks while meeting the safety conditions needed to use such assistant UAVs in real environments.
  • We consider that quadrotors are the most suitable for our purpose of developing aerial assistant robots for dependent people. The above-mentioned advantages of the rotary-wing class are relevant reasons. In addition, their hovering ability and relatively simple flight control make it possible to use these UAVs in the complex indoor/outdoor environments of dependent persons for which the assistant will be designed.
  • Initially, the best solution for our purpose is the use of a monocular system aboard the UAV. This solution is easier to install, reduces the UAV’s payload, and requires simpler vision algorithms. However, we should not discard the use of a stereo system in the future, should we need accurate depth information. What does seem clear is the location of the vision system on board the UAV, which allows us to track the person in order to determine the assistance action required at any given time.
  • We consider that virtual reality will be essential in the development process of the new personal flying assistant. Virtual reality allows us to test the performance of the UAV in a realistic indoor home environment containing multiple obstacles, as well as in outdoor environments. In addition, virtual simulation tools make it possible to carry out studies using immersive and semi-immersive technologies, such as virtual reality headsets. In this way, it will be possible to validate the proposed assistant UAV in a safe environment and, at the same time, to assess user acceptance of such technology. After that, real flight tests under controlled conditions will be conducted before bringing these novel assistants into people's daily lives.

5. Conclusions

This paper has presented the first systematic mapping study focused on the topic of computer vision in autonomous unmanned aerial vehicles. The objective has, therefore, been to analyze papers that introduce computer vision-based solutions for navigation tasks and flight control in UAVs. After a search process that returned a total of 2081 papers, a screening process was applied, eventually yielding 144 relevant papers. In order to answer the research questions posed, these papers were analyzed and classified according to the task for which computer vision had been used, the UAV class for which the proposal had been designed or validated, the characteristics of the vision system installed, and the validation process used in each case.

Author Contributions

Conceptualization, L.M.B., R.M. and A.F.-C.; Methodology, A.F.-C.; Investigation, L.M.B., R.M. and A.F.-C.; Data Curation, R.M. and A.F.-C.; Writing-Original Draft Preparation, L.M.B.; Writing-Review & Editing, L.M.B., R.M. and A.F.-C.; Funding Acquisition, A.F.-C.

Funding

This work was partially supported by Spanish Ministerio de Ciencia, Innovación y Universidades, Agencia Estatal de Investigación (AEI)/European Regional Development Fund (FEDER, EU) under DPI2016-80894-R grant. Lidia María Belmonte holds a scholarship (FPU014/05283) from the Spanish Ministerio de Educación y Formación Profesional.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
HILS    hardware-in-the-loop simulation
IBVS    image-based visual servoing
IF      impact factor
IILS    image-in-loop simulation
JCR     Journal Citation Reports
PBVS    position-based visual servoing
RQ      research question
SAA     sense-and-avoid
SLAM    simultaneous localization and mapping
UAS     unmanned aircraft system
UAV     unmanned aerial vehicle
VTOL    vertical take-off and landing

Appendix A. List of Journals and Papers Published

Journal | Papers
Journal of Intelligent & Robotic Systems[18,20,27,30,40,51,68,69,71,73,83,84,85,86,87,88,89,90,91,92,93,94,95]
IEEE Transactions on Aerospace and Electronic Systems[28,96,97,98,99,100,101]
Sensors[17,29,31,66,72,102,103]
International Journal of Advanced Robotic Systems[38,64,104,105,106]
Journal of Guidance, Control, and Dynamics[33,39,107,108,109]
Autonomous Robots[58,110,111,112]
IEEE/ASME Transactions on Mechatronics[22,59,113,114]
Journal of Field Robotics[35,75,115,116]
Robotics and Autonomous Systems[117,118,119,120]
Canadian Aeronautics and Space Journal[121,122,123]
Control Engineering Practice[34,124,125]
IEEE Transactions on Industrial Electronics[44,126,127]
IEEE Transactions on Robotics[128,129,130]
Advanced Robotics[24,32]
Aircraft Engineering and Aerospace Technology[158,169]
Asian Journal of Control[151,153]
Expert Systems with Applications[74,143]
IEEE Transactions on Instrumentation and Measurement[23,57]
IFAC-PapersOnLine[146,172]
International Journal of Robotics Research[61,65]
International Journal of Robust and Nonlinear Control[45,142]
ISA Transactions[80,150]
Journal of Aerospace Information Systems[147,173]
Journal of Aerospace Engineering[159,164]
Optik[135,136]
Science China Information Sciences[42,144]
Science China Technological Sciences[41,63]
Advances in Mechanical Engineering[160]
Aeronautical Journal[137]
Aerospace Science and Technology[62]
Automation in Construction[163]
Aviation[154]
Engineering Letters[139]
Eurasip Journal on Advances in Signal Processing[54]
IEEE Aerospace and Electronic Systems Magazine[43]
IEEE Sensors Journal[25]
IEEE Transactions on Circuits and Systems for Video Technology[133]
IEEE Transactions on Control Systems Technology[82]
IEEE Transactions on Robotics and Automation[170]
IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews[161]
IEEE Transactions on Vehicular Technology[140]
IEICE Electronics Express[152]
IET Control Theory and Applications[81]
Industrial Robot[176]
International Journal of Aeronautical and Space Sciences[155]
International Journal of Automation and Computing[175]
International Journal of Control[177]
International Journal of Control, Automation and Systems[157]
International Journal of Distributed Sensor Networks[132]
International Journal of High Performance Systems Architecture[171]
International Journal of Imaging and Robotics[168]
International Journal of Optomechatronics[134]
International Journal of Robotics and Automation[149]
International Journal on Electrical Engineering and Informatics[166]
Journal of Infrastructure Systems[53]
Journal of Robotic Systems[141]
Journal of Robotics and Mechatronics[165]
Journal of Vibration and Control[167]
Machine Vision and Applications[174]
Mobile Information Systems[21]
Neurocomputing[162]
Optical Engineering[138]
Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering[156]
Robotica[60]
Sensor Review[148]
Transport and Telecommunication[131]
Trends in Bioinformatics[19]
Unmanned Systems[145]

References

  1. González-Jorge, H.; Martínez-Sánchez, J.; Bueno, M.; Arias, P. Unmanned Aerial Systems for Civil Applications: A Review. Drones 2017, 1, 2. [Google Scholar] [CrossRef]
  2. Aire. A Self-Flying Robotic Assistant for the Home by Aevena Aire. Available online: https://esist.tech/2017/09/21/aire-a-self-flying-robotic-assistant-for-the-home-by-aevena-aire/ (accessed on 18 March 2019).
  3. Fleye. Fleye—Your Personal Flying Robot. Available online: https://www.kickstarter.com/projects/gofleye/fleye-your-personal-flying-robot (accessed on 18 March 2019).
  4. CIMON. Assisting Astronauts with Airbus Innovation. Available online: https://www.airbus.com/newsroom/stories/Assisting-astronauts-with-Airbus-innovation.html (accessed on 18 March 2019).
  5. Bualat, M.G.; Smith, T.; Smith, E.E.; Fong, T.; Wheeler, D. Astrobee: A New Tool for ISS Operations. In Proceedings of the 15th International Conference on Space Operations. American Institute of Aeronautics and Astronautics, Marseille, France, 28 May–1 June 2018. [Google Scholar] [CrossRef]
  6. Belmonte, L.M.; Morales, R.; García, A.S.; Segura, E.; Novais, P.; Fernández-Caballero, A. Assisting Dependent People at Home through Autonomous Unmanned Aerial Vehicles. In Proceedings of the ISAmI 2019, Ávila, Spain, 26–28 June 2019. [Google Scholar]
  7. Belmonte, L.M.; Morales, R.; García, A.S.; Segura, E.; Novais, P.; Fernández-Caballero, A. Trajectory Planning of a Quadrotor to Monitor Dependent People. In Proceedings of the IWINAC 2019, Almería, Spain, 3–7 June 2019. [Google Scholar]
  8. Fernandez, A.; Insfran, E.; Abrahão, S. Usability evaluation methods for the web: A systematic mapping study. Inf. Softw. Technol. 2011, 53, 789–817. [Google Scholar] [CrossRef]
  9. Roberto, R.; Lima, J.P.; Teichrieb, V. Tracking for mobile devices: A systematic mapping study. Comput. Graph. 2016, 56, 20–30. [Google Scholar] [CrossRef]
  10. Sierra, J.M.; Vizcaíno, A.; Genero, M.; Piattini, M. A systematic mapping study about socio-technical congruence. Inf. Softw. Technol. 2018, 94, 111–129. [Google Scholar] [CrossRef]
  11. Moguel, E.; Conejero, J.M.; Sánchez-Figueroa, F.; Hernández, J.; Preciado, J.C.; Rodríguez-Echeverría, R. Towards the Use of Unmanned Aerial Systems for Providing Sustainable Services in Smart Cities. Sensors 2017, 18, 64. [Google Scholar] [CrossRef] [PubMed]
  12. Martinez-Gomez, J.; Fernández-Caballero, A.; Garcia-Varea, I.; Rodriguez, L.; Romero-Gonzalez, C. A Taxonomy of Vision Systems for Ground Mobile Robots. Int. J. Adv. Robot. Syst. 2014, 11, 111. [Google Scholar] [CrossRef] [Green Version]
  13. Kanellakis, C.; Nikolakopoulos, G. Survey on Computer Vision for UAVs: Current Developments and Trends. J. Intell. Robot. Syst. 2017, 87, 141–168. [Google Scholar] [CrossRef] [Green Version]
  14. Al-Kaff, A.; Martín, D.; García, F.; de la Escalera, A.; Armingol, J.M. Survey of computer vision algorithms and applications for unmanned aerial vehicles. Expert Syst. Appl. 2018, 92, 447–463. [Google Scholar] [CrossRef]
  15. Lu, Y.; Xue, Z.; Xia, G.S.; Zhang, L. A survey on vision-based UAV navigation. Geo-Spat. Inf. Sci. 2018, 21, 21–32. [Google Scholar] [CrossRef] [Green Version]
  16. RG, E. Template for a Mapping Study. Available online: https://community.dur.ac.uk/ebse/resources/templates/MappingStudyTemplate.pdf (accessed on 10 January 2018).
  17. Munguía, R.; Urzua, S.; Bolea, Y.; Grau, A. Vision-based SLAM system for unmanned aerial vehicles. Sensors 2016, 16, 372. [Google Scholar] [CrossRef]
  18. Zhao, S.; Lin, F.; Peng, K.; Dong, X.; Chen, B.M.; Lee, T.H. Vision-aided estimation of attitude, velocity, and inertial measurement bias for UAV stabilization. J. Intell. Robot. Syst. 2016, 81, 531–549. [Google Scholar] [CrossRef]
  19. Seng, L.K.; Ovinis, M.; Nagarajan; Seulin, R.; Morel, O. Vision-based state estimation of an unmanned aerial vehicle. Trends Bioinform. 2016, 10, 11–19. [Google Scholar] [CrossRef]
  20. Cocchioni, F.; Frontoni, E.; Ippoliti, G.; Longhi, S.; Mancini, A.; Zingaretti, P. Visual based landing for an unmanned quadrotor. J. Intell. Robot. Syst. 2016, 84, 511–528. [Google Scholar] [CrossRef]
  21. Jeong, H.J.; Choi, J.D.; Ha, Y.G. Vision based displacement detection for stabilized UAV control on cloud server. Mob. Inf. Syst. 2016, 2016, 8937176. [Google Scholar] [CrossRef]
  22. Xie, H.; Low, K.H.; He, Z. Adaptive visual servoing of unmanned aerial vehicles in GPS-denied environments. IEEE/ASME Trans. Mechatron. 2017, 22, 2554–2563. [Google Scholar] [CrossRef]
  23. Yin, Y.; Wang, X.; Xu, D.; Liu, F.; Wang, Y.; Wu, W. Robust visual detection-learning-tracking framework for autonomous aerial refueling of UAVs. IEEE Trans. Instrum. Meas. 2016, 65, 510–521. [Google Scholar] [CrossRef]
  24. Harik, E.H.C.; Guérin, F.; Guinand, F.; Brethé, J.F.; Pelvillain, H.; Parédé, J.Y. Fuzzy logic controller for predictive vision-based target tracking with an unmanned aerial vehicle. Adv. Robot. 2017, 31, 368–381. [Google Scholar] [CrossRef] [Green Version]
  25. Liu, Y.; Wang, Q.; Zhuang, Y.; Hu, H. A novel trail detection and scene understanding framework for a quadrotor UAV with monocular vision. IEEE Sens. J. 2017, 17, 6778–6787. [Google Scholar] [CrossRef]
  26. Wan, Y.; Tang, J.; Lao, S. Research on the collision avoidance algorithm for fixed-wing UAVs based on maneuver coordination and planned trajectories prediction. Appl. Sci. 2019, 9, 798. [Google Scholar] [CrossRef]
  27. Fasano, G.; Accardo, D.; Tirri, A.E.; Moccia, A.; Lellis, E.D. Sky region obstacle detection and tracking for vision-based UAS sense and avoid. J. Intell. Robot. Syst. 2016, 84, 121–144. [Google Scholar] [CrossRef]
  28. Park, J.; Kim, Y. Collision avoidance for quadrotor using stereo vision depth maps. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 3226–3241. [Google Scholar] [CrossRef]
  29. Al-Kaff, A.; García, F.; Martín, D.; De La Escalera, A.; Armingol, J. Obstacle detection and avoidance system based on monocular camera and size expansion algorithm for UAVs. Sensors 2017, 17, 1061. [Google Scholar] [CrossRef] [PubMed]
  30. Lee, D.; Kim, Y.; Bang, H. Vision-based terrain referenced navigation for unmanned aerial vehicles using homography relationship. J. Intell. Robot. Syst. 2013, 69, 489–497. [Google Scholar] [CrossRef]
  31. Huang, K.L.; Chiu, C.C.; Chiu, S.Y.; Teng, Y.J.; Hao, S.S. Monocular vision system for fixed altitude flight of unmanned aerial vehicles. Sensors 2015, 15, 16848–16865. [Google Scholar] [CrossRef] [PubMed]
  32. Ozawa, R.; Chaumette, F. Dynamic visual servoing with image moments for an unmanned aerial vehicle using a virtual spring approach. Adv. Robot. 2013, 27, 683–696. [Google Scholar] [CrossRef] [Green Version]
  33. Dobrokhodov, V.N.; Kaminer, I.I.; Jones, K.D.; Ghabcheloo, R. Vision-based tracking and motion estimation for moving targets using unmanned air vehicles. J. Guid. Control Dyn. 2008, 31, 907–917. [Google Scholar] [CrossRef]
  34. Courbon, J.; Mezouar, Y.; Guénard, N.; Martinet, P. Vision-based navigation of unmanned aerial vehicles. Control Eng. Pract. 2010, 18, 789–799. [Google Scholar] [CrossRef]
  35. Schmid, K.; Lutz, P.; Tomić, T.; Mair, E.; Hirschmüller, H. Autonomous vision-based micro air vehicle for indoor and outdoor navigation. J. Field Robot. 2014, 31, 537–570. [Google Scholar] [CrossRef]
  36. García Carrillo, L.R.; Dzul López, A.E.; Lozano, R.; Pégard, C. Quad Rotorcraft Control; Advances in Industrial Control; Springer: London, UK, 2013. [Google Scholar] [CrossRef]
  37. Lozano, R. (Ed.) Unmanned Aerial Vehicles. Embedded Control; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2013. [Google Scholar] [CrossRef]
  38. Ma, Z.; Hu, T.; Shen, L. Stereo vision guiding for the autonomous landing of fixed-wing UAVs: A saliency-inspired approach. Int. J. Adv. Robot. Syst. 2016, 13, 43. [Google Scholar] [CrossRef]
  39. Choi, H.; Kim, Y.; Hwang, I. Reactive collision avoidance of unmanned aerial vehicles using a single vision sensor. J. Guid. Control Dyn. 2013, 36, 1234–1240. [Google Scholar] [CrossRef]
  40. Huh, S.; Shim, D.H. A vision-based automatic landing method for fixed-wing UAVs. J. Intell. Robot. Syst. 2010, 57, 217–231. [Google Scholar] [CrossRef]
  41. Fan, Y.; Ding, M.; Cao, Y. Vision algorithms for fixed-wing unmanned aerial vehicle landing system. Sci. China Technol. Sci. 2017, 60, 434–443. [Google Scholar] [CrossRef]
  42. Duan, H.; Li, H.; Luo, Q.; Zhang, C.; Li, C.; Li, P.; Deng, Y. A binocular vision-based UAVs autonomous aerial refueling platform. Sci. China Inf. Sci. 2016, 59, 053201. [Google Scholar] [CrossRef]
  43. Lyu, Y.; Pan, Q.; Zhao, C.; Zhang, Y.; Hu, J. Feature article: Vision-based UAV collision avoidance with 2D dynamic safety envelope. IEEE Aerosp. Electron. Syst. Mag. 2016, 31, 16–26. [Google Scholar] [CrossRef]
  44. Fu, Q.; Quan, Q.; Cai, K.Y. Robust pose estimation for multirotor UAVs using off-board monocular vision. IEEE Trans. Ind. Electron. 2017, 64, 7942–7951. [Google Scholar] [CrossRef]
  45. Le Bras, F.; Hamel, T.; Mahony, R.; Treil, A. Output feedback observation and control for visual servoing of VTOL UAVs. Int. J. Robust Nonlinear Control 2011, 21, 1008–1030. [Google Scholar] [CrossRef]
  46. Fernández-Caballero, A.; Belmonte, L.M.; Morales, R.; Somolinos, J.A. Generalized Proportional Integral Control for an Unmanned Quadrotor System. Int. J. Adv. Robot. Syst. 2015, 12, 85. [Google Scholar] [CrossRef] [Green Version]
  47. Belmonte, L.M.; Morales, R.; Fernández-Caballero, A.; Somolinos, J.A. Robust Linear Longitudinal Feedback Control of a Flapping Wing Micro Air Vehicle. In Artificial Computation in Biology and Medicine; Ferrández Vicente, J.M., Álvarez-Sánchez, J.R., de la Paz López, F., Toledo-Moreo, F.J., Adeli, H., Eds.; Springer: Cham, Switzerland, 2015; pp. 449–458. [Google Scholar] [Green Version]
  48. Jung, H.K.; Choi, J.S.; Wang, C.; Park, G.J. Analysis and Fabrication of Unconventional Flapping Wing Air Vehicles. Int. J. Micro Air Veh. 2015, 7, 71–88. [Google Scholar] [CrossRef] [Green Version]
  49. Rongfa, M.N.; Pantuphag, T.; Srigrarom, S. Analysis of Kinematics of Flapping Wing UAV Using OptiTrack Systems. Aerospace 2016, 3, 23. [Google Scholar] [CrossRef]
  50. Grossman, D. Airships, Dirigibles, Zeppelins & Blimps: What’s the Difference? Available online: https://www.airships.net/dirigible/ (accessed on 18 March 2019).
  51. Caballero, F.; Merino, L.; Ferruz, J.; Ollero, A. Unmanned aerial vehicle localization based on monocular vision and online mosaicking. J. Intell. Robot. Syst. 2009, 55, 323–343. [Google Scholar] [CrossRef]
  52. Hygounenc, E.; Jung, I.K.; Souères, P.; Lacroix, S. The Autonomous Blimp Project of LAAS-CNRS: Achievements in Flight Control and Terrain Mapping. Int. J. Robot. Res. 2004, 23, 473–511. [Google Scholar] [CrossRef]
  53. Rathinam, S.; Kim, Z.W.; Sengupta, R. Vision-based monitoring of locally linear structures using an unmanned aerial vehicle. J. Infrastruct. Syst. 2008, 14, 52–63. [Google Scholar] [CrossRef]
  54. Conte, G.; Doherty, P. Vision-based unmanned aerial vehicle navigation using geo-referenced information. EURASIP J. Adv. Signal Process. 2009, 2009, 387308. [Google Scholar] [CrossRef]
  55. Holness, A.E.; Bruck, H.A.; Gupta, S.K. Characterizing and modeling the enhancement of lift and payload capacity resulting from thrust augmentation in a propeller-assisted flapping wing air vehicle. Int. J. Micro Air Veh. 2018, 50–60. [Google Scholar] [CrossRef]
  56. Belmonte, L.M.; Castillo, J.C.; Fernández-Caballero, A.; Almansa-Valverde, S.; Morales, R. Flying Depth Camera for Indoor Mapping and Localization. In Ambient Intelligence—Software and Applications; Mohamed, A., Novais, P., Pereira, A., Villarrubia, G., Fernández-Caballero, A., Eds.; Springer: Cham, Switzerland, 2015; pp. 243–251. [Google Scholar]
  57. Duan, H.; Zhang, Q. Visual measurement in simulation environment for vision-based UAV autonomous aerial refueling. IEEE Trans. Instrum. Meas. 2015, 64, 2468–2480. [Google Scholar] [CrossRef]
  58. Lin, S.; Garratt, M.A.; Lambert, A.J. Monocular vision-based real-time target recognition and tracking for autonomously landing an UAV in a cluttered shipboard environment. Auton. Robot. 2017, 41, 881–901. [Google Scholar] [CrossRef]
  59. Zheng, D.; Wang, H.; Wang, J.; Chen, S.; Chen, W.; Liang, X. Image-based visual servoing of a quadrotor using virtual camera approach. IEEE/ASME Trans. Mechatron. 2017, 22, 972–982. [Google Scholar] [CrossRef]
  60. Gomez-Balderas, J.E.; Salazar, S.; Guerrero, J.A.; Lozano, R. Vision-based autonomous hovering for a miniature quad-rotor. Robotica 2014, 32, 43–61. [Google Scholar] [CrossRef]
  61. Altuğ, E.; Ostrowski, J.P.; Taylor, C.J. Control of a quadrotor helicopter using dual camera visual feedback. Int. J. Robot. Res. 2005, 24, 329–341. [Google Scholar] [CrossRef]
  62. Yu, C.; Cai, J.; Chen, Q. Multi-resolution visual fiducial and assistant navigation system for unmanned aerial vehicle landing. Aerosp. Sci. Technol. 2017, 67, 249–256. [Google Scholar] [CrossRef]
  63. Li, H.; Duan, H. Verification of monocular and binocular pose estimation algorithms in vision-based UAVs autonomous aerial refueling system. Sci. China Technol. Sci. 2016, 59, 1730–1738. [Google Scholar] [CrossRef]
  64. Tang, D.; Hu, T.; Shen, L.; Zhang, D.; Kong, W.; Low, K.H. Ground stereo vision-based navigation for autonomous take-off and landing of UAVs: A Chan-Vese model approach. Int. J. Adv. Robot. Syst. 2016, 13, 67. [Google Scholar] [CrossRef]
  65. Warren, M.; Corke, P.; Upcroft, B. Long-range stereo visual odometry for extended altitude flight of unmanned aerial vehicles. Int. J. Robot. Res. 2016, 35, 381–403. [Google Scholar] [CrossRef]
  66. Kong, W.; Hu, T.; Zhang, D.; Shen, L.; Zhang, J. Localization framework for real-time UAV autonomous landing: An on-ground deployed visual approach. Sensors 2017, 17, 1437. [Google Scholar] [CrossRef]
  67. Hein, D.; Kraft, T.; Brauchle, J.; Berger, R. Integrated UAV-Based Real-Time Mapping for Security Applications. ISPRS Int. J. Geo-Inf. 2019, 8, 219. [Google Scholar] [CrossRef]
  68. Harmat, A.; Trentini, M.; Sharf, I. Multi-camera tracking and mapping for unmanned aerial vehicles in unstructured environments. J. Intell. Robot. Syst. 2015, 78, 291–317. [Google Scholar] [CrossRef]
  69. García Carrillo, L.R.; Dzul López, A.E.; Lozano, R.; Pégard, C. Combining stereo vision and inertial navigation system for a quad-rotor UAV. J. Intell. Robot. Syst. 2012, 65, 373–387. [Google Scholar] [CrossRef]
  70. Schauwecker, K.; Zell, A. On-Board Dual-Stereo-Vision for the Navigation of an Autonomous MAV. J. Intell. Robot. Syst. 2014, 74, 1–16. [Google Scholar] [CrossRef]
  71. Fu, C.; Olivares-Mendez, M.A.; Suarez-Fernandez, R.; Campoy, P. Monocular visual-inertial SLAM-based collision avoidance strategy for fail-safe UAV using fuzzy logic controllers. J. Intell. Robot. Syst. 2014, 73, 513–533. [Google Scholar] [CrossRef]
  72. Hinas, A.; Roberts, J.; Gonzalez, F. Vision-based target finding and inspection of a ground target using a multirotor UAV system. Sensors 2017, 17, 2929. [Google Scholar] [CrossRef] [PubMed]
  73. Cesetti, A.; Frontoni, E.; Mancini, A.; Ascani, A.; Zingaretti, P.; Longhi, S. A visual global positioning system for unmanned aerial vehicles used in photogrammetric applications. J. Intell. Robot. Syst. 2011, 61, 157–168. [Google Scholar] [CrossRef]
  74. García-Pulido, J.; Pajares, G.; Dormido, S.; de la Cruz, J. Recognition of a landing platform for unmanned aerial vehicles by using computer vision-based techniques. Expert Syst. Appl. 2017, 76, 152–165. [Google Scholar] [CrossRef]
  75. Molloy, T.L.; Ford, J.J.; Mejias, L. Detection of aircraft below the horizon for vision-based detect and avoid in unmanned aircraft systems. J. Field Robot. 2017, 34, 1378–1391. [Google Scholar] [CrossRef] [Green Version]
  76. Belmonte, L.M.; Morales, R.; Fernández-Caballero, A.; Somolinos, J.A. A Tandem Active Disturbance Rejection Control for a Laboratory Helicopter With Variable-Speed Rotors. IEEE Trans. Ind. Electron. 2016, 63, 6395–6406. [Google Scholar] [CrossRef]
  77. Belmonte, L.M.; Morales, R.; Fernández-Caballero, A.; Somolinos, J. Robust Decentralized Nonlinear Control for a Twin Rotor MIMO System. Sensors 2016, 16, 1160. [Google Scholar] [CrossRef] [PubMed]
  78. Belmonte, L.M.; Morales, R.; Fernández-Caballero, A.; Somolinos, J.A. Nonlinear Cascade-Based Control for a Twin Rotor MIMO System. In Nonlinear Systems—Design, Analysis, Estimation and Control; InTech: London, UK, 2016. [Google Scholar] [CrossRef] [Green Version]
  79. Siti, I.; Mjahed, M.; Ayad, H.; El Kari, A. New trajectory tracking approach for a quadcopter using genetic algorithm and reference model methods. Appl. Sci. 2019, 9, 1780. [Google Scholar] [CrossRef]
  80. Amirkhani, A.; Shirzadeh, M.; Papageorgiou, E.I.; Mosavi, M.R. Visual-based quadrotor control by means of fuzzy cognitive maps. ISA Trans. 2016, 60, 128–142. [Google Scholar] [CrossRef] [PubMed]
  81. Ghommam, J.; Fethalla, N.; Saad, M. Quadrotor circumnavigation of an unknown moving target using camera vision-based measurements. IET Control Theory Appl. 2016, 10, 1874–1887. [Google Scholar] [CrossRef]
  82. Park, J.; Kim, Y. Landing site searching and selection algorithm development using vision system and its application to quadrotor. IEEE Trans. Control Syst. Technol. 2015, 23, 488–503. [Google Scholar] [CrossRef]
  83. Andert, F.; Adolf, F.M.; Goormann, L.; Dittrich, J.S. Autonomous vision-based helicopter flights through obstacle gates. J. Intell. Robot. Syst. 2010, 57, 259–280. [Google Scholar] [CrossRef]
  84. Artieda, J.; Sebastian, J.M.; Campoy, P.; Correa, J.F.; Mondragón, I.F.; Martínez, C.; Olivares, M. Visual 3D SLAM from UAVs. J. Intell. Robot. Syst. 2009, 55, 299–321. [Google Scholar] [CrossRef]
  85. Benini, A.; Mancini, A.; Longhi, S. An IMU/UWB/vision-based extended Kalman filter for mini-UAV localization in indoor environment using 802.15.4a wireless sensor network. J. Intell. Robot. Syst. 2013, 70, 461–476. [Google Scholar] [CrossRef]
  86. Caballero, F.; Merino, L.; Ferruz, J.; Ollero, A. Vision-based odometry and SLAM for medium and high altitude flying UAVs. J. Intell. Robot. Syst. 2009, 54, 137–161. [Google Scholar] [CrossRef]
  87. Campoy, P.; Correa, J.F.; Mondragón, I.; Martínez, C.; Olivares, M.; Mejías, L.; Artieda, J. Computer vision onboard UAVs for civilian tasks. J. Intell. Robot. Syst. 2009, 54, 105–135. [Google Scholar] [CrossRef]
  88. Ceren, Z.; Altuğ, E. Image based and hybrid visual servo control of an unmanned aerial vehicle. J. Intell. Robot. Syst. 2012, 65, 325–344. [Google Scholar] [CrossRef]
  89. Cesetti, A.; Frontoni, E.; Mancini, A.; Zingaretti, P.; Longhi, S. A vision-based guidance system for UAV navigation and safe landing using natural landmarks. J. Intell. Robot. Syst. 2010, 57, 233–257. [Google Scholar] [CrossRef]
  90. Gui, Y.; Guo, P.; Zhang, H.; Lei, Z.; Zhou, X.; Du, J.; Yu, Q. Airborne vision-based navigation method for UAV accuracy landing using infrared lamps. J. Intell. Robot. Syst. 2013, 72, 197–218. [Google Scholar] [CrossRef]
  91. Magree, D.; Mooney, J.G.; Johnson, E.N. Monocular visual mapping for obstacle avoidance on UAVs. J. Intell. Robot. Syst. 2014, 74, 17–26. [Google Scholar] [CrossRef]
  92. Martínez, C.; Mondragón, I.F.; Campoy, P.; Sánchez-López, J.L.; Olivares-Méndez, M.A. A hierarchical tracking strategy for vision-based applications on-board UAVs. J. Intell. Robot. Syst. 2013, 72, 517–539. [Google Scholar] [CrossRef]
  93. Natraj, A.; Ly, D.S.; Eynard, D.; Demonceaux, C.; Vasseur, P. Omnidirectional vision for UAV: Applications to attitude, motion and altitude estimation for day and night conditions. J. Intell. Robot. Syst. 2013, 69, 459–473. [Google Scholar] [CrossRef]
  94. Ramírez, A.; Espinoza, E.S.; García Carrillo, L.R.; Mondié, S.; García, A.; Lozano, R. Stability analysis of a vision-based UAV controller. J. Intell. Robot. Syst. 2014, 74, 69–84. [Google Scholar] [CrossRef]
  95. Tarhan, M.; Altuğ, E. EKF based attitude estimation and stabilization of a quadrotor UAV using vanishing points in catadioptric images. J. Intell. Robot. Syst. 2011, 62, 587–607. [Google Scholar] [CrossRef]
  96. Campa, G.; Napolitano, M.; Fravolini, M. Simulation environment for machine vision based aerial refueling for UAVs. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 138–151. [Google Scholar] [CrossRef]
  97. Carrillo, L.R.G.; Dzul, A.; Lozano, R. Hovering quad-rotor control: A comparison of nonlinear controllers using visual feedback. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 3159–3170. [Google Scholar] [CrossRef]
  98. Huh, S.; Cho, S.; Jung, Y.; Shim, D.H. Vision-based sense-and-avoid framework for unmanned aerial vehicles. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 3427–3439. [Google Scholar] [CrossRef]
  99. Xie, H.; Fink, G.; Lynch, A.F.; Jagersand, M. Adaptive visual servoing of UAVs using a virtual camera. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2529–2538. [Google Scholar] [CrossRef]
  100. Zhang, J.; Liu, W.; Wu, Y. Novel technique for vision-based UAV navigation. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 2731–2741. [Google Scholar] [CrossRef]
  101. Zhang, J.; Wu, Y.; Liu, W.; Chen, X. Novel approach to position and orientation estimation in vision-based UAV navigation. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 687–700. [Google Scholar] [CrossRef]
  102. Nguyen, P.H.; Kim, K.W.; Lee, Y.W.; Park, K.R. Remote marker-based tracking for UAV landing using visible-light camera sensor. Sensors 2017, 17, 1987. [Google Scholar] [CrossRef]
  103. Yang, T.; Li, G.; Li, J.; Zhang, Y.; Zhang, X.; Zhang, Z.; Li, Z. A ground-based near infrared camera array system for UAV auto-landing in GPS-denied environment. Sensors 2016, 16, 1393. [Google Scholar] [CrossRef]
  104. Patruno, C.; Nitti, M.; Stella, E.; D’Orazio, T. Helipad detection for accurate UAV pose estimation by means of a visual sensor. Int. J. Adv. Robot. Syst. 2017, 14. [Google Scholar] [CrossRef] [Green Version]
  105. Wang, T.; Wang, C.; Liang, J.; Chen, Y.; Zhang, Y. Vision-aided inertial navigation for small unmanned aerial vehicles in GPS-denied environments. Int. J. Adv. Robot. Syst. 2013, 10, 276. [Google Scholar] [CrossRef]
  106. Yu, Z.; Nonami, K.; Shin, J.; Celestino, D. 3D vision based landing control of a small scale autonomous helicopter. Int. J. Adv. Robot. Syst. 2007, 4, 7. [Google Scholar] [CrossRef]
  107. Alkowatly, M.T.; Becerra, V.M.; Holderbaum, W. Bioinspired autonomous visual vertical control of a quadrotor unmanned aerial vehicle. J. Guid. Control Dyn. 2015, 38, 249–262. [Google Scholar] [CrossRef]
  108. Hosen, J.; Helgesen, H.H.; Fusini, L.; Fossen, T.I.; Johansen, T.A. Vision-aided nonlinear observer for fixed-wing unmanned aerial vehicle navigation. J. Guid. Control Dyn. 2016, 39, 1777–1789. [Google Scholar] [CrossRef]
  109. Lee, D.; Lim, H.; Kim, H.J.; Kim, Y.; Seong, K.J. Adaptive image-based visual servoing for an underactuated quadrotor system. J. Guid. Control Dyn. 2012, 35, 1335–1353. [Google Scholar] [CrossRef]
  110. Eynard, D.; Vasseur, P.; Demonceaux, C.; Frémont, V. Real time UAV altitude, attitude and motion estimation from hybrid stereovision. Auton. Robot. 2012, 33, 157–172. [Google Scholar] [CrossRef] [Green Version]
  111. Meier, L.; Tanskanen, P.; Heng, L.; Lee, G.H.; Fraundorfer, F.; Pollefeys, M. PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision. Auton. Robot. 2012, 33, 21–39. [Google Scholar] [CrossRef]
  112. Mondragón, I.F.; Olivares-Méndez, M.A.; Campoy, P.; Martínez, C.; Mejias, L. Unmanned aerial vehicles UAVs attitude, height, motion estimation and control using visual systems. Auton. Robot. 2010, 29, 17–34. [Google Scholar] [CrossRef] [Green Version]
  113. Kim, H.J.; Kim, M.; Lim, H.; Park, C.; Yoon, S.; Lee, D.; Choi, H.; Oh, G.; Park, J.; Kim, Y. Fully autonomous vision-based net-recovery landing system for a fixed-wing UAV. IEEE/ASME Trans. Mechatron. 2013, 18, 1320–1333. [Google Scholar] [CrossRef]
  114. Xie, H.; Lynch, A.F. Input saturated visual servoing for unmanned aerial vehicles. IEEE/ASME Trans. Mechatron. 2017, 22, 952–960. [Google Scholar] [CrossRef]
  115. Mejías, L.; Saripalli, S.; Campoy, P.; Sukhatme, G.S. Visual servoing of an autonomous helicopter in urban areas using feature tracking. J. Field Robot. 2006, 23, 185–199. [Google Scholar] [CrossRef] [Green Version]
  116. Richardson, T.S.; Jones, C.G.; Likhoded, A.; Sparks, E.; Jordan, A.; Cowling, I.; Willcox, S. Automated vision-based recovery of a rotary wing unmanned aerial vehicle onto a moving platform. J. Field Robot. 2013, 30, 667–684. [Google Scholar] [CrossRef]
  117. Amidi, O.; Kanade, T.; Fujita, K. A visual odometer for autonomous helicopter flight. Robot. Auton. Syst. 1999, 28, 185–193. [Google Scholar] [CrossRef]
  118. Garcia-Pardo, P.J.; Sukhatme, G.S.; Montgomery, J.F. Towards vision-based safe landing for an autonomous helicopter. Robot. Auton. Syst. 2002, 38, 19–29. [Google Scholar] [CrossRef]
  119. Kendoul, F.; Fantoni, I.; Nonami, K. Optic flow-based vision system for autonomous 3D localization and control of small aerial vehicles. Robot. Auton. Syst. 2009, 57, 591–602. [Google Scholar] [CrossRef] [Green Version]
  120. Mondragón, I.F.; Campoy, P.; Martinez, C.; Olivares, M. Omnidirectional vision applied to Unmanned Aerial Vehicles (UAVs) attitude and heading estimation. Robot. Auton. Syst. 2010, 58, 809–819. [Google Scholar] [CrossRef] [Green Version]
  121. Alizadeh, M.; Mehrandezh, M.; Paranjape, R. Vision-based adaptive prediction, planning, and execution of permissible and smooth trajectories for a 2DOF model helicopter. Can. Aeronaut. Space J. 2013, 59, 81–92. [Google Scholar] [CrossRef]
  122. Ebrahimi, A.; Janabi-Sharifi, F.; Ghanbari, A. UavisBug: Vision-based 3D motion planning and obstacle avoidance for a mini-UAV in an unknown indoor environment. Can. Aeronaut. Space J. 2014, 60, 9–21. [Google Scholar] [CrossRef]
  123. Kummer, N.; Beresowskaja, A.; Firouzi, H.; Najjaran, H. Autonomous UAV controlled collision landing via eye-in-hand visual servoing. Can. Aeronaut. Space J. 2016, 61, 1–22. [Google Scholar] [CrossRef]
  124. Choi, H.; Kim, Y. UAV guidance using a monocular-vision sensor for aerial target tracking. Control Eng. Pract. 2014, 22, 10–19. [Google Scholar] [CrossRef]
  125. Huh, S.; Shim, D.H. A vision-based landing system for small unmanned aerial vehicles using an airbag. Control Eng. Pract. 2010, 18, 812–823. [Google Scholar] [CrossRef]
  126. Zhang, X.; Xian, B.; Zhao, B.; Zhang, Y. Autonomous flight control of a nano quadrotor helicopter in a GPS-denied environment using on-board vision. IEEE Trans. Ind. Electron. 2015, 62, 6392–6403. [Google Scholar] [CrossRef]
  127. Zhao, S.; Hu, Z.; Yin, M.; Ang, K.Z.Y.; Liu, P.; Wang, F.; Dong, X.; Lin, F.; Chen, B.M.; Lee, T.H. A robust real-time vision system for autonomous cargo transfer by an unmanned helicopter. IEEE Trans. Ind. Electron. 2015, 62, 1210–1219. [Google Scholar] [CrossRef]
  128. Bourquardez, O.; Mahony, R.; Guenard, N.; Chaumette, F.; Hamel, T.; Eck, L. Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle. IEEE Trans. Robot. 2009, 25, 743–749. [Google Scholar] [CrossRef]
  129. Guenard, N.; Hamel, T.; Mahony, R. A practical visual servo control for an unmanned aerial vehicle. IEEE Trans. Robot. 2008, 24, 331–340. [Google Scholar] [CrossRef]
  130. Mebarki, R.; Lippiello, V.; Siciliano, B. Nonlinear visual control of unmanned aerial vehicles in GPS-denied environments. IEEE Trans. Robot. 2015, 31, 1004–1017. [Google Scholar] [CrossRef]
  131. Aksenov, A.Y.; Kuleshov, S.V.; Zaytseva, A.A. An application of computer vision systems to solve the problem of unmanned aerial vehicle control. Transp. Telecommun. J. 2014, 15. [Google Scholar] [CrossRef]
  132. Algabri, M.; Mathkour, H.; Mekhtiche, M.A.; Bencherif, M.A.; Alsulaiman, M.; Arafah, M.A.; Ghaleb, H. Wireless vision-based fuzzy controllers for moving object tracking using a quadcopter. Int. J. Distrib. Sens. Netw. 2017, 13. [Google Scholar] [CrossRef]
  133. Angelopoulou, M.E.; Bouganis, C.S. Vision-based egomotion estimation on FPGA for unmanned aerial vehicle navigation. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 1070–1083. [Google Scholar] [CrossRef]
  134. Azinheira, J.R.; Rives, P. Image-based visual servoing for vanishing features and ground lines tracking: Application to a UAV automatic landing. Int. J. Optomechatron. 2008, 2, 275–295. [Google Scholar] [CrossRef]
  135. Bi, Y.; Duan, H. Implementation of autonomous visual tracking and landing for a low-cost quadrotor. Opt.-Int. J. Light Electron Opt. 2013, 124, 3296–3300. [Google Scholar] [CrossRef]
  136. Bin, X.; Sen, Y.; Xu, Z. Control of a quadrotor helicopter using the COMPASS (BeiDou) system and on-board vision system. Opt.-Int. J. Light Electron Opt. 2016, 127, 6829–6838. [Google Scholar] [CrossRef]
  137. Campa, G.; Napolitano, M.R.; Perhinschi, M.; Fravolini, M.L.; Pollini, L.; Mammarella, M. Addressing pose estimation issues for machine vision based UAV autonomous serial refuelling. Aeronaut. J. 2007, 111, 389–396. [Google Scholar] [CrossRef]
  138. Chen, S.; Duan, H.; Deng, Y.; Li, C. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor. Opt. Eng. 2017, 56, 1. [Google Scholar] [CrossRef]
  139. Chen, Y.; Huang, R.; Zhu, Y. A cumulative error suppression method for UAV visual positioning system based on historical visiting information. Eng. Lett. 2017, 25, 424–430. [Google Scholar]
  140. Chiu, C.C.; Lo, C.T. Vision-only automatic flight control for small UAVs. IEEE Trans. Veh. Technol. 2011, 60, 2425–2437. [Google Scholar] [CrossRef]
  141. Corke, P. An inertial and visual sensing system for a small autonomous helicopter. J. Robot. Syst. 2004, 21, 43–51. [Google Scholar] [CrossRef]
  142. de Plinval, H.; Morin, P.; Mouyon, P.; Hamel, T. Visual servoing for underactuated VTOL UAVs: A linear, homography-based framework. Int. J. Robust Nonlinear Control 2014, 24, 2285–2308. [Google Scholar] [CrossRef]
  143. Eresen, A.; İmamoğlu, N.; Önder Efe, M. Autonomous quadrotor flight with vision-based obstacle avoidance in virtual environment. Expert Syst. Appl. 2012, 39, 894–905. [Google Scholar] [CrossRef]
  144. Fan, C.; Liu, Y.; Song, B.; Zhou, D. Dynamic visual servoing of a small scale autonomous helicopter in uncalibrated environments. Sci. China Inf. Sci. 2011, 54, 1855–1867. [Google Scholar] [CrossRef]
  145. Fink, G.; Xie, H.; Lynch, A.F.; Jagersand, M. Dynamic visual servoing for a quadrotor using a virtual camera. Unmanned Syst. 2017, 5, 1–17. [Google Scholar] [CrossRef]
  146. Fink, G.; Franke, M.; Lynch, A.F.; Röbenack, K.; Godbolt, B. Visual inertial SLAM: Application to unmanned aerial vehicles. IFAC-PapersOnLine 2017, 50, 1965–1970. [Google Scholar] [CrossRef]
  147. Fravolini, M.L.; Campa, G.; Napolitano, M.R. Evaluation of machine vision algorithms for autonomous aerial refueling for unmanned aerial vehicles. J. Aerosp. Comput. Inf. Commun. 2007, 4, 968–985. [Google Scholar] [CrossRef]
  148. Ivancsits, C.; Ricky Lee, M. Visual navigation system for small unmanned aerial vehicles. Sens. Rev. 2013, 33, 267–291. [Google Scholar] [CrossRef]
  149. Jabbari Asl, H.; Yazdani, M.; Yoon, J. Vision-based tracking control of quadrator using velocity of image features. Int. J. Robot. Autom. 2016, 31. [Google Scholar] [CrossRef]
  150. Jabbari Asl, H.; Yoon, J. Adaptive vision-based control of an unmanned aerial vehicle without linear velocity measurements. ISA Trans. 2016, 65, 296–306. [Google Scholar] [CrossRef]
  151. Jabbari Asl, H.; Yoon, J. Bounded-input control of the quadrotor unmanned aerial vehicle: A vision-based approach. Asian J. Control 2017, 19, 840–855. [Google Scholar] [CrossRef]
  152. Jan, I.U.; Khan, M.U.; Iqbal, N. Visual landing of helicopter by divide and conquer rule. IEICE Electron. Express 2011, 8, 1542–1548. [Google Scholar] [CrossRef] [Green Version]
  153. Jurado, F.; Palacios, G.; Flores, F.; Becerra, H.M. Vision-based trajectory tracking system for an emulated quadrotor UAV. Asian J. Control 2014, 16, 729–741. [Google Scholar] [CrossRef]
  154. Kemsaram, N.; Thatiparti, V.R.K.; Guntupalli, D.R.; Kuvvarapu, A. Design and development of an on-board autonomous visual tracking system for unmanned aerial vehicles. Aviation 2017, 21, 83–91. [Google Scholar] [CrossRef] [Green Version]
  155. Kim, Y.; Jung, W.; Bang, H. Visual target tracking and relative navigation for unmanned aerial hehicles in a GPS-denied environment. Int. J. Aeronaut. Space Sci. 2014, 15, 258–266. [Google Scholar] [CrossRef]
  156. Lee, D.; Kim, Y.; Bang, H. Vision-aided terrain referenced navigation for unmanned aerial vehicles using ground features. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2014, 228, 2399–2413. [Google Scholar] [CrossRef]
  157. Lee, D.J.; Kaminer, I.; Dobrokhodov, V.; Jones, K. Autonomous feature following for visual surveillance using a small unmanned aerial vehicle with gimbaled camera system. Int. J. Control Autom. Syst. 2010, 8, 957–966. [Google Scholar] [CrossRef] [Green Version]
  158. Lee, J.; Lee, K.; Park, S.; Im, S.; Park, J. Obstacle avoidance for small UAVs using monocular vision. Aircr. Eng. Aerosp. Technol. 2011, 83, 397–406. [Google Scholar] [CrossRef] [Green Version]
  159. Lee, J.O.; Kang, T.; Lee, K.H.; Im, S.K.; Park, J. Vision-based indoor localization for unmanned aerial vehicles. J. Aerosp. Eng. 2011, 24, 373–377. [Google Scholar] [CrossRef]
  160. Liu, C.; Prior, S.D.; Teacy, W.L.; Warner, M. Computationally efficient visual–inertial sensor fusion for Global Positioning System–denied navigation on a small quadrotor. Adv. Mech. Eng. 2016, 8. [Google Scholar] [CrossRef]
  161. Mammarella, M.; Campa, G.; Napolitano, M.R.; Fravolini, M.L.; Gu, Y.; Perhinschi, M.G. Machine vision/GPS integration using EKF for the UAV aerial refueling problem. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2008, 38, 791–801. [Google Scholar] [CrossRef]
  162. Maravall, D.; de Lope, J.; Pablo Fuentes, J. Vision-based anticipatory controller for the autonomous navigation of an UAV using artificial neural networks. Neurocomputing 2015, 151, 101–107. [Google Scholar] [CrossRef]
  163. Metni, N.; Hamel, T. A UAV for bridge inspection: Visual servoing control law with orientation limits. Autom. Constr. 2007, 17, 3–10. [Google Scholar] [CrossRef]
  164. Park, J.; Im, S.; Lee, K.H.; Lee, J.O. Vision-based SLAM system for small UAVs in GPS-denied environments. J. Aerosp. Eng. 2012, 25, 519–529. [Google Scholar] [CrossRef]
  165. Pebrianti, D.; Wang, W.; Iwakura, D.; Song, Y.; Nonami, K. Sliding mode controller for stereo vision based autonomous flight of quad-rotor MAV. J. Robot. Mechatron. 2011, 23, 137–148. [Google Scholar] [CrossRef]
  166. Rahino Triputra, F.; Riyanto Trilaksono, B.; Adiono, T.; Adhy Sasongko, R. Visual servoing of fixed-wing unmanned aerial vehicle using command filtered backstepping. Int. J. Electr. Eng. Inform. 2015, 7, 584–604. [Google Scholar] [CrossRef]
  167. Rawashdeh, N.A.; Rawashdeh, O.A.; Sababha, B.H. Vision-based sensing of UAV attitude and altitude from downward in-flight images. J. Vib. Control 2017, 23, 827–841. [Google Scholar] [CrossRef]
  168. Razinkova, A.; Cho, H.C. Vision-based tracking of a moving ground object by quadcopter UAV using noise filtering. Int. J. Imaging Robot. 2016, 16, 1–16. [Google Scholar]
  169. Rilanto Trilaksono, B.; Triadhitama, R.; Adiprawita, W.; Wibowo, A.; Sreenatha, A. Hardware-in-the-loop simulation for visual target tracking of octorotor UAV. Aircr. Eng. Aerosp. Technol. 2011, 83, 407–419. [Google Scholar] [CrossRef]
  170. Saripalli, S.; Montgomery, J.; Sukhatme, G. Visually guided landing of an unmanned aerial vehicle. IEEE Trans. Robot. Autom. 2003, 19, 371–380. [Google Scholar] [CrossRef]
  171. Silva, C.; Goltz, G.; Shiguemori, E.; Castro, C.D.; Velho, H.D.C.; Braga, A.D. Image matching applied to autonomous navigation of unmanned aerial vehicles. Int. J. High Perform. Syst. Arch. 2016, 6, 205. [Google Scholar] [CrossRef]
  172. Stefas, N.; Bayram, H.; Isler, V. Vision-based UAV navigation in orchards. IFAC-PapersOnLine 2016, 49, 10–15. [Google Scholar] [CrossRef]
  173. Tippetts, B.J.; Lee, D.J.; Fowers, S.G.; Archibald, J.K. Real-time vision sensor for an autonomous hovering micro unmanned aerial vehicle. J. Aerosp. Comput. Inf. Commun. 2009, 6, 570–584. [Google Scholar] [CrossRef]
  174. Vendra, S.; Campa, G.; Napolitano, M.R.; Mammarella, M.; Fravolini, M.L.; Perhinschi, M.G. Addressing corner detection issues for machine vision based UAV aerial refueling. Mach. Vis. Appl. 2007, 18, 261–273. [Google Scholar] [CrossRef] [Green Version]
  175. Wang, C.L.; Wang, T.M.; Liang, J.H.; Zhang, Y.C.; Zhou, Y. Bearing-only visual SLAM for small unmanned aerial vehicles in GPS-denied environments. Int. J. Autom. Comput. 2013, 10, 387–396. [Google Scholar] [CrossRef]
  176. Wang, T.; Wang, C.; Liang, J.; Zhang, Y. Rao-Blackwellized visual SLAM for small UAVs with vehicle model partition. Ind. Robot. Int. J. 2014, 41, 266–274. [Google Scholar] [CrossRef]
  177. Xie, H.; Lynch, A.F. State transformation-based dynamic visual servoing for an unmanned aerial vehicle. Int. J. Control 2016, 89, 892–908. [Google Scholar] [CrossRef]
  178. Merriaux, P.; Dupuis, Y.; Boutteau, R.; Vasseur, P.; Savatier, X. A study of VICON system positioning performance. Sensors 2017, 17, 1591. [Google Scholar] [CrossRef]
Figure 1. Systematic mapping process.
Figure 2. Systematic search consort diagram.
Figure 3. Classification scheme.
Figure 4. Examples of vision-based tasks. (a) visual navigation: a.1 (reprinted from [19] under the terms of the Creative Commons Attribution License), a.2 (reprinted by permission from Springer Nature: [30], Copyright 2012); (b) vision-based control: b.1 (reprinted from [31] under the terms of the Creative Commons Attribution License), b.2 (reprinted from [32] by permission from Taylor & Francis Ltd); (c) vision-based tracking/guidance: c.1 (illustration of a target tracking mission, e.g. [33]), c.2 (reprinted from [34], Copyright 2010, with permission from Elsevier); (d) vision-based sense-and-avoid: d.1 (reprinted from [29] under the terms of the Creative Commons Attribution License), d.2 (reprinted from [35] by permission from John Wiley and Sons).
Figure 5. Classes of Unmanned Aerial Vehicles (UAVs): Examples of commercial and prototype aircraft. (a) fixed-wing UAV: a.1 Sig rascal model (reprinted from [53] by permission from American Society of Civil Engineers), a.2 fixed-wing platform (reprinted from [38] under the terms of the Creative Commons Attribution License); (b) rotatory-wing UAV: b.1 Yamaha Rmax helicopter (reprinted from [54] under the terms of the Creative Commons Attribution License), b.2 experimental platform based on an Ascending Technologies Pelican quadrotor (reprinted from [35] by permission from John Wiley and Sons); (c) flapping-wing UAV: c.1 Robo Raven V (reprinted from [55] under the terms of the Creative Commons Attribution License), c.2 Carbonsail Ornithopter kit (reprinted from [49] under the terms of the Creative Commons Attribution License); (d) airship: d.1 Airship ACC – 15X developed by CS AERO (https://www.csaero.cz/en), d.2 Skye drone developed by AEROTAIN (http://www.aerotain.com/).
Figure 6. Classes of vision system and examples of configurations (location and orientation). (a.1) monocular (reprinted from [67] under the terms of the Creative Commons Attribution License); (a.2) multi-camera (reprinted by permission from Springer Nature: [68], Copyright 2014); (a.3) stereo (reprinted by permission from Springer Nature: [69], Copyright 2011); (b.1) on board camera pointing downward (reprinted by permission from Springer Nature: [18], Copyright 2015); (b.2) on board stereo camera pointing forward (reprinted by permission from Springer Nature: [70], Copyright 2013); (b.3) on ground stereo camera pointing toward the UAV (reprinted from [66] under the terms of the Creative Commons Attribution License.)
Figure 7. Annual trend of publications.
Figure 8. Quartile distribution of the number of journals and papers according to Journal Citation Reports (year 2017).
Figure 9. Category distribution over the database.
Figure 10. Annual trend per category.
Figure 11. UAV class distribution over the database.
Figure 12. Annual trend per UAV class.
Figure 13. UAV class distribution over the categories.
Figure 14. Vision system class distribution over the database.
Figure 15. Annual trend per vision system.
Figure 16. Vision system distribution over the categories.
Figure 17. Validation process distribution over the database.
Figure 18. Papers per validation process.
Figure 19. Papers per flight conditions.
Table 1. Search string.
Concept | Alternative Terms and Synonyms
UAV | (“unmanned aerial vehicle” OR “unmanned air vehicle” OR “unmanned aircraft systems” OR “UAV” OR “UAS” OR “autonomous aerial vehicles” OR “aerial robotics” OR “flying machines” OR “quadrotor” OR “quad-rotor” OR “quadcopter” OR “helicopter” OR “rotorcraft” OR “VTOL” OR (“vertical taking off” AND “landing”))
AND
vision | (“vision” OR “image” OR “visual” OR “video” OR “artificial intelligence” OR “camera” OR “lidar”)
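To illustrate how such a string might be reproduced in practice, the following Python sketch assembles the two concept blocks of Table 1 into a single boolean query. The term lists come directly from the table, whereas the helper function, its name, and the exact query syntax accepted by any particular bibliographic database are assumptions made for illustration only.

```python
# Minimal sketch (not part of the original study) showing how the Table 1
# search string could be assembled programmatically before submitting it to
# a bibliographic database. Term lists are taken from Table 1; the function
# names and the final query format are illustrative assumptions.

UAV_TERMS = [
    "unmanned aerial vehicle", "unmanned air vehicle", "unmanned aircraft systems",
    "UAV", "UAS", "autonomous aerial vehicles", "aerial robotics", "flying machines",
    "quadrotor", "quad-rotor", "quadcopter", "helicopter", "rotorcraft", "VTOL",
]
VISION_TERMS = [
    "vision", "image", "visual", "video", "artificial intelligence", "camera", "lidar",
]

def or_block(terms):
    """Join a list of terms into a quoted, OR-separated block in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query():
    # The UAV concept also admits the compound alternative
    # ("vertical taking off" AND "landing"), appended before the closing parenthesis.
    uav_block = or_block(UAV_TERMS)[:-1] + ' OR ("vertical taking off" AND "landing"))'
    return f"{uav_block} AND {or_block(VISION_TERMS)}"

if __name__ == "__main__":
    print(build_query())
```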
Table 2. Validation process. Features of the trials.
Validation Process | Aircraft | Camera | Environment | Flight
Experimental:
Flight | UAV | Physical | Indoor/Outdoor | Real-Time
Offline Tests | UAV | Physical | Indoor/Outdoor | Offline
Lab Tests | UAV/Platform | Physical | Lab | Limited
Simulation:
Hardware-in-the-Loop Simulation (HILS) | Virtual/Model | Physical | Virtual | Simulated
Image-in-Loop Simulation (IILS) | Virtual/Model | Physical | Indoor/Outdoor | Simulated
3D Virtual Reality | Virtual | Virtual | Virtual | Simulated
Numerical Simulation | Model | Model | Model | Simulated
Table 3. List of the most popular publication venues.
Journal | N | % | Papers
Journal of Intelligent & Robotic Systems | 23 | 15.97% | [18,20,27,30,40,51,68,69,71,73,83,84,85,86,87,88,89,90,91,92,93,94,95]
IEEE Transactions on Aerospace and Electronic Systems | 7 | 4.86% | [28,96,97,98,99,100,101]
Sensors | 7 | 4.86% | [17,29,31,66,72,102,103]
International Journal of Advanced Robotic Systems | 5 | 3.47% | [38,64,104,105,106]
Journal of Guidance, Control, and Dynamics | 5 | 3.47% | [33,39,107,108,109]
Autonomous Robots | 4 | 2.78% | [58,110,111,112]
IEEE/ASME Transactions on Mechatronics | 4 | 2.78% | [22,59,113,114]
Journal of Field Robotics | 4 | 2.78% | [35,75,115,116]
Robotics and Autonomous Systems | 4 | 2.78% | [117,118,119,120]
Canadian Aeronautics and Space Journal | 3 | 2.08% | [121,122,123]
Control Engineering Practice | 3 | 2.08% | [34,124,125]
IEEE Transactions on Industrial Electronics | 3 | 2.08% | [44,126,127]
IEEE Transactions on Robotics | 3 | 2.08% | [128,129,130]
Other 55 Journals | 69 | 47.92% | [19,21,23,24,25,32,41,42,43,45,53,54,57,60,61,62,63,65,74,80,81,82,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177]
Table 4. List of papers per category.
Category | N | % | Papers
1. Visual Navigation
1.1. Self-Localization | 10 | 6.94% | [19,30,54,73,85,100,101,111,156,171]
1.2. State Estimation | 19 | 13.19% | [18,65,69,93,95,105,108,110,112,117,119,120,133,136,139,141,148,167,173]
1.3. Based on SLAM | 12 | 8.33% | [17,51,68,84,86,126,146,159,160,164,175,176]
2. Vision-Based Control
2.1. Stabilization | 4 | 2.78% | [21,31,131,140]
2.2. Control of Maneuvers | 24 | 16.67% | [22,32,44,45,60,61,72,87,88,97,99,109,114,115,128,129,130,142,144,150,162,163,165,177]
2.3. Autonomous Landing | 27 | 18.75% | [20,38,40,41,58,62,64,66,74,82,89,90,92,102,103,104,106,107,113,116,118,123,125,134,135,152,170]
3. Vision-Based Tracking/Guidance
3.1. Target Tracking | 17 | 11.81% | [24,33,59,80,81,121,124,132,145,149,151,153,154,155,166,168,169]
3.2. Features/Path Tracking | 6 | 4.17% | [25,34,53,94,127,157]
3.3. Autonomous Aerial Refueling (AAR) | 10 | 6.94% | [23,42,57,63,96,137,138,147,161,174]
4. Vision-Based Sense-and-Avoid (SAA)
4.1. Detection and Avoidance of Intruders | 5 | 3.47% | [27,39,43,75,98]
4.2. Mapping-based Obstacle Avoidance | 6 | 4.17% | [28,35,71,91,122,158]
4.3. Autonomous Flight and Obstacle Avoidance | 4 | 2.78% | [29,83,143,172]
Table 5. List of papers per UAV class.
UAV Class | N | % | Papers
Rotatory-Wing: Helicopter | 29 | 20.14% | [43,54,73,83,84,86,87,89,91,104,105,106,112,115,116,117,118,120,121,127,141,142,144,148,152,163,170,175,176]
Rotatory-Wing: Multirotor | 7 | 4.86% | [42,63,68,131,138,169,172]
Rotatory-Wing: Quadrotor | 65 | 45.14% | [17,18,19,20,21,22,24,25,28,29,32,34,35,44,58,59,60,61,62,69,71,72,74,80,81,82,85,88,92,93,94,95,97,99,102,107,109,110,111,114,119,122,126,128,129,130,132,135,136,139,143,145,146,149,150,151,153,159,160,162,164,165,168,173,177]
Rotatory-Wing: Sit-On-Tail VTOL | 1 | 0.69% | [45]
Fixed-Wing | 34 | 23.61% | [23,27,30,31,33,38,41,53,57,64,65,66,75,90,96,98,103,108,113,123,124,133,134,137,140,147,154,155,156,157,161,166,167,174]
Fixed-Wing: Blended Wing-Body | 2 | 1.39% | [40,125]
Fixed-Wing: Shadow | 1 | 0.69% | [39]
Airship: Blimp | 1 | 0.69% | [51]
Not Specified | 4 | 2.78% | [100,101,158,171]
Table 6. List of papers per vision system.
Vision System | Localization | Orientation | N | % | Papers
Monocular | On Board | Downward | 80 | 55.56% | [17,18,20,22,24,25,30,32,33,40,51,53,54,58,59,72,73,74,80,81,84,85,86,88,89,90,91,92,93,94,95,97,99,100,101,102,104,105,107,108,109,111,114,116,118,119,121,125,126,127,128,129,130,131,133,134,135,136,144,145,146,149,150,151,152,154,155,156,157,160,166,167,168,169,170,171,173,175,176,177]
Monocular | On Board | Forward | 26 | 18.06% | [19,21,27,39,41,43,45,71,75,83,98,113,115,120,122,123,124,140,143,148,153,158,159,162,163,164]
Monocular | On Board | Toward the Receiver UAV | 1 | 0.69% | [57]
Monocular | On Board | Toward the Tanker UAV | 7 | 4.86% | [23,96,137,138,147,161,174]
Monocular | On Board | Depending on the Test | 1 | 0.69% | [142]
Monocular | On Ground | Toward the UAV | 1 | 0.69% | [44]
Multi-Camera (2) | On Board | Forward & Downward | 6 | 4.17% | [29,31,34,60,62,132]
Multi-Camera (2) | On Board & On Ground | Downward & Toward the UAV | 1 | 0.69% | [61]
Multi-Camera (4) | On Board | Depending on the Test | 1 | 0.69% | [68]
Stereo | On Board | Downward | 8 | 5.56% | [65,82,106,110,112,117,139,141]
Stereo | On Board | Forward | 5 | 3.47% | [28,35,69,87,172]
Stereo | On Board | Toward the Receiver UAV | 2 | 1.39% | [42,63]
Stereo | On Ground | Toward the UAV | 5 | 3.47% | [38,64,66,103,165]

Share and Cite

MDPI and ACS Style

Belmonte, L.M.; Morales, R.; Fernández-Caballero, A. Computer Vision in Autonomous Unmanned Aerial Vehicles—A Systematic Mapping Study. Appl. Sci. 2019, 9, 3196. https://doi.org/10.3390/app9153196

AMA Style

Belmonte LM, Morales R, Fernández-Caballero A. Computer Vision in Autonomous Unmanned Aerial Vehicles—A Systematic Mapping Study. Applied Sciences. 2019; 9(15):3196. https://doi.org/10.3390/app9153196

Chicago/Turabian Style

Belmonte, Lidia María, Rafael Morales, and Antonio Fernández-Caballero. 2019. "Computer Vision in Autonomous Unmanned Aerial Vehicles—A Systematic Mapping Study" Applied Sciences 9, no. 15: 3196. https://doi.org/10.3390/app9153196

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
