Article

Constructing 3D Underwater Sensor Networks without Sensing Holes Utilizing Heterogeneous Underwater Robots

Electronic and Electrical Convergence Department, Hongik University, Sejong 2639, Korea
Submission received: 22 April 2021 / Revised: 5 May 2021 / Accepted: 7 May 2021 / Published: 10 May 2021
(This article belongs to the Special Issue Advances in Robot Path Planning)

Abstract:
This article addresses the autonomous construction of underwater sensor networks using multiple surface ships. To build underwater sensor networks in a 3D workspace with many obstacles, this article considers surface ships dropping underwater robots into the underwater workspace. We assume that the robots are heterogeneous, such that each robot can have a distinct sensing range and move with a distinct speed. The proposed strategy moves a single robot at a time to spread out the underwater network until the 3D cluttered workspace is fully covered by the robots' sensors, so that no sensing hole remains. As far as we know, this article is novel in enabling multiple heterogeneous robots to build underwater sensor networks in a 3D cluttered environment while satisfying the following conditions: (1) all sensing holes are removed; (2) network connectivity is maintained; (3) all underwater robots are localized. In addition, we address how to handle the case where a robot is broken, and we discuss how to estimate the number of robots required, considering the case where an obstacle inside the workspace is not known a priori. Utilizing MATLAB simulations, we demonstrate the effectiveness of the proposed network construction methods.

1. Introduction

Sensor networks can perform various autonomous tasks in dangerous and harsh environments. When each sensor has mobility, a sensor network can perform environmental monitoring [1,2,3], area coverage [4,5,6,7], formation control [8,9,10], target tracking [11,12], or intruder detection [13,14,15,16].
This article solves the coverage problem using multiple mobile sensors in an underwater cluttered environment. A mobile sensor needs to avoid underwater obstacles while it moves, since we consider 3D underwater environments with many obstacles. Underwater mobile sensors are deployed in the workspace, to perform collaborative monitoring and data collection tasks. For complete monitoring of the cluttered workspace, it is required to build sensor networks such that no sensing hole exists in the workspace.
In underwater environments, optic, electromagnetic, or acoustic sensors have been used for wireless communication [17,18]. However, building sensor networks in underwater environments is difficult, since the communication range is rather short [19,20,21].
As a method to build underwater sensor networks autonomously, this article considers surface ships dropping underwater robots into the underwater 3D workspace. Here, each robot can move and has sensing/communication abilities. Since the Global Positioning System (GPS) cannot be used to localize an underwater robot, this article introduces network building approaches that generate underwater sensor networks while localizing every robot in the network.
As a ship arrives at the sea surface of the designated workspace, it begins dropping underwater robots while not moving at the site. Once multiple robots are dropped, they autonomously move into the unknown 3D cluttered workspace until no sensing hole exists in the workspace. As a robot moves in a cluttered workspace, it needs to avoid obstacles while maintaining the network connectivity with other robots.
Many papers have addressed how to make multiple robots perform area coverage in 2D environments [4,5,8,22,23]. To build a sensor network in an unknown 2D workspace, the reference [2] made a single robot explore the workspace while deploying static sensor nodes. One restriction of this approach is that the robot must be large enough to carry all sensor nodes. Also, the robot must have enough power to explore the 2D workspace completely, since a sensor node has no mobility. The references [6,7] used Voronoi tessellations to make all robots spread out, in order to increase network coverage based on distributed information from their neighbors. The authors of [8] considered deploying a swarm of robots into an unknown 2D environment to remove all sensing holes in the environment; they handled the case where each robot has bearing-only sensors measuring the bearing to its neighbor in its local coordinate frame. Cluttered environments were not considered in the papers addressed in this paragraph.
Our paper is distinct from other papers addressed in the previous paragraph, since our paper considers the coverage problem in 3D workspace with many obstacles. Since a robot moves in a cluttered workspace, it needs to avoid obstacles while building the sensor network.
As far as we know, only a few papers handled coverage path planning for 3D space. In [24], different deployment strategies for 2D and 3D communication architectures for UnderWater Acoustic Sensor Networks (UW-ASNs) were proposed. For deploying underwater sensors, the reference [24] introduced a sensor which is anchored to the sea bottom and is equipped with a floating buoy that can be inflated by a pump. The buoy pulls the sensor towards the ocean surface. The depth of the sensor can then be regulated by adjusting the length of the wire that connects the sensor to the anchor. However, this deployment approach cannot be used for covering a deep sea. Also, collection of these sensors is not trivial, since a sensor is anchored to the sea bottom. Moreover, the deployment strategy in [24] does not assure that all coverage holes are removed after the deployment.
In this paper, we consider a robot which can freely move in 3D underwater environments. Under the proposed deployment strategy, all robots are deployed so that no sensing holes exist in the workspace. Suppose that a coverage mission of the underwater network is done. In this case, we need to gather the robots, so that they can be deployed for coverage of another space. By controlling the buoyancy of each robot, we can easily make each robot move to the sea surface. Then, a ship can collect the robots easily.
The references [25,26] handled 3D coverage path planning (i.e., path planning process for enabling full coverage of a given 3D space by one or multiple robots). The goal of such planning is to provide nearly full coverage, minimizing the occurrence of visiting an identical area multiple times. However, the references [25,26] did not address how to deploy autonomous sensors, so that the 3D workspace is continuously monitored by the deployed sensors. In our paper, we develop coverage control so that once the robots are located at their designated target positions, then the 3D workspace is continuously monitored by the deployed robots.
In practice, a robot may not be identical to another robot. For instance, some robots may not move as fast as others, since their hardware systems may be partially damaged during operation. Hence, we assume that every robot moves with a distinct speed. Moreover, a robot may have a sensing range distinct from that of another robot. Hence, this article assumes that the robots are heterogeneous, such that each robot can have a distinct sensing range while moving with a distinct speed. This article thus handles a scenario where heterogeneous robots are deployed in unknown underwater environments.
Since obstacles can block the communication link between robots, each robot is controlled while maintaining the network connectivity in obstacle-rich environments. The proposed coverage methods work by moving a single robot at a time to spread out the 3D sensor network, until the 3D workspace is fully covered without sensing holes.
The surface ships dropping robots can use GPS to localize themselves. However, GPS cannot be used for localization of an underwater robot. This article assumes that a robot can calculate the relative location of its neighboring robot using local sonar sensors. In underwater environments, sonar sensors are preferred, since sonar signals can travel longer distances than electromagnetic waves [18]. The proposed coverage methods result in underwater sensor networks without sensing holes, while localizing all robots.
As far as we know, this article is novel in enabling multiple heterogeneous robots to build underwater sensor networks in a 3D cluttered environment, while satisfying the following conditions:
  • All sensing holes are removed.
  • Network connectivity is maintained.
  • All underwater robots are localized.
This paper further addresses how to handle the case where a robot is broken. Also, we address how to estimate the number of robots required, considering the case where an obstacle inside the 3D workspace is unknown a priori. Utilizing MATLAB simulations, we demonstrate the effectiveness of the proposed coverage methods.
The article is organized as follows: Section 2 introduces the preliminary information of this article. Section 3 introduces the 3D coverage methods utilizing multiple heterogeneous robots. Section 4 shows the MATLAB simulations of the proposed 3D coverage methods. Section 5 presents a discussion of the paper, and Section 6 concludes.

2. Preliminary Information

2.1. Problem Statement

This article solves the space coverage problem using multiple robots in underwater environments. For complete monitoring of the given 3D workspace, it is required to build sensor networks such that no sensing hole exists in the given workspace.
Initially, all robots are carried by surface ships. Here, each robot has both sensing and communication abilities. As a ship arrives at the sea surface of the workspace, it begins dropping robots while not moving at the site.
The dropped robots move autonomously until the 3D workspace is fully covered by sensors of the robots. The obstacles in the workspace are unknown to each robot, and every robot moves based on local interaction with its neighboring robot. Our coverage methods result in underwater sensor networks without sensing holes, while localizing all robots. Section 3 discusses the 3D coverage methods using multiple robots.

2.2. Assumptions and Notations

Using the notations in graph theory [27], a weighted directed graph G is a set G = (V(G), E(G)), in which V(G) is the node set and E(G) is the directed edge set. The weight of every directed edge, say e ∈ E(G), is w(e) ∈ ℤ⁺. A directed path is a sequence of directed edges.
Suppose that we have S surface ships in total. Let p_s denote the s-th surface ship, where s ∈ {1, 2, …, S}. Each surface ship p_s carries N_s robots in total.
As a ship arrives at the sea surface of the workspace, it begins dropping robots while not moving at the site. Let n_i^s denote the i-th robot dropped by p_s. Since p_s carries N_s robots, we have i ∈ {1, 2, …, N_s}. The ship p_s drops robots in the following order: n_1^s → n_2^s → ⋯ → n_{N_s}^s.
Suppose that each robot has both Ultra-Short BaseLine acoustic positioning (USBL) and depth sensors. USBL sensors can be used to derive both the range and the azimuth of a signal source. USBL sensors can also be used for mutual communication, i.e., a transmitter can send data to a receiver. Each USBL sensor can work as a transmitter as well as a receiver. For instance, one can access the information on commercial USBL sensors in the following website: https://www.ixblue.com/products/usbl-positionning-system (accessed on 10 May 2021).
Let r_i^d represent the detection (sensing) range of n_i^s. Also, let r_i^c represent the communication range of n_i^s. Let the detect-communicate range r_i^dc be defined as
$$ r_i^{dc} = \min(r_i^d, r_i^c). \tag{1} $$
Let Sphere^dc of a robot n_i^s denote the sphere with radius r_i^dc whose center is located at n_i^s. Let Boundary^dc of a robot indicate the boundary of the Sphere^dc of the robot.
Let n_i^s also denote the 3D coordinates of the robot n_i^s, where i ∈ {1, 2, …, N_s}. If ‖n_i^s − n_j^s‖ < r_i^dc, then n_i^s can sense n_j^s. Also, n_i^s can send a communication signal to n_j^s.
On the Boundary^dc of a robot n_i^s, Q points are evenly generated. These points represent the points that the sensing rays of n_i^s can reach.
As an example, we discuss generating Q = 18 × 18 points on the Boundary^dc of a robot. (This generation method is utilized in Section 4.) Let us first rotate the vector [r_i^dc, 0, 0]^T by θ ∈ {π/9, 2π/9, 3π/9, …, 2π} about the y-axis. The matrix representing this rotation is
$$ R(\theta) = \begin{bmatrix} \cos(\theta) & 0 & \sin(\theta) \\ 0 & 1 & 0 \\ -\sin(\theta) & 0 & \cos(\theta) \end{bmatrix}. \tag{2} $$
Thereafter, we rotate R(θ)[r_i^dc, 0, 0]^T by ψ ∈ {π/9, 2π/9, 3π/9, …, 2π} about the z-axis. The matrix representing this rotation is
$$ R(\psi) = \begin{bmatrix} \cos(\psi) & -\sin(\psi) & 0 \\ \sin(\psi) & \cos(\psi) & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{3} $$
The resulting vector after the two rotations is
$$ \mathbf{fp}(\psi, \theta) = R(\psi)\,R(\theta)\,[r_i^{dc}, 0, 0]^T. \tag{4} $$
The vector fp(ψ, θ) represents the relative position of a point on the Boundary^dc of n_i^s with respect to n_i^s. Adding the coordinates of n_i^s to fp(ψ, θ), we obtain the 3D coordinates of a point on the Boundary^dc of n_i^s. Note that ψ and θ are each chosen from 18 values. Therefore, Q = 18 × 18 points are derived on the Boundary^dc of n_i^s.
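As a rough sketch of this point-generation step, the following Python snippet (the paper itself uses MATLAB for its simulations; all names here are hypothetical) builds the Q = 18 × 18 boundary points from the two rotations described above:

```python
import math

def boundary_points(r_dc, steps=18):
    """Points on the Boundary^dc of a robot: rotate [r_dc, 0, 0]^T by
    theta about the y-axis, then by psi about the z-axis, for
    theta, psi in {pi/9, 2*pi/9, ..., 2*pi}."""
    points = []
    for a in range(1, steps + 1):
        psi = 2 * math.pi * a / steps
        for b in range(1, steps + 1):
            theta = 2 * math.pi * b / steps
            # R(theta) about the y-axis applied to [r_dc, 0, 0]^T
            x, z = r_dc * math.cos(theta), -r_dc * math.sin(theta)
            # R(psi) about the z-axis applied to [x, 0, z]^T
            points.append((x * math.cos(psi), x * math.sin(psi), z))
    return points
```

Each generated point lies at distance r_dc from the robot; adding the robot's own coordinates yields the absolute positions of the boundary points.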
Let L(n, m) denote a straight line segment connecting two robots n and m. We say that L(n, m) is safe if it does not intersect an obstacle. This implies that a robot avoids colliding with an obstacle as it travels along L(n, m).
A freePoint of n i s denotes a point among Q points of n i s , such that a straight line segment connecting n i s and the point does not intersect obstacles. In practice, freePoints can be detected by active sonar sensors. A robot has active sonars, which can detect an obstacle close to the robot [28].
The authors of [28] considered the case where the active sonars are positioned, so that the viewing angle of the sonars is 200 degrees. In our paper, we need to install active sonars surrounding the robot. In this way, the sonars can cover the space surrounding the robot. As a sonar ray does not meet an obstacle, the ray is associated to a freePoint of the robot.
A frontierPoint f(n_i^s) of a robot n_i^s denotes a freePoint of n_i^s which is outside the Sphere^dc of any other robot. As Q → ∞, one obtains dense frontierPoints on a Boundary^dc. Let frontierSurface denote the set of frontierPoints as Q → ∞.
A frontierSurface is on the border between a space covered by all S p h e r e d c s and a space covered by no S p h e r e d c . If every robot has no frontierSurface, then all robots’ S p h e r e d c s cover the open space completely.
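The frontierPoint test follows directly from this definition. The helper below (a Python sketch with hypothetical names, not from the paper) keeps the freePoints of a robot that lie outside the Sphere^dc of every other deployed robot:

```python
import math

def frontier_points(free_points, other_robots):
    """free_points: candidate points on this robot's Boundary^dc that its
    sonar rays reach without hitting an obstacle.
    other_robots: list of (center, r_dc) pairs for all other deployed robots.
    Returns the freePoints outside every other robot's Sphere^dc."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [p for p in free_points
            if all(dist(p, c) > r for c, r in other_robots)]
```

If this function returns an empty list for every robot, no frontierPoint remains, which corresponds to the workspace being fully covered.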
Recall that n_i^s denotes the i-th robot dropped by the ship p_s (s ∈ {1, 2, …, S}). Each robot n_i^s stores and expands the traversable graph, say I^s = (V(I^s), E(I^s)). Each vertex in V(I^s) indicates a deployed robot. Every directed edge, say {n_i^s, n_j^s} ∈ E(I^s), is established once the following conditions are met:
  • L(n_i^s, n_j^s) is safe.
  • ‖n_i^s − n_j^s‖ ≤ r_i^dc.
Using the above definition, {n_i^s, n_j^s} ∈ E(I^s) implies that n_i^s can detect n_j^s. Also, n_i^s can send a communication signal to n_j^s. Moreover, ‖n_i^s − n_j^s‖ is measurable using USBL sensors.
Suppose that a new robot, say n_i^s, is deployed in I^s, so that this new robot and a robot, say n_j^{s′}, in I^{s′} (s′ ≠ s) satisfy the following two conditions for establishing a directed edge:
  • L(n_i^s, n_j^{s′}) is safe.
  • ‖n_i^s − n_j^{s′}‖ ≤ r_i^dc.
In this case, a new directed edge is established from n_i^s to n_j^{s′}. Then, I^s and I^{s′} merge to generate the bigger graph I^s ∪ I^{s′}. In this way, I^s can expand to contain a robot n_j^{s′} which is dropped by a ship p_{s′} where s′ ≠ s.
In the case where I^s is not connected to I^{s′} (s′ ≠ s), I^s and I^{s′} expand independently of each other. Once I^s is connected to I^{s′}, they merge to generate the bigger graph I^s ∪ I^{s′}.
By the definition of E(I^s), n_i^s avoids collision while traversing an edge in E(I^s). This implies that I^s is a safe topological map for n_i^s.
The weight of every directed edge, say e ∈ E(I^s), is w(e). ‖n_i^s − n_j^s‖ is measurable using USBL sensors and is set as w({n_i^s, n_j^s}). The weight of every edge in I^s is used to let n_i^s find a path in I^s.
We say that two robots n_i^s and n_j^s are neighbors in the case where ‖n_i^s − n_j^s‖ < r_i^dc + r_j^dc. Let N(n_i^s) denote a neighbor of n_i^s.
We assume that a ship dropping robots can localize itself using GPS. In summary, every robot satisfies the below assumptions:
(A1)
Using the local sonar sensor of a robot n, n is able to calculate the relative location of its neighbor robot N ( n ) .
(A2)
A robot n is able to access r d c , the detect-communicate range, of its neighbor robot N ( n ) .
(A3)
A robot n is able to calculate the relative location of a frontierPoint in f ( n ) .

2.3. Each Robot Uses USBL and Depth Sensors to Enable Assumptions (A1) and (A2)

Suppose that a robot n transmits a signal to another robot, say m ∈ N(n). The bearing of the signal is measured by the USBL of m, which contains an array of transducers. The transceiver head normally contains three or more transducers separated by a baseline of 10 cm or less. A method called “phase-differencing” within this transducer array is used to calculate the signal direction. The equation for the bearing measurement is
$$ brg = \operatorname{atan2}(\mathbf{nm}[2], \mathbf{nm}[1]) + N_b. \tag{5} $$
Here, nm = n − m, and nm[j] indicates the j-th element in nm. In (5), N_b indicates the bearing measurement noise, and atan2(y, x) denotes the angle of the complex number x + iy.
Suppose that a robot n transmits a signal to another robot, say m N ( n ) . Then, m replies with its own acoustic pulse. This return pulse is detected by n. The time from the transmission of the initial acoustic pulse until the reply is detected is measured by the USBL system of n and is converted into a range. This is feasible, since n has the reference system to measure the signal speed in underwater environments. This range estimation is then shared by both n and m.
The equation for the range measurement between n and m is
$$ rng = \|\mathbf{nm}\| + N_r. \tag{6} $$
Here, N_r indicates the range measurement noise.
Note that m has a depth sensor to measure its depth. The depth of m is measured using
$$ d(m) = \mathbf{m}[3] + N_d. \tag{7} $$
Here, N_d indicates the depth measurement noise. Also, m[3] indicates the third element in m. We define
$$ rng_p = \sqrt{rng^2 - (d(m) - d(n))^2}. \tag{8} $$
Then, the relative position of n with respect to m is
$$ relPos = [\, rng_p \cos(brg),\; rng_p \sin(brg),\; d(n) - d(m) \,]. \tag{9} $$
USBL sensors can be used to transmit information from a transmitter to a receiver. Hence, the relative position information in (9) can be shared by both m and n. Moreover, r d c information of m and n can be shared by two robots. Thus, Assumptions (A1) and (A2) hold.
We next calculate the number of communication pings required, so that two neighboring robots m and n can share the relative position information. The bearing measurement in (5) requires a single communication ping from n to m. The range measurement in (6) further requires a return pulse from m to n. Then, the calculated range measurement is sent from n to m. Also, the bearing measurement in (5) is sent from m to n. In total, four communication pings are required, so that two neighboring robots m and n can share the relative position information.
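The relative-position computation of this subsection can be sketched as follows. The function is a noise-free Python illustration (hypothetical names; N_b, N_r, N_d set to zero) of the bearing/range/depth combination above:

```python
import math

def relative_position(rng, brg, depth_n, depth_m):
    """Relative position of robot n with respect to robot m, given the
    USBL range rng, the USBL bearing brg, and the two depth readings."""
    dz = depth_n - depth_m                            # d(n) - d(m)
    rng_p = math.sqrt(max(rng ** 2 - dz ** 2, 0.0))   # projected horizontal range
    return (rng_p * math.cos(brg), rng_p * math.sin(brg), dz)
```

For example, for n located at (3, 4, 2) relative to m, feeding in rng = √29 and brg = atan2(4, 3) recovers the vector (3, 4, 2).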

2.4. Each Robot Uses Active Sonars to Enable Assumption (A3)

Recall that a frontierPoint f ( n ) of a robot n denotes a freePoint of n, which is outside the S p h e r e d c of any other robot. A robot n can detect an obstacle close to it using its local active sonar sensors. A robot has active sonars, which can sense an obstacle close to the robot [28]. Thus, a robot can calculate the relative location of its freePoint.
Moreover, n can derive the relative locations of its neighbor robots using Assumption (A1). Note that n has |N(n)| neighbors. Hence, 4|N(n)| communication pings are required, so that n and every robot in N(n) can share the relative position information.
Furthermore, using Assumption (A2), n can access the detect-communicate range of each neighbor in N(n). |N(n)| communication pings are required for this access. In summary, 5|N(n)| communication pings are required, so that n can access both the relative position and the detect-communicate range of every neighbor in N(n).
Thus, n can find a freePoint which is outside the Sphere^dc of any other robot. In other words, n is able to calculate the relative location of a frontierPoint in f(n). Hence, Assumption (A3) is derived. Note that 5|N(n)| communication pings are required to calculate the relative location of a frontierPoint in f(n).

3. Coverage Methods Utilizing Multiple Heterogeneous Robots

This section handles the following problem: Consider surface ships dropping multiple heterogeneous robots. Build 3D sensor networks without sensing holes, while localizing all robots. While each robot moves, it must avoid obstacles and preserve network connectivity.
To solve the above problem, this article proposes the coverage methods in Algorithm 1. Initially, all robots are dropped by a surface ship on the sea surface of the 3D workspace. Then, the robots are controlled one at a time, such that as more robots are located at their designated target positions, the unsensed open space gradually shrinks. The proposed coverage algorithm works by moving a single robot at a time to spread out the sensor network, starting from an initial site and spreading out to cover the workspace completely.
To build a connected network, a robot n_i^s moves into an unsensed open space within r_i^dc distance units. As a robot n_i^s reaches its designated target position f_{n_i^s} (see Algorithm 1), it activates its local sonar sensors with range r_i^dc to sense its surroundings.
Algorithm 1 Coverage methods using a ship p_s.
 1: Initially, all robots are stored in p_s on the sea surface of the 3D workspace;
 2: One robot, say n_1^s, is dropped by the ship for the first time;
 3: Localize n_1^s using the 3D coordinates of p_s;
 4: The robot n_1^s activates its local sonar sensors with range r_1^dc;
 5: FrontierPoints of n_1^s are generated;
 6: i = 2;
 7: repeat
 8:   One robot, say n_i^s, is dropped by the ship;
 9:   By applying the directed breadth-first search on I^s, n_i^s finds the nearest robot, say m, with a frontierPoint;
10:   One frontierPoint on m is chosen as the target position, say f_{n_i^s}, of n_i^s;
11:   Based on the directed breadth-first search, n_i^s travels along the directed path to reach m;
12:   n_i^s moves to f_{n_i^s};
13:   if n_i^s reaches f_{n_i^s} then
14:     n_i^s activates its local sonar sensors with range r_i^dc;
15:     Localize n_i^s using the 3D coordinates of f_{n_i^s};
16:     n_i^s transmits a signal to the ship via multi-hop communication using USBL, so that the ship can deploy the next robot;
17:   end if
18:   i = i + 1;
19: until i == N_s or no frontierPoint remains in I^s;
In Algorithm 1, the robots are controlled one at a time. Initially, n_1^s activates its active sonar sensors to sense obstacles close to it. Thus, n_1^s can calculate the relative locations of its freePoints. Thereafter, frontierPoints of n_1^s are generated. Since n_1^s is the robot deployed first, it has no neighbors when it generates frontierPoints. Therefore, no communication pings are required to calculate the relative location of a frontierPoint in f(n_1^s).
Whenever a new robot, say n, is dropped by a ship, n moves until arriving at the nearest frontierPoint. The robot n then activates its local sonar sensors to sense its surroundings, and frontierPoints of n are generated accordingly. Note that 5|N(n)| communication pings are required to calculate the relative location of a frontierPoint in f(n). This repeats until every robot is positioned at its designated target position.
Recall that robots are dropped in the following order: n_1^s → n_2^s → ⋯ → n_{N_s}^s. Consider the case where n_{i−1}^s has just activated its local sonar sensors to sense its surroundings. The robot n_i^s finds the nearest robot, say m, with a frontierPoint. The directed breadth-first search can be used for finding m. The time complexity of the directed breadth-first search is O(|E(I^s)|), since every edge is explored in the worst case.
One frontierPoint on m is then chosen as f n i s . Based on the directed breadth-first search, n i s travels along a directed path, say P, in I s to arrive at m. Recalling the notation of I s , P is a safe path for n i s . Furthermore, the length of every straight line segment of P is shorter than r i d c .
As n i s arrives at a robot in the path P, n i s can move towards the next robot in the path P under Assumption (A1). GPS is not required for this local maneuver. In order to traverse the path P, n i s utilizes local interaction with a robot in P. As n i s arrives at a robot, say n l s , in P, n l s lets n i s access the next robot in P. This information sharing is possible under Assumption (A1).
As n i s arrives at the last robot in the path P, n i s can move towards f n i s under Assumption (A3). After n i s arrives at f n i s , n i s activates its local sonar sensors to sense its surroundings.
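The directed breadth-first search used to find the nearest robot m with a frontierPoint, together with the directed path P, can be sketched as follows (a Python sketch over an adjacency-list representation of I^s; names hypothetical):

```python
from collections import deque

def nearest_robot_with_frontier(graph, start, has_frontier):
    """graph: dict mapping each robot to the robots reachable from it by
    one directed edge of I^s. Returns the directed path from start to the
    nearest robot with a frontierPoint, or None if no such robot exists."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if has_frontier(node):
            path = []            # reconstruct the directed path P
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None
```

Because every edge of I^s is safe and shorter than the detect-communicate range, the returned path can be traversed hop by hop using only local interaction.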
Once n i s activates its local sonar sensors, several frontierPoints inside the S p h e r e d c of n i s are deleted. This deletion is possible based on local interaction of n i s .
We explain the deletion process in detail. Suppose that a point in f ( n j s ) is inside the S p h e r e d c of n i s . Since the relative distance between n j s and a point in f ( n j s ) is r j d c , n j s is a neighbor to n i s . Under Assumption (A1), n i s calculates the relative location of n j s .
Suppose that n_i^s calculates the relative location of n_j^s. The relative vector from n_i^s to a point in f(n_j^s) is derived by adding the following two vectors, which are available based on Assumptions (A1) and (A3):
  1. the vector from n_j^s to the point in f(n_j^s).
  2. the vector from n_i^s to n_j^s.
In this way, as n_i^s activates its local sonar sensors, the frontierPoints inside the Sphere^dc of n_i^s are deleted. Recall that 5|N(n_i^s)| communication pings are required to calculate the relative location of a frontierPoint in f(n_i^s).
Then, n i s broadcasts the deletion of frontierPoints to every other robot which can communicate with n i s using multi-hop communication. Here, multi-hop communication is feasible using USBL of each robot.
This broadcast of n_i^s requires |V(I^s)| communication pings, since there are |V(I^s)| nodes in I^s. Under Algorithm 1, |V(I^s)| increases from 1 to Σ_{s=1}^{S} N_s. Therefore, the number of broadcast pings is 1 + 2 + ⋯ + Σ_{s=1}^{S} N_s.
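Since the broadcast cost grows with the network size, the total 1 + 2 + ⋯ + M broadcast pings (M being the total number of deployed robots) has the usual closed form. A small Python check of this count (hypothetical helper name):

```python
def total_broadcast_pings(num_robots):
    """Sum 1 + 2 + ... + M of broadcast pings as |V(I^s)| grows
    from 1 to M = num_robots, using the closed form M * (M + 1) / 2."""
    return num_robots * (num_robots + 1) // 2
```

For example, with 10 robots in total, 55 broadcast pings are needed over the whole deployment.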
Also, n_i^s transmits a signal to the ship via multi-hop communication using USBL, so that the ship can deploy the next robot n_{i+1}^s. The robot n_{i+1}^s finds the nearest frontierPoint and travels along I^s until arriving at it. This repeats until i == N_s in Algorithm 1.
Figure 1 illustrates the case where n i s heads towards f n i s . In this figure, n i s is illustrated as a sphere. n i s travels along a narrow passage in Figure 1. Red curves indicate the obstacle boundaries, and the path of n i s is illustrated with yellow line segments. The large dots indicate the robots which are already at their target positions. S p h e r e d c of every robot is illustrated as a sphere. FrontierPoints are depicted as points on one robot’s B o u n d a r y d c .

3.1. Analysis

Algorithm 1 ends when i == N_s or no frontierPoint remains in I^s. If i == N_s, then there is no remaining robot in the ship p_s. Note that all sensing holes disappear only when Σ_{s=1}^{S} N_s is sufficiently large. Based on simulations, Section 4.1 presents how to estimate the number of robots required for covering a 3D workspace, considering the case where an obstacle inside the workspace is not known a priori.
In Algorithm 1, a robot n i s travels along a directed path in I s until arriving at f n i s . The next theorem proves that n i s can find at least one directed path from n 1 s to f n i s . Recall that n 1 s is the first robot dropped by the ship p s .
Theorem 1.
Consider the situation where all robots move under Algorithm 1. A robot n i s can find at least one directed path from n 1 s to f n i s .
Proof. 
In Algorithm 1, n i s finds the nearest robot, say m, with a frontierPoint by applying the directed breadth-first search on I s . Then, one frontierPoint on m is chosen as the target position, say f n i s , of n i s .
Since f n i s is a frontierPoint, it exists on B o u n d a r y d c of m. Since m is found by applying the directed breadth-first search, there exists a directed path from n 1 s to m.
We next prove that there exists a directed path from m to f_{n_i^s}. Since a frontierPoint is a freePoint, the straight line segment connecting m and f_{n_i^s} is safe. Moreover, f_{n_i^s} exists on the Boundary^dc of m. Hence, there exists a directed edge from m to f_{n_i^s}.
We proved that there exists a directed path from n_1^s to m, and that there exists a directed edge from m to f_{n_i^s}. Thus, there exists a directed path from n_1^s to f_{n_i^s} such that the path contains m.  □
The next theorem proves that f n i s and all robots along the directed path are localized. While n i s travels along the path, it is connected to n 1 s . In addition, n i s avoids collision during the maneuver.
Theorem 2.
Consider the situation where all robots move under Algorithm 1. While a robot n i s moves along the directed path in I s until meeting f n i s , n i s is connected to n 1 s . During the maneuver, n i s avoids collision with obstacles. Also, f n i s and all robots along the directed path are localized.
Proof. 
n_i^s travels along the directed path in I^s until arriving at f_{n_i^s}. Let {m_1 → m_2 → ⋯ → m_end} define the order of robots along the path until arriving at f_{n_i^s}. Here, m_1 is n_1^s, since n_1^s is the first robot dropped by the ship p_s. After n_i^s arrives at m_j (j ≤ end − 1), n_i^s moves toward m_{j+1}. In addition, after n_i^s arrives at m_end, n_i^s moves toward f_{n_i^s}.
While n_i^s travels along an edge from m_k to m_{k+1}, n_i^s avoids collision, since I^s is safe for a robot. In addition, n_i^s is connected to m_k by the definition of I^s. Furthermore, m_k is connected to m_1 = n_1^s, since the directed path is found using the directed breadth-first search. Hence, n_i^s is connected to n_1^s during the maneuver.
Recall that every frontierPoint is a freePoint. Thus, while n_i^s travels along an edge from m_end to f_{n_i^s}, n_i^s avoids collision. Furthermore, n_i^s is connected to m_end. Using a similar argument as in the previous paragraph, n_i^s is connected to n_1^s during the maneuver.
We now discuss how to localize all robots {m_1, m_2, …, m_end} using Assumption (A1). m_1 is n_1^s, since n_1^s is the first robot dropped by the ship p_s; n_1^s was localized using the 3D coordinates of p_s. Thus, m_1 is localized.
Also, the 3D coordinates of m_{j+1} are the sum of the following two coordinates:
1. the 3D coordinates of m_j;
2. the relative position of m_{j+1} with respect to m_j.
Here, the relative position of m_{j+1} with respect to m_j is available using Assumption (A1). By induction, we localize m_j for all j ≤ end.
We next address how to localize f_{n_i^s}. The 3D coordinates of f_{n_i^s} are the sum of the following two coordinates:
1. the 3D coordinates of m_end;
2. the relative position of f_{n_i^s} with respect to m_end.
Here, the relative position of f_{n_i^s} with respect to m_end is available using Assumption (A3). This proves the theorem.  □
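The chained localization in this proof, anchoring at the ship-dropped robot and summing relative USBL measurements along the path, can be sketched in a few lines. This is an illustrative sketch; the helper name `localize_chain` and the tuple representation are assumptions, not the paper's code.

```python
def localize_chain(anchor, rel_positions):
    """Localize robots m_1, ..., m_end along a directed path.

    anchor: known 3D coordinates of m_1 (= n_1^s, dropped by the ship).
    rel_positions: rel_positions[j] is the measured relative position of
    the (j+2)-th robot with respect to the (j+1)-th robot (Assumption (A1)).
    Returns the absolute 3D coordinates of every robot on the path.
    """
    coords = [tuple(float(c) for c in anchor)]
    for rel in rel_positions:
        last = coords[-1]
        # Absolute position = previous absolute position + relative offset.
        coords.append(tuple(a + b for a, b in zip(last, rel)))
    return coords

# m_1 at the drop point (30, 30, 0); two relative measurements along the path
chain = localize_chain((30.0, 30.0, 0.0), [(50, 0, -40), (0, 50, -40)])
print(chain[-1])  # (80.0, 80.0, -80.0)
```

Note that localization error accumulates along the chain in practice; the simulation section models this by adding Gaussian noise to target coordinates.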
Theorem 3 proves that after Algorithm 1 is done, I^s contains all robots dropped by the ship p^s, and that all robots in I^s are localized.
Theorem 3.
Consider the situation where all robots move under Algorithm 1. After Algorithm 1 is done, I^s contains all robots dropped by p^s. Also, all robots in I^s are localized.
Proof. 
By Theorem 2, n_i^s is connected to n_1^s while n_i^s travels along the directed path to f_{n_i^s}. As n_i^s arrives at f_{n_i^s}, n_i^s remains connected to n_1^s. Note that n_1^s does not move at all under Algorithm 1.
By induction, all robots n_2^s, n_3^s, ..., n_{i−1}^s are connected to n_1^s. As i varies from 1 to N^s under Algorithm 1, all robots dropped by p^s reach their target positions, and I^s contains all these robots.
We next prove that all robots in I^s are localized. Suppose, to the contrary, that a robot, say n_c^s, is not localized. Since I^s contains all robots dropped by p^s, there exists a directed path from n_1^s to n_c^s by Theorem 1. By the same arguments as in the last two paragraphs of the proof of Theorem 2, all robots along this path are localized. Thus, n_c^s is localized, a contradiction. We proved that all robots in I^s are localized.  □
Theorem 4 proves that if one cannot find a frontierSurface, then the obstacle-free space is completely covered by the union of all Sphere_{d_c}, such that no coverage hole exists.
Theorem 4.
If we cannot find a frontierSurface, then the union of all Sphere_{d_c} covers the obstacle-free space completely, such that no coverage hole exists.
Proof. 
By contraposition, it suffices to prove the following statement: if there exists obstacle-free space outside every Sphere_{d_c}, then we can find a frontierSurface.
Suppose that such uncovered obstacle-free space exists, and let O indicate it. Then, at least one robot, say n_i^s, has a frontierSurface intersecting O. Using Theorem 3, I^s contains n_i^s under Algorithm 1. Hence, we can find this frontierSurface using I^s.  □
Theorem 4 proved that once the 3D network is completely generated using Algorithm 1, no sensing hole remains.
In Algorithm 1, n_i^s moves along the directed path in I^s until reaching f_{n_i^s}. Let {m_1, m_2, ..., m_end} represent the robots on the path, where m_1 is n_1^s. After n_i^s arrives at m_j (j ≤ end − 1), n_i^s heads toward m_{j+1}. After n_i^s arrives at m_end, n_i^s heads toward f_{n_i^s}.
A robot n_i^s needs to follow the path {m_1, m_2, ..., m_end}. In practice, n_i^s can be an Autonomous Underwater Vehicle (AUV) with acceleration constraints. An AUV can follow the path using the path-following controls in [29,30,31,32]. Note that designing path-following controls for a robot is not within the scope of this paper.

3.2. Handling Broken Robot or Network Delay

After the network is completely built, a robot may break down, in which case sensing holes may appear due to the broken robot. Then, a frontierPoint of a robot, say a_n, can be detected using its local sensor. Suppose that a new robot, say n_N, is deployed from a ship to replace the broken robot. Based on Assumption (A3), each frontierPoint of a_n can be located using the local sensor of a_n. One frontierPoint of a_n is then chosen as the target position, say f_{n_N}. Based on I^s, n_N travels along the directed path to reach this target position. In this way, n_N replaces the broken robot.
Algorithm 1 does not require synchronization among the robots. Suppose that a robot, say n_j^s, has reached its target position. It then transmits a signal to the ship via multi-hop communication using USBL, so that the ship can deploy the next robot, say n_i^s. In multi-hop communications, network delay during data transfer is inevitable.
Algorithm 1 is robust to this delay. Suppose that it takes δ seconds until n_i^s finds the directed path to the nearest frontierPoint based on I^s. The robot n_i^s begins moving only after it finds this directed path, and no other robot moves while n_i^s moves. Hence, Algorithm 1 works regardless of how long the delay δ is.
Note that while a robot n_i^s travels along the path to the nearest frontierPoint, all other robots stand still. Hence, the speed of a moving robot does not affect the correctness of Algorithm 1; a slow robot merely takes longer to reach its associated frontierPoint.
One may argue that the proposed coverage method runs slowly, since only a single robot moves at a time. However, the coverage process can be accelerated by increasing the speed of each robot, which is possible precisely because a robot's speed does not affect Algorithm 1. Moreover, the network building process can be accelerated by letting multiple surface ships drop multiple robots simultaneously.

4. MATLAB Simulation Results

We verify the performance of Algorithm 1 with MATLAB simulations. The sampling interval is dt = 1 s. We use Q = 36 × 36, and consider a given workspace with a known size of [0, 200] × [0, 200] × [0, 200] in meters. This implies that the workspace has [0, 200] as its x-coordinate, y-coordinate, and z-coordinate ranges. Note that obstacles are unknown a priori.
In Algorithm 1, a robot n_i^s moves along a directed path to reach its target position. Designing path-following controls is not within the scope of this paper. In simulations, the dynamics of n_i^s are given by
n_i^s(k+1) = n_i^s(k) + u_i^s(k) · dt,   (10)
where dt presents the sampling interval in discrete-time systems, and u_i^s(k) presents the velocity of n_i^s at sample step k. The dynamic model in (10) is commonly used in multi-robot systems [10,33,34]. Let U_i^s = max_k ‖u_i^s(k)‖ denote the maximum speed of n_i^s. Since the robots are heterogeneous, U_i^s ≠ U_j^s is possible.
The controllers for n_i^s are designed in discrete time. Let W indicate the next waypoint that n_i^s will encounter as it travels along the directed path to its target position f_{n_i^s}, and let w indicate the 3D coordinates of W.
Let g = w − n_i^s(k). n_i^s is controlled to move toward w. If ‖g‖ > U_i^s · dt, then (10) is set as
n_i^s(k+1) = n_i^s(k) + U_i^s · (g / ‖g‖) · dt.   (11)
If ‖g‖ ≤ U_i^s · dt and W ≠ f_{n_i^s}, then n_i^s heads toward the next waypoint after W. If W = f_{n_i^s} and ‖g‖ ≤ U_i^s · dt, then (10) is set as
n_i^s(k+1) = n_i^s(k) + (‖g‖ / dt) · (g / ‖g‖) · dt,   (12)
which leads to
n_i^s(k+1) = w.
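The waypoint update rule above can be sketched as follows. This is an illustrative Python sketch of the same update (the paper's simulations are in MATLAB); the helper name `step_toward` is an assumption for illustration.

```python
import math

def step_toward(pos, waypoint, u_max, dt):
    """One sample step of the waypoint controller in (10)-(12).

    Moves at maximum speed u_max toward the waypoint (rule (11)), and
    lands exactly on the waypoint once it is reachable within one
    sample step (rules (12) and the identity n(k+1) = w).
    """
    g = [w - p for p, w in zip(pos, waypoint)]
    norm_g = math.sqrt(sum(c * c for c in g))
    if norm_g <= u_max * dt:
        return list(waypoint)        # waypoint reachable in one step
    scale = (u_max * dt) / norm_g    # full-speed move along g / ||g||
    return [p + scale * c for p, c in zip(pos, g)]

# U = 5 m/s and dt = 1 s: the 50 m leg to (30, 40, 0) takes 10 steps.
pos = [0.0, 0.0, 0.0]
for _ in range(25):
    pos = step_toward(pos, [30.0, 40.0, 0.0], u_max=5.0, dt=1.0)
print(pos)  # [30.0, 40.0, 0.0]
```

Checking the guard `norm_g <= u_max * dt` before dividing also avoids a division by zero when the robot is already at the waypoint.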
We consider two ships in total, i.e., S = 2. Using the simulation in the workspace without obstacles (Section 4.1), we estimate the number of required robots as 32. Hence, we conjecture that 5, 10, or 20 robots are not sufficient to cover the workspace, and we use 50 robots in this section.
Each ship carries 25 robots. The position of one ship carrying N^1 = 25 robots is (30, 30, 0) in meters; that is, n_1^1, n_2^1, ..., n_25^1 are located at (30, 30, 0) initially. The position of the other ship carrying N^2 = 25 robots is (150, 150, 0) in meters; that is, n_1^2, n_2^2, ..., n_25^2 are located at (150, 150, 0) initially.
We consider heterogeneous robots as follows: each robot may move with a distinct speed while carrying a distinct USBL sensor. If i mod 3 is zero, r_i^{d_c} is 100 m and the maximum speed is U_i^s = 5 m/s. If i mod 3 is one, r_i^{d_c} is 50 m and the maximum speed is U_i^s = 3 m/s. If i mod 3 is two, r_i^{d_c} is 75 m and the maximum speed is U_i^s = 2 m/s.
In practice, sonar sensing noise and external disturbances exist, so a robot may not be located at its target position accurately. Since we localize n_i^s using the 3D coordinates of f_{n_i^s} (Algorithm 1), we added Gaussian noise with mean 0 and standard deviation 1 m to each element of the 3D coordinates of f_{n_i^s}. In this way, n_i^s is not located exactly at f_{n_i^s}.
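The heterogeneous parameter assignment and the localization noise described above can be reproduced with a short sketch. This is illustrative Python (the simulations themselves are in MATLAB), and the helper names `robot_params` and `noisy_target` are hypothetical.

```python
import random

def robot_params(i):
    """Sensing radius r_i^{d_c} (m) and max speed U_i (m/s), by i mod 3."""
    table = {0: (100.0, 5.0), 1: (50.0, 3.0), 2: (75.0, 2.0)}
    return table[i % 3]

def noisy_target(target, sigma=1.0, rng=random):
    """Perturb each coordinate of a target with N(0, sigma^2) noise in meters."""
    return [c + rng.gauss(0.0, sigma) for c in target]

print(robot_params(7))                      # (50.0, 3.0)
print(noisy_target([100.0, 100.0, -50.0]))  # near the nominal target
```

The noise model means a robot settles near, rather than exactly on, its chosen frontierPoint, which is the behavior the simulation evaluates.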
Figure 2 shows the final sensor configuration after the robots are deployed to cover the 3D workspace with a known size of 200 × 200 × 200 m. The robots spend 966 s to cover the workspace completely, while the MATLAB computation to build the complete network without sensing holes takes 7 s. Among the 50 robots, 18 robots move to cover the workspace. Yellow diamonds indicate robot positions dropped from p^1, and blue diamonds indicate robot positions dropped from p^2. The position of a ship is marked with a circle. In the figure, spheres indicate obstacles in the environment.
Considering the scenario in Figure 2, Figure 3 shows the sensor network that is built utilizing our 3D coverage methods. In Figure 3, the path of a robot that is dropped from the ship at ( 30 , 30 , 0 ) is marked with yellow asterisks. Also, the path of a robot that is dropped from the ship at ( 150 , 150 , 0 ) is marked with blue asterisks.

4.1. Estimate the Number of Robots Required

Next, we address how to estimate the number of robots required to cover a 3D workspace when the obstacles inside the workspace are not known a priori. To this end, we simulate Algorithm 1 while setting no obstacles in the environment; since the obstacles are unknown a priori, they cannot inform the estimate.
Figure 4 shows the sensor configuration as Algorithm 1 is used to simulate the robot maneuvers for covering the 3D workspace with a known size of [0, 200] × [0, 200] × [0, 200] in meters. Note that there is no obstacle in the simulated environment.
In Figure 4, the path of a robot that is dropped from the ship at ( 30 , 30 , 0 ) is marked with yellow asterisks. Also, the path of a robot that is dropped from the ship at ( 150 , 150 , 0 ) is marked with blue asterisks. Using MATLAB simulations, 14 s are spent to build the complete network without sensing holes.
In total, 32 robots are dropped in the simulation, and the position of each robot is depicted with a green circle in Figure 4. Using this simulation in the workspace without obstacles, we estimate the number of required robots as 32.

5. Discussions

In Algorithm 1, a robot moves along a directed path to reach its target position. A robot, which can be an AUV with acceleration constraints, can follow the path using the path-following controls in [29,30,31,32]. As a robot moves along a path, it must avoid collisions with moving obstacles, such as other vehicles or animals. Various reactive collision avoidance methods [35,36,37] can be integrated with the proposed path planner, so that a robot can avoid collisions with unexpected obstacles.

6. Conclusions

As a method to build underwater sensor networks in a cluttered underwater environment, this article considers surface ships dropping robots into the sea. Multiple robots move autonomously to build 3D sensor networks while avoiding collisions with unknown obstacles. The 3D coverage algorithm works by deploying a single robot at a time to spread out the network until the open space is fully covered. Moreover, this article discusses how to handle broken robots, as well as how to estimate the number of robots required when an obstacle inside the workspace is not known a priori.
Utilizing MATLAB simulations, we demonstrated the effectiveness of the proposed network construction methods. As future work, we will perform experiments to verify the proposed methods utilizing real underwater robots.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1063151).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Akyildiz, I.; Su, W.; Sankarasubramaniam, Y.; Cayirci, E. A survey on sensor networks. IEEE Commun. Mag. 2002, 40, 102–114. [Google Scholar] [CrossRef] [Green Version]
  2. Batalin, M.A.; Sukhatme, G.S. Coverage, exploration and deployment by a mobile robot and communication network. IEEE Commun. Mag. 2004, 26, 181–196. [Google Scholar] [CrossRef]
  3. Felemban, E.; Shaikh, F.K.; Qureshi, U.M.; Sheikh, A.A.; Qaisar, S.B. Underwater Sensor Network Applications: A Comprehensive Survey. Int. J. Distrib. Sens. Netw. 2015, 11, 896832. [Google Scholar] [CrossRef] [Green Version]
  4. Smith, B.; Howard, A.; McNew, J.; Egerstedt, M. Multi-Robot Deployment and Coordination with Embedded Graph Grammars. Auton. Robot. 2009, 26, 79–98. [Google Scholar] [CrossRef]
  5. Cortés, J.; Martínez, S.; Karatas, T.; Bullo, F. Coverage control for mobile sensing networks. IEEE Trans. Robot. Autom. 2004, 20, 243–255. [Google Scholar] [CrossRef]
  6. Siligardi, L.; Panerati, J.; Kaufmann, M.; Minelli, M.; Ghedini, C.; Beltrame, G.; Sabattini, L. Robust Area Coverage with Connectivity Maintenance. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 2202–2208. [Google Scholar]
  7. Stergiopoulos, Y.; Kantaros, Y.; Tzes, A. Connectivity-aware coordination of robotic networks for area coverage optimization. In Proceedings of the 2012 IEEE International Conference on Industrial Technology, Athens, Greece, 19–21 March 2012; pp. 31–35. [Google Scholar]
  8. Ramaithitima, R.; Whitzer, M.; Bhattacharya, S.; Kumar, V. Sensor coverage robot swarms using local sensing without metric information. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3408–3415. [Google Scholar]
  9. Franchi, A.; Masone, C.; Grabe, V.; Ryll, M.; Bülthoff, H.H.; Giordano, P.R. Modeling and Control of UAV Bearing Formations with Bilateral High-level Steering. Int. J. Robot. Res. 2012, 31, 1504–1525. [Google Scholar] [CrossRef]
  10. Luo, S.; Kim, J.; Parasuraman, R.; Bae, J.H.; Matson, E.T.; Min, B.C. Multi-robot rendezvous based on bearing-aided hierarchical tracking of network topology. Ad Hoc Netw. 2019, 86, 131–143. [Google Scholar] [CrossRef]
  11. Dehnavi, S.M.; Ayati, M.; Zakerzadeh, M.R. Three dimensional target tracking via Underwater Acoustic Wireless Sensor Network. In Proceedings of the 2017 Artificial Intelligence and Robotics (IRANOPEN), Qazvin, Iran, 9 April 2017; pp. 153–157. [Google Scholar]
  12. Kim, J. Three-dimensional multi-robot control to chase a target while not being observed. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419829667. [Google Scholar] [CrossRef]
  13. Kim, J. Cooperative Exploration and Protection of a Workspace Assisted by Information Networks. Ann. Math. Artif. Intell. 2014, 70, 203–220. [Google Scholar] [CrossRef]
  14. Kim, J. Capturing intruders based on Voronoi diagrams assisted by information networks. Int. J. Adv. Robot. Syst. 2017, 14, 1–8. [Google Scholar] [CrossRef]
  15. Kim, J.; Maxon, S.; Egerstedt, M.; Zhang, F. Intruder Capturing Game on a Topological Map Assisted by Information Networks. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011; pp. 6266–6271. [Google Scholar]
  16. Kim, J. Intruder capture algorithms considering visible intruders. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419846739. [Google Scholar] [CrossRef]
  17. Spagnolo, G.S.; Cozzella, L.; Leccese, F. Underwater Optical Wireless Communications: Overview. Sensors 2020, 20, 2261. [Google Scholar] [CrossRef] [PubMed]
  18. Wu, T.C.; Chi, Y.C.; Wang, H.Y.; Tsai, C.T.; Lin, G.R. Blue Laser Diode Enables Underwater Communication at 12.4 Gbps. Sci. Rep. 2017, 7, 1–10. [Google Scholar]
  19. Liu, J.; Wang, Z.; Cui, J.H.; Zhou, S.; Yang, B. A Joint Time Synchronization and Localization Design for Mobile Underwater Sensor Networks. IEEE Trans. Mob. Comput. 2016, 15, 530–543. [Google Scholar] [CrossRef]
  20. Misra, S.; Ojha, T.; Mondal, A. Game-Theoretic Topology Control for Opportunistic Localization in Sparse Underwater Sensor Networks. IEEE Trans. Mob. Comput. 2015, 14, 990–1003. [Google Scholar] [CrossRef]
  21. Han, G.; Zhang, C.; Shu, L.; Rodrigues, J.J.P.C. Impacts of Deployment Strategies on Localization Performance in Underwater Acoustic Sensor Networks. IEEE Trans. Ind. Electron. 2015, 62, 1725–1733. [Google Scholar] [CrossRef]
  22. Jadbabaie, A.; Lin, J.; Morse, A.S. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. Autom. Control 2003, 48, 349–368. [Google Scholar]
  23. McNew, J.M.; Klavins, E. Locally interacting hybrid systems with embedded graph grammars. In Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 13–15 December 2006; pp. 6080–6087. [Google Scholar]
  24. Pompili, D.; Melodia, T.; Akyildiz, I.F. Three-dimensional and two-dimensional deployment analysis for underwater acoustic sensor networks. Ad Hoc Netw. 2009, 7, 778–790. [Google Scholar] [CrossRef]
  25. Lin, Y.; Ni, C.; Lei, N.; David Gu, X.; Gao, J. Robot Coverage Path planning for general surfaces using quadratic differentials. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5005–5011. [Google Scholar]
  26. Galceran, E.; Carreras, M. Planning coverage paths on bathymetric maps for in-detail inspection of the ocean floor. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 4159–4164. [Google Scholar]
  27. Douglas, B.W. Introduction to Graph Theory, 2nd ed.; Prentice Hall: Chicago, IL, USA, 2001. [Google Scholar]
  28. Calado, P.; Gomes, R.; Nogueira, M.B.; Cardoso, J.; Teixeira, P.; Sujit, P.B.; Sousa, J.B. Obstacle avoidance using echo sounder sonar. In Proceedings of the OCEANS 2011 IEEE, Santander, Spain, 6–9 June 2011; pp. 1–6. [Google Scholar]
  29. Peng, Z.; Wang, J.; Wang, J. Constrained Control of Autonomous Underwater Vehicles Based on Command Optimization and Disturbance Estimation. IEEE Trans. Ind. Electron. 2019, 66, 3627–3635. [Google Scholar] [CrossRef]
  30. Peng, Z.; Wang, J.; Han, Q. Path-Following Control of Autonomous Underwater Vehicles Subject to Velocity and Input Constraints via Neurodynamic Optimization. IEEE Trans. Ind. Electron. 2019, 66, 8724–8732. [Google Scholar] [CrossRef]
  31. Shen, C.; Shi, Y.; Buckham, B. Path-Following Control of an AUV: A Multiobjective Model Predictive Control Approach. IEEE Trans. Control Syst. Technol. 2019, 27, 1334–1342. [Google Scholar] [CrossRef]
  32. Suarez Fernandez, R.A.; Parra, R.E.A.; Milosevic, Z.; Dominguez, S.; Rossi, C. Nonlinear Attitude Control of a Spherical Underwater Vehicle. Sensors 2019, 19, 1445. [Google Scholar] [CrossRef] [Green Version]
  33. Ji, M.; Egerstedt, M. Distributed Coordination Control of Multi-Agent Systems While Preserving Connectedness. IEEE Trans. Robot. 2007, 23, 693–703. [Google Scholar] [CrossRef]
  34. Kim, J.; Kim, S. Motion control of multiple autonomous ships to approach a target without being detected. Int. J. Adv. Robot. Syst. 2018, 15, 1–8. [Google Scholar] [CrossRef]
  35. Chakravarthy, A.; Ghose, D. Obstacle avoidance in a dynamic environment: A collision cone approach. IEEE Trans. Syst. Man Cybern. 1998, 28, 562–574. [Google Scholar] [CrossRef] [Green Version]
  36. Van den Berg, J.; Guy, S.J.; Lin, M.; Manocha, D. Reciprocal n-Body Collision Avoidance. Robot. Res. Springer Tracts Adv. Robot. 2011, 70, 3–19. [Google Scholar]
  37. Kim, J. Control laws to avoid collision with three dimensional obstacles using sensors. Ocean Eng. 2019, 172, 342–349. [Google Scholar] [CrossRef]
Figure 1. n_i^s heads toward f_{n_i^s}. Red curves indicate obstacle boundaries, and the path of n_i^s is illustrated with yellow line segments. The large dots indicate the robots that are already at their target positions. FrontierPoints are depicted as points on one robot's Boundary_{d_c}.
Figure 2. The final sensor configuration after the robots are deployed to cover the 3D workspace. Among 50 robots, 18 robots move to cover the workspace. Yellow diamonds indicate robot positions dropped from p 1 , and blue diamonds indicate robot positions dropped from p 2 . The ship position is marked with a circle. Spheres indicate obstacles in the environment.
Figure 3. Considering the scenario in Figure 2, this figure shows the sensor network that is built under our 3D coverage methods. The path of a robot that is dropped from the ship at ( 30 , 30 , 0 ) is marked with yellow asterisks. Furthermore, the path of a robot that is dropped from the ship at ( 150 , 150 , 0 ) is marked with blue asterisks.
Figure 4. The sensor configuration after the robots are simulated to cover the 3D workspace without obstacles. 32 robots are dropped in total, and the position of a robot is depicted with a green circle. The path of a robot that is dropped from the ship at ( 30 , 30 , 0 ) is marked with yellow asterisks. Also, the path of a robot that is dropped from the ship at ( 150 , 150 , 0 ) is marked with blue asterisks. Using the simulation in the workspace without obstacles, we can estimate the number of required robots as 32.

Kim, J. Constructing 3D Underwater Sensor Networks without Sensing Holes Utilizing Heterogeneous Underwater Robots. Appl. Sci. 2021, 11, 4293. https://0-doi-org.brum.beds.ac.uk/10.3390/app11094293
