Review

Localization of Sound Sources: A Systematic Review

by Muhammad Usman Liaquat 1, Hafiz Suliman Munawar 2, Amna Rahman 3, Zakria Qadir 4,*, Abbas Z. Kouzani 5 and M. A. Parvez Mahmud 5

1 Department of Electronics Engineering, North Ryde Campus, Macquarie University, Sydney, NSW 2109, Australia
2 School of Built Environment, University of New South Wales, Kensington, Sydney, NSW 2052, Australia
3 Department of Computer Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
4 School of Computing Engineering and Mathematics, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
5 School of Engineering, Deakin University, Geelong, VIC 3216, Australia
* Author to whom correspondence should be addressed.
Submission received: 13 May 2021 / Revised: 23 June 2021 / Accepted: 27 June 2021 / Published: 29 June 2021

Abstract
Sound localization is a vast field of research and development with many useful applications, including communication, radar, medical aid and speech enhancement, to name but a few. Many methods have been presented in recent years, and various types of microphone arrays serve to sense the incoming sound. This paper presents an overview of the importance of sound localization in different applications, along with the uses and limitations of ad-hoc microphone arrays relative to other microphone configurations, and approaches to overcome those limitations. Detailed explanations of existing methods for sound localization using microphone arrays in the recent literature are given. The existing methods are studied comparatively, together with the factors that influence the choice of one method over another. This review is intended to form a basis for choosing the method best suited to a given application.

1. Introduction

Sound localization deals with finding the source of a sound with respect to an array of microphones. In practice, sound source localization is performed using two types of cues: binaural and monaural. Binaural cues are determined from differences in the sound signals reaching the two ears [1]; this difference is calculated using either the time or the intensity of the incident sound signal. Monaural cues are measured through the angle of incidence of the sound signal on the ear. The ability to distinguish and identify particular sounds from the surrounding noise is an important aspect of the normal auditory system. People with hearing loss are often unable to interpret speech in the presence of background noise and cannot recognize and distinguish between multiple speakers [2]. Hence, this mechanism is implemented in hearing aids for people suffering from hearing loss in one or both ears. For the last two to three decades, sound source localization using a set of microphone arrays has been a major topic of interest for researchers and has been discussed in a number of noteworthy studies [3,4,5,6]. To this day, this problem receives immense attention from researchers in the fields of medicine, robotics and signal processing. One of the many challenges faced in this domain is acoustic localization in reverberant environments [7]. Apart from that, the number of microphones in the arrangement, as well as their geometry, is a matter of ongoing research, as there is a need to limit the number of microphones in the setting to make the system compact, reduce complexity and minimize resource consumption. Sound source localization using microphones remains a largely theoretical concept that is being researched vigorously [8].
Sound localization has many applications in modern technologies and helps in producing better systems across various fields. One of the most important applications is in hearing aids for people with hearing impairments. Substantial research has been devoted to personal guidance systems that help blind people familiarize themselves with their environment. Such a guidance system includes headphones, an electronic compass, a transmitter and a receiver, and uses the sense of hearing in place of sight to allow the user to move around easily [9]. Sound localization is also used for navigation; sonar uses this technique to find the location of a target. In addition, localization helps in creating better virtual reality (VR) scenarios, which greatly increases their realism [10]. Other uses of sound localization include audio surveillance, teleconferencing, improved speech recognition and speech enhancement [11,12,13,14,15,16].
Sound source localization now has a wide range of applications across a variety of fields, from industrial to domestic and military. In audio communication, this technology has become crucial for the development of smart devices used for voice enhancement. For instance, sound source localization technology is integrated into cameras used in video conferencing [17,18,19]: using source localization, the camera automatically turns toward the speaker [20]. The technology is also used in hearing aids that assist people with hearing disabilities. In such a device, the location of the sound source is determined and then passed to an integrated array technology for enhancing the voice; sound coming from all other directions is attenuated, making the source voice stronger and more distinct [21]. Sound source localization has been successfully implemented in both speech recognition and speech enhancement systems. Smart water mines used in naval warfare employ sound localization technology to automatically identify a target and its location; these data are communicated to the control system, which attacks the identified target [22,23,24,25,26,27].
Sound source localization also has many prospects in the field of robotics [28,29,30]. Apart from having basic senses like sight, hearing and touch, robots have some capacity to reason logically, due to which they are being used in a wide range of intelligent applications and are gaining increased acceptance. Sound source localization using microphones is still being researched vigorously and has not yet fully reached the field of robotics. Locating the source of a sound using just two microphones has been explored in the past [31,32,33]: two microphones have been used as the left and right ears of a robot to locate a sound source. However, two microphones alone are not enough to meet this objective, owing to the inability to achieve the required spatial arrangement. SR-SLOMA is a new sound localization technology consisting of a microphone array together with a system of speakers. This system has been used to recognize verbal patterns [34,35,36,37] and has applications in teleconferencing and interactive classrooms. It uses both sound source localization and a voice recognition mechanism to identify the recording process of input audio and video files, which improves the working of a standard transmitter and receiver, making it smarter and more human.
As discussed earlier, source localization is a growing field with many ongoing advancements, and every development gives rise to new research directions. The research and development in this field has opened the door to numerous facilities for people, whether as hearing aids for the disabled or as audio surveillance tools for armed forces to locate targets; one such example is sonar, which locates the position of a target using sound waves.
In this paper, some state-of-the-art methods used for sound localization are discussed in detail. More precisely, this paper targets the following research questions:
RQ-1. 
Which sound localization methods have been recently presented in literature?
RQ-2. 
Which factors affect the sound localization methods?
RQ-3. 
How can the limitations in the existing sound localization methods be overcome using the current technologies?

2. Materials and Methods

This paper reviews sound source localization technologies. It probes the current limitations of the methods and presents insight into how the process can be improved to enhance the precision of sound source localization. In this section, a comprehensive account of the materials, methods and resources required to conduct this study is presented. The overall process followed for the study is illustrated in Figure 1.
As sound localization is a vast field, it is necessary to define the boundary of this document. Therefore, the data collection step was facilitated by first forming three main categories of data. These categories were created on the basis of the three research questions specified in Section 1 and are listed as follows:
C-1: Sound source localization methods
C-1(a): Sound source localization in 3D space
C-2: Factors influencing sound source localization
C-3: Improvements to sound source localization
The major sources consulted to retrieve recent research articles related to each category were the websites of journals, publishers and repositories such as Elsevier, the Institute of Electrical and Electronics Engineers (IEEE) and ResearchGate. Other sources included MDPI, Google Scholar and arXiv. Table 1 lists the domains searched, the keywords used and the number of articles retrieved from each domain. The search keywords and phrases entered into the search engines of these websites were formulated so as to exhaust the database of each website and retrieve the maximum number of articles relevant to the topic of interest. To facilitate keyword generation, a set of basic keywords was first formed, consisting of phrases such as “sound source localization”, “influencing factors” and “improvements”. Once this set was created, further keywords relevant to those in the list were searched. The VOSviewer software was used to conduct this search using the keywords from the initial set. As a result, the most relevant keywords in the literature related to the three categories were found, including words such as “signal”, “microphone” and “array”. A detailed diagram of the keyword clusters is given in Figure 2. The generated keywords were then used to form phrases for the search engines, which yielded the maximum number of relevant research articles from the search platforms. For category C-1, the keywords included phrases such as “direction of arrival”, “sound localization” and “ad-hoc microphone array”. For C-2, the keywords included “sound localization dependencies” and “factors affecting sound localization”. For C-3, the keywords included phrases such as “improving sound localization” and “enhance sound localization precision”.
After this step, a set of ranked research papers was obtained from the selected search platforms. The next step was to screen and filter these articles. For this purpose, the following assessment criteria were defined:
(1)
Published between 2011 and 2021.
(2)
Written in English language.
(3)
No duplicate papers.
(4)
Articles must be research papers, reviews or book chapters.
The research papers passing the screening criteria were downloaded. The content of each article was then examined to confirm its relevance to the defined categories; for this purpose, the abstract, introduction and methodology sections were carefully analyzed. A finalized list of research articles was thus prepared, containing only the articles that passed the final content assessment. The total number of articles in this list was 28. These papers are shown in Table 2 along with their source journal or conference. Of these articles, 19 belonged to C-1, 3 to C-2 and 6 to C-3.

3. Results

RQ-1. Which Sound Localization Methods Have Been Recently Presented in Literature?

3.1. Methods for Sound Source Localization

3.1.1. Energy-Based Localization

Most energy-based methods are used in wireless acoustic sensor networks (WASNs) because the acoustic power varies slowly. The sound is sensed by microphones, which are represented as nodes. Using the sound energy for localization relies on averaged readings acquired by each microphone over a defined number of signal samples. Energy-based techniques do not suffer from synchronization issues, nor do they need multiple microphones per node [38,39,40,41]; the energy differences between microphones of the same node are minor. The basic idea of energy-based localization is to use the energy ratios between sensors, each of which restricts the target to a hypersphere. Increasing the number of sensors increases the number of hyperspheres, and the target lies at the point where the hyperspheres intersect.
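To make this idea concrete, the minimal sketch below (an illustration, not taken from the reviewed papers) performs a grid search for the point at which the product of measured energy and squared distance is most nearly constant across nodes, which is where the energy-ratio hyperspheres intersect. The free-field inverse-square decay model, equal calibrated gains and all names are assumptions for illustration.

```python
import numpy as np

def energy_grid_localize(mic_pos, energies, grid):
    """Grid-search source location from averaged node energy readings.

    Assumes free-field inverse-square decay (E_i proportional to 1/r_i^2)
    and identical, calibrated node gains, so E_i * r_i^2 is constant at
    the true source position (the hypersphere-intersection condition).

    mic_pos  : (N, 2) node positions in metres
    energies : (N,) averaged acoustic energies measured at the nodes
    grid     : (G, 2) candidate source positions
    """
    best, best_cost = None, np.inf
    for p in grid:
        r2 = np.sum((np.asarray(mic_pos) - p) ** 2, axis=1)  # squared ranges
        prod = np.asarray(energies) * r2                     # ideally constant
        cost = np.var(prod / prod.mean())                    # spread of E*r^2
        if cost < best_cost:
            best, best_cost = p, cost
    return best
```

In practice, WASN implementations refine such coarse grid estimates and account for gain calibration errors and background noise.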

3.1.2. Time of Arrival (TOA)

The time instant at which the source signal is detected by a microphone is called the time of arrival (TOA). Time of flight (TOF) is a technique to determine the distance between a microphone and an object; it is calculated by measuring the time taken by the source signal to reach the microphone after being emitted by the source and reflected by the object [42]. Direct mapping from TOA to source-node distance is not always possible because the TOA and TOF may or may not be equivalent.
For TOA measurements, the source and sensor nodes must cooperate so that the propagation time of the signal can be detected by the nodes. If such cooperation is unavailable, the initial transmission time is unknown, and without it the TOA alone cannot determine the propagation time of the signal [42,43,44]. TOA uses trilateration, forming for each anchor the equation of a circle whose radius equals the distance from the source. The solution of these equations gives the intersection point, which is the location of the source [45,46,47].
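As a sketch of the trilateration step, the circle equations can be linearized by subtracting one of them, leaving a small least-squares problem. The snippet below is a hedged example (the names and the 2D setting are assumptions, and the ranges are taken as already converted from TOA measurements to distances).

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares source position from TOA ranges to known anchors.

    anchors   : (N, 2) known microphone positions, N >= 3
    distances : (N,) measured source-to-anchor distances
    Subtracting the first circle equation from the others removes the
    quadratic term in the unknown source position s.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], d[0]
    # 2*(x_i - x_0) . s = ||x_i||^2 - ||x_0||^2 - d_i^2 + d_0^2
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    source, *_ = np.linalg.lstsq(A, b, rcond=None)
    return source
```

With more than three anchors the least-squares solution averages out small ranging errors instead of relying on an exact circle intersection.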

3.1.3. Time Difference of Arrival (TDOA)

Time difference of arrival (TDOA) works with the time difference between the signals. This can be measured as the time difference between zero-level crossings or between the onset times of the two signals. TDOA can also be calculated under the assumption that the sound source signal is narrowband. Another popular way of estimating the TDOA is to compute the cross-correlation between the signals, although this can be sensitive to noise sources [12]. Algorithms using TDOA or TOA are designed to localize the sound source using nodes whose positions are known. To implement these methods, a reference node must be chosen to nullify noise factors and ease the synchronization requirements; hence, the accuracy of sound source localization greatly depends on the choice of this reference, and the performance of such systems often suffers when the reference is poorly chosen. To overcome this issue, Wang et al. proposed a setup where the nodes with known positions are synchronized while the clock of the sound source runs independently [48]. Through extensive experimentation, the authors demonstrated that, in such a configuration, there is no need to choose and rely on a reference node.
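A widely used variant of the cross-correlation approach mentioned above is the generalized cross-correlation with phase transform (GCC-PHAT), which whitens the spectrum so that the correlation peak depends on phase alone. The sketch below is a minimal NumPy implementation, assuming two single-channel inputs at a common sampling rate; the function name and interface are illustrative.

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """TDOA estimate via generalized cross-correlation with PHAT weighting.

    Returns tau (seconds) such that x(t) ~ y(t - tau),
    i.e., the delay of x relative to y.
    """
    n = len(x) + len(y)                       # zero-pad to avoid circular wrap
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    Gxy = X * np.conj(Y)
    Gxy /= np.abs(Gxy) + 1e-12                # PHAT: keep only the phase
    cc = np.fft.irfft(Gxy, n=n)
    max_shift = n // 2
    if max_tau is not None:                   # restrict to physical delays
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

Passing `max_tau = d / c`, where d is the microphone spacing and c the speed of sound, discards correlation peaks that no physical source could produce.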

3.1.4. Direction of Arrival (DOA)

In this approach, each node estimates the direction of arrival (DOA) of the sources and transmits these estimates to a fusion center. As each node performs the estimation individually, there is no need for synchronization; the approach works with unsynchronized inputs as long as the source moves slowly. It uses triangulation to locate the source, and requires more computational power along with multiple microphones [9].
The basic principle of DOA estimation is that, if the incoming wave meets the far-field narrowband conditions, the angle between the array normal and the direction vector gives the angle of arrival. For far-field wideband signals, the path difference that the same signal traverses to different array elements provides the angle of arrival [13]. The overall structure of DOA estimation is composed of three stages; Figure 3 illustrates the spaces that make up this architecture.
The first is the target space, which comprises the signal source and the environment with its complexities. The unknown parameters of the signal are estimated at this stage.
The second stage is the observation space, which receives the information from the target space. The received information contains environmental characteristics such as noise and interference.
The third and final stage hosts the estimation techniques, which may be array correction or filtering techniques. This stage reconstructs the target-space signal, whose accuracy depends on many factors. The energy distribution of the signal forms the spatial spectrum, and the estimation of this spectrum is essentially DOA estimation [13].
A technique to find the DOA has been proposed in [49], where phase differences among signals are calculated to determine the angle of arrival of the source signal. In this method, a fast Fourier transform (FFT) is first applied to the signal received by each microphone; the frequency and phase values of each signal at the peak points are then measured [49]. For each node, the phase differences at the peak points are then calculated to determine the phase delays.
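A minimal two-microphone sketch of this phase-difference approach is given below. It assumes a single dominant narrowband source in the far field and a spacing small enough to avoid phase wrapping (d < c / 2f); the interface is illustrative rather than the exact method of [49].

```python
import numpy as np

def doa_from_phase(x1, x2, fs, d, c=343.0):
    """Angle of arrival from the phase difference at the dominant frequency.

    x1, x2 : signals at the two microphones
    d      : microphone spacing in metres (d < c / (2 f) avoids ambiguity)
    """
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    k = np.argmax(np.abs(X1[1:])) + 1          # dominant peak, skipping DC
    f = k * fs / len(x1)
    dphi = np.angle(X1[k]) - np.angle(X2[k])
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    # Far-field model: dphi = 2*pi*f*d*sin(theta)/c
    arg = np.clip(dphi * c / (2 * np.pi * f * d), -1.0, 1.0)
    return np.degrees(np.arcsin(arg))
```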
Various factors affect the DOA estimation results; they are briefly discussed here:
(1)
Array elements
For the same array parameters, increasing the number of array elements increases the estimation performance. Here, the parameters refer to the sensor properties, the physical positions of the sensors and the errors in the calculated positions.
(2)
Signal-to-noise ratio (SNR)
A low SNR degrades performance, as the incoming sound is contaminated by noise and interference to a larger extent, which lowers the quality of the DOA estimate.
(3)
Coherence in source signal
Signals that have the same frequency and propagate with a constant phase offset are called coherent signals. Differentiating two such signals is a major problem, which in turn affects the performance of the DOA estimates [50].
(4)
Position of sensors
The location of the sensors is also very important. They should be within range of the sound-producing source so that they can easily detect the sound and then perform the localization task. The sensors or microphones are placed in a geometrical shape; previously, sensors were arranged in the form of an equilateral triangle [51]. By calculating the time delays incurred by the source signal in reaching each microphone, the distance of each microphone from the source, along with the angle of the source, can be estimated.
Many different types of algorithms are used for DOA estimation. Among them, multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) are subspace-based methods that work through the eigendecomposition of the signal correlation matrices obtained from the microphones. The MUSIC algorithm works only when the array geometry is completely known and calibrated, and its computation is complex [14]. ESPRIT, on the other hand, is more robust and does not have to search over all possible DOAs, which reduces its computational complexity [15]. In addition to these briefly described subspace methods, other approaches to finding DOA estimates exist; some of them are described in Table 3.
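To make the subspace idea concrete, the following minimal sketch (an illustration, not the implementation of any reviewed paper) evaluates the MUSIC pseudo-spectrum for a uniform linear array; the ULA steering model and all names are assumptions.

```python
import numpy as np

def music_spectrum(R, d, wavelength, n_sources, angles_deg):
    """MUSIC pseudo-spectrum for a uniform linear array (ULA).

    R          : (M, M) sample covariance matrix of the array snapshots
    d          : inter-element spacing (same units as wavelength)
    wavelength : signal wavelength
    n_sources  : assumed number of sources
    angles_deg : candidate arrival angles in degrees
    """
    M = R.shape[0]
    # eigh returns ascending eigenvalues: the M - n_sources smallest
    # eigenvectors span the noise subspace.
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : M - n_sources]
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        # ULA steering vector for direction theta.
        a = np.exp(-2j * np.pi * d / wavelength * np.arange(M) * np.sin(theta))
        # Sharp peaks occur where a is orthogonal to the noise subspace.
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)
```

Peaks of the returned spectrum over the candidate angles give the DOA estimates; this exhaustive angular search is the computational cost that ESPRIT avoids.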

3.1.5. Beamforming

Beamforming uses a microphone array in the far field, defined as being farther away from the source than the array diameter. Sound waves hitting the array in the far field are planar, so it is straightforward to propagate the incoming sound directly back toward the test object. The signals from the beamforming array are summed after compensating for the propagation-distance delays [19].
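The sketch below illustrates a basic far-field delay-and-sum beamformer in the frequency domain, under a plane-wave model with known microphone positions. It is a hedged example rather than a specific published implementation; the names and the 2D geometry are assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_pos, fs, look_dir, c=343.0):
    """Far-field delay-and-sum beamformer using frequency-domain delays.

    signals  : (M, T) microphone signals
    mic_pos  : (M, 2) microphone positions in metres
    look_dir : unit vector pointing from the array toward the assumed source
    """
    M, T = signals.shape
    # Plane-wave model: mics with a larger projection onto look_dir hear the
    # wavefront earlier, so delay each channel by its lead to align them all.
    leads = np.asarray(mic_pos) @ np.asarray(look_dir) / c        # seconds
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    S = np.fft.rfft(signals, axis=1)
    aligned = S * np.exp(-2j * np.pi * freqs[None, :] * leads[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=T)                # beam output
```

Steering `look_dir` over a set of directions and comparing output powers turns this beamformer into a simple DOA scanner.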

3.1.6. Inter-Microphone Intensity Difference (IID)

This method works with a two-microphone array and measures the difference in energy between the signals at any instant. The obtained time-domain signal helps determine whether the source is to the right, to the left or in front of the microphones. To increase the resolution, a greater number of microphones can be used. The time-domain quantity can also be computed in the frequency domain, giving the inter-microphone level difference (ILD), which uses the difference spectrum of the signals [12]. A filter bank, a logarithmically spaced set of filters in the frequency domain, provides a feature similar to the ILD but is more robust against noise [12].

3.1.7. Steered Response Power (SRP)

SRP is a beamforming technique that computes the output power of a filter-and-sum beamformer steered to a set of candidate source locations defined by a spatial grid [9]. Generalized cross-correlation (GCC) data from multiple microphone pairs are accumulated and used for the computation. The estimated source location corresponds to the highest value in the SRP power map, i.e., the grid of SRP values [52,53,54].
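A common realization is SRP-PHAT, which accumulates PHAT-weighted GCC values at the delays expected for each grid point. The sketch below illustrates this under free-field assumptions; all names and the grid-based interface are illustrative.

```python
import numpy as np
from itertools import combinations

def srp_phat_map(signals, mic_pos, fs, grid, c=343.0):
    """SRP-PHAT power for each candidate source point on a spatial grid.

    signals : (M, T) microphone signals
    mic_pos : (M, dim) microphone positions in metres
    grid    : (G, dim) candidate source locations
    """
    M, T = signals.shape
    n = 2 * T                                     # zero-padded FFT length
    spectra = np.fft.rfft(signals, n=n, axis=1)
    power = np.zeros(len(grid))
    for i, j in combinations(range(M), 2):
        G = spectra[i] * np.conj(spectra[j])
        cc = np.fft.irfft(G / (np.abs(G) + 1e-12), n=n)   # GCC-PHAT
        for g, p in enumerate(grid):
            # Expected TDOA of pair (i, j) for a source at grid point p.
            tau = (np.linalg.norm(p - mic_pos[i])
                   - np.linalg.norm(p - mic_pos[j])) / c
            power[g] += cc[int(round(tau * fs)) % n]       # circular index
    return power
```

The argmax of the returned map over the grid gives the location estimate; finer grids trade computation for spatial resolution, which is why GPU implementations are often used.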
Many different methods are used for sound localization, and the method is selected according to the user's application. A detailed review of several methods has been given, and they are compared in order to build a clear basis for choosing the method best suited to a given application. Table 4 shows a comparative analysis for a better understanding of the different methods.

3.2. Sound Source Localization in 3D Space

Locating a sound source in 3D space is referred to as 3D sound localization. The method involves analyzing both the horizontal and vertical angles of arrival of the sound waves, along with the distance between the sound source and the microphones. This requires the microphones to be arranged in a particular structure relative to the sound source. Usually, the 3D coordinates are determined by applying signal processing techniques [55,56,57]. Many mammals, including human beings, make use of binaural hearing for sound source localization; in this process, the information received from each ear undergoes a comparative analysis as part of a synthesis process. In experimental settings, the binaural hearing functionality is achieved through the use of two microphones [33]. In that work, two microphone nodes, each containing an array of two microphones, were introduced [58,59,60,61].

3.2.1. Technologies

Sound source localization technology is mostly used in the fields of audio and acoustics, such as direction navigation, speech enhancement, surveillance and hearing aids [34]. Current sound localization routines make use of the time difference of arrival of each sound signal. These systems mostly limit the localization to two-dimensional space and are therefore not viable for solving practical sound localization problems [56,57,58].

3.2.2. Sound Localization Features

The sound source is identified through the use of certain features [23]. These cues can be binaural or monaural. Vertical sound localization can be performed using monaural cues, which can be obtained through spectral analysis. Binaural cues are used for horizontal sound source localization: the difference in hearing between the left and right ears is analyzed, taking into account both the time difference between the arrival of the sound wave at the two ears and the difference in intensities [59,60,61].
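As an illustration of these binaural cues, the sketch below estimates the interaural time difference (ITD) from the cross-correlation peak and the interaural level difference (ILD) as an energy ratio in decibels. It is a simplified, hedged example assuming two clean ear (or microphone) signals.

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Estimate ITD (seconds) and ILD (dB) from two ear signals.

    A positive ITD means the left signal lags the right one; a positive
    ILD means the left signal carries more energy.
    """
    n = len(left) + len(right)                 # zero-pad against wrap-around
    cc = np.fft.irfft(np.fft.rfft(left, n) * np.conj(np.fft.rfft(right, n)), n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    itd = (np.argmax(cc) - max_shift) / fs
    ild = 10.0 * np.log10(np.sum(np.square(left)) / np.sum(np.square(right)))
    return itd, ild
```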

3.2.3. Methods

The most common methods used for 3D sound localization are listed below:
(1)
A structure comprising multiple sensors, such as microphones on hearing robots, can be used to mimic the sound localization technique biologically used by mammals [24].
(2)
Acoustic vector sensor (AVS) arrays [25] are used for real-time sound localization.
(3)
Offline methods
(4)
Results can be optimized through the use of classification techniques, neural networks and maximum likelihood methods.
The following sections provide an overview of the different methods that are employed for localization of sound source in 3D space.
(1)
Steered beam-former method
The steered beam-former method combines the signals from multiple microphones using a steered beam-former. The DoA is detected by a robotic sensor network, and the incoming signals are then filtered to reduce the noise. This method is considered useful in speech recognition applications in complex environments, where the sound entropy has to be reduced for successful localization [62].
(2)
Beam-former method
The beam-former method relies on generating pulses toward a projector at multiple time points such that all the pulses hit the projector at the same time, creating a large sound impact. This method is used as a basis for a multiple-input multiple-output (MIMO) model for improving the performance of sound localization systems, such as cellular technologies; such a system is suggested to reduce the bit error rate in sound transmission. The presence of multiple transmission (Tx) and receiving (Rx) channels adds sub-channels that increase channel capacity without increasing the overall bandwidth of the system. The use of multiple channels has proven efficient at providing a focused sound beam without increasing design complexity [63].
(3)
Acoustic vector sensor (AVS) array
The acoustic vector sensor (AVS) array is used to measure the acoustic pressure. An AVS contains three velocity sensors along with a pressure sensor; these sensors capture signals in the form of an XYZO array. The DoA of the sound can then be estimated using these arrays, and the DoA performance of the AVS has been reported to be better than that of other methods in the literature. The key feature of the AVS is that it utilizes all the acoustic information available in a defined space, which makes it a desirable method on platforms where space is limited. Figure 4 shows an AVS array configuration [64].
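One simple way to see why a single XYZO sensor suffices for a bearing estimate is the acoustic intensity principle: the time-averaged product of pressure and particle velocity points along the propagation direction, i.e., away from the source. The sketch below is a hedged illustration of this principle, not the estimator of [64]; sensor calibration and noise handling are ignored, and all names are assumptions.

```python
import numpy as np

def avs_doa(p, vx, vy, vz):
    """Source bearing from one acoustic vector sensor (XYZO channels).

    p          : pressure channel (O)
    vx, vy, vz : particle-velocity channels (X, Y, Z)
    The time-averaged intensity <p * v> points away from the source,
    so its negation gives the bearing toward the source.
    """
    intensity = np.array([np.mean(p * vx), np.mean(p * vy), np.mean(p * vz)])
    u = -intensity / np.linalg.norm(intensity)    # unit vector toward source
    azimuth = np.degrees(np.arctan2(u[1], u[0]))
    elevation = np.degrees(np.arcsin(u[2]))
    return azimuth, elevation
```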
(4)
Multiple microphone array
The underlying principle of the multiple microphone array method is to record the time difference between the arrivals of the sound in order to determine its direction. To accurately determine the spatial distribution of the sound beams, triangulation is applied using the distances between the microphone placements and the ratios of the distances between the microphones (see the sketch after this list).
Mobility is an advantage of the multiple microphone array, as it helps identify the sound source by determining the distance between different microphones. However, sound coming from multiple sources can make it difficult to determine the source of a signal. Moreover, the identification of the DoA becomes more complex in the case of moving objects [65].
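As referenced in the list above, a minimal 2D triangulation sketch is given below: it intersects the bearing lines obtained from two arrays at known positions to recover the source location. The interface and the 2D setting are illustrative assumptions.

```python
import numpy as np

def triangulate_bearings(pos_a, theta_a, pos_b, theta_b):
    """Intersect two 2D bearings (from two arrays) to locate the source.

    pos_a, pos_b     : array positions
    theta_a, theta_b : DOA bearings in radians, measured from the x-axis
    Solves pos_a + t*u_a = pos_b + s*u_b for the crossing point.
    """
    ua = np.array([np.cos(theta_a), np.sin(theta_a)])
    ub = np.array([np.cos(theta_b), np.sin(theta_b)])
    A = np.column_stack((ua, -ub))
    if abs(np.linalg.det(A)) < 1e-9:
        raise ValueError("bearings are (nearly) parallel")
    t, _ = np.linalg.solve(A, np.asarray(pos_b) - np.asarray(pos_a))
    return np.asarray(pos_a) + t * ua
```

Near-parallel bearings make the intersection ill-conditioned, which is why the geometry of the array placement matters for localization accuracy.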
RQ-2. Which Factors Affect the Choice of Sound Localization Methods?

3.3. Factors Affecting the Choice

Choosing one among many different methods comes with challenges. Some methods are cost efficient but lack accuracy, while others provide good results but use bandwidth inefficiently, so a suitable trade-off must be found. Some of the factors affecting the choice of a method are as follows.

3.3.1. Cost Efficiency

In order to achieve high localization accuracy, many methods use advanced techniques that provide high processing speeds and ease the computational burden. This can be ensured by using high-end hardware systems, which are quite expensive. With this in mind, many single-board computing devices have been produced that mitigate this problem, but managing the cost of the whole system and keeping it within budget remains very important.

3.3.2. Measurement Errors

All sound localization methods are subject to errors caused by the surrounding noise and interference. Sound waves suffer from diffraction, echoes, reflection and deflection, which produce measurement errors and incorrect localization. Many errors also arise from the lack of synchronization of the nodes, which is crucial in methods such as TOA and TDOA [9]. It is therefore necessary to choose a method that minimizes these problems, is less susceptible to noise and provides better results.

3.3.3. Power Dissipation

To achieve better system performance, many battery-powered nodes are used in the hardware, which leads to energy dissipation. This loss needs to be properly monitored and minimized as far as possible.

3.3.4. Deployment Issues

Each method has particular hardware requirements, which must be met in order to utilize the method in the best possible way. These requirements differ between methods: in TOA, synchronization of the nodes is of primary importance, whereas energy-based methods need calibrated gains. Some methods also require physical administration and recalibration under seasonal variation, which can be time consuming [9].

3.3.5. System Flexibility

System flexibility is measured by how easily the system copes when an issue arises within the network. As sound localization methods are applied in open environments, they face many physical challenges. Sometimes a part or a node (microphone) may fail, so a backup is needed to ensure that the localization estimates are still measured properly even if a part fails [9].

3.3.6. Scalability

With the wide use of sound localization in many different applications, systems may need to be deployed in very small as well as very large spaces. Adapting the hardware accordingly and scaling it to the requirements is very important. With correct knowledge of the user's application, the different sound localization techniques must be scaled up or down.

4. Discussion

RQ-3. How Can the Limitations in the Existing Sound Localization Methods Be Overcome Using the Current Technologies?
After a complete and detailed study of the different localization methods, along with the factors to keep in mind when choosing one, it is clear that choosing a method is a crucial task and that there will always be a trade-off between the attributes. For the application of designing a stable system using only two microphones, DOA was chosen because it requires no synchronization of the inputs, which is needed by many other methods and is very difficult to achieve precisely [66,67,68]. Not needing synchronization gives the user the freedom to work with various inputs. DOA also consumes less bandwidth than other methods, which is highly beneficial, as bandwidth is a costly resource; it must be preserved in order to obtain a budget-friendly product that can readily be used by the public [9]. DOA estimation belongs to the broader field of array signal processing, which divides into two categories: self-adaptive array signal processing and spatial spectrum estimation. The spatial spectrum gives the distribution of the signal arriving at the receiver from all directions, so obtaining the signal's spatial spectrum yields the DOA [13]. Along with these advantages there is also a drawback, namely the complex computation; this can be overcome by using powerful single-board processors. Many single-board devices for in-node signal processing are being produced to ease the computation and are readily available on the market [9].
Keeping in mind the advancements in and the importance of sound localization, many different methods have been proposed to determine the source location correctly. For this purpose, many different arrangements of microphones have been tested by varying the distances between them, in order to learn how changing one parameter affects the performance of the system and how it can be further improved. Many different types of microphone arrays have been used to date, including circular arrays [39], hexagonal arrays [48], 2D arrays [49], linear arrays [50] and ad-hoc arrays [51]. Each of these arrangements has advantages over the others and is used depending on the application. Single microphone arrays have certain limitations of physical size and processing power; these are overcome by ad-hoc microphone arrays, which can cover a larger area owing to their smaller individual size and thus capture more spatial information [30].
The concept of ad-hoc microphone arrays will be very helpful in conferences, because such arrays can easily connect different devices to form an instant network. This will improve the experience of people attending conferences and provide them with great convenience: they need not worry about any special arrangement or devices, as readily available everyday devices such as tablets and mobile phones can be connected to form an ad-hoc array.
The use of ad-hoc microphones does, however, come with certain challenges. All the microphones in the array have their own clocks, which gives rise to a sampling-rate mismatch that degrades the performance of traditional multichannel sound enhancement algorithms. A mismatch also exists between the test and anechoic training data, which can be reduced by exploiting the spatial distribution of the nodes in the ad-hoc array [31]. In ad-hoc arrays, the microphone nodes are uncalibrated and their relative positions are unknown, providing no information about the geometry of the array and only partial information about the distances between microphones. This problem can be solved using a Euclidean distance matrix completion algorithm [32].
In order to bring novelty to the field of sound source localization and enhance its precision, a new geometrical arrangement of microphones can be proposed to locate a sound source in 3D space with maximum accuracy. This configuration should require a minimum number of microphones, to avoid complexity and excessive use of resources. One possible approach is to estimate the DOA using the phase-difference information among the sound waves received by different microphones [52,53]. Using a small number of microphones makes the system a good choice for embedded systems and wearable devices such as hearing aids and assistive devices for the blind. The sound source localization problem also needs to be tackled in real environments under less-than-ideal conditions, owing to the presence of noise, echoes and other reverberation. Figure 5 shows how various signal processing techniques have been combined and applied to measure both the DOA and the exact 3D location of a sound source [68,69,70].

5. Conclusions

This paper has discussed the most recent sound source localization techniques presented in the literature. Three major research questions have been proposed and elaborated extensively. These are: RQ-1. Which sound localization methods have been recently presented in the literature? RQ-2. Which factors affect the choice of sound localization methods? RQ-3. How can the limitations in the existing sound localization methods be overcome using the current technologies? In light of the results presented in the paper, it can be concluded that the most common and effective methods for sound localization include energy-based localization, TOA, TDOA, DOA, beamforming, IID and SRP. For sound localization specifically in 3D space, the most common technologies are the steered beam-former method, the beam-former method, the AVS array and the multiple microphone array. The most common factors affecting the choice of a sound source localization method include cost efficiency, measurement errors, power dissipation, deployment issues, system flexibility and scalability. Minimizing the number of microphones in a configuration can reduce resource consumption while offering a more cost-effective solution. This can be achieved by using only two microphones, mimicking the human auditory system with its two ears. For such a system, DOA is identified as an effective approach, as it does not require the inputs to be synchronized, which is difficult to achieve precisely. Moreover, the benefits of ad-hoc microphones have been highlighted: such an arrangement does not require a special geometry, and an instant network can be formed for applying the sound localization system in conferences. Finally, a system has been proposed that effectively targets the issues faced by current sound source localization technologies. This system is composed of a minimum number of microphones and uses a DOA method that calculates the phase differences among the sound waves received by the microphones. Such a system will be compact and can be used efficiently in embedded systems and hearing aids.

Author Contributions

Conceptualization, M.U.L., H.S.M., A.R. and Z.Q.; methodology H.S.M., A.R. and Z.Q.; software, H.S.M. and Z.Q.; validation, A.R.; formal analysis, Z.Q.; investigation, H.S.M., Z.Q. and A.R.; resources, A.Z.K.; data curation, M.A.P.M.; writing—original draft preparation, H.S.M., A.R. and Z.Q.; funding acquisition, M.A.P.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pinto, P.C.L.; Sanchez, T.G.; Tomita, S. The impact of gender, age and hearing loss on tinnitus severity. Braz. J. Otorhinolaryngol. 2010, 76, 18–24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Middlebrooks, J.C. Sound localization. Handb. Clin. Neurol. 2015, 129, 99–116. [Google Scholar]
  3. Yalta, N.; Nakadai, K.; Ogata, T. Sound source localization using deep learning models. J. Robot. Mechatron. 2017, 29, 37–48. [Google Scholar] [CrossRef]
  4. Pavlidi, D.; Griffin, A.; Puigt, M.; Mouchtaris, A. Real-time multiple sound source localization and counting using a circular microphone array. IEEE Trans. Audio Speech Lang. Process. 2013, 21, 2193–2206. [Google Scholar] [CrossRef] [Green Version]
  5. Strauss, M.; Mordel, P.; Miguet, V.; Deleforge, A. DREGON: Dataset and methods for UAV-embedded sound source localization. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8. [Google Scholar]
  6. Alameda-Pineda, X.; Horaud, R. A geometric approach to sound source localization from time-delay estimates. IEEE/ACM Trans. Audio. Speech Lang. Process. 2014, 22, 1082–1095. [Google Scholar] [CrossRef] [Green Version]
  7. Castellini, P.; Sassaroli, A. Acoustic source localization in a reverberant environment by average beamforming. Mech. Syst. Signal Process. 2010, 24, 796–808. [Google Scholar] [CrossRef]
  8. Athanasopoulos, G.; Verhelst, W.; Sahli, H. Robust speaker localization for real-world robots. Comput. Speech Lang. 2015, 34, 129–153. [Google Scholar] [CrossRef]
  9. Shaukat, M.; Shaukat, H.; Qadir, Z.; Munawar, H.; Kouzani, A.; Mahmud, M. Cluster Analysis and Model Comparison Using Smart Meter Data. Sensors 2021, 21, 3157. [Google Scholar] [CrossRef] [PubMed]
  10. Cobos, M.; Antonacci, F.; Alexandridis, A.; Mouchtaris, A.; Lee, B. A survey of sound source localization methods in wireless acoustic sensor networks. Wirel. Commun. Mob. Comput. 2017, 2017, 1–24. [Google Scholar] [CrossRef]
  11. Meng, W.; Xiao, W. Energy-Based Acoustic Source Localization Methods: A Survey. Sensors 2017, 17, 376. [Google Scholar] [CrossRef] [Green Version]
  12. Thomas, F.; Ros, L. Revisiting trilateration for robot localization. IEEE Trans. Robot. 2005, 21, 93–101. [Google Scholar] [CrossRef] [Green Version]
  13. Munawar, H.S. Image and video processing for defect detection in key infrastructure. Mach. Vis. Insp. Syst. Image Process. Concepts Methodol. Appl. 2020, 1, 159–177. [Google Scholar]
  14. Rascon, C.; Meza, I. Localization of sound sources in robotics: A review. Robot. Auton. Syst. 2017, 96, 184–210. [Google Scholar] [CrossRef]
  15. Tang, H. DOA Estimation Based on MUSIC Algorithm. Bachelor’s Thesis, Linnaeus University, Växjö, Sweden, 2014. [Google Scholar]
  16. Hassani, A.; Bertrand, A.; Moneen, M. Cooperative integrated noise reduction and node-specific direction-of-arrival estimation in a fully connected wireless acoustic sensor network. Signal Process. 2015, 107, 68–81. [Google Scholar] [CrossRef] [Green Version]
  17. Roy, R.; Kailath, T. ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 984–995. [Google Scholar] [CrossRef] [Green Version]
  18. Pradhan, D.; Bera, R. Direction of arrival estimation via ESPRIT algorithm for smart antenna system. Int. J. Comput. Appl. 2015, 118, 5–7. [Google Scholar] [CrossRef]
  19. Munawar, H.S.; Maqsood, A. Isotropic surround suppression based linear target detection using Hough transform. Int. J. Adv. Appl. Sci. 2017. [Google Scholar] [CrossRef]
  20. Khan, S.I.; Qadir, Z.; Munawar, H.S.; Nayak, S.R.; Budati, A.K.; Verma, K.D.; Prakash, D. UAVs path planning architecture for effective medical emergency response in future networks. Phys. Commun. 2021, 47, 101337. [Google Scholar] [CrossRef]
  21. Aich, A.; Palanisamy, P. On-grid DOA estimation method using orthogonal matching pursuit. In Proceedings of the 2017 International Conference on Signal Processing and Communication (ICSPC), Coimbatore, India, 28–29 July 2017; pp. 483–487. [Google Scholar]
  22. Griffin, A.; Alexandridis, A.; Pavlidi, D.; Mastorakis, Y.; Mouchtaris, A. Localizing multiple audio sources in a wireless acoustic sensor network. Signal Process. 2015, 107, 54–67. [Google Scholar] [CrossRef]
  23. Lanslots, J.; Deblauwe, F.; Janssens, K. Selecting sound source localization techniques for industrial applications. Sounds Vib. 2010, 44, 6–10. [Google Scholar]
  24. Knapp, C.; Carter, G. The generalized correlation method for estimation of time delay. IEEE Trans. Acoust. Speech Signal Process. 1976, 24, 320–327. [Google Scholar] [CrossRef] [Green Version]
  25. DiBiase, J.H. A high-Accuracy, Low-Latency Technique for Talker Localization in Reverberant Environments Using Micro-phone Arrays. Ph.D. Thesis, Brown University, Providence, RI, USA, 2000. [Google Scholar]
  26. Heusdens, R.; Gaubitch, N. Time-delay estimation for TOA-based localization of multiple sensors. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2014; pp. 609–613. [Google Scholar]
  27. Nokas, G.; Dermatas, E. Continuous speech recognition in noise using a spectrum-entropy beam-former. Int. J. Robot. Autom. 2007, 22, 103–111. [Google Scholar] [CrossRef]
  28. Kundu, T.; Misra, I.S.; Sanyal, S.K. Developing a 3D beam former model with varied MIMO channels. In Proceedings of the 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP), Gangtok, India, 25–28 February 2019; pp. 1–7. [Google Scholar]
  29. Wang, W.; Zhang, Q.; Tan, W.; Shi, W.; Pang, F. Direction finding via acoustic vector sensor array with non-orthogonal factors. Digit. Signal Process. 2021, 108, 102910. [Google Scholar] [CrossRef]
  30. Hu, J.-S.; Chan, C.-Y.; Wang, C.-K.; Lee, M.-T.; Kuo, C.-Y. Simultaneous localization of a mobile robot and multiple sound sources using a microphone array. Adv. Robot. 2011, 25, 135–152. [Google Scholar] [CrossRef]
  31. Goldstein, E.B. Sensation and Perception, 8th ed.; Cengage Learning: Boston, MA, USA, 2009; pp. 293–297. ISBN 978-0-495-60149-4. [Google Scholar]
  32. Munawar, H.S.; Zhang, J.; Li, H.; Mo, D.; Chang, L. Mining multispectral aerial images for automatic detection of strategic bridge locations for disaster relief missions. In Trends and Applications in Knowledge Discovery and Data Mining; Pacific-Asia Conference on Knowledge Discovery and Data Mining; Springer: Cham, Switzerland, 2019. [Google Scholar]
  33. Nakashima, H.; Mukai, T. 3D Sound Source Localization System Based on Learning of Binaural Hearing. In Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 12 October 2005; pp. 3534–3539. [Google Scholar]
  34. Liang, Y.; Cui, Z.; Zhao, S.; Rupnow, K.; Zhang, Y.; Jones, D.L.; Chen, D. Real-time implementation and performance optimization of 3D sound localization on GPUs. In Proceedings of the 2012 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 12–16 March 2012; pp. 832–835. [Google Scholar]
  35. Ephraim, Y.; Malah, D. Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator. IEEE Trans. Acoust. Speech Signal Process. 1984, 32, 1109–1121. [Google Scholar] [CrossRef] [Green Version]
  36. Munawar, H.; Khan, S.; Anum, N.; Qadir, Z.; Kouzani, A.; Mahmud, M.P. Post-flood risk management and resilience building practices: A case study. Appl. Sci. 2021, 11, 4823. [Google Scholar] [CrossRef]
  37. Valin, J.-M.; Michaud, F.; Rouat, J. Robust 3D localization and tracking of sound sources using beamforming and particle filtering. In Proceedings of the 2006 IEEE International Conference on Acoustics Speed and Signal Processing, Toulouse, France, 14–19 May 2006. [Google Scholar]
  38. Salas, N.M.A.; Martinez, R.O.R.; de Haro, A.L.; Sierra, P.M. Calibration proposal for new antenna array architectures and technologies for space communications. IEEE Antennas Wirel. Propag. Lett. 2012, 11, 1129–1132. [Google Scholar] [CrossRef] [Green Version]
  39. Ishi, C.T.; Even, J.; Hagita, N. Using multiple microphone arrays and reflections for 3D localization of sound sources. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), Tokyo, Japan, 3–7 November 2013; pp. 3937–3942. [Google Scholar]
  40. Bertrand, A.; Doclo, S.; Gannot, S.; Ono, N.; Van Waterschoot, T. Special issue on wireless acoustic sensor networks and ad hoc microphone arrays. Signal Process. 2015, 107, 1–3. [Google Scholar] [CrossRef]
  41. Liaquat, M.U.; Munawar, H.S.; Rahman, A.; Qadir, Z.; Kouzani, A.Z.; Mahmud, M.A. Sound Localization for Ad-Hoc Microphone Arrays. Energies 2021, 14, 3446. [Google Scholar] [CrossRef]
  42. Gergen, S.; Nagathil, A.; Martin, R. Classification of reverberant audio signals using clustered ad hoc distributed microphones. Signal Process. 2015, 107, 21–32. [Google Scholar] [CrossRef]
  43. Taghizadeh, M.J.; Parhizkar, R.; Garner, P.N.; Bourlard, H.; Asaei, A. Ad hoc microphone array calibration: Euclidean distance matrix completion algorithm and theoretical guarantees. In Proceedings of the 18th International Conference on Digital Signal Processing (DSP), Fira, Santorini, Greece, 1–3 July 2013. [Google Scholar]
  44. Munawar, H.S. Flood disaster management: Risks, technologies, and future directions. Mach. Vis. Insp. Syst. Image Process. Concepts Methodol. Appl. 2020, 1, 115–146. [Google Scholar]
  45. Pang, C.; Liu, H.; Zhang, J.; Li, X. Binaural sound localization based on reverberation weighting and generalized parametric mapping. IEEE/ACM Trans. Audio Speech Lang. Process. 2017, 25, 1618–1632. [Google Scholar] [CrossRef] [Green Version]
  46. Munawar, H.S. An overview of reconfigurable antennas for wireless body area networks and possible future prospects. Int. J. Wirel. Microw. Technol. 2020, 10, 1–8. [Google Scholar] [CrossRef] [Green Version]
  47. Keyrouz, F.; Diepold, K.; Keyrouz, S. High performance 3D sound localization for surveillance applications. In Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, London, UK, 5–7 September 2007; pp. 563–566. [Google Scholar]
  48. Li, M.; Lu, Y.; He, B. Array Signal Processing for Maximum Likelihood Direction-of-Arrival Estimation. J. Electr. Electron. Syst. 2013, 3, 117. [Google Scholar]
  49. Munawar, H.S.; Hammad, A.; Ullah, F.; Ali, T.H. After the flood: A novel application of image processing and machine learning for post-flood disaster management. In Proceedings of the 2nd International Conference on Sustainable Development in Civil Engineering (ICSDC 2019), Jamshoro, Pakistan, 5–7 December 2019; pp. 5–7. [Google Scholar]
  50. Loomis, J.M.; Golledge, R.G.; Klatzky, R.L.; Speigle, J.M.; Tietz, T. Personal Guidance System for the Visually Impaired. In Proceedings of the First Annual ACM Conference on Assistive Technologies, Marina Del Rey, CA, USA, 31 October 1994. [Google Scholar]
  51. Aston, J. Sound Localization and New Applications of its Research. Applied Perception Projects and Service-Learning Project. 2003. Available online: https://www.laurenscharff.com/courseinfo/SL03/sound_loc.htm (accessed on 4 April 2003).
  52. Anushirvani, R. Sound Source Localization with Microphone Arrays; University of Illinois Urbana-Champaign: Champaign, IL, USA, 2014. [Google Scholar]
  53. Gala, D.R.; Misra, V.M. SNR improvement with speech enhancement techniques. In Proceedings of the International Conference & Workshop on Emerging Trends in Technology—ICWET ’11, Association for Computing Machinery (ACM), Mumbai, India, 25–26 February 2011; pp. 163–166. [Google Scholar]
  54. Traa, J.; Smaragdis, P. Multiple speaker tracking with the Factorial von Mises-Fisher Filter. In Proceedings of the 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Reims, France, 21–24 September 2014. [Google Scholar]
  55. Busso, C.; Hernanz, S.; Chu, C.W.; Kwon, S.I.; Lee, S.; Georgiou, P.G.; Cohen, I.; Narayanan, S. Smart room: Participant and speaker localization and identification. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Philadelphia, PA, USA, 23 March 2005. [Google Scholar]
  56. Munawar, H.S.; Qayyum, S.; Ullah, F.; Sepasgozar, S. Big data and its applications in smart real estate and the disaster management life cycle: A systematic analysis. Big Data Cogn. Comput. 2020, 4, 4. [Google Scholar] [CrossRef] [Green Version]
  57. Risoud, M.; Hanson, J.N.; Gauvrit, F.; Renard, C.; Lemesre, P.E.; Bonne, N.X.; Vincent, C. Sound source localization. Eur. Ann. Otorhinolaryngol. Head Neck Dis. 2018, 135, 259–264. [Google Scholar] [CrossRef]
  58. Tamai, Y.; Sasaki, Y.; Kagami, S.; Mizoguchi, H. Three ring microphone array for 3D sound localization and separation for mobile robot audition. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 4172–4177. [Google Scholar]
  59. Munawar, H.S. Reconfigurable origami antennas: A review of the existing technology and its future prospects. Int. J. Wirel. Microw. Technol. 2020, 10, 34–38. [Google Scholar] [CrossRef]
  60. Martins, W.; Nunes, L.; Haddad, D.; Biscainho, L.; Lee, B.; Lima, M.; Costa, M.; De Campos, M.L.R.; Ramos, R.V.; Zão, L.; et al. Time-of-flight selection for improved acoustic sensor localization using multiple loudspeakers. In Proceedings of the 23rd Brazilian Telecommunication Symposium, Fortaleza, Brazil, 9–14 September 2013; pp. 1–5. [Google Scholar]
  61. Fan, J.; Luo, Q.; Ma, D. Localization estimation of sound source by microphones array. Procedia Eng. 2010, 7, 312–317. [Google Scholar] [CrossRef] [Green Version]
  62. Wang, Y.; Leus, G. Reference-free time-based localization for an asynchronous target. EURASIP J. Adv. Signal Process. 2012, 2012, 19. [Google Scholar] [CrossRef] [Green Version]
  63. Munawar, H.S.; Khalid, U.; Jilani, R.; Maqsood, A. Version Management by Time Based Approach in Modern Era. Int. J. Educ. Manag. Eng. 2017, 7, 13–20. [Google Scholar] [CrossRef] [Green Version]
  64. Qadir, Z.; Ullah, F.; Munawar, H.S.; Al-Turjman, F. Addressing disasters in smart cities through UAVs path planning and 5G communications: A systematic review. Comput. Commun. 2021, 168, 114–135. [Google Scholar]
  Xiong, B.; Li, G.-L.; Lu, C.-H. DOA estimation based on phase-difference. In Proceedings of the 2006 8th International Conference on Signal Processing, Guilin, China, 16–20 November 2006; Volume 1. [Google Scholar]
  65. Chetupalli, S.R.; Ram, A.; Thippur, V.S. Robust offline trained neural network for TDOA based sound source localization. In Proceedings of the 2018 Twenty Fourth National Conference on Communications (NCC), Hyderabad, India, 25–28 February 2018; pp. 1–5. [Google Scholar]
  66. Kagami, S.; Mizoguchi, H.; Tamai, Y.; Kanade, T. Microphone array for 2D sound localization and capture. In Proceedings of the IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA ’04. 2004, New Orleans, LA, USA, 26 April–1 May 2004; Volume 1, pp. 703–708. [Google Scholar]
  67. Cai, W.; Wang, S.; Wu, Z. Accelerated steered response power method for sound source localization using orthogonal linear array. Appl. Acoust. 2010, 71, 134–139. [Google Scholar] [CrossRef]
  68. Gaubitch, N.D.; Kleijn, W.B.; Heusdens, R. Auto-localization in ad-hoc microphone arrays. In Proceedings of the 2013 IEEE International Conference on Acoustics, Vancouver, BC, Canada, 26–31 May 2013; pp. 106–110. [Google Scholar]
  69. Kuntzman, M.L.; Lee, J.G.; Hewa-Kasakarage, N.N.; Kim, D.; Hall, N.A. Micromachined piezoelectric microphones with in-plane directivity. Appl. Phys. Lett. 2013, 102, 054109. [Google Scholar] [CrossRef]
  70. Kim, D.; Hewa-Kasakarage, N.N.; Kuntzman, M.L.; Kirk, K.D.; Yoon, S.H.; Hall, N.A. Piezoelectric micromachined microphones with out-of-plane directivity. Appl. Phys. Lett. 2013, 103, 013502. [Google Scholar] [CrossRef]
Figure 1. Methodology.
Figure 2. Keywords generated using VOS Viewer.
Figure 3. Structure of DOA estimation.
Figure 4. XYZO-AVS array.
Figure 5. Sound source localization using DOA and 3D modeling.
Table 1. A review of existing literature.

| Source | Keywords | No. of Papers |
|---|---|---|
| Elsevier | Direction of arrival, ad-hoc microphone arrays, wireless acoustic sensor network, audio signal classification, location estimation | 16 |
| IEEE | Source localization, direction of arrival, trilateration, time delay estimation, position calibration | 16 |
| ResearchGate | Ad-hoc microphone arrays, ESPRIT algorithm, speech enhancement, acoustic source localization, position calibration | 19 |
| Miscellaneous | Maximum likelihood, signal processing, MUSIC algorithm, beamforming, wireless sensor network | 17 |
Table 3. Methods for DOA estimation.

| Method | Advantages | Disadvantages |
|---|---|---|
| Conventional beamforming | Produces the maximum output power needed for estimation in a given time [13]. | Resolution is limited by the beam width and side-lobe height [13]. |
| ESPRIT algorithm | Computational complexity and storage requirements are lower than for MUSIC [16]. | Noise affects the precision of the estimated arrival angle; multipath fading also occurs [16]; prone to errors. |
| MUSIC algorithm | Measures multiple signals simultaneously; high precision; real-time processing is achievable with high-speed processing technology. | Performance degrades for small differences in incident angle at low SNR while the source moves; increasing the array element spacing produces false peaks in the spatial spectrum [13]. |
| Non-linear least squares | Gives superior results in the presence of low SNR, coherent sources and short data samples compared with its counterpart methods [35]. | Good initial estimates are required; high time complexity [17]. |
| Grid-based method | Does not require initial points; low computational burden. | Accuracy is limited by the density of the grid points [18]. |
Table 4. Comparative study of sound localization methods.

| Method | Synchronization | Requirements | Advantages | Drawbacks |
|---|---|---|---|---|
| Energy-based methods | Not required between nodes. | Simpler capturing and transmission devices. | Power efficient [10]; less susceptible to perturbation; robust; low bandwidth. | Gain calibration is required at the nodes for high energy ratios [10]. |
| Beamforming | — | Simultaneous measurement of data is required. | Good spatial resolution of results [19]; fast analysis speed. | Works with frequencies above 1000 Hz; a trade-off exists between range and accuracy. |
| TDOA | Nodes must be synchronized. | Linear cost functions are used to overcome the non-linearity of the measurements. | Moderate computational cost [20]; desirable bandwidth; low transmission power. | Prone to errors from noise and interference. |
| Steered response power | — | SRP power maps are required. | Robust in noisy environments [21]. | Graphics processing units (GPUs) are needed for implementation. |
| TOA | Precise synchronization required. | Precise timing hardware is a requirement. | Using reasonable assumptions, higher accuracy along with reduced execution time can be achieved [22]. | Unknown internal delays that must be handled by data fitting. |
| DOA | Works with unsynchronized inputs for slowly moving sources. | Data association is needed to handle false alarms [9]. | Low bandwidth usage. | Complex computation. |
| Inter-microphone intensity difference | Correlation exists in the frequency domain. | Incorporation of learning-based mapping. | Robust against interference [12]. | Limited to two-microphone arrays. |
Table 2. Retrieved research articles.

| Category | Title | Journal/Conference | Ref. |
|---|---|---|---|
| C-1 | "Multiple speaker tracking with the Factorial von Mises-Fisher filter" | IEEE International Workshop on Machine Learning for Signal Processing | [5] |
| | "Using multiple microphone arrays and reflections for 3D localization of sound sources" | IEEE/RSJ International Conference on Intelligent Robots and Systems | [29] |
| | "Special issue on wireless acoustic sensor networks and ad hoc microphone arrays" | Signal Processing | [30] |
| | "Classification of reverberant audio signals using clustered ad hoc distributed microphones" | Signal Processing | [31] |
| | "Ad hoc microphone array calibration: Euclidean distance matrix completion algorithm and theoretical guarantees" | International Conference on Digital Signal Processing | [32] |
| | "Binaural sound localization based on reverberation weighting and generalized parametric mapping" | IEEE/ACM Transactions on Audio, Speech, and Language Processing | [33] |
| | "Sound source localization" | European Annals of Otorhinolaryngology, Head and Neck Diseases | [7] |
| | "A survey of sound source localization methods in wireless acoustic sensor networks" | Wireless Communications and Mobile Computing | [9] |
| | "Energy-based acoustic source localization methods: A survey" | Sensors | [10] |
| | "Localization of sound sources in robotics: A review" | Robotics and Autonomous Systems | [12] |
| | "DOA estimation based on MUSIC algorithm" | Digitala Vetenskapliga Arkivet (DiVA), Småland | [13] |
| | "Direction of arrival estimation via ESPRIT algorithm for smart antenna system" | International Journal of Computer Applications | [16] |
| | "On-grid DOA estimation method using orthogonal matching pursuit" | International Conference on Signal Processing and Communication (ICSPC) | [17] |
| | "Localizing multiple audio sources in a wireless acoustic sensor network" | Signal Processing | [18] |
| | "Robust 3D localization and tracking of sound sources using beamforming and particle filtering" | Acoustics, Speech and Signal Processing | [27] |
| | "3D sound source localization system based on learning of binaural hearing" | IEEE International Conference on Systems, Man and Cybernetics | [24] |
| | "Revisiting trilateration for robot localization" | IEEE Transactions on Robotics | [11] |
| | "Three ring microphone array for 3-D sound localization and separation for mobile robot audition" | IEEE/RSJ International Conference on Intelligent Robots and Systems | [8] |
| | "Sound localization and new applications of its research" | Applied Perception Projects and Service-Learning Project | [2] |
| C-2 | "Selecting sound source localization techniques for industrial applications" | Sounds & Vibrations | [19] |
| | "Time-delay estimation for TOA-based localization of multiple sensors" | IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | [22] |
| | "Sensation and Perception (Eighth Edition)" | Cengage Learning | [23] |
| C-3 | "Real-time implementation and performance optimization of 3D sound localization on GPUs" | Design, Automation and Test in Europe Conference and Exhibition | [28] |
| | "High performance 3D sound localization for surveillance applications" | IEEE Conference on Advanced Video and Signal Based Surveillance | [34] |
| | "SNR improvement with speech enhancement techniques" | Proceedings of the ICWET | [4] |
| | "Cooperative integrated noise reduction and node-specific direction-of-arrival estimation in a fully connected wireless acoustic sensor network" | Signal Processing | [14] |
| | "Array signal processing for maximum likelihood direction-of-arrival estimation" | Journal of Electrical & Electronic Systems | [35] |
| | "Smart room: Participant and speaker localization and identification" | IEEE International Conference on Acoustics, Speech, and Signal Processing | [6] |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
