
Wideband acoustic source localization using multiple spherical arrays: an angular-spectrum smoothing approach

2015-04-24

WANG Fang-zhou(王方洲), PAN Xi(潘曦)

(School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China)



A novel algorithm using multiple spherical arrays based on spherical harmonic analysis is proposed to localize wideband acoustic sources. In the proposed algorithm, the received microphone signals first undergo the spherical Fourier transform. Then, the multiple signal classification (MUSIC) algorithm is applied to the spherical harmonic components to obtain the angular-spectrum. Finally, an angular-spectrum smoothing technique is proposed to localize the wideband sources accurately. In contrast to the traditional single spherical array, the multiple spherical arrays used in this paper consist of several spheres randomly distributed in a given plane, with the microphones placed uniformly on each sphere as in a conventional single spherical array. A simulation comparison of wideband source localization between a single spherical array and multiple spherical arrays validates the proposed method.

multiple spherical array; spherical harmonic analysis; wideband sources; source localization

Algorithms based upon the concept of the signal subspace have been widely used over the past few decades for the estimation of the direction of arrival (DOA) of wideband signals[1].

One of the simplest approaches exploiting the concept of the signal subspace, proposed in the literature to deal with the wideband problem, is the incoherent signal-subspace method (ISSM)[2]. Its key idea is to decompose the received wideband signals into nonoverlapping narrowband portions and, at each frequency, apply narrowband techniques to estimate the DOA of the impinging waves. The partial results are then averaged according to various methods to yield the final estimate[3]. The method, though simple, cannot resolve coherent signals, which are extremely likely to appear due to multipath propagation. To solve this problem, Wang and Kaveh[4] proposed the coherent signal-subspace method (CSSM), which transforms the cross-spectrum matrices at many frequency portions into one general cross-spectrum matrix at a single focusing frequency by using a focusing matrix. The design of the focusing matrix requires knowledge of the exact DOAs, which are the final objective of the whole estimation procedure. An improved design procedure for the focusing matrices was proposed in Ref.[5].

All those methods rely on the design of focusing matrices, which requires initial DOA values, and the estimation performance of CSSM is very sensitive to those initial values. Spherical microphone arrays, on the other hand, offer an ideal tool for capturing and analyzing three-dimensional wavefields[6], and numerous studies on the localization of wideband sources using spherical microphone arrays have been carried out[7]. By decomposing the wavefield into spherical harmonics, the frequency-dependent components in the spherical harmonic domain are decoupled from the angular-dependent components, which is extremely useful for frequency smoothing in the wideband case[8].

In this paper, a novel algorithm using spherical arrays based on spherical harmonic analysis is discussed; the core of the algorithm is the newly proposed angular-spectrum smoothing technique (ASST). Whereas all former studies on the localization of wideband sources using spherical harmonic analysis are based on a single spherical microphone array, in our application the algorithm is developed for multiple spherical arrays consisting of several randomly distributed spheres, aiming at better estimation stability and performance. The simulation results verify our expectations.

1 Wavefield decomposition

Fig.1 depicts the geometric model of a planar wave impinging on a spherical aperture of radius r from a far-field source S (Ωs = (θs, φs)). The source generates a unit-magnitude plane wave with wave number vector k at the observation point P (Ωp = (θp, φp)) on the surface of the sphere. The incident field at the observation point P on the sphere surface can be expressed as[9-10]

p(kr, Ωp) = e^{ik·r} = Σ_{n=0}^{∞} Σ_{m=-n}^{n} 4π i^n j_n(kr) [Y_n^m(Ωs)]* Y_n^m(Ωp)

(1)

where i² = -1, r = (r cos φp sin θp, r sin φp sin θp, r cos θp)^T is the position of a microphone on the sphere, j_n(kr) is the nth order spherical Bessel function, and k = ‖k‖ = 2πf/c is the wave number, which is related to the frequency of the plane wave; c is the sound speed. The asterisk * denotes complex conjugation, and Y_n^m is defined by

Y_n^m(θp, φp) = √[((2n+1)/(4π)) · ((n-|m|)!/(n+|m|)!)] P_n^{|m|}(cos θp) e^{imφp}

where P_n^{|m|}(cos θp) is the associated Legendre function. Y_n^m is the spherical harmonic of order n and degree m. In practical applications, the order of the spherical harmonics cannot be infinite, so it is truncated at N, which is mainly determined by the number of microphones on the sphere.
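The truncated expansion is easy to check numerically. The following Python sketch (the function name, grid sizes, and the choice of n = 2, m = 1 are illustrative choices, not the paper's) builds Y_n^m from the associated Legendre function and verifies its unit norm over the sphere with the same sin θ·Δθ·Δφ style of weights used later in Eq.(5).

```python
import numpy as np
from scipy.special import lpmv
from math import factorial

def sph_harm_nm(n, m, theta, phi):
    """Y_n^m of order n and degree m: normalized associated Legendre
    function P_n^{|m|}(cos theta) times exp(i*m*phi)."""
    norm = np.sqrt((2*n + 1)/(4*np.pi)*factorial(n - abs(m))/factorial(n + abs(m)))
    return norm*lpmv(abs(m), n, np.cos(theta))*np.exp(1j*m*phi)

# Orthonormality check: integrate |Y_2^1|^2 over the sphere with a
# midpoint rule in theta and an (exact) trapezoid rule in phi.
M, H = 200, 200
theta = (np.arange(M) + 0.5)*np.pi/M
phi = np.arange(H)*2*np.pi/H
T, P = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(T)*(np.pi/M)*(2*np.pi/H)      # alpha_t-style area weights
val = float(np.sum(w*np.abs(sph_harm_nm(2, 1, T, P))**2))
print(val)   # very close to 1.0
```

The quadrature value comes out very close to 1, confirming the orthonormal scaling of the definition above.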

Fig.1 Planar wave impinging on a spherical aperture

Spherical harmonics provide an orthonormal decomposition of the sound pressure field. By applying the spherical Fourier transform[11] to Eq.(1), the pressure field on the spherical aperture can be integrated to obtain the coefficient of the pressure on the aperture due to the incoming plane wave[12]. The coefficient can be calculated as

G_nm(kr, Ωs) = 4π i^n j_n(kr) [Y_n^m(Ωs)]*

(2)

It is independent of the position of the observation point and depends on the frequency and the direction of the source. As the information of the source cannot be obtained initially, the integration is applied to calculate the sound field in terms of the spherical harmonics expansion:

G_nm(kr, Ωs) = ∫_{Ωp∈S²} p(kr, Ωp) [Y_n^m(Ωp)]* dΩp

(3)

In practical applications, the spherical integration cannot be evaluated analytically, so G_nm(kr, Ωs) is measured by the spherical microphone array. Considering a sphere with M×H uniformly distributed microphones (Ωpt = (θpt, φpt), t = 1, 2, …, M×H), Eq.(3) can be rewritten as

G_nm(kr, Ωs) ≈ Σ_{t=1}^{M×H} αt p(kr, Ωpt) [Y_n^m(Ωpt)]*

(4)

where αt is the coefficient that ensures the accuracy of the approximation of the integral by the summation; αt can be expressed as

αt = sin θpt Δθpt Δφpt

(5)

where Δθpt = π/M, Δφpt = 2π/H.
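Eqs.(2)-(5) can be exercised together in a short numerical sketch. The closed form 4π i^n j_n(kr)[Y_n^m(Ωs)]* used below for comparison is our reading of Eq.(2), and all array parameters (M = 50, H = 100, kr = 1, the source direction) are illustrative only: the discrete sum of Eq.(4) with the weights of Eq.(5) should reproduce it up to quadrature error.

```python
import numpy as np
from scipy.special import lpmv, spherical_jn
from math import factorial

def Y(n, m, theta, phi):
    norm = np.sqrt((2*n + 1)/(4*np.pi)*factorial(n - abs(m))/factorial(n + abs(m)))
    return norm*lpmv(abs(m), n, np.cos(theta))*np.exp(1j*m*phi)

# Equal-angle grid of M x H sampling points with weights alpha_t (Eq.(5))
M, H, r, k = 50, 100, 0.1, 10.0                  # kr = 1
theta = (np.arange(M) + 0.5)*np.pi/M
phi = np.arange(H)*2*np.pi/H
T, P = np.meshgrid(theta, phi, indexing="ij")
alpha = np.sin(T)*(np.pi/M)*(2*np.pi/H)

# unit-magnitude plane wave with wave vector along the source direction
ts, ps = 2.0, 1.0
kv = k*np.array([np.cos(ps)*np.sin(ts), np.sin(ps)*np.sin(ts), np.cos(ts)])
pos = r*np.stack([np.cos(P)*np.sin(T), np.sin(P)*np.sin(T), np.cos(T)])
p = np.exp(1j*np.tensordot(kv, pos, axes=1))

# Eq.(4): discrete spherical Fourier coefficient vs. the closed form
n_, m_ = 2, 1
G_disc = np.sum(alpha*p*np.conj(Y(n_, m_, T, P)))
G_closed = 4*np.pi*(1j**n_)*spherical_jn(n_, k*r)*np.conj(Y(n_, m_, ts, ps))
print(abs(G_disc - G_closed))   # small (quadrature error only)
```

With this grid density the two values agree to well within one percent of the coefficient magnitude.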

2 Wideband source localization algorithm

In this paper, a multiple spherical array structure is proposed to estimate the directions of multiple wideband sources. Assume that there are L irregularly placed spheres of radius r with the same uniform distribution of microphones on each of them, while their centers are randomly located in a plane as shown in Fig.2. Suppose there are T = M×H microphones on each sphere.

Fig.2 Geometry of three spherical arrays

Considering Q sources with directions Ωsq = (θsq, φsq) (q = 1, 2, …, Q), in the case of the multiple spherical array the spherical harmonic coefficient of the pressure owing to the multiple sources can be obtained for the lth sphere based on Eqs.(4)(5):

G_nm^(l)(kr) ≈ Σ_{q=1}^{Q} Σ_{t=1}^{T} αt s_q(k) e^{ik y_sq·(r_ol + r_t)} [Y_n^m(Ωpt)]*

(6)

where y_sq = (cos φsq sin θsq, sin φsq sin θsq, cos θsq)^T is the direction vector of the qth source, r_ol is the vector from the origin of the coordinate system to the center of the lth sphere, and r_t is the vector from the center of the sphere to the tth microphone on the sphere.

According to Eq.(2), in the presence of additive noise the model commonly used in array processing is

p_nm(k) = G_nm(kr, Ωs) s(k) + n_nm(k)

(7)

where s(k) is the amplitude of the incoming plane wave and n_nm(k) is the noise, which is assumed to be uncorrelated with the signal. Notice that the steering term on the right side of Eq.(7), G_nm(kr, Ωs), contains frequency and angular information simultaneously; therefore, to decouple the frequency-dependent components from the angular-dependent components, Eq.(7) can simply be divided by j_n(kr) to get

a_nm(k) = p_nm(k)/j_n(kr) = 4π i^n [Y_n^m(Ωs)]* s(k) + n_nm(k)/j_n(kr)

(8)

Combining Eq.(6) and Eq.(8), the steering matrix W can be obtained as

W = [B×C(1), B×C(2), …, B×C(l), …, B×C(L)]

(9)

where B is an (N+1)²×T matrix, with rows indexed by the pair (n, m), whose entries are

[B]_{(n,m),t} = αt [Y_n^m(Ωpt)]*

and C(l) is a T×Q matrix defined by

[C(l)]_{t,q} = e^{ik y_sq·(r_ol + r_t)}

So the steering matrix W is an L(N+1)²×Q matrix. Eq.(8) can be rewritten in matrix form as

a_nm(k) = W×S(k) + n(k)

(10)

where the vector of signal waveforms is defined as

S(k) = [s1(k), s2(k), …, sQ(k)]^T

(11)
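The block structure of W can be sketched as follows. This assumes one particular reading of Eqs.(6)(9): B holds the quadrature-weighted conjugate spherical harmonics of Eq.(4) at the microphone directions, and C(l) holds the plane-wave phase terms e^{ik y_sq·(r_ol+r_t)} for the lth sphere; all sizes and positions are toy values chosen only to check the dimensions.

```python
import numpy as np
from scipy.special import lpmv
from math import factorial

def Y(n, m, theta, phi):
    norm = np.sqrt((2*n + 1)/(4*np.pi)*factorial(n - abs(m))/factorial(n + abs(m)))
    return norm*lpmv(abs(m), n, np.cos(theta))*np.exp(1j*m*phi)

rng = np.random.default_rng(0)
N, Mlat, Hlon, L, Q, r, k = 2, 4, 5, 3, 2, 0.1, 30.0
theta = np.repeat((np.arange(Mlat) + 0.5)*np.pi/Mlat, Hlon)
phi = np.tile(np.arange(Hlon)*2*np.pi/Hlon, Mlat)
alpha = np.sin(theta)*(np.pi/Mlat)*(2*np.pi/Hlon)    # Eq.(5) weights

# B: (N+1)^2 x T, weighted conjugate spherical harmonics at the mic angles
B = np.array([alpha*np.conj(Y(n, m, theta, phi))
              for n in range(N + 1) for m in range(-n, n + 1)])

# r_t: microphone offsets on one sphere; centers r_ol random in a plane
r_t = r*np.stack([np.cos(phi)*np.sin(theta),
                  np.sin(phi)*np.sin(theta),
                  np.cos(theta)], axis=1)            # T x 3
centers = np.hstack([rng.uniform(-1, 1, (L, 2)), np.zeros((L, 1))])

# y_sq: direction vectors of the Q sources
ts, ps = rng.uniform(0, np.pi, Q), rng.uniform(0, 2*np.pi, Q)
y = np.stack([np.cos(ps)*np.sin(ts), np.sin(ps)*np.sin(ts), np.cos(ts)], axis=1)

# C(l): T x Q phase terms; stack the blocks B x C(l) into W
W = np.vstack([B @ np.exp(1j*k*((r_t + centers[l]) @ y.T)) for l in range(L)])
print(W.shape)   # (L*(N+1)^2, Q) = (27, 2)
```

The stacked matrix has the L(N+1)²×Q shape stated above, with each sphere contributing one (N+1)²×Q block.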

Now, a_nm can be applied to DOA estimation using the MUSIC algorithm through the following steps. First, the correlation matrix is acquired:

R_nm = E{a_nm(k) a_nm^H(k)}

(12)

Taking the eigenvalue decomposition of R_nm,

eigen(R_nm) = [E_a, E_n]

(13)

where E_a and E_n are the signal subspace and the noise subspace respectively. Then the angular-spectrum equation with respect to the direction of the source can be obtained:

P(Ω) = 1 / (a^H(Ω) E_n E_n^H a(Ω))

(14)

where a is the steering vector, i.e. a column of W for any direction. By scanning Ωsq, the peaks of the spectrum correspond to the signals' directions.
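For a single narrowband frequency, the chain of Eqs.(12)-(14) can be demonstrated with a small self-contained simulation. The steering model below (4π i^n [Y_n^m(Ω)]*, i.e. a single sphere after the j_n(kr) division of Eq.(8)) and all parameters are our own illustrative choices; two on-grid sources at a common pitch angle are recovered by scanning the azimuth.

```python
import numpy as np
from scipy.special import lpmv
from math import factorial

def Y(n, m, theta, phi):
    norm = np.sqrt((2*n + 1)/(4*np.pi)*factorial(n - abs(m))/factorial(n + abs(m)))
    return norm*lpmv(abs(m), n, np.cos(theta))*np.exp(1j*m*phi)

N = 3                                  # truncation order, 16 coefficients
def steer(theta, phi):
    # frequency-independent steering after the j_n(kr) division of Eq.(8)
    return np.array([4*np.pi*(1j**n)*np.conj(Y(n, m, theta, phi))
                     for n in range(N + 1) for m in range(-n, n + 1)])

rng = np.random.default_rng(1)
polar = np.deg2rad(60.0)
az_true = np.deg2rad([120.0, 270.0])   # two sources, fixed pitch angle
A = np.stack([steer(polar, a) for a in az_true], axis=1)

J = 400                                # snapshots
S = rng.standard_normal((2, J)) + 1j*rng.standard_normal((2, J))
noise = 0.1*(rng.standard_normal((16, J)) + 1j*rng.standard_normal((16, J)))
X = A @ S + noise

R = X @ X.conj().T/J                   # correlation matrix, Eq.(12)
w, E = np.linalg.eigh(R)               # eigendecomposition, Eq.(13)
En = E[:, :-2]                         # noise subspace (two sources)

az_grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
P = np.array([1.0/np.real(steer(polar, a).conj() @ En @ En.conj().T
              @ steer(polar, a)) for a in az_grid])   # MUSIC spectrum, Eq.(14)

peaks = [i for i in range(360) if P[i] >= P[i - 1] and P[i] >= P[(i + 1) % 360]]
est = sorted(float(np.rad2deg(az_grid[i]))
             for i in sorted(peaks, key=lambda i: P[i])[-2:])
print(est)   # close to [120.0, 270.0]
```

The two largest local maxima of the pseudospectrum land on the true azimuths, illustrating the narrowband building block that the smoothing of the next paragraph extends to the wideband case.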

In the wideband case, the wide range of kr owing to the source bandwidth can cause estimation errors. Noticing the angular-spectrum characteristics at different kr, we propose the angular-spectrum smoothing method to obtain accurate estimates. Supposing that the frequency range includes X frequency sectors, the angular-spectrums corresponding to the X values of kr can be averaged to get the final angular-spectrum

P̄(Ω) = (1/X) Σ_{x=1}^{X} P_x(Ω)

(15)

where P_x(Ω) is the angular-spectrum of Eq.(14) evaluated at the xth value of kr.

Now, scanning Ωsq, the peaks of the smoothed angular-spectrum correspond to the wideband sources' directions.
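The smoothing step itself is independent of the spherical-array machinery: it is an incoherent average of per-frequency MUSIC spectra. The sketch below therefore uses a generic random 3-D microphone layout (not the paper's spherical geometry, and with parameters of our own choosing) with one wideband source, computes a narrowband MUSIC spectrum in each of X = 20 frequency sectors, and averages them as in Eq.(15); the frequency-dependent sidelobes wash out while the true-direction peak survives.

```python
import numpy as np

rng = np.random.default_rng(2)
c = 343.0                                       # speed of sound (m/s)
mics = rng.uniform(-0.2, 0.2, (12, 3))          # 12 mics, random positions (m)
pol, az_true = np.deg2rad(60.0), np.deg2rad(120.0)

def u(az):                                      # unit direction, fixed pitch
    return np.array([np.cos(az)*np.sin(pol), np.sin(az)*np.sin(pol), np.cos(pol)])

freqs = np.linspace(1000.0, 3000.0, 20)         # X = 20 frequency sectors
az_grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
P_sum = np.zeros(az_grid.size)

for f in freqs:
    k = 2*np.pi*f/c
    a = np.exp(1j*k*(mics @ u(az_true)))        # true steering at this k
    s = rng.standard_normal(200) + 1j*rng.standard_normal(200)
    Xf = np.outer(a, s) + 0.1*(rng.standard_normal((12, 200))
                               + 1j*rng.standard_normal((12, 200)))
    R = Xf @ Xf.conj().T/200
    _, E = np.linalg.eigh(R)
    G = E[:, :-1] @ E[:, :-1].conj().T          # noise-subspace projector
    d = np.stack([np.exp(1j*k*(mics @ u(az))) for az in az_grid], axis=1)
    P_sum += 1.0/np.real(np.einsum("ij,ik,kj->j", d.conj(), G, d))

P_bar = P_sum/len(freqs)                        # smoothed spectrum, Eq.(15)
est_az = float(np.rad2deg(az_grid[np.argmax(P_bar)]))
print(est_az)   # ~120.0
```

Any single-frequency spectrum here has ambiguities, but their locations move with frequency, so the average is maximized only at the true azimuth.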

3 Simulations

In this section, single spherical array (SSA) and multiple spherical array (MSA) simulations are provided to illustrate the efficiency of the novel algorithm. For all the following simulations, performance is compared in terms of the root mean square error (RMSE, denoted R), averaged over the sources:

R = √{(1/Q) Σ_{q=1}^{Q} [(θ̂sq − θsq)² + (φ̂sq − φsq)²]}

(16)
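A minimal sketch of the error measure, assuming the standard RMSE form averaged over the Q sources and over simulation runs (the sample values are hypothetical):

```python
import numpy as np

def rmse(est, truth):
    """RMSE averaged over the Q sources: est holds the estimated angles
    (runs x Q) in degrees, truth the actual angles (Q,)."""
    err = np.asarray(est, dtype=float) - np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(err**2)))

# e.g. two hypothetical runs of Sec. 3.1 azimuth estimates
est = [[119.5, 271.4], [120.5, 268.6]]
print(rmse(est, [120.0, 270.0]))   # ~1.05
```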

3.1 Angular-spectrum smoothing technique

Twenty microphones are placed on a sphere of radius 0.01 m: six microphones are distributed on each latitude and one microphone is placed at each of the two poles, as shown in Fig.2.

Two sound signals (S1, S2) impinge from the directions (θs, φs) = (120°, 120°) and (270°, 60°) respectively. The signal band is assumed to begin at 1 kHz.

When the pitch angles φs of the two sources are fixed at 120° and 60°, their angular-spectrum in this direction can be obtained by scanning their azimuth angle θs. Fig.3 shows the spectrum of the azimuth angle θs for different kr. Notice that different kr produce various spectrums, and each spectrum indicates some characteristics of the directions of one or both sources. As 200 frequency sectors, and correspondingly 200 values of kr, are assumed, Eq.(15) can be used to obtain the smoothed angular-spectrum in this direction, as shown in Fig.4. The smoothed angular-spectrum reveals the azimuth angles of the sources through its two clearly pronounced peaks; the simulation results are θs1 = 119.5° and θs2 = 271.4°, while the actual values are θs1 = 120° and θs2 = 270° respectively.

Fig.3 Angular-spectrum of the azimuth angle θs for different kr

Fig.4 Smoothed angular-spectrum of the azimuth angle θs

3.2 Single and multiple spherical array

In this part, the microphones on the single spherical array are distributed in the same way as in the former part. For the multiple spherical array model, three spheres are placed randomly in the limited field, and twenty microphones are distributed on each sphere with the same arrangement as on the single spherical array. The results of DOA estimation based on the angular-spectrum smoothing technique using single and multiple spherical arrays are shown in Fig.5 and Fig.6 respectively. They show that both structures can estimate the directions of wideband sources using the novel algorithm. However, the 3-D angular-spectrum of the multiple arrays has clearer and sharper peaks than that of the single array; the spectrum of the single array has many peaks in one direction simultaneously, indicating that the single array leads to estimation deviations while the multiple arrays obtain accurate estimation results efficiently.

The SNR of the two sources is varied simultaneously from -15 dB to 5 dB. For each SNR, 200 simulations are performed in order to obtain the RMSE and the estimation probability. As Fig.7 shows, the RMSE decreases as the SNR increases. The multiple spherical arrays yield a more accurate estimation than the single array over the whole SNR range, which improves the robustness of the angular-spectrum smoothing algorithm.

Fig.5 DOA estimation using single spherical array

Fig.6 DOA estimation using multiple spherical arrays

Fig.7 RMSE vs SNR curves

All the above comparative simulations between single and multiple arrays share the caveat that the total numbers of microphones of the two structures are different: the multiple arrays have more microphones than the single array.

3.3 Various number of arrays

In this part, microphones are placed on spheres of radius 0.1 m. Four structures consisting of different numbers of spheres are designed; the total number of microphones is kept the same in every structure, 180 microphones each. The distributions of microphones on each sphere vary from structure to structure, while they are identical within the same structure. The 1st structure: one sphere, 5×36 distribution; the 2nd structure: two spheres, 5×18 distribution on each sphere; the 3rd structure: three spheres, 5×12 distribution on each sphere; the 4th structure: four spheres, 5×9 distribution on each sphere. (For a×b, a denotes the five latitudes on the sphere while b indicates the number of microphones on each latitude.) The distribution of microphones on each latitude is an equal-angle distribution. Six sound sources with frequencies ranging from 0.5 kHz to 3 kHz are applied in these simulations. Their directions are (120°,60°), (180°,120°), (60°,150°), (30°,30°), (300°,60°) and (240°,150°) respectively. 250 frequency sectors are included, and the SNRs of the six sound signals are all 0 dB.

As shown in Figs.8-11, when the number of sound sources increases to six, the single array can hardly distinguish them, while the multiple arrays distinguish them better as more spheres are included in the structure, even though the total number of microphones in each structure is the same. The more spheres in the structure, the sharper the spectrum peaks, which is very helpful in practical applications.

Fig.8 DOA estimation using the 1st structure

Fig.9 DOA estimation using the 2nd structure

Fig.10 DOA estimation using the 3rd structure

Fig.11 DOA estimation using the 4th structure

4 Conclusions

Based on the spherical harmonics decomposition of the sound field, a novel algorithm is proposed for estimating the directions of wideband sources; the core of the algorithm is the proposed angular-spectrum smoothing technique, which averages the angular-spectrums of all the X frequency sectors to obtain the smoothed angular-spectrum for accurate DOA estimation. Furthermore, a multiple spherical array structure consisting of several spheres randomly distributed in a given plane is developed. Simulation results show that both the single and the multiple spherical arrays can localize wideband sources, while the multiple spherical arrays are more accurate owing to their lower sidelobes and sharper peaks in the angular-spectrum.

Only open spheres were considered in this paper. Extending the presented method to multiple rigid spherical baffles requires further investigation of the more complicated scattered field caused by the multiple-array structure. Future work includes studying the influence of the placement of the spheres in the multiple spherical arrays, especially for multiple rigid spherical baffles, and researching blind signal processing with the novel algorithm in noisy and reverberant environments.

[1] Xu Y G, Liu Z W. Joint estimation of 2-D DOA and polarization by using the linear array with diverse polarization[J]. Journal of Beijing Institute of Technology, 2006,15(1):102-105.

[2] Wax M, Shan T J, Kailath T. Spatio-temporal spectral analysis by eigenstructure methods[J]. IEEE Trans Acoust Speech Signal Process, 1984, 32: 817-827.

[3] Claudio E D, Parisi R. WAVES: weighted average of signal subspaces for robust wideband direction finding[J]. IEEE Trans Speech Signal Process, 2001, 49: 2179-2190.

[4] Wang H, Kaveh M. Coherent signal-subspace processing for the detection and estimation of angles of arrival of multiple wide-band sources[J]. IEEE Transactions on Acoustics, Speech and Signal Processing, 1985, ASSP-33: 823-831.

[5] Fabrizio S. Robust auto-focusing wideband DOA estimation[J]. Signal Process, 2006,86: 17-37.

[6] Teutsch H, Kellermann W. Detection and localization of multiple wideband acoustic sources based on wavefield decomposition using spherical apertures[C]∥Proc IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2008: 5276-5279.

[7] Meyer J, Elko G W. A highly scalable spherical microphone array based on an orthonormal decomposition of the soundfield[C]∥Proc IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2002: 1781-1784.

[8] Khaykin D, Rafaely B. Coherent signals direction-of-arrival estimation using a spherical microphone array: frequency smoothing approach[C]∥Proc IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2009: 221-224.

[9] Trees H L V. Optimum array processing, Part IV of Detection, estimation, and modulation theory[M]. New York, NY: John Wiley & Sons, Inc., 2002.

[10] Teutsch H, Kellermann W. Eigen-beam processing for direction-of-arrival estimation using spherical apertures[C]∥Proc Joint Workshop on Hands-Free Communication and Microphone Arrays, 2005: c-13-c-14.

[11] Williams E G. Fourier acoustics: sound radiation and nearfield acoustic holography[M]. New York, NY: Academic Press, 1999.

[12] Teutsch H. Modal array signal processing: principles and applications of acoustic wavefield decomposition[M]. Berlin: Springer, 2007.

(Edited by Wang Yuxia)

DOI: 10.15918/j.jbit1004-0579.201524.0104

TN 912 Document code: A Article ID: 1004-0579(2015)01-0018-08

Received 2013-09-24

E-mail: panxi@bit.edu.cn