Application of Regularization Methods in the Sky Map Reconstruction of the Tianlai Cylinder Pathfinder Array
Kaifeng Yu, Shifan Zuo, Fengquan Wu, Yougang Wang, and Xuelei Chen
1 National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China; xuelei@cosmology.bao.ac.cn
2 Key Laboratory of Radio Astronomy and Technology, Chinese Academy of Sciences, Beijing 100101, China
3 School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China
4 Key Laboratory of Cosmology and Astrophysics (Liaoning) & College of Sciences, Northeastern University, Shenyang 110819, China
5 Center of High Energy Physics, Peking University, Beijing 100871, China
Abstract The Tianlai cylinder pathfinder is a radio interferometer array built to test 21 cm intensity mapping techniques in the post-reionization era. It works in passive drift scan mode to survey the sky visible from the northern hemisphere. To deal with the large instantaneous field of view and the spherical sky, we decompose the drift scan data into m-modes, which are linearly related to the sky intensity. The sky map is reconstructed by solving the linear interferometer equations. Due to the incomplete uv coverage of the interferometer baselines, this inverse problem is usually ill-posed, and a regularization method is needed for its solution. In this paper, we use simulation to investigate two frequently used regularization methods, the Truncated Singular Value Decomposition (TSVD) and the Tikhonov regularization techniques. Choosing the regularization parameter is very important for their application. We employ the generalized cross validation method and the L-curve method to determine the optimal value. We compare the resulting maps obtained with the different regularization methods, and for the different parameters derived using the different criteria. While both methods can yield good maps for a range of regularization parameters, in the Tikhonov method the suppression of noisy modes is applied more gradually, producing smoother maps and avoiding some visual artifacts present in the maps generated with the TSVD method.
Key words: techniques: interferometric – methods: numerical – cosmology: observations – radio continuum: general
1. Introduction
The Tianlai experiment is designed to develop and test the H I Intensity Mapping (IM) technique (Chen 2012; Li et al. 2020), in which the large scale distribution of neutral hydrogen is observed at low angular resolution without resolving individual galaxies. This allows a fast survey speed to cover the large volume required for cosmological studies (Chang et al. 2008). This technique has been applied to observations with the Green Bank Telescope (GBT) and the Parkes telescope (Chang et al. 2010; Masui et al. 2013; Switzer et al. 2013; Wolz et al. 2017; Anderson et al. 2018; Wolz et al. 2021). Other existing or ongoing H I IM experiments focusing on late-time cosmology include both single-dish telescopes and interferometers, such as FAST (Hu et al. 2020), BINGO (Battye et al. 2013), CHIME (CHIME Collaboration et al. 2022), and SKA in the near future (Square Kilometre Array Cosmology Science Working Group et al. 2020). However, H I IM observation is very challenging due to the strong foreground radiation, which is 4–5 orders of magnitude brighter than the cosmological signal. Additionally, instrumental systematics and other observational effects further complicate the separation of the two, so a highly sophisticated analysis is required to extract this signal (Liu & Shaw 2020).
The Tianlai experiment includes a cylinder array and a dish array. The Tianlai cylinder pathfinder array is fixed on the ground and relies on the rotation of the Earth to survey the sky. It consists of three adjacent 15 m × 40 m cylindrical reflectors, on which 31, 32, and 33 feeds are installed, from east to west, respectively. The basic performance of the Tianlai arrays has been analyzed in Li et al. (2020) for the cylinder pathfinder, and in Wu et al. (2021) for the dish pathfinder. Although making synthesis images of the sky from the interferometric raw data is, strictly speaking, not needed for the power spectrum estimation, in practice it is still an essential procedure to compress the data for further scientific analysis and to provide an intuitive means of checking the data quality and the algorithms applied.
The output of an interferometer, the interferometric visibility, is the cross-correlation between the voltages of two feed elements. For the unpolarized case, the visibility is given by
where T(n̂) is the sky brightness temperature, n̂ gives the direction on the sky, A_i is the beam of feed i, and u_ij = (r_i − r_j)/λ is the separation between the feeds in units of the observed wavelength. In the second line, we introduce the beam transfer function B_ij. In discrete form, the integral is replaced by a sum over the pixelated sky indexed by n_i as follows
where ΔΩ is the pixel angular size, or in matrix-vector form,
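(A plausible reconstruction of the relations described above, following the conventions of Shaw et al. 2014; the exponent sign and normalization are assumptions and may differ from the published equations.)

$$ V_{ij} = \int A_i(\hat{n})\,A_j^{*}(\hat{n})\,T(\hat{n})\,e^{2\pi i\,\hat{n}\cdot\mathbf{u}_{ij}}\,d^2\hat{n} \equiv \int B_{ij}(\hat{n})\,T(\hat{n})\,d^2\hat{n}, $$
$$ V_{ij} \approx \sum_{n} B_{ij}(\hat{n}_n)\,T(\hat{n}_n)\,\Delta\Omega, \qquad \mathbf{V} = \mathbf{B}\,\mathbf{T}. $$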
Synthesis imaging is to estimate T(n̂) given the measurements V_ij. In the flat sky limit, the measured visibilities of the interferometer array correspond to Fourier modes of the sky intensity variation, with the orientation and angular scale determined by the direction and length of the baselines. However, in practice the baselines of an array often do not form a complete coverage of the spatial frequency domain, so reconstructing the sky map from the visibilities is not trivial, as it is mathematically an ill-posed inverse problem with no unique solution. To overcome the gaps in the measured visibilities on the uv plane, a number of techniques have been developed to reconstruct the image in such cases, e.g., the variants of the CLEAN algorithm (Högbom 1974), and some burgeoning methods based on neural networks (Xu et al. 2020; Connor et al. 2022; Schmidt et al. 2022).
The general principle of sky image reconstruction for the Tianlai array was investigated in Zhang et al. (2016a, 2016b). Zuo et al. (2021) developed a pipeline for the map-making process for the Tianlai array. In a recent paper (Yu et al. 2023, hereafter referred to as paper I), we presented a simulation of the Tianlai cylinder pathfinder array up to the making of the synthesis map, taking into account both thermal noise and calibration error. This simulation reveals that although the Tianlai array is a relatively compact and dense array, there is still incompleteness in its baseline coverage, and as a result, the reconstructed image has some artifacts arising from the incomplete uv coverage or beam side lobes. In the present paper, we investigate regularization methods for dealing with the ill-posed inverse problem.
The structure of this paper is as follows. In Section 2, we present our simulation setup and give a brief summary of the m-mode formalism for map-making. In Section 3, we apply the regularized m-mode formalism imaging to the simulation data and explore the choice of appropriate regularization parameters. In Section 4, we discuss the errors in these regularized maps, and finally in Section 5, we present our conclusion.
2. Map-making with m-modes
2.1. The m-mode Formalism
Although imaging the sky by solving for T in Equation (4) directly is intuitive, the amount of computation is very large. If the interferometer has N_bl baselines and has run for N_time rounds, then for each single frequency and N_pix discrete sky pixels, the visibility vector V has a size of N_bl × N_time, and the dimensions of the beam transfer matrix B will be (N_bl × N_time, N_pix). Zheng et al. (2017) demonstrated this method for a small array. For the Tianlai cylinder pathfinder, the number of non-redundant baselines is N_bl ≈ 3300, and the number of time samples is N_time ≈ 21,600 in a sidereal day for a 4 s integration time. If we pixelate the whole sky within latitude [−30°, 90°] in the HEALPix scheme with N_side = 256 (Górski et al. 2005), the pixel number is N_pix ≈ 6×10^5. Then the dimensions of the transfer matrix B would be 71 M × 0.6 M per frequency. Apparently, solving Equation (4) directly would be intractable for a large array, due to the large matrix involved.
The m-mode method (Shaw et al. 2014, 2015; Zhang et al. 2016b) provides a convenient and computationally efficient approach for the data processing and analysis of drift scan telescopes, especially wide-field telescopes. It has been applied in the data analysis of LWA (Eastwood et al. 2018), EDA2 (Kriele et al. 2022) and CHIME (CHIME Collaboration et al. 2022). We have also implemented this method in the map-making procedure of the Tianlai data processing pipeline tlpipe (Zuo et al. 2021).
As the drift scan telescope measures the sky periodically with the rotation of the Earth, we can decompose T(n̂) and B_ij(n̂; φ) in spherical harmonics,
the Fourier transform of the visibility V_ij(φ) can be written as
and, taking the rotated beam transfer function B_lm^ij(φ) = B_lm^ij(φ = 0) e^{imφ}, Equation (5), with the noise term added, becomes
which gives the key equation in m-mode analysis; we can rewrite it in matrix form as
The label (ij, ±) indicates that the positive and negative values of m are grouped together, because the positive and negative m-modes are measurements of the same real-valued sky, which gives a_{l,−m} = (−1)^m a*_{lm}. We can rewrite Equation (6) in a general form as
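(A sketch of the grouped m-mode measurement equation and its general form, in the notation of Shaw et al. 2014; the exact index arrangement is an assumption.)

$$ V^{m}_{(ij,\pm)} = \sum_{l} B^{(ij,\pm)}_{lm}\,a_{lm} + n^{m}_{(ij,\pm)}, \qquad \mathbf{v} = \mathbf{B}\,\mathbf{a} + \mathbf{n}, $$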
where matrix B has a block diagonal structure, so we can treat each m-block independently. Then the Fourier transform of the visibility v is related to the spherical harmonic coefficients of the sky a by a simple linear equation. The imaging process is to solve for a given the observation v, with prior information about B and the available knowledge of n. Compared with directly solving for T(n_i) in Equation (3), the m-mode method handles matrix operations with much smaller matrices, which increases the computational efficiency tremendously. The main portion of the computation is devoted to the transfer matrix B. However, once computed and stored, it can be reused in subsequent processing.
2.2. Simulation
We then simulate the map-making process; here the unpolarized case with a single frequency (750 MHz) is considered. We use the publicly available software cora (https://github.com/radiocosmology/cora; Shaw et al. 2014, 2015) to generate our mock sky model, which contains foregrounds, including diffuse emission from the Galaxy and extra-galactic point sources, alongside the cosmological 21 cm signal. The diffuse emission is generated by extrapolating the Haslam map with a specified spectral index map, and including random spectral and angular fluctuations. The extra-galactic point sources consist of a catalog of real bright sources from NVSS and VLSS, a synthetic catalog of fainter sources, and a random background of even fainter sources. The cosmological 21 cm signal is generated by drawing Gaussian realizations of a power spectrum (Shaw et al. 2014, 2015).
We model the beam pattern, which is characterized as a long strip along the North–South direction, in the following analytical form (see paper I for details), with parameters determined from a fit to the electromagnetic simulation of the Tianlai cylinder array (Sun et al. 2022):
where x̂ and ŷ are the unit vectors pointing East and North, respectively, and
We use the mean of the power beams of the X and Y polarizations at 750 MHz from Sun et al. (2022) to fit the models above. The fitted parameters are α = 1.04, F = 0.2, and θ_EW = 2.74.
The noise in each m-mode (Shaw et al. 2015) is generated by sampling a Gaussian distribution with rms
2.3. Beam Coverage in Spherical Harmonics Space
The angular resolution and sensitivity on the sky of a telescope are limited by its size and configuration. We take the maximum spherical harmonic degree l and order m that the Tianlai cylinder pathfinder is sensitive to at 750 MHz as
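(Presumably of the form below, consistent with the discussion later in this subsection; the exact prefactors are an assumption.)

$$ l_{\max} \simeq \frac{2\pi D_{\max}}{\lambda}, \qquad m_{\max} \simeq \frac{2\pi D_{\rm EW}\cos\delta}{\lambda}, $$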
where D_max is the maximum dimension of the entire array, and D_EW is the physical size in the East–West direction.
We can define the spherical harmonic beam coverage of the array by combining the beam transfer matrices from all baselines as
where N is the total number of non-redundant baselines. This quantity can be used to describe the sensitivity of the array on the sky in spherical harmonic space, analogous to the uv coverage, which is used to define the spatial frequencies measured by the array. In Figure 1, we show the beam coverage B_{l|m|} of the Tianlai cylinder pathfinder array.
Figure 1. The spherical harmonic beam coverage B_{l|m|} of the Tianlai cylinder pathfinder array. The three vertical dotted lines indicate the m values corresponding to the width of a single cylinder (15 m), the maximum baseline projected in the East–West direction (30 m), and the total width of the array (45 m).
Figure 2. The singular values of matrix B for several m values.
The resolution in the East–West direction of a baseline is determined by its projected distance along this direction. The limit on m for a telescope is m < 2πD_EW cos δ/λ, taking into account that the m-mode corresponds to the Fourier mode in the azimuthal direction, where δ is the latitude of the Tianlai site, about 44°; thus m ≲ 510 with D_EW = 45 m and λ ≈ 0.3997 m at 750 MHz. The beam coverage is expected to be centered at (l, m) = (2πD/λ, 2πD_EW cos δ/λ). Owing to the many short baselines of equal length on the same cylinder, we expect to see two areas in spherical harmonic space where the Tianlai cylinder pathfinder has higher sensitivity, which can be estimated by
with D_w equal to 1 or 2 times the cylinder width, and b = 0.4 m; these are at (l, m) ≈ (235, 170) and (470, 340), which is indeed the case as can be seen in Figure 1.
Roughly, the sensitivity of our model telescope is relatively high in the range of about [m, m + 200] for m ≲ 400, and decreases significantly at and outside the edge region (i.e., m ∼ 510) that the telescope can reach. Consequently, when reconstructing the sky map from the spherical harmonic coefficients a_lm in this paper, we filter out all modes with l > 600, as well as the m = 0 modes, which measure the average over sidereal time.
2.4. Linear Least-squares Map-Maker
The imaging procedure can be seen as solving Equation (8) for the spherical harmonic coefficients a, given the measurement v and the beam matrix B. The least-squares solution, which minimizes ||Ba − v||_2^2, is given by
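(presumably the standard normal-equation solution)

$$ \hat{\mathbf{a}} = (\mathbf{B}^{*}\mathbf{B})^{-1}\,\mathbf{B}^{*}\mathbf{v}, $$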
where * denotes the conjugate transpose.
In our current work, before computing the general linear least-squares solution, we first subtract from the visibilities the contributions of four very strong radio sources, i.e., Cas A, Cyg A, Tau A, and Vir A, which can be treated as point sources at known locations for the current Tianlai cylinder pathfinder array. This reduces the dynamic range of the input map. If they are not subtracted, the reconstructed map will show some apparent sidelobes around their positions.
The matrix B usually does not have full rank, as there are unmeasured modes due to the incomplete uv-coverage and the incomplete coverage of the full sky. In Figure 2, we plot the singular values of several B matrices; a cluster of small singular values approaching zero results in these matrices being rank-deficient. Hence B*B is not invertible, which makes the problem ill-posed. The solution to an ill-posed inverse problem is usually unstable and may deviate greatly from the true one in the presence of noise.
3. Regularization
A common approach to ill-posed problems is regularization (see, e.g., Engl et al. 1996; Hansen 1998), which gives a stable approximate solution by imposing additional constraints on it. Common regularization methods include the Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization.
3.1. Truncated Singular Value Decomposition
For the least-squares solution (Equation (13)), the matrix B generally does not have full rank; B*B is then singular and not invertible. With the singular value decomposition of the m × n (m ≥ n) matrix B,
Figure 3. The reconstructed sky maps with different truncation thresholds for the data with a noise level corresponding to 30 days integration time. From left to right, ∊ = 1×10^{−4}, 1×10^{−3}, and 5×10^{−3}. The four brightest point sources (i.e., Cas A, Cyg A, Tau A and Vir A) have been removed from the visibility before reconstruction.
where U = (u_1, u_2, …, u_n) and V = (v_1, v_2, …, v_n) are unitary matrices, and the diagonal matrix Σ = diag(σ_1, σ_2, …, σ_n) has non-negative elements in non-increasing order. The Moore–Penrose pseudo-inverse of matrix B is defined as
where + denotes the pseudo-inverse, and the diagonal elements of Σ^+ are the reciprocals of the non-zero elements on the diagonal of Σ, with the zeros left in place. The minimum-norm least-squares solution to Equation (8) in terms of the Moore–Penrose pseudo-inverse is then given by
where r = rank(B). For the noisy measurement v = v_0 + n, where v_0 = B a_true, this solution becomes
where the first term gives the true solution, while the second term gives the contribution from noise. The latter might be magnified by the extremely small singular values σ_i of B, making the solution unstable and heavily contaminated by noise.
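The SVD quantities referred to in this paragraph can be summarized as follows (a sketch in standard notation; the published equations may differ in detail):

$$ \mathbf{B} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{*}, \qquad \mathbf{B}^{+} = \mathbf{V}\boldsymbol{\Sigma}^{+}\mathbf{U}^{*}, \qquad \hat{\mathbf{a}} = \mathbf{B}^{+}\mathbf{v} = \sum_{i=1}^{r}\frac{\mathbf{u}_i^{*}\mathbf{v}}{\sigma_i}\,\mathbf{v}_i, $$

and, for v = v_0 + n with v_0 = B a_true,

$$ \hat{\mathbf{a}} = \sum_{i=1}^{r}\frac{\mathbf{u}_i^{*}\mathbf{v}_0}{\sigma_i}\,\mathbf{v}_i + \sum_{i=1}^{r}\frac{\mathbf{u}_i^{*}\mathbf{n}}{\sigma_i}\,\mathbf{v}_i. $$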
Regularization is a technique to solve the above problem by dampening or filtering out the unwanted components corresponding to the small singular values, leading to an approximate but stable solution. In terms of the singular value decomposition of matrix B, the regularized solution to Equation (8) can be written as
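(presumably the standard filtered-SVD form)

$$ \mathbf{a}_{\rm reg} = \sum_{i} f_i\,\frac{\mathbf{u}_i^{*}\mathbf{v}}{\sigma_i}\,\mathbf{v}_i, $$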
where f_i are the filter factors. Different regularization methods can be defined by choosing suitable filter factors.
For the TSVD regularization, the filter factors are
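presumably of the standard hard-cut form,

$$ f_i^{\rm TSVD} = \begin{cases} 1, & i \le k,\\ 0, & i > k. \end{cases} $$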
Thus the pseudo-inverse B^+ in Equation (14) is replaced by its rank-k approximation, with the remaining components filtered out. In paper I, we used the TSVD regularization to compute the regularized solution of Equation (8), where the truncation threshold ∊ satisfies σ_{k+1} < ∊ × max(σ_i) ≤ σ_k, and we selected ∊ = 1×10^{−3} as the default.
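As a concrete illustration (a minimal NumPy sketch, not code from the Tianlai pipeline tlpipe; B, v and eps are stand-in names for one m-block), the TSVD solution with a relative truncation threshold can be computed as:

```python
import numpy as np

def tsvd_solve(B, v, eps=1e-3):
    """TSVD solution of v = B a: keep only singular values >= eps * sigma_max."""
    U, s, Vh = np.linalg.svd(B, full_matrices=False)   # thin SVD, s sorted descending
    k = int(np.sum(s >= eps * s[0]))                   # truncation rank k
    coeff = (U[:, :k].conj().T @ v) / s[:k]            # u_i^* v / sigma_i for i <= k
    return Vh[:k].conj().T @ coeff                     # sum_i (u_i^* v / sigma_i) v_i

# Toy example standing in for one m-block of the transfer matrix.
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 30)) + 1j * rng.normal(size=(50, 30))
a_true = rng.normal(size=30)
v = B @ a_true + 0.01 * (rng.normal(size=50) + 1j * rng.normal(size=50))
a_tsvd = tsvd_solve(B, v, eps=1e-3)
```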
As an illustration, we show in Figure 3 the reconstructed maps with three truncation thresholds ∊, representing the cases of a too small, a moderate, and a too large ∊, respectively. Noise in the reconstructed map is significantly amplified if there is no regularization or the threshold ∊ is too small; the image is then totally dominated by noise. For a too large ∊, although noise is not significantly amplified, the reconstructed signal may fail to completely capture the true signal in certain modes, so again the resulting image is not optimal. For a properly chosen regularization value, we can see the reconstruction result is relatively good, with the input sky map well recovered. However, as we noted in paper I, even in this case, near the bright part of the Galactic plane there are comb-like artifacts, which are produced by the incomplete reconstruction of certain modes that are truncated, especially those with small m values in our case.
3.2. Tikhonov Regularization
Another widely used regularization method is Tikhonov regularization. In contrast to TSVD regularization, which imposes a hard threshold on the components corresponding to the small singular values, Tikhonov regularization aims to suppress undesirable modes by solving for a that minimizes
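(presumably the standard Tikhonov functional; whether λ or λ² multiplies the penalty is a convention, and λ² is used here for consistency with the regularized inverse defined in Section 3.3)

$$ \min_{\mathbf{a}}\;\Big\{\,\|\mathbf{B}\mathbf{a}-\mathbf{v}\|_2^{2} + \lambda^{2}\,\|\mathbf{L}(\mathbf{a}-\mathbf{a}_0)\|_2^{2}\,\Big\}, $$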
where ||·||_2 denotes the Frobenius norm, λ is the regularization parameter, a_0 is a prior of a, which can be set to a_0 = 0 if prior information is unavailable, and L is called the Tikhonov regularization matrix. There are several common choices for L, including the identity matrix or a discrete approximation of the derivative operator. This minimization problem gives the solution
Figure 4. The reconstructed sky map with different Tikhonov regularization parameters λ. From left to right, λ² = 10^{−9}, 9×10^{−8}, and 10^{−5}.
In the simple case where L = I, the Tikhonov regularization is referred to as the standard form. The more general form of Tikhonov regularization, where L ≠ I and prior information about a and n is available, can be transformed into the standard form (Eldén 1977). For simplicity, we adopt L = I and a_0 = 0; then the objective function becomes
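(presumably)

$$ \min_{\mathbf{a}}\;\Big\{\,\|\mathbf{B}\mathbf{a}-\mathbf{v}\|_2^{2} + \lambda^{2}\,\|\mathbf{a}\|_2^{2}\,\Big\}, $$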
and Equation (19) is reduced to
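(presumably the familiar ridge-regression form, consistent with the regularized inverse B^#(λ) quoted in Section 3.3)

$$ \mathbf{a}_{\lambda} = \big(\mathbf{B}^{*}\mathbf{B}+\lambda^{2}\mathbf{I}\big)^{-1}\mathbf{B}^{*}\mathbf{v}. $$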
It can be expressed in the form of Equation (16) with filter factors as
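(presumably the standard Tikhonov filter factors)

$$ f_i^{\rm Tik} = \frac{\sigma_i^{2}}{\sigma_i^{2}+\lambda^{2}}. $$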
Unlike the TSVD regularization, which simply trims out the modes corresponding to small singular values, the Tikhonov regularization applies a gradual and smooth suppression to all modes. The degree of smoothness can be adjusted by the choice of the regularization parameter λ; a small regularization parameter value yields a solution that is closer to the one obtained from TSVD regularization.
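A minimal NumPy sketch of the standard-form Tikhonov solve via the SVD filter factors (again illustrative only; B, v and lam are stand-in names, not pipeline code):

```python
import numpy as np

def tikhonov_solve(B, v, lam):
    """Standard-form Tikhonov solution via SVD, with filter factors
    f_i = sigma_i^2 / (sigma_i^2 + lam^2) applied smoothly to every mode."""
    U, s, Vh = np.linalg.svd(B, full_matrices=False)
    # f_i / sigma_i = sigma_i / (sigma_i^2 + lam^2): well behaved even as sigma_i -> 0.
    coeff = (s / (s**2 + lam**2)) * (U.conj().T @ v)
    return Vh.conj().T @ coeff
```

Compared with the hard cut in the TSVD sketch above, every mode is retained here but down-weighted according to its singular value.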
In Figure 4, we illustrate the Tikhonov regularized sky maps with different regularization parameters λ. For a small λ, as shown in the left panel, the noise in the regularized map is obviously amplified. The noise becomes less pronounced with a moderate λ value, as shown in the center panel. The comb-like artifact around the Galactic Center shown in Figure 3 is also less prominent, perhaps thanks to the gradual suppression of the modes in the Tikhonov regularization. However, for the case with an excessively large λ, shown in the right panel as an example, the underlying signal is also suppressed, producing a blander map than the actual sky. Furthermore, along with the diminished sky signal, the sidelobes of some bright sources are more pronounced, e.g., an arc structure becomes noticeable near the north celestial pole and above Cyg A. An appropriate selection of the λ value, which strikes a balance between noise amplification and information loss, is expected to yield a satisfactory map that contains moderate noise while faithfully preserving the true sky as much as possible.
The regularized solution error is given by
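(presumably the standard decomposition in terms of the SVD and filter factors, ignoring any component of a_true outside the row space of B)

$$ \mathbf{a}_{\rm reg}-\mathbf{a}_{\rm true} = \sum_i (f_i-1)\,(\mathbf{v}_i^{*}\mathbf{a}_{\rm true})\,\mathbf{v}_i + \sum_i f_i\,\frac{\mathbf{u}_i^{*}\mathbf{n}}{\sigma_i}\,\mathbf{v}_i, $$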
where the first term corresponds to the regularization error, arising from the regularization of the true signal a_true, and the second term represents the perturbation error, attributed to the presence of noise. The filter factors f_i can be f_i^TSVD or f_i^Tik. For Tikhonov regularization, as λ → 0 the filter factors f_i → 1, the regularization error approaches zero, but the perturbation error might be large and the solution tends to be noisy. As λ increases, the filter factors f_i → 0, leading to a smaller perturbation error but a larger regularization error, and the regularized solution is over-smoothed and approaches zero. The key to obtaining a good regularized solution is to choose an optimal regularization parameter, λ in Tikhonov regularization or k in TSVD regularization, to balance the two errors.
3.3. The Choice of Regularization Parameter
Choosing the optimal regularization parameter that balances the trade-off between noise amplification and signal recovery is not always easy; it depends on the particular problem, including factors such as the specific array configuration and the level of noise in the data. There are, however, several approaches available for choosing a regularization parameter that is near the optimal one. Below we adopt the notation used in Tikhonov regularization (i.e., take λ as the regularization parameter) for our description; these methods are also applicable to TSVD regularization. If the statistics of the noise are well known, λ can be chosen by applying the discrepancy principle (Engl 1987), such that the residual is at a level comparable to that of the noise, ||B a_λ − v||_2 ≤ η||n||_2, where a_λ is the solution given by Equation (21) with the regularization parameter value λ, and η is a user-specified constant to constrain the bound. If the error is unknown, techniques such as the generalized cross validation (Golub et al. 1979) or the L-curve criterion (Hansen & O'Leary 1993) are usually applied to search for the appropriate regularization parameter.
Figure 5. The GCV function as a function of λ for the m = 200 case. The red star marks the minimum point of the GCV function; the corresponding λ gives the optimal regularization parameter by this method.
Generalized Cross Validation (GCV)—The GCV method (Golub et al. 1979) is based on the following idea: for a linear vector equation Ax = b, if we drop a data point b_i from the vector b and obtain a regularized solution x_λ^(i) from the remaining rows A([1:i−1, i+1:m], :), then the value A(i, :) x_λ^(i) should be close to the excluded value b_i if a reasonable parameter λ is chosen (see Chapter 4 of Wahba 1990). This results in the selection of a parameter λ that minimizes the GCV function
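(presumably of the standard form of Golub et al. 1979)

$$ G(\lambda) = \frac{\|\mathbf{B}\mathbf{a}_{\lambda}-\mathbf{v}\|_2^{2}}{\big[\operatorname{trace}\big(\mathbf{I}-\mathbf{B}\mathbf{B}^{\#}(\lambda)\big)\big]^{2}}, $$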
where B^#(λ) ≡ (B*B + λ²I)^{−1} B* is referred to as the regularized inverse, and a_λ = B^#(λ) v is the corresponding regularized solution.
For illustration, in Figure 5 we show the logarithmic plot of the calculated GCV function applied to the data with 30 days integration noise for the case of m = 200. In our computation, the λ values vary from 10^{−10} to 10^{−1} and are sampled evenly on a logarithmic scale. The GCV function decreases slowly as λ increases, until at some point it increases rapidly; the minimum is reached just before this rapid rise, and is marked by a red star in the figure.
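A sketch of how the GCV curve can be evaluated efficiently from a single SVD of one m-block (illustrative names; not the actual pipeline implementation):

```python
import numpy as np

def gcv_curve(B, v, lams):
    """Evaluate G(lam) = ||B a_lam - v||^2 / [trace(I - B B#(lam))]^2 on a grid of lam."""
    U, s, Vh = np.linalg.svd(B, full_matrices=False)
    Uv = U.conj().T @ v
    out_of_range = np.linalg.norm(v - U @ Uv) ** 2      # part of v outside the range of B
    m = B.shape[0]
    gcv = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)                       # Tikhonov filter factors
        resid = np.sum(np.abs((f - 1.0) * Uv) ** 2) + out_of_range
        gcv.append(resid / (m - np.sum(f)) ** 2)
    return np.array(gcv)

lams = np.logspace(-10, -1, 100)                         # sampling range quoted in the text
# lam_opt = lams[np.argmin(gcv_curve(B, v, lams))]       # minimum of the GCV function
```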
L-curve—In a log–log plot depicting the residual norm ||B a_λ − v||_2 versus the solution norm ||a_λ||_2, the resulting curve typically exhibits an L-shaped profile, which is comprised of a flat part where the regularization error dominates at large λ, and a steep part where the perturbation error dominates at small λ. The optimal λ value should be chosen to balance these two errors, which corresponds to the corner of the L-curve and can be identified by locating the point of maximum curvature of the curve (Hansen 1999). Let η̂ = log ||a_λ||²_2 and ρ̂ = log ||B a_λ − v||²_2; then the curvature of the L-curve is given by
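(presumably the standard expression of Hansen 1999, with primes denoting derivatives with respect to λ)

$$ \kappa(\lambda) = 2\,\frac{\hat{\rho}'\hat{\eta}''-\hat{\rho}''\hat{\eta}'}{\big[(\hat{\rho}')^{2}+(\hat{\eta}')^{2}\big]^{3/2}}. $$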
In Figure 6 we plot the L-curve (left) and the corresponding curvature κ (right) applied to the data with the 30 days integration noise for the m = 200 case; the red star marks the optimal parameter λ determined using this method.
The optimal values of the regularization parameter λ as determined by the GCV criterion and the L-curve criterion for all m cases are shown in Figure 7. Comparing the optimal values determined by these two criteria, it is observed that the optimal λ obtained from the L-curve criterion is typically larger than the one provided by the GCV criterion at the same m, especially for the case of higher noise (e.g., the 1 day integration noise level). The optimal λ values increase with m for m ≲ 100, and then remain relatively stable for the remaining m.
In Figure 8, we present a comparison between the input a_lm and the Tikhonov regularized solution to the data with 30 days integration noise for the m = 10 (left) and m = 200 (right) cases as an example, where the regularization parameter λ is given by the two criteria. The real and imaginary parts are plotted in the top and bottom sub-figures, and the residues are plotted in the bottom panel of each sub-figure. As expected, the smaller regularization parameter obtained from the GCV criterion results in a solution that is slightly closer to the true value, especially at smaller l, but it may also amplify the noise for certain modes, as illustrated here in the region l ∊ [200, 400] for the m = 10 case. For the m = 200 case, in contrast, with the smaller regularization parameter provided by the GCV criterion, the residue is smaller without the cost of amplifying the noise.
Moreover, we apply the L-curve criterion to choose the truncation threshold ∊, or k in Equation (17), for the TSVD regularization. We sample 100 values of ∊ evenly on a logarithmic scale ranging from 1×10^{−4} to 1×10^{−1} to compute the points on the TSVD L-curve (||B a_k − v||_2, ||a_k||_2). Different from the case of Tikhonov regularization, the points on the TSVD L-curve are discrete, so a quadratic spline interpolation is applied to these discrete values to compute the curvature of the L-curve and select the sampled ∊ value closest to the maximum curvature. In Figure 9, we show results for the data with 30 days integration noise. For comparison, the case of fixed ∊ = 1×10^{−3} is also plotted. We can see that at large m the two curves almost coincide, but at some relatively smaller m the L-curve suggests a smaller k, indicating that more low-sensitivity modes are trimmed off to avoid amplifying noise.
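A sketch of the corner-finding step for a discrete L-curve, using quadratic spline interpolation as described above (function and variable names are illustrative; the actual pipeline implementation may differ):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def lcurve_corner(rho, eta, params):
    """Locate the L-curve corner (maximum curvature) for discrete points.

    rho:    residual norms ||B a_k - v||_2 for each sampled threshold
    eta:    solution norms ||a_k||_2
    params: the sampled thresholds, assumed strictly increasing
            (e.g. 100 log-spaced epsilon values between 1e-4 and 1e-1)
    """
    t = np.log10(params)                                   # smooth curve parameter
    x = UnivariateSpline(t, np.log10(rho), k=2, s=0)       # quadratic interpolating splines
    y = UnivariateSpline(t, np.log10(eta), k=2, s=0)
    tf = np.linspace(t[0], t[-1], 1000)
    dx, dy = x.derivative(1)(tf), y.derivative(1)(tf)
    ddx, ddy = x.derivative(2)(tf), y.derivative(2)(tf)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5  # signed curvature
    t_corner = tf[np.argmax(kappa)]
    return params[np.argmin(np.abs(t - t_corner))]          # closest sampled threshold
```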
In the top panels of Figure 10 we illustrate the reconstructed maps with the automatically chosen regularization parameters for the data with the 30 days noise level, and in the bottom panels their relative error, i.e., the fractional difference with the input map. We show the cases of Tikhonov regularization using the GCV (left) and the L-curve (middle) criteria, and the TSVD regularization using the L-curve criterion (right).
Figure 6. The L-curve (left) and the corresponding curvature κ (right) as a function of λ for the m = 200 case. The red star marks the point of maximum curvature at the corner of the L-curve.
Figure 7. The optimal regularization parameters for all m.
It seems that the Tikhonov regularized solutions generally yield visually better maps of the sky, without the comb-like feature seen in the TSVD regularized map. The errors are typically small except around the places where bright sources are located. With the λ value obtained by the GCV criterion, which is smaller than the one chosen with the L-curve criterion, the region near the Galactic Center is better reproduced, but the northern region tends to be noisier. With the larger λ value chosen using the L-curve criterion, the map is smoother. However, we note that the λ value chosen by both methods would vary with the noise level; therefore, while in the present case the larger λ value obtained from the L-curve criterion produces a better overall visual impression, this may change for a different noise level or setup. Owing to the flat tail of the GCV function curve (Figure 5), the λ value determined from the minimum of the GCV function might not always be accurate and satisfactory, while the L-curve method gives a more robust result. In the reconstructed maps, the smaller regularization parameter obtained from the GCV criterion leads to a noisier reconstructed map compared to the one obtained using the L-curve criterion. As for the TSVD regularized map, it is generally similar to the map shown in Figure 3, and still shows the comb-like structure near the Galactic Center, which is produced by the truncation of certain modes.
Figure 8. The comparison between the input (blue) and the solved spherical harmonic coefficients obtained using the Tikhonov regularization, with the regularization parameter determined by the GCV (green) and L-curve (orange) methods, for the m = 10 (left) and m = 200 (right) modes, in the case of the 30 days noise level. The top sub-figure of each column shows the real part, and the bottom one gives the imaginary part. The bottom panel of each sub-figure shows the residue between the input and the solution.
3.4. An Overall Regularization Parameter
As matrix B in Equation (8) is block diagonal, the m-mode method takes advantage of this structure by processing each m-block individually. To produce the regularized maps, we have also optimized the choice of the regularization parameter for each m separately, using the GCV and L-curve criteria. For simplicity, it is also possible to adopt a single overall regularization parameter λ considering all m-blocks. We illustrate the L-curve criterion for Tikhonov regularization here. Then the point on the L-curve associated with the regularization parameter λ is given by
In Figure 11, we show the relevant L-curve for two cases with different noise levels. For the 1 day case, the noise level is relatively high, and the L-curve criterion yields a relatively large optimal regularization parameter value λ = 2.31×10^{−3}, while for the 30 days case, the level of noise is significantly lower, leading to a correspondingly smaller optimal regularization parameter value λ = 5.34×10^{−4}.
In Figure 12 we illustrate the reconstructed maps with these regularization parameters (the visibility data used for imaging have the corresponding level of noise). In the 1 day case, due to the relatively large noise and regularization parameter, the map shows more deviation than the 30 days map, and the side lobe feature near the north celestial pole is quite obvious. However, we can see that for the 30 days noise level map, the deviation is also quite significant if we compare it to the maps made by the m-by-m mode analysis shown in Figure 10. This is not surprising, since a single λ value may not be suitable for all the different m-modes. For those less sensitive modes that are susceptible to noise, this value of λ is perhaps too small to regularize the problem. On the other hand, for the modes to which the interferometer is sensitive, it may suppress too much to reconstruct the true signal.
Figure 9. The truncation parameter k for TSVD regularization obtained by the L-curve criterion for the data with the 30 days integration noise. For comparison, the case of fixed ∊ = 10^{−3} is also plotted.
4. Discussions
Since the regularized solution is only an approximation of the true value, it inevitably introduces bias. In Figure 13, we show the fractional difference between the Tikhonov regularized map from the noise-free data and the input map, which gives the bias from regularization. We can see that the major bias is around the bright point sources and in the region near the horizon. The bias around the bright point sources comes from their convolution with the point-spread function, which is expected to be mitigated through further deconvolution techniques such as the CLEAN algorithm.
Figure 10. The reconstructed regularized maps (top) and their fractional error (bottom) using automatically determined regularization parameters. Left: Tikhonov regularization with λ from the GCV method; Middle: Tikhonov regularization with λ from the L-curve method; Right: TSVD regularization with the threshold given by the L-curve method, all for the data with the 30 days integration noise level.
Figure 11. The overall (summing all m-modes) L-curve of Tikhonov regularization for the data containing 1 day (left) and 30 days integration noise (right); the red star marks the regularization parameter determined from each curve.
Figure 12. The Tikhonov regularized maps made with the optimal λ determined from the overall L-curve for the data with 1 day (left) and 30 days noise (right). A larger regularization parameter (left) may result in an obvious alias of the Galactic Center in the map.
We can also quantify the quality of the reconstructed map by using the angular cross power spectrum between the input map and the reconstructed maps. In the cross power spectrum, the noise from the maps being crossed is uncorrelated, so it does not contribute to the cross power. In Figure 14, we show the cross-correlation coefficients, i.e., the ratio of the cross-correlation power between the reconstructed and input maps to the square root of the product of the corresponding auto power spectra. For the Tikhonov regularization, we show the cases where the regularization parameter λ is determined m-by-m using the GCV and L-curve criteria, and for the TSVD regularization, the truncation threshold is determined based on the L-curve criterion.
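(In the notation assumed here, with superscripts "rec" and "in" denoting the reconstructed and input maps, this cross-correlation coefficient can be written as)

$$ r_{\ell} = \frac{C_{\ell}^{\rm rec\times in}}{\sqrt{C_{\ell}^{\rm rec}\,C_{\ell}^{\rm in}}}. $$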
In all cases, the correlation falls below 1 as l increases, which means that the correlation is not perfect due to the reconstruction error. There is a general trend that all three curves follow. The result obtained from TSVD regularization has a correlation coefficient close to that of the Tikhonov regularization with the λ determined by the L-curve at l ≤ 470, but beyond this scale there is a significant deterioration. The result of the Tikhonov regularization with λ determined by the GCV criterion exhibits higher correlation in the range l ≤ 150, but shows lower correlation in the range 200 ≤ l ≤ 550 than that of the L-curve criterion, where the noise is prone to be amplified. However, despite the comparable performance as measured by the cross-correlation, the Tikhonov regularization produces a better visual impression.
Figure 14. The cross-correlation coefficients for the Tikhonov regularized maps with the parameter determined using the GCV and L-curve criteria, and the TSVD regularized maps with the parameter determined using the L-curve criterion. All angular power spectra are computed for the observable part of the sky (δ ≳ −45°) and with the four bright point sources subtracted.
5. Conclusion
In this paper we investigated the regularization methods applied to the sky map reconstruction from the radio interferometer data taken by the Tianlai cylinder pathfinder array. Due to the incomplete uv-coverage from the limited number of baselines, regularization is generally necessary in the reconstruction of the sky from interferometer data, but the strategy and method adopted differ case by case. In our previous paper (Yu et al. 2023), we investigated the map reconstruction for the Tianlai cylinder array, with emphasis on assessing the impact of array calibration error and noise on the final map; there we only used the Moore–Penrose pseudo-inverse method in the map reconstruction. However, there are various different regularization methods, which also affect the map-making result. The exploration of the different regularization methods is the subject of the present paper.
In this work we have studied the TSVD regularization and the Tikhonov regularization applied to our map reconstruction. The TSVD regularization is in some sense similar or equivalent to the Moore–Penrose pseudo-inverse used in paper I, which removes the modes susceptible to noise. The Tikhonov regularization, on the other hand, suppresses modes gradually; those susceptible to noise are more suppressed but not completely removed, consequently yielding a more smoothly regularized map. This smooth approach of the Tikhonov regularization can avoid the generation of the obvious artifacts caused by the sharp cut-off used in the TSVD regularization, producing visually better maps, though it is not necessarily more accurate than the TSVD regularization in a quantitative sense.
To obtain a high-fidelity sky map with the regularization techniques, it is crucial not to over-regularize the data by using an excessively large regularization parameter. However, the map may be greatly affected by the noise if a too small regularization parameter is chosen. We have applied the GCV and L-curve methods to determine the optimal regularization parameter. We find that both methods produce generally good maps for a reasonable level of noise. In our case the L-curve criterion provides a more stable regularization parameter, which also results in a map with a better visual impression. However, we note that this result is specific to the case we investigated. For different data sets, noise levels, etc., the result can be different. Furthermore, although these methods can be used to optimize the parameter selection, they are still based on simple reasoning. To achieve results that better meet specific requirements, additional tuning may be needed.
Acknowledgments
We acknowledge the support of the National SKA Program of China (Nos. 2022SKA0110100 and 2022SKA0110101), and the National Natural Science Foundation of China (Nos. 12203061, 12273070, and 12303004).