Effective Frameworks Based on Infinite Mixture Model for Real-World Applications
2022-08-24
Norah Saleh Alghamdi, Sami Bourouis and Nizar Bouguila
1 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
3 The Concordia Institute for Information Systems Engineering (CIISE), Concordia University, Montreal, QC H3G 1T7, Canada
Abstract: Interest in automated data classification and identification systems has increased over the past years in conjunction with the high demand for artificial intelligence and security applications. In particular, recognizing human activities with accurate results has become a topic of high interest. Although current tools have achieved remarkable success, this remains a challenging problem due to various uncontrolled environments and conditions. In this paper, two statistical frameworks based on nonparametric hierarchical Bayesian models and the Gamma distribution are proposed to solve real-world applications. In particular, two nonparametric hierarchical Bayesian models based on the Dirichlet process and the Pitman-Yor process are developed. These models are then applied to the problem of modelling grouped data, where observations are organized into groups and these groups are statistically linked by sharing mixture components. The choice of Gamma mixtures is motivated by their flexibility for modelling heavy-tailed distributions. In addition, deploying the Dirichlet process prior is justified by its advantage of automatically finding the right number of components and by its appealing properties. Moreover, a learning step via a variational Bayesian setting is presented in a flexible way. The priors over the parameters are selected appropriately and the posteriors are approximated effectively in closed form. Experimental results based on real-life applications that concern texture classification and human action recognition show the capabilities and effectiveness of the proposed framework.
Keywords: Infinite Gamma mixture model; variational Bayes; hierarchical Dirichlet process; Pitman-Yor process; texture classification; human action recognition
1 Introduction and Literature Review
Data clustering has been the subject of wide research to the present day [1-4]. The goal of clustering is to group the observed data into separate subgroups, such that the data in each subgroup show some similarity to each other. Various clustering approaches have been developed in the past, some of which are based on distance metrics (such as the K-means and SOM algorithms). It is noteworthy that these algorithms are sensitive to noise, are not flexible when dealing with incomplete data, and may fail to capture the heterogeneity inherent in complex datasets. Another important alternative is the model-based clustering approach [5]. It is effective for modelling the structure of clusters since the data are assumed to be produced by some distributions. In particular, finite mixture models (such as the Gaussian mixture model) have been shown to be effective (in terms of discovering complex patterns and grouping them into similar clusters) for many computer vision and pattern recognition applications [6]. Nevertheless, the Gaussian model cannot fit complex non-Gaussian shapes well. To cope with the disadvantages of the conventional Gaussian assumption, many contributions have developed other flexible mixture models that address the non-Gaussian behavior of real datasets [7,8]. For instance, Gamma mixtures (GaMM) have been demonstrated to offer higher flexibility and ease of use than the Gaussian [9] for many image processing and pattern recognition problems. This is due to their compact analytical form, which is able to cover long-tailed distributions and to approximate data with outliers. This mixture has been used with success for a range of interesting problems [7,10-12], especially when dealing with proportional data. An illustration of how mixture models can be applied for data modelling, classification and recognition is given in Fig. 1.
Figure 1: Block diagram of application of mixture model for data classification and recognition
Unfortunately, the inadequacy of finite mixture models becomes apparent when selecting the appropriate number of mixture components. In other words, model selection (i.e., determining the model's complexity) is one of the difficult problems within finite mixture models. The crucial question of "how many groups are in the dataset?" still remains of great interest for various data mining fields, since choosing an inappropriate number of clusters may lead to poor generalization capability. This problem can be solved by considering an infinite number of components via nonparametric Bayesian methods [13,14], principally within the "Dirichlet process (DP)" [15]. Indeed, a Dirichlet process is a parameterized stochastic process characterized by a base distribution, and it can be defined as a probability distribution over discrete distributions. Numerous studies have been devoted to infinite mixture models, which emerged to cope with the challenging problem of model selection.
Two sound hierarchical Bayesian alternatives to the conventional DP, named the hierarchical Dirichlet process (HDP) and the hierarchical Pitman-Yor process (HPYP), have shown encouraging results, especially when modelling grouped data [16,17]. Indeed, within hierarchical models, mixture components can be shared across data groups (i.e., parameters are shared among groups). In such cases, it is possible to build a Bayesian hierarchy on the DP, in which the base distribution of the DP is itself distributed according to another DP.
Thus, the main contributions of this manuscript are, first, to extend our previous works on the Gamma mixture by investigating two efficient nonparametric hierarchical Bayesian models based on Dirichlet and Pitman-Yor processes mixtures of Gamma distributions. Indeed, in order to reach enhanced modeling performance, we consider the Gamma distribution, which is able to cover long-tailed distributions and to approximate visual vectors accurately. Another critical issue when dealing with mixture models is the estimation of the model parameters. Accordingly, we develop an effective variational Bayes learning algorithm to estimate the parameters of the implemented models. It is noteworthy that the complexity of variational inference remains lower than that of Markov chain Monte Carlo-based Bayesian inference and leads to faster convergence. Finally, the implemented hierarchical Bayesian models and the variational inference approaches are validated on challenging real-life problems, namely human activity recognition and texture categorization.
This manuscript is organized as follows. In Section 2 we introduce the hierarchical DP and PYP mixtures of Gamma distributions, which are based on the stick-breaking construction. In Section 3, we describe the details of our variational Bayes learning framework. Section 4 reports the obtained results, which are based on two challenging applications, to verify the merits and effectiveness of our framework, and Section 5 concludes the manuscript.
2 Model Specification
In this section, we start by briefly presenting the finite Gamma mixture model, and then we present our nonparametric frameworks based on hierarchical Dirichlet and Pitman-Yor processes mixtures.
2.1 Finite Gamma Mixture Model
If a D-dimensional random vector Y = (Y1, ..., YD) is distributed according to a multidimensional Gamma distribution, then its probability density function (pdf) is defined as:
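Assuming, as in related Gamma-mixture works [9], that the D dimensions are modelled by independent Gamma densities with shape parameters α = (α1, ..., αD) and rate parameters β = (β1, ..., βD), the pdf takes the standard form

p(\vec{Y} \mid \vec{\alpha}, \vec{\beta}) = \prod_{d=1}^{D} \frac{\beta_d^{\alpha_d}}{\Gamma(\alpha_d)} \, Y_d^{\alpha_d - 1} \, e^{-\beta_d Y_d}

where Γ(·) is the Gamma function. A finite Gamma mixture is then the usual weighted sum of such densities, with positive mixing weights that sum to one.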
2.2 Hierarchical Dirichlet Process Mixture Model
The hierarchical Dirichlet process (HDP) is an effective nonparametric Bayesian method for modelling grouped data, which allows the mixture models to share components. Here, observed data are arranged into groups (i.e., one mixture model per group) that we want to link statistically. The HDP is built on the Dirichlet process (DP), as described in [17], for each group of data. It is noteworthy that the DP has acquired popularity in machine learning for handling nonparametric problems [18]. The DP was presented as a prior on probability distributions, and this makes it extremely appropriate for specifying infinite mixture models thanks to the use of the stick-breaking process [19]. In the case of the hierarchical Dirichlet process (HDP), the DPs for all groups share a base distribution which is itself distributed according to a Dirichlet process. Let us assume that we have a grouped data set Y separated into M groups, such that each group is associated with a DP Gj. The HDP is thus an indexed set of DPs Gj that share a global (or base) distribution G0, which is itself distributed as a DP with base distribution H and concentration parameter γ:
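In the standard notation of [17], and writing λ for the group-level concentration parameter (the symbol used in the experimental settings of Section 4), this two-level construction can be summarized as

G_0 \mid \gamma, H \sim \mathrm{DP}(\gamma, H), \qquad G_j \mid \lambda, G_0 \sim \mathrm{DP}(\lambda, G_0), \quad j = 1, \ldots, M.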
Here, the hierarchical Dirichlet process is represented using the stick-breaking construction [19,20]. The global measure G0 is distributed according to DP(γ, H) and it can be expressed as
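In its standard stick-breaking form, this representation is

G_0 = \sum_{k=1}^{\infty} \psi_k \, \delta_{\Omega_k}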
where δΩk represents an atom concentrated at Ωk, and {Ωk} is a set of independent random variables drawn from H. The variables {ψk} are the stick-breaking weights, which are positive and sum to one. As G0 is defined as the base distribution of the DP Gj and has the stick-breaking representation shown in Eq. (3), Gj includes all the atoms Ωk with distinct weights (by the properties of the Dirichlet process [18]). On the other side, we carry out another stick-breaking process to construct each group-level DP Gj according to [21] as
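Following the same construction, each group-level DP can be written as

G_j = \sum_{t=1}^{\infty} \pi_{jt} \, \delta_{\omega_{jt}}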
where {πjt} is a set of stick-breaking weights, which shall be positive and sum to one, and δωjt are group-level Dirac delta atoms at ωjt. As ωjt (a group-level atom) is distributed according to G0, it takes the value Ωk (a base-level atom) with probability ψk.
Next, we introduce a latent indicator Wjtk as an indicator variable, such that Wjtk ∈ {0,1} (in order to indicate which global-level atom the group-level atom maps to). Wjtk = 1 if ωjt (group-level atom) maps to the Ωk (global-level atom) indexed by k; otherwise, Wjtk = 0. Accordingly, ωjt takes the value of the Ωk selected by Wjtk, so there is no need to keep a separate representation for ωjt. The indicator variable Wjt = (Wjt1, Wjt2, ...) is distributed as:
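Since ωjt takes the value Ωk with probability ψk, the standard multinomial form of this distribution is

p(\vec{W}_{jt} \mid \vec{\psi}) = \prod_{k=1}^{\infty} \psi_k^{W_{jtk}}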
Given that ψ is a function of ψ′ according to Eq. (4), it is possible to rewrite the distribution of the indicator variable Wjt directly in terms of ψ′.
According to Eq. (4), the stick lengths ψ′ are drawn from a specific Beta distribution and the weights ψ are determined as
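In the usual stick-breaking notation, this is

\psi'_k \sim \mathrm{Beta}(1, \gamma), \qquad \psi_k = \psi'_k \prod_{s=1}^{k-1} (1 - \psi'_s), \quad k = 1, 2, \ldots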
To complete the description of the HDP mixture model, given a grouped observation (data) Y, we associate each data point Yji with a factor θji (here i indexes the data in each group j), such that Yji and θj = (θj1, θj2, ...) are distributed according to F(θji) and Gj, respectively. In this case the likelihood function can be written as:
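Under these assumptions, the likelihood factorizes over groups and observations as

p(\mathcal{Y} \mid \vec{\theta}) = \prod_{j=1}^{M} \prod_{i=1}^{N_j} F(\vec{Y}_{ji} \mid \theta_{ji}), \qquad \theta_{ji} \sim G_j

where Nj denotes the number of observations in group j (a notation introduced here only for convenience).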
where F(θji) is the probability distribution of Yji given θji. The base distribution H provides the prior distribution for θji. This setting (i.e., the hierarchical Dirichlet process (HDP) mixture model) plays an important role and ensures that each group is associated with a mixture model, and that the components of the mixture are shared across different groups.
As each θji is distributed according to Gj (see Eq. (9)), it takes the value ωjt with probability πjt. We then introduce another indicator variable Zjit ∈ {0,1} for θji as
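By analogy with the global-level indicator Wjt, the usual form of this distribution is

p(\vec{Z}_{ji} \mid \vec{\pi}_j) = \prod_{t=1}^{\infty} \pi_{jt}^{Z_{jit}}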
That is, the indicator Zjit is used to indicate which component θji belongs to. In particular, Zjit is equal to 1 if θji is associated with component t (and thus maps to the group-level atom ωjt); otherwise, Zjit = 0. Thus, θji can be expressed in terms of the group-level atoms ωjt; and since ωjt maps to Ωk, it can equivalently be expressed in terms of the global-level atoms Ωk.
Based on the stick-breaking construction of the DP (see Eq. (5)), π is a function of π′; therefore, the previous equation can be rewritten in terms of π′.
Finally, according to Eq. (5), the prior distribution of π′ is a Beta distribution, given as follows:
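With λ denoting the group-level concentration parameter, the standard form is

\pi'_{jt} \sim \mathrm{Beta}(1, \lambda), \qquad \pi_{jt} = \pi'_{jt} \prod_{s=1}^{t-1} (1 - \pi'_{js})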
2.3 Hierarchical Pitman-Yor Process Mixture Model
The Pitman-Yor process (PYP) [22] is a two-parameter extension (i.e., a generalization) of the DP that permits modelling heavier-tailed distributions. It can be applied to build hierarchical models. It offers a sophisticated way to cluster data when the number of clusters is unknown. It is characterized by a discount parameter γa in addition to the concentration parameter γb, satisfying 0 < γa < 1 and γb > -γa. Similar to the DP, a sample drawn from a PYP is also associated with a base probability measure H [23]. Here, a hierarchical Pitman-Yor process (HPYP) is introduced, in which the base measure of a PYP is itself a draw from a PYP. Specifically, the HPYP defines the global-level measure G0 and the group-level distributions Gj (that is, the indexed Gj share the same base G0, which itself follows a PYP). This behavior makes the HPYP especially suitable for complex visual data modeling and classification. We can use the HPYP to cluster data by applying the stick-breaking construction that defines the base measure as follows:
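In the standard Pitman-Yor stick-breaking form, with discount γa and concentration γb, this base measure reads

G_0 = \sum_{k=1}^{\infty} \eta_k \, \delta_{\Lambda_k}, \qquad \eta_k = \eta'_k \prod_{s=1}^{k-1} (1 - \eta'_s), \qquad \eta'_k \sim \mathrm{Beta}(1 - \gamma_a, \gamma_b + k\gamma_a)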
where {Λk} is a set of independent random variables drawn from H and δΛk is an atom (probability mass) at Λk. The random variables ηk represent the stick-breaking weights, which sum to one. The stick-breaking representation of the group-level PYP Gj is expressed as:
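Assuming the group-level PYP has discount βa and concentration βb (the symbols used in the experimental settings of Section 4), the analogous construction is

G_j = \sum_{t=1}^{\infty} p_{jt} \, \delta_{\psi_{jt}}, \qquad p_{jt} = p'_{jt} \prod_{s=1}^{t-1} (1 - p'_{js}), \qquad p'_{jt} \sim \mathrm{Beta}(1 - \beta_a, \beta_b + t\beta_a)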
where {pjt} are the stick-breaking weights and ψjt is the atom of the second-level PYP, distributed according to G0. Then, a global-level indicator variable I and a group-level indicator variable C are introduced. Here, C is used to map θji to the group-level atom ψjt, and the indicator I is used to map θji to the base-level atom Λk.
2.4 Hierarchical Infinite Gamma Mixture Model
In this subsection, we introduce two hierarchical infinite mixture models with Gamma distributions. In this case, each vector Yji = (Yji1, ..., YjiD) from the grouped data is drawn from a hierarchical infinite Gamma mixture model. Then, the two likelihood functions of these hierarchical models, given the unknown parameters of the Gamma distributions and the latent variables, can be expressed as follows:
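For the HDPGaM case, with the indicators Z and W defined in Section 2.2 and Ga(· | αk, βk) denoting the multidimensional Gamma density of Section 2.1, a standard way to write this likelihood is

p(\mathcal{Y} \mid \vec{Z}, \vec{W}, \vec{\alpha}, \vec{\beta}) = \prod_{j,i} \prod_{t,k} \mathrm{Ga}(\vec{Y}_{ji} \mid \vec{\alpha}_k, \vec{\beta}_k)^{Z_{jit} W_{jtk}}

and the HPYPGaM likelihood has the same form with the HPYP indicators (C, I) in place of (Z, W).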
Next, we have to place conjugate distributions over the unknown parameters α and β. As α and β are positive, it is convenient to assume that they follow Gamma distributions G(·). Thus, we have
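A common choice, written here with hypothetical hyperparameter names u, v, g and h, is

p(\alpha_{kd}) = \mathcal{G}(\alpha_{kd} \mid u_{kd}, v_{kd}), \qquad p(\beta_{kd}) = \mathcal{G}(\beta_{kd} \mid g_{kd}, h_{kd})

for each component k and dimension d.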
3 Model Learning via Variational Bayes
Variational inference [3,24] is a deterministic approximation method used to approximate posterior probabilities via an optimization process. In this section, we develop a variational learning framework for our hierarchical infinite Gamma mixture models. Here, Θ represents both the unknown parameters and the latent variables. Our objective is to estimate a suitable approximation q(Θ) for the true posterior distribution p(Θ|Y) by maximizing the lower bound of ln p(Y), given as
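This is the standard variational lower bound

\mathcal{L}(q) = \int q(\Theta) \ln \frac{p(\mathcal{Y}, \Theta)}{q(\Theta)} \, d\Theta \; \le \; \ln p(\mathcal{Y})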
In particular, we adopt one of the most successful variational inference techniques, namely the factorial (mean-field) approximation [3], which is able to offer effective updates. Thus, we apply this method to fully factorize q(Θ) of the HDPGaM and HPYPGaM mixtures into disjoint factors. Then, we apply a truncation method, as previously used in [20], to truncate the variational approximations at a global truncation level K and a group truncation level T, as follows:
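Following [20], the truncation fixes the last stick length at each level to one, so that components beyond the truncation levels have zero weight:

\psi'_K = 1, \quad \sum_{k=1}^{K} \psi_k = 1, \quad \psi_k = 0 \ \text{for} \ k > K; \qquad \pi'_{jT} = 1, \quad \sum_{t=1}^{T} \pi_{jt} = 1, \quad \pi_{jt} = 0 \ \text{for} \ t > T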
where the truncation levels K and T will be optimized during the learning procedure. The approximated posterior distribution is then factorized as
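For the HDPGaM model, a typical mean-field factorization is (the HPYPGaM case is analogous, with its own indicators and stick lengths)

q(\Theta) = \prod_{j,t} q(\vec{W}_{jt}) \prod_{j,i} q(\vec{Z}_{ji}) \prod_{k} q(\psi'_k) \prod_{j,t} q(\pi'_{jt}) \prod_{k,d} q(\alpha_{kd}) \, q(\beta_{kd})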
For a specific variational factor qs(Θs), the general form of the optimal solution is expressed as [3]:
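This is the usual mean-field update

\ln q_s(\Theta_s) = \big\langle \ln p(\mathcal{Y}, \Theta) \big\rangle_{i \neq s} + \text{const}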
where 〈·〉i≠s denotes an expectation with respect to all the distributions qi(Θi) except for i = s. The parametric forms of the variational posteriors (one for each factor) are determined on the basis of Eq. (26).
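Following the approximations used in [25,26], the variational factors can be taken in the same functional families as the corresponding priors; with hypothetical variational hyperparameters (marked with a star), they can be sketched as

q(\psi'_k) = \mathrm{Beta}(\psi'_k \mid a^*_k, b^*_k), \quad q(\pi'_{jt}) = \mathrm{Beta}(\pi'_{jt} \mid c^*_{jt}, d^*_{jt}), \quad q(\vec{W}_{jt}) = \prod_k \rho_{jtk}^{W_{jtk}}, \quad q(\vec{Z}_{ji}) = \prod_t \vartheta_{jit}^{Z_{jit}}, \quad q(\alpha_{kd}) = \mathcal{G}(\alpha_{kd} \mid u^*_{kd}, v^*_{kd}), \quad q(\beta_{kd}) = \mathcal{G}(\beta_{kd} \mid g^*_{kd}, h^*_{kd})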
where the corresponding hyperparameters in the above equations can be calculated in a similar way as in [25,26]. The complete variational Bayes inference algorithms of both HDPGaM and HPYPGaM are summarized in Algorithm 1 and Algorithm 2, respectively.
Algorithm 1: HDPGaM: Proposed hierarchical Dirichlet process Gamma mixture algorithm
Algorithm 2: HPYPGaM: Proposed hierarchical Pitman-Yor process Gamma mixture algorithm
4 Experimental Results
The principal purpose of this experimental section is to investigate the performance of the two developed frameworks based on the HDP mixture and the HPYP mixture model with Gamma distributions. Hence, we propose to compare them with other statistical models using two challenging applications: texture categorization and human action categorization. In all these experiments, the global truncation level K and the group-level truncation level T are initialized to 120 and 60, respectively. For the HDP mixture, we set the hyperparameters of the stick lengths γ and λ to (0.25, 0.25). The parameters of the HPYP mixture γa, γb, βa and βb are initialized to (0.5, 0.25, 0.5, 0.25). The hyperparameters of the Gamma base distribution are initialized by sampling from the priors.
4.1 Texture Classification
In this work we are primarily motivated by the problem of modeling and classifying texture images. Contrary to natural images, which include certain objects and structures, texture images are a very special case of images that do not include a well-defined shape. The texture pattern is one of the most important elements in visual multimedia content analysis. It forms the basis for solving complex machine learning and computer vision tasks. In particular, texture classification supports a wide range of applications, including information retrieval, image categorization [27-30], image segmentation [27,31,32], material classification [33], facial expression recognition [29], and object detection [28,34]. The goal here is to classify texture images using the two hierarchical infinite mixtures while incorporating three different representations (to extract relevant features from images) from the literature, namely the Local Binary Pattern (LBP) [35], the scale-invariant feature transform (SIFT) [36], and the dense micro-block difference (DMD) [37]. A detailed review of these methods is outside the scope of the current work. Instead, we focus on some powerful feature extraction methods that have shown interesting state-of-the-art results.
4.1.1 Methodology
For this application, we start by extracting features from the input images and then we model them using the proposed HDPGaM and HPYPGaM. Each image Ij is considered as a group and is related to an infinite mixture model Gj. Next, every vector Yji of Ij is assumed to be generated from Gj, where the mixture components of Gj represent visual words. The next step is to generate a global vocabulary that is shared among all groups via G0 (the global infinite model). It is noteworthy that the building of the visual vocabulary, here, is part of our hierarchical models, and therefore the size of the vocabulary (i.e., the number of components) is inferred automatically from the data thanks to the characteristics of nonparametric Bayesian models. Regarding SIFT features, the bag-of-visual-words model is adopted here to calculate the histogram of visual words from each input image. Regarding the set of DMD descriptors, these are obtained after extracting DMD features and then encoding them through the Fisher vector method as proposed in [37]. The resulting descriptors are able to attain good discrimination thanks to their invariance with respect to scale, resolution, and orientation. Finally, each image is represented by a multidimensional vector of high-order statistics. (The Matlab code for the features is available at http://www.cs.tut.fi/ mehta/texturedmd.)
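As an illustrative sketch of the bag-of-visual-words encoding step only (not of the hierarchical models themselves, whose vocabulary is inferred rather than fixed), the following Python snippet shows how word histograms could be built from precomputed SIFT descriptors; the function names and vocabulary size are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, n_words=200, seed=0):
    # Cluster local descriptors from all training images into a visual vocabulary.
    # n_words is a hypothetical size; in our frameworks this is inferred by the model.
    all_descriptors = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_descriptors)

def bow_histogram(descriptors, vocabulary):
    # Quantize one image's descriptors and return a normalized word histogram.
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()

# Usage sketch: descriptor_sets is a list of (n_i x 128) SIFT arrays, one per image.
# vocab = build_vocabulary(descriptor_sets)
# X = np.array([bow_histogram(d, vocab) for d in descriptor_sets])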
4.1.2 Dataset and Results
We conducted our texture classification experiments using the proposed hierarchical HDP Gamma mixture (referred to as HDPGaM) and HPYP Gamma mixture (referred to as HPYPGaM) on three publicly available databases. The first one, namely UIUCTex [38], contains 25 texture classes and each class has 40 images. The second dataset, namely UMD [39], contains 25 texture classes, each with 40 images. The third dataset, namely KTH-TIPS [33], includes 10 classes, each containing 81 images. Some sample texture images from each class and each dataset are shown in Fig. 2. We use a 10-fold cross-validation technique to partition these databases and to study the performance. In addition, the evaluation process and the obtained results are based on 30 runs.
In order to quantify the performance of the proposed frameworks (HDPGaM and HPYPGaM), we proceed by evaluating and comparing the obtained results with seven other methods, namely the infinite mixture of Gaussian distribution (inGM), the infinite mixture of generalized Gaussian distribution (inGGM), the infinite mixture of multivariate generalized Gaussian distribution (inMGGM), the hierarchical Dirichlet process mixture of Gaussian distribution (HDPGM), the hierarchical Pitman-Yor process mixture of Gaussian distribution (HPYPGM), the hierarchical Dirichlet process mixture of generalized Gaussian distribution (HDPGGM), and the hierarchical Pitman-Yor process mixture of generalized Gaussian distribution (HPYPGGM).
Figure 2: Texture samples in different categories for different datasets: (a) KTH-TIPS, (b) UIUCTex, and (c) UMD dataset
We run all methods 30 times and calculate the average classification accuracies, which are reported in Tabs. 1-3, respectively. According to these results, we can notice that HDPGaM and HPYPGaM achieve the highest accuracy for the three databases in terms of the texture classification accuracy rate. When comparing these results using Student's t-test, the differences in performance between our frameworks and the rest of the methods are statistically significant. In particular, the results indicate the benefits of our proposed models in terms of texture modeling and classification capabilities, which surpass those obtained by HDPGM, HDPGGM, HPYPGM, and HPYPGGM. By contrast, the worst performance is obtained with the infinite Gaussian mixture models. It should be noted that the proposed frameworks outperform the other methods for all three adopted feature extraction methods (SIFT, LBP and DMD). Thus, these results confirm the merit of the proposed methods. Due to the effectiveness of the DMD descriptor for describing and modelling the observed texture images, we also find that DMD achieves better accuracy compared with both SIFT and LBP. This shows the merits of DMD, which is able to consider fine image details at different resolutions. We also note that with the HPYP mixture we can reach better results compared to the HDP mixture, and this holds for all tested distributions. This can be explained by the fact that the HPYP mixture model has better generalization capability and better capacity to model heavier-tailed distributions (the PYP prior can lead to better modeling ability).
Table 1: The average accuracy results of texture classification using different algorithms for the 3 texture datasets using SIFT features
Table 2: The average accuracy results of texture classification using different algorithms for the 3 texture datasets using LBP features
Table 3: The average accuracy results of texture classification using different algorithms for the 3 texture datasets using DMD features
4.2 Human Actions Categorization
Visual multimedia recognition has been a challenging research topic with many applications, such as action recognition [40,41], image categorization [42,43], biomedical image recognition [44], and facial expression recognition [29,30]. In this work, we focus on a particular problem that has received a lot of attention, namely human action recognition (HAR) from video sequences. Indeed, the intention of recognizing activities is to identify and analyze various human actions. At present, HAR is one of the hot computer vision topics not only in research but also in industry, where automatic identification of any activity can be useful, for instance, for monitoring, healthcare, robotics, and security-based applications [45]. Recognizing activities manually is very challenging and time consuming. This issue has been addressed by various tools, such as those in [40,45-47]. However, precise recognition of actions still requires advanced and efficient algorithms to deal with complex situations such as noise, occlusions, and lighting changes.
We perform here the recognition of human activities using the proposed frameworks HDPGaM and HPYPGaM. Our methodology is outlined as follows. First, we extract and normalize SIFT3D descriptors [40] from the observed frames. These features are then quantized into visual words via the bag-of-words (BoW) model and the K-means algorithm [48]. Then, a probabilistic Latent Semantic Analysis (pLSA) [49] is adopted to construct a d-dimensional vector. In particular, each image Ij is considered as a "group" and is associated with an infinite mixture model Gj. Thus, we suppose that each SIFT3D feature vector is drawn from the infinite mixture model Gj and that "visual words" denote the mixture components of Gj.
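To make the quantization-plus-pLSA step concrete, the following minimal Python sketch (with hypothetical array names and a fixed number of topics) implements a basic pLSA EM loop on the video-by-word count matrix produced by the BoW step; it illustrates the general pipeline only, not our hierarchical variational inference.

import numpy as np

def plsa(counts, n_topics=20, n_iter=100, seed=0):
    # Basic pLSA via EM. counts: (n_docs x n_words) bag-of-words count matrix.
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)   # P(z|d)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)  # P(w|z)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        resp = joint / (joint.sum(axis=2, keepdims=True) + 1e-12)
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts
        weighted = counts[:, :, None] * resp
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d  # each row is the d-dimensional (topic) representation of one video

# Usage sketch: X = plsa(bow_counts, n_topics=20) gives the vectors fed to HDPGaM/HPYPGaM.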
On the other hand, a global vocabulary is generated and shared between all groups via the global model G0 of the proposed hierarchical models. This setting agrees with the purpose behind the hierarchical process mixture model. It is also noted that the building of the visual vocabulary is part of the hierarchical process mixture models and that this step is not carried out separately via the K-means algorithm as many other approaches do. In fact, it is due to the properties of the nonparametric Bayesian model that the number of components in the global-level mixture model can be deduced from the data. We conducted our action recognition experiments using a publicly available dataset known as the KTH human action dataset [46]. This database contains 2391 sequences of different actions grouped into 6 classes. It also represents four scenarios (outdoors (s1), outdoors with scale variation (s2), outdoors with different clothes (s3), and indoors (s4)). Some scenarios from this dataset are given in Fig. 3. We randomly divided this dataset into 2 subsets to train the developed frameworks and to evaluate their robustness.
Our purpose with this application is to show the advantages of our proposed hierarchical models HDPGaM and HPYPGaM over other conventional hierarchical mixtures and other methods from the state of the art. Therefore, we first evaluated the performance of HDPGaM and HPYPGaM against the hierarchical Dirichlet process mixture of Gaussian distribution (HDPGM), the hierarchical Pitman-Yor process mixture of Gaussian distribution (HPYPGM), the hierarchical Dirichlet process mixture of generalized Gaussian distribution (HDPGGM), and the hierarchical Pitman-Yor process mixture of generalized Gaussian distribution (HPYPGGM). It is noted that we learned all the implemented models using variational Bayes. The average recognition performances of our frameworks and of the models based on the HDP mixture and HPYP mixture are reported in Tab. 4.
Figure 3: Sample frames of the KTH dataset actions with different scenarios
Table 4: Average recognition performance (%) obtained using our frameworks and other models based on HDP mixture and HPYP mixture for KTH database
As we can see in this table, the proposed frameworks offer the highest recognition rates (82.27% for HPYPGaM and 82.13% for HDPGaM) among all tested models. Over different runs, we obtained p-values < 0.05, and therefore the differences in accuracy between our frameworks and the other models are statistically significant according to Student's t-test. Next, we compared our models against other mixture models (here, the finite Gaussian mixture (GMM) and the finite generalized Gaussian mixture (GGMM)) and against methods from the literature. The obtained results are given in Tab. 5.
Table 5: Average recognition performance (%) obtained using our frameworks and other methods from the literature for KTH database
Accordingly, we can observe that our models are again able to provide a higher discrimination rate than the other methods. Clearly, these results confirm the effectiveness of our frameworks for activity modeling and recognition compared to other conventional Dirichlet and Pitman-Yor processes based on the Gaussian distribution. Another remark is that our model HPYPGaM outperforms our second model HDPGaM for this specific application, which demonstrates the advantage of using the hierarchical Pitman-Yor process over the Dirichlet process, being flexible enough for such a recognition problem.
5 Conclusions
In this paper, two nonparametric Bayesian frameworks based on hierarchical Dirichlet and Pitman-Yor processes with Gamma distributions were proposed. The Gamma distribution is considered because of its flexibility for semi-bounded data modelling. Both frameworks are learned using variational inference, which has certain advantages such as easy assessment of convergence and easy optimization, offering a trade-off between frequentist techniques and MCMC-based ones. An important property of our approach is that it does not need the number of mixture components to be specified in advance. We carried out experiments on texture categorization and human action recognition to demonstrate the performance of our models, which can be used further for a variety of other computer vision and pattern recognition applications.
Acknowledgement: The authors would like to thank Taif University Researchers Supporting Project number (TURSP-2020/26), Taif University, Taif, Saudi Arabia. They would also like to thank Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R40), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Funding Statement: The authors would like to thank Taif University Researchers Supporting Project number (TURSP-2020/26), Taif University, Taif, Saudi Arabia. They would also like to thank Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R40), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.