EM Algorithm (Expectation Maximization) and Hierarchical Clustering
EM Algorithm (Expectation Maximization)-
The EM algorithm can be seen as an extension of the K-means algorithm. Instead of assigning each object to exactly one cluster, it assigns each object to every cluster with a weight, so clustering is defined by probability-based, weighted measures of the objects.
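As a quick illustration of this weighted assignment, here is a minimal sketch using scikit-learn's GaussianMixture (assuming scikit-learn and NumPy are installed); the toy data and the choice of two components are illustrative assumptions:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two 2-D Gaussian blobs.
X = np.vstack([
    rng.normal([0, 0], 0.5, size=(100, 2)),
    rng.normal([3, 3], 0.5, size=(100, 2)),
])

# fit() runs EM internally, alternating E-steps and M-steps.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Unlike K-means, each object gets a weight (probability) per cluster.
weights = gmm.predict_proba(X)
print("First object's cluster weights:", weights[0])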
Hierarchical Clustering Method-
This method works by grouping data objects into a tree (hierarchy) of clusters. It is divided into two types-
1. Agglomerative Hierarchical Clustering
2. Divisive Hierarchical Clustering
1. Agglomerative Hierarchical Clustering-
It follows a bottom-up strategy: small atomic clusters are merged into larger and larger clusters, and the merging is repeated until a termination condition holds.
2. Divisive Hierarchical Clustering-
It follows a top-down strategy and is the reverse of agglomerative hierarchical clustering: it starts with all objects in one cluster and subdivides that cluster into smaller and smaller units until a termination condition is satisfied, as sketched below.
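As a small illustration of the bottom-up strategy, this sketch uses scikit-learn's AgglomerativeClustering (assumed installed); the toy data and the cluster count are illustrative assumptions. Divisive clustering is rarely provided by common libraries, so only the agglomerative side is shown:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Toy data: two 2-D blobs.
X = np.vstack([
    rng.normal([0, 0], 0.3, size=(15, 2)),
    rng.normal([4, 4], 0.3, size=(15, 2)),
])

# Each object starts as its own atomic cluster; the closest clusters are
# merged repeatedly until only n_clusters remain (the termination condition).
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)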
Explanation:
Clustering is a fundamental task in data mining and machine learning that groups similar data points into clusters without predefined labels. Two widely used clustering approaches are the EM Algorithm (Expectation Maximization) and Hierarchical Clustering. Both play a key role in pattern discovery and data analysis, though they differ in their principles and applications.
EM Algorithm (Expectation Maximization)
The Expectation Maximization (EM) algorithm is an iterative optimization method for finding maximum-likelihood estimates of the parameters of probabilistic models, especially when the data contains hidden (latent) variables. In clustering, EM is often applied through the Gaussian Mixture Model (GMM), where the data is assumed to be generated from a mixture of several Gaussian distributions.
The EM algorithm operates in two main steps:
- Expectation (E-step): Estimates the probability that each data point belongs to a specific cluster based on current parameters (means, variances, and probabilities).
- Maximization (M-step): Updates the parameters of each cluster to maximize the likelihood of the observed data, given the current assignments.
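To make the two steps concrete, below is a minimal from-scratch sketch of the EM loop for a one-dimensional Gaussian mixture; all names, the toy data, and the fixed iteration count are illustrative assumptions, not a library API:

import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D data drawn from two Gaussians.
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(5.0, 1.0, 150)])

K = 2
# Initial guesses for mixing weights, means, and variances.
pi = np.full(K, 1.0 / K)
mu = np.array([x.min(), x.max()])
var = np.full(K, x.var())

def gaussian_pdf(x, mean, variance):
    return np.exp(-0.5 * (x - mean) ** 2 / variance) / np.sqrt(2 * np.pi * variance)

for _ in range(100):
    # E-step: r[i, k] = probability that point i belongs to component k,
    # given the current parameters.
    dens = np.stack([pi[k] * gaussian_pdf(x, mu[k], var[k]) for k in range(K)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters to maximize the expected
    # log-likelihood under the soft assignments r.
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print("weights:", pi, "means:", mu, "variances:", var)

Each pass alternates the two steps; in practice one iterates until the log-likelihood stops improving rather than for a fixed count.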
Hierarchical Clustering
Hierarchical Clustering builds a hierarchy of clusters either from the bottom up (agglomerative) or from the top down (divisive).
- Agglomerative clustering starts by treating each data point as a separate cluster and successively merges the closest pairs until all points form a single cluster.
- Divisive clustering, on the other hand, starts with one large cluster and recursively splits it into smaller clusters.
The relationships between clusters are visualized using a dendrogram, a tree-like diagram that shows how clusters are merged or divided at each step. The choice of linkage criteria—such as single linkage, complete linkage, or average linkage—determines how distances between clusters are measured.
Hierarchical clustering does not require specifying the number of clusters in advance and is particularly useful for visualizing the structure of data. However, it can be computationally expensive for large datasets.
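A minimal sketch of agglomerative clustering with SciPy follows (assuming SciPy and NumPy are installed); the toy data and the choice of average linkage are illustrative assumptions:

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(0)
# Toy data: two 2-D blobs.
X = np.vstack([
    rng.normal([0, 0], 0.4, size=(20, 2)),
    rng.normal([3, 3], 0.4, size=(20, 2)),
])

# Bottom-up merging; 'average' linkage measures the distance between two
# clusters as the mean pairwise distance between their members.
Z = linkage(X, method="average")

# Cut the merge tree to obtain a flat clustering with two clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)

# dendrogram(Z) draws the tree-like merge diagram (requires matplotlib).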