Classification and Regression Tree (CART) - Explanation and Advantages
Classification and Regression Tree (CART)-
The Classification and Regression Tree (CART) method is popularly used as an alternative to conventional regression methods. It was introduced by Breiman in 1984. CART follows a different approach for calculating future outcomes: it builds a binary tree in a sequential manner, and every path through the tree represents a classification of the data. The variables are divided in a tree structure to find predicted values for future use.
CART also uses cross-validation to check accuracy. The CART model is a very valuable tool for predictive modelling and data mining. Earlier tree methodologies suffered from problems of accuracy, greediness, and instability at the time of splitting the root node. CART overcomes these drawbacks of tree-based data mining and works well.
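As a rough illustration (not part of the original text), the sketch below shows how cross-validation can be used to check the accuracy of a CART model. It assumes scikit-learn, whose DecisionTreeClassifier implements the CART algorithm, and the library's built-in Iris data set.

# Minimal sketch: checking CART accuracy with 5-fold cross-validation (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                 # example data set
tree = DecisionTreeClassifier(random_state=0)     # CART implementation in scikit-learn

# The data is split into five train/test folds; the mean accuracy over the
# folds estimates how well the tree will generalize to unseen data.
scores = cross_val_score(tree, X, y, cv=5)
print("Mean cross-validated accuracy:", scores.mean())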
Definition of CART-“Builds regression trees when the target attribute is numeric and classification trees when the target attribute is categorical.”
The following steps are followed in the CART method (a short code sketch appears after the list)-
1.Start with the root node.
2.Split each node so that the child nodes have greater purity of data.
3.Assign one of the predefined classes to each node.
4.Stop growing the tree when every part of the data set is represented in the decision tree.
5.Perform optimal selection, i.e. check the errors in the tree.
6.Stop tree building.
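The short sketch below (a hypothetical example, assuming scikit-learn and its built-in Iris data set) walks through these steps: the tree is grown from the root by repeatedly splitting on the attribute that gives the purest child nodes, classes are assigned to the leaves, and the error on held-out data is checked before the tree is accepted.

# Illustrative sketch of the CART steps above (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Steps 1-4: grow a binary tree from the root node, splitting each node on the
# attribute that yields the purest children, until the stopping rule is met.
cart = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
cart.fit(X_train, y_train)

# Step 5: optimal selection - check the errors of the tree on unseen data.
print("Test accuracy:", cart.score(X_test, y_test))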
Advantages of CART-
1.Handles data with any structure.
2.Uses machine-learning techniques.
3.The final result can be summarized with logical if-then conditions (see the sketch below).
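As a small illustration of the third advantage (assuming scikit-learn; not part of the original text), a fitted CART tree can be printed as a set of if-then conditions:

# Hypothetical example: summarizing a fitted CART tree as if-then rules
# (assumes scikit-learn and its built-in Iris data set).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
cart = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text renders the tree as nested "if attribute <= threshold" conditions.
print(export_text(cart, feature_names=list(data.feature_names)))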
Explanation:
The Classification and Regression Tree (CART) is one of the most widely used decision tree techniques in Data Warehousing and Data Mining (DWDM). It is a predictive modeling method used for both classification and regression problems. Developed by Breiman, Friedman, Olshen, and Stone, CART works by splitting a dataset into subsets based on the values of input variables. The final model is represented as a tree, where internal nodes represent tests on attributes, branches represent outcomes of those tests, and leaf nodes represent class labels or predicted values.
In classification, CART is used when the output variable is categorical, such as predicting whether a customer will buy a product or not. In regression, it is used when the output variable is continuous, such as predicting sales or temperature. CART uses the Gini Index for classification tasks and the Mean Squared Error (MSE) for regression tasks to determine the best splits in the data.
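For concreteness (a minimal sketch, not taken from the original text), the two split criteria can be computed as follows; the NumPy helper functions below are illustrative, not library APIs.

import numpy as np

def gini_index(labels):
    # Gini impurity of a node: 1 minus the sum of squared class proportions.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def mse(values):
    # Mean squared error of a node around its mean prediction.
    values = np.asarray(values, dtype=float)
    return np.mean((values - values.mean()) ** 2)

# A pure node has Gini = 0; a 50/50 binary node has Gini = 0.5.
print(gini_index(["yes", "yes", "no", "no"]))   # 0.5
print(mse([10.0, 12.0, 14.0]))                  # 2.666...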
The tree-building process is recursive. It starts with the entire dataset and then splits it into two or more homogeneous groups based on the attribute that provides the maximum information gain or minimum impurity. This process continues until no further meaningful splits can be made or a stopping condition is reached. The resulting tree can sometimes overfit the training data, so a pruning process is applied to simplify the model and improve generalization on unseen data.
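A brief sketch of pruning (an illustrative example, assuming scikit-learn's cost-complexity pruning): a fully grown tree is simplified by choosing a non-zero ccp_alpha, trading tree size against training error so the model generalizes better.

# Hypothetical pruning example (assumes scikit-learn and the Iris data set).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
full_tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Candidate alpha values along the cost-complexity pruning path.
path = full_tree.cost_complexity_pruning_path(X, y)
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]   # a middle value, for illustration

pruned_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X, y)
print("Leaves before pruning:", full_tree.get_n_leaves())
print("Leaves after pruning :", pruned_tree.get_n_leaves())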
CART has several advantages—it is easy to interpret, handles both numerical and categorical data, and requires little data preprocessing. It also provides insight into the most significant attributes affecting predictions. However, its main drawback is sensitivity to small data variations, which can lead to different tree structures.
In DWDM, CART is useful for decision support, trend analysis, and customer segmentation. For example, in business intelligence, it helps classify customers based on purchasing behavior or predict future sales trends. Thus, the CART algorithm plays a vital role in transforming large datasets stored in data warehouses into actionable knowledge, supporting data-driven decision-making.