{"id":14555,"date":"2020-05-18T08:39:11","date_gmt":"2020-05-18T03:09:11","guid":{"rendered":"https:\/\/www.mygreatlearning.com\/blog\/bagging-boosting\/"},"modified":"2024-10-15T01:14:40","modified_gmt":"2024-10-14T19:44:40","slug":"bagging-boosting","status":"publish","type":"post","link":"https:\/\/www.mygreatlearning.com\/blog\/bagging-boosting\/","title":{"rendered":"Understanding the Ensemble method Bagging and Boosting"},"content":{"rendered":"\n<ol class=\"wp-block-list\">\n<li><a href=\"#sh1\">Bias and Variance\u00a0<\/a><\/li>\n\n\n\n<li><a href=\"#sh2\">Ensemble Methods<\/a><\/li>\n\n\n\n<li><a href=\"#sh3\">Bagging<\/a><\/li>\n\n\n\n<li><a href=\"#sh4\">Boosting<\/a><\/li>\n\n\n\n<li><a href=\"#sh5\">Bagging vs Boosting<\/a><\/li>\n\n\n\n<li><a href=\"#sh6\">Implementation<\/a><\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"bias-and-variance\"><strong>Bias and Variance&nbsp;<\/strong><br><\/h2>\n\n\n\n<p>For the same set of data, different algorithms behave differently. For example, if we want to predict the price of houses for a given dataset, some of the algorithms that can be used are <a href=\"https:\/\/www.mygreatlearning.com\/blog\/linear-regression-in-machine-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Linear Regression (opens in a new tab)\">Linear Regression<\/a> and <a href=\"https:\/\/www.mygreatlearning.com\/blog\/decision-tree-algorithm\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Decision Tree Regressor (opens in a new tab)\">Decision Tree Regressor<\/a>. These two algorithms will interpret the dataset in different ways and thus make different predictions. One of the key distinctions is how much bias and variance they produce.<br><\/p>\n\n\n\n<p>There are three types of prediction error: bias, variance, and irreducible error. Irreducible error, also known as \"noise,\" can't be reduced by the choice of algorithm. 
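To make this three-way decomposition concrete, here is a minimal simulation sketch; the sine target, noise level, query point, and depth-2 tree are illustrative assumptions, not from the article. It retrains the same model on many independently drawn training sets and estimates the squared bias and the variance of its prediction at one point, alongside the fixed noise floor.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

true_f = np.sin                      # assumed "true" target function
noise_sd = 0.3                       # irreducible error (noise)
x0 = np.array([[2.0]])               # query point to evaluate at

preds = []
for seed in range(200):              # 200 independent training sets
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 6, size=(50, 1))
    y = true_f(X).ravel() + rng.normal(0, noise_sd, size=50)
    model = DecisionTreeRegressor(max_depth=2).fit(X, y)
    preds.append(model.predict(x0)[0])

preds = np.array(preds)
bias_sq = (preds.mean() - true_f(x0[0, 0])) ** 2   # squared bias at x0
variance = preds.var()                              # spread across training sets
print(f"bias^2={bias_sq:.4f}  variance={variance:.4f}  noise={noise_sd**2:.4f}")
```

Making the tree deeper would shrink the bias term and grow the variance term, while the noise term stays fixed no matter which model is used.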
The other two types of errors, however, can be reduced because they stem from your algorithm choice.<\/p>\n\n\n\n<figure class=\"wp-block-pullquote has-vivid-cyan-blue-color has-text-color\"><blockquote><p>\u201c\u2026 bias is an error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting). \u201d<br><\/p><cite>Wikipedia<\/cite><\/blockquote><\/figure>\n\n\n\n<p>Bias is an assumption made by a model to make the target function easier to learn. Models with high bias are less flexible and are not fully able to learn from the training data.<\/p>\n\n\n\n<p>In the given figure, we use a linear model, such as linear regression, to fit the data. As we can see, the regression line fails to fit the majority of the data points and thus this model has high bias and low learning power. Generally, models with low bias are preferred.<\/p>\n\n\n\n<figure class=\"wp-block-pullquote is-style-default has-vivid-cyan-blue-color has-text-color\"><blockquote><p>&nbsp;\u201c\u2026 variance is an error from sensitivity to small fluctuations in the training set. High variance can cause an algorithm to model the random noise in the training data, rather than the intended outputs (overfitting). \u201d&nbsp;<br><\/p><cite>Wikipedia<\/cite><\/blockquote><\/figure>\n\n\n\n<p>Variance defines the deviation in prediction when switching from one training dataset to another; in other words, it is how much the predictions of a model change when it is trained on different data. 
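This sensitivity can be measured directly. The sketch below (synthetic sine data and the choice of models are illustrative assumptions) retrains each model on bootstrap resamples and compares the spread of its predictions at a single point; the flexible tree's prediction varies much more across datasets than the rigid linear model's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, size=200)
x0 = np.array([[5.0]])                     # fixed query point

def prediction_spread(make_model, n_sets=50):
    """Std-dev of the model's prediction at x0 across bootstrap datasets."""
    preds = []
    for seed in range(n_sets):
        idx = np.random.default_rng(seed).integers(0, len(X), size=len(X))
        model = make_model().fit(X[idx], y[idx])
        preds.append(model.predict(x0)[0])
    return float(np.std(preds))

linear_spread = prediction_spread(LinearRegression)      # high bias, low variance
tree_spread = prediction_spread(DecisionTreeRegressor)   # low bias, high variance
print(linear_spread, tree_spread)
```

The unpruned tree effectively memorizes whichever points landed in each resample, so its prediction at x0 jumps around with the data; the linear fit barely moves.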
It can also be defined as the amount by which the estimate of the target function changes if different training data is used.<\/p>\n\n\n\n<p>In the given figure, we can see that a non-linear model such as SVR (<a href=\"https:\/\/www.mygreatlearning.com\/blog\/introduction-to-support-vector-machine\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Support Vector Regressor (opens in a new tab)\">Support Vector Regressor<\/a>) tries to generate a flexible curve that passes through all the data points. This may seem like the perfect model, but such models are not able to generalize well and perform poorly on data that has not been seen before. Ideally, we want a model with low variance.<\/p>\n\n\n\n<p>However, there is a tradeoff between bias and variance, known as the <a href=\"https:\/\/www.mygreatlearning.com\/blog\/overfitting-and-underfitting-in-machine-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">bias-variance tradeoff<\/a>: when we decrease one, the other tends to increase, and vice versa.&nbsp;<\/p>\n\n\n\n    <div class=\"courses-cta-container\">\n        <div class=\"courses-cta-card\">\n            <div class=\"courses-cta-header\">\n                <div class=\"courses-learn-icon\"><\/div>\n                <span class=\"courses-learn-text\">Texas McCombs, UT Austin<\/span>\n            <\/div>\n            <p class=\"courses-cta-title\">\n                <a href=\"https:\/\/onlineexeced.mccombs.utexas.edu\/online-data-science-business-analytics-course\" class=\"courses-cta-title-link\">Post Graduate Program in Data Science with Generative AI: Applications to Business<\/a>\n            <\/p>\n            <p class=\"courses-cta-description\">Learn how to turn data into strategy in this UT Data Science and Business Analytics Course \u2014 now with a focus on Generative AI. 
Gain practical experience through 7 hands-on projects over a 7-month duration.<\/p>\n            <div class=\"courses-cta-stats\">\n                <div class=\"courses-stat-item\">\n                    <div class=\"courses-stat-icon courses-user-icon\"><\/div>\n                    <span>7 Hands-on Projects<\/span>\n                <\/div>\n                <div class=\"courses-stat-item\">\n                    <div class=\"courses-stat-icon courses-star-icon\"><\/div>\n                    <span>Duration: 7 months<\/span>\n                <\/div>\n            <\/div>\n            <a href=\"https:\/\/onlineexeced.mccombs.utexas.edu\/online-data-science-business-analytics-course\" class=\"courses-cta-button\">\n                Discover the Program\n                <div class=\"courses-arrow-icon\"><\/div>\n            <\/a>\n        <\/div>\n    <\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ensemble-methods\"><strong>Ensemble Methods<\/strong><\/h2>\n\n\n\n<p>The general principle of an ensemble method in <a href=\"https:\/\/www.mygreatlearning.com\/blog\/what-is-machine-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Machine Learning (opens in a new tab)\">Machine Learning<\/a> is to combine the predictions of several models. These models are built with a given learning algorithm in order to improve robustness over a single model. Ensemble methods can be divided into two groups:<br><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Parallel ensemble methods<\/strong>: In these methods, the base learners are generated in parallel. For example, when deciding which movie to watch, you may ask multiple friends for suggestions and watch the movie that gets the most votes.<\/li>\n\n\n\n<li><strong>Sequential ensemble methods:<\/strong> In this technique, different learners learn sequentially, with early learners fitting simple models to the data. Then the data is analyzed for errors. 
The goal is to correct the errors of the prior model. The overall performance can be boosted by giving previously mislabeled examples a higher weight.<\/li>\n<\/ul>\n\n\n\n<p>Most ensemble methods use a single base learning algorithm to produce homogeneous base learners, i.e. learners of the same type, leading to homogeneous ensembles. Examples include <a href=\"https:\/\/www.mygreatlearning.com\/blog\/random-forest-algorithm\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Random forests (opens in a new tab)\">Random forests<\/a> (a parallel ensemble method) and AdaBoost (a sequential ensemble method).<\/p>\n\n\n\n<p>Some methods use heterogeneous learners, i.e. learners of different types, leading to heterogeneous ensembles. For an ensemble to be more accurate than any of its members, the base learners have to be as accurate and as diverse as possible. In <a href=\"https:\/\/www.mygreatlearning.com\/blog\/scikit-machine-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Scikit-learn (opens in a new tab)\">Scikit-learn<\/a>, the voting classifier is an example of an ensemble of heterogeneous learners.&nbsp;<br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"bagging\"><strong>Bagging<\/strong><br><\/h2>\n\n\n\n<p>Bagging (short for Bootstrap Aggregating), a parallel ensemble method, is a way to decrease the variance of the prediction model by generating additional training sets in the training stage. These are produced by random sampling with replacement from the original set, so some observations may be repeated in each new training set. In Bagging, every element has the same probability of appearing in a new dataset. Bagging does not improve the model\u2019s predictive force by simply enlarging the training data; instead, it decreases the variance and narrowly tunes the prediction to an expected outcome.<\/p>\n\n\n\n<p>These multisets of data are used to train multiple models. 
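The bootstrap sampling described above can be sketched in a few lines (the toy dataset and seed are arbitrary): sampling with replacement yields a set of the same size in which some observations repeat and others are left out entirely.

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.arange(10)                             # toy "dataset" of 10 observations

# one bootstrap sample: same size as the original, drawn with replacement,
# so every element has the same probability of being picked on each draw
sample = rng.choice(data, size=len(data), replace=True)

distinct = len(np.unique(sample))                # repeats mean distinct < len(data)
print(sample, "distinct observations:", distinct)
```

Each base model in the bagging ensemble is trained on one such resampled set; for large datasets, roughly 63% of the distinct observations appear in any given bootstrap sample.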
As a result, we end up with an ensemble of different models, which is more robust than a single model. In the case of <a href=\"https:\/\/www.mygreatlearning.com\/blog\/linear-regression-in-machine-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"regression (opens in a new tab)\">regression<\/a>, the prediction is the average of the predictions given by the different models. In the case of classification, the majority vote is taken.<\/p>\n\n\n\n<p>For example, <a href=\"https:\/\/www.mygreatlearning.com\/blog\/decision-tree-algorithm\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Decision tree (opens in a new tab)\">Decision tree<\/a> models tend to have a high variance, so we apply bagging to them. Usually, the Random Forest model is used for this purpose. It is an extension of bagging that grows each tree on a random selection of features rather than using all features. When you have many such random trees, the result is called a Random Forest.&nbsp;<br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"boosting\"><strong>Boosting<\/strong><\/h2>\n\n\n\n<p>Boosting is a sequential ensemble method that in general decreases the bias error and builds strong predictive models. The term 'Boosting' refers to a family of algorithms which convert a weak learner into a strong learner.<br><\/p>\n\n\n\n<p>Boosting trains multiple learners on weighted data samples, so some samples may take part in the new sets more often.<br><\/p>\n\n\n\n<p>In each iteration, data points that are mispredicted are identified and their weights are increased so that the next learner pays extra attention to getting them right. The following figure illustrates the boosting process.&nbsp;<\/p>\n\n\n\n<p>During training, the algorithm allocates a weight to each resulting model. 
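A minimal sketch of one such reweighting step, assuming an AdaBoost-style update with a decision stump as the weak learner (the tiny dataset here is purely illustrative): misclassified points receive a higher weight, and the learner's own vote weight grows as its weighted error shrinks.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# tiny illustrative dataset (not from the article)
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 1, 0, 1, 1])
w = np.full(len(y), 1.0 / len(y))            # start with uniform sample weights

stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
miss = stump.predict(X) != y                 # which points were mispredicted
err = w[miss].sum()                          # weighted error rate of this learner
alpha = 0.5 * np.log((1 - err) / err)        # the learner's vote weight

w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))  # raise weight on misses
w = w / w.sum()                              # renormalize to sum to 1
# after this update the misclassified points carry half of the total weight,
# forcing the next weak learner to pay extra attention to them
print(w)
```

This also illustrates the less-than-50%-error condition mentioned below: `alpha` is only positive, i.e. the learner only gets a useful vote, when its weighted error is below 0.5.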
A learner with good prediction results on the training data will be assigned a higher weight than a poor one, so when evaluating a new learner, Boosting also needs to keep track of each learner\u2019s errors.<br><\/p>\n\n\n\n<p>Some Boosting techniques include an extra condition to keep or discard a single learner. For example, in AdaBoost an error of less than 50% is required to keep the model; otherwise, the iteration is repeated until a learner better than a random guess is achieved.<br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"bagging-vs-boosting\"><strong>Bagging vs Boosting<\/strong><br><\/h2>\n\n\n\n<p>There\u2019s no outright winner; it depends on the data, the simulation, and the circumstances. <a href=\"https:\/\/www.mygreatlearning.com\/academy\/learn-for-free\/courses\/bagging-and-boosting\" target=\"_blank\" rel=\"noreferrer noopener\">Bagging and Boosting in machine learning<\/a> decrease the variance of a single estimate, as they combine several estimates from different models. As a result, the performance of the model increases, and the predictions are much more robust and stable.<br><\/p>\n\n\n\n<p>But how do we measure the performance of a model? One way is to compare its training accuracy with its validation accuracy, which is done by splitting the data into two sets: a training set and a validation set.&nbsp;<br><\/p>\n\n\n\n<p>The model is trained on the training set and evaluated on the validation set. The training accuracy, measured on the training set, tells us how well the model can fit the training data. The validation accuracy, measured on the validation set, reveals the generalization ability of the model. This ability to generalize is crucial to the success of a model. 
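As a quick sketch of this measurement (the iris data and the 70/30 split ratio are arbitrary choices for illustration): an unpruned decision tree fits the training set perfectly, while its validation accuracy shows how well it generalizes to data it has never seen.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# hold out 30% of the data as a validation set
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = clf.score(X_train, y_train)   # accuracy on seen data
val_acc = clf.score(X_val, y_val)         # generalization to unseen data
print("train:", train_acc, "validation:", val_acc)
```

A large gap between the two numbers is the usual symptom of high variance, which is exactly what bagging is meant to reduce.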
Thus, we can say that the performance of a model is good if it can fit the training data well and also predict unknown data points accurately.<br><\/p>\n\n\n\n<p>If a single model gets low performance, Bagging rarely achieves a better bias; however, Boosting can generate a combined model with lower errors, as it optimizes the advantages and reduces the pitfalls of the single model. On the other hand, Bagging can increase the generalization ability of the model and help it better predict unknown samples. Let us see an example of this in the next section.<\/p>\n\n\n\n    <div class=\"courses-cta-container\">\n        <div class=\"courses-cta-card\">\n            <div class=\"courses-cta-header\">\n                <div class=\"courses-learn-icon\"><\/div>\n                <span class=\"courses-learn-text\">Advance Data Science with MIT<\/span>\n            <\/div>\n            <p class=\"courses-cta-title\">\n                <a href=\"https:\/\/idss-gl.mit.edu\/mit-idss-data-science-machine-learning-online-program\" class=\"courses-cta-title-link\">MIT Data Science and Machine Learning Course<\/a>\n            <\/p>\n            <p class=\"courses-cta-description\">Unlock the power of data. 
Build hands-on data science and machine learning skills to drive innovation in your career.<\/p>\n            <div class=\"courses-cta-stats\">\n                <div class=\"courses-stat-item\">\n                    <div class=\"courses-stat-icon courses-user-icon\"><\/div>\n                    <span>Duration: 12 weeks<\/span>\n                <\/div>\n                <div class=\"courses-stat-item\">\n                    <div class=\"courses-stat-icon courses-star-icon\"><\/div>\n                    <span>4.62\/5 Rating<\/span>\n                <\/div>\n            <\/div>\n            <a href=\"https:\/\/idss-gl.mit.edu\/mit-idss-data-science-machine-learning-online-program\" class=\"courses-cta-button\">\n                Discover the Program\n                <div class=\"courses-arrow-icon\"><\/div>\n            <\/a>\n        <\/div>\n    <\/div>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"implementation\"><strong>Implementation<\/strong><br><\/h2>\n\n\n\n<p>In this section, we demonstrate the effect of Bagging and Boosting on the decision boundary of a classifier. Let us start by introducing some of the algorithms used in this code.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Decision Tree Classifier: <\/strong>Decision Tree Classifier is a simple and widely used classification technique. It applies a straightforward idea to solve the classification problem. Decision Tree Classifier poses a series of carefully crafted questions about the attributes of the test record. Each time it receives an answer, a follow-up question is asked until a conclusion about the class label of the record is reached.<\/li>\n\n\n\n<li><strong>Decision Stump:<\/strong> A decision stump is a machine learning model consisting of a one-level decision tree. That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). 
A decision stump makes a prediction based on the value of just a single input feature. Here we take a decision stump as the weak learner for the AdaBoost algorithm.<\/li>\n\n\n\n<li><strong>RandomForest:<\/strong> Random forest is an ensemble learning algorithm that uses the concept of Bagging.<\/li>\n\n\n\n<li><strong>AdaBoost:<\/strong> AdaBoost, short for Adaptive Boosting, is a machine learning meta-algorithm that works on the principle of Boosting. We use a Decision stump as the weak learner here.<\/li>\n<\/ul>\n\n\n\n<p>Here is a piece of code written in <a href=\"https:\/\/www.mygreatlearning.com\/blog\/python-tutorial-for-beginners-a-complete-guide\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Python (opens in a new tab)\">Python<\/a> which shows&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>How Bagging decreases the variance of a Decision tree classifier and increases its validation accuracy<\/li>\n\n\n\n<li>How Boosting decreases the bias of a Decision stump and increases its training accuracy<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>import matplotlib.pyplot as plt\n# mlxtend is used to plot the decision boundaries\nfrom mlxtend.plotting import plot_decision_regions\nimport matplotlib.gridspec as gridspec\nimport itertools\nfrom sklearn import datasets\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nfrom mlxtend.preprocessing import shuffle_arrays_unison\n\n# Loading some example data\niris = datasets.load_iris()\nX, y = iris.data&#091;:, &#091;0,2]], iris.target\nX, y = shuffle_arrays_unison(arrays=&#091;X, y], random_seed=3)\n# split data into training and validation set\nX_train, y_train = X&#091;:100], y&#091;:100]\nX_test, y_test = X&#091;100:], y&#091;100:]\n\n# define arrangement of subplots\ngs = gridspec.GridSpec(2, 2)\nfig = plt.figure(figsize=(10,8))\n\nlabels = &#091;'DecisionTreeClassifier', 'Random Forest', 'Decision Stump','AdaBoost']\nfor clf, lab, grd in 
zip(&#091;DecisionTreeClassifier(),\n        RandomForestClassifier(n_estimators=100, max_features=2, max_leaf_nodes=3, min_samples_split=5),\n        DecisionTreeClassifier(max_depth=1),\n        AdaBoostClassifier(DecisionTreeClassifier(max_depth=1))],\n        labels,\n        itertools.product(&#091;0, 1], repeat=2)):\n\n    clf.fit(X_train, y_train)\n    print(clf.score(X_train, y_train))\n    print(clf.score(X_test, y_test))\n    ax = plt.subplot(gs&#091;grd&#091;0], grd&#091;1]])\n    fig = plot_decision_regions(X=X, y=y, clf=clf, legend=2)\n    plt.title(lab)\n\nplt.show()\n<\/code><\/pre>\n\n\n\n<p>Output:<\/p>\n\n\n\n<p>Let\u2019s elaborate on these results:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>The Decision Tree classifier fits the training data perfectly but does not score as well in validation accuracy.<\/li>\n\n\n\n<li>Random Forest fits the training data decently and outperforms the decision tree in validation accuracy, which implies that Random Forest is better able to generalize.<\/li>\n\n\n\n<li>The Decision stump performs very poorly in both training and validation accuracy, which means it is not able to fit the training data well.<\/li>\n\n\n\n<li>AdaBoost increases the prediction power of the stump and improves both the training and validation accuracy.<\/li>\n<\/ol>\n\n\n\n<p>From the above plot, we see that the RandomForest algorithm softens the decision boundary and hence decreases the variance of the decision tree model, whereas AdaBoost fits the training data better and hence decreases the bias of the model.<br><\/p>\n\n\n\n<p>This brings us to the end of this article. 
We have learned about Bagging and Boosting techniques to increase the performance of a Machine learning model.<br><\/p>\n\n\n\n<p>If you wish to learn more about Python and the concepts of <a rel=\"noreferrer noopener\" aria-label=\"Machine learning (opens in a new tab)\" href=\"https:\/\/www.mygreatlearning.com\/blog\/what-is-machine-learning\/\" target=\"_blank\">Machine learning<\/a>, upskill with <a href=\"https:\/\/www.mygreatlearning.com\/pg-program-artificial-intelligence-course\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">Great Learning\u2019s PG Program Artificial Intelligence and Machine Learning.&nbsp;<\/a><br><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Bias and Variance&nbsp; For the same set of data, different algorithms behave differently. For example, if we want to predict the price of houses given for some dataset, some of the algorithms that can be used are Linear Regression and Decision Tree Regressor. Both of these algorithms will interpret the dataset in different ways and 
[&hellip;]<\/p>\n","protected":false},"author":41,"featured_media":14193,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[2],"tags":[36799],"content_type":[],"class_list":["post-14555","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","tag-machine-learning"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Understanding the Ensemble method Bagging and Boosting<\/title>\n<meta name=\"description\" content=\"Bagging and Boosting are ensemble techniques that reduce bias and variance of a model. 
It is a way to avoid overfitting and underfitting in Machine Learning models.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.mygreatlearning.com\/blog\/bagging-boosting\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Understanding the Ensemble method Bagging and Boosting\" \/>\n<meta property=\"og:description\" content=\"Bagging and Boosting are ensemble techniques that reduce bias and variance of a model. It is a way to avoid overfitting and underfitting in Machine Learning models.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.mygreatlearning.com\/blog\/bagging-boosting\/\" \/>\n<meta property=\"og:site_name\" content=\"Great Learning Blog: Free Resources what Matters to shape your Career!\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/GreatLearningOfficial\/\" \/>\n<meta property=\"article:published_time\" content=\"2020-05-18T03:09:11+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-10-14T19:44:40+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/04\/iStock-1066422156.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1245\" \/>\n\t<meta property=\"og:image:height\" content=\"843\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Great Learning Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@https:\/\/twitter.com\/Great_Learning\" \/>\n<meta name=\"twitter:site\" content=\"@Great_Learning\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Great Learning Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/bagging-boosting\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/bagging-boosting\\\/\"},\"author\":{\"name\":\"Great Learning Editorial Team\",\"@id\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/#\\\/schema\\\/person\\\/6f993d1be4c584a335951e836f2656ad\"},\"headline\":\"Understanding the Ensemble method Bagging and Boosting\",\"datePublished\":\"2020-05-18T03:09:11+00:00\",\"dateModified\":\"2024-10-14T19:44:40+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/bagging-boosting\\\/\"},\"wordCount\":1798,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/bagging-boosting\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/wp-content\\\/uploads\\\/2020\\\/04\\\/iStock-1066422156.jpg\",\"keywords\":[\"Machine Learning\"],\"articleSection\":[\"AI and Machine Learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/bagging-boosting\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/bagging-boosting\\\/\",\"url\":\"https:\\\/\\\/www.mygreatlearning.com\\\/blog\\\/bagging-boosting\\\/\",\"name\":\"Understanding the Ensemble method Bagging and 