{"id":15575,"date":"2020-06-06T16:03:41","date_gmt":"2020-06-06T10:33:41","guid":{"rendered":"https:\/\/www.mygreatlearning.com\/blog\/gradient-boosting\/"},"modified":"2024-09-02T15:33:00","modified_gmt":"2024-09-02T10:03:00","slug":"gradient-boosting","status":"publish","type":"post","link":"https:\/\/www.mygreatlearning.com\/blog\/gradient-boosting\/","title":{"rendered":"What is Gradient Boosting and how is it different from AdaBoost?"},"content":{"rendered":"\n<p>Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. There are various ensemble methods such as <a aria-label=\"stacking, blending (opens in a new tab)\" href=\"https:\/\/www.mygreatlearning.com\/blog\/ensemble-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\">stacking, blending<\/a>, <a aria-label=\"bagging, and boosting (opens in a new tab)\" href=\"https:\/\/www.mygreatlearning.com\/blog\/bagging-boosting\/\" target=\"_blank\" rel=\"noreferrer noopener\">bagging and boosting<\/a>. Gradient Boosting, as the name suggests is a boosting method.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"introduction\"><strong>Introduction<\/strong><\/h2>\n\n\n\n<p>Boosting is loosely-defined as a strategy that combines multiple simple models into a single composite model. With the introduction of more simple models, the overall model becomes a stronger predictor. In boosting terminology, the simple models are called weak models or weak learners. Over the last years boosting techniques like <a rel=\"noreferrer noopener\" aria-label=\"AdaBoost (opens in a new tab)\" href=\"https:\/\/www.mygreatlearning.com\/blog\/adaboost-algorithm\/\" target=\"_blank\">AdaBoost<\/a> and XGBoost have become much popular because of their great performance in online competitions like Kaggle. 
The two main boosting algorithms are Adaptive Boosting (AdaBoost) and Gradient Boosting.<\/p>\n\n\n\n<p><em>While there are ample resources available online to help you understand the subject, there\u2019s nothing quite like a certificate. Check out Great Learning\u2019s <a href=\"https:\/\/www.mygreatlearning.com\/pg-program-artificial-intelligence-course\">PG program in Artificial Intelligence and Machine Learning<\/a> to upskill in the domain. This course will help you learn from a top-ranking global school and build job-ready AIML skills. This 12-month program offers a hands-on learning experience with top faculty and mentors. On completion, you will receive a Certificate from The <a href=\"https:\/\/www.mygreatlearning.com\/universities\/utaustin\" target=\"_blank\" rel=\"noreferrer noopener\">University of Texas at Austin<\/a> and Great Lakes Executive Learning.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-is-gradient-boosting\"><strong>What is Gradient Boosting?<\/strong><\/h2>\n\n\n\n<p>The term gradient boosting consists of two sub-terms, gradient and boosting. We already know that gradient boosting is a boosting technique. Let us see how the term \u2018gradient\u2019 fits in.<br><\/p>\n\n\n\n<p>Gradient boosting re-defines boosting as a numerical optimisation problem where the objective is to minimise the loss function of the model by adding weak learners using gradient descent. <a aria-label=\"Gradient descent (opens in a new tab)\" href=\"https:\/\/www.mygreatlearning.com\/blog\/gradient-descent\/\" target=\"_blank\" rel=\"noreferrer noopener\">Gradient descent<\/a> is a first-order iterative optimisation algorithm for finding a local minimum of a differentiable function. 
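<\/p>\n\n\n\n<p>For the squared-error loss, the negative gradient of the loss with respect to the current prediction is simply the residual (actual minus predicted), which is why fitting weak learners to residuals amounts to gradient descent in function space. The following toy snippet (an illustration added here, not part of the original code) shows a gradient-descent update driving a single prediction toward its target:<\/p>\n\n\n\n

```python
# For squared-error loss L(y, F) = 0.5 * (y - F) ** 2, the negative gradient
# with respect to the prediction F is the residual y - F. Repeatedly stepping
# in that direction with a small learning rate moves F toward the target y.

def neg_gradient(y, F):
    """Negative gradient of the squared-error loss w.r.t. the prediction F."""
    return y - F

y_target, F, lr = 88.0, 73.5, 0.1            # values echo the worked example later on
for _ in range(50):
    F += lr * neg_gradient(y_target, F)      # gradient-descent update
print(round(F, 2))                           # close to the target 88
```

\n\n\n\n<p>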
As gradient boosting is based on minimising a loss function, different types of loss functions can be used, resulting in a flexible technique that can be applied to <a aria-label=\"regression (opens in a new tab)\" href=\"https:\/\/www.mygreatlearning.com\/blog\/linear-regression-in-machine-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\">regression<\/a>, <a href=\"https:\/\/www.mygreatlearning.com\/blog\/multiclass-classification-explained\/\">multi-class classification<\/a>, etc.<br><\/p>\n\n\n\n<p>Intuitively, gradient boosting is a stage-wise additive model that generates learners during the learning process (i.e., trees are added one at a time, and existing trees in the model are not changed). The contribution of each weak learner to the ensemble is determined by the gradient descent optimisation process, calculated so as to minimise the overall error of the strong learner.<\/p>\n\n\n\n<p>Gradient boosting does not modify the sample distribution: weak learners train on the remaining residual errors of the strong learner (i.e., the pseudo-residuals). Training on the residuals of the model is an alternative way of giving more importance to the observations that the current model predicts poorly. Intuitively, new weak learners are added to concentrate on the areas where the existing learners are performing poorly. 
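<\/p>\n\n\n\n<p>The stage-wise residual-fitting loop described above can be sketched in a few lines of Python (a simplified, from-scratch illustration using scikit-learn decision trees; the function names are ours, and real implementations add regularisation, subsampling and other refinements):<\/p>\n\n\n\n

```python
# Minimal gradient boosting for regression: each new tree is fit to the
# residuals (the negative gradients of squared-error loss) of the current
# ensemble, and its prediction is added with a small learning rate.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_trees=100, lr=0.1, max_depth=3):
    f0 = y.mean()                              # initial constant prediction
    pred = np.full(len(y), f0, dtype=float)
    trees = []
    for _ in range(n_trees):
        residuals = y - pred                   # pseudo-residuals
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred += lr * tree.predict(X)           # shrunken contribution of the new tree
        trees.append(tree)
    return f0, trees

def boosted_predict(X, f0, trees, lr=0.1):
    return f0 + lr * sum(tree.predict(X) for tree in trees)

# Smoke test on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
f0, trees = fit_gradient_boosting(X, y)
train_mse = np.mean((boosted_predict(X, f0, trees) - y) ** 2)
print(f"train MSE: {train_mse:.4f}")
```

\n\n\n\n<p>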
The contribution of each weak learner to the final prediction is based on a gradient optimisation process to minimise the overall error of the strong learner.<br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"difference-between-gradient-boosting-and-adaptive-boostingadaboost\"><strong>Difference between Gradient Boosting and Adaptive Boosting (AdaBoost)<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Gradient Boosting<\/strong><\/th><th><strong>Adaptive Boosting<\/strong><\/th><\/tr><\/thead><tbody><tr><td>Trains each learner by minimising the loss function of the model (i.e., training on the residuals of the model).<\/td><td>Focuses training on misclassified observations: it alters the distribution of the training dataset to increase the weights on sample observations that are difficult to classify.<\/td><\/tr><tr><td>Weak learners are decision trees constructed in a greedy manner with split points based on purity scores (e.g., Gini, or minimising the loss). Thus, larger trees can be used, with around 4 to 8 levels. Learners should still remain weak, so they should be constrained (e.g., by limiting the maximum number of layers, nodes, splits, or leaf nodes).<\/td><td>The weak learners in the case of adaptive boosting are a very basic form of decision tree known as stumps.<\/td><\/tr><tr><td>All the learners have equal weights in the case of gradient boosting. 
The weight is usually set to the learning rate, which is small in magnitude.<\/td><td>The final prediction is based on a majority vote of the weak learners\u2019 predictions, weighted by their individual accuracy.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"understand-gradient-boosting-algorithm-with-example\"><strong>Understand Gradient Boosting Algorithm with example<\/strong><\/h2>\n\n\n\n<p>Gradient Boosting is used for regression as well as classification tasks. In this section, we are going to see how it is used in regression with the help of an example. Following is a sample from a random dataset where we have to predict the weight of an individual, given the height, favourite colour, and gender of a person. Favourite colour may seem irrelevant, and in practice it is better to remove such features, but here we keep it to see whether the learning algorithm figures that out too. The target variable is the weight, and the remaining columns are the features.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Height(m)<\/td><td>Favourite Color<\/td><td>Gender<\/td><td>Weight(kg)<\/td><\/tr><tr><td>1.6<\/td><td>Blue<\/td><td>Male<\/td><td>88<\/td><\/tr><tr><td>1.6<\/td><td>Green<\/td><td>Female<\/td><td>76<\/td><\/tr><tr><td>1.5<\/td><td>Blue<\/td><td>Female<\/td><td>56<\/td><\/tr><tr><td>1.8<\/td><td>Red<\/td><td>Male<\/td><td>73<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>The algorithm first builds a basic learner to predict the observations in the training dataset. 
Usually, for simplicity, this first prediction is a single constant for all samples in the case of regression; here we use 73.5, close to the average of the target values, as shown below.<br><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Height(m)<\/td><td>Favourite Color<\/td><td>Gender<\/td><td>Weight(kg)<\/td><td>Prediction 1<\/td><\/tr><tr><td>1.6<\/td><td>Blue<\/td><td>Male<\/td><td>88<\/td><td>73.5<\/td><\/tr><tr><td>1.6<\/td><td>Green<\/td><td>Female<\/td><td>76<\/td><td>73.5<\/td><\/tr><tr><td>1.5<\/td><td>Blue<\/td><td>Female<\/td><td>56<\/td><td>73.5<\/td><\/tr><tr><td>1.8<\/td><td>Red<\/td><td>Male<\/td><td>73<\/td><td>73.5<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Then, it calculates the loss (i.e., the difference between the first learner\u2019s prediction and each actual value).<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Height(m)<\/td><td>Favourite Color<\/td><td>Gender<\/td><td>Weight(kg)<\/td><td>Prediction 1<\/td><td>Residuals(1)(New target variable)<\/td><\/tr><tr><td>1.6<\/td><td>Blue<\/td><td>Male<\/td><td>88<\/td><td>73.5<\/td><td>14.5<\/td><\/tr><tr><td>1.6<\/td><td>Green<\/td><td>Female<\/td><td>76<\/td><td>73.5<\/td><td>2.5<\/td><\/tr><tr><td>1.5<\/td><td>Blue<\/td><td>Female<\/td><td>56<\/td><td>73.5<\/td><td>-17.5<\/td><\/tr><tr><td>1.8<\/td><td>Red<\/td><td>Male<\/td><td>73<\/td><td>73.5<\/td><td>-0.5<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>It will then build a second learner that is fitted\/trained on the residual errors (usually known as pseudo-residuals) produced by the first learner, and it continues to do so until a stopping criterion is reached (e.g., the residuals become negligible).<br><\/p>\n\n\n\n<p>Now calculate the new residuals using the new decision tree. We also use a learning rate of 0.1 to avoid big jumps.<br><\/p>\n\n\n\n<p>New residual value for the first sample = 88 - (73.5 + 0.1(14.5)) = 13.05. 
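<\/p>\n\n\n\n<p>The first two rounds of this update can be verified with a few lines of Python (assuming, as the example does, that each tree reproduces its training residuals exactly):<\/p>\n\n\n\n

```python
# Reproduce the toy example: start from the initial prediction 73.5,
# compute residuals, then apply one boosting step with learning rate 0.1
# (each "tree" is assumed to predict its training residuals exactly).
weights = [88.0, 76.0, 56.0, 73.0]
f0, lr = 73.5, 0.1

residuals_1 = [w - f0 for w in weights]        # first residuals
preds = [f0 + lr * r for r in residuals_1]     # updated predictions
residuals_2 = [round(w - p, 2) for w, p in zip(weights, preds)]

print(residuals_1)   # [14.5, 2.5, -17.5, -0.5]
print(residuals_2)   # [13.05, 2.25, -15.75, -0.45]
```

\n\n\n\n<p>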
The rest of the entries are calculated in the same manner.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Height(m)<\/td><td>Favourite Color<\/td><td>Gender<\/td><td>Weight(kg)<\/td><td>Residuals(2)(New target variable)<\/td><\/tr><tr><td>1.6<\/td><td>Blue<\/td><td>Male<\/td><td>88<\/td><td>13.05<\/td><\/tr><tr><td>1.6<\/td><td>Green<\/td><td>Female<\/td><td>76<\/td><td>2.25<\/td><\/tr><tr><td>1.5<\/td><td>Blue<\/td><td>Female<\/td><td>56<\/td><td>-15.75<\/td><\/tr><tr><td>1.8<\/td><td>Red<\/td><td>Male<\/td><td>73<\/td><td>-0.45<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Now, using these residuals, we create a new tree and continue this process until the loss is negligible. Each new set of residuals is calculated using the contributions of all the previous trees.<\/p>\n\n\n\n<p>For example, the next residual of the first record is calculated as (88 - R) = 11.745,<\/p>\n\n\n\n<p>where R = 73.5 + 0.1(14.5) + 0.1(13.05) = 76.255 (the second tree contributes its prediction of 13.05, again shrunk by the learning rate).<br><\/p>\n\n\n\n<p>By training each new learner on the gradient of the error with respect to the predictions of the previous ensemble, it is trained to correct the mistakes of the previous model. You may notice that the errors are gradually decreasing.<br><\/p>\n\n\n\n<p><em>Note: In the above example, we have used a tree with 4 leaf nodes. In practice, trees with around 8 to 32 leaves are common.<\/em><br><\/p>\n\n\n\n<p>This is the core of gradient boosting: it allows many simple learners to compensate for each other\u2019s weaknesses and better fit the data. There are some variants of gradient boosting, and a few of them are briefly explained in the coming sections.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"extreme-gradient-boosting-xgboost\"><strong>Extreme Gradient Boosting (XGBoost)<\/strong><\/h2>\n\n\n\n<p>XGBoost is one of the most popular variants of gradient boosting. 
It is a decision-tree-based ensemble <a href=\"https:\/\/www.mygreatlearning.com\/academy\/learn-for-free\/courses\/machine-learning-algorithms\" target=\"_blank\" rel=\"noreferrer noopener\">Machine Learning algorithm<\/a> that uses a gradient boosting framework. XGBoost is designed to enhance both the performance and speed of a machine learning model. In prediction problems involving unstructured data (images, text, etc.), artificial neural networks tend to outperform all other algorithms or frameworks. However, when it comes to small-to-medium structured\/tabular data, decision-tree-based algorithms are considered best-in-class right now.<br><\/p>\n\n\n\n<p>XGBoost uses pre-sorted and histogram-based algorithms for computing the best split. The histogram-based algorithm buckets all the data points for a feature into discrete bins and uses these bins to find the split value. Also, in XGBoost, trees can have a varying number of terminal nodes, and leaf weights calculated with less evidence are shrunk more heavily.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"lightgbm\"><strong>LightGBM<\/strong><\/h2>\n\n\n\n<p>LightGBM stands for Light Gradient Boosting Machine. It uses a novel technique called Gradient-based One-Side Sampling (GOSS) to filter out data instances when finding a split value. LightGBM carries the prefix \u2018Light\u2019 because of its high speed. It is popular because it can handle large datasets and requires less memory to run.<\/p>\n\n\n\n<p>Another reason for LightGBM\u2019s popularity is its focus on the accuracy of results. It also supports GPU learning, so data scientists widely use it for data science application development.<\/p>\n\n\n\n<p>It can also handle categorical features directly when given the feature names. It does not convert them to one-hot encoding, which makes it much faster. 
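<\/p>\n\n\n\n<p>The histogram-based split finding that both XGBoost and LightGBM rely on can be illustrated with a short NumPy sketch (a pedagogical simplification, not either library\u2019s actual implementation): candidate thresholds are restricted to a small number of bin boundaries rather than every unique feature value.<\/p>\n\n\n\n

```python
# Histogram-style split search: bucket the feature into quantile bins and
# evaluate candidate splits only at bin boundaries, picking the threshold
# that minimises the total squared error of the two resulting groups.
import numpy as np

def best_histogram_split(x, y, n_bins=16):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])  # bin boundaries
    best_t, best_sse = None, np.inf
    for t in np.unique(edges):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 1000)
y = np.where(x < 5.0, 1.0, 3.0) + 0.1 * rng.normal(size=1000)
threshold = best_histogram_split(x, y)
print(f"chosen threshold: {threshold:.2f}")   # near the true breakpoint 5.0
```

\n\n\n\n<p>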
LightGBM uses a special algorithm to find the split value of categorical features. LightGBM grows trees leaf-wise, whereas XGBoost grows them level-wise by default.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"cat-boost\"><strong>CatBoost<\/strong><\/h2>\n\n\n\n<p>CatBoost is an open-sourced machine learning algorithm from Yandex. It can easily integrate with deep learning frameworks like Google\u2019s TensorFlow and Apple\u2019s Core ML. CatBoost can work with diverse data types to help solve a wide range of problems that businesses face today.<\/p>\n\n\n\n<p>CatBoost has the flexibility of taking the indices of categorical columns so that they can be one-hot encoded using one_hot_max_size (one-hot encoding is used for all features with a number of distinct values less than or equal to the given parameter value). Also, if you don\u2019t pass anything in the cat_features argument, CatBoost will treat all the columns as numerical variables.<br><\/p>\n\n\n\n<p>CatBoost deals with categorical features by \u201cgenerating random permutations of the dataset and, for each sample, computing the average label value for the samples with the same category value placed before the given one in the permutation\u201d. It also processes the data with GPU acceleration and discretises features into a fixed number of bins (128 and 32).<br><\/p>\n\n\n\n<p>One main difference between CatBoost and other boosting algorithms is that CatBoost builds symmetric (oblivious) trees. This may sound odd, but it helps in decreasing prediction time, which is extremely important for low-latency environments.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"gradient-boosting-in-python\"><strong>Gradient Boosting in Python<\/strong><\/h2>\n\n\n\n<p>In this section, we are going to compare the performance of AdaBoost and Gradient Boosting on a regression problem. 
Here we specifically use the <a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.datasets.load_diabetes.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\" aria-label=\"diabetes dataset (opens in a new tab)\">diabetes dataset<\/a> from the scikit-learn library to compare the two algorithms. The dataset contains ten baseline variables (age, sex, body mass index, average blood pressure, and six blood serum measurements) obtained for 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.<br><\/p>\n\n\n\n<p>In the following code, we use the <a href=\"https:\/\/www.mygreatlearning.com\/blog\/python-tutorial-for-beginners-a-complete-guide\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Python programming language (opens in a new tab)\">Python programming language<\/a> and the scikit-learn library for the implementation.<br><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from sklearn import datasets, ensemble\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split\n\ndiabetes = datasets.load_diabetes()\nX, y = diabetes.data, diabetes.target\nX_train, X_test, y_train, y_test = train_test_split(\n    X, y, test_size=0.1, random_state=13)\n\nparams = {'n_estimators': 500,\n          'max_depth': 4,\n          'min_samples_split': 5,\n          'learning_rate': 0.01,\n          'loss': 'squared_error'}  # 'ls' in scikit-learn versions before 1.0\n\n# gradient boosting regressor\nreg = ensemble.GradientBoostingRegressor(**params)\nreg.fit(X_train, y_train)\n\n# AdaBoost regressor\nreg1 = ensemble.AdaBoostRegressor()\nreg1.fit(X_train, y_train)\n\nmse = mean_squared_error(y_test, reg.predict(X_test))\nprint(\"The mean squared error (MSE) on test set for gradient boosting: {:.4f}\".format(mse))\n\nmse1 = mean_squared_error(y_test, reg1.predict(X_test))\nprint(\"The mean squared error (MSE) on test set for AdaBoost: {:.4f}\".format(mse1))\n<\/code><\/pre>\n\n\n\n<h3 
class=\"wp-block-heading\" id=\"output\"><strong>Output<\/strong>:<\/h3>\n\n\n<figure class=\"wp-block-image size-large zoomable\" data-full=\"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/28gb.png\"><img decoding=\"async\" width=\"690\" height=\"60\" src=\"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/28gb.png\" alt=\"gradient boosting result\" class=\"wp-image-15581\" srcset=\"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/28gb.png 690w, https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/28gb-300x26.png 300w\" sizes=\"(max-width: 690px) 100vw, 690px\" \/><\/figure>\n\n\n\n<p>From the above results, we conclude that for this particular dataset and hyperparameters, gradient boosting outperformed AdaBoost, though this might always be the case.<\/p>\n\n\n\n<p>This brings us to the end of this article where we have learned about Gradient Boosting, a little about its variants and implementation of Gradient Boosting with SK-learn.<\/p>\n\n\n\n<p><br><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. There are various ensemble methods such as stacking, blending, bagging and boosting. 
Gradient Boosting, as the name suggests, is a boosting method. Introduction Boosting is loosely defined as a strategy that combines multiple simple models into [&hellip;]<\/p>\n","protected":false}}
Learning","item":"https:\/\/www.mygreatlearning.com\/blog\/artificial-intelligence\/"},{"@type":"ListItem","position":3,"name":"What is Gradient Boosting and how is it different from AdaBoost?"}]},{"@type":"WebSite","@id":"https:\/\/www.mygreatlearning.com\/blog\/#website","url":"https:\/\/www.mygreatlearning.com\/blog\/","name":"Great Learning Blog","description":"Learn, Upskill &amp; Career Development Guide and Resources","publisher":{"@id":"https:\/\/www.mygreatlearning.com\/blog\/#organization"},"alternateName":"Great Learning","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.mygreatlearning.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.mygreatlearning.com\/blog\/#organization","name":"Great Learning","url":"https:\/\/www.mygreatlearning.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.mygreatlearning.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2022\/06\/GL-Logo.jpg","contentUrl":"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2022\/06\/GL-Logo.jpg","width":900,"height":900,"caption":"Great Learning"},"image":{"@id":"https:\/\/www.mygreatlearning.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/GreatLearningOfficial\/","https:\/\/x.com\/Great_Learning","https:\/\/www.instagram.com\/greatlearningofficial\/","https:\/\/www.linkedin.com\/school\/great-learning\/","https:\/\/in.pinterest.com\/greatlearning12\/","https:\/\/www.youtube.com\/user\/beaconelearning\/"],"description":"Great Learning is a leading global ed-tech company for professional training and higher education. 
It offers comprehensive, industry-relevant, hands-on learning programs across various business, technology, and interdisciplinary domains driving the digital economy. These programs are developed and offered in collaboration with the world's foremost academic institutions.","email":"info@mygreatlearning.com","legalName":"Great Learning Education Services Pvt. Ltd","foundingDate":"2013-11-29","numberOfEmployees":{"@type":"QuantitativeValue","minValue":"1001","maxValue":"5000"}},{"@type":"Person","@id":"https:\/\/www.mygreatlearning.com\/blog\/#\/schema\/person\/6f993d1be4c584a335951e836f2656ad","name":"Great Learning Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2022\/02\/unnamed.webp","url":"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2022\/02\/unnamed.webp","contentUrl":"https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2022\/02\/unnamed.webp","caption":"Great Learning Editorial Team"},"description":"The Great Learning Editorial Staff includes a dynamic team of subject matter experts, instructors, and education professionals who combine their deep industry knowledge with innovative teaching methods. 
Their mission is to provide learners with the skills and insights needed to excel in their careers, whether through upskilling, reskilling, or transitioning into new fields.","sameAs":["https:\/\/www.mygreatlearning.com\/","https:\/\/in.linkedin.com\/school\/great-learning\/","https:\/\/x.com\/https:\/\/twitter.com\/Great_Learning","https:\/\/www.youtube.com\/channel\/UCObs0kLIrDjX2LLSybqNaEA"],"award":["Best EdTech Company of the Year 2024","Education Economictimes Outstanding Education\/Edtech Solution Provider of the Year 2024","Leading E-learning Platform 2024"],"url":"https:\/\/www.mygreatlearning.com\/blog\/author\/greatlearning\/"}]}},"uagb_featured_image_src":{"full":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325.jpg",1000,667,false],"thumbnail":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325-150x150.jpg",150,150,true],"medium":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325-300x200.jpg",300,200,true],"medium_large":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325-768x512.jpg",768,512,true],"large":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325.jpg",1000,667,false],"1536x1536":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325.jpg",1000,667,false],"2048x2048":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325.jpg",1000,667,false],"web-stories-poster-portrait":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325.jpg",640,427,false],"web-stories-publisher-logo":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_473646325.jpg",96,64,false],"web-stories-thumbnail":["https:\/\/www.mygreatlearning.com\/blog\/wp-content\/uploads\/2020\/05\/shutterstock_47
3646325.jpg",150,100,false]},"uagb_author_info":{"display_name":"Great Learning Editorial Team","author_link":"https:\/\/www.mygreatlearning.com\/blog\/author\/greatlearning\/"},"uagb_comment_info":1,"uagb_excerpt":"Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. There are various ensemble methods such as stacking, blending, bagging and boosting. Gradient Boosting, as the name suggests is a boosting method.&nbsp; Introduction Boosting is loosely-defined as a strategy that combines multiple simple models into&hellip;","_links":{"self":[{"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/posts\/15575","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/users\/41"}],"replies":[{"embeddable":true,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/comments?post=15575"}],"version-history":[{"count":26,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/posts\/15575\/revisions"}],"predecessor-version":[{"id":104969,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/posts\/15575\/revisions\/104969"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/media\/15583"}],"wp:attachment":[{"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/media?parent=15575"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/categories?post=15575"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp-json\/wp\/v2\/tags?post=15575"},{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/www.mygreatlearning.com\/blog\/wp
-json\/wp\/v2\/content_type?post=15575"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}