This article is all about an end-to-end News Classifier project.
The app is built with Streamlit, containerized with Docker, and deployed on AWS using Fargate.
It aims to give a complete walkthrough of the process.

For this, we will be using a dataset from Kaggle. LINK

Create a .ipynb file

For this, you can use Jupyter Notebook or Colab.
I personally recommend Colab, as most of the packages come pre-installed.

→ Installing Libraries

→ Importing Libraries
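As a rough idea of what these two steps look like in a Colab cell, here is a minimal sketch. The exact packages depend on the approach you take; the choices below (pandas, scikit-learn's TF-IDF and Logistic Regression, plus Streamlit for the app) are assumptions for illustration, not the definitive stack.

# Colab already ships with pandas/scikit-learn; Streamlit usually needs installing.
# !pip install streamlit

import pandas as pd                                            # load the Kaggle dataset
from sklearn.model_selection import train_test_split          # train/test split
from sklearn.feature_extraction.text import TfidfVectorizer   # turn article text into features
from sklearn.linear_model import LogisticRegression           # a simple baseline classifier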


Photo by Tim Gouw on Unsplash

If you’ve been learning data science or have been in the field for some time now, you have probably built tons of classification models and checked their performance with different metrics.

The big 4 metrics:
* Precision
* Recall
* Accuracy
* F1-Score

We know that when we have imbalanced data, accuracy is not the metric we should be looking at, as it can be misleading; instead, we should rely on the F1-Score when our dataset is imbalanced.
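To make that concrete, here is a small illustrative sketch (scikit-learn and made-up labels assumed): accuracy looks great on a heavily imbalanced sample, while recall and F1 tell a different story.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical labels: 1 = minority (positive) class, 0 = majority class
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # heavily imbalanced ground truth
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # a model that mostly predicts the majority class

print("Accuracy :", accuracy_score(y_true, y_pred))   # 0.9 -- looks great, but is misleading
print("Precision:", precision_score(y_true, y_pred))  # 1.0
print("Recall   :", recall_score(y_true, y_pred))     # 0.5 -- half the minority class is missed
print("F1-Score :", f1_score(y_true, y_pred))         # ~0.67 -- a more honest picture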

But is F1 really going to help?

ROC curves are one of the best methods for comparing how good different models are.
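As a rough sketch (a toy imbalanced dataset and two arbitrary models, both assumed purely for illustration), ROC AUC boils that comparison down to a single number per model:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Toy imbalanced dataset, assumed for illustration
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=3)):
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]        # ROC works on scores, not hard labels
    print(type(model).__name__, "AUC:", round(roc_auc_score(y_test, scores), 3))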

Let’s revise our basics.

Also, check out my article on Calculating Accuracy of an ML Model.

  • The…


Content:

  1. What are Ensemble Methods?
  2. Intuition Behind Ensemble Methods!
  3. Different Ensemble Methods
    * Bagging
    → Intuition behind Bagging
    * Boosting
    → Intuition behind Boosting
    * Stacking
    → Intuition behind Stacking
    * Bucket of models

What are Ensemble Methods?

  • Ensemble methods are techniques that create multiple models and then combine them to produce improved results.
  • This approach allows the production of better predictive performance compared to a single model.
  • Ensemble methods usually produce more accurate solutions than a single model would. This has been the case in many machine learning competitions, where the winning solutions used ensemble methods.
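A minimal sketch of the idea, assuming scikit-learn and its built-in breast-cancer toy dataset: three different models are trained and their predictions are combined by majority vote, which typically does better than any single one of them.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import VotingClassifier

X, y = load_breast_cancer(return_X_y=True)

# Three different "views" of the data, combined by majority vote
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=5000)),
    ("dt", DecisionTreeClassifier(max_depth=4)),
    ("nb", GaussianNB()),
])

print("Ensemble CV accuracy:", round(cross_val_score(ensemble, X, y, cv=5).mean(), 3))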

The intuition behind ensemble methods:


In machine learning, many of the algorithms work on the assumption that the data follows a normal distribution.
However, not every machine learning algorithm makes such an assumption about the data distribution beforehand; some learn it directly from the data used for training.
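As a quick illustrative sketch (SciPy and a synthetic skewed feature assumed), you can test a feature for normality and see how a Box-Cox transform, one of the methods listed in the contents below, pulls it closer to normal:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
feature = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # a right-skewed feature

# Shapiro-Wilk test: a small p-value suggests the feature is not normally distributed
stat, p_value = stats.shapiro(feature)
print(f"Shapiro-Wilk p-value before transform: {p_value:.4f}")

# A Box-Cox transform often brings a skewed, positive-valued feature closer to normal
transformed, fitted_lambda = stats.boxcox(feature)
print(f"Shapiro-Wilk p-value after Box-Cox:    {stats.shapiro(transformed)[1]:.4f}")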

Content:

  1. What is the need for and Importance of Gaussian Distribution?
    → What is Gaussian Distribution?
    → Need for Normal Distribution?
    → Importance of Normality in Machine Learning!
  2. Need for Data Transformation!!
  3. Importance of Data Distribution Transformation.
  4. Different methods to Transform the Distribution.
    → The ladder of powers.
    → Box-Cox Transformation Method…


One of the many problems with real-world machine learning classification is imbalanced data.
Data is imbalanced when the classes present in it are disproportionate: the ratio of the classes differs, with one class heavily represented in the dataset and the other only marginally present.
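For instance, with a hypothetical fraud-style target column (pandas assumed), a simple count makes the disproportion obvious:

import pandas as pd

# Hypothetical binary target column from a fraud-detection style dataset
y = pd.Series([0] * 980 + [1] * 20, name="is_fraud")

# Class counts and ratios make the imbalance obvious
print(y.value_counts())
print(y.value_counts(normalize=True))   # 0: 98%, 1: 2% -> heavily imbalanced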

Problems with imbalanced data?


What is Decision Tree?

  • A decision tree is a decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility.
  • Decision tree learning is one of the predictive modelling approaches used in statistics and machine learning. It uses a decision tree to go from observations about an item (represented in the branches) to conclusions about the item’s target value (represented in the leaves).
  • Decision Trees are a non-parametric supervised learning method used for both classification and regression tasks.
  • The goal is to create a model that predicts the value of a target variable…
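Picking up on the bullets above, here is a minimal sketch using scikit-learn's iris toy dataset (an assumption for illustration): fit a shallow tree and print the learned branches that lead to each leaf.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
print("Test accuracy:", round(tree.score(X_test, y_test), 3))

# The learned branches (observations) leading to leaves (target values)
print(export_text(tree, feature_names=load_iris().feature_names))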


In my previous articles, we took an overview of Linear Regression and Logistic Regression.
Let’s look at another algorithm in the regression family.

Content:

  1. What is Polynomial Regression?
  2. Assumptions of Polynomial Regression.
  3. Why do we need Polynomial Regression?
  4. How to find the right degree of the Polynomial Equation?
  5. Math Behind Polynomial Equation.
  6. Cost Function of Polynomial Regression.
  7. Polynomial Regression with Gradient Descent.

What is Polynomial Regression?

  • Polynomial Regression is a form of regression analysis in which the relationship between the independent variables and the dependent variable is modeled as an nth-degree polynomial.
  • Polynomial Regression models are usually fit with the method of least squares. The least square…
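As an illustrative sketch (synthetic quadratic data and scikit-learn assumed): expand the feature into polynomial terms, then fit ordinary least squares on top.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic data following a quadratic trend plus noise
rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 - X.ravel() + rng.normal(scale=0.5, size=100)

# Degree-2 polynomial features, then ordinary least squares on top
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print("Learned coefficients:", model.named_steps["linearregression"].coef_.round(2))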


In my previous blog, I explained Linear Regression and how it works. Let’s see why Logistic Regression is one of the important topics to understand.
Here’s the link to my previous article on Linear Regression in case you missed it.

Content

  1. What is Logistic Regression?
  2. Types of Logistic Regression.
  3. Assumptions of Logistic Regression.
  4. Why not Linear Regression for Classification?
  5. The Logistic Model.
  6. Interpretation of the co-efficients.
  7. Odds Ratio and Logit
  8. Decision Boundary.
  9. Cost Function of Logistic Regression.
  10. Gradient Descent in Logistic Regression.
  11. Evaluating the Logistic Regression Model.

Let’s get Started
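Before diving in, here is a tiny sketch of the logistic model itself, with made-up coefficients, to keep in mind while reading: the log-odds are passed through the sigmoid to get a probability, and a 0.5 threshold gives the predicted class.

import numpy as np

def sigmoid(z):
    """Squashes any real value into the (0, 1) range -- the core of the logistic model."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients for a single-feature model: log-odds = b0 + b1 * x
b0, b1 = -3.0, 1.5
x = np.array([0.0, 1.0, 2.0, 3.0])

probabilities = sigmoid(b0 + b1 * x)
predictions = (probabilities >= 0.5).astype(int)   # 0.5 decision boundary
print(probabilities.round(3), predictions)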


Everyone new to the field of data science or machine learning often starts their journey by learning the linear models from the vast set of algorithms available.
So, let’s start!

Content:

  1. What is Linear Regression?
  2. Assumptions of Linear Regression.
  3. Types of Linear Regression?
  4. Understanding Slopes and Intercepts.
  5. How does a linear Regression Work?
  6. What is a Cost Function?
  7. Linear Regression with Gradient Descent.
  8. Interpreting the Regression Results.

What is Linear Regression?

Linear Regression is a statistical supervised learning technique for predicting a quantitative variable by forming a linear relationship with one or more independent features.
It helps determine:
→ If an independent variable does a good job in…
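A minimal sketch of that idea with synthetic data (NumPy and scikit-learn assumed): fit a line and read off the intercept, slope, and R².

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: house size (sq. ft.) vs. price, roughly linear with noise
rng = np.random.default_rng(7)
size = rng.uniform(500, 3000, size=50).reshape(-1, 1)
price = 50_000 + 120 * size.ravel() + rng.normal(scale=20_000, size=50)

model = LinearRegression().fit(size, price)
print("Intercept:", round(model.intercept_, 2))   # estimate of the baseline price
print("Slope    :", round(model.coef_[0], 2))     # price change per extra square foot
print("R^2      :", round(model.score(size, price), 3))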


As we know, data preprocessing is a very important part of any machine learning lifecycle. Most algorithms expect the data passed to them to be on a certain scale; that is where feature scaling comes into play.
Feature scaling is a method used to scale the range of the independent variables or features of the data, so that all features come down to the same range and no single feature biases the model.
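As a small sketch (scikit-learn assumed, with made-up age and income values), standardization and min-max scaling bring two very differently scaled features onto comparable ranges:

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Two features on wildly different scales: age (years) and income (dollars)
X = np.array([[25, 40_000],
              [35, 120_000],
              [45, 60_000],
              [55, 300_000]], dtype=float)

print(StandardScaler().fit_transform(X).round(2))   # mean 0, std 1 per column
print(MinMaxScaler().fit_transform(X).round(2))     # every column squeezed into [0, 1]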

Why is Feature Scaling needed?

  • The range of values in raw data varies widely, and some machine learning algorithms will not work properly without normalization.

FOR EXAMPLE:
Many…

Abhigyan

An electronics and communication engineer with a passion for data science, I write articles to help people like me understand things in layman's terms.
