Data Science is a comparatively new concept in the tech world, and it could be overwhelming for professionals to seek career and interview advice while applying for jobs in this domain. Also, there is a need to acquire a vast range of skills before setting out to prepare for data science interview. Interviewers seek practical knowledge on the data science basics and its industry-applications along with a good knowledge of tools and processes. Here we will provide you with a list of important data science interview questions for freshers as well as experienced candidates that one could face during job interviews. If you are aspiring to be a data scientist then you can start from here.

## Top 22 Data Science Interview Questions and Answers for Freshers

**1. How is Data Science different from Big Data and Data Analytics?**

**Ans.** Data Science utilizes algorithms and tools to draw meaningful and commercially useful insights from raw data. It involves tasks like data modelling, data cleansing, analysis, pre-processing etc.

Big Data is the enormous set of structured, semi-structured, and unstructured data in its raw form generated through various channels.

And finally, Data Analytics provides operational insights into complex business scenarios. It also helps in predicting upcoming opportunities and threats for an organization to exploit.

**2. What is the use of Statistics in Data Science?**

**Ans.** Statistics provides tools and methods to identify patterns and structures in data, offering deeper insight into it. It plays an important role in data acquisition, exploration, analysis, and validation.

**3. What is the importance of Data Cleansing?**

**Ans.** As the name suggests, data cleansing is a process of removing or updating the information that is incorrect, incomplete, duplicated, irrelevant, or formatted improperly. It is very important to improve the quality of data and hence the accuracy and productivity of the processes and organization as a whole.

**Read Also:** Practical Ways to Implement Data Science in Marketing

**4. What is a Linear Regression?**

**Ans.** Linear regression fits a one-degree equation of the form Y = mX + C and is used when the response variable is continuous in nature, for example height, weight, or the number of hours. It is a simple linear regression if it involves one continuous dependent variable and one independent variable, and a multiple linear regression if it has multiple independent variables.
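As a minimal sketch with made-up numbers (hypothetical hours studied vs. exam scores), the coefficients m and C can be estimated by ordinary least squares:

```python
import numpy as np

# Hypothetical data: hours studied (X) vs. exam score (Y)
X = np.array([1, 2, 3, 4, 5], dtype=float)
Y = np.array([52, 55, 61, 64, 68], dtype=float)

# Fit Y = mX + C by ordinary least squares (degree-1 polynomial)
m, C = np.polyfit(X, Y, 1)

# Predict the score for 6 hours of study
prediction = m * 6 + C  # roughly 72.3 for this toy data
```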

**5. What is logistic regression?**

**Ans.** In logistic regression, the outcome, also called the dependent variable, has a limited number of possible values and is categorical in nature, for example yes/no or true/false. The method models the probability of an outcome with the logistic (sigmoid) function, Y = 1 / (1 + e^(−X)).
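A small sketch of the sigmoid function at the heart of logistic regression; its output can be read as a probability and thresholded at 0.5 to get a yes/no prediction:

```python
import numpy as np

def sigmoid(x):
    """Logistic function: maps any real value into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# At x = 0 the probability is exactly 0.5; large positive inputs
# approach 1, large negative inputs approach 0
print(sigmoid(0.0))        # 0.5
print(sigmoid(4.0) > 0.5)  # True -> predict the positive class
```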

**6. Explain Normal Distribution**

**Ans.** Normal Distribution is also called the Gaussian Distribution. It has the following characteristics:

- The mean, median, and mode of the distribution coincide
- The distribution has a bell-shaped curve
- The total area under the curve is 1
- Exactly half of the values are to the right of the centre, and the other half to the left of the centre
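These properties can be checked empirically on a simulated sample (a sketch with an arbitrary seed and sample size):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=100_000)

# The mean and median approximately coincide for a normal sample
print(abs(sample.mean() - np.median(sample)) < 0.05)  # True

# Roughly half of the values fall on each side of the centre
print(abs((sample > 0).mean() - 0.5) < 0.01)  # True
```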

**7. Mention some drawbacks of the linear model**

**Ans.** Here are a few drawbacks of the linear model:

- It assumes linearity of the errors
- It is not usable for binary or count outcomes
- It can’t solve certain overfitting problems

**8. Which one would you choose for text analysis, R or Python?**

**Ans.** Python would be a better choice for text analysis as it has the Pandas library, which provides easy-to-use data structures and high-performance data analysis tools. However, depending on the complexity of the data, one could use whichever of the two suits best.

**9. What steps do you follow while making a decision tree?**

**Ans.** The steps involved in making a decision tree are:

- Pick up the complete data set as input
- Identify a split that would maximize the separation of the classes
- Apply this split to input data
- Re-apply the previous two steps to each of the resulting subsets of data
- Stop when a stopping criterion is met
- Clean up the tree by pruning
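The steps above can be sketched with scikit-learn, which handles the splitting internally; here `max_depth` acts as the stopping criterion (the dataset choice is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Step 1: take the complete data set as input
X, y = load_iris(return_X_y=True)

# Steps 2-5: the tree repeatedly finds and applies class-separating
# splits until the stopping criterion (max_depth) is met; pruning
# can be done afterwards via cost-complexity pruning (ccp_alpha)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

print(tree.score(X, y))  # training accuracy
```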

**10. What is Cross-Validation?**

**Ans.** It is a model validation technique to assess how well the outcomes of a statistical analysis will generalize to an independent data set. It is mostly used where prediction is the goal and one needs to estimate how accurately a predictive model will perform in practice.

The goal here is to define a data set for testing the model in its training phase and limit overfitting and underfitting issues. The validation set and the training set are to be drawn from the same distribution to avoid making things worse.
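A minimal k-fold cross-validation sketch with scikit-learn (the model and dataset are illustrative choices): each of the 5 folds is held out once as a test set while the model trains on the rest.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: returns one held-out accuracy per fold
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)

print(scores.mean())  # estimated generalization accuracy
```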

**Read Also:** Why Data Science Jobs Are in Demand

**11. What is Bias-Variance tradeoff?**

**Ans.** The error introduced in your model because of the over-simplification of the algorithm is known as bias. On the other hand, variance is the error introduced in your model because of the complex nature of the machine learning algorithm. In this case, the model also learns noise and performs poorly on the test dataset.

The bias-variance tradeoff is the optimum balance between bias and variance in a machine learning model. If you try to decrease bias, the variance will increase and vice-versa.

**12. Mention the types of biases that occur during sampling**

**Ans.** The three types of biases that occur during sampling are:

- Self-selection bias
- Undercoverage bias
- Survivorship bias

**13. Explain selection bias**

**Ans.** Selection bias occurs when the research does not have a random selection of participants. It is a distortion of statistical analysis resulting from the method of collecting the sample. Selection bias is also referred to as the selection effect. When professionals fail to take selection bias into account, their conclusions might be inaccurate.

Some of the different types of selection biases are:

- Sampling Bias – A systematic error that results due to a non-random sample
- Data – Occurs when specific data subsets are selected to support a conclusion or reject bad data
- Attrition – Refers to the bias caused due to tests that didn’t run to completion.

**14. What is p-value?**

**Ans.** A p-value helps you determine the strength of your results when you perform a hypothesis test. It is a number between 0 and 1. The claim that is on trial is called the null hypothesis. A low p-value (≤ 0.05) means we can reject the null hypothesis, while a high p-value (> 0.05) means we fail to reject it. A p-value of exactly 0.05 is marginal and could go either way.
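A sketch of a hypothesis test producing a p-value, using SciPy's two-sample t-test on made-up measurements:

```python
from scipy import stats

# Hypothetical samples: page load times (seconds) under two designs
a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
b = [12.9, 13.1, 12.8, 13.0, 13.2, 12.7]

# Null hypothesis: both samples come from the same distribution
t_stat, p_value = stats.ttest_ind(a, b)

# A p-value <= 0.05 lets us reject the null hypothesis
print(p_value <= 0.05)  # True for this clearly separated toy data
```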

**15. What are exploding gradients?**

**Ans.** Exploding Gradients is the problematic scenario where large error gradients accumulate to result in very large updates to the weights of neural network models in the training stage. In an extreme case, the value of weights can overflow and result in NaN values. Hence the model becomes unstable and is unable to learn from the training data.

**16. Explain the Law of Large Numbers**

**Ans.** The ‘Law of Large Numbers’ states that if an experiment is repeated independently a large number of times, the average of the individual results is close to the expected value. It also states that the sample variance and standard deviation also converge towards the expected value.
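A quick simulation of the law (seed and sample size are arbitrary): the average of many fair die rolls converges to the expected value of 3.5.

```python
import numpy as np

rng = np.random.default_rng(0)

# One million rolls of a fair six-sided die; E[X] = 3.5
rolls = rng.integers(1, 7, size=1_000_000)

print(abs(rolls.mean() - 3.5) < 0.01)  # True: sample mean is near 3.5
```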

**17. What is the importance of A/B testing?**

**Ans.** The goal of A/B testing is to pick the better of two variants of a hypothesis. Use cases for this kind of testing include web page or application responsiveness, landing page redesigns, banner testing, marketing campaign performance, etc.

The first step is to confirm a conversion goal, and then statistical analysis is used to understand which alternative performs better for the given conversion goal.
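As a sketch of that statistical analysis step, a chi-squared test on made-up conversion counts tells us whether the difference between two variants is significant:

```python
from scipy.stats import chi2_contingency

# Hypothetical results for a conversion goal:
#        converted  not converted
table = [[200, 1800],   # variant A (10% conversion)
         [260, 1740]]   # variant B (13% conversion)

chi2, p_value, dof, expected = chi2_contingency(table)

# A small p-value suggests the variants genuinely differ
print(p_value < 0.05)  # True for these toy counts
```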

**18. What are over-fitting and under-fitting?**

**Ans.** In the case of over-fitting, a statistical model fails to depict the underlying relationship and describes the random error or noise instead. It occurs when the model is extremely complex, with too many parameters compared to the number of observations. An overfit model has poor predictive performance because it overreacts to minor fluctuations in the training data.

In the case of underfitting, the machine learning algorithm or the statistical model fails to capture the underlying trend in the data. It occurs when trying to fit a linear model to non-linear data. It also has poor predictive performance.

**19. Explain Eigenvectors and Eigenvalues**

**Ans.** Eigenvectors depict the direction in which a linear transformation moves and acts by compressing, flipping, or stretching. They are used to understand linear transformations and are generally calculated for a correlation or covariance matrix.

The eigenvalue is the strength of the transformation in the direction of the eigenvector.
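A small NumPy sketch on an illustrative symmetric (covariance-like) matrix; `eigh` is the routine for symmetric matrices and returns eigenvalues in ascending order:

```python
import numpy as np

# A symmetric, covariance-like 2x2 matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)

# Each eigenvector v satisfies A @ v = lambda * v: the transformation
# only stretches v by the eigenvalue, without changing its direction
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # [1. 3.]
```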

**20. Why Is Re-sampling Done?**

**Ans.** Resampling is done to:

- Estimate the accuracy of sample statistics with the subsets of accessible data at hand
- Substitute data point labels while performing significance tests
- Validate models by using random subsets
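The first point above can be sketched with the bootstrap, a common resampling method (the data values and resample count here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4])

# Bootstrap: draw many samples *with replacement* from the data at
# hand and use the spread of their means to estimate the accuracy
# (standard error) of the sample mean
boot_means = [rng.choice(data, size=len(data), replace=True).mean()
              for _ in range(2000)]
standard_error = np.std(boot_means)

print(standard_error > 0)  # True: a positive estimate of uncertainty
```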

**21. What are systematic sampling and cluster sampling?**

**Ans.** Systematic sampling is a type of probability sampling method. The sample members are selected from a larger population with a random starting point but a fixed periodic interval. This interval is known as the sampling interval. The sampling interval is calculated by dividing the population size by the desired sample size.

Cluster sampling involves dividing the sample population into separate groups, called clusters. Then, a simple random sample of clusters is selected from the population. Analysis is conducted on data from the sampled clusters.
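Systematic sampling can be sketched in a few lines (population size, sample size, and seed are illustrative): divide the population size by the desired sample size to get the interval, pick a random start, then take every interval-th member.

```python
import numpy as np

population = np.arange(1000)                # population of size 1000
sample_size = 50
interval = len(population) // sample_size   # sampling interval = 20

rng = np.random.default_rng(7)
start = int(rng.integers(0, interval))      # random starting point

# Take every 20th member starting from the random start
systematic_sample = population[start::interval]

print(len(systematic_sample))  # 50
```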

**22. What are Autoencoders?**

**Ans.** An autoencoder is a kind of artificial neural network. It is used to learn efficient data codings in an unsupervised manner. It is utilised for learning a representation (encoding) for a set of data, mostly for dimensionality reduction, by training the network to ignore signal “noise”. Autoencoder also tries to generate a representation as close as possible to its original input from the reduced encoding.

Stay tuned to this page for more such information on interview questions and career assistance. If you are not confident enough yet and want to prepare more to grab your dream job in the field of Data Science, upskill with Great Learning’s PG program in Data Science Engineering or M.Tech in Data Science and Machine Learning, and learn all about Data Science along with great career support.
