
Machine Learning: Naive Bayes

Description: This quiz covers the fundamental concepts, applications, and implementation aspects of Naive Bayes, a widely used classification algorithm in machine learning.
Number of Questions: 14
Tags: machine learning, naive bayes, classification, conditional probability, bayes' theorem

What is the underlying principle behind Naive Bayes?

  A. Bayes' Theorem

  B. Decision Tree

  C. Support Vector Machine

  D. K-Nearest Neighbors


Correct Option: A
Explanation:

Naive Bayes is based on Bayes' Theorem, which provides a framework for calculating conditional probabilities and making predictions.
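
For a concrete feel, here is a minimal Python sketch applying Bayes' Theorem to a hypothetical spam-word example; all probabilities are invented for illustration:

```python
# Minimal sketch of Bayes' Theorem with made-up numbers:
# P(spam | word) = P(word | spam) * P(spam) / P(word)

p_spam = 0.4                 # prior: P(spam)
p_word_given_spam = 0.7      # likelihood: P("free" appears | spam)
p_word_given_ham = 0.1       # likelihood: P("free" appears | not spam)

# Evidence P(word) via the law of total probability
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

posterior = p_word_given_spam * p_spam / p_word
print(f"P(spam | word) = {posterior:.3f}")  # ~0.824
```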

What is the key assumption made by Naive Bayes?

  A. Conditional Independence of Features

  B. Linear Separability of Data

  C. Gaussian Distribution of Features

  D. Equal Prior Probabilities


Correct Option: A
Explanation:

Naive Bayes assumes that the features are conditionally independent given the class label, simplifying the computation of posterior probabilities.
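
Formally, for a feature vector $X = (x_1, \dots, x_n)$ and class label $C$, this assumption lets the likelihood factor into per-feature terms: $P(x_1, \dots, x_n | C) = \prod_{i=1}^{n} P(x_i | C)$.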

How does Naive Bayes calculate the probability of a class given a set of features?

  A. Using Bayes' Theorem

  B. Applying Logistic Regression

  C. Computing Euclidean Distances

  D. Evaluating Decision Trees


Correct Option: A
Explanation:

Naive Bayes employs Bayes' Theorem to calculate the posterior probability of a class given the observed features.
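
A minimal Python sketch of this computation, assuming the per-class priors and per-feature likelihoods have already been estimated (all numbers here are hypothetical); log-probabilities avoid numerical underflow:

```python
import math

# Hypothetical estimated parameters for two binary features
priors = {"spam": 0.4, "ham": 0.6}   # P(C)
likelihoods = {                       # P(x_i = 1 | C)
    "spam": [0.7, 0.2],
    "ham":  [0.1, 0.5],
}

def predict(x):
    """Return the class maximizing log P(C) + sum_i log P(x_i | C)."""
    scores = {}
    for c in priors:
        log_post = math.log(priors[c])
        for p, xi in zip(likelihoods[c], x):
            log_post += math.log(p if xi == 1 else 1 - p)
        scores[c] = log_post
    return max(scores, key=scores.get)

print(predict([1, 0]))  # -> "spam"
```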

What is the primary advantage of Naive Bayes?

  A. High Computational Efficiency

  B. Robustness to Overfitting

  C. Ability to Handle Missing Values

  D. Interpretability of Results


Correct Option: A
Explanation:

Naive Bayes is computationally efficient, making it suitable for large datasets and real-time applications.

What is a potential limitation of Naive Bayes?

  A. Sensitivity to Irrelevant Features

  B. Requirement for Independent Features

  C. Inability to Capture Complex Relationships

  D. High Variance in Predictions


Correct Option: C
Explanation:

Naive Bayes may struggle to capture complex relationships between features due to its assumption of conditional independence.
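
A classic way to see this: in XOR-style data the label depends only on the interaction between two features, each of which is individually uninformative, so Naive Bayes cannot do better than chance. A small sketch (assumes scikit-learn is installed):

```python
from sklearn.naive_bayes import BernoulliNB

# XOR: the label depends on the *interaction* of x1 and x2,
# while each feature alone carries no signal.
X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 25  # repeated for a non-trivial fit
y = [0, 1, 1, 0] * 25

clf = BernoulliNB().fit(X, y)
print(clf.score(X, y))  # ~0.5: no better than chance on XOR
```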

In which scenario is Naive Bayes particularly effective?

  A. When Features are Highly Correlated

  B. When Data is Sparse and High-Dimensional

  C. When Class Priors are Unequal

  D. When Features Follow a Non-Gaussian Distribution


Correct Option: B
Explanation:

Naive Bayes performs well with sparse and high-dimensional data, where other methods may struggle due to the curse of dimensionality.
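
A brief sketch showing that scikit-learn's MultinomialNB consumes sparse, high-dimensional input directly (the data here is synthetic noise, just to illustrate the shapes):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.naive_bayes import MultinomialNB

# Synthetic sparse matrix: 1,000 samples x 50,000 features, ~0.1% non-zero,
# mimicking the shape of a bag-of-words text dataset.
X = sparse_random(1000, 50_000, density=0.001, format="csr", random_state=0)
y = np.random.default_rng(0).integers(0, 2, size=1000)

clf = MultinomialNB().fit(X, y)  # handles scipy.sparse input directly
print(clf.predict(X[:5]))
```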

How can the performance of Naive Bayes be improved?

  A. Applying Feature Selection Techniques

  B. Using Smoothing Techniques to Handle Zero Probabilities

  C. Incorporating Prior Knowledge into the Model

  D. All of the Above


Correct Option: D
Explanation:

Feature selection, smoothing to handle zero probabilities, and the incorporation of prior knowledge can each improve the performance of Naive Bayes.
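
As a sketch of how these three levers might look in scikit-learn (the `alpha` and `class_prior` values below are arbitrary illustrations, not recommendations):

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    SelectKBest(chi2, k=1000),       # 1. feature selection
    MultinomialNB(
        alpha=1.0,                   # 2. Laplace/additive smoothing
        class_prior=[0.8, 0.2],      # 3. prior knowledge about class balance
    ),
)
# pipeline.fit(X_train, y_train)  # X_train: count features, y_train: labels
```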

Which of the following is a common application of Naive Bayes?

  A. Email Spam Filtering

  B. Sentiment Analysis

  C. Image Classification

  D. Medical Diagnosis


Correct Option: A
Explanation:

Naive Bayes appears in all four areas, but email spam filtering is its classic application: the model is fast to train and effective at classifying emails as spam or legitimate.
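
A minimal end-to-end sketch of such a filter with scikit-learn; the tiny corpus below is invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus (invented examples)
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting rescheduled to monday", "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["free offer, claim your prize"]))  # ['spam']
```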

What is the formula for calculating the posterior probability of a class given a set of features in Naive Bayes?

  A. $P(C | X) = \frac{P(X | C)P(C)}{P(X)}$

  B. $P(C | X) = P(X | C)P(C)$

  C. $P(C | X) = \frac{P(X | C)}{P(C)}$

  D. $P(C | X) = \frac{P(C)P(X)}{P(X | C)}$


Correct Option: A
Explanation:

By Bayes' Theorem, the posterior is $P(C | X) = \frac{P(X | C)P(C)}{P(X)}$, where $P(C)$ is the class prior, $P(X | C)$ the likelihood, and $P(X)$ the evidence.

What is the name of the technique used to address the problem of zero probabilities in Naive Bayes?

  A. Laplace Smoothing

  B. Additive Smoothing

  C. Lidstone Smoothing

  D. Jelinek-Mercer Smoothing


Correct Option: A
Explanation:

Laplace Smoothing (add-one smoothing) addresses zero probabilities in Naive Bayes by adding one to the count of each feature value in each class, so that a feature never seen with a class does not zero out the entire posterior product. It is the special case of additive (Lidstone) smoothing with a pseudocount of one.
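
In code, with hypothetical counts:

```python
# Laplace (add-one) smoothing for P(word | class):
#   P(w | c) = (count(w, c) + 1) / (total_words_in_c + V)
# where V is the vocabulary size.

count_w_c = 0        # word never seen in this class (hypothetical count)
total_words_c = 500  # total word occurrences in the class
V = 10_000           # vocabulary size

p_unsmoothed = count_w_c / total_words_c            # 0.0 -> zeroes out the posterior
p_smoothed = (count_w_c + 1) / (total_words_c + V)  # small but non-zero

print(p_unsmoothed, p_smoothed)  # 0.0  ~9.5e-05
```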

Which of the following is not a variant of Naive Bayes?

  A. Gaussian Naive Bayes

  B. Multinomial Naive Bayes

  C. Bernoulli Naive Bayes

  D. Decision Tree Naive Bayes


Correct Option: D
Explanation:

Decision Tree Naive Bayes is not a standard variant of Naive Bayes. Gaussian, Multinomial, and Bernoulli Naive Bayes are the common variants, differing in how they model the per-class feature distributions.
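
In scikit-learn, the three standard variants correspond to different feature types; a brief sketch (the toy data is invented):

```python
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

# GaussianNB:    continuous features, modeled as per-class Gaussians
# MultinomialNB: count features, e.g. word counts in a document
# BernoulliNB:   binary features, e.g. word presence/absence

X_counts = [[2, 0, 1], [0, 3, 0]]  # toy word-count vectors
y = ["spam", "ham"]
model = MultinomialNB().fit(X_counts, y)
print(model.predict([[1, 0, 2]]))  # ['spam']
```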

What is the computational complexity of training a Naive Bayes model?

  A. $O(n)$

  B. $O(n \log n)$

  C. $O(n^2)$

  D. $O(n^3)$


Correct Option: A
Explanation:

Training a Naive Bayes model is a single counting pass over the data, so its complexity is $O(n)$ in the number of training examples $n$ (more precisely, $O(nd)$ for $d$ features).
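
A plain-Python sketch of that counting pass, on toy data:

```python
from collections import defaultdict

# Toy training set: (binary feature vector, class label)
data = [([1, 0], "spam"), ([1, 1], "spam"), ([0, 1], "ham")]

class_counts = defaultdict(int)
feature_counts = defaultdict(lambda: defaultdict(int))

# One pass over the n examples (and d features each): O(n * d)
for x, c in data:
    class_counts[c] += 1
    for i, xi in enumerate(x):
        feature_counts[c][i] += xi

print(dict(class_counts))  # {'spam': 2, 'ham': 1}
```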

Which of the following is not a measure of the performance of a Naive Bayes model?

  A. Accuracy

  B. Precision

  C. Recall

  D. Mean Squared Error


Correct Option: D
Explanation:

Mean Squared Error is a regression metric, not a standard measure of classification performance. Accuracy, precision, and recall (along with the F1-score) are the usual measures for evaluating a Naive Bayes classifier.
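
Computing these metrics with scikit-learn, using made-up label vectors in place of real Naive Bayes predictions:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]  # hypothetical ground truth
y_pred = [1, 0, 0, 1, 0, 1]  # hypothetical Naive Bayes predictions

print(accuracy_score(y_true, y_pred))   # ~0.833
print(precision_score(y_true, y_pred))  # 1.0
print(recall_score(y_true, y_pred))     # 0.75
print(f1_score(y_true, y_pred))         # ~0.857
```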

Which estimation method is typically used to learn the parameters of a Naive Bayes model?

  A. Maximum Likelihood Estimation

  B. Bayesian Estimation

  C. Gradient Descent

  D. Expectation-Maximization


Correct Option: A
Explanation:

The parameters of a Naive Bayes model (the class priors and the per-class feature likelihoods) are typically learned by Maximum Likelihood Estimation.
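
Concretely, MLE reduces to relative-frequency counting: $\hat{P}(C) = \frac{N_C}{N}$ for the class priors and $\hat{P}(x_i | C) = \frac{N_{x_i, C}}{N_C}$ for the feature likelihoods, where $N$ is the number of training examples, $N_C$ the number labeled $C$, and $N_{x_i, C}$ the count of feature value $x_i$ within class $C$.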
