Machine Learning Set 1
Free online Machine Learning MCQ questions to improve your basic knowledge of Machine Learning. This Machine Learning Set 1 test contains 25 multiple-choice questions with 4 options each. You have to select the right answer to each question.
Question 1
For Kernel Regression, which one of these structural assumptions is the one that most affects the trade-off between underfitting and overfitting?
A | Whether the kernel function is Gaussian versus triangular versus box-shaped
B | Whether we use Euclidean versus L1 versus L∞ metrics
C | The kernel width
D | The maximum height of the kernel function
Question 2
You trained a binary classifier model which gives very high accuracy on the training data, but much lower accuracy on validation data. Which of the following may be false?
A | This is an instance of overfitting.
B | This is an instance of underfitting.
C | The training was not well regularized.
D | The training and testing examples are sampled from different distributions.
Question 3
Which of the following methods can achieve zero training error on any linearly separable dataset?
A | Decision tree
B | 15-nearest neighbors
C | Perceptron
D | Logistic regression
Question 4
What are support vectors?
A | The examples farthest from the decision boundary
B | The only examples necessary to compute f(x) in an SVM
C | The class centroids
D | All the examples that have a non-zero weight αₖ in an SVM
Question 5
The numerical output of a sigmoid node in a neural network:
A | Is unbounded, encompassing all real numbers.
B | Is unbounded, encompassing all integers.
C | Is bounded between 0 and 1.
D | Is bounded between -1 and 1.
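For reference, a sigmoid node applies the logistic function to its weighted input, so its output is bounded for any real argument:

$$\sigma(z) = \frac{1}{1 + e^{-z}} \in (0, 1)$$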
Question 6
Ridge and Lasso regression are simple techniques to ________ the complexity of the model and prevent over-fitting, which may result from simple linear regression.
A | Increase
B | Decrease
C | Eliminate
D | None of the above
Question 7
For a neural network, which one of these structural assumptions is the one that most affects the trade-off between underfitting (i.e. a high-bias model) and overfitting (i.e. a high-variance model)?
A | The number of hidden nodes
B | The learning rate
C | The initial choice of weights
D | The use of a constant-term unit input
Question 8
Consider a point that is correctly classified and distant from the decision boundary. Which of the following methods will be unaffected by this point?
A | Nearest neighbor |
B | SVM |
C | Logistic regression |
D | Linear regression |
Question 9
If your training loss increases with the number of epochs, which of the following could be a possible issue with the learning process?
A | Regularization is too low and model is overfitting |
B | Regularization is too high and model is underfitting |
C | Step size is too large |
D | Step size is too small |
Question 10
Given two Boolean random variables, A and B, where P(A) = 1/2, P(B) = 1/3, and P(A | ¬B) = 1/4, what is P(A | B)?
A | 0.166666667 |
B | 0.25 |
C | 0.75 |
D | 1 |
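As a quick worked check, the law of total probability expresses P(A) in terms of the two conditionals:

$$P(A) = P(A \mid B)\,P(B) + P(A \mid \lnot B)\,P(\lnot B)
\;\Longrightarrow\;
\tfrac{1}{2} = P(A \mid B)\cdot\tfrac{1}{3} + \tfrac{1}{4}\cdot\tfrac{2}{3},$$

so $P(A \mid B) = 3\left(\tfrac{1}{2} - \tfrac{1}{6}\right) = 1$.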
Question 11
Neural networks ________
A | Optimize a convex objective function |
B | Can only be trained with stochastic gradient descent |
C | Can use a mix of different activation functions |
D | None of the above |
Question 12
Averaging the output of multiple decision trees helps ________.
A | Increase bias
B | Decrease bias
C | Increase variance
D | Decrease variance |
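For intuition: if the individual trees are identically distributed with variance $\sigma^2$ and pairwise correlation $\rho$, the variance of the average of $B$ trees is

$$\rho\sigma^2 + \frac{1 - \rho}{B}\,\sigma^2,$$

which shrinks as $B$ grows, while the bias of each tree is left unchanged.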
Question 13
Compared with Lasso regression, Ridge regression works well in cases where
A | we have more features
B | we have fewer features
C | features have high correlation
D | features have low correlation
Question 14
Which of the following is a clustering algorithm in machine learning?
A | Expectation Maximization
B | CART
C | Gaussian Naïve Bayes
D | Apriori
Question 15
You've just finished training a decision tree for spam classification, and it is getting abnormally bad performance on both your training and test sets. You know that your implementation has no bugs, so what could be causing the problem?
A | Your decision trees are too shallow.
B | You need to increase the learning rate.
C | You are overfitting.
D | None of the above
Question 16
Which one of the following is the main reason for pruning a Decision Tree?
A | To save computing time during testing
B | To save space for storing the Decision Tree
C | To make the training set error smaller
D | To avoid overfitting the training set
Question 17
Which of the following guidelines is applicable to the initialization of the weight vector in a fully connected neural network?
A | Should not set it to zero since otherwise it will cause overfitting |
B | Should not set it to zero since otherwise (stochastic) gradient descent will explore a very small space |
C | Should set it to zero since otherwise it causes a bias |
D | Should set it to zero in order to preserve symmetry across all neurons |
Question 18
Which among the following prevents overfitting when we perform bagging?
A | The use of sampling with replacement as the sampling technique
B | The use of weak classifiers
C | The use of classification algorithms which are not prone to overfitting
D | The practice of validation performed on every classifier trained
Question 19
________ refers to a model that can neither model the training data nor generalize to new data.
A | Good fitting
B | Overfitting
C | Underfitting
D | All of the above
Question 20
Suppose your model is overfitting. Which of the following is NOT a valid way to try and reduce the overfitting?
A | Increase the amount of training data
B | Improve the optimization algorithm being used for error minimization
C | Decrease the model complexity
D | Reduce the noise in the training data
Question 21
A 6-sided die is rolled 15 times and the results are: side 1 comes up 0 times; side 2: 1 time; side 3: 2 times; side 4: 3 times; side 5: 4 times; side 6: 5 times. Based on these results, what is the probability of side 3 coming up when using Add-1 Smoothing?
A | 2/15
B | 1/7
C | 3/16
D | 1/5
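As a worked check, Add-1 (Laplace) smoothing adds one pseudo-count to each of the 6 sides, so with 2 observed occurrences of side 3 in 15 rolls:

$$P(\text{side } 3) = \frac{2 + 1}{15 + 6} = \frac{3}{21} = \frac{1}{7}$$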
Question 22
The model obtained by applying linear regression to the identified subset of features may differ from the model obtained at the end of the process of identifying the subset during
A | Best-subset selection
B | Forward stepwise selection
C | Forward stagewise selection
D | All of the above
Question 23
Modeling a classification rule directly from the input data, as in logistic regression, corresponds to which of the following classification methods?
A | Discriminative classification
B | Generative classification
C | Probabilistic classification
D | All of the above
Question 24
How does the bias-variance decomposition of a ridge regression estimator compare with that of ordinary least squares regression?
A | Ridge has larger bias, larger variance
B | Ridge has larger bias, smaller variance
C | Ridge has smaller bias, larger variance
D | Ridge has smaller bias, smaller variance
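For context, the ridge estimator $\hat{\beta}_\lambda = (X^\top X + \lambda I)^{-1} X^\top y$ shrinks the ordinary least squares solution toward zero; the penalty $\lambda > 0$ introduces bias but reduces variance, and OLS is recovered as $\lambda \to 0$.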
Question 25
Consider the Bayesian network given below.
How many independent parameters are needed for this Bayesian Network?
A | 2
B | 4
C | 8
D | 16
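The referenced network figure is not reproduced here, but the counting rule it relies on is standard: a node with $k$ states whose parents take $m$ joint configurations contributes $(k - 1)\,m$ independent parameters, so for a network of binary variables the total is $\sum_i 2^{|\mathrm{Pa}(i)|}$.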