Question 1
True
Question 2
True
Question 3
0.78
Question 4
0.68
Question 5
0.33
Question 6
0.23
Question 7
0.28
Question 8
55%
Question 9
40000
Question 10
39600
Question 11
12870
Question 12
26730
Question 13
0.32
Question 14
0.33
Question 15
Type 2 Error
Question 16
Type 1 Error
Question 17
N1,1
Question 18
Question 19
All answers are correct
Question 20
Question 21
N1,2
Question 22
N2,2
Question 23
N2,1
Question 24
Question 25
Either a. or b.
Question 26
True
Question 27
False
Question 28
True
Question 29
False
Question 30
All answers are correct
Question 31
Validation data set
Question 32
All the statements are correct
Question 33
Both a. and b.
Question 34
All the answers are correct
Question 35
All the answers are correct
Question 36
All the answers are correct
Question 37
- In the K-Nearest Neighbor method, high values of k provide more smoothing, which reduces the risk of overfitting due to noise in the training data, but may miss local structure.
- In the K-Nearest Neighbor method, low values of k (1, 2, 3, …) capture local structure in the data, but are also quite susceptible to noise.
- The performance of the K-Nearest Neighbor method may suffer from the curse of dimensionality.
Question 38
a. matches 2. RMSE
b. matches 1. Average Error
c. matches 5. Total SSE
d. matches 3. MAE
e. matches 4. MAPE
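The five measures matched above can be computed side by side. This is a sketch using the standard textbook definitions of each metric; the exact formulas the exam intends are an assumption, and the sample values are purely illustrative.

```python
import math

def error_metrics(actual, predicted):
    """Compute the five standard regression error measures from Question 38."""
    errs = [a - p for a, p in zip(actual, predicted)]
    n = len(errs)
    avg_err = sum(errs) / n                  # Average Error (signed; measures bias)
    mae = sum(abs(e) for e in errs) / n      # Mean Absolute Error
    sse = sum(e * e for e in errs)           # Total Sum of Squared Errors
    rmse = math.sqrt(sse / n)                # Root Mean Squared Error
    mape = 100 * sum(abs(e / a) for e, a in zip(errs, actual)) / n  # Mean Abs. % Error
    return {"Average Error": avg_err, "MAE": mae, "Total SSE": sse,
            "RMSE": rmse, "MAPE": mape}

# Illustrative data: errors are -10, +10, -30.
m = error_metrics([100, 200, 300], [110, 190, 330])
print(m)  # Average Error = -10.0, Total SSE = 1100
```

Note that Average Error can be near zero even when predictions are poor, because positive and negative errors cancel; MAE, RMSE, and MAPE avoid that cancellation.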