# NPTEL An Introduction to Artificial Intelligence Week 9 Assignment Answers Update 2022

Are you looking for help with the NPTEL An Introduction to Artificial Intelligence Week 9 assignment? You are in the right place: this article provides answer hints for each question.

## NPTEL An Introduction to Artificial Intelligence Week 9 Assignment Answers

Q1. Which of the following is true about the MAP (Maximum a posteriori estimate) estimation learning framework?

a. It is equivalent to Maximum Likelihood learning with infinite data
b. It is equivalent to Maximum Likelihood learning if P(θ) is independent of θ
c. It can be used without having any prior knowledge about the parameters
d. The performance of MAP is better with dense data compared to sparse data

Answer: a. It is equivalent to Maximum Likelihood learning with infinite data
d. The performance of MAP is better with dense data compared to sparse data
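To see why answer (a) holds, here is a minimal sketch (not from the assignment itself) assuming a Beta(α, β) prior on a coin's heads probability: the MAP estimate (h + α − 1)/(n + α + β − 1) is pulled toward the prior for small n but converges to the ML estimate h/n as the data grows.

```python
from fractions import Fraction

def mle(heads, n):
    # Maximum Likelihood estimate: the relative frequency of heads.
    return Fraction(heads, n)

def map_estimate(heads, n, alpha=2, beta=2):
    # MAP estimate under an assumed Beta(alpha, beta) prior
    # (the mode of the Beta posterior).
    return Fraction(heads + alpha - 1, n + alpha + beta - 1)

for n in (10, 1000, 100_000):
    heads = 6 * n // 10   # keep the observed frequency fixed at 0.6
    print(n, float(mle(heads, n)), float(map_estimate(heads, n)))
```

With n = 10 the MAP estimate is 7/13 ≈ 0.538, visibly biased toward the prior mean; with n = 100,000 it is essentially 0.6, matching the ML estimate.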

Q2. Which of the following statements about smoothing are true?

a. Smoothed estimates of probabilities fit the evidence better than un-smoothed estimates.
b. The process of smoothing can be viewed as imposing a prior distribution over the set of parameters.
c. Smoothing allows us to account for data which wasn’t seen in the evidence.
d. Smoothing is a form of regularization which prevents overfitting in Bayesian networks.

Answer: a. Smoothed estimates of probabilities fit the evidence better than un-smoothed estimates.

c. Smoothing allows us to account for data which wasn’t seen in the evidence.
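A minimal sketch of add-k (Laplace, when k = 1) smoothing illustrates answer (c): every outcome in the vocabulary receives pseudo-counts, so events never seen in the evidence still get non-zero probability. The function name and example data here are illustrative, not from the assignment.

```python
from collections import Counter

def smoothed_probs(observations, vocabulary, k=1):
    # Add-k smoothing: every outcome in the vocabulary gets k pseudo-counts,
    # so unseen outcomes receive non-zero probability mass.
    counts = Counter(observations)
    total = len(observations) + k * len(vocabulary)
    return {v: (counts[v] + k) / total for v in vocabulary}

probs = smoothed_probs(["a", "a", "b"], vocabulary=["a", "b", "c"])
# "c" was never observed, yet it gets probability 1/6 rather than 0.
```

This is also the sense of answer (b): add-k smoothing is equivalent to MAP estimation under a Dirichlet prior with concentration k + 1 on each outcome.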

Q3. Consider the following data:

Multiple Bayesian networks could model such a universe. Suppose we use the Bayesian network shown below:

If the parameter P(¬z | x, ¬y) is m/n, where m and n have no common factors, what is the value of m + n? Assume add-one smoothing.

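The data table and network figure for Q3 are not reproduced here, but the computation pattern is: count the rows matching each assignment and apply add-one smoothing. A sketch with hypothetical counts (the real counts come from the missing table):

```python
from fractions import Fraction

def smoothed_cpt_entry(count_match, count_condition, num_values=2):
    # Add-one smoothed estimate of P(Z = z | X = x, Y = y):
    # (rows matching z, x, not-y  +  1) / (rows matching x, not-y  +  num_values),
    # where num_values is the number of values Z can take.
    return Fraction(count_match + 1, count_condition + num_values)

# Hypothetical counts, NOT the ones from the assignment's (missing) table:
p = smoothed_cpt_entry(count_match=3, count_condition=8)
```

With these made-up counts the smoothed estimate is (3 + 1)/(8 + 2) = 2/5, which would give m + n = 7; plug in the actual counts from the assignment's table to get the real answer.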
Q4. Consider the following Bayesian Network from which we wish to compute P(x|z) using rejection sampling:

Q5. Assume that we toss a biased coin with heads probability p, 100 times, and get heads 66 times. If the Maximum Likelihood estimate of the parameter p is m/n, where m and n have no common factors, what is the value of m + n?
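For Q5, the ML estimate of a Bernoulli parameter is simply the empirical frequency, 66/100; reducing the fraction gives m/n and hence m + n:

```python
from fractions import Fraction

# The MLE for a Bernoulli parameter is the sample frequency of heads.
p_hat = Fraction(66, 100)          # Fraction reduces to lowest terms: 33/50
m, n = p_hat.numerator, p_hat.denominator
print(p_hat, m + n)                # 33/50, so m + n = 83
```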

Q6. Now, assume that we had a prior distribution over p as shown below:

Q7. Which of the following task(s) are not suited for a goal based agent?

Q8. Which of the following are true?

a. Rejection sampling is very wasteful when the probability of getting the evidence in the samples is very low.

b. We perform conditional probability weighting on the samples while doing Gibbs Sampling in the MCMC algorithm, since we have already fixed the evidence variables.

c. We perform random walk while sampling variables in Likelihood Weighting, MCMC with Gibbs sampling, but not in Rejection sampling.

d. Likelihood Weighting functions well if we have many evidence variables, with some samples having nearly all the total weight.

Answer: a. Rejection sampling is very wasteful when the probability of getting the evidence in the samples is very low.
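A toy two-variable network (invented for illustration, not the assignment's network) shows why answer (a) is true: when the evidence is rare, rejection sampling discards almost every sample it generates.

```python
import random

random.seed(0)

# Toy network: Rain -> WetGrass. We want samples consistent with the
# evidence WetGrass = True; rejection sampling throws away the rest.
def sample_once():
    rain = random.random() < 0.01                    # rare cause
    wet = random.random() < (0.9 if rain else 0.01)  # evidence is rare overall
    return rain, wet

accepted = total = 0
for _ in range(100_000):
    rain, wet = sample_once()
    total += 1
    if wet:            # keep only samples that match the evidence
        accepted += 1

print(accepted / total)  # roughly P(wet) ~ 0.019: ~98% of the work is wasted
```

Here P(WetGrass = true) ≈ 0.01 × 0.9 + 0.99 × 0.01 ≈ 0.019, so only about 2% of the generated samples survive the rejection step; likelihood weighting avoids this by fixing the evidence variables and weighting each sample instead.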

Q9. Consider the following Bayesian Network:

a. P(C|A,B,D,F,E) = α · P(C|A) · P(C|B)

b. P(C|A,B,D,F,E) = α · P(C|A,B)

c. P(C|A,B,D,F,E) = α · P(C|A,B) · P(D|C,E)

d. P(C|A,B,D,F,E) = α · P(C|A,B,D,E)

Answer: b. P(C|A,B,D,F,E) = α · P(C|A,B)
c. P(C|A,B,D,F,E) = α · P(C|A,B) · P(D|C,E)
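Q9 turns on the Markov blanket: conditioned on its parents, its children, and its children's co-parents, a node is independent of every other variable, and P(C | everything) ∝ P(C | parents(C)) · ∏ P(child | its parents). The network figure is not reproduced here, so the structure below is inferred from the answer options (A and B as C's parents, D a child of C with co-parent E) and is an assumption:

```python
def markov_blanket(node, parents, children):
    # The Markov blanket of a node is its parents, its children,
    # and the other parents (co-parents) of those children.
    blanket = set(parents.get(node, []))
    for child in children.get(node, []):
        blanket.add(child)
        blanket |= set(parents.get(child, [])) - {node}
    return blanket

# Structure assumed from the answer options, NOT from the (missing) figure:
parents = {"C": ["A", "B"], "D": ["C", "E"], "F": ["D"]}
children = {"A": ["C"], "B": ["C"], "C": ["D"], "E": ["D"], "D": ["F"]}
print(sorted(markov_blanket("C", parents, children)))  # ['A', 'B', 'D', 'E']
```

Under this assumed structure the blanket is {A, B, D, E}, so F drops out and P(C|A,B,D,F,E) = α · P(C|A,B) · P(D|C,E), consistent with answers (b) and (c).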

Q10. Which of the following options are correct about the environment of Tic Tac Toe?

a. Fully observable
b. Stochastic
c. Continuous
d. Static

Answer: a. Fully observable
d. Static

Disclaimer: These answers are provided only to help students as a reference. This website does not guarantee that the answers are 100% correct, so please complete the assignment yourself.
