Advanced Computer Vision with TensorFlow: Week 4
Question 1
Consider the following code for Class Activation Maps. Which layer(s) of the model do we choose as outputs to draw out the class activation map? Check all that apply.
Answer:
The layer which holds the extracted features in the model
The layer which performs classification on the model
Question 2
To compute the Class Activation Map you ____________.
Answer:
Take the dot product of the features associated with the prediction on the image, with the weights that come from the last global average pooling layer.
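For reference, a minimal sketch of how this is typically wired up in TensorFlow (the layer name 'last_conv' and the variables model and image_batch are illustrative assumptions, not taken from the quiz code):
import numpy as np
import tensorflow as tf

# Sketch only: assumes `model` ends with GlobalAveragePooling2D -> Dense.
cam_model = tf.keras.Model(
    inputs=model.inputs,
    outputs=[model.get_layer('last_conv').output, model.output])

features, results = cam_model.predict(image_batch)

# Weights connecting the pooled features to the classification layer.
gap_weights = model.layers[-1].get_weights()[0]  # (n_features, n_classes)

class_idx = np.argmax(results[0])
# Dot product of the extracted feature maps with the class weights.
cam = np.dot(features[0], gap_weights[:, class_idx])  # (h, w) heatmap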
Question 3
In a Saliency map you get to see parts of the image the model was paying attention to when deciding what class to assign to the image.
Answer:
False
Question 4
In Saliency Maps, the pixels that most impact the final classification are found by looking at the gradients of the final layers to see which ones had the steepest curve, then figuring out their location and plotting them on the original image.
Answer:
True
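A hedged sketch of that gradient computation (assuming a trained `model` and a batched input tensor `image`; the names are illustrative):
import tensorflow as tf

with tf.GradientTape() as tape:
    inputs = tf.cast(image, tf.float32)
    tape.watch(inputs)  # watch the input pixels, not just the weights
    predictions = model(inputs)
    top_class_score = predictions[:, tf.argmax(predictions[0])]

# Gradients of the top class score with respect to the input pixels.
gradients = tape.gradient(top_class_score, inputs)
# Collapse the color channels; large magnitudes mark the most impactful pixels.
saliency = tf.reduce_max(tf.abs(gradients), axis=-1)[0]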
Question 5
Which of the following statements are not true about GradCAM? Check all that apply.
Answer:
The negative values in the heatmap of the GradCAM are kept as they enhance the performance and accuracy of the GradCAM.
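To see why that statement is false, here is a hedged Grad-CAM sketch ('last_conv', `model`, and `image` are illustrative assumptions): negative values are zeroed out with a ReLU so only features with a positive influence on the class remain.
import tensorflow as tf

grad_model = tf.keras.Model(
    inputs=model.inputs,
    outputs=[model.get_layer('last_conv').output, model.output])

with tf.GradientTape() as tape:
    conv_output, predictions = grad_model(image)
    class_score = predictions[:, tf.argmax(predictions[0])]

grads = tape.gradient(class_score, conv_output)
# Global-average-pool the gradients: one importance weight per feature map.
weights = tf.reduce_mean(grads, axis=(0, 1, 2))
heatmap = tf.reduce_sum(weights * conv_output[0], axis=-1)
# Negative values are discarded, not kept: ReLU retains only the features
# that push the prediction toward the class of interest.
heatmap = tf.nn.relu(heatmap)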
Advanced Computer Vision with TensorFlow: Week 3
Question 1
At the heart of image segmentation with neural networks is an encoder/decoder architecture. What functionalities do they perform?
Answer:
The encoder extracts features from an image and the decoder takes those extracted features and assigns class labels to each pixel of the image.
Question 2
Is the following statement true regarding SegNet, UNet, and Fully Convolutional Neural Networks (FCNNs)?
Unlike SegNet and UNet, which share a similar architecture design, FCNNs do not have a symmetric architecture design.
Answer: True
Question 3
What architectural difference does the number represent in the names of FCN-32, FCN-16, and FCN-8?
Answer:
The number represents the factor by which the final pooling layer in the architecture up-samples the image to make predictions.
Question 4
Take a look at the following code and select the type of scaling that will be performed.
x = UpSampling2D(
    size=(2, 2),
    data_format=None,
    interpolation='bilinear')(x)
Answer:
The upsampling of the image will be done by means of linear interpolation from the closest pixel values
Question 5
What does the following code do?
Conv2DTranspose(
    filters=32,
    kernel_size=(3, 3)
)
Answer:
It takes the pixel values and filters and tries to reverse the convolution process to return a 3×3 array that could have been the original array of the image.
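A quick shape check illustrates the expansion (a standalone sketch, not the quiz code):
import tensorflow as tf

x = tf.random.normal((1, 8, 8, 16))
y = tf.keras.layers.Conv2DTranspose(filters=32, kernel_size=(3, 3))(x)
print(y.shape)  # (1, 10, 10, 32): each spatial side grows by kernel_size - 1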
Question 6
The following is the code for the last layer of an FCN-8 decoder. What key change is required if we want this to be the last layer of an FCN-16 decoder?
def fcn8_decoder(convs, n_classes):
    ...
    o = tf.keras.layers.Conv2DTranspose(n_classes, kernel_size=(8, 8), strides=(8, 8))(o)
    o = tf.keras.layers.Activation('softmax')(o)
    return o
Answer: kernel_size=(16, 16)
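Since the final upsampling factor for FCN-16 is 16, the strides would presumably change along with the kernel size; the full line would then read:
o = tf.keras.layers.Conv2DTranspose(n_classes, kernel_size=(16, 16), strides=(16, 16))(o)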
Question 7
Which of the following is true about Intersection Over Union (IoU) and Dice Score, when it comes to evaluating image segmentation? (Choose all that apply.)
Answer:
Both have a range between 0 and 1
For IoU, the numerator is the area of overlap between the predicted and ground-truth labels, whereas for the Dice Score the numerator is 2 times that.
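A minimal sketch of both metrics for a single class, assuming boolean masks of equal shape (the function and argument names are illustrative):
import numpy as np

def iou_and_dice(y_true, y_pred, eps=1e-7):
    overlap = np.sum(y_true & y_pred)  # area of overlap
    union = np.sum(y_true | y_pred)    # area of union
    iou = overlap / (union + eps)
    # Dice doubles the overlap in the numerator, over the summed areas.
    dice = 2 * overlap / (np.sum(y_true) + np.sum(y_pred) + eps)
    return iou, dice                   # both lie in [0, 1]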
Question 8
Consider the following code for building the encoder blocks for a U-Net. What should this function return?
def unet_encoder_block(inputs, n_filters, pool_size, dropout):
    blocks = conv2d_block(inputs, n_filters=n_filters)
    after_pooling = tf.keras.layers.MaxPooling2D(pool_size)(blocks)
    after_dropout = tf.keras.layers.Dropout(dropout)(after_pooling)
    return # your code here
Answer:
blocks
Question 9
For U-Net, on the decoder side you combine skip connections which come from the corresponding level of the encoder. Consider the following code and provide the missing line required to account for those skip connections with the upsampling.
(Important note: use TensorFlow as tf and Keras as keras, and be mindful of Python spacing convention, i.e. (x, y) not (x,y).)
def decoder_block(inputs, conv_output, n_filters, kernel_size, strides, dropout):
    upsampling_layer = tf.keras.layers.Conv2DTranspose(n_filters, kernel_size, strides=strides, padding='same')(inputs)
    skip_connection_layer = # your code here
    skip_connection_layer = tf.keras.layers.Dropout(dropout)(skip_connection_layer)
    skip_connection_layer = conv2d_block(skip_connection_layer, n_filters, kernel_size=3)
    return skip_connection_layer
Answer:
tf.keras.layers.concatenate([upsampling_layer, conv_output])
Which one of the following pieces of code is used to train an autoencoder?
Answer:
autoencoder.fit(X_train, X_train, epochs=epochs)
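For context, a minimal dense autoencoder sketch showing why the input doubles as the target (the 784/32 sizes and the X_train/epochs variables are illustrative assumptions):
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation='relu')(inputs)       # bottleneck
decoded = tf.keras.layers.Dense(784, activation='sigmoid')(encoded)  # reconstruction

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# The model learns to reproduce its own input, so X_train is both x and y.
autoencoder.fit(X_train, X_train, epochs=epochs)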
What does model_1 output in this AutoEncoder code snippet?
Answer:
Displaying the internal representation of the input the model is learning to replicate.
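Since the snippet itself is not shown here, a hedged sketch of what model_1 typically is, continuing the autoencoder sketch above: the encoder half of the model.
# model_1 maps the input to the bottleneck, so its predictions show the
# learned internal representation rather than the reconstruction.
model_1 = tf.keras.Model(inputs, encoded)
internal_representation = model_1.predict(X_train)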
Calculate the Content Loss value between the Generated and Content images: 5 2 1 7 vs. 3 5 5 4
= [5 - 3, 2 - 5, 1 - 5, 7 - 4] = [2, -3, -4, 3]
= [2^2, (-3)^2, (-4)^2, 3^2] = [4, 9, 16, 9]
= 4 + 9 + 16 + 9 = 38
= 38 * (1/2) = 19
Answer: 19
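The same computation in code, as a standalone check of the arithmetic above:
import tensorflow as tf

generated = tf.constant([5.0, 2.0, 1.0, 7.0])
content = tf.constant([3.0, 5.0, 5.0, 4.0])
# Content loss: half the sum of squared element-wise differences.
content_loss = 0.5 * tf.reduce_sum(tf.square(generated - content))
print(content_loss.numpy())  # 19.0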
Consider the following code snippet. How will you include the total variation loss in it? Use TensorFlow as tf.
Answer:
total_variation_weight * tf.image.total_variation(image)
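In context, the term is typically added to the combined objective like this (a hedged sketch; style_loss, content_loss, total_variation_weight, and the batched tensor image are illustrative assumptions):
import tensorflow as tf

# tf.image.total_variation penalizes abrupt pixel-to-pixel changes,
# smoothing the generated image.
loss = style_loss + content_loss
loss += total_variation_weight * tf.image.total_variation(image)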