SIKSHAPATH

Answered

SIKSHAPATH Latest Questions

Saurav kumar
Asked: February 21, 2024, in: Computer Science

Advanced Computer Vision with TensorFlow, Week 3


Q1. At the heart of image segmentation with neural networks is an encoder/decoder architecture. What functionalities do they perform?

Options:

  • The decoder extracts features from an image and the encoder takes those extracted features and assigns class labels to each pixel of the image.
  • The decoder extracts features from an image and the encoder takes those extracted features and assigns a class label to the entire image.
  • The encoder extracts features from an image and the decoder takes those extracted features and assigns a class label to the entire image.
  • The encoder extracts features from an image and the decoder takes those extracted features and assigns class labels to each pixel of the image.

Q2. Is the following statement true regarding SegNet, UNet, and Fully Convolutional Neural Networks (FCNNs)?

Unlike the similarity between the architecture design of SegNet & UNet, FCNNs do not have a symmetric architecture design.

Question 3

What architectural difference does the number represent in the names of FCN-32, FCN-16, FCN-8?

Options:

  • The number represents the total number of filters used in the final pooling layer in the architecture to make predictions.
  • The number represents the total number of convolutional layers used in the final pooling layer in the architecture to make predictions.
  • The number represents the factor by which the final pooling layer in the architecture up-samples the image to make predictions.
  • The number represents the total number of pooling layers used in the architecture to help make predictions.

Question 4

Take a look at the following code and select the type of scaling that will be performed:

x = UpSampling2D(
    size=(2, 2),
    data_format=None,
    interpolation='bilinear')(x)

Options:

  • The upsampling of the image will be done by means of linear interpolation from the closest pixel values.
  • The upsampling of the image will be done by copying the value from the closest pixels.

 

Question 5

What does the following code do?

Conv2DTranspose(
    filters=32,
    kernel_size=(3, 3)
)

Options:

  • It takes the pixel values and filters and tries to reverse the convolution process to return back a 3×3 array which could have been the original array of the image.
  • It takes pixel values in the image, in a 3×3 array, and using the specified filters, creates a transpose of that array.

 

Question 6

The following is the code for the last layer of an FCN-8 decoder. What key change is required if we want this to be the last layer of an FCN-16 decoder?

def fcn8_decoder(convs, n_classes):
    ...
    o = tf.keras.layers.Conv2DTranspose(n_classes, kernel_size=(8, 8), strides=(8, 8))(o)
    o = (tf.keras.layers.Activation('softmax'))(o)
    return o

Options:

  • strides=(16, 16)
  • kernel_size=(16, 16)
  • Using sigmoid instead of softmax.
  • n_classes=16

 

Question 7:

Which of the following is true about Intersection over Union (IoU) and Dice Score when it comes to evaluating image segmentation? (Choose all that apply.)

Options:

  • Unlike IoU, for Dice Score the closer the value is to 0, the closer the prediction is to the ground truth.
  • Both have a range between 0 and 1.
  • For both IoU and Dice Score, the denominator is the total area of both the labels, predicted and ground truth.
  • For IoU the numerator is the area of overlap between both labels, predicted and ground truth, whereas for Dice Score the numerator is 2 times that.

Question 8:

Consider the following code for building the encoder blocks for a U-Net. What should this function return?

def unet_encoder_block(inputs, n_filters, pool_size, dropout):
    blocks = conv2d_block(inputs, n_filters=n_filters)
    after_pooling = tf.keras.layers.MaxPooling2D(pool_size)(blocks)
    after_dropout = tf.keras.layers.Dropout(dropout)(after_pooling)
    return # your code here

Options:

  • blocks
  • blocks, after_dropout
  • after_dropout
  • after_dropout, after_pooling (you need to return after_pooling to be used in skip connections)

Question 9

For U-Net, on the decoder side you combine skip connections which come from the corresponding level of the encoder. Consider the following code and provide the missing line required to account for those skip connections with the upsampling.

(Important Notes: Use TensorFlow as tf, Keras as keras. And be mindful of Python spacing convention, i.e. (x, y) not (x,y).)

def decoder_block(inputs, conv_output, n_filters, kernel_size, strides, dropout):
    upsampling_layer = tf.keras.layers.Conv2DTranspose(n_filters, kernel_size, strides=strides, padding='same')(inputs)
    skip_connection_layer = # your code here
    skip_connection_layer = tf.keras.layers.Dropout(dropout)(skip_connection_layer)
    skip_connection_layer = conv2d_block(skip_connection_layer, n_filters, kernel_size=3)
    return skip_connection_layer


    1 Answer

    1. I'M ADMIN (Best Answer)
       Added an answer on February 21, 2024 at 1:42 pm

      Q1. At the heart of image segmentation with neural networks is an encoder/decoder architecture. What functionalities do they perform?

      Answer: The encoder extracts features from an image and the decoder takes those extracted features and assigns class labels to each pixel of the image.
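      The selected option can be illustrated with a toy sketch (plain Python, not the course code, and much simpler than a real network): the "encoder" shrinks the image into a coarse feature map, and the "decoder" maps those features back to a class label for every pixel.

```python
# Toy encoder/decoder for binary segmentation. Real segmentation networks
# use convolutional layers; here the encoder is a 2x2 max pool and the
# decoder is a nearest-neighbour upsample plus a threshold.

def encode(img):
    """Extract coarse features: 2x2 max pooling over a 2D grid."""
    h, w = len(img), len(img[0])
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def decode(feat, threshold=0.5):
    """Assign a class label (0 or 1) to each pixel: upsample the feature
    map by 2 with nearest-neighbour copying, then threshold."""
    labels = []
    for row in feat:
        up = [1 if v > threshold else 0 for v in row for _ in range(2)]
        labels.append(up)
        labels.append(list(up))
    return labels

img = [[0.1, 0.2, 0.9, 0.8],
       [0.0, 0.1, 0.7, 0.9],
       [0.2, 0.1, 0.1, 0.0],
       [0.3, 0.2, 0.0, 0.1]]
features = encode(img)          # 2x2 coarse feature map
segmentation = decode(features) # 4x4 map of per-pixel class labels
```

      The key point the answer makes survives even in this caricature: feature extraction happens on the way down (encoder), per-pixel labelling on the way up (decoder).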

       

      Q2. Is the following statement true regarding SegNet, UNet and Fully Convolutional Neural Networks (FCNNs):

      Unlike the similarity between the architecture design of SegNet & UNet, FCNNs do not have a symmetric architecture design.

       

      Answer: True

       

      Question 3

      What architectural difference does the number represent in the names of FCN-32, FCN-16, FCN-8?

      Answer:

      The number represents the factor by which the final pooling layer in the architecture up-samples the image to make predictions.
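      A quick arithmetic check of that answer (a hypothetical helper, not course code): a feature map produced at 1/N of the input resolution must be up-sampled by a factor of N to recover per-pixel predictions, and N is exactly the number in FCN-N.

```python
# For a given input size and FCN variant N, return the resolution of the
# coarse feature map the final layer predicts from, and the size after
# up-sampling it by factor N.

def fcn_final_layer(input_size, n):
    feature_size = input_size // n   # resolution of the coarse feature map
    return feature_size, feature_size * n  # size after up-sampling by n

# For a 224-pixel input: FCN-32 predicts from a 7x7 map and up-samples x32,
# FCN-16 from 14x14 (x16), and FCN-8 from 28x28 (x8).
```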

       

      Question 4

      Take a look at the following code and select the type of scaling that will be performed:

      x = UpSampling2D(
          size=(2, 2),
          data_format=None,
          interpolation='bilinear')(x)

       

      Answer: 

      The upsampling of the image will be done by means of linear interpolation from the closest pixel values
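      The difference between the two interpolation modes can be sketched in plain Python (shown in 1-D for clarity; this is an illustration, not the Keras implementation): 'nearest' copies the closest value, while 'bilinear' blends linearly between neighbouring values.

```python
# Upsample a 1-D row of pixel values by an integer factor, two ways.

def upsample_nearest(row, factor=2):
    """Copy each value `factor` times (nearest-neighbour interpolation)."""
    return [v for v in row for _ in range(factor)]

def upsample_linear(row, factor=2):
    """Linearly interpolate between each value and its right neighbour."""
    out = []
    for i, v in enumerate(row):
        nxt = row[i + 1] if i + 1 < len(row) else v  # clamp at the edge
        for k in range(factor):
            t = k / factor
            out.append(v * (1 - t) + nxt * t)  # linear blend
    return out

upsample_nearest([0, 4])  # [0, 0, 4, 4] -- copies the closest pixels
upsample_linear([0, 4])   # [0.0, 2.0, 4.0, 4.0] -- blends between them
```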

       

      Question 5

      What does the following code do?

       

      Conv2DTranspose(
          filters=32,
          kernel_size=(3, 3)
      )

       

      Answer: 

      It takes the pixel values and filters and tries to reverse the convolution process to return back a 3×3 array which could have been the original array of the image.
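      A back-of-envelope check of the "reverse the convolution" reading (a sketch of the standard shape formulas, not Keras internals): with 'valid' padding, a transposed convolution maps an input of spatial size n to (n - 1) * stride + kernel_size, inverting the regular convolution's shrinking formula.

```python
# Output-size formulas for a regular and a transposed convolution
# (valid padding), applied to one spatial dimension.

def conv_size(n, kernel_size, stride=1):
    """Spatial size after a regular convolution."""
    return (n - kernel_size) // stride + 1

def conv_transpose_size(n, kernel_size, stride=1):
    """Spatial size after a transposed convolution."""
    return (n - 1) * stride + kernel_size

# A regular 3x3 convolution collapses a 3x3 input to 1x1...
assert conv_size(3, 3) == 1
# ...and a 3x3 transposed convolution maps that 1x1 back to 3x3,
# which is the "could have been the original array" in the answer.
assert conv_transpose_size(1, 3) == 3
```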

       

       

      Question 6

       

      The following is the code for the last layer of an FCN-8 decoder. What key change is required if we want this to be the last layer of an FCN-16 decoder?

       

       

      def fcn8_decoder(convs, n_classes):
          ...
          o = tf.keras.layers.Conv2DTranspose(n_classes, kernel_size=(8, 8), strides=(8, 8))(o)
          o = (tf.keras.layers.Activation('softmax'))(o)
          return o

      Answer: kernel_size=(16, 16)

       

      Question 7:

      Which of the following is true about Intersection Over Union (IoU) and Dice Score, when it comes to evaluating image segmentation? (Choose all that apply.)

       

      Answer:

      Both have a range between 0 and 1

      For IoU the numerator is the area of overlap for both the labels, predicted and ground truth, whereas for Dice Score the numerator is 2 times that.
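      The two selected statements can be verified numerically on small flat binary masks (plain Python rather than TF, so the arithmetic is explicit): both scores lie in [0, 1], and Dice doubles the overlap in the numerator while using the summed areas, not the union, in the denominator.

```python
# IoU and Dice Score over flat binary masks (lists of 0/1).

def iou(pred, truth):
    overlap = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return overlap / union

def dice(pred, truth):
    overlap = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)  # counts the overlap region twice
    return 2 * overlap / total

pred  = [1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 0]
iou(pred, truth)   # 2/4 = 0.5
dice(pred, truth)  # 2*2/6 ~= 0.667 -- higher is better for both scores
```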

       

       

      Question 8:

      Consider the following code for building the encoder blocks for a U-Net. What should this function return?

       

      def unet_encoder_block(inputs, n_filters, pool_size, dropout):
          blocks = conv2d_block(inputs, n_filters=n_filters)
          after_pooling = tf.keras.layers.MaxPooling2D(pool_size)(blocks)
          after_dropout = tf.keras.layers.Dropout(dropout)(after_pooling)
          return # your code here

      Answer:

      blocks

       

      Question 9

      For U-Net, on the decoder side you combine skip connections which come from the corresponding level of the encoder. Consider the following code and provide the missing line required to account for those skip connections with the upsampling.

       

      (Important Notes: Use TensorFlow as tf, Keras as keras. And be mindful of Python spacing convention, i.e. (x, y) not (x,y).)

       

      def decoder_block(inputs, conv_output, n_filters, kernel_size, strides, dropout):
          upsampling_layer = tf.keras.layers.Conv2DTranspose(n_filters, kernel_size, strides=strides, padding='same')(inputs)
          skip_connection_layer = # your code here
          skip_connection_layer = tf.keras.layers.Dropout(dropout)(skip_connection_layer)
          skip_connection_layer = conv2d_block(skip_connection_layer, n_filters, kernel_size=3)
          return skip_connection_layer

      Answer: 

      tf.keras.layers.concatenate([upsampling_layer, conv_output])
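      What that concatenate does, at the shape level (a plain-Python stand-in for illustration; the axis convention channels-last is an assumption): the up-sampled decoder tensor of shape (H, W, C1) and the encoder's skip tensor of shape (H, W, C2) are joined along the channel axis into (H, W, C1 + C2), so the conv block that follows sees both fine-grained and up-sampled features.

```python
# Shape-level sketch of concatenating a skip connection along the
# channel axis (channels-last layout assumed).

def concat_channels(shape_a, shape_b):
    """Return the shape after concatenating two (H, W, C) tensors on C."""
    assert shape_a[:2] == shape_b[:2], "spatial dims must match"
    return (shape_a[0], shape_a[1], shape_a[2] + shape_b[2])

# A (64, 64, 128) up-sampled tensor joined with a (64, 64, 128) skip
# tensor yields (64, 64, 256):
concat_channels((64, 64, 128), (64, 64, 128))  # (64, 64, 256)
```

      The spatial dimensions must already match, which is why the concatenate comes right after the Conv2DTranspose up-sampling step.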

       

© 2021-24 Sikshapath. All Rights Reserved