Get Deep Learning with TensorFlow Quiz Answers
Traditional neural networks rely on shallow nets, composed of one input layer, one hidden layer, and one output layer. Deep-learning networks are distinguished from these ordinary neural networks by having more hidden layers, or so-called depth. These kinds of nets are capable of discovering hidden structures within unlabeled and unstructured data (i.e. images, sound, and text), which constitutes the vast majority of data in the world.
TensorFlow is one of the best libraries for implementing deep learning. TensorFlow is a software library for numerical computation of mathematical expressions, using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) that flow between them. It was created by Google and tailored for Machine Learning. In fact, it is widely used to develop solutions with Deep Learning.
In this TensorFlow course, you will learn the basic concepts of TensorFlow, its main functions, operations, and the execution pipeline. Starting with a simple “Hello World” example, throughout the course you will see how TensorFlow can be used in curve fitting, regression, classification, and minimization of error functions.
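To make the nodes-and-edges idea concrete, here is a minimal sketch (assuming the TensorFlow 1.x graph-and-session API that this course is built on) in which two constant tensors flow along edges into an addition node, and a session executes the graph:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Nodes: two constant operations and one addition operation
a = tf.constant(3.0, name="a")
b = tf.constant(4.0, name="b")
c = tf.add(a, b, name="c")   # the edges into c carry the tensors a and b

# The graph only describes the computation; a session executes it
with tf.Session() as sess:
    print(sess.run(c))  # 7.0
```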
This concept is then explored in the Deep Learning world. You will learn how to apply TensorFlow for backpropagation to tune the weights and biases while the Neural Networks are being trained. Finally, the course covers different types of Deep Architectures, such as Convolutional Networks, Recurrent Networks and Autoencoders.
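As an illustrative sketch of that training loop (TensorFlow 1.x assumed, with made-up toy data), a linear-regression fit in which the optimizer backpropagates the error to tune a weight and a bias might look like this:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Toy data: y = 2x + 1 plus a little noise (illustrative only)
x_train = np.linspace(0, 1, 100).astype(np.float32)
y_train = 2 * x_train + 1 + np.random.normal(0, 0.05, 100).astype(np.float32)

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
w = tf.Variable(0.0)   # weight tuned by backpropagation
b = tf.Variable(0.0)   # bias tuned by backpropagation

y_pred = w * x + b
loss = tf.reduce_mean(tf.square(y_pred - y))   # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={x: x_train, y: y_train})
    print(sess.run([w, b]))  # should approach [2.0, 1.0]
```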
Enroll on Cognitive Class
Module 1 – Intro to TensorFlow
Question: Which statement is FALSE about TensorFlow?
- TensorFlow is well suited for handling Deep Learning Problems
- TensorFlow library is not proper for handling Machine Learning Problems
- TensorFlow has a C/C++ backend as well as Python modules
- TensorFlow is an open source library
- All of the above
Question: What is a Data Flow Graph?
- A representation of data dependencies between operations
- A cartesian (x,y) chart
- A graphics user interface
- A flowchart describing an algorithm
- None of the above
Question: What are the main reasons for the increasing popularity of Deep Learning?
- The advances in machine learning algorithms and research.
- The availability of massive amounts of data for training computer systems.
- The dramatic increases in computer processing capabilities.
- All of the above
Question: Which statement is TRUE about TensorFlow?
- Runs on CPU and GPU
- Runs on CPU only
- Runs on GPU only
Question: Why is TensorFlow the proper library for Deep Learning?
- It will benefit from TensorFlow’s auto-differentiation and suite of first-rate optimizers
- It provides a collection of trainable mathematical functions that are useful for neural networks.
- It has extensive built-in support for deep learning
- All of the above
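The auto-differentiation mentioned in the first option can be seen directly with tf.gradients; this small sketch (TensorFlow 1.x assumed) asks the library for dy/dx of y = x²:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.Variable(3.0)
y = tf.square(x)                 # y = x^2
dy_dx = tf.gradients(y, x)[0]    # TensorFlow differentiates the graph for us

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(dy_dx))       # 6.0, i.e. 2 * x at x = 3
```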
Module 2 – Convolutional Networks
Question: What can be achieved with “convolution” operations on Images?
- Noise Filtering
- Image Smoothing
- Image Blurring
- Edge Detection
- All of the above
Question: For convolution, it is better to store images in a TensorFlow Graph as:
- Placeholder
- CSV file
- Numpy array
- Variable
- None of the above
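For reference, feeding images through a placeholder (rather than baking them into the graph as constants or variables) typically looks like the following sketch; the batch size, image size, and random batch are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Placeholder: shape [batch, height, width, channels]; batch size left open
images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
flattened = tf.reshape(images, [-1, 28 * 28])

with tf.Session() as sess:
    batch = np.random.rand(32, 28, 28, 1).astype(np.float32)  # stand-in data
    print(sess.run(flattened, feed_dict={images: batch}).shape)  # (32, 784)
```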
Question: Which of the following statements is TRUE about Convolution Neural Networks (CNNs)?
- CNN can be applied ONLY on Image and Text data
- CNN can be applied on ANY 2D and 3D array of data
- CNN can be applied ONLY on Text and Speech data
- CNN can be applied ONLY on Image data
- All of the above
Question: Which of the following Layers can be part of Convolution Neural Networks (CNNs)
- Dropout
- Softmax
- Maxpooling
- Relu
- All of the above
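All of the layers listed above can appear in a single small stack. A hedged sketch (TensorFlow 1.x, with illustrative filter counts and a hypothetical keep_prob placeholder) is:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
keep_prob = tf.placeholder(tf.float32)

# Convolution + ReLU
w = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b = tf.Variable(tf.zeros([32]))
conv = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME') + b)

# Max pooling
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# Dropout, then a dense layer followed by softmax
drop = tf.nn.dropout(pool, keep_prob)
flat = tf.reshape(drop, [-1, 14 * 14 * 32])
logits = tf.layers.dense(flat, 10)
probs = tf.nn.softmax(logits)
```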
Question: The objective of the Activation Function is to:
- Increase the Size of the Network
- Handle Non-Linearity in the Network
- Handle Linearity in the Network
- Reduce the Size of the Network
- None of the above
Module 3 – Recurrent Neural Networks (RNNs)
Question: What is a Recurrent Neural Network?
- A Neural Network that can recur to itself, and is proper for handling sequential data
- An infinite layered Neural Network which is proper for handling structured data
- A special kind of Neural Network to predict weather
- A markovian model to handle temporal data
Question: What is NOT TRUE about RNNs?
- RNNs are VERY suitable for sequential data.
- RNNs need to keep track of states, which is computationally expensive.
- RNNs are very robust against vanishing gradient problem.
Question: What application(s) is(are) suitable for RNNs?
- Estimating temperatures from weather data
- Natural Language Processing
- Video context retriever
- Speech Recognition
- All of the above
Question: Why are RNNs susceptible to issues with their gradients?
- Numerical computation of gradients can drive into instabilities
- Gradients can quickly drop and stabilize at near zero
- Propagation of errors due to the recurrent characteristic
- Gradients can grow exponentially
- All of the above
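A common mitigation for the exploding-gradient side of this problem is gradient clipping, which is not covered by the question itself but is worth seeing once. This is a hedged sketch of the usual TensorFlow 1.x wiring, with a stand-in single-variable loss in place of a real RNN loss:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Stand-in model: a single variable and a toy loss (illustrative only)
w = tf.Variable(5.0)
loss = tf.square(w)

optimizer = tf.train.AdamOptimizer(0.01)
grads_and_vars = optimizer.compute_gradients(loss)
grads, variables = zip(*grads_and_vars)
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=1.0)  # cap the gradient norm
train_op = optimizer.apply_gradients(list(zip(clipped, variables)))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
```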
Question: What is TRUE about LSTM gates?
- The Read Gate in LSTM, determine how much old information to forget
- The Write Gate in LSTM, reads data from the memory cell and sends that data back to the network.
- The Forget Gate, in LSTM maintains or deletes data from the information cell.
- The Read Gate in LSTM, is responsible for writing data into the memory cell.
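For context, a minimal LSTM in TensorFlow 1.x can be built with a BasicLSTMCell, which implements these gates internally; the batch size, sequence length, feature size, and number of units below are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Illustrative shapes: sequences of 10 time steps with 8 features each
inputs = tf.placeholder(tf.float32, [None, 10, 8])

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=16)  # the gates live inside the cell
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(4, 10, 8).astype(np.float32)   # stand-in data
    print(sess.run(outputs, feed_dict={inputs: batch}).shape)  # (4, 10, 16)
```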
Module 4 – Restricted Boltzmann Machines (RBMs)
Question: What is the main application of RBM?
- Data dimensionality reduction
- Feature extraction
- Collaborative filtering
- All of the above
Question: How many layers does an RBM (Restricted Boltzmann Machine) have?
- Infinite
- 4
- 2
- 3
- All of the above
Question: How does an RBM compare to a PCA?
- RBM cannot reduce dimensionality
- PCA cannot generate original data
- PCA is another type of Neural Network
- Both can regenerate input data
- All of the above
Question: Which statement is TRUE about RBM?
- It is a Boltzmann machine, but with no connections between nodes in the same layer
- Each node in the first layer has a bias
- The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers
- At the hidden layer’s nodes, X is multiplied by a W (weight matrix) and added to h_bias
- All of the above
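The "X is multiplied by a W and added to h_bias" statement corresponds to the RBM forward pass. A hedged sketch of that single step (layer sizes and random initialization are illustrative) is:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

n_visible, n_hidden = 784, 128          # illustrative sizes
X = tf.placeholder(tf.float32, [None, n_visible])
W = tf.Variable(tf.random_normal([n_visible, n_hidden], stddev=0.01))
h_bias = tf.Variable(tf.zeros([n_hidden]))

# Forward pass: probability that each hidden unit turns on ...
h_prob = tf.sigmoid(tf.matmul(X, W) + h_bias)
# ... followed by a stochastic (not deterministic) sampling of hidden states
h_sample = tf.nn.relu(tf.sign(h_prob - tf.random_uniform(tf.shape(h_prob))))
```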
Question: Which statement is TRUE about an RBM?
- The objective function is to maximize the likelihood of our data being drawn from the reconstructed data distribution
- The Negative phase of an RBM decreases the probability of samples generated by the model
- Contrastive Divergence (CD) is used to approximate the negative phase of an RBM
- The Positive phase of an RBM increases the probability of training data
- All of the above
Module 5 – Autoencoders
Question: What is the difference between Autoencoders and RBMs?
- Autoencoders are used for supervised learning, but RBMs are used for unsupervised learning.
- Autoencoders use a deterministic approach, but RBMs use a stochastic approach.
- Autoencoders have fewer layers than RBMs.
- All of the above
Question: Which of the following problems cannot be solved by Autoencoders:
- Dimensionality Reduction
- Time series prediction
- Image Reconstruction
- Emotion Detection
- All of the above
Question: What is TRUE about Autoencoders:
- Help to Reduce the Curse of Dimensionality
- Used to Learn the Most important Features in Data
- Used for Unsupervised Learning
- All of the Above
Question: What are Autoencoders:
- A Neural Network that is designed to replace Non-Linear Regression
- A Neural Network that is trained to attempt to copy its input to its output
- A Neural Network that learns all the weights by using labeled data
- A Neural Network where different layer inputs are controlled by gates
- All of the Above
Question: What is a Deep Autoencoder:
- An Autoencoder with Multiple Hidden Layers
- An Autoencoder with multiple input and output layers
- An Autoencoder stacked with Multiple Visible Layers
- An Autoencoder stacked with over 1000 layers
- None of the Above
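Putting the last two questions together, a hedged sketch of a small deep autoencoder in TensorFlow 1.x (layer sizes are illustrative; note the symmetric hidden layers and the output layer matching the input dimension) is:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.placeholder(tf.float32, [None, 784])   # e.g. flattened 28x28 images

# Encoder: progressively smaller layers down to the centre-most (bottleneck) layer
h1 = tf.layers.dense(x, 128, activation=tf.nn.relu)
code = tf.layers.dense(h1, 32, activation=tf.nn.relu)   # most important features

# Decoder: mirror of the encoder; output size equals the input size
h2 = tf.layers.dense(code, 128, activation=tf.nn.relu)
x_hat = tf.layers.dense(h2, 784, activation=tf.nn.sigmoid)

# Trained to copy its input to its output (unsupervised: no labels needed)
loss = tf.reduce_mean(tf.square(x_hat - x))
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
```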
Final Exam
Question: Why use a Data Flow graph to solve Mathematical expressions?
- To create a pipeline of operations and its corresponding values to be parsed
- To represent the expression in a human-readable form
- To show the expression in a GUI
- Because it is the only way to solve mathematical expressions in a digital computer
- None of the above
Question: What is an Activation Function?
- A function that triggers a neuron and generates the outputs
- A function that models a phenomenon or process
- A function to normalize the output
- All of the above
- None of the above
Question: Why is TensorFlow considered fast and suitable for Deep Learning?
- It is suitable to operate over large multi-dimensional tensors
- It runs on CPU
- Its core is based on C++
- It runs on GPU
- All of the above
Question: Can TensorFlow replace Numpy?
- None of the above
- No, whatsoever
- With only Numpy we can’t solve Deep Learning problems, therefore, TensorFlow is required
- Yes, completely
- Partially for some operations on tensors, such as minimization
Question: What is FALSE about Convolution Neural Networks (CNNs)?
- They fully connect to all neurons in all of the layers
- They connect only to neurons in the local region (kernel size) of input images
- They build feature maps hierarchically in every layer
- They are inspired by human visual systems
- None of the above
Question: What is the meaning of “Strides” in Maxpooling?
- The number of pixels the kernel should add
- The number of pixels the kernel should move
- The size of the kernel
- The number of pixels the kernel should remove
- None of the above
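As a quick sketch (TensorFlow 1.x, illustrative input size), a 2x2 max-pool whose kernel moves 2 pixels at a time halves the spatial dimensions:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
# ksize is the kernel size; strides is how many pixels the kernel moves each step
pool = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print(pool.get_shape())  # (?, 14, 14, 1)
```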
Question: What is TRUE about “Padding” in Convolution?
- size of the input image is reduced for the “VALID” padding
- Size of the input image is reduced for the “SAME” padding
- Size of the input image is increased for the “SAME” padding
- Size of the input image is increased for the “VALID” padding
- All of the above
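The effect of the two padding modes on the output size can be checked directly. In this sketch (TensorFlow 1.x, with an illustrative 28x28 input and 5x5 filter), “SAME” keeps the 28x28 size while “VALID” shrinks it to 24x24:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
w = tf.Variable(tf.truncated_normal([5, 5, 1, 8], stddev=0.1))

same = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
valid = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='VALID')

print(same.get_shape())   # (?, 28, 28, 8) -- zero-padded, size preserved
print(valid.get_shape())  # (?, 24, 24, 8) -- no padding, size reduced
```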
Question: Which of the following best describes the Relu Function?
- (-1,1)
- (0,5)
- (0, Max)
- (-inf,inf)
- (0,1)
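You can verify the ReLU range directly: negative inputs are clamped to 0 while positive inputs pass through unchanged. A quick TensorFlow 1.x sketch with made-up values:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.constant([-3.0, -0.5, 0.0, 2.0, 7.5])
with tf.Session() as sess:
    print(sess.run(tf.nn.relu(x)))  # [0.  0.  0.  2.  7.5]
```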
Question: Which are types of Recurrent Neural Networks? (Select all that apply)
- LSTM
- Hopfield Network
- Recursive Neural Network
- Deep Belief Network
- Elman Networks and Jordan Networks
Question: Which is TRUE about RNNs?
- RNNs can predict the future
- RNNs are VERY suitable for sequential data
- RNNs are NOT suitable for sequential data
- RNNs are ONLY suitable for sequential data
- All of the above
Question: What is the problem with RNNs and gradients?
- Numerical computation of gradients can drive into instabilities
- Gradients can quickly drop and stabilize at near zero
- Propagation of errors due to the recurrent characteristic
- Gradients can grow exponentially
- All of the above
Question: What type of RNN would you use in an NLP project to predict the next word in a phrase? (only one is correct)
- Bi-directional RNN
- Neural history compressor
- Long Short-Term Memory
- Echo state network
- None of the above
Question: Which one does NOT happen in the “forward pass” in RBM?
- Making a deterministic decision about returning values into network.
- Multiplying inputs by weights, and adding an overall bias, in each hidden unit.
- Applying an activation function on the results in hidden units.
- Feeding the network with the input images converted to binary values.
Question: Which one IS NOT a sample of CNN application?
- Creating art images using pre-trained models
- Object Detection in images
- Coloring black and white images
- Predicting next word in a sentence
Question: Select all possible uses of Autoencoders and RBMs (select all that apply):
- Clustering
- Pattern Recognition
- Dimensionality Reduction
- Predict data in time series
Question: Which technique is proper for solving the Collaborative Filtering problem?
Question: Which statement is TRUE for training Autoencoders?
- The Size of Last Layer must be at least 10% of the Input Layer Dimension
- The size of input and Last Layers must be of the Same Dimensions
- The Last Layer must be Double the size of Input Layer Dimension
- The Last Layer must be half the size of Input Layer Dimension
- None of the Above
Question: To Design a Deep Autoencoder Architecture, what factors are to be considered?
- The size of the centre-most layer has to be close to number of Important Features to be extracted
- The centre-most layer should have the smallest size compared to all other layers
- The Network should have an odd number of layers
- All the layers must be symmetrical with respect to the centre-most layer
- All of the Above
Question: Which is TRUE about Back-propagation?
- It can be used to train LSTMs
- It can be used to train CNNs
- It can be used to train RBMs
- It can be used to train Autoencoders
- All of the Above
Question: How can Autoencoders be improved to handle highly non-linear data?
- By using Genetic Algorithms
- By adding more Hidden Layers to the Network
- By using Higher initial Weight Values
- By using Lower initial Weight Values
- All of the Above
Conclusion:
We hope you now know the correct answers to the Deep Learning with TensorFlow quiz. If Queslers helped you find the correct answers, then make sure to bookmark our site for more Course Quiz Answers.
If the options are not the same, make sure to let us know by leaving a comment below.
Course Review:
In our experience, we suggest you enroll in this course and gain some new skills from professionals completely free, and we assure you it will be worth it.
This course is available on Cognitive Class for free. If you are stuck anywhere in a quiz or graded assessment, just visit Queslers to get all Quiz Answers and Coding Solutions.
More Courses Quiz Answers >>
Building Cloud Native and Multicloud Applications Quiz Answers
Accelerating Deep Learning with GPUs Quiz Answers
Blockchain Essentials Cognitive Class Quiz Answers