Get hands-on experience creating and training machine learning models so that you can predict what animal is making a specific sound, like a cat purring or a dog barking. Integrate those models in a simple web page that you build in Node-RED. Then, add visual recognition so that you can identify the image of an animal.
If you’re a developer and want to learn about machine learning, this is the course for you. Even if you have some experience with machine learning, you might not have worked with audio files as your source data. Either way, you’ve come to the right place. In this course, you’ll learn to create basic machine learning models that you train to recognize the sounds of dogs, cats, and birds. You’ll also integrate visual recognition to identify images of these animals.
You’ll build a basic user interface in Node-RED that shows the results of the predictions for both sound and images. You’ll use IBM Watson Studio to build classification models to predict and identify animal sounds and use IBM Watson Visual Recognition to identify images of those animals. You’ll learn how best to gather and prepare data, create and deploy models, deploy and test a signal processing application, create models with binary and multiclass classifications, and display the predictions on a web page.
Lab 1: Gather and prepare the data
Question: Data gathering is a key component in machine learning.
Question: For machine learning models, data needs to be quantifiable and not comparable.
Question: Audio files can be compared directly.
Question: Truncating audio files to the same length makes them compatible.
Question: Silence detection can be used to locate the start of a tune or noise even though the noise might already be any number of bars into a tune.
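The data-preparation ideas behind these questions — trimming leading silence so a clip starts at the actual noise, and truncating clips to a common length so they become directly comparable — can be sketched in a few lines of Python. This is an illustrative sketch with hypothetical helper names, not the course's own signal-processing code:

```python
import numpy as np

def trim_leading_silence(samples, threshold=0.02):
    """Return the signal starting at the first sample whose
    absolute amplitude exceeds the silence threshold."""
    above = np.flatnonzero(np.abs(samples) > threshold)
    return samples[above[0]:] if above.size else samples[:0]

def truncate_to_length(samples, length):
    """Cut (or zero-pad) a clip to a fixed length so all clips
    have the same number of samples and can be compared."""
    out = np.zeros(length, dtype=samples.dtype)
    n = min(length, samples.size)
    out[:n] = samples[:n]
    return out

# A toy "recording": a stretch of silence, then a tone.
clip = np.concatenate([np.zeros(100), 0.5 * np.sin(np.linspace(0, 20, 300))])
trimmed = truncate_to_length(trim_leading_silence(clip), 200)
```

After this, every clip in the data set is the same length and starts where the sound starts, which is what makes the clips usable as comparable inputs to a model.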
Lab 2: Build a machine learning model
Question: In Watson Studio, which type of project do you create for a machine learning project to make it easy to find associated assets and models?
Question: Which statement is true for a typical machine learning project in Watson Studio?
Question: What’s the best way to select columns in a machine learning model in Watson Studio?
Question: What does a 60%, 20%, 20% split of data mean?
Question: How does an overfitting model perform?
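One of the questions above asks about a 60%, 20%, 20% split. The usual convention is 60% of the rows for training, 20% for validation while tuning, and 20% held out for final testing. A minimal sketch in plain Python (the function name and seed are illustrative):

```python
import random

def split_data(rows, train=0.6, val=0.2, seed=42):
    """Shuffle the rows, then slice them into
    training, validation, and test sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train = int(n * train)
    n_val = int(n * val)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

train_set, val_set, test_set = split_data(range(100))
# 60, 20, and 20 rows respectively
```

Shuffling before slicing matters: without it, any ordering in the source file (for example, all cat clips first) would leak into the split.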
Lab 3: Create predictions in a Node-RED application
Question: Node-RED allows you to import and export flows.
Question: In Node-RED, you can install nodes by using which method?
Question: Which combination of Node-RED nodes is required to inject audio into a flow?
Question: You can use the http request node to do which task?
Question: What do the two numbers mean that are returned from the binary machine learning classification?
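The last question above concerns the two numbers a binary classifier returns: they are the model's probabilities for the two classes, and they sum to 1. A hedged sketch of interpreting such a pair (the labels here are illustrative, not the course's exact class names):

```python
def interpret_binary(probabilities, labels=("cat", "not_cat")):
    """Pick the label with the higher probability from a
    two-element probability vector whose values sum to 1."""
    p0, p1 = probabilities
    return (labels[0], p0) if p0 >= p1 else (labels[1], p1)

label, confidence = interpret_binary([0.91, 0.09])
```

Here the first number is the probability of the first class, so a response like `[0.91, 0.09]` means the model is 91% confident in the first label.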
Lab 4: Create multiclass classification models
Question: When do you use a multiclass classification?
Question: You run machine learning predictions against which type of model or data?
Question: What does each Lite Plan instance of the Watson Machine Learning service allow?
Question: What output do you get from a Watson Machine Learning prediction?
Question: Your Node-RED application only retrieves the predicted class and probability from the API call to a model deployment.
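The last question above notes that the Node-RED application keeps only the predicted class and its probability from the model's response. For a multiclass model, that means taking the highest-probability entry. A sketch in Python with an illustrative response shape (the field names here are assumptions, not Watson Machine Learning's exact schema):

```python
def top_prediction(response):
    """Return (class, probability) for the most likely class
    in a multiclass prediction response."""
    probs = response["values"][0]   # e.g. [p_cat, p_dog, p_bird]
    labels = response["labels"]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

resp = {"labels": ["cat", "dog", "bird"], "values": [[0.1, 0.7, 0.2]]}
cls, prob = top_prediction(resp)
```

Everything else in the response (the full probability vector, metadata, and so on) can be discarded once the top class and its probability are extracted for display.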
Lab 5: Create UIs and integrate visual recognition
Question: The following HTML code in the Node-RED UI application allows the HTML web page to process JavaScript.
<script>{{{payload.script}}}</script>
Question: One way to train the Watson Visual Recognition service is to feed it positive images of what you want to predict, say, domestic cats, and negative images, say, dogs, lions, birds, and other animals, that you don’t want to predict.
Question: Which type of node is a one-way communication link that can update a web page every time the Machine Learning service makes a prediction?
Question: If the Node-RED application for Lab 5 processes nine machine learning models, how many machine learning nodes are required?
Question: The Visual Recognition service is simply an API that you can connect to by using http input and output nodes.
Final Exam
Question: If you split your data by 70%, 20%, 10%, which percentage is used for the training data?
Question: You can use digital signal processing to create numbers for audio files so that you can compare the audio files and then use these numbers as the basis of a machine learning model.
Question: In which Node-RED node do you set the Mode field to run a prediction?
Question: When you create new projects in Watson Studio, a machine learning service is automatically associated with the new project.
Question: The Naïve Bayes estimator does not work with data that contains negative numbers.
Question: In this course, you use Cloud Object Storage to store data files, such as CSV files.
Question: In Lab 3 of this course, you ran a hardcoded prediction test by using the Build Payload Values function node. In the code for this function node, why do the columns start with column 2 rather than column 1?
Question: Why is it necessary to deploy the Python Flask digital signal processing application in Lab 3?
Question: After you deploy a model in Watson Studio, you see a deployment ID. You use this ID to call the predictor in Node-RED or another application.
Question: The Watson Visual Recognition service can be trained to recognize both audio and images.
We hope this post helped you find the correct answers to the Introduction to Machine Learning with Sound quiz. If it did, make sure to bookmark our site for more Course Quiz Answers.

If the options are not the same, let us know by leaving a comment below.

In our experience, we suggest you enroll in this course and gain some new skills from professionals, completely free; we assure you it will be worth it.

This course is available on Cognitive Class for free. If you are stuck anywhere in a quiz or graded assessment, just visit Queslers to get all Quiz Answers and Coding Solutions.
More Courses Quiz Answers >>
Building Cloud Native and Multicloud Applications Quiz Answers