Onica’s Containers Practice Lead Mency Woo recently spoke at a Meetup in Victoria, BC about AWS DeepLens. This blog summarizes some of the information from that session and is part one of a two-part series.
The Difference Between Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL)
For a lot of people, the terms AI, ML, and DL cause confusion about how they’re related. Artificial intelligence is the umbrella term here: machine learning is a subset of AI, and deep learning is a subset of machine learning. AI broadly describes a machine that exhibits human-like intelligence, as gauged by tests such as the Turing Test, whereas machine learning describes a machine that learns to act without explicit instruction. Deep learning, in turn, uses deeply layered, multi-stage decision making that aims to mimic the human brain.
On AWS, most services around AI, ML, and DL involve data processing at different levels and through different methods. With artificial intelligence, this takes the form of an input from the environment, which triggers some action to be taken and then produces an output.
In machine learning, this process evolves. There is still an input and an action, but the output is a prediction. In addition, the action is broken into feature extraction and classification: a defined component is measured and matched against criteria that help label it.
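The feature extraction and classification steps above can be sketched in plain Python. This is an illustrative toy, not AWS code: the features (mean and range) and the nearest-centroid classifier are assumptions chosen for simplicity, standing in for whatever measurements and matching criteria a real model would use.

```python
# Toy feature extraction + classification (illustrative only, not AWS code).
# We "extract" two simple features from each raw sample and classify new
# samples by the nearest labeled centroid in feature space.

def extract_features(sample):
    """Reduce a raw sample (a list of numbers) to measurable features."""
    return (sum(sample) / len(sample), max(sample) - min(sample))  # mean, range

def train_centroids(labeled_samples):
    """Average the feature vectors per label to form one centroid each."""
    sums, counts = {}, {}
    for sample, label in labeled_samples:
        mean, rng = extract_features(sample)
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += mean
        s[1] += rng
        counts[label] = counts.get(label, 0) + 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(sample, centroids):
    """Predict the label whose centroid is closest to the sample's features."""
    mean, rng = extract_features(sample)
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - mean) ** 2 +
                               (centroids[lbl][1] - rng) ** 2)

training = [([1, 2, 3], "low"), ([2, 2, 2], "low"),
            ([8, 9, 10], "high"), ([9, 9, 9], "high")]
centroids = train_centroids(training)
print(classify([1, 3, 2], centroids))   # → low
print(classify([10, 8, 9], centroids))  # → high
```

The point of the sketch is the shape of the pipeline, not the specific math: raw input is reduced to measured features, and classification matches those measurements to a label learned from prior samples.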
This is the basis on which machine learning operates. For machine learning to work, there must be many samples of inputs and outputs so that extraction and classification can formulate a model. This model can then predict an outcome based on the samples it has processed. The process is iterative, and it can have tremendous value for large pools of data that need to be analyzed. Deep learning builds on this further because it offers a scalability that machine learning alone doesn’t have. Deep learning still has inputs, outputs that are predictions, and feature extraction and classification; however, in deep learning this process isn’t linear. Instead, feature extraction and classification repeat across multiple layers before arriving at a prediction.
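The repeated, non-linear extraction described above can be illustrated with a tiny hand-wired network. This is a sketch only: the two layers, their weights, and the sigmoid non-linearity are arbitrary assumptions, not a trained model, and are here just to show each layer re-extracting features from the previous layer's output.

```python
import math

# Toy illustration of deep learning's layered, non-linear structure.
# The weights below are arbitrary (not trained); each layer transforms
# the previous layer's output before the final prediction is produced.

def layer(inputs, weights, biases):
    """One layer: weighted sums passed through a sigmoid non-linearity."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

def predict(inputs):
    hidden = layer(inputs, [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0])  # layer 1
    output = layer(hidden, [[2.0, 2.0]], [-2.0])                    # layer 2
    return output[0]  # a value in (0, 1), read as a probability

score = predict([0.5, 0.8])
print(score)
```

Contrast this with the single extract-then-classify pass of classical machine learning: here the intermediate representation is itself fed through another round of weighted, non-linear extraction, which is what lets deep models scale to problems where hand-picked features fall short.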
Quality Data and its Impact on Machine Learning Success
As previously noted, the main applications for ML and DL on AWS revolve around data. But as is true with any data processing, the success of these features is heavily dependent on the integrity and quality of the data being used. If your data is not high quality, your model may not have the accuracy necessary to be successful. And if you don’t have the right data to train your model, it won’t magically be able to derive the data you need.
There are a few important steps one can take to ensure data is “good.” One feature of good data is labeling. Labeled data indicates the features within the data, as well as the potential outcomes associated with it; in essence, labeling gives the model context. Preprocessing is also important for data integrity: by cleaning, formatting, and organizing your data, you can ensure it is accurate and standardized. Finally, there’s data splitting, which is the method of training your model. Once the model’s architecture is defined, you can input the data set, which is split into a training set and a validation set. Iterations with the training set allow the model to learn, while testing the model against the validation set lets you measure accuracy without overfitting, since the validation data was never used for training. For best results, AWS recommends using 70% of a data set to train a model and 30% to validate it.
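The 70/30 split described above can be sketched with the standard library alone. The dataset here is a stand-in (100 synthetic labeled samples), and the `split_dataset` helper is a hypothetical name, not an AWS API.

```python
import random

# Sketch of a 70/30 train/validation split (the dataset below is synthetic,
# and split_dataset is an illustrative helper, not an AWS API).

def split_dataset(data, train_fraction=0.7, seed=42):
    """Shuffle a copy of the dataset, then split it at the given fraction."""
    shuffled = data[:]                      # copy so the original stays intact
    random.Random(seed).shuffle(shuffled)   # seeded for reproducible splits
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

dataset = [(i, i % 2) for i in range(100)]  # 100 (sample, label) pairs
train, validation = split_dataset(dataset)
print(len(train), len(validation))  # → 70 30
```

Shuffling before splitting matters: if the data is ordered (say, by label or by date), a straight slice would give the model a training set that doesn't represent the validation set, undermining the accuracy check.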
For more information on machine learning, see our webinar on predictive analytics and machine learning and keep an eye out for part two of this blog.