Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), Natural Language Generation (NLG), Neural Networks and Deep Learning (DL): these acronyms all fall under the AI umbrella of making machines smarter. That can be as rudimentary as spelling and grammar checking or as complex as autonomous vehicles.

Artificial Intelligence – making machines smarter

Over the last couple of years, I have presented on AI, ML and Bias, and this blog post encompasses that presentation. AI is not a secret salt that can be poured over a problem or a dataset in the hope it will spit out insights. Start by dreaming and asking, ‘What if AI could help us do BLANK?’ Please don’t expect BLANK to appear only after you have sprinkled AI on top. I hope that after reading this post, you are inspired by what’s possible in AI, understand potential use cases, and reach out personally to explore AI together!

What is Machine Learning? In the not-so-distant past, computer scientists would build up complex algorithms to compute some output based on an array of inputs. At the time of writing, the real-estate market in the United States is exploding, so let’s use an example from the website Zillow. Zillow has its Zestimate price, which predicts the most realistic sale price of a house given factors like house specs, assessed/listing price, recent nearby sales, etc. In the past, this algorithm may have been built like the example below:

// A hand-coded pricing algorithm: every rule below was chosen explicitly by an engineer.
function getHouseZestimate(house) {
  var houseValue = 0;
  if (house.sqft > 1000) houseValue += 100000;
  if (house.sqft > 2000) houseValue += 130000;
  if (house.sqft > 3000) houseValue += 150000;
  if (house.bathrooms == 1) houseValue += 10000;
  if (house.bathrooms == 2) houseValue += 13000;
  if (house.bathrooms >= 3) houseValue += 18000;
  if (house.pool) houseValue += 50000;
  return houseValue;
}

As companies like Zillow have collected more data, maintaining a single algorithm like this in explicit code quickly becomes unwieldy for an engineer. Let’s say we wanted to add the cost of goods to rebuild, material of the roof, current insurance rates, local school test scores, website traffic views/saves, and on and on. The complexity of the algorithm would be overwhelming, but this is where we can introduce Machine Learning (ML) to build its own model for predicting the Zestimate.

Machine Learning – when explicit programming is too rigid or impractical, use algorithms to recognize patterns in data and make predictions.

Let’s start with a little data science grounding for those new to the field. The inputs in the example above are all considered the ‘features’ of the data. There is a whole discipline (Feature Engineering) focused on searching through and manipulating these features/inputs to discover the ones that are missing or hidden and make sure they are included as part of the input data. The output is what you’re looking for, in this case the Zestimate. In Machine Learning, the output is called the ‘label’.
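
As a rough illustration (the field names here are made up for this example), a single row of labelled data for the Zestimate case might look like the object below, where every property except price is a feature and price is the label:

// One labelled example: every field except 'price' is a feature; 'price' is the label.
var labelledHouse = {
  sqft: 1850,        // feature
  bathrooms: 2,      // feature
  pool: false,       // feature
  price: 325000      // label: what the house actually sold for
};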

Machine Learning Depends on Data

Machine Learning depends on existing data, lots of it. The data must be prepared such that you have all your ‘features’ followed by the ‘label’. This historical data asserts that all of these inputs (features) led to their respective outputs (labels).

The data must already be ‘labelled’ for the ML process to build a model. The process uses these historical results to build a model such that when the next set of inputs (features) arrives, it can predict the output (label). Building a model, also referred to as training the model, requires writing code that does three things (a rough sketch follows the list below):

  1. Point to the data set and explicitly call out which columns are the features and which one is the label
  2. Specify which portion of the data will be used for training and which will be used for testing
  3. Specify which machine learning algorithm will be used in the trainer
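
As a loose sketch (the structure and property names below are hypothetical, not any particular ML library’s API), that training code often boils down to a configuration like this:

// Hypothetical training configuration covering the three steps above.
var trainingConfig = {
  dataset: 'house-sales.csv',                     // 1. point to the data set
  featureColumns: ['sqft', 'bathrooms', 'pool'],  // 1. which columns are the features
  labelColumn: 'price',                           // 1. which column is the label
  trainTestSplit: 0.9,                            // 2. 90% for training, 10% held out for testing
  algorithm: 'regression'                         // 3. which algorithm family to try
};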

If we dive into #2, the majority of your data will be used for training your model, but you will also partition out a sliver that is saved for testing the model after it’s built. So let’s say you take 90% of your data to build an ML model, then come back with the other 10% (withholding the labels) and see how well it predicts the correct labels. This helps evaluate how good the model is at predicting the right value.
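
Here’s a minimal sketch of that 90/10 split in plain JavaScript, assuming allRows is an array of labelled house records like the one shown earlier:

// Shuffle a copy of the rows (Fisher-Yates), then cut them into training and testing sets.
function splitTrainTest(allRows, trainRatio) {
  var shuffled = allRows.slice();
  for (var i = shuffled.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = shuffled[i]; shuffled[i] = shuffled[j]; shuffled[j] = tmp;
  }
  var cutoff = Math.floor(shuffled.length * trainRatio);
  return { trainSet: shuffled.slice(0, cutoff), testSet: shuffled.slice(cutoff) };
}

// Keep 90% of the rows for training and hold out 10% for testing.
var split = splitTrainTest(allRows, 0.9);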

Next, step #3 involves picking the right algorithm for the given use case. Broadly, there are several core use cases of Machine Learning, and there are many respective algorithms for each one. I’ve linked my blog article on Use Cases for Artificial Intelligence and Machine Learning. As for picking the right algorithm, it’s a little trial and error, as each algorithm may produce better or worse results based on your unique dataset. As this field continues to evolve, tools like Microsoft’s AutoML can iterate through many algorithms automatically, report how well each one performed, and use the best algorithm.
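
To make that trial and error concrete, here is a toy sketch of what an AutoML-style loop does conceptually. The two ‘trainers’ below are deliberately trivial stand-ins rather than real ML algorithms, and split.trainSet / split.testSet come from the split sketched above:

// Trainer 1: always predict the average training price.
function trainAveragePrice(rows) {
  var avg = rows.reduce(function (sum, r) { return sum + r.price; }, 0) / rows.length;
  return function (house) { return avg; };
}

// Trainer 2: predict price as the average price-per-square-foot times sqft.
function trainPricePerSqft(rows) {
  var rate = rows.reduce(function (sum, r) { return sum + r.price / r.sqft; }, 0) / rows.length;
  return function (house) { return rate * house.sqft; };
}

// Score a model by its average prediction error on the held-out test set.
function meanAbsoluteError(predict, rows) {
  var total = rows.reduce(function (sum, r) { return sum + Math.abs(predict(r) - r.price); }, 0);
  return total / rows.length;
}

// Try each candidate trainer and keep whichever predicts the test set best.
var best = null;
[trainAveragePrice, trainPricePerSqft].forEach(function (train) {
  var predict = train(split.trainSet);
  var error = meanAbsoluteError(predict, split.testSet);
  if (best === null || error < best.error) best = { predict: predict, error: error };
});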

Train your ML Model

The last step in the process is to “train” or “fit” your model, in which the program generates its own unique function based on the data. Back to the Zestimate example, the program builds its own version of the programming example above for predicting house prices. The interplay of all the input variables (features) that leads to the output (label) is too complex to present as a nice clean function, so with these ‘black-box’ models it is often difficult to understand the AI magic that’s happening inside. This unfortunately has the potential to introduce bias, since the model predicts output labels based on the historical data that was used to build it. While that could be great for predicting sale prices, it could be discriminatory if used in a hiring process, e.g. “Amazon scraps secret AI recruiting tool that showed bias against women”.
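
To give a feel for what ‘fitting’ produces, here is a deliberately tiny sketch: a one-feature linear fit (price versus sqft) that derives its coefficients from the training data and hands back a prediction function, a stripped-down stand-in for the machine writing its own getHouseZestimate. Real models combine many more features and are far harder to inspect, which is where the black-box problem comes from:

// Fit price = slope * sqft + intercept by ordinary least squares on the training rows.
function fitLinearModel(rows) {
  var n = rows.length;
  var sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
  rows.forEach(function (r) {
    sumX += r.sqft;
    sumY += r.price;
    sumXY += r.sqft * r.price;
    sumXX += r.sqft * r.sqft;
  });
  var slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
  var intercept = (sumY - slope * sumX) / n;
  // The 'learned' function: its coefficients came from the data, not from an engineer.
  return function (house) {
    return slope * house.sqft + intercept;
  };
}

var predictZestimate = fitLinearModel(split.trainSet);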

Now that you have your black-box model, you can incorporate it as a function call in your applications. As new data arrives, use those inputs as the features to your function and it will output (predict) a value that you can then use to make decisions. It’s important to consider how data will change over time; you may need to rebuild your model as new factors are introduced, the economy changes, and so on.
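
In practice that is just a normal function call. Here is a small, hypothetical usage sketch using the toy predictZestimate function fitted above; the new listing and the $400,000 threshold are made up for illustration:

// A new listing arrives: pass its features to the model and act on the prediction.
var newListing = { sqft: 2400, bathrooms: 3, pool: true };
var predictedPrice = predictZestimate(newListing);

if (predictedPrice > 400000) {
  console.log('Flag listing for premium marketing, predicted at $' + predictedPrice.toFixed(0));
}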

Check out the other articles in this series on AI and ML on the blog by searching for the tag AI.

Hope you enjoyed a quick introduction to AI and ML, dive into the other posts or reach out if you want to chat about what’s possible!

Photo by Eiliv-Sonas Aceron on Unsplash