Geek blog series - part 3
In this ‘Geek blog series’, our geeks share their expertise and insights on the latest AI and analytics topics. Below you will find the third part of this series.
Image classification? Use DLPy
By Jaimy van Dijk
Everybody is busy nowadays. Some might say that we need better work-life management. I say, let’s focus on automating some of our work! Take for instance all kinds of checking tasks, such as checking whether the company logo is used correctly in vlogs and presentations, or whether all products are positioned straight on the packaging line so that the machine can pack them correctly. The good news is that you can automate these tasks with computer vision: extracting information from images using SAS DLPy, which stands for SAS Deep Learning in Python. DLPy allows users to build deep learning models through friendly high-level APIs.
To show you how this works, we built a deep learning model for image classification based on Convolutional Neural Networks (CNNs), called Geek/No Geek (also read this blog). We taught the model that geeks often dress a bit strangely: they wear striking glasses, suspenders, bowties and other eye-catching items. Using these attributes, the model can classify people as geeks or not.
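To get a feel for what a CNN actually computes, here is a minimal NumPy sketch of the convolution operation at the heart of such a network. DLPy's high-level API builds and trains stacks of these layers for you; the image and filter values below are made up purely for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a grayscale image with one filter,
    the basic building block of a convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output cell is the filter applied to one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image" with a dark left half and a bright right half
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# A vertical-edge filter: it responds strongly where dark meets bright,
# the kind of low-level feature that early CNN layers learn to detect
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

features = conv2d(image, kernel)
print(features)  # the middle column lights up: that is where the edge is
```

A trained network stacks many such filters, learned from data rather than hand-crafted, and combines their feature maps through pooling and dense layers to arrive at a classification such as geek or no geek.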
Using model interpretation techniques, we can see that the model focuses on the face, presumably to find glasses, and on bright colors. Of course, you need to monitor the model over time: since fashion changes, the model will need to change as well.
I wish you good luck in training your own model for image classification. Questions? Don’t hesitate to reach out to me via LinkedIn.
LIME and ICE: The Secret Ingredients in your AI Cocktail
By Véronique Van Vlasselaer
The world of Machine Learning and Artificial Intelligence is growing more rapidly than ever before. Creative AI ideas sprout at an unbeatable pace, and organizations are exploring the emerging opportunities of Machine Learning and AI. Without us realizing it, machines are already making millions of automated decisions without any human intervention, facilitating and augmenting people’s day-to-day lives. But then the question arises: can we trust decisions made by AI? The most powerful algorithms that steer our daily lives are typically so-called black box models. These are models that we as human beings cannot easily understand. The rationale and logic behind each decision are hidden in (often) billions of mathematical operations, which are hard to unravel. On the other hand, we have white box models, where the logic behind each decision is easily explainable. Unfortunately, the performance of those white box models is often unsatisfactory. It seems that the modern data scientist has to make an impossible choice: a choice between accuracy and interpretability.
But is that true? Do we have to choose between the accuracy of a black box model on the one hand and explainability, fairness, accountability and trustworthiness of white box models on the other hand? Can’t we have both: explainable and accurate models? Yes, we can. Nowadays, lots of academic research is centered around the interpretability of black box models. Typically, interpretability techniques can be classified into two categories:
(1) techniques to explain the prediction of each individual observation.
(2) techniques that reveal the impact of each of the inputs or variables to the prediction.
Within SAS you have many options for this, including LIME and ICE.
On the individual observation level, you can use LIME, which stands for Local Interpretable Model-agnostic Explanations. For each individual prediction it generates an explanation: why did this observation receive that prediction? More specifically, for each observation LIME fits a local model using the predicted value of that observation and of nearby observations. The local model is a white box model, typically a LASSO regression, and LIME reports the coefficients of that localized linear model as the explanation.
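The idea can be sketched in a few lines of NumPy. This is not the SAS implementation: the black-box model below is a made-up stand-in for a trained classifier, and for simplicity the local surrogate is a weighted least-squares fit rather than a LASSO (a LASSO would add an L1 penalty to drive small coefficients to zero).

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_predict(X):
    """Stand-in for the black box: a fixed nonlinear scoring function
    (in reality this would be the trained model being explained)."""
    return 1 / (1 + np.exp(-(3.0 * X[:, 0] - 1.0 * X[:, 1])))

def lime_explain(predict, x, n_samples=500, width=0.5):
    """Sketch of LIME for one observation x:
    1. perturb x to generate nearby observations,
    2. weight them by proximity to x,
    3. fit a local linear surrogate on the black-box predictions."""
    X = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = predict(X)
    # proximity weights: nearby points matter more than distant ones
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # weighted least squares with an intercept column
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # one local coefficient per input variable

x0 = np.array([0.2, -0.1])           # the observation to explain
coefs = lime_explain(black_box_predict, x0)
print(coefs)
```

The signs and magnitudes of the local coefficients tell you which inputs pushed this particular prediction up or down, which is exactly the per-observation explanation LIME reports.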
Read also my blog about the GeekNoGeek app that we developed to calculate someone’s geekiness score (i.e., a probability of how geeky you are) based on their profile picture.
To explain the impact of the predictors on the prediction, you can use ICE, which stands for Individual Conditional Expectation. ICE analyzes what would happen to the predicted value of one observation if we change the value of one of the input variables. Take for instance color: what happens to the prediction if you adjust the color intensity of your picture?
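That sweep over one input, holding everything else fixed, can be sketched directly. Again the black-box model below is a toy stand-in (the real one would be the trained CNN scored through DLPy), and the "color intensity" feature is hypothetical.

```python
import numpy as np

def black_box_predict(x):
    """Stand-in black box scoring one feature vector
    (feature 0 plays the role of 'color intensity')."""
    return 1 / (1 + np.exp(-(2.0 * x[0] - 0.5 * x[1])))

def ice_curve(predict, x, feature, grid):
    """Individual Conditional Expectation for one observation:
    vary a single feature over the grid, keep all others fixed,
    and record the prediction at each grid value."""
    curve = []
    for v in grid:
        x_mod = x.copy()
        x_mod[feature] = v
        curve.append(predict(x_mod))
    return np.array(curve)

x0 = np.array([0.3, 1.0])            # one observation
grid = np.linspace(0.0, 1.0, 5)      # values to sweep for feature 0
curve = ice_curve(black_box_predict, x0, 0, grid)
print(curve)  # how the prediction moves as 'color intensity' increases
```

Plotting such a curve per observation shows whether, and how strongly, that input drives the prediction for that specific case, which is exactly the question ICE answers.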