Geek blog series - part 3
29 July 2019
By the editorial team

In this ‘Geek blog series’, our ‘Geeks’ share their expertise and insights on the latest AI and Analytics topics. Below you will find the third part of this series.

Image classification? Use DLPy

By Jaimy van Dijk

Everybody is busy nowadays. Some might say that we need better work-life management. I say, let’s focus on automating some of our work! Take for instance all kinds of checking tasks, such as checking whether the company logo is used correctly in vlogs and presentations, or checking whether all products are positioned straight on the packaging line so that the machine can pack them correctly. The good news is that you can automate these tasks with computer vision: extracting information from images. One way to do this is with SAS DLPy, which stands for SAS Deep Learning in Python and allows users to build deep learning models through friendly high-level APIs.

To show you how this works, we built a deep learning model based on Convolutional Neural Networks (CNNs) for image classification, called Geek/No Geek (also read this blog). We taught the model that geeks often dress a bit strangely: they wear striking glasses, suspenders, bowties and other eye-catching items. Using these attributes, the model can classify people as geeks or not.
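To give an idea of what this looks like in code, here is a minimal DLPy sketch for a small binary image classifier. The CAS host, the image folder and the network layout are illustrative assumptions, not the actual Geek/No Geek setup.

import swat
from dlpy import Sequential
from dlpy.layers import InputLayer, Conv2d, Pooling, Dense, OutputLayer
from dlpy.images import ImageTable

# Connect to a running SAS Viya (CAS) server; host and port are placeholders
conn = swat.CAS('cas-server.example.com', 5570)

# Load labelled images; each subfolder name ('geek', 'no_geek') becomes a class label
images = ImageTable.load_files(conn, path='/data/geek_nogeek')
images.resize(width=224, height=224)
train, test = images.two_way_split(test_rate=20)

# A small convolutional network for the two-class problem
model = Sequential(conn, model_table='geek_cnn')
model.add(InputLayer(n_channels=3, width=224, height=224))
model.add(Conv2d(n_filters=32, width=3, act='relu'))
model.add(Pooling(width=2))
model.add(Conv2d(n_filters=64, width=3, act='relu'))
model.add(Pooling(width=2))
model.add(Dense(n=128, act='relu'))
model.add(OutputLayer(n=2))

# Train on the labelled images and score the hold-out set
model.fit(data=train, inputs='_image_', target='_label_', max_epochs=10, lr=0.001)
model.predict(data=test)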

Using model interpretation techniques, we can see that the model focuses on the face, presumably to find glasses, and on bright colors. Of course, you need to monitor the model over time. Since fashion changes, the model will need to change over time as well.
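One way to do this kind of check is an occlusion-based heat map. A sketch, assuming the model and test set from the example above and a DLPy version that exposes heat_map_analysis and plot_heat_map on the Model object:

# Occlude parts of each image and measure how the prediction changes;
# the regions with the largest drop are what the model relies on
model.heat_map_analysis(data=test, mask_width=56, mask_height=56, step_size=28)
model.plot_heat_map(0)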

I wish you good luck in training your own model for image classification. Questions? Don’t hesitate to reach out to me via LinkedIn.

LIME and ICE: The Secret Ingredients in your AI Cocktail

By Véronique Van Vlasselaer

The world of Machine Learning and Artificial Intelligence is growing more rapidly than ever before. Creative AI ideas sprout at an unbeatable pace, and organizations are exploring the new opportunities that Machine Learning and AI offer. Without us realizing it, machines are already making millions of automated decisions without any human intervention, facilitating and augmenting people’s day-to-day lives. But then the question arises: can we trust decisions made by AI? The most powerful algorithms that steer our daily lives are typically so-called black box models: models that we as human beings cannot easily understand. The rationale and logic behind each decision are hidden in (often) billions of mathematical formulas, which are hard to unravel. On the other hand, we have white box models, where the logic behind each decision is easily explainable. Unfortunately, the performance of those white box models is often unsatisfactory. It seems that the modern data scientist has to make an impossible choice: a choice between accuracy and interpretability.

But is that true? Do we have to choose between the accuracy of a black box model on the one hand and explainability, fairness, accountability and trustworthiness of white box models on the other hand? Can’t we have both: explainable and accurate models? Yes, we can. Nowadays, lots of academic research is centered around the interpretability of black box models. Typically, interpretability techniques can be classified into two categories:
(1) techniques that explain the prediction for each individual observation;
(2) techniques that reveal the impact of each input or variable on the prediction.
Within SAS you have many options for this, including LIME and ICE.

On the individual observation level, you can use LIME, which stands for Local Interpretable Model-agnostic Explanations. For each individual prediction it generates an explanation: why did an observation receive that prediction? More specifically, for each observation LIME fits a local model on the predicted values of that observation and nearby observations. The local model is a white box model, typically a LASSO regression. LIME then reports the coefficients (parameter estimates) of the variables in this localized linear model.
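SAS handles this for you, but the mechanics can be sketched in a few lines of plain Python. Everything below (the perturbation scale, the Gaussian kernel, the LASSO penalty and the predict_fn placeholder for the black box model) is an illustrative assumption, not the SAS implementation:

import numpy as np
from sklearn.linear_model import Lasso

def lime_style_explanation(predict_fn, x, n_samples=500, scale=0.5, alpha=0.01):
    # Perturb the observation to create a local neighbourhood around x
    neighbourhood = x + np.random.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y_local = predict_fn(neighbourhood)  # black box scores for the neighbourhood

    # Weight neighbours by proximity to x (Gaussian kernel)
    distances = np.linalg.norm(neighbourhood - x, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))

    # Fit the white box surrogate: a LASSO regression whose coefficients
    # explain which inputs drove this particular prediction
    surrogate = Lasso(alpha=alpha)
    surrogate.fit(neighbourhood, y_local, sample_weight=weights)
    return surrogate.coef_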

Also read my blog about the GeekNoGeek App that we developed to calculate someone’s geekiness score (i.e., a probability of how geeky you are) based on a profile picture.

To explain the impact of the predictors on the prediction, you can use ICE, which stands for Individual Conditional Expectation. ICE analyses what would happen to the predicted value of one observation if we change the value of one of the input variables. Take for instance color: what happens to the result if you adjust the color intensity of your picture?
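Again a minimal, library-agnostic sketch of the idea (the feature index and grid are illustrative, and predict_fn stands in for any trained black box model):

import numpy as np

def ice_curve(predict_fn, x, feature_index, grid):
    # Repeat the single observation once per grid value
    rows = np.tile(x, (len(grid), 1))
    # Sweep one input over the grid while all other inputs stay fixed
    rows[:, feature_index] = grid
    # The resulting predictions form the ICE curve for this observation
    return predict_fn(rows)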

Are you struggling with explaining the results of your black box models and need some help? Feel free to reach out to me via LinkedIn or Twitter.

Learn more

Want to learn more? Hear our stories during the Talk of the Geeks at the World Summit AI on Oct. 10 in Zaandam. The number of seats is limited, so please let us know in advance to secure your seat. If you can’t make it on Oct. 10, join us at Analytics Experience from Oct. 21-23 in Milan.

 
