Geek blog series - part 3
29 July 2019 [04:07], 302 views
By the Editors


In this ‘Geek blog series’, our geeks share their expertise and insights on the latest AI and Analytics topics. Below you will find the third part of the series.

Image classification? Use DLPy

By Jaimy van Dijk

Everybody is busy nowadays. Some might say we need better work-life management. I say: let’s focus on automating some of our work! Take, for instance, all kinds of checking tasks, such as verifying that the company logo is used correctly in vlogs and presentations, or checking whether all products are straight on the packaging line so that the machine can pack them correctly. The good news is that you can automate these tasks with computer vision, which means extracting information from images, using SAS DLPy, which stands for SAS Deep Learning in Python. DLPy lets users build deep learning models through friendly high-level APIs.

To show you how this works, we built a deep learning model for image classification based on Convolutional Neural Networks (CNNs), called Geek/No Geek (also read this blog). We taught the model that geeks often dress a bit strangely: they wear striking glasses, suspenders, bowties and other eye-catching items. Using these attributes, the model can classify people as geeks or not.
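To give an impression of DLPy’s high-level API, here is a minimal sketch of how such a CNN classifier could be defined. The host, port, table name and layer sizes are all hypothetical, and this is not the actual Geek/No Geek model; it only illustrates the general shape of a DLPy model definition, which requires a running SAS Viya/CAS server.

```python
# A minimal sketch of a CNN image classifier in SAS DLPy. The layer
# sizes and names are illustrative assumptions, not the real model.
def build_geek_cnn(conn, n_classes=2):
    """Define a small CNN on an existing swat.CAS connection `conn`."""
    # Imports live inside the function so the sketch can be read (and
    # the function defined) without DLPy installed.
    from dlpy import Sequential
    from dlpy.layers import InputLayer, Conv2d, Pooling, Dense, OutputLayer

    model = Sequential(conn, model_table='geek_cnn')
    model.add(InputLayer(3, 224, 224))            # RGB input images
    model.add(Conv2d(n_filters=8, width=7))       # low-level visual features
    model.add(Pooling(width=2))                   # downsample
    model.add(Conv2d(n_filters=16, width=3))      # higher-level features
    model.add(Pooling(width=2))
    model.add(Dense(n=16))
    model.add(OutputLayer(n=n_classes, act='softmax'))  # Geek / No Geek
    return model

# Usage (requires a running SAS Viya / CAS server; host and port are
# placeholders):
#   import swat
#   conn = swat.CAS('my-cas-host.example.com', 5570)
#   model = build_geek_cnn(conn)
```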

Using model interpretation techniques, we can see that the model focuses on the face, presumably to find glasses, and on bright colors. Of course, you need to monitor the model over time: since fashion changes, the model will need to change as well.

I wish you good luck in training your own model for image classification. Questions? Don’t hesitate to reach out to me via LinkedIn.

LIME and ICE: The Secret Ingredients in your AI Cocktail

By Véronique Van Vlasselaer

The world of Machine Learning and Artificial Intelligence is growing more rapidly than ever before. Creative AI ideas sprout at an unbeatable pace, and organizations are exploring the emerging opportunities of Machine Learning and AI. Without us realizing it, machines are already making millions of automated decisions without any human intervention, facilitating and augmenting people’s day-to-day lives. But then the question arises: can we trust decisions made by AI? The most powerful algorithms that steer our daily lives are typically so-called black box models: models that we as human beings cannot easily understand. The rationale and logic behind each decision are hidden in (often) billions of mathematical formulas, which are hard to unravel. On the other hand, we have white box models, where the logic behind each decision is easily explained. Unfortunately, the performance of white box models is often unsatisfactory. It seems that the modern data scientist has to make an impossible choice: a choice between accuracy and interpretability.

But is that true? Do we have to choose between the accuracy of a black box model on the one hand and explainability, fairness, accountability and trustworthiness of white box models on the other hand? Can’t we have both: explainable and accurate models? Yes, we can. Nowadays, lots of academic research is centered around the interpretability of black box models. Typically, interpretability techniques can be classified into two categories:
(1) techniques to explain the prediction of each individual observation.
(2) techniques that reveal the impact of each of the inputs or variables to the prediction.
Within SAS you have many options for this, among which LIME and ICE.

At the individual observation level, you can use LIME, which stands for Local Interpretable Model-agnostic Explanations. For each individual prediction it generates an explanation: why did this observation receive this prediction? More specifically, LIME fits a local model for each observation, using the predicted values of that observation and nearby observations. The local model is a white box model, typically a LASSO regression. LIME then reports the coefficients (the parameter estimates of the variables) of this localized linear regression model.
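The mechanics can be sketched in a few lines of plain Python. This is a toy, model-agnostic LIME-style explanation, not SAS’s implementation: we perturb one observation, weight the perturbed points by their proximity to it, and fit a weighted LASSO surrogate to the black-box predictions. The kernel width, perturbation scale and `lime_explain` helper are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# "Black box": a random forest trained on a simple known relationship,
# y = 3*x0 - 2*x1 (x2 is irrelevant noise).
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def lime_explain(model, x, n_samples=1000, scale=0.5, kernel_width=0.75):
    """Fit a proximity-weighted LASSO around observation x; return coefficients."""
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))  # local perturbations
    preds = model.predict(Z)                                   # black-box predictions
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)         # proximity kernel
    surrogate = Lasso(alpha=0.01).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

coef = lime_explain(black_box, X[0])
# The local coefficients should be clearly positive for feature 0,
# clearly negative for feature 1, and near zero for the noise feature.
```

The surrogate’s coefficients are the “explanation” for this one observation: they tell you which inputs drove the black-box prediction locally, even though the forest itself has no global linear form.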

Read also my blog about the GeekNoGeek App that we developed to calculate someone’s geekiness score (i.e., a probability of how geeky you are) based on their profile picture.

To explain the impact of the predictors on the prediction, you can use ICE, which stands for Individual Conditional Expectation. ICE analyzes what would happen to the predicted value of one observation if we changed the value of one of the input variables. Take color, for instance: what happens to the result if you adjust the color intensity of your picture?

Are you struggling with explaining the results of your black box models and need some help? Feel free to reach out to me via LinkedIn or Twitter.

 
