Global Data Science Forum

Data Science Community News | Volume 1, Issue 4

By Christina Howell posted 30 days ago

  
IBM Data Science Community Newsletter
November 2019 | Volume 1, Issue 4


Spotlight


Technique Makes it Easier for AI to Understand Video

Understanding actions in video is complex, but MIT and IBM teamed up to produce the Temporal Shift Module (TSM), which reduces training time from over 49 hours to under 15 minutes! This development could fundamentally change the economics of applying AI to comprehend videos. In what application would you use AI to better understand video content?
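For intuition, the core idea behind a temporal shift is that a fraction of each frame's feature channels is shifted one step backward or forward along the time axis, letting ordinary per-frame operations mix temporal information at essentially zero extra compute. Below is a minimal illustrative sketch of that channel-shift idea, not the paper's implementation; the `(time, channels)` layout and the 1/8 shift fraction are assumptions for the example:

```python
import numpy as np

def temporal_shift(x, shift_frac=8):
    """Shift a fraction of channels along the time axis.

    x: activations shaped (time, channels), e.g. per-frame features.
    The first 1/shift_frac of channels is shifted one step back in
    time, the next 1/shift_frac one step forward, and the remaining
    channels are left in place. Vacated positions are zero-filled.
    """
    t, c = x.shape
    fold = c // shift_frac
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # pull from the next frame
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # pull from the previous frame
    out[:, 2 * fold:] = x[:, 2 * fold:]              # untouched channels
    return out
```

After the shift, a plain 2D convolution over each frame already sees information from neighboring frames, which is what makes the approach so cheap.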


AI Skills


New! Certification for Practicing Data Scientists

There are few offerings available for professional data scientists to advance their skills. Find out about IBM's new AI Workflow Certification. Read more

Interpretable Machine Learning Book

Machine learning models are commonly black boxes that are difficult to explain to decision makers. Christoph Molnar published his excellent approach to interpretable ML, making results more understandable. Read more


Tools & Libraries


Apache OpenWhisk

Are you small? Loosely coupled? Carrying your own piece of application logic? You might be a microservice. Apache OpenWhisk makes event handling for services easy. Read more
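To give a flavor of the programming model: an OpenWhisk Python action is a file exposing a `main` function that receives a dict of parameters and returns a JSON-serializable dict. The sketch below follows that documented convention; the greeting logic is our own toy example, and deployment (e.g. via the `wsk` CLI) is not shown:

```python
def main(params):
    """A minimal OpenWhisk-style Python action.

    The platform invokes main() with the event's parameters as a
    dict and expects a JSON-serializable dict as the result.
    """
    name = params.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}
```

Because each action is just a small stateless function, the platform can scale invocations independently in response to events.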

Netflix Open Sources Polynote to Make Data Science Notebooks Better

Polynote is a multi-language notebook development and exploration environment. It enables editing and manages the challenges of hidden states that hinder development in traditional notebooks. Read more


Solutions & Products


Visualizing the Personality Profile of a Film Character Using Python & IBM Watson

See a hands-on walk-through of how to analyze the personalities of movie characters with Watson's personality AI algorithms. Read more

Infrastructure for Data Science

The infrastructure challenges posed by data science applications are well understood where they overlap with traditional software development; see how they can be solved. Read more


Research


A Unified Approach to Interpreting Model Predictions

Shapley values help us think through how to attribute cause in black-box models. This foundational paper walks through the SHAP framework and shows how it efficiently generalizes other feature-importance techniques using game theory. Read more
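For intuition, the Shapley value itself can be computed exactly on a toy cooperative game by averaging each player's marginal contribution over every ordering of the players — a cost that grows factorially, which is exactly why SHAP's efficient approximations matter. A minimal sketch (the function names are ours, not from the paper):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a small cooperative game.

    players: list of hashable player ids.
    value: function mapping a frozenset of players to a payoff.
    Averages each player's marginal contribution over all n!
    orderings, so this is only feasible for a handful of players.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: total / factorial(n) for p, total in phi.items()}
```

On an additive game the Shapley values recover each player's individual weight, and by the efficiency axiom they always sum to the value of the full coalition.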

AI Automation for AI Fairness

A step-by-step walk-through of how to integrate AI Fairness 360 and streamline hyperparameter selection through automated tuning. Read more
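One fairness metric commonly used in this space, disparate impact, is easy to state in plain Python. This is a standalone sketch of the metric's definition, not AI Fairness 360's API; the input format is our own:

```python
def disparate_impact(outcomes):
    """Disparate impact ratio over binary outcomes.

    outcomes: iterable of (is_privileged, got_favorable) boolean pairs.
    Returns P(favorable | unprivileged) / P(favorable | privileged).
    Values far below 1.0 flag potential bias against the unprivileged
    group; 0.8 is a commonly cited threshold.
    """
    priv = [fav for is_priv, fav in outcomes if is_priv]
    unpriv = [fav for is_priv, fav in outcomes if not is_priv]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv
```

An automated tuner can then treat a metric like this as an additional objective alongside accuracy when selecting hyperparameters.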


Events

See all upcoming community events here.
Nov 12–13 | Live Event | Columbia University, New York
Dec 3–5 | Live Event | California State University, Los Angeles

#In-the-know
#In-the-know-feature
#newsletter