How to make an AI Solomon

plus ICML takeaways, building a deep style-transfer app with Flask and AWS, and more

AltDeep is a newsletter focused on microtrend-spotting in data and decision science, machine learning, and AI. It is authored by Robert Osazuwa Ness, a Ph.D. machine learning engineer at an AI startup and adjunct professor at Northeastern University.

Overview:

  • Ear to the Ground: Reinforcement learning at ICML 2019, Big tech poaches animal neuroscientists

  • AI Long Tail: Conversational marketing is the new black

  • Data-Sciencing for Fun & Profit: Guide to building a style-transfer deep net into an app using AWS and Flask

  • The Tao of Data Science: How to make an AI Solomon

Ear to the Ground

Curated posts aimed at spotting microtrends in data science, machine learning, and AI.

Reinforcement learning a key theme at ICML 2019

The central RL theme was off-policy evaluation and off-policy learning: many anticipate that deployed RL applications will generate large amounts of data from sub-optimal policies. Exploration was also a hot topic, and it became clear that more standardized approaches to evaluation in RL are needed.
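Off-policy evaluation can be illustrated with the simplest estimator in that family, inverse-propensity weighting. The sketch below is my own toy example (a one-step, bandit-style setting with made-up policies and payoffs), not code from any ICML paper:

```python
import random

random.seed(0)

def evaluate_off_policy(logged, target_probs):
    """Estimate a target policy's expected reward from logs collected
    under a different (behavior) policy, via inverse-propensity
    weighting: average of r * pi_target(a) / pi_behavior(a)."""
    total = 0.0
    for action, reward, behavior_prob in logged:
        total += reward * target_probs[action] / behavior_prob
    return total / len(logged)

# Logged data from a uniform-random behavior policy over two actions;
# action 1 has the higher (hypothetical) payoff probability.
true_reward = {0: 0.2, 1: 0.8}
logged = []
for _ in range(100_000):
    a = random.choice([0, 1])
    r = 1.0 if random.random() < true_reward[a] else 0.0
    logged.append((a, r, 0.5))  # behavior policy's probability of the chosen action

# Target policy plays action 1 with probability 0.9; its true value is
# 0.1 * 0.2 + 0.9 * 0.8 = 0.74, which the estimate should approach.
target = {0: 0.1, 1: 0.9}
est = evaluate_off_policy(logged, target)
print(round(est, 2))
```

The appeal is exactly the one raised at the conference: the target policy gets evaluated without ever being deployed, using only data logged under a sub-optimal policy.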

Promising applications for RL:

  • Personalization of news stories, advice, layouts, healthcare, etc. Recommendation systems, nudges/influencing people’s behaviors in helpful (or insidious) ways. This contrasts with five years ago when the main application was robotics and games.

  • Agents that can act on behalf of users, lots of problems though (multi-agent, preference elicitation).

  • Contextual bandits as a replacement for standard A/B testing procedures.
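To make the last bullet concrete, here is a minimal sketch of how a contextual bandit differs from a fixed 50/50 A/B split. The epsilon-greedy strategy, the context names, and the click-through rates are all hypothetical:

```python
import random

random.seed(1)

def epsilon_greedy_bandit(contexts, true_ctr, epsilon=0.1, rounds=50_000):
    """Epsilon-greedy contextual bandit over two page variants.
    Unlike a fixed A/B split, it shifts traffic toward whichever
    variant performs best for each context as data comes in."""
    # counts[context][arm] = [clicks, impressions]
    counts = {c: {a: [0, 0] for a in (0, 1)} for c in contexts}
    clicks = 0
    for _ in range(rounds):
        c = random.choice(contexts)
        if random.random() < epsilon:
            a = random.choice([0, 1])  # explore uniformly
        else:
            # exploit: pick the arm with the best empirical click rate
            def rate(arm):
                k, n = counts[c][arm]
                return k / n if n else 0.5
            a = max((0, 1), key=rate)
        r = 1 if random.random() < true_ctr[(c, a)] else 0
        counts[c][a][0] += r
        counts[c][a][1] += 1
        clicks += r
    return clicks / rounds

# Hypothetical click-through rates: variant 1 wins on mobile, variant 0
# wins on desktop -- a pattern a single pooled A/B test averages away.
true_ctr = {("mobile", 0): 0.02, ("mobile", 1): 0.06,
            ("desktop", 0): 0.05, ("desktop", 1): 0.01}
ctr = epsilon_greedy_bandit(["mobile", "desktop"], true_ctr)
print(round(ctr, 3))
```

A uniform 50/50 split would earn a click rate of 0.035 on these (made-up) numbers; the bandit converges toward the best per-context variant and comfortably beats that.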


Big tech poaches animal neuroscientists for AI R&D

Neuroscientists studying animals such as zebrafish and mice are being poached into big tech to drive AI research.

Learning how humans learn tasks, perceive the world, and process information is constrained by the limited degree to which you can experiment on humans. With animals, on the other hand, you can run controlled experiments, plugging electrodes directly into the brain, or, in the case of zebrafish, seeing exactly what the neurons are up to (this was the topic of one of my favorite papers at NeurIPS last year).

If you have a self-driving car that can learn as well as a rat or process scenery as well as a bird, you would be at the cutting edge.


AI Long Tail

Pre-hype AI microtrends

Conversational marketing is the new black

Though I'm no marketing expert, my understanding of the term conversational marketing is that it is marketing that focuses on feedback from customers, primarily through chatbots and social media, and is driven by quantifiable KPIs like online conversions, engagement, growth of the customer base, and growth in revenue.

Though I'm clearly biased due to my day job, I perceive a growing amount of signal that conversational marketing is where B2B AI startups can make a dent.

The itch being scratched here is the need for analytics tools. A VP of digital marketing sees data from individual customers and prospects coming in from multiple channels, much of it unstructured natural language. Just having a dashboard that visualizes that data is an immense value add; helping people make decisions based on it is a harder problem. From an ML perspective, the path from algorithm platform to commercial product is a winding one. Even if one avoided the complexity of engineering algorithms and stuck to NLTK or pretrained deep learning models downloaded from GitHub, product-market fit remains a challenge. But marketers seem interested.

Further, incumbents such as Salesforce are acquiring: Salesforce just acquired Bonobo AI. From what I can tell, Bonobo's product only surfaces insights -- for example, which customers have especially worrisome complaints, or which product issues are trending. In my view this is not AI; it is bread-and-butter data science and data engineering with some nice dashboarding -- and that's great, if that is all that's required to get started.

Data-Sciencing for Fun & Profit

Data-Sciencing for Fun & Profit examines emerging trends on the web that could be exploited for profit using quantitative techniques.

Guide to building a style-transfer deep net into an app using AWS and Flask

This Medium post by Puneet Saini summarizes how to build a Flask app that uses a neural net deployed to AWS to do style-transfer. It is quite well done.

Tao of Data Science

The Tao of Data Science is a column I write that explores how centuries of philosophers have been tackling the key problems of machine learning and data science. I include snippets and links to this column in this newsletter.

How to make an AI Solomon


I’m teaching a course on causality modeling in machine learning and trying to explain two probabilistic notions that turn out to be crucial in building causal models in fields like reinforcement learning. Here is a first attempt.

Consider these two sentences:

  1. The user would not have unsubscribed if only the customer service agent had not offended him.

  2. The user would not have unsubscribed if his Internet service had not been working.

Both are true. Both are causes.

However, most people, if asked why the user had unsubscribed, would say it was because the customer service agent had offended the user.

In contrast, most statistical ML algorithms, looking at activity logs of users and agents, and seeing a nearly perfect correlation between Internet connectivity and the unsubscription rate, and a nearly perfect correlation between agents cursing out their users and unsubscription rates, might weight these two events equally in terms of assigning blame.

Understanding the difference becomes easier with the notion of counterfactual probability of necessity and sufficiency.

Probability of necessity:

(1) Given that the user unsubscribed and the agent offended him, what is the probability that the user would not have unsubscribed had the agent not offended him?

(2) Given that the user unsubscribed and his Internet service was working, what is the probability that the user would not have unsubscribed had the Internet not been working?

Probability of sufficiency:

(3) Given a user who didn’t unsubscribe and wasn’t offended by an agent, what is the probability that this user would have unsubscribed if offended by the agent?

(4) Given a user who didn’t unsubscribe and whose Internet service wasn’t working, what is the probability that this user would have unsubscribed had the Internet been working?

These questions reveal the asymmetry between the two causes: (2) is probably 100% while (1) is less than 100%, yet (3) is probably much greater than (4). A working connection is almost always necessary but rarely sufficient for unsubscribing; an offensive agent is often sufficient but not always necessary. That asymmetry is what points blame at the agent.

It turns out these probabilities can be estimated from data, though doing so requires a causal model. That means we could use them to teach machines to make judgments and assign blame based on a causal understanding of the world.
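These quantities can be made concrete with a toy structural causal model of the unsubscribe story. Everything below — the exogenous probabilities and the structural equation — is my own illustrative assumption; because we simulate the exogenous "background" variables directly, the counterfactuals can be evaluated exactly:

```python
import random

random.seed(42)

def sample_world():
    """Exogenous background variables of a toy structural causal model."""
    return {
        "internet_up": random.random() < 0.95,  # Internet service working
        "agent_rude": random.random() < 0.10,   # agent offends the user
        "touchy": random.random() < 0.30,       # user was leaving anyway
    }

def unsubscribes(u, *, rude=None, internet=None):
    """Structural equation: unsubscribing requires a working connection,
    and happens if the agent was rude or the user was leaving anyway.
    Keyword arguments override ('intervene on') the sampled values."""
    rude = u["agent_rude"] if rude is None else rude
    internet = u["internet_up"] if internet is None else internet
    return internet and (rude or u["touchy"])

def prob_necessity(cause, n=200_000):
    """P(no unsubscribe had the cause been absent | cause present, unsubscribed)."""
    hits = total = 0
    for _ in range(n):
        u = sample_world()
        if u[cause] and unsubscribes(u):
            total += 1
            kw = {"rude": False} if cause == "agent_rude" else {"internet": False}
            hits += not unsubscribes(u, **kw)
    return hits / total

def prob_sufficiency(cause, n=200_000):
    """P(unsubscribe had the cause been imposed | cause absent, no unsubscribe)."""
    hits = total = 0
    for _ in range(n):
        u = sample_world()
        if not u[cause] and not unsubscribes(u):
            total += 1
            kw = {"rude": True} if cause == "agent_rude" else {"internet": True}
            hits += unsubscribes(u, **kw)
    return hits / total

pn_rude, pn_net = prob_necessity("agent_rude"), prob_necessity("internet_up")
ps_rude, ps_net = prob_sufficiency("agent_rude"), prob_sufficiency("internet_up")
print("PN:", round(pn_rude, 2), round(pn_net, 2))
print("PS:", round(ps_rude, 2), round(ps_net, 2))
```

Under these made-up numbers the asymmetry from questions (1)–(4) falls out of the simulation: the Internet's probability of necessity is exactly 1 while the agent's is lower, yet the agent's probability of sufficiency is far higher than the Internet's — which is why we blame the agent.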