No, Machine Learning Does Not Make Understanding Obsolete
Plus, startups doing causal inference, and the talent black hole that is top-tier cities
AltDeep is a newsletter focused on microtrend-spotting in data and decision science, machine learning, and AI. It is authored by Robert Osazuwa Ness, a Ph.D. machine learning engineer at an AI startup and adjunct professor at Northeastern University.
Negative Examples: What you don’t need to read and why.
Dose of Philosophy: Or, A Primer in Sounding Not Stupid at Dinner Parties: Kuhn’s scientific revolutions
Ear to the Ground: Top-tier cities continue to inhale more tech talent, Insight opens a new program focused on merging data science with infosec and privacy
Startups You Could Have Started: Clearbrain wants stats, experimental design, and causal inference
Data-Sciencing for Fun & Profit: How to apply data science to fantasy sports
The Essential Read: My humble effort to counter bullshit AI journalism, with historic examples
Trending items that I’m not linking to and why
There is a hideous article trending on how machine learning means humans should take their hands off the wheel and let AI do all the decision-making. Rather than read it, you can read my dissection below.
Datacamp is currently embroiled in a public sexual misconduct scandal. From what I can determine, the CEO danced inappropriately and made uninvited physical contact with another employee. Many instructors have signed a protest letter and are encouraging people not to take the courses they instruct. I may link once the dust settles and there is a clearer picture of events.
If you want an order-of-magnitude increase in Twitter followers, do some quick text mining of the Mueller report and post it online. Or don’t.
Dose of Philosophy: Or, A Primer in Sounding Not Stupid at Dinner Parties
Thomas Kuhn and why prediction isn’t everything
Philosophy has been tackling the key problems of machine learning and data science for centuries. This section makes sure you are up to speed on the philosophical roots of your craft.
In his classic work, The Structure of Scientific Revolutions, Thomas Kuhn argues that what he calls “normal science” takes place within the context of particular paradigms, which provide the rules and standards for any particular scientific discipline. These paradigms enable scientists to develop productive strategies for research, understand how to construct questions, and how to evaluate and interpret results. For example, when trying to come up with a new ML algorithm, we generally have a good idea, based on the current ML research paradigm, what the appropriate way to evaluate it will be, and what other methods to benchmark it against.
He then claims that science undergoes periodic “revolutions” that displace the dominant paradigm in a field in favor of a new one. Each revolution is preceded by a crisis, during which it becomes clear that the paradigm cannot be maintained due to a growing number of intractable problems within the domain. The crisis ends when researchers in that domain shift their allegiance to the new paradigm.
So what? Arguably, the most sought-after performance metric in machine learning is predictive accuracy. In Kuhn’s example of the shift from Ptolemy’s geocentric model of the solar system to Copernicus’s heliocentric one, he points out that Copernican predictions of the movements of objects in the night sky were no more accurate than Ptolemaic predictions. Yet the shift to the Copernican model paved the way for Kepler’s more accurate model, and later, nice things like Google Maps. More on this below.
Ear to the ground:
Miscellaneous happenings that ought to be on your radar.
Bay Area, NYC, Boston, Austin increase their share of tech jobs
If you’re reading this, you’re probably not in Detroit
Despite talk of replicating the tech boom in second-tier cities like Pittsburgh and Detroit, the work continues to concentrate in the top-tier cities. Second-tier cities remain second-tier. The top jobs driving this change are in data science.
Austin is an interesting outlier because its coolness is what attracted the tech.
So what? I take this as a signal that data science entrepreneurship might become less accessible: big tech companies hoard the data and raise barriers to entry, and VC funding seems like the only viable path given the cost of operations and talent in these hubs.
Insight opens new program
Insight Data Science is launching a new program to train security engineers, data privacy engineers, and advanced security analysts. The curriculum ties together data science, data engineering, and machine learning. From the white paper:
“A new field of data ethics is forming amongst industry leaders, with the goal of actively ensuring algorithms don’t discriminate against protected classes or have deleterious effects on society.” — Methods for addressing algorithmic bias were a key theme at NeurIPS last year. Major data breaches and privacy violations, like the Facebook and Cambridge Analytica scandal, have been omnipresent in the news.
“While there’s a huge demand for security engineers and analysts with this [data ethics] skill set, there are few with the required knowledge in the security and data privacy domain. The skills gap is even sharper in the advanced security roles that require knowledge of data, cloud infrastructure, and machine learning skills.” — This pairing of security and data privacy as a single discipline is new to me. Computational methods for data privacy, such as differential privacy, get pretty deep into theoretical domains like cryptography and information theory. Your average pentester or data scientist does not have experience in these domains.
More on the convergence of ML and infosec: anomaly detection is clearly a dominant application of machine learning in industry, particularly in finance and social networks. Both researchers and black-hat hackers have taken an interest in adversarial attacks on in-production deep neural networks. The hacker conference DEF CON featured an AI Village (a “village” is a series of co-located events and talks centered on one topic in infosec and privacy) for the first time.
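To make the anomaly-detection point concrete, here is a minimal sketch of one classic approach: flagging points by a robust z-score built from the median and median absolute deviation (MAD) rather than the mean and standard deviation, so an extreme outlier can’t mask itself by inflating the spread. The transaction amounts are made-up illustrative data, not from any real system.

```python
import statistics

def mad_anomalies(values, threshold=3.5):
    """Return indices of points whose robust (median/MAD) z-score exceeds threshold.

    Using the median and MAD instead of the mean and standard deviation keeps
    a single huge outlier from inflating the spread and hiding itself.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 rescales MAD so the score is comparable to a standard z-score
    # under normality.
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Made-up transaction amounts with one obvious outlier at index 5.
amounts = [12.0, 11.5, 13.2, 12.8, 11.9, 250.0, 12.4, 13.0]
print(mad_anomalies(amounts))  # -> [5]
```

Production systems in finance use far richer models, but the core idea is the same: score how far each observation sits from “normal” and alert past a threshold.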
More freedom? The data science and ML engineering career trajectory is largely controlled by a few big-name tech companies. When people hit obstacles on these established paths, for example because of cultural attitudes about their age or gender, or because of geographic or work-life balance choices, there is generally not much room to pivot. In contrast, the infosec community has a reputation as a place for misfits, where people with specialized skill sets can find interesting and unconventional niches. Pursuing this skill set could be a great way to attain more freedom in one’s lifestyle.
AI Startups You Could Have Started
A weekly mini-profile of AI startups with (shocker) an actual revenue model.
“Hi, I’m Bilal, cofounder of ClearBrain. ClearBrain helps you automatically build predictive models for which of your users are most likely to convert or churn in your app. Think AmazonML for marketing analysts.”
I recently came across a recruitment posting from the company:
Looking for data scientists with experience working on high dimensional data with Spark and Hadoop
Looking for experience in algorithmic approaches to “identity resolution across disparate schemas”
Looking for statistical inference, causal inference, and experimental design
I’m noticing a trend: AI startups that don’t call themselves AI startups often focus on digital marketers as customers and care a lot about stats, causal inference, and experimental design.
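The kind of churn model ClearBrain describes can be sketched minimally as a logistic regression, here trained by plain batch gradient descent. Everything below is hypothetical: the feature choices, the normalization, and the toy labels are my own illustration, not ClearBrain’s actual method.

```python
import math

def train_logreg(X, y, lr=1.0, epochs=1000):
    """Fit logistic regression weights (w, b) by batch gradient descent."""
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * n, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted churn probability
            err = p - yi                     # gradient of log-loss w.r.t. z
            for j in range(n):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict_churn_prob(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical normalized features: [days_since_last_login / 30, sessions_per_week / 10]
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [1.0, 0.2], [0.3, 0.7], [0.8, 0.2]]
y = [0, 0, 1, 1, 0, 1]  # 1 = the user churned
w, b = train_logreg(X, y)
print(predict_churn_prob(w, b, [0.95, 0.1]))  # inactive user: high churn probability
```

The interesting part of the job posting is what comes after a model like this: causal inference and experimental design are what tell you whether intervening on the flagged users actually changes their behavior, rather than just predicting it.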
Data-Sciencing for Fun & Profit:
Data-Sciencing for Fun & Profit examines emerging trends on the web that could be exploited for profit using quantitative techniques.
Lynda has a class on data science for gaming and fantasy sports
Get a fast-paced, fun, and non-technical overview of how the gambling and fantasy sports industries structure their offerings. Curt Frye uses data science to examine how casinos make money, design their games, offer comps to customers, and expand their revenue streams through dining, golf, entertainment, and other attractions.
Essential Read: No, Machine Learning Does Not Make Understanding Obsolete
There is a trending article called Machine Learning Widens the Gap Between Knowledge and Understanding. The main argument is that we can’t understand complex systems, but AI algorithms can predict them well, so we should let AI make all the decisions about these complex systems for us. The author (who has a forthcoming book on the topic) follows this up with cringe-worthy speculation that this will lead to the next stage in human evolution. I drafted a response. My main points are:
Prediction without understanding cannot advance science
Predictive accuracy is not the only performance measure we care about
When our models predict but don’t understand, we make bad decisions
The solution is not to give up on understanding, but to build AI that understands
I invite you to read the full post.