Hey Elon, maybe we are the AI...
plus weight agnostic neural nets, assassination markets, and Chinese walls in Silicon Valley
AltDeep is a newsletter focused on microtrend-spotting in data and decision science, machine learning, and AI. It is authored by Robert Osazuwa Ness, a PhD machine learning engineer at an AI startup and adjunct professor at Northeastern University.
Overview:
Announcement: Switching back to Sunday publishing, as Thursday isn’t currently feasible
Ear to the Ground: Weight agnostic neural nets and a Chinese wall in Silicon Valley
AI Long Tail: Assassination markets
Data-Sciencing for Fun & Profit: AI plays Cards Against Humanity
The Tao of Data Science: Hey Elon, maybe we are the AI...
Ear to the Ground
Curated posts aimed at spotting microtrends in data science, machine learning, and AI.
Silicon Valley walls off Chinese VC money
Recent political tension with China is leading to an expulsion of Chinese money from Silicon Valley.
Silicon Valley startup Pilot AI Labs Inc. signed a Chinese-backed venture-capital firm as its first big investor in 2015. By last summer, Pilot AI wanted it gone.
Since last year, Chinese venture firms have been dialing back U.S. investments, shutting down U.S. offices, and finding ways to structure deals that avoid regulators.
U.S. venture firms are dumping their Chinese limited partners. U.S. startups that have taken Chinese money are keeping the investments quiet or trying to push their Chinese investors out to avoid scrutiny.
The FBI is getting more active in preventing IP transfer to China through U.S. startups, whether the U.S. startups like it or not.
Washington’s lack of understanding of tech means it can’t make a convincing case for why your AI startup taking a six-figure seed investment from a Chinese investor constitutes a national security threat.
Silicon Valley, however, doesn’t understand high-stakes geopolitics, or China’s distinctly non-American strategy of integrating technology industrial policy, corporate espionage, and military development.
Further reading:
Chinese Cash That Powered Silicon Valley Is Suddenly Toxic — WSJ
“Has anyone noticed a lot of ML research into facial recognition of Uyghur people lately?” — Altdeep
Weight agnostic neural networks
A new neural net paper made a splash last week.
The paper investigates to what extent the prediction performance of neural nets is due to the architecture (the structure of the network) as opposed to the weight parameters. It builds on the observation that some structures appear better suited to certain tasks than others (e.g., convolutions for computer vision).
The authors take the novel strategy of assigning a single shared, randomly sampled value to every weight in the network and then varying the structure, searching for architectures that do better at a given task. They show that such networks can achieve better-than-chance supervised prediction and can also perform some RL tasks.
They provide an excellent interactive web-based explanation, linked below.
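To make the setup concrete, here is a toy sketch of the evaluation step (my own illustration, not the authors' code): the architecture is represented as binary connection masks, every existing connection carries the same shared weight, and an architecture is scored by how it performs across several values of that shared weight. The paper's actual topology search (NEAT-style operators) is omitted.

```python
# Toy sketch of weight-agnostic evaluation (illustrative only, not the paper's code).
# An "architecture" is a list of binary connection masks; every existing
# connection carries the same shared weight value. A good weight-agnostic
# architecture scores well across many values of that single shared weight.
import numpy as np

def forward(x, masks, shared_w):
    """Feedforward pass where every present connection has weight `shared_w`."""
    h = x
    for mask in masks:              # mask: (n_in, n_out) array of 0s and 1s
        h = np.tanh(h @ (shared_w * mask))
    return h

def weight_agnostic_score(masks, X, y, weight_samples=(-2.0, -1.0, 0.5, 1.0, 2.0)):
    """Mean accuracy of the architecture across several shared-weight values."""
    accs = []
    for w in weight_samples:
        preds = forward(X, masks, w).argmax(axis=1)
        accs.append((preds == y).mean())
    return float(np.mean(accs))

# Toy usage: random connectivity for a 2-feature, 3-class problem.
rng = np.random.default_rng(0)
masks = [rng.integers(0, 2, size=(2, 4)), rng.integers(0, 2, size=(4, 3))]
X, y = rng.normal(size=(100, 2)), rng.integers(0, 3, size=100)
print(weight_agnostic_score(masks, X, y))
```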
Further Reading:
Growing opportunity in generating, understanding, and detecting deepfakes
"A deepfake could cause a riot; it could tip an election; it could crash an IPO. And if it goes viral, [social media companies] are responsible," says Danielle Citron, a UMD law professor who has written extensively about deepfakes.
Yet social media platforms have been loath to pass judgment on a clip's veracity on their own.
When the “cheap fake” Nancy Pelosi video (no AI, just made her look drunk by slowing playback speed) went viral, Facebook did nothing to slow its spread.
This week, a deepfake video featuring Facebook’s own CEO still failed to elicit any response from the company.
Opportunity for deepfake detection. At NeurIPS last year I saw a few interesting posters (such as this one) on detecting adversarial examples in computer vision (false images designed to fool a classifier). In the run-up to 2020, journalists are going to need experts in detecting deepfakes, democracy’s adversarial examples. Social media companies cannot be relied on to curb the spread of these on their own platforms.
Further Reading:
Some deepfake detection efforts
Li, Yuezun, et al. "In Ictu Oculi: Exposing AI generated fake face videos by detecting eye blinking." arXiv preprint arXiv:1806.02877 (2018).
Truepic raises $8M to expose Deepfakes, verify photos for Reddit — Techcrunch
AI Long Tail
Pre-hype AI microtrends
This is only about a year old but it only recently came to my attention.
There are markets on Augur that provide incentives for assassinations (I am intentionally not linking).
Given a market that says "So-and-so will be killed before October 2020; killed means killed, not death from natural causes or accidents," one can sell a large number of shares in that outcome. If the outcome occurs, the seller must pay out on every share sold. This is equivalent to posting a bounty: someone could buy shares in the outcome, commit the murder, and collect the payout when the contract resolves. It is an Ethereum contract on someone's head.
Background. Augur is a decentralized prediction market protocol where individuals can bet on the outcomes of real-world events. It is on the radar of this newsletter because prediction markets are essentially prediction algorithms that generate predictions by crowdsourcing.
Prediction markets are nothing new, but Augur's decentralized nature is new. It is based on the cryptocurrency Ethereum, which uses a blockchain, and thus has no central authority. No central authority means there is no one to censor offensive prediction markets.
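To spell out why a market price amounts to a crowd prediction, here is a toy sketch (my own numbers, not Augur's contract logic) of binary-outcome share economics: a share pays one unit if the event happens, so traders whose beliefs differ from the current price have an incentive to trade until the price tracks the crowd's probability estimate. Selling shares is the mirror image, which is exactly what lets the markets above function as bounties.

```python
# Toy sketch of binary prediction-market economics (illustrative, not Augur's code).
# A share of "event happens" pays 1 unit if it happens and 0 otherwise, so the
# market price of a share can be read as the crowd's probability estimate.

def expected_profit_per_share(price, believed_prob, payout=1.0):
    """Expected profit of BUYING one share at `price`, given your belief."""
    return believed_prob * payout - price

# Shares trade at 0.30 but you think the event has a 45% chance:
print(expected_profit_per_share(price=0.30, believed_prob=0.45))  # ~0.15 -> buy
# Once the price equals the crowd's probability, expected profit is zero:
print(expected_profit_per_share(price=0.45, believed_prob=0.45))  # 0.0
# Selling is the mirror image: the seller pockets `price` per share if the
# event does NOT happen, and owes the payout if it does.
```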
References
Predictions.Global — Searchable directory of Augur markets
Data Sciencing for Fun and Profit
Data-Sciencing for Fun & Profit examines emerging trends on the web that could be exploited for profit using quantitative techniques.
Experimentation with GPT-2: AI-generated Cards Against Humanity
An engineer created a Cards Against Humanity app that generates cards using OpenAI’s public GPT-2 natural language model.
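For readers who want to try this themselves, here is a minimal sketch of the general approach using the public GPT-2 weights via the Hugging Face transformers library (my own illustration, not the app's actual code; the prompt is a guess, and fine-tuning on real card text would work better than prompting the base model).

```python
# Minimal sketch: sample card-like text from the public GPT-2 model.
# Assumes the Hugging Face `transformers` package; the prompt is a guess,
# and the app in question likely fine-tuned on real card text instead.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Cards Against Humanity white card: "
outputs = generator(
    prompt,
    max_length=30,           # keep generations card-length
    num_return_sequences=5,
    do_sample=True,          # sample for variety instead of greedy decoding
    top_k=50,
)
for out in outputs:
    print(out["generated_text"][len(prompt):].strip())
```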
Tao of Data Science
The Tao of Data Science is a column I write that explores how centuries of philosophers have been tackling the key problems of machine learning and data science. I include snippets and links to this column in this newsletter.

Hey Elon, maybe we are the AI...
I am a fan of machine learning models that work by simulation.

This type of modeling uses mathematical relations that describe a system (think Ohm's law, or the law of supply and demand) and then models that system's behavior by actually simulating it according to those mathematical relationships. We get these mathematical relations from theories of how the world works.
The simulation approach to machine learning gets far less attention than others for a few reasons. Firstly, simulation models tend to be hyper-specific to a problem and are hard to generalize in a way that lets the approach carry over to different problems; contrast this with deep learning, where a handful of canonical architectures work for a broad class of problems. Secondly, they are hard to train on real data, because it is generally difficult to construct likelihood functions or other loss functions that quantify how well a simulation instance performs as a hypothesis for the process that generated the data, especially in high-dimensional settings. That said, there is active research on solving these problems (if you are interested, see the references on inference for simulation-based models below).
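As a concrete illustration of the likelihood problem, here is a minimal sketch of one standard workaround, approximate Bayesian computation (ABC) by rejection: rather than evaluating a likelihood, you run the simulator and keep the parameter values whose simulated output lands close to the observed data. The simulator here is a toy stand-in; choosing good summary statistics and distances is the hard part in practice.

```python
# Minimal sketch of likelihood-free inference via ABC rejection sampling.
# The simulator is a toy stand-in; in a real application it would be the
# domain simulation (circuits, markets, cell populations, ...).
import numpy as np

rng = np.random.default_rng(42)

def simulator(theta, n=200):
    """Toy simulator: draws data from Normal(theta, 1)."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(data):
    """Summary statistic standing in for an intractable likelihood."""
    return data.mean()

# "Observed" data generated with a parameter we pretend not to know (1.5).
obs_stat = summary(simulator(1.5))

# ABC rejection: draw theta from the prior, simulate, and keep draws whose
# simulated summary lands within `tol` of the observed summary.
tol, accepted = 0.1, []
for _ in range(5000):
    theta = rng.uniform(-5, 5)       # prior over the unknown parameter
    if abs(summary(simulator(theta)) - obs_stat) < tol:
        accepted.append(theta)

print(f"approx. posterior mean: {np.mean(accepted):.2f} ({len(accepted)} accepted draws)")
```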
Simulation machine learning and the simulation hypothesis
The simulation hypothesis is a thought experiment that philosophers have iterated on since antiquity, most recently by the philosopher Nick Bostrom, and Elon Musk has popularized the idea. The hypothesis supposes that the denizens of a distant post-human future will want to run computational simulations of their ancestors, i.e., modern-day humans. The computing power at these post-humans' disposal will be so immense that they will be able to simulate conscious minds as well as a virtual reality fine-grained enough to convince those minds that it is real. If they run such simulations, they will run many, and therefore the vast majority of minds like ours would belong not to the original human race but to people simulated by its advanced descendants.
My problem with this hypothesis is that it presupposes computing technology that doesn't exist; one cannot merely use Moore's law or some similar rule to extrapolate from today's computing technology to future tech sophisticated enough to simulate millions of human minds.
However, we can consider extending this hypothesis with the question of why a post-human would run such a simulation. Sure, it could be to study the human condition and how we spend our lives fumbling around for meaning.
Alternatively, the simulation we live in could be part of the training algorithm or a prediction algorithm of some post-human AI. I imagine the difficulties in simulation-based machine learning mentioned above might be solved by then.
Bostrom and Musk have both worried about the threat of human ruin at the hands of a hostile future AI. However, if the simulation hypothesis these guys believe in is true, then we are probably part of that AI.
Post-humans, if you are reading this, spoiler-alert, the answer is 42.
References
Elon Musk on how we might be living in a simulation — The Guardian
Excellent explainer video on the simulation hypothesis — Kurzgesagt
References for inference in simulation-based machine learning models
Tran, D., Ranganath, R., & Blei, D. (2017). Hierarchical implicit models and likelihood-free variational inference. Advances in Neural Information Processing Systems.
Wilkinson, D. J. (2011). Stochastic modelling for systems biology. CRC Press.
Diggle, P. J., & Gratton, R. J. (1984). Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society, Series B.
Hartig, F., Calabrese, J. M., Reineking, B., Wiegand, T., & Huth, A. (2011). Statistical inference for stochastic simulation models - theory and application. Ecology Letters, 14(8), 816–827.