Rant on the coming CS job apocalypse
plus neighborhood watch 2.0, and cucumber farmers show us how to DIY AI
May 6, 2019
AltDeep is a newsletter focused on microtrend-spotting in data and decision science, machine learning, and AI. It is authored by Robert Osazuwa Ness, a Ph.D. machine learning engineer at an AI startup and adjunct professor at Northeastern University.
Dose of Philosophy: Or, A Primer in Sounding Not Stupid at Dinner Parties: A quick note on Gödel’s theorems and their implications for AI
Ear to the Ground: Amazon app helps neighbors surveil neighbors, learn more about offline policy learning, Jupytext, and OpenAI’s potentially dangerous NLP
AI For the Rest of Us: Japanese cucumber farmers
Data-Sciencing for Fun & Profit: Deep learning has solved death metal
The Essential Read: A second-hand rant on why a CS degree will be the next art history degree
Trending items worth knowing about but not really worth reading.
/r/machinelearning is excited about an MIT paper presenting computational models of the visual cortex in animal models, in which individual neurons can be precisely controlled. The team showed that the information gained from the computational model enabled them to synthesize images that strongly activated specific brain neurons of their choosing.
Neural networks for image data that don’t have bias terms have the interesting property that f(ax) = a f(x) for any positive scalar a (this holds with piecewise-linear activations such as ReLU). This means these models should still work on new data whose lighting conditions differ from the training data, with minimal effect on predictions.
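To see this concretely, here is a minimal numpy sketch of a made-up two-layer ReLU network with no bias terms; the weights and input are arbitrary random values, and the scalar a stands in for something like a global brightness change:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up two-layer network with ReLU activation and no bias terms.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.standard_normal(4)  # stand-in for image features
a = 2.5                     # e.g. a global change in brightness

# Scaling the input by a positive constant scales the output by exactly
# the same constant: max(a*W1@x, 0) = a*max(W1@x, 0) when a > 0.
assert np.allclose(f(a * x), a * f(x))
```

With bias terms, the ReLU argument becomes W1 @ x + b, the scaling no longer passes through, and the identity breaks.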
There is a petition to abandon the United States as a venue for machine learning conferences because of visa issues. This reminds me of a scene at the BlackInAI workshop at NeurIPS 2018, where half of the presenters were denied visas and had to give their poster presentations over video.
Dose of Philosophy: Or, A Primer in Sounding Not Stupid at Dinner Parties
A quick thought on Gödel’s theorems
Due to the length of this issue, I am going to save a write-up on Gödel’s theorems and their relevance to AI for next week. But the gist is that some believe Gödel’s theorems show that our minds do not work like computers, so pursuing general AI as software that runs on computers might be folly.
Ear to the Ground:
Miscellaneous happenings that ought to be on your radar.
Amazon releases a neighborhood watch app that raises fears of discrimination and privacy violation
Privacy advocates worry that an Amazon-owned mobile app, used by owners of its Ring security cameras to upload videos for neighbors to see, could entrench racial discrimination and violate people's privacy.
So what: The app, called Neighbors, features a feed where users can post videos and photos from their cameras, file reports of activity they think is suspicious and read crime reports from the app's “News Team.” Ring as recently as a few weeks ago was hiring an editor for this “News Team”. This is prompting concerns that Amazon would stoke community fears to sell security systems.
Other items worthy of note:
Amazon Neighbors team is looking to partner with law enforcement
A similar app (with no cameras) called Nextdoor has had trouble dealing with racism on its platform. Vice reports on how this problem might be worse with Ring and Neighbors.
The ubiquitous nature of the cameras means they are going to capture a great deal of constitutionally protected activity
Freedom of information request for anything related to the Richmond PD and the Neighbors app
You should learn more about offline policy learning
Reinforcement learning is the darling of the research community and the media, with splashy stories about beating experts at tabletop games and video games. There hasn’t been much adoption in the commercial space yet.
I get the sense, through conversations I’ve been having, that offline policy evaluation and offline, off-policy learning are the industry problems to solve. Most systems one needs to model don’t have rapid transitions from old states to new states, nor do they provide instant reward feedback. The alternative in these cases is to learn directly from fixed logs of the system’s behavior. If logs from an in-production policy are available, they can be used in the training of a replacement policy. This setup is an offline, off-policy training regime where the policy needs to be trained from batches of logged data.
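As a toy illustration of this setup, here is a minimal sketch of inverse propensity scoring (IPS), the simplest offline policy evaluation estimator. Everything here is made-up: a two-action bandit, a logging policy, logged rewards, and a candidate replacement policy evaluated purely from the logs:

```python
import random

random.seed(0)

# Made-up logging policy for a two-action bandit: it picks action 0
# with probability 0.8 and action 1 with probability 0.2.
logs = []
for _ in range(100_000):
    a = 0 if random.random() < 0.8 else 1
    r = random.gauss(1.0, 0.1) if a == 0 else random.gauss(2.0, 0.1)
    p_log = 0.8 if a == 0 else 0.2
    logs.append((a, r, p_log))

def target_prob(a):
    # Candidate replacement policy we want to evaluate offline:
    # it prefers the higher-reward action 1.
    return 0.3 if a == 0 else 0.7

# Inverse propensity scoring: reweight each logged reward by how much
# more (or less) likely the target policy was to take that action.
est = sum(target_prob(a) / p * r for a, r, p in logs) / len(logs)

# The target policy's true value is 0.3 * 1.0 + 0.7 * 2.0 = 1.7,
# recovered here without ever running the new policy in production.
assert abs(est - 1.7) < 0.05
```

The catch, and a big part of why this is a hard industry problem, is variance: when the target policy favors actions the logging policy rarely took, the importance weights blow up.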
Jupyter notebooks with version control are now a thing
It is called Jupytext, and I have been waiting for something like this for a long time.
OpenAI experiments with controlled release of its potentially dangerous language model
OpenAI recently announced that its natural language model GPT-2, trained to predict the next word in a 40GB corpus of Internet text, was so good that the company would not release the trained model, due to concerns about malicious applications. Now it has updated its statement, saying it has decided to adopt a staged release and partnership-based sharing approach.
AI for the Rest of Us
A weekly mini-profile of AI startups and ventures that (shocker) solve a problem without the VC-fueled hype.
This one came to me from my friend Rashi.
How a Japanese cucumber farmer is using deep learning and TensorFlow
It is a TensorFlow case study of a cucumber farm in Japan that built a cucumber-sorting system around the framework.
"The sorting work is not an easy task to learn. You have to look at not only the size and thickness, but also the color, texture, small scratches, whether or not they are crooked and whether they have prickles.”
Sure, that sounds like a perfect job for a random forest. Getting enough cucumber pics to train a deep net sounds like a nightmare. From the article:
One of the current challenges with deep learning is that you need to have a large number of training datasets. To train the model, Makoto spent about three months taking 7,000 pictures of cucumbers sorted by his mother, but it’s probably not enough.
That said, the cool thing about the article is the pipeline that the farmer/hacker set up.
This supports the hypothesis that one can create value outside the resources of big tech simply by fitting an off-the-shelf model to one’s own data and plugging it into a data pipeline. The video shown in the article drives this point home.
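The pipeline part is the piece anyone can copy. Here is a minimal sketch of the capture-classify-route loop, with a stand-in stub where the trained TensorFlow model would go; the grade labels and folder layout are hypothetical, not the farm's actual setup:

```python
import shutil
from pathlib import Path

# Hypothetical grade labels; the real system sorts into several classes
# based on size, shape, color, and blemishes.
GRADES = ["2L", "L", "M", "S", "B", "C"]

def classify(image_path: Path) -> str:
    """Stand-in for the trained model's forward pass. In the real system
    this would run the deep-net classifier on the camera frame."""
    return GRADES[len(image_path.name) % len(GRADES)]

def sort_images(inbox: Path, outbox: Path) -> None:
    """Route each incoming photo into a per-grade folder: the
    capture -> classify -> actuate loop, minus the hardware."""
    for img in sorted(inbox.glob("*.jpg")):
        dest = outbox / classify(img)
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(img), str(dest / img.name))
```

The model is the smallest part of the code; the value comes from wiring a prediction into an action, which is exactly what the farmer did with a camera and a conveyor belt.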
Data-Sciencing for Fun & Profit:
Data-Sciencing for Fun & Profit examines emerging trends on the web that could be exploited for profit using quantitative techniques.
Deep learning has solved death metal.
Humans have been completely removed from the process. This YouTube channel perpetually livestreams music made by the neural net.
The Essential Read:
An anonymous CS student made a Reddit post summarizing a rant by his CS professor, and it made a splash. The user has since removed the post, but the comments remain. While I do not agree with everything this professor says, the ideas are thought-provoking, so I post them here.
A second-hand rant on why the CS major will be the next art history major
The following is the deleted text.
He made the argument that globalization (read: Asia) is pumping out masses of CS people who corporations will continue to use to write cheap code and do CS work at low ass wages.
Too large a pipeline
He believes that, on top of Asia, there are masses of US CS grads in the pipeline. He also says alternative sources of code monkeys, etc in the form of people willing to work in the industry who are self taught/bootcamps/etc are in the pipeline and will further flood the CS industry in the coming years.
The end of Moore's Law
He made the case that it is an "open secret" in industry among engineers that Moore's Law has long been dead. He believes that Wall Street and the "MBAs" (as he calls them) are too ignorant to be aware of this and neither is the general public, and he believes large sums of money are flowing into tech companies that otherwise would not be. He believes that once this hits the public/Wall St. etc a lot of money is going to dry up.
Hype and BS
To kind of go along with the end of Moore's Law, he says that there are a ton of "fantasy" CS "phantom products" in the pipeline that only exist to drive up stock prices and get cash. He gave the example of the self driving cars that many tech companies are supposedly working on that he says is all "smoke and mirrors but the public is too dumb to know it". He also says tech like "AI" is drastically inflated beyond what is actually technically possible and people have no idea what the actual limits of AI are and likely will be for decades to come and that this is "going nowhere". He basically said that one company in particular is basically running a full scale scam saying they have "AI" that "is glorified conditional logic code that does nothing special and basically doesn't really exist except to get them cash from Wall Street investors".
He foresees another global recession, this time in the healthcare and education industries being the bubble catalysts rather than the housing market. He believes there is a TON of "dead weight" that he sees in industry currently. (For example, he says there are tech companies he consults at that are paying 100K+ for people to write essentially basic scripts that a community college CS student could handle.). Basically, he says when the next recession hits, TONS of this "dead weight" is going to be laid off and there will be large swaths of unemployed CS grads all over the country.
Rush to hiring people that aren’t needed
He says that tons of the companies he does consulting for have basically snapped up anything with STEM on it as far as hires. He says that they have rapidly hired STEM grads "for no reason, just because it's become an hysteria among the MBAs". In other words, they have directed their HR departments to mass hire STEM/CS grads whenever possible far beyond what the actual needs of the company are "just to have them on the payroll" in large numbers and that many of them are basically doing nothing but collecting a paycheck for small amounts of actual work being done.
Greedy grad schools
This is kind of arrogant on his part, but he basically said that tons of CS students he sees these days are there for "the money" and are "not nearly as competent as they think they are". He says that the "cultural narrative of STEM being the hot thing" feeds into "inflated narcissistic egos" that "the young" are prone to latch on to. He basically went into a ramble about how there are tons of people working in industry with no CS degree but run circles around a lot of CS grads the universities are "pushing through as long as their financial aid check clears the bank". He says again that tons of these grads are going to be unemployed in a few years.
Dumbing things down for women and “the blacks”
He says that lots of CS programs are being dumbed down now. For example, he says that there are schools who are removing math requirements from CS and are doing things to "improve diversity" by dumbing CS down drastically and that there is becoming increasing numbers of CS grads who "hold a degree from a university that would be equal to a community college CS degree 10 years ago".
“My executive friends say they can’t wait to fire everybody”
He basically quoted his insider knowledge from all the big wigs he talks to in industry. (These are C-suite connections he is talking about). He basically said that he talked to a high level corporate executive recently at one of the top 5 tech companies who basically confirms his entire thesis and says "they know" it is coming and are "prepared for it". "IT" being the coming economic recession, ways to drastically cut CS salaries, plans to lay off "dead weight" etc.
Interestingly, it turns out a lot of 1%’ers majored in art history. I suppose that’s because having a big inheritance means they can afford to.
© 2019 Substack Inc. Powered by Substack