A long-standing debate in the machine learning community is whether symbolic AI or connectionist AI (read: deep learning) is the right path forward. The debate lives on, especially on Twitter, and not without some degree of vitriol.
One thing I find missing from the debate is any consideration of the practicalities of actually engineering AI systems. Eminent researchers are not typically responsible, at least directly, for producing production-quality code.
I’m squarely in the symbolic camp, but for different reasons than most. Symbolic AI systems are better when it comes to actually building the damn thing.
Symbolic AI boils down to structured representations of knowledge and logic, i.e., databases and readable code in a high-level programming language, respectively. Heck, the programming language Lisp was literally invented to build AI systems. These are things that modern software engineering teams are especially good at developing and managing.
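To make the point concrete, here is a minimal sketch of what "structured knowledge plus logic" can look like in practice: the knowledge is a plain data structure you could just as well keep in a database, and the reasoning is ordinary, readable code. The facts and the rule are purely illustrative, not drawn from any particular system.

```python
# Knowledge base: facts as plain tuples in a set (the "database").
# Any relational database could hold the same thing.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparents(facts):
    """Rule: if x is a parent of y and y is a parent of z,
    then x is a grandparent of z."""
    derived = set()
    for rel1, x, y1 in facts:
        for rel2, y2, z in facts:
            if rel1 == rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(grandparents(facts))  # → {('grandparent', 'alice', 'carol')}
```

Everything here is inspectable with standard tooling: the facts can be queried, the rule can be unit-tested and code-reviewed, and a bug shows up as a wrong tuple rather than an opaque weight matrix.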
The absence of this glaringly obvious point makes me scratch my head when I read articles like these:
Why AI companies don’t always scale like traditional software startups ~ VentureBeat
Toward Human-Centered Design for ML Frameworks ~ Google AI Blog
Sculley, David, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-François Crespo, and Dan Dennison. "Hidden Technical Debt in Machine Learning Systems." Advances in Neural Information Processing Systems, pp. 2503–2511, 2015.