4. Neural Network Compression
AI is moving to the edge. Perhaps most importantly, edge AI enhances data privacy because data need not be moved from its source to a remote server. Edge AI is also lower latency since all processing happens locally; this makes a critical difference for time-sensitive applications like autonomous vehicles or voice assistants. It is more energy- and cost-efficient, an increasingly important consideration as the computational and economic costs of machine learning balloon.
But in order for this lofty vision of ubiquitous intelligence at the edge to become a reality, a key technology breakthrough is required: AI models need to get a lot smaller. Researchers and entrepreneurs have made tremendous strides in this field in recent years, developing a series of techniques to miniaturize neural networks. These techniques can be grouped into five major categories: pruning, quantization, low-rank factorization, compact convolutional filters, and knowledge distillation.
- Pruning entails identifying and eliminating the redundant or unimportant connections in a neural network in order to slim it down.
- Quantization compresses models by using fewer bits to represent values.
- In low-rank factorization, a model’s weight tensors are decomposed into products of smaller tensors that approximate the originals, reducing the number of parameters.
- Compact convolutional filters are specially designed filters that reduce the number of parameters required to carry out convolution.
- Finally, knowledge distillation involves using the full-sized version of a model to “teach” a smaller model to mimic its outputs.
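The distillation idea can be sketched numerically: a temperature-scaled softmax softens the teacher’s output distribution, and the student is trained to match it. The logits and temperature below are made-up values for illustration, not drawn from any real model.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature -> softer distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for a 3-class problem.
teacher_logits = np.array([6.0, 2.0, -1.0])   # large "teacher" model
student_logits = np.array([4.0, 3.0, 0.0])    # small "student" model

T = 4.0  # temperature softens the teacher's outputs, exposing relative class similarities
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# Distillation loss: cross-entropy between the teacher's and student's
# softened distributions; minimizing it pushes the student to mimic the teacher.
distill_loss = -np.sum(p_teacher * np.log(p_student))
```

In practice this soft-target loss is usually combined with the ordinary hard-label loss, but the matching of softened distributions is the core of the technique.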
These techniques are mostly independent of one another, meaning they can be deployed in tandem for improved results. Some (pruning, quantization) can be applied after the fact to models that already exist, while others (compact filters, knowledge distillation) require developing models from scratch.
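A minimal sketch of two of these post-hoc techniques applied in tandem, using a toy weight matrix; the 50% sparsity target and the int8 affine scheme are illustrative choices, not prescriptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8)).astype(np.float32)  # toy weight matrix

# --- Magnitude pruning: zero out the smallest 50% of weights by absolute value. ---
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

# --- Post-training int8 quantization (affine scheme): map floats to 0..255. ---
lo, hi = float(pruned.min()), float(pruned.max())
scale = (hi - lo) / 255.0
zero_point = np.round(-lo / scale)
q = np.clip(np.round(pruned / scale) + zero_point, 0, 255).astype(np.uint8)

# Dequantize to measure how much accuracy the compression costs numerically.
deq = (q.astype(np.float32) - zero_point) * scale
max_err = float(np.abs(deq - pruned).max())
```

The pruned matrix can be stored sparsely and the quantized one at a quarter of float32’s footprint; real deployments re-measure model accuracy (and often fine-tune) after each step.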
Large technology companies are actively acquiring startups in this category, underscoring the technology’s long-term strategic importance. Earlier this year Apple acquired Seattle-based Xnor.ai for a reported $200 million; Xnor’s technology will help Apple deploy edge AI capabilities on its iPhones and other devices. In 2019 Tesla snapped up DeepScale, one of the early pioneers in this field, to support inference on its vehicles.
And one of the most important technology deals in years—Nvidia’s pending $40 billion acquisition of Arm, announced last month—was motivated in large part by the accelerating shift to efficient computing as AI moves to the edge.
5. Generative AI
Today’s machine learning models mostly interpret and classify existing data. Generative AI is a fast-growing new field that focuses instead on building AI that can generate its own novel content. To put it simply, generative AI takes artificial intelligence beyond perceiving to creating.
Two key technologies are at the heart of generative AI: generative adversarial networks (GANs) and variational autoencoders (VAEs).
The more attention-grabbing of the two methods, GANs were invented by Ian Goodfellow in 2014 while he was pursuing his PhD at the University of Montreal under AI pioneer Yoshua Bengio. Goodfellow’s conceptual breakthrough was to architect GANs with two separate neural networks—and then pit them against one another. VAEs, introduced around the same time as GANs, are a conceptually similar technique that can be used as an alternative to GANs. In general, GANs generate higher-quality output than do VAEs but are more difficult and more expensive to build.
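Goodfellow’s two-network setup can be made concrete through the GAN value function V(D, G), which the discriminator tries to maximize and the generator tries to minimize. The sketch below uses trivial stand-in models with made-up parameters purely to show the objective’s shape; real GANs train both deep networks jointly by alternating gradient steps.

```python
import numpy as np

rng = np.random.default_rng(1)

def discriminator(x, w):
    """Stand-in discriminator: logistic model giving P(sample is real)."""
    return 1.0 / (1.0 + np.exp(-(w[0] * x + w[1])))

def generator(z, theta):
    """Stand-in generator: affine map from noise z to a fake sample."""
    return theta[0] * z + theta[1]

real = rng.normal(loc=4.0, scale=1.0, size=256)  # "real" data distribution
z = rng.normal(size=256)                          # noise fed to the generator
fake = generator(z, theta=(1.0, 0.0))             # an untrained generator's output

w = (1.0, -2.0)  # illustrative discriminator parameters

# GAN value function V(D, G): reward the discriminator for scoring real
# samples high and fake samples low; the generator is trained to lower it.
V = np.mean(np.log(discriminator(real, w))) + \
    np.mean(np.log(1.0 - discriminator(fake, w)))
```

As the generator improves, its samples become harder to distinguish from real data and V falls; at the theoretical equilibrium the discriminator can do no better than a coin flip.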
On the positive side, one of the most promising use cases for generative AI is synthetic data. Synthetic data is a potentially game-changing technology that enables practitioners to digitally fabricate the exact datasets they need to train AI models. As synthetic data approaches real-world data in accuracy, it will democratize AI, undercutting the competitive advantage of proprietary data assets. In a world in which data can be inexpensively generated on demand, the competitive dynamics across industries will be upended. Counterbalancing the enormous positive potential of synthetic data, a different generative AI application threatens to have a widely destructive impact on society: deepfakes.
6. “System 2” Reasoning
In the framework popularized by psychologist Daniel Kahneman, System 1 thinking is intuitive, fast, effortless and automatic. System 2 thinking is slower, more analytical and more deliberative. Humans use System 2 thinking when effortful reasoning is required to solve abstract problems or handle novel situations.
No one yet knows with certainty the best way to move toward System 2 AI. The debate over how to do so has coursed through the field in recent years, often contentiously. It is a debate that evokes basic philosophical questions about the concept of intelligence.