Artificial Intelligence (AI) and Machine Learning (ML) are no longer something out of sci-fi movies; they're very much a reality. While it took many decades to get here, recent heavy investment in this space has significantly accelerated development.
While ML is making significant strides in cyber security and autonomous cars, the field as a whole still has a long way to go, because a number of challenges continue to stand in the way of progress.
What are these challenges? Let’s take a look.
1. Memory networks
Memory networks, or memory-augmented neural networks, still require a large working memory to store data. This type of neural network must be connected to a memory block that the network can both write to and read from.
This is a major hurdle that ML needs to overcome. To attain truly efficient and effective AI, we have to find a better method for networks to discover facts, store them, and seamlessly access them when needed.
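A minimal sketch of how such a network might read from its memory block, using content-based ("soft") addressing in plain Python. The memory contents and query key here are toy values for illustration; real memory-augmented networks learn these representations end to end:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def read(memory, key):
    # Content-based addressing: weight every memory slot by its similarity
    # to the query key, then return the weighted sum of slots (a soft read).
    weights = softmax([cosine(row, key) for row in memory])
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0], [0.0, 1.0]]   # two stored "facts" as toy vectors
result = read(memory, [1.0, 0.1])   # query resembling the first slot
```

Because every slot contributes to the result, the read is differentiable, which is what lets the surrounding network be trained by backpropagation.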
2. Natural language processing (NLP)
Although a lot of money and time has been invested, we still have a long way to go before machines genuinely process and understand natural language.
This is still a massive challenge even for deep networks. At the moment, we teach computers to represent languages and simulate reasoning based on that. However, the results have been consistently poor.
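One common way to "represent language" is to map words to vectors and build sentence representations from them. The sketch below uses hand-made toy vectors and simple averaging; it also illustrates why results are often poor, since averaging throws word order away entirely:

```python
import math

# Toy word vectors, hand-made for illustration. Real systems learn these
# from large corpora; the values and vocabulary here are assumptions.
vectors = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.0],
    "car":   [0.0, 0.1, 0.9],
    "sat":   [0.5, 0.5, 0.1],
    "drove": [0.1, 0.4, 0.8],
}

def sentence_vector(words):
    # Represent a sentence as the average of its word vectors --
    # a crude baseline that ignores word order.
    dim = len(next(iter(vectors.values())))
    return [sum(vectors[w][i] for w in words) / len(words) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

s1 = sentence_vector(["cat", "sat"])
s2 = sentence_vector(["dog", "sat"])
s3 = sentence_vector(["car", "drove"])
```

With these toy vectors, "cat sat" lands closer to "dog sat" than to "car drove", which is the kind of rough similarity such representations capture; anything requiring actual reasoning over the sentence is where they break down.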
3. Attention
Human visual systems use attention in a highly robust manner to integrate a rich set of features. At the moment, however, ML focuses on small chunks of input stimuli one at a time and then integrates the results at the end.
For ML to truly realize its potential, we need attention mechanisms that work like the human visual system built into neural networks. Right now we use a softmax function to access memory blocks, because soft attention is differentiable and can be trained by backpropagation; truly selective "hard" attention, which picks a single location the way our eyes do, is non-differentiable and much harder to train.
4. Understanding deep net training
Although ML has come very far, we still don't know exactly how training deep nets works. And if we don't understand how training actually works, how can we make any real progress?
5. One-shot learning
While applications of neural networks have evolved, we still haven't been able to achieve one-shot learning. So far, traditional gradient-based networks need an enormous amount of data to learn, often in the form of extensive iterative training.
Instead, we have to find a way to enable neural networks to learn using just one or two examples.
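One family of approaches frames one-shot learning as nearest-neighbor classification in a learned feature space: given a single labeled example ("shot") per class, a new input is assigned the label of the closest example. The sketch below uses hand-picked 2-D features as a stand-in for what a deep network would actually learn:

```python
def classify(support, query):
    # support: one labeled example per class, as {label: feature vector}.
    # Classify the query by nearest Euclidean distance. In real systems
    # the feature space is learned by a deep network; the toy vectors
    # here are assumptions for illustration.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(support, key=lambda label: dist(support[label], query))

support = {"cat": [0.9, 0.1], "car": [0.1, 0.9]}  # one example per class
label = classify(support, [0.8, 0.3])              # a new, unseen input
```

The hard part is not the nearest-neighbor step but learning a feature space in which one example per class is actually enough, which is what gradient-based training struggles to do from so little data.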
6. Deep reinforcement learning to control robots
If we can figure out how to use deep reinforcement learning to control robots, we can make characters like C-3PO a reality (well, sort of). More broadly, deep reinforcement learning enables ML to tackle much harder problems, because the system learns from trial and error rather than from labeled examples.
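The core idea can be sketched with tabular Q-learning on a toy problem, here a one-dimensional corridor where the agent must walk right to reach a reward (the environment, rewards, and hyperparameters are all made up for illustration; deep RL replaces the table with a neural network):

```python
import random

random.seed(0)
N = 4                                  # corridor states 0..3; stepping past
Q = [[0.0, 0.0] for _ in range(N)]     # state 3 yields reward 1 and ends
alpha, gamma, eps = 0.5, 0.9, 0.3      # the episode; actions: 0=left, 1=right

for episode in range(200):
    s = 0
    while True:
        # Epsilon-greedy: explore randomly sometimes, else act greedily.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        if s2 == N:                    # reached the goal off the right end
            Q[s][a] += alpha * (1.0 - Q[s][a])
            break
        # Standard Q-learning update toward the bootstrapped target.
        Q[s][a] += alpha * (gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the right-hand action dominates near the goal; scaling the same trial-and-error loop to continuous, high-dimensional robot control is exactly what remains hard.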
7. Semantic segmentation
According to Tapabrata Ghosh, Founder and CEO at Vathys, “we've solved image classification, now let's solve semantic segmentation.”
8. Video training data
We have yet to utilize video training data; instead, we are still relying on static images. To make ML systems work better, we need to enable them to learn by listening and observing.
Video datasets tend to be much richer than static images, which is why we humans learn so much by observing our dynamic world. Why shouldn't machines be enabled to do the same?
9. Object detection
Object detection is still hard for algorithms to get right, because image classification and localization in computer vision and ML are still lacking. The best way to resolve this is to invest more resources and time to finally put the problem to bed.
10. Democratizing AI
AI is still not fully democratized: big data and compute power remain concentrated in the hands of a few. If we can change this, we will have the collective intelligence required to take on the world's problems head on.
What else would you add to this list?