We've finally reached a point where Cloud technologies have become a commodity used across niches as varied as physics and astronomy, geography and genetics, healthcare and HR. Virtual infrastructures allow scientists to process huge data pools in a fraction of the time it used to take, which leads to new discoveries.

Yet there's one more technology with the power to change the way we currently perceive data processing: machine learning, which is gaining momentum right now.

A few years ago, Google fully refactored its image search service by implementing machine learning algorithms; on June 16, 2016, the tech giant announced plans to expand its Zurich-based R&D center and repurpose it for new developments in artificial intelligence (AI), natural language processing (NLP), and deep learning.

According to Greg Corrado, a senior research scientist at Google, the active adoption of machine learning can be as revolutionary as the creation of the Internet. In the long run, we won't need to scrutinize every single detail of a given process; all we'll have to do is upload data to the system, and it will start learning from that data on its own.

Deep learning is the most promising area of machine learning. It's based on neural networks, which require large data volumes to teach themselves. Neural networks were first described back in the 1940s, but they only started to gain traction 3-4 years ago, as computing power increased significantly.

Last year, Google made its deep learning library TensorFlow publicly available in an attempt to pique developers' interest and engage them in building machine-learning-based applications. The system represents computations as a dataflow graph. Unlike Theano and Torch, TensorFlow supports distributed computing, and that's what makes the difference. Google uses TensorFlow for virtually everything, from speech recognition to image search, but the truth is that it'll come in most handy for data scientists experimenting with deep neural networks, as well as for organizations that need to quickly train and test their systems. Feel free to play with TensorFlow here.
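To see what "a dataflow graph" means in practice, here's a minimal sketch using the TensorFlow 1.x-style graph API; the values are arbitrary:

```python
import tensorflow as tf

# Nodes are operations, edges are tensors; nothing is computed
# while the graph is being built.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
product = tf.multiply(a, b)
total = tf.add(product, 1.0)

# The graph only executes inside a session, which is also where
# TensorFlow can distribute the work across devices or machines.
with tf.Session() as sess:
    print(sess.run(total, feed_dict={a: 3.0, b: 4.0}))  # 13.0
```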

Artificial Intelligence is Entering Journalism and Fiction Writing

The Guardian reporter Alex Hern described his attempt to train a simple neural network on 119MB of The Guardian's leader columns, setting a computer up to the point where it could start learning from that corpus of text. After running the training application for 30 minutes and getting around 1% of the way through, he realized he would need a much faster machine, so he rented a Cloud server from Amazon to complete the task in 8 hours. To test the trained network, he used the phrase "Thursday’s momentous vote to stay in the EU was..." as the seed and let the system imagine what the rest of the sentence should read like. The variants he got were pretty disappointing, as they made little sense (e.g., "Thursday’s momentous vote to stay in the EU was more contracts in the 1970s", "Thursday’s momentous vote to stay in the EU was on the promise of the pronouncements"). The good news for Hern: had he been able to train a machine to write compelling Guardian editorials, or even convincing sentence extracts from them, his job would be much less secure than it is now.
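Hern's piece doesn't spell out the exact tooling he used, so the following is only a sketch of a character-level setup like his, assuming a Keras-style LSTM; corpus.txt, the window size, and the training settings are all placeholder choices, and for brevity the script one-hot encodes the whole corpus in memory, which would only work for a small sample of the 119MB:

```python
import numpy as np
from tensorflow import keras

text = open("corpus.txt").read()  # hypothetical corpus file
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}
ix_to_char = {i: c for c, i in char_to_ix.items()}

SEQ_LEN = 40
# Slice the text into overlapping 40-character windows, each labeled
# with the character that follows it.
X = np.zeros((len(text) - SEQ_LEN, SEQ_LEN, len(chars)), dtype=np.float32)
y = np.zeros((len(text) - SEQ_LEN, len(chars)), dtype=np.float32)
for i in range(len(text) - SEQ_LEN):
    for t, c in enumerate(text[i:i + SEQ_LEN]):
        X[i, t, char_to_ix[c]] = 1.0
    y[i, char_to_ix[text[i + SEQ_LEN]]] = 1.0

model = keras.Sequential([
    keras.layers.LSTM(128, input_shape=(SEQ_LEN, len(chars))),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)

# Seed the network and let it "imagine" the continuation one character
# at a time (assumes the seed's characters all occur in the corpus).
generated = "Thursday's momentous vote to stay in the EU was "
for _ in range(100):
    window = generated[-SEQ_LEN:]
    x = np.zeros((1, SEQ_LEN, len(chars)), dtype=np.float32)
    for t, c in enumerate(window):
        x[0, t, char_to_ix[c]] = 1.0
    probs = model.predict(x, verbose=0)[0]
    generated += ix_to_char[int(np.argmax(probs))]
print(generated)
```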

Yet, his misfortune with machine learning is easy to explain: the neural network he used operated on individual characters and had no idea how to put words together or how to use proper grammar. To enable a network to put words together adequately based on real-life data, you should feed it as much data as possible, and a collection of editorials from just one publisher is definitely not enough to train a machine.

Human Rescue Rangers

Speaking of successful cases and efficient machine learning tools, I can't help mentioning AlphaGo, an AI-based program that not long ago won $1 million by beating the world's second-best Go player.

Image: The Guardian

The program uses two types of learning:

  • supervised learning ("learning with a trainer"), where the machine gets data on all Go matches ever played between people, and
  • reinforcement learning, where the machine plays against itself and learns from its own games (see the sketch after this list).
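AlphaGo's real pipeline combines deep neural networks with Monte Carlo tree search, which is well beyond a blog snippet. Purely to illustrate the self-play idea, here's a minimal tabular Q-learning sketch on a toy game (Nim); the game, the parameters, and the table-based approach are all simplifications, not anything AlphaGo actually uses:

```python
import random
from collections import defaultdict

# Toy self-play sketch: both sides share one Q-table and learn Nim
# (21 sticks, take 1-3 per turn, whoever takes the last stick wins).
Q = defaultdict(float)
ALPHA = 0.5  # learning rate

def choose(sticks, epsilon=0.1):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < epsilon:
        return random.choice(moves)                      # explore
    return max(moves, key=lambda m: Q[(sticks, m)])      # exploit

for episode in range(50000):
    sticks, history = 21, []
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    # The player who made the final move won; walk the game backwards,
    # flipping the reward sign at each ply (a win for one side is a
    # loss for the other) and nudging Q-values toward the outcome.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# The winning strategy is to leave the opponent a multiple of 4 sticks,
# so the learned greedy move from 21 sticks should be 1.
print(choose(21, epsilon=0.0))
```

After enough self-play episodes, the greedy policy rediscovers the game's winning strategy without ever seeing a human game, which is the essence of the second learning mode above.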

Yet, there were things even AlphaGo couldn't teach itself. According to Thore Graepel, a research lead at DeepMind, which built AlphaGo, the system understands well which areas of the board it should focus its thinking on. However, the program doesn't know when to stop thinking and make a move. That's a critical limitation, as professional Go matches use complex time control systems. For instance, in the match against Lee Sedol, each player had a total of two hours to make all their moves, plus three 60-second overtime periods, aka "byo-yomi", that they could play into once the two hours were up.

The developers didn't hard-code any timing rules into the system; instead, they handled time management with a separate, hand-crafted algorithm. Although AlphaGo was later able to optimize that algorithm through experimentation, the fact remains that it wouldn't have been able to win the match without human involvement.

Microsoft has recently published its Project Malmo on GitHub. It's a platform that aims to explore AI's potential further and train game characters to take different actions, like crossing a bridge or building robust, complex objects. Besides, the project allows AI to play against us humans and enables communication between AI and people through a special chatbot. According to Katja Hofmann, who leads the project, Project Malmo's key objective is to build an AI system that learns from users and helps them solve their issues. It's based on reinforcement learning algorithms and can be used, for instance, to train a machine to navigate a room full of obstacles, with random players providing tips or instructions that the AI gradually learns to recognize and use for proper decision making.
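Project Malmo defines its missions in XML and drives them through its own client API, so the snippet below doesn't use Malmo at all; it's a generic Q-learning loop on a hypothetical 5x5 grid with obstacles, just to show the kind of trial-and-error navigation learning the paragraph describes:

```python
import random
from collections import defaultdict

# Hypothetical 5x5 "room": the agent starts at (0, 0) and must reach
# GOAL while learning to avoid the obstacle cells.
WALLS = {(1, 1), (2, 3), (3, 1)}
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Apply a move; bumping into an obstacle or the room's edge hurts."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt in WALLS or not (0 <= nxt[0] < 5 and 0 <= nxt[1] < 5):
        return state, -1.0
    return nxt, (10.0 if nxt == GOAL else -0.1)

for episode in range(5000):
    state = (0, 0)
    while state != GOAL:
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                      # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit
        nxt, reward = step(state, action)
        # Standard Q-learning update toward reward plus discounted
        # value of the best next action.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
```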

Minecraft was also used to train a robot at Brown University; according to one of the university's professors, Minecraft can be an effective way of collecting data about robot-human interactions.

These and other projects not covered in this post suggest that full-fledged communication with AI may be within reach in the near future. But will it be possible without human intervention?

Vik is our Brand Journalist and Head of Online Marketing / PR with 11+ years of international experience in IT B2B. He's also a guest blog contributor to Business2community, SitePoint, Journal of mHealth, Wearable Valley and other IT portals. You can contact him directly on LinkedIn.
