What happens if you leave artificial intelligence unattended?

Independent AI

Although it is already bad form to open with last year's AlphaGo match against a human, let's start with that example. It interests us because it is perhaps the first widely known case of a "self-learning AI". There are many other examples, but to this day they have not left the laboratories and are largely unknown to the general public. At the core of AlphaGo's self-training lay many hours of playing against itself, plus the study of the games it had played.

This contest of man versus machine attracted the attention of every major publication. I was, as usual, slow and missed that gorgeous hype train of modern journalism. But the "confrontation of the century" is interesting not only for the buzz around it (a million dollars in prize money, the award of an honorary 9th dan in go, Science's scientific breakthrough of the year) and its distinct flavor of Asimov-style fiction. The essence of the event in brief: the AlphaGo machine dominated, winning 4 of 5 games in the traditional Eastern game of go. And it beat not some first-rank player, but the Korean 9-dan professional Lee Sedol (2nd place in the international rating). Experts say the case does not resemble the chess battle between a computer and Garry Kasparov, because in 1997 the computer was trained under the supervision of chess players, who themselves wrote its strategies and coached it. AlphaGo, by contrast, was trained by brute force (the machine worked through a sample of hundreds of thousands of games), using an approach that remotely resembles the working scientific models popularly known as GANs (generative adversarial networks). These are of particular interest, because members of the AlphaGo team have worked closely with such adversarial neural networks. We will consider them in this article.

This approach to training artificial intelligence is no longer news: generative adversarial networks, or simply GANs, first appeared in 2014, introduced by Ian Goodfellow. GANs work very simply, like a pairing of prosecutor and defense attorney, bad cop and good cop, or critic and author. One network (the discriminator, D) classifies incoming data, marking it as false or true. A competing network (the generator, G) studies the discriminator's judgments and creates new data based on them. The two networks teach each other. Most interestingly, GANs need very small samples of training data: it takes only a few hundred images and three or four rounds of repetition for the generator to start producing its own versions of the original images (previously, training neural networks required many hours and millions of samples).
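To make the pairing concrete, here is a minimal, purely illustrative sketch of the adversarial loop in NumPy: a one-dimensional generator learns to imitate a Gaussian while a logistic-regression discriminator tries to tell real samples from fakes. Every number here (the target distribution, learning rate, model shapes) is invented for the example; real GANs use deep networks on images, not lines on scalars.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real data: a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator G(z) = g_w * z + g_b turns noise z into a "fake" sample.
# Discriminator D(x) = sigmoid(d_w * x + d_b) scores how "real" x looks.
g_w, g_b = 1.0, 0.0
d_w, d_b = 0.0, 0.0
lr, batch = 0.05, 64

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x = rng.normal(REAL_MEAN, REAL_STD, batch)          # real samples
    fake = g_w * rng.normal(size=batch) + g_b           # fake samples
    p, q = sigmoid(d_w * x + d_b), sigmoid(d_w * fake + d_b)
    d_w -= lr * np.mean(-(1 - p) * x + q * fake)        # grad of -log p - log(1-q)
    d_b -= lr * np.mean(-(1 - p) + q)

    # Generator step: push D(fake) toward 1 (fool the discriminator).
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    q = sigmoid(d_w * fake + d_b)
    dfake = -(1 - q) * d_w                              # grad of -log q w.r.t. fake
    g_w -= lr * np.mean(dfake * z)
    g_b -= lr * np.mean(dfake)

fakes = g_w * rng.normal(size=10000) + g_b
print(f"fake mean ~ {fakes.mean():.2f}, target {REAL_MEAN}")
```

The generator starts around zero and, driven only by the discriminator's gradient, drifts toward the real mean of 4: exactly the mutual-teaching dynamic described above.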

One of the players most interested in GAN models was Facebook, which even hurried to publish a post about them. Why Facebook? Because it is the most public player in the high-tech market: Google, Amazon, and Microsoft alike are massively buying up artificial-intelligence teams and startups to build their own developments. But they are a little behind Facebook, which has a huge training set for teaching AI on images (computer vision is one of the most popular areas of AI training) and an excellent FAIR team (the Facebook Artificial Intelligence Research group).

Summary: the discriminator network learns to distinguish real photos from computer-generated ones, while the generator network trains to create realistic photographs indistinguishable from the originals. In this training race, both networks have equal (?) chances of success. What will happen when they complete their training?

Trendfall

In recent years, machine learning has been experiencing a true golden age: the increased power of computers and instant access to large data arrays have made the field very hot. Today's AI is the Ford car at the beginning of the last century, or the space satellites of the 1960s: a general rush, dizzying predictions, and a weak understanding of what to do with all this wealth. Below are examples of the latest high-profile technologies in the field of AI.

One-shot learning is the training of neural networks on a small amount of data, ideally from a single example and a small training sample. More and more startups are working on fast-learning AI.
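The simplest flavor of the idea is metric-based: keep one labeled example per class and assign each new input to the nearest one in some feature space. The sketch below shows only that mechanic; the 3-D vectors and labels are invented for illustration, and in a real one-shot system they would be embeddings produced by a pretrained network.

```python
import numpy as np

# One labeled example per class: the "support set" of one-shot learning.
# Hand-made 3-D feature vectors, purely for demonstration.
support = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.1, 0.9, 0.0]),
    "car": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query):
    """Assign the label of the most similar support example."""
    return max(support, key=lambda label: cosine(query, support[label]))

print(classify(np.array([0.8, 0.2, 0.1])))  # nearest to the "cat" example
```

A single example per class is enough here because all the hard work is assumed to have happened elsewhere, in whatever produced the features: that is precisely why a good embedding makes "learning from one example" possible.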

Thus the game algorithm DeepStack did not repeat the fate of AlphaGo, but it came very close to successful training on small samples. At the end of 2016, DeepStack played a series of learning games of Texas hold'em poker against 11 players from an international poker organization. The algorithm needed 3,000 hands with each player to show decent results: confident victories (396 points on average) over ten players and a narrow win (70 points, within statistical error) over the eleventh. The algorithm did not merely learn as it played; it used continual re-solving, adapting to each new player and each new combination of cards. DeepStack is the result of combining deep recursive neural networks with GANs.

Microsoft's ResNet neural network project is used for image recognition. If you capture the network's activity while it sorts and recognizes images, you get pictures like these:

A promising direction in forensics and photography is Face Aging With GANs: a discriminator-generator pair, after training on 5,000 photographs of human faces of different ages, can reproduce and predict how faces change with age. When the generator renders an aged face, the discriminator judges how closely the result matches the original.

Goldman Sachs, the king of trading, has replaced some of its traders with algorithms. The place of 600 ordinary traders is now occupied by 200 developers and engineers who maintain the trading algorithms. This is connected with a large (146-point) bank management plan for automating simple brokerage operations. Traders with extensive experience and seasoned salespeople will not be affected.

Meanwhile, in some hedge funds (Sentient Technologies Inc., Numerai, the Emma hedge fund), AI-based trading algorithms already do all the work of analytics and forecasting. Typically, AI specialists are not enthusiastic about working for financial corporations, but the benefits of large data sets and the opportunities for training AI outweigh the skepticism and the reluctance to serve capitalist Molochs. 2016 saw the birth of several hedge funds at once in which artificial intelligence does the trading.

Baidu, the Chinese twin of Google, is not asleep either. Most Chinese developments in AI and machine learning are distributed free of charge, and anyone can test and study them. In January 2017, an augmented-reality laboratory opened in Beijing, where Andrew Ng wants to marry virtual reality to the work of search engines.

Another promising Baidu development is the medical bot Melody, which can conduct an initial patient interview and threatens to replace the entire front desk at outpatient clinics.

Democratization of AI. Today researchers need large amounts of data and computing power, so for now only large companies and research institutes are competitive in the field. As soon as AI models appear that can learn from small amounts of data, things will get even more interesting, because far more people will be able to train and explore AI. Perhaps social networks will appear (they already have) where people can share progress in training their AI agents.

Mechanisms for the automatic detection of fake news, photos, and videos will spread. The development of IBR (image-based rendering), a technology that draws new frames based on existing ones (somewhat like the already-implemented inbetweening and motion-interpolation methods), simply demands the appearance of such a fake analyzer.
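To see what "drawing new frames based on existing ones" means at its most basic, here is a toy in-between: a linear cross-fade between two frames. This is only the trivial baseline; real motion interpolation and IBR also estimate per-pixel motion, which is exactly what makes their output hard to tell from genuine footage.

```python
import numpy as np

# Two tiny grayscale "frames": 4x4 pixel arrays with values in [0, 1].
frame_a = np.zeros((4, 4))   # all black
frame_b = np.ones((4, 4))    # all white

def inbetween(f0, f1, t):
    """Linear cross-fade between two frames for 0 <= t <= 1.
    The simplest possible synthesized in-between frame."""
    return (1.0 - t) * f0 + t * f1

middle = inbetween(frame_a, frame_b, 0.5)
print(middle[0, 0])  # 0.5, halfway between black and white
```

Any number of intermediate frames can be synthesized this way by sweeping t from 0 to 1, which is why a generated video can contain far more frames than were ever captured.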

Another hello from fraternal China is Face++, a facial-recognition development that lets you pay with your face (it is hard to count how many layers there are in this pun). The development is being tested on the Alipay mobile payment system: you can now make payments by presenting nothing but your own face.

In speech recognition and synthesis, several cases are of interest: the Adobe VoCo (voice conversion) demo, a "Photoshop for voice", is an application for Adobe Audition that manipulates an original sample of human speech, adding new words and meanings to the original message. Ventriloquism now takes on new meanings.

A good example of how an independent researcher can teach an AI languages:

A program learning English:

A program learning Japanese:

And what happens if you leave an AI unattended? It will teach itself without stopping and become ever more perfect, for example, in music:

Algorithmic mashup or artificial Stravinsky

Instead of conclusions: when I hear that young people with MBA degrees are doing AI startups, my hand reaches for the mouse. Considering how much free software and how many powerful computers are available to ordinary people today, the fashion for AI should not be surprising. Despite the hype around artificial intelligence and machine learning, the breathless predictions and childish stunts like Rocket AI, and despite all the advances in the area, AI can hardly be called intelligence in the exact sense of the word ("people, people everywhere": all the work of developing and supporting artificial intelligence is still performed by people; an AI cannot even name itself, it only says what scientists have put into it). Most services operating on the basis of artificial intelligence are still supported by developers; we can speak of only a very small share of automation by intelligent machines. So far, artificial intelligence only repeats and reproduces training or working information: yes, it amazes with computing power and learning speed, but that is about it. It is too early to talk about anything resembling human higher nervous activity. "And not necessarily," Larry Niven* would say.

Update 02/23/17: Facebook has released the Prophet project, an automatic business-forecasting tool. Prophet uses additive nonparametric regression models for its predictions.
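As a flavor of what "additive regression" means here, the sketch below fits a linear trend plus one seasonal harmonic by ordinary least squares and extends both into the future. This is not Prophet's actual implementation (which adds trend changepoints, multi-order Fourier seasonality, and holiday effects); all the numbers are synthetic.

```python
import numpy as np

# Synthetic daily series: linear trend + weekly seasonality + noise.
rng = np.random.default_rng(1)
t = np.arange(120.0)
y = 0.5 * t + 3.0 * np.sin(2 * np.pi * t / 7.0) + rng.normal(0.0, 0.3, t.size)

def design(t, period=7.0):
    """Columns: intercept, linear trend, one seasonal sin/cos pair."""
    return np.column_stack([
        np.ones_like(t),
        t,
        np.sin(2 * np.pi * t / period),
        np.cos(2 * np.pi * t / period),
    ])

# Fit the additive model y(t) = a + b*t + c*sin(...) + d*cos(...).
coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)

# Forecast the next 30 "days" by evaluating the same components ahead.
future = np.arange(120.0, 150.0)
forecast = design(future) @ coef
print(f"estimated trend slope ~ {coef[1]:.2f} (true 0.5)")
```

Because the model is a sum of interpretable components, trend and seasonality can be inspected separately: the core appeal of the additive approach.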

Based on the podcast with Ian Goodfellow and Richard Mallah.

* "There are minds that think the same way as you. Only in a different way." Niven's 15th law.