DeepMind's artificial intelligence has taken another step forward in the complexity of tasks it can perform: it has now been taught to play football. Previously, DeepMind trained its agents to control human avatars in a virtual world and walk. Next, the agents learned from a dataset of footballers' movements. Based on that data, DeepMind can now control four players in a 2v2 match.
Although the play still looks somewhat unstable, learning to control two footballers who can cooperate with each other took the equivalent of 10 years of training, or 3 months in computer simulation. A task that is relatively easy for humans is complex for an artificial intelligence.
The main reason DeepMind was taught to play football was to teach it to cooperate without human intervention. In the future, it could control warehouse robots that perform tasks automatically, avoid accidents, and interact with other robots. It may seem a bit strange, but DeepMind has previously collaborated with Liverpool football club, analysing players in order to suggest tactics on the pitch.
In five years, DeepMind went from an intelligence that beat humans at the game of Go to one that beat human professionals at StarCraft II. Since then, it has predicted the weather with high accuracy, interpreted ancient texts, and produced predicted structures for roughly 200 million proteins.