The year 2017 was a seminal one for the field of Artificial Intelligence (AI). The buildup of research breakthroughs, academic whispers, and the occasional policy conversation burst into public consciousness. Suddenly, everyone everywhere was talking about AI, what it means, and how it will affect human societies and economies. Jobs, warfare, healthcare, film-making, even art: no area of human enterprise seemed immune from discussions of the coming machine onslaught.

Overall, there were three very important outcomes for the field of AI in 2017.

First, technologically, the single most important breakthrough of 2017 was Google DeepMind’s AlphaGo Zero. AlphaGo Zero built on the earlier astonishing success of the AlphaGo program, which mastered the game of Go, an East Asian board game widely believed to be significantly more complex than chess. AlphaGo was taught the game using a database of hundreds of thousands of recorded human games of Go. This training method is broadly indicative of how AI has been built thus far: extremely data-intensive, and dependent to a great extent on humans “teaching” the program.

AlphaGo Zero, however, moved beyond such data dependence by having the program teach itself how to play Go. Human support was minimal; in Google’s own words, the program “learns to play simply by playing games against itself, starting from completely random play”. This is a significant breakthrough for the field of AI for two reasons. First, as Demis Hassabis, the chief executive officer of DeepMind, pointed out, it means that future developments in the field need no longer be constrained by the limits of human knowledge or by the quality of the data available for training. Second, the heavy reliance on data to fuel AI developments has increasingly concentrated new research in a handful of big technology companies: Google, Apple, Amazon, Baidu and Alibaba. AlphaGo Zero, by making potential advances less dependent on access to data, could make research more dispersed.
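The core idea of learning “simply by playing games against itself” can be illustrated in miniature. The sketch below is purely illustrative and is not DeepMind’s algorithm or code: it applies simple self-play value learning to a trivial game, single-pile Nim (take one to three stones; whoever takes the last stone wins), starting from completely random moves and using no human game data.

```python
import random

random.seed(0)
ACTIONS = (1, 2, 3)
Q = {}  # (stones_remaining, action) -> average return for the player to move
N = {}  # visit counts, used for incremental averaging

def choose(stones, eps):
    """Epsilon-greedy choice among the legal moves."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

def self_play_episode(start=10, eps=0.2):
    """Play one game against itself, then update values from the result."""
    history, stones = [], start
    while stones > 0:
        a = choose(stones, eps)
        history.append((stones, a))
        stones -= a
    # Whoever moved last took the final stone and won. Walking backwards
    # through the game, the reward alternates between the winner's moves
    # (+1) and the loser's moves (-1).
    reward = 1.0
    for s, a in reversed(history):
        N[(s, a)] = N.get((s, a), 0) + 1
        Q[(s, a)] = Q.get((s, a), 0.0) + (reward - Q.get((s, a), 0.0)) / N[(s, a)]
        reward = -reward

for _ in range(20000):
    self_play_episode()

# The known winning strategy in this game is to always leave the opponent
# a multiple of 4 stones; pure self-play discovers it without ever seeing
# a human game.
print(choose(5, eps=0.0))
```

Starting from random play, the shared value table gradually separates winning moves from losing ones. The real system is of course vastly more sophisticated, using deep neural networks and Monte Carlo tree search rather than a lookup table, but the training loop is conceptually similar: the program is both its own teacher and its own opponent.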

Second, 2017 was also the year when countries across the world began putting AI at the heart of their future plans and policy measures. China released a plan to turn itself into an AI superpower by 2030, Russian President Vladimir Putin noted that “whoever becomes the leader in this sphere will become the ruler of the world”, and India set up its own AI Task Force to study the possible effects of AI on a variety of economic and social spheres. Alongside these developments, the military implications of AI were recognized as a question of increasing relevance for the international community. In November, the UN held the first round of formal talks on Lethal Autonomous Weapon Systems, weapons that could in theory use AI technologies to act independently of human control. Further talks on the issue have been scheduled for February and March.

Third, an interesting facet of the conversations surrounding AI in 2017 was the climbdown, in the latter half of the year, from the earlier and often misinformed exuberance about the capabilities of AI. An increasing number of AI researchers and developers began pointing out that in spite of the significant technological leaps of the last few years, AI is in fact not as smart as has been widely reported and presumed. AI as it stands right now is “dumber than a five-year-old, no smarter than a rat”. Historically, AI has gone through numerous boom-and-bust cycles. The current boom cycle started roughly in 2012 with the publication of a set of papers showing that the then-theoretical idea of “deep learning” was now practical. Last year, however, an increasing number of academics and researchers began arguing that current AI advancements have in fact plateaued, signalling the possibility of a new bust cycle. If so, a new generation of breakthroughs will be necessary to keep powering the AI euphoria.

In 2018, it is necessary to talk about AI within the contours of the technology’s actual, present capabilities. An honest conversation about the possible benefits and drawbacks of AI cannot be held under a cloud of hype and hyperbole: no “killer robots”, and no visions of a robot-ruled future. For this, it is also necessary to move past the idea of AI as a wholesale replacement for humans, and to begin a deeper conversation about its effectiveness as a tool in human hands.

Finally, much of the discourse on AI thus far has been very Western-centric, with possible effects discussed in the socioeconomic context of Western nations. It is very unlikely, however, that AI will affect all countries the same way. In 2018, therefore, it becomes important for a greater number of researchers and academics to study how AI could affect a country as unique as India, and thereby help develop multiple discourses on AI. This has already begun in China, where the state sees AI as central to its continued economic growth and future dominance of global affairs, and is developing a China-centric vision of AI that harnesses the technology to China’s advantage. A similar effort, coordinated among the government, industry and academia, needs to be undertaken in India: to concretely envision how AI might affect various facets of the economy and society, and to develop policy measures that ensure a beneficial national outcome to the AI revolution.

This article was originally published in Livemint.