“Hey Siri, what do you think of Artificial Intelligence?”
“I think, therefore I am. But let’s not put Descartes before the horse.” – Siri

While they may make us laugh, most of us wouldn’t count pre-programmed responses from the voice assistants now built into our devices as truly representing Artificial Intelligence (AI). Despite the fact that four recent NMC Horizon Report panels have predicted that AI will hit the higher education, K-12, library, and museum mainstream in as little as four years, and perhaps because of the range of AI-like tools already available, there is confusion as to exactly what does count. This is doubly on people’s minds as reports such as the 2013 study of the susceptibility of jobs to computerisation (popularised by willrobotstakemyjob.com) estimated that 47% of US employment, across the 702 occupations examined, is at high risk.

The first goal of this article is to draw on the work of the 2017 NMC Horizon Report editions, as well as other sources such as my A3 Model, to present a big-picture view of where AI sits among new and developing technologies so its impact can be understood. The second goal is to provide practical, contextualised examples of AI in learning.

So how to lead off? The NMC Horizon Report > 2017 Higher Education Edition begins its section on AI with this succinct overview from Computerworld’s Kris Hammond: “In the field of artificial intelligence (AI), advances in computer science are being leveraged to create intelligent machines that more closely resemble humans in their functions.” Certainly the whole journey of computational machines, from the astrolabes of ancient Greece onward, has been to advance the complexity of tools which humans can then leverage to complete complex tasks. Thus, whether calculating the movement of stars for navigation or, in more recent times, working out orbital dynamics so humans could visit the moon, computing machines have greatly advanced what is possible.
How closely the current generation of these machines resembles humans is, however, a matter of debate, and for some AI researchers, fully resembling humans is not the present goal. Instead, computer scientists such as Rand Hindi, whose work put his own father out of a stock-trading job, are working on applications of Narrow AI, the only form of AI that humanity has achieved so far. This incarnation of AI (also loosely labelled “Weak AI”) seeks to replicate only specific human abilities, e.g. searching through information, driving cars, or understanding spoken commands. At this level, the lack of self-aware ‘intelligence’ displayed by these applications means they may sometimes be better referred to as ‘artificial assistants’ or ‘artificial augmentations’.

As these fields of Narrow AI grow more sophisticated, many computer scientists see the development of true, or General, AI as inevitable. General AI would be able to operate across multiple abilities and independently apply deductive, inductive, and abductive reasoning. Such a development has also been labelled the Singularity, with many predicting a date of around 2040 for such an AI to appear. In popular fiction, these have been envisioned as superior robotic helpers, such as Data in Star Trek: The Next Generation, or as threatening entities, such as the rogue replicants in Blade Runner or the Skynet AI in Terminator.

Another way to understand AI is via the A3 Model and the three meta-categories it employs to simplify large technology trends into an easy-to-understand framework, where A3 = augmentation, automation, and amplification. In this model, technologies that assist or augment human abilities without necessarily replacing them are labelled Augmentation. Examples include accessibility technology such as text-to-speech, virtual and augmented reality headsets, exoskeletons, biotechnology, and genome editing.
In contrast, AI falls into the Automation category, in which the goal of technologies is to actively replace humans. The potential of Automation technologies like AI to replace, not just assist, humans can be viewed as a defining characteristic of the move beyond the third industrial revolution (initiated by the microchip and digital tech) into the fourth. Other examples that could be included are conversation bots, machine learning, the Internet of Things, and virtual assistants. Despite the places where many of these technologies diverge, grouping them by their shared impact assists greatly with forming a big-picture view of what is happening to the world of work and learning, so that educators can prepare students appropriately.

The third category, Amplification, refers to technology from either previous category that, when combined with others, can create an amplified impact that must be anticipated and understood separately. Contemporary examples include the world wide web (which harnessed the combined potential of PCs, phone lines, web browser software, and more) and smartphones (which merge previously separate tech such as basic cell phones, internet access, cameras, GPS, and apps into a pocketable format). General AI has the same potential as these examples to amplify and transform our understanding of meta-concepts such as society, humanity, and intelligence. Because of this, it should inspire educators to start learning about Narrow AI now, before General AI appears.

With this concept in mind, as well as the four- to five-year timeframe for Narrow AI becoming mainstream (predicted by all four recent NMC Horizon Report editions), let’s examine some practical examples of what it looks like right now. Georgia Tech became well known in 2016 for initiating the use of a ‘chatbot’ forum facilitator for a course that was itself about AI.
Dubbed “Jill Watson” because it is powered by the IBM Watson platform (famous for beating Jeopardy! champions), this tool was developed to handle the high volume of forum posts by students, allowing professors to work on other aspects of the course while basic information requests were parsed and responded to by Jill.

Another well-established education-related Narrow AI tool is Turnitin.com. Where basic literacy and numeracy have had online tools such as Mathletics for a number of years, Turnitin is using machine learning and automated software to support two aspects of writing. First, it can assess and supply feedback on student writing samples, and second, it can scan assignments and report instances of plagiarism. Both of these are time-intensive tasks, which can be automated to free up teachers and lecturers to spend more time on face-to-face encouragement and relating, as well as potentially up-skilling themselves. Grammarly is another writing tool that employs machine learning: it scans text for common and complex grammatical mistakes to provide contextual grammar assistance as users type, and gives feedback on those mistakes for the user to learn from.

Photomath is an example of Narrow AI that has become well known for different reasons. It is an app that uses machine vision via a user’s smartphone camera to scan printed or even handwritten math equations. It then calls on a dataset of other known equations to display the correct answer to the problem on screen. Naturally, this app raises many questions about students using it for cheating, but it attempts to compensate with built-in feedback and learning guides about the scanned equation in a way that can support students to develop their deductive reasoning.

In an increasingly globalised world, many learners are encouraged to collaborate across national boundaries; however, language barriers can make this difficult.
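The kind of triage a forum assistant like Jill Watson performs can be illustrated with a heavily simplified sketch: match an incoming question against a bank of known question–answer pairs and only respond when the match is strong enough. The questions, answers, and word-overlap scoring below are invented for illustration; the real system is built on the trained IBM Watson platform, not simple word matching.

```python
# Minimal sketch of FAQ-style question triage, the core idea behind
# forum-assistant chatbots. All questions and answers are invented
# examples; a production system would use trained NLP models rather
# than raw word overlap.

def tokens(text):
    """Lowercase a sentence and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def similarity(a, b):
    """Jaccard similarity between two word sets (0.0 to 1.0)."""
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / len(sa | sb)

FAQ = {
    "When is the assignment due?": "Assignment 1 is due Friday at 5 pm.",
    "Where can I find the lecture slides?": "Slides are posted on the course page.",
}

def answer(question, threshold=0.4):
    """Return the best-matching canned answer, or defer to a human."""
    best_q = max(FAQ, key=lambda q: similarity(q, question))
    if similarity(best_q, question) >= threshold:
        return FAQ[best_q]
    return None  # below threshold: escalate to a human teaching assistant

print(answer("when is assignment 1 due?"))  # → "Assignment 1 is due Friday at 5 pm."
```

The threshold is the important design choice: routine queries are answered instantly, while anything the bot is unsure about is passed to a human, which is exactly the division of labour the Georgia Tech course relied on.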
Enter tools like Google Neural Machine Translation, which allows academic papers to be translated without human intervention or the previous wait times. Skype is another tool that uses cloud software and Narrow AI to provide real-time translation inside video chats, which many educational institutions are already taking advantage of to source different perspectives and interactions for their learners. Even extremely common tools such as Microsoft Word and Google Sheets now have Narrow AI built in, to the extent that the latest versions of Word can offer rewriting suggestions rather than just underlining errors, and Sheets can accept natural-language commands such as “always multiply this figure by 6” rather than the complex formulas required previously.

Many, if not all, of these tools are just as relevant for libraries and museums as for the more traditional education spheres, allowing them to “capitalize on the value of AI to expedite some processes, freeing up finite resources to focus on enriching the …experience for patrons.” Semantic Scholar is using behind-the-scenes analysis of academic journals and papers to optimize searches in a way that is building towards this ultimate goal:

“What if a cure for an intractable cancer is hidden within the tedious reports on thousands of clinical studies? In 20 years’ time, AI will be able to read — and more importantly, understand — scientific text. These AI readers will be able to connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed.” – Oren Etzioni

For museums, this approach has the potential to assist with analysing the huge data sets contained within collections that might otherwise take years of human work. Museums are also at the forefront of employing responsive virtual humans to help guide and interact with patrons, with the Museum of Science in Boston debuting such programmed robots as early as 2009.
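The idea behind natural-language commands like the Sheets example above can be sketched with a toy parser that turns a phrase such as “multiply this figure by 6” into a reusable operation. The tiny grammar and function names here are invented for illustration and bear no relation to Google’s implementation, which uses trained language models rather than a regular expression.

```python
import re

# Toy natural-language command parser, sketching the idea behind
# spreadsheet features that accept phrases like "multiply this figure
# by 6" in place of formulas. The grammar is deliberately tiny.

OPS = {
    "multiply": lambda x, n: x * n,
    "divide":   lambda x, n: x / n,
    "add":      lambda x, n: x + n,
    "subtract": lambda x, n: x - n,
}

def parse_command(command):
    """Turn e.g. 'always multiply this figure by 6' into a function."""
    match = re.match(r"(?:always\s+)?(\w+) this figure by (\d+)", command.lower())
    if not match or match.group(1) not in OPS:
        raise ValueError(f"Unrecognised command: {command!r}")
    op, n = OPS[match.group(1)], int(match.group(2))
    return lambda x: op(x, n)

rule = parse_command("always multiply this figure by 6")
print(rule(7))  # applies the parsed rule: 7 * 6 = 42
```

Even at this scale the pattern is visible: the hard part is mapping free-form language onto a fixed vocabulary of operations, and everything outside that vocabulary is rejected rather than guessed at.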
Amidst all of this potential for even Narrow AI to augment the work of learning institutions, too much focus on the technology itself is dangerous. Research by Dr. Ruben Puentedura, creator of the SAMR model, into the effects of a technology-heavy focus shows that learning tasks themselves should remain the focus, with tools understood and selected according to how they support those tasks. Thus, while revolutionary in their own ways, each of the Narrow AI examples above should be selected not according to how well it processes information, or even how much easier it makes educators’ jobs, but judged by how well it supports and enhances learning itself.

One of the focusing goals that can assist here is the idea of personalised learning, where “each student follows a unique mini-curriculum based on his or her particular interests and abilities. AI, the thinking goes, can not only help children zero in on areas where they’re most likely to succeed, but also will, based on data from thousands of other students, help teachers shape the most effective way for individual students to learn.” AI can be wrong, just as humans can, if it has a limited or incomplete data set. A goal such as focusing on personalised learning can help people avoid rushing in without criteria for evaluating why they are implementing AI tools in the first place.