Artificial Intelligence (AI) involves computers doing things that were once thought to be possible only for humans. Whilst robots, cyborgs and human-level intelligence may be part of the picture, there is much more nuance to the meaning of AI than many realise.
Defining artificial intelligence
Artificial intelligence involves a series of methods that can create intelligence in machines. It is not new: many of the foundational concepts it is based on emerged in the 1940s and 1950s. Since then, AI research has fluctuated, with the quietest times referred to as the ‘AI Winters’.
Over the last 10 years, significant advancements in computational power (in line with Moore’s law), data capture and associated costs have enabled an unprecedented wave of AI activity. AI development requires complex computer processing capabilities to approximate functions such as those of the neural networks in our brains, as well as huge amounts of input data.
It is important to note that when discussing artificial intelligence, there is no universally agreed definition of ‘intelligence’ itself, whether artificially created or human. Different fields and actors apply their own definitions. Intelligence might be described as the capacity for problem solving and reasoning, and some might say creativity. Following an extensive review of definitions of intelligence, researchers at the Swiss AI Lab distilled a concise one (Legg and Hutter, 2007):
“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
Defining artificial intelligence is one of the initial challenges that policymakers need to overcome to devise appropriate regulations for its development and use. For the purpose of a foundational understanding of artificial intelligence, the European Commission’s definition is helpful. It employs a goal-oriented definition of AI (European Commission, 2018):
“Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions.”
Public perception of what counts as artificially intelligent tends to depend on the state of technology at the time. For example, voice assistant technology in smart devices is a form of AI, but because the technology is mainstream, consumers do not consider it as such. In light of this, AI is sometimes used as an umbrella term for those things in computer science that haven’t been done yet (Tesler’s Theorem).
Creating AI
The creation of artificial intelligence to date has been based largely on complex mathematics. It involves the processing of data according to rules or instructions and the identification of patterns to make predictions. Some AI researchers suggest that it would be more accurate to describe the developments of the past few years as advances in computational statistics rather than in artificial intelligence.
To achieve AI, a software agent must perceive, interpret and reason in order to take the best action towards achieving its goal (refer to Figure 1 below). A software agent is a computer program that acts on behalf of a user or another program. Constructing artificial intelligence generally involves the software agent acting rationally, i.e. acting to achieve the best outcome, or the best expected outcome, in light of its goal. Rational agents are able to operate autonomously, perceive the environment, persist over time, adapt to change, and create and pursue goals (Russell & Norvig, 2009). They are also called intelligent agents.
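The perceive-interpret-act cycle described above can be sketched in a few lines of code. The following is a minimal, illustrative example only: a thermostat-style agent with the invented goal of holding a target temperature. All class and method names here are hypothetical, chosen to mirror the terminology in the text, not taken from any real AI library.

```python
class ThermostatAgent:
    """A trivially 'rational' software agent: it perceives its
    environment, reasons about the gap to its goal, and picks the
    action expected to best achieve that goal."""

    def __init__(self, target_temp):
        self.target_temp = target_temp  # the agent's goal

    def perceive(self, environment):
        # Perception: read the relevant state from the environment.
        return environment["temperature"]

    def decide(self, temperature):
        # Reasoning: choose the action expected to move the
        # environment closest to the goal.
        if temperature < self.target_temp:
            return "heat"
        if temperature > self.target_temp:
            return "cool"
        return "idle"

    def act(self, environment):
        # One cycle of the perceive-interpret-act loop.
        return self.decide(self.perceive(environment))


agent = ThermostatAgent(target_temp=21)
print(agent.act({"temperature": 18}))  # -> heat
```

A real AI agent replaces the hand-written `decide` rule with learned behaviour, but the surrounding loop of perceiving, reasoning and acting towards a goal is the same.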
The software agent is made up of numerous scientific and engineering methods and techniques which are applied together to form an AI system. Depending on the goal of the AI system, the AI software agent might have a manifestation in our physical world such as through robotics, it might involve a virtual avatar, or it may only exist intangibly in the form of software.
As a component of AI systems, machine learning is a common method being applied today. It provides the ability for an AI agent to process large data sets and then improve with more data. Types of machine learning include:
Supervised learning – with labelled data the AI agent learns how to predict the output from the data input.
Unsupervised learning – the AI agent identifies commonalities in unlabelled data and thereafter reacts according to these patterns with new data.
Semi-supervised learning – a combination of supervised and unsupervised learning. A small amount of labelled data is given with a large amount of unlabelled data.
Reinforcement learning – the AI agent learns how to react in a given context by performing actions and observing the results (aiming for a goal/reward).
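To make the supervised case above concrete, here is a minimal illustrative instance built with only the Python standard library: a 1-nearest-neighbour classifier. "Training" simply stores labelled examples, and prediction returns the label of the closest stored example. The animal measurements are invented for illustration.

```python
import math

def train(examples):
    """examples: list of (features, label) pairs - the labelled data.
    For 1-nearest-neighbour, 'learning' is memorising the data."""
    return list(examples)

def predict(model, features):
    """Predict the output (label) for a new input by finding the
    nearest labelled training example."""
    def distance(example):
        stored_features, _label = example
        return math.dist(stored_features, features)
    _, label = min(model, key=distance)
    return label

# Labelled training data: (height_cm, weight_kg) -> species label.
labelled = [
    ((30.0, 4.0), "cat"),
    ((35.0, 5.5), "cat"),
    ((60.0, 25.0), "dog"),
    ((70.0, 30.0), "dog"),
]
model = train(labelled)
print(predict(model, (32.0, 4.5)))   # -> cat
print(predict(model, (65.0, 28.0)))  # -> dog
```

An unsupervised method would instead be given the measurements without the "cat"/"dog" labels and would have to discover the two clusters itself.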
For example, Netflix’s personalisation of program recommendations is enabled by an AI system incorporating statistical and machine-learning techniques. The AI system learns from data generated by (individual and grouped) user behaviour and makes recommendations based on this. If a user chooses a recommendation, a goal has been achieved. Of course, there is a multitude of data and algorithms involved in machine learning in this instance, such as which recommended programs were chosen and watched (or left unfinished), series bingeing (i.e. high enjoyment), searches, the image shown when a program was clicked, etc. (Netflix Tech Blog - Data Science).
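The general idea of learning recommendations from grouped user behaviour can be sketched as a toy example. To be clear, this is not Netflix's actual algorithm; the users, titles and scoring rule below are all invented to show the principle of recommending what similar users have watched.

```python
from collections import Counter

# Invented viewing histories (user -> set of titles watched).
watch_history = {
    "alice": {"Dark", "Stranger Things", "Black Mirror"},
    "bob":   {"Dark", "Black Mirror", "The Crown"},
    "carol": {"Stranger Things", "The Crown"},
}

def recommend(user, history):
    """Suggest unseen titles watched by users with overlapping taste,
    ranked by the total similarity of the users who watched them."""
    seen = history[user]
    scores = Counter()
    for other, titles in history.items():
        if other == user:
            continue
        overlap = len(seen & titles)  # shared titles = crude similarity
        for title in titles - seen:   # only recommend unseen titles
            scores[title] += overlap
    return [title for title, _ in scores.most_common()]

print(recommend("alice", watch_history))  # -> ['The Crown']
```

Production systems replace the crude overlap count with learned statistical models over vastly more signals, but the feedback loop is the same: behaviour generates data, data drives recommendations, and a chosen recommendation signals a goal achieved.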
There are many techniques and subsets of machine learning that have been developed in recent years. In current use cases reinforcement learning, deep learning and transfer learning are prominent examples (McKinsey Global Institute Analysis, 2018). AI systems involving deep learning have particularly underpinned the advancement of AI capabilities such as speech recognition, machine translation and computer vision. It has also been integral to AI applications for a multitude of industries such as transport and logistics, retail, healthcare systems and pharmaceuticals.
Deep learning coupled with artificial neural networks involves data computation and processing so complex that it is not yet fully understood how a given output was formulated. To address this, scientists are attempting to find ways to understand such ‘black box’ outcomes. This may become a necessity if people affected by AI-based decisions have the legal right to know how an AI system reached its decision. Whether a “right to explanation” should be legislated by governments is the subject of current debate, with some fearing it may stifle innovation, or arguing that it is an unreasonable goal since human decision making often cannot be fully explained either. Others advocate the use of explainable AI over black box systems so that the outcome can be trusted.
Potentials of AI
Several distinct levels of potential can be discerned in the development of artificial intelligence.
Artificial narrow intelligence
All AI achieved thus far has been in the realm of narrow AI. Narrow AI systems display a degree of intelligence by performing one specific task or operating within a particular environment. Applications of narrow AI include internet search engines, social media feeds, recommendations of content or products on websites, email spam filtering and device voice assistants. Even the headline-making computers Watson and AlphaGo, which have beaten humans at strategy and knowledge games, are examples of weak or narrow AI.
Although able to process extraordinary amounts of data, narrow AI functions only in a single domain and is not able to successfully achieve goals outside the specific tasks or contexts it has been programmed for.
Artificial general intelligence
Artificial General Intelligence (AGI) refers to a system with a full range of human cognitive abilities making it able to cope with any generalised intellectual task that a human is capable of. There are many open scientific and technological challenges to build the capabilities needed to achieve general AI, such as common sense reasoning, self-awareness, and the ability of the software agent to define its own purpose.
AGI requires significant advancements in the perception and logic capabilities of AI. Whilst narrow AI has achieved high-level reasoning, this requires very little computation in comparison to low-level perception and mobility skills, which require enormous computational resources (Moravec’s paradox). A human child can be shown a single car and thereafter identify any car they see, whereas a software agent needs thousands (if not millions) of images to be able to perceive a car. Artificial intelligence so far has been based on reasoning by association, which some argue is not actually representative of intelligence; rather, intelligence requires the capacity to reason about cause and effect.
In light of the complexities involved, one can only speculate about the timeframe in which general intelligence will be achieved. The world’s AI experts are divided on the issue, with estimates ranging from decades to centuries and some even saying never. One survey of over 300 AI researchers in 2017 predicted a 50% chance of artificial general intelligence being achieved by 2060. The numbers vary according to the source, with Silicon Valley figureheads tending to make the soonest predictions. It is extremely difficult to predict when scientific breakthroughs will unfold, and regardless of credentials no one can argue their position with certainty.
Super intelligence
Some researchers believe that once artificial general intelligence is achieved, a much higher order of intelligence will come to fruition shortly afterwards: a superintelligence. In theory, an agent with artificial general intelligence would be capable of rapidly creating more advanced versions of itself in a way humans could not. The superintelligence could then develop capabilities that we are unable to match or even comprehend, and could pose an existential risk to humanity.
This thinking has fuelled a wave of interest from prominent thinkers including Stephen Hawking and Elon Musk and has been the subject of numerous recent popular books and narratives. Philosopher Nick Bostrom, who has been central to the conceptualisation of superintelligence, argues that we should be making efforts to reduce its risks, such as by instilling human values into AI. Others argue against paying much attention to the threat of superintelligence, given the uncertainty involved and the multitude of existing societal issues that deserve our focus.
Looking forward
As previously mentioned, narrow artificial intelligence is already being implemented across industries. Problems (or goals) to which AI methods are now being applied include: optical character recognition (the conversion of typed, handwritten or printed text into machine-encoded text), speech recognition and natural language processing (e.g. translation, chatbots), facial recognition, computer vision, photo and video manipulation, robotics, medical diagnosis, computer gaming and judicial decision making.
AI development is primarily being driven by private for-profit enterprises; however, the technology is also being applied to address broader societal challenges. AI use cases can be found, to varying degrees, for all of the United Nations Sustainable Development Goals. The most prominent are AI applications for health and wellbeing; peace, justice and strong institutions; and quality education.
The current hype concerning AI often sees it conflated with process automation in digital transformation or future-of-work discussions and product marketing. Whilst process automation involves applying computer technology and software engineering to increase the efficiency of industrial and manufacturing processes, it does not involve an adaptable goal-oriented software agent as is the case with AI.