Tina Gallico

Key AI challenges

Updated: Mar 4, 2022

Algorithmic amplification, the black box problem, legal and moral responsibility, and the concentration of the most advanced AI capabilities are key challenges that need to be addressed for AI to realise its potential benefits.

 


Applications of narrow artificial intelligence (AI) are already prevalent in our everyday lives, although we may not always be aware of them. Global financial markets, militaries, police forces, medical providers, energy and industrial operations, as well as service industry businesses, utilise AI systems in different capacities. Through machine learning, artificial agents have been trained to detect cancer, drive vehicles, check crops for disease, provide customer service, filter email spam and identify credit card fraud.


In my last post I outlined the problem of AI bias. This post discusses a number of additional challenges in AI research and implementation.


Algorithmic amplification


The problem of algorithmic amplification is closely linked to bias. Algorithmic amplification occurs when platforms provide content suggestions that are consistent with, and reinforce, what one has previously viewed, often using emotionally charged material to keep us engaged and viewing more. It reinforces views or prejudices that people already hold and can lead to ideological radicalisation.


Techno-sociologist Zeynep Tufekci argues that the content shown on platforms such as YouTube gets more and more extreme, until the viewer is shown not only content they agree with but content that amplifies their views and beliefs about race, gender and social class, often laced with misinformation. Algorithmic amplification enables platforms that are ‘free’ for users to hold user attention, which converts into advertising views and thus revenue. Tufekci warns:


We may be creating a dystopian world in order to get people to click on ads.

Content discoverability goes hand in hand with algorithmic amplification. Algorithms designed to recommend content similar to what has previously been viewed are unlikely to surface content offering alternative perspectives, sources of information or formats, as the toy sketch below illustrates.
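To make the narrowing effect concrete, here is a minimal sketch of a similarity-driven recommender. The catalogue, topic tags and scoring rule are entirely illustrative, not any platform's actual system; the point is simply that optimising for similarity to past viewing never surfaces unrelated content.

```python
# Toy similarity-based recommender: illustrative only, not any platform's real system.
from collections import Counter

catalogue = {
    "clip_a": {"politics", "outrage"},
    "clip_b": {"politics", "analysis"},
    "clip_c": {"cooking", "travel"},
    "clip_d": {"politics", "outrage", "conspiracy"},
}

def recommend(watch_history, n=2):
    """Rank unseen items by how many topics they share with the watch history."""
    profile = Counter(topic for item in watch_history for topic in catalogue[item])
    unseen = [item for item in catalogue if item not in watch_history]
    return sorted(unseen,
                  key=lambda item: sum(profile[t] for t in catalogue[item]),
                  reverse=True)[:n]

# A viewer who starts with outrage-flavoured politics is steered towards more of the
# same ('clip_d'), while unrelated content ('clip_c') is never recommended.
print(recommend(["clip_a"]))  # ['clip_d', 'clip_b']
```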


With an ever increasing abundance of content on the world wide web, content creators are finding it ever more challenging to reach and engage audiences. The curation of content by platforms, through prioritisation in newsfeeds or recommended viewing, determines what content users can access or are passively exposed to, and consequently what content is ‘hidden’. McKelvey and Hunt describe discoverability in this sense as a new kind of media power cultivated by content platforms, which coordinate users, content creators and software to achieve engagement targets rather than to objectively inform or update users.



Black box problem

Even where we identify a bias or questionable output from an AI agent, interpretability is not a simple matter with complex AI systems. At this point in time we do not fully understand the intricacies of how deep learning in neural networks takes place. To make AI transparent and understandable to humans, we need to dissect how these systems work - how neural networks process inputs and learn through backpropagation.


The better performing and more accurate an AI system, such as a deep neural network for image or speech recognition, the more complex its computations and learning. These black box models are the ones that can exceed human capabilities in narrow domains, providing the most useful AI applications. Human comprehension of every aspect of these systems currently comes at a cost to system complexity: at the moment, applying the most advanced AI techniques means we cannot completely understand their ‘thinking’.


Understanding AI decision-making processes is central to trust in AI systems by their users and those impacted by their outputs. Can we trust an AI agent if it is not possible to map the steps it took to reach its decision? There are many aspects of the human brain's functions that scientists still do not understand, yet that does not preclude our trust in human decision-making or reasoning capabilities, even as research increasingly demonstrates inherent weaknesses in the psychology of our own decision making. Is the need for explainability in complex AI systems, then, an unnecessary double standard?


Considering that we will be utilising AI in fields including medical diagnosis, judicial process, government service provision and military combat, finding solutions to the black box problem to ensure transparency, or a middle ground for AI explainability, is critical to trust and fairness in the use of this technology.


Understanding how AI works is central to its uptake by businesses, along with public acceptance and trust of AI solutions. This importance is driving an increasing amount of research in the field of explainable AI. Explainable AI initiatives develop mechanisms to identify and clearly communicate how AI makes its calculations and decisions. They involve purpose-built algorithms and statistics that analyse the ‘black box’ aspects of AI systems to determine the basis for their outputs, e.g. Koh and Liang, Strobelt and Gehrmann.
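For instance, one simple, model-agnostic way to probe a black box is permutation importance: shuffle each input feature in turn and see how much the model's accuracy degrades. The sketch below, using scikit-learn and a standard demonstration dataset purely as placeholders, is an illustration of this general idea rather than the specific methods cited above.

```python
# Illustrative sketch: probing a 'black box' classifier with permutation importance.
# The dataset and model are placeholders, chosen only for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque, high-accuracy model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data and measure the drop in accuracy:
# the features whose shuffling hurts most are the ones driving the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```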

One example of this kind of tooling is the open source IBM AI Fairness 360 toolkit. It provides ways to detect AI bias as well as to mitigate bias in future system outputs. It includes a collection of libraries, algorithms and tutorials that enable AI developers to integrate bias detection into their machine learning models.
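As a minimal sketch of how such a toolkit might be applied (the toy DataFrame, the 'income' label, the 'sex' protected attribute and the group definitions below are illustrative assumptions, not a real dataset), one can measure disparity in favourable outcomes and then reweigh the training data to reduce it:

```python
# Minimal sketch of bias detection and mitigation with IBM's AI Fairness 360 (aif360).
# The toy data and column names are illustrative only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: label 'income' (1 = favourable outcome), protected attribute 'sex' (1 = privileged).
df = pd.DataFrame({
    "income": [1, 0, 1, 0, 1, 0, 0, 0],
    "sex":    [1, 1, 1, 0, 0, 0, 0, 0],
    "hours":  [40, 35, 45, 40, 38, 30, 25, 20],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["income"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias: how much more often the privileged group receives the favourable label.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# Mitigate bias before training by reweighing examples.
transformed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
```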



Responsibility


The use of AI systems so far has not been free of accidents and mistakes. One example is autonomous vehicle accidents, which have resulted in human fatalities, although in all four cases the vehicle was not fully autonomous (level 5), i.e. the human driver should have been paying attention and ready to take control of the vehicle if needed.

Use of AI in government has resulted in some instances of flawed algorithms leading to incorrect decision making. In the UK a flawed algorithm resulted in the deportation of thousands of students who were incorrectly tagged as having invalid student visas. In medicine, IBM’s Watson was found to have recommended incorrect cancer treatments in its AI-based diagnoses. In many use cases, decisions that affect us are made by algorithms with no human involved unless the outcome is questioned or shown (by humans) to be a mistake.


As humans cannot be expected to always be correct, we should expect some degree of inaccuracy from AI, albeit to a lesser extent than the human error that occurs in the same context. Those working with AI-based decision making may neglect, or not know, to keep this in mind and may place too much trust in the AI system’s decisions, to the point of complacency even when the machine is radically incorrect. Bond, Koene and others propose criteria for data science literacy to ensure users of AI understand its limitations and to better ensure its responsible development and deployment.


The opposite problem of under-trust in AI decision making can also arise, when people do not trust a decision because it was based on an algorithm rather than human judgement. This was the case at a university in the US: in response to complaints from students that two teaching assistants had marked an exam according to different standards, a professor adjusted the under-scored papers with an algorithm they developed to reweight the results fairly. The students then complained even more about the use of an algorithm under the circumstances.
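The professor's actual method is not described here, but as a purely hypothetical sketch of the kind of adjustment involved, each assistant's marks could be rescaled to a common mean and spread so that no student is disadvantaged by who happened to mark their paper:

```python
# Hypothetical sketch of reconciling two graders' marks by rescaling each set of
# scores to a shared mean and standard deviation; not the professor's actual method.
import statistics

def rescale(scores, target_mean, target_sd):
    """Map one grader's scores onto the target mean and spread (z-score rescaling)."""
    mean, sd = statistics.mean(scores), statistics.stdev(scores)
    return [target_mean + (s - mean) / sd * target_sd for s in scores]

grader_a = [62, 70, 75, 81, 88]   # marks more harshly
grader_b = [74, 80, 85, 90, 96]   # marks more generously

combined = grader_a + grader_b
target_mean, target_sd = statistics.mean(combined), statistics.stdev(combined)

print([round(s, 1) for s in rescale(grader_a, target_mean, target_sd)])
print([round(s, 1) for s in rescale(grader_b, target_mean, target_sd)])
```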


Whoever makes decisions has control. Those programming the algorithms that underpin AI are the ones that define the parameters for how decisions will be made. This provides great power to those with the capabilities to build AI systems. This power to influence outcomes will be prone to manipulation by commercial interests if left unchecked.


Although those who design the algorithms define the framework within which an AI agent’s decisions will be made in the future, are those who later use the technology more at fault if something goes wrong, or is the AI system itself a responsible entity, similar to a person or company? AI systems pose challenging questions for negligence law and liability. So far, technologies and platforms that have entered markets and amassed huge user bases have experimented, and continue to experiment, with society to build their businesses. When they have enabled real-world social misconduct, personal tragedies or crimes, their makers have promised better self-governance but have not been held legally responsible for any wrongdoing.


Ideally, those deploying and using AI systems would at least be aware of, and even contribute to, the array of design decisions made in the development of their algorithms. For example, lawyers or judges using AI systems for predictive outputs such as sentencing should interact with developers in the initial design of the systems and in routinely updating the algorithms based on analysis of real-world outcomes. They should scrutinise the hypotheses that underpinned the system design and help ensure appropriate data is used for training and for performance metrics.



Monopolisation


The most advanced AI capabilities are limited to a handful of global companies. Whilst ‘the big 9’ are investing huge amounts of resources to develop AI systems, they do so as businesses seeking maximum economic return on investment. Whilst research institutes such as universities are also contributing to the advancement of AI, there is a relatively small pool of people qualified to develop AI, and they are commonly scouted by the global corporations with the biggest budgets and the greatest potential to be at the forefront of AI development.


This impending monopolisation of the most powerful AI systems would provide a small handful of corporations or, depending on the political context, governments with unprecedented potential for economic and societal influence. Already holding monopolistic power in some domains, companies such as Apple, Amazon and Google are able to preclude competition and dictate the terms of engagement in their own favour, to the detriment of small businesses and independent workers.


In addition to fuelling unfair business practices, advanced AI capability is geographically concentrated: the companies developing it tend to be located in the United States, China and, to a lesser extent, Europe. If the gap between those with and without AI capabilities persists, artificial intelligence could exacerbate global inequalities.


To counter possible ill uses of AI technologies by a select few, and to encourage more sustainable ecosystems of innovation, data access and AI development must be distributed around the world, with diverse stakeholders driving AI progress towards a range of outcomes, including those without profit maximisation aims.



Benefits outweigh risks


Artificial intelligence systems and related technologies have the potential to provide unprecedented gains in efficiency, safety and cost effectiveness compared to humans undertaking the same tasks. They will transform the nature of work, enable improvements to quality of life and find solutions to previously unresolved challenges for humanity.

Automation of mundane aspects of work and life may enable unprecedented time for cultural, intellectual and social pursuits, and for people to pursue more interesting and rewarding work that wouldn’t otherwise be possible.


At least at first, humans and AI will fundamentally work collaboratively. AI should free us from repetitive or routine tasks and collaborate with us on tasks involving complexity. It might even be a tool to help us better collaborate with and relate to other people. In a number of experiments, Yale sociologist Nicholas A. Christakis has found that in what he calls “hybrid systems”, where people and robots interact socially, the right kind of AI can improve the way humans relate to one another.


Considering the potential gains AI could have for human progress, the question is not whether opportunities to develop the technology should be taken. Rather we must decipher how to ensure it lives up to moral imperatives to benefit humanity in a sustainable manner rather than primarily serving corporate profit maximisation or political regimes. Part of this is ensuring that a multiplicity of stakeholders globally take part in the development and deployment of artificial intelligence.


...


The next topics in this introductory AI series concern artificial intelligence governance. I’ll give an overview of international government policy and strategy concerning AI development and deployment. I will then move on to AI ethics in terms of non-government activities, reviewing recently developed ethical frameworks and initiatives that aim to ensure the positive use of AI and related technologies.


