Cultural and ethical concerns are arising out of the development and growth of artificial intelligence (AI). But before we embark on the debate about these concerns, and about cultural and ethical bias in particular, we need to answer the question of what AI is. The simplest definition of AI, as set out by the AI Transparency Institute, is that it is “intelligence demonstrated by machines as opposed to natural intelligence displayed by humans and animals”. It is also the definition used in the book “SeniorITy: how AI and tech can enhance senior living”, by Lucia Dore and Carole Railton. Other insights from this book are also used in this article.

According to a variety of sources, the term AI was coined in 1956. The Oxford Dictionary of English describes it as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

AI is often conflated with robotics, but the two are different disciplines. Some robots are programmed to perform human tasks without AI; others incorporate it. When AI is used, a robot can emulate the human mind as well as human actions. An example is Amazon’s Astro ‘Alexa on Wheels’ home robot.

AI is expected to become the most transformative technology humanity has ever seen. But there are different types of AI. The first is reactive AI, the oldest and most limited form. The next type is limited memory AI. As well as having the capabilities of purely reactive machines, this type can learn from historical data to make decisions. Nearly all existing applications that we know of, such as chatbots, personal assistants and digital fingerprinting, come under this category of AI. “Theory of mind” and “self-aware” are two types of AI that are still at the conceptual or hypothetical stage. The aim is that such a machine will possess enough self-awareness to emulate the human brain. It is possible for an entity like this to become more powerful than any human being.
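To make the “limited memory” idea concrete, here is a minimal sketch in Python of a toy chatbot intent classifier that learns from historical, labelled messages. All of the data, names and replies here are invented for illustration; real systems use far richer models, but the principle of deciding based on learned patterns is the same.

```python
# A minimal, hypothetical sketch of "limited memory" AI: a toy chatbot
# intent classifier that learns word frequencies from historical,
# labelled messages and uses them to decide how to respond.
from collections import Counter, defaultdict

# Historical data: (message, intent) pairs the system has already seen.
history = [
    ("what is the weather today", "weather"),
    ("will it rain tomorrow", "weather"),
    ("play some jazz music", "music"),
    ("put on my favourite song", "music"),
]

# "Learn" from history: count how often each word appears per intent.
word_counts = defaultdict(Counter)
for message, intent in history:
    word_counts[intent].update(message.split())

def classify(message: str) -> str:
    """Pick the intent whose historical vocabulary best matches the message."""
    words = message.split()
    scores = {
        intent: sum(counts[w] for w in words)
        for intent, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("is it going to rain"))   # -> weather
print(classify("play a song for me"))    # -> music
```

Note that the classifier can only ever reflect the history it was shown: messages unlike anything in its past data are assigned to whichever intent happens to score highest, which is exactly the limitation the name “limited memory” implies.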

Artificial narrow intelligence (ANI) refers to AI systems that can perform only a specific task autonomously using human-like capabilities, but you would still know you were dealing with a machine. It draws on data and learning experiences from previous incidents. Most AI currently in use fits into this category. An AI agent with artificial general intelligence (AGI) is more developed than ANI. This type of agent can learn, perceive, understand and function completely like a human being, in a way that goes beyond ‘human-like’. It is, in effect, human. AGI is still evolving.

The next step is artificial superintelligence (ASI), at which point AI-powered agents are likely to be more intelligent than the brightest and most gifted human minds. When AI reaches this stage, IT developers, and the human species as a whole, will certainly need to take cultural and ethical concerns into account. This article delves into these concerns below.

AI is important in the business world and in everyday living. In business, AI is predicted to be the biggest commercial growth area of the next few years. A study by the World Economic Forum estimates that global Gross Domestic Product (GDP) would advance by 14 per cent by 2030 (about USD 15 trillion) if all businesses were to adopt AI. Amazon is a good example of a company that uses AI: it generates about 35 per cent of its revenue from its recommendation engine, according to an article by Forbes.

AI powers a number of gadgets we use each day, from smartphones to robots in factories. AI is also used in web search, machine translation, cybersecurity, the fight against disinformation, smart air conditioning and smart homes, transportation such as smart cars, online shopping and advertising, the internet of things (IoT), including smart vacuum cleaners, refrigerators, ovens, televisions and watches, and retail and fashion.

Cultural and Ethical Concerns

Is the use of this technology desirable? With more governments, companies and organisations using AI technologies than ever before, ethical issues are becoming a greater concern. These concerns have to be considered closely and carefully, and not only by scholars and academics but by advisers and politicians too. The most important question to ask is whether AI is being deployed in the best interests of society and the individual. What, then, are the cultural concerns that must be taken into account?

When it comes to culture, for example, there can be cultural bias. Some 54 per cent of respondents to a survey of 6,000 consumers from North America, the UK, Australia, Japan, Germany and France, conducted by Pegasystems, a US-based company founded in 1983 that develops software for customer relationship management, robotic process automation and business process management, said they would expect some sort of bias. This could be discrimination based on race, gender or socio-economic status. AI can perpetuate these biases, according to researchers. These biases, especially gender-related ones, can reflect the views of the developer or programmer, who is usually male.

Bias also comes into play because AI gathers data and picks up patterns from everyday activities, then correlates cause and effect based on existing knowledge. As one management consultant says: “Only humans can think logically and distinguish between useful and worthless AI advice.”
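How a pattern learned from skewed data turns into a biased decision can be shown with a short, hypothetical Python sketch. The “hiring” data below is entirely invented for illustration: the point is only that a model which faithfully learns historical patterns will faithfully reproduce the historical skew.

```python
# A hypothetical sketch of bias propagation: a toy "hiring" model that
# learns hire rates from skewed historical decisions, then applies those
# rates to new candidates. All data here is invented for illustration.
from collections import Counter

# Historical decisions made by (biased) human reviewers.
past_decisions = [
    ("male", "hired"), ("male", "hired"), ("male", "hired"),
    ("male", "rejected"),
    ("female", "hired"), ("female", "rejected"),
    ("female", "rejected"), ("female", "rejected"),
]

# "Training": compute the historical hire rate per group.
totals, hires = Counter(), Counter()
for group, outcome in past_decisions:
    totals[group] += 1
    if outcome == "hired":
        hires[group] += 1

hire_rate = {g: hires[g] / totals[g] for g in totals}
print(hire_rate)  # {'male': 0.75, 'female': 0.25} -- the bias is now a "pattern"

def predict(group: str) -> str:
    # The model reproduces the historical skew: it recommends hiring
    # whenever the group's past hire rate exceeds 50 per cent.
    return "hire" if hire_rate[group] > 0.5 else "reject"

print(predict("male"))    # -> hire
print(predict("female"))  # -> reject
```

Nothing in the code is malicious; the discrimination enters entirely through the training data, which is why researchers stress auditing the data as much as the algorithm.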

Cultural bias, in other words, usually comes back to what we regard as “ethical”. So what are the ethical issues associated with AI?

According to the Alan Turing Institute, based in the UK, AI ethics is a “set of values, principles and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.” Many of these ethical concerns are culturally bound. For example, Emmanuel Goffi, philosopher of AI and co-director and co-founder of the Global AI Ethics Institute in Paris, says in the book ‘SeniorITy’ that “Ethics tends to be a Western construct, but different cultures consider ethics, whether around AI or other issues, differently. This is known as ‘contextual ethics’”. He continues: “Between 60 per cent and 70 per cent of the codes of ethics around AI comes from the West. Is this right? This affects programming. In the future, China will be a leader in AI. There is also a big dynamic of AI in India. Consequently, the Institute wants to ensure a more philosophical debate about the introduction of AI technologies and products, whether within the EU or other cultures”.

What we must ensure is that governments and organisations are transparent and vigilant when it comes to their use of AI and that cultural biases are addressed. Entities such as the Global AI Ethics Institute are addressing some of these concerns.