Ethics in a Digital World – Framework Towards Implementing Ethics in Organizations


What is Digital AI?

According to AI & Business Basics (2019), AI is the ability of machines to perform tasks that humans can do, as well as some that humans cannot, such as multiplying very large numbers quickly, learning how to perform specific tasks, exercising creative reasoning in new and unexpected ways, perceiving and reacting to situations and creating new systems using the abilities they have been programmed with. In the business world and in many organizations, AI is becoming ever more integrated as companies develop more sophisticated ways of marketing, organizing data, hiring, analyzing risk, maintaining internal processes, increasing sales and many other applications.

There are different kinds of AI: narrow, general and broad. These terms describe how wide a range of tasks a given system can handle. Narrow AI is designed to deal with only one task; general and broad AI are progressively less specialized. Many companies use narrow AI to deal with specific kinds of tasks on a daily basis.

Digital transformation is the process of adapting a company so that it can take advantage of digital AI. It involves implementing policies, methods, structures and tools that enhance various aspects of the way the company functions. The purpose of digital AI in businesses and other organizations is to improve their efficiency: it is used to strengthen core operations such as marketing and customer service.

With the advent of AI, many new possibilities are available to organizations and companies that never seemed possible in the past. In the marketing sector, for example, potential target audiences can be reached on a scale unheard of 30 or 40 years ago due to the influx of digital AI technologies to spread information to these client bases. Customer service can be enhanced through the medium of digital AI technology because advice and other services can be offered in new and innovative ways. By looking at examples of some of the technology that already exists, it is possible to begin to understand what digital AI entails in the context of business.

Smart assistants are programs designed to help users perform basic tasks, or tasks that they either would rather not do or do not have time to do, leaving the user more time for higher-priority work. Smart assistants are installed on devices as software and can be asked questions by voice. Examples include Google Assistant, Siri and Alexa.

Chatbots are AI programs that are installed on websites and other user-end interfaces as a way of simulating real human conversation through the medium of a messaging application. The purpose of these bots is to assist users with accessing information or with performing specific tasks on the site or application.

Facial recognition is used mainly by social media sites such as Facebook. It is a form of machine learning technology that stores user data based on their facial biometrics. The AI recognizes the specific features of a face because it has been programmed to analyze specific features and measures these features against a set of parameters stored in its database or the database of the website or application being used.

Recommendation technology is a form of digital AI which tracks user behaviour while they are online and calculates that specific users might like to see specific advertisements over certain others. These bots are known as recommendation engines. The data gathered by these bots allows companies to offer users personalized and relevant ads.

Further examples of helpful digital AI-controlled products include Roomba robot vacuums, the social media bots that monitor Twitter, Instagram moderator bots, robo-advisors on fintech sites such as Betterment, and AI installed in buildings to manage climate control and energy usage. The most significant kind of AI to be aware of, however, is the type used within business sales, marketing and finance departments, because it can encroach on private data. Data is the most important asset an organization can have, and the data of employees is sensitive and must therefore be treated with respect.

As with any helpful tool, the rise of digital AI has also led to questions about its use. Because AI does what humans program it to do, digital AI has some gray areas arising from the agenda behind that programming. Who determines what the intention of the programming is? Ethical studies examine the areas within society and organizations relating to the standard of ethical practice and what drives it. In any business or organization, there has to be a standard of morality that defines what the business stands for. It is important to know what approach works best in your particular business or organization when dealing with the murky and risky areas that present themselves. The methodological approach you employ to address these gray areas depends on the nature of the challenge; a balanced, measured and strategic approach is desirable.

The word “ethics” refers to the moral code that companies, organizations and individuals employ in their daily activities. According to Velasquez, Andre, Shanks and Meyer (2010), ethics refers to the standards of right and wrong that humans hold themselves to. It is not always a matter of feeling, but doing what one knows is right based on any given situation. Right and wrong are not subjective terms. They refer to the moral obligations that we as humans have in society. In business, we need to have a moral standard that guides our behaviour for the good of the company and also the customers we work for and serve. How does this play into the idea of digital AI?

AI is a system created by humans. It is not infallible, because it has been imbued with a quality of its creator; in other words, intelligence. Any study of digital ethics would have to consider how the notion of ethics or morality could be measured. What would a standard for digital ethics look like? The answer lies in analytics. Analytics is the key to breaking digital AI down into a quantifiable state and seeing what makes it tick. Which parts of digital AI programming can be modified in ways that make them ethical or unethical in various situations?

The challenge with digital AI arises when the values of the company are in opposition to the most effective way to use it. What is in the company's best interest ethically speaking does not always equate to getting the most value out of digital AI. Companies and organizations sometimes make decisions that are not in line with their values because those decisions will help them in the long run. For example, using digital AI to infringe on the privacy of users might be considered a necessary step by some companies in order to expand their marketing reach, but it is certainly not a moral or an ethical one.

Another example is the way companies use digital AI to manage user data. How this data is used is paramount when considering the ethical decisions companies have to make. If a company wants to use data in a way that infringes the rights of its users, it faces a dilemma. If a company will not be transparent about its use of user data, that alone raises questions about the motives of the people programming the digital AI that manages the data and of the people in positions of governance. It is the purpose of this ebook to facilitate discussion of the ethical concerns and challenges surrounding the nature of digital AI and its usage in many organizations around the world today.

What is Analytics?

Analytics refers to knowledge gained through new tools, such as digital AI, that allow for more efficient analysis, prediction, decision-making and trend examination. It can refer to the way statistics are read and computed within businesses and organizations. At its core, it is the analysis of patterns within data. So now that we know what analytics is, why is it important for ethical practice in organizations around the world today?

Analytics has undergone many changes over the years. When it started, we were only asking simple questions; now we are asking machines to think for us and even tell us what to ask. In many respects, we have given machines the power to make decisions on our behalf. Organizations use analytics to strategize and to make key components of their businesses work through digital AI. How do programmers tell computers what to do? They use programming languages that computers can interpret in order to instruct them to respond in particular ways to particular kinds of input. One such application is machine learning: the application of digital AI to machines in order to teach them to perform specific tasks. The machine gradually learns to perform these tasks without needing to be explicitly programmed for each one. In essence, it is teaching the computer to think for itself.
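As a rough illustration of that idea (a toy sketch, not a production system), the program below is never given an explicit rule for labelling points; it infers a label from labelled examples it has already seen, using a one-nearest-neighbour rule. The data and labels are invented for the example.

```python
# Minimal illustration of "learning from data": a 1-nearest-neighbour
# classifier. No explicit labelling rule is programmed; the label is
# inferred from the closest previously seen example.

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`.

    `train` is a list of ((x, y), label) pairs; squared Euclidean
    distance preserves the ordering of true distances.
    """
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    _, label = min(train, key=lambda ex: sq_dist(ex[0], point))
    return label

# Hypothetical training data: small purchases labelled "retail",
# large ones labelled "wholesale".
examples = [((1, 2), "retail"), ((2, 1), "retail"),
            ((8, 9), "wholesale"), ((9, 8), "wholesale")]

print(nearest_neighbour(examples, (1.5, 1.5)))  # retail
print(nearest_neighbour(examples, (8.5, 8.5)))  # wholesale
```

Real machine-learning systems are far more elaborate, but the principle is the same: behaviour comes from the examples the system was trained on, which is exactly why the quality of that data matters ethically.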

One field of machine learning (hereafter referred to as ML) is natural language processing. This form of ML is used by chatbots to help process customer requests: it can recognize what customers type in the chatbox and provide them with ready answers. Analytics clearly plays an increasingly important role in our lives, from self-driving cars to mobile devices that can track our every move and predict with accuracy what we want and when we want it.
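As a very rough sketch of the input-to-answer flow (not of real natural language processing, which relies on statistical models), a chatbot's matching step might look like the following. The keywords and canned responses are invented for illustration.

```python
# Toy sketch of a chatbot's answer lookup: match keywords in the
# customer's message to canned responses. Production chatbots use
# trained NLP models; this only illustrates the overall flow.

RESPONSES = {
    "refund": "You can request a refund from the Orders page.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "password": "Use the 'Forgot password' link on the sign-in page.",
}

def reply(message: str) -> str:
    msg = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in msg:          # crude substring match
            return answer
    return "Let me connect you with a human agent."

print(reply("How do I reset my password?"))
```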

Historical Challenges of Big Data, Digital AI and Analytics

Brief History of Digital AI and Data

For most people, AI sounds like science fiction. If you had mentioned the idea some 40 years ago, people would have laughed and probably associated the term with fantasy films. Yet the idea of computing has been around for several hundred years, as humankind has toyed with creating machines to do work on its behalf; more efficient ways of computing and calculating have always been at the forefront of technological invention. Digital AI, by contrast, is a historically recent invention and a product of the digital age. With the development of computers, programming became a fundamental tool of computer functionality. Programming is the term used for instructing a computer to follow the user's commands, but what do we mean when we talk about analytics? The purpose of analytics is to gather, store and analyze customer data in order to improve systems within organizations. This not only helps the organization market more effectively; it can also lead to better service delivery and improved overall customer service. So it serves a dual purpose. But where did the idea of using customer data originate?

“Big Data” refers to the massive amount of data that is in the world today. The digital revolution began with what is commonly known as the “Big Data revolution.” The idea of Big Data emerged in the early 1990s, although it is not known who coined the term; John R. Mashey, a computer scientist who worked at Bell Labs and later at Silicon Valley firms, is widely credited with popularizing it. Mashey was one of the founders of the Standard Performance Evaluation Corporation (SPEC) and a guest editor at IEEE Micro, a journal dealing with small systems and semiconductor chips, so he understood how challenging digital systems can be.

The idea of using data to make decisions is not new. It has been around for thousands of years, as civilizations have worked with variables to try to produce the best outcomes for themselves in various sectors of society, and particularly in business. As technology has advanced, the amount of data in the world has proliferated at a staggering rate, and the need to analyze, store and use all this data has increased with it. Different technologies are therefore required to quantify and work with the volume of new data produced each day: by some estimates, several quintillion bytes of it, far too much to handle manually. The “Big Data” revolution is linked to digital AI because, as humans found themselves with more data to process, they needed bigger and better systems capable of making independent decisions about how that data is handled. In this way, the data boom created the need for these improved systems.

Since 2010, technology has made the process of accessing and using data within businesses, organizations and society much simpler. More efficient machines were developed to be able to deal with large volumes of data in a shorter amount of time. According to Ribeiro (2020), data has been described as the “new oil” because of its value and its ubiquitous nature. With anything of value, challenges arise as to how it can be handled, and the same is true of data.

Transparency Challenges

The first two phases of the data revolution dealt with the development of sophisticated systems for managing data. With the advent of a third phase, called “Big Data phase 3.0,” mobile devices emerged as a way to monitor user data. This data could be used to track health-related behaviour, physical behaviour and other important information, but it also left the data open to being analyzed and used by big corporations in unethical ways. One of the key ethical challenges arising from the rise of Big Data is the issue of transparency between organizations and their customer bases. Customers are not always aware of how their data is being analyzed and handled, and this can lead to tension and conflict between customers and the organizations that serve them. On the company side, organizations have to decide how to use this data responsibly. People want to know why their data is required, and where and in what ways it is being used.

The lack of supervision mechanisms, low transparency and limited public knowledge of how data is used have led to what Pomares and Abdala (n.d.) call severe issues of governance. This led some organizations to realize that measures needed to be taken to address the lack of transparency, which was becoming both a practical challenge and an ethical issue. The Organisation for Economic Co-operation and Development (OECD) is an intergovernmental body of 37 member countries that grew out of the organization created to coordinate European economic recovery after the Second World War. In 2019, the OECD adopted its Artificial Intelligence Principles, which express a desire for an ethical, human-centred system. By laying out such a set of principles, it was hoped that a standard could begin to be constructed that companies and organizations could reference when faced with these ethical challenges. Ethical frameworks for dealing with the challenges digital AI poses will be addressed later on.

Enforceability of Ethical Standards Within Digital AI Practices

Another ethical challenge is the divide over where to apply standards of ethical governance once they are created. Digital AI operates across many jurisdictions, and not all proposed standards can be practically and equitably applied in every situation. In other words, there is no one-size-fits-all approach. This lack of a unified approach leads to doubt and insecurity both among the users of systems where digital AI is employed and within the companies and organizations themselves. Perhaps the standards themselves need to be broken down and simplified, or a new approach needs to be offered that can lead to a more unified system of digital AI governance. The construction of such an approach is also the focus of this discussion.

Neutrality

Neutrality is the idea that AI should be completely impartial in all the tasks it performs. Balance and neutrality are critical in the use and implementation of digital AI, because if digital AI is not truly neutral, it can produce discriminatory outcomes within businesses, organizations and the workplace. According to Adams (2019), AI systems in various businesses have learned to discriminate against groups on the basis of race and gender. This is due to the quality of the data used to train the AI: the pool of data used to train certain systems was predominantly male, and the results of the training reflected this. Postcodes associated with specific marginalized groups could likewise lead to discrimination against them if such imbalances are not addressed. If AI outputs are unintelligible to humans, they may be perceived as biased simply through lack of understanding. It is therefore incumbent on the people programming AI to ensure that it adheres to specific ethical standards of development. Programming itself needs to be free, fair, apolitical and neutral.

Who decides what machines can and cannot do? The fact is, digital AI is controlled by people with agendas, and with no standard for how digital AI should be managed, gray areas keep appearing. The underlying issue is a lack of communication and common ground among the creators of digital AI themselves. This shortfall in transparency about how data is used also erodes trust between humans and digital AI. As part of an organization or company, one then has to consider how customers are affected when a lack of transparency produces a lack of trust. Consistent and clear-minded governance could be the key to creating a standard for ethical practice in the implementation of digital AI systems.

Inaccuracies

If digital AI is not truly neutral, it can produce problematic outcomes from the supposedly balanced data it is meant to manage. If it is programmed incorrectly from the start, it will produce further inaccurate outcomes, because the data it analyzes may be used incorrectly or unethically. Digital AI is meant to remain neutral at all times so that the outcomes of its calculations and activities are unbiased and therefore useful in analyzing data.

Discriminatory Outcomes and Embedded Biases in Programming

As mentioned before, incorrectly programmed AI can produce faulty analytical outcomes because inherent biases have been programmed into the system. If people program a system without considering the need for balanced and fair analysis, the digital AI itself may be inherently discriminatory. When these processes go wrong, they violate not only trust but basic human rights. Machine learning has great economic potential and can drive the growth of organizations, companies and businesses; misused, it can hinder that growth and deepen the aforementioned lack of trust. A real-world example occurred when Google's Vision AI labelled user images inconsistently depending on the skin tone in a given image. During the COVID-19 pandemic, many airports and train stations used handheld digital thermometers to monitor for temperature spikes. When images of these thermometers were run through the system, the results were surprising: Vision AI labelled dark-skinned users holding the device as holding “guns,” while similar images of light-skinned individuals were labelled as holding “electronic devices,” and an image of a person of a different racial group holding the device was labelled “monocular.” Google was quick to correct the error, but the incident highlighted the kind of discriminatory outcome that biased data and programming can produce.

Power Imbalances

In the relationship between customer and organization, there is the constant question of who is really in charge. If the organization controls the customer data, they can do what they like with it unless the customer has some kind of legal recourse or control over what data they choose to give to the company. Laws are in place that try to govern the nature of this relationship and these laws exist to protect the customer’s interests. But this power imbalance still exists. Who owns the data? Is it the customer or the organization? If it is the customer, does the organization have a right to control or hold on to this data or use it in any way? If it belongs to the organization, what is the process by which they gain ownership of such data? Who is the organization accountable to in the way that they use that data?

The Increase in Potential for Ethical Violations

Another ethical challenge arising from the rise of digital AI and analytics is the increased potential for ethical violations. This stems from the fact that computing is now more powerful than ever before and offers new ways to violate ethical norms and standards around user privacy, transparency and data integrity. According to an article entitled “AI Ethics in 2021” (2021), George Orwell's “1984” may not be so distant from reality when one considers the surveillance tactics employed by many organizations today. According to the AI Global Surveillance Index, 176 countries are using AI surveillance to monitor their citizens. Fifty-one percent of liberal democracies employ these systems, compared with thirty-seven percent of autocratic countries; this may reflect the wealth gap between these nations, though the figures alone do not say. In many jurisdictions, it is against the law to invade citizens' privacy without their consent. Some in positions of power have called for regulation of AI technology: IBM has stopped offering mass surveillance technology because of its propensity for racial profiling, and Microsoft president Brad Smith has called for facial recognition technology to be regulated because it can violate fundamental human rights.

According to a study reported by BuzzFeed, more than 7,000 individuals from 2,000 public agencies have used the facial recognition startup Clearview AI to scan millions of Americans' facial data while looking for rioters, petty criminals, extremists and even their own friends and family. Such practices endanger the privacy of everyone, and many similar occurrences go on in the world each day.

Capabilities of Analytics and AI

Organizations and businesses that embrace the newest technologies such as analytics and digital AI are more effective at making their businesses function and grow. Workloads are reduced and productivity is increased.

Problem Solving

The Essence of Speed

According to an article entitled “How artificial intelligence is improving efficiency” (2019), AI can be extremely beneficial in improving the speed of data analysis. It can produce reports from large amounts of data in record time, and it can perform tasks far quicker than any human could manage on their own.

Convenience and Security

AI is also vital for improving the security of a network, protecting application data and ensuring that trade secrets are kept securely away from prying eyes. Networks often contain information that is vital to the proper functioning of a business and is not for general use. Therefore, they need to be protected, and digital AI technology is the latest tool for this.

AI in Business: Digital Tasks

Data Collection

Data is collected through various digital means so that it can be used to analyze user behaviour and provide more relevant services. With more relevant services, customers will be more willing to reuse the service offered. Data collection can take place through cookies, which store user information. Web traffic is monitored and the data is compartmentalized into specific data sets that can be analyzed to tell organizations and companies where they should focus their marketing efforts.
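A minimal sketch of that compartmentalization step, using hypothetical visit records and field names, might group monitored web traffic by its source so a marketer can see where attention is coming from:

```python
# Sketch of segmenting web-traffic data: count visits per traffic
# source. The visit records below are invented for illustration.
from collections import Counter

visits = [
    {"page": "/pricing", "source": "search"},
    {"page": "/pricing", "source": "email"},
    {"page": "/blog",    "source": "search"},
    {"page": "/pricing", "source": "search"},
]

by_source = Counter(v["source"] for v in visits)
print(by_source.most_common(1))  # [('search', 3)]
```

The resulting data set ("search drives most pricing-page traffic") is the kind of signal the text describes organizations using to focus their marketing efforts.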

The other half of the picture is data storage, a key part of many organizations' business strategies. How data is stored determines how it can be accessed and used at any given moment, and if data is not easily accessible, it hinders the proper functioning of a company's activities. More efficient ways of managing and organizing data are therefore also part of the field of analytics and what it offers. Different kinds of data require different kinds of storage.

Data tracking takes place through using tracking algorithms. Data lineage tracks the “lifetime” of data, from the time it is produced to the time it fulfills its intended goal or is transformed into another digital “material.” This is especially helpful when dealing with large volumes of data or “Big Data.” Tracked data can then be visualized on mediums such as charts and graphs.
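A minimal sketch of the lineage idea (with invented field names, not any particular lineage tool) is a record that logs every transformation applied to it, so its "lifetime" can be reconstructed afterwards:

```python
# Sketch of data lineage: each transformation appends a step to the
# record's history, so the data's journey can be traced later.
from datetime import datetime, timezone

def transform(record, step_name, fn):
    """Apply `fn` to the record's value and log the step in its lineage."""
    record["value"] = fn(record["value"])
    record["lineage"].append(
        {"step": step_name, "at": datetime.now(timezone.utc).isoformat()}
    )
    return record

record = {
    "value": " 42 ",
    "lineage": [{"step": "ingested",
                 "at": datetime.now(timezone.utc).isoformat()}],
}
record = transform(record, "strip_whitespace", str.strip)
record = transform(record, "parse_int", int)

print(record["value"])                        # 42
print([s["step"] for s in record["lineage"]])
```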

Administrative and Physical Tasks

According to Prabhu (2020), ML is great for performing everyday business administrative tasks because it is excellent at probing and sifting through data to gather insights from it. ML can be used to highlight trends, create reminders and confirm appointments. Digital AI can make bookings for people, store and organize travel data and complete many other menial everyday tasks.

Software and Application Management

Cloud services are a wide range of services designed to provide easy access to applications and hardware over the Internet. If data is present in the cloud, it can be accessed whenever the user needs it, regardless of which device they are using; as long as they have access to the cloud account through a web browser or app, they can reach whatever information they need. Cloud computing, as it is commonly known, is used to streamline data analytics services, and gathering information is also easier using cloud services. Because the data is readily available, companies do not have to worry about losing large amounts of it if a computer or network goes down: the information will still be there as long as they can access their cloud services.

Application Protection

Applications can be remotely accessed and exploited like many other forms of data. What digital AI does is monitor networks for abnormalities so that it can create reports of these problems. It protects applications and networks by analyzing patterns and making a note of when there are changes in the way systems work.
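One simple way such pattern monitoring can work, sketched here with made-up traffic numbers (real systems use far more sophisticated models), is to flag time windows whose request counts deviate sharply from the historical average:

```python
# Sketch of pattern-based monitoring: flag time windows whose request
# counts deviate sharply from the mean (a simple z-score test).
from statistics import mean, stdev

def anomalies(counts, threshold=2.0):
    """Return indices of counts more than `threshold` deviations from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma and abs(c - mu) / sigma > threshold]

# Hypothetical requests per minute; the spike at index 5 stands out.
traffic = [102, 98, 105, 99, 101, 930, 100, 97]
print(anomalies(traffic))  # [5]
```

Flagged windows would then feed the reports the text mentions, prompting a human to investigate the change in how the system is behaving.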

Overall, analytics has led to a whole new way of looking at data. Where the focus was once simply on storing data and finding ways to use it, we are now able to analyze and use data as a means to advance organizations and societies. This discovery of knowledge also includes the capability of digital AI to predict trends in data and to identify patterns of consumer behaviour so that companies can give people what they want. Many of the digital platforms today make use of knowledge-based services to assist customers and help them find the information they are looking for. Take, for example, Google Search: a service that takes in data and presents it in a way that is easy to access and analyze.

Digital Marketing

Analytics in marketing refers to the way performance data is analyzed in order to improve marketing effectiveness and efficiency. For any company or organization, it is important to maximize ROI (return on investment), and digital AI can help streamline this process. Marketing analytics helps analyze customer behaviour, preferences and market trends, and allows marketing campaigns and their outcomes to be monitored to see what is and is not working. Making correct use of the data provided through analytics can lead to improvement in ROI.

Keywords are one of the key factors in making effective use of market data. Keywords are the terms most commonly associated with a particular product or service. Effective keyword research can help you reach more customers online and tell you what customers are thinking. When products are searched for through online browsers, those most commonly associated with specific words are highlighted first, and you can analyze how many times a product has been searched for and which terms customers used to find it or associated with it. Search engine optimization (SEO), paid search marketing and search engines have all contributed to more effective marketing practices; one could say the need for more efficient ways of marketing products and services has driven more modern means of analytics. Within marketing, the role of analytics is to help companies and organizations reach larger and more diverse audiences and approach them with more relevant information. One such keyword tool is Google Ads (formerly known as AdWords), which grades the performance of campaigns in part by examining how they use keywords.
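As a toy illustration of basic keyword research (the queries are invented; real tools operate over vastly larger search logs), one can count how often each term appears in the searches that led customers to a product:

```python
# Sketch of keyword research: count term frequencies in a log of
# search queries that reached a product page. Queries are made up.
from collections import Counter
import re

queries = [
    "cheap running shoes",
    "running shoes sale",
    "best trail running shoes",
    "waterproof hiking boots",
]

terms = Counter(
    word
    for q in queries
    for word in re.findall(r"[a-z]+", q.lower())
)
print(terms.most_common(2))  # [('running', 3), ('shoes', 3)]
```

High-frequency terms like these are the candidates a marketer would weight in SEO and paid-search campaigns.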

Customer Service

Analytics offers more effective customer service and a more efficient way of dealing with consumers. Digital AI gives organizations and businesses more effective ways of calling, contacting and communicating with customers. Improving customer service through analytics starts with customer feedback gathered by analyzing customer data, which can provide valuable insights into what customers want. Most customers report that companies fall short of their needs and expectations, and understanding the data provided by website traffic and keyword research can help companies identify exactly where they are falling short. AI can assist customer care by identifying customer sentiment and analyzing the content of tickets submitted through customer service helplines. These tickets provide a wealth of generated information that can be analyzed for keyword patterns; for example, are the words associated with customer feedback mostly negative or mostly positive?

User Experience

Analytics has the capability to improve customer experience as a result of these measures. User experience can be measured with analytical tools such as customer satisfaction score, average resolution time and first response time. These concepts are simple. Customer satisfaction score reflects the experience the customer had: were they upset or satisfied, and how was that reflected in the rating they gave the company when they logged out or ended the call? Average resolution time is the time the customer had to wait for their issue to be resolved. First response time is the time the customer had to wait for their claim to be acknowledged in the first place. Analytics is how digital AI is applied to measuring these customer service standards, and it can help companies improve customer satisfaction in the long run.
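Computed over a handful of hypothetical tickets (the field names and numbers are invented for illustration), the three metrics above reduce to simple averages:

```python
# Sketch of computing customer-service metrics from ticket data.
# Times are in minutes since the ticket was created; satisfaction
# is a rating out of 5. All values are hypothetical.

tickets = [
    {"first_response": 4,  "resolved": 55,  "satisfaction": 5},
    {"first_response": 10, "resolved": 120, "satisfaction": 3},
    {"first_response": 1,  "resolved": 35,  "satisfaction": 4},
]

n = len(tickets)
first_response_time = sum(t["first_response"] for t in tickets) / n
avg_resolution_time = sum(t["resolved"] for t in tickets) / n
csat = sum(t["satisfaction"] for t in tickets) / n  # mean score out of 5

print(first_response_time)  # 5.0
print(avg_resolution_time)  # 70.0
print(csat)                 # 4.0
```

Tracked over time, movement in these averages is what tells a company whether its service changes are actually improving the customer experience.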

The Challenges of Digital AI & Analytics as Expressed in Critical Literature

Challenges

As effective as AI is in helping businesses and organizations run more efficiently, challenges always arise when traditional human roles in the workplace are interfered with. According to Whittlestone, Nyrup, Alexandrova, Dihal & Cave (2019), there are a number of concerns over the use of AI and its applications in organizations around the world. While technology is advancing, the need to use it responsibly is of the utmost importance. Some of the issues Whittlestone, et al. (2019) pointed out concern values: fairness and bias, fair and equal treatment, respecting the autonomy of individuals and making people’s lives easier at the expense of their privacy. A list of comparisons was drawn up to highlight more clearly the areas of digital AI that need to be addressed in a theoretical framework later on. We will examine how the challenges of ethical AI are reflected in what people are writing about them.

The following tensions in the application of digital AI were listed:

● Algorithms as a means to make accurate predictions versus treating people fairly;

● Personalization in the digital sphere versus solidarity and community in the real world;

● Using data to improve the quality of services versus respecting customer privacy and dignity; and

● Using digital AI to make people’s lives more convenient versus promoting self-actualization and realization of self-worth.

Other concerns identified include the fact that digital AI is inherently unequal because its benefits accrue mainly to a select group of wealthy people (those able to afford such software or applications). The short-term benefits of digital AI may also erode long-term values as our world becomes increasingly technological: when we cease to have human contact and instead rely on machines to do the work for us, we begin to lose a sense of human perspective. From this viewpoint, there is an ethical concern. Digital AI may be beneficial to individuals, but it may not work at a collective level due to its restrictive nature. A common approach to these challenges needs to be put forward.

It’s important to look at some of the other pressing issues in the area of AI. According to “The 7 Most Pressing Ethical Issues” (2019), there are seven specific ways in which digital AI is causing ethical challenges to society in general, particularly companies. These issues are: job loss and wealth inequality; AI inaccuracies or mistakes; rogue AIs; killer AIs; power imbalances and what happens if an AI surpasses human intelligence; AI bias; and the question of AIs as part of general society and what rights should be given to forms of intelligence other than our own.

Job Loss and Wealth Inequality

The future of society is bleaker and more concerning when human jobs are in danger of being replaced by digital AI and bots. One of the qualities humans pride themselves on is being able to work, and without this ability, humans would quickly lose interest in life. This loss of jobs could lead to widespread inequality, because low-wage workers can no longer do the jobs that machines have taken over. There are already instances of this in the world: in factories in particular, many jobs traditionally done by humans are now done by machines. Robots are not paid a traditional wage as humans are, and they can keep working indefinitely. This could cause the people who own the machines to get richer while those who used to work for them get poorer, unable to earn a livable wage. Those in charge of the companies would keep the wages they would otherwise have paid to workers; without an equitable flow of money, the economy could collapse.

AI Mistakes

While a machine is undergoing the training (ML) process, it can make mistakes. When machines are in charge of delicate or complicated work, this can be disastrous. If AIs aren’t programmed properly or are trained on bad data, they can make the wrong decisions at critical times, just as a human under a bad influence might. It is worth looking at an example of such a mistake in a real-life situation. A chatbot belonging to Microsoft, named “Tay,” was released on Twitter in 2016 and immediately created a stir when it “learned” from other Twitter users and eventually began spouting abuse and racial slurs. As a result, Microsoft was forced to discontinue the bot and issue an apology. It was an important lesson in how AI can make errors in judgment based on the way it has been programmed. In this case the damage was limited to offensive posts, but what if a program made costlier errors in the course of its job, in a situation where lives were at stake? The key to controlling such AI lies in the programming itself, because once an AI has been programmed incorrectly, the damage it does cannot easily be reversed.

Digital Disruption

One of the challenges that have arisen from the advent of digital AI is that of digital disruption. Digital disruption arises as a result of transformation caused by emerging technologies and business models, according to “Digital Disruption” (n.d.). This transformation can be a positive thing when it leads to more effective functioning within businesses, companies and organizations, but it can also lead to the need for re-evaluation of a company’s methods and ways of functioning. This can lead to short-term loss of income when a company’s marketing strategy and customer service are negatively affected. When newer technologies are employed, there is invariably the need to move aspects of a company around, which is never easy for stability and structure within the organization. An example of this digital disruption in action occurred within Kodak.

Kodak is a photography company and was one of the first to bring the camera to the mainstream market. For much of the 20th century, it was the dominant force in the photography market. However, as technology changed over the decades, Kodak did not pay close attention to its customers’ interests. Cameras changed from being simply an instrument used to take photos to a much more versatile piece of technology. Kodak had originally targeted mainly the female demographic; however, more and more males were taking an interest in photography as well. This lack of attention and care allowed competitors such as Sony and Canon to swoop in and capture large swathes of what was formerly Kodak’s target audience. Despite this apparent need for change, Kodak refused to transform and eventually paid the price, declaring bankruptcy in 2012. What this shows is that while digital disruption is a painful process, it is also a necessary one. So, in this sense, the challenge arising from the advent of digital AI is a positive challenge, leading to the need for continual adaptation and change.

Rogue AI

There are instances where digital AI goes rogue either due to faulty programming or embedded biases programmed by someone with an agenda. Going rogue doesn’t necessarily mean that the AI turns evil and starts attempting to exterminate humanity, but rather that the AI doesn’t do what it was programmed to do. According to Fitzgerald (n.d.), Reuters reported that Amazon pulled an AI program that was designed to assist with its recruiting process. The program was designed to automate recruiting by going through CVs and identifying the most credible candidates. However, it was discovered that the AI was discriminating against female candidates, either because of the limited data sets it was trained on or because the software used to implement the data into the program was flawed in some way. It is a glimpse into what might go wrong if careful attention is not paid to the way in which AI training programs are implemented.
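One common way to surface this kind of discrimination is to compare selection rates across demographic groups and compute their ratio, as in the “four-fifths rule” used in US hiring guidance. The sketch below uses invented outcome data, not the actual system described in the report.

```python
def selection_rates(outcomes):
    """Selection rate per group from (group, selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening outcomes from a CV-ranking model.
outcomes = ([("male", True)] * 40 + [("male", False)] * 60
            + [("female", True)] * 20 + [("female", False)] * 80)

rates = selection_rates(outcomes)
# Ratio of the lowest selection rate to the highest; values below
# 0.8 are conventionally treated as evidence of adverse impact.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio: {ratio:.2f}")
```

A check like this is no substitute for careful training-data curation, but it can flag a discriminatory system before it is deployed.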

AI Bias

Outdated systems can lead to AI bias. One example is a facial recognition system used by Amazon, trained to identify users by their facial features. Facial recognition systems by Microsoft, IBM and other major manufacturers have all been found to have inherent biases. In some systems, this was because the programs in use were over 10 years old, according to “The 7 Most Pressing Ethical Issues” (2019).

Literature Overview

What an analysis of the literature on digital AI’s ethical challenges tells us is that AI has the potential for great benefit, but also the potential to lead to major controversies when it is not managed correctly. Whittlestone, et al. (2019) demonstrated that AI can be used in ways that promote unfairness and inequality. “The 7 Most Pressing Ethical Issues” (2019) showed that there are several ways in which digital AI is challenging companies and organizations. Fitzgerald (n.d.) highlighted a specific program that was used for recruitment but was discontinued for being discriminatory. Overall, these sources demonstrate the need for great care in the programming of AI so as to avoid using limited data sets that can produce biased or discriminatory outcomes. As for the future of digital AI and its effect on jobs, care needs to be taken to mitigate the effect on human workers and their livelihoods. We cannot allow digital AI to progress to such an extent that humans suffer as a result, and therein lies the ethical dilemma that seems to be the overarching narrative surrounding the debate on digital AI and analytics. How do we allow for the progression of digital AI while at the same time maintaining a level of care and responsibility when using it?

Methodological Approaches to Digital AI

Digital AI is a system that needs to be managed, and there are right and wrong ways to do this. An AI project is not simply a matter of putting together specific building blocks and reaping the end result. AI systems are far more complex and interactive than people realize; they are built slowly through experimentation and evolution. AI projects and systems are managed through a process of trial and error, finding out what works in any given situation. Approaching the ethical side of such systems is a similar process. However, choosing the right methodological approach can be complicated. What kind of AI works best with a specific ethical approach? There is a growing trend towards the need for specific solutions to these ethical challenges: what works in one situation may not be effective or morally correct in another. According to Morley, Floridi, Kinsey and Elhalal (2019), there has been a lot of focus on the challenges and what they mean, but there is a gap between the principles and the application of these principles. In other words, there is too much focus on the “what” (the principles, ideas and moral standards) and not enough emphasis on the “how.” There has been a lot of talk and not enough action. A sound methodological approach to the ethical challenges of analytics and digital AI in organizations contains a few key factors:

● Problems and challenges are foreseen and prepared for.

● There is a well-thought-out policy framework.

● There is a flexible approach.

Let’s examine what each of these key factors could mean. A proper methodological approach contains ideas that foresee eventualities not only in a theoretical sphere but also in a practical sphere. All bases are covered, in other words. It takes into consideration not only the need for proper management of data and how it functions within organizations but also the ethical concerns that arise from decisions that have to be made with regard to digital AI and its management.

A proper methodological approach contains a well-thought-out policy framework. A policy framework is a way of holding the people who program AI morally responsible. It contains the dos and don’ts of what people are supposed to do in any morally uncertain situation. It should be the driving force of how people within the organization handle data and should be the governing set of principles behind every ethical decision within the organization’s hierarchy as it concerns digital AI. Structuring a policy framework is an entire discussion on its own and needs to be given careful consideration.

Choosing the Right Methodological Approach

Data-Centric Approach

This is an approach whereby the data is made the primary asset of the business and everything builds around this. Applications are considered temporary structures. The data model precedes the incorporation of application structures and will be around when they are gone. How does this apply to ethical approaches? When the data is central to the company’s vision, moral and ethical decisions will be based on how data is affected or managed. According to McComb (2016), being data-centric isn’t only about the acquisition of more data but rather having a core model of concepts in your organization. Instead of simply acquiring and dumping data into the data warehouse environment and letting data scientists handle it, you make the data the central organizing approach for your company or organization.

The model is used as the central concept for the analysis of data. It informs the way applications are designed and decisions are made within the organization itself.

How does this help to address the ethical challenges that arise as a result of the implementation of digital AI systems? The key is found in the organization of data systems. “Dirty data” is data that is mismanaged and in disarray. It cannot be analyzed with a degree of integrity because it is not organized. A consistent data-centric model solves this problem by providing a vision and framework for the management of data. Employing the data-centric approach can lead to the more ethical use of data because the approach is more consistent in its application. A data-centric approach can be helped by the implementation of a set of data management policies.
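In practice, a data-management policy of this kind is often enforced with validation rules applied at the point of data entry, so that “dirty data” never reaches the warehouse. The following is a minimal sketch; the field names and rules are hypothetical.

```python
# Hypothetical record schema: each field maps to a validation rule
# drawn from the organization's data-management policy.
RULES = {
    "customer_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "age": lambda v: isinstance(v, int) and 0 < v < 130,
}

def validate(record):
    """Return the list of fields that violate the data-management rules."""
    return [field for field, rule in RULES.items()
            if field not in record or not rule(record[field])]

clean = {"customer_id": 42, "email": "a@example.com", "age": 35}
dirty = {"customer_id": -1, "email": "not-an-email", "age": 35}
print(validate(clean))  # []
print(validate(dirty))  # ['customer_id', 'email']
```

Because every record is checked against the same central rules, the data stays consistent enough to be analyzed with integrity.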

Techno-centric Approach

A techno-centric approach to ethically managing data is, in brief, a focus on technological means to protect data interests. It is a technological approach to data management that places value on collective sustainability. Ethical decisions are made based on their value in a technological sense; that is, problems are solved with an emphasis on technological means. This gives humans a measure of control over the resources within an organization.

Mixed-Method Approach

A mixed-method approach, as the name suggests, makes use of multiple methodologies when considering the decisions to be made with regard to ethical challenges. It is a combination of qualitative and quantitative approaches to address specific questions posed by research in the analysis of data. When managing data, the mixed-method approach can be more flexible because it offers many solutions for dealing with the ethical challenges that digital AI produces. This is crucial when planning business strategy, product development and branding ideas. However, how does this approach assist in making sound ethical decisions in a more general sense? Multiple approach strategies mean that the data being analyzed can be addressed in an unbiased way, which is one of the key challenges that digital AI faces. One solution might propose a specific idea and another might present the situation in a completely different light. It is necessary to have numerous perspectives so that all possible outcomes can be considered.

The benefits of the mixed-method approach are manifold. Having multiple methods of analyzing, storing and managing data leads to success because of the varying number of solutions that arise to particular problems.

How Does One Choose the Right Methodological Approach in Any Given Situation?

Choosing the right approach is difficult because it is largely dependent on the context and situation surrounding an organization’s policies at the time. Applying ethics to the development of AI is a fundamentally open issue that can be dealt with in a number of different ways. We can make these ethical decisions based on what is most sustainable, what is in line with the company policy framework and what best serves customer needs. Overall, as a general rule, the best way to select a methodological framework for an ethical policy is to think about what works in the best interest of all concerned and then work your way from there.

Chapter 5: Governance

AI governance has much to do with who takes responsibility for the ethical decisions relating to the management of data and digital AI within organizations. AI governance is about accountability. The challenge with AI governance is determining who is responsible and who makes the decisions. Who should organizations be accountable to in terms of their ethical decisions? At present, there is no accepted way to measure the concept of “governance.” To have an effective standard for AI governance, there need to be principles that are accepted globally, and these standards need to be applied in all situations equally and consistently. Who are the current leaders in AI governance? Who is making the decisions? This is what we need to discover for ourselves.

Who Makes the Decisions Today?

The current system of AI governance is uncertain. First, it would be good to look at what the term governance really means. In most cases, it would mean who is in charge of making decisions in a particular organization or set of organizations. But in the case of ethics, who decides which decisions are correct and which aren’t? There cannot be one single entity that governs every decision-making policy. A standard has to be created that is in itself the purview of every law-abiding organization around the world. AI governance is about being transparent, explainable and ethical, according to Sondergaard (2019).

What do these things mean? Transparency is the capability of governance to be open and clear about its motives and ideals. If an organization or standard is in place to govern ethical decision-making processes, it cannot have hidden agendas; it must be trusted by companies and organizations so that if they follow its guidelines, they will be walking along a clear moral path. A standard must be explainable and understandable to all, with methods that are clearly discernible and followed by all concerned. A standard of governance must also be ethical: agreed upon by all and following a moral standard that takes the needs of the majority into consideration. These three terms need to be codified or standardized for every organization so that there is common ground on what they mean. There can be no doubts or quibbles over language where governance is concerned. Having briefly outlined what solid governance could look like, a look at the current state of AI governance in the world today would also be fruitful.

The Weaknesses of Current Governance Structures

Lack of Unity

A lack of unity means that governance structures around the world cannot agree on specific measures related to AI governance. One such example is the case of measurement. According to Sondergaard (2019), there is a lack of unity over how governance should be measured. Practically speaking, what should this governance even look like? Lack of measures is a weakness because there is no precedent or rulebook to consult when faced with a practical situation requiring an ethical standpoint. If the theoretical base as put forward by the governance structure is not in place, there is a lack of practical options available to governing authorities.

Lack of Consistency

A lack of consistency means that the rules are in place in specific organizations but there is no consistent structure for employing these measures. If different organizations don’t have unified rules, it leads to this lack of consistency. A lack of consistency leads to a lack of moral standing and a muddied view of what is ethical in challenging situations. The first step in building consistency is for organizations to agree with each other on what measures are most relevant. If organizations cannot agree on a unified framework, then they cannot put clear and sound theories into practice except in the confines of their own company structures.

Lack of International Standards and Norms

There is a clear lack of policy framework for ethical AI governance at the international level. Many AI-focused organizations do in fact exist at the international level, such as the ISO (International Organization for Standardization) and the IEEE (Institute of Electrical and Electronics Engineers). However, the scope of these governance organizations is limited. Their focus on ethical issues is too narrow, and their main objective seems to be improving market efficiency. If their standards fail to meet policy objectives, there is nothing to govern how digital AI is produced and functions, and companies and organizations have to invent their own standards for policing. The weakness of governance, in this case, is the lack of scope or reach. There needs to be a more widespread range of measures governing the management of digital AI throughout the world.

Insights and the Construction of an Ethical Framework

Finally, we arrive at the construction of what should be a considered and measured policy framework that will form the basis of AI governance and inform ethical policies going forward. There are many ideas of what this should look like, but it is a complex process involving many considerations. What are the key factors to consider in light of what we have examined? Several key areas of ethics within the digital world have been examined, including the challenges that digital AI and analytics have created, the weaknesses of AI governance structures and the methodological approaches to dealing with those weaknesses. We can only start to create this framework once we address the challenges that have been put forward. So, a framework starts by laying out the context for change. The challenges we have examined can be divided into specific sectors: the first deals with the challenges presented by digital AI and analytics themselves; the second looks at the challenges surrounding governance.

Building a Policy Framework

The Issue of Digital AI and Data Analytics

Given the issues created by data analytics and the rise of digital AI, it seems pertinent to focus on the key areas of ethical concern. First is the effect that digital AI will have on human jobs. A framework needs to be designed that protects the human worker. The second issue is that of bias, lack of equality and discrimination in AI programming. This can be solved with the implementation of policy frameworks that govern the development of software used for digital AI. What will decide the limits of digital AI? This framework needs to make it absolutely clear what is and what is not allowed to be implemented in ML training software and digital AI programming. This policy framework needs to be codified into law and used in all major organizations and businesses. It needs to be accepted and implemented by them. Leading AI development hubs need to agree on one single standardization process for both the development and the management of data within a digital AI framework.

This process should be ongoing. The concept of standardization should undergo a review process every year in order to determine the status of ethical standards. For example, organizations could report to a central hub detailed in a single policy framework that forms the governing body for ethical standards. They would report which policies worked, which were too loose and which were too restrictive.

This policy framework would lead to a new culture of sustainability and safety. It would be wise to lay out the policy framework as it might theoretically look, given what has been discussed:

● Governments around the world would agree that steps need to be taken to standardize AI governance.

● A system of AI neutrality in programming is developed that is free of discriminatory features and is vetted by an independent organization without political and racial biases. This software becomes the standardized software used for digital AI programming. It is evaluated every year and keeps evolving based on changing practices and customer needs.

● The policy framework commits to protecting consumers and workers alike from unfair labour practices stemming from the rise of digital AI. The policy framework pledges commitment to a future where humans and AI work together and one does not replace the other.

● The policy framework pledges to research new and improved ways of holding digital AI and its programmers accountable at all times.

● The policy framework maintains a set of core values based on non-discrimination, fairness and freedom from gender and racial bias. These core values are always at the heart of the framework’s activities and they inform everything that the framework stands for.

● The policy framework lays out the ideal steps for dealing with grey areas in the challenges put forward by the rise of digital AI. When these challenging situations occur, the framework gives a set of guidelines that must be followed in accordance with the principles of the framework. Failure to do so means that the company is no longer in agreement with the principles and must face appropriate action. Ethical standards need to be enforced in a manner that is free and fair. If companies and organizations breach these rules, they should face the same actions that other companies breaching similar charters face. The framework is to be viewed as a policy document that has some judicial and legal standing as it relates to free and fair ethical practices in businesses and organizations.

The Issues of Consistent Governance

Governance weaknesses can be addressed through the implementation of this framework. If a specific organization can be set up to deal specifically with ethical issues in the technological sphere, it would go a long way to setting an international standard.

Insights

What has been discovered through looking at ethics in the digital world is that there are issues that stem from a lack of fairness and equality. AI is not human and does not think like a human; this lack of empathy is in the nature of the machine and the program. It is a cold and dark perspective, but one which can be changed through the implementation of a moral code of behaviour that governs the way in which AI can act in any given situation. In many instances, the human programmer is the one giving life to an AI program. We can instill morality by creating a code of behaviour for programmers to be governed by, and when they transgress, we will have a standard by which they can be judged. We also examined the nature of the challenges that stem from the rise of AI both historically and in the present day, and we discovered that humans and AI need to work together, not replace each other. The more unity there is, the less these grey areas matter.

What did we discover about the methodological approaches as they relate to AI? We examined several different methodologies: data-centric, techno-centric and mixed methods. We discovered that the most effective methods in approaching ethically-based problems are those which are flexible and which have a multitude of perspectives governing them.

As far as governance goes, we discovered that when governance policies are not in unity, a standard of ethical practice cannot be implemented. It is in the best interest of governments around the world to fix their policies and ensure that they agree upon a single ethical framework so that ethical challenges can be dealt with and overcome. The best way for governance structures around the world to be in unity is to band together and create a single cohesive document that they all agree with and that will be in the best interests of customers and organizations alike. Unity is the key to overcoming any challenge that can be faced, be it moral or otherwise.

Conclusion

In conclusion, one could say that digital AI has brought many benefits to the world. It provides security online, improves our lives and does the tasks that no one wants to do. It is in many respects a silent guardian in our lives because it is always there, and yet it is invisible. In businesses, it is responsible for much of the change we see in our lives. It controls our devices and gives us information when we need it. It is, in many respects, irreplaceable. But this reliance on it can also be our downfall if it is used in a manner that is not ethical and sustainable. The key to using AI sustainably lies in using it fairly and equitably. In everything that we do, we need to maintain integrity. We need to develop a moral code for using the technologies that impact our lives, and we need to stick to it for the betterment of all society. Otherwise, we will become as dark and amoral as the technologies that we create.
