Code of Conduct for AI Ethics in Your Organization


It is extremely easy to imagine how many things would go wrong with Artificial Intelligence if it were to grow into an entity more powerful than any other on Earth. You don’t even need to read anything about it – watching Terminator is enough to send shivers of terror down your spine.

Indeed, the End of the World by AI seems like a plausible scenario – one based not on logical analysis or experience, but on mankind's odd propensity to blame itself for all its mistakes.

In reality, Artificial Intelligence is far from the evil entity Hollywood portrays it to be, and far from the all-knowing sentient beings Asimov wrote so much (and so eloquently!) about. At present, AI is as far from all that as it can be, especially since the field is still relatively new.

Growth and The Prevention of Risks

The question of whether AI will grow to be ultra-intelligent and all-knowing should not be posed here. Advances are made every day and, sooner or later, we are bound to live in an AI-infused world. From the way we watch movies to the way we dress and communicate, we are standing at the gateway of a new era – one that is easier, more productive, healthier, and, overall, less stressful than the one we currently live in.

Is there any chance AI will ever grow into a supervillain?

In the way movies and books portray it, this is nearly impossible with the information we have at the moment.

Can AI take a wrong turn and hurt us?

Maybe, but the advances made today aim to make sure this doesn't happen by setting in stone a code of conduct for everything connected to Artificial Intelligence and the ethics behind it.

It is, in the end, to everyone’s best advantage that we all follow this code of conduct.

What Should a Code of Conduct for AI Ethics Include?

Establishing clear ground rules on what is and isn’t acceptable when running Artificial Intelligence research projects and developing AI products is absolutely essential.

Without ethics, AI can, indeed, turn against everything it came from: first against its creators (the public would lose confidence in them and their products) and then, on a grander scale, against its users.

With ethical practices in mind, however, AI can become the harbinger of a better world: one where people are safer, where employment reaches higher levels, and where wealth distribution itself is better managed.

It sounds idyllic – but it’s a world within our grasp, and the grasp of the shiny robotic hands the Artificial Intelligence industry is developing as we speak.

AI ethics are all about asking the right questions: the kind that drives change and supports evolution by putting humans above any other participant in this game.

More specifically, the best way to develop a code of conduct for your organization’s Artificial Intelligence efforts is by putting the Five C’s before anything:

1. Consent. At this point at least, Artificial Intelligence and automation are heavily reliant on feeding large amounts of data into systems capable of processing them and predicting future behaviour and potential actions based on said data.

Under these circumstances, an agreement between the source of the data and its processors (i.e., the organization working with that information to develop AI products) is indispensable.

There is a very good reason GDPR happened in Europe: it was high time regulations were set in place to ensure all the information you share online and offline is properly managed.

Even without GDPR, companies working with data should always ask for the consent of the involved parties – it is a fair, correct, and healthy action to take for your own company. Not asking for consent can make you liable to a wide range of risks – including tremendously expensive lawsuits that will eventually hurt not only your company’s image but its very integrity as well.

2. Clarity. How can anyone consent to anything if the rules of the game aren't clearly stipulated? You wouldn't start playing poker without being familiar with the rules and techniques specific to the game – so why would you consent to share part of yourself without knowing everything there is to know about who you are sharing your information with and what it will be used for?

When asking people or entities for their data, be sure you are clear about what you are using, how you do this, why you do this, and what the end result will be.

3. Consistency. When you work with people's personal information and when you ask them to help you in the development of something as grand as AI or automation, you need to be consistent.

This means that the type of data you ask of them should always be the same, it should always be used the same, and the results you deliver should always stick to what you promised.

Without consistency, you are losing your participants’ trust – and that’s a terribly bad idea from every point of view.

4. Control. Participants should always have the right to control the kind of data they share with you. This is not a game they enter for life: if, at any point, they change their mind about what they will share from then on, they have the right to control it entirely.

Be sure you let them know that they can access their data at any point and that they can control what they want to share from any given point onwards.

5. Consequences. Your data-sharing participants need to know exactly how their data will be shared and what the consequences of sharing it will be.

As with every other point described here, be ultra-clear about everything and organize your message so that it is easy to skim, as well as to understand deeply when read in its entirety.
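To make the Five C's concrete, here is a minimal sketch of what a per-participant consent record could look like in code. Every name in it (`ConsentRecord`, `grant`, `revoke`, and so on) is hypothetical and purely illustrative – it is not taken from any real library or regulation, just one possible way to encode explicit consent, a stated purpose, fixed data categories, and revocability.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of a consent record loosely mirroring the Five C's.
# All names here are illustrative assumptions, not an established API.
@dataclass
class ConsentRecord:
    participant_id: str
    purpose: str                              # Clarity: what the data is used for
    data_categories: list                     # Consistency: always the same data types
    granted_at: Optional[datetime] = None     # Consent: explicit opt-in timestamp
    revoked_at: Optional[datetime] = None     # Control: revocable at any time

    def grant(self) -> None:
        # Record an explicit, timestamped opt-in.
        self.granted_at = datetime.now(timezone.utc)
        self.revoked_at = None

    def revoke(self) -> None:
        # Control: the participant can withdraw from any point onwards.
        self.revoked_at = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        # Data may only be processed while consent stands unrevoked.
        return self.granted_at is not None and self.revoked_at is None


record = ConsentRecord(
    participant_id="p-001",
    purpose="training a recommendation model",
    data_categories=["viewing history"],
)
record.grant()
print(record.is_active())   # True while consent stands
record.revoke()
print(record.is_active())   # False after withdrawal
```

The point of the sketch is simply that consent is state, not a one-time checkbox: the record keeps the purpose and the exact data categories alongside the opt-in, and revocation flips processing off without erasing the audit trail.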

Are Companies Doing All This?

If you look back at 2018, there were several moments that marked the world of technology.

There was, of course, Elon Musk’s rocket launch – a glorious moment when science prevailed and offered everyone a glimpse into the future.

There was also the (in)famous Facebook scandal, when Mark Zuckerberg and his billion-dollar company were put against the wall. Why did this happen? Although they had followed legislation in how they used the data users shared on Facebook, they weren't crystal clear about it. Consequently, their image was heavily damaged.

Proof? By the end of 2018, fifteen million users had left Facebook.

Google, the other pillar of modern internet technology, is dealing with everything in a far more ethical way. In fact, they are working on making their Artificial Intelligence development efforts extremely clear to everyone. Their published AI principles are a very good example of how an AI code of conduct and ethics should be managed.

Artificial Intelligence is just starting out – our automation is far from complete, and our robots are far from capable of serving us entirely (most are pretty much unable to open doors at this point).

Yet, the world of AI is growing at a high pace – so implementing codes of conduct now, rather than at some point in the future, will give us all a good head start and help build a better, cleaner, more transparent future for everyone.
