
Driving Digital Strategy – Framework Towards Becoming Digital – Part 1

I will be releasing a 6-part series focused on the basics of digital literacy and how to incorporate AI-based technologies, data science, and machine learning into your business models. This first article provides an introduction to digital literacy. It covers various definitions and explanations of AI so that you can understand what it…

Understanding AI: Definition, Background and Modern Use

What Is Artificial Intelligence?

To properly introduce the latest technologies, you first need to understand what they are and how they operate. When you hear the term Artificial Intelligence (AI), you might think of a device or software that has the same capabilities as human intelligence. But is this correct? This assumption has led many not only to overestimate the concept of AI but also to fear it without justifiable cause. The thought of AI software acting as an invisible spy in your device, operating of its own accord, is indeed unsettling, and certainly worrisome when you consider implementing it in your business model. Of course, this is where the idea of AI gets mixed up with science fiction.

A more accurate way to define artificial intelligence is to say that AI is the ability of a machine or a device to perform tasks we would call intelligent, under the control of a computer or software. It’s also important to note that the technology itself is still developing. It hasn’t yet become a widely available, marketable product. Indeed, there are robots being used to handle certain simple operations, and there is currently a plethora of software that can run scans, detect issues, and carry out repairs independently of human command. However, AI-based software and solutions still don’t have the capacity to make independent decisions and conclusions, or to generalize, reason, and understand the meaning of abstract terms. For example, if you wanted to use AI software for blind hiring, you could let it evaluate candidates’ tests and sort them according to achievement criteria or, if you decide, account for years of experience according to a formula you set. But there is still a high probability of errors in the conclusions drawn from these results. For instance, applicants might accidentally misspell certain terms, which could cause their answers to be marked as missing or incorrect. Furthermore, direct human monitoring and involvement are still necessary to ensure that the software doesn’t miss other, atypical pieces of information that piece together an image of how a person would fit into your organization.
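
To make the blind-hiring example concrete, here is a minimal Python sketch. The scoring formula (70% test score, 30% capped experience), the field names, and the answer key are all illustrative assumptions, not part of any real screening product. The point to notice is that suspected misspellings are routed to a human reviewer instead of being silently counted as wrong.

    # Minimal sketch of the blind-hiring example. The scoring weights, field
    # names, and answer key are illustrative assumptions, not a real product.
    import difflib

    EXPECTED_TERMS = ["inventory", "logistics", "forecasting"]  # assumed answer key

    def looks_like_misspelling(answer):
        """True if the answer is close to an expected term but not an exact match."""
        close = difflib.get_close_matches(answer.lower(), EXPECTED_TERMS, n=1, cutoff=0.8)
        return bool(close) and answer.lower() not in EXPECTED_TERMS

    def score_candidate(test_score, years_experience):
        """Example formula: 70% test score, 30% experience (capped at 10 years)."""
        return 0.7 * test_score + 0.3 * min(years_experience, 10) * 10

    candidates = [
        {"id": "A", "test_score": 82, "years": 4, "answers": ["inventory", "logistcs"]},
        {"id": "B", "test_score": 91, "years": 1, "answers": ["forecasting", "inventory"]},
    ]

    for c in candidates:
        c["score"] = score_candidate(c["test_score"], c["years"])
        # Flag likely misspellings for human review instead of rejecting them outright.
        c["needs_review"] = [a for a in c["answers"] if looks_like_misspelling(a)]

    for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
        print(c["id"], round(c["score"], 1), "review:", c["needs_review"])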

Or you could use software to process orders and deliver finished checklists and forms for your staff to fill out, pack, and ship. But there is also the possibility that your customers have made spelling errors or omitted some of the information. If the software is monitored, this is easy to catch and correct.

So, let’s say you entrust your software with 100,000 orders. If only 1% of your customers don’t fill out their forms correctly, you are potentially facing up to 1,000 orders that will never reach customers who have already paid! As you can see, although AI-based solutions can significantly simplify and speed up workflows, saving costs and improving productivity, they can’t work fully independently.
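
To make these numbers concrete, here is a small Python sketch that applies the 1% error rate to 100,000 orders and shows the kind of form validation a monitored system would hand off to a person. The required-field list and the order fields are illustrative assumptions.

    # Sketch of the order-processing example: apply the 1% error rate to
    # 100,000 orders and show the kind of validation a monitored system
    # would route to a person. Field names are illustrative assumptions.
    REQUIRED_FIELDS = ("name", "street", "city", "postal_code")

    def find_form_errors(order):
        """Return the required fields that are missing or left blank."""
        return [f for f in REQUIRED_FIELDS if not str(order.get(f, "")).strip()]

    total_orders = 100_000
    error_rate = 0.01  # 1% of customers fill out the form incorrectly
    print("Orders at risk of never reaching a paying customer:", int(total_orders * error_rate))  # 1000

    # With monitoring, a flagged order goes to a person instead of failing silently.
    sample_order = {"name": "J. Doe", "street": "", "city": "Berlin", "postal_code": "10115"}
    problems = find_form_errors(sample_order)
    if problems:
        print("Hold for human review; missing fields:", problems)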

That said, at the very beginning of this series, I want to make an important clarification. It is wonderful and exciting to keep up with the latest technologies and learn how to use them to your advantage. But you need to level your expectations: instead of thinking of cutting-edge solutions as some above-human godsend that will launch your company to the top of the market, think of them as beginners in your company who need constant training and monitoring. They are still needed and helpful for sharing the load among staff members, but entrusting them with high-priority, difficult tasks without supervision can result in disaster. Not because AI is a hostile technology, but simply because it doesn’t yet have the capabilities to work independently.

Of course, the examples given above are only the most basic among the many interesting uses of AI and machine learning in business. To understand more, let’s take a brief look at the history and background of machine learning and AI.

Artificial Intelligence: Traits and Possibilities

The AI-based technologies we have today resulted from extensive research aimed at creating a replica of human intelligence that could be used for different purposes. As you’ll learn, this research was highly diverse and went in numerous directions. However, all of these technologies have tried to create or incorporate some of the following traits of human intelligence:

  • Reasoning. Common forms of human reasoning include: 1) deductive reasoning, where we draw irrefutable conclusions from premises that are assumed to be accurate, and 2) inductive reasoning, where we use the truth of an assumption, or a premise, to draw a strong conclusion, although we can’t be certain of the accuracy of that conclusion. For example, we use deductive reasoning in mathematics to solve equations and formulas, while predicting behaviours and making future projections relies on inductive reasoning. Both forms of reasoning are used in science, and researchers have worked on developing this ability in AI. But how successful were they? So far, the greatest success has been achieved with deductive reasoning, which enabled certain AI-based solutions, like robotic technologies, to perform specific tasks in laboratories, medical facilities, and factories. Inductive reasoning, however, is harder to program. It requires taking numerous factors into account and, in particular, understanding the uniqueness of each situation. It requires drawing out information and conclusions that are relevant to the task, many of which can’t be detected for the computer to process.
  • Learning. So far, research in the field of AI has shown that this form of “intelligence” can learn by trial and error, store its findings, and reuse what it perceives as a solution the next time the same problem presents itself (memorizing, or “rote learning”), as well as generalize to a degree based on experience (Chui et al., 2018).
  • Solving problems. The ability to solve problems is perhaps the most significant trait of intelligence, and one of the most relevant when it comes to applying advanced technologies. In technological terms, the ability of software or a machine to solve a problem consists of searching for possible actions that reach a pre-set goal. To do this, a system breaks a goal down into sub-goals and uses the information gathered from scanning and identifying the situations that create what we call a problem. For this, AI systems use so-called “general purpose” methods, like analyzing what needs to be done to bridge the gap between the current state and the one set as a target, or goal state. The program or machine then uses a series of simple actions to achieve this goal, from picking up to moving objects (see the sketch after this list). This problem-solving capability has so far been used for navigating digital objects, solving mathematical tasks, and finding winning strategies in games.
  • Perception. Humans use their senses and sensory organs to perceive situations and the environment, a perception that is further affected by bias, or the subjectivity of one’s situation and point of view. This ability has been implemented in AI technologies through different sensors, such as optical and auditory ones, to enable face and voice recognition and identification.
  • Language. AI systems still can’t independently train themselves or learn languages, but they can be programmed to operate in a certain language. You’re no doubt familiar with the notion that different language functions can be added to devices, programs, and apps. This allows a system to operate in multiple languages, but the capability is added by its developers rather than learned on its own.
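
To illustrate the problem-solving point above, here is a minimal Python sketch of “problem solving as search”: it looks for a short sequence of simple actions (pick up, move, put down) that bridges the gap between a current state and a goal state. The state encoding and the action set are toy assumptions, not a general-purpose planner.

    # Toy illustration of "problem solving as search": start from a current
    # state, apply simple actions (pick up, move, put down), and search for a
    # sequence of actions that reaches the goal state. The state encoding and
    # action set are illustrative assumptions.
    from collections import deque

    # State: (robot_location, object_location); object_location can be "held".
    START = ("shelf", "shelf")   # robot and object both at the shelf
    GOAL = ("table", "table")    # object delivered to the table
    PLACES = ("shelf", "table")

    def next_states(state):
        robot, obj = state
        if obj == robot:                     # pick up the object next to the robot
            yield "pick up", (robot, "held")
        if obj == "held":                    # put the object down where the robot stands
            yield "put down", (robot, robot)
        for place in PLACES:                 # move the robot (held object comes along)
            if place != robot:
                yield f"move to {place}", (place, obj)

    def plan(start, goal):
        """Breadth-first search for the shortest action sequence to the goal."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, actions = queue.popleft()
            if state == goal:
                return actions
            for action, nxt in next_states(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [action]))
        return None

    print(plan(START, GOAL))  # ['pick up', 'move to table', 'put down']

Real planners are far more sophisticated, but the shape of the idea is the same: states, simple actions, and a search for the sequence that closes the gap between the current state and the goal state.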

Artificial Intelligence Research and History

Some of the more commonly known applications, like Google Translate and Siri, use AI programs that were made to mimic human neural networks. But work on developing AI systems began earlier than we might assume. A machine called FREDDY, which scientists at Edinburgh University began developing in 1966 and finished in 1973, was able to perform simple tasks, like picking up simple items and recognizing multiple objects. It had a single television eye with the ability to move, and a pincer hand that could grasp objects.

Since the development of this machine, AI research has moved in two distinct directions. The first approach, known as bottom-up, aimed to mimic human intelligence by replicating the structure of the brain and its cognitive functions. The other direction, known as top-down, focused on developing cognition independently of the brain’s structure (Pennachin & Goertzel, 2007). The two approaches also diverged in how machines were created and trained to perform tasks independently. The top-down approach focused on writing programs that recognize certain elements, like geometric shapes, from explicit symbolic descriptions, while the bottom-up approach centred on replicating human neural networks and building programs capable of learning through repetition, trial and error, modelling, and other forms of learning typical of the human brain.

During the 1930s and 1940s, the theory of connectionism in learning emerged, which viewed learning as the making of connections between different types of stimuli. We now know that this theory is accurate and that neurons responding to different stimuli can connect and exchange information. The theory gained traction, and during the 1960s both approaches to studying AI were still in use, although their results were limited. The research went on, and both directions are still present today. Researchers following the bottom-up approach eventually managed to simulate simple nervous systems, but connectionist models have not matched that success in replicating the human brain. As research in connectionism progressed, many attributed this limited success to an oversimplified view of human neural networks.

Applied AI in Business

To date, researchers have pursued three general goals when creating AI systems: Strong AI, Cognitive Simulation, and Applied AI (Bringsjord & Schimanski, 2003). The first, Strong AI, strives to build solutions with the same intellectual abilities as a human. Research in this field began in the 1980s, but no significant progress has been made since. The second direction, known as Cognitive Simulation, revolves around using technology to explore and augment human capabilities. The technologies created in this field have been most notable in medical use, as they have helped many people with disabilities improve their quality of life.

Finally, the research goal called Applied AI, which mainly focuses on information processing, has so far produced many “smart” solutions that are already being used in companies and facilities worldwide. Some notable examples include blockchain technology, assistive technology, and data-processing software that companies can either purchase and install off the shelf or develop by working directly alongside expert teams to create unique solutions for their organizations.

Now, let’s address the elephant in the room. At the beginning of this article, I mentioned that AI systems still aren’t fully developed and that artificial intelligence per se hasn’t yet been achieved. Yet by the end of this article, you have learned that there are dozens, if not hundreds, of AI-based technologies currently being researched and used. To clarify, artificial intelligence hasn’t yet been achieved in terms of Strong AI, since that area of study hasn’t yet produced a human-like, independent system that replicates intellectual functions as we know them. But significant advances have been made in terms of Applied AI, which aspires towards more pragmatic, commercial solutions.
