Introduction
The idea of machines growing ever more intelligent until they gain sentience has been popular for at least five decades. At least since Arthur C. Clarke introduced a rogue AI in his novel 2001: A Space Odyssey, the AI rebellion has been a familiar science-fiction trope: machines rise up and attempt to subjugate or exterminate their former masters. Needless to say, real artificial intelligence has so far been much more mundane, but no less exciting. Indeed, AI development may well affect humanity more profoundly than any robot overlord could.
AI vs. Machine Learning
At the highest level, the term “artificial intelligence” simply means the ability of a machine to replicate what most people consider “intelligence” (Copland, 2020). Over the years, however, what counts as “intelligence” has shifted considerably as technology has improved. Activities such as text recognition were once classified as intelligent; today, a machine that merely recognizes text would hardly be considered so. Instead, what most people now call artificial intelligence centers on big data and pattern recognition: systems that analyze enormous numbers of data inputs and build mathematical models capable of making accurate predictions. Such systems power the recommendation engines common to e-commerce websites and social media platforms, Internet of Things devices, and image recognition applications such as Google Lens.
Machine Learning
Machine learning (ML), meanwhile, refers to a machine’s ability to improve itself without direct input from programmers (Kavlakoglu, 2021). To understand why this capacity for self-improvement matters, consider how most computer programs work today and why they are not considered “intelligent.” In a conventional application, the algorithm is static: human programmers write it as a fixed set of instructions, and the computer simply takes data inputs and processes them with that algorithm. The result it finally produces may or may not be what the programmer intended, but the computer does not know any better; it simply stops once the program has run.
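As a rough illustration of this point, the sketch below shows a conventional, non-learning program. The thresholds and category names are hypothetical and hard-coded by the programmer; nothing about them changes no matter how many inputs the program processes.

```python
# A conventional (non-learning) program: the rules are fixed by the
# programmer and never change, regardless of the data that passes through.
def categorize_temperature(celsius):
    # Hard-coded, hypothetical thresholds chosen by a human; the program
    # cannot revise them based on the outcomes it produces.
    if celsius < 0:
        return "freezing"
    elif celsius < 25:
        return "mild"
    else:
        return "hot"

if __name__ == "__main__":
    for reading in [-5, 12, 31]:
        print(reading, "->", categorize_temperature(reading))
```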
In contrast, machine-learning algorithms train computers to reason inductively from data in a self-correcting manner. The user specifies a machine-learning algorithm and supplies a sufficiently large dataset; the computer then runs many iterations of analysis over that dataset, with each cycle informing the outcome of the next.
The algorithm lets the computer continuously adjust the weights placed on variables and correct itself, so the mathematical models it generates grow more accurate with each iteration. This process resembles how people and animals learn from experience. By identifying patterns in past data, the computer can also make inferences about new data it has never analyzed before; in this way, the machine “learns” and applies the lesson to new datasets it encounters. The end result is usually more accurate predictions than traditional statistical methods would provide.
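To make this iterative self-correction concrete, here is a minimal sketch, assuming a made-up toy dataset and plain gradient descent on a single weight. It illustrates the general idea of repeatedly nudging a weight to reduce prediction error, not any particular production system.

```python
# A minimal sketch of iterative "learning": fitting a single weight to
# toy data by repeatedly adjusting it to reduce prediction error.

# Hypothetical toy dataset: inputs x and targets y that roughly follow y = 3x.
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2), (4.0, 11.8)]

weight = 0.0          # initial guess; the model starts out "knowing" nothing
learning_rate = 0.01  # how strongly each correction nudges the weight

for iteration in range(200):
    # Measure how wrong the current weight is on every example,
    # then nudge the weight in the direction that reduces that error.
    gradient = 0.0
    for x, y in data:
        prediction = weight * x
        gradient += 2 * (prediction - y) * x
    weight -= learning_rate * gradient / len(data)

print(f"learned weight: {weight:.2f}")  # converges toward roughly 3
```

Each pass over the data plays the role of one “iteration of analysis”: the error from the current cycle determines the correction applied before the next one.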
AI Today
We are witnessing a blossoming of AI that promises everything from treating cancer (Strickland, 2019) to trading stocks (Chen, 2020) to pushing targeted advertisements. For the foreseeable future, the study and application of artificial intelligence are likely to continue as more industries identify areas where it can be applied.
This is not to say that the adoption of AI in society will be smooth. First, the ethics of some AI applications are questionable to many people. Second, there are the plain technical limitations of AI itself. Third, no one has yet offered a solution to the threat AI poses to many existing jobs and livelihoods. Such questions could send us plunging from the heights of technological euphoria into the “trough of disillusionment” of the technology hype cycle.
AI Limitations
Starting with the limitations: the more we learn about human intelligence, the less certain it becomes that current technology can fully mimic it. Advances in neuroscience suggest there is more to intelligence than neurons and their firing, yet that model of intelligence is precisely what many current AI systems are built on (Nail, 2021). Even today, with massive advances in computational power, AI still struggles to model a simple worm, and there are doubts about whether it will ever match the intellectual capacity of an ant (Copland, 2020).
Instead, most AI systems in use today are Artificial Narrow Intelligence (ANI): systems designed to accomplish one task and one task only. For example, computer scientists built Deep Blue and AlphaGo solely to win at chess and Go, respectively (Kavlakoglu, 2021). Such systems are unlikely to solve climate change or produce the next great philosophical breakthrough on their own any time soon.
Ethics of Artificial Intelligence Deployment
This leads to the second problem with artificial intelligence: the ethics surrounding many of the roles in which AI systems are deployed. Experience has shown that AI systems become racist, sexist, or otherwise biased when they are fed biased data (Crawford, 2021; John Oliver et al., 2020; Kleinman, 2017). Mass deployment of such systems, which lack the judgment to filter out those biases, would be a disaster for any organization that relies on their analysis and acts on it.
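As a toy illustration of that mechanism, the sketch below uses an invented set of historical loan records in which one group was approved far more often. A naive “model” that simply mirrors historical frequencies reproduces the bias for new applicants, regardless of individual merit.

```python
# Invented, illustrative records: group "A" was historically approved
# far more often than group "B".
historical_records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

def approval_rate(group):
    # The "model" simply learns past approval rates per group.
    outcomes = [r["approved"] for r in historical_records if r["group"] == group]
    return sum(outcomes) / len(outcomes)

for group in ("A", "B"):
    print(group, "predicted approval probability:", round(approval_rate(group), 2))
# Group A: 1.0, Group B: 0.33 -- the historical bias is learned and repeated.
```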
Bias is not the only ethical consideration. AI systems that can sift through trillions of data points also introduce serious privacy and security risks. Organizations are increasingly developing AI that can craft subliminal messaging, break encryption, and generate information designed to mislead and manipulate (Simple, 2020). As AI becomes more common, society must draw new boundaries around what counts as private personal information and what is public.
Economic Considerations
Economically, the steady replacement of humans by machines without alternative forms of work for those displaced is another source of concern. As AI matures, computer systems will take over many business roles, and not only those of low-wage manual workers. Even highly paid professionals, such as accountants and stockbrokers, are at risk of being replaced. The limited “intelligence” of the average AI system is no barrier to adoption here: as with any ANI system, the non-human replacement only needs to excel in its designated role, not to be an excellent all-around employee.
Unless these ethical problems are addressed, resistance to AI will only grow as more systems are deployed and more people are negatively affected. People will react badly when they discover that their jobs have been taken by computers and that their personal information has been used without permission. Although a powerful modern Luddite movement is unlikely to emerge any time soon, irresponsible mass deployment of intelligent systems could produce deeply detrimental outcomes for the vast majority of people.
Conclusion
Artificial intelligence has come a long way since Alan Turing proposed the idea some 80 years ago. Today, machines are increasingly capable of accomplishing tasks that people once regarded as hallmarks of intelligence. Machines can now even learn from experience, mimicking humans and animals. Despite these advances, they remain far from “intelligent,” even by the standards of the animal kingdom.
However, this lack of general intelligence has not stopped corporations and government agencies from deploying AI systems in a variety of roles across business and society. Widespread deployment will profoundly shape how people work and live, and not all of its effects will be positive. While AI is unlikely to overthrow the human race in an armed rebellion any time soon, we should seriously ponder its implications and the ethical dilemmas it raises, rather than blindly charging forward.
Bibliography
Adams, S. (2017). Robot Will Crush Employees—Dilbert Comic Strip. https://dilbert.com/strip/2017-09-07
Chen, J. (2020). Algorithmic Trading Definition. In Investopedia. https://www.investopedia.com/terms/a/algorithmictrading.asp
Copland, B. J. (2020). Artificial intelligence | Definition, Examples, and Applications. In Encyclopedia Britannica. Encyclopedia Britannica. https://www.britannica.com/technology/artificial-intelligence
Crawford, K. (2021, April 27). Artificial Intelligence Is Misreading Human Emotion. The Atlantic. https://www.theatlantic.com/technology/archive/2021/04/artificial-intelligence-misreading-human-emotion/618696/
John Oliver, Tim Carvell, James Taylor, & Jon Thoday. (2020, June 15). Facial Recognition: Last Week Tonight with John Oliver (HBO). https://www.youtube.com/watch?v=jZjmlJPJgug
Kavlakoglu, E. (2021, April 22). AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference? [Corporate website]. IBM Cloud. https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks
Kleinman, Z. (2017, April 13). Artificial intelligence: How to avoid racist algorithms. BBC News. https://www.bbc.com/news/technology-39533308
Nail, T. (2021, April 30). Artificial intelligence research may have hit a dead end. Salon. https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/
Simple, I. (2020, January 13). What are deepfakes – and how can you spot them? The Guardian. http://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them
Strickland, E. (2019, April 2). How IBM Watson Overpromised and Underdelivered on AI Health Care. IEEE Spectrum. https://spectrum.ieee.org/biomedical/diagnostics/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care