In defense of Artificial Intelligence
There is not much “human” intelligence to AI. Intelligence, in the human sense, refers to the capacity for self-awareness, emotional knowledge, creativity, planning, problem-solving, empathy, and innovation. Most importantly, intelligence for humans implies the ability to think and imagine. Artificial intelligence, on the other hand, does not think, believe, or imagine.
The fears of AI, in my opinion, are blown out of proportion. I am not dismissing those fears outright, but rather clarifying what I believe is of true concern. The name “Artificial Intelligence” seems, on its own, to call human existence into question. The real existential dangers of using AI, in my opinion, are more philosophical than apocalyptic. What AI can arguably erase is the fine line that defines humanity, including the way people view themselves. It can invalidate and minimize the abilities and experiences unique to humans that we consider essential to humanity.
Before coming to conclusions about AI technology and its benefits and harms to humanity, we first need to understand what artificial intelligence is.
“At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving. It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.” (IBM, 2020)
AI processes large and complex amounts of data, sorts it, and then, based on common similarities and differences, tries to make the right guesses about the decision output. Critics of AI can argue that humans could analyze the data themselves, but here that is neither feasible nor efficient: we are talking about millions, maybe billions, of data points, and no human has enough time or brainpower to analyze them alone.
What artificial intelligence does is take the data and repeat the same task over time; the more information that is inputted, the more the machine can analyze and adjust its processing, becoming better at pattern recognition and, consequently, better at guessing. This is the basis on which machines “learn”. It is like preparing for a math test: the more questions you do, the better you get at “guessing” the answer and the higher your chances of solving it correctly. Because at the end of the day, isn’t math a set of inputs and outputs, guessed for accuracy based on logic?
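This “more examples, better guesses” idea can be sketched in a few lines. The following is a minimal, hypothetical illustration (the data, class means, and noise levels are invented for the example): a model classifies one-dimensional points by averaging the class means it has seen, and its test accuracy improves as it is given more examples.

```python
import random

random.seed(0)

def train(n_examples):
    """Fit a one-dimensional decision threshold by averaging class means."""
    # Toy data: class 0 clusters near 0.3, class 1 near 0.7, with noise.
    data = [(random.gauss(0.3, 0.15), 0) for _ in range(n_examples)]
    data += [(random.gauss(0.7, 0.15), 1) for _ in range(n_examples)]
    mean0 = sum(x for x, y in data if y == 0) / n_examples
    mean1 = sum(x for x, y in data if y == 1) / n_examples
    return (mean0 + mean1) / 2  # midpoint between the two class means

def accuracy(threshold, n_test=10_000):
    """Measure how often the threshold classifies fresh points correctly."""
    test = [(random.gauss(0.3, 0.15), 0) for _ in range(n_test)]
    test += [(random.gauss(0.7, 0.15), 1) for _ in range(n_test)]
    correct = sum((x > threshold) == bool(y) for x, y in test)
    return correct / (2 * n_test)

# With more training examples, the learned threshold settles near the
# ideal value (0.5 here) and accuracy stabilizes near its ceiling.
for n in (5, 50, 5000):
    print(n, round(accuracy(train(n)), 3))
```

Nothing in the sketch “understands” the problem; repetition over more data simply makes the guesses more reliable, which is the essay’s point about machine “learning”.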
What AI essentially does is take the data provided, use algorithms to find patterns, and guess the correct answer. Simple; yet this act in itself could threaten the existence of humanity philosophically, as it raises the question, “whose judgements do we listen to?”. Do we as humanity rely on the decisions and guesses made by artificial intelligence? In what scenarios must humans make decisions without the use of AI?
Humans make decisions and judgements. We use logic and rationality to weigh the variables and particulars behind small and big judgements every day. These can be as simple as what to wear or what to eat, but they also include which companies to invest in, which vaccines to work on, and which cities a drone should bomb. The danger of AI is not job automation or machines seizing control and threatening humanity; rather, AI blurs the lines of what it is to be human. If people gradually lose the capacity to make judgements and decisions while AI keeps improving, how do we make the decisions needed to control such technologies?
Critics also argue that the scientists behind artificial intelligence do not understand how AI works. This is not an incorrect statement, as computer scientists cannot accurately predict AI’s outputs every time. The algorithms and models some AI programs use today are too complex even for their creators to understand; they can only predict or guess based on their understanding of the algorithm. A related criticism is that, because we do not understand what output the technology is likely to produce, it will yield incorrect results or guesses. To this, I argue that the fault will be human error, not the error of artificial intelligence. The fault will lie in the problem sets given to AI, which include the input, the data types, the algorithm, the problem-solving method, and the desired output.
Coming back to the math-question analogy: if you are given the data, i.e. the numbers, but make an error during the process of solving the question (a faulty algorithm), you will always get wrong results. Similarly, if you have the right algorithm or method of solving the question but the wrong data set, you will always get either wrong results or errors.
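To make this concrete, here is a toy sketch (the numbers are invented for illustration): a correct averaging procedure applied to corrupted data fails just as surely as a faulty procedure applied to clean data.

```python
def mean_correct(xs):
    """Right algorithm: sum divided by count."""
    return sum(xs) / len(xs)

def mean_buggy(xs):
    """Faulty algorithm: an off-by-one divisor always skews the result."""
    return sum(xs) / (len(xs) - 1)

clean = [2.0, 4.0, 6.0, 8.0]
corrupted = [2.0, 4.0, 6.0, 800.0]  # one mis-entered value

print(mean_correct(clean))      # 5.0: right algorithm, right data
print(mean_buggy(clean))        # ~6.67: faulty algorithm, right data
print(mean_correct(corrupted))  # 203.0: right algorithm, wrong data
```

Either failure mode, bad method or bad data, is a human mistake made upstream of the computation itself.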
For example, Amazon tested AI technology to see which candidates would be most suitable and successful at the company (Dastin, 2018). The company gave its AI a list of candidates and their applications. After the analysis, the engineers and computer scientists found that the artificial intelligence overwhelmingly favoured male applicants over female applicants. When they went back to see what had produced this result, they found that Amazon’s hiring practices had tended to favour men over women. The AI recognized those patterns, the similarities and differences, and followed the trend. To clarify, the company tested the technology but did not use it in its hiring practices.
Here the fundamental flaw lies not in the technology of artificial intelligence but in humanity itself. Amazon failed to recognize that it had an underlying bias towards male applicants. The company had failed to see the fault in its hiring process, conducted primarily by humans. Even the engineers who supplied the technology with Amazon’s data failed to recognize the patterns within the data set. Artificial intelligence, unfortunately, did: it recognized the trend in the data and yielded what it took to be the “appropriate” results, treating “successful employees” as candidates whose applications lacked words such as “female” or “women”. What Amazon failed to notice, the AI picked up, producing results based on unconscious prejudice, inequality, and gender bias.
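A toy sketch shows how this kind of bias gets absorbed from data. The four “resumes” and the scoring rule below are entirely hypothetical (this is not Amazon’s system); the point is that when past hiring labels are biased, even a trivial word-scoring model learns to penalize the gendered token while every neutral word cancels out.

```python
from collections import Counter

# Hypothetical past hiring decisions with historically biased labels:
# resumes mentioning "women's" were rejected (1 = hired, 0 = rejected).
past_resumes = [
    ("captain of chess club", 1),
    ("led robotics team", 1),
    ("captain of women's chess club", 0),
    ("led women's robotics team", 0),
]

# Naive scoring model: each word's score is hired-count minus rejected-count.
scores = Counter()
for text, hired in past_resumes:
    for word in text.split():
        scores[word] += 1 if hired else -1

def score(resume):
    """Score a new resume by summing the learned per-word scores."""
    return sum(scores[w] for w in resume.split())

print(score("captain of chess club"))          # 0: every neutral word cancels out
print(score("captain of women's chess club"))  # -2: penalized solely for "women's"
```

No one told the model that gender matters; the only signal surviving in the biased labels was the gendered word, so that is exactly what it learned.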
At the current moment, AI is not a post-human intelligence that will control human behaviour and humanity as a whole. Given the technology we have, it is still, I would say, in the beginning-to-middle stages of development, with much research left to integrate it with our current world. At this time, I would argue that AI is not yet developed enough to deserve the title of artificial intelligence; it would be more useful, and less threatening, to refer to AI as “I.A.”, ‘intelligent assets’ or ‘intelligent assistance’, as its main priority is to execute standard, repetitive, routine tasks in smarter and more efficient ways.
As for automation, in which AI will replace jobs, this may signal that we as humans work inefficiently and will benefit from this technology. One study suggests that AI will replace 45% of jobs (Tse & Esposito, 2018), and it is estimated that up to 300 million jobs worldwide could be automated (Anderson, 2023), which would significantly impact the world economy and the job market. This essentially means that 45% of jobs in the current market, 300 million worldwide, are data-based: lots of data, lots of repetition, and only a small set of outputs or results.
An example can be something as simple as driving. Driving is following a set of rules (signs, signals, roads, and speed limits) to reach the desired destination: lots of repetition, not a lot of intelligence required, and simple rules with clear instructions and clear results. The same can be said of the games Go and chess, each a set of simple rules with a clear result. That is why we have AI technologies like AlphaGo (Google DeepMind), Deep Blue (IBM), Cruise (General Motors) and Waymo (Alphabet), and why we now have self-driving cars and programs that can beat chess grandmasters and Go world champions.
Although the technology may be useful, it is not yet perfect, and I argue it never will be. It may have a high accuracy rate, but it will never be flawless, as we saw with the criticism of self-driving cars and with AlphaGo losing a game to the reigning world Go champion. What artificial intelligence fails at is analyzing data that is unaccounted for.
This includes the uncertainties and the rare, random variables that we as humans have not yet encountered or do not have data for. Artificial intelligence does not react well to sudden and random changes. This is why an AI-driven car may not recognize a dog or a pedestrian suddenly running into traffic, and why a program may not recognize a novel move first tried by a chess grandmaster or an intricate play conducted by a Go world champion. It cannot problem-solve creatively or react to unaccounted-for variables. Therefore, artificial intelligence will never replace humans, not until it can predict our world accurately 100% of the time. No technology, I argue, will ever do that: we as humanity, even with the tool of artificial intelligence, did not detect or prevent the last pandemic, which so impacted the world’s economy. Again, useful, but not perfect.
What AI does is relieve humans of repetitive tasks. If 45% of the current job market could be replaced with AI, that means there is a lot of repetition, which can stifle innovation and limit human development. What AI can do is give humans more time and opportunity for efficiency and improvement, ridding us of repetitive tasks so that we have more time to advance technologically. It is like having someone do the dishes so that you have the option to spend more time on the things you want to do.
Whether people like it or not, AI is only going to expand. Numerous companies have adopted, or are trying to adopt, AI technologies to aid their business practices. Scientists use AI to conduct research, predict protein structures (Service, 2020), and develop new methods of detecting serious illnesses (Anderson, 2023). Militaries use AI to conduct drone attacks and improve cybersecurity measures. Because at the end of the day, why not use a tool instead of our bare hands? If there is a chance to do things better, more efficiently, and more easily, why not take the opportunity?
I think it is smart to be critical of such technologies, but I disagree with refusing their integration into society when so much of your daily life is already powered by AI. There is no point in denying its impact. The intelligent thing to do as humans is to be aware of the technology and refine it to benefit and further humanity’s growth, development and interests.
Our responsibility when it comes to AI technologies is to ensure responsible growth and oversight. More importantly, we as a society need to stay on top of AI as much as possible. We need to educate the public about AI and inform them of both the positives and the negatives of this technology. If we are concerned about the loss of jobs or stifled human development, we need to integrate AI learning into elementary, middle and high school education worldwide. Integrating computer coding and engineering design courses into primary and secondary education for Generation Z increased the number of students entering the tech industry, which is such a big part of our current day-to-day lives. This also means holding leaders and people in positions of power and influence accountable for familiarizing themselves with the technology, for example by having lawmakers and CEOs take mandatory classes so that they understand it well enough to make appropriate legal and financial inquiries and decisions that protect human interests.
As the late Professor Stephen Hawking once said, “Success in creating effective AI could be the biggest event in the history of our civilization”. What we do moving forward with this technology will be the turning point in humanity’s next chapter.
References
Anderson, B. (2023, July 19). 1,300 experts: AI is not a threat to humanity!. ReadWrite. https://readwrite.com/1300-experts-ai-is-not-a-threat-to-humanity/
Barbaro (Host). (2023, June 28). Suspicion, Cheating and Bans: A.I. Hits America’s Schools. The Daily. The New York Times. https://open.spotify.com/episode/6lfbAWCk2RI1vi3QLrsmwo?si=8dbc46aa1d6d4504
Barbaro (Host). (2023, July 18). The Writer’s Revolt Against A.I. Companies. The Daily. The New York Times. https://open.spotify.com/episode/26xt8MwmfaBlmU6GFjYumu?si=61a5063b6ab940ba
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Eisikovits, N. (2023, July 12). AI is an existential threat - just not the way you think. Scientific American. https://www.scientificamerican.com/article/ai-is-an-existential-threat-just-not-the-way-you-think/
Graham (Host). (2023, May 19). The incredible creativity of deepfakes — and the worrying future of AI | Tom Graham. TED Talks Daily. TED. https://open.spotify.com/episode/2Uu7rOjWvqJhxysqGP3l5V?si=3b2be9bafd6e4679
Hassenfeld (Host). (2023, July 12). The Black Box: Even AI’s creators don’t understand it. Unexplainable. Vox. https://open.spotify.com/episode/7onym0eP8yEkUATJyYz58G?si=35d2b24d4be6498b
Hassenfeld (Host). (2023, July 19). The Black Box: In AI we trust?. Unexplainable. Vox. https://open.spotify.com/episode/3npjXNCtUSGRUjVR4EYb4Y?si=7800bd9c7c834e58
Hassenfeld (Host). (2023, September 1). We don’t know how AI works…. Today Explained. Vox. https://open.spotify.com/episode/1ghtvaazr1aZhhjtik9A8h?si=dc8c686e37c640e6
Hassenfeld (Host). (2023, September 4). …We’re trusting it anyway. Today Explained. Vox. https://open.spotify.com/episode/1ghtvaazr1aZhhjtik9A8h?si=2ca49f40ad804f64
IBM. (2020, June 4). What is Artificial Intelligence (AI) ?. IBM. https://www.ibm.com/topics/artificial-intelligence
Marcus (Host). (2023, May 12). The urgent risks of runaway AI — and what to do about them | Gary Marcus. TED Talks Daily. TED. https://open.spotify.com/episode/2cAGTRd4ZbwbOXGQSKRcQw?si=1d6d01c902ce4fd8
Rameswaram (Host). (2023, July 25). Inside the AI factory. Today Explained. Vox. https://open.spotify.com/episode/1QfaZ4Kxo8NOP1ttKdrF8R?si=f85bda16a7614f52
Rameswaram (Host). (2023, August 17). RoboCab. Today Explained. Vox. https://open.spotify.com/episode/4KjM99LHUYDYZ9SVqSPofA?si=87d176a712a547ca
Service, R. F. (2020, November 30). ‘The game has changed.’ AI triumphs at solving protein structures. Science. https://www.science.org/content/article/game-has-changed-ai-triumphs-solving-protein-structures
Sidhu (Host). (2023, August 31). The AI-powered tools supercharging your imagination | Bilawal Sidhu. TED Talks Daily. TED. https://open.spotify.com/episode/7j3tbxzaPhVCCsMz3XPO3v?si=5953b698931340cd
Skyers (Host). (2023, August 8). In the age of AI art, what can originality look like? | Eileen Isagon Skyers. TED Talks Daily. TED. https://open.spotify.com/episode/7nI1Z8vrxylOmRas10FoL5?si=b17c1721df2e470b
Tavernise (Host). (2023, May 30). The Godfather of A.I Has Some Regrets. The Daily. The New York Times. https://open.spotify.com/episode/5iPjFDKUJlX2ZceJAyUdSG?si=867b0b17a6914187
Tse, T., & Esposito, M. (2018, October 11). Why AI isn’t the threat we think it is. Duke Corporate Education - Leadership for What’s Next. https://www.dukece.com/insights/why-ai-isnt-threat-we-think/
Wang (Host). (2023, July 10). War, AI and the new global arms race | Alexandr Wang. TED Talks Daily. TED. https://open.spotify.com/episode/0LqZnqKN3QwARZekBbTUyS?si=d6fbbf7a4ffa416e
Whittemore (Host). (2023, August 16). Why AI Hype Has Peaked (And Why That’s A Good Thing). The AI Breakdown: Daily Artificial Intelligence News and Discussions. https://open.spotify.com/episode/2r8ydALIO0MN2W0G4JszsF?si=51e3080f44984f57
Whittemore (Host). (2023, August 19). Could AI End Up Being Good for Democracy? The AI Breakdown: Daily Artificial Intelligence News and Discussions. https://open.spotify.com/episode/6EH05YZX6D0svzuFATlSg0?si=9741c33ca1484628
Whittemore (Host). (2023, August 27). How Will We Know When AI Becomes Conscious? The AI Breakdown: Daily Artificial Intelligence News and Discussions. https://open.spotify.com/episode/3hhdRfouUjlwOm8c5tFVeV?si=59a6a7bbca764615
Whittemore (Host). (2023, September 1). Are Educators Ready for a ChatGPT School Year? The AI Breakdown: Daily Artificial Intelligence News and Discussions. https://open.spotify.com/episode/6ky0U8wwPu2JN0Y1KfgUmV?si=039090c1429a48b7
Yudkowsky (Host). (2023, July 11). Will superintelligent AI end the world? | Eliezer Yudkowsky. TED Talks Daily. TED. https://open.spotify.com/episode/6raVb4fH7S4r7BR4pb1prd?si=5de05108afb2457a