Let’s Talk About AI

Credit: Alex/Adobe

Hannah Moses, General Writer

Artificial intelligence, abbreviated as AI, is technology that “makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks,” according to the SAS Institute.  People use AI in everyday life, in things like navigation apps, video games, spam filters, and facial recognition.  AI can do a lot of things that humans can’t: it can think quickly and dispassionately because it doesn’t have feelings, work endlessly without breaks, juggle many tasks at once with good results, and handle difficult, repetitive work easily with the help of algorithms.  Because it is technology, it costs less to “hire” AI than humans, and I believe that in many jobs, people will soon be replaced by AI.  The jobs that can’t be replaced are the ones that involve a lot of human interaction, strategic interpretation, decision making, skilled work, and deep subject-matter expertise.  These include lawyers, leadership roles, people who work in the medical field, and people who work with technology.
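To make the idea of “learning from experience” concrete, here is a toy sketch of one of the everyday examples above, a spam filter. The messages and word-counting approach are purely illustrative assumptions; real spam filters are far more sophisticated, but the principle is the same: the program’s behavior comes from example data rather than hand-written rules.

```python
# Toy "learn from experience" example: count how often each word appears
# in known spam vs. known non-spam messages, then score new messages.
# The training messages below are made up for illustration.
from collections import Counter

spam = ["win free money now", "free prize click now"]
ham = ["meeting at noon tomorrow", "lunch plans for friday"]

# "Experience": word frequencies learned from the labeled examples
spam_counts = Counter(w for msg in spam for w in msg.split())
ham_counts = Counter(w for msg in ham for w in msg.split())

def looks_like_spam(message):
    words = message.split()
    spam_score = sum(spam_counts[w] for w in words)
    ham_score = sum(ham_counts[w] for w in words)
    return spam_score > ham_score

print(looks_like_spam("free money prize"))    # words seen in spam examples
print(looks_like_spam("lunch meeting friday"))
```

Adding more labeled messages changes the counts, and therefore the filter’s decisions, without changing a single line of code, which is what “adjusting to new inputs” means in practice.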

   In 1956, the AI field was founded at a workshop at Dartmouth College, and the people who attended would go on to lead AI research for decades.  By the beginning of the 21st century, interest and investment in AI boomed, as machine learning was applied to many problems in both industry and academia thanks to new methods, powerful computer hardware, and the collection of massive data sets.  By 2016, the market for everything AI-related was worth over $8 billion, and according to the New York Times, interest in AI had “reached a frenzy.”

 

AGI: Artificial General Intelligence

   General intelligence is the ability to solve any problem; in machines, this is called artificial general intelligence, abbreviated as AGI.  According to Wikipedia, “…AGI is a program which can apply intelligence to a wide variety of problems, in a lot of similar ways that humans can.  Artificial general intelligence is the ability of an intelligent agent to learn or understand any intellectual task that human beings or other animals can.”  AGI would have the ability to think, understand, learn, and apply its intelligence to solve any problem as humans do in any given situation.  According to Nick Bostrom, a Swedish author and philosopher, an advanced AGI system would have “an intellect that’s much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”  AGI is only in its beginning stages, and most experts don’t expect a usable system until 2050 at the earliest.  Researchers have proposed many milestones for recognizing AGI, including passing the Turing test, passing the third grade, making scientific breakthroughs worthy of a Nobel Prize, and achieving superhuman intelligence.

   There are many AI apps, the most well-known being ChatGPT, an AI chatbot developed by the company OpenAI and released in November 2022.  OpenAI is currently valued at $29 billion.  A pro of ChatGPT is that it gives detailed responses and answers across many areas of knowledge, while a con is its uneven factual accuracy.

   ChatGPT has also been accused of bias.  Examples include telling jokes about men but not women, and showing political favoritism toward the Democratic Party.  In response to the criticism, OpenAI published a blog post outlining plans to let ChatGPT produce “outputs that other people (ourselves included) may strongly disagree with.”  In the last few months, lots of books have appeared on Amazon listing ChatGPT as an author or co-author, and the show South Park recently did an episode making fun of the app.

 

7 Risks of AI

  1. Job loss: By 2025, AI is expected to eliminate 85 million jobs while creating 97 million new ones.  According to futurist Martin Ford, “The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage sector jobs have been pretty robustly created by this economy.  I don’t think that’s going to continue.  If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?  Or is it likely that the new jobs require lots of education or training or maybe even intrinsic talents, really strong interpersonal skills or creativity, that you might not have?  Because those are the things that, at least so far, computers aren’t very good at.”
  2. Misinformation: Social manipulation is one of the main dangers of AI, according to a 2018 report.  The social media app TikTok uses an AI algorithm that fills a user’s ‘For You’ page with content related to what they’ve already watched.  Critics fault this process and the algorithm for failing to filter out incorrect and harmful content, raising doubts about TikTok’s ability to protect its users from dangerous media and disinformation.  Over the last few years, online news and media have become more questionable as fake images and videos have crept into social and political spheres.  This technology makes it very easy to replace one image with another, so it’s very hard to distinguish between real and fake news.  Ford said, “No one knows what’s real and what’s not.  So it really leads to a situation where you can’t believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence… That’s going to be a huge issue.”
  3. Potential government misuse: The Chinese government is using facial recognition to track people’s movements, and it may be able to gather enough data to monitor people’s activities, political views, and relationships.  US police departments are using algorithms to predict where crimes will take place.  A problem with these algorithms is that they’re influenced by arrest rates, which disproportionately affect Black communities.  This can lead to over-policing in those communities, and even if police are using the AI with good intentions, there are questions over whether it could turn into an authoritarian weapon.  Ford said, “Authoritarian regimes use or are going to use it.  The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?”
  4. AI bias: As the ChatGPT examples above show, AI apps can carry bias.  In an interview with the New York Times, Olga Russakovsky, a computer science professor at Princeton, said that AI bias goes beyond race and gender: AI is biased because it’s created by humans, who are themselves biased.  “AI researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities.  We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues,” Russakovsky said.
  5. Greater socioeconomic gap: Another cause for concern is widening socioeconomic inequality triggered by AI-driven job loss.  Blue-collar workers who perform more manual and repetitive tasks have seen wage declines of as much as 70%.  White-collar workers, on the other hand, have remained mostly unaffected, with some even earning higher wages.
  6. Increased laziness/slacking in people: At a 2019 meeting at the Vatican, Pope Francis warned against AI’s ability to “circulate tendentious opinions and false data,” and he stressed the grave consequences of letting AI develop without proper restraints.  He went on to say, “If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.”  The way people are using ChatGPT gives substance to the pope’s claims: many have used the app to get out of writing assignments, which threatens creativity and integrity.  And when OpenAI, the company behind ChatGPT, tried to make the app less toxic, it outsourced the work to underpaid Kenyan laborers.  It’s all about the money.
  7. Risk of use during war: For over a century, advanced technology has been used for war, and in my opinion AI will probably be used for war in the coming years.  In a 2016 open letter, over 30,000 people, including AI and robotics researchers, pushed back against investment in AI-fueled autonomous weapons.  The letter stated, “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.  If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious:  autonomous weapons will become the Kalashnikovs of tomorrow.”  Many new weapons pose huge risks to civilians, but the danger becomes worse when autonomous weapons fall into the wrong hands.  AI could end up being used with the worst intentions if warmongering tendencies and political rivalries aren’t kept in check.
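The “related content” process described in risk #2 can be sketched in a few lines.  This is a toy illustration only: the video names and topic tags are invented, and real recommenders like TikTok’s use far richer signals, but the basic idea is the same, rank unwatched content by its similarity to what the user has already watched.

```python
# Toy "For You"-style recommender: each video is described by a set of
# topic tags (all hypothetical), and candidates are ranked by how many
# tags they share with the user's watch history.
videos = {
    "dance_clip": {"dance", "music", "trend"},
    "cooking_tip": {"food", "howto"},
    "dance_tutorial": {"dance", "howto", "music"},
    "news_recap": {"news", "politics"},
}

watched = ["dance_clip"]  # hypothetical watch history

def recommend(watched, videos):
    # Collect every tag the user has been exposed to
    history_tags = set().union(*(videos[v] for v in watched))
    candidates = [v for v in videos if v not in watched]
    # Pick the unwatched video with the largest tag overlap
    return max(candidates, key=lambda v: len(videos[v] & history_tags))

print(recommend(watched, videos))  # "dance_tutorial" shares the most tags
```

Because the system only ever reinforces what was already watched, it has no built-in notion of whether that content is accurate or harmful, which is exactly the criticism described above.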

Sources:

  • Builtin.com
  • Techtarget.com
  • Aimultiple.com
  • India Times
  • SAS Institute
  • Simplilearn
  • The New York Post