
Python Learning Lesson 11

Lesson 11 Slides

  • OOP (Object-Oriented Programming)
  • Classes and Constructors
  • Methods vs Functions

Evaluating AI’s Impacts on Modern Society

In our history classes we learned that the Industrial Revolution had a great impact on American and global societies during the 19th and early 20th centuries. The mass production of goods allowed America to trade more efficiently with European countries, which boosted the American economy. But the new technology also provoked a backlash: skilled workers and craftsmen saw machines that ordinary people could operate as a threat to their businesses. The modern age brings a new technology of its own, Artificial Intelligence (AI). The advent of AI is beneficial for society, but it is important to take its negative implications into consideration so we can come to ethical conclusions. Why might some be against AI?

One of the major negative impacts of AI is its effect on society. “…AI-driven technologies have a pattern of entrenching social divides and exacerbating social inequality, particularly among historically marginalized groups” (Hagerty and Rubinov). Because AI models train on data sets that may be biased (not diverse enough), they could create further divides among social groups. Many people consider these issues a major threat that comes with AI. Beyond the economic sphere, the possibilities of AI are being explored in healthcare, where new systems are being developed to help doctors with their medical processes and evaluations. But these systems can also be biased, leading to harmful results for patients. “…algorithms trained with gender-imbalanced data do worse at reading chest x-rays for an underrepresented gender…” (Kaushal et al.). Because these models are trained on biased data, they could produce false positives or false negatives for specific groups, which could be fatal for patients. For example, if a patient is believed to have cancer but actually doesn’t, this false positive could send them through chemotherapy, which is dangerous. Likewise, if a patient has cancer that the model fails to detect, that patient would be left untreated. Issues of biased data also extend into the legal system, where AI is being used to help lawyers with court decisions, and biased data sets lead to unfair rulings. “Using COMPAS, Black defendants were incorrectly labeled as ‘high-risk’ to commit a future crime twice as often as their white counterparts” (Mesa). For these reasons, groups, especially underrepresented ones, may feel threatened by AI because of biased data. Despite these concerns, AI is being improved and developed to fix such issues and promises a safer future for us.
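The mechanism behind the chest x-ray finding can be illustrated with a toy sketch. All numbers, group names, and the "model" (a single decision threshold) here are invented for illustration and have nothing to do with any real medical system: a threshold fit on group-imbalanced synthetic data ends up serving the overrepresented group well and the underrepresented group worse.

```python
import random

random.seed(0)

def sample(group, label, n):
    # Synthetic scores: group A's positives cluster high, while group B's
    # positives cluster low and overlap the negatives, so a single
    # threshold fit mostly on group A data serves group B poorly.
    if label == 1:
        base = 0.7 if group == "A" else 0.4
    else:
        base = 0.3
    return [(group, label, base + random.uniform(-0.1, 0.1)) for _ in range(n)]

# Imbalanced training set: group A outnumbers group B 19 to 1.
train = (sample("A", 1, 95) + sample("A", 0, 95)
         + sample("B", 1, 5) + sample("B", 0, 5))

def accuracy(threshold, data):
    # Fraction of examples where "score >= threshold" matches the label.
    return sum((x >= threshold) == (y == 1) for (_, y, x) in data) / len(data)

# "Training": pick the observed score that maximizes overall accuracy,
# which is dominated by the majority group.
best = max((x for (_, _, x) in train), key=lambda t: accuracy(t, train))

# Evaluate on fresh, balanced samples from each group separately.
test_a = sample("A", 1, 200) + sample("A", 0, 200)
test_b = sample("B", 1, 200) + sample("B", 0, 200)
print(f"group A accuracy: {accuracy(best, test_a):.2f}")
print(f"group B accuracy: {accuracy(best, test_b):.2f}")
```

The overall accuracy on the training set looks fine, which is exactly why per-group evaluation matters: the gap only shows up when the two groups are measured separately.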

AI also promises beneficial technology. Even though many may argue that AI is bad for the environment because of the metric tons of carbon dioxide emitted into the atmosphere during training, the long-term benefits seem to far outweigh the negatives. “It can also underpin low-carbon systems, for instance, by supporting the creation of circular economies and smart cities that efficiently use their resources” (Vinuesa et al.). AI could actually help cities use energy more efficiently, which is better for the environment. Rajendra Koppula, Vice President of Manifold (an AI company), said that in the short term AI might be doing more harm to the environment, but in the long run it may be more beneficial. AI is also being utilized in the legal system to help lawyers make decisions and resolve disputes. “Some offer litigation analytical tools that analyze precedent case data and other data to aid lawyers in predicting case outcomes” (Kauffman and Soares). With AI, lawyers could potentially help more people because they would be able to work through court cases faster. Even though some may argue that AI is detrimental when implemented in healthcare systems, it does promise benefits. “Experts predict AI to have a significant impact in diverse areas of healthcare such as chronic disease management and clinical decision making” (Ahuja). AI could help doctors in various fields of healthcare. For example, AI is being used to help radiologists analyze images, saving them time for other patients and even producing more accurate results. AI won’t replace doctors but rather improve their jobs (Koppula).

Because there are plausible arguments on both sides of whether AI poses a threat to society, we must come to common ground on what is considered ethical or unethical. For example, even though AI is believed to help lawyers make legal decisions, Koppula and I agree that AI shouldn’t be used in the legal system. Morally this is wrong because the predictions made by these models have been greatly biased. “You should not try to replace a judge, or some legal proceeding simply because now you have a model that is trained on historical cases…far more moral and ethical to not jump the gun” (Koppula). Since court systems are essential to upholding the rights of citizens, it would be morally incorrect to use a biased system to decide whether someone is guilty or innocent. This is why more research should be done on the impacts these AI models would have on society, so that such issues can be resolved before they arise. Designing these AI systems is also genuinely difficult, in the sense that it is hard to decide what counts as ethical or unethical. One prevalent use of AI is in self-driving cars. Those in favor of self-driving cars claim they drive more safely than humans, which would result in fewer accidents and more lives being saved. But self-driving cars bring controversy: “For instance, should autonomous vehicles be programmed to always minimize the number of deaths? Or should they perhaps be programmed to save their passengers at all costs?” (Nyholm and Smids). It may be unsettling to think that your self-driving car might sacrifice your life and the lives of your passengers in order to save more people overall, and this is why such cars create new ethical dilemmas. Even though AI brings a lot of promise to help society, there are still questions that need to be asked about the impacts it brings.
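The two programming choices Nyholm and Smids pose can be made concrete with a small hypothetical sketch. The scenario numbers and policy names below are invented, and no real autonomous-vehicle software works this simply; the point is only that the same crash scenario yields opposite decisions depending on which policy the car was programmed with.

```python
# Hypothetical policies for the dilemma: minimize deaths overall,
# or protect the car's own passengers at all costs.

def minimize_total_deaths(swerve_deaths, stay_deaths):
    # Choose whichever maneuver kills fewer people in total.
    return "swerve" if swerve_deaths < stay_deaths else "stay"

def protect_passengers(passenger_deaths_if_swerve, passenger_deaths_if_stay):
    # Choose whichever maneuver kills fewer passengers, ignoring everyone else.
    return "swerve" if passenger_deaths_if_swerve < passenger_deaths_if_stay else "stay"

# Invented scenario: staying on course kills 3 pedestrians;
# swerving into a barrier kills the 1 passenger.
print(minimize_total_deaths(swerve_deaths=1, stay_deaths=3))                         # swerve
print(protect_passengers(passenger_deaths_if_swerve=1, passenger_deaths_if_stay=0))  # stay
```

Both functions are trivially easy to write; deciding which one *ought* to be deployed is the part no amount of code can settle, which is exactly the ethical dilemma the essay describes.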

Ahuja, Abhimanyu S. “The Impact of Artificial Intelligence in Medicine on the Future Role of the Physician.” PeerJ, 4 Oct. 2019, https://peerj.com/articles/7702/. 

Hagerty, Alexa, and Igor Rubinov. “Global AI Ethics: A Review of the Social Impacts ….” arXiv, https://arxiv.org/pdf/1907.07892.pdf.

Kauffman, Marcos Eduardo, and Marcelo Negri Soares. “AI in Legal Services: New Trends in AI-Enabled Legal Services.” Service Oriented Computing and Applications, Springer, 18 Oct. 2020, https://link.springer.com/article/10.1007/s11761-020-00305-x.

Kaushal, Amit, et al. “Health Care AI Systems Are Biased.” 17 Nov. 2020, https://fully-human.org/wp-content/uploads/2021/01/Health-Care-AI-Systems-Are-Biased.pdf.

Mesa, Natalia. “Can the Criminal Justice System’s Artificial Intelligence Ever Be Truly Fair?” Massive Science, 13 May 2021, https://massivesci.com/articles/machine-learning-compas-racism-policing-fairness/.

Nyholm, Sven, and Jilles Smids. “The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?” Ethical Theory and Moral Practice, Springer, 28 July 2016, https://link.springer.com/article/10.1007/s10677-016-9745-2.

Vinuesa, Ricardo, et al. “The Role of Artificial Intelligence in Achieving the Sustainable Development Goals.” Nature Communications, 13 Jan. 2020, https://www.nature.com/articles/s41467-019-14108-y.