
Artificial Intelligence (AI) & AI Literacy

Your quick guide to all things related to artificial intelligence, ChatGPT, and the ethics of using GenAI in higher education

Other Ethical Considerations

The rapid rise of generative AI in mainstream culture has required extensive labor to keep up with demand. An early stage of the supply chain subjects workers, commonly in the Democratic Republic of the Congo, to dangerous, precarious conditions as they extract the minerals used to produce AI hardware. Further down the supply chain, workers manually train the AI models, teaching them how to identify items, faces, and other objects. According to Karen Hao, AI journalist and author of Empire of AI, this tedious, repetitive work is largely performed by workers in Chile, Colombia, and Kenya, and it comes at the cost of physical and psychological harm. In some of these regions, workers are paid $2 USD/hour while their American counterparts make on average $20 USD/hour. This disparity stems from Western tech companies outsourcing their work abroad to cheaper labor markets, often through subcontractors like Samasource and Teleperformance.

As of 2025, there is no federal law regulating AI in the workplace in the United States, including no protections against the displacement of employees. Researchers can learn more about legislation introduced at the state level here.

It is evident that labor regulations regarding AI are necessary to provide workers with fair pay and recognition for their labor. Such regulations would include equitable contracts that do not punish employees who seek better working conditions. In addition, psychological care for the post-traumatic stress disorder (PTSD), depression, and other mental health conditions that AI moderators and trainers grapple with because of their work must also be addressed.

The first video below is an investigative news segment from 60 Minutes titled “Humans in the Loop,” in which interviewer Lesley Stahl asks a group of Kenyan digital workers – Naftali Wambalo, Nerima Wako-Ojiwa, Nathan, Fasica, Ephantus, Joan, Joy, Michael, and Duncan – to speak about their hidden labor, working conditions, and extremely low wages from labeling videos and images for AI tech companies.

Author and journalist Karen Hao also uncovers the human cost of AI innovation and implementation in her latest work, Empire of AI, as discussed in an interview with the independent news program Democracy Now! (see video below). She chronicles the history of the AI company OpenAI under Sam Altman’s leadership, as well as AI’s environmental impacts and its exploitation of marginalized communities.

Generative AI tools like ChatGPT, DALL-E, and others are trained and managed in large data centers. As of April 2025, the U.S. houses over 3,600 data centers, with the largest concentrations in Northern Virginia, Texas, and California. These facilities consume enormous amounts of electricity and, in turn, require copious amounts of freshwater to cool them down. Most data centers use fresh, drinkable water for the Herculean task of cooling such powerful systems, circulating it through tubes surrounding the IT equipment. According to Bloomberg News, two-thirds of data centers built since 2022 are located in areas crippled by high levels of water stress, further exhausting already-depleted regions.

Corporations like Microsoft and Google acknowledge that their carbon emissions are rising due to the construction of new data centers to support AI. The electricity needed to run these AI data centers contributes to growing carbon dioxide emissions and places additional strain on electric grids.

Exhaust from data centers can release harmful toxins into the surrounding area. For example, Memphis, a majority-Black city in the American South, is currently reckoning with pollutants like smog, nitrogen oxides, and formaldehyde contaminating its air. This is due in part to Elon Musk’s xAI, whose powerful data center runs unpermitted turbines around the clock, continuing to plague the community.

It’s worth highlighting that organizations and advocacy groups have called for steps to make AI greener. Proposals include running cloud data centers on carbon-free energy by 2030, utilizing hardware that releases less heat, and more. For the sake of our environment, these steps must be taken now, and users should hold corporations accountable for the commitments they are making.

The video below is a clip from PBS NewsHour examining the growing effect AI data centers have on the environment, shedding light on the fact that there are currently (at the time this guide was created in 2025) very few regulations in place to mitigate fossil fuel use, energy demand, and water consumption.

It is crucial to be aware that biases are present in AI software, particularly because AI is created by humans who carry their own internal biases. This is especially apparent in facial recognition software and image-generating AI tools that have failed to recognize faces of color, mistaken one person for another, or generated images revealing problematic patterns in response to specific queries (a prompt describing a lawyer, for instance, returned mostly white, male figures).

ChatGPT and, more recently, xAI’s chatbot Grok have also been known to reproduce hate speech, racial biases, and gender stereotypes in tasks ranging from writing resumes and recommendation letters to suggesting college majors and even drafting medical reports. Microsoft’s Tay, an AI chatbot released on Twitter (now X) in 2016, was removed within 24 hours after posting racist, sexist, antisemitic, and false content. It bears repeating that, because LLMs are trained and programmed by humans with implicit biases, tools like ChatGPT and Tay will draw on whatever information is available on the internet, whether or not that information is factually accurate or contains problematic language. Before relying on AI software and other algorithmic tools, it is imperative to check for accuracy and to pay close attention to questionable wording or phrasing that might reveal harmful rhetoric against marginalized communities.
The videos below feature Dr. Joy Buolamwini, MIT researcher and founder of the Algorithmic Justice League, and Dr. Safiya Noble, the David O. Sears Presidential Endowed Chair of Social Sciences, Director of the Center on Resilience & Digital Justice, and UCLA professor. Both researchers emphasize the need for ethical integration in AI technology, stressing that, as it currently stands, GenAI must incorporate more equitable programming practices and policies to reduce and minimize harm toward women and people of color.

While advancements in AI technology have provided benefits such as more streamlined workflows, assistance with language learning, and visual and artistic enhancement capabilities (among others), privacy concerns and data breaches should be taken into consideration. Whatever a user inputs into a GenAI tool (whether ChatGPT or another system) is often saved and stored by the company that runs the system, which analyzes the information to train and improve its models.

GenAI has also been exploited for scams and fraud through voice phishing and identity theft: it can create believable text, images, and even voice mimicry to impersonate celebrities or members of the public and extort funds from organizations and individuals. A number of reported cases involve AI-generated likenesses appearing in photos, videos, and other content without the subject’s consent or knowledge. These cases are not limited to well-known public figures and have affected communities globally.

AI-driven phishing scams are another example of data breaches enabled by GenAI tools. These attempts often target online accounts across wide-ranging services, including Gmail, healthcare and insurance portals, and even dating apps. As previously mentioned, AI is used in facial recognition technology for surveillance purposes, but it has also been known to wrongfully identify individuals as criminals. Corporations and large companies have used this technology to scrutinize their employees, sometimes producing faulty data and mistaken identities. These errors disproportionately affect Black and Brown people, whom the software misidentifies at higher rates, leading to unlawful arrests.

The consequences of incorporating AI into healthcare – particularly its deleterious effects on the Black community, who are most at risk and in need of medical attention – are discussed in the video clip below by Sayash Kapoor, a computer science Ph.D. candidate at Princeton University and co-author of the book AI Snake Oil.

Programs like ChatGPT and other LLMs are trained on information that is freely and readily available online. However, this includes original work that falls under copyright protection according to Title 17 of the U.S. Code § 201. ChatGPT’s creator, OpenAI, has been accused of violating copyright law and infringing on the privacy of a vast number of users because it employed data scraping techniques to train ChatGPT.

Although a number of similar lawsuits are currently underway involving tech giants like Google, Microsoft, Midjourney, and others (with cases spanning written works, music, art, and more), the most recent and notable development at the intersection of intellectual property and artificial intelligence is the rulings in two separate lawsuits against the AI companies Anthropic and Meta. Writers and creators argued that these companies trained their LLMs on the authors’ works without permission, which they believed constituted copyright infringement. However, in both cases the courts ruled in favor of Anthropic and Meta, finding that training large language models on the writers’ published materials was permissible under fair use.

Complicating these allegations, both Anthropic and Meta had accessed and downloaded these works through piracy sites. The rulings in both cases raise important questions for courts to consider moving forward regarding AI’s role in creative content:

  • What constitutes copyright for works that are available and uploaded online?
  • Does the onus lie on platforms and pages that allow pirated content to be made readily available (and those who upload works protected by copyright), or on companies like Anthropic and Meta that access and use them for LLM training purposes?
  • What are the implications for future human-AI collaboration?
  • Will artificial intelligence eventually be recognized as an author, collaborator, or co-creator for works of art and/or scholarly publications?
  • Or will the nature of AI and how it generates content prove to be a barrier to its recognition and credibility as a creator?

The following videos provide brief overviews and perspectives on how the rulings in current court cases (at the time of this LibGuide’s creation in 2025) might affect copyright regulations across different sectors of the arts and intellectual property:

Additional References

Unfair Labor Practices

The Economist. (2025, January 23). Three big lawsuits against Meta in Kenya may have global implications. The Economist. https://www.economist.com/middle-east-and-africa/2025/01/23/three-big-lawsuits-against-meta-in-kenya-may-have-global-implications

Kelly, B. (2023). Wage against the machine: Artificial intelligence and the Fair Labor Standards Act. Stanford Law & Policy Review, 34(2), 261.

Levy Daniel, M. (2025, February 13). Regional cooperation crucial for AI safety and governance in Latin America. Brookings Institution. https://www.brookings.edu/articles/regional-cooperation-crucial-for-ai-safety-and-governance-in-latin-america/

Perrigo, B. (2023, July 20). Gig workers behind AI face ‘unfair working conditions,’ Oxford report finds. Time. https://time.com/6296196/ai-data-gig-workers/


Environmental Impacts

Chen, S. (2025, March 5). How much energy will AI really consume? The good, the bad and the unknown. Nature. https://www.nature.com/articles/d41586-025-00616-z

Kerr, D. (2025, April 24). Elon Musk’s xAI accused of pollution over Memphis supercomputer. The Guardian. https://www.theguardian.com/technology/2025/apr/24/elon-musk-xai-memphis


Biases in AI

Amin, K. S., Forman, H. P., & Davis, M. A. (2024). Even with ChatGPT, race matters. Clinical Imaging, 109, 110113. https://doi.org/10.1016/j.clinimag.2024.110113

Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Random House.

Clayton, B. J. (2024, May 25). I was misidentified as shoplifter by facial recognition tech. BBC. https://www.bbc.com/news/technology-69055945

Conger, K. (2025, July 8). Elon Musk’s Grok chatbot shares antisemitic posts on X. New York Times. http://proxy-bc.researchport.umd.edu/login?url=https://www.proquest.com/blogs-podcasts-websites/elon-musk-s-grok-chatbot-shares-antisemitic-posts/docview/3228113494/se-2?accountid=14577

Goodin, D. (2023, June 6). FBI warns of increasing use of AI-generated deepfakes in sextortion schemes. Ars Technica. https://arstechnica.com/information-technology/2023/06/fbi-warns-of-increasing-use-of-ai-generated-deepfakes-in-sextortion-schemes/

Gorti, A., Chadha, A., & Gaur, M. (2024). Unboxing occupational bias: Debiasing LLMs with u.s. labor data. Proceedings of the AAAI Symposium Series, 4(1), 48–55. https://doi.org/10.1609/aaaiss.v4i1.31770

Govil, P., Jain, H., Bonagiri, V., Chadha, A., Kumaraguru, P., Gaur, M., & Dey, S. (2025). COBIAS: Assessing the contextual reliability of bias benchmarks for language models. In Websci ’25: Proceedings of the 17th ACM Web Science Conference 2025 (pp. 460–471). Association for Computing Machinery. https://doi.org/10.1145/3717867.3717923

Stokel-Walker, C. (2023, November 22). ChatGPT replicates gender bias in recommendation letters. Scientific American. https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters/

The Guardian & Reuters. (2016, March 26). Microsoft “deeply sorry” for racist and sexist tweets by AI chatbot. The Guardian. https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot

Zheng, A. (2024). Dissecting bias of ChatGPT in college major recommendations. Information Technology and Management. https://doi.org/10.1007/s10799-024-00430-5


Privacy Concerns

AI Incident Database & Our World in Data. (2025, April). Global annual number of reported artificial intelligence incidents and controversies. Our World in Data. https://ourworldindata.org/grapher/annual-reported-ai-incidents-controversies

Barr, K. (2023, May 31). Eating disorder helpline takes down chatbot after its advice goes horribly wrong. Gizmodo. https://gizmodo.com/ai-chatbot-eating-disorder-helpline-neda-1850490751

Coyer, C. C. (2025, June 9). OpenAI case amplifies legal tension between discovery, privacy. Bloomberg Law. https://news.bloomberglaw.com/privacy-and-data-security/openai-case-amplifies-legal-tension-between-discovery-privacy

David, E. (2024, February 15). Don’t date robots — their privacy policies are terrible. The Verge. https://www.theverge.com/2024/2/15/24074063/ai-chatbot-virtual-girlfriend-apps-mozilla-privacy-report

Ellery, S. (2023, March 28). Fake photos of Pope Francis in a puffer jacket go viral, highlighting the power and peril of AI. CBS News. https://www.cbsnews.com/news/pope-francis-puffer-jacket-fake-photos-deepfake-power-peril-of-ai/

Federal Bureau of Investigation. (2024, December 3). Criminals use generative artificial intelligence to facilitate financial fraud. Public Service Announcement: Federal Bureau of Investigation. https://www.ic3.gov/PSA/2024/PSA241203

Flitter, E., & Cowley, S. (2023, August 31). Voice deepfakes are coming for your bank balance. New York Times. http://proxy-bc.researchport.umd.edu/login?url=https://www.proquest.com/newspapers/voice-deepfakes-are-coming-your-bank-balance/docview/2863293142/se-2?accountid=14577

Gallaga, O. (2024, October 14). Security experts warn Gmail users of more sophisticated AI hacks. CNET. https://www.cnet.com/tech/services-and-software/security-experts-warn-gmail-users-of-more-sophisticated-ai-hacks/

Graham, M. M. (2024, June 26). Deepfakes: federal and state regulation aims to curb a growing threat. Thomson Reuters Institute. https://www.thomsonreuters.com/en-us/posts/government/deepfakes-federal-state-regulation/

Grose, J. (2024, March 2). A.I. is making the sexual exploitation of girls even worse. New York Times. http://proxy-bc.researchport.umd.edu/login?url=https://www.proquest.com/blogs-podcasts-websites/i-is-making-sexual-exploitation-girls-even-worse/docview/2933752684/se-2?accountid=14577

Meta. (2025). AI at Meta. Facebook. https://www.facebook.com/privacy/guide/generative-ai/?entry_point=privacy_center_home

Pierson, B. (2023, November 14). Lawsuit claims UnitedHealth AI wrongfully denies elderly extended care. Reuters. https://www.reuters.com/legal/lawsuit-claims-unitedhealth-ai-wrongfully-denies-elderly-extended-care-2023-11-14/


Intellectual Property

Becker, J. (2024, September 19). Governor signs landmark AI transparency bill, empowering consumers to identify AI-generated content [Press release]. https://sd13.senate.ca.gov/news/press-release/september-19-2024/governor-signs-landmark-ai-transparency-bill-empowering

Bondari, N. (2025, February 4). AI, copyright, and the law: the ongoing battle over intellectual property rights. USC Intellectual Property and Technology Law Society (“IPTLS”). https://sites.usc.edu/iptls/2025/02/04/ai-copyright-and-the-law-the-ongoing-battle-over-intellectual-property-rights/#_ftn7

Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 37 C.F.R. pt. 202 (2023). https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence

Isaac, M. (2025, June 4). Reddit sues Anthropic, accusing it of illegally using data from its site. New York Times. http://proxy-bc.researchport.umd.edu/login?url=https://www.proquest.com/blogs-podcasts-websites/reddit-sues-anthropic-accusing-illegally-using/docview/3215555682/se-2?accountid=14577

Jahner, K. (2025, July 2). OpenAI sued by new set of authors over training data copyrights. Bloomberg Law. https://news.bloomberglaw.com/artificial-intelligence/openai-sued-by-new-set-of-authors-over-training-data-copyrights

Justia Dockets & Filings. (n.d.). Getty Images (US), Inc. v. Stability AI, Inc.: Complaint filed with Jury Demand against Stability AI, Inc. https://docs.justia.com/cases/federal/district-courts/delaware/dedce/1:2023cv00135/81407/1

U.S. Copyright Office. (2023). Copyright and artificial intelligence. https://www.copyright.gov/ai/

Vincent, J. (2023, January 16). AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit. The Verge. https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart

Zirpoli, C. T. (2025, July 18). Generative artificial intelligence and copyright law. Congressional Research Service. https://www.congress.gov/crs-product/LSB10922