
Artificial intelligence (AI) and the library

Academic integrity

As of this writing, Fresno State does not have any campus policies specifically about using AI in academic work. Be sure you check with your instructor for any policies they may have about what kinds of uses are allowable in what contexts.

Even without any policies, there are two general principles you should keep in mind:

  • Just as you wouldn't copy another person's work and put your name on it, don't claim AI-generated text as your own work. See below for guidelines on citing any output from AI that you might want to use.
  • Be transparent about your use of AI and what you used it for. Academic publishers, such as APA Journals, are increasingly adopting policies requiring disclosure of AI use.

(This guide was written entirely by human librarians. But we did ask ChatGPT for ideas on what to include.)

Citing AI output

How to Cite AI

Citing generative AI (such as ChatGPT) allows you to participate in scholarly conversations by acknowledging others’ words or ideas and drawing on them to shape your own work. Citing AI also creates transparency by informing your reader of how you used AI in your process.

When incorporating ChatGPT or similar AI tools in your research, include a brief description in the Method section or a comparable section of your paper. If you're writing a literature review or a response paper, mention your use of the tool in the introduction. Provide the prompt you used and a portion of the relevant generated text. Because ChatGPT's output is not retrievable by your readers, treat it like the output of an algorithm: credit the algorithm's author with a reference list entry and a corresponding in-text citation.

Format

Author. (Date). Name of tool (Version of tool) [Large language model]. URL

Example

When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).

Reference

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

In-Text Citation

  • Parenthetical citation: (OpenAI, 2023)
  • Narrative citation: OpenAI (2023)


Reference: APA: How to cite ChatGPT? (As of 7 April 2023)

Format

"Description of chat" prompt. Name of AI tool, version of AI tool, Company, Date of chat, URL.

Example

"Examples of harm reduction initiatives" prompt. ChatGPT, 23 Mar. version, OpenAI, 4 Mar. 2023, chat.openai.com/chat.

In-Text Citation

("Examples of harm reduction")

If you create a shareable link to the chat transcript, include that instead of the tool's URL.

MLA also recommends acknowledging when you used the tool in a note or your text as well as verifying any sources or citations the tool supplies.


Reference: How do I cite generative AI in MLA style? (As of 17 March 2023)

Footnotes or Endnotes

For formal citations in a student paper or research article, use numbered footnotes or endnotes. For example,

1. Text generated by ChatGPT, OpenAI, March 7, 2023, https://chat.openai.com/chat.

In this format, ChatGPT serves as the "author" and OpenAI as the publisher or sponsor, followed by the date the text was generated. The URL is optional as readers may not be able to access the content.

If the prompt is not included in the text, it can be added in the note:

1. ChatGPT, response to “Explain how to make pizza dough from common household ingredients,” OpenAI, March 7, 2023.

If you've edited the AI-generated text, mention it in the note (e.g., "edited for style and content").

Author-Date

For author-date citations, include any additional information in parentheses within the text. For example, 

(ChatGPT, March 7, 2023).

Don't cite ChatGPT in a bibliography or reference list unless a publicly available link is provided. Without an accessible link, treat the information like personal communication.

Reference: The Chicago Manual Online: Citation, Documentation of Sources (As of 7 March 2023)

Bias and prejudice

AI systems learn and make decisions based on their training data. If this data contains biases, the AI will likely replicate and even amplify those biases in its outputs. For instance, if an AI is trained on historical hiring data that shows a preference for a certain gender in tech roles, the AI might adopt and perpetuate this bias, favoring that gender in future hiring recommendations. This is especially true for controversial topics.

These biases can stem from various sources, including societal stereotypes, lack of representation in data sets, or biased human decision-making captured in historical data. When AI systems are trained on such biased data, they lack the ability to critically evaluate the fairness or ethical implications of their training material. They simply learn patterns and associations present in the data, without understanding context or societal norms.

This means the viewpoints AI systems are most likely to overlook are those that are underrepresented or misrepresented in their training data, including minority perspectives, non-traditional paths, and any nuances that require contextual understanding or empathy. For example, an AI trained on predominantly Western literature might not accurately reflect or understand cultural nuances from non-Western societies, leading to a lack of cultural sensitivity or awareness in its outputs.

AI's reliance on historical data means it can amplify existing inequalities or biases, making it crucial to carefully curate and examine training data and to continuously monitor AI systems for biased outcomes. Addressing these challenges requires a multidisciplinary approach, incorporating ethical considerations, diversity, and inclusivity from the outset of AI development to ensure more equitable and fair outcomes. The open question is whether this has actually been happening.

Copyright

Copyright and AI is currently a hot topic, especially because AI systems often use copyrighted content for training without the owners' consent, raising significant legal and ethical issues. Copyright laws protect creators' rights to their works, but AI's need for vast data sets for "training" has led to the use of copyrighted materials without explicit permission. This practice, while advancing AI capabilities, challenges the traditional bounds of copyright by potentially creating derivative works that resemble the originals too closely.

The crux of the issue lies in whether AI's use of copyrighted materials for training constitutes fair use, a legal doctrine allowing limited use of copyrighted content without permission for specific purposes such as research or education. However, the scale and commercial intent behind AI's use of such content stretch the fair use concept thin. Since the legal world is still catching up with these technological advancements, there remains a gray area around AI's use of copyrighted works.

Beyond legality, there's an ethical dimension: the responsibility of AI developers to respect intellectual property and possibly compensate creators whose works contribute to AI training. If you were a copyright holder, how would you feel about AI using your writing or design work to train its model? What if it created art mimicking your style? This evolving debate underscores the need for a balanced approach that fosters innovation in AI while respecting and protecting the rights of copyright holders.

Further reading on AI & Copyright from ArsTechnica.

Lee, T. B. (2024, February 20). Why The New York Times might win its copyright lawsuit against OpenAI. Ars Technica. https://arstechnica.com/tech-policy/2024/02/why-the-new-york-times-might-win-its-copyright-lawsuit-against-openai/


Privacy

Privacy concerns in AI usage primarily revolve around data collection, storage, and processing. AI systems require vast amounts of data to learn and improve, leading to potential risks in how personal and sensitive information is handled. Key concerns to consider as ethical AI users include:

  • Data Collection: AI can collect data from a wide range of sources without explicit user consent. This raises questions about the transparency of data collection practices and the extent of data being gathered. Some AI companies provide options for individuals to opt out of having their data used to train models.

  • Data Security: Storing large datasets poses security risks. Data breaches can expose sensitive personal information, leading to identity theft, financial loss, and other forms of cybercrime. AI makes data theft both easier to carry out and harder to detect and prevent, because it can rapidly analyze and exploit vulnerabilities in data security systems. You may begin to see more sophisticated phishing and malware campaigns arise through AI-assisted efforts.

  • Data Usage: AI systems use collected data to make decisions, predictions, or recommendations. There's a risk that this data could be used in ways that discriminate, manipulate, or unfairly target individuals or groups. See the Bias and prejudice box above for more information on data usage.

  • Surveillance: AI-driven surveillance technologies, such as facial recognition, can infringe on privacy rights and lead to unauthorized tracking and monitoring of individuals.

Other ethical concerns

Additional ethical concerns around Artificial Intelligence include:

  • Climate change. Creating and training AI models requires a large amount of power. AI computing results in a significant and growing amount of carbon added to the atmosphere.
  • Labor practices. Training AI isn't a completely automated process. It requires human labor, which is sometimes low-paid, traumatizing, and/or outsourced to firms with unfair practices.

Can you think of other issues raised by the growing use of Artificial Intelligence?

Read more:

Dhar, P. (2020). The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8), 423–425. https://doi.org/10.1038/s42256-020-0219-9
Bartholomew, J. (2023, August 29). Q&A: Uncovering the labor exploitation that powers AI. Columbia Journalism Review. https://www.cjr.org/tow_center/qa-uncovering-the-labor-exploitation-that-powers-ai.php