Artificial intelligence (AI) encompasses a variety of computer-based tools that draw on existing data to solve a problem, such as search tools that rely on algorithms to find information or language translation tools (McCarthy, 2007). Generative artificial intelligence refers to a suite of tools that draw on existing data to create new artifacts in response to user prompts (Goodfellow et al., 2020). For example, ChatGPT is a large language model (LLM) and conversational generative AI tool trained on vast amounts of existing text. In response to prompts from users, ChatGPT generates text that mimics human writing through a process of statistical correlation.
The Modern Language Association and Conference on College Composition and Communication's working paper explains how LLMs and ChatGPT work in greater detail.
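To make “statistical correlation” concrete, here is a deliberately tiny, hypothetical sketch: a bigram model that counts which words follow which in a sample text, then generates new text by sampling likely next words. Real LLMs are vastly more sophisticated (and the corpus and function names below are invented for illustration), but the underlying idea of predicting a plausible continuation from patterns in observed text is the same.

```python
import random
from collections import defaultdict, Counter

# Toy illustration (not how ChatGPT is actually built): learn which words
# tend to follow which in a tiny corpus, then generate by sampling.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:  # no observed continuation; stop
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # text that mimics patterns in the corpus
```

Every word the sketch emits was seen following the previous word in the corpus; an LLM does something analogous over billions of texts, which is why its output reads as fluent mimicry rather than understanding.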
Generative AI is an emergent and rapidly evolving space. New tools, and updates to existing tools, are released frequently, but a few examples help frame what these tools do:
At DePaul, we’re encouraged to learn more about emerging AI technologies and help students to learn to use them responsibly.
As noted in DePaul's Approach to Artificial Intelligence, “Universities have a special challenge: on the one hand, we need to prepare our students for a world of work in which AI will certainly play a part, but on the other hand, we want our students to understand and practice integrity in the use of any sources, including those generated by Artificial Intelligence.” DePaul faculty and staff have been charged “to create conditions for members of the University community to learn about both the benefits and dangers of AI and act responsibly.”
As we continue learning about generative AI tools and considering our responses, the resources in this guide will help you to acknowledge the tools with students and consider how they might impact your teaching right now.
Masland (2023) describes a range of ongoing responses to emerging AI technologies in higher education. In doing so, Masland encourages instructors to reflect while considering the best ways to integrate AI in their teaching:
A syllabus statement and other classroom policies will help you to enforce your expectations for AI use in your classroom.
An AI syllabus statement should help you and your students identify appropriate AI use for the class. The following questions are designed to help you reflect on and describe appropriate AI use in your classes; they were generated from a review of syllabus statements from many institutions curated by Lance Eaton.
Everyone is still learning about generative AI tools. Providing students with some information about the limitations of AI tools will help them to make informed decisions about how they use them.
These examples have been written with DePaul’s approach to AI in mind. Please note some departments and colleges may have their own AI policies that must be used verbatim or considered when drafting AI statements.
Students are allowed to use generative AI tools, such as ChatGPT or DALL-E 2, on specific assignments in this course. Each assignment sheet will indicate if AI use is permitted. When permitted, students must document and credit the AI tool. For example, paraphrased text generated using ChatGPT should include a citation according to APA, MLA, or Chicago Style generative AI citation guidelines.
Material generated using other AI tools should follow a similar citation convention.
The use of generative AI tools is permitted in this course for the following activities:
If you use generative AI in any of the above ways, please describe the tools you used, and how you used them, in a paragraph at the end of your discussion post or essay. Please include the prompts you provided to the generative AI tools.
The use of generative AI tools is prohibited in this course for the following assignments and activities:
If you are unsure about whether or not a specific tool or use of AI is permitted, please contact me. Using generative AI in a way that is not permitted is considered a violation of DePaul’s Academic Integrity Policy.
Generative AI tools use algorithms and models trained on existing texts to generate content like writing, images, and videos in response to prompts from users. ChatGPT, Midjourney, Google Bard, and DALL-E are examples of generative AI tools. Please refrain from using generative AI in this course for any purpose. We will be developing skills that are important to practice on your own, and using generative AI may inhibit your development, practice, or understanding of those skills. During class, we will discuss how and why generative AI may disrupt your learning.
If you’re unsure if a specific tool makes use of AI, or if a specific tool is permitted for use on assignments in this course, please contact me. Attempting to pass off AI-generated work as your own will violate DePaul’s Academic Integrity Policy.
In this course, we will spend some time analyzing how AI impacts our work as writers. We will think critically about the best and worst ways to use AI in our University writing and the writing we do outside of the University. I will use AI to brainstorm assignment and activity ideas and to generate writing that we can analyze and provide feedback on as a class.
For your course work, we will treat AI-based assistance, such as the use of ChatGPT and Google Bard, the same way we treat collaboration with other people: you are welcome to talk about your ideas and work with other people, both inside and outside the class, as well as with AI-based tools. You can use these resources for every assignment in our course unless otherwise noted in the assignment instructions. However, all work you submit must be your own. Avoid hitting “Copy” within your conversation with an AI tool. Do not have your assignment and the AI tool open at the same time to limit the urge to copy the responses. Use your conversation with the AI as a learning experience, then close the interaction down, open your assignment, and let your assignment reflect your revised knowledge.
You should never include in your assignment anything that was not written directly by you without proper citation. In this course, we will predominantly use MLA citation style. Use the generative AI MLA citation guidelines any time you paraphrase, quote, or incorporate into your own work any content (whether text, image, data, or other) that was created by AI. Please note that the MLA guidelines also recommend adding notes to your work where you explain in greater detail how you used AI. Throughout the quarter, I will also ask you to reflect on your AI use in class discussions and process-based reflective writing, and will ask you to provide direct links to your chats with ChatGPT or Google Bard.
AI tools can generate inaccurate information, even to the point of making up sources. AI tools are trained on existing information and therefore are limited in supporting future-oriented thinking. AI tools are trained on existing information that contains bias. Therefore, they may perpetuate those biases. Using an AI tool gives it permission to integrate your prompts into its learning; essentially, you’re providing free labor in exchange for the use of the tool, and you can’t always control how your work will be used to inform future iterations of the tools.
I will read your work throughout the quarter and learn more about your writing style. During class, we will also talk about the features and limitations of AI-generated writing. Throughout the quarter, I will see how AI responds to our assignment prompts. If I suspect that you have submitted work that is not your own without citation, including work generated by AI, I will set a time to meet with you. If you blatantly violate DePaul’s Academic Integrity Policy and the citation guidelines in our syllabus, I will report an academic integrity violation.
While all DePaul courses are guided by DePaul’s Academic Integrity Policy, professors in other courses may set different guidelines for using AI assistance. Please check with your professors before using AI assistance to support your course work.
We’re currently in a time of transition and learning with generative AI, the new-ish tools that respond to prompts and create new texts, code, images, and other materials that are grounded in existing human-created work. Here’s how I’d like us to think about, and learn about, these tools together:
In this course, you’re welcome to treat generative AIs like ChatGPT, Google Bard, Microsoft Bing, Anthropic Claude, DALL-E, etc., as you would a human resource: You can bounce ideas off of the AI. You can share some of your writing with the AI and ask for grammatical or syntactical advice (as long as you’re comfortable with your work becoming part of the AI’s training, with no compensation for you!). You can have a conversation with the AI as you’re planning a draft.
But, my expectation is that the work you submit in class is primarily authored by you. A key learning path in our class involves practicing different writing skills, and neither of us will know what you’re learning if you’re relying on AI to produce the text that represents your thinking.
Human authorship is especially important in creative and reflective writing, and we’ll be doing both in this class! You’ll miss out on key learning if you’re relying on AI for the creation of these types of texts. In each assignment description, I’ve included explicit expectations on how you may or may not use AI for that assignment.
If I think an assignment might have a bit too much AI intervention, my first step will be to have a conversation with you, so that we can understand your composition process and work together to align expectations. However, if this becomes a pattern, I’ll be relying on DePaul’s Academic Integrity Policy (also detailed below) for next steps.
The screenwriting program at DePaul is committed to fostering the unique voices and perspectives of up-and-coming screenwriters, as evidenced by our motto: “learn your craft, find your voice, make your mark.” Writers should be aware that AI (generative artificial intelligence) is a tool and only a tool. It does not create; it aggregates and repurposes previously published content. As such, AI may facilitate the story development process, but cannot replace what each individual writer brings to a story: their distinctive point of view and voice. We understand that students may wish to use AI tools to aid in the development and early drafts of screenplays and screenwriting assignments. To do so with academic integrity, students must understand and adhere to the following:
Use the “Curated Readings and Podcasts About AI” below to guide some of your discussion. Consider how your field or discipline impacts the conversation.
This pre-packaged 30-minute lesson plan is designed to help you begin a conversation about AI with your students. The lesson, developed by DePaul University Senior Instructional Designer Jes Klass, is divided into the following parts:
The following resources map the lesson parts and are meant to be copied and adapted to suit your context.
Lesson Plan Slide Deck
Lesson Plan Instructor Guide
A number of AI detection tools are available, designed to identify text that might have been generated by AI. Educators experimenting with detection tools are discovering that they can incorrectly identify student writing as AI-generated. Detection tools may also fail to detect AI-generated writing, especially when a tool was built to detect content generated by only one specific AI tool.
For these reasons, we recommend that you use AI detection tools with caution and consider their false positive rates.
If you’re using an AI detection tool with student writing, use the results only to start a conversation with the student. Review the work further and be open to the possibility that the tool’s detection results were incorrect.
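The base-rate problem behind this advice can be sketched with back-of-the-envelope arithmetic. All of the numbers below (class size, usage rate, detection rates) are invented assumptions for illustration, not measurements of any real detection tool:

```python
# Hypothetical illustration of why false positive rates matter.
# None of these rates describe any actual detection product.
def expected_flags(n_students, share_using_ai, true_positive_rate, false_positive_rate):
    """Expected number of flagged submissions, split into correct and wrongful flags."""
    ai_users = n_students * share_using_ai
    honest = n_students - ai_users
    correct_flags = ai_users * true_positive_rate      # AI users correctly flagged
    wrongful_flags = honest * false_positive_rate      # honest students wrongly flagged
    return correct_flags, wrongful_flags

# Assume a 200-student course, 5% AI use, 90% detection, 2% false positives.
correct, wrongful = expected_flags(200, 0.05, 0.90, 0.02)
print(f"{correct:.1f} correct flags, {wrongful:.1f} wrongful flags")
# prints: 9.0 correct flags, 3.8 wrongful flags
```

Under these made-up assumptions, roughly 3.8 of the 12.8 expected flags, about 30 percent, would land on honest students, even though the tool's false positive rate sounds low. That arithmetic is why a detection result should only ever start a conversation, never settle one.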
Turnitin has developed resources to help instructors design assignment prompts that avoid AI misuse, including an AI Misuse Checklist and Rubric. See “Draft a syllabus statement and discuss academic integrity with students” for additional ideas on addressing AI misuse in the classroom.
When Turnitin released AI detection in April 2023 as part of the Originality Check tool, clients were not able to opt out of the AI detection feature. Now that clients have a choice, DePaul has joined other universities in evaluating this tool and determining it’s not the right fit for our community.
As the Provost noted in her message about AI at DePaul, faculty are encouraged to meet this unique challenge by learning about the affordances and limitations of AI tools, discussing the impact of AI on their discipline and curriculum with colleagues, and deciding how they’ll approach AI in their classes: developing syllabus statements that set expectations and reviewing assignments for AI impact.
This guide is a resource to support these efforts. Please also stay tuned to the Teaching Commons Events page for future programming offerings. If your unit is planning to discuss how AI is impacting your work, a member of our team would be happy to join you and support that conversation. Please email
If there are questions or concerns about this decision, please contact TLT@depaul.edu or your faculty representative to this committee.
The following are great resources for engaging with higher education’s ongoing responses to generative AI:
The following texts represent some of the recent conversations about generative AI. DePaul Driehaus College of Business Online Learning Director James Moore also maintains a curated list of texts that address AI.
Roose provides an overview of how ChatGPT works as an A.I. chatbot, including some screenshots that demonstrate the interface.
This alarmist piece is referenced in many of the other articles. Eliciting panic can be effective! Useful to read as part of the initial discourse.
Bogost demonstrates some limitations of ChatGPT and argues “ChatGPT isn’t a step along the path to an artificial general intelligence that understands all human knowledge and texts; it’s merely an instrument for playing with all that knowledge and all those texts.”
Tony Wan discusses the emerging field of AI tools in use for education beyond ChatGPT, including platforms that can help with online language learning, and more collaborative approaches to using AI technology in education.
In this white paper, the Office of Science and Technology Policy (OSTP) provides a “set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”
Vogels discusses the results of a March Pew Research Center survey. The survey results indicate that 58% of U.S. adults are familiar with ChatGPT, but few have tried it.
Klein examines how the internet and AI affect human cognition and productivity. Klein calls for increased attention to the way AI, and the jobs and systems that use AI, can be built to enhance human intelligence.
Thompson and Hsu explore multiple AI image generators and AI image detectors, explain how the image detectors are designed to work, and provide many examples. They begin to discuss the implications of these tools and their successes and failures.
McMurtie provides a near-immediate, yet thoughtful, academically-framed response to the release of ChatGPT: adjusting learning processes, incorporating generative AI in curricula, and rethinking assessment.
Warner argues ChatGPT has created an opportunity to examine how we value learning and how we create experiences that help students learn.
Eleven academics share their advice for approaching ChatGPT. Suggestions include “Think a Few Years Out,” “Invite Students Into the Conversation,” and “Experiment. Don’t Panic.”
This detailed walkthrough of how Grobe incorporated ChatGPT into his course shows the affordances and limitations of the tool.
D'Agostino maps some of the ways GPT-4 compares to GPT-3.5. D'Agostino describes some recent thinking about generative AI tools and their place in education.
Joppie, a DePaul University Senior Instructional Designer, explores how education has encountered and integrated new technologies in the past and recommends steps for considering generative AI in teaching contexts.
In this newsletter McMurtie discusses the challenges and considerations surrounding the use of AI in grading and the potential impact on faculty and students. McMurtie explores the ethical, pedagogical, and legal aspects of AI tools and emphasizes the need for guidelines, transparency, and awareness of the limitations of AI in teaching.
D'Agostino describes how faculty members are seeking to upskill in AI for teaching and learning during the summer and maps some current grassroots and institutional efforts to support faculty. D’Agostino advocates for more comprehensive and widespread support to help faculty navigate the AI landscape in education effectively.
D'Agostino explores how AI tools in education can assist disadvantaged students but also widen the education gap. D’Agostino maps concerns related to biases and unequal access as well as some of the efforts to address and anticipate those concerns.
Ethan Mollick focuses on the importance of learning to use AI correctly, by learning to collaborate with it as opposed to expecting it to do the work for you.
Melo explains how ChatGPT can be used as an assistive technology for students and instructors with ADHD
Mills compares GPT-4 to GPT-3.5 and provides some updated recommendations for assignment design. Mills explains why AI detection tools are unreliable.
McMurtie explores two ways professors and teaching experts have involved students in research and policymaking related to AI tools.
Knox anticipates the launch of Turnitin’s AI detection tool. Knox identifies some concerns associated with the tool, including the impact of the fast rollout and questions about the reported accuracy rates.
Klass, a DePaul University Senior Instructional Designer, encourages professors to consider the guidelines they provide for AI use in their classes, provides suggestions for guiding students through using AI in a sustainable and learning-focused way, and argues it is time for everyone to start working AI into their course materials.
McMurtie examines how AI tools like ChatGPT can impact students with disabilities. McMurtie emphasizes the need to involve students with disabilities in policy development and to educate all students about the limitations of AI.
Darby maps out four strategies to help all instructors navigate ChatGPT in their classes. Darby’s recommendations are geared toward all instructors, including those who have resisted addressing ChatGPT up until now.
Chen provides some strategies for most effectively using generative AI, including strategies for considering safety and responsibility while using generative AI. This resource could be a good starting point for students beginning to experiment with generative AI and explore prompt engineering.
A free and open source resource for learning how to engineer prompts. The resources are organized based on skill level.
Liu provides examples of how instructors might use generative AI and provides guidelines for effective prompt engineering.
A brief look at the trustworthiness of the content produced by generative AI.
Professors at the University of Minnesota Law School used ChatGPT to generate answers on a small set of blindly graded exams. ChatGPT performed, on average, at a C+ level. The professors describe their methods and results and discuss the implications for law education.
Terwiesch, an operations management professor, asked ChatGPT several final exam questions. He found that it performed well on general questions, made some glaring math errors, and struggled with more complex analytical responses. It earned a B/B+ on the exam.
This technical report from OpenAI explores how GPT-4 performed on a series of standardized tests. For a summary of some of the results, see “OpenAI's GPT-4 aced all of its exams – except for these.”
Klein’s conversation with Altman from 2021 is pretty broad, but it’s helpful to hear from the founder of OpenAI (the source of ChatGPT and Dall-E) to understand his operating principles.
Thompson and Thompson offer a wide-angle-lens conversation that situates generative AI in potential technological, economical, and societal impacts.
Roose and Newton explore some possibilities associated with Google’s Bard, provide specific examples of how people are using ChatGPT in their everyday lives, and explore Spotify’s AI D.J.
Bruff, Lang, and Miller discuss generative AI and how instructors might respond to generative AI tools. Bruff points listeners to Lang’s Cheating Lessons and Miller’s Remembering and Forgetting in the Age of Technology as resources for considering how generative AI tools best fit in higher education.
Teachers and students reflect on their first semester with ChatGPT and how it is changing the classroom.
See the Teaching Commons Events page for opportunities to learn more about AI. Email firstname.lastname@example.org to suggest additions to this guide.