How to deal with generative AI in UNIGIS studies

Generative artificial intelligence (AI) systems offer a wide range of potential for studying and teaching, but they also pose new challenges. This page answers important questions about studying with and using generative AI. You should also take a look at our guidelines for dealing with generative AI.

It is important to us that you can manage your learning processes autonomously and secure your learning progress, whether despite or with the help of generative AI systems. Ultimately, the intellectual process and the responsibility for the content produced must always remain with humans and must not be outsourced to a machine. Aleksandr Tiulkanov’s decision tree helps with this:

[Figure: decision tree on the safe use of generative AI]

Based on Aleksandr Tiulkanov (2023): Is it safe to use ChatGPT for your task? Available online under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/deed.de).

What exactly is generative artificial intelligence?

Generative artificial intelligence (AI) is a type of AI that can create content such as text, images, videos, audio or software code. It is based on sophisticated deep learning models that are trained on huge amounts of data in a so-called “self-supervised” approach: patterns and relationships in the data are identified and encoded without manually labelled examples. In a second step, the inference phase, this encoded information is used to generate, in response to a prompt, new content that resembles the learned patterns and relationships.
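
To make the inference phase concrete, here is a minimal sketch in Python of sending a prompt to a hosted model and receiving generated text. It assumes the openai package (version 1 or later) and an API key in the environment; the model name is purely illustrative.

  # Minimal prompt/response round trip against a hosted generative model.
  # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
  from openai import OpenAI

  client = OpenAI()  # reads the API key from the environment
  response = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model name
      messages=[{"role": "user", "content": "Explain map projections in two sentences."}],
  )
  print(response.choices[0].message.content)  # the generated text

Note that the model does not retrieve this answer from a database; it generates it token by token from the patterns learned during training, which is why the output must always be checked.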

The most important questions and answers

Here you will find questions and answers that, together with the guidelines for dealing with generative AI, provide an initial orientation for the use of generative AI systems in UNIGIS studies. Please note that the available systems are constantly evolving and the situation is currently quite dynamic, so this guide may have to be adapted over time. Feel free to check back here, stay critical and use your common sense more than ever.

Status: 30.09.2024

Unless the use of generative AI systems in UNIGIS studies is explicitly prohibited by the lecturer, their use is generally a matter of your own responsible handling of this technology. Always consult Tiulkanov’s decision tree for a self-critical assessment of whether use is permissible. If generative AI has been used, this must be indicated (more on this under “How do I cite the use of generative AI correctly?”).

Generative AI is particularly suitable for supporting the introduction to a new topic, providing an initial overview of relevant aspects, or offering first ideas. However, the deeper and more specific a question, the more error-prone the generative step (the “inference phase”) becomes, in which a new product (such as text or images) is assembled from components of the training data. The more thoroughly you already understand the subject area, the better equipped you are to assess the validity of the results.

How do I cite the use of generative AI correctly?

If you use generative artificial intelligence such as ChatGPT, Microsoft Copilot, Gemini or similar systems for assessment-relevant coursework (assignments, thesis), you must indicate this in a suitable form.

If generative AI is used in the workflow, for example in an analysis step or in scripting, this must be declared in the form of an “AI statement”. Citing generative AI directly as a source is problematic insofar as texts generated in this way are classified as grey literature. If it is nevertheless necessary to cite generative AI, the MLA citation style is recommended. This style avoids naming companies (e.g., OpenAI, Google, Microsoft) as authors, as authorship is always tied to the ethical and legal responsibility of a person.

To give you an example:

“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.

For more information and examples, please consult the MLA website.

Using generative AI to create scientific texts without declaring it is plagiarism.

Output from generative AI counts as grey literature. Generated text is not reproducible and has not undergone any scientific quality control (such as peer review). It is therefore not suitable as a valid reference on which to base the scientific argumentation of a thesis.

The use of generative AI to create images (e.g., graphics, diagrams) is not permitted in theses, as the legal framework regarding the use of AI-generated imagery in publicly accessible documents is currently unclear. In assignment solutions that are not published, AI-generated images may be used, provided they are clearly labelled as such.

The use of generative AI for proofreading is acceptable as long as it is documented in a so-called “AI statement”. The statement must name the software and its version and specify which sections of text were modified and in what way.

Here is an example from a student’s thesis:

AI Statement
During the preparation of this thesis, the author used ChatGPT (version 3.5, April 2024) to improve the writing style and to check grammar and spelling. After using this tool, the author reviewed and edited the content as needed and she takes full responsibility for the content of the thesis.

The models underlying language-based generative AI systems are referred to as ‘large language models’ (LLMs). The potential benefits of these systems in combination with the user interfaces of geographic information systems (GIS), as well as their applicability in cartographic contexts, are the subject of ongoing research.

Hochmair et al. (2024) found that generative AI models such as ChatGPT-4, Gemini, Claude-3 and Copilot perform unevenly across different types of spatial questions. While these models perform solidly on general questions about geoinformatics concepts and methods (spatial literacy) and on the interpretation of programming code and functions, they show clear weaknesses in areas such as map generation, code generation and spatial reasoning. LLMs therefore still have difficulty reliably mastering specialized geoinformatics tasks, and a sound knowledge of the theories and concepts of geoinformatics and their application remains essential for the time being.
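
One practical consequence: never run AI-generated analysis code unchecked, but test it against a result you can derive yourself. Below is a minimal, hypothetical sketch of such a sanity check in Python, assuming the shapely package is installed; the 1% tolerance is an arbitrary choice.

  # Sanity check for a (hypothetically AI-generated) buffer operation:
  # a unit buffer around a point should approximate a circle of area pi.
  import math
  from shapely.geometry import Point

  buffered = Point(0.0, 0.0).buffer(1.0)  # polygonal approximation of a circle
  expected = math.pi                      # analytical area of a unit circle

  relative_error = abs(buffered.area - expected) / expected
  assert relative_error < 0.01, "generated geoprocessing code is off"
  print(f"buffer area = {buffered.area:.4f}, analytical = {expected:.4f}")

If a generated script fails such a check, that is exactly the kind of weakness in code generation and spatial reasoning the study describes.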

High energy consumption, poor working conditions for clickworkers, and the market power of large companies are factors that can create a lack of transparency and cause uncertainty. Estimates suggest that a single request to an AI bot uses 28 to 160 times more energy than a regular Google search[1][2][3][4]. According to Patterson et al. (2021)[5], training OpenAI’s model “GPT-3” produced 552 tons of CO2, about the same amount generated by the annual commutes of all 2,900 PLUS employees.
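
To put the commuting comparison into perspective, here is a small back-of-the-envelope calculation in Python. The 552 tons and the 2,900 employees are the figures from the text above; the per-kilometre emission factor is an assumed average for a passenger car.

  # Rough plausibility check of the commuting comparison above.
  gpt3_training_t_co2 = 552   # Patterson et al. (2021), tons of CO2
  plus_employees = 2_900      # staff count from the text

  kg_per_commuter_year = gpt3_training_t_co2 * 1000 / plus_employees
  km_per_commuter_year = kg_per_commuter_year / 0.15  # assumed ~0.15 kg CO2/km by car

  print(f"{kg_per_commuter_year:.0f} kg CO2 per employee and year")    # ~190 kg
  print(f"equivalent to roughly {km_per_commuter_year:.0f} km by car")  # ~1,270 km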

Because of this, and in line with the principle of proportionality, you should prioritize AI methods that are efficient in terms of data, energy, and resources. Certifications of the data centers where training and inference take place can tell you about the sustainability of the computing power used; a common standard is CEEDA (Certified Energy Efficient Datacenter Award). To measure the CO2 emissions of your own AI usage, tools like CodeCarbon.io or the ML CO2 Impact calculator[6] can help.
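
CodeCarbon, for example, can be wrapped around any Python workload. Here is a minimal sketch, assuming the codecarbon package is installed (pip install codecarbon); the computation itself is just a stand-in.

  # Estimate the CO2-equivalent emissions of a piece of Python code.
  from codecarbon import EmissionsTracker

  def heavy_computation():
      # stand-in for a training or inference step
      return sum(i * i for i in range(10_000_000))

  tracker = EmissionsTracker(project_name="unigis-demo")
  tracker.start()
  try:
      heavy_computation()
  finally:
      emissions_kg = tracker.stop()  # estimated kg CO2-equivalent

  print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

CodeCarbon estimates emissions from the measured power draw of CPU, GPU and RAM combined with the carbon intensity of the local electricity mix, so the same code yields different figures in different regions.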

Less is more:

If:

  • you don’t really need AI tools to meet your goals and achieve the desired outcomes,
  • you’re unsure if the provider of your AI tool uses renewable energy for their data centers,
  • and your AI method causes a lot of CO2 emissions,

then consider choosing not to use artificial intelligence!

[1] Sreedhar, Nitin. “AI and its carbon footprint: How much water does ChatGPT consume?” Mint Lounge, 22 Oct. 2023, https://lifestyle.livemint.com/news/big-story/ai-carbon-footprint-openai-chatgpt-water-google-microsoft-111697802189371.html.
[2] Ludvigsen, Kasper Groes Albin. “ChatGPT’s energy use per query.” Medium, 6 Aug. 2023, https://towardsdatascience.com/chatgpts-energy-use-per-query-9383b8654487.
[3] Pointon, Chris. “The carbon footprint of ChatGPT.” Medium, 22 Dec. 2022, https://medium.com/@chrispointon/the-carbon-footprint-of-chatgpt-e1bc14e4cc2a.
[4] de Vries, Alex. “The growing energy footprint of artificial intelligence.” Joule 7.10 (2023): 2191-2194, https://doi.org/10.1016/j.joule.2023.09.004.
[5] Patterson, David, et al. “Carbon emissions and large neural network training.” arXiv preprint arXiv:2104.10350 (2021), https://arxiv.org/abs/2104.10350.
[6] Lacoste, Alexandre, et al. “Quantifying the carbon emissions of machine learning.” arXiv preprint arXiv:1910.09700 (2019), https://arxiv.org/abs/1910.09700.

Further Resources