How to deal with generative AI in UNIGIS studies

Generative artificial intelligence (AI) systems offer a wide range of potential for studying and teaching, but they also pose new challenges. This page answers important questions about studying with and using generative AI. You should also take a look at our guidelines on dealing with generative AI.

It is important to us that you can manage your learning processes autonomously and secure your learning progress, with or even despite generative AI systems. Ultimately, the intellectual process and the responsibility for the content produced must always remain with humans and must not be outsourced to a machine. Aleksandr Tiulkanov’s decision tree helps with this:

[Figure: flowchart for deciding whether to use generative AI]

Based on Aleksandr Tiulkanov (2023): Is it safe to use ChatGPT for your task? Available online (© https://creativecommons.org/licenses/by/4.0/deed.de)

What exactly is generative artificial intelligence?

Generative artificial intelligence (AI) is a type of AI that can create content such as text, images, videos, audio, or software code. Generative AI is based on sophisticated deep learning models that are trained on huge amounts of data in a so-called “self-supervised” approach: patterns and relationships in the data are identified and encoded in a “self-learning” process. In a second step, the inference phase, this information is used to generate, in response to a prompt, content that resembles the learned patterns and relationships.

The most important questions and answers

Here you will find questions and answers on the topic which, together with the guidelines on dealing with generative AI, provide an initial orientation for using generative AI systems in UNIGIS studies. Please note that the available systems are constantly evolving and the situation is currently quite dynamic; it is therefore possible that this guide will have to be adapted. Feel free to check back here, stay critical, and use your common sense more than ever.

Status: 02.12.2024

In UNIGIS studies, unless the module lecturer explicitly prohibits the use of generative AI systems, applying this technology is generally a matter of your own responsible judgment. If generative AI is used, this must always be clearly indicated (see the section “How do I cite the use of generative AI correctly?”). A separate section (“Is it allowed to use generative AI in the master thesis?”) addresses the use of generative AI in the master thesis.

Generative AI is particularly suitable for supporting the introduction to a new topic, providing an initial overview of relevant aspects, or offering first ideas. However, the deeper and more specific a question becomes, the more problematic the generative process step (the ‘inference phase’) is: during this step, a new output (such as text, images, etc.) is created by combining individual components learned from the training data. The deeper your understanding of the subject area, the better you can verify the validity of the results.

We strongly advise against using generative AI in your thesis. This also applies to proofreading and language polishing.

Here is an example of why using AI-generated text instead of your own causes problems:

Method xy offers a flexible solution by including smooth spatial terms to non-parametrically model complex spatial patterns (Author, date).

Did you write this text yourself because you understood the mathematical background of the method well?

Would you be able to explain what a “smooth spatial term” is if you had to justify it?

A deep understanding of a topic’s concepts can only be gained through autonomous learning and intellectual processes. It is this knowledge that the disclosure statement “I take full responsibility for the content of this work” refers to.

If you use generative AI such as ChatGPT, Microsoft Copilot, Gemini, or similar systems for your work (e.g. assignment solutions), you must clearly indicate this. The use of generative AI should be specified in a “Generative AI disclosure statement”. This statement should include the following:

  • The software used and its version (number).
  • A clear indication of which sections in text or steps in the work were influenced or generated by the AI, and how.

Here is an example from a student’s assignment solution document:

Generative AI disclosure statement:
During the creation of this solution document, the author used ChatGPT (Version 3.5, April 2024) to improve the writing style as well as to check spelling and grammar. The content was reviewed and, if necessary, revised by the author after using the tool. The author takes full responsibility for the content of this work.

Should it be necessary to cite generative AI for text or code, we recommend using the MLA citation style to indicate these sections. Texts generated with generative AI are considered grey literature and therefore cannot be used as a reliable source in scientific work.

To give you an example:

“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.

For more information and examples, please consult the MLA website.

The use of generative AI to create assessment-relevant texts or code without proper indication constitutes academic misconduct. It is equivalent to plagiarism and may result in a negative grade or, in the worst case, even exclusion from your studies.

The models underlying the language-based systems of generative AI are referred to as ‘large language models’ (LLMs). The potential benefits of these systems when combined with user interfaces of geographic information systems (GIS), as well as their applicability in cartographic contexts, are the subject of ongoing research.

Hochmair et al. (2024) found that generative AI models such as ChatGPT-4, Gemini, Claude-3, and Copilot perform differently on different types of spatial questions. While these models perform solidly on general questions about geoinformatics concepts and methods (spatial literacy) and on interpreting programming code and functions, they show clear weaknesses in areas such as map generation, code generation, and spatial reasoning. This shows that LLMs currently still have difficulty reliably mastering specialized geoinformatics tasks. A sound knowledge of the theories and concepts of geoinformatics and their application therefore remains essential for the time being.

High energy consumption, questionable working conditions during AI development (such as those involving clickworkers), and the market dominance of large companies contribute to a lack of transparency and create uncertainty. The estimated energy consumption of an AI-bot query compared to a conventional Google search varies between a factor of 28 and 160[1][2][3][4]. According to a study by Patterson et al., the calculated CO2 emissions caused by training OpenAI’s AI model ‘GPT-3’ amount to 552 tonnes[5]. The commutes of all 2,900 PLUS employees generate roughly the same amount of CO2 within a year.
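To make the factor range of 28–160 cited above tangible, here is a minimal back-of-the-envelope sketch. The baseline of 0.3 Wh per conventional Google search is an illustrative assumption, not a figure from this page:

```python
# Back-of-the-envelope comparison of energy per query, based on the
# factor range of 28-160 cited above.
GOOGLE_SEARCH_WH = 0.3             # assumed energy per Google search (Wh) - illustrative only
FACTOR_LOW, FACTOR_HIGH = 28, 160  # factor range reported in the cited sources

ai_query_low = GOOGLE_SEARCH_WH * FACTOR_LOW    # lower-bound estimate (Wh)
ai_query_high = GOOGLE_SEARCH_WH * FACTOR_HIGH  # upper-bound estimate (Wh)

print(f"One AI-bot query: roughly {ai_query_low:.1f}-{ai_query_high:.1f} Wh")
```

Even under these rough assumptions, a single AI-bot query would consume on the order of ten to fifty times more energy than a conventional search.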

In light of this, you should prioritize data-, energy-, and resource-efficient AI methods, in accordance with the principle of proportionality. Certifications for data centers where training and inference occur provide insight into the sustainability of the computing power used to run AI tools. Widespread standards, such as CEEDA (Certification of Energy Efficiency for Data Centers), are available. Additionally, tools such as CodeCarbon.io or the ML CO2 Impact calculator[6] can be used to measure the CO2 emissions associated with your AI usage. Less is more!
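Estimators such as the ML CO2 Impact calculator mentioned above derive emissions from three quantities: hardware power draw, runtime, and the carbon intensity of the local electricity grid. A minimal sketch of that calculation (the power and carbon-intensity figures below are illustrative assumptions, not measurements):

```python
def co2_emissions_kg(power_kw: float, hours: float, carbon_intensity: float) -> float:
    """Estimate the CO2 emissions of a compute job.

    power_kw         -- average power draw of the hardware in kilowatts
    hours            -- total runtime in hours
    carbon_intensity -- grid carbon intensity in kg CO2 per kWh
    """
    energy_kwh = power_kw * hours     # total energy consumed
    return energy_kwh * carbon_intensity

# Illustrative example: a single 0.3 kW GPU running for 24 hours on a
# grid with an assumed carbon intensity of 0.4 kg CO2 per kWh.
print(f"{co2_emissions_kg(0.3, 24, 0.4):.2f} kg CO2")
```

Dedicated tools refine this basic formula with measured hardware utilization and region-specific grid data, but the proportionality is the same: halving runtime, or choosing a cleaner grid region, halves the estimated emissions.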

[1] Nitin Sreedhar. “AI and its carbon footprint: How much water does ChatGPT consume?” Mint lounge, 22 Oct. 2023, https://lifestyle.livemint.com/news/big-story/ai-carbon-footprint-openai-chatgpt-water-google-microsoft-111697802189371.html.
[2] Kasper Groes Albin Ludvigsen. “ChatGPT’s energy use per query.” Medium, 6 Aug. 2023, https://towardsdatascience.com/chatgpts-energy-use-per-query-9383b8654487.
[3] Chris Pointon. “The carbon footprint of ChatGPT.” Medium, 22 Dec. 2022, https://medium.com/@chrispointon/the-carbon-footprint-of-chatgpt-e1bc14e4cc2a.
[4] de Vries, Alex. “The growing energy footprint of artificial intelligence.” Joule 7.10 (2023): 2191-2194. DOI: https://doi.org/10.1016/j.joule.2023.09.004
[5] Patterson, David, et al. “Carbon emissions and large neural network training.” arXiv preprint arXiv:2104.10350 (2021). https://arxiv.org/abs/2104.10350
[6] Lacoste, Alexandre, et al. “Quantifying the carbon emissions of machine learning.” arXiv preprint arXiv:1910.09700 (2019). https://arxiv.org/abs/1910.09700

Further Resources