
Beats Biblionetz - Terms

GMLS-Detektor

This Biblionetz object has only existed since April 2024. It is therefore quite possible that many of the links to older Biblionetz objects that should exist have not yet been created, so this page may be very incomplete.

BiblioMap: This is an attempt to display certain relationships within the Biblionetz graphically. It could still be better, but at least it is a start!


Synonyms

GMLS-Detektor, AI Text Detection Tools, GMLS-Erkennung, Detection Tools for AI-Generated Text

Remarks

An IT specialist I know grins: "Let them buy the stuff, it's outdated anyway."
By Susanne Bach, Doris Weßels in the text Das Ende der Hausarbeit (2022)
The demise of the classic ghostwriting scene and of the providers of plagiarism-detection software has probably already begun.
By Susanne Bach, Doris Weßels in the text Das Ende der Hausarbeit (2022)
Forschung & Lehre 7/23

Although LLMs cannot yet perfectly imitate an elaborate writing style, it is therefore to be expected that natural and artificial texts will become indistinguishable and that hybrid text will become the norm.

By Dirk Siepmann in the journal Forschung & Lehre 7/23 in the text Vom Akkordarbeiter zum Gutachter (2023)
Do AI detectors work? In short, no, not in our experience. Our research into detectors didn't show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences.
By OpenAI in the text How can educators respond to students presenting AI-generated content as their own? (2023)
GPT-4 could not even check itself. Nor can a machine decide whether a text was formulated by an AI or by a human. OpenAI apparently realised this as well: the company shut down its detector for AI-generated texts a few weeks ago. It simply did not deliver reliable results.
By Hartmut Gieselmann in the journal c't 21/2023 in the text Die 80-Prozent-Maschinen (2023) on page 30
There is a wide range of software available which has been designed to classify whether text is machine or human generated, with providers claiming high levels of accuracy in being able to identify whether text is written by a human or by a GenAI tool (GPTZero, n.d.; Turnitin, 2023). While some of these tools are free and others require either registration or payment, research by Walters (2023) has identified that the accuracy of paid-for tools is only slightly higher than that of free versions. However, claims of accuracy are contradicted by studies which demonstrate the varied levels of the detectors' ability to distinguish accurately between AI and human-generated content. (Chaka, 2023a; Gao et al., 2022; Krishna et al., 2023; Orenstrakh et al., 2023; Perkins, Roe, et al., 2023; Walters, 2023; Weber-Wulff et al., 2023).
By Mike Perkins, Jasper Roe, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, Huy Q. Khuat in the text GenAI Detection Tools Adversarial Techniques and Implications for Inclusivity in Higher Education (2024)
Overall, our results demonstrate the challenges of current AI text detection tools being able to accurately determine whether a given piece of text was created by a human or a GenAI tool. This ability is further reduced when adversarial techniques are used to obscure the nature of a sample. If the goal of any given HEI was to use AI text detectors solely to determine whether a student has breached academic integrity guidelines, we would caution that the accuracy levels we have identified, coupled with the risks inherent in false accusations, means that we cannot recommend them for this purpose. This is not because of the demonstrated abilities of any one tool tested, as we recognise that developers are continuously updating these tools, and the detection of AI-generated content when subject to adversarial techniques is likely to improve. However, simultaneously, advances are being made in the development of more capable FMs that can produce more human-like content, resulting in a constant arms race between FMs and AI text detectors, with student inclusivity paying the price.
By Mike Perkins, Jasper Roe, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, Huy Q. Khuat in the text GenAI Detection Tools Adversarial Techniques and Implications for Inclusivity in Higher Education (2024)
Do AI detectors work?
  • In short, no, not in our experience. Our research into detectors didn't show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences. While other developers have released detection tools, we cannot comment on their utility.
  • Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.
  • To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.
    • When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.
    • There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.
  • Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.
By OpenAI in the text How can educators respond to students presenting AI-generated content as their own? (2023)
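The OpenAI passage above names both failure modes: false positives on human prose and evasion via small edits. A toy sketch can illustrate why purely statistical detection is so brittle. The character-bigram "model" below is invented for illustration only; no real detector works this crudely, but the underlying scoring principle (rating how statistically typical a text looks under a language model) is the same, and even trivial edits move the score.

```python
import math
from collections import Counter

def train_char_bigram(corpus: str) -> dict:
    """Estimate character-bigram probabilities from a reference corpus
    (a crude stand-in for the language model a real detector would use)."""
    pairs = Counter(zip(corpus, corpus[1:]))
    total = sum(pairs.values())
    return {pair: count / total for pair, count in pairs.items()}

def typicality(text: str, model: dict, floor: float = 1e-6) -> float:
    """Mean log-probability of the text's bigrams under the model.
    A statistical detector flags text whose score looks 'too typical'."""
    pairs = list(zip(text, text[1:]))
    if not pairs:
        return math.log(floor)
    return sum(math.log(model.get(p, floor)) for p in pairs) / len(pairs)

# Invented reference corpus, for illustration only.
model = train_char_bigram("the cat sat on the mat and the dog sat on the log")

fluent = "the cat sat on the log"   # every bigram occurs in the corpus
edited = "teh cat sat on teh log"   # two tiny edits introduce unseen bigrams
print(typicality(fluent, model) > typicality(edited, model))  # prints True
```

Two transposed letters are enough to drag the score down sharply, which mirrors the quoted finding that "students can make small edits to evade detection" while the same scoring logic misfires on unusual but entirely human writing.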
In our baseline testing protocol of both non-manipulated AI-generated samples tested alongside the human-written control samples, we see an initially lower than expected average accuracy rating for the detection of AI-generated content, coupled with a substantial rate of false accusations in the human-written control samples. When the AI-generated samples were subjected to manipulation, significant vulnerabilities in accurately detecting text were observed. If the goal of implementing AI detection tools as part of an overall academic integrity strategy is to support academic staff in identifying where machine-generated content has been used and has not been declared, these inaccuracies may lead to a false sense of security and a broader reduction in assessment security. As assessment security is a key component in ensuring inclusive, equitable, and fair opportunities for learners, this is problematic. The varying degrees of reduction in accuracy following the application of adversarial techniques also point to the broader issue of inconsistency and unpredictability in the current AI detection capabilities. The effectiveness of these techniques varies dramatically across detectors, suggesting that the internal algorithms and heuristics of these detectors are tuned differently and react distinctively to similar inputs. Therefore, the results even within an institution may differ depending on the tool being employed and how it is being used.
By Mike Perkins, Jasper Roe, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, Huy Q. Khuat in the text GenAI Detection Tools Adversarial Techniques and Implications for Inclusivity in Higher Education (2024)

Related objects

Related terms
(co-word occurrence)
GPT Zero (0.11), Generative Machine-Learning-Systeme (GMLS), computer-generated text (0.04), Chat-GPT (0.03), Textgeneratoren-Verbot (0.03), ghostwriting (0.03), Textgeneratoren & Bildung (0.03)

Frequently co-cited people

Steffen Albrecht, Illia Polosukhin, Lukasz Kaiser, Aidan N. Gomez, Llion Jones, Jakob Uszkoreit, Niki Parmar, Noam Shazeer, Ashish Vaswani

Statistical term network: This is a graphical representation of those terms that are frequently mentioned together with the main term (co-citation).

Citation graph


Timeline

20 mentions: This is a list, ordered by year of publication, of all works in the Biblionetz that deal with the selected topic.

Search elsewhere: You will not find everything in the Biblionetz either. For this reason, the Biblionetz offers pre-filled search forms for various search services. Biblionetz hits are excluded from these searches.

Biblionetz history: This is a graphical representation of when and how many links from and to this object were entered into the Biblionetz, and how often the page was accessed.