
Generative Artificial Intelligence (Gen AI) is an AI technology that automatically generates content in response to prompts written in natural language through conversational interfaces. Rather than simply searching for and relying on existing content (e.g. webpages, library or journal databases, media sources) for its output, Gen AI produces wholly new content.

This novel content can be produced in a wide array of formats, e.g. text written in natural language, images (including photographs, digital paintings and cartoons), video, music and computer code.

Gen AI systems produce content by statistically analysing the relations between words, pixels or other data encountered in training. This training data is extremely large and varied; in the case of ChatGPT it initially involved 45 terabytes of information drawn from numerous sources across the internet. This process produces the underlying model, which can identify common patterns (for example, which words typically follow which other words). Further training, often involving human feedback on initial outputs, fine-tunes the system. Once trained, the system can produce meaningfully structured novel output in a wide variety of formats which, until recently, was solely producible by human endeavour.
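To make the idea of patterns such as ‘which words typically follow which other words’ concrete, the illustrative Python sketch below counts word pairs in a tiny made-up corpus and then predicts the most likely next word. This is only a toy analogy: real Gen AI models use neural networks trained on vastly larger data, and the corpus and function names here are purely hypothetical.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the very large training data a real model uses.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the word that most commonly follows `word` in the toy corpus."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("the"))  # prints "cat"
```

Running the sketch prints ‘cat’, because ‘cat’ follows ‘the’ more often than any other word in the toy corpus; a Gen AI model makes an analogous, but vastly more sophisticated, statistical choice at every step of its output.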

This is currently an open and divisive topic. Gen AI can produce content in a wide variety of formats which, as noted above, was previously only producible by humans. As a result, many of the specialised skills taught within universities are becoming increasingly automatable. Globally, universities are undergoing a period of reflection to wrestle with this new reality. It is an opportunity for academics to reflect on the curriculum and consider the extent to which Gen AI should be integrated into teaching, learning and assessment to ensure that students are best prepared for the future of work within their discipline.

 

Given the variety of offerings, disciplines and courses, the question of legitimate use of AI within academia will inevitably be a local decision. That is, a module coordinator, programme lead or Head of Department will define the limits of legitimate use of AI within module or programme design, in keeping with the module or programme aims and content. These decisions may need to be taken in the context of an overall programme-level or departmental-level discussion. To aid the inclusion of legitimate use of Gen AI within the university’s offerings, the UL Artificial Intelligence and Assessment Framework (please see below) offers a universally acceptable framing for how Gen AI can be incorporated into a module’s design and, importantly, how this information is communicated to students and other stakeholders. This will ensure clarity and hence allow Gen AI to be legitimately incorporated.

The International Center for Academic Integrity has released a statement on academic integrity and artificial intelligence.

The UL Interim Statement on Academic Integrity and Academic Misconduct provides clarity on acceptable use of Gen AI in UL: Academic Integrity - Policies and Procedures | University of Limerick (ul.ie)

This AI Standards & Assurance Roadmap is a collaborative effort based on input from many stakeholders and experts from across the Irish AI community, including Irish academia; multinational corporations and the large-scale information technology industry; experts from Irish AI start-ups and small successful enterprises; and legal experts from within the Irish technology industry and specialist Irish law firms. The deliberations of the Top Team on Standards in AI have been formulated into two outputs:

•

•

The EU Artificial Intelligence Act (not yet published) will regulate the use of AI in the EU. The Act categorises risk associated with use of AI as follows:

• Unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI).

• High-risk AI systems will be subject to strict obligations before they can be put on the market.

• Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware that they are interacting with AI (e.g. chatbots and deepfakes).

• Minimal risk is unregulated (including the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters).

A prompt is the instruction, or instructions, provided by the user to the Gen AI tool asking it to do something: for example, ‘write a sonnet in the style of William Shakespeare’. Prompts can include images, video and code snippets, amongst other things.
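As a purely illustrative sketch of how a text prompt is supplied to a Gen AI tool programmatically, the Python example below sends the Shakespeare instruction above to a model via the OpenAI Python client. The model name, and the assumption that an API key is configured, are illustrative only and are not an endorsement of any particular tool.

```python
# Illustrative only: assumes the openai package is installed and an API key
# is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# The prompt is simply the natural-language instruction given to the tool.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": "Write a sonnet in the style of William Shakespeare."}],
)

print(response.choices[0].message.content)
```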

Gen AI models are created to provide a meaningfully structured output based on prompts. Meaningful here refers to the structure rather than to any notion of accuracy or coherence. The output is the result of the system’s probabilistic model of language, which means it does not engage with a question in a meaningful human way, nor is the output a considered response to a question; it is a response to a prompt. Thus, it will always generate a response, in many cases an exceptionally good one, but it is not considering the prompt as humans would a question, so the output could be woefully inaccurate or, in some cases, meaningless.

As a result, all outputs from Gen AI tools which we rely on for essential information or which will inform decisions or influence research must be independently verified.   

Please check the Prompt Collection for further information.

We acknowledge the diversity of academic disciplines within the University and wish to emphasise that each discipline will seek to integrate Gen AI in different ways. It may be worth discussing the changes you are making to your academic practice with colleagues in your discipline, as they will fully appreciate the pedagogical paradigm of your field. The video may help to guide you in your own academic practice. This has been informed by national and international resources.

Following consultation with the Assistant Deans of Academic Affairs, some of whom also consulted with the Faculty Learning, Teaching and Assessment Committees, the decision was made not to sign up to the Turnitin Artificial Intelligence add-in from March 2024.

The National Academic Integrity Network (NAIN) recommends that such detection tools are not used:

"Do not rely on GenAI ‘detection systems’. None of the tools which are currently available are fully capable of detecting the use of GenAI (except in the most obvious cases which may also have been identified by expert reading and scrutiny) and may also lead to ‘false positives’ (incorrectly concluding that human-written text was AI-generated) and difficult-to-interpret scoring. Detection systems cannot be relied upon to detect use of GenAI accurately or consistently. In addition, there may be serious data protection, privacy, and intellectual property concerns in the use of any such tool, particularly if it has not undergone appropriate approval by institution. Turnitin’s detection tool is available in some institutions, but users should be aware of concerns about its capabilities in terms of more recent versions of GenAI, a reported high rate of ‘false positives,’ and some ambiguity on how to interpret its results".