Using Generative Artificial Intelligence Large Language Models – dos and don'ts

Created By:  Magnus Smidak
Last updated: 15 May 2024

Socitm provides guidelines for public service organizations on the ethical and responsible use of Generative AI Large Language Models (LLMs) such as ChatGPT, Bard, and Bing. The guidance sets out specific dos and don'ts for maintaining human oversight, complying with the law, and ensuring content accuracy, while avoiding misuse involving confidential data and copyright infringement.

Socitm's guide on using Generative Artificial Intelligence (AI) Large Language Models (LLMs) aims to help councils, charities, and other local public service providers navigate the challenges and opportunities presented by AI technologies. It emphasizes the importance of human oversight, ethical use, and adherence to legal and organizational policies when employing tools such as ChatGPT, Bard, and Bing. The guidelines recommend using Generative AI LLMs for tasks such as drafting briefings and reports and analysing public data, while cautioning against their use for processing confidential information or for purposes that might contravene copyright or data protection laws. The guide also highlights the need for careful prompt definition, fact-checking of AI-generated material, and vigilance against disinformation. Socitm's dos and don'ts serve as a roadmap for the fair, legal, and safe use of AI technologies in the public sector.

Categories: Characteristics » Use of data and intelligence; Data maturity; Data maturity » Culture and structure; Data maturity » Leadership and strategy; Data maturity » Systems and tools; Data maturity » Skills and capability; Data maturity » Governance and compliance