AI in local government: navigating challenges and embracing opportunities
Artificial intelligence (AI) tools have the potential to transform local government by increasing efficiency, saving costs and enhancing decision making. However, generative AI is by no means a ‘silver bullet’ and can come with its own risks, particularly related to data protection and privacy, meaning that local government organisations seeking to delve into this space must balance enthusiasm with caution.
What is it and how is it used?
There are a multitude of possible ways AI can be implemented to provide deeper and more meaningful engagement with the community whilst managing local government’s increasingly scarce resources more effectively. Common examples include:
Customer engagement – AI-powered chatbots and telephony solutions can be used to provide 24/7 customer service by managing routine queries from citizens, freeing up staff for more complex issues.
Email management – AI can be used to manage email correspondence by redirecting emails and sending auto-replies. The Information Commissioner's Office (ICO) uses inbox email management AI to reduce the burden of managing high volumes of queries.
Translation, summarisation and easy read – AI can be implemented to translate, summarise and convert documents into easy-read formats.
Specific projects – AI can be fine-tuned for specific projects, such as to enhance decision making in adult social care, manage debt within councils, or improve information governance processes.
Data protection risks
Unfortunately, there are significant potential risks associated with AI and, if things go wrong, it will be the implementing local authorities that are responsible, both legally and reputationally. Some examples of risks are that:
the use of AI is not lawful, fair or transparent, and the public cannot exercise their right to be informed, leaving them disempowered (potentially infringing the lawfulness, fairness and transparency principle in article 5(1)(a) UK GDPR)
data is processed for a purpose that is different from, and incompatible with, that for which it was collected (which may breach the purpose limitation principle in article 5(1)(b))
too much personal data is inadvertently collected to train or develop AI, or retained in deployment (potentially infringing the data minimisation principle in article 5(1)(c))
outputs are inaccurate, for example due to automation bias, AI hallucinations, overfitting, model drift or function creep (contravening the accuracy principle in article 5(1)(d))
data is stored for an excessive period, or is not correctly anonymised or pseudonymised (in breach of the storage limitation principle in article 5(1)(e))
AI systems are attacked, or data breaches occur, because of poor security practices or insecure AI systems (breaching the integrity and confidentiality principle in article 5(1)(f)).
Organisations also need to consider carefully the accountability arrangements they have in place; accountability is a further key principle in the UK GDPR.
Practical tips
The ICO recommends taking a risk-based approach to AI. This means ‘assessing the risks to the rights and freedoms of individuals that may arise when you use AI and implementing appropriate and proportionate technical and organisational measures to mitigate these risks’. The ICO has published helpful tools on its website to support implementation.
To minimise risks, we always suggest that organisations:
Consider the governance in place to support the development and use of AI – what tools are you willing to permit and when? Who will sign off on deployment? Many organisations will have a steering group, programme board or committee who will make key decisions.
Perform a Data Protection Impact Assessment (DPIA), take a ‘privacy by design’ approach, and consider equalities and digital exclusion issues in the development of any citizen-facing tools.
Implement strong security and business continuity measures.
Update Records of Processing Activities and information asset registers to reflect the deployment of AI tools.
Ensure that data subject rights are considered.
Consider anonymisation/pseudonymisation where possible, both when ‘training’ models and in deployment, to reduce the tension between the data minimisation principle in data protection law and the large volumes of data needed to develop and deploy many AI tools with confidence.
Enter into comprehensive agreements with data processors and service providers who are supporting the development and delivery of AI tools.
How Capsticks can help
Our specialist local government advice is cost-effective and strategic, complemented by practical knowledge of your daily challenges. Our team offers a full range of services including advising on governance, applying for and delivering large regeneration projects, updating constitutional documents, refresher procurement training and support in preparing for an upcoming employment tribunal.
We are experts in all aspects of advising local government on the implementation of AI. For further information on how we might assist your organisation, please contact Tana Dryden-Strong or Emma Godding.
Tana Dryden-Strong, Associate
tana.dryden-strong@capsticks.com
Emma Godding, Principal Associate