Nesta proposes public sector code of conduct for AI decision making

by Sam Trendall
23 February 2018

Artificial intelligence concept brain vs network - Image credit: Pixabay

Government has to be as open as possible about the way that algorithms are created and used to inform decision making, says innovation charity Nesta.

Nesta has published a draft code of conduct governing how algorithms are used to automate decision making or assessments.

The code, which contains 10 core principles, was written by the organisation’s director of government innovation Eddie Copeland.

In a blog post, he wrote: “The application of AI that seems likely to cause citizens most concern is where machine learning is used to create algorithms that automate or assist with decision making and assessments by public sector staff.

“While some such decisions and assessments are minor in their impact, such as whether to issue a parking fine, others have potentially life-changing consequences, like whether to offer an individual council housing or grant them probation.

“The logic that sits behind those decisions is therefore of serious consequence.”

Copeland said that a “considerable amount of work has already been done to encourage or require good practice in the use of data and the analytics techniques applied to it”.

He singled out the government’s Data Science Ethical Framework as an example of this work.

But he suggested greater efforts are needed, particularly on the part of governments and the wider public sector.

“After all, an individual can opt out of using a corporate service whose approach to data they do not trust,” he said.

“They do not have that same luxury with services and functions where the state is the monopoly provider.”

The 10 principles are listed below (an illustrative sketch of how the first four might be recorded in practice follows the list):

1. Every algorithm used by a public sector organisation should be accompanied by a description of its function, objectives and intended impact, made available to those who use it

2. Public sector organisations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases

3. Algorithms should be categorised on an algorithmic risk scale of 1-5, with 1 denoting a very minor impact on an individual and 5 denoting a potentially very high impact

4. A list of all the inputs used by an algorithm to make a decision should be published

5. Citizens must be informed when their treatment has been informed wholly or in part by an algorithm

6. Every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions

7. When using third parties to create or run algorithms on their behalf, public sector organisations should only procure from organisations able to meet principles 1-6

8. A named member of senior staff (or their job role) should be held formally responsible for any actions taken as a result of an algorithmic decision

9. Public sector organisations wishing to adopt algorithmic decision making in high-risk areas should sign up to a dedicated insurance scheme that provides compensation to individuals negatively impacted by a mistaken decision made by an algorithm

10. Public sector organisations should commit to evaluating the impact of the algorithms they use in decision making, and publishing the results
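Taken together, the first four principles amount to publishing a structured record for each algorithm. Purely as an illustrative sketch, and not part of Nesta's draft code, such a register entry might be expressed in Python along the following lines; the class, field names and example values are all assumptions:

from dataclasses import dataclass, field
from typing import List

# Hypothetical "algorithm register" entry; Nesta's draft code does not
# prescribe a schema, so every field name here is illustrative only.
@dataclass
class AlgorithmRegisterEntry:
    name: str
    function: str               # principle 1: what the algorithm does
    objectives: str             # principle 1: why it is used
    intended_impact: str        # principle 1: expected effect on citizens
    training_data: str          # principle 2: data the algorithm was trained on
    assumptions: List[str]      # principle 2: assumptions made in its creation
    bias_risk_assessment: str   # principle 2: how potential biases are mitigated
    risk_level: int             # principle 3: 1 (very minor) to 5 (very high)
    inputs: List[str] = field(default_factory=list)  # principle 4: decision inputs

    def __post_init__(self) -> None:
        # Enforce the 1-5 risk scale from principle 3.
        if not 1 <= self.risk_level <= 5:
            raise ValueError("risk_level must be on the 1-5 scale")

# Invented example based on the article's low-impact parking-fine scenario.
entry = AlgorithmRegisterEntry(
    name="parking-fine-triage",
    function="Flags vehicles recorded in a restricted zone for a possible fine",
    objectives="Consistent and timely enforcement of parking restrictions",
    intended_impact="Routine fines; no life-changing consequences",
    training_data="Historical enforcement records (illustrative)",
    assumptions=["camera timestamps are accurate"],
    bias_risk_assessment="Outcomes compared across wards for uneven enforcement",
    risk_level=1,
    inputs=["number plate", "location", "timestamp", "permit status"],
)
print(entry.name, "- risk level:", entry.risk_level)

A common, machine-readable format of this kind would also make it easier to verify that third-party suppliers meet principles 1-6, as principle 7 requires.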
