Predictive analytics to prevent child abuse: what are the privacy implications?

September 17, 2018

The use of algorithms to identify child abuse is at the center of an investigation by The Guardian, which has exposed how UK local councils are developing “predictive analytics” systems to identify families for attention from child services.

These systems are designed to allow child services to intervene before abuse happens and to focus resources more effectively. Alongside the benefits, mainly better abuse prevention and support for social workers, this type of algorithmic profiling also raises concerns because of its potential to intrude into individual privacy: the personal data of around 377,000 people has been incorporated into the various predictive systems with the aim of identifying characteristics commonly found in children who enter the care system.

This type of profiling, furthermore, risks perpetuating stereotyping and discrimination. Another criticism of predictive analytics is the risk of oversampling underprivileged groups: a council’s social services department will inevitably hold more data on poor families than on wealthy ones, which will result in inaccurate predictions. Rather than merely mirroring existing inequalities, the model will simply amplify them.

An example presented by The Guardian concerns private versus public schools as a case where children at risk might be missed. A model trained on public-sector data sets is unlikely to recognize children attending private schools as exceeding a given level of risk and will therefore not include them in the group of potential abuse victims, even though it is not true that children attending private schools are not at risk.
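To make the data-skew argument concrete, here is a minimal, hypothetical Python sketch (using numpy and scikit-learn). The feature names, outcome rates, and flagging threshold are invented for illustration and are not taken from any council’s actual system; the point is only that a model trained on council-held records gives low scores to families those records barely describe.

```python
# Hypothetical sketch of the data-skew problem described above.
# Features, rates, and threshold are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Council-held records over-represent families already in contact with
# social services, so poorer families dominate the training data.
receives_benefits = (rng.random(n) < 0.8).astype(int)  # ~80% of records
prior_contacts = rng.poisson(3, n)                      # prior council contacts
X = np.column_stack([receives_benefits, prior_contacts])

# Hypothetical outcome: whether the child later entered the care system.
y = (rng.random(n) < 0.10 + 0.10 * receives_benefits).astype(int)

model = LogisticRegression().fit(X, y)

# A child whose family has no council footprint (no benefits record, no
# prior contacts) scores below the flagging threshold: the council's data
# carry almost no signal about risk in such families, so the model cannot
# recognize them as high risk, whatever their actual circumstances.
FLAG_THRESHOLD = 0.25  # invented intervention threshold
risk = model.predict_proba(np.array([[0, 0]]))[0, 1]
print(f"Predicted risk: {risk:.2f}, flagged: {risk >= FLAG_THRESHOLD}")
```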

Read more about predictive analytics to prevent child abuse in these articles from The Guardian:

https://www.theguardian.com/society/2018/sep/16/ch...

https://www.theguardian.com/society/2018/sep/16/co...
