Guest Opinion Essay

Pittsburgh Misuses Big Data to Target Poor Children for Abuse Investigations

Image: A smiling female robot cradles a baby dressed in pink, sucking on a bottle. (Linda Bucklin/Shutterstock.com)

One massive leak of middle-class Americans’ data seems to have the whole world in an uproar.

A firm known as Cambridge Analytica allegedly improperly obtained personal information given to Facebook by 50 million people. Then, according to The New York Times, “The firm, which was tied to President Trump’s 2016 campaign, used the data to target messages to voters.”

Richard Wexler, executive director of the National Coalition for Child Protection Reform

The data were offered to Facebook voluntarily. Now that Facebook apparently failed to keep those data secure, some Americans are exercising their right to delete their Facebook accounts. Others are exercising their right to share less information on Facebook.

But if this one leak causes so much outrage — outrage that is entirely justified — can you imagine what would happen if, say, people were forced to surrender vast amounts of personal data and a big government agency could use those data to investigate them and even take away their children?

Why, if that ever happened, the outrage would be — well, actually, it would be almost nonexistent.

Because it’s already happening in Allegheny County (Pittsburgh), Pennsylvania. Instead of outrage, this forced surrender of data and its use as a tool to investigate alleged child maltreatment is being celebrated — in the local newspaper and in a magazine supposedly dedicated to exposing the harm of big government. Even the Times, which first broke the Cambridge Analytica story, celebrates the way Pittsburgh uses predictive analytics in child welfare.

That celebration comes despite the fact that similar experiments failed spectacularly in Los Angeles and Illinois.

None of this is really surprising. The cause of this double standard is the same as all the other double standards when it comes to child welfare, such as this one and this one: What Allegheny County is doing only targets poor people.

How the algorithm works

They are targeted by something called the Allegheny Family Screening Tool (AFST), a semisecret “predictive analytics” algorithm that processes vast amounts of data about a family and boils it down to a “risk score” — a single number between 1 and 20. Supposedly the higher the number the greater the risk to the child. The number is a secret, like an invisible “scarlet number” etched on a child’s forehead.
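To make the mechanics concrete, here is a minimal, purely illustrative sketch (in Python) of how a screening tool of this general kind turns records about a family into a single 1-to-20 number: a statistical model estimates the probability of some proxy outcome, and that probability is binned into a score. Every feature name, weight, and cutoff below is invented for illustration; this is not the county’s actual, semisecret model.

```python
# A purely illustrative sketch of how a screening score of this kind might be
# produced: a model estimates the probability of a proxy outcome (such as a
# future re-referral), and that probability is binned into a 1-20 risk score.
# Every feature name, weight, and cutoff here is invented for illustration;
# this is NOT the actual Allegheny Family Screening Tool.

from dataclasses import dataclass
import math


@dataclass
class FamilyRecord:
    prior_reports: int                 # earlier hotline reports, founded or not
    months_on_public_benefits: int     # SNAP/TANF/Medicaid history held by the county
    juvenile_justice_contact: bool     # prior contact with the juvenile justice system


def predicted_probability(record: FamilyRecord) -> float:
    """Toy logistic model: returns a probability between 0 and 1."""
    z = (
        -2.5
        + 0.7 * record.prior_reports
        + 0.02 * record.months_on_public_benefits
        + 0.8 * (1 if record.juvenile_justice_contact else 0)
    )
    return 1.0 / (1.0 + math.exp(-z))


def risk_score(record: FamilyRecord) -> int:
    """Bin the probability into a 1-20 score; higher is treated as higher risk."""
    p = predicted_probability(record)
    return min(20, 1 + int(p * 20))


if __name__ == "__main__":
    family = FamilyRecord(prior_reports=3, months_on_public_benefits=24,
                          juvenile_justice_contact=False)
    print(risk_score(family))  # the single number a call screener would see
```

Notice that even in this toy version, a longer public-benefits history by itself pushes the score up — the essence of the “poverty profiling” critique discussed below.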

Right now, a child is branded with this scarlet number if someone alleges the child is abused or neglected. The county uses the number to decide which cases need to be investigated. But the county is considering something even more Orwellian — stamping that scarlet number on every child born in the county, at birth. Such a number could haunt not only the child, but his or her children and grandchildren.

Predictive analytics was brought to the county by someone I admire and respect, Marc Cherna, director of the county Department of Human Services (DHS). He has an excellent record of reducing needless foster care. But this time, he’s on the wrong track.

Rather than eliminating the racial and class biases that permeate child welfare, AFST simply automates those biases, something made clear in a devastating critique by Professor Virginia Eubanks in her book, “Automating Inequality,” a portion of which was excerpted in Wired magazine — the one story that rebutted the general journalistic praise for AFST.

The problems with AFST are legion:

  • AFST doesn’t actually predict child abuse. Rather, claims of accuracy are based on its ability to predict whether, once reported as allegedly abusing or neglecting a child, a family will be reported again or whether the child is likely to be removed from her or his parents. But reporting itself is a highly subjective process. DHS itself acknowledges that decisions to call in a report alleging child abuse are rife with racial bias.
  • Most reports don’t fit the stereotype of parents brutally beating and torturing children. Far more common are cases in which poverty itself is confused with “neglect.” So if a family is reported because of poverty and, a year later, the family is still poor, there’s a good chance the poverty will be confused with neglect again and the family will be reported again.
  • The designers of AFST consider the existence of previous reports among the strongest supposed predictors of future danger. But the vast majority of reports — even those rated highest risk by AFST — are false. Yet once in the database, there’s almost no way out. In Pennsylvania the appeals process ranges from nearly impossible to nonexistent.
  • The various data points that the algorithm uses to create a risk score amount to what Eubanks calls “poverty profiling.” By the county’s own admission, in many cases, probably a majority, seeking out help through public benefits raises the risk score.

‘Parenting while poor’

As Eubanks explains: “Because the model confuses parenting while poor with poor parenting, the AFST views parents who reach out to public programs as risks to their children.” And, of course, because these are public benefits, such as SNAP (Supplemental Nutrition Assistance Program), TANF (Temporary Assistance for Needy Families) and Medicaid, the data are collected automatically by the county. Unlike when we “like” something on Facebook, the poor have no choice.

Eubanks reports that 25 percent of the variables in AFST are direct measures of poverty. Another 25 percent measure interaction with the child welfare and juvenile justice systems themselves.

This makes AFST less a prediction than a self-fulfilling prophecy: Both poverty itself and seeking help to cope with poverty raise the AFST score, so more “helping” professionals — who also are mandated reporters of alleged child abuse — enter the family’s life. Both the score and the professionals put the family under a microscope. The microscope turns up something that normally would be seen as innocent but now looks suspect. So the family is reported again, “proving” the algorithm was right.
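The loop is easy to see in a toy simulation. The sketch below is not the county’s system; its rates and weights are invented, and it simply encodes the assumptions just described — prior reports and benefit use raise the score, and a higher score raises the chance of another report — to show how the score can ratchet upward and appear to confirm itself.

```python
# Illustrative simulation of the feedback loop described above (not county code).
# Assumptions, with invented numbers: prior reports and public-benefit use raise
# the score; a higher score means more scrutiny; more scrutiny raises the chance
# of a new report; and each new report feeds back into the next score.

import random


def toy_score(prior_reports: int, uses_public_benefits: bool) -> int:
    """Toy 1-20 score that rises with prior reports and benefit use."""
    s = 3 + 4 * prior_reports + (3 if uses_public_benefits else 0)
    return min(20, max(1, s))


def chance_of_new_report(score: int) -> float:
    """Assumed link between scrutiny (the score) and the chance of another report."""
    return min(0.9, 0.05 + 0.03 * score)


def simulate(years: int = 5, uses_public_benefits: bool = True, seed: int = 1) -> list:
    """Follow one family that starts with a single report."""
    random.seed(seed)
    prior_reports = 1
    history = []
    for _ in range(years):
        s = toy_score(prior_reports, uses_public_benefits)
        history.append(s)
        if random.random() < chance_of_new_report(s):
            prior_reports += 1  # the new report raises every future score
    return history


if __name__ == "__main__":
    print(simulate())  # scores tend to ratchet upward, "confirming" the model
```

In this sketch the score can only climb, because nothing in the loop distinguishes a genuinely dangerous home from a poor family that simply keeps being looked at.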

And once it is known in the community that reaching out for help increases the chances that child protective services will be in your life, people may be less likely to seek help — and problems that can lead to actual child abuse could worsen.

The algorithm visits the sins of the parents, real or imagined, upon the children. Eubanks cites a family that was victimized by repeated false reports. When the child the county supposedly was “protecting” grows up, if she herself becomes the target of a child abuse report she will bear a higher scarlet number — because her own parents supposedly were neglectful. So her children will be at greater risk of enduring the enormous harm of needless foster care.

About that ‘ethics review …’

Stories praising AFST repeatedly cite a so-called “ethics review” that gave AFST a seal of approval. But one of the two academicians chosen by DHS for the review not only is a faculty colleague of one of the AFST designers, but actually co-wrote papers with her. The nine-page review itself is startlingly superficial. The only references cited are papers written by either the designers of AFST or the authors of the review itself.

Even with this odd choice of ethics reviewers, the review based its endorsement in part on the fact that AFST would not target children at birth, but only after a child abuse report had been made.

But that may not last. Eubanks reports that the county is, at a minimum, considering introducing “‘a second predictive model … [that] would be run on a daily or weekly basis on all babies born in Allegheny County the prior day or week,’ according to a September 2017 email from [Cherna’s deputy, Erin] Dalton.” Such a model already exists — indeed it’s one of the models the designers of AFST proposed to the county in the first place.

So if the county starts branding every infant with a scarlet number at birth — a number that will even affect the numbers assigned to their children and grandchildren — then, by the review’s own logic, isn’t such a model inherently unethical?

Cherna promises that the scarlet numbers under any such system will be used only to find the families most in need of help. He also promises that the current AFST will only be used as it is now, to decide which families to investigate, not whether to take away children. But this is a distinction without much difference. Though the investigators don’t know the exact AFST score, they know that AFST told them to investigate in the first place.

And what about Cherna’s successor, and his successor’s successor? Any system that depends for success on the benevolence of a single leader with near-absolute power is too dangerous for a free society.

Most of those pushing for the use of systems like AFST are nothing like Marc Cherna. On the contrary, they tend to be those most enthused about taking away more children and using algorithms to accomplish it. They are constantly pressuring child welfare systems to do just that.

So what happens on the next leader’s watch when there’s a high-profile child abuse fatality, and everyone claims the pendulum has swung too far toward keeping families together? The new DHS leader decrees that a high AFST score leads to automatic removal. The system is flooded with children who don’t need to be there, doing those children enormous harm, and it becomes so overloaded that there is less time to find children in real danger.

That’s so easy to predict even a human being can do it.

Richard Wexler is executive director of the National Coalition for Child Protection Reform. NCCPR’s full discussion of predictive analytics in child welfare is available here.
