Child Welfare’s Harsh Test

When the federal government introduced a bold system five years ago to evaluate state child welfare systems, it drew cheers from case workers, administrators and researchers.

No longer would the U.S. Department of Health and Human Services (HHS) judge systems primarily by processes that are often unrelated to quality of care, such as checking to see if forms were signed. Instead, HHS would look at outcomes and procedures that reflect service delivery, such as case worker training and how quickly youth are moved out of government care.

Since then, every state, along with Washington, D.C., and Puerto Rico, has gone through the new Child and Family Services Reviews (CFSR) – and every one has failed. Any state that doesn’t improve to the satisfaction of HHS faces millions of dollars in fines.

Are the nation’s child welfare systems that bad? Or is the CFSR a bad test?

Child welfare administrators and researchers have complained about the test so much that HHS hired a consultant to recommend improvements and created a committee of state representatives and critics to discuss the problems and develop solutions. Susan Orr, associate commissioner of the Children’s Bureau within HHS, expects some “tweaking” for the next round of reviews.

But while critics claim that the CFSR is flawed in fundamental ways that create inaccurate assessments of child welfare systems, HHS says the test is fundamentally sound.

Even as officials in some states complain about the CFSRs, they are working to meet the standards, which HHS believes will improve child welfare systems.

“Yes, people are complaining,” says Steve Christian, program director at the National Conference of State Legislatures (NCSL). “At the same time, most people in the field would say the reviews have had a positive impact on the process and the way things are done.”

Built to Fail?

The evaluations were destined to be controversial.

The CFSR’s roots date to 1994, when amendments to the Social Security Act required outcomes-based evaluations of state child welfare systems. The CFSRs use self-assessments by each state and on-site reviews by HHS to measure seven child outcomes and seven system standards. States that achieve what HHS calls “substantial conformity” in all 14 areas are spared reviews for five years. States that don’t achieve substantial conformity must write improvement plans, to be approved by HHS.

If the federal agency determines two years later that those improvement plans are not being sufficiently implemented, fines are to be imposed. The fines vary by how much federal child welfare funding each state gets and the number of instances in which the state failed to meet the standards. California faces the biggest potential penalty: $18 million.

No one expected all or even most of the states to be in compliance in all 14 areas. The CFSRs, after all, were designed in part to compel states to improve their child welfare systems. But the first wave of evaluations in 2001 delivered a disturbing wake-up call: All 17 states that were evaluated failed, a pattern that continued through last year.

To Mark Testa, director of the Children and Family Research Center at the University of Illinois at Urbana-Champaign, the perfect failure rate makes the CFSR’s shortcomings obvious: If he administered a chemistry exam and all 52 students failed, he would suspect that the test was flawed.

A new evaluation measuring such disparate and complex systems was bound to have initial flaws, HHS says. The department says some state concerns will be addressed before the upcoming second round of reviews.

While praising HHS for moving to qualitative reviews, critics say problems with the measures and the threat of financial penalties could lead states to change their policies and practices in ways that would be detrimental to children. Or all the states could fail again, which would intensify doubts about the validity of the reviews.

“The idea of CFSRs is good,” Testa says. “If they had the right data, it would be better.”

The threat of federal fines appears to have made state officials circumspect when publicly discussing the CFSRs. Child welfare researchers are hesitant as well, because they want to work with HHS to improve the measures. Nevertheless, they raise several key issues:

Adequate sampling: HHS’ on-site evaluations include a review of 50 randomly selected cases – far too few, critics say, to be representative of a state system, especially when the cases come from only three counties in each state. Testa says it’s impossible for 50 cases to be a valid sample in a state like California, where 90,000 children are in foster care.

The Government Accountability Office (GAO) criticized the small sample size in a report last year, saying it allowed for a very large margin of error.
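The GAO’s point can be illustrated with back-of-the-envelope arithmetic. The sketch below uses the standard margin-of-error formula for a simple random sample at the worst case (a 50 percent rate); it is not the GAO’s actual calculation, and the real CFSR samples are drawn from only three counties, which makes the uncertainty even harder to characterize.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 percent margin of error for a proportion
    estimated from a simple random sample of n cases."""
    return z * math.sqrt(p * (1 - p) / n)

# With only 50 cases, an observed rate could miss the true statewide
# rate by roughly 14 percentage points in either direction.
print(round(margin_of_error(50) * 100, 1))  # about 13.9
```

On that rough math, a state could look 10 points better or worse than it really is on any given measure purely by the luck of the draw.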

In assessing Arizona, where Gov. Janet Napolitano has made child welfare a top priority, HHS relied too heavily on the 50 cases, says Katherine Guffey, human services specialist in the state Administration for Children, Youth and Families. The on-site assessments also included focus groups with people involved in the child welfare system.

“During the on-site focus groups, you may have a foster parent or attorney who brings up a concern that is anecdotal, and there is no way for the people doing the interviews to know if that is a trend,” Guffey says. “Those single comments too often become part of the final report.”

AFCARS data: Among the areas evaluated by the CFSRs are the waiting times for foster children to be reunified with their families or adopted. To set these standards, HHS looks at data from the Adoption and Foster Care Analysis and Reporting System (AFCARS). From that, it can answer such questions as: Of the children adopted this year, how many were adopted within 24 months of entering foster care? Of the children who were reunified this year, how many returned home within 12 months? The HHS standard dictates that at least 32 percent of the children adopted in a given year must have been adopted within 24 months of entering foster care.

The researchers say this skews the picture of what is happening with foster children in a state, because the measure counts only those who were adopted or reunited with families. In a paper titled “Time to Improve on a Good Idea,” three researchers from the Center for Social Services Research in Berkeley, Calif., and the Chapin Hall Center for Children at the University of Chicago say HHS should add “entry cohort” data into the mix – that is, data tracking children from the time they enter care.

One of the researchers, Fred Wulczyn of Chapin Hall, explains it like this: “If you were studying cancer therapy, would you follow only those patients who survived or only those patients who died to understand whether a given therapy was effective? Or would you follow all of the people who received treatment from the time the treatment started? The answer is obvious.”

The writers say HHS should track all children for 24 months from their entry into a child welfare system. Some states collect such data for themselves, and the GAO report urged HHS to consider using longitudinal data for states that have it.
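The difference between the two ways of counting can be shown with a small sketch. The records below are hypothetical and do not reflect the real AFCARS file layout; they exist only to contrast the exit-cohort measure HHS uses with the entry-cohort tracking the researchers propose.

```python
from datetime import date

# Hypothetical case records (not the AFCARS format): entry date plus an
# adoption date, or None if the child was never adopted.
cases = [
    {"entered": date(2001, 3, 1), "adopted": date(2002, 9, 1)},   # 18 months
    {"entered": date(1998, 6, 1), "adopted": date(2003, 6, 1)},   # 60 months
    {"entered": date(2001, 1, 1), "adopted": None},               # still in care
    {"entered": date(2001, 7, 1), "adopted": date(2003, 1, 1)},   # 18 months
]

def months_between(start, end):
    return (end.year - start.year) * 12 + (end.month - start.month)

def exit_cohort_rate(cases, year):
    """Of the children adopted in `year`, the share adopted within 24
    months of entering care -- the measure behind the 32 percent standard."""
    adopted = [c for c in cases if c["adopted"] and c["adopted"].year == year]
    quick = [c for c in adopted
             if months_between(c["entered"], c["adopted"]) <= 24]
    return len(quick) / len(adopted)

def entry_cohort_rate(cases, year):
    """Of the children who entered care in `year`, the share adopted
    within 24 months -- the tracking the researchers propose."""
    entered = [c for c in cases if c["entered"].year == year]
    quick = [c for c in entered
             if c["adopted"] and months_between(c["entered"], c["adopted"]) <= 24]
    return len(quick) / len(entered)
```

In this toy data, one long-stay child adopted in 2003 drags the exit-cohort rate for that year down to 50 percent, while tracking everyone who entered care in 2001 tells a different story – which is also why a state that works hard to adopt out long-stay children can look worse on the exit-based standard.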

Long-time foster children: Researchers say that for states that pushed for the adoption of children who had been in foster care a long time, such as Illinois, the measure used by HHS punishes good behavior. That’s because by trying to find adoptive homes for children who have been in care for at least four years, a state hurts its chances of meeting the standard that 32 percent of its adopted children be adopted within two years.

Critics say this is a distorted measure that doesn’t accurately reflect how well a state is doing at moving children out of government custody. They warn that to achieve the national standard, a state might reduce its efforts to find adoptive placements for older children.

Compared with what? Some researchers, including Rob Geen, director of child welfare research at the Urban Institute, suggest that states be compared with themselves rather than held to standards based on national data. A baseline could be set for each state and improvement measured from there. This would eliminate the problem of comparing very different state programs.

For example, some states serve large numbers of status offenders, such as truants and runaways, making their foster care re-entry rates higher than in states that don’t serve as many teens. “It would be more useful and fair to look at the same state over time,” Geen says.

This Test Is Best

HHS says it recognizes shortcomings in the measurements, but the CFSR is the best it can do. Department officials say the 50-case review and the measures based on exit data are reasonable, partly because neither is used in isolation to judge any of the 14 outcomes or systemic factors.

To some extent, HHS was damned for not looking at outcomes of cases before CFSRs and is damned now for looking at outcomes in too few cases.

HHS doesn’t claim that the 50 case studies produce a statistically valid sample. Officials say the review gives a sense of the kind of practices occurring in a state. They say every child and family brought to the attention of the agency should be properly served, and the case reviews indicate whether that is happening.

They concede that the data sources being used, such as AFCARS, were not intended to serve in an outcomes-based evaluation; they were designed to take a census of children in the child welfare system, so their usefulness in this new role is limited.

Tracking entry cohorts has limitations as well, the officials say. An entry cohort would have to be followed for at least two years to determine what percentage of the children are adopted within 24 months. Also, many state systems are not set up for such tracking. Some struggle just to produce accurate information for AFCARS.

Orr, the HHS associate commissioner, notes that processes are built into the CFSRs for states to challenge negative findings.

Those looking for fundamental changes appear headed for disappointment. Susan Mitchell-Herzfeld, director of the bureau of evaluation and research for the New York State Office of Children and Family Services, wanted HHS to let New York measure adoption and reunification rates based on entry cohort data, just as the GAO, Wulczyn and his fellow researchers suggest.

“We had a number of meetings with the federal officials where we talked about problems with the federal measures,” Mitchell-Herzfeld says. She says HHS rejected any substitutions.

Some See Improvements

Despite the complaints, child welfare administrators and observers say the CFSRs may help to improve child welfare systems. While systems around the country are instituting significant changes for a variety of reasons, the CFSR “is what seems to be, in some cases, driving the changes,” says Christian at the NCSL.

The CFSR process overall “was extremely valuable, as much as it was a lot of hard work,” says Sheila Duranleau, policy and planning chief for the family services division of Vermont’s Department for Children and Families. “It helped us focus on some things we knew we had to pay attention to but had not because of time and resources.”

Vermont is one of eight states that have been spurred by the reviews to “beef up their quality assurance efforts,” according to Christian. Vermont worked with the Children’s Research Center, a division of the National Council on Crime and Delinquency, to create a system to evaluate the quality of its services and improve documentation, and it created an ongoing self-evaluation process.

But the reviews are not bringing states more money for improvements. “Most states are doing this without any new resources,” Christian says. Pennsylvania officials told GAO researchers that a state budget shortfall would leave them with no additional money to implement the state’s improvement plan.

The NCSL says three state legislatures provided more funding for child welfare because of the reviews: Alaska, West Virginia and Wyoming. The money was used to hire more staff and decrease caseloads, Christian says.

Mitchell-Herzfeld in New York believes that if every state approached HHS about using alternative measures, such as entry cohort data, HHS would agree. “I think they are going to have to adjust and accommodate this,” she says.

In the meantime, states are working to implement their improvement plans to the satisfaction of the federal government. James Payne, director of the Department of Child Services in Indiana, which received a letter of commendation from HHS for its improvement plan and no longer faces financial penalties, says the process was valuable.

“There is a tendency to inertia, to keep doing what you are doing,” he says. “The review said we cannot do that any more.”


Sheila Duranleau, Policy and Planning Chief
Vermont Family Services Division
Department for Children and Families
(802) 241-2669

Susan Mitchell-Herzfeld, Director
Bureau of Evaluation and Research
New York State Office of Children and Family Services
(518) 474-9486

Barbara Needell, Principal Investigator
Child Welfare Research Center
University of California at Berkeley
(510) 642-1893

Susan Orr, Associate Commissioner
Children’s Bureau
U.S. Administration on Children, Youth and Families
Washington, D.C.
(202) 619-0257

Mark Testa, Director
Children and Family Research Center
University of Illinois at Urbana-Champaign
(217) 244-1029

Fred Wulczyn, Research Fellow
Chapin Hall Center for Children
University of Chicago
(773) 753-5900


Youth Today is the only independent, internationally distributed digital media publication that is read by thousands of professionals in the youth service field.

Copyright © 2018 Youth Today. Published by Center for Sustainable Journalism, Kennesaw State University, 1200 Chastain Blvd. Suite 310, Kennesaw GA 30144
