Department of Sociology, CUNY Queens College, New York, NY

Replicating Estimates of Household Net Worth in 2022 SCF

Replicating population estimates using the Survey of Consumer Finances 2022

This post replicates and assesses R-based methods for analyzing data from the Survey of Consumer Finances. It assesses population estimates of household net worth percentiles using methods that account for the data's complex survey design and multiple imputation.1 The note is intended to walk readers through the process of obtaining population percentile estimates from these data using officially sanctioned scripts. In addition to the conventionally recommended R code, I generate estimates using code that I would ordinarily apply in such a context, to see whether it replicates the results in published reports or those obtained using the recommended scripts.

I compare reported values from Federal Reserve-published reports (Aladangady et al. 2023) with those obtained through two methods: (1) the R scripts by Anthony Damico2 recommended in the data documentation, and (2) scripts that I implemented based on my own reading of best practices. Although the estimates obtained using Damico's scripts differ from the Federal Reserve-published figures, they replicate the results that I obtained through my own effort to employ standard best practice.

My conclusion is that, although there can be discrepancies between the Federal Reserve-published figures and those obtained using Fed-endorsed analytical techniques, the discrepancies are small to the point of being non-substantive, and are readily explained by the element of subjective judgment in statistical analysis that non-practitioners often do not realize is part of good practice. My assessment concludes that the Damico scripts follow best practice and render quality results.

Background

The Survey of Consumer Finances (SCF) is a high-quality, nationally representative survey of U.S. household finances published by the United States Federal Reserve. The survey has been collected for decades, but its modern incarnation has run triennially since 1989. Its data can be an invaluable source of information about the income, expenditures, assets, debts, and overall financial situation of U.S. households. Such information can help inform planning and decision-making in fields where households' financial situation is germane, such as contemplating government policies (e.g., as in Cohen 2017), assessing marketing or human resource strategies, and a range of other applications.

The data is delivered in a complex structure that defies simple and direct application of standard data analysis methods. It was collected using a complex sampling scheme and distributed with a missing-data imputation scheme, both of which the analyst must account for in data management and processing. Below, I detail the specifics of these considerations. The reader might consult Heeringa et al. (2017) for a general introduction to the analysis of complex survey data, and Lumley (2010) for the implementation of these methods in R. I recommend Allison (2001) for an accessible introduction to the basics of analyzing data with missing values, Carpenter et al. (2023) for a more advanced treatment, and Little and Rubin (2019) as a canonical text in this field.

Data and Methods

This section notes the practicalities of accessing, preparing, and analyzing these data.

Accessing the Data

Each year's data is distributed publicly via the Internet. It is delivered as three tables. The first, "main" table contains the data collected in the survey. The second, "summary" table has variables derived from the main table; for example, it contains a net worth estimate calculated from balance sheet items in the main data table. The third table is a set of replicate weights designed to reweight observations to mitigate the effects of sample bias while retaining respondent anonymity.
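As an illustration, the sketch below shows one way to pull the three 2022 tables into an R session. The URLs and file names are assumptions based on the Fed's SCF download page at the time of writing and should be verified there; the haven package reads the Stata-format versions of the files.

```r
# A minimal sketch of one way to load the three 2022 SCF tables into R.
# The URLs and file names are assumptions to verify on the SCF download page:
# https://www.federalreserve.gov/econres/scfindex.htm
library(haven)  # read_dta() reads the Stata-format distributions

read_scf_zip <- function(url) {
  tf <- tempfile(fileext = ".zip")
  download.file(url, tf, mode = "wb")
  files <- unzip(tf, exdir = tempdir())
  read_dta(grep("\\.dta$", files, value = TRUE)[1])
}

scf_main    <- read_scf_zip("https://www.federalreserve.gov/econres/files/scf2022s.zip")
scf_summary <- read_scf_zip("https://www.federalreserve.gov/econres/files/scfp2022s.zip")
scf_rw      <- read_scf_zip("https://www.federalreserve.gov/econres/files/scf2022rw1s.zip")
```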

Structure of the Data

The structure of this data is complex due to its sample-correction, anonymization, and missing-data imputation methods. The main and summary data tables in the 2022 data include five imputations for each of the 4,595 households represented. The release also contains a table of replicate weights corresponding to each of the households studied in this set. A household is a living arrangement in which an economically dominant individual or couple coalesces into a single economic unit acting under the general direction of the household head(s). It can be a nuclear or extended family financed by a working-age couple, a single individual living alone, a pair financing a joint livelihood, or any number of other configurations.
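A quick structural check makes this layout concrete. This sketch assumes the tables loaded above and that, as in the public files, `y1` is the implicate-level case identifier and `yy1` the household identifier:

```r
# Structural checks on the summary table (five implicates per household).
nrow(scf_summary)                 # expect 5 x 4,595 = 22,975 rows
length(unique(scf_summary$yy1))   # expect 4,595 distinct households
```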

Replicate Weights: Anonymized Weighting

Respondents were chosen through one of two sampling mechanisms. Roughly two-thirds were selected through a geography-based clustering system in which allocations of respondents are randomly distributed across progressively narrower geographic regions until individual households are chosen. The remaining third was randomly sampled from IRS tax records to which the samplers were given special access, in part to oversample wealthy families (Bricker et al. 2016).

Our analysis of this data must account for the fact that it was built on a stratified and clustered sampling mechanism (see Heeringa et al. 2017). These mechanisms violate the basic assumptions that our analytical units had an equal chance of being chosen and that they are independent of one another. Dependencies among units can lead to underestimated standard errors and anticonservative significance tests. Distortions to respondents' probability of inclusion generate parameter estimates that are not properly calibrated to units' true representation in the target population. This is certainly true in this case, because the SCF deliberately oversamples the wealthy.

One problem is that crafting a correction requires detailed data on the respondents, so that we can make informed guesses about whether a particular respondent is over- or under-sampled relative to their prevalence in the population. However, the more detail the data offer, the greater the chance that people can be identified from them, especially if they have uncommon blends of demographic, geographic, and personal financial characteristics. The response is to craft replicate weights, which can be understood as the product of multiple resamples of the data whose combination ultimately adjusts each individual observation's weight to its representation in the population. The method replaces detailed information that could make observations identifiable with (presumably) impossibly complicated permutations of the sample.

The weights corresponding to the households represented in the data are given in the replicate weights table distributed with the data. Each row corresponds to a distinct household and applies to each of that household's five rows of imputed data in the main and summary data tables.
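For illustration, a replicate-weight design for a single implicate might be declared as follows. The weight variable `wgt`, the 999 replicate-weight columns, and the `1/998` scaling constant follow common treatments of the public files (including Damico's scripts), but should be treated as assumptions to verify against the codebook.

```r
# A hedged sketch: declaring the SCF replicate-weight design for one implicate.
library(survey)

imp1_design <- svrepdesign(
  weights          = ~wgt,          # household sampling weight (summary file)
  repweights       = scf_rw[, -1],  # assumed: first column is a case ID
  data             = scf_imp1,      # one of the five implicates
  type             = "other",       # SCF weights fit none of the named types
  combined.weights = TRUE,
  scale            = 1,
  rscales          = rep(1 / 998, 999)
)
```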

Missing Data

The SCF uses multiple imputation, a method in which the analyst simulates missing data based on relationships within the observed data. The method estimates missing values using a multivariate model, but creates multiple versions of the imputed sets with randomness injected into the missing-data estimates to address the concern that imputation artificially strengthens the relationships upon which the imputation model was built. In this set, the analysts created five different imputations, each of which reproduces the observed values but contains different, randomness-infused imputed values.

When estimating linear statistics, like sample means or many types of regression coefficients, the process for yielding population estimates from the five sets is as follows. The parameter estimate is the mean of the estimates from the five (or however many) imputed sets:

Coefficient Estimates. The combined estimate of the mean is calculated as the average of the imputed means.

Variance Estimates. The combined estimate of variance for the estimate takes into account both the within-imputation variance and the between-imputation variance.
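Concretely, if the $m$ imputed sets yield estimates $\hat{\theta}_1,\dots,\hat{\theta}_m$ with estimated sampling variances $W_1,\dots,W_m$, Rubin's rules give

$$
\bar{\theta} \;=\; \frac{1}{m}\sum_{i=1}^{m}\hat{\theta}_i,
\qquad
T \;=\; \bar{W} + \left(1 + \frac{1}{m}\right)B,
$$

where $\bar{W} = \frac{1}{m}\sum_{i=1}^{m} W_i$ is the within-imputation variance and $B = \frac{1}{m-1}\sum_{i=1}^{m}\bigl(\hat{\theta}_i - \bar{\theta}\bigr)^2$ is the between-imputation variance.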

These formulas allow for the calculation of combined estimates that reflect the uncertainty due to missing data by including both the within-imputation variability and the variability across different imputations.

For Percentile Estimates. This analysis focuses on estimating percentile scores, whereas Rubin's Rule is conventionally applied to linear statistics. I did not find much literature engaging the issue of estimating percentiles from multiply imputed data. My analysis finds that the point estimates obtained using Lumley's R package (and thus the commonplace practical strategy for analyzing such data in R) replicate the results obtained with a direct application of Rubin's Rule.

Comparing the Performance of Scripts

My goal in this analysis was to ensure that standard practice for analyzing the SCF in R renders acceptable analytical results; the question was prompted by my initial inability to replicate Federal Reserve-published figures. This post provides an account of that analysis. As a first check, I confirmed that Damico's recommended R scripts replicate the results obtained from my translation of the SAS script provided in the official documentation.

I then reanalyzed the data as I would a set of this type if a recommended script were not provided. The main differences between Damico's scripts and mine lie in how the data are structured. I broke the individual imputations into five data tables, analyzed each with Lumley's (2010) 'survey' package, and recombined the results using Rubin's Rule. Damico's scripts instead process the data using the 'imputationList()' operation in the 'mitools' package and employ a customized function, built on the 'survey' package, to estimate across the five imputations.
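The sketch below contrasts the two workflows. It assumes the five implicates are available as data frames `scf_imp1` through `scf_imp5` containing the summary-extract variables (`networth`, weight `wgt`) alongside the replicate-weight table `scf_rw` from above. The second workflow is a simplified stand-in for Damico's approach, whose scripts use a customized MIcombine-style helper rather than the bare point-estimate average shown here.

```r
# Two ways to estimate net worth percentiles across the five implicates.
library(survey)
library(mitools)

imps <- list(scf_imp1, scf_imp2, scf_imp3, scf_imp4, scf_imp5)
qtl  <- c(0.25, 0.50, 0.75, 0.90)

make_design <- function(d)
  svrepdesign(weights = ~wgt, repweights = scf_rw[, -1], data = d,
              type = "other", combined.weights = TRUE,
              scale = 1, rscales = rep(1 / 998, 999))

# Workflow 1 (my reanalysis): five separate designs, with the point estimate
# taken as the mean across implicates, per Rubin's Rule.
per_imp <- sapply(imps, function(d)
  coef(svyquantile(~networth, make_design(d), quantiles = qtl)))
rowMeans(per_imp)

# Workflow 2 (Damico-style): one design declared over a mitools imputationList.
mi_design <- svrepdesign(weights = ~wgt, repweights = scf_rw[, -1],
                         data = imputationList(imps),
                         type = "other", combined.weights = TRUE,
                         scale = 1, rscales = rep(1 / 998, 999))
mi_res <- with(mi_design, svyquantile(~networth, quantiles = qtl))
rowMeans(sapply(mi_res, coef))
```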

Empirical Results

Our analysis begins by comparing the empirical results obtained for estimates of the 25th, 50th, 75th, and 90th percentile values of U.S. household net worth in 2022. Table 1 (below) compares the results yielded by Damico's scripts and by my own implementation of standard practice with the official estimates from Table B.2 in Aladangady et al. (2023).

The estimates in Aladangady et al. do not match those obtained in the Damico script. This discrepancy was the initial impetus of this analysis. Damico's results match those that I obtained as the mean of the five imputed sets' individual population percentile estimates.

Table 1. Estimated percentiles of U.S. household net worth, 2022 (dollars)

                    Aladangady et al.   Damico Scripts   Reanalysis
   25th Percentile             27,100           27,016       27,016
   50th Percentile            192,900          192,084      192,084
   75th Percentile            658,900          658,340      658,340
   90th Percentile          1,938,000        1,920,758    1,920,758

The results obtained using the R scripts do not match the officially-published results. Although discrepant, they are very close to those reported by Aladangady et al.: the estimates differ by between 0.08% and 0.9% across the four percentiles obtained here. Are these discrepancies a signal of something problematic? The official documentation notes that such discrepancies can occur even with rigorous estimates:

Results users may obtain from using this release of the 2022 SCF data may differ from those reported in this article for several reasons. First, a small number of the analysis weights used in that article may have been altered somewhat to provide robust estimates of the detailed categories shown. In brief, the data were examined for extreme outliers, and where a given case was overly influential in determining an outcome, the weight was trimmed and other weights were inflated to maintain a constant population. Second, as noted below, the public version of the data has been systematically altered to minimize the likelihood that unusual individual cases could be identified. Our analysis of the public data set suggests that these changes should not alter the conclusions of reasonable analyses of the data. Finally, over time we correct errors that we find in the data set. In our past experience, the effects of such errors on the estimates have been quite small.

This is consistent with best practice, as analysts should watch for and correct outliers and similar sources of distortion. Without access to the confidential data that the Federal Reserve's analysts used, there is no way to verify or reproduce their decisions, but the discrepancies are so small as to be immaterial.

Conclusion

In this reanalysis of the household net worth percentiles estimated from the Survey of Consumer Finances, I am left confident in the quality of Anthony Damico's scripts. Although they do not exactly replicate official reports, they render effectively identical results, and they likely represent the best an analyst can do with the public release set.

Works Cited

  • Aladangady, Aditya, Jesse Bricker, Andrew C. Chang, Sarena Goodman, Jacob Krimmel, Kevin B. Moore, Sarah Reber, Alice Henriques Volz, and Richard A. Windle. 2023. Changes in U.S. Family Finances from 2019 to 2022: Evidence from the Survey of Consumer Finances. Washington, DC: Board of Governors of the Federal Reserve System.
  • Allison, Paul D. 2001. Missing Data. Thousand Oaks, CA: Sage.
  • Bricker, Jesse, Alice Henriques Volz, Jacob Krimmel, and John Sabelhaus. 2016. Measuring Income and Wealth at the Top Using Administrative and Survey Data. Brookings Institution.
  • Carpenter, James R., Jonathan W. Bartlett, Tim P. Morris, Angela M. Wood, Matteo Quartagno, and Michael G. Kenward. 2023. Multiple Imputation and Its Application. John Wiley & Sons.
  • Cohen, Joseph Nathan. 2017. Financial Crisis in American Households: The Basic Expenses That Bankrupt the Middle Class. Santa Barbara: Praeger.
  • Heeringa, Steven G., Brady T. West, and Patricia A. Berglund. 2017. Applied Survey Data Analysis. CRC Press.
  • Little, Roderick J. A., and Donald B. Rubin. 2019. Statistical Analysis with Missing Data. John Wiley & Sons.
  • Lumley, Thomas. 2010. Complex Surveys: A Guide to Analysis Using R. John Wiley & Sons.

  1. Net worth is the money value of one's personal assets, less that of one's debts.↩︎
  2. Mr. Damico’s repository of R scripts to analyze major surveys has been a boon to our graduate students at Queens College, and has been of great help to me as well. It is one of those projects in which someone makes a very big contribution to the research community, but does so in a way that is not registered in the academy’s formal bookkeeping mechanisms. Thank you, Mr. Damico.↩︎

Download the Report in PDF Format

Archive

An archive of this research's data and Markdown file is available on OSF.
