Biases in the measurement of hidden populations: A pre-analysis plan

Authors

Georgiy Syunyaev and Macartan Humphreys

Published

April 17, 2023

Abstract
We describe our pre-analysis plan for the PRIF (Prevalence Reduction Innovation Forum) meta-analysis. This plan has been written prior to accessing any results from any of the constituent studies. We provide formal descriptions of the individual designs used by the collection of studies taking part in the PRIF initiative as well as the design for the meta-analysis. These descriptions provide a characterization of sampling and analysis strategies as well as background assumptions about context. We report on simulation results assessing the scope for learning across studies. To implement simulations we developed an [R] package, hiddenmeta, that can implement all sampling procedures and all estimation approaches used by studies. We emphasize that simulation results depend on characterizations of study sites to the extent that these were known prior to implementation.

Motivation

The PRIF initiative brings together multiple teams seeking to assess the prevalence of hidden populations of different types across eight study areas using a variety of methods. The meta-analysis seeks to assess what can be learned across these studies about the performance of different procedures.

We provide detailed formal descriptions of the individual study designs used by the collection of studies taking part in the PRIF (Prevalence Reduction Innovation Forum) initiative as well as the design for the meta-analysis.

In particular we “declare” each of the designs in code, which means providing a characterization of sampling and analysis strategies as well as background assumptions about context. These characterizations are complete in the sense that they provide sufficient information to allow us to study key aspects of the individual strategies from sampling through to analysis. In addition, we describe a complete design for the meta-analysis component, which focuses on the relative propensities of strategies to over- or under-estimate prevalence.

To implement both the individual study and meta-analyses design declarations we developed an [R] package, hiddenmeta, that can implement all sampling procedures and all main estimation approaches that can be implemented based on the data collected by individual study teams, drawing on existing [R] packages used in the field wherever possible.

Objectives

The goal of the PRIF initiative is to build a global community of researcher-learners in the science of human trafficking prevalence estimation and specifically to:

  1. document the robustness of various methodological approaches in human trafficking prevalence research;

  2. identify and build the capacity of human trafficking teams in the design, testing, and dissemination of human trafficking prevalence data.

This Pre-Analysis Plan contributes to the first goal by providing a meta-analysis framework that allows for systematic comparison of various methodological approaches in human trafficking prevalence research. The tools that we have developed for this purpose are fully contained in the hiddenmeta [R] package.

Full documentation on the package, along with more technical detail and references for all procedures used, is available at gsyunyaev.com/hiddenmeta

In what follows we provide details on each step of the meta-analysis. First, we describe the model for the meta-analyses we plan to conduct, i.e. how individual study data are aggregated to allow for estimation of the relative performance of methods used in human trafficking prevalence studies. Next, we describe the quantities of interest (inquiries) we can estimate both at the individual study level and at the meta-analysis level. Finally, we describe the “answer” strategies, i.e. how we estimate each quantity of interest at both levels.

Throughout this document we refer to the target human trafficking group of interest as the hidden group, and to groups whose prevalence is available via official statistics or can be directly estimated as known groups. We refer to the group which serves as the denominator for prevalence as the population, assuming that each individual study allows us to estimate the proportion of the hidden group of interest in this group.

Our description of our analysis plan follows the MIDA framework (Blair, Coppock, and Humphreys forthcoming). Using this framework we present in turn the reference Model (describing assumptions about the study context), the Inquiries, the Data strategy and the Answer strategy. Figure 1 presents the structure of this document.

%% https://quarto.org/docs/authoring/diagrams.html

flowchart TD
  A{{"Model (Population)"}} ==> B{{"Inquiries (Estimands)"}}
  B ==> C{{"Data strategy (Samples)"}}
  C ==> D{{"Answer strategy (Estimation)"}}
  click A "#sec-model" _blank
  click B "#sec-inquiries" _blank
  click C "#sec-data" _blank
  click D "#sec-estimation" _blank

Figure 1: Structure of the document

We highlight that, unlike many pre-analysis plans, our analysis focuses on estimation rather than hypothesis testing. The analysis plan has been developed prior to accessing data and seeks to communicate our primary intentions when we access these data. As the type of analysis we implement here is novel, we think it plausible that improved methods might be developed or identified when we turn to implementation. For this reason the pre-analysis plan serves primarily as a communication tool and as a benchmark against which future innovations can be assessed.

Model

To assess the performance of the individual designs and the meta-design we need background assumptions about study contexts. This includes assumptions about the general nature of network structures in different sites. It also includes assumptions about response patterns across sites, which in turn have implications for how answers from different strategies relate to each other.

The background information we gathered from teams included both information about the sites — specifically priors on key characteristics of the population, known groups and hidden group — as well as information on sampling and data collection strategies.

Table 1 describes a set of unobserved parameters for each study. We gathered priors on these, which are then used both to describe the underlying population and in estimation.

Unobserved Study Parameters

Parameter Description
degree_average Guess about average number of connections in the population
degree_average_hidden Guess about average number of connections to hidden group
p_visibility Guess about probability of recognizing member of hidden group (relevant for NSUM methods and akin to transmission bias)
hidden_prev Guess about prevalence of all hidden groups considered (numerator for prevalence)
p_service_use Guess about probability of service utilization by hidden group members (for service multiplier)
rho Guess about correlation between membership in known and hidden groups
edge_within_hidden Guess about average probability of connection within hidden groups
edge_within_known Guess about average probability of connection within known groups
edge_between Guess about average probability of connection between hidden and known group members
Table 1: Unobserved Study Parameters

At the meta-analysis stage we rely on assumptions about stability of the relative bias of sampling-estimation pairs:

  1. For identification purposes, we assume that Horvitz-Thompson estimates of the prevalence or size of the hidden group based on a proportional (PPS) sample serve as a “benchmark”, meaning that those estimates are unbiased (though not necessarily precise) for the true prevalence or size of the hidden group in the population.
  2. The relative bias of a specific sampling-estimation pair is multiplicative, meaning that it over- or under-states the true prevalence or size of the hidden group by a certain percentage relative to the benchmark.
  3. Given the limited overlap in sampling methods used across individual studies and the small number of studies, we assume that the relative bias of each sampling-estimation pair is constant across studies. In exploratory analyses we also examine structured heterogeneity.
  4. We estimate the relative biases of methods used to estimate size and prevalence of hidden groups separately due to incompatibility of estimates produced by these methods.
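To illustrate the logic of assumptions 1–3, the following minimal sketch (in Python, with made-up numbers; all factors and study values here are hypothetical, and hiddenmeta itself is written in [R]) shows how a constant multiplicative bias factor, anchored by the PPS-HT benchmark at 1, can be recovered by comparing a method's estimate to the benchmark in any study:

```python
# Multiplicative relative-bias model (illustrative numbers only):
# each sampling-estimation pair m scales the truth by a constant
# factor b_m across studies; the PPS-HT benchmark has b = 1.
true_prev = {"study_1": 0.20, "study_2": 0.05}
bias = {"pps_ht": 1.0, "rds_sspse": 1.5}  # hypothetical factors

def expected_estimate(study, method):
    """Expected estimate under the model, ignoring sampling noise."""
    return true_prev[study] * bias[method]

# Comparing a method to the benchmark recovers its relative bias,
# and (by assumption 3) the ratio is the same in every study.
ratio_1 = expected_estimate("study_1", "rds_sspse") / expected_estimate("study_1", "pps_ht")
ratio_2 = expected_estimate("study_2", "rds_sspse") / expected_estimate("study_2", "pps_ht")
print(ratio_1, ratio_2)  # both 1.5
```

Because the ratio is constant across studies, a pair of methods need not co-occur in every study for their biases to be comparable; overlaps across studies chain the comparisons together.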

Using the hiddenmeta package we can draw the data for a single (or multiple) study population(s) using a pre-defined [R] list object or by reading from an external source. The study declaration object recognized by the package consists of four primary components:

  • pop – List of population simulation function and parameters of population from which samples are drawn,
  • sample – Nested list of sampling functions and parameters for each sampling strategy used in the study,
  • inquiries – List of the function and arguments used to calculate the quantities of interest we try to estimate in the study,
  • estimators – Nested list of estimation functions and corresponding sampling strategies used in the study to estimate quantities of interest.

The code below shows the definition of an example study object, study_1, that can be recognized and diagnosed by hiddenmeta:

Code
## STUDY 1
study_1 <- 
  list(
    pop = 
      list(
        handler = get_study_population,
        
        # network structure setup
        network_handler = sim_block_network,
        network_handler_args = 
          list(N = 1000, K = 2, prev_K = c(known = .1, hidden = .2), rho_K = 0,
               p_edge_within = list(known = c(0.1, 0.1), hidden = c(0.1, 0.5)),
               p_edge_between = list(known = 0.02, hidden = 0.02),
               directed = FALSE, chunk_size = 1000),
        
        # groups
        group_names = c("known", "hidden"),
        
        # probability of visibility (show-up) for each group
        p_visible = list(known = 1, hidden = .8),
        
        # probability of service utilization in hidden population
        # for service multiplier
        add_groups = 
          list(
            service_use = "rbinom(.N, 1, 0.25)",
            "paste0('loc_', 1:10) := lapply(rep(.2, times = 10), function(add) rbinom(.N, 1, 0.05 + hidden * add))",
            target = 0.3,
            "paste0('known_', 2:10) := lapply(2:10, function(x) rbinom(.N, 1, 0.3))")
      ),
    sample = 
      list(
        rds = list(handler = sample_rds,
                   # RDS parameters
                   sample_label = "rds",
                   hidden_var = "hidden", # default
                   n_seed = 20,
                   n_coupons = 3,
                   add_seeds = 3,
                   target_type = "sample",
                   target_n_rds = 60),
        tls = list(handler = sample_tls,
                   sample_label = "tls",
                   hidden_var = NULL,
                   # TLS sampling parameters
                   target_n_clusters = 5,
                   target_cluster_type = "fixed",
                   target_per_cluster = 15,
                   clusters = paste0("loc_", 1:10)),
        pps = list(handler = sample_pps,
                   sample_label = "pps",
                   # prop sampling parameters
                   sampling_frame = NULL,
                   strata = NULL,
                   cluster = NULL,
                   target_n_pps = 150),
        lts = list(handler = sample_rds,
                   sample_label = "ltsall",
                   hidden_var = "hidden",
                   n_seed = 50,
                   n_coupons = Inf,
                   n_waves = 3,
                   target_type = "waves",
                   target_n_rds = 150,
                   sampling_frame = "all"),
        lts2 = list(handler = sample_rds,
                    sample_label = "ltsmulti",
                    hidden_var = "hidden",
                    n_seed = c(15, 15, 15),
                    n_coupons = Inf,
                    n_waves = NULL,
                    target_type = "sample",
                    target_n_rds = c(50, 50, 50),
                    sampling_frame = "all")
      ),
    inquiries = list(handler = get_study_estimands,
                     in_groups = "^target$",
                     out_groups = "^known$", 
                     hidden_var = "hidden"),
    estimators = 
      list(
        rds = 
          list(ss = list(handler = get_study_est_ss,
                         sampling_frame =  "hidden",
                         hidden_var = "target", # estimates prevalence within RDS
                         n_coupons = 3,
                         total = 1000,
                         prefix = "rds",
                         label = "rds_ss"),
               sspse = list(handler = get_study_est_sspse,
                            prior_mean = 200,
                            mcmc_params = list(interval = 5, burnin = 2000, samplesize = 500),
                            total = 1000,
                            prefix = "rds",
                            label = "rds_sspse"),
               chords = list(handler = get_study_est_chords, 
                             type = "mle",
                             seed_condition = "rds_from == -999",
                             n_boot = 100,
                             prefix = "rds",
                             label = "rds_chords"),
               multiplier = list(handler = get_study_est_multiplier, 
                                 service_var = "service_use",
                                 seed_condition = "rds_from == -999",
                                 n_boot = 100,
                                 prefix = "rds",
                                 label = "rds_multi")),
        tls =
          list(ht = list(handler = get_study_est_ht,
                         hidden_var = "hidden",
                         weight_var = "tls_weight",
                         survey_design = ~ tls_loc_sampled,
                         prefix = "tls",
                         label = "tls_ht"),
               nsum = list(handler = get_study_est_nsum,
                           hidden_var = "hidden_visible_out",
                           known_vars = paste0(c("known", paste0("known_", 2:10)), "_visible_out"),
                           total = 1000,
                           prefix = "tls",
                           label = "tls_nsum",
                           weight_var = "tls_weight",
                           survey_design = ~ tls_loc_sampled,
                           n_boot = 1000),
               recap = list(handler = get_study_est_recapture,
                            capture_parse = 
                              "strsplit(x = unique(na.omit(tls_locs_sampled)), split = ';')[[1]]",
                            # sample_condition = "tls == 1",
                            model = "Mt",
                            hidden_variable = "hidden",
                            label = "tls_recap"),
               mse = list(handler = get_study_est_mse,
                          capture_parse = 
                            "strsplit(x = unique(na.omit(tls_locs_sampled)), split = ';')[[1]]",
                          # sample_condition = "tls == 1",
                          method = "stepwise",
                          hidden_variable = "hidden",
                          label = "tls_mse")),
        pps = 
          list(ht = list(handler = get_study_est_ht,
                         hidden_var = "hidden",
                         prefix = "pps",
                         label = "pps_ht"),
               nsum = list(handler = get_study_est_nsum,
                           hidden = "hidden_visible_out",
                           known = paste0(c("known", paste0("known_", 2:10)), "_visible_out"),
                           total = 1000,
                           prefix = "pps",
                           label = "pps_nsum",
                           weight_var = "pps_weight",
                           survey_design = ~ pps_cluster + strata(pps_strata),
                           n_boot = 1000)),
        all = 
          list(recap1 = list(handler = get_study_est_recapture,
                             capture_vars = c("rds", "pps"),
                             model = "Mt",
                             hidden_variable = "hidden",
                             label = "rds_pps_recap"),
               recap2 = list(handler = get_study_est_recapture,
                             capture_vars = c("rds"),
                             capture_parse = 
                               "strsplit(x = unique(na.omit(tls_locs_sampled)), split = ';')[[1]]",
                             model = "Mt",
                             hidden_variable = "hidden",
                             label = "rds_tls_recap"),
               mse1 = list(handler = get_study_est_mse,
                             capture_vars = c("rds", "pps"),
                             method = "stepwise",
                             hidden_variable = "hidden",
                             label = "rds_pps_mse"),
               mse2 = list(handler = get_study_est_mse,
                             capture_vars = c("rds"),
                             capture_parse = 
                               "strsplit(x = unique(na.omit(tls_locs_sampled)), split = ';')[[1]]",
                             method = "stepwise",
                             hidden_variable = "hidden",
                             label = "rds_tls_mse")),
        lts =
          list(link = list(handler = get_study_est_linktrace,
                           total = 1000,
                           strata = "hidden",
                           gibbs_params = list(n_samples = 5L, 
                                               chain_samples = 250L, 
                                               chain_burnin = 50L),
                           priors = list(p_n = 0L, p_l = 0.1, p_b = 1L),
                           prefix = "ltsall",
                           label = "link"))
      )
  )

To draw a population with the parameters declared in the study_1 object we can use the following simple code:

Code
study_population <- do.call(declare_population, study_1$pop)

set.seed(872312)
example_pop <- study_population()

This generates a function, study_population, that in turn generates example_pop, a data frame representing an instance of the type of population we expect to see in study 1.

Figure 2 shows the population data frame produced by the code above, which contains all variables that might be used for sampling or estimation based on the study_1 object.

Figure 2: Example of population data frame

In particular, the population data frame contains the full adjacency matrix, which allows us to retrieve the full network of connections across members of the population with different membership statuses in hidden and known groups. For example, Figure 3 shows in color members of the population who belong to some known and hidden groups at the same time, to just one of those groups, or to neither. We can see that in this example the population has a significant degree of separation between those who do and do not belong to the hidden group.

Figure 3: Example of population network structure (first digit in group - known group membership; second digit in group - hidden group membership)

Inquiries

In this section we describe the set of study- and meta-level quantities of interest (inquiries) that we aim to estimate in the study.

Study level

At the individual study level we aim to estimate the following quantities:

  • hidden_prev – Prevalence of the hidden group considered in the study in the population
  • hidden_size – Size of the hidden group considered in the study
  • known_prev – Prevalence of each of the known groups considered in the study in the population
  • known_size – Size of each of the known groups considered in the study in the population
  • known_hidden_prev – Prevalence of the hidden group in each of the known groups considered in the study
Code
study_estimands <- do.call(declare_inquiry, study_1$inquiries)

example_estimands <- draw_estimands(study_population +  study_estimands)
inquiry estimand
1 known_size 105.000
2 known_prev 0.105
3 hidden_size 202.000
4 hidden_prev 0.202
5 target_size 300.000
6 target_prev 0.300
7 target_size_in_hidden 57.000
8 target_prev_in_hidden 0.282
9 hidden_size_in_known 23.000
10 hidden_prev_in_known 0.219
Table 2: Example of estimands we are interested in at study level

Meta level

In addition to study-level inquiries, the meta-analysis framework we described above will allow us to answer the following inquiries:

  • hidden_size / hidden_prev – What is the size or prevalence of the hidden group considered in each of the studies.
  • bias / bias_a – What is the relative bias of each of the sampling-estimation method pairs compared to the benchmark, Horvitz-Thompson estimation of prevalence or size based on proportional sample (PPS-HT).
  • bias_b – Does the relative bias of each of the sampling-estimation method pairs compared to the benchmark vary with respect to features of study design or context.
Code
draw_estimands(meta_population + meta_inquiry)
inquiry estimand
rel_bias_pps_ht 1.00
rel_bias_pps_nsum 0.60
rel_bias_rds_chords 2.23
rel_bias_rds_pps_recap 0.35
rel_bias_rds_tls_recap 0.02
rel_bias_sd_pps_ht 1.00
rel_bias_sd_pps_nsum 2.00
rel_bias_sd_rds_chords 1.84
rel_bias_sd_rds_multi 0.20
rel_bias_sd_rds_pps_recap 0.27
rel_bias_sd_rds_sspse 6.52
rel_bias_sd_rds_tls_recap 0.05
rel_bias_sd_tls_ht 1.45
rel_bias_sd_tls_nsum 0.72
rel_bias_sd_tls_recap 1.58
rel_bias_tls_ht 4.76
rel_bias_tls_recap 4.23
study_1_hidden_size 198.00
study_2_hidden_size 480.67
study_3_hidden_size 189.33
Table 3: Example of estimands we are interested in at meta level

In addition we wish to assess the error-cost trade-off associated with each method.

Data strategy

Overview of the studies

Table 4 lists the set of studies conducted as part of the PRIF initiative, together with the site of study, the population of interest, and the denominator. We refer to PRIF documentation for a fuller description of study motivations, sampling and estimation procedures, teams, and contexts.

Implementing organization Location Hidden group Population (denominator)
Freedom Fund (FF) Brazil (Recife) Women aged 18-21 who were sex workers before age 18 Women aged 18-21 who are sex workers
Stanford University (Stanford) Brazil Forced labor in agriculture sector Employed in agriculture sector
New York University (NYU) Costa Rica Forced labor in fishing industry Employed in fishing industry
New York University (NYU) Tanzania (Dar es Salaam, Iringa, and Zanzibar) Domestic servitude Domestic workers
NORC Morocco Domestic servitude Domestic workers
Johns Hopkins University (JHU) Pakistan (Sindh) Forced labor in brick kilns industry Employed in brick kilns industry
University of Massachusetts, Lowell (UMass) Tunisia (Tunis) Domestic servitude All domestic workers
RTI USA Forced labor in construction industry All construction workers
Table 4: Studies included in PRIF Initiative

Different teams in the PRIF initiative employ a variety of techniques for measuring the prevalence of human trafficking: different sampling and measurement strategies, as well as different estimation strategies. In particular, approaches vary with respect to:

  1. How and from whom data is gathered, with options including probability sampling (PPS), time location sampling (TLS), respondent driven sampling (RDS), and link-tracing sampling (LTS)
  2. Which measures are taken, for instance information regarding one's own status, beliefs about the status of others in one's network, or information about the timing and order of sampling along a chain
  3. How analysis is done using these measures, with options including Horvitz-Thompson weighting (HT), Network Scale-Up (NSUM), Capture-Recapture estimators (RECAP), Service multiplier approaches (MULTIPLIER), and Bayesian estimation based on RDS/LTS samples (SS, SS-PSE, Chords, Link-Tracing)

Table 5 summarizes the sampling strategies used in the individual studies. 6 out of 8 individual studies in the PRIF Initiative use a combination of PPS/TLS sampling and RDS or LTS sampling. The two remaining studies rely on either PPS or TLS samples.

PPS TLS RDS
Brazil (FF) + - +
Tunisia (UMass) - + -
Brazil (Stanford) + - -
Pakistan (JHU) + - +
Costa Rica (NYU) + - -
Tanzania (NYU) + - -
Morocco (NORC) - + -
USA (RTI) - + -
Table 5: Study Sampling Strategies

Each study in the PRIF Initiative that relies on proportional or time-location sampling collects both data on direct indicators of human trafficking developed by US DoS et al. (2020) and relational data for Network Scale-up estimation. In turn, each study that relies on respondent-driven sampling collects full network data and the timing at which each respondent was recruited, to allow for all the main RDS-based estimation methods discussed in Section 5.1.

Table 6 provides more detail on the known (or at least intended) study parameters gathered from teams. To mitigate study-specific factors that can affect the bias of sampling-estimation methods, the features of the study designs that feed directly into estimation were harmonized across the studies as much as possible.

Study Design Parameters

Parameter Description
All studies
rds/ lts / pps / tls Sampling strategy(ies) used in the study
n Size of overall population (denominator for prevalence)
k_known Number of groups of known size considered
n_known Sizes of all known groups considered
Studies with RDS/LTS sample
n_seeds Initial number of seeds
seed_selection_type How seeds are selected (heterogeneous pool for LTS sample or standard RDS sample)
n_coupons Number of unique coupons given to each participant
target_type Whether the sampling strategy targets sample size or number of waves
target_n_waves Number of waves the sample is limited to
target_n_sample Total target sample size
allow_non_hidden Whether non-members of hidden group are allowed in the study
Studies with PPS/TLS sample
strata Whether stratified sampling is used and if yes number of strata
n_time_locations / cluster Number of time-locations (clusters) in the population
target_n_clusters Target number of time-locations (clusters)
target_cluster_type Type of sampling within each cluster. Can be either proportional or fixed (fixed number of units per cluster)
target_per_cluster Share (for proportional) or integer (for fixed) defining number of respondents per cluster
allow_non_hidden Whether non-members of hidden group are allowed in the study
n_per_cluster Cluster sizes
sampling_weights Sampling weights
target_n_pps Target sample size
Additional parameters
n_service_use Overall service utilization over time among hidden group (for service multiplier)
n_max_recaptures Maximum number of recaptures possible, i.e. locations where data collection occurs (for mark-recapture)
Table 6: Study Design Parameters

Below we show how sampling works in the hiddenmeta package, using the RDS sample from study_1 as an example. We can draw data with a sampling indicator and all corresponding data collected during sampling using the following code:

Code
study_sample_rds <- do.call(declare_sampling, study_1$sample$rds)

example_sample <- draw_data(study_population + study_sample_rds)

The resulting data frame is shown in Figure 4. Note that by default the data includes the whole population but can easily be subset to only the RDS sample using the rds == 1 condition.

Figure 4: Example of population data frame with RDS sample

We can further investigate the RDS sample we drew from the simulated population using the hiddenmeta:::retrieve_graph() function. For example, in Figure 5 we show the network structure by wave of enrollment.

Figure 5: Example of RDS sample network structure (colored by wave of enrollment)

Answer strategy

Individual studies

For each of the studies we aim to analyze all possible sampling-estimation method pairs given the sampling strategies used. Table 7 shows all sampling-estimation method pairs we consider in the meta-analysis. Note that the first three sampling-estimation pairs aim directly to estimate the prevalence of the hidden group within the broader population, while the rest of the methods estimate the size of the hidden group, which can be transformed into a prevalence estimate under the assumption that there is no uncertainty about the size of the population within which the prevalence is being estimated.

Sampling Estimation Method references [R] Package
Prevalence estimation
PPS HT: Horvitz-Thompson estimator of prevalence with re-scaled bootstrap standard errors Rust and Rao (1996) Self-coded + surveybootstrap (Feehan and Salganik 2023)
TLS HT: Horvitz-Thompson estimator of prevalence with re-scaled bootstrap standard errors Rust and Rao (1996) Self-coded + surveybootstrap (Feehan and Salganik 2023)
RDS/LTS SS: Sequential sampling estimator of prevalence Gile (2011) RDS (Mark S. Handcock et al. 2023)
Population size estimation
PPS NSUM: Network scale-up estimator of population size with re-scaled bootstrap standard errors Killworth et al. (1998); Rust and Rao (1996) networkreporting (Feehan and Salganik 2016) + surveybootstrap (Feehan and Salganik 2023)
TLS NSUM: Network scale-up estimator of population size with re-scaled bootstrap standard errors Killworth et al. (1998); Rust and Rao (1996) networkreporting (Feehan and Salganik 2016) + surveybootstrap (Feehan and Salganik 2023)
TLS/LTS RECAP *: Mark-recapture estimator of population size with parametric standard errors Hickman et al. (2006) Rcapture (Baillargeon and Rivest 2007)
TLS/LTS MSE *: Multiple systems estimator of population size with parametric standard errors Chan, Silverman, and Vincent (2021) SparseMSE (Chan, Silverman, and Vincent 2022)
RDS/LTS SSPSE: Bayesian sequential sampling model of population size Mark S. Handcock, Gile, and Mar (2014) sspse (Mark S. Handcock et al. 2022)
RDS CHORDS: Epidemiological model for population size estimation Berchenko, Rosenblatt, and Frost (2017) chords (Berchenko, Rosenblatt, and Frost 2017)
RDS MULTIPLIER *: Service-multiplier population size estimator with bootstrap re-sampling standard errors Hickman et al. (2006); Salganik (2006) Self-coded + surveybootstrap (Feehan and Salganik 2023)
LTS LINK: Link-tracing population size estimator based on special type of RDS Vincent and Thompson (2017); Vincent and Thompson (2022) Self-coded based on Vincent and Thompson (2022)
Table 7: Sampling-estimation pairs

In the meta-analysis part we focus on comparisons of the prevalence/size estimates produced by all the methods listed in the table. Certain types of estimation require additional data collection. Specifically, TLS-RECAP estimation requires recapture data to be collected beyond the standard NSUM and direct indicators. RDS-MULTIPLIER estimation requires data from a provider of a service that is likely to be used by the hidden group of interest, as well as direct questions about that service use asked of hidden group members in the RDS sample. Finally, LTS-LINK estimation requires a special type of RDS sample, a Link-Tracing Sample (LTS), with more initial seeds but fewer waves, and collection of data on recaptures in the network.
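As a rough illustration of the service-multiplier logic (in Python, with hypothetical numbers rather than study data), the estimator divides the service provider's count of hidden-group clients by the share of sampled hidden-group members who report using the service:

```python
def service_multiplier(n_service_records, p_service_use):
    """Hidden-group size = provider's client count / usage share."""
    return n_service_records / p_service_use

# Hypothetical: 50 unique clients on the provider's records, and
# 25% of hidden-group respondents in the RDS sample report use.
est_size = service_multiplier(n_service_records=50, p_service_use=0.25)
print(est_size)  # 200.0
```

The package's implementation additionally bootstraps the usage share to obtain standard errors; the sketch shows only the point estimate.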

Combining Table 5 and Table 7 we can count the cases in which two different sampling-estimation methods are used in the same study. Table 8 shows the counts of such method overlaps, which will serve as the basis for the meta-analyses we propose.

LTS-SS PPS-HT PPS-NSUM RDS-CHORDS RDS-LINK RDS-SS TLS-HT TLS-MSE TLS-NSUM TLS-RECAP
LTS-SS 1 1 2 2 2 1 1 1 1
PPS-HT 5 2 3 4 0 0 0 0
PPS-NSUM 2 3 4 0 0 0 0
RDS-CHORDS 4 4 2 2 2 2
RDS-LINK 5 2 2 2 2
RDS-SS 2 2 2 2
TLS-HT 3 3 3
TLS-MSE 3 3
TLS-NSUM 3
Table 8: How often are pairs of methods used?

We have 2 comparisons for which we can use five or more studies (comparison of NSUM and Horvitz-Thompson methods with data generated by PPS), 4 comparisons for which we can use four studies (involving RDS and PPS approaches primarily), and 8 comparisons for which we can use three studies. The meta-analytic strategy we describe seeks to pool across comparisons so that a comparison of two approaches in one study is nevertheless informative for comparisons of other approaches in other studies. However, we currently limit the meta-analysis to strategies that are used by at least 3 studies.

Estimation examples

In this section we show simple examples of estimates produced from various samples using the example_study object.

Horvitz-Thompson

Code
est_ht_pps <- do.call(declare_estimator, study_1$estimators$pps$ht)

example_ht <- est_ht_pps(example_sample)
estimator           estimate      se  inquiry
hidden_size_pps_ht    220.00  26.381  hidden_size
hidden_prev_pps_ht      0.22   0.026  hidden_prev
Table 9: HT estimates from PPS sample

Network scale-up method

Code
est_nsum_pps <- do.call(declare_estimator, study_1$estimators$pps$nsum)

example_nsum <- est_nsum_pps(example_sample)
estimator             estimate      se  inquiry
hidden_size_pps_nsum    183.65  28.131  hidden_size
Table 10: NSUM estimates from PPS sample

SS estimator

Code
est_ss <- do.call(declare_estimator, study_1$estimators$rds$ss)

example_ss <- est_ss(example_sample)
estimator                     estimate     se  inquiry
target_prev_in_hidden_rds_ss     0.343  0.099  target_prev_in_hidden
Table 11: SS estimates from RDS sample

SS-PSE estimator

Code
est_sspse <- do.call(declare_estimator, study_1$estimators$rds$sspse)

example_sspse <- est_sspse(example_sample)
estimator              estimate      se  inquiry
hidden_size_rds_sspse        99  49.873  hidden_size
Table 12: SS-PSE estimates from RDS sample

Chords estimator

Code
est_chords <- do.call(declare_estimator, study_1$estimators$rds$chords)

example_chords <- est_chords(example_sample)
estimator               estimate      se  inquiry
hidden_size_rds_chords        83  21.673  hidden_size
Table 13: Chords estimates from RDS sample

Mark-recapture estimator

Code
est_recap_tls <- do.call(declare_estimator, study_1$estimators$tls$recap)

example_recap <- est_recap_tls(example_sample)
estimator              estimate      se  inquiry
hidden_size_tls_recap   178.567  10.279  hidden_size
Table 14: Example of Recapture estimates from TLS sample

Multiple systems estimator

Code
est_mse_tls <- do.call(declare_estimator, study_1$estimators$tls$mse)

example_mse <- est_mse_tls(example_sample)
estimator            estimate     se  inquiry
hidden_size_tls_mse   177.317  8.131  hidden_size
Table 15: Example of MSE estimates from TLS sample

Meta level

We use a Bayesian model to assess the relative bias of methods. We describe a model for \(N\) sites and \(K\) estimation methods which could be used at each site.

We are interested in the true prevalence, an \(N\)-dimensional vector \(\alpha\), and in the bias associated with each estimate relative to the benchmark procedure: a \(K\)-dimensional vector \(\beta\) (with the first element constrained to unity).

The following code block describes the Stan model used to estimate \(\alpha\) and \(\beta\) given the findings across studies.

data {
  int<lower=0> N;                      // number of sites
  int<lower=0> K;                      // number of estimation methods
  int<lower=0> n_ests;                 // number of reported estimates
  real<lower=0> alpha_prior_mean[N];   // prior means for true prevalence/size
  real<lower=0> alpha_prior_sd[N];
  real<lower=0> bias_prior_mean[K];    // prior means for method biases
  real<lower=0> bias_prior_sd[K];
  int<lower=0> site[n_ests];           // site index for each estimate
  int<lower=0> design[n_ests];         // method index for each estimate
  vector<lower=0>[n_ests] ests;        // reported estimates
  vector<lower=0>[n_ests] ses;         // reported standard errors
}
parameters {
  vector<lower=0.01>[K] bias;          // relative bias of each method
  vector<lower=1>[N] alpha;            // true prevalence/size in each site
}

model {

  // Likelihood: each estimate is centered at bias[method] * alpha[site]
  target += normal_lpdf(ests | bias[design] .* alpha[site], ses);
  alpha ~ normal(alpha_prior_mean, alpha_prior_sd);
  bias ~ lognormal(log(bias_prior_mean), bias_prior_sd);

}

The model block has three main elements:

  • We assume that the likelihood of a given estimate \(\hat{\alpha}_{ij}\) for estimand \(\alpha_i\) using method \(j\) in location \(i\) is \(\phi(\hat{\alpha}_{ij}\mid\beta_j\alpha_{i}, \hat{\sigma}_{ij})\).
  • We have a (wide) prior on \(\alpha\): \(\alpha_i \sim f(\mu_{\alpha, i}, {\sigma}_{\alpha, i})\), where \(\mu_{\alpha, i}\) is gathered from teams, \(\sigma_{\alpha, i}\) is large, and \(f\) denotes the normal distribution. Note that we assume no relation between different elements of \(\alpha\)—that is, we do not make use of a prior distribution on prevalence “in general.”
  • We have a (wide) prior on \(\beta_j\): \(\beta_j \sim g(\mu_{\beta_j}, {\sigma}_{\beta_j})\), where \(\mu_{\beta_j}\) is gathered from teams, \(\sigma_{\beta_j}\) is large, and \(g\) denotes the log-normal distribution. Note that we assume no relation between different elements of \(\beta\)—that is, we do not make use of a prior distribution about biases of different methods “in general.”

The model thus builds in a critical assumption: the relative bias of a method is constant across sites. It is this bias that is estimated. To be clear, the assumption is not that one method will always produce a higher or lower estimate than another, but that it may be biased in the sense that in expectation it produces a higher or lower estimate. Bias is relative to the bias of the benchmark method, and relative to the truth to the extent that the benchmark method is itself unbiased.
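The multiplicative form of this assumption can be sketched numerically. The following Python sketch is illustrative only (all numbers are made up; the actual analysis uses the Stan model above): estimates are drawn around \(\beta_j\alpha_i\), so the ratio of a method's average estimate to the benchmark's recovers the method's relative bias in every site, whatever that site's true prevalence.

```python
import random
import statistics

random.seed(1)

# Made-up values for illustration: true prevalence in 4 sites and the
# relative bias of 3 methods (method 0 is the unbiased benchmark).
alpha = [0.10, 0.20, 0.05, 0.15]
beta = [1.00, 0.50, 2.00]

def draw_estimate(i, j, se=0.005):
    # Under the model an estimate is Normal(beta[j] * alpha[i], se):
    # bias is multiplicative and constant across sites for a given method.
    return random.gauss(beta[j] * alpha[i], se)

# Average many draws per site; the ratio of method 1 to the benchmark
# recovers beta[1] in every site.
ratios = []
for i in range(4):
    m1 = statistics.mean(draw_estimate(i, 1) for _ in range(2000))
    m0 = statistics.mean(draw_estimate(i, 0) for _ in range(2000))
    ratios.append(m1 / m0)

print([round(r, 2) for r in ratios])  # each ratio close to beta[1] = 0.50
```

This is what allows a comparison of two methods in one site to inform comparisons in other sites.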

To illustrate the performance of this model we imagine a setting with \(N=8\), \(K=6\), bias factors of (\(0.25\), \(0.50\), \(1.00\), \(2.00\), \(4.00\)) for the six designs, and an incomplete set of site-design pairs. Specifically, we imagine that each study deployed only 3 methods, with some methods used as few as three times and some as many as six times, as shown in the table below.

         method 1  method 2  method 3  method 4  method 5  method 6
study 1         0         0         0         1         1         1
study 2         1         0         0         0         1         1
study 3         1         1         0         0         0         1
study 4         1         1         1         0         0         0
study 5         1         1         1         1         0         0
study 6         0         0         1         1         1         1
study 7         0         0         0         1         1         1
study 8         1         0         0         0         1         1
Table 17: Illustration of 8 studies using different sets of methods

The key data used for the meta-analysis from this combination of methods across sites is the set of estimates and their standard errors from each site/method pair, as illustrated below.

site  design  estimate    se
   1       4     13.99   2.5
   1       5     21.65   5.0
   1       6     31.80  10.0
   2       1     22.44   5.0
   2       5     55.12  10.0
   2       6     87.80  20.0
Table 18: Snapshot of simulated data

In this simulation the benchmark estimate is missing for three sites and noisy for others. The posteriors nevertheless recover the actual prevalence well in all sites. In those sites in which a benchmark exists, the posterior estimate is closer to the truth; in those sites in which a benchmark estimate does not exist, a reasonably accurate estimate is still generated.

Figure 6: Estimates of prevalence from benchmark study and pooled analysis

The estimates of bias are also accurate. In this case the truth lies in the credibility interval for each estimate.

Figure 7: Stipulated and estimated biases with 95% credibility intervals

The proof of concept, we highlight, is implemented under the assumption that the relative biases of methods are indeed invariant across sites. Implicit in this assumption is that methods have not been selected in sites on the basis of their expected performance (relative to alternatives). If these assumptions are untenable, a model in which \(\beta\) is allowed to vary across sites can be examined.

In estimating the model on the data received from the studies included in the PRIF Initiative, we plan to rely both on uninformative priors on the main parameters of the model and on qualitative prior beliefs about the biases of individual sampling-estimation strategies elicited from the individual study teams and methods experts participating in the PRIF Initiative. These prior beliefs will be collected prior to estimation of the meta-analysis model described above.

Heterogeneity in bias

The code block below outlines the Stan model used to estimate \(\alpha\) and \(\beta\) based on the results from various studies. This version allows biases to vary with a study-level characteristic (governed by the parameters with priors bias_b_prior_mean and bias_b_prior_sd).

The main distinction in this model is that the amount of bias within a study-method combination depends on both the method and a study-level covariate. The fundamental assumption is now that the benchmark is unbiased for all sites, while other estimates display biases that differ across both method and site.

data {
  int<lower=0> N;                        // number of sites
  int<lower=0> K;                        // number of estimation methods
  int<lower=0> n_ests;                   // number of reported estimates
  vector[N] covariate;                   // study-level covariate
  real<lower=0> alpha_prior_mean[N];     // prior means for true prevalence/size
  real<lower=0> alpha_prior_sd[N];
  real<lower=0> bias_a_prior_mean[K];    // priors on baseline bias
  real<lower=0> bias_a_prior_sd[K];
  real<lower=0> bias_b_prior_mean[K];    // priors on covariate effect on bias
  real<lower=0> bias_b_prior_sd[K];
  int<lower=0> site[n_ests];             // site index for each estimate
  int<lower=0> design[n_ests];           // method index for each estimate
  vector<lower=0>[n_ests] ests;          // reported estimates
  vector<lower=0>[n_ests] ses;           // reported standard errors
}
parameters {
  vector[K] bias_a;                      // baseline (log-scale) bias by method
  vector[K] bias_b;                      // covariate effect on bias by method
  vector<lower=1>[N] alpha;              // true prevalence/size in each site
}

transformed parameters {
  vector<lower=0>[n_ests] bias;

  // Bias for each estimate depends on both the method and the site covariate
  for (x in 1:n_ests)
    bias[x] = exp(bias_a[design[x]] + bias_b[design[x]] * covariate[site[x]] - 1);

}

model {

  target += normal_lpdf(ests | bias .* alpha[site], ses);
  alpha ~ normal(alpha_prior_mean, alpha_prior_sd);
  bias_a ~ normal(bias_a_prior_mean, bias_a_prior_sd);
  bias_b ~ normal(bias_b_prior_mean, bias_b_prior_sd);

}
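The bias function in the transformed-parameters block above can be evaluated directly. The Python sketch below uses hypothetical coefficients for a single method (bias_a, bias_b, and the covariate values are made up for illustration):

```python
import math

# Hypothetical coefficients for one method: bias_a is the baseline
# (log-scale) bias, bias_b scales a study-level covariate, mirroring the
# Stan transformed-parameters block above.
bias_a = 1.0
bias_b = 0.5

def bias(covariate):
    # Same functional form as the Stan model: exp(a + b * x - 1)
    return math.exp(bias_a + bias_b * covariate - 1)

print(round(bias(0.0), 3))  # 1.0: no bias at covariate 0 when bias_a = 1
print(round(bias(1.0), 3))  # 1.649: a positive covariate inflates the bias
```

With bias_b estimated near zero for a method, this version collapses to the constant-relative-bias model above.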

Figure 8: Estimates of bias by site and by design

Figure 9: Estimates of dependence of bias on a study-level covariate

We plan to explore heterogeneous bias with respect to the following study level covariates:

  • p_visibility – Prior beliefs of individual study teams about the probability of recognizing a member of the hidden group
  • degree_average – Prior beliefs of individual study teams about the average number of connections in the population
  • hidden_prev – Prior beliefs of individual study teams about the prevalence of the target hidden groups considered in the study
  • target_n_sample – Target sample size for each study

Note that the last of these is a design feature and the first three are features of the populations, or more specifically beliefs about these features elicited from teams.

In estimating the model on the data received from the studies included in the PRIF Initiative, we plan to rely both on uninformative priors on the main parameters of the model and on qualitative prior beliefs about the biases of individual sampling-estimation strategies elicited from the individual study teams and methods experts participating in the PRIF Initiative. These prior beliefs were collected prior to estimation of the meta-analysis model described above (though for teams, after they had access to their data).

Cost-error trade-offs

To generate estimates of the root mean squared error (RMSE) associated with each estimate we use the standard decomposition of the mean squared error:

\[ \mathrm{MSE}(\hat\beta) = \mathrm{Var}(\hat\beta) + \mathrm{Bias}(\hat\beta,\beta)^2 \]

To assess \(\mathrm{Bias}\) we use the difference between the posterior prevalence estimate and the raw estimate in question. We then combine our estimates of RMSE with implementation costs provided by teams to identify strategies that are undominated with respect to error and costs.

The figure below illustrates dominated and undominated strategies graphically. We implement this analysis both at the implemented scale of the projects and imagining larger sample sizes, under the assumption that standard errors decline with the square root of the sample size while bias remains unchanged.
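The RMSE calculation and the sample-size projection can be sketched numerically. All numbers below (the bias, standard error, and sample size) are made up for illustration:

```python
import math

# Made-up numbers for illustration: a method's estimated bias (posterior
# prevalence estimate minus the raw estimate) and the raw estimate's SE.
bias = 0.03
se = 0.02
n = 400  # hypothetical implemented sample size

# RMSE from the decomposition MSE = Var + Bias^2
rmse = math.sqrt(se**2 + bias**2)

def projected_rmse(n_new):
    # SE shrinks with the square root of the sample size; bias does not.
    se_new = se * math.sqrt(n / n_new)
    return math.sqrt(se_new**2 + bias**2)

print(round(rmse, 4))                  # 0.0361
print(round(projected_rmse(1600), 4))  # 0.0316: RMSE approaches |bias|
```

The projection makes clear that for a biased method, increasing the sample size drives the RMSE toward the absolute bias rather than toward zero.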

Figure 10: Illustration of cost/error tradeoffs

Stability assumption for base and heterogeneous bias models

Below we plot the logarithm of the relative bias of sampling-estimation strategies relative to the Horvitz-Thompson (HT) estimator on a PPS sample. To calculate relative bias we first calculate the bias of each sampling-estimation pair relative to the estimand, \(\hat\beta / \beta\), and then take the ratio of each method's bias to that of the benchmark HT estimator.
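This two-step calculation is easy to reproduce. The Python sketch below uses hypothetical values for the estimand and for the mean simulated estimates of the HT benchmark and one other strategy:

```python
import math

# Hypothetical values: the estimand and mean simulated estimates from the
# HT benchmark (on a PPS sample) and one other sampling-estimation pair.
truth = 0.20
est_ht = 0.21
est_other = 0.16

# Step 1: bias of each method relative to the estimand ...
rel_ht = est_ht / truth        # 1.05
rel_other = est_other / truth  # 0.80

# Step 2: ... then the log of the bias ratio relative to the HT benchmark.
log_rel_bias = math.log(rel_other / rel_ht)
print(round(log_rel_bias, 3))  # -0.272: underestimates relative to HT
```

A negative value thus means the strategy underestimates relative to the HT benchmark, and zero means it matches it in expectation.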

We simulate the population and samples for each study included in the PRIF Initiative using the version of the study designs provided by individual teams, with the size of the population scaled down to fit within 10,000. All parameters used for the individual study simulations can be found here.

Figure 11: Relative bias of sampling-estimator strategies

In Figure 11 we can see that while the relative bias of sampling-estimation strategies varies across simulations and studies, all methods included show a general tendency to over- or under-estimate the size or prevalence of the hidden group relative to the HT estimator.

Next we plot, separately by study, the logarithm of the relative bias of sampling-estimation strategies relative to the Horvitz-Thompson (HT) estimator on a PPS or TLS sample (depending on which sampling strategy is included in the individual study).


Figure 12: Relative bias of sampling-estimator strategies by study

In Figure 12 we see that, as in the previous section, the relative bias of sampling-estimation strategies varies across simulations and studies, but the variance of the relative bias is lower within studies.

References

Baillargeon, Sophie, and Louis-Paul Rivest. 2007. “Rcapture: Loglinear Models for Capture-Recapture in R.” Journal of Statistical Software 19: 1–31.
Berchenko, Yakir, Jonathan D. Rosenblatt, and Simon D. W. Frost. 2017. “Modeling and Analyzing Respondent-Driven Sampling as a Counting Process.” Biometrics 73 (4): 1189–98. https://doi.org/10.1111/biom.12678.
Blair, Graeme, Alexander Coppock, and Macartan Humphreys. forthcoming. Research Design in the Social Sciences: Declaration, Diagnosis, and Redesign. Princeton University Press.
Chan, Lax, Bernard W Silverman, and Kyle Vincent. 2021. “Multiple Systems Estimation for Sparse Capture Data: Inferential Challenges When There Are Nonoverlapping Lists.” Journal of the American Statistical Association 116 (535): 1297–1306.
———. 2022. “SparseMSE: Multiple Systems Estimation for Sparse Capture Data.”
Feehan, DM, and MJ Salganik. 2016. “Networkreporting: Tools for Using Network Reporting Estimators [R package version 0.1.1].”
———. 2023. “Surveybootstrap: Tools for the Bootstrap with Survey Data [R package version 0.0.3].”
Gile, Krista J. 2011. “Improved Inference for Respondent-Driven Sampling Data with Application to HIV Prevalence Estimation.” Journal of the American Statistical Association 106 (493): 135–46.
Handcock, Mark S., Krista J. Gile, and Corinne M. Mar. 2014. “Estimating Hidden Population Size Using Respondent-Driven Sampling Data.” Electronic Journal of Statistics 8 (1): 1491–521. https://doi.org/10.1214/14-EJS923.
Handcock, Mark S, Krista J Gile, Ian E Fellows, and W Whipple Neely. 2023. “RDS: Respondent-Driven Sampling Package.”
Handcock, Mark S, Krista J Gile, Brian Kim, and Katherine R McLaughlin. 2022. “Sspse: Estimating Hidden Population Size Using Respondent Driven Sampling.”
Hickman, Matthew, Vivian Hope, Lucy Platt, Vanessa Higgins, Mark Bellis, Tim Rhodes, Colin Taylor, and Kate Tilling. 2006. “Estimating Prevalence of Injecting Drug Use: A Comparison of Multiplier and Capture-Recapture Methods in Cities in England and Russia.” Drug and Alcohol Review 25 (2): 131–40.
Killworth, Peter D, Eugene C Johnsen, Christopher McCarty, Gene Ann Shelley, and H Russell Bernard. 1998. “A Social Network Approach to Estimating Seroprevalence in the United States.” Social Networks 20 (1): 23–50.
Rust, Keith F, and JNK Rao. 1996. “Variance Estimation for Complex Surveys Using Replication Techniques.” Statistical Methods in Medical Research 5 (3): 283–310.
Salganik, Matthew J. 2006. “Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling.” Journal of Urban Health 83 (Suppl 1): 98.
US DoS, TIP Office, David Okech, Lydia Aletraris, and Elyssa Schroeder. 2020. “Human Trafficking Statistical Definitions: Prevalence Reduction Innovation Forum.” University of Georgia African Programming and Research Initiative to End Slavery & the US Department of State Office to Monitor and Combat Trafficking in Persons. https://www.aha.org/system/files/media/file/2020/08/PRIF-Statistical-Definitions-Document-8-3.pdf.
Vincent, Kyle, and Steve Thompson. 2017. “Estimating Population Size with Link-Tracing Sampling.” Journal of the American Statistical Association 112 (519): 1286–95.
———. 2022. “Estimating the Size and Distribution of Networked Populations with Snowball Sampling.” Journal of Survey Statistics and Methodology 10 (2): 397–418.