Causal Clustering: Design of Cluster Experiments Under Network Interference

Written by escholar | Published 2024/01/30
Tech Story Tags: experimental-design | cluster-designs | spillover-effects | causal-inference | causal-clustering | optimal-cluster-design | bias-and-variance | network-data-analysis

TL;DR: This paper presents a novel approach to designing cluster experiments in network settings for estimating global treatment effects. The Causal Clustering algorithm is introduced, aiming to minimize the worst-case mean-squared error of the estimated treatment effect by optimizing the cluster design. The study explores the impact of clustering choices on bias and variance, providing conditions for selecting between cluster-level and individual-level randomization. Unique network data from Facebook users and existing field experiment data are used to illustrate the properties of the proposed method.

Authors:

(1) Davide Viviano, Department of Economics, Harvard University;

(2) Lihua Lei, Graduate School of Business, Stanford University;

(3) Guido Imbens, Graduate School of Business and Department of Economics, Stanford University;

(4) Brian Karrer, FAIR, Meta;

(5) Okke Schrijvers, Meta Central Applied Science;

(6) Liang Shi, Meta Central Applied Science.

Table of Links

Abstract & Introduction

Setup

(When) should you cluster?

Choosing the cluster design

Empirical illustration and numerical studies

Recommendations for practice

References

A) Notation

B) Endogenous peer effects

C) Proofs

Abstract

This paper studies the design of cluster experiments to estimate the global treatment effect in the presence of spillovers on a single network. We provide an econometric framework to choose the clustering that minimizes the worst-case mean-squared error of the estimated global treatment effect. We show that the optimal clustering can be approximated as the solution of a novel penalized min-cut optimization problem computed via off-the-shelf semi-definite programming algorithms. Our analysis also characterizes easy-to-check conditions to choose between a cluster or individual-level randomization. We illustrate the method’s properties using unique network data from the universe of Facebook’s users and existing network data from a field experiment.

Keywords: Experimental Design, Spillover Effects, Causal Inference, Cluster Designs. JEL Codes: C10, C14, C31, C54

1 Introduction

Consider a (large) population of n individuals connected under a single observed network. Researchers are interested in conducting an experiment to estimate the global average treatment effect, i.e., the difference between average outcomes when all versus none of the individuals in the population are treated. Treating an individual may generate spillovers to her friends in the network. To capture such effects, researchers conduct a cluster experiment. Individuals are first partitioned into clusters. Within a cluster, either all units are assigned to the treatment or all units are assigned to the control group. Finally, researchers estimate treatment effects by taking a difference between the average outcomes of treated and control units (possibly adjusting for baseline covariates). The cluster design does not require modeling the dependence of individual outcomes on neighbors’ assignments, but it requires a choice of clusters and some assumptions on the extent of the spillovers along the network. For example, cluster experiments on online platforms require choosing a partition of the social network, and field experiments require choosing the unit of randomization, such as villages or regions. This raises the question of how many and which clusters to use in experiments.
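As a minimal illustration of this design (a sketch with hypothetical helper names, not the paper's own code), one can partition units into clusters, assign each cluster entirely to treatment or control, and estimate the global effect by a difference in means:

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_assignment(cluster_labels, rng=rng):
    """Randomly assign half of the clusters to treatment and half to control;
    every unit inherits its cluster's arm (1 = treated, 0 = control)."""
    clusters = np.unique(cluster_labels)
    treated = rng.permutation(len(clusters)) < len(clusters) // 2
    arm_of_cluster = dict(zip(clusters, treated.astype(int)))
    return np.array([arm_of_cluster[c] for c in cluster_labels])

def difference_in_means(outcomes, treatment):
    """Estimate the global effect as the difference between average treated
    and average control outcomes."""
    return outcomes[treatment == 1].mean() - outcomes[treatment == 0].mean()

# Toy example: 10 units in 4 clusters with simulated outcomes.
cluster_labels = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])
treatment = cluster_assignment(cluster_labels)
outcomes = 1.0 + 0.2 * treatment + rng.normal(0, 1.0, size=len(cluster_labels))
print(difference_in_means(outcomes, treatment))
```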

Typical approaches in economic research assume prior knowledge of many independent clusters. There are many settings where this information is not available, and instead units in the population have different degrees of connections.[1] This paper provides an econometric framework to choose when and how to design the clusters in cluster experiments. Different from existing clustering algorithms geared towards community detection, we motivate the choice of the clusters based on the task of estimating global treatment effects. The choice of clustering must balance two competing objectives: the larger the clusters (and the smaller the number of clusters), the smaller the bias of the estimated global effect, but the larger its variance. We introduce an algorithmic procedure – entitled Causal Clustering – to choose the clustering that minimizes a weighted combination of the worst-case bias and variance as a function of the network and clusters. The worst-case approach encodes uncertainty over the dependence of individual outcomes on neighbors’ assignments. We study (i) whether to run a cluster-level instead of individual-level randomization; (ii) how to cluster individuals (and how many clusters to use).

We focus on a class of models where spillover effects are small relative to the outcomes’ variance but possibly non-negligible for inference. This is formalized in a novel framework of local asymptotics where individual outcomes depend arbitrarily on neighbors’ treatments, and, as n grows, spillovers from neighbors (and possibly also direct effects) converge to zero, but at an arbitrarily slow rate (e.g., slower than n^(-1/2)). This framework encodes the researchers’ uncertainty about the presence (and magnitude) of spillover effects by modeling first-order neighbors’ effects as local to zero, with the convergence rate capturing the expected magnitude of spillovers.[2] The local asymptotic framework we study is consistent with settings with small (but non-negligible) treatment and spillover effects, typical, for instance, of online experiments [e.g., Karrer et al., 2021]. We characterize the optimal clustering as a function of the expected magnitude of the largest spillover effects that the experiment can generate. The largest size of the spillover effects is a key input to our algorithms; characterizing it is necessary for the design of the experiment but can be challenging in practice. This parameter can be informed by previous experiments, in the same spirit as minimum detectable effects used in power analysis [e.g., Baird et al., 2018], or by particular modeling assumptions. We provide guidance to practitioners in Section 6.
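As a stylized illustration of this local asymptotic framework, consider a simple linear outcome model (our own simplification for exposition; the paper allows outcomes to depend on neighbors’ treatments in a much more general way), where D_i is unit i's treatment and N_i its set of neighbors:

```latex
% Stylized local asymptotics (an expository simplification, not the paper's general model)
Y_i = \mu + \tau_n D_i + \gamma_n \frac{1}{|N_i|} \sum_{j \in N_i} D_j + \varepsilon_i,
\qquad \tau_n,\ \gamma_n \propto n^{-\delta}, \quad 0 \le \delta < \tfrac{1}{2},
```

so that direct and spillover effects shrink as n grows, but more slowly than n^(-1/2), and therefore remain relevant relative to the sampling noise of the estimator.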

Our analysis proceeds as follows. First, we provide a formal characterization of the worst-case bias and variance. We show that the worst-case bias is closely related to a particular notion of between-cluster connectedness, defined as the per-individual average number of friends in other clusters. The worst-case variance can potentially be an arbitrary function of within-cluster and between-cluster covariances: individuals in the same cluster receive identical assignments, and individuals in different clusters may share common neighbors. We show that the variance only depends on the average squared cluster size, up to an asymptotically negligible error. This result formalizes the intuition that a larger number of clusters, with a small variance in cluster size, decreases the variance of the estimator.
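To make these two quantities concrete, the sketch below (our own illustration; the function names, the normalization, and the weighted objective are assumptions, not the paper's exact formulas) computes a bias proxy from between-cluster connectedness and a variance proxy from squared cluster sizes, and combines them into a worst-case-style objective for comparing candidate clusterings:

```python
import numpy as np

def between_cluster_connectedness(adj, cluster_labels):
    """Bias proxy: per-individual average number of friends in *other* clusters."""
    same_cluster = cluster_labels[:, None] == cluster_labels[None, :]
    cross_edges = adj * (~same_cluster)
    return cross_edges.sum(axis=1).mean()

def variance_proxy(cluster_labels):
    """Variance proxy: sum of squared cluster sizes, normalized by n^2
    (fewer, larger clusters -> larger value -> larger variance)."""
    n = len(cluster_labels)
    _, sizes = np.unique(cluster_labels, return_counts=True)
    return float((sizes ** 2).sum()) / n ** 2

def worst_case_objective(adj, cluster_labels, spillover_scale, weight=1.0):
    """Illustrative weighted combination of squared worst-case bias and variance."""
    bias = spillover_scale * between_cluster_connectedness(adj, cluster_labels)
    return bias ** 2 + weight * variance_proxy(cluster_labels)

# Toy example: a ring of 6 units; compare two contiguous clusters vs. singletons.
adj = np.zeros((6, 6), dtype=int)
for i in range(6):
    adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1

coarse = np.array([0, 0, 0, 1, 1, 1])  # two clusters of three adjacent units
fine = np.arange(6)                    # singleton clusters (Bernoulli-like)
for labels in (coarse, fine):
    print(worst_case_objective(adj, labels, spillover_scale=0.1))
```

In this toy example, with a small spillover scale the singleton (Bernoulli-like) clustering attains the lower objective, while a sufficiently large spillover scale eventually favors the coarser clustering, in line with the bias-variance trade-off described above.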

We draw the implications of these results for choosing between a cluster experiment (for a given clustering) and assigning treatments independently across individuals (i.e., a Bernoulli design or completely randomized design). Suppose the magnitude of the spillover effects is smaller than the square root of the number of clusters. In that case, the variance component dominates the bias, and a Bernoulli design is preferred (a Bernoulli design being a special case of a cluster design in which each cluster contains a single unit). Vice versa, a cluster design is preferred if the bias dominates the variance. Intuitively, because our objective trades off the bias and variance of the estimator, whenever the number of clusters is small, it is best to run a Bernoulli design for any value of spillover effects local to zero. On the other hand, if the number of clusters is sufficiently large and the cluster design appropriately controls the bias of the estimator, a cluster design is preferred. We provide practitioners with a simple decision rule between cluster and Bernoulli designs that depends only on the number of clusters and the expected magnitude of the spillover effects.
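Taken at face value, this decision rule can be summarized in a few lines. The sketch below is a stylized reading of the statement above, with the spillover magnitude measured on the paper's local-asymptotic scale; the exact constants and scaling of the formal result are not reproduced here:

```python
import math

def prefer_bernoulli_design(spillover_magnitude, num_clusters):
    """Stylized decision rule: a Bernoulli (unit-level) design is preferred when
    the expected spillover magnitude is smaller than the square root of the
    number of clusters, so the variance component dominates the bias; otherwise
    a cluster design is preferred. Units/scaling follow the paper's framework."""
    return spillover_magnitude < math.sqrt(num_clusters)
```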


[1] For example, when using villages as clusters, individuals may interact in the same and nearby villages. See Egger et al. [2022] for an example in cash-transfer programs.

This paper is available on arXiv under a CC 1.0 license.

