Age Differences in Privacy Behavior: A Comparative Analysis of Older and Younger Adults



Too Long; Didn't Read

This section provides a comprehensive overview of the study methodology, including the experimental design, participant recruitment, and data analysis approach. It highlights the use of a Facebook application experiment to investigate the effects of framing, defaults, and justification messages on privacy decisions and explores age-related differences in privacy behavior.


(1) Reza Ghaiumy Anaraky, New York University;

(2) Byron Lowens;

(3) Yao Li;

(4) Kaileigh A. Byrne;

(5) Marten Risius;

(6) Xinru Page;

(7) Pamela Wisniewski;

(8) Masoumeh Soleimani;

(9) Morteza Soltani;

(10) Bart Knijnenburg.

Table of Links

Abstract & Introduction


Research Framework




Limitations and Future Work

Conclusion & References


4 Methods

4.1 Study Overview

To study our hypotheses, we used an existing dataset from a between-subjects experiment: a Facebook application that could purportedly tag photos on Facebook automatically. This application required users to have an active Facebook account with at least ten friends in order to participate in the study. As a cover story, participants were told that the study would support the development of a face-detection algorithm for a Facebook application that can automatically tag people in pictures, and that their task was to train this algorithm. Prior to logging in with Facebook, participants were asked to list the names of three of their friends with whom they have the most online interactions. After the training phase, the app measured the repeated-measures dependent variable: participants’ tendency to use the tagging feature to tag themselves (or each of these three friends) in their own (or in each of these three friends’) photos. Figure 2 shows the experimental setup.

To make sure that participants understood the purported workings of the app, their first task was to test the readability of the following note: “This is a free application being developed by university researchers. It can automatically tag users or users’ friends with high accuracy. Should the app make a mistake, users can still remove the tags.” A short survey asked a number of comprehension questions about this note. Participants who answered these questions incorrectly had to read the note and answer the questions again. This procedure ensured that all participants clearly understood the context of the application.
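The re-ask procedure can be sketched as a simple retry loop (a minimal illustration; `show_note`, `ask`, and `answers_correct` are hypothetical stand-ins for the app’s actual survey logic):

```python
def run_comprehension_check(show_note, ask, answers_correct, max_rounds=10):
    """Show the note and ask the comprehension questions, repeating
    until every question is answered correctly (hypothetical helpers)."""
    for _ in range(max_rounds):
        show_note()
        responses = ask()
        if answers_correct(responses):
            return responses
    raise RuntimeError("participant never passed the comprehension check")

# Hypothetical usage: the participant fails once, then passes.
attempts = iter([{"q1": "wrong"}, {"q1": "right"}])
result = run_comprehension_check(
    show_note=lambda: None,
    ask=lambda: next(attempts),
    answers_correct=lambda r: r["q1"] == "right",
)
```

The loop guarantees that, by the time the study proper begins, every participant has answered all comprehension questions correctly at least once.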

Participants then entered the “training” phase of the application, in which they were asked to tag the people in four researcher-provided photos based on a key with the faces of the individuals in each photo. This task was presented as the participants’ main task, as it served the cover story (i.e., training the application). Figure 3 shows the first training page. After training the application, participants started the “correction” phase of the study. In this phase, the app displayed photos that were ostensibly tagged by the algorithm and asked participants to correct any mistakes in the tags. All of the photos were pre-tagged correctly, so participants did not have to make any corrections. The goal of the “correction” phase was to demonstrate the high accuracy of the tagging algorithm, thereby countering any potential fears that the algorithm might tag users incorrectly. The training and correction phases were part of the cover story, as the app told participants that their main task would be training the algorithm; participants did not know that the real task of interest was their tagging decision. After this general training phase, the app presented participants with a scenario in which they had to make a privacy decision (tag or not) for each of the friends’ names they had listed earlier, and it measured this disclosure decision.

4.2 Dependent Variables: Tagging Decisions and Privacy Concerns

Participants finally entered the “decision” phase, where they were given a chance to use the tagging feature for themselves. In this stage, the app thanked participants for training the algorithm and, as a token of appreciation, offered them a chance to use the app on their own photos. The experimental manipulations of default, framing, and justification messages (see “Manipulations” below) were applied here as between-subjects manipulations. In a pre-questionnaire, a survey asked participants to enter the names of three Facebook friends with whom they regularly interact; in the decision phase, the app showed participants a decision page for each of these three friends. The decision page claimed that the algorithm had “identified” a) a number of previously unseen photos of the user on the friend’s page, and b) a number of previously unseen photos of the friend on the user’s page (in reality, the number of “identified” photos was a random number between five and fifteen). Participants were offered the choice to tag themselves, as well as the choice to tag their friend, in these photos (see Figure 4). This resulted in six decisions (two tagging decisions for each of the three friends) per participant.
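The structure of the decision phase can be sketched as follows (a hypothetical illustration; the function and field names are invented, and only the photo-count range and the six-decision structure come from the text):

```python
import random

def build_decision_pages(friends, rng=random):
    """One decision page per listed friend; each page claims a random
    number of 'identified' photos (between five and fifteen, inclusive,
    per the study) and offers two tagging decisions."""
    pages = []
    for friend in friends:
        pages.append({
            "friend": friend,
            "photos_of_you_on_their_page": rng.randint(5, 15),
            "photos_of_them_on_your_page": rng.randint(5, 15),
            "decisions": ["tag_self", "tag_friend"],
        })
    return pages

# Hypothetical friend names from the pre-questionnaire.
pages = build_decision_pages(["Alice", "Bob", "Carol"])
n_decisions = sum(len(p["decisions"]) for p in pages)  # 2 decisions x 3 friends
```

Here `n_decisions` comes out to six, matching the six repeated-measures observations collected per participant.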

The results of a pilot study suggest that these six decisions have credible ecological validity. In the pilot study, the disclosure scenario was more intrusive: instead of providing the tagging option only for the three listed friends, the app asked participants whether they wanted to tag all of their friends in all of their own photos and to tag themselves in all of their friends’ photos. Out of 50 participants, not a single one agreed to use the tagging feature. That everyone rejected tagging across the different framing, default, and justification manipulations indicates that participants perceived the app as real. After this pilot, the app designers adopted the less intrusive scenario described above.

Fig. 2. The Experimental Setup. This figure is anonymized.

Fig. 3. One of the training pages where participants could tag individuals in the photo on the left side of the screen based on a key on the right side of the screen. They were told that they are “training” the algorithm here. This figure is anonymized.

Privacy concern is the second dependent variable of interest. The dataset includes a reduced, three-item version of the IUIPC [48] dimension of general concerns. This reduced version was validated and used in previous studies [40]. We used the sum score of this scale to measure privacy concerns (Cronbach’s α = 0.790). All items were measured on a 5-point Likert agreement scale:

– Compared to others, I am more sensitive about the way online companies handle my personal information.

– To me, it is the most important thing to keep my privacy intact from online companies.

– I am concerned about threats to my personal privacy today.

We standardized the scale (grand mean = 0, SD = 1) in our analyses.
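The scoring described above (sum score, Cronbach’s α, grand-mean standardization) can be sketched in Python (a minimal illustration on simulated responses; the real item data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standardized_sum_score(items: np.ndarray) -> np.ndarray:
    """Sum the items, then standardize to grand mean 0, SD 1."""
    s = items.sum(axis=1)
    return (s - s.mean()) / s.std(ddof=1)

# Simulated 5-point Likert responses for three correlated items
# (a shared 'trait' plus item-level noise, clipped to the 1-5 range).
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=(200, 1))
noise = rng.integers(-1, 2, size=(200, 3))
responses = np.clip(trait + noise, 1, 5)

alpha = cronbach_alpha(responses)
z_scores = standardized_sum_score(responses)
```

Because the three simulated items share a common trait component, their covariances are positive and α lands in the usual 0-to-1 range; the standardized scores have grand mean 0 and SD 1 by construction.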

4.3 Independent Variables: Manipulations

Similar to most existing studies on framing and default effects, this experiment combined a default manipulation (accept vs. reject) with a framing manipulation (positive vs. negative). Table 1 shows this 2x2 design.

In addition to framing and defaults, the app included a justification manipulation, which added a “justification message” to the decision scenario. The literature studies two common types of justification messages: normative [27] and rationale-based [66] justifications. For the sake of robustness, the experiment included both types of justification, each with both a positive and a negative valence. The justification manipulation therefore consisted of two normative justifications (positive, showing high popularity of the app, and negative, showing low popularity of the app), two rationale-based justifications (positive, discussing the pros of using the app, and negative, discussing the cons of using the app), and a condition without any justification (as a neutral baseline):

– Negative descriptive normative justification: Note: 3% of our study participants use the tagging feature.

– Positive descriptive normative justification: Note: 97% of our study participants use the tagging feature.

– Negative rationale-based justification: Note: Autotagged photos will show up on the Facebook walls of the tagged friends, where their friends can see them. Beware that they may not want others (parents, boss) to see some of these photos, because they could be embarrassing!

– Positive rationale-based justification: Note: Autotagged photos will show up on the Facebook walls of the tagged friends, where their friends can see them. This will strengthen your friendship and let your friends relive the good times they had with you!

– None: No justification given. (This condition was treated as a baseline control for the model.)

Table 1. A representation of framing and default conditions

All of the experimental manipulations were between subjects. An example condition is shown in Figure 4.

Fig. 4. The Decision Page. This is an example of the positive framing and opt-out default conditions. In addition, a positive rationale-based justification is provided in blue, in parentheses.

4.4 Participant Recruitment

This study was approved by an institutional review board. Participants were recruited through the MTurk and Figure Eight crowd-sourcing platforms; after excluding those who failed an attention-check question, we were left with 44 older adults and 169 young adults. There were no significant demographic differences between the participants across these platforms. Each participant received $1.30 (U.S. dollars) as their participation incentive. The app required participants to have an active Facebook account with at least ten friends to participate in the study. Participants were debriefed after the study and were informed that the app was not real and that it did not actually tag any of their or their friends’ photos. As is customary in studies involving deception, they were given the option to have their data removed without affecting their incentive. No participants chose to do so.

4.5 Data Analysis Approach

Our first dependent variable (decision) was binary, with “accept to tag” coded as 1 and “reject to tag” coded as 0. Each participant responded to six disclosure scenarios: three on whether they wanted to tag themselves in each of their three friends’ photos, and three on whether they wanted to tag each of their three friends in their own photos. We therefore constructed a multilevel path model with a random intercept to account for the repeated measures per participant and the binary dependent variable. The path model enabled us to treat privacy concerns as both an independent variable (by regressing the tagging decision on it) and a dependent variable (by regressing it on the study manipulations). The framing and default manipulations were effect-coded: positive framing and opt-out defaults were coded as “0.5”, and negative framing and opt-in defaults were coded as “−0.5”. To analyze the justification messages, we first conducted an overall chi-square omnibus test of their effect across the five conditions. We then ran planned contrasts, including a contrast testing the effect of justification valence (positive vs. negative justifications) to study H4 and H7. The other contrasts tested the effect of having any justification (none vs. any), the effect of the type of justification (rationale-based vs. normative), and the interaction between justification type and valence. Our analyses were carried out in Mplus v7.4. As the sample was imbalanced (44 older adults and 169 young adults), we used MLR, maximum likelihood estimation with robust standard errors, in our analyses [59].
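The coding scheme described above can be sketched in Python (a hypothetical illustration only; the actual analysis was run in Mplus, and the column and condition names here are invented). Each planned contrast is built so that its weights sum to zero across the five justification conditions:

```python
import numpy as np
import pandas as pd

def code_manipulations(df: pd.DataFrame) -> pd.DataFrame:
    """Effect-code framing/default and build the planned justification
    contrasts described in the text (hypothetical column names)."""
    out = df.copy()
    out["framing_c"] = np.where(out["framing"] == "positive", 0.5, -0.5)
    out["default_c"] = np.where(out["default"] == "opt_out", 0.5, -0.5)

    just = out["justification"]
    # No vs. any justification (weights sum to zero over the 5 conditions).
    out["just_any"] = just.map({"none": -0.8}).fillna(0.2)
    # Positive vs. negative valence (baseline condition weighted 0).
    out["just_valence"] = just.map({"none": 0.0, "norm_pos": 0.5, "rat_pos": 0.5,
                                    "norm_neg": -0.5, "rat_neg": -0.5})
    # Rationale-based vs. normative type (baseline condition weighted 0).
    out["just_type"] = just.map({"none": 0.0, "norm_pos": -0.5, "norm_neg": -0.5,
                                 "rat_pos": 0.5, "rat_neg": 0.5})
    # Type-by-valence interaction contrast.
    out["just_type_x_valence"] = out["just_type"] * out["just_valence"]
    return out

# Two rows per condition, just to verify the contrasts are balanced.
demo = pd.DataFrame({
    "framing": ["positive", "negative"] * 5,
    "default": ["opt_out", "opt_in"] * 5,
    "justification": ["none", "norm_pos", "norm_neg", "rat_pos", "rat_neg"] * 2,
})
coded = code_manipulations(demo)
```

In a balanced design each contrast column sums to zero, which keeps the contrasts orthogonal to the intercept; the random-intercept logistic model itself would then be estimated on the six repeated decisions per participant (in the original study, in Mplus with the MLR estimator).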

Fig. 5. The y-axis is the standardized sum score of privacy concerns. The graph shows that negative justifications lead to lower levels of privacy concerns.

This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.