The LinkedIn Nanotargeting Experiment that Broke All the Rules by @netizenship
2,103 reads


Too Long; Didn't Read

This study demonstrates that nanotargeting on LinkedIn is feasible: the platform's audience-size restriction can be bypassed by re-enabling the disabled campaign launch button with a line of JavaScript, and campaigns built from a target's location and skills successfully reached a single user, as verified through campaign metrics and user interaction.

Authors:

(1) Ángel Merino, Department of Telematic Engineering Universidad Carlos III de Madrid {[email protected]};

(2) José González-Cabañas, UC3M-Santander Big Data Institute {[email protected]};

(3) Ángel Cuevas, Department of Telematic Engineering Universidad Carlos III de Madrid & UC3M-Santander Big Data Institute {[email protected]};

(4) Rubén Cuevas, Department of Telematic Engineering Universidad Carlos III de Madrid & UC3M-Santander Big Data Institute {[email protected]}.

Abstract and Introduction

LinkedIn Advertising Platform Background

Dataset

Methodology

User’s Uniqueness on LinkedIn

Nanotargeting proof of concept

Discussion

Related work

Ethics and legal considerations

Conclusions, Acknowledgments, and References

Appendix

6 Nanotargeting proof of concept

If our model's outcome is correct, it should be possible to nanotarget an individual on LinkedIn. By nanotargeting, we refer to showing the ads from an ad campaign exclusively to one targeted individual. However, we note that LinkedIn claims it is not possible to launch ad campaigns targeting audiences smaller than 300 users. If LinkedIn effectively enforced this policy, we should not be able to run nanotargeting campaigns. In a nutshell, in this section we verify whether it is feasible to run nanotargeting campaigns on LinkedIn based on the results derived from our methodology.



Figure 5: Probability of success of a nanotargeting campaign by combining the location and N skills. The red line represents an upper bound linked to using the least popular selection strategy for skills (Lo_LP). The blue line represents a lower bound linked to using the random selection strategy for skills (Lo_R).


Figure 6: Ad creative used in the proof-of-concept experiment.

6.1 Description of the Experiment

We aim to nanotarget three of the authors of this paper based on their self-reported location and skills. From now on, we will refer to the authors as user 1 (U1), user 2 (U2), and user 3 (U3), respectively.


To configure each campaign, we use the LinkedIn Campaign Manager and define the targeted audience using the location and the N skills retrieved from the LinkedIn profile of the targeted user. In addition, we set the budget, upload the ad creative, and define the landing page the user will visit if they click on our ads. Once an advertising campaign is defined, LinkedIn offers two buttons labeled "Launch Campaign" to publish it: one on the right side of the page and another at the bottom that is only visible if the advertiser scrolls down. Either button can be used to publish the ad.


In our nanotargeting campaigns, we observed that the "Launch Campaign" button on the right side of the Ads Manager was not clickable, with a message indicating that the audience was too small. However, this measure can be bypassed with a single line of JavaScript in the browser's console that reactivates the button: document.querySelector(button_selector).disabled = false. At first, we thought LinkedIn was enforcing its policy against ad campaigns targeting fewer than 300 users. However, after enabling the button, the campaign can be launched, and the audience size is not checked during the ad review process.
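The bypass amounts to flipping the button's `disabled` flag from the console. A minimal sketch follows; the selector is a hypothetical placeholder (the real one must be inspected on the Campaign Manager page), so treat this as an illustration rather than a verified recipe:

```javascript
// Re-enable a disabled "Launch Campaign" button from the browser console.
// NOTE: 'button.launch-campaign' is a hypothetical selector; the actual one
// must be read from the LinkedIn Campaign Manager page markup.
function enableLaunchButton(doc, selector) {
  const btn = doc.querySelector(selector);
  if (btn && btn.disabled) {
    btn.disabled = false; // the campaign can now be submitted for review
  }
  return btn;
}

// From the browser console it would be invoked roughly as:
// enableLaunchButton(document, 'button.launch-campaign');
```

Note that this only removes a client-side restriction; as the paper reports, no server-side audience-size check rejected the campaign afterwards.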


Next, we detail each of the campaign attributes that are relevant to our proof of concept experiment.


Skills selection: The number of skills available in the profiles of the targeted individuals was 28, 42, and 28 for U1, U2, and U3, respectively. Our model's results allowed us to choose either of the two skill selection strategies: random or least popular. We decided to run our proof-of-concept experiment selecting skills at random, to emulate the simplest setting for a non-skilled advertiser willing to implement a nanotargeting campaign. As explained, any user (advertiser) with a LinkedIn account can retrieve the skills reported by any other user: it is enough to access the target's profile, retrieve the skills (and the location) they report, and configure an ad campaign in the dashboard using that information. In contrast, implementing the least-popular selection requires sorting the skills by popularity, which in turn requires access to the Ads Manager to obtain the audience size associated with each skill. Although this is a very simple step for savvy users, non-skilled users may not know how to obtain the audience size for each skill and would thus be unable to implement the least-popular skill selection in a nanotargeting campaign.
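For illustration, the least-popular strategy reduces to sorting the target's skills by the audience size the Ads Manager reports for each of them. A minimal sketch, assuming those audience sizes have already been collected (the skill names and sizes below are made up):

```javascript
// Given a map from skill name to its LinkedIn audience size,
// return the n least popular skills (smallest audiences first).
function leastPopularSkills(audienceSizes, n) {
  return Object.entries(audienceSizes)
    .sort((a, b) => a[1] - b[1]) // ascending by audience size
    .slice(0, n)
    .map(([skill]) => skill);
}

// Example with invented audience sizes:
const sizes = { Java: 9000000, Telematics: 12000, LaTeX: 300000 };
// leastPopularSkills(sizes, 2) → ['Telematics', 'LaTeX']
```

This is the extra step a savvy advertiser would script; the random strategy used in the experiment needs none of it.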


Number of skills: We configured campaigns with 7, 10, 13, 16, and 19 randomly selected skills.
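The random strategy can be sketched as a uniform sample of N skills without replacement from the target's profile (the sampling code is ours, not from the paper):

```javascript
// Pick n skills uniformly at random, without replacement, from a profile's
// skill list, e.g. n ∈ {7, 10, 13, 16, 19} as in the experiment.
function randomSkills(skills, n) {
  const pool = skills.slice(); // copy so the input list is untouched
  const chosen = [];
  while (chosen.length < n && pool.length > 0) {
    const i = Math.floor(Math.random() * pool.length);
    chosen.push(pool.splice(i, 1)[0]); // remove the pick to avoid repeats
  }
  return chosen;
}
```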


Campaign duration: All campaigns ran for 3 days (72 hours). Each campaign started at noon on day d and finished at noon on day d+3. We note that the starting day d was not the same for all campaigns.


Campaign budget: Each campaign was configured with a budget of $10. None of the 15 ad campaigns exhausted this budget during the 3 days it ran.


Table 2: Expected and actual successful nanotargeting campaigns in the proof of concept experiment. The first column includes the skills used in the campaign. The second column shows the success probability retrieved from the applied methodology. The third column shows the expected number of successful campaigns in the experiment out of the three targeted users per number of skills. The fourth column shows the actual number of successful campaigns in the proof of concept experiment.


Ad creative: We used a neutral ad creative advertising the website of a research project unrelated to privacy. Figure 6 shows the ad creative employed in all our ad campaigns.


Targeted device: We configured our campaigns to deliver ads both on mobile devices and desktops.


Overall, we target 3 different users and run 5 campaigns for each of them (one per number-of-skills value). Therefore, our proof-of-concept experiment includes 15 nanotargeting campaigns in total. Table 2 shows, for each number of skills (first column), the estimated success probability according to our model (second column) and the expected number of successfully nanotargeted users among the 3 targeted users (third column). We compute the latter by multiplying the success probability retrieved from our model by the number of campaigns run per skills value, i.e., 3. For instance, for 19 skills (85% success rate) the expected number of successful campaigns out of the three launched, based on the results of our methodology, is 2.55. This implies that at least 2, and likely 3, of the three campaigns using 19 skills should succeed in our experiment. The last column of the table shows the actual number of successful nanotargeting campaigns in our experiment.
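The third column of Table 2 is just this product of the model's success probability and the number of targeted users, e.g.:

```javascript
// Expected number of successful campaigns out of `campaigns` launches,
// given the per-campaign success probability from the model.
function expectedSuccesses(probability, campaigns) {
  return probability * campaigns;
}

// 19 skills: 85% success probability over 3 targeted users
// expectedSuccesses(0.85, 3) → 2.55 (up to floating-point rounding)
```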

6.2 Validation of Nanotargeting Success

To validate whether our campaigns had successfully nanotargeted the targeted individual, we relied on both the information provided by LinkedIn for our campaigns and the information we directly collected.


First, we used the information LinkedIn provides to advertisers in a dashboard where they can monitor the progress of their campaigns. It reports many parameters, including the number of impressions and the number of clicks for an ad campaign. In some cases, it also estimates the number of (unique) users reached (referred to as reach estimation) in the campaign. This last parameter would allow us to confirm the success of a nanotargeting campaign if it equals 1 once the campaign is over. However, it presents two limitations: (i) LinkedIn states that it is in beta and only offers an estimation; (ii) we observed that the estimation is only available for campaigns reaching multiple users and is never reported when very few users are reached. Therefore, while we report this value (see Figure 10 in Appendix C), we cannot rely on it to verify the success of a nanotargeting campaign; it can only reveal failure when the campaign has reached multiple users.


Second, all the targeted authors were aware of the ad creative we were using in the ad campaigns, and we instructed them to (i) take a snapshot of each ad impression received from the nanotargeting campaign; (ii) click on the nanotargeted ad every time it appeared in their LinkedIn feed.[1] When clicking on the ad, the user was forwarded to the advertised research project website, which runs on a server we manage. The server recorded the timestamp of each click and the campaign from which the click was generated, which identifies the user (U1, U2, or U3) performing the click.
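On the backend side, each click only needs to be stored with its timestamp and the campaign it came from; since each campaign targets exactly one user, the campaign identifier identifies the user. A minimal sketch of such a log record (field names and campaign ids are our own, hypothetical choices):

```javascript
// Hypothetical mapping: each nanotargeting campaign targets exactly one user,
// so the campaign id identifies who clicked.
const CAMPAIGN_TO_USER = { c1: 'U1', c2: 'U2', c3: 'U3' };

// Build a click-log record with a timestamp and the originating campaign.
function logClick(campaignId, now = new Date()) {
  return {
    timestamp: now.toISOString(),
    campaignId: campaignId,
    user: CAMPAIGN_TO_USER[campaignId] || 'unknown',
  };
}
```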


With the information obtained in the two previous steps, we could assess whether a nanotargeting campaign was successful. We can confidently conclude that the targeted user was the only one who received the ad if the number of impressions and clicks reported by LinkedIn matches the number of impressions and clicks reported by the targeted user and the number of clicks logged in our backend system, where we can verify whether the clicks come from a single user.
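This cross-check can be expressed as a simple predicate: a campaign counts as a confirmed nanotargeting success only when the three sources agree and the backend saw a single user (field names are illustrative, not from the paper):

```javascript
// A campaign is a confirmed nanotargeting success when LinkedIn's reported
// impressions match the target user's snapshots, LinkedIn's reported clicks
// match our backend log, and every logged click came from the same user.
function isNanotargetingSuccess(linkedinReport, userReport, backendClicks) {
  const uniqueUsers = new Set(backendClicks.map((c) => c.user));
  return (
    linkedinReport.impressions === userReport.impressions &&
    linkedinReport.clicks === backendClicks.length &&
    uniqueUsers.size === 1
  );
}
```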


Table 3: Results of the proof-of-concept experiment. Under LinkedIn Report, the results reported by the LinkedIn Campaign Manager; under User Report, the impressions the user notified for each campaign; and under Backend Log, the clicks registered in our backend server.

6.3 Results of Nanotargeting Experiment

Table 3 shows the results of the 15 ad campaigns we ran in our proof-of-concept experiment. For each campaign, the table identifies: (i) the user being targeted, (ii) the number of skills used in the campaign, (iii) the number of impressions and clicks reported in the LinkedIn dashboard summarizing the campaign results, (iv) the number of impressions reported by the user through the snapshots they captured of the received ads, (v) the number of clicks registered in our backend server, and (vi) the cost of the campaign. We highlight in bold all campaigns that successfully nanotargeted the targeted individual. Figure 10 in Appendix C shows a snapshot of the results of our campaigns as reported in the LinkedIn dashboard.


All the campaigns using 13, 16, and 19 skills successfully nanotargeted the targeted user. Also, 2 of the 3 campaigns using 10 skills succeeded. Finally, only one of the campaigns using 7 skills was successful. These results match the expectations derived from our model as reported in Table 2. Our intuition is that our model provides a conservative estimate and the actual success probability is slightly higher than the model reports; this is based on the fact that using 13 skills (71% success probability) already led to successful nanotargeting campaigns in all cases.


The primary outcome of this experiment is the demonstration that systematically running nanotargeting campaigns on LinkedIn is feasible. This implies that LinkedIn is not effectively enforcing its policy stating that the minimum audience size required to launch an ad campaign is 300 [11].


This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.


[1] We note that U3 forgot to click on one of the received ad impressions in the campaign using 13 skills (marked with * in Table 3). In that case, as reflected in our results and the LinkedIn report, the campaign delivered 3 ad impressions and received 2 clicks.