The “Facebook Baby” Scandal of 2024

A story from the future…

It’s October 6, 2024, and Facebook Dating has been active for five years. Tens of thousands of dates, hook-ups, marriages, and divorces can be linked back to matches made on Facebook. What has recently come to light is very concerning: Facebook researchers have been purposely experimenting with the algorithm that matches people. An internal document, leaked by an unnamed whistle-blower, seems to show that Facebook selected hundreds of completely random people and decided to see if it could push those people toward one another. Using what Divergent Design Lab (DDL) calls Ambient Tactical Deception, researchers attempted to move experimental subjects (Facebook users selected at random, including some who may have been married) into specific social circles by suggesting events, friends, music, and connections. Facebook algorithms were then doctored to show these individuals to one another as matches within Facebook Dating. The experiment is described as an attempt to pair people who are seemingly not compatible. If true, this would be a gross violation of personal privacy, and it may have negatively affected the private lives of hundreds of Facebook users. According to the documents released, Facebook believes the experiment was unsuccessful, based on internal goals not included in the document release.

At least ten couples have come forward to say they believe they were victims of this experiment; some say they will eventually need to explain to their young children that they exist because of a Facebook experiment. A class action lawsuit has also been discussed. Facebook has yet to comment, but reminds users that the agreement they signed when opening a Facebook account covers “ongoing, experimental system improvements”.

This obviously has not happened. It is a speculative cybersecurity and design prototype. DDL’s research has begun to show that such manipulation is possible, at least in terms of emotionally manipulating people online. We have no evidence that Facebook would do something as rash as what is described above. However, the experimental approach is technologically possible, and Facebook has conducted emotional experiments on its users in the past. This post is not a critique of people who use Facebook. It is not an attempt to encourage people to leave Facebook, or any other social media platform. Instead, I am using Facebook as an example to point out the near-total lack of control over what has become an important channel for human interaction. I still have a Facebook account. I post pictures of our family (especially our dog) and random thoughts, like everyone else.

Facebook Dating, a service of Facebook, launched in the United States on September 5, 2019. As an article from Wired notes, it “take[s] unique advantage of Facebook’s biggest asset—its extensive cache of data on you and all your friends.” In the same article, they report “Facebook says it will start matching you with potential dates based on your preferences, interests and other things you do on Facebook.” The company says this includes factors like where you’re from, the Facebook groups you’re in, and where you say you went to school.

What no article has yet done is investigate exactly how the algorithm (computer code) selects people to suggest to other people. That is considered a trade secret. The application of any personality-aware code to the massive database of personal interaction Facebook holds should give us all chills. If that code is accurate at predicting whom we match with romantically, we could also say that the code + “my data” combination begins to “know” us. If the code can select a romantic match, then Facebook the company, and Facebook the massive code and database, “know” us. And if Facebook Dating can match a person more accurately than existing dating apps, similar code can be used to advertise, to sway people politically, and to nudge them toward or away from specific courses of action far more effectively.
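To make concrete what such “personality-aware code” might look like, here is a minimal sketch, entirely invented for illustration: it assumes each user has already been reduced to a dictionary of weighted interests and ranks potential matches by similarity. Facebook’s actual matching code is a trade secret; none of the names, weights, or categories below come from it.

```python
# Hypothetical sketch only: ranking dating matches by interest similarity.
# This is NOT Facebook's algorithm; it illustrates how little code is needed
# once a rich profile of weighted interests already exists.
from math import sqrt

def cosine_similarity(a: dict, b: dict) -> float:
    """Similarity between two interest-weight dictionaries."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_matches(user: dict, candidates: dict) -> list:
    """Return candidate ids sorted by similarity to `user`, best first."""
    return sorted(candidates,
                  key=lambda cid: cosine_similarity(user, candidates[cid]),
                  reverse=True)

# Toy data: the weights are invented; real profiles would be far richer.
me = {"Philosophy": 0.9, "Installation art": 0.7, "Anthropology": 0.5}
others = {
    "user_a": {"Philosophy": 0.8, "Ontology": 0.6},
    "user_b": {"Evolutionary psychology": 0.9, "Human behavior": 0.4},
}
print(rank_matches(me, others))  # user_a ranks first
```

The point of the sketch is not the math, which is trivial, but the asymmetry: the hard part is the profile, and Facebook already has it.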

The basis of the Dating service is, according to patent US9609072B2, Social Graph, which “essentially refers to the global mapping of everybody and how they’re related.” [CBS News]. Facebook’s Social Graph is, as far as I can tell from how it is used for advertising and friend suggestions, relatively primitive. For example, going into my ad preferences, in the category of education (I am a professor at DePaul University), Facebook believes I am interested in: Aesthetics, Gender studies, Northern Illinois University, Installation art, DePaul University, Social science, Philosophy, Anthropology, Human behavior, Evolutionary psychology, Ontology, and Ideology. These are all things I have searched for or posted about, and they are all very broad topic areas. Facebook is not, as far as I know from reviewing these and other ad preferences, aware of anything particularly private. Anyone reading my academic bio over the last few years could make the same guesses.

Some things are less clear: what numbers are hidden behind these categories? Does each category have an interest scale (e.g. 1 to 10)? Does it include things one is interested in negatively? Does Facebook know I am deeply suspicious any time “evolutionary psychology” is mentioned? Does it know that I’ve only started describing my work with Divergent Design Lab in relationship to ontology? Does Social Graph “know” that I am a transgender person when it selects “Gender studies” as an interest, or is it merely pulling that data from my searches for a degree program I considered last fall? Will those two data points ever be joined to indicate a strong, personal interest?
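Because we cannot see behind the category labels, the best I can do is imagine what one of these records might look like. The sketch below is purely hypothetical; the field names, the weights, and the idea of a signed “aversion” score are my own inventions, included only to make the questions above concrete.

```python
# Purely hypothetical: one possible shape for a hidden interest record.
# Facebook does not publish this structure; every field and value is invented.
interest_record = {
    "category": "Gender studies",
    "weight": 0.8,                   # could be a 1-10 scale, or signed
    "sources": ["search", "post"],   # where the signal came from
    "inferred_links": [],            # other records this one might be joined to
}

skeptical_record = {
    "category": "Evolutionary psychology",
    "weight": -0.6,                  # hypothetical negative weight: mentioned often, but critically
    "sources": ["post"],
    "inferred_links": [],
}
```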

There is no way, currently, to answer these questions outside of reverse-engineering the Facebook algorithm. In theory we could create multiple artificial lives on Facebook, cyborg personalities who specifically search for a limited set of information using carefully controlled computers and network IP addresses, and then see what is suggested to those cyborg people. Given a controlled community of bots, one could guess a lot about how Social Graph works. It might be illegal to do so, and if it included any real people (via making friends with actual human beings), any academic researcher attempting this experiment would need to have their study approved by their institution’s Institutional Review Board.
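A hedged sketch of that controlled-persona experiment follows. It deliberately uses no Facebook API; observe_suggestions is a placeholder for however one would actually record what the platform shows each persona, and the persona names and topic lists are invented.

```python
# Sketch of the controlled-persona ("cyborg") experiment described above.
# No real Facebook API is used or implied; observe_suggestions is a placeholder
# for manual or instrumented observation of what the platform displays.
import json
from datetime import datetime, timezone

# Each persona searches only a narrow, pre-registered set of topics,
# from a dedicated machine and IP address, so the inputs are fully controlled.
PERSONAS = {
    "persona_philosophy": ["Ontology", "Phenomenology", "Aesthetics"],
    "persona_evo_psych": ["Evolutionary psychology", "Human behavior"],
}

def observe_suggestions(persona_id: str) -> list:
    """Placeholder: record friend/event/ad suggestions shown to this persona."""
    raise NotImplementedError("observation of the live platform goes here")

def log_observation(persona_id: str, suggestions: list,
                    path: str = "observations.jsonl") -> None:
    """Append a timestamped observation so suggestion drift can be compared across personas."""
    entry = {
        "persona": persona_id,
        "when": datetime.now(timezone.utc).isoformat(),
        "suggestions": suggestions,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Again, the legal and ethical caveats above apply before anything like this could be run.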

What we can now do is attempt to predict where Facebook Dating will take the company. Since users who employ Facebook for dating are likely to remain on Facebook, we can guess that the system will be self-correcting, gathering further data about which couplings it suggested are successful. This data will come in via status updates and changes to relationship statuses. Has Facebook set up a flag to inform Mark Zuckerberg when the first “Facebook Dating Marriage” happens? Will it also track the first “Facebook Divorce”? Will there be “Facebook Dating Babies”?
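As a hypothetical illustration of that feedback loop, the sketch below treats a later relationship-status change as a success label for a previously suggested pair. The field names are invented and describe no real Facebook system.

```python
# Hypothetical sketch of the self-correcting loop: relationship-status changes
# become success labels for earlier match suggestions. Invented field names only.
def label_suggestion_outcome(suggestion: dict, status_updates: list) -> int:
    """Return 1 if the suggested pair later reported a relationship, else 0."""
    pair = frozenset((suggestion["user_a"], suggestion["user_b"]))
    for update in status_updates:
        if update["type"] == "relationship" and frozenset(update["pair"]) == pair:
            return 1
    return 0

suggestions = [{"user_a": "u1", "user_b": "u2"}]
updates = [{"type": "relationship", "pair": ("u2", "u1")}]
print([label_suggestion_outcome(s, updates) for s in suggestions])  # [1]
```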

While it may seem to some that this is a great deal of concern over a new online dating app, we should remember that Facebook previously attempted to make people feel “bad”. As the paper published in The Proceedings of the National Academy of Sciences (PNAS) put it:

In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

https://www.ncbi.nlm.nih.gov/pubmed/24889601

After the paper was published, James Grimmelmann pointed out that “Facebook users didn’t give informed consent.” As discussed in a PNAS update:

Obtaining informed consent and allowing participants to opt out are best practices in most instances under the US Department of Health and Human Services Policy for the Protection of Human Research Subjects (the “Common Rule”). Adherence to the Common Rule is PNAS policy, but as a private company Facebook was under no obligation to conform to the provisions of the Common Rule when it collected the data used by the authors, and the Common Rule does not preclude their use of the data. Based on the information provided by the authors, PNAS editors deemed it appropriate to publish the paper. It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.

https://www.pnas.org/content/111/29/10779.1

If Facebook is willing to be so careless with people’s emotional well-being, we have to wonder why we would remain on Facebook at all, and why people would turn their dating life over to a company that believes its existing policies include the right to deceptively alter the emotional content of our time spent on Facebook. As Grimmelmann puts it, Facebook’s attitude toward the experiment seems to be: “We wanted to see if we could make you feel bad without you noticing. We succeeded.”

At Divergent Design Lab we attempt to conduct all of our research ethically, and our experiments are not only reviewed by our university’s Institutional Review Board, they are discussed internally. We do not publish things that fail our internal ethical tests. However, when it comes to speculating about how technology not only can, but likely will, be used against people, we speculate at the extreme end of possible vulnerabilities and exploits, then share our results openly and (hopefully) ethically. This is our responsibility as critical academics.

Personally, I believe things will get much worse. We will see continued and increasing exploitation of human vulnerabilities by technology companies, nation states, oppressive governments, and malicious computer users. This sort of thinking and approach is how we developed the concept of ambient tactical deception. Ambient tactical deception is a broad category that includes Facebook’s “feel bad” experiment. It means that you can deceive people online, tactically (i.e. strategically, with a purpose) and ambiently (i.e. in the background, without people noticing). In our paper, presented at a neuroethics conference, we compared those who control the flow of information, including social media companies and malicious actors, to René Descartes’ demon. Descartes imagined a demon that could alter reality, including the laws of math and science, creating a deceptive “reality.” In ambient tactical deception, we imagine that living online for a good portion of our day, as many of us do, makes us vulnerable to being exploited by lesser demons: people who would have us believe, act, or feel differently by manipulating what we see on our screens.

In the past, the term “slippery slope” described a logical fallacy in which a few small actions are assumed to lead inevitably to disaster. In relation to technology, however, we can take many possible outcomes as a given, even if they sound like a slippery slope fallacy. If Facebook can match people online, it seems likely it will use that ability in attempts to control people’s behavior. “Controlling behavior” is simply a less corporate-friendly way of describing advertising and social shaping for profit. The scenario described in the first paragraph is at least somewhat possible. It is possible enough that it seems likely it has been, or will be, attempted by someone.

Divergent Design Lab focuses on discussing how such things are possible, then publishing our research to warn people that such manipulation may be inevitable. Perhaps ambient exploitation as extreme as forced match-making will not come from social media companies, but other extreme, unnoticeable deception will likely become part of ongoing international information warfare and election tampering. This blog will discuss instances in which ambient tactical deception (and other social engineering) has already been used. It will also make note of new developments in information warfare, whether attacks are directed against an election, a group of people, a corporation, or an individual.

-pt 10/06/2019

Transition note: The paper cited above, “Sorry: Ambient Tactical Deception Via Malware-Based Social Engineering,” was published under my previous name. I’ve linked to it anyway. While I have made peace with the ongoing existence of that name, by necessity, it should only exist in the past. I am in the process of correcting some papers that are important to my ongoing research, including the paper linked. Should anyone wish to cite that paper, or any previous paper, I ask that my current, legal name be used.

Paige Treebridge

Paige Treebridge co-directs Divergent Design Lab, focused on vulnerability and exploitation using cybersecurity, new media art, user experience design, and social psychology paradigms. Twitter @PTreebridge
