CIA, Social Media, Social Engineering, and Your Soul

The cybersecurity CIA triad: confidentiality, integrity, and availability. In relation to data, this means data should be protected from prying eyes and from unauthorized changes, while remaining available to authorized people. This concept is a foundation of information security1.

What does this mean in relation to social media and social engineering? How is our confidentiality, integrity, and availability vulnerable on social media? How can we be exploited?

Some thoughts…

Confidentiality.
Only people you authorized to see your information should be able to see it. Random thoughts, pictures, videos, job history, birthdays, anniversaries, all of this data may be tagged so that specific people may see it. What about who you are “friends” with? Is that confidential information? What about when you began using a platform? Whether you are online at this moment? If you’ve been online this week? If you’ve read a private message? If you’ve indicated approval (“Liked”) for someone else’s picture? If you’re a member of a social group? A political group? What about what you were about to post, then did not post? What does social media know about people based on what they thought they should not say, what they did not feel safe saying, or what they almost said and then decided not to say today? (This seems worth considering on National Coming Out Day.)

Integrity.
Your “friend” posts a strong statement in support of a political position you support. You “like” it, with a heart, or a thumbs up. Later, your friend feels the statement was too strong, and softens it. Should the platform alert you that the post has been changed? Actively, via an alert, or simply by adding an “edited” tag? The data (that you liked a post) may no longer be accurate. In softening their statement, your friend may have restated their point in a way you disagree with. Has your data lost integrity, via context?

Availability.
Social media data generally “belongs” to the person who posted it. Social media platforms often assure us that we can delete our accounts and all associated data. However, we may rely on data being available even though someone else posted or shared it. I’ve left jobs where documents I shared became integral to my colleagues’ work. Am I ethically responsible for maintaining a Google doc on my personal account after I quit a job? Would it be unethical to remove it? Would a former employee be liable if they deleted a document created on their personal account? Returning to integrity, what if that person purposely introduced errors into such a document? For example, what if they created formulas in a spreadsheet that would give incorrect results?
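To make that last scenario concrete, here is a minimal, purely hypothetical sketch (in Python rather than spreadsheet syntax; the figures and names are invented for illustration, not taken from any real document) of how a sabotaged formula could return plausible but wrong results:

```python
# Hypothetical example: a “quarterly total” that silently drops the last row,
# analogous to a spreadsheet formula written as =SUM(B2:B4) when the data
# actually runs through B5. The numbers below are invented for illustration.
expenses = [1200.00, 845.50, 990.25, 1310.75]

def quarterly_total_sabotaged(rows):
    # The deliberate “error”: the slice excludes the final entry.
    return sum(rows[:-1])

def quarterly_total_correct(rows):
    return sum(rows)

print(quarterly_total_sabotaged(expenses))  # 3035.75 (plausible, but wrong)
print(quarterly_total_correct(expenses))    # 4346.5 (the real total)
```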

Social media and cloud-based accounts blur the lines between professional and personal, between yours, mine, ours, and theirs. I suspect these are issues CISOs (chief information security officers) worry about, and then they create educational slides for their employees, forbidding the use of non-company-owned resources.

More cinematically, though…

…could a person be drawn into a social media group consisting of automated accounts that seem to be real people, participate in that community for some time, and then have all the content change? Could a person be made to seem to be participating in activity that would be professionally or personally harmful?

This is already possible, obviously, via forgery. People make posts under accounts that are completely fake. Recently, people have been trolling social media with pictures of Alexandria Ocasio-Cortez paired with quotes from philosopher/economist Adam Smith. The purpose is to expose people who support a concept but reject it when it’s associated with a different person, and also to expose people who support a person more than they understand what that person wrote. This remains an easily uncovered hoax, a shallow fake that relies on context and the target audience’s dislike of a person.

The current use of the term “deep fake” is disturbing. To the degree we are concerned with deep fakes, we still hold out hope for the concept of a recorded image as a purveyor of truth. Videos of someone saying something they did not say, perhaps would never say, are growing more convincing.

In a paper Prof. Jes Westbrook, a co-Director of Divergent Design Lab, and I are currently writing for a conference in Poland (if I get my new passport in time), we focus on a much deeper sort of deep fake:

What if an artificial intelligence could not only write or speak in your voice, could not only match your likely word choices, but could also predict what you would say or do in any given situation, with high accuracy?

What if it could do this with nothing more than the data available online currently and data available for purchase from the myriad companies currently holding our data? Could an employer or a potential partner put this fake you into situations and believe they know how you would act? If that fake you were mostly accurate, but wrong in key situations, what do we call that difference? What do we call the difference between your predictable behavior and the (perhaps very limited) instances when you act in a way that is unpredictable?

While brainstorming about our current research, Dr. Filipo Sharevski, another co-Director of DDL, and I wrote down the word “soul” on the whiteboard. We wanted to explore, as an ongoing concept, how much of what we might think of as a “soul” can be captured in data. This is not a word either of us uses in regular conversation, and certainly not in serious academic discussions. To some degree, though, Divergent Design Lab is concerned (and I am very concerned) about the cybersecurity triad “CIA” as it relates to our “souls,” whatever they are. DDL is researching the outer fringes of how much of our lives can be captured in data, and what that means in terms of how we can be exploited.

Beyond social media, and social engineering, code is increasingly able to threaten the confidentiality, integrity, and availability of our lives, our bodies, our livelihood, and our relationships. The concept of uploading oneself to a computer is rarely considered in relation to how easily that could be exploited, how vulnerable that would make you2.

If how we act could be predicted with 100% accuracy, have we lost, at the very least, our free will? Are our souls in danger? These words are slippery, but they feel apt. What Phil Agre called the “borderlands” between the computer world and our own world outside have advanced much further than perhaps anyone but Agre imagined.

I feel like I need a word like “soul” to think about what part of us cannot be turned into probability and statistics, the roots of machine learning. How trackable are we? How predictable are we? What does that mean for our ability to change and effect change? What is possible, anywhere in the world, against an oppressive regime that has access to everything about our lives that is available online? Even in relatively free countries, among privileged people, we are, perhaps, more vulnerable than we imagine. When DDL gets into heavy areas like this, I tend to find myself back in front of philosophy:

“There is no need here to invoke the extraordinary pharmaceutical productions, the molecular engineering, the genetic manipulations, although these are slated to enter into the new process. There is no need to ask which is the toughest or most tolerable regime, for it’s within each of them that liberating and enslaving forces confront one another. For example, in the crisis of the hospital as environment of enclosure, neighborhood clinics, hospices, and day care could at first express new freedom, but they could participate as well in mechanisms of control that are equal to the harshest of confinements. There is no need to fear or hope, but only to look for new weapons.”

Gilles Deleuze, Postscript on the Societies of Control

  1. Cybersecurity & information security: a field in which I am as much an expert as I am in philosophy: not an expert. That does not stop me, because I have an art degree. As Jes regularly reminds me, we’re professional imposters.
  2. If I’m wrong, and this has been covered, please @PTreebridge with readings, fiction or philosophy.
Paige Treebridge

Paige Treebridge co-directs Divergent Design Lab, focused on vulnerability and exploitation using cybersecurity, new media art, user experience design, and social psychology paradigms. Twitter @PTreebridge
