LaBossiere Podcast

#38 - Renée DiResta

Episode Summary

On the internet’s biggest problems, social networks as open protocols, and pseudonymity online.

Episode Notes

Renée DiResta is the technical research manager at Stanford Internet Observatory, where she studies abuse in information technologies, investigating the spread of narratives across social and media networks, with an interest in user behavior, factional crowd dynamics, and the ways in which actors leverage these systems to exert influence.

 

Renée led multiple investigations into, and presented public testimony on, the multi-year influence operations of Russian actors in the 2016 presidential election. She has gone on to advise Congress, the State Department, and other academic, civil society, and business organizations on terrorist activity and state-sponsored information warfare.

 

Renée also speaks and writes regularly about technology policy as a contributor at Wired and The Atlantic. Her work has been featured or covered by the New York Times, the Washington Post, CNN, CNBC, Bloomberg, Yale Review, Fast Company, Politico, TechCrunch, Wired, Slate, Forbes, BuzzFeed, The Economist, and the Journal of Commerce. She also appeared in the documentary The Social Dilemma.

 

Before all this she ran research, marketing, and business development at a couple of startups, and before that worked in venture capital and quant finance.

Renée is the author of The Hardware Startup: Building Your Product, Business, and Brand, published by O'Reilly Media. She also has degrees in Computer Science and Political Science from the Honors College at Stony Brook University. In her free time, she's the co-founder of the parent advocacy organization Vaccinate California and raises three children.

Episode Transcription

Renee DiResta 
[TRANSCRIPT IS MACHINE GENERATED]

 

[00:00:00] Alex: Renee DiResta is the technical research manager at Stanford Internet Observatory, where she studies abuse in information technologies, investigating the spread of narratives across social and media networks, with an interest in user behavior, factional crowd dynamics, and the ways in which actors leverage these systems to exert influence.
 

Renee led multiple investigations into, and presented public testimony on, the multi-year influence operations of Russian actors in the 2016 presidential election. She has gone on to advise Congress, the State Department, and other academic, civil society, and business organizations on terrorist activity and state-sponsored information warfare. Renee also speaks and writes regularly about technology policy as a contributor at Wired and The Atlantic.
 

Her work has been featured or covered by the New York Times, the Washington Post, CNN, CNBC, Bloomberg, Yale Review, Fast Company, Politico, TechCrunch, Wired, Slate, Forbes, BuzzFeed, The Economist, you get the idea. She also appeared in the documentary The Social Dilemma. Before all of this, she ran research, marketing, and business development at a couple of startups, and before that, she worked in venture capital and quant finance.
 

Renee is the author of The Hardware Startup: Building Your Product, Business, and Brand, published by O'Reilly Media. She also has degrees in computer science and political science from the Honors College at Stony Brook University. In her free time, she's the co-founder of the parent advocacy organization Vaccinate California, and raises three children.
 

Today, we talked about the internet's biggest problems, social networks as open protocols, and pseudonymity online. Hope you enjoy.
 

  
 

[00:01:34] Alex: So, you know, for starters, I think we'd love to hear a little bit about you and the path you took to get to where you are today. What would you say were the early interests and larger milestones along the way that led you to diving so deep on discourse and the propagation of information across information technologies?
 

Cause you've worked across quite a variety of fields up to this point.
 

[00:01:58] Renee: Yeah, it was entirely accidental. So in undergrad I did computer science and political science, I minored in Russian, and I took a million philosophy classes, and I didn't know what I wanted to do with myself, basically.
 

And I really liked my graph theory and network theory classes. I went to Stony Brook; I had a pretty heavy math concentration. I took mostly applied math electives, actually, and some interface design electives. I kind of preferred that to the more engineering-focused work.
 

That took me to Wall Street, actually. I took a job at Jane Street, and it was pretty early, 2005, so the firm was still small; I was among the first 30 people or so. And it was entirely unexpected. I just had a resume up, they called me for an interview. I was thinking I was going to go to law school.
 

I was taking my GREs, my LSATs, trying to figure out where I was going to go with these two weird degrees that I had. And I went in for the interview and I just really loved the environment of the trading desk. I had never even thought of this as a possibility; I'd never taken a finance class.
 

But I was struck by the interesting environment that they had. You know, I played poker during my interview, these sorts of things. And I thought, what an interesting culture, but more than that, what an interesting way of applying the kind of applied math classes that I'd really liked in some sort of professional capacity. I didn't know that it existed.
 

So with quant finance it was like the world opened up, and I thought, this is actually really cool. And I did it for about seven and a half years, through the financial crisis, through the European debt crisis. One thing that I really loved about quantitative derivatives arbitrage, which was the desk that I was on, was
 

this question of how information, as it comes into the world, trickles down through the entire apparatus of financial instruments. Right? I was a market maker in ETFs. I was a trader of cross-border derivatives arbitrage between Brazil and the US. And I was just really struck by this interesting phenomenon where something would happen, an oil spill, a plane crash, and you would see how the information would cascade through the instruments,
 

and all of the instruments would have to come back into equilibrium, into the mathematical relationships that they were supposed to hold. And I found this very elegant and also very intriguing. And there would be these moments of chaos when things would get out of line.
 

And that would be, you know, very exciting. I was just really interested in information cascades and the ways in which things moved and were represented. So I did this for a while, like I said, through the financial crisis, through the European debt crisis.
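To make those equilibrium relationships concrete, here is a minimal sketch of the kind of parity check a cross-border arbitrage desk watches. The tickers, prices, and cost threshold are invented for illustration, not an actual trading model: a cross-listed ADR should trade at the local share price converted through the FX rate, and when news hits one market first, the deviation from parity is what gets traded away.

def implied_adr_usd(local_price_brl: float, brl_per_usd: float, shares_per_adr: float) -> float:
    # Fair USD price of the ADR: convert the local price and scale by the ratio.
    return (local_price_brl / brl_per_usd) * shares_per_adr

local_price = 30.00   # hypothetical Sao Paulo-listed share, in BRL
fx = 5.00             # BRL per USD
ratio = 2.0           # one ADR represents two local shares

fair = implied_adr_usd(local_price, fx, ratio)  # 12.00 USD
quoted = 12.50                                  # hypothetical NYSE quote
edge = quoted - fair
if abs(edge) > 0.10:                            # assumed round-trip trading cost
    print(f"ADR is {edge:+.2f} USD from parity; arbitrage pushes it back toward {fair:.2f}")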
 

And then I just started wondering: this was not a career path I had chosen with any intentionality, this was just something I had fallen into through an interview that I liked, an environment that I found stimulating. So I started thinking, okay, what do I actually want to do with my life?
 

You know, I'm turning 30, oh shit, what am I going to do? And I decided I was going to figure out what was happening in tech, looking back at my friends who had been in engineering classes with me, many of whom had gone on to work in big tech or at a startup. The guy that I was dating, who is now my husband, was also at a startup, and I thought, okay, maybe I'll see what is happening in this realm of innovation.
 

But I wasn't useful as an engineer anymore, so I took a job in venture capital, and I moved out to San Francisco in 2011 to do that. And it was right around the time when sensors on smartphones had made it incredibly cheap to know where somebody was; everybody had cameras, everybody had video cameras.
 

There was just a proliferation of internet-of-things technology. New hardware, drones, were emerging and becoming accessible to ordinary people: having a drone or a camera, geofenced coupons, devices to monitor your blood flow. All these different things had been made possible by what I think Chris Anderson called the peace dividend of the smartphone wars, this proliferation
 

of new ways to engage with a more technologically mediated world. And what was interesting about it was, it was all very optimistic. A lot of "can we," right? "Can we" and "how do we" were the questions; "should we," not so much. That would come much later. So I was there in 2011, in this
 

environment where, again, I felt like I had suddenly found a community that was asking really interesting questions: what could technology do to change things? What new markets would open up? How would we live differently? What would the future look like? And I really loved that.
 

I still do love that aspect of Silicon Valley culture. I wound up having a baby in 2013, and the thing that more than anything else nudged my career into the stuff that I do now was that I started paying attention to anti-vaccine conversations in mommy groups on Facebook. And I started noticing the ways in which recommendation engines were suggesting
 

I follow certain types of accounts, and the difference in what was suggested to me before I had a baby versus after I had a baby. And this was beyond things like here are some new coupons for you, or ads for baby clothes instead of electronics. It was much more the communities that I was being nudged into.
 

And I was struck by this idea. You know, behavioral economics, we paid a lot of attention to that at Jane Street; it wasn't really relevant to the day-to-day, but just understanding how irrational exuberance works, right? Why do markets go haywire sometimes? And I thought, what an interesting example of nudges moving me into new communities, new spaces, and the networks that develop around those communities and spaces.
 

And like I said, I had really enjoyed network theory as an undergrad. All of a sudden Twitter was a place to see these things with real practical application. Right? Not just coloring graphs in a textbook, but a representation of what this could do in the world.
 

And I was seeing an aspect of it that I thought was actually particularly disturbing: ways in which I was being nudged into communities that I didn't think were really having a positive impact on the world. And yet there was nobody nudging me into better communities, you know?
 

And so I thought, what an interesting dynamic: these things that are highly sensational, highly engaging, highly conspiratorial seemed to be where I was being pushed. And I just started talking to friends, because, as I mentioned, I was in the valley and had done CS
 

as an undergrad, so a lot of my friends were still in engineering at Google and Facebook and other places. And I just started having conversations with them more about the "should we" aspect of the tech. Yes, this is not a violation, but is this the best way for us to use this system? Is this an ethical nudge?
 

Is this the kind of community we want to be building? What happens when we've inadvertently grown this community into something massive? What does that actually do in the real world, as these people who were seeing all this anti-vaccine content seem to stop vaccinating? What does that do to, like, my kid in a classroom, you know?
 

And so it really changed the tenor for me, and I became very interested in the unintended consequences of network arrangements and social systems. And I still had a startup; I was doing a bunch of work in supply chain logistics at a company I'd helped found after I left venture capital.
 

This was like my night hobby, this thing that I was really captivated by, and then it just wound up becoming my actual job. So I'm not sure that that's really actionable for many students. There wasn't really a field in quite the same way;
 

it was all very nascent around this time, these conversations about internet harms and how we should think about those harms at scale.
 

[00:09:45] Alex: Got it. Super interesting path. I guess, as an add-on to that, you know, managing Stanford's Internet Observatory sounds about as cool as it does complex.
 

So how do you actually describe what you do to people today?  
 

[00:09:58] Renee: Yeah. So we investigate the use and abuse of current information technology; I believe that's the boilerplate. SIO has four main areas of work. The first is trust and safety, right? How do we think about, again, unintended consequences of platform policies as they impact people very directly?
 

Brigading, harassment, CSAM, child harm, you know, the kind of mental health questions that people have begun asking; again, this concept of nudges, nudging users towards eating-disorder content or things like that. Right. These questions about how we think about content moderation and trust and safety issues.
 

The second bucket is information integrity, which we used to call mis/dis, you know, the misinformation and disinformation stuff. I don't love the limitation of those two terms, because again, I think about it as really a problem of information cascades. Right. How do we think about the ways that networks are designed?
 

How do we think about the ways that information moves? How do we think about systems that are designed to be very, very effective vectors for propaganda and rumors, and where is the balance, the trade-off, between freedom of expression and the propagation of those rumors? So there are some interesting ethical questions, content moderation questions, not "is this true,
 

is this false?" That's, I think, a too-narrow attempt to scope a problem that's actually much, much bigger. Then there are two more buckets. The third is emerging tech. Again, when you have an information environment, a playing field if you will, new technologies or new platform entrants both change what is possible in the system.
 

A new platform that says that it is literally moderation-free and tries to appeal to one particular political demographic changes the way in which members of that political demographic spend their time, where they put their attention, the ways in which they engage with other users on other platforms.
 

Do they go and become part of an echo chamber, or is it additive, something they do in addition to all of their other online time? So there are a lot of interesting questions related to that. Or generative AI: large language models make it possible for machines to produce novel forms of communication,
 

relatively hard to detect, actually, in the context of text. When that becomes possible, how do we think about mis- and disinformation or manipulation? Because now all of a sudden the machine can be tweeting, and the old tells are no longer there. So there are interesting ways in which we think about the internet as a communication ecosystem, where when something changes one facet of the ecosystem there are cascading effects. And then the final bucket is platform policy.
 

That can be platform policy or regulatory policy, so government or self-regulatory. And we are interested there, again, in policy, education, and design. Given empirical research into these other three areas, if there is a definable harm, what is the remediation for that harm? Who should be in charge of
 

various facets of the system? How do we interrogate certain questions in a way that enables us to develop empirical findings that potentially lead to either regulatory or self-regulatory policy? So it is a big scope. But we have so many really awesome people, and we have a lot of student RAs who participate.
 

You know, we have tons of students who come in, and sometimes they come with a particular question they're interested in, sometimes they want to work on a project that we have. Sometimes it's an election-related project, including, you know, the Brazilian election is happening this year,
 

the US election. There's a whole range of ways to participate. So it's a really fantastic team, and I love being there.
 

[00:13:42] Alex: Okay. So, to start quite broadly: themes in misinformation and disinformation online have obviously risen into the mainstream as of late. Actually, before we dig into that, why don't we just briefly define some terms?
 

Cause we tend to throw stuff like this around, right? So what's the difference between misinformation, disinformation, and propaganda?
 

[00:14:03] Renee: Yeah. So misinformation generally refers to something that is inadvertently wrong. It's usually false, sometimes misleading, but often falsifiable. And the people who are spreading it are generally doing it because they sincerely believe it.
 

There's often an altruistic motivation: I want to help my community know the truth about this thing. So that's misinformation. Disinformation, when we use it at SIO, and I think that's the most carefully scoped terminology, refers to a deliberate campaign to make people believe something.
 

So it actually ties in very closely to propaganda, in the context of what has historically been known as black propaganda. There's a spectrum, a monochromatic spectrum: white, gray, black. Historical research into propaganda describes white propaganda as that which is attributable.
 

You know, the state, for example, is putting it out, or a particular agency or entity is putting it out, and even though the message may be inflected in a certain way, the attribution is clear and transparent. With gray propaganda, you start to see things like front media properties, which are secretly funded.
 

You wouldn't necessarily know that, or there are strong ties to a person in a government administration, and there is a perception that it might be linked, but you don't quite know for sure. That's where the gray stuff falls. And then black propaganda refers to active misattribution, active attempts to manipulate the audience by not being clear on who is putting out the message, or by lying in the message.
 

And that's where the intersection between propaganda and disinformation comes in: disinformation campaigns are often linked to that black propaganda, material that is put out with an active misattribution, a pretense that someone else is saying it.
 

So disinformation we tend to think of as a campaign, not necessarily one claim, but a very coordinated effort to make people think or believe a particular thing. Propaganda, and the easiest definition of the term, is information with an agenda that tries to make the audience either believe something or take an action that fulfills the objectives of the propagandist.
 

So there's an element of some desired outcome: to activate, or persuade, or make a community of people take an action that benefits the person who's controlling the messaging.
 

[00:16:26] Alex: Got it. So, now, in an attempt to really distill what we're getting at here, with these larger themes in mind, and the
 

obvious but gradual blurring between what parts of our lives are online and what parts are offline, so to speak: what do you see as the biggest problems, most broadly, that the internet is facing today? Is it context collapse and echo chambers in online spaces? Is it a larger loss of faith in institutions?
 

Is it opaque algorithms? Do you even approach it like that? Are there well-defined issues in the first place, or is everything a bit too interconnected to think of that way?
 

[00:16:57] Renee: There's a lot of interconnectedness. This is where I think there are certain things that are questions related to communication technology, right,
 

where we can treat the internet as a new communication medium, the same way that we could use media theory and scholarship drawn from other media environments, like the rise of television. There are a lot of really excellent books written in the 1960s about propaganda in the age of, you know, that kind of information shift.
 

One of the things that is novel about the internet, though, is the participatory nature, and that is really fundamentally different. In prior media environments, most of the conversation about disinformation or propaganda was related to a very top-down control of information, in which the government and the media held the access to the means of broadcast.
 

That meant that message dissemination was quite limited. You had to have access to television or radio or a printing press. You could reach perhaps small communities of people, but not have the reach of mass media. There was a very different degree of agency that an ordinary person had in those environments, but the internet changed that.
 

Right? So first, everybody got the ability to create content. This happened in web one, right? The blogosphere; anybody could create a GeoCities page. I had one when I was in, like, sixth grade or something like that. Anybody could write whatever the hell they wanted. It didn't matter if it was true or false; it was all irrelevant.
 

What social networks did was then bring all the people together and connect them on one platform. And this solved, in a sense, the problem of distribution. So now, whereas maybe a handful of people could become popular during the age of blogs,
 

in the age of social media you could have very targeted niche communities, where you could produce content for, say, people interested in yoga with two kids, these very, very niche audiences. And it was very cheap to create content. Another facet of it was the phone dynamics I was talking about:
 

you now have, all of a sudden, a video camera in your pocket everywhere you go. Right? So the kind of content you can put out changes. Each of us has the ability to share and like content, which influences an algorithm that is curating and recommending content and ranking feeds. So through each of our small actions, we are influencing a much bigger system.
 

And so I describe it sometimes as algorithms, affordances, and agency, right? You have the affordances, the tools that people are given in the system, and the algorithms in that particular system, which act as they see users using those tools. So it's the concept of a mutually shaping system.
 

The user does something, uses a tool or a feature that they're given; the algorithm keys off that and provides them with more of what they want, theoretically, while developing a picture of who they are, a sort of profile. Of course, there's a business model component to this; the platforms are making money off of that.
 

But in addition, there is this interesting flywheel effect, where content that gets engagement gets more engagement, and influencers that have some influence gain more influence. And so there are dynamics of what kind of substance or output is most likely to occur, given the structures, given the network shape, given the affordances; those things are linked.
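A minimal sketch of that flywheel, assuming a simple rich-get-richer (preferential attachment) model rather than any platform's actual ranking code: each new interaction lands on a post with probability proportional to the engagement it already has, and a handful of early winners run away with most of the attention.

import random

def engagement_flywheel(n_posts: int = 10, n_interactions: int = 10_000, seed: int = 0) -> list[int]:
    # Each interaction picks a post with probability proportional to the
    # engagement it already has ("rich get richer").
    rng = random.Random(seed)
    engagement = [1] * n_posts  # every post starts with one baseline view
    for _ in range(n_interactions):
        chosen = rng.choices(range(n_posts), weights=engagement)[0]
        engagement[chosen] += 1
    return sorted(engagement, reverse=True)

print(engagement_flywheel())  # a few posts capture most of the attention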
 

And so the relationship between the structure of the internet ecosystem and what it produces is, I think, really one of the things that is transformative. There are crises of trust, crises of legitimacy in institutions, but even there, what you see on social platforms, because of this participatory dynamic, is that any time an institution fails in a visible way, it's not something that just happens off in some corner where only a few people see it,
 

if media decides to cover it. No, it becomes part of the conversation. And so I think there is this effect happening where people are constantly seeing their political leaders make mistakes, say something wrong, and then that dominates a small news cycle for a certain segment of the population for a period of time.
 

And we're just hopping from outrage cycle to outrage cycle, which I think does have the effect of engendering cynicism, making people believe that all institutions are incompetent and all political leaders are craven, because we see these high-profile instances multiple times a week. And that's where I think this link between the technological and the social impact manifests very differently today than it did in prior media environments.
 

There are some trade-offs there.
 

[00:22:01] Alex: Putting it cyclically like that is really interesting; I've never thought of it in that way. The next question I've got for you: there's been a lot of talk recently about Elon Musk's potential-ish acquisition of Twitter, and what some larger proposals being thrown around might imply. To break this down for a second,
 

before anything else, what are your thoughts on algorithmic transparency, you know, open-sourcing a platform like Twitter's algorithm, and is it as great a solution as people imply? I can't imagine there's literally a line of code in there saying, oh, you know, suppress content from accounts from Ohio or whatever. As far as I can imagine, these are pretty complex machines, with tools for testing the resulting output of a post under certain conditions.
 

Am I missing the mark here at all? What do you think?
 

[00:22:45] Renee: No, I don't think you are. So I would say first, I'm a huge advocate for transparency. On the regulatory front, meaning actual government regulation of social platforms, the thing that I think is absolutely foundational is legislation like the Platform Accountability and Transparency Act. Full disclosure:
 

a colleague of mine at Stanford kind of wrote the first draft of that. But just speaking as Renee, I think it's hugely important, because what it does is say there are questions that we can't answer right now as outsiders. Interestingly, you're seeing this play out in the Elon conversation, in the context of: what percentage of bots are on Twitter?
 

Is it 5%? 20%? He threw out 90%, I think I saw, a little bit earlier today. Your perception of how many bots are on Twitter, and how much they matter, is really very influenced by your own experience of the platform. And I remember when Elon first put out this proposal, this idea that he was going to buy Twitter.
 

He talked about four things. I think it was: maximize free speech, kind of make the platform realize its potential as the public square; open-source the algorithm, which I'll address in a second; and then the other two things were get rid of the bots, like, stop the spam; and then authenticate all the humans was the fourth one.
 

And there are some interesting things there, because in some ways those are actually contradictory. There are a lot of arguments that automated accounts still fall under the rubric of speech, because they are one degree removed but speaking on behalf of the people who
 

created them. Now, this is a debatable point, I think, but it is a point that I believe courts have found in favor of. What's interesting is, if you're Elon Musk, and people are constantly impersonating you, and spam accounts are constantly drafting in your replies trying to manipulate people into cryptocurrency scams,
 

then you probably have a very different perspective of what bots and spam are like on Twitter versus someone like me, where I see a handful, a couple of likes from what look like automated accounts, or things where you say a word and immediately the reply hits, and that's quite obviously automated.
 

But I don't see it as a massive problem; it is not, for me. So there's this perception that my corner of the internet is like this, so the entire internet is like this, and that's not a particularly supportable point. But we don't have the data access to do this kind of work. Arguments that platforms are biased against conservatives:
 

we don't have access to really do that work either. And I think, as we see these platforms as being so profoundly important in our daily lives, there's a strong argument to be made for having researcher or civil society access to better interrogate what the structure is actually producing. On the subject, though, of whether you can just open-source the Twitter algorithm:
 

I can't get my head around what that means. So, per your point, I don't know. I mean, yeah, training data; I can't get my head around what they're going to put out. I was really intrigued by the public response. I saw some conservative pundit say, now we're going to see how badly they've been shadow-banning us.
 

And I was like, what do you think this is going to look like? What a remarkable idea, that you're just going to create a GitHub account and go look and find the "if conservative, then shadow-ban" line; it's just ludicrous. But I think that "the algorithm," and I'm doing that in air quotes,
 

I'm realizing now no one can actually see me, but "the algorithm" in air quotes doesn't really mean anything. So are we talking about the recommendation algorithms? Are we talking about the feed ranking, the curation? That feed ranking question is a very complicated one.
 

Elon has been tweeting about switching to the reverse-chronological feed, and he says, go look at reverse chronological versus the curated, ranked feed and see, I think he said, how they've been manipulating you. And "manipulating" is a very interesting term to use there, because many people who have talked about the problems with algorithmic curation and feed ranking, including myself, talk about it in the context of
 

the user not realizing how much is designed to grab their attention and hook them in and make them outraged, or make them participate in a highly engaged, often inflammatory conversation. But the use of the word "manipulated" in that context, there is almost an argument that the Twitter algorithm itself was somehow deliberately manipulative, in a way that is sort of surprising to see stated by someone who's ostensibly interested in buying the company.
 

So I don't know where they're going to go with this open-sourcing thing. I do think teaching ordinary people how collaborative filtering works, just the basics, like here's Collaborative Filtering 101, here's why this platform thinks you should be in this group, or here are the different facets that go into a feed weighting, maybe something much more conceptual, at a high level, would be more valuable than open-sourcing the algorithm.
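As a gesture at what that "Collaborative Filtering 101" might cover, here is a minimal user-based sketch. The users and interests are invented, and real recommender systems are far more elaborate, but the mechanism is the point: you are shown what the people most similar to you engaged with, which is exactly the nudging-into-communities dynamic described earlier.

# Toy collaborative filtering: recommend what similar users engaged with.
likes = {
    "alice": {"yoga", "parenting", "vaccine skepticism"},
    "bob":   {"yoga", "parenting"},
    "carol": {"guitars", "amps"},
}

def jaccard(a: set, b: set) -> float:
    # Overlap of two users' interest sets, 0.0 to 1.0.
    return len(a & b) / len(a | b)

def recommend(user: str) -> list:
    scores = {}
    for other, items in likes.items():
        if other == user:
            continue
        sim = jaccard(likes[user], items)
        if sim == 0:
            continue
        for item in items - likes[user]:  # things they liked that you haven't
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("bob"))  # ['vaccine skepticism'] -- nudged toward what similar users follow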
 

[00:28:02] Alex: Gotcha. As a bit of an add-on to that: what does this idea of social media platforms existing as protocols, as public infrastructure or carriers rather than siloed services, entail? Is this something you think is an inevitability? Because you talk a lot about being a proponent of transparency.
 

Is that something you'd see as a net benefit? Is it right for centralized entities to control the curation of information in the first place?
 

[00:28:27] Renee: It's an interesting question. There are trade-offs here too. This is a very complicated problem, with trade-offs all the way down, right?
 

So, protocols not platforms: I think Mike Masnick has done some really good writing on this at Techdirt. There are real arguments to be made, and Twitter's doing this with Bluesky. My colleague at Stanford, Frank Fukuyama, put out a proposal arguing in favor of middleware, right?
 

Giving users control over that feed. There was a project out of MIT called Gobo, Ethan Zuckerman's project when he was over at MIT. And I remember I created an account, and I thought it was really interesting: I could set my feed to show me more posts from women, more pictures, a bunch of different ways in which you could adjust the sliders to see more of what you might not otherwise see, to have some control over your feed.
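The slider idea can be sketched in a few lines. This is illustrative middleware only, with hypothetical fields and weights rather than Gobo's actual implementation: the platform supplies the posts, and a user-controlled scoring layer decides the order.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    from_woman: bool
    has_image: bool
    engagement: int

# User-set "sliders"; names and values are invented for illustration.
sliders = {"women": 2.0, "images": 1.0, "engagement": 0.0001}

def score(p: Post) -> float:
    # The middleware, not the platform, decides the feed order.
    return (sliders["women"] * p.from_woman
            + sliders["images"] * p.has_image
            + sliders["engagement"] * p.engagement)

feed = [
    Post("viral hot take", from_woman=False, has_image=False, engagement=9000),
    Post("photo essay", from_woman=True, has_image=True, engagement=12),
]
for post in sorted(feed, key=score, reverse=True):
    print(post.text)  # with these sliders, the photo essay outranks the viral post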
 

And I think that, again, this question of giving users agency, at a minimum it makes people feel more empowered. It makes them feel like they have more of an understanding of what's going into their feed; it's a little more comprehensible. Does it solve all of the problems of the internet? No. You are going to see, I think, the potential for people to
 

be in their silo, to retreat much more heavily into safer-feeling or friendlier-feeling communities; that's maybe one of the downsides. But it removes some of the problems: the concerns about censorship, the concerns about platforms serving as arbiters of truth, or as arbiters of what you should see. On the flip side,
 

again, there's this question of whether it just nudges us further and further into wholly separate communities, as opposed to bringing people together, which is what a public square is ostensibly for.
 

[00:30:24] Alex: Gotcha. Speaking more broadly about the implications of larger decisions like that here:
 

I've heard you talk about public health messaging and discourse during COVID as sort of evidence of what a trade-off information curation can actually be, right? In Google's case, you have their policy of, again, air quotes here, minimizing harm when presenting you with search results, giving you what they deemed to be more accurate information instead of what might be most popular, in instances of, I don't know, looking up advice or treatment for an illness, for example.
 

And you see it in social media feeds too, right? What actually gets curated for you: is it what has the most likes, or is it what or who has the most expertise or credentials? What does that trade-off look like for you?
 

[00:31:07] Renee: Yeah, it's really complicated. So, Google Search in particular, back in around 2012,
 

I think it was 2012, came up with a policy they call Your Money or Your Life. And Your Money or Your Life says that there are certain types of searches where the platform has a higher standard of care, an obligation to ensure that you have the most accurate information
 

possible, because there is a quite articulable harm to you if it gives you nonsense. So if you are looking to open a bank account, and it gives you some fake, spammy, phishing company's website, and you go and upload your ID and your SSN and you sign up for an account, and then it turns out it's a scam,
 

well, you've potentially just experienced a very significant loss there. Similarly, if you have a cancer diagnosis, and you come home and search the name of your illness, and you find juice fasts and all kinds of nonsense that is not in keeping with the standard of modern medicine. Are you entitled to find alternative medicine?
 

Yes. Should it be on the first page of Google search results? There's a strong argument that the answer to that is no, one that I personally feel pretty strongly is in fact a very legitimate, reasonable way for a platform to be thinking about how it curates information. What was interesting about Your Money or Your Life
 

is that Google didn't apply it to YouTube for a very, very long time, because YouTube was a place that you went to be entertained. And so it speaks to the idea that our engagement with social media platforms was originally to find your friends and follow baby pictures and see parties and the night out, what-did-everybody-do kind of stuff.
 

And then gradually it became a place that people went for information. And as that shift happened, you started to see this interesting phenomenon where influencers who were just very, very good at communicating were very authentic. They didn't necessarily have a lot of expertise, but they were very good at communicating.
 

And so influencers were better at communicating in this new environment than the institutions where knowledge, or an idea of authority, had typically resided. This of course combines with what we were talking about earlier, where sometimes the institution is wrong. And where is that intermediary consensus-making process
 

when information is flying along? It's not presented on the nightly news; people are searching for information, and random things are coming out every minute of the day. Any influencer with a million followers who normally talks about guitars can start talking about COVID. There are, you know, cons to that.
 

And so for platform curation, the question becomes: what do you surface, and what are the potential harms associated with it? I do think, per the other part of our conversation about transparency, that research into understanding how information is received matters. How can we quantify these harms in some way? Can we connect them to, perhaps, offline impact?
 

We know that anti-vaccine messages increased. We know that they increased in certain communities. We can see comments in the groups from people who make personal decisions for their children that occasionally have truly tragic outcomes. There are people in these groups who lose children.
 

They take that advice and their children suffer. Can you connect that to a broader societal trend? That's where there's a little bit more of a gap. But this question of, when somebody goes and searches for vaccine information, what should they see? Should they see the most popular post on the planet,
 

which is a gameable metric, or should they see something that has some kind of grounding in authority and fact, because there is a potential for very real harm? I personally think it should be the latter. There are other people who completely disagree, and that's the nature of the tension.
 

How do we think about it? Do you give everybody the option to make that determination themselves with middleware, where they can slide up influencers and slide down institutions and media? Maybe that's how people want to live. I think that's one of the information design questions that faces us today.
 

 

[00:35:21] Alex: Super interesting. So, to start winding us down here, in a sort of similar vein, I'd be interested in hearing your thoughts on online identity and where that's heading. People tend to point back to the early days of the internet, as you did earlier, as this largely pseudonymous space, existing on top of open protocols that people are free to use.
 

Then we have the rise of a lot of today's social and tech giants, Facebook most notably in this context, that acted as a sort of "great unmasking," I think I've heard it called, prompting people to use real names when they're interacting with each other online. Now we're in this interesting time period where, in my mind, we're becoming more vulnerable using our likenesses and real identities online, as more and more of our sensitive or identifying information is out there, either via a quick Google search or made available
 

through some kind of database. So what I'm really trying to ask, I think, is: what do you think the future of identity online looks like? Do we keep this quote-unquote web 2.0 attitude of using our real names, or will there be a fundamental shift in the way that we present ourselves online, under one or more identities, either pseudonymously or anonymously?
 

[00:36:24] Renee: I remember, a very long time ago, like 2011, the creator of 4chan, Chris Poole, who goes by moot, was at a conference I was at. And he said something that has stuck with me, which was: it's not who you share with, it's who you share as. And he was arguing for the value of anonymity.
 

And this was at a time when Facebook-style true names seemed to be the direction that things were going, years before the extent to which quote-unquote true names could be manipulated, as they were by a number of state actors and other entities that created fake people on the platform, exposed the flaws in this idea that true names were manageable.
 

But it was that question of who you share as: what kinds of information do you put out there in your professional capacity versus in your personal capacity? I think one of the reasons why people find the experience of having one facet of their lives brought into another
 

so jarring, and so frightening actually, a lot of the time, is because there's a perception of these separate spheres, where I am one person over here and another person over here, and a social platform has the potential to flatten that very quickly. I've talked to a number of people about this particular topic.
 

There's a real value in anonymity, particularly outside of the US, particularly in authoritarian countries, where you really need dissidents to be able to feel like they can express themselves and use these tools to call attention to challenges. At the same time, I do think there is something very interesting
 

that's happening with emerging tech like generative text AI, where you are very soon not really going to know if you're engaging with a human or not. That technology is going to become increasingly sophisticated. And so this idea of proof of person is something that's been intriguing me lately.
 

What does it mean to have a verified identity? Not who you are specifically, but that you are real, that you exist. I think about Reddit a lot in the context of persistent pseudonymity, right? You can use a pseudonym, but there's kind of a ranking; it tells people you have this much karma, you have this many points.
 

So there's a quick indicator that you're not some asshole who just popped into the forum to create problems. Yes, you have a pseudonym, but look, you've existed, you've created value, you've participated in communities in a constructive way. And the question is: is there a way to take that persistent pseudonymity and create an identity layer for the internet, one that doesn't necessarily require a true name, that has differentiation between the various facets of your life or where you choose to be online, but at the same time has this proof-of-person component to it?
 

I do think that's one of the more interesting questions, how we're going to reckon with this over the next five or six years. I hear people say web3 solves this, but I can't get my head around how, so you should have another guest on to ask them about that.
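One way to picture that identity layer is as a data structure rather than a real protocol; every name and field here is hypothetical, a sketch of "persistent pseudonyms plus a proof-of-person attestation":

from dataclasses import dataclass, field

@dataclass
class Pseudonym:
    handle: str    # what this facet of you is called
    context: str   # where it is used
    karma: int = 0 # reputation earned in that context over time

@dataclass
class Identity:
    person_attested: bool  # proof of person: some attestation that a real human exists here
    pseudonyms: list = field(default_factory=list)

me = Identity(person_attested=True, pseudonyms=[
    Pseudonym("guitar_gal", "music forum", karma=4200),
    Pseudonym("sb_parent", "school board discussion", karma=310),
])

# A community can check that "sb_parent" is a real, reputable participant
# without learning a legal name or linking it to "guitar_gal".
assert me.person_attested and me.pseudonyms[1].karma > 0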
 

[00:39:26] Alex: Definitely. So, you know, last one, a last but, I guess, much more open question
 

I've started enjoying asking people: what do you think more people should be paying attention to?
 

[00:39:37] Renee: Well, that's a great question. Hmm. I feel like I'm always so immersed in my work, and I actually ask myself, what should I be paying more attention to? I have really been following education policy quite a lot.
 

I feel like that's actually quite foundational: a lot of the questions around what it means to be educated. I have three little kids, eight, five, and 18 months, and being thrust into homeschooling during the pandemic really made me pay attention quite a lot to the politics of education, the dynamics of education, the way that we encode our values in what we teach.
 

And, you know, I didn't pay quite so much attention to that until much more recently. The cost of an education, the value of trade school, the question of math curricula, all of these things are foundational and have a pretty profound impact on the next generation.
 

And so, I think I would say, for me, I've been really struck by the dynamics around education.
 

[00:40:38] Alex: Got it. Renee, this has been great. Thank you so much for the time. I hope we can keep in touch, and I'll shoot you this.
 

[00:40:44] Renee: Let me know. 
 

Awesome. Thanks so much. Take care. Bye bye.