Twitter and responsible social media
Hello.
This video is quite different from what I usually do. I discuss the dramatic exodus from Twitter that has happened over the last week and what it means to be a responsible social media company. Twitter has been instrumental as a tool for misinformation in recent years, and I think it's important that we have a discussion about how to hold social media companies accountable.
I am not personally on Twitter anymore, but you can follow me on Bluesky or Mastodon.
As always, you can watch the video here or read the transcript below.
Best,
Anders
Transcript:
This video is a bit different because I'm not going to talk about war, at least not in the traditional sense. What I want to talk about is social media and some of the things that have been happening around Twitter. And it's of course also related to the war in the sense that the things that are happening on Twitter have a direct impact on the different wars that are taking place, both in Ukraine and in other places. But I also think I have something meaningful to add to the debate about Twitter and what it means to be a responsible social media platform. So let's talk about it.
Here on YouTube, I make videos about all aspects of the war in Ukraine and I do the same in the Danish media. I cover a broad range of military topics. But the research project that I'm working on in my day job is actually about social media and how social media is changing the conduct of warfare. And I'm doing that by studying the role of platforms like Telegram and YouTube and Twitter in the ongoing war in Ukraine and how they are changing the conditions for military leadership. So when I'm talking about social media, it's not just me speaking as an enthusiastic user of these platforms. I'm also speaking as someone who studies these things professionally.
And one of the things that I find striking when talking to people about what's going on with Twitter is how deterministic many of them are in their interpretation of the situation. So people will say things like, "yes, there are a lot of bots on Twitter, but that would happen on any platform. What can you do about it?" Or they will assume that, "sure, the debate is polarized because that's just how these things work". But that's not actually true. That's not necessarily how these things work. When these things happen, it's because of choices made by the people who control the platform. And I think it's important to have a debate about the platforms where our democratic conversations are taking place and what it means to have standards for good governance on social media. Because some social media companies do it better than others, and it's not all just the same.
So I want to suggest three yardsticks, we can say, that I think can be used as a starting point for such a discussion. And the first one is that there should be an aspiration on the part of the company to not have any particular political bias in its algorithm. It's one of the classic points of criticism when people talk about social media. They assume that the algorithm will favor their opponents. Because that's just how they see it. So people on the left tend to feel that the algorithm has a right-leaning bias. And people on the right will feel that there is a left-leaning bias. But just because you feel it that way, that doesn't necessarily mean it's true. And when you run a social media company, the responsible thing to do is to try to hit that balance where the effect of the algorithm is altogether neutral. It's probably not possible to do that 100%, but it can be the aspiration. And it's possible to get pretty close to that.
The second point is that any social media platform must have a responsible approach to moderation. Moderation is basically the practice where certain types of content are not allowed on the platform. And if you break the rules, your posts can be removed or you can get blocked. This is a very contentious topic because it's something that everyone will have personal experience with, either because we have written something that has been removed by the moderators or because we have experienced harassment on the platform.
But the one thing I will say is that some degree of moderation is always necessary. Some people like to say that there should be absolutely no rules and that any kind of moderation is a violation of free speech and things like that. In reality, you have to have some rules. Because if you don't, then the place will basically just turn into a dumpster of violence and pornography, right? And some social media platforms might want to be that place, but most of them don't. So it's not a question of whether there should be moderation, but a question of how. And a responsible social media platform must have responsible practices about moderation.
And it's not easy to do. Moderation is one of the hardest things to do because it's often the case that you have multiple good principles that are in conflict with each other, and you have to make decisions in this gray zone about what's the right thing to do. And in that process, you also make decisions about who it is you want to feel comfortable on the platform. But that's something that a responsible social media platform has to prioritize and be conscious about.
And the last point I will offer is that a responsible social media company must have good APIs. An API is a way that applications can interact with the platform. And if there are good APIs, then researchers can also get access to the data. So you can download maybe tens of millions of posts in a comma-separated file. And if you know a bit of programming, you can analyze that data and understand how the platform works.
So a good and open API gives transparency. It allows you to see the results of the algorithm. Because you can trace the discussions and the reactions, you can make informed assessments about how the algorithm works. So you can elevate the discussion from the point where individual users just have a personal feeling about how the algorithm favors their political opponents. If the platform has good APIs, then you can have actual data, and researchers can verify whether there is a left-leaning or a right-leaning political bias in the results that are produced by the algorithm.
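Just to illustrate what that kind of research can look like in practice, here is a minimal sketch in Python. The file name and the column names are hypothetical stand-ins for a data export from an open API, so treat it as an illustration of the principle rather than any real platform's data format.

```python
# A minimal sketch of the kind of analysis an open API makes possible.
# "recommended_posts.csv" and its columns are hypothetical: imagine an
# export of algorithmically recommended posts with the columns
# "author_leaning" ("left" / "center" / "right") and "impressions"
# (how often the algorithm showed the post).
import pandas as pd

posts = pd.read_csv("recommended_posts.csv")

# Share of algorithmic impressions that goes to each political leaning.
exposure = posts.groupby("author_leaning")["impressions"].sum()
exposure = exposure / exposure.sum()

# Compare with the share of posts each group actually produced.
baseline = posts["author_leaning"].value_counts(normalize=True)

# Ratios far above or below 1.0 would suggest the algorithm is
# amplifying or suppressing one side relative to its output.
print(exposure / baseline)
```

The details would of course be far more careful in a real study, but the basic move is exactly this: compare what the algorithm shows people with what was actually posted.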
So those are three points that I think we can use as a starting point for a discussion about social media platforms, about what it means to be a responsible social media platform. And the platforms are not all the same. Some of them are doing quite well on these parameters, and others are not.
But if we take Twitter, then this is obviously something that there has been a lot of discussion about since Elon Musk bought the company. And if we begin from the bottom, Twitter used to have really good APIs. It was actually one of the reasons why Twitter became so popular, because those APIs allowed developers to build third-party clients for Twitter. But it also meant that researchers had very good data access on Twitter. So there was pretty good transparency about how the algorithm worked. But when Elon Musk bought the company, he closed those APIs, so researchers no longer have data access.
And then if we look at content moderation, it's much the same story. Twitter used to have fairly good moderation. There was, of course, a lot of discussion about it. Some people were angry about how they did moderation because they disagreed with some of the decisions. But overall, it was a task that Twitter as a company took seriously.
When Elon Musk bought the company, he dramatically relaxed the standards of content moderation. And just to be clear, it was absolutely possible to change the policies of content moderation and still be a responsible company. You can make a decision to set a different standard. But you can't just fire most of your content moderation team and say that you don't need that function anymore. That's not a responsible way to administer a social media platform.
And finally, we get to the question of algorithmic bias. It's a bit complicated because what you need to look at is not the algorithm in itself. You need to look at the results of the algorithm. There are many different variables in a recommender system, and the goal of the designers is to constantly tweak these values to ensure that it provides relevant and still balanced recommendations. At least that's the goal if the ambition is to have a politically unbiased algorithm.
And Elon Musk actually made the Twitter algorithm open source. So everyone can find it on the internet. It's on GitHub. And this gave the impression of openness, because everyone can go and see for themselves that there is nothing in it that would favor certain political views. And that's true. There is nothing in the algorithm that says that if this is a Republican tweet, then it should be promoted more than a Democrat tweet. But that's also not really relevant, because that's not how it works.
The way that bias occurs is in the decisions about how much weight you give to different parameters in the algorithm. So just as an example, it might be that Republican-leaning voters tend to use the like button more, whereas Democrat-leaning voters will spend more time leaving comments. In that case, you would need to figure out how much weight a like should have compared to a comment, and how exactly these values should be set so that the overall result remains balanced. That was just an example. I don't know if Republican voters actually use the like button more than Democrats; the point is just to show that these are the things you would need to look at to build a balanced algorithm.
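To make that concrete, here is a toy sketch of how such a weighting decision plays out. All the numbers are invented, and real recommender systems combine far more signals than this; the point is only that bias lives in the choice of weights, not in any explicit political rule in the code.

```python
# Toy ranking function: a weighted sum of engagement signals.
# Both weights are invented for illustration.
LIKE_WEIGHT = 1.0     # how much one like contributes to a post's score
COMMENT_WEIGHT = 4.0  # how much one comment contributes

def score(likes: int, comments: int) -> float:
    """Rank a post by a weighted sum of its engagement signals."""
    return LIKE_WEIGHT * likes + COMMENT_WEIGHT * comments

# Suppose one audience mostly likes and another mostly comments:
post_a = score(likes=100, comments=5)   # 120.0
post_b = score(likes=20, comments=25)   # 120.0

# With these weights the two posts tie. Lower COMMENT_WEIGHT to 2.0 and
# post_a wins (110 vs 70); raise it to 8.0 and post_b wins (140 vs 220).
# No setting is "neutral" by definition; balance has to be verified in
# the algorithm's actual output, which is why data access matters.
print(post_a, post_b)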
So just because the algorithm is open source, that doesn't actually provide any transparency about the algorithm. To assess that, you would need to see the output of the algorithm. And as I said, that's not possible anymore because Twitter has shut down the APIs.
But when Elon Musk bought Twitter, he talked about how in his perception there was a liberal bias on the platform, and he wanted to change that. He wanted to fight against the woke virus and stuff like that. But if we look at the research that was done before he bought the platform, it indicated that the algorithm was actually reasonably balanced. In fact, it found that there was a slight right-leaning bias, not a left-leaning one. And contrary to what many people believe, the algorithm was also not really amplifying extreme views on the right and the left at the expense of centrist views.
So the old Twitter was doing a pretty good job of trying to keep the algorithm balanced. And if you were to adjust anything in a responsible way, then that would be to tweak it to be slightly more liberal. That would be the data-driven approach. But Musk makes decisions about these things based on his gut feeling, and that is not a responsible way to run a social media platform.
I actually think it gets worse than that, because by making the algorithm open source, Twitter has essentially published an instruction manual for the people who are building bot farms about how they can better game the system. Because now these programmers can see exactly what behavior they need to program into their bot accounts to maximize their effect.
So relaxing the moderation practices and publishing the algorithm has helped the bad actors who want to promote stories about how the CIA is running biolabs in Ukraine and stuff like that. Now you have all these bots going around mimicking the behavior that is described in the documentation, and that in itself will distort the output of the algorithm. Unless there is an equal number of bots on the left and the right of the political spectrum, the mere act of publishing the contents of the algorithm will introduce a political bias.
So we can't say that Twitter currently has the ambition of having a politically unbiased algorithm.
That's the reason why I'm not on Twitter. I was a very heavy Twitter user for 12 years. I loved the platform. It was hard to leave. But I think that quite objectively, it is today a social media platform that is run in an irresponsible way. And I just don't want to provide content on the platform as long as that is the case, because my presence would give the platform a veneer of legitimacy that it doesn't deserve.
And I don't buy the argument that you can be the voice of reason on such a platform because the people who control the algorithm will ultimately win if they don't aim to make the algorithm as unbiased as possible. You might think that you are the voice of reason and that you are having a positive impact by providing your perspective. But in reality you're just legitimizing the platform and you are attracting users who can then be exposed to misinformation or biased information, which will ultimately sway public opinion in the opposite direction of what you want.
If they change these policies on X, then I might be back. My problem is not Elon Musk, but it's those specific decisions that in my mind mean that Twitter – or X – is an irresponsible actor. And with the huge influence that social media has over our societies, I think it's important that we have this discussion about what it means to be a responsible social media platform.
By now, I think it's probably too late for Twitter. It's in a downward spiral, and I think it's going to end up as some kind of clone of Truth Social. And there are some very strong alternatives that people can choose today where the practices of the companies that are running them are just objectively better. Threads is better, Mastodon is better, Bluesky is better.
All these platforms score better on the parameters of having an ambition for an unbiased algorithm, having transparent moderation practices, and having an open API for researchers. Not all of them are equally good, but they are all better than Twitter at this point. YouTube, by the way, also scores quite well on these parameters.
I think the momentum right now is with Bluesky, if we look at it as a Twitter replacement. That seems to be where most people are going at this point. And I must say, I also think it's a pretty great platform. It's feature-rich and easy to use. So if you don't have an account on any of those platforms already and you want to check it out, then I would definitely suggest looking at Bluesky.
But I think the most important thing is that we have this discussion about what kinds of standards we want to set for the platforms where our public discussions are taking place. And I think it's important that we increase the general awareness of these things, so people don't assume that all social media is just some kind of black box, that there is no way to monitor what the algorithm does, and that we just have to live with whatever the platform owners come up with. Because that's not true. If there are good and open APIs, then we can actually do research about these things, we can have real knowledge about how they work, and we can demand that they live up to certain standards.
Okay, I will end it here. You can find me on Bluesky or Mastodon. I also have an account on Threads but honestly I don't use it that much. Or you can of course also follow me here on YouTube and you can click Subscribe and you can click the bell icon and then you will get notifications when I upload new videos. And if you want to support the channel, you can subscribe to my newsletter. It's on www.logicofwar.com. Thank you very much for watching and I will see you again next time.