8/24/2022: Twitter Drama Has a New Twist
Twitter's ex-head of security says the company is lying to Elon Musk about bots.
Twitter’s ex-head of security, Peiter “Mudge” Zatko, filed a whistleblower complaint alleging egregious security deficiencies. You can read the whole (redacted) complaint here. It’s quite an interesting and informative read, showing how the sausage gets made at Twitter, and frankly it’s quite consistent with my experience during the early days of Facebook. People assume these platforms have flawless systems and very precise processes for problem solving, software deployment and internal controls. That could not be further from reality. Basically, Jack Dorsey hired Mudge to fix the security deficiencies after a 17-year-old hacked into the Twitter accounts of President Obama and many other public figures in 2020. He tried very hard to do his job but faced an uphill battle, because the executive team allegedly just wanted to sweep various security and spam problems under the rug. A few months after he accepted the job, he presented his findings to the executive team. Let me just quote the whole thing from the report:
49. Initial Report: Mudge presented his initial findings to the senior executive team in February 2021, about one week before the Q1 Board meeting. Jack Dorsey had specifically recruited Mudge for his reputation of speaking truth to power, and told Mudge to not hold back. And Twitter's other senior leaders knew they had security problems. But even so, the rest of the executives were stunned to hear Mudge tell them just how bad things were. While Mudge highlighted some positive aspects of Twitter's security processes, such as the Company's well exercised (but understaffed) team tasked with scrambling to react to crises, the overall picture was dire.
Apparently, the then-CTO (now CEO) Parag Agrawal disagreed with the assessment that Twitter faced a non-negligible existential risk of even a brief simultaneous, catastrophic data center failure and had no workable disaster recovery plan. Mudge was then instructed not to tell the board about his findings. According to item 52 in the complaint:
52. Instructions to withhold information from Board: After the executive team meeting, Mudge was instructed not to send a detailed written report to the Board of Directors, but instead convey his findings orally, at a high level only. Mudge found the request unusual, but as a new team member, complied. With the benefit of hindsight, Mudge now interprets this instruction as an overt act in furtherance of an ongoing effort to restrict critical information and defraud the Board of Directors and Twitter shareholders.
After Jack Dorsey stepped down and Parag Agrawal took over as CEO, Mudge’s working relationship with Agrawal deteriorated. There are a lot of redacted sections in the complaint, but Mudge apparently called something fraudulent, and Agrawal terminated him a few days later.
In the report, he also called out that Twitter is lying about bots to Elon Musk. He alleged that Twitter’s CEO is playing word gymnastics: the <5% bot number is based on mDAU (monetizable daily active users), a metric Twitter can define arbitrarily internally and one that, by construction, already excludes bots. According to Mudge, if the number were based on a commonly accepted definition of DAU, Twitter’s bot percentage would be way higher. We will see how this piece of information gets used in Elon’s Twitter acquisition trial.
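The denominator game is easy to see with numbers. Here is a minimal sketch, with entirely hypothetical figures (the source gives no actual DAU or bot counts), showing how reporting bots as a share of mDAU rather than of total DAU can keep the headline number under 5% even when bots are a much larger share of overall activity:

```python
# Hypothetical numbers only -- chosen to illustrate the denominator effect,
# not taken from Twitter's actual disclosures.
total_daily_active = 280_000_000   # assumed total daily active users (DAU)
detected_bots = 42_000_000         # assumed bot accounts active daily

# mDAU excludes accounts classified as non-monetizable; assume the
# detected bots are filtered out before the metric is computed.
mdau = total_daily_active - detected_bots          # 238,000,000

# Assume a small residue of undetected bots remains inside mDAU.
residual_bots_in_mdau = 0.04 * mdau

pct_of_mdau = residual_bots_in_mdau / mdau * 100   # the reported figure
pct_of_dau = detected_bots / total_daily_active * 100

print(f"bots as % of mDAU: {pct_of_mdau:.1f}%")    # 4.0% -> "under 5%"
print(f"bots as % of DAU:  {pct_of_dau:.1f}%")     # 15.0%
```

Both statements can be simultaneously true; the "under 5%" claim is about a denominator that was defined to exclude most bots in the first place, which is the word gymnastics Mudge describes.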
This complaint highlights the challenges site integrity and cybersecurity professionals face when trying to rectify the enormous problems large social media platforms have. They are constantly understaffed and under-resourced, and are asked to paint a rosy picture even when things are dire. People want to blame the Twitter CEO and the executive team for failing to secure their systems and to fight spam. But I think what we should really blame is the incentive structure. Executives' performance is measured on user growth, engagement and revenue. Everyone says security and site integrity are important, but the related initiatives are constantly deprioritized across all social media platforms because these efforts are expensive and there's no clear ROI on growth, engagement or revenue, at least not in the short term. In fact, a system that is exceptional at detecting spam bots might conflict with measurable growth and engagement.
What ends up happening is that these platforms just want to hire security officers who are willing to sign off on whatever terrible stuff they have. I am being super cynical here, but I suppose Twitter regretted hiring someone who actually wanted to do the job and make things better. Just look at what happened at Facebook and see who got fired and who got elevated to the top security/integrity positions. That is the playbook Big Tech uses. I am against over-regulation in general, but I believe we need to force these platforms to provide a level of transparency, so people have a way to know how bad things are and users can decide for themselves how they want to use these platforms.
https://twitter.com/joncallas/status/1562690112147709952