CIO

Twitter spy scandal a wake-up call for companies to clean up their data access acts

Two Twitter employees accessed user data on behalf of the Saudi government. Neither should have had that access, and that points to a bigger problem at companies of all kinds.

A tremor rippled across the information security community last week when the Justice Department announced criminal charges against two Twitter employees, Ahmad Abouammo and Ali Alzabarah, for acting as foreign agents under the direction and control of the Kingdom of Saudi Arabia. The complaint alleges that the two men used their ability to access user data to provide the Saudi rulers with private information on more than 6,000 Twitter users.

Abouammo, who was a media partnerships manager at Twitter, is a US citizen. Alzabarah, who was a site reliability engineer at the social media giant, is a Saudi citizen. A third man, Ahmed Almutairi, who did not work at Twitter but acted as an intermediary in the theft of some of the data, is also a Saudi citizen.

Both former Twitter workers had access to proprietary and confidential information about Twitter’s users, including their email addresses, birthdates, phone numbers and IP addresses. Alzabarah, who pulled data on four specific users at the request of the Saudis, also had access to users’ biographical information, logs that contained the users’ browser information, and a log of all of a particular user’s interactions at any given point in time, the complaint says.

The former Twitter employees accessed the user data even though neither man’s job duties required access to this information, a reportable violation of Twitter’s policies at the time regarding user data protection. Twitter says it enhanced its controls and permissions in 2015 to restrict user data access to only those employees whose duties required it.

Insider breach raises questions

Even so, the spectacle of insiders spying on behalf of a foreign government raised alarm bells among cybersecurity specialists, who fear that lax employee access to sensitive data is widespread at tech companies. “There are two big takeaways” from this situation, says Mike Chapple, senior director of IT and associate teaching professor of IT, analytics and operations at the University of Notre Dame.

First, “Why did employees who had nothing to do with interactions with individual users have access to the systems that contain that information, where they were able to go in and pull this profile information?” Chapple asks. “Anybody who's been around cybersecurity for a while knows that there's this principle of ‘least privilege’ that we've embraced for decades. It says people should only have the access they need to do their jobs.”

“In this case with one of the employees being a media relations person and the other one being a site reliability engineer, there is no real imaginable reason that any of them would require this access,” Chapple says. “If Twitter had tightly controlled the number and types of employees who had access to the information, it would become much harder, then, for foreign intelligence agencies to target someone with that access.”
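What least privilege looks like in practice can be shown with a minimal sketch. The roles, permission names and can_access helper below are hypothetical, not Twitter’s actual access model, but they illustrate a deny-by-default policy in which a role carries only the permissions its duties require:

```python
# Minimal least-privilege sketch. The role names and permission sets are
# illustrative assumptions, not Twitter's actual access model.
ROLE_PERMISSIONS = {
    "user_support_agent": {"read_user_profile", "read_user_email"},
    "site_reliability_engineer": {"read_service_metrics", "restart_service"},
    "media_partnerships_manager": {"read_public_account_stats"},
}

def can_access(role: str, permission: str) -> bool:
    """Grant a permission only if the role's duties explicitly require it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A site reliability engineer asking for a user's private email is denied,
# because nothing in that role's duties requires it.
assert not can_access("site_reliability_engineer", "read_user_email")
assert can_access("user_support_agent", "read_user_email")
```

Under a policy like this, a request outside a role’s duties fails by default rather than succeeding by default, which is the opposite of the broad-access approach the Twitter case exposed.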

Secondly, “It's not surprising that this sort of thing would happen at a technology company because the easiest path forward when it comes to access control is just giving large numbers of people access to as much information as you can because then you don't run into access control as a barrier to the work that you're trying to get done,” says Chapple.

Least access a cybersecurity gold standard

The notion of “least access” has been enshrined in cybersecurity frameworks since the beginning of the field, and it plays a prominent role in the gold-standard framework, the Framework for Improving Critical Infrastructure Cybersecurity, first released by the National Institute of Standards and Technology in 2014 and updated since. Among the framework’s access control functions, one key desired outcome of protective cybersecurity practices is to ensure that “access permissions are managed, incorporating the principles of least privilege and separation of duties.”
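As a rough illustration of the second principle, separation of duties, a minimal sketch (the AccessRequest record and its fields are hypothetical, not drawn from the NIST framework) might simply refuse any access grant that was approved by the same person who requested it:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    requester: str   # employee asking for elevated access
    approver: str    # manager or data owner who signed off
    permission: str  # e.g. "read_user_profile"

def violates_separation_of_duties(req: AccessRequest) -> bool:
    """An access grant approved by its own requester defeats the control."""
    return req.requester == req.approver

# Self-approved requests are flagged; independently approved ones pass.
assert violates_separation_of_duties(AccessRequest("alice", "alice", "read_user_profile"))
assert not violates_separation_of_duties(AccessRequest("alice", "bob", "read_user_profile"))
```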

At least one previous high-profile incident involving violations of the least access principle hints that the problem may be rampant. In 2014, news broke that Uber’s “God View” software allowed far too many employees to track the real-time locations of passengers, including high-profile politicians, celebrities and even employees’ girlfriends. In 2017, Uber settled a Federal Trade Commission complaint over this sloppy access policy by agreeing to two decades’ worth of privacy and security audits.

“It wasn't until it came out in the media that people reacted negatively to it, that Uber locked that down and took that access away from people who didn't have a need to use it,” Chapple says. Like Uber, “organizations that experience a breach would probably agree that the cost of remediating a breach in terms of the direct costs and the reputational damage and everything else probably far surpass what it would have cost to implement the security controls that could have prevented that breach from occurring.”

Will the Twitter insider breach change how companies do authorization?

Yet, despite the fallout of the Uber breach, companies still appear to be getting least access wrong. “I think an example here with the Saudis meddling with Twitter is going to be the first of many stories that are going to usher in a new era, the same way that corporations responded to the breach of Target and suddenly took cybersecurity seriously,” says Bryson Bort, the founder and CEO of cybersecurity firm Scythe. “It’s a case of who’s watching the watchers.”

“The insider threat is pernicious. It’s hard to see it. It’s sort of a betrayal,” Bort says, pointing to the human factors that underlie the problem in the first place. “I inherently trust the people I give paychecks to and who wear my logo on their shirt. We as human beings trust people that are on our team.”

Moreover, it takes a lot of forethought, planning and ongoing work to set up stratified and effective authorization schemes, all while guarding against any harm to people’s ability to do their jobs. “You’re constraining people’s jobs, which requires careful consideration because if you restrain it too much, then you can actually prevent somebody from doing their job,” Bort says. “Nobody wants to affect operations.”

To tackle the challenge of establishing more sophisticated authorization policies, start with the most sensitive systems in the organization, Chapple says. “If you think about it from the perspective of an outsider who's interested in economic gain and financial information or even perhaps is engaged in some form of espionage,” start with the systems that would be of most interest to those people.
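One way to act on that advice is to rank systems by the sensitivity of the data they hold and work down the list. The inventory entries and scoring below are hypothetical, just to show the ordering:

```python
# Hypothetical system inventory: score each system by how attractive its data
# would be to a financially motivated attacker or a foreign intelligence service.
systems = [
    {"name": "marketing-cms", "holds_pii": False, "holds_financials": False},
    {"name": "user-profile-db", "holds_pii": True, "holds_financials": False},
    {"name": "payments-ledger", "holds_pii": True, "holds_financials": True},
]

def sensitivity(system: dict) -> int:
    """Crude scoring: financial data and personal data make a system a priority."""
    return 2 * system["holds_financials"] + system["holds_pii"]

# Review authorization policies for the most sensitive systems first.
for system in sorted(systems, key=sensitivity, reverse=True):
    print(f"{system['name']}: sensitivity score {sensitivity(system)}")
```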

Bort thinks that even before organizations can start to think of protecting valuable or otherwise strategically important assets, they have to do the foundational work of performing real asset inventories first. “How many companies know all of the assets in their enterprise?” he says. “If I don't even know what things are creating and computing data, how am I going to even start to get my hands around the data?”
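A minimal sketch of that foundational step (the asset records and fields here are hypothetical) is simply a list of everything that creates or stores data, flagging whatever still lacks an owner or a data classification, since there is nothing to attach an access policy to until those blanks are filled in:

```python
# Hypothetical asset inventory records.
assets = [
    {"name": "user-profile-db", "owner": "platform-team", "data_class": "restricted"},
    {"name": "legacy-analytics-vm", "owner": None, "data_class": None},
]

# Assets with no owner or classification can't yet be covered by an access policy.
unaccounted = [a["name"] for a in assets if not a["owner"] or not a["data_class"]]
print("Assets with no owner or data classification:", unaccounted)
```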