
Slack + Snapchat = AppSec? Breaking Down the Complexity of Messaging Apps


The major threats to messaging apps stem from how most of these tools were designed.

· Security Zone ·


Over the course of the last few weeks, messaging applications were hit hard with vulnerabilities, disclosures of nation-state hacking attempts, and inappropriate behavior by insider employees. As organizations continue to prioritize cybersecurity, outfitting their infrastructure with the latest and greatest defensive and offensive technologies, one area clearly remains lacking in security: communication and messaging tools.

Why is that? In the age of ISO, FedRAMP, SOC 2, and the rest of the trees in the acronym forest of security compliance, why is messaging, in particular, in such a precarious state? The main reason, in my humble opinion, is how most of these tools were designed.

When it comes to security, the weak point of virtually all messaging apps to date (and many other apps and services, really) is that they're built with the assumption that users have to trust the service. The problem is: Can users really trust the service? I’m not saying there are bad people running them, necessarily, but how many breaches (e.g. Equifax 2017) or alleged abuses (e.g. Snapchat 2019) have to happen until the answer to that question becomes clear? 

These days, so much of our personal data, from our PII to our online activity, is in the hands of third-party service providers. For many services, we simply can’t conceive of another way. Our bank, for example, clearly needs access to our account information and financial transactions – this access is what makes it a bank and is essentially the service it provides. Messaging services, however, are different. They are not like banks. They don’t need access to the content of the messages they deliver. Unfortunately, most of them were designed with this access, and now we’re all suffering.

Once a messaging service is built on a precondition of provider trust, its design becomes its Achilles heel and users generally suffer in two ways. First, even if basic security measures like transport layer security (TLS) and infrastructure disk encryption are actually used to protect message content, they only protect it for part of its journey from sender to receiver, and there is still a significant period of time where it is readable from the service’s perspective.  This window of availability is what many security experts refer to as a big glaring hole (to use a technical term).  
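That window of availability can be sketched in a few lines of Python. This is a toy model, not real TLS: XOR with a random key stands in for link encryption, and the names (`on_wire_1`, `at_server`) are invented for illustration. The point it demonstrates is that hop-by-hop protection leaves the plaintext fully readable at the service between the two hops.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for link encryption -- never use XOR pads like this in production."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"

# Hop 1: sender -> service, protected on the wire by a sender/service key.
k1 = secrets.token_bytes(len(message))
on_wire_1 = xor(message, k1)

# The service decrypts its hop in order to route the message...
at_server = xor(on_wire_1, k1)
assert at_server == message  # ...and the plaintext sits here, fully readable.

# Hop 2: service -> recipient, re-encrypted under a fresh key.
k2 = secrets.token_bytes(len(message))
on_wire_2 = xor(at_server, k2)
```

Both wire legs are protected, but `at_server` is the glaring hole: anyone with access to the service at that moment, hacker or insider, sees the message.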

The provider will, of course, try to fill this hole with security compliance certifications and internal controls, but to that, all I can say is this: In every third-party data breach to date involving PII (3 and counting), certifications and internal controls were in place. Indeed, experience has taught me that in most cases where humans and best intentions are involved, what can happen generally does. Controls need only fail once for bad things to happen, and when they do, it’s typically when and where it will hurt us most.

The second way users suffer due to this design is that once the service has access to user data, it always finds ways to leverage it, often prioritizing "virality" and its own growth over the user's interest in protecting or deleting it. This often leads to the provider “harmlessly” scanning message content for marketing purposes, retaining messages longer than necessary, abusing user contacts to aid the growth of the service, and other bright ideas — none of which are genuinely done in the user’s best interest.

The right way to think about trust when you want security is that less is more. The concept of zero trust as a security goal, which really hasn't changed much since we were keeping secrets in kindergarten, is that ideally if you don't need to trust someone, you shouldn’t, and if you don't need to trust anyone, don’t. Practically, as it relates to the use of technology and especially messaging services today, it means the fewer people or things you need to trust with something important to you (like a private message), the better off you generally are.

Done properly, end-to-end encryption (e2e) – a method of encrypting messages in such a way that they cannot be decrypted by anyone or anything except the recipient on their device – is a wonderful way to trust less and gain more. With solid e2e in place, you don't need to trust the provider to protect your messages as they route through its infrastructure, because the provider lacks the technical capability to read them. This protection also keeps messages safe from others who would steal them from the provider, hackers and insiders alike.

Another way for a service to limit its requisite trust is to limit the kinds of user information it collects and stores to only what's minimally necessary to provide the service. Doing so limits the risk of that information being lost, abused, or stolen, which greatly improves its security posture.
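The end-to-end model can be sketched the same toy way. Again, this is not real cryptography: a shared one-time pad stands in for a proper e2e scheme such as the Signal protocol, and the pad is simply assumed to be shared out of band (real protocols handle this with key agreement). What matters is what the service can and cannot see.

```python
import secrets

def xor(data: bytes, pad: bytes) -> bytes:
    """Toy stand-in for end-to-end encryption -- not a real cipher."""
    return bytes(a ^ b for a, b in zip(data, pad))

message = b"meet at noon"

# Sender and recipient share the pad; the service never holds it.
pad = secrets.token_bytes(len(message))

ciphertext = xor(message, pad)  # what the sender hands to the service

# The service can store, route, or even leak this blob -- it cannot
# read it, because it holds no key. That is the "trust less" property.
service_view = ciphertext

# Only the recipient, holding the pad, recovers the plaintext.
assert xor(service_view, pad) == message
```

Compare this with the hop-by-hop sketch: here there is no point in the message's journey where the provider's infrastructure holds readable plaintext, so a breach of the provider exposes only ciphertext.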

When it comes to security, so much comes down to whom we trust. Any chance we get to trust fewer things with less of our critical data is an opportunity for greater security that we should not pass up. So many messaging services have squandered their opportunity to provide meaningful user security by essentially designing it out of their systems. I think there’s just too much at stake to design things like that anymore. When we consider the security of our messaging tools going forward, we should remember that the less we have to trust them, the more we can.

Topics:
application security ,application development ,appsec ,messaging ,mobile ,security ,user ,user data ,design ,services

Opinions expressed by DZone contributors are their own.
