- Uploading source code with hardcoded credentials to GitHub (Uber).
- Using weak passwords for administrator access. Really weak passwords. Like admin/admin. (Equifax).
- Failure to implement authentication and access control in a robust way (Filesilo - with plain text password storage).
- Allowing people to sign up without any verification of identity or ownership of e-mail address or other information used to sign up (lots of second-tier sites).
That's about the service itself, but what about the sign-up page and the sign-up process? Here's a checklist for your signup page - and similar use cases.
- Sanitize your inputs (protects your database from injection attacks, and your users from stored cross-site scripting attacks).
- Validate your inputs (protects against injection attacks, user errors, and garbage content).
- Verify e-mail addresses, phone numbers, etc. (protects your users against abuse, and your service against spam).
- Handle all exceptions: unhandled exceptions cause all sorts of trouble, ranging from poor user experience (unhelpful error messages) to information leaks exposing sensitive data from your database.
- Treat secrets as secrets - use strong cryptographic controls to protect them. Hash all passwords before storing them.
OK, that was a brief checklist - but a checklist is far from the most important part of creating a secure sign-up process, or any process for that matter. The key to secure software is a good workflow that takes security into account. Here's how I like to think about that process.
Whenever you are building something, the first thing that comes to mind is "functionality." Nobody builds security first and then tries to integrate functionality - it's always the other way around. Although a secure development lifecycle includes a lot of "other things" like competence development, team organization, and so on, let's start with the backlog of functionality we want to build - and focus on the actual development, not the organization around it. A list of functionality is a good starting point for thinking about threats, and then about security. Let's take a signup page as the starting point - here are a few things we'd like the page to have:
- Easy signup with username, password, e-mail.
- Good privacy control: data leaks should be unlikely, and if they occur they should not reveal sensitive info.
So, how can we build a threat model based on this? We'd probably like some more information about the context, and the technology stack being used.
- Who are the stakeholders (users, owners, developers, customers, competitors, etc)?
- Who could be interested in attacking the site? (Script kiddies? Cybercriminals? Hacktivists?)
- What is the value of the service to the various stakeholders?
If we know the answers to these questions, it is easier to understand the threat landscape for our web page, and for the signup component in particular. We also need to know something about the technology stack. For example, we could build our signup page on the following stack (it could be anything, but lots of websites use these technologies today):
- MongoDB for data storage in the cloud.
- Node.js for building a RESTful API backend.
- Vue.js for the front-end.
Having this written down, we can see the contours of how things work, and we can start building a threat model that looks at people, infrastructure, software, and so on.
- People risk: someone signs up with another person's details and pretends to be that person (a personal risk on a social media platform, for example).
- Software risk: user inputs are used to conduct injection attacks that leak data about users (MongoDB can be attacked in much the same way as a SQL database).
- Software risk: secrets are stored in unprotected form and leaked, with user credentials sold on the dark web or posted to a pastebin.
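The injection risk is worth illustrating. A Mongo query like `{ username: req.body.username }` is injectable if the client sends an object such as `{ "$gt": "" }` instead of a string: the operator then matches every document. A minimal sketch of a guard (`sanitizeQueryValue` is a hypothetical helper name):

```javascript
// Reject anything that is not a plain string before it reaches a MongoDB
// query, so operator objects like { "$gt": "" } never act as query operators.
function sanitizeQueryValue(value) {
  if (typeof value !== 'string') {
    throw new TypeError('expected a string, refusing potential query operators');
  }
  return value;
}
```

Libraries exist that recursively strip `$`-prefixed keys from request bodies, but for a signup form the simplest control is to require plain strings for every field you query on.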
Creating a big list of threats like this, we can rank them based on how serious the impact would be, and how likely they are. Based on this, we create security requirements that are then added to the backlog. These security requirements should then also be added to the testing plans (whether it is unit testing or integration testing), to make sure the controls actually work.
Testing and development are iterative - but, at some point, it seems we have covered the backlog and passed all the tests. It is QA time! In many development projects, QA focuses on functionality first, then performance. That is of course very important - but if users cannot trust the software, they will leave. That's why security should be part of QA as well. This means more extensive testing, typically adding static analysis, code reviews, and vulnerability scans to the QA process. Normally, you will find something here, ranging from "small stuff" to "big stuff." If the quality is not sufficient - say performance is poor or functionality fails - one would go back to the backlog and update it for the next Sprint. The same should be done for security.
When the backlog is updated, we need to update our threat model as well, informed by what we learned in the previous Sprint.
Following a process like this will not give you 100% bug-free, super-secure software - but it will certainly help. And by tracking some metrics you can also measure whether quality improves over time. Static testing, especially, gives good metrics for this: the number of code defects, whether they are vulnerabilities or not, should decrease from one Sprint to the next.
That's why we need a good process: it lets us learn to build better things over time. Move fast and break things - but repair them fast as well. This way it is possible to combine innovation with good security. Innovation is of limited use unless we can trust its results.