Seven Best Practices in Digital Security for Business Owners and Managers
It's a safe bet that security issues are going to grow, and current security practices aren't growing with them. Check out these seven best practices.
Technology moves so fast that it's always hazardous to make predictions about it, but here's one that seems a surefire bet: problems with internet and digital security are only going to get worse over the next decade.
There are several reasons to feel confident about this prediction. Automation and the Internet of Things will only increase the attack surface available to hackers and malicious actors. Digital crime is becoming more organized, and is sometimes state-sponsored (as the Equifax hack, attributed to the Chinese military, made evident), meaning that these criminals have access to vastly more resources than in the past.
There are now entire ecosystems of stolen data online, data that can be used in a variety of criminal activities. And some purely technical advances in software development, specifically cloud-based microservices, container orchestration, and serverless functions, while capable of being more secure than their technical predecessors, depend on precise configuration management and levels of expertise that are missing in many organizations; misconfigurations will likely multiply the number of threat vectors in the near future.
The good news for business owners and managers is that the threat landscape is well-defined, and by properly deploying and maintaining many of the new software architectures and systems, you have many ways of limiting your exposure to hackers and malicious actors. Here are seven ways to do just that.
1: RBAC Everything
Almost every threat your digital systems will face will come through a human being. Even sophisticated users of technology fall victim to phishing scams or use insecure passwords. So the best way to limit your company’s exposure to digital threats is to limit access to the vital components of your core systems. That’s where Role-based Access Control (RBAC) comes in.
RBAC is the practice of configuring your systems so that different members of your organization have different and limited access to your key systems. Even before the advent of cloud technology, most businesses did some version of this, by limiting employee access to key financial or human resource accounts.
RBAC is the same idea, but at the configuration level of your key technical systems. Almost all servers, networking hardware, virtual machines, containers, orchestrators, serverless functions, build systems and data stores have a version of RBAC available, designed to limit access to the key components, such as root access, of those systems.
But a lot of organizations make several mistakes with RBAC. First, they often don't require high levels of access confirmation for their system administrators. System admin and root-level access should always be strictly limited, and access at that level should require multi-factor authentication in order to be secure. Second, business owners often do a poor job of scrubbing former employees from their roles in the system; even absent malicious intent, an ex-employee whose accounts and passwords are never removed or rotated can leave you vulnerable. Once an employee leaves, you should shut down their access to your systems immediately.
Third, many organizations will RBAC one or two components of their system and leave the rest open. This is a particular issue for systems that use microservices, orchestrators, and serverless functions. These systems often deploy virtual machines and cloud-based computing resources with default configurations that allow open access; you have to deliberately configure RBAC on them, or the systems are insecure. And fourth, many organizations decide that the process of authenticating users is so onerous that they use a single system to do so across all their access layers, thus simplifying a hacker's path into their systems.
Your system configurations should all follow the Principle of Least Privilege: grant each user access to only the specific components of the system they need, no more, no less. And you should configure each layer of your system separately, and administer them independently.
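The Principle of Least Privilege can be sketched in a few lines. The role names and permission strings below are illustrative, not taken from any real platform; the point is the deny-by-default lookup:

```python
# Minimal sketch of role-based access control with least privilege.
# Role names and permissions here are illustrative, not from any real system.

ROLE_PERMISSIONS = {
    "developer": {"read:repo", "write:repo"},
    "operator":  {"read:logs", "restart:service"},
    "admin":     {"read:repo", "write:repo", "read:logs",
                  "restart:service", "manage:users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly lists the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A developer can push code but cannot manage user accounts.
assert is_allowed("developer", "write:repo")
assert not is_allowed("developer", "manage:users")
# An unknown role gets nothing by default: deny by default.
assert not is_allowed("contractor", "read:repo")
```

Real systems (Kubernetes RBAC, cloud IAM policies) express the same idea declaratively, but the design choice is identical: access is granted only when explicitly listed, never by default.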
2: Secure Your Passwords and Secrets
Along with RBAC, you'll want to secure your passwords and secrets. In today's systems, data connections, APIs, containers, VMs, and servers all have their own passwords, secrets, encryption keys, and credentials, in addition to those held by human beings, which makes this process even more complex. Thankfully, almost all the cloud providers offer secure data stores in which to deposit secrets and passwords; additionally, there are companies such as HashiCorp whose business models are built around the secure storage of your credentials, passwords, and secrets.
Whatever system you use to store your passwords and secrets, you'll want to ensure they are kept properly, isolated from the main application data flow, and outside your code repositories. And you will want to ensure that the developers and testers on your team are not storing secrets directly in code or in an architectural layer exposed to the public, in any of your environments. People are notoriously bad storers of passwords and secrets.
Many developers, in particular, will find themselves short on deadlines and drop an API secret directly into their code, thinking they will come back and store it properly, and then forget. You’ll also need to check your build systems and your test harnesses, so that your systems don’t automatically unpack credentials and expose them as part of a validation review; likewise review your logging and monitoring to ensure that credentials and secrets aren’t exposed in the stack trace. Cycle your passwords, secrets and credentials routinely. And you will want to run audits on your systems periodically to see if your passwords are stored properly, and updated.
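One sketch of the right habit: code looks up the secret at runtime instead of embedding it. The environment variable below stands in for a call to a dedicated secrets store; the variable name is hypothetical:

```python
import os

def get_secret(name: str) -> str:
    """Read a secret from the environment rather than from source code.

    In production you would typically fetch from a dedicated secrets
    store (a cloud provider's secrets manager, or HashiCorp Vault);
    the environment lookup here stands in for that call.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value

# The secret is injected at deploy time, never committed to the repo.
os.environ["PAYMENTS_API_KEY"] = "example-value-for-testing"
assert get_secret("PAYMENTS_API_KEY") == "example-value-for-testing"
```

Failing loudly when a secret is missing, rather than falling back to a hardcoded default, is part of the practice: a default in code is just another secret in your repository.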
3: Update and Maintain Your Systems
One of the simplest avenues for an attack on your system is to scan it, find an outdated or vulnerable component, and then use that component as the vector for an attack. This is what happened in the famous Equifax data breach of 2017, when the credit agency left itself vulnerable by failing to patch its web application framework, Apache Struts. Roughly half the population of the United States had their records exfiltrated by the Chinese military.
The simplest remedy for a situation like this is to patch and update your stack, from server to frontend, with particular attention paid to versions of orchestrators, containers, and serverless functions. In practice, however, this is often quite difficult, because of the variety of software you might use and the way each version interacts with other components of your system; more than a few development teams have been challenged by legacy systems with outdated and unsupported code frameworks and service versions. The more complex and diverse your system, the more difficult it will be to manage these updates and patches. In theory, infrastructure-as-code provisioning and containerized microservices make this process easier; in practice, misconfiguration seems to be an increasing problem, and a vector for exploits.
Managing upgrades, patches, and certificate expirations should be part of any organization's architectural planning. Managers should be wary of bolting new services and frameworks on top of their original systems if they don't have the resources and expertise to manage them. It's far better to have a slightly outmoded monolithic application that a small team can maintain than a large microservice-driven application beyond your team's ability to maintain. You should always be refactoring and building better, but also building within your organization's capacity to manage and maintain.
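A version-policy check is one place to start. The sketch below compares a (hypothetical) inventory of installed components against minimum patched versions; real tooling such as dependency scanners does this against vulnerability databases, but the core comparison looks like this:

```python
def parse_version(v: str) -> tuple:
    """Convert a dotted version string like '2.3.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def outdated(installed: dict, policy: dict) -> list:
    """Return the components whose installed version is below the policy minimum."""
    return [name for name, minimum in policy.items()
            if parse_version(installed.get(name, "0")) < parse_version(minimum)]

# Hypothetical inventory: the web framework is behind the patched minimum.
installed = {"web-framework": "2.3.0", "tls-library": "1.1.1"}
policy    = {"web-framework": "2.3.32", "tls-library": "1.1.1"}
assert outdated(installed, policy) == ["web-framework"]
```

Running a check like this in your build pipeline, and failing the build when a component falls below the policy minimum, turns patching from a periodic scramble into a routine gate.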
4: Validate and Authenticate Your Data Transmissions
One of the easiest ways to hack sites and systems is to inject malicious code into a system, and this usually occurs when the transmission of data is not locked down, validated, or sanitized. It's known as injection, and it is number one in the Open Web Application Security Project (OWASP) top ten list of security vulnerabilities. It can happen anytime an input source, for example a web form like a password input box, sends data back to a system to execute a behavior such as login, to confirm something such as an authentic user, or to store that data in a persistent source like a SQL database.
If the software that interprets the input value is not strict about the types of values received from the source, a hacker can enter malicious values and gain access to software and systems that would otherwise be closed to them. Because this is one of the older vulnerabilities in tech, there are many means of offsetting the risk, such as escaping special characters, running a pattern check against the inputs, or implementing a database firewall. And, thankfully, there are many automated systems to scan your systems and detect injection vulnerabilities. Your development teams should be running their code through some form of static application security testing (SAST) or dynamic application security testing (DAST), but the more complex your system is, the more vulnerabilities are likely to be exposed; systems with a lot of testing and development environments often leave themselves exposed because those environments aren't scanned.
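The standard defense against SQL injection is the parameterized query: the input travels as data, never as part of the SQL string. A minimal sketch with Python's built-in sqlite3 module (table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

def find_user(conn, name: str):
    # The ? placeholder keeps the input as data; it is never spliced
    # into the SQL string, so injection payloads are inert.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

assert find_user(conn, "alice") == [("alice",)]
# A classic injection payload matches nothing instead of dumping the table.
assert find_user(conn, "' OR '1'='1") == []
```

Had the query been built with string concatenation, the second call would have returned every row; with the placeholder, the payload is just an oddly named user that doesn't exist.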
Because many modern, microservice architectures pass a lot of data via APIs, many organizations will introduce some form of API management designed to secure the communications between these data endpoints, which usually consists of encrypting data calls, whitelisting endpoints, and securing your servers and networks. Unless your system is extremely simple, it’s a lot of work to maintain on your own, and you will want to use some service to help manage these calls and deployments.
5: Isolate and Segregate Components
Segregation of Duties is a risk mitigation practice in large organizations in which responsibility for crucial processes is divided among individuals, decreasing the risk that any single individual's misstep or malicious act will affect the entire company. Information technology companies historically incorporated this approach by separating skills and responsibilities in the Software Development Lifecycle (SDLC), which sometimes conflicts with modern agile practice, which emphasizes developer ownership of projects and faster release cycles. Security best practice here is to break a modern, continuous development process into a set of qualitative tests at each step of the release process, and to provide both automated and procedural evaluation of code.
The key is to ensure that all your development environments are hardened against exploits and fitted with both static and dynamic code analysis test suites. Dev and test environments, which are usually isolated from the public, have tended to be less secure than production environments; but because of the common use of public code libraries and open-source services, these environments also need to be locked down.
Microservice architecture is an analogous design pattern that has found increasing favor over the last five years, partly because it architecturally segregates services, and partly because microservices enable faster, distributed software deployments and testing and are generally more fault-tolerant than monolithic systems. But because they tend to introduce complexity to a software system, they also require careful configuration and maintenance, and if either is neglected, a microservice architecture can become vulnerable.
This is especially a problem with shared, multi-tenant environments and systems using orchestrators and containers. In 2018, for example, Tesla left a Kubernetes administrative console exposed without a password, revealing the credentials for its AWS account. Because containers and orchestrators simplify the construction and maintenance of microservice-based applications and services, they are becoming increasingly common, as are exploits against them.
The good news is that there are many emerging scans and tests specific to containers and orchestrators that will identify many of the misconfigurations and vulnerabilities in your system. One of the best places to start is https://vulnerablecontainers.org/, which is cross-referenced to vulnerabilities tracked by the National Institute of Standards and Technology (NIST).
6: Fault Tolerance
Fault tolerance is the principle that if any one part of a given system fails, it doesn't take the entire system down with it; this idea is complementary to a componentized microservice architecture, where the application or service continues to run even if one part of it is compromised. Fault tolerance also means you have backup systems and databases in case your primary system fails or is compromised; for security purposes, these backups should be effectively isolated from your primary system to ensure that they cannot be compromised by the same attack.
Fault tolerance also means anticipating how your system will respond to an unanticipated request or a failure, because hackers often use the response to a failure as a means to learn how a system is constructed and find another way to attack it. A SQL injection attack might fail, for example, but the error response may contain the query logic, and possibly even passwords, configuration files, or other sensitive information used within the query. An error response might also expose the server software version of your host; now the attacker can search for vulnerabilities against that version of the software. You'll want to ensure that your error responses can't be used to interrogate your core systems for file and directory structure, or for other details that can be used to map your architecture and exploit part of your system.
You should also game out the access and permission responses of serverless functions, orchestrators and containers. Any resources you make available through these calls need to be rate-limited and defined. A common exploit of hackers is to demand unlimited resources from your system and then use that request — and the failure of that request — as an opportunity to escalate their privileges within your system, and gain control of it. Make sure you don’t expose information in your responses, such as if a username is valid or not, which can be used to simplify the process of hacking your permissions.
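Rate limiting can be as simple as a token bucket per caller. The class below is an illustrative sketch, not any particular library's API; the rate and capacity numbers are arbitrary:

```python
import time

class TokenBucket:
    """Simple rate limiter: `rate` requests per second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]
# The burst of three is served; the flood of extra requests is refused.
assert results[:4] == [True, True, True, False]
```

In production you would keep one bucket per caller identity (API key, token, source address) so that one client exhausting its budget cannot starve the others or probe your failure responses at will.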
In general, you should avoid verbose error responses, and find the right balance between messages that are too cryptic and not cryptic enough: each response should provide enough information to diagnose the error, but not enough to expose your system.
7: Log, Monitor and Notify
Finally, your best bet to keep your systems and applications safe is to log and monitor your systems and applications. You don’t want to do this randomly, or simply in response to your development needs. You’ll want to approach this architecturally and log specific types of information for specific uses in your system. Application, network, and infrastructure monitoring are one part of the architecture; event and audit monitoring another. Much of the data produced by your test environments doesn’t need to be persisted, and even in your production environments, you’ll want to persist data for limited periods, and for specific uses.
You’ll also want to bear in mind that error, audit and event logs can persist data that can be used forensically and legally — and you need to store and delete these logs with legal compliance in mind, especially since the advent of the EU’s GDPR and California’s CCPA laws, or if your data contains objects that are subject to special legal requirements, such as HIPAA, GLBA or SOX.
Notifications from your event and audit logs in particular are important. Your logging and monitoring system should be set up outside your main application system or platform, and access to your logs and dashboards should be tightly controlled via RBAC. It's particularly important that you set alerts for your system administrators: anytime a change to access privileges is made among your admins, all of your admins should be notified. Increasingly, exploits against systems begin with privileged users, so you want to monitor them as well. And you need to audit these systems regularly.
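An alert rule over the audit log can be very small. The event shape and action names below are illustrative; real audit logs vary by platform, but the filter is the same idea:

```python
# Sketch: scan audit-log events and flag privilege changes to admin roles.
# Event fields and action names here are illustrative, not from a real platform.

ALERT_ACTIONS = {"role.granted", "role.revoked"}

def privilege_alerts(events: list) -> list:
    """Return the events every administrator should be notified about."""
    return [e for e in events
            if e["action"] in ALERT_ACTIONS and e["target_role"] == "admin"]

events = [
    {"action": "login",        "actor": "alice", "target_role": None},
    {"action": "role.granted", "actor": "bob",   "target_role": "admin"},
    {"action": "role.granted", "actor": "bob",   "target_role": "viewer"},
]
alerts = privilege_alerts(events)
assert len(alerts) == 1 and alerts[0]["actor"] == "bob"
```

In a real deployment this filter would feed a notification channel rather than return a list, and it would run inside the isolated logging system, where a compromised application account cannot silence it.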
You should also set notifications around the system components of your architecture to ensure that no single component is consuming excessive resources, a signal that an exploit or exfiltration may be underway.
Your logging system will need the same high availability, distributed processing, and fault tolerance that you have in your regular stack, so you should treat it as a separate product deliverable and build it in parallel to your main system, with all the same security restrictions and configurations.
Because of the potential complexity of these monitoring systems, many business and technical owners will choose to administer them through a Security Information and Event Management (SIEM) system that makes it easier to deploy and maintain such a system. There are a variety of them available on the market.
Unfortunately, in today's software environment roughly 40% of traffic on the web is bots, many of them seeking vulnerabilities and entry points to exploitable systems. And the increasing complexity and distribution of sensitive data means there is more data to be stolen and sold than ever before. This makes data security one of the most important areas of focus for any technology or online business, and one that requires constant attention. Following these seven steps toward greater security can go a long way to diminishing your vulnerability, and will let you focus on your core mission and main product.
Opinions expressed by DZone contributors are their own.