# What Is a Digital Signature?

### Digital signatures are used throughout everyday applications and technologies to ensure the integrity and security of systems. See how!

Digital signatures are at the heart of many of the applications and technologies we use on a daily basis, including Bitcoin, Secure Sockets Layer (SSL), and Transport Layer Security (TLS). Not only does a digital signature authenticate an originator, ensuring that a message originates from the party that claims to have sent it (similar to a handwritten signature), but it also provides a greater degree of security: integrity. Using a digital signature, we can ensure that the contents of a message have not changed during transmission. This also has the corollary effect of ensuring non-repudiation, whereby an originator cannot claim that a signed message did not originate from him or her (i.e. that the signature is not his or hers).

Throughout this article, we will assume a basic understanding of security, with a goal of sending a given message between two parties (Alice and Bob) over an insecure communication channel (one in which some agent, Eve, is looking to tamper with transmissions). To drive our discussion, we will first look at the two main components in digital signatures: (1) cryptographic hashing and (2) public-key encryption. With a solid understanding of the prerequisites, we will then thoroughly define the process of generating and verifying a digital signature.

## Cryptographic Hashing

In general, a **hash** is a mathematical function that maps an arbitrarily-sized input into a fixed-sized output. Intuitively, that means that we can define some function, *d = ƒ(m)*, where the input to the function, *m*, is called the **message** and the output to the function, *d*, is called the **digest**. Given some arbitrarily-sized message, we know that our digest will always be of a fixed length. For example, we can create a function, *d = ƒ _{32}(m)*, that always results in a 32-bit digest: Regardless of the size of the input message (such as 1 bit, 64 bits, 256 bits, 5,000 bits, etc.), this hash function will always produce a 32-bit digest. This principle is illustrated in the figure below.
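To make this concrete, a quick sketch using Python's standard `hashlib` module shows that SHA-256 produces a 256-bit digest regardless of how large the input message is:

```python
import hashlib

# SHA-256 always produces a 256-bit (32-byte) digest, regardless of input size
for message in [b"a", b"hello world", b"x" * 5000]:
    digest = hashlib.sha256(message).hexdigest()
    print(len(digest) * 4, "bits:", digest)  # 256 bits every time
```

Each hex digit encodes 4 bits, so the 64-character hex string corresponds to a 256-bit digest in every case.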

### Cryptographic Constraints

While general hash functions are free to transform the input message into the fixed-sized output digest in any manner they see fit, there are a few constraints imposed on **cryptographic hash functions** used in secure environments. Namely, cryptographic hash functions must have the following properties:

- **Deterministic**: for a given message, the hash function always produces the same digest.
- **Quick**: for any given message, computing the digest should complete in a reasonable time (there are cases, such as password hashing, where computing the hash should deliberately take an extended period of time to make brute-force trials of all possible messages infeasible, but in the general case, the digest should be computed quickly).
- **Irreversible**: given some digest, it is infeasible to compute the message that was used to generate the digest without trying all possible messages.
- **Disperse**: given two messages, where the second message is generated by changing a small portion of the first message (e.g. adding a single character), the digests of the two messages should not have any perceivable correlation.
- **Collision resistant**: it is infeasible to find two messages that hash to the same digest.

The constraints required for a cryptographic hash function place some realistic bounds on our theoretical hash function. In the general case, we could devise a hash function, albeit an impractical one, that maps all possible messages to the value 1, thus allowing for an infinite message set. In the case of a cryptographic hash function, the digest must be deterministic and collision resistant. Therefore, when the same message is fed into the hash function, the same digest is always produced, no matter how many times the process is repeated. Furthermore, digests must be unique in the practical sense that, given two different messages, it should be infeasible for them to produce the same digest.

Since our digest length is limited, there is no feasible way to produce a completely collision-resistant cryptographic hash function. Instead, we make the digest length sufficiently long that a collision is highly unlikely. For example, if we define our digest length to be 256 bits, we have 2^256, or approximately 1.16 × 10^77, possible digests. Thus, assuming a uniform distribution, there is only an 8.64 × 10^-76 % chance that two messages will result in the same digest.
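These figures are easy to reproduce (under the same uniform-distribution assumption) with a couple of lines of Python, since Python integers have arbitrary precision:

```python
# Number of possible 256-bit digests, and the chance (as a percentage)
# that two random messages share one, assuming a uniform distribution
digests = 2 ** 256
print(f"{digests:.2e} possible digests")          # ≈ 1.16e+77
print(f"{100 / digests:.2e} % collision chance")  # ≈ 8.64e-76
```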

We must also ensure that given some digest, it is infeasible to compute the original message that created the digest without computing all possible digests. For example, given some digest, it must be infeasible to run the cryptographic hash in reverse and obtain the message that was used to generate the digest. In mathematical terms, this property makes a cryptographic hash a one-way function, where running the function in the forward direction is computationally easy (satisfying our requirement for quick generation), while attempting to reverse the result of the function is computationally hard. In practical terms, reversing a cryptographic hash requires running every possible input message through the function until a matching digest is found (since the function is deterministic, the message that produced the matching digest is the message that maps to the digest of interest).
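A brute-force preimage search can be sketched in a few lines of Python. The tiny search space here is purely for illustration; against a real 256-bit digest, this approach is hopeless:

```python
import hashlib

# The only general way to invert a cryptographic hash is brute force:
# hash candidate messages until one matches the target digest.
target = hashlib.sha256(b"42").hexdigest()

preimage = None
for candidate in range(1000):
    m = str(candidate).encode()
    if hashlib.sha256(m).hexdigest() == target:
        preimage = m
        break

print(preimage)  # b'42' -- found only because the message space here is tiny
```

With 1,000 candidates this finishes instantly; with 2^256 possible digests, the same loop would outlast the universe.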

Lastly, the function must produce digests that are sufficiently dispersed, where a small change in a message results in a largely different digest. For example, if we hashed the contents of a book, adding a period to the end of the book should result in a digest that is wholly unrecognizable from the digest generated from the original contents of the book. Put simply, even the most minuscule changes to a message should result in a large change to the digest.
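This avalanche effect is easy to observe with Python's `hashlib`: adding a single period produces an entirely different SHA-256 digest:

```python
import hashlib

d1 = hashlib.sha256(b"The quick brown fox").hexdigest()
d2 = hashlib.sha256(b"The quick brown fox.").hexdigest()  # one added period
print(d1)
print(d2)

# Count hex digits that agree position-by-position; for unrelated digests
# we expect only about 1 in 16 to match by chance
matching = sum(a == b for a, b in zip(d1, d2))
print(f"{matching} of 64 hex digits match")
```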

### Public-Key Encryption

When looking to secure information, encryption is commonly the appropriate technique, but encryption can provide much more than simple obfuscation. Broadly, encryption is the process of taking some message (called **plaintext**), along with some **key**, and producing encrypted **ciphertext** that cannot be decrypted unless the ciphertext and a key (possibly the same key or a different one) are fed into a decryptor.

In general, there are two types of encryption: (1) **symmetric encryption**, where duplicate keys are given to the sender and receiver, and (2) **asymmetric encryption** (or **public-key encryption**), where a **private key** is maintained by the sender and a **public key** is given to the receiver. In the case of symmetric encryption, both the sender and receiver have the same key. This is analogous to writing a message, placing it in a box, and placing a lock on the box. We then provide the receiver with a duplicate key to open the lock. Unless a receiver has a duplicate of the key that locked the box, the box cannot be opened, and therefore the locked message cannot be read. This technique is illustrated in the figure below.
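As a rough sketch of the shared-key idea, here is a toy one-time pad in Python that XORs the message with a random key. This is illustrative only; real systems use a vetted cipher such as AES:

```python
import os

# Toy symmetric scheme: a one-time pad (XOR with a shared random key).
# Illustrative only -- real systems use a vetted cipher such as AES.
plaintext = b"Meet me at noon."
key = os.urandom(len(plaintext))  # the duplicate key both parties hold

ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))  # sender locks the box
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))  # receiver unlocks it

assert decrypted == plaintext  # only the matching key opens the box
```

Anyone without `key` sees only random-looking bytes; anyone with it recovers the plaintext exactly.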

The case of public-key encryption is more complex, but it also allows us to do much more. With public-key encryption, we generate two keys: (1) a private key that is maintained by the sender (and never shared with anyone other than the sender) and (2) a public key that is given to the receiver. These two keys share an important property: they are complementary. Thus, plaintext encrypted with the private key can only be decrypted using the public key (the curious reader can look further into modular arithmetic to understand how this is accomplished). It is important to note that plaintext encrypted with the private key *cannot* be decrypted with the private key, and likewise, plaintext encrypted with the public key *cannot* be decrypted with the public key. As a rule, only plaintext encrypted with the private key can be decrypted with the public key. This technique is illustrated in the figure below.

Since the private key is always kept secret by the sender, we know that if we can successfully decrypt some known plaintext using a sender's public key, the plaintext originated from the sender. This is one of the most fundamental properties of public-key encryption, and its importance cannot be overstated. Since the private and public keys for a sender are complementary, if we can successfully decrypt some ciphertext with a public key, the private key must have encrypted it. Furthermore, since the private key is only held by the sender, the sender must be the one who sent the message. The trick in this technique is defining criteria to determine whether some ciphertext was successfully decrypted: to do so, we must know what the original plaintext *should* be. In the case of digital signatures, we make a clever determination about the desired plaintext.

## Digital Signature

In the case of digital signatures, we have two main goals: (1) ensure that the original message was not changed and (2) ensure that the message was actually sent by the sender. To accomplish this, we follow a two-step process, combining our knowledge of cryptographic hashing and public-key encryption. First, we cryptographically hash our message, creating a digest associated with the message. If the message changes, the digest will change as well.

Therefore, we can send the message accompanied by its digest, instructing the receiver to re-hash the message and compare the digest generated by the sender with the digest generated by the receiver. If the digests match, we know that the contents of the message have not changed. There is one major flaw in this scheme, however: What if some malicious intermediary, Eve, both changes the message and recomputes the digest, forwarding the altered message and altered digest to the receiver? In this case, when the receiver re-hashes the message, he or she will generate a digest that matches the digest generated by Eve. From the perspective of the receiver, the message appears correct, since the digest he or she generated matches the digest received from Eve, but in fact, the message was altered during transmission. This alteration is illustrated in the figure below.

In order to ensure that the contents of the message were not changed, we must lock the digest somehow. If we were to use symmetric encryption, we could encrypt the digest, requiring that the receiver decrypt the digest and then recompute the hash. Since Eve does not have the key to encrypt and decrypt the digest, the most she can do is alter the plaintext message. Once the receiver computes his or her own digest of the sent message, he or she will see that the computed digest and the digest that accompanies the message do not match, and thus that the message must have been altered in transmission. Likewise, if Eve attempts to change the encrypted digest, the receiver's decryption will result in a corrupted digest (the corrupted digest and the recomputed digest will not match), and thus the receiver will know that the message was altered in transmission.

While this scheme will suffice to secure the digest, it leaves out an important property: ensuring that the message was actually sent by the sender. If instead, we use public-key encryption, we can encrypt the digest with the private key of the sender and require that the receiver use the public key of the sender to decrypt the digest. This provides two assurances: (1) the digest has not been tampered with (or if it was, it will result in a corrupted digest after decryption) and (2) the digest was encrypted by the sender since the receiver was successfully able to decrypt the digest.

As stated before, in order to determine if the ciphertext was successfully decrypted, we need a known plaintext value to which we can compare our decrypted ciphertext. In the case of digital signatures, the known value is the expected digest of the message. If we do not obtain the expected digest, we know that the message was either changed, the encrypted digest was tampered with, or the message was not sent by the claimed sender (i.e. the digest was not encrypted using the private key of the sender). In any of these cases, we know that the message is invalid.

Coming full circle, we end up with a few steps that must be performed by the sender and receiver to ensure the validity of a digital signature. Namely, the sender must:

- Compute a digest, D_{S}, for the message using a cryptographic hash.
- Encrypt the digest using his or her private key.
- Transmit the message, along with the encrypted digest.

Next, the receiver must:

- Decrypt the digest using the public key of the sender, resulting in the digest accompanying the message from the receiver's perspective, D_{R}.
- Compute the digest, D_{C}, for the message.
- Compare the digests D_{R} and D_{C}: if the two match, we know that the message was received from the sender unaltered and did indeed originate from the sender; if the two digests do not match, we know that the message is invalid.

This process is depicted in the following figure.
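The whole round trip can also be sketched in Python, using SHA-256 for the digest and textbook RSA with deliberately tiny, insecure parameters (real keys are 2048+ bits, and real signatures use a padding scheme such as RSASSA-PSS):

```python
import hashlib

# Textbook RSA with toy parameters -- for illustration only.
p, q = 61, 53
n = p * q    # public modulus (3233)
e = 17       # public exponent (shared with the receiver)
d = 2753     # private exponent (kept secret by the sender)

def digest_int(message: bytes) -> int:
    # Hash the message, reduced mod n so it fits the toy modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Sender: encrypt the digest D_S with the private key
    return pow(digest_int(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Receiver: decrypt the signature with the public key (giving D_R)
    # and compare it to a freshly computed digest (D_C)
    return pow(signature, e, n) == digest_int(message)

message = b"Hello, Bob!"
signature = sign(message)

print(verify(message, signature))            # True: unaltered and authentic
print(verify(message, (signature + 1) % n))  # False: tampered signature
```

Because Eve does not hold `d`, any signature she forges or alters decrypts to a digest that fails to match the receiver's recomputed one.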

Using this technique, we can ensure (with a high degree of confidence) that any message transmitted between Alice and Bob cannot be tampered with by Eve without Bob knowing.

## Conclusion

Given the security-conscious world in which we live, encryption, cryptography, and other assurance techniques are becoming a prerequisite for software engineering. In particular, digital signatures are used throughout everyday applications and technologies to ensure the integrity and security of systems, including the blockchain used by Bitcoin, the underlying security layers utilized by Hypertext Transfer Protocol Secure (HTTPS), and the certificates used by browsers. Although some of these technologies are very complex, understanding basic principles, such as cryptographic hashing, public-key encryption, and digital signatures, breaks these intricate systems into composites of simple ideas and helps to ensure that we build in security at every level of our systems.

Opinions expressed by DZone contributors are their own.
