7 Must-Know Object-Oriented Software Patterns (Part Two)
This is the second and final part in our exploration of must-know OOP patterns and covers the composite bridge pattern, iterator pattern, and lock design pattern.
Find part one here, covering the extension, singleton, exception shielding, and object pool patterns.
Object-oriented design is a fundamental part of modern software engineering that all developers need to understand. Software design patterns serve as universally applicable solutions to common design problems.
However, if you don’t have much experience with these object-oriented patterns, you can fall into suboptimal, ad-hoc solutions that violate key software engineering principles like code reusability and separation of concerns. On the other hand, misuse and overuse can result in a tangled, overly complex codebase that’s hard to understand and navigate.
In this article, we’ll explore our final three must-know object-oriented programming patterns (composite bridge, iterator, and lock) and show how to use them in your software development. Using examples across several programming languages, we’ll show how to apply each pattern effectively, compare it to ad hoc solutions, and demonstrate some common antipatterns that result from misuse or overuse.
Composite Bridge
The Composite Bridge pattern is a combination of two object-oriented design patterns (Composite and Bridge), and each has distinct benefits in designing flexible, decoupled, and reusable code. The Bridge pattern separates an abstraction from its implementation, allowing both to evolve independently. This is useful when an abstraction is going to be implemented in several distinct ways, and you want to keep your codebase adaptable to future changes.
The Composite pattern, on the other hand, allows you to treat a group of objects as a single instance of the object itself, simplifying the interaction with collections of objects. This pattern is particularly useful when you want to apply the same operations over a group of similar kinds of objects using the same piece of code.
However, at times, simpler constructs like basic inheritance might be a better choice, and introducing interfaces isn’t always the right call. If you only need to work with a single object, calling its method directly is a more straightforward and understandable solution.
Without
public void Log(Exception exception) {
    raygunClient.Send(exception);
    fileLogger.WriteException(exception);
    …
    dbLogger.InsertException(exception);
}
The code snippet (in C#) above represents a method for logging exceptions that utilizes multiple logging systems: Raygun, file logging, and a database logger. However, it directly calls each logging mechanism inside the Log function. This approach is not only monolithic but also rigid and tightly coupled. It means every time a new logging mechanism is added or removed, the Log method needs to be altered.
In this setup, the Log method must be made directly aware of all the different logging mechanisms. So, the Log method and the individual logging systems are tightly coupled. If you wanted to add another logger, you’d need to modify the Log method to incorporate it. Similarly, if a logging system needed to be removed or replaced, you’d have to alter the Log method. This is inflexible, makes the system harder to maintain, and goes against the design principle of separation of concerns.
Plus, this direct method calling approach doesn’t promote code reusability. If a different part of your application needed to use the same group of loggers, you would have to duplicate this code. This can lead to issues with code maintenance and consistency across your application.
With
The above code lacks the flexibility and reusability of decoupled design patterns like the Composite Bridge. Instead, we introduce an ILogger interface which exposes a Log method. This interface acts as an abstraction for our logging system, following the Bridge design pattern. Any class that implements this interface promises to provide a Log function, effectively creating a bridge between the generic logging operation (Log) and its specific implementation (_raygunClient.Send in RaygunLogger).
Then, we have a RaygunLogger class that implements the ILogger interface, providing an actual implementation for logging exceptions. This class encapsulates the logging details for the Raygun system, making the concrete implementation invisible to other parts of the system. We can also create other specific loggers, like a FileLogger or DbLogger, each implementing the ILogger interface and providing their unique logging implementations.
The ApplicationLogger class uses the Composite design pattern to treat a group of ILogger objects (_loggers) as a single ILogger. This means we can add as many loggers as we need to the ApplicationLogger, and the operation will be delegated to each logger automatically. The ApplicationLogger doesn’t need to know the specifics of each, just that they will handle the Log method.
This arrangement is highly flexible. To add, remove, or replace a logging system, you just need to manipulate the _loggers list in the ApplicationLogger, with no need to alter any other code. The Bridge pattern ensures each logger can evolve independently, while the Composite pattern lets us handle multiple loggers transparently with a single piece of code. This decoupled and extensible design makes your logging system much easier to maintain and evolve over time.
public interface ILogger
{
    void Log(Exception exception);
}

public class RaygunLogger : ILogger /*Bridge pattern*/
{
    private RaygunClient _raygunClient;

    public RaygunLogger(string apiKey)
    {
        _raygunClient = new RaygunClient(apiKey);
    }

    public void Log(Exception exception)
    {
        _raygunClient.Send(exception); /*Bridges Log to Send*/
    }
}

public class ApplicationLogger /*Composite pattern*/
{
    private List<ILogger> _loggers; /*Store different types of loggers*/

    public ApplicationLogger() {
        _loggers = new List<ILogger>();
    }

    public void AddLogger(ILogger logger) {
        _loggers.Add(logger);
    }

    public void Log(Exception exception) {
        foreach (var logger in _loggers)
        {
            logger.Log(exception); /*Send to all different loggers*/
        }
    }
}
Antipattern
The flip side is that these patterns tend to be abused, and developers often keep introducing unnecessary abstractions. We don’t need this many layers just to log to the console, assuming that’s the only thing required in the following application:
public interface IWriter
{
    void Write(string message);
}

public class ConsoleWriter : IWriter
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }
}

public class CompositeWriter
{
    private List<IWriter> _writers;

    public CompositeWriter()
    {
        _writers = new List<IWriter>();
    }

    public void AddWriter(IWriter writer)
    {
        _writers.Add(writer);
    }

    public void Write(string message)
    {
        foreach (var writer in _writers)
        {
            writer.Write(message);
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        CompositeWriter writer = new CompositeWriter();
        writer.AddWriter(new ConsoleWriter());
        writer.Write("Hello, World!");
    }
}
Instead
Rather, just call the method directly in such cases:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello, World!");
    }
}
Iterator
The iterator pattern offers a consistent way to traverse the elements of a collection or an aggregate object without exposing the internal details of the collection itself. This pattern is often used in conjunction with the Composite pattern to traverse a complex tree-like structure. It allows processing items in a sequence without needing to understand or handle the complexities of the collection’s underlying data structure. This can lead to cleaner and more readable code.
However, the iterator pattern comes with caveats. In some cases, using an iterator can reveal too much about the underlying structure of the data, making it harder to change the data structure in the future without also changing the clients that use the iterator. This can limit the reusability of the code.
Furthermore, multi-threaded applications can face issues with the iterator pattern. If one thread is iterating through a collection while another thread modifies the collection, this can lead to inconsistent states or even exceptions. So, we have to carefully synchronize access to the collection to prevent such issues, often at the cost of performance.
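As a minimal sketch of that synchronization, the Python snippet below holds a lock for the entire traversal so a writer thread cannot modify the list mid-iteration. The names shared_data, consumer, and producer are purely illustrative and not taken from the examples that follow.

import threading

# One lock guards the shared collection for both readers and writers
shared_data = list(range(5))
data_lock = threading.Lock()

def consumer():
    with data_lock:  # hold the lock for the whole traversal
        for item in shared_data:
            print(item)

def producer():
    with data_lock:  # writers must take the same lock
        shared_data.append(99)

threads = [threading.Thread(target=consumer), threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

The trade-off is visible immediately: the writer is blocked for the full duration of the traversal, which is exactly the performance cost mentioned above.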
Without
The following Python code employs a traditional approach to iterate over the ‘index’ list, which holds the indices of ‘data’ list elements in the desired order. It then prints the ‘data’ elements according to these indices using a while loop. The implementation is straightforward but breaks encapsulation and decouples data that should be kept together, making it error-prone when reused or maintained.
data = ['a', 'b', 'c', 'd', 'e']
index = [3, 0, 4, 1, 2]

i = 0
while i < len(index):
    print(data[index[i]])
    i += 1
With
On the other hand, the following improved design leverages the iterator pattern to achieve the same goal in a more elegant and Pythonic way. Here, an IndexIterator class is defined, which takes the ‘data’ and ‘index’ lists as parameters in its constructor. It implements Python’s iterator protocol by providing __iter__() and __next__() methods.
The __iter__() method simply returns the instance itself, allowing the class to be used in for-each loops. The __next__() method retrieves the next item in the ‘index’ list, uses it to look up the corresponding item in the ‘data’ list, and then increments the current position. If the end of the ‘index’ list is reached, it raises the StopIteration exception, which signals the end of iteration to the for-each loop.
Finally, an instance of IndexIterator is created with ‘data’ and ‘index’ as parameters, and a for-each loop is used to iterate over the items. This makes the code cleaner and the iteration process more transparent, showcasing the power and utility of the iterator pattern.
class IndexIterator:
    def __init__(self, data, index):
        self.data = data
        self.index = index
        self.current = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.current < len(self.index):
            result = self.data[self.index[self.current]]
            self.current += 1
            return result
        else:
            raise StopIteration

data = ['a', 'b', 'c', 'd', 'e']
index = [3, 0, 4, 1, 2]

for item in IndexIterator(data, index):
    print(item)
Antipattern
The iterator can, of course, be misused. For example, the lack of true encapsulation in Python allows direct modifications of the ‘index’ list in the iterator after its creation. This compromises the state of the iterator because the ‘current’ pointer doesn’t get reset. As a result, the iterator’s behavior becomes unpredictable and inconsistent. The engineer might expect that after reversing the index list, the iterator would start from the beginning of the newly ordered list. However, due to the previously advanced ‘current’ pointer, it instead returns the second-to-last item of the original ordering rather than the first item of the reversed one.
iterator = IndexIterator(data, index)
# Display the first item
print(next(iterator))
# Misuse the iterator by changing the index list directly
# Remember, Python does not offer encapsulation with private fields
iterator.index.reverse()
# The behavior of the iterator has been compromised now; it will return the second-to-last item of the original order, not the first item in reverse
print(next(iterator))
Instead
The code below corrects this by encapsulating the reverse operation within the IndexIterator class. A reverse method is added that not only reverses the order of the ‘index’ list but also resets the ‘current’ pointer to the beginning of the list. This ensures the iterator’s state remains consistent after the reverse operation.
In the revised code, the developer creates an IndexIterator instance, retrieves the first item, reverses the ‘index’ list using the encapsulated reverse method, and then retrieves the next item. This time, the iterator works as expected, proving the advantage of the iterator pattern in preserving the iterator’s internal state and protecting it from unintended modifications.
class IndexIterator:
    …
    def reverse(self):
        self.index.reverse()
        self.current = 0

# Create an iterator object
iterator = IndexIterator(data, index)
print(next(iterator))

# The encapsulated method correctly modifies the state of the iterator
iterator.reverse()

# Now, indeed the first item in reverse is displayed
print(next(iterator))
Lock
The Lock design pattern is a crucial element in multi-threaded programming that helps maintain the integrity of shared resources across multiple threads. It serves as a gatekeeper, allowing only one thread at a time to access or modify a particular resource. When a thread acquires a lock on a resource, it effectively prevents other threads from accessing or modifying it until the lock is released. This exclusivity ensures that concurrent operations don’t lead to inconsistent or unpredictable states of the shared resource (commonly referred to as data races or race conditions).
However, improper use of the Lock design pattern can lead to a variety of problems, with deadlocks being one of the most notorious. Deadlocks occur when two or more threads indefinitely wait for each other to release a lock, effectively freezing the application. For example, if thread A holds a lock that thread B needs and thread B, in turn, holds a lock that thread A needs, neither thread can proceed, leading to a deadlock. So, it’s essential to design your locking strategies carefully.
To mitigate these risks, one common strategy is to implement try-locking with timeouts. In this approach, a thread will try to acquire a lock, and if unsuccessful, it will wait for a specified timeout period before retrying. This method prevents a thread from being indefinitely blocked if it can’t immediately acquire a lock.
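As a rough illustration of the idea, here is a minimal Python sketch using the standard threading module; the Ruby code later in this section shows an equivalent approach with try_lock, and the names resource_lock and update_shared_resource are hypothetical.

import threading

resource_lock = threading.Lock()

def update_shared_resource():
    # Try to acquire the lock, but give up after 5 seconds instead of
    # blocking forever (the timeout value is illustrative)
    if resource_lock.acquire(timeout=5):
        try:
            pass  # critical section: read and modify the shared resource here
        finally:
            resource_lock.release()
    else:
        # Could not get the lock in time: log it, retry later, or fail gracefully
        print("Could not acquire the lock within 5 seconds")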
Another strategy is to carefully order the acquisition and release of locks to prevent circular waiting. Despite the potential for these complexities, the Lock design pattern is a powerful tool for ensuring thread safety in concurrent programming, but it shouldn’t be overused.
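To make the ordering idea concrete, here is a minimal Python sketch with a hypothetical Product class holding its own lock. By always acquiring the two locks in ascending id order, two concurrent calls can never wait on each other in a cycle; the cyclical-lock antipattern later in this section shows what happens without such an ordering.

import threading

class Product:
    # Hypothetical product with an id and its own lock, for illustration only
    def __init__(self, product_id):
        self.id = product_id
        self.mutex = threading.Lock()

def compare_products(p1, p2):
    # Acquire locks in a fixed global order (ascending id) to avoid circular waiting;
    # assumes p1 and p2 are two distinct products
    first, second = sorted([p1, p2], key=lambda p: p.id)
    with first.mutex:
        with second.mutex:
            return p1.id == p2.id  # stand-in for the real comparison logic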
Without
In the Ruby on Rails application code below, we’re dealing with a user login system where users receive a bonus on their first login of the year. The grant_bonus method is used to check whether it’s the user’s first login this year and, if so, grants a bonus by updating their balance. However, this approach is susceptible to a race condition, known as a check-then-act scenario. If two requests for the same user occur simultaneously, they could both pass the first_login_this_year? check, leading to granting the bonus twice. We need a locking mechanism to ensure the atomicity of the grant_bonus operation.
# user.rb (User model)
class User < ApplicationRecord
  def first_login_this_year?
    last_login_at.nil? || last_login_at.year < Time.zone.now.year
  end

  def grant_bonus
    if first_login_this_year?
      update(last_login_at: Time.zone.now)
      bonus = 50
      update(balance: balance + bonus) # Add bonus to the user's balance
    else
      bonus = 0
    end
    bonus # Return the bonus amount so the controller can report it
  end
end

# sessions_controller.rb
class SessionsController < ApplicationController
  def login
    user = User.find_by(email: params[:email])
    if user && user.authenticate(params[:password])
      bonus = user.grant_bonus
      render json: { message: "Login successful! Bonus: $#{bonus}. New balance: $#{user.balance}" }
    else
      render json: { message: "Invalid credentials" }, status: :unauthorized
    end
  end
end
With
To remedy the race condition, the updated code employs a locking mechanism provided by ActiveRecord’s transaction method. It opens a database transaction, and within it, the reload(lock: true) line is used to fetch the latest user record from the database and lock it, ensuring that no other operations can modify it concurrently. If another request attempts to grant a bonus to the same user simultaneously, it will have to wait until the first transaction is complete, preventing the double bonus issue.
By encapsulating the check-then-act sequence in a transaction, we maintain the atomicity of the operation. The term ‘atomic’ here means that the operation will be executed as a single, unbroken unit without interference from other operations. If the transaction succeeds, the user’s last login date is updated, the bonus is added to their balance, and the updated balance is safely committed to the database. If the transaction fails at any point, none of the changes are applied, ensuring the data integrity.
# user.rb (User model)
class User < ApplicationRecord
  def first_login_this_year?
    last_login_at.nil? || last_login_at.year < Time.zone.now.year
  end

  def grant_bonus
    self.transaction do
      reload(lock: true)
      if first_login_this_year?
        update(last_login_at: Time.zone.now)
        bonus = 50
        update(balance: balance + bonus) # Add bonus to the user's balance
      else
        bonus = 0
      end
      bonus # Return the bonus amount from the transaction block
    end
  end
end
Antipattern 1: Cyclical Lock Allocation
A common lock antipattern and pitfall in multi-threading involves a cyclical lock allocation. The controller locks product1 and then product2. If two requests simultaneously attempt to compare product1 and product2, but in opposite orders, a deadlock may occur. The first request locks product1 and then tries to lock product2, which is locked by the second request. The second request, meanwhile, is waiting for product1 to be unlocked by the first request, resulting in a cyclic wait scenario where each request is waiting for the other to release a lock.
Rails.application.routes.draw do
  get '/product/:id1/other/:id2', to: 'products#compare'
end

class ProductsController < ApplicationController
  def compare
    product1 = Product.find(params[:id1])
    product1.mutex.lock()
    product2 = Product.find(params[:id2])
    product2.mutex.lock()

    # compare_product might throw an exception
    results = compare_product(product1, product2)

    product2.mutex.unlock()
    product1.mutex.unlock()

    render json: { message: results }
  end
end
Instead
This example demonstrates a better approach using try_lock, a non-blocking method for acquiring a lock. If the lock is unavailable, it will not block execution and instead returns false immediately. This can prevent deadlocks and provide an opportunity to handle the scenario when a lock can’t be acquired.
Even better, the revised example includes a timeout for acquiring the second lock. If the lock cannot be acquired within the specified timeout, the code reports a Timeout error and logs it using Raygun’s error tracking. This additional handling further safeguards against deadlocks by setting an upper limit on how long a thread will wait for a lock before it gives up.
Finally, in both lock acquisition scenarios, the code structure makes use of Ruby’s begin-ensure-end construct to ensure that once a lock is acquired, it will always be released, even if an exception occurs during the execution of the critical section. This is an essential part of using locks to avoid leaving resources locked indefinitely due to unexpected errors.
# Try acquiring lock for product1
if product1.mutex.try_lock
  begin
    # Successfully acquired lock for product1
    # Now, try acquiring lock for product2 with a timeout of 5 seconds
    if product2.mutex.try_lock(5)
      begin
        # Successfully acquired lock for both product1 and product2
        # Perform the critical section operations
      ensure
        product2.mutex.unlock
      end
    else
      # Failed to acquire lock for product2 within 5 seconds
      # Handle the timeout situation
      Raygun.track_exception(Timeout::Error.new('Lock timeout occurred on second product\'s lock'), custom_data: { product_ids: [product1.id, product2.id] })
    end
  ensure
    product1.mutex.unlock
  end
else
  # Failed to acquire lock for product1
  # Handle the situation where the lock cannot be acquired immediately
  Raygun.track_exception(Timeout::Error.new('Lock timeout occurred on first product\'s lock'), custom_data: { product_ids: [product1.id, product2.id] })
end
Antipattern 2: Removing Locks
Improper lock removal, often as a misguided attempt at boosting performance, is a common antipattern in concurrent environments. Locks help preserve data integrity by preventing the unpredictable outcomes of race conditions, and overzealous or premature lock removal can reintroduce these very conditions. While managing locks might introduce some overhead, they are crucial for ensuring data consistency. Remove locks with caution and back any removal with thorough testing. Instead of arbitrary lock removal, use performance monitoring tools like Raygun APM to pinpoint performance bottlenecks and guide optimization efforts.
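To make the risk concrete, here is a small Python sketch; the balance and grant_bonus names simply echo the earlier bonus example and are not part of any real model. With the lock in place, the read-modify-write sequence is atomic; delete the lock and concurrent updates can silently overwrite each other.

import threading

balance = 0
balance_lock = threading.Lock()

def grant_bonus():
    global balance
    # The lock makes this read-modify-write sequence atomic; removing it
    # reintroduces the classic lost-update race condition
    with balance_lock:
        current = balance
        current += 50
        balance = current

threads = [threading.Thread(target=grant_bonus) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # Always 5000 with the lock; without it, updates can be lost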
Wrap-up
In this two-part exploration, we’ve dived into key design patterns, going deep on extension, singleton, exception shielding, object pool, composite bridge, iterator, and lock. These patterns provide robust and versatile solutions to common challenges. Done right, they can help you adhere to principles like code reusability and separation of concerns, and to sound software engineering practice overall.
However, it’s absolutely critical to be disciplined about when these patterns are implemented. Misuse or over-application can lead to confusion and dysfunction instead of simplicity and clarity. With consistent good habits, you’ll get a strong sense of when a pattern adds value and when it might obscure the essence of the code. The key is to strike a balance between robust design patterns and clean, simple code, leading to more efficient and resilient software development.
Happy coding!
Published at DZone with permission of Panos Patros. See the original article here.