Choosing the right backend technology for fintech development involves a detailed look at Java and Scala. Both languages bring distinct advantages to the table, and for professionals working in the fintech industry, understanding these nuances is crucial. There is no arguing that Java is a true cornerstone of software development: stable, boasting comprehensive libraries and a vast ecosystem. Many of us (me included!) relied on it for years, and today Java is the backbone of countless financial systems. Scala, in many respects a more modern language, offers an interesting blend of object-oriented and functional programming, with a syntax that reduces boilerplate code and boosts developer productivity. For teams seeking to introduce functional programming concepts without stepping away from the JVM ecosystem, Scala is an intriguing option. Our discussion will cover the essential aspects that matter most in fintech backend development: ecosystem and libraries, concurrency, real-time processing, maintainability, and JVM interoperability. Let's analyze, side by side, how Java and Scala perform in the fast-paced, demanding world of fintech backend development, focusing on the concrete benefits and limitations each language presents.

Ecosystem and Libraries for Fintech

When deciding between Java and Scala for your fintech backend, your major concern will be the richness of their ecosystems and the availability of domain-specific libraries. Java has accumulated an impressive array of libraries and frameworks that have become go-to resources for fintech projects. One example is Spring Boot, a real workhorse for setting up microservices, packed with features covering everything from securing transactions to managing data. There's also Apache Kafka, pretty much the gold standard for managing event streams effectively. But what stands out about Java's ecosystem isn't just the sheer volume of tools but also the community backing them. A vast network of experienced Java developers means you're never far from finding a solution or best-practice advice honed through years of real-world application. This kind of support network is simply invaluable.

Scala, while newer on the scene, brings forward-thinking libraries and tools that are particularly well suited to the challenges of modern fintech development. Akka, with its toolkit for crafting highly concurrent and resilient message-driven apps, fits perfectly with the needs of high-load financial systems. Alpakka, part of the Reactive Streams ecosystem, further extends Scala's capabilities, facilitating integration with a wide range of messaging systems and data stores. The language's functional programming capabilities, combined with its interoperability with Java, allow teams to gradually adopt new paradigms without a complete overhaul. On the other hand, one significant challenge that fintech companies might face when adopting Scala is the relative scarcity of experienced Scala developers compared to Java developers. The smaller community can make it difficult to find developers with deep experience in Scala, especially those adept at leveraging its advanced features in a fintech context. This scarcity can lead to higher recruitment costs and potentially longer project timelines, one of the factors to consider when deciding between Java and Scala.

While Scala presents compelling advantages to fintech companies interested in building scalable, distributed systems, Java is still a strong contender. The choice between these languages will require you to carefully assess your project's needs, weighing the specific pros and cons of the two paradigms. With this in mind, let's compare some fundamental aspects of these two remarkable languages.

Concurrency and Real-Time Processing

In fintech, where handling multiple transactions swiftly and safely is the daily bread, a language's concurrency models are of particular interest.
Let's see what Java and Scala offer us in this regard.

Java and Concurrency in Fintech

Initially, Java offered threads and locks, a straightforward but sometimes cumbersome way to manage concurrency. However, Java 8 introduced CompletableFuture, which marked a dramatic leap toward straightforward asynchronous programming. CompletableFuture provides developers with a promise-like mechanism that can be completed at a later stage, making it ideal for fintech applications that require high throughput and low latency. Let's consider a scenario where you need to fetch exchange rates from different services concurrently and then combine them to execute a transaction:

```java
CompletableFuture<Double> fetchUSDExchangeRate = CompletableFuture.supplyAsync(() -> {
    return exchangeService.getRate("USD");
});

CompletableFuture<Double> fetchEURExchangeRate = CompletableFuture.supplyAsync(() -> {
    return exchangeService.getRate("EUR");
});

fetchUSDExchangeRate
    .thenCombine(fetchEURExchangeRate, (usd, eur) -> {
        return processTransaction(usd, eur);
    })
    .thenAccept(result -> System.out.println("Transaction Result: " + result))
    .exceptionally(e -> {
        System.out.println("Error processing transaction: " + e.getMessage());
        return null;
    });
```

In this snippet, supplyAsync initiates asynchronous tasks to fetch exchange rates. thenCombine waits for both rates before executing a transaction, ensuring that operations dependent on multiple external services can proceed smoothly. The exceptionally method provides a way to handle any errors that occur during execution, a crucial feature for maintaining robustness in financial operations.

Scala and Concurrency With Akka

Transitioning from Java to Scala's actor model via Akka provides a stark contrast in handling concurrency. Akka actors, elegant yet efficient, are especially well suited to the demands of fintech applications; they were designed to be lightweight and can be instantiated in the millions.
They also bring fault tolerance through supervision strategies, ensuring the system remains responsive even when parts of it fail. Consider the previous example of fetching exchange rates and processing a transaction. Here's how you can apply the actor model in Scala:

```scala
import akka.actor.Actor
import akka.actor.ActorSystem
import akka.actor.Props
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import scala.concurrent.Future

case class FetchRate(currency: String)
case class RateResponse(rate: Double)
case class ProcessTransaction(rate1: Double, rate2: Double)

class ExchangeServiceActor extends Actor {
  def receive = {
    case FetchRate(currency) =>
      sender() ! RateResponse(exchangeService.getRate(currency))
  }
}

class TransactionActor extends Actor {
  implicit val timeout: Timeout = Timeout(5.seconds)

  def receive = {
    case ProcessTransaction(rate1, rate2) =>
      val result = processTransaction(rate1, rate2)
      println(s"Transaction Result: $result")
  }
}

val system = ActorSystem("FintechSystem")
val exchangeServiceActor = system.actorOf(Props[ExchangeServiceActor], "exchangeService")
val transactionActor = system.actorOf(Props[TransactionActor], "transactionProcessor")

implicit val timeout: Timeout = Timeout(5.seconds)
import system.dispatcher // for the implicit ExecutionContext

val usdRateFuture = (exchangeServiceActor ? FetchRate("USD")).mapTo[RateResponse]
val eurRateFuture = (exchangeServiceActor ? FetchRate("EUR")).mapTo[RateResponse]

val transactionResult = for {
  usdRate <- usdRateFuture
  eurRate <- eurRateFuture
} yield transactionActor ! ProcessTransaction(usdRate.rate, eurRate.rate)
```

Here, ExchangeServiceActor fetches currency rates asynchronously, while TransactionActor processes the transaction. The use of the ask pattern (?) allows us to send messages and receive futures in response, which we can then compose or combine as needed.
This pattern elegantly handles the concurrency and asynchronicity inherent in fetching rates and processing transactions, without the direct management of threads. The actor model, by design, encapsulates state and behavior, making the codebase cleaner and easier to maintain. Fintech applications, with their demand for fault tolerance and quick scalability, are among the major beneficiaries of Scala's Akka framework.

Code Readability and Maintainability in Fintech

Java's syntax is known for its verbosity, which, applied to fintech, translates to clarity. Each line of code, while longer, is self-explanatory, making it easier for new team members to understand the business logic and the flow of the application. This characteristic is beneficial in environments where maintaining and auditing code is as crucial as writing it, given the regulatory scrutiny fintech applications often face. On the other hand, while Scala's more concise syntax reduces boilerplate and can lead to a tighter, more elegant codebase, it also introduces a significant challenge. Scala's flexibility and variety can often result in different developers solving the same problem in multiple ways, creating what can be described as a "Babylon" within the project. This variability, while showcasing Scala's expressive power, can make it harder to maintain consistent coding standards and to ensure code quality and understandability, especially in the highly regulated environment of fintech. It also steepens the learning curve, especially for developers unfamiliar with functional programming paradigms.

Consider a simple operation in a fintech application, such as validating a transaction against a set of rules.
In Java, this might involve several explicit steps, each clearly laid out:

```java
public boolean validateTransaction(Transaction transaction) {
    if (transaction.getAmount() <= 0) {
        return false;
    }
    if (!knownCurrencies.contains(transaction.getCurrency())) {
        return false;
    }
    // Additional validation rules here
    return true;
}
```

The challenger, Scala, boasts a more concise syntax by virtue of its functional programming capabilities. This conciseness helps dramatically reduce boilerplate, making the codebase tighter and easier to maintain. Despite the challenge, mentioned above, of maintaining a uniform standard across a team, the brevity of Scala code can be a significant asset, though it comes with a steeper learning curve, especially for developers unfamiliar with functional programming paradigms. The same transaction validation in Scala might look significantly shorter, leveraging pattern matching with guards:

```scala
def validateTransaction(transaction: Transaction): Boolean = transaction match {
  case Transaction(amount, currency, _) if amount > 0 && knownCurrencies.contains(currency) => true
  case _ => false
}
```

JVM Interoperability and Legacy Integration

A critical factor in choosing a backend technology for fintech applications is how well it integrates with existing systems. Many financial institutions rely on extensive legacy systems that are critical to their operations. Java's and Scala's paths to interoperability and integration within the JVM ecosystem each have unique advantages here. Java's long history and widespread use in the financial industry mean that most legacy systems in fintech are built with Java or are compatible with it. This compatibility facilitates seamless integration of new developments with existing systems. Java's stability and backward compatibility are key assets when updating or extending legacy systems, minimizing disruptions and ensuring continuous operation.
For instance, integrating a new Java-based service into an existing system can be as straightforward as:

```java
// Java service to be integrated with a legacy system
public class NewJavaService {
    public String processData(String input) {
        // Process data
        return "Processed: " + input;
    }
}
```

This simplicity in integration is a significant advantage for Java, reducing the time and effort required to enhance or expand legacy systems with new functionalities.

Scala's interoperability with Java is one of its standout features, allowing Scala to use Java libraries directly and vice versa. This interoperability means that financial institutions can adopt Scala for new projects or modules without abandoning their existing Java codebase. Scala can act as a bridge to more modern, functional programming paradigms while maintaining compatibility with the JVM ecosystem. For example, calling a Scala object from Java might look like this:

```scala
// Scala object
object ScalaService {
  def processData(input: String): String = {
    // Process data
    s"Processed: $input"
  }
}
```

```java
// Java class calling the Scala object
public class JavaCaller {
    public static void main(String[] args) {
        String result = ScalaService.processData("Sample input");
        System.out.println(result);
    }
}
```

This cross-language interoperability is particularly beneficial in fintech, where leveraging existing investments while adopting new technologies is often a strategic priority. Scala offers a path to modernize applications with functional programming concepts without a complete system overhaul.

Conclusion

It certainly is no revelation that the two languages have their strengths and difficulties. Java stands out for its robust ecosystem and libraries, offering a tried-and-tested path for developing fintech applications. Its traditional concurrency models and frameworks provide a solid foundation for building reliable and scalable systems.
Moreover, Java's verbose syntax promotes clarity and maintainability, essential in the highly regulated fintech sector. Finally, Java's widespread adoption makes integration with existing systems and legacy code seamless.

Scala, on the other hand, will be your weapon of choice if you want to streamline your development process with a more expressive syntax and a robust concurrency management model. It's particularly appealing for projects aiming for high scalability and resilience without stepping completely away from the Java universe. This makes Scala a strategic choice for evolving your tech stack, introducing functional programming benefits while keeping the door open to Java's realm.

So, no: there is not, and probably never will be, a definitive, final answer to this question. You will always have to balance the immediate needs of your project with long-term tech strategy. Do you build on the solid, familiar ground that Java offers, or do you step into Scala's territory, with its promise of modernized approaches and efficiency gains? In fintech, where innovation must meet reliability head-on, understanding the nuances of Java and Scala will equip you to make an informed decision that aligns with both your immediate project needs and your strategic goals for the future.
The ExecutorService in Java provides a flexible and efficient framework for asynchronous task execution. It abstracts away the complexities of managing threads manually and allows developers to focus on the logic of their tasks.

Overview

The ExecutorService interface is part of the java.util.concurrent package and represents an asynchronous task execution service. It extends the Executor interface, which defines a single method, execute(Runnable command), for executing tasks.

Executors

Executors is a utility class in Java that provides factory methods for creating and managing different types of ExecutorService instances. It simplifies the process of instantiating thread pools and allows developers to easily create and manage executor instances with various configurations. The Executors class provides several static factory methods for creating different types of executor services.

FixedThreadPool: Creates an ExecutorService with a fixed number of threads. Tasks submitted to this executor are executed concurrently by the specified number of threads. If a thread is idle and no tasks are available, it remains alive but dormant until needed.

```java
ExecutorService executor = Executors.newFixedThreadPool(5);
```

CachedThreadPool: Creates an ExecutorService with an unbounded thread pool that automatically adjusts its size based on the workload. Threads are created as needed and reused for subsequent tasks; idle threads are terminated after a default timeout of 60 seconds to reduce resource consumption. In a cached thread pool, submitted tasks are not queued but immediately handed off to a thread for execution; if no thread is available, a new one is created. If a server is so heavily loaded that all of its CPUs are fully utilized and more tasks arrive, more threads will be created, which only makes matters worse.
Therefore, on a heavily loaded production server, you are much better off using Executors.newFixedThreadPool, which gives you a pool with a fixed number of threads, or using the ThreadPoolExecutor class directly, for maximum control.

```java
ExecutorService executor = Executors.newCachedThreadPool();
```

SingleThreadExecutor: Creates an ExecutorService with a single worker thread. Tasks are executed sequentially by this thread in the order they are submitted. This executor is useful for tasks that require serialization or have dependencies on each other.

```java
ExecutorService executor = Executors.newSingleThreadExecutor();
```

ScheduledThreadPool: Creates an ExecutorService that can schedule tasks to run after a specified delay or at regular intervals. It provides methods for scheduling tasks with a fixed delay or at a fixed rate, allowing for periodic execution of tasks.

newWorkStealingPool: Creates a work-stealing thread pool with the target parallelism level. This executor is based on the ForkJoinPool and dynamically adjusts its pool size to utilize all available processor cores efficiently.

Overall, the Executors class simplifies the creation and management of executor instances.

ExecutorService

Tasks can be submitted to an ExecutorService for execution. These tasks are typically instances of Runnable or Callable, representing units of work that need to be executed asynchronously. Below are the main methods of ExecutorService.

1. execute(Runnable command): Executes the given task asynchronously.

```java
ExecutorService executor = Executors.newFixedThreadPool(5);
executor.execute(() -> {
    System.out.println("Task executed asynchronously");
});
```

2. submit(Callable<T> task): Submits a task for execution and returns a Future representing the pending result of the task.

```java
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Integer> future = executor.submit(() -> {
    // Task logic
    return 42;
});
```

3. shutdown(): Initiates an orderly shutdown of the ExecutorService, allowing previously submitted tasks to execute before terminating.

4. shutdownNow(): Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.

```java
List<Runnable> pendingTasks = executor.shutdownNow();
```

5. awaitTermination(long timeout, TimeUnit unit): Blocks until all tasks have completed execution after a shutdown request, the timeout occurs, or the current thread is interrupted, whichever happens first.

```java
boolean terminated = executor.awaitTermination(10, TimeUnit.SECONDS);
if (terminated) {
    System.out.println("All tasks have completed execution");
} else {
    System.out.println("Timeout occurred before all tasks completed");
}
```

6. invokeAny(Collection<? extends Callable<T>> tasks): Executes the given tasks, returning the result of one that successfully completes. This method is useful when we have multiple tasks to run but only care about the result of whichever one completes first. All other tasks are cancelled.

```java
ExecutorService executor = Executors.newCachedThreadPool();
Set<Callable<String>> callables = new HashSet<>();
callables.add(() -> "Task 1");
callables.add(() -> "Task 2");
String result = executor.invokeAny(callables);
System.out.println("Result: " + result);
```

7. invokeAll(Collection<? extends Callable<T>> tasks): Executes the given tasks, returning a list of Future objects representing their pending results.

```java
List<Callable<Integer>> tasks = Arrays.asList(() -> 1, () -> 2, () -> 3);
List<Future<Integer>> futures = executor.invokeAll(tasks);
for (Future<Integer> future : futures) {
    System.out.println("Result: " + future.get());
}
```

Implementations

The ExecutorService interface is typically implemented by various classes provided by the Java concurrency framework, such as ThreadPoolExecutor, ScheduledThreadPoolExecutor, and ForkJoinPool.
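When the Executors factory methods are too coarse, ThreadPoolExecutor can be constructed directly for full control over pool sizing, queuing, and rejection behavior. The sketch below is a minimal illustration; the pool sizes, queue capacity, and rejection policy are arbitrary example values, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DirectPoolExample {
    static final AtomicInteger COMPLETED = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        // Core pool of 2 threads, growing to at most 4; idle extra threads
        // are retired after 30 seconds. The bounded queue (capacity 10)
        // provides backpressure, and CallerRunsPolicy makes the submitting
        // thread run the task itself when the pool and queue are both full.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 5; i++) {
            executor.execute(COMPLETED::incrementAndGet);
        }

        // Orderly shutdown: wait for the submitted tasks to finish
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("Completed tasks: " + COMPLETED.get());
    }
}
```

Choosing a bounded queue plus an explicit rejection policy is the main advantage of this form over Executors.newFixedThreadPool, whose queue is unbounded.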
Considerations

Careful configuration of thread pool size to avoid underutilization or excessive resource consumption. Consider factors such as task submission rate, task priority, resource constraints, and the desired behavior in case of queue overflow.

Choice of the queue type that best meets your application's requirements for scalability, performance, and resource utilization.

Proper handling of exceptions and task cancellation to ensure robustness and reliability.

Understanding of the concurrency semantics and potential thread-safety issues in concurrent code.

When creating an instance of ExecutorService, we can pass a ThreadFactory and the task queue to be used by the pool. A ThreadFactory is an interface used to create new threads. It provides a way to encapsulate the logic for creating threads, allowing for customization of thread creation behavior. The primary purpose of a ThreadFactory is to decouple the thread creation process from the rest of the application logic, making it easier to manage and customize thread creation. Passing a custom ThreadFactory is preferred, as it helps in setting a thread name prefix and priority if required.

```java
static final String prefix = "app.name.task";
ExecutorService executorService = Executors.newFixedThreadPool(5, r -> {
    Thread t = new Thread(r);
    t.setName(prefix + "-" + t.getId()); // Customize thread name if needed
    return t;
});
```

TaskQueues

When tasks are submitted to an ExecutorService and none of the threads in the pool are available to process them, the tasks are stored in a queue. Below are the different queue options to choose from.

Unbounded Queue: An unbounded queue, such as LinkedBlockingQueue, has no fixed capacity and can grow dynamically to accommodate an unlimited number of tasks. It is suitable for scenarios where the task submission rate is unpredictable or where tasks need to be queued indefinitely without the risk of rejection due to queue overflow.
However, keep in mind that unbounded queues can potentially lead to memory exhaustion if tasks are submitted at a faster rate than they can be processed.

Bounded Queue: A bounded queue, such as ArrayBlockingQueue with a specified capacity, has a fixed size limit and can only hold a finite number of tasks. It is suitable for scenarios where resource constraints or backpressure mechanisms need to be enforced to prevent excessive memory usage or system overload. Tasks may be rejected or handled according to a specified rejection policy when the queue reaches its capacity.

Priority Queue: A priority queue, such as PriorityBlockingQueue, orders tasks based on their priority or a specified comparator. It is suitable for scenarios where tasks have different levels of importance or urgency, and higher-priority tasks need to be processed before lower-priority ones. Priority queues ensure that tasks are executed in the order of their priority, regardless of their submission order.

Synchronous Queue: A synchronous queue, such as SynchronousQueue, is a special type of queue that enables one-to-one task handoff between producer and consumer threads. It has a capacity of zero and requires both a producer and a consumer to be available simultaneously for a task exchange to occur. Synchronous queues are suitable for scenarios where strict synchronization and coordination between threads are required, such as handoff between thread pools or bounded resource access.

ScheduledThreadPool

The ScheduledThreadPoolExecutor inherits thread pool management capabilities from ThreadPoolExecutor and adds functionality for scheduling tasks to run after a given delay or periodically at defined intervals. Here's a detailed explanation:

Runnable and Callable Tasks: You define the tasks you want to schedule using these interfaces, just as with a regular ExecutorService.

ScheduledFuture: This interface represents the result of a scheduled task submission. It allows checking the task's completion status, cancelling the task before execution, and (for Callable tasks) retrieving the result upon completion.

Scheduling Capabilities

schedule(Runnable task, long delay, TimeUnit unit): Schedules a Runnable task to be executed after a specified delay in the given time unit (e.g., seconds, milliseconds).

scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit): Schedules a fixed-rate execution of a Runnable task. The task is first executed after initialDelay, and subsequent executions occur with a constant period between them.

scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit): Schedules a fixed-delay execution of a Runnable task. Similar to scheduleAtFixedRate, but the delay is measured between the completion of the previous execution and the start of the next.

Key Considerations

Thread Pool Management: ScheduledThreadPoolExecutor maintains a fixed-size thread pool by default. You can configure the pool size during object creation.

Delayed Execution: Scheduled tasks are not guaranteed to execute precisely at the specified time; the actual execution time might differ slightly due to factors like thread availability and workload.

Missed Executions: With fixed-rate scheduling, if the task execution time exceeds the period, subsequent executions might be skipped to maintain the fixed rate.

Cancellation: You can cancel a scheduled task using the cancel method of the returned ScheduledFuture object. However, cancellation success depends on the task's state (not yet started, running, etc.).
```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ScheduledThreadPoolExample {
    public static void main(String[] args) throws InterruptedException {
        // Create a ScheduledThreadPoolExecutor with 2 threads
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Schedule a task with a 2-second delay
        Runnable task1 = () -> System.out.println("Executing task 1 after a delay");
        scheduler.schedule(task1, 2, TimeUnit.SECONDS);

        // Schedule a task to run every 5 seconds at a fixed rate
        Runnable task2 = () -> System.out.println("Executing task 2 at fixed rate");
        scheduler.scheduleAtFixedRate(task2, 1, 5, TimeUnit.SECONDS);

        // Schedule a task to run every 3 seconds with a fixed delay
        Runnable task3 = () -> System.out.println("Executing task 3 with fixed delay");
        scheduler.scheduleWithFixedDelay(task3, 0, 3, TimeUnit.SECONDS);

        // Wait for some time to allow tasks to be executed
        Thread.sleep(15000);

        // Shut down the scheduler
        scheduler.shutdown();
    }
}
```

Shut Down ExecutorService Gracefully

To shut down an ExecutorService efficiently, you can follow these steps:

Call the shutdown() method to initiate the shutdown process. This method allows previously submitted tasks to execute before terminating but prevents the submission of new tasks.

Call the shutdownNow() method if you want to force the ExecutorService to terminate immediately. This method attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution but were never started.

Await termination by calling the awaitTermination() method. This method blocks until all tasks have completed execution after a shutdown request, the timeout occurs, or the current thread is interrupted, whichever happens first.
Here's an example:

```java
ExecutorService executor = Executors.newFixedThreadPool(10);

// Execute tasks using the executor

// Shut down the executor
executor.shutdown();
try {
    // Wait for all tasks to complete or time out after a certain period
    if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
        // If the timeout occurs, force shutdown
        executor.shutdownNow();
        // Optionally, wait for the tasks to be forcefully terminated
        if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
            // Log a message indicating that some tasks failed to terminate
        }
    }
} catch (InterruptedException ex) {
    // Log the interruption
    executor.shutdownNow();
    // Preserve the interrupt status
    Thread.currentThread().interrupt();
}
```

In summary, ExecutorService is a versatile framework that helps developers write efficient, scalable, and maintainable concurrent code.
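The ScheduledFuture cancellation behavior described in the Key Considerations can be sketched as follows; this is a minimal illustration, and the 10-second delay is an arbitrary value chosen so the task can be cancelled before it ever starts:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class CancelScheduledTask {
    static boolean scheduleAndCancel() throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        // Schedule a task far enough in the future that we can cancel it first
        ScheduledFuture<?> future = scheduler.schedule(
                () -> System.out.println("This should never print"),
                10, TimeUnit.SECONDS);

        // cancel(false) means "do not interrupt if already running"; here the
        // task has not started yet, so the cancellation succeeds
        boolean cancelled = future.cancel(false);

        scheduler.shutdown();
        scheduler.awaitTermination(1, TimeUnit.SECONDS);
        return cancelled;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Cancelled before execution: " + scheduleAndCancel());
    }
}
```

Had the task already been running, cancel(false) would have returned false and let it finish; cancel(true) would additionally interrupt the executing thread.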
This is part 4 of a 4-part tutorial:

Part 1: DSL Validations: Properties
Part 2: DSL Validations: Child Properties
Part 3: DSL Validations: Operators
Part 4: DSL Validations: The Whole Enchilada

In this final part of a four-part tutorial, after introducing the concept of property validators and operators, we can tie it all together by validating complete beans in a more reusable way.

PropertyBeanValidator

PropertyBeanValidator is the worker class that evaluates a collection of PropertyValidators against a target object. The specific property validators are provided during construction and are AND'ed together as if wrapped by an AndOperator (i.e., all property validators must pass for the entire bean to validate successfully).

```kotlin
open class PropertyBeanValidator<T>(
    private val validators: Set<PropertyValidator<T>>
) : DefaultBeanValidator() {

    override fun <T> validate(
        source: T,
        vararg groups: Class<*>?
    ): Set<ConstraintViolation<T>> {
        // Place to catch all the constraint violations that
        // occurred during this validation
        val violations = mutableSetOf<ConstraintViolation<T>>()

        // Call each individual validator to determine whether
        // or not the bean validates correctly
        validators
            .parallelStream()
            .forEach { it as PropertyValidator<T>; it.validate(source, violations) }

        return violations
    }
}
```

Putting It All Together

We'll reuse the Student class defined in Part 2, "DSL Validations: Child Properties" (linked in the introduction).

```kotlin
data class Address(
    val line1: String?,
    val line2: String?,
    val city: String,
    val state: String,
    val zipCode: String
)

data class Student(
    val studentId: String,
    val firstName: String?,
    val lastName: String?,
    val emailAddress: String?,
    val localAddress: Address
)
```

For this example, we have three business rules to apply against a Student object: firstName and lastName must both be present or both be missing; the presence of address.line2 requires that address.line1 is also present; and address.zipCode must be formatted correctly.
Ad-Hoc Bean Validator

Bean validators can be created by instantiating PropertyBeanValidator directly, or by a factory that provides callers with an appropriate validator without their needing to know what has to be validated. The factory determines the specific validations required (based on caller, data state, feature flags, etc.) and builds the validator on the fly.

```kotlin
val validators = setOf(
    OrOperator(
        "studentName",
        listOf(
            AndOperator(
                "namePresent",
                listOf(
                    NotBlankValidator("firstName", Student::firstName),
                    NotBlankValidator("lastName", Student::lastName)
                )
            ),
            AndOperator(
                "nameNotPresent",
                listOf(
                    NullOrBlankValidator("firstName", Student::firstName),
                    NullOrBlankValidator("lastName", Student::lastName)
                )
            )
        ),
        "first/last name must both be present or null"
    ),
    OrOperator(
        "Line2RequiresLine1",
        listOf(
            ChildPropertyValidator(
                "line1NotNull",
                Student::localAddress,
                NotBlankValidator("line1", Address::line1)),
            ChildPropertyValidator(
                "line2Null",
                Student::localAddress,
                NullOrBlankValidator("line2", Address::line2))
        ),
        "line2 requires line1."
    ),
    ChildPropertyValidator(
        "address.zipCode",
        Student::localAddress,
        ZipCodeFormatValidator("address", Address::zipCode)
    )
)

val validator = PropertyBeanValidator(validators)
```

Class-Specific Bean Validator

A class-specific validator is useful when there is one and only one way to validate a class and you want consistent, correct usage across the code base. Here we extend PropertyBeanValidator and pass in the validators via an alternative constructor.

```kotlin
class StudentBeanValidator(
    validators: Set<PropertyValidator<Student>>
) : PropertyBeanValidator<Student>(validators) {

    constructor() : this(getValidators())

    companion object {
        fun getValidators(): Set<PropertyValidator<Student>> {
            return setOf(
                // . . . <same validations as above> . . .
            )
        }
    }
}
```

NOTE: It's a little more awkward in Kotlin, as you can't access data in the companion object before the object is constructed, but calling a method is allowed.
Statics in Java would allow the creation of an immutable set that could be used for any number of instantiations.

Validating

```kotlin
// Assume the student is created from a database entry
val myStudent = retrieveStudent("studentId")

// Validate the object
val violations = validator.validate(myStudent)

// An empty collection means successful validation
val successfullyValidated = violations.isEmpty()
```

Annotation-Based Validation

Jakarta's validation interface ConstraintValidator declares an annotation-driven validation which, in turn, can be defined via the DSL. First, implement the annotation that can be applied for validating students, in this example limited to method parameters.

```kotlin
@Constraint(validatedBy = [StudentValidator::class])
@Target(AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
annotation class ValidStudent(
    val message: String = "Invalid Student record",
    val groups: Array<KClass<*>> = [],
    val payload: Array<KClass<out Payload>> = []
)
```

Next, implement the class, extending ConstraintValidator, that does the actual validation, using the StudentBeanValidator implemented earlier.

```kotlin
class StudentValidator : ConstraintValidator<ValidStudent, Student> {

    override fun isValid(student: Student, context: ConstraintValidatorContext): Boolean {
        val errors = StudentBeanValidator().validate(student)
        return if (errors.isNotEmpty()) {
            context.disableDefaultConstraintViolation()
            context.buildConstraintViolationWithTemplate(
                "Student validation failed with following errors: $errors")
                .addConstraintViolation()
            false
        } else {
            true
        }
    }
}
```

Here is the annotation in action:

```kotlin
fun registerStudentForClass(@ValidStudent student: Student): Student {
    // . . . <do some work> . . .
}
```

For those interested, this Baeldung tutorial dives deeper into validations than what I've covered.
Final Thoughts

DSL validations are a language-independent way of checking bean/object validity without writing ever more if-then-else statements that end up uncommented, unclear, and unreadable. They are also easy to extend and customize for whatever specific requirements your organization has.

Supporting Code

DefaultBeanValidator

```kotlin
open class DefaultBeanValidator : Validator {

    override fun <T> validate(
        source: T, vararg groups: Class<*>?
    ): Set<ConstraintViolation<T>> {
        throw UnsupportedOperationException(EXCEPTION_MESSAGE)
    }

    override fun <T> validateProperty(
        source: T, propertyName: String?, vararg groups: Class<*>?
    ): Set<ConstraintViolation<T>> {
        throw UnsupportedOperationException(EXCEPTION_MESSAGE)
    }

    override fun <T> validateValue(
        beanType: Class<T>?, propertyName: String?, value: Any?, vararg groups: Class<*>?
    ): Set<ConstraintViolation<T>> {
        throw UnsupportedOperationException(EXCEPTION_MESSAGE)
    }

    override fun getConstraintsForClass(clazz: Class<*>?): BeanDescriptor {
        throw UnsupportedOperationException(EXCEPTION_MESSAGE)
    }

    override fun <T : Any?> unwrap(type: Class<T>?): T {
        throw UnsupportedOperationException(EXCEPTION_MESSAGE)
    }

    override fun forExecutables(): ExecutableValidator {
        throw UnsupportedOperationException(EXCEPTION_MESSAGE)
    }

    companion object {
        const val EXCEPTION_MESSAGE = "Not yet implemented"
    }
}
```
Java 21 just got simpler! Want to write cleaner, more readable code? Dive into pattern matching, a powerful new feature that lets you easily deconstruct and analyze data structures. This article will explore pattern matching with many examples, showing how it streamlines everyday data handling and keeps your code concise.

Examples of Pattern Matching

Pattern matching shines in two key areas. First, pattern matching for switch replaces long chains of if statements, letting you elegantly match the selector expression against various types and values, including primitives (as before), reference types, and even null. Second, what if you need to check an object's type and extract specific data? Pattern matching for instanceof simplifies this process: it lets you confirm that an object matches a pattern and, if so, conveniently extract the desired data. Let's take a look at more examples of pattern matching in Java code.

Pattern Matching With Switch Statements

```java
public static String getAnimalSound(Animal animal) {
    return switch (animal) {
        case Dog dog -> "woof";
        case Cat cat -> "meow";
        case Bird bird -> "chirp";
        case null -> "No animal found!";
        default -> "Unknown animal sound";
    };
}
```

- Matches selector expressions with types other than integers and strings
- Uses type patterns (case Dog dog) to check and cast types simultaneously
- Handles null directly within the switch block (case null)
- Employs arrow syntax (->) for concise case bodies

Pattern Matching With instanceof

```java
if (object instanceof String str) {
    System.out.println("The string is: " + str);
} else if (object instanceof Integer num) {
    System.out.println("The number is: " + num);
} else {
    System.out.println("Unknown object type");
}
```

- Combines type checking and casting in a single expression
- Introduces a pattern variable (str, num) to capture the object's value
- Avoids explicit casting (String str = (String) object)
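One detail worth a quick, self-contained sketch of my own (not from the article above): the pattern variable introduced by instanceof is "flow-scoped," so it is usable wherever the compiler can prove the check succeeded, including after `&&` in the same condition:

```java
public class FlowScopingDemo {

    // The pattern variable s is in scope only where the instanceof
    // check is known to have succeeded, e.g. after && in the condition.
    public static String describe(Object obj) {
        if (obj instanceof String s && s.length() > 3) {
            return "long string: " + s;
        }
        return "something else";
    }

    public static void main(String[] args) {
        System.out.println(describe("hello")); // long string: hello
        System.out.println(describe(42));      // something else
    }
}
```

This is why no null check is needed before the condition: `instanceof` is false for null, so the guarded branch is simply skipped.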
Pattern Matching With Primitive Types

```java
int number = 10;

switch (number) {
    case 10:
        System.out.println("The number is 10.");
        break;
    case 20:
        System.out.println("The number is 20.");
        break;
    case 30:
        System.out.println("The number is 30.");
        break;
    default:
        System.out.println("The number is something else.");
}
```

Pattern matching with primitive types doesn't introduce entirely new functionality but rather simplifies existing practices when working with primitives in switch statements.

Pattern Matching With Reference Types

```java
String name = "Daniel Oh";

switch (name) {
    case "Daniel Oh":
        System.out.println("Hey, Daniel!");
        break;
    case "Jennie Oh":
        System.out.println("Hola, Jennie!");
        break;
    default:
        System.out.println("What’s up!");
}
```

Pattern matching with reference types makes code easier to understand and maintain due to its clear and concise syntax. By combining type checking and extraction in one step, pattern matching reduces the risk of errors associated with explicit casting. Switch statements also become more expressive and versatile, handling a wider range of data types and scenarios.

Pattern Matching With null

```java
Object obj = null;

switch (obj) {
    case null:
        System.out.println("The object is null.");
        break;
    default:
        System.out.println("The object is not null.");
}
```

Before Java 21, switch statements would throw a NullPointerException if the selector expression was null. Pattern matching allows a dedicated case null clause to handle this scenario gracefully. By explicitly checking for null within the switch statement, you avoid potential runtime errors and ensure your code is more robust. Having a dedicated case null clause makes the code's intention clearer compared to needing an external null check before the switch. Java's implementation is designed not to break existing code: if a switch statement doesn't have a case null clause, it will still throw a NullPointerException as before, even if a default case exists.
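Java 21's switch also supports guarded patterns, where a `when` clause refines a case label. The following is my own illustrative sketch, not an example from the original article; the classification strings are arbitrary:

```java
public class GuardedPatternDemo {

    // A when clause guards a type pattern: the case matches only if
    // both the pattern matches and the boolean condition holds.
    public static String classify(Object obj) {
        return switch (obj) {
            case Integer i when i > 100 -> "big number";
            case Integer i              -> "small number";
            case String s when s.isBlank() -> "blank string";
            case String s               -> "string: " + s;
            case null                   -> "nothing";
            default                     -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(classify(500));  // big number
        System.out.println(classify(""));   // blank string
        System.out.println(classify(null)); // nothing
    }
}
```

Note that the guarded `case Integer i when i > 100` must appear before the unguarded `case Integer i`; the compiler rejects the reverse order because the unguarded pattern would dominate the guarded one.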
Pattern Matching With Multiple Patterns

```java
List<String> names = new ArrayList<>();
names.add("Daniel Oh");
names.add("Jennie Oh");

for (String name : names) {
    switch (name) {
        case "Daniel Oh", "Jennie Oh":
            System.out.println("Hola, " + name + "!");
            break;
        default:
            System.out.println("What’s up!");
    }
}
```

Unlike traditional switch statements, pattern matching considers the order of cases: the first case with a matching pattern is executed. Avoid unreachable code by ensuring subtypes don't appear before their supertypes in the pattern-matching cases.

Conclusion

Pattern matching is a powerful new feature in Java 21 that can make your code more concise and readable. It is especially useful for working with complex data structures, with key benefits:

- Improved readability: Pattern matching makes code more readable by combining type checking, data extraction, and control flow into a single statement. This eliminates the need for verbose if-else chains and explicit casting.
- Conciseness: Code becomes more concise by leveraging pattern matching's ability to handle multiple checks and extractions in a single expression. This reduces boilerplate code and improves maintainability.
- Enhanced type safety: Pattern matching enforces type safety by explicitly checking and potentially casting the data type within the switch statement or instanceof expression. This reduces the risk of runtime errors caused by unexpected object types.
- Null handling: Pattern matching allows for the explicit handling of null cases directly within the switch statement. This eliminates the need for separate null checks before the switch, improving code flow and reducing the chance of null pointer exceptions.
- Flexibility: Pattern matching goes beyond basic types. It can handle complex data structures using record patterns (finalized in Java 21). This allows for more expressive matching logic for intricate data objects.
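To make the record-pattern point concrete, here is a small self-contained sketch; the Shape hierarchy is my own illustration, not part of the original article:

```java
public class RecordPatternDemo {

    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    // A record pattern deconstructs the record's components directly in the
    // case label, so the body needs no accessor calls or casts. Because Shape
    // is sealed and both subtypes are covered, no default branch is required.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle(double r)          -> Math.PI * r * r;
            case Rectangle(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rectangle(3, 4))); // 12.0
    }
}
```

Combining sealed types with record patterns gives the compiler enough information to check exhaustiveness, so adding a new Shape subtype turns into a compile-time error at every switch that does not handle it.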
- Modern look and feel: Pattern matching aligns with modern functional programming paradigms, making Java code more expressive and aligned with other languages that utilize this feature.

Overall, pattern matching in Java 21 streamlines data handling, improves code clarity and maintainability, and enhances type safety for a more robust and developer-friendly coding experience.
Brief Problem Description

Imagine the situation: you (a Python developer) start a new job or join a new project, and you are told that the documentation is out of date or even absent, and those who wrote the code resigned a long time ago. Moreover, the code is written in a language that you are not familiar with (or "that you do not know"). You open the code, start examining it, and realize that there are no tests either. Also, the service has been running in Prod for so long that you are afraid to change anything. I am not talking about any particular project or company; I have experienced this at least three times.

Black Box

So, you have a black box that exposes API methods (judging by the code), and you know that it calls something and writes to a database. There is also documentation for the services that receive its requests. The advantages: the service starts, there is documentation on the APIs that it calls, and the service code is quite readable. As for the disadvantages, the service expects to receive data via APIs. Some of that can be run in a container, and some can be used from a developer environment, but not everything. Another problem is that requests to the black box are encrypted and signed, as are requests from it to some other services. At the same time, you need to change something in this service without breaking what already works. In such cases, Postman or cURL is inconvenient to use: you need to prepare each request for each specific case, since there are dynamic input data and signatures that depend on the time of the request. There are almost no ready-made tests, and it is difficult to write them if you do not know the language very well. The market offers solutions that allow you to run tests against such a service. However, I have never used them, so trying to understand them would be more difficult and would take much more time than creating my own solution.

Created Solution

I have come up with a simple and convenient option.
I have written a simple script in Python that pulls this very application. I used requests and a simple signature implementation that I put together very quickly for requests prepared in advance. Next, I needed to mock the backends.

First Option

To do this, I just ran a mock service in Python. In my case, Django turned out to be the fastest and easiest tool for this. I decided to implement everything as simply and quickly as possible and used the latest version of Django. The result was quite good, but it covered only one method and took me several hours, even though the whole point was to save time, and there are dozens of such methods.

Examples of Configuration Files

In the end, I got rid of everything I did not need and simply generated JSON with requests and responses. I described each request from the front end of my application, the expected response of the service to which requests were sent, as well as the rules for checking the response to the main request. For each method, I wrote a separate URL. However, manually changing the responses of one method from correct to incorrect and vice versa, and then pulling each method, is difficult and time-consuming.

```json
{
  "id": 308,
  "front": {
    "method": "/method1",
    "request": {
      "method": "POST",
      "data": {
        "from_date": "dfsdsf",
        "some_type": "dfsdsf",
        "goods": [
          {
            "price": "112323",
            "name": "123123",
            "quantity": 1
          }
        ],
        "total_amount": "2113213"
      }
    },
    "response": {
      "code": 200,
      "body": {
        "status": "OK",
        "data": {
          "uniq_id": "sdfsdfsdf",
          "data": [
            {
              "number": "12223",
              "order_id": "12223",
              "status": "active",
              "code": "12223",
              "url": "12223",
              "op_id": "12223"
            }
          ]
        }
      }
    }
  },
  "backend": {
    "response": {
      "code": 200,
      "method": "POST",
      "data": {
        "body": {
          "status": 1,
          "data": {
            "uniq_id": "sdfsdfsdf",
            "data": [
              {
                "number": "12223",
                "order_id": "12223",
                "status": "active",
                "code": "12223",
                "url": "12223",
                "op_id": "12223"
              }
            ]
          }
        }
      }
    }
  }
}
```

Second Option

Then I linked mock objects to the script.
As a result, there is a script call that pulls my application, and there is a mock object that responds to all its requests. The script saves the ID of the selected request, and the mock object generates a response based on this ID. Thus, I collected all requests in different variants: correct and with errors.

What I Got

As a result, I got a simple view with one function for all URLs. This function takes a certain request identifier and, based on it, looks up the response rules, i.e., a mock object. Meanwhile, before making the request, the script that pulls the service writes this request identifier to the storage. The script simply takes each case in turn, writes the identifier, makes the correct request, checks whether the response is correct, and that's it.

Intermediate Connections

However, I needed not only to generate responses to these requests but also to test the requests made to the mock objects. After all, the service could send an incorrect request, so it was necessary to check them too. As a result, there was a huge number of configuration files, and my few API methods turned into hundreds of large configuration files for checking.

Connecting a Database

I decided to transfer everything to a database. My service began to write not only to the console but also to the database so that it would be possible to generate reports. That turned out to be more convenient: each case has its own entry in the database. Cases are combined into projects and have flags that allow you to disable irrelevant options. In the settings, I added request and response modifiers, which are applied to each request and response at all levels. To keep this as simple as possible, I use SQLite; Django has it by default. I transferred all configuration files to the database and save all testing results in it.

Algorithm

Therefore, I found a very simple and flexible solution.
It already works as an external integration test for three microservices, but I am the only one who uses it. It certainly does not replace unit tests, but it complements them well. When I need to validate services, I use this Django tester to do that.

Configuration File Example

The settings have become simpler and are managed with Django Admin. I can easily turn them off, change them, and view their history. I could go further and make a full-fledged UI, but this is more than enough for me for now.

Request Body JSON

```json
{
  "from_date": "dfsdsf",
  "some_type": "dfsdsf",
  "goods": [
    {
      "price": "112323",
      "name": "123123",
      "quantity": 1
    }
  ],
  "total_amount": "2113213"
}
```

Response Body JSON

```json
{
  "uniq_id": "sdfsdfsdf",
  "data": [
    {
      "number": "12223",
      "order_id": "12223",
      "status": "active",
      "code": "12223",
      "url": "12223",
      "op_id": "12223"
    }
  ]
}
```

Backend Response Body JSON

```json
{
  "status": 1,
  "data": {
    "uniq_id": "sdfsdfsdf",
    "data": [
      {
        "number": "12223",
        "order_id": "12223",
        "status": "active",
        "code": "12223",
        "url": "12223",
        "op_id": "12223"
      }
    ]
  }
}
```

What It Gives You

In what way can this service be useful? Sometimes, even with tests, you need to pull services from the outside, or several services in a chain. Services can also be black boxes. A database can be run in Docker. As for an API... an API can be run in Docker as well. You need to set a host, a port, and configuration files, and run it.

Why the Unusual Solution?

Some may say that you could use third-party integration-test tools or some other tests. Of course, you can! But, with limited resources, there is often no time to apply all this, and quick and effective solutions are needed. And here comes the simplest Django service that meets all the requirements.
In the world of high-performance computing, utilizing SIMD (Single Instruction, Multiple Data) instructions can significantly boost the performance of certain types of computations. SIMD enables processors to perform the same operation on multiple data points simultaneously, making it ideal for tasks like numerical computations, image processing, and multimedia operations. Since JDK 16, developers have had access to the Vector API, a feature that allows them to harness the power of SIMD directly within their Java applications. In this article, we'll explore what the Vector API is, how it works, and provide examples demonstrating its usage.

Understanding SIMD and Its Importance

Before delving into the Vector API, it's crucial to understand the concept of SIMD and why it's important for performance optimization. Traditional CPUs execute instructions serially, meaning each instruction operates on a single data element at a time. However, many modern CPUs include SIMD instruction sets, such as SSE (Streaming SIMD Extensions) and AVX (Advanced Vector Extensions), which enable parallel processing of multiple data elements within a single instruction. This parallelism is particularly beneficial for tasks involving repetitive operations on large arrays or datasets. By leveraging SIMD instructions, developers can achieve significant performance gains by exploiting the inherent parallelism of the underlying hardware.

Introducing the Vector API

The Vector API, introduced in Java 16 as an incubator module (jdk.incubator.vector) and re-incubated in subsequent releases, including Java 17 (it is still an incubating API, not yet a standard feature), provides a set of classes and methods for performing SIMD operations directly within Java code. The API abstracts the low-level details of SIMD instructions and allows developers to write portable and efficient vectorized code without resorting to platform-specific assembly language or external libraries. The core components of the Vector API include vector types, operations, and factories.
Vector types represent SIMD vectors of different sizes and data types, such as integers and floating-point numbers (vector masks carry the boolean, per-lane results of comparisons). Operations include arithmetic, logical, and comparison operations that can be performed on vector elements. Factories are used to create vector instances and perform conversions between vector and scalar types.

Getting Started With the Vector API

To utilize the Vector API, your environment must be equipped with JDK 17 or later. The API resides within the jdk.incubator.vector package; because it lives in an incubator module, you must compile and run with --add-modules jdk.incubator.vector. A simple example of adding two float arrays using the Vector API demonstrates its ease of use and efficiency over traditional loop-based methods.

Example 1: Adding Two Arrays Element-Wise

To demonstrate the usage of the Vector API, let's consider a simple example of adding two arrays element-wise using SIMD instructions. We'll start by creating two arrays of floating-point numbers and then use the Vector API to add them together in parallel.
```java
import java.util.Arrays;
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorExample {
    public static void main(String[] args) {
        int length = 8; // Number of elements in the arrays
        float[] array1 = new float[length];
        float[] array2 = new float[length];
        float[] result = new float[length];

        // Initialize arrays with random values
        // (Arrays.setAll has no float[] overload, so use a plain loop)
        for (int j = 0; j < length; j++) {
            array1[j] = (float) Math.random();
            array2[j] = (float) Math.random();
        }

        // Perform addition using the Vector API
        VectorSpecies<Float> species = FloatVector.SPECIES_256;
        int upperBound = species.loopBound(length);
        int i = 0;
        for (; i < upperBound; i += species.length()) {
            FloatVector a = FloatVector.fromArray(species, array1, i);
            FloatVector b = FloatVector.fromArray(species, array2, i);
            FloatVector sum = a.add(b);
            sum.intoArray(result, i);
        }
        // Scalar tail loop for any remaining elements
        for (; i < length; i++) {
            result[i] = array1[i] + array2[i];
        }

        // Print the result
        System.out.println("Result: " + Arrays.toString(result));
    }
}
```

In this example, we create two arrays, array1 and array2, containing random floating-point numbers. We then use the FloatVector class to perform SIMD addition of corresponding elements from the two arrays. The species.loopBound(length) call returns the largest multiple of the vector length that fits within the array, and a scalar tail loop handles any leftover elements.

Example 2: Dot Product Calculation

Another common operation that benefits from SIMD parallelism is the dot product calculation of two vectors. Let's demonstrate how to compute the dot product of two float arrays using the Vector API.
```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class DotProductExample {
    public static void main(String[] args) {
        int length = 8; // Number of elements in the arrays
        float[] array1 = new float[length];
        float[] array2 = new float[length];

        // Initialize arrays with random values
        for (int j = 0; j < length; j++) {
            array1[j] = (float) Math.random();
            array2[j] = (float) Math.random();
        }

        // Perform dot product using the Vector API
        VectorSpecies<Float> species = FloatVector.SPECIES_256;
        int upperBound = species.loopBound(length);
        int i = 0;
        FloatVector sum = FloatVector.zero(species);
        for (; i < upperBound; i += species.length()) {
            FloatVector a = FloatVector.fromArray(species, array1, i);
            FloatVector b = FloatVector.fromArray(species, array2, i);
            sum = sum.add(a.mul(b));
        }
        // Horizontal reduction across lanes, then a scalar tail loop
        float dotProduct = sum.reduceLanes(VectorOperators.ADD);
        for (; i < length; i++) {
            dotProduct += array1[i] * array2[i];
        }

        System.out.println("Dot Product: " + dotProduct);
    }
}
```

In this example, we compute the dot product of the two arrays array1 and array2 using SIMD parallelism. We use the FloatVector class to perform SIMD multiplication of corresponding elements, accumulate the partial products in a vector, and then collapse the accumulator to a single value with a lane-wise reduction (reduceLanes).

Example 3: Additional Operations

Beyond basic arithmetic, the Vector API supports a broad spectrum of operations, including logical, bitwise, and conversion operations. The following example doubles the values of a vector and zeroes out any lane whose original value was 4 or less, demonstrating vector multiplication and conditional masking and showcasing the API's versatility for complex data processing tasks.
```java
import java.util.Arrays;
import jdk.incubator.vector.IntVector;
import jdk.incubator.vector.VectorMask;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class AdvancedVectorExample {
    public static void example(int[] vals) {
        VectorSpecies<Integer> species = IntVector.SPECIES_256;

        // Initialize vector from the integer array
        IntVector vector = IntVector.fromArray(species, vals, 0);

        // Perform multiplication
        IntVector doubled = vector.mul(2);

        // Build a conditional mask: true where the original value is <= 4
        VectorMask<Integer> mask = vector.compare(VectorOperators.LE, 4);

        // Blend: lanes where the mask is true are replaced with 0;
        // the remaining lanes keep their doubled value
        System.out.println(Arrays.toString(doubled.blend(0, mask).toArray()));
    }
}
```

Here, we start by defining a VectorSpecies with the type IntVector.SPECIES_256, which indicates that we are working with 256-bit integer vectors. This species choice means that, depending on the hardware, the vector can hold multiple integers within those 256 bits, allowing parallel operations on them. We then initialize our IntVector from an array of integers, vals, using this species. This step converts our scalar integer array into a vectorized form that can be processed in parallel.

Afterward, we multiply every element in our vector by 2. The mul method performs this operation in parallel on all elements held within the IntVector, effectively doubling each value. This is a significant advantage over traditional loop-based approaches, where each multiplication would be processed sequentially.

Next, we create a VectorMask by comparing each element in the original vector to the value 4, using the compare method with the VectorOperators.LE (less than or equal) operator. This operation produces a mask where each lane that holds a value of 4 or less is set to true, and all other lanes are set to false.

We then use the blend method to apply our mask to the doubled vector. This method takes two arguments: the value to blend in (0 in this case) and the mask. For each lane where the mask is true, the lane is replaced with 0; where the mask is false, the doubled value is retained. This effectively zeroes out any element in the doubled vector that originated from a value in vals of 4 or less.

Insights and Considerations

When integrating the Vector API into applications, consider the following:

- Data alignment: For optimal performance, ensure data structures are aligned with vector sizes. Misalignment can lead to performance degradation due to additional processing steps.
- Loop vectorization: Manually vectorizing loops can lead to significant performance gains, especially in nested loops or complex algorithms. However, it requires careful consideration of loop boundaries and vector sizes.
- Hardware compatibility: While the Vector API is designed to be hardware-agnostic, performance gains can vary based on the underlying hardware's SIMD capabilities. Testing and benchmarking on target hardware are essential for understanding potential performance improvements.

By incorporating these advanced examples and considerations, developers can better leverage the Vector API in Java to write more efficient, performant, and scalable applications. Whether for scientific computing, machine learning, or any compute-intensive task, the Vector API offers a powerful toolset for harnessing the full capabilities of modern hardware.

Conclusion

The Vector API in Java provides developers with a powerful tool for harnessing the performance benefits of SIMD instructions in their Java applications. By abstracting the complexities of SIMD programming, the Vector API enables developers to write efficient and portable code that takes advantage of the parallelism offered by modern CPU architectures. While the examples provided in this article demonstrate the basic usage of the Vector API, developers can explore more advanced features and optimizations to further improve the performance of their applications.
Whether it's numerical computations, image processing, or multimedia operations, the Vector API empowers Java developers to unlock the full potential of SIMD parallelism without sacrificing portability or ease of development. Experimenting with different data types, vector lengths, and operations can help developers maximize the performance benefits of SIMD in their Java applications.
In this article, learn how the Dapr project can reduce the cognitive load on Java developers and decrease application dependencies. Coding Java applications for the cloud requires not only a deep understanding of distributed systems, cloud best practices, and common patterns but also an understanding of the Java ecosystem to know how to combine many libraries to get things working. Tools and frameworks like Spring Boot have significantly impacted developer experience by curating commonly used Java libraries, for example, logging (Log4j), parsing different formats (Jackson), serving HTTP requests (Tomcat, Netty, the reactive stack), etc. While Spring Boot provides a set of abstractions, best practices, and common patterns, there are still two things that developers must know to write distributed applications. First, they must clearly understand which dependencies (clients/drivers) they must add to their applications depending on the available infrastructure. For example, they need to understand which database or message broker they need and what driver or client they need to add to their classpath to connect to it. Secondly, they must know how to configure that connection, the credentials, connection pools, retries, and other critical parameters for the application to work as expected. Understanding these configuration parameters pushes developers to know how these components (databases, message brokers, configurations stores, identity management tools) work to a point that goes beyond their responsibilities of writing business logic for their applications. Learning best practices, common patterns, and how a large set of application infrastructure components work is not bad, but it takes a lot of development time out of building important features for your application. 
In this short article, we will look into how the Dapr project can help Java developers not only implement best practices and distributed patterns out of the box but also reduce the application's dependencies and the amount of knowledge required to code their applications. We will be looking at a simple example that you can find here. This Pizza Store application demonstrates some basic behaviors that most business applications can relate to. The application is composed of three services that allow customers to place pizza orders in the system. The application stores orders in a database, in this case PostgreSQL, and uses Kafka to exchange events between the services to cover async notifications. All the asynchronous communications between the services are marked with red dashed arrows. Let's look at how to implement this with Spring Boot, and then let's add Dapr.

The Spring Boot Way

Using Spring Boot, developers can create these three services and start writing the business logic to process the orders placed by customers. Developers can use http://start.spring.io to select which dependencies their applications will have. For example, for the Pizza Store Service, they will need Spring Web (to host and serve the front end and some REST endpoints), but also the Spring Boot Actuator extension if we aim to run these services on Kubernetes. As with any application, if we want to store data, we will need a database/persistent storage, and we have many options to select from. If you look into Spring Data, you can see that Spring Data JPA provides an abstraction over SQL (relational) databases. As you can see in the previous screenshot, there are also NoSQL options and different layers of abstraction, depending on what your application is doing. If you decide to use Spring Data JPA, you are still responsible for adding the correct database driver to the application classpath.
In the case of PostgreSQL, you can also select it from the list. We face a similar dilemma when we think about exchanging asynchronous messages between the application's services — there are many options. Because we are developers and want to get things moving forward, we must make some choices here. Let's use PostgreSQL as our database and Kafka as our messaging system/broker.

I am a true believer in the Spring Boot programming model, including its abstraction layers and auto-configuration. However, as a developer, you are still responsible for ensuring that the right PostgreSQL JDBC driver and Kafka client are included in your services' classpath. While this is quite common in the Java space, there are a few drawbacks when dealing with larger applications that might consist of tens or hundreds of services.

Application and Infrastructure Dependencies Drawbacks

Looking at our simple application, we can spot a couple of challenges that application and operation teams must deal with when taking this application to production. Let's start with application dependencies and their relationship with the infrastructure components we have decided to use.

The Kafka client included in all services needs to be kept in sync with the version of the Kafka instance that the application will use. This dependency pushes developers to ensure they use the same Kafka instance version for development purposes. If we want to upgrade the Kafka instance, we need to upgrade the client as well, which means releasing every service that includes the Kafka client again. This is particularly hard because Kafka tends to be used as a shared component across different services. Databases such as PostgreSQL can be hidden behind a service and never exposed to other services directly. But imagine two or more services need to store data: if they choose different database versions, operation teams will need to deal with different stack versions, configurations, and maybe certifications for each version.
Aligning on a single version, say PostgreSQL 16.x, once again couples all the services that need to store or read persistent data with their respective infrastructure components. While versions, clients, and drivers create this coupling between applications and the available infrastructure, understanding complex configurations and their impact on application behavior is an even tougher challenge to solve. Spring Boot does a fantastic job of ensuring that all configuration can be externalized and consumed from environment variables or property files, and while this aligns perfectly with the 12-factor app principles and with container technologies such as Docker, defining the values for these configuration parameters is the core problem. Developers using different connection pool sizes, or retry and reconnection mechanisms configured differently across environments, are still, to this day, common issues when moving the same application from development environments to production. How to configure Kafka and PostgreSQL for this example will depend a lot on how many concurrent orders the application receives and how many resources (CPU and memory) the application has available to run. Once again, learning the specifics of each infrastructure component is not a bad thing for developers; still, it gets in the way of implementing new services and new functionality for the store.

Decoupling Infrastructure Dependencies and Reusing Best Practices With Dapr

What if we could extract best practices, configurations, and the decision of which infrastructure components we need behind a set of APIs that application developers can consume without worrying about which driver/client they need, or how to configure connections that are efficient, secure, and work across environments? This is not a new idea.
Any company dealing with complex infrastructure and multiple services that need to connect to it will sooner or later implement an abstraction layer on top of common services that developers can use. The main problem is that building those abstractions, and then maintaining them over time, is hard, costs development time, and tends to get bypassed by developers who don't agree with or like the features provided.

This is where Dapr offers a set of building blocks to decouple your applications from infrastructure. Dapr Building Block APIs allow you to set up different component implementations and configurations without exposing developers to the hassle of choosing the right drivers or clients to connect to the infrastructure. Developers focus on building their applications by just consuming APIs. As you can see in the diagram, developers don't need to know about "infrastructure land," as they can consume and trust APIs to, for example, store and retrieve data or publish and subscribe to events. This separation of concerns allows operation teams to provide consistent configurations across environments, where we may want to use another version of PostgreSQL or Kafka, or a cloud provider service such as Google Pub/Sub. Dapr uses its component model to define these configurations without affecting the application's behavior and without pushing developers to worry about any of those parameters or the client/driver version they need to use.

Dapr for Spring Boot Developers

So, how does this look in practice? Dapr typically runs on Kubernetes, meaning you need a Kubernetes cluster to install it. Learning how Dapr works and how to configure it might be too complicated and not related at all to developer tasks like building features. For development purposes, you can use the Dapr CLI, a command-line tool designed to be language agnostic, which allows you to run Dapr locally alongside your applications.
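For reference, this is roughly what the component model mentioned above looks like in practice: the binding to a concrete state store lives in a small YAML file that operators can swap per environment without touching application code. The sketch below describes a PostgreSQL-backed state store; the component name and connection values are placeholders, not taken from the example project:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kvstore            # the name the application refers to through the API
spec:
  type: state.postgresql   # swap the backing implementation here, not in code
  version: v1
  metadata:
  - name: connectionString
    value: "host=localhost user=postgres password=example port=5432 database=dapr_db"
```

Changing the backing store (say, to Redis or a cloud provider's service) means changing `spec.type` and its metadata — the application keeps calling the same API.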
I like the Dapr CLI, but once again, you will need to learn how to use it, how to configure it, and how it connects to your application. As a Spring Boot developer, adding a new command-line tool feels strange, as it is not integrated with the tools I am used to or with my IDE. If I see that I need to download a new CLI, or if I depend on deploying my apps into a Kubernetes cluster even to test them, I would probably step away and look for other tools and projects. That is why the Dapr community has worked so hard to integrate more natively with Spring Boot. These integrations tap seamlessly into the Spring Boot ecosystem without adding new tools or steps to your daily work.

Let's see how this works with concrete examples. You can add the following dependency to your Spring Boot application; it integrates Dapr with Testcontainers:

```xml
<dependency>
    <groupId>io.diagrid.dapr</groupId>
    <artifactId>dapr-spring-boot-starter</artifactId>
    <version>0.10.7</version>
</dependency>
```

View the repository here. Testcontainers (now part of Docker) is a popular tool in Java for working with containers, primarily in tests — specifically integration tests that use containers to set up complex infrastructure. Our three Pizza Spring Boot services have the same dependency. This allows developers to enable their Spring Boot applications to consume the Dapr Building Block APIs for local development without any Kubernetes, YAML, or configuration needed. Once you have this dependency in place, you can start using the Dapr SDK to interact with the Dapr Building Block APIs — for example, if you want to store an incoming order using the Statestore APIs. Here, `STATESTORE_NAME` is the name of a configured Statestore component, `KEY` is just the key that we want to use to store the order, and `order` is the order that we received from the Pizza Store front end.
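The store call described above was omitted here; with the real Dapr Java SDK it is essentially a one-liner of the form `client.saveState(STATESTORE_NAME, KEY, order).block()` on an `io.dapr.client.DaprClient`. The sketch below imitates that call shape with a tiny in-memory stand-in class — purely illustrative, so the snippet runs without a Dapr sidecar or a configured state store:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Purely illustrative stand-in for io.dapr.client.DaprClient: same call
// shape as saveState(storeName, key, value), but backed by a map instead
// of a Dapr sidecar talking to a Statestore component.
class FakeStateStoreClient {
    private final Map<String, Object> store = new ConcurrentHashMap<>();

    void saveState(String storeName, String key, Object value) {
        store.put(storeName + "/" + key, value);
    }

    Object getState(String storeName, String key) {
        return store.get(storeName + "/" + key);
    }
}

public class StateStoreSketch {
    // Must match the name of the configured Statestore component (assumed here)
    static final String STATESTORE_NAME = "kvstore";

    public static Object saveAndReadBack(String key, Object order) {
        FakeStateStoreClient client = new FakeStateStoreClient();
        // With the real SDK this would be: client.saveState(STATESTORE_NAME, key, order).block();
        client.saveState(STATESTORE_NAME, key, order);
        return client.getState(STATESTORE_NAME, key);
    }

    public static void main(String[] args) {
        System.out.println(saveAndReadBack("order-1", "margherita x2")); // prints margherita x2
    }
}
```

The point of the real API is exactly this shape: the application names a component and a key; which database sits behind the component is decided by configuration, not code.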
Similarly, if you want to publish events to other services, you can use the PubSub Dapr API — for example, to emit an event that contains the order as the payload. The publishEvent API publishes an event containing the `order` as payload to the Dapr PubSub component named `PUBSUB_NAME`, inside the specific topic indicated by `PUBSUB_TOPIC`.

Now, how is this going to work? How does Dapr store state when we call the saveState() API, and how are events published when we call publishEvent()? By default, the Dapr SDK will call the Dapr API endpoints on localhost, as Dapr was designed to run alongside our applications. For development purposes, to enable Dapr for your Spring Boot application, you can use one of the two built-in profiles: DaprBasicProfile or DaprFullProfile. The Basic profile provides access to the Statestore and PubSub APIs, but more advanced features such as Actors and Workflows will not work. If you want access to all the Dapr Building Blocks, you can use the Full profile. Both of these profiles use in-memory implementations for the Dapr components, making your applications faster to bootstrap.

The dapr-spring-boot-starter was created to minimize the amount of Dapr knowledge developers need before they start using it in their applications. For this reason, besides the dependency mentioned above, a test configuration is required in order to select which Dapr profile we want to use. Since Spring Boot 3.1.x, you can define a Spring Boot application that is used for test purposes. The idea is to allow tests to set up your application with everything it needs for testing. From within the test packages (`src/test/<package>`), you can define a new @SpringBootApplication class — in this case, configured to use a Dapr profile. As you can see, this is just a wrapper for our PizzaStore application that adds a configuration including the DaprBasicProfile.
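Backing up to the publish call referenced at the top of this section (its snippet was also omitted here): with the real Dapr Java SDK it is along the lines of `client.publishEvent(PUBSUB_NAME, PUBSUB_TOPIC, order).block()` on a `DaprClient`. The runnable sketch below imitates that call shape with an in-memory stand-in — purely illustrative, so it can be exercised without a broker or sidecar; the component and topic names are assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for the Dapr PubSub building block:
// publishEvent(pubsubName, topic, payload) appends to a per-topic list
// instead of going through a Dapr sidecar to Kafka.
class FakePubSubClient {
    private final Map<String, List<Object>> topics = new HashMap<>();

    void publishEvent(String pubsubName, String topic, Object payload) {
        topics.computeIfAbsent(pubsubName + "/" + topic, k -> new ArrayList<>()).add(payload);
    }

    List<Object> eventsOn(String pubsubName, String topic) {
        return topics.getOrDefault(pubsubName + "/" + topic, new ArrayList<>());
    }
}

public class PubSubSketch {
    static final String PUBSUB_NAME = "pubsub";   // component name (e.g., backed by Kafka); assumed
    static final String PUBSUB_TOPIC = "topic";   // topic name; assumed

    public static int publishAndCount(Object order) {
        FakePubSubClient client = new FakePubSubClient();
        // With the real SDK this would be: client.publishEvent(PUBSUB_NAME, PUBSUB_TOPIC, order).block();
        client.publishEvent(PUBSUB_NAME, PUBSUB_TOPIC, order);
        return client.eventsOn(PUBSUB_NAME, PUBSUB_TOPIC).size();
    }

    public static void main(String[] args) {
        System.out.println(publishAndCount("order-ready-event")); // prints 1
    }
}
```

Again, the application only names a PubSub component and a topic; whether that component is Kafka, RabbitMQ, or an in-memory implementation for tests is decided by configuration.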
With the DaprBasicProfile enabled, whenever we start our application for testing purposes, all the components that the Dapr APIs need will be started for our application to consume. If you need more advanced Dapr setups, you can always create your own domain-specific Dapr profiles. Another advantage of these test configurations is that we can also start the application with them for local development purposes by running `mvn spring-boot:test-run`. You can see how Testcontainers transparently starts the `daprio/daprd` container. As a developer, how that container is configured is not important as long as we can consume the Dapr APIs.

I strongly recommend checking out the full example here, where you can run the application on Kubernetes with Dapr installed, or start each service and test locally using Maven. If this example is too complex for you, I recommend these blog posts, where I create a very simple application from scratch:

- Using the Dapr StateStore API with Spring Boot
- Deploying and configuring our simple application in Kubernetes
Real-time communication has become an essential aspect of modern applications, enabling users to interact with each other instantly. From video conferencing and online gaming to live customer support and collaborative editing, real-time communication is at the heart of today's digital experiences. In this article, we will explore popular real-time communication protocols, discuss when to use each one, and provide examples and code snippets in JavaScript to help developers make informed decisions.

WebSocket Protocol

WebSocket is a widely used protocol that enables full-duplex communication between a client and a server over a single, long-lived connection. This protocol is ideal for real-time applications that require low latency and high throughput, such as chat applications, online gaming, and financial trading platforms.

Example

Let's create a simple WebSocket server using Node.js and the ws library.

1. Install the ws library:

```shell
npm install ws
```

2. Create a WebSocket server in server.js:

```javascript
const WebSocket = require('ws');
const server = new WebSocket.Server({ port: 8080 });

server.on('connection', (socket) => {
  console.log('Client connected');
  socket.on('message', (message) => {
    console.log(`Received message: ${message}`);
  });
  socket.send('Welcome to the WebSocket server!');
});
```

3. Run the server:

```shell
node server.js
```

WebRTC

WebRTC (Web Real-Time Communication) is an open-source project that enables peer-to-peer communication directly between browsers or other clients. WebRTC is suitable for applications that require high-quality audio, video, or data streaming, such as video conferencing, file sharing, and screen sharing.

Example

Let's create a simple WebRTC-based video chat application using HTML and JavaScript.
In index.html:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>WebRTC Video Chat</title>
  </head>
  <body>
    <video id="localVideo" autoplay muted></video>
    <video id="remoteVideo" autoplay></video>
    <script src="main.js"></script>
  </body>
</html>
```

In main.js:

```javascript
const localVideo = document.getElementById('localVideo');
const remoteVideo = document.getElementById('remoteVideo');

// Get media constraints
const constraints = { video: true, audio: true };

// Create a new RTCPeerConnection
const peerConnection = new RTCPeerConnection();

// Set up event listeners
peerConnection.onicecandidate = (event) => {
  if (event.candidate) {
    // Send the candidate to the remote peer
  }
};

peerConnection.ontrack = (event) => {
  remoteVideo.srcObject = event.streams[0];
};

// Get user media and set up the local stream
navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
  localVideo.srcObject = stream;
  stream.getTracks().forEach((track) => peerConnection.addTrack(track, stream));
});
```

MQTT

MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe protocol designed for low-bandwidth, high-latency, or unreliable networks. MQTT is an excellent choice for IoT devices, remote monitoring, and home automation systems.

Example

Let's create a simple MQTT client using JavaScript and the mqtt library.

1. Install the mqtt library:

```shell
npm install mqtt
```

2. Create an MQTT client in client.js:

```javascript
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://test.mosquitto.org');

client.on('connect', () => {
  console.log('Connected to the MQTT broker');
  // Subscribe to a topic
  client.subscribe('myTopic');
  // Publish a message
  client.publish('myTopic', 'Hello, MQTT!');
});

client.on('message', (topic, message) => {
  console.log(`Received message on topic ${topic}: ${message.toString()}`);
});
```

3. Run the client:

```shell
node client.js
```

Conclusion

Choosing the right real-time communication protocol depends on the specific needs of your application.
WebSocket is ideal for low-latency, high-throughput applications; WebRTC excels at peer-to-peer audio, video, and data streaming; and MQTT is perfect for IoT devices and scenarios with limited network resources. By understanding the strengths and weaknesses of each protocol and using the JavaScript code examples provided, developers can create better, more efficient real-time communication experiences. Happy learning!
In modern application development, delivering personalized and controlled user experiences is paramount. This necessitates the ability to toggle features dynamically, enabling developers to adapt their applications in response to changing user needs and preferences. Feature flags, also known as feature toggles, have emerged as a critical tool for achieving this flexibility. These flags empower developers to activate or deactivate specific functionalities based on various criteria such as user access, geographic location, or user behavior.

React, a popular JavaScript framework known for its component-based architecture, is widely adopted for building user interfaces. Its modular nature lets developers build reusable, self-contained UI components, which makes it easier to manage complex user interfaces — and also makes React applications particularly well-suited for integrating feature flags seamlessly: adding feature flags at the component level makes the components easier to control. In this guide, we'll explore how to integrate feature flags into your React applications using IBM App Configuration, a robust platform designed to manage application features and configurations. IBM App Configuration can be integrated with any framework, be it React, Angular, Java, Go, etc. By leveraging feature flags and IBM App Configuration, developers can unlock enhanced flexibility and control in their development process, ultimately delivering tailored user experiences with ease.

Integrating With IBM App Configuration

IBM App Configuration provides a comprehensive platform for managing feature flags, environments, collections, segments, and more. Before delving into the tutorial, it's important to understand why integrating your React application with IBM App Configuration is necessary and what benefits it offers.
By integrating with IBM App Configuration, developers gain the ability to dynamically toggle features on and off within their applications. This capability is crucial for modern application development, as it allows developers to deliver controlled and personalized user experiences. With feature flags, developers can activate or deactivate specific functionalities based on factors such as user access, geographic location, or user preferences. This not only enhances user experiences but also provides developers with greater flexibility and control over feature deployments. Additionally, IBM App Configuration offers segments for targeted rollouts, enabling developers to gradually release features to specific groups of users. Overall, integrating with IBM App Configuration empowers developers to adapt their applications' behavior in real time, improving agility and enhancing user satisfaction.

To begin integrating your React application with App Configuration, follow these steps:

1. Create an Instance

Start by creating an instance of IBM App Configuration on cloud.ibm.com. Within the instance, create an environment, such as Dev, to manage your configurations. Then create a collection. Collections come in handy when multiple feature flags are created for various projects: each project can have a collection in the same App Configuration instance, and you can tag feature flags with the collection they belong to.

2. Generate Credentials

Access the service credentials section and generate new credentials. These credentials will be required to authenticate your React application with App Configuration.

3. Install SDK

In your React application, install the IBM App Configuration React SDK using npm:

```shell
npm i ibm-appconfiguration-react-client-sdk
```

4. Configure Provider

In your index.js or App.js, wrap your application component with AppConfigProvider to enable AppConfig within your React app.
The Provider must wrap the application at its top level to ensure the entire application has access. The AppConfigProvider requires various parameters, as shown in the screenshot below; all of these values can be found in the credentials you created.

5. Access Feature Flags

Now, within your App Configuration instance, create feature flags to control specific functionalities. Copy the feature flag ID for further integration into your code.

Integrating Feature Flags Into React Components

Once you've set up AppConfig in your React application, you can seamlessly integrate feature flags into your components.

Enable Components Dynamically

Use the feature flag ID copied from the App Configuration instance to toggle specific components based on the flag's status. This allows you to enable or disable features dynamically without redeploying your application.

Utilizing Segments for Targeted Rollouts

IBM App Configuration offers segments to target specific groups of users, enabling personalized experiences and controlled rollouts. Here's how to leverage segments effectively:

Define Segments

Create segments based on user properties, behaviors, or other criteria to target specific user groups.

Rollout Percentage

Adjust the rollout percentage to control the share of users who receive the feature within a targeted segment. This enables gradual rollouts or A/B testing scenarios.

Example

If the rollout percentage is set to 100% and a particular segment is targeted, the feature is rolled out to all the users in that segment. If the rollout percentage is set between 1% and 99% — say 60% — the feature is rolled out to a random 60% of the users in that segment. If the rollout percentage is set to 0%, the feature is rolled out to none of the users in that segment.
Conclusion Integrating feature flags with IBM App Configuration empowers React developers to implement dynamic feature toggling and targeted rollouts seamlessly. By leveraging feature flags and segments, developers can deliver personalized user experiences while maintaining control over feature deployments. Start integrating feature flags into your React applications today to unlock enhanced flexibility and control in your development process.
After JUnit 5 was released, a lot of developers simply added this awesome new library to their projects because, unlike previous versions, it does not force a migration from JUnit 4 to 5: you just include the new library in your project, and thanks to JUnit 5's engine-based architecture, your new tests can use JUnit 5 while the older ones, written with JUnit 4 or 3, keep running without problems.

But what can happen in a big project — a project built 10 years ago — with two versions of JUnit running in parallel? New developers start working on the project, some of them with JUnit experience, others without. New tests are created using JUnit 5, new tests are created using JUnit 4, and at some point a developer who doesn't know the difference, while adding a new scenario to an existing JUnit 5 test class, includes a JUnit 4 annotation. The test class becomes a mix — some @Test annotations from JUnit 4 and some from JUnit 5 — and each day it gets more difficult to remove the JUnit 4 library.

So, how do you solve this problem? First of all, you need to show your team what comes from JUnit 5 and what comes from JUnit 4, so that new tests are created using JUnit 5 instead of JUnit 4. After that, it is necessary to follow the Boy Scout Rule: whenever developers touch a JUnit 4 test, they must migrate it to JUnit 5. Let's see the main changes released in JUnit 5.

It all starts with the name: in JUnit 5, you don't see packages called org.junit5, but rather org.junit.jupiter. In short, everything you see with "Jupiter" in it is from JUnit 5. They chose this name because Jupiter starts with "JU" and is the fifth planet from the Sun.

Another change concerns @Test: this annotation was moved to a new package, org.junit.jupiter.api, and attributes like `expected` and `timeout` are not used anymore; use the new mechanisms instead. For example, for timeouts there is now a dedicated annotation: @Timeout(value = 100, unit = TimeUnit.MILLISECONDS).
Another change is that neither test methods nor test classes need to be public. Instead of using @Before and @After in your test configuration, you now use @BeforeEach and @AfterEach, and you also have @BeforeAll and @AfterAll. To ignore tests, you now use @Disabled instead of @Ignore.

A great feature released in JUnit 5 is the @ParameterizedTest annotation, which makes it possible to run one test multiple times with different arguments. For example, if you want to test a method that creates some object and validate that its fields are filled correctly, you can do the following:

```java
@ParameterizedTest
@MethodSource("getInvalidSources")
void shouldCheckInvalidFields(String name, String job, String expectedMessage) {
    Throwable exception = catchThrowable(() -> new Client(name, job));
    assertThat(exception).isInstanceOf(IllegalArgumentException.class)
            .hasMessageContaining(expectedMessage);
}

static Stream<Arguments> getInvalidSources() {
    return Stream.of(
            Arguments.arguments("Jean Donato", "", "Job is empty"),
            Arguments.arguments("", "Dev", "Name is empty"));
}
```

There are many more nice features in JUnit 5; I recommend checking out the JUnit 5 User Guide to see what is useful for your project.

Now that all developers know what changed in JUnit 5, you can start the process of removing JUnit 4 from your project. If you are still using JUnit 4 in 2024 and your project is big, you will probably have some dependencies that use JUnit 4, so I recommend analyzing your libraries to check whether any of them do. In the image below, I'm using the Dependency Analyzer from IntelliJ. As you can see, jersey-test uses JUnit 4 — that is, even if I remove JUnit 4 from my project, JUnit 4 will still be available because of Jersey. The easiest fix would be to bump Jersey to 2.35, because JUnit 5 support was introduced in jersey-test 2.35, but I can't update the jersey-test framework because other libraries in my project would break.
So, in this case, what can I do? I can exclude JUnit from Jersey with a Maven dependency exclusion (like the image below). That way, JUnit 4 will no longer be used — only our JUnit 5. However, when you run tests that use Jersey, their setup will not be executed, because Jersey has methods — setUp and tearDown — annotated with JUnit 4's @Before and @After. To solve this, you can create a "configuration class" that extends JerseyTest and implements setUp and tearDown with @BeforeEach and @AfterEach, calling super.setUp() and super.tearDown():

```java
public class JerseyConfigToJUnit5 extends JerseyTest {

    @BeforeEach
    public void setUp() throws Exception {
        super.setUp();
    }

    @AfterEach
    public void tearDown() throws Exception {
        super.tearDown();
    }
}
```

Once you have checked your libraries and none of them still depends on JUnit 4, you can finally migrate all your tests to JUnit 5. For this process, there is a good tool that saves you a lot of work: OpenRewrite, an automated refactoring ecosystem for source code. It will change all the old packages, the old annotations, and everything else to the new versions.

That's it, folks — now you and your teammates can enjoy JUnit 5 and relax knowing that new tests will be created with JUnit 5 and the project will not become a Frankenstein. So remember: keep your project up to date, because if you neglect your libraries, each day it will be more difficult to update them. Always use specifications and frameworks that follow the specifications, and keep a good design in your code; this lets you change and evolve with ease.