Frameworks

A framework is a collection of code that is leveraged in the development process by providing ready-made components. Frameworks establish architectural patterns and structures that help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.

Latest Refcards and Trend Reports
Trend Report: Modern Web Development
Refcard #206: Angular Essentials
Refcard #333: Drupal 9 Essentials

DZone's Featured Frameworks Resources

Microservices With Apache Camel and Quarkus

By Nicolas Duminil
Apache Camel is anything but a new arrival in the area of Java enterprise stacks. Created by James Strachan in 2007, it aimed to be the implementation of the famous "EIP book" (Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf, published by Addison-Wesley in October 2003). After becoming one of the most popular Java integration frameworks by the early 2010s, Apache Camel was at the point of getting lost in the folds of history in favor of a new architecture model known as the Enterprise Service Bus (ESB), perceived at the time as a panacea of Service-Oriented Architecture (SOA). But after the SOA fiasco, Apache Camel (which, meanwhile, has been adopted and distributed by several editors, including but not limited to Progress Software and Red Hat, under commercial names like Mediation Router or Fuse) is making a powerful comeback and is still here, even stronger for the next decade of integration. This comeback is also made easier by Quarkus, the new supersonic and subatomic Java platform. This article proposes a very convenient microservices implementation approach using Apache Camel as the Java development tool, Quarkus as the runtime, and different Kubernetes (K8s) clusters - from local ones like Minikube to PaaS like EKS (Elastic Kubernetes Service), OpenShift, or Heroku - as the infrastructure.

The Project

The project used here to illustrate the point is a simplified money transfer application consisting of four microservices, as follows:

aws-camelk-file: This microservice polls a local folder and, as soon as an XML file arrives, stores it in a newly created AWS S3 bucket whose name starts with mys3 followed by a random suffix.
aws-camelk-s3: This microservice listens on the first AWS S3 bucket found whose name starts with mys3. As soon as an XML file comes in, it tokenizes, splits, and streams it, before sending each message to an AWS SQS (Simple Queue Service) queue named myQueue.
aws-camelk-sqs: This microservice subscribes to messages from the AWS SQS queue named myQueue and, for each incoming message, unmarshals it from XML to Java objects, then marshals it to JSON format, before sending it to the REST service below.
aws-camelk-jaxrs: This microservice exposes a REST API with endpoints for CRUD-ing money transfer orders. It consumes/produces JSON input/output data. It uses a service that exposes an interface defined by the aws-camelk-api module. Several implementations of this interface might be present but, for simplicity's sake, we're using the one defined by the aws-camelk-provider module, named DefaultMoneyTransferProvider, which only CRUDs the money transfer order requests in an in-memory hash map.

The project's source code may be found here. It's a multi-module Maven project, and the modules are explained below.
The most important Maven dependencies are shown below:

XML
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkus.platform</groupId>
      <artifactId>quarkus-bom</artifactId>
      <version>${quarkus.platform.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>io.quarkus.platform</groupId>
      <artifactId>quarkus-camel-bom</artifactId>
      <version>${quarkus.platform.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-bom</artifactId>
      <version>1.12.454</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

The Module aws-camelk-model

This module defines the application's domain, which consists of business objects like MoneyTransfer, Bank, BankAddress, etc. One of the particularities of integration applications is that the business domain is legacy, generally designed decades ago by business analysts and experts who knew nothing about the tool-set that you, as a software developer, are currently using. This legacy takes various forms, like Excel sheets and CSV or XML files. Hence, we consider here the classical scenario in which our domain model is defined as an XML grammar, described by a couple of XSD files. These XSD files are in the src/main/resources/xsd directory and are processed by the jaxb2-maven-plugin in order to generate the associated Java objects. The listing below shows the plugin's configuration:

XML
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxb2-maven-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>org.jvnet.jaxb2_commons</groupId>
      <artifactId>jaxb2-value-constructor</artifactId>
      <version>3.0</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <goals>
        <goal>xjc</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <packageName>fr.simplex_software.quarkus.camel.integrations.jaxb</packageName>
    <sources>
      <source>${basedir}/src/main/resources/xsd</source>
    </sources>
    <arguments>
      <argument>-Xvalue-constructor</argument>
    </arguments>
    <extension>true</extension>
  </configuration>
</plugin>

Here, we're running the xjc schema compiler tool to generate Java classes in the target package fr.simplex_software.quarkus.camel.integrations.jaxb, based on the XSD schema present in the project's src/main/resources/xsd directory. By default, the automatically generated Java classes carrying JAXB (Java Architecture for XML Binding) annotations don't have constructors, which makes them a bit hard to use, especially for classes with lots of properties that must otherwise be instantiated via setters. Accordingly, in the listing above, we configure the jaxb2-maven-plugin with a dependency on the jaxb2-value-constructor artifact. In doing that, we ask the xjc compiler to generate full-argument constructors for every subsequently JAXB-processed class. The final result of this module is a JAR file containing our domain model in the form of a Java class hierarchy that will be used as a dependency by all the other modules of the application. This approach is much more practical than manually implementing (again, in Java) a domain model that is already defined by the XML grammar.

The Module aws-camelk-api

This module is very simple, as it only consists of an interface. This interface, named MoneyTransferFacade, is the one exposed by the money transfer service, and the service has to implement it.
In practice, such a service might have many different implementations, depending on the nature of the money transfer, the bank, the customer type, and many other possible criteria. In our example, we only consider a simple implementation of this interface, as shown in the next section.

The Module aws-camelk-provider

This module defines the service provider for the MoneyTransferFacade interface. The SPI (Service Provider Interface) pattern used here is a very powerful one, allowing the service interface to be decoupled from its implementation. Our implementation of the MoneyTransferFacade interface is the class DefaultMoneyTransferProvider, and it's also very simple, as it only CRUDs the money transfer orders in an in-memory hash map.

The Module aws-camelk-jaxrs

As opposed to the previous modules, which are only common class libraries, this module and the next ones are Quarkus runnable services. This means that they use the quarkus-maven-plugin in order to create an executable JAR. This module, as its name implies, exposes a JAX-RS (Java API for RESTful Web Services) API to handle money transfer orders. Quarkus comes with RESTEasy, Red Hat's full implementation of the JAX-RS specification, and this is what we're using here. There is nothing special to mention as far as the class MoneyTransferResource is concerned, which implements the REST API. It offers endpoints to create, read, update, and delete money transfer orders and, additionally, two endpoints that aim at checking the application's liveness and readiness.

The Module aws-camelk-file

This module is the first one in the Camel pipeline, which consists of conveying XML files containing money transfer orders from their initial landing directory to the REST API that processes them on behalf of the service provider. It uses the Camel Java DSL (Domain Specific Language) to do that, as shown in the listing below:

Java
fromF("file://%s?include=.*.xml&delete=true&idempotent=true&bridgeErrorHandler=true", inBox)
  .doTry()
    .to("validator:xsd/money-transfers.xsd")
    .setHeader(AWS2S3Constants.KEY, header(FileConstants.FILE_NAME))
    .to(aws2S3(s3Name + RANDOM).autoCreateBucket(true).useDefaultCredentialsProvider(true))
  .doCatch(ValidationException.class)
    .log(LoggingLevel.ERROR, failureMsg + " ${exception.message}")
  .doFinally()
  .end();

This code polls an input directory, defined as an external property, for the presence of any XML file (files having the .xml extension). Once such a file lands in the given directory, it is validated against the schema defined in the src/main/resources/xsd/money-transfers.xsd file. Should it be valid, it is stored in an AWS S3 bucket whose name is computed as an externally defined constant followed by a random suffix. Everything is encapsulated in a try...catch structure to consistently process the exception cases. Here, in order to define external properties, we use the Eclipse MicroProfile Configuration specification (among others) implemented by Quarkus, as shown in the listing below:

Java
private static final String RANDOM = new Random().ints('a', 'z')
  .limit(5)
  .collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append)
  .toString();
@ConfigProperty(name="inBox")
String inBox;
@ConfigProperty(name="s3Name")
String s3Name;

The RANDOM suffix is generated on behalf of the java.util.Random class, and the properties inBox and s3Name are injected from the src/main/resources/application.properties file.
The reason for using an S3 bucket name composed of a constant and a random suffix is that AWS S3 bucket names need to be globally unique and, accordingly, we need such a random suffix in order to guarantee uniqueness.

The Module aws-camelk-s3

This module implements a Camel route that is triggered by the AWS infrastructure whenever a file lands in the dedicated S3 bucket. Here is the code:

Java
from(aws2S3(s3BucketName).useDefaultCredentialsProvider(true))
  .split().tokenizeXML("moneyTransfer").streaming()
  .to(aws2Sqs(queueName).autoCreateQueue(true).useDefaultCredentialsProvider(true));

Once triggered, the Camel route tokenizes and splits the input XML file, order by order. The idea is that an input file may contain several money transfer orders, and these orders are to be processed separately. Hence, each single money transfer order issued from this tokenizing and splitting process is sent to the AWS SQS queue whose name is given by the value of the queueName property, injected from the application.properties file.

The Module aws-camelk-sqs

This is the last module of our Camel pipeline.

Java
from(aws2Sqs(queueName).useDefaultCredentialsProvider(true))
  .unmarshal(jaxbDataFormat)
  .marshal().json(JsonLibrary.Jsonb)
  .setHeader(Exchange.HTTP_METHOD, constant("POST"))
  .to(http(uri));

This Camel route subscribes to the AWS SQS queue whose name is given by the queueName property, and it unmarshals each XML message it receives into Java objects. Given that each XML message contains a money transfer order, it is unmarshaled into the corresponding MoneyTransfer Java class instance. Then, once unmarshalled, each MoneyTransfer instance is marshaled again into a JSON payload. This is required because our REST interface consumes JSON payloads and, as opposed to the standard JAX-RS client, which is able to automatically perform conversions from Java objects to JSON, the http() Camel component used here isn't. Hence, we need to do it manually. By setting the exchange's header to the POST constant, we set the type of HTTP request that will be sent to the REST API. Last but not least, the endpoint URI is, as usual, injected as an externally defined property from the application.properties file.

Unit Testing

Before deploying and running our microservices, we need to unit test them. The project includes a couple of unit tests for almost all its modules - from aws-camelk-model, where the domain model is tested, as well as its various conversions from/to XML/Java, to aws-camelk-jaxrs, which is our terminus microservice. Running the unit tests is simple. Just execute:

Shell
$ cd aws-camelk
$ ./delete-all-buckets.sh # Delete all the buckets named "mys3*" if any
$ ./purge-sqs-queue.sh    # Purge the SQS queue named myQueue if it exists and isn't empty
$ mvn clean package       # Clean up, run unit tests, and create JARs

A full unit test report will be displayed by the maven-surefire-plugin. In order for the unit tests to run as expected, an AWS account is required, and the AWS CLI should be installed and configured on the local box. This means that, among other things, the file ~/.aws/credentials contains your aws_access_key_id and aws_secret_access_key properties with their associated values. The reason is that the unit tests use the AWS SDK (Software Development Kit) to handle S3 buckets and SQS queues, which makes them not quite unit tests but, rather, a combination of unit and integration tests.
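To give a flavor of what such a route test can look like, here is a minimal, hypothetical sketch using camel-test-junit5. The route and endpoint URIs are illustrative stand-ins, not the project's actual ones:

Java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.test.junit5.CamelTestSupport;
import org.junit.jupiter.api.Test;

// Hypothetical sketch: exercises a trivial route in isolation with a mock endpoint.
public class MoneyTransferRouteTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                // Illustrative stand-in for the real file-to-S3 route
                from("direct:in").to("mock:out");
            }
        };
    }

    @Test
    void validOrderReachesTheTargetEndpoint() throws Exception {
        getMockEndpoint("mock:out").expectedMessageCount(1);
        template.sendBody("direct:in", "<moneyTransfer/>");
        getMockEndpoint("mock:out").assertIsSatisfied();
    }
}

The real tests in the project additionally talk to S3 and SQS through the AWS SDK, which is what blurs the line between unit and integration testing mentioned above.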
Deploying and Running

Now, to deploy and run our microservices, there are many different scenarios to consider - from simple local standalone execution to PaaS deployments like OpenShift or EKS, passing through local K8s clusters like Minikube. Accordingly, in order to avoid confusion, we have preferred to dedicate a separate post to each deployment scenario. So stay close to your browser to see where the story takes us next.
Effective Java Collection Framework: Best Practices and Tips

By Shailendra Bramhvanshi
The Java collection framework provides a variety of classes and interfaces, such as lists, sets, queues, and maps, for managing and storing collections of related objects. In this blog, we go over best practices and tips for using the Java collection framework effectively.

What Is a Collection Framework?

The Java collection framework is a key element of Java programming. To use it effectively, consider factors like utilizing the enhanced for loop, using generics, avoiding raw types, and selecting the right collection.

Choosing the Right Collection for the Task

Each collection class has its own distinct set of qualities and is designed for a particular purpose. Following are some descriptions of each kind of collection:

List: The ArrayList class is the most widely used list implementation in Java, providing resizable arrays for when it is unknown how large the collection will be.
Set: The HashSet class is the most popular implementation of a set in Java, providing uniqueness with a hash-table-based implementation.
Queue: The LinkedList class is the most popular Java implementation of a queue, allowing elements to be accessed in a specific order.
Map: The HashMap class is the most popular map implementation in Java for storing and retrieving data based on distinct keys.

Factors to Consider While Choosing a Collection

Type of data: Different collections may be more suitable depending on the kind of data that will be handled and stored.
Ordering: A list or queue is preferable to a set or map when the order of items matters.
Duplicate elements: A set or map may be a better option than a list or queue if duplicate elements are not allowed.
Performance: Performance characteristics differ between collections; by picking the right collection, you can improve the performance of your code.

Examples of Use Cases for Different Collections

List: Lists allow for the storage and modification of ordered data, such as a to-do list or shopping list.
Set: A set can be used to enforce unique items, such as email addresses.
Queue: A queue can be used to access elements in a specific order, such as handling jobs in the order they are received.
Map: A map can be used to store and access data based on unique keys, such as user preferences.

Selecting the right collection for a Java application is essential, taking into account data type, ordering, duplicate elements, and performance requirements. This will increase code effectiveness and efficiency.

Using the Correct Methods and Interfaces

In this section, the various methods and interfaces that the collection framework provides will be covered, along with some tips on how to use them effectively (a short sketch follows this list):

Choosing the right collection: The collection framework provides a variety of collection types to improve code speed and readability, such as lists, sets, queues, maps, and deques.
Using iterators: Iterators are crucial for traversing collections, but they fail fast and throw a ConcurrentModificationException if the collection is modified during iteration. Use a CopyOnWriteArrayList or ConcurrentHashMap to prevent this.
Using lambda expressions: Lambda expressions, introduced in Java 8, allow programmers to pass behavior as an argument to a method and can be combined with the filter() and map() methods of the Stream API to process collections.
Using the Stream API: The Stream API is a powerful feature in Java 8 that enables functional collection processing, parallelizable and lazy, resulting in better performance.
Using generics: Generics, introduced in Java 5, allow you to write type-safe code. They are especially useful when working with collections, as they allow you to specify the types of elements that a collection can contain; wildcard types (?) add flexibility to generic APIs.

The Java collection framework provides methods and interfaces to improve code efficiency, readability, and maintainability. Iterators, lambda expressions, the Stream API, and generics can be used to improve performance and avoid common pitfalls.
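As a quick, self-contained illustration of several of these tips (generics, set uniqueness, lambdas, and the Stream API), here is a minimal sketch; the data and names are made up for the example:

Java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class CollectionTips {
    public static void main(String[] args) {
        // Generics: the compiler guarantees only Strings go into this list.
        List<String> emails = List.of("a@x.com", "b@x.com", "a@x.com");

        // Set: enforces uniqueness of elements.
        Set<String> unique = emails.stream().collect(Collectors.toSet());

        // Stream API with lambdas: declarative filtering and mapping.
        List<String> domains = emails.stream()
                .filter(e -> e.endsWith("@x.com"))
                .map(e -> e.substring(e.indexOf('@') + 1))
                .distinct()
                .collect(Collectors.toList());

        System.out.println(unique);  // two unique addresses
        System.out.println(domains); // [x.com]
    }
}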
Best Practices for Collection Usage

In this section, we will explore some important best practices for collection usage.

Proper Initialization and Declaration of Collections

Collections should be initialized correctly before use to avoid null pointer exceptions. Use the appropriate interface or class to declare the collection for uniqueness or order.

Using Generics to Ensure Type Safety

Generics provide type safety by allowing us to specify the type of objects that can be stored in a collection, catching type-mismatch errors at compile time. When declaring a collection, specify the type using angle brackets (<>). For example, List<String> ensures that only String objects can be added to the list.

Employing the Appropriate Interfaces for Flexibility

The Java collection framework provides a variety of interfaces, allowing us to easily switch implementations and take advantage of polymorphism to write code that is more modular and reusable.

Understanding the Behavior of Different Collection Methods

It is important to understand the behavior of collection methods to use them effectively. To gain a thorough understanding, consult the Java documentation or other reliable sources. Understanding the complexity of operations like contains() and remove() can make a difference in code performance.

Handling Null Values and Empty Collections

To prevent unexpected errors or undesirable behavior, it's crucial to handle null values and empty collections properly. Check that collections are not null and contain the required data before using them.

Memory and Performance Optimization

In this section, we will explore techniques and best practices to optimize memory utilization and enhance the performance of collections in Java, as follows:

1. Minimizing the Memory Footprint With the Right Collection Implementation: Memory usage can be significantly decreased by selecting the best collection implementation for the job. When frequent random access is required, for instance, using an ArrayList rather than a LinkedList can reduce memory overhead.
2. Efficient Iteration Over Collections: Iterating over collections is common practice, so picking the most effective iteration strategy is crucial. Iterator-based loops or enhanced for-each loops can offer better performance than conventional index-based loops.
3. Considering Alternative Collection Libraries for Specific Use Cases: The Java collection framework offers a wide range of collection types, but in some cases, alternative libraries like Guava or Apache Commons Collections can provide additional features and better performance for specific use cases.
4. Utilizing Parallel Processing With Collections for Improved Performance: With the advent of multi-core processors, leveraging parallel processing techniques can enhance the performance of operations performed on large collections. The Java Stream API provides support for parallel execution, allowing for efficient processing of data in parallel.
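For instance, here is a minimal sketch of sequential versus parallel stream processing over a large collection; the numbers and the workload are purely illustrative:

Java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class ParallelProcessing {
    public static void main(String[] args) {
        // A large source collection (boxed longs, purely for illustration).
        List<Long> numbers = LongStream.rangeClosed(1, 5_000_000)
                .boxed()
                .collect(Collectors.toList());

        // Sequential processing.
        long evens = numbers.stream().filter(n -> n % 2 == 0).count();

        // Parallel processing: the same pipeline, split across worker threads.
        long evensParallel = numbers.parallelStream().filter(n -> n % 2 == 0).count();

        System.out.println(evens + " == " + evensParallel); // same result
    }
}

Parallel streams pay off only when the per-element work or the collection is large enough to amortize the cost of splitting and merging.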
Tips and Tricks for Effective Collection Usage

Using the Right Data Structures for Specific Tasks

The right data structure must be chosen for the task at hand, weighing each option's advantages and disadvantages, to make wise decisions and improve performance.

Making Use of Utility Methods in the Collections Class

The Collections class in Java provides utility methods that simplify and streamline collection operations, such as sorting, searching, shuffling, and reversing.

Leveraging Third-Party Libraries and Frameworks for Enhanced Functionality

The Java collection framework provides a wide range of data structures, but third-party libraries and frameworks can provide more advanced features and unique data structures. These libraries can boost productivity, give access to more powerful collection options, and address use cases that the built-in Java collections cannot.

Optimizing Collections for Specific Use Cases

Immutable collections offer better thread safety and can be shared without defensive copying. Sizing collections appropriately can prevent frequent resizing and enhance performance. Specialized collections like HashSet or TreeMap can improve efficiency for unique or sorted elements. Optimize collections to improve performance, readability, and maintainability.

Conclusion

In this blog post, we have covered best practices and tips for the effective use of the Java collection framework. To sum up, the Java collection framework is a crucial component of Java programming. You can use the collection framework effectively and create more effective, maintainable code by adhering to these best practices and tips.
A React Frontend With Go/Gin/Gorm Backend in One Project
By Sven Loesekann
Designing a New Framework for Ephemeral Resources
By Daniel De Vera
Essential Architecture Framework: In the World of Overengineering, Being Essential Is the Answer
By Otavio Santana
Reactive Programming

Before diving into the reactive world, let's take a look at some definitions of this mechanism: Reactive Programming is an asynchronous programming paradigm focused on streams of data.

"Reactive programs also maintain continuous interaction with their environment, but at a speed which is determined by the environment, not the program itself. Interactive programs work at their own pace and mostly deal with communication, while reactive programs only work in response to external demands and mostly deal with accurate interrupt handling. Real-time programs are usually reactive." — Gérard Berry, French computer scientist

Features of Reactive Programming

Non-blocking: The concept of non-blocking execution is important. Blocking code stops and waits for more data. Non-blocking code, in contrast, processes the available data, asks to be notified when more is available, and then continues.
Asynchronous: Events are captured asynchronously.
Failures as messages: Exceptions are processed by a handler function instead of being thrown up the call stack.
Backpressure: The subscriber can signal the publisher how much data it is ready to process, so data is pushed only as fast as the client can consume it, and the client waits less time to receive and process the events.
Data streams: These are coherent, cohesive collections of digital signals created on a continual or near-continual basis.

Reactive Streams API

Reactive Streams was started in 2013 by engineers from Netflix, Red Hat, Twitter, and Oracle. The goal was to create a standard for asynchronous, non-blocking stream processing. The Reactive Streams API was defined with four interfaces:

Publisher: A publisher of elements of a specific type; it takes subscribers, or allows them to subscribe to it.
Subscriber: This interface defines the onSubscribe, onNext, onError, and onComplete callbacks.
Subscription: This interface has two methods: request() and cancel().
Processor: This interface extends both the Subscriber and Publisher interfaces.

I know that diving into a Java framework is the most anticipated step for every programmer, so I will choose the famous Spring Framework for the rest of the article. As you know, the traditional Java components are blocking, so here we are using a new stack.

Spring Reactive Types

Spring Framework 5 introduced two new reactive types:

Mono: a publisher with zero or one element in its data stream
Flux: a publisher with zero or many elements in its data stream

Step 1: Project Creation

For the REST part, we will need the Spring Reactive Web starter. Lombok is optional.

Step 2: Create Your Model

Step 3: Create Your Repository and RepositoryImpl

Step 4: Test To See the Result and To Understand the Mechanism

Starting with Mono operations, I will create a simple test class with two Mono methods: one blocking, one non-blocking. As you see, the result is the same! Why? Here you will not see any difference, because both methods retrieve only the first user; blocking for the rest of the users or not will not change the result here.

Flux Operations

Here you will see the difference. Like before, I will create a simple test class with two Flux methods: one blocking, one non-blocking. The first method retrieves all users that exist. But as you see, the result is only the first user! What happened? By calling blockFirst, we are saying: go ahead and block, but only wait for the first element; don't worry about the rest. In the second case, we'll do the same thing but with the subscribe mechanism (non-blocking): the result is all elements. So, the subscriber is going to be executed for every element in the Flux.
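Since the original code screenshots are not reproduced here, the following minimal sketch illustrates the blocking versus non-blocking behavior described above, using Project Reactor's Mono and Flux directly; the user names are made up:

Java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactiveDemo {
    public static void main(String[] args) {
        // Mono: at most one element, so blocking and subscribing print the same thing.
        Mono<String> firstUser = Mono.just("Lucy");
        System.out.println(firstUser.block());            // blocking
        Mono.just("Lucy").subscribe(System.out::println); // non-blocking

        // Flux: blockFirst() waits for the FIRST element only and ignores the rest...
        Flux<String> allUsers = Flux.just("Lucy", "Joe", "Anna");
        System.out.println(allUsers.blockFirst()); // prints "Lucy"

        // ...whereas the subscriber runs for EVERY element.
        Flux.just("Lucy", "Joe", "Anna").subscribe(System.out::println);
    }
}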
Finally, to get started with reactive programming, I advise you to learn about Spring Data R2DBC and Spring WebFlux, create your own project, and try out different topics. Spring WebFlux is not meant to be mixed with Spring Web MVC in the same application, but the two share programming techniques, such as the use of the Controller annotation.

Summary

Reactive Programming focuses on processing streams of data. Traditional (blocking) applications are still alive and well.

By Elyes Ben Trad
New ORM Framework for Kotlin

If you have an aversion to new frameworks, don't even read this. For other kind readers, please note that I'm going to present a proposal for an API that models database queries in a declarative style, with strong Kotlin type checking as its primary goal. Only some classes around entities are implemented; the database connection is missing for now. In the project, I tried to evaluate my experience and vision so far. However, not all ideas presented in this paper are completely new. Some of them I drew from the Ujorm framework, and the entity concept was inspired by the Ktorm framework. But the code is new. The prototype results from a mosaic that has been slowly pieced together and has now taken a form that is hopefully worth presenting. If you are not discouraged by the introduction, I will skip the general talk about ORM and let you get to the code samples.

The demonstration examples use two relational database tables. This is an employee/department (unspecified organization) relationship where each employee may (or may not) have a supervisor. Both tables are described by entities from the following class diagram:

Suppose we want to create a report containing the unique employee number, the employee's name, the department name, and the supervisor's name (if any). We are only interested in departments with a positive identifier and a department name starting with the letter "D." We want to sort the report by department name (descending) and then by employee name (ascending). How could a query (SELECT) based on these entities look in the presented API?

Kotlin
val employees: Employees = MyDatabase.employees // Employee metamodel
val departments: Departments = MyDatabase.departments // Department metamodel
val result: List<Employee> = MyDatabase.select(
        employees.id,
        employees.name,
        employees.department + departments.name, // DB relation by the inner join
        employees.superior + employees.name)     // DB relation by the left outer join
    .where((employees.department + departments.id GE 1)
        AND (employees.department + departments.name STARTS "D"))
    .orderBy(
        employees.department + departments.name ASCENDING false,
        employees.name ASCENDING true)
    .toList()

The use of a DSL in a database query probably doesn't surprise anyone today. However, the chaining of the entity attribute model (hereafter, property descriptor) is worthy of attention. Combining them creates a new composite property descriptor that implements the same interface as its atomic parts. The query filter (WHERE) is described by an object constructed from elementary conditions into a single binary tree. Composite property descriptors also provide the information from which the SQL joins between database tables are derived. This approach can cover perhaps the most common SQL queries, including recursive queries. But certainly not all of them. For the remaining ones, an alternative solution must be used. A rough design is in the project tests.

Let's focus next on the employee entity:

Kotlin
@Entity
interface Employee {
    var id: Int
    var name: String
    var higherEducation: Boolean
    var contractDay: LocalDate
    var department: Department
    var superior: Employee?
}

An entity is an interface with no other dependencies. The advantage is that the interface can get by (in ORM) without binary code modification. However, to get an object, you must use a factory method that supplies the implementation. An alternative would be to extend some generic class (provided by the framework), which I found more invasive.
The paired metamodel object provides the factory method for creating new objects. Each entity needs a metamodel that contains information about its type and attributes and provides some services. The attributes of the metamodel are the property descriptors (mentioned above) of the paired entity. Note that the same property() method is used to create the property descriptors; it doesn't matter whether the attribute is a relation (an attribute referencing another entity). The only exception is where the attribute (entity) type accepts NULL. The positive news is that the compiler will report misuse (of the shorter method name) at compile time. An example of the employee entity metamodel is attached:

Kotlin
open class Employees : EntityModel<Employee>(Employee::class) {
    val id = property { it.id }
    val name = property { it.name }
    val higherEducation = property { it.higherEducation }
    val contractDay = property { it.contractDay }
    val department = property { it.department }
    val superior = propertyNullable { it.superior }
}

Annotations will declare the specific properties of the database table columns on the entities, so that the metamodel classes can be generated once for each entity. The entity data is stored (internally) in an object array. The advantage is lower memory requirements compared to an implementation based on the HashMap class.

The next example demonstrates the creation of new objects and their storage in the database (INSERT):

Kotlin
val development: Department = MyDatabase.departments.new {
    name = "Development"
    created = LocalDate.of(2020, 10, 1)
}
val lucy: Employee = MyDatabase.employees.new {
    name = "Lucy"
    contractDay = LocalDate.of(2022, 1, 1)
    superior = null
    department = development
}
val joe: Employee = MyDatabase.employees.new {
    name = "Joe"
    contractDay = LocalDate.of(2022, 2, 1)
    superior = lucy
    department = development
}
MyDatabase.save(development, lucy, joe)

The MyDatabase class (which provides the metamodel) is the only one here (the singleton design pattern). Still, it can generally be any object our application context provides (for example). If we wanted to use a service (cloning an entity object, for example), we could extend that provider with the AbstractEntityProvider class and use its resources. An example of the recommended registration procedure (for metamodel classes), along with other examples, can be found in the project tests.

Conditions

A condition (also called a criterion) is an object we encountered when presenting the SELECT statement. However, you can also use a condition on its own, for example, to validate entity values or filter collections. If the library provided support for serializing it to JSON text format (and back), the range of uses would probably be even wider. To build the following conditions, we start from the metamodel already stored in the employees variable:

Kotlin
val crn1 = employees.name EQ "Lucy"
val crn2 = employees.id GT 1
val crn3 = (employees.department + departments.id) LT 99
val crn4 = crn1 OR (crn2 AND crn3)
val crn5 = crn1.not() OR (crn2 AND crn3)

If we have an employee object in the employee variable, the criteria can be tested with the following code:

Kotlin
expect(crn4(employee)).equals(true)  // Valid employee
expect(crn5(employee)).equals(false) // Invalid employee

On the first line, the employee met the criterion; on the second line, it did not.
If needed (during debugging or logging), the content of the conditions can be visualized as text; examples are attached:

Kotlin
expect(crn1.toString())
    .toEqual("""Employee: name EQ "Lucy"""")
expect(crn2.toString())
    .toEqual("""Employee: id GT 1""")
expect(crn3.toString())
    .toEqual("""Employee: department.id LT 99""")
expect(crn4.toString())
    .toEqual("""Employee: (name EQ "Lucy") OR (id GT 1) AND (department.id LT 99)""")
expect(crn5.toString())
    .toEqual("""Employee: (NOT (name EQ "Lucy")) OR (id GT 1) AND (department.id LT 99)""")

Other Interesting Things

The property descriptor may not only be used to model SQL queries, but can also participate in reading and writing values to the object. The simplest way is to extend the entity interface with the PropertyAccessor interface. If we have an employee object, the following code can be used to read its values:

Kotlin
val id: Int = employee[employees.id]
val name: String = employee[employees.name]
val contractDay: LocalDate = employee[employees.contractDay]
val department: Department = employee[employees.department]
val superior: Employee? = employee[employees.superior]
val departmentName: String = employee[employees.department + departments.name]

The explicit declaration of variable data types is for presentation purposes only; in practice, they are redundant and can be removed. Writing variables to an object is similar:

Kotlin
employee[employees.id] = id
employee[employees.name] = name
employee[employees.contractDay] = contractDay
employee[employees.department] = department
employee[employees.superior] = superior
employee[employees.department + departments.name] = departmentName

Please note that reading and writing values works the same way for NULLABLE values. Another interesting feature is the support for reading and writing values using composite property descriptors. Just for the sake of argument, I assure you that for normal use of the object, it will be more convenient to use the standard entity API declared by the interface. The sample above copies its attributes to variables and back. If we wanted to clone an object, we could use the following construct (shallow copy):

Kotlin
val target: Employee = MyDatabase.utils().clone(source)

No reflection methods are called during data copying, which is made possible by the class architecture used. More functional usage examples can be found in the project tests on GitHub.

Why?

Why was this project created? In the beginning, it was just an effort to learn the basics of Kotlin. Gradually, a pile of disorganized notes in the form of source code was created, and it was only a matter of time before I came across language resources that would also lead to a simplified Ujorm framework API. Finding ready-made ORM libraries in Kotlin made me happy. However, of the two popular ones, I couldn't pick one that suited me better. I missed interesting features of one with the other and vice versa. In some places, I found the API not intuitive enough to use; in others, I ran into complications with database table recursion. A common handicap was (for my taste) the increased error rate when manually building the entity metamodel. Here one can certainly counter that entities can be generated from database tables. In the end, I organized my original notes into a project, cleaned up the code, and added this article. That is perhaps all that is relevant.

Conclusion

I would like to integrate with the core of a ready-made ORM framework; the fastest path would probably be integration with Ujorm.
However, I am aware of the risks associated with any integration, and I can't rule out that this project may never find real use in the ORM field. The prototype is freely available under the Apache License 2.0. Thank you for your constructive comments.

By Pavel Ponec
WireMock: The Ridiculously Easy Way (For Spring Microservices)

Using WireMock for integration testing of Spring-based (micro)services can be hugely valuable. However, it usually requires significant effort to write and maintain the stubs needed for WireMock to take a real service's place in tests. What if generating WireMock stubs was as easy as adding @GenerateWireMockStub to your controller? Like this:

Kotlin
@GenerateWireMockStub
@RestController
class MyController {
    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

What if that meant that you then just instantiate your producer's controller stub in consumer-side tests…

Kotlin
val myControllerStub = MyControllerStub()

Stub the response…

Kotlin
myControllerStub.getData(MyServerResponse("id", "message"))

And verify calls to it with no extra effort?

Kotlin
myControllerStub.verifyGetData()

Surely, it couldn't be that easy?! Before I explain the framework that does this, let's first look at the various approaches to creating WireMock stubs.

The Standard Approach

While working on a number of projects, I observed that the writing of WireMock stubs most commonly happens on the consumer side. What I mean by this is that the project that consumes the API contains the stub setup code required to run tests. The benefit is that it's easy to implement: there is nothing else the consuming project needs to do. Just import the stubs into the WireMock server in tests, and the job is done. However, there are also some significant downsides to this approach. For example, what if the API changes? What if the resource mapping changes? In most cases, the tests for the service will still pass, and the project may get deployed only to fail to actually use the API — hopefully during the build's automated integration or end-to-end tests. Limited visibility of the API can lead to incomplete stub definitions as well. Another downside is the duplicated maintenance effort: in the worst-case scenario, each client ends up updating the same stub definitions. Leakage of API-specific information, in particular sensitive information, from the producer to the consumer leads to consumers being aware of API characteristics they shouldn't be, for example, the endpoint mappings or, sometimes even worse, API security keys. Maintaining stubs on the client side can also lead to increased test setup complexity.

The Less Common Approach

A more sophisticated approach that addresses some of the above disadvantages is to make the producer of the API responsible for providing the stubs. So, how does it work when the stubs live on the producer side? In a poly-repo environment, where each microservice has its own repository, the producer generates an artifact containing the stubs and publishes it to a common repository (e.g., Nexus) so that the clients can import and use it. In a mono-repo, the dependencies on the stubs may not require the artifacts to be published in this way, but this will depend on how your project is set up.

The stub source code is written manually and subsequently published to a repository as a JAR file.
The client imports the JAR as a dependency and downloads it from the repository.
Depending on what is in the JAR, the test loads the stub directly into WireMock or instantiates the dynamic stub (see the next section for details) and uses it to set up WireMock stubs and verify the calls.

This approach improves the accuracy of the stubs and removes the duplicated-effort problem, since there is only one set of stubs maintained.
There is no issue with visibility either, since the stubs are written with full access to the API definition, which ensures better understanding. Consistency is ensured by the consumers always loading the latest version of the published stubs every time the tests are executed. However, preparing stubs manually on the producer's side can also have its own shortcomings. It tends to be quite laborious and time-consuming. Like any handwritten code intended to be used by third parties, it should be tested, which adds even more effort to development and maintenance. Another problem that may occur is a consistency issue. Different developers may write the stubs in different ways, which may mean different ways of using the stubs. This slows development down when developers maintaining different services need to first learn how the stubs have been written, in the worst-case scenario uniquely for each service. Also, when writing stubs on the consumer's side, all that is required are stubs for the specific parts of the API that the consumer actually uses. But providing them on the producer's side means preparing all of them for the entire API as soon as the API is ready, which is great for the client but not so great for the provider. Overall, writing stubs on the provider side has several advantages over the client-side approach. For example, if the stub publishing and API testing are well integrated into the CI pipeline, it can serve as a simpler version of Consumer-Driven Contracts, but it is also important to consider the possible implications, like the requirement for the producer to keep the stubs in sync with the API.

Dynamic Stubbing

Some developers define stubs statically in the form of JSON. This is additional maintenance. Alternatively, you can create helper classes that introduce a layer of abstraction: an interface that determines what stubbing is possible. Usually, they are written in one of the higher-level languages like Java/Kotlin. Such stub helpers enable the clients to set up stubs within the constraints set out by the author. Usually, it means using various values of various types. Hence, I call them dynamic stubs for short. An example of such a dynamic stub could be a function with a signature along the lines of:

Kotlin
fun get(url: String, response: String)

One could expect that such a method could be called like this:

Kotlin
get(url = "/someResource", response = "{ \"key\" = \"value\" }")

And a potential implementation using the WireMock Java library:

Kotlin
fun get(url: String, response: String) {
    stubFor(get(urlPathEqualTo(url))
        .willReturn(aResponse().withBody(response)))
}

Such dynamic stubs provide a foundation for the solution described below.

Auto-Generating Dynamic WireMock Stubs

I have been working predominantly in the Java/Kotlin Spring environment, which relies on the Spring MVC library to support HTTP endpoints. The newer versions of the library provide the @RestController annotation to mark classes as REST endpoint providers. It's these endpoints that I tend to stub most often using the above-described dynamic approach. I came to the realization that the dynamic stubs should provide only as much functionality as is set out by the definition of the endpoints. For example, if a controller defines a GET endpoint with a query parameter and a resource name, the code enabling you to dynamically stub the endpoint should only allow the client to set the value of the parameter, the HTTP status code, and the body of the response.
There is no point in stubbing a POST method on that endpoint if the API doesn't provide it. With that in mind, I believed there was an opportunity to automate the generation of the dynamic stubs by analyzing the definitions of the endpoints described in the controllers. Obviously, nothing is ever easy. A proof of concept showed how little I knew about the build tool I have been using for years (Gradle), the Spring MVC library, and Java annotation processing. But nevertheless, in spite of the steep learning curve, I managed to achieve the following:

parse the smallest meaningful subset of the relevant annotations (e.g., a single basic resource)
design and build a data model of the dynamic stubs
generate the source code of the dynamic stubs (in Java)
make Gradle build an artifact containing only the generated code and publish it (I also tested the published artifact by importing it into another project)

In the end, here is what was achieved:

The annotation processor iterates through all relevant annotations and generates the dynamic stub source code.
Gradle compiles and packages the generated source into a JAR file and publishes it to an artifact repository (e.g., Nexus).
The client imports the JAR as a dependency and downloads it from the repository.
The test instantiates the generated stubs and uses them to set up WireMock stubs and verify the calls made to WireMock.

With a mono-repo, the situation is slightly simpler, since there is no need to package the generated code and upload it to a repository. The compiled stubs become available to the depending subprojects immediately. These end-to-end scenarios proved that it could work.

The Final Product

I developed a library with a custom annotation, @GenerateWireMockStub, that can be applied to a class annotated with @RestController. The annotation processor included in the library generates the Java code for dynamic stub creation in tests. The stubs can then be published to a repository or, in the case of a mono-repo, used directly by the project(s). For example, by adding the following dependencies (Kotlin project):

Groovy
kapt 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'com.github.tomakehurst:wiremock:2.27.2'

and annotating a controller having a basic GET mapping with @GenerateWireMockStub:

Kotlin
@GenerateWireMockStub
@RestController
class MyController {
    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

will result in generating a stub class with the following methods:

Java
public class MyControllerStub {
    public void getData(MyServerResponse response) { ... }
    public void getData(int httpStatus, String errorResponse) { ... }
    public void verifyGetData() { ... }
    public void verifyGetData(final int times) { ... }
    public void verifyGetDataNoInteraction() { ... }
}

The first two methods set up stubs in WireMock, whereas the other methods verify the calls depending on the expected number of calls: either once, a given number of times, or no interaction at all. That stub class can be used in a test like this:

Kotlin
// Create the stub for the producer's controller
val myControllerStub = MyControllerStub()
// Stub the controller method with the response
myControllerStub.getData(MyServerResponse("id", "message"))
callConsumerThatTriggersCallToProducer()
myControllerStub.verifyGetData()

The framework now supports most HTTP methods, with a variety of ways to verify interactions.
@GenerateWireMockStub makes maintaining these dynamic stubs effortless. It increases accuracy and consistency, making maintenance easier and enabling your build to catch breaking API changes before your code hits production. More details can be found on the project's website. Full examples of how the library can be used in a multi-project setup and in a mono-repo:

spring-wiremock-stub-generator-example
spring-wiremock-stub-generator-monorepo-example

Limitations

The library's limitations mostly come from WireMock's limitations. More specifically, multi-value and optional request parameters are not quite supported by WireMock. The library uses some workarounds to handle those. For more details, please check out the project's README.

Note

The client must have access to the API classes used by the controller. Usually, this is achieved by exposing them in separate API modules that are published for consumers to use.

Acknowledgments

I would like to express my sincere gratitude to the reviewers who provided invaluable feedback and suggestions to improve the quality of this article and the library. A special thank you to Antony Marcano for his feedback, repeated reviews, and direct contributions to this article. This was crucial in ensuring that the article provides clear and concise documentation for the spring-wiremock-stub-generator library. I would like to extend my heartfelt thanks to Nick McDowall and Nauman Leghari for their time, effort, and expertise in reviewing the article and providing insightful feedback to improve its documentation and readability. Finally, I would also like to thank Ollie Kennedy for his careful review of the initial pull request and his suggestions for improving the codebase.

By Lukasz Gryzbon
Building a Flask Web Application With Docker: A Step-by-Step Guide

Flask is a popular web framework for building web applications in Python. Docker is a platform that allows developers to package and deploy applications in containers. In this tutorial, we'll walk through the steps to build a Flask web application using Docker.

Prerequisites

Before we begin, you must have Docker installed on your machine. You can download the appropriate version for your operating system from the official Docker website. Additionally, you should have a basic understanding of Flask and Python.

Creating a Flask Application

The first step is to create a Flask application. We'll create a simple "Hello, World!" application for this tutorial. Create a new file called app.py and add the following code:

Python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, World!'

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host='0.0.0.0', port=5000)

The app.run call keeps the server running when the container executes python app.py. Also create a requirements.txt file next to app.py listing the application's dependencies (here, a single line: Flask). Save the files and navigate to their directory in a terminal.

Creating a Dockerfile

The next step is to create a Dockerfile. A Dockerfile is a script that describes the environment in which the application will run. We'll use the official Python 3.8 image as the base image for our Docker container. Create a new file called Dockerfile and add the following code:

Dockerfile
FROM python:3.8-slim-buster

# Set the working directory
WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Run the application
CMD [ "python", "app.py" ]

Here is what each instruction does:

FROM python:3.8-slim-buster: This sets the base image for our Docker container to the official Python 3.8 image.
WORKDIR /app: This sets the working directory inside the container to /app.
COPY requirements.txt .: This copies the requirements.txt file from our local machine to the /app directory inside the container.
RUN pip install --no-cache-dir -r requirements.txt: This installs the dependencies listed in requirements.txt.
COPY . .: This copies the entire local directory to the /app directory inside the container.
CMD [ "python", "app.py" ]: This sets the command to run when the container starts to python app.py.

Save the Dockerfile and navigate to its directory in a terminal.

Building the Docker Image

The next step is to build a Docker image from the Dockerfile. Run the following command to build the image:

Shell
docker build -t my-flask-app .

This command builds an image named my-flask-app from the Dockerfile in the current directory. The . at the end of the command specifies that the build context is the current directory.

Starting the Docker Container

Now that we have a Docker image, we can start a container from it. Run the following command to start a new container from the my-flask-app image and map port 5000 on the host to port 5000 in the container:

Shell
docker run -p 5000:5000 my-flask-app

Testing the Flask Application

Finally, open your web browser and navigate to http://localhost:5000. You should see the "Hello, World!" message displayed in your browser, indicating that the Flask application is running inside the Docker container.

Customizing the Flask Application

You can customize the Flask application by modifying the app.py file and rebuilding the Docker image. For example, you could modify the hello function to return a different message:

Python
@app.route('/')
def hello():
    return 'Welcome to my Flask application!'

Save the app.py file and rebuild the Docker image using the docker build command from earlier.
Once the image is built, start a new container using the docker run command from earlier. When you navigate to http://localhost:5000, you should see the updated message displayed in your browser.

Advantages

Docker simplifies the process of building and deploying Flask applications, as it provides a consistent and reproducible environment across different machines and operating systems.
Docker allows for easy management of dependencies and versions, as everything needed to run the application is contained within the Docker image.
Docker facilitates scaling and deployment of the Flask application, allowing for the quick and easy creation of new containers.

Disadvantages

Docker adds an additional layer of complexity to the development and deployment process, which may require additional time and effort to learn and configure.
Docker may not be necessary for small or simple Flask applications, as the benefits may not outweigh the additional overhead and configuration.
Docker images and containers can take up significant disk space, which may be a concern for applications with large dependencies or machines with limited storage capacity.

Conclusion

In this tutorial, we've walked through the steps to build a Flask web application using Docker. We've created a simple Flask application, written a Dockerfile to describe the environment in which the application will run, built a Docker image from the Dockerfile, started a Docker container from the image, and tested the Flask application inside the container. With Docker, you can easily package and deploy your Flask application in a consistent and reproducible manner, making it easier to manage and scale your application.

By Joseph Owino
How To Add Chatbot To React Native

Building a chatbot into a React Native app may have been a complicated affair in the past, but not so today, thanks to Kommunicate's Kompose chatbot builder. In this tutorial, we are going to build a chatbot application from scratch using Kompose (Kommunicate Chatbot) and React Native. We'll do the integration in two phases:

1. Create a Kompose chatbot and set up the answers.
2. Add the created chatbot to your React Native app.

Let's jump right into it.

Phase 1: Create a Chatbot in Kompose and Set Up the Answers

Step 1: Create a Bot

If you do not have an account in Kommunicate, you can create one here for free. Next, log in to your Kommunicate dashboard and navigate to the Bot Integration section. Locate the Kompose section and click on Integrate Bot. If you want to build a bot from scratch, select a blank template and go to the Set up your bot section. Select your bot's name, avatar, and default language, and click "Save and Proceed." You are now done creating your bot; all you have to worry about now is enabling "bot to human transfer" for when the bot encounters a query it does not understand. Enable this feature and click "Finish Bot Setup." On the next page, you can choose whether this bot will handle all the incoming conversations. Click on "Let this bot handle all the conversations," and you are good to go. You can find the newly created bot under Dashboard → Bot Integration → Manage Bots.

Step 2: Create Welcome Messages and Answers for Your Chatbot

Go to the 'Kompose – Bot Builder' section and select the bot you created. First, set the welcome message for your chatbot. The welcome message is the first message that the chatbot sends to the user who initiates a chat. Click the "Welcome Message" section. In the "Enter Welcome message — Bot's Message" box, provide the message your chatbot should show to users when they open the chat, and then save the welcome intent.

After creating the welcome message, the next step is to feed in answers/intents. These answers/intents can be the common questions about your product and service. The answers section is where you add all the user's messages and the chatbot responses. Go to the "Answer" section, click +Add, then give an 'Intent name.' In the Configure user's message section, you need to mention the phrases you expect from users that will trigger the intent. In the Configure bot's reply section, you need to mention the responses (text or rich messages) the chatbot will deliver to users for that particular message. You can add any number of answers and follow-up responses for the chatbot. Here, I have used a custom payload by selecting the "Custom" option in the "More" option. Once you have configured the responses, click "Train Bot," which is at the bottom right, to the left of the preview screen. Once training completes successfully, a toast reading "Answer training completed" will appear at the top right corner.

Phase 2: Add the Created Chatbot to Your React Native Project

Step 1: Set Up the React Native Development Environment

Follow the official guide: https://reactnative.dev/docs/environment-setup

Step 2: Create a React Native App

Create a new React Native app (my-app) by using the following command in your terminal or Command Prompt:

npx react-native init my-app

Step 3: Navigate to the my-app Folder

cd my-app

Step 4: Install Kommunicate in Your Project

To add the Kommunicate module to your React Native application, install it using npm:

npm install react-native-kommunicate-chat --save

Step 5: Add Kommunicate Code to Your Project

Navigate to App.js in your project.
By default, a new project contains demo code that is not required. You can remove that code and write your own to start a conversation in Kommunicate. First, import Kommunicate using the following:

import RNKommunicateChat from 'react-native-kommunicate-chat';

Then, create a method to open a conversation, and add a button that opens the conversation when clicked. Add these React elements and return them (note the imports at the top; the StyleSheet values are illustrative):

JavaScript
import React from 'react';
import type { Node } from 'react';
import {
  Button,
  SafeAreaView,
  ScrollView,
  StatusBar,
  StyleSheet,
  Text,
  useColorScheme,
  View,
} from 'react-native';
import { Colors, Header } from 'react-native/Libraries/NewAppScreen';
import RNKommunicateChat from 'react-native-kommunicate-chat';

const App: () => Node = () => {
  const isDarkMode = useColorScheme() === 'dark';
  const backgroundStyle = {
    backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,
  };

  const startConversation = () => {
    let conversationObject = {
      // The APP_ID obtained from the Kommunicate dashboard
      // (https://dashboard.kommunicate.io/settings/install)
      'appId': 'eb775c44211eb7719203f5664b27b59f'
    };
    RNKommunicateChat.buildConversation(conversationObject, (response, responseMessage) => {
      if (response == 'Success') {
        console.log('Conversation created successfully with id: ' + responseMessage);
      }
    });
  };

  return (
    <SafeAreaView style={styles.con}>
      <StatusBar barStyle={isDarkMode ? 'light-content' : 'dark-content'} />
      <ScrollView contentInsetAdjustmentBehavior="automatic" style={backgroundStyle}>
        <Header />
        <View
          style={{
            backgroundColor: isDarkMode ? Colors.black : Colors.white,
          }}>
          <Text style={styles.title}></Text>
          <Text style={styles.title}>Here you can talk with our customer support.</Text>
          <View style={styles.container}>
            <Button title="Start conversation" onPress={() => startConversation()} />
          </View>
        </View>
      </ScrollView>
    </SafeAreaView>
  );
};

// Illustrative styles; adjust to taste
const styles = StyleSheet.create({
  con: { flex: 1 },
  title: { fontSize: 18, textAlign: 'center', marginTop: 16 },
  container: { margin: 16 },
});

export default App;

(Screenshot of the running app omitted.)

By Devashish Mamgain
Postgres JSON Functions With Hibernate 5

The Postgres database supports a few JSON types and special operations for those types. In some cases, those operations might be a good alternative for document databases like MongoDB or other NoSQL databases. Of course, databases like MongoDB might have better replication processes, but that subject is outside the scope of this article. In this article, we will focus on how to use JSON operations in projects that use the Hibernate framework with version 5.

Example Model

Our model looks like the example below:

Java
@Entity
@Table(name = "item")
public class Item {
    @Id
    private Long id;

    @Column(name = "jsonb_content", columnDefinition = "jsonb")
    private String jsonbContent;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getJsonbContent() {
        return jsonbContent;
    }

    public void setJsonbContent(String jsonbContent) {
        this.jsonbContent = jsonbContent;
    }
}

Important!: We could use a specific JSON type for the jsonbContent property, but in Hibernate version 5, that would not give any benefits from an operations standpoint.

DDL operation:

SQL
create table item (
    id int8 not null,
    jsonb_content jsonb,
    primary key (id)
)

For presentation purposes, let's assume that our database contains the following records:

SQL
INSERT INTO item (id, jsonb_content) VALUES (1, '{"top_element_with_set_of_values":["TAG1","TAG2","TAG11","TAG12","TAG21","TAG22"]}');
INSERT INTO item (id, jsonb_content) VALUES (2, '{"top_element_with_set_of_values":["TAG3"]}');
INSERT INTO item (id, jsonb_content) VALUES (3, '{"top_element_with_set_of_values":["TAG1","TAG3"]}');
INSERT INTO item (id, jsonb_content) VALUES (4, '{"top_element_with_set_of_values":["TAG22","TAG21"]}');
INSERT INTO item (id, jsonb_content) VALUES (5, '{"top_element_with_set_of_values":["TAG31","TAG32"]}');
-- item without any properties, just an empty json
INSERT INTO item (id, jsonb_content) VALUES (6, '{}');
-- int values
INSERT INTO item (id, jsonb_content) VALUES (7, '{"integer_value": 132}');
INSERT INTO item (id, jsonb_content) VALUES (8, '{"integer_value": 562}');
INSERT INTO item (id, jsonb_content) VALUES (9, '{"integer_value": 1322}');
-- double values
INSERT INTO item (id, jsonb_content) VALUES (10, '{"double_value": 353.01}');
INSERT INTO item (id, jsonb_content) VALUES (11, '{"double_value": -1137.98}');
INSERT INTO item (id, jsonb_content) VALUES (12, '{"double_value": 20490.04}');
-- enum values
INSERT INTO item (id, jsonb_content) VALUES (13, '{"enum_value": "SUPER"}');
INSERT INTO item (id, jsonb_content) VALUES (14, '{"enum_value": "USER"}');
INSERT INTO item (id, jsonb_content) VALUES (15, '{"enum_value": "ANONYMOUS"}');
-- string values
INSERT INTO item (id, jsonb_content) VALUES (16, '{"string_value": "this is full sentence"}');
INSERT INTO item (id, jsonb_content) VALUES (17, '{"string_value": "this is part of sentence"}');
INSERT INTO item (id, jsonb_content) VALUES (18, '{"string_value": "the end of records"}');
-- inner elements
INSERT INTO item (id, jsonb_content) VALUES (19, '{"child": {"pets" : ["dog"]}}');
INSERT INTO item (id, jsonb_content) VALUES (20, '{"child": {"pets" : ["cat"]}}');
INSERT INTO item (id, jsonb_content) VALUES (21, '{"child": {"pets" : ["dog", "cat"]}}');
INSERT INTO item (id, jsonb_content) VALUES (22, '{"child": {"pets" : ["hamster"]}}');

Native Query Approach

In Hibernate 5, we can use a native approach where we execute a direct SQL command.

Important!: For presentation purposes, please ignore the fact that the code below allows SQL injection through the expression passed to the LIKE operator. In real code, we should bind the value as a parameter, as with any PreparedStatement.

Java
private EntityManager entityManager;

public List<Item> findAllByStringValueAndLikeOperatorWithNativeQuery(String expression) {
    return entityManager.createNativeQuery("SELECT * FROM item i WHERE i.jsonb_content#>>'{string_value}' LIKE '" + expression + "'", Item.class).getResultList();
}

In the above example, there is the usage of the #>> operator, which extracts the JSON sub-object at the specified path as text (please check the Postgres documentation for more details). A parameterized variant of this query is sketched below.
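The article stops at the injection-prone form; as a complement, here is a minimal sketch of what the parameterized variant could look like, following the same entity and fragment style as the snippets above (the method name is illustrative):

Java
private EntityManager entityManager;

public List<Item> findAllByStringValueAndLikeOperatorParameterized(String expression) {
    // Bind the LIKE expression as a positional parameter instead of
    // concatenating it into the SQL string, removing the injection risk.
    return entityManager.createNativeQuery(
            "SELECT * FROM item i WHERE i.jsonb_content#>>'{string_value}' LIKE ?1", Item.class)
        .setParameter(1, expression)
        .getResultList();
}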
In most cases, such a query (with a properly bound parameter) is enough. However, if we need to build some kind of dynamic query based on parameters passed to our API, a criteria builder would be a better fit.

Posjsonhelper

Hibernate 5 by default does not have support for Postgres JSON functions. Fortunately, you can implement it yourself or use the posjsonhelper library, which is an open-source project. The project is available in the Maven Central repository, so you can easily add it as a dependency to your Maven project:

XML
<dependency>
    <groupId>com.github.starnowski.posjsonhelper</groupId>
    <artifactId>hibernate5</artifactId>
    <version>0.1.0</version>
</dependency>

To use the posjsonhelper library in your project, you need to use the Postgres dialect implemented in the project, for example:

com.github.starnowski.posjsonhelper.hibernate5.dialects.PostgreSQL95DialectWrapper

In case your project already has a custom dialect class, there is also the possibility of using:

com.github.starnowski.posjsonhelper.hibernate5.PostgreSQLDialectEnricher

Using Criteria Components

The example below has similar behavior to the previous example that used a native query. However, in this case, we are going to use a criteria builder.

Java
private EntityManager entityManager;

public List<Item> findAllByStringValueAndLikeOperator(String expression) {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Item> query = cb.createQuery(Item.class);
    Root<Item> root = query.from(Item.class);
    query.select(root);
    query.where(cb.like(new JsonBExtractPathText((CriteriaBuilderImpl) cb, singletonList("string_value"), root.get("jsonbContent")), expression));
    return entityManager.createQuery(query).getResultList();
}

Hibernate is going to generate the SQL code below:

SQL
select item0_.id as id1_0_, item0_.jsonb_content as jsonb_co2_0_ from item item0_ where jsonb_extract_path_text(item0_.jsonb_content,?) like ?

The jsonb_extract_path_text is a Postgres function that is equivalent to the #>> operator (please check the Postgres documentation linked earlier for more details).

Operations on Arrays

The library supports a few Postgres JSON function operators, such as:

?& - Checks whether all of the strings in the text array exist as top-level keys or array elements. So, generally, if we have a JSON property that contains an array, we can check whether it contains all of the elements we are searching for.
?| - Checks whether any of the strings in the text array exist as top-level keys or array elements. So, generally, if we have a JSON property that contains an array, we can check whether it contains at least one of the elements we are searching for.

Required DDL Changes

The operators above cannot be used in HQL because of their special characters. That is why we need to wrap them, for example, in custom SQL functions. The posjsonhelper library requires two custom SQL functions that wrap those operators.
For the default settings, these functions have the implementation below:

PLSQL
CREATE OR REPLACE FUNCTION jsonb_all_array_strings_exist(jsonb, text[]) RETURNS boolean AS $$
SELECT $1 ?& $2;
$$ LANGUAGE SQL;

CREATE OR REPLACE FUNCTION jsonb_any_array_strings_exist(jsonb, text[]) RETURNS boolean AS $$
SELECT $1 ?| $2;
$$ LANGUAGE SQL;

For more information on how to customize or programmatically add the required DDL, please check the "Apply DDL changes" section.

"?&" Wrapper

The code example below illustrates how to create a query that finds records whose JSON array property contains all of the string elements we are searching for:

Java
private EntityManager entityManager;

public List<Item> findAllByAllMatchingTags(Set<String> tags) {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Item> query = cb.createQuery(Item.class);
    Root<Item> root = query.from(Item.class);
    query.select(root);
    query.where(new JsonbAllArrayStringsExistPredicate(hibernateContext, (CriteriaBuilderImpl) cb, new JsonBExtractPath((CriteriaBuilderImpl) cb, singletonList("top_element_with_set_of_values"), root.get("jsonbContent")), tags.toArray(new String[0])));
    return entityManager.createQuery(query).getResultList();
}

In case tags contains two elements, Hibernate generates the SQL below:

SQL
select item0_.id as id1_0_, item0_.jsonb_content as jsonb_co2_0_ from item item0_ where jsonb_all_array_strings_exist(jsonb_extract_path(item0_.jsonb_content,?), array[?,?])=true

"?|" Wrapper

The code example below illustrates how to create a query that finds records whose JSON array property contains at least one of the string elements we are searching for:

Java
private EntityManager entityManager;

public List<Item> findAllByAnyMatchingTags(HashSet<String> tags) {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Item> query = cb.createQuery(Item.class);
    Root<Item> root = query.from(Item.class);
    query.select(root);
    query.where(new JsonbAnyArrayStringsExistPredicate(hibernateContext, (CriteriaBuilderImpl) cb, new JsonBExtractPath((CriteriaBuilderImpl) cb, singletonList("top_element_with_set_of_values"), root.get("jsonbContent")), tags.toArray(new String[0])));
    return entityManager.createQuery(query).getResultList();
}

In case tags contains two elements, Hibernate generates the SQL below:

SQL
select item0_.id as id1_0_, item0_.jsonb_content as jsonb_co2_0_ from item item0_ where jsonb_any_array_strings_exist(jsonb_extract_path(item0_.jsonb_content,?), array[?,?])=true

For more examples of how to use numeric operators, please check the demo DAO object and DAO tests.

Conclusion

In some cases, Postgres JSON types and functions can be good alternatives for NoSQL databases. This could save us from the decision of adding a NoSQL solution to our technology stack, which could also add more complexity and additional costs.

By Szymon Tarnowski CORE
Using Apache Pulsar and Spring Boot for Real-Time Stream Processing

Real-time stream processing has become a critical component of modern data-driven applications. Apache Pulsar is an open-source distributed messaging system that provides seamless horizontal scalability and low-latency processing of real-time data streams. Spring Boot is a popular Java framework that simplifies the process of building and deploying production-grade applications. In this article, we will explore how to use Apache Pulsar and Spring Boot for real-time stream processing.

Getting Started With Apache Pulsar

The first step in building a real-time stream processing application with Pulsar is to set up a Pulsar cluster. A Pulsar cluster consists of one or more brokers that handle incoming data streams and route them to the appropriate consumers. Pulsar provides a simple and straightforward way to set up a cluster using the Pulsar CLI tool. Once the cluster is set up, you can start producing data to Pulsar topics. A topic is a named channel for data streams. Producers can publish messages to a topic, and consumers can subscribe to a topic to receive messages. Pulsar provides a variety of APIs for producers and consumers, including Java, Python, and C++.

Integrating Apache Pulsar With Spring Boot

To integrate Pulsar with a Spring Boot application, we will use the Pulsar Spring Boot starter, which provides a set of convenient utilities for interacting with Pulsar. The first step is to add the Pulsar Spring Boot starter to our project's dependencies in the build.gradle file. Next, we configure our Pulsar instance in the application.properties file: we set the serviceUrl property to the URL of our Pulsar instance, and we disable authentication for simplicity.

Producing and Consuming Messages With Spring Boot and Apache Pulsar

Now that we have set up our Pulsar cluster and integrated it with our Spring Boot application, we can start producing and consuming messages. To create a Pulsar producer, we can use the PulsarTemplate utility provided by the Pulsar Spring Boot starter: we inject the PulsarTemplate utility using Spring's @Autowired annotation and then use the send method to publish a message to the "my-topic" topic. To consume messages, we can use the @PulsarListener annotation provided by the Pulsar Spring Boot starter: annotating a method with @PulsarListener tells Spring to create a Pulsar consumer for the "my-topic" topic with the "my-group" consumer group. Then, whenever a message is published to the topic, the annotated consumeMessage method is called with the message payload as its argument.

Real-Time Stream Processing With Apache Pulsar and Spring Boot

One common use case for real-time stream processing is to perform real-time analytics on incoming data streams. For example, we may want to calculate the average value of a stream of sensor readings or identify patterns in a stream of user activity events. To perform real-time stream processing with Pulsar and Spring Boot, we can use the @PulsarListener annotation to consume messages from a Pulsar topic and then apply some processing logic to the messages: we consume messages from the "my-topic" topic, parse each message payload as a JSON object representing a sensor reading, calculate the moving average over a window of sensor readings, and write the results to a database using Spring's JdbcTemplate. A sketch of what this could look like follows.
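The original code snippets did not survive, so the class below is a minimal sketch of the pieces described above. It assumes the Spring for Apache Pulsar starter's PulsarTemplate and @PulsarListener APIs; the topic, subscription, JSON shape, window size, and table/column names follow the article where stated and are otherwise illustrative assumptions:

Java
import java.util.ArrayDeque;
import java.util.Deque;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.pulsar.annotation.PulsarListener;
import org.springframework.pulsar.core.PulsarTemplate;
import org.springframework.stereotype.Service;

@Service
public class SensorStreamService {

    private static final int WINDOW_SIZE = 10; // illustrative window size

    @Autowired
    private PulsarTemplate<String> pulsarTemplate;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    private final ObjectMapper objectMapper = new ObjectMapper();

    // Not thread-safe; a real implementation would need synchronization
    // or a proper stream-processing abstraction.
    private final Deque<Double> window = new ArrayDeque<>();

    // Producer: publish a message to the "my-topic" topic.
    public void publish(String message) {
        pulsarTemplate.send("my-topic", message);
    }

    // Consumer: invoked for every message published to "my-topic"
    // as part of the "my-group" subscription.
    @PulsarListener(topics = "my-topic", subscriptionName = "my-group")
    public void consumeMessage(String payload) throws Exception {
        // Parse the payload as a JSON sensor reading, e.g. {"value": 42.0}
        double value = objectMapper.readTree(payload).get("value").asDouble();

        // Maintain a sliding window and compute the moving average.
        window.addLast(value);
        if (window.size() > WINDOW_SIZE) {
            window.removeFirst();
        }
        double movingAverage = window.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);

        // Persist the result (table and column names are illustrative).
        jdbcTemplate.update("INSERT INTO sensor_averages (avg_value) VALUES (?)", movingAverage);
    }
}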
Conclusion

In this article, we explored how to use Apache Pulsar and Spring Boot for real-time stream processing. We started by setting up a Pulsar cluster and producing and consuming messages. Then, we integrated Pulsar with a Spring Boot application using the Pulsar Spring Boot starter and demonstrated how to produce and consume messages using the PulsarTemplate utility and the @PulsarListener annotation. Finally, we showed how to perform real-time stream processing by consuming messages from a Pulsar topic and applying processing logic with Spring's JdbcTemplate. Real-time stream processing is a powerful tool for building modern data-driven applications, and Apache Pulsar and Spring Boot provide an easy and effective way to implement it. By leveraging the power of these technologies, developers can build scalable and high-performance real-time stream processing applications in a matter of hours or days rather than weeks or months.

By Anubhav Dubey
5 Best Java Frameworks for Web Development in 2023

Java is one of the most popular and widely used programming languages on earth. It is known for its reliability, performance, and compatibility across different platforms and devices. However, developing web applications with Java can be challenging and time-consuming without the help of frameworks. Frameworks are software libraries that provide a set of tools, features, and guidelines for building web applications. They simplify and speed up the development process by handling common tasks such as routing, data access, security, testing, and deployment. They also enable developers to follow best practices and write clean, maintainable, and reusable code.

In this blog post, we will explore five of the best Java frameworks for web development in 2023:

Spring Boot
Quarkus
Vert.x
Micronaut
Jakarta EE

We will highlight their features, benefits, and use cases, and help you decide which one is the best fit for your web development project.

1. Spring Boot

Spring Boot is a framework that makes it easy to create stand-alone, production-ready web applications with Spring. Spring is a comprehensive framework that provides a wide range of features for building enterprise-grade web applications, such as dependency injection, security, testing, data access, messaging, caching, and more. Spring Boot simplifies the configuration and deployment of Spring applications by providing sensible defaults and conventions. It also offers a number of starter dependencies that automatically configure the required libraries and dependencies for different scenarios. For example, if you want to use Spring MVC for web development, you can simply add the spring-boot-starter-web dependency to your project, as the sketch after this section illustrates.

Spring Boot also supports creating microservices-based web applications with ease. Microservices are small, independent, and loosely coupled services that communicate with each other via APIs. They enable faster development, easier scaling, and better fault tolerance. Spring Boot provides features such as service discovery, load balancing, circuit breaking, distributed tracing, and configuration management for building resilient microservices.

Some of the advantages of using Spring Boot for web development are:

It is based on the proven and mature Spring framework, which has a large and active community of developers and users.
It offers a rich set of features and integrations for building complex and diverse web applications.
It simplifies the configuration and deployment of Spring applications by providing sensible defaults and conventions.
It supports creating microservices-based web applications with ease and efficiency.
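To make the starter-based workflow concrete, here is a minimal sketch of a Spring Boot web application; it assumes only the spring-boot-starter-web dependency mentioned above, and the class name and endpoint are illustrative:

Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete web application: auto-configuration and an embedded
// web server come from spring-boot-starter-web, with no XML configuration.
@SpringBootApplication
@RestController
public class DemoApplication {

    // Exposes GET /hello on the embedded server (port 8080 by default)
    @GetMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot!";
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

Running main starts the embedded server directly; no external application server is needed.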
2. Quarkus

Quarkus is a framework that aims to make Java a leading platform for cloud-native web development. Cloud-native web development refers to building web applications that are designed for the cloud environment, such as containers, Kubernetes, serverless functions, etc. Cloud-native web applications are expected to be fast, lightweight, scalable, and resilient.

Quarkus achieves this goal by optimizing Java for GraalVM and HotSpot. GraalVM is a high-performance virtual machine that enables native compilation of Java applications. Native compilation means converting Java bytecode into native machine code that can run directly on the target platform without requiring a JVM. This results in faster startup time, a lower memory footprint, and a smaller binary size. HotSpot is the default JVM implementation that runs Java applications in interpreted or JIT-compiled mode. Quarkus enhances HotSpot by using a technique called build-time augmentation, which means performing some tasks at build time rather than at runtime, such as dependency injection, configuration, resource loading, etc. This reduces runtime overhead and improves performance.

Some of the benefits of using Quarkus for web development are:

It enables native compilation of Java applications for GraalVM, offering fast startup time, a low memory footprint, and a small binary size.
It optimizes Java applications for HotSpot by using build-time augmentation, which reduces runtime overhead and improves performance.
It provides a unified development model for both imperative and reactive programming styles. Imperative programming means writing sequential and blocking code that executes one step at a time; reactive programming means writing asynchronous and non-blocking code that reacts to events or data streams.
It offers a number of extensions that integrate with popular libraries and frameworks such as Hibernate ORM, RESTEasy, and many more.

3. Vert.x

Vert.x is a framework that enables building event-driven, non-blocking, and reactive web applications with Java. It provides a polyglot platform that supports different programming languages such as Java, Kotlin, Groovy, Ruby, and JavaScript. It also offers a number of features for building scalable and resilient web applications, such as clustering, circuit breaking, an event bus, service discovery, and more.

Vert.x is based on the Vert.x core library, which provides a low-level API for handling events and I/O operations. It also provides a number of language-specific APIs and modules that offer higher-level abstractions for web development. For example, the vertx-web module provides a web server and router for handling HTTP requests and responses; a minimal sketch follows the list of advantages below.

Some of the advantages of using Vert.x for web development are:

It enables building event-driven, non-blocking, and reactive web applications that can handle high concurrency and low latency.
It provides a polyglot platform that supports different programming languages and enables interoperability between them.
It offers a number of features for building scalable and resilient web applications, such as clustering, circuit breaking, an event bus, service discovery, and more.
It provides a modular architecture that allows developers to pick and choose the required components and avoid unnecessary dependencies.
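Here is a minimal sketch of the vertx-web usage described above; it assumes the vertx-web dependency is on the classpath, and the class name, route, and port are illustrative:

Java
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

public class HelloVertx {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // The router dispatches incoming HTTP requests to handlers.
        Router router = Router.router(vertx);
        router.get("/hello")
              .handler(ctx -> ctx.response().end("Hello from Vert.x!"));

        // Start a non-blocking HTTP server on port 8080.
        vertx.createHttpServer()
             .requestHandler(router)
             .listen(8080);
    }
}

The handler runs on an event loop, so it must never block; long-running work would be offloaded to worker threads.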
4. Micronaut

Micronaut is a framework that focuses on building microservices-based web applications with Java. It provides a lightweight and modular platform that enables fast startup time, a low memory footprint, and high performance. It also offers a number of features for building cloud-native web applications, such as service discovery, load balancing, configuration management, and more.

Micronaut achieves these goals by using a technique called ahead-of-time (AOT) compilation. AOT compilation means generating native executable code from Java classes and methods before runtime. This eliminates the need for runtime reflection and bytecode manipulation and improves performance. Micronaut also provides a number of annotations and code-generation tools that simplify the development process and reduce boilerplate code.

Some of the benefits of using Micronaut for web development are:

It provides a lightweight and modular platform that enables fast startup time, a low memory footprint, and high performance.
It focuses on building microservices-based web applications that are designed for the cloud environment and can scale and adapt to changing requirements.
It uses ahead-of-time (AOT) compilation to generate native executable code, which improves performance and eliminates runtime reflection and bytecode manipulation.
It provides a number of annotations and code-generation tools that simplify the development process and reduce boilerplate code.

5. Jakarta EE

Jakarta EE (formerly Java EE) is a set of standard specifications for building enterprise-grade web applications in Java. Jakarta EE defines specifications for different components and services of a web application, such as Servlets, JSP, JPA, EJB, CDI, JMS, JSF, and more. These specifications are implemented by different vendors and application servers, such as Tomcat, WildFly, GlassFish, and more. Jakarta EE also provides a number of tools and resources for building, testing, and deploying web applications, such as Maven plugins, Arquillian, and more.

Some of the advantages of using Jakarta EE for web development are:

It provides a set of standard specifications and APIs for building enterprise-grade web applications that are portable and interoperable.
It has a large and established ecosystem of vendors, application servers, and developers that offer support and resources.
It provides a number of tools and resources for building, testing, and deploying web applications that simplify the development process.

Conclusion

Choosing the right Java framework for web development can significantly impact the success of your project. Spring Boot, Quarkus, Vert.x, Micronaut, and Jakarta EE are all excellent options for building robust and scalable web applications. Each framework has its own set of features, benefits, and use cases, so it's important to evaluate them carefully and choose the one that best suits your requirements and aligns with your project goals. Java development services can benefit greatly from using a framework to speed up development, improve code quality, and ensure scalability and reliability. By leveraging the power of these frameworks, developers can focus on building innovative and engaging web applications that meet the needs of their clients and end users.

By Udit Handa
Kubernetes Alternatives to Spring Java Framework

Spring Cloud and Kubernetes complement each other to build a cloud-native platform and run microservices on Kubernetes containers. Kubernetes provides many features that are similar to Spring Cloud and Spring Config Server features. The Spring framework has been around for many years. Even today, many organizations prefer to go with the Spring libraries because they provide many features. It's a great deal when developers have total control over cloud configuration along with the business logic source code.

Let's discuss a couple of challenges of cloud configuration code with Spring Cloud and Spring Config Server for a microservices architecture:

Tight coupling of business logic and configuration source code: Spring configuration is tightly coupled with the business logic code, which makes the code heavy and also makes it difficult to debug production issues. It slows down releases of new business features due to the tight integration of business logic with cross-cutting configuration source code.
Extra coding and testing effort: For new feature releases, extra testing effort is required, mainly during integration, regression, and load testing. We need to test the entire code with cross-cutting configurations, even for minor code changes in the business logic.
Slow build and deployment: It takes extra time to load, deploy, and run heavy code because of the strong bonding of configuration and business logic. It consumes extra CPU and RAM for all business-specific API calls.

Spring doesn't provide these important features:

Continuous Integration (CI): It doesn't address any CI-related concerns. It only handles the build microservices part.
Self-healing of infrastructure: It doesn't take care of self-healing and restarting apps after crashes. It does provide health check APIs and observability features using actuator/micrometer support with Prometheus.
Dependency on the Java framework: It only supports the Java programming language.

Kubernetes Alternatives to Spring Cloud

Here are a few Kubernetes alternatives to the Spring Cloud libraries, feature by feature:

Service discovery
Spring Cloud: Netflix Eureka. Not recommended for modern cloud-native applications.
Kubernetes: K8s provides the Cluster API, a service that exposes microservices across all namespaces, since "kube-dns" allows lookup. It also provides integration with the ingress controller and K8s ingress resources to intelligently route incoming traffic to the designated service.

Load balancing
Spring Cloud: Netflix Ribbon provides client-side load balancing on HTTP/TCP requests.
Kubernetes: K8s provides load balancer services. It's the responsibility of the K8s service to load balance.

Configuration management
Spring Cloud: Spring Config Server externalizes configuration management through code configuration.
Kubernetes: K8s provides ConfigMaps and Secrets to externalize configuration natively on the infrastructure side, maintained by the DevOps team.

API gateway
Spring Cloud: Spring Cloud Gateway and Zuul 2 provide all API gateway features like request routing, caching, authentication, authorization, API-level load balancing, rate limiting, circuit breaking, etc.
Kubernetes: K8s services and ingress resources fulfill partial API gateway features like routing and load balancing. K8s supports service mesh implementations like Istio, which provides most of the API gateway-related features like service discovery and API tracing. However, it's not a replacement for an external API gateway.

Resilience and fault tolerance
Spring Cloud: The Resilience4j and Spring Retry projects provide resiliency and fault tolerance mechanisms, including circuit breaker, timeout, and retry features.
Kubernetes: K8s provides the same features with health checks, resource isolation, and a service mesh.

Scaling and self-healing
Spring Cloud: Spring Boot Admin supports scaling and self-healing of applications. It's used for managing and monitoring Spring Boot applications. Each application is considered a client and registers to the admin server. Spring Boot Actuator endpoints help to monitor the environment.
Kubernetes: K8s provides out-of-the-box auto-scaling and self-healing by checking health check APIs. It spawns new containers and redeploys microservices apps automatically in case of app issues, or adds more containers when the traffic load increases during peaks.

Batch jobs
Spring Cloud: Spring Batch, Spring Cloud Task, and Spring Cloud Data Flow (SCDF) can schedule and run batch jobs on demand. Spring tasks can run short-lived jobs; a short-lived task could be a Java process or a shell script.
Kubernetes: K8s provides scheduled cron job features. It executes batch jobs and provides limited scheduling features. It also works together with Spring Batch.

Important Note: Starting from the Spring Cloud Greenwich release train, Netflix OSS, Hystrix, Ribbon, and Zuul have entered maintenance mode and are now deprecated. This means that there won't be any new features added to these modules, and the Spring Cloud team will only fix bugs and security issues. The maintenance mode does not include the Eureka module. Spring provides regular releases and patches for its libraries; however, Netflix OSS is largely inactive and is not being adopted by organizations.

Conclusion

Spring provides tons of features and has been a proven Java-based framework for many years! Kubernetes provides complementary features that are comparable with Spring features and can be used to extract configuration code from the business logic. Cloud-native microservices architecture (MSA) and the 12/15-factor principles recommend keeping cross-cutting configuration code outside of the business logic code. Configuration should be stored and managed separately. In MSA, the same configuration can be shared across many microservices; that's why configuration should be stored externally and be available for all microservices applications. Also, these configurations should be managed by DevOps teams. This helps developers to focus only on business logic programming. It will definitely make releases faster with lower development costs. Also, building and deployment will be faster for microservices apps. Kubernetes provides better alternatives to replace these legacy Spring libraries' features, many of which are deprecated or in the maintenance phase. Kubernetes also provides service mesh support. These Kubernetes alternatives are really helpful for microservices applications and complementary to the Spring Java framework for microservices development.

By Rajiv Srivastava
A Developer's Dilemma: Flutter vs. React Native

A few years ago, if you or your team decided to develop a mobile app for both iOS and Android, you would be required to maintain two codebases, one for each platform. This meant either learning or hiring developers well-versed in Swift and/or Objective-C (for iOS) and Kotlin and/or Java (for Android), the primary programming languages used to build native apps for these platforms. As you can imagine, this not only puts a strain on your resources and budget allocation but also involves longer development cycles and painstaking efforts to ensure consistency between both apps.

With the advent of cross-platform app development frameworks in the last few years, mobile app developers and companies are increasingly moving to, or at least considering, these technologies to build their mobile apps. The primary reason is that these frameworks enable developers to write code only once and build for both Android and iOS. Maintaining a single codebase significantly reduces your time to launch and ensures a high level of consistency between both apps. We will discuss in detail in a bit how this is made possible by React Native and Flutter, the two most popular cross-platform mobile app development frameworks at the moment. React Native is a Facebook (now Meta) project and was released publicly about two years before Google released the initial version of Flutter in 2017. Before the emergence of these two frameworks, there were a few other options for developers, like Xamarin, Cordova, and Ionic, but they have since fallen out of favor because of many complexity and performance issues. There is a relatively recent effort by the JetBrains team (the company behind several popular IDEs) called Kotlin Multiplatform Mobile; however, it is still in beta and not as popular as React Native or Flutter.

Native vs. React Native/Flutter

If you are new to mobile app development or have yet to work with any of these cross-platform technologies, naturally, the question arises: Which one should I use or invest my time learning? My team at BrewApps has been developing mobile apps using both Flutter and React Native (and native apps as well) for many years now, and we have often pondered this question. In this article, I will attempt to highlight some key differences between React Native and Flutter's architectures, internal workings, strengths, and weaknesses. I hope it will help you or your development team decide which platform to opt for in your next mobile app development project. Even though Flutter and React Native are designed to work on platforms other than iOS and Android (e.g., Windows, Linux, macOS, and the web), for this article, I will limit the discussion to iOS and Android.

What Programming Language(s) Do I Need to Learn?

React Native uses JavaScript, which is a widely used programming language, especially in the web development world, and therefore if you are familiar with JavaScript or React JS, you will find it a lot easier to grasp React Native concepts. On the other hand, Flutter uses Dart, an object-oriented, type-safe, garbage-collected programming language developed by folks at Google. Dart's syntax may seem familiar due to its similarity with other programming languages, but the learning curve might be a bit steeper. If you come from the C/C++ world, you may feel more at home with Dart's type-safety requirement. Because JavaScript has been around for quite some time, it boasts a much larger developer community and a mature ecosystem of libraries, frameworks, and tooling.
On the other hand, Dart is a modern and relatively new programming language optimized for UI. Dart was chosen for Flutter because Dart's runtime and compilers support JIT-based compilation, which allows for features like hot reload, as well as ahead-of-time (AOT) compilation, which generates efficient machine code suitable for production deployments. Both languages support asynchronous programming, which is a key requirement in building reactive UIs in mobile apps. In JavaScript, we have async, await, and Promises to allow asynchronous execution of code, and similarly, in Dart, there are async, await, and futures. Dart has support for concurrency and parallelism through what is called an isolate (essentially an abstraction over threads and processes). In JavaScript, concurrency is achieved using Web Workers in web development, but for React Native, you will have to use third-party libraries.

Now you might wonder: we are using JavaScript or Dart here, not Swift and/or Objective-C for iOS and Kotlin and/or Java for Android (used for native app development), so what happens when you hit the Build or Run button? To understand that, you may want to learn a bit about React Native and Flutter's architecture and internals. Knowing this is not mandatory for developing a React Native or Flutter app in most cases. However, a deeper understanding of how things work may help you make the right design and architectural decisions in a large and complex app development project. Feel free to skip the next section if this doesn't interest you at this time.

What's Happening Under the Hood?

React Native

React Native is undergoing major upgrades to its architecture, and changes have started rolling out in the last few months. We will talk about what necessitated the changes in a bit. There are plenty of apps out there still using the old architecture and potentially in the process of migrating to the new architecture, so it makes sense to talk about both here.

In the old architecture, the best way to think about how this works is to imagine two worlds, JavaScript and Native, and a bridge component connecting them. The bridge enables asynchronous bidirectional communication using JSON messages. (Diagram of the React Native bridge omitted.)

You, as a React Native developer, write all the business logic, callback handlers, etc., in JavaScript. JavaScript sends all the information (usually as a batch of messages) about the updates it expects to be rendered to the Native side via the bridge. The native platform APIs take care of rendering the views. Whenever a UI interaction occurs, that event is sent to the JavaScript side, and the corresponding handler is executed. Using this bridge mechanism, React Native essentially has access to all the native APIs and thus controls the views of the underlying platform (iOS or Android). More importantly, this is achieved without having to write any native code, i.e., Objective-C/Swift for iOS and Java/Kotlin for Android. React Native comes bundled with Hermes, an open-source JavaScript engine optimized for React Native; the JavaScript VM runs the JavaScript code in your React Native app.

Even though the bridge in the old architecture is a clever way of facilitating communication between the JavaScript and Native worlds, it inadvertently becomes a bottleneck due to practical bandwidth limitations of the bridge and impacts overall performance in certain use cases.
Furthermore, message passing involves serializing and deserializing operations that introduce processing delays, thereby affecting the user experience. If your app has complex high-FPS animations or requires interactions like fast scrolling, it may not work very smoothly in React Native. The new architecture attempts to address these issues by getting rid of the bridge altogether and introducing a new mechanism called the JavaScript Interface (JSI). JSI allows JavaScript to hold a reference to host objects and invoke methods on them, and vice versa. This eliminates communication overhead (message passing between the two worlds) and allows for synchronous execution and concurrency. The new architecture, available since version 0.68, is built upon what the React Native team calls the 'two pillars of the new architecture':

1. The New Native Module System: Turbo Modules

Native modules allow you to access a native platform API that's unavailable in JavaScript. This happened in the old architecture through the bridge mechanism, as described above. Turbo Native Modules improve on the legacy Native Modules by providing strongly typed interfaces, the ability to write code in C++, lazy loading to facilitate faster app startup, and the use of JSI, as explained above.

2. The New Renderer: Fabric

The Fabric rendering system, written in C++, is designed to be cross-platform and more interoperable with iOS and Android, with the goal of improving the user experience. Fabric takes advantage of lazy loading, JSI, and the other benefits the new architecture offers.

What Happens When You Launch Your React Native App?

When you launch your React Native app, the underlying operating system creates and assigns your app a native thread, also known as the main thread or UI thread. The native thread then sets up native infrastructure and spawns the JavaScript VM, which runs the JavaScript (your app and the framework) code. The native thread then asks the JavaScript thread to process the main component that's registered as an entry point in AppRegistry, and communication between the JavaScript and Native worlds commences.

Flutter

Flutter's architecture is very different from that of React Native. (Diagram of Flutter's layered architecture omitted.)

As a Flutter app developer, you will spend most of your time on the first two layers, writing your application's business logic in Dart and using the libraries and services available in the framework layer. In Flutter, it's said that "everything is a widget," which essentially means that you build your UI screen out of widgets. Flutter provides an extensive suite of basic widgets. Every UI aspect is described as a widget, and widgets are nested inside each other to build a widget tree.

The layer below, the Flutter engine (written in C++), is mainly responsible for low-level input/output primitives and for translating all the UI descriptions into actual pixels (aka rasterization). It does this through the Skia graphics engine. So instead of relying on the platform-provided widgets, Flutter uses its own high-performance rendering engine to draw the Flutter widgets.

Finally, the lowest layer in the stack is a platform-specific embedder, which acts as the glue between the host operating system and Flutter. The embedder coordinates with the underlying operating system for access to services like rendering surfaces, input, etc.
There is a separate embedder for each target platform, e.g., one each for iOS and Android.

What Happens When You Launch Your Flutter App?

When you launch your Flutter app, the embedder component we discussed earlier provides the entry point and initializes the Flutter engine. The Flutter engine is responsible for managing the Dart VM and the Flutter runtime. Rendering, input, event handling, etc., are then delegated to the compiled Flutter and app code. In addition to these main architectural differences between Flutter and React Native, the important thing to note is that the Dart code is compiled ahead of time (AOT) into native ARM or x86 libraries when deployed to production. The library then gets packaged as a "runner" project, and the whole thing is built into a .apk or .ipa. Also, unlike React Native, Flutter doesn't use built-in platform widgets but widgets from its own library (Material Design or Cupertino widgets) that are managed and rendered by Flutter's framework and engine.

What If I Need Something the Platforms (iOS or Android) Offer That Is Not Exposed in Flutter or React Native?

In React Native, with the legacy architecture, you can write native modules to access platform APIs that are unavailable in JavaScript. The native modules can either be built inside your React Native project or separately as an NPM package. With the new architecture, the legacy Native Modules are being replaced with Turbo Native Modules and Fabric Native Components. With Flutter, you can write plugin packages for Android (in Kotlin or Java) and iOS (in Swift or Objective-C). In addition to this, you can also write platform-specific code in your app and have the Dart portion of your app communicate with the non-Dart part using what is called a platform channel. This message passing between the Dart and non-Dart portions is similar to the message passing across the bridge in React Native.

Debugging and Dev Tools

React Native and Flutter have many features and developer tools for debugging and performance monitoring. Both React Native and Flutter support "hot reload," which essentially means that when you run the app in debug mode on the simulator, any change you make in the code is reflected near-instantly in your simulator window. This is very convenient, as you don't have to recompile the entire app (and wait for it to finish) every time you need to inspect how minor changes in your code affect the UI or functionality. In React Native, the hot reload feature is called 'Fast Refresh.' Hot reload in Flutter is possible because of Dart's ability to use its VM to JIT-compile the code. In React Native, you can use the JavaScript debugger in the Chrome browser's developer tools to debug JavaScript code by setting your project to be debugged remotely. Debugging through the native code can be done by launching your app in Android Studio or Xcode and using the available native debugging features in the IDE. Some developers find this setup a bit clunky. Flutter comes with an extensive suite of performance and debugging tools. These packages and tools can be directly installed in popular IDEs like IntelliJ, VS Code, etc.

What Does the Developer Community Say About Flutter and React Native?

Let's look at the Stack Overflow developer surveys from the past two years, which asked developers which frameworks and libraries they had done extensive development work in over the past year and which they wanted to work in over the next year (2021 and 2022), as well as which technologies they were learning in 2022. (Survey charts omitted.)
These surveys show that developer interest in both React Native and Flutter remains high, with a slight increase for Flutter recently. Google Trends over the past year confirms this.

Which Companies Are Using React Native or Flutter for Their Mobile App Development?

The list of companies using these cross-platform technologies is ever-growing. Here are a few notable mentions:

React Native: Facebook, Shopify, Wix, Tesla, Flipkart
Flutter: BMW, Nubank, Google Pay, Dream11, Toyota

I Am Convinced React Native and Flutter Are the Future of Mobile App Development. How Do I Get Started?

The official documentation of React Native and Flutter is a great resource for learning and setting things up to develop your first cross-platform mobile app. The following are a few other resources, amongst many others, that have good content for learning and getting started:

Popular Udemy course for React Native by Maximilian
Another Udemy course by Stephen Grider
Flutter tutorials at Codewithandrea
Flutter fundamentals tutorials and videos by Ray Wenderlich
YouTube videos for Flutter development by Marcus Ng

Final Verdict

Both frameworks have their own sets of strengths and weaknesses, are backed by big companies like Google and Facebook, have thriving developer communities, and are here to stay for a long time. Looking at the developer surveys and based on our own experience building apps using both of these technologies, it seems that Flutter has a slight edge over React Native at the moment because of its rich library of widgets, developer-friendly tooling, and overall performance. It, however, comes down to your specific requirements for the app, resource availability, as well as the expertise and preferences of the development team. This article will hopefully help you assess that and make the right decision.

By Uday Pitambare

Top Frameworks Experts


Justin Albano

Software Engineer,
IBM

I am devoted to continuously learning and improving as a software developer and sharing my experience with others in order to improve their expertise. I am also dedicated to personal and professional growth through diligent studying, discipline, and meaningful professional relationships. When not writing, I can be found playing hockey, practicing Brazilian Jiu-jitsu, watching the NJ Devils, reading, writing, or drawing. ~II Timothy 1:7~ Twitter: @justinmalbano

Thomas Hansen

CTO,
AINIRO.IO

Obsessed with automation, Low-Code, No-Code, and everything that makes my computer do the work for me.

Hiren Dhaduk

CTO,
Simform

Hiren is VP of Technology at Simform, with extensive experience in helping enterprises and startups streamline their business performance through data-driven innovation.

Tetiana Stoyko

CTO, Co-Founder,
Incora Software Development

As a former senior developer and current CTO of the software development company Incora, I aim to share knowledge on various complex functionality, frameworks, tools, and many more. In my posts, you can find interpreted technical experience, that we gained as a team.

The Latest Frameworks Topics

Pydantic and Elasticsearch: Dynamic Couple for Data Management
By combining Pydantic and Elasticsearch for data management, we can create structured schemas, perform automatic data validation, and enhance data consistency.
June 9, 2023
by Dmitrii Sedov
· 1,468 Views · 1 Like
Enhancing Search Engine Efficiency With Elasticsearch Aliases
In this article, the reader will learn more about optimizing search performance and scalability with alias-based index management.
June 9, 2023
by Tapan Behera
· 1,445 Views · 2 Likes
Performance Comparison — Thread Pool vs. Virtual Threads (Project Loom) In Spring Boot Applications
This article compares different request-handling approaches in Spring Boot Applications: ThreadPool, WebFlux, Coroutines, and Virtual Threads (Project Loom).
June 6, 2023
by Aleksei Chaika
· 1,875 Views · 3 Likes
New ORM Framework for Kotlin
The article introduces a Kotlin API for some ORMs to simplify database operations by providing a lightweight and intuitive interface.
June 5, 2023
by Pavel Ponec
· 1,884 Views · 1 Like
Microservices With Apache Camel and Quarkus (Part 3)
In Parts 1 and 2, you've seen how to run microservices as Quarkus local processes. Let's now look at some K8s-based deployments, starting with Minikube.
June 7, 2023
by Nicolas Duminil CORE
· 3,941 Views · 1 Like
A Quick Way To Build Mobile Check Capture App Using Xamarin.Forms
This blog illustrates the steps to use Xamarin.Forms to develop a Mobile Check Capture app for the iOS and Android platforms.
June 7, 2023
by Eric Parker
· 1,401 Views · 2 Likes
Becoming a Skilled PHP Developer: A Comprehensive Guide
To become a skilled PHP developer, focus on mastering PHP syntax, adhering to best practices, and maintaining a commitment to continuous learning and improvement.
June 7, 2023
by Riajur Rahman
· 1,681 Views · 1 Like
Guide To Implementing BLoC Architecture in Flutter
This guide introduces BLoC architecture in Flutter, a design pattern that separates business logic and presentation layers, enhancing app structure and maintainability.
June 6, 2023
by Bhini Dave
· 1,250 Views · 1 Like
Understanding Dependencies...Visually!
Whether from the command line or within an IDE, understanding your project's dependencies is challenging because it's text-based. Are there alternatives?
June 5, 2023
by Scott Sosna CORE
· 2,933 Views · 4 Likes
Is Podman a Drop-In Replacement for Docker?
In this blog, you will start with a production-ready Dockerfile and execute the Podman commands just like you would do when using Docker.
May 31, 2023
by Gunter Rotsaert CORE
· 6,415 Views · 10 Likes
Writing Better Code: Symfony Dependency Injection
Learn how Symfony Dependency Injection can simplify your application's class dependencies. Discover efficient techniques for clean and maintainable code.
June 5, 2023
by Dmytro Polkhov
· 1,814 Views · 1 Like
Mastering Go-Templates in Ansible With Jinja2
Explore how to streamline your DevOps workflow with Ansible, Jinja2, and Go-Templates with practical solutions and examples.
June 5, 2023
by Ruslan Kh.
· 1,801 Views · 7 Likes
Essential Architecture Framework: In the World of Overengineering, Being Essential Is the Answer
Essential architecture: Zup's framework for successful software. Join us in this article to learn about a framework that makes your life simpler by focusing on what matters.
June 2, 2023
by Otavio Santana CORE
· 3,561 Views · 2 Likes
Cucumber Selenium Tutorial: A Comprehensive Guide With Examples and Best Practices
Want to learn automation testing with Cucumber? Check out our detailed Cucumber Selenium tutorial with examples and best practices, and get the most out of it!
June 5, 2023
by Sarah Elson
· 2,203 Views · 3 Likes
Microservices With Apache Camel and Quarkus (Part 2)
Take a look at a scenario to deploy and run locally the simplified money transfer application presented in part 1 as Quarkus standalone services.
June 3, 2023
by Nicolas Duminil CORE
· 7,074 Views · 2 Likes
Health Check Response Format for HTTP APIs
Continue a journey of getting more familiar with HTTP APIs and learn about a great initiative to standardize health checks across the industry.
June 2, 2023
by Nicolas Fränkel CORE
· 3,566 Views · 4 Likes
A React Frontend With Go/Gin/Gorm Backend in One Project
In this article, the reader will learn how to build a Go project with a React frontend and a PostgreSQL database.
June 2, 2023
by Sven Loesekann
· 3,806 Views · 3 Likes
Effective Java Collection Framework: Best Practices and Tips
In this blog, we learn how to use the Java Collection Framework effectively, considering factors like utilizing the enhanced for loop, using generics, and avoiding raw types.
June 1, 2023
by Shailendra Bramhvanshi
· 5,456 Views · 5 Likes
Microservices With Apache Camel and Quarkus
This post proposes a microservices deployment model based on Camel, using a Java development stack, Quarkus as a runtime, and K8s as a cloud-native platform.
May 31, 2023
by Nicolas Duminil CORE
· 7,511 Views · 7 Likes
Angular Unit Testing With Karma and Jasmine
Jasmine is a JavaScript testing framework, and Karma is a Node-based testing tool for JavaScript code across multiple real browsers.
June 1, 2023
by Haresh Kumbhani
· 2,816 Views · 1 Like
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • ...
  • Next

ABOUT US

  • About DZone
  • Send feedback
  • Careers
  • Sitemap

ADVERTISE

  • Advertise with DZone

CONTRIBUTE ON DZONE

  • Article Submission Guidelines
  • Become a Contributor
  • Visit the Writers' Zone

LEGAL

  • Terms of Service
  • Privacy Policy

CONTACT US

  • 600 Park Offices Drive
  • Suite 300
  • Durham, NC 27709
  • support@dzone.com

Let's be friends: