Java

Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.

Latest Refcards and Trend Reports
Trend Report: Low Code and No Code
Trend Report: Modern Web Development
Refcard #024: Core Java

DZone's Featured Java Resources

Five Java Books Beginners and Professionals Should Read


By Mahesh Sharma
There's a good reason why Java is one of the most widely used programming languages: it is powerful and flexible. That adaptability means it can be used in a wide variety of applications, from web development to Android apps. For newcomers, however, the sheer volume of material can make it hard to know where to begin. But worry not! You won't need to look anywhere else after reading this article. We have compiled a list of the five best Java books for beginners, each of which is simple to read and understand while still doing an excellent job of explaining the fundamentals of the language. Together, these books provide a complete overview of the world of Java programming, covering everything from syntax and programming ideas to more advanced subjects such as data structures and object-oriented programming.

What Is Java?

Java is a widely used object-oriented programming language and flexible software platform that powers billions of devices across the globe, including computers, gaming consoles, medical equipment, and a broad variety of other products. Because it draws on the syntax and conventions of C and C++, Java offers developers several advantages, one of the most prominent being its remarkable portability: you can write code on a notebook computer and then simply move that code to any device, including mobile ones. Development of the language began in 1991 under James Gosling at Sun Microsystems (now owned by Oracle), and since its public release in 1995 it has remained a top option for developers all over the globe. The language was established with the objective of "write once, run anywhere": Java allows developers to concentrate on developing cutting-edge applications without worrying about whether their code will work correctly on other systems.

Although the names Java and JavaScript may sound similar, there is a significant difference between the two. Java is compiled, while JavaScript is not, and in contrast to JavaScript, Java can be executed on virtually any platform.

New and enhanced software development tools are being released at a dizzying rate, driving fast change in the industry. These technologies pose a threat to businesses that were previously considered vital; nonetheless, in the middle of all this upheaval, one language has stayed constant: Java. Even more remarkable is the fact that nearly three decades after its creation, Java remains a preferred language for application development, and developers continue to choose it over other popular languages such as Python, Ruby, PHP, Swift, and C++. It should come as no surprise, then, that knowledge of Java is valuable for anyone who wants to compete in today's job market. The language's longevity and popularity speak to its reliability and usefulness, which makes it a valuable tool for coders and organizations alike.

How To Determine Which Java Book Is Right for You

You might feel overwhelmed if you're just starting out in programming and looking for the perfect Java book, but don't worry: with a few guidelines you'll find the ideal resource quickly. First and foremost, evaluate your existing level of expertise. If you're a complete beginner, it's best to read a book that lays a solid foundation for you.
Give priority to authors who have years of real-world programming expertise and a track record of teaching Java effectively. It also helps to read reviews from other readers before buying: look into the readability, structure, and overall effectiveness of the material as a Java guide. Next, take your time and budget into account. Compare the advantages of a physical book against an e-book or online course, and decide whether you want a comprehensive book or a short guide. Last but not least, give some thought to how you take in information best. If you learn best through direct participation, look for a book packed with hands-on activities and projects. If you would rather take a more theoretical approach, choose a book that explores the "why" behind Java's features and the way it operates.

Top 5 Java Books for Beginners

1. Head First Java

Kathy Sierra and Bert Bates' Head First Java is widely regarded as the definitive introduction to the Java programming language. The book is packed with extensive coverage of Java programming fundamentals such as classes, threads, objects, collections, and language features. The content is delivered in a visually attractive way, and the book incorporates puzzles and games that make Java programming easier to comprehend. What sets this book apart from others on the market are its interviews with experienced Java programmers, who generously share their expertise and tips to accelerate the learning process for beginners. Early in the book, the authors take a deep dive into the concepts of inheritance and composition, which provide a terrific opportunity to improve your computing practice through problem-solving. Throughout the book, the reader will also find helpful tools in the form of vivid charts, memory maps, exercises, and bulleted lists to assist in comprehending design patterns. With 18 chapters covering topics ranging from basic introductions to distributed computing and deployment, this is without a doubt one of the best resources available for beginners just starting out in the world of Java programming. If you can have the greatest, why settle for anything else? Grab Head First Java to get started on your path to becoming a Java programming expert.

2. Java for Dummies

Java for Dummies, written by Barry A. Burd, is an excellent resource for anyone interested in delving into the realm of Java programming. Using the book's lucid instructions, readers can learn to design their own fundamental Java objects and become experts at code reuse. The book gives a full explanation of how Java code is processed on the CPU, supported by a wealth of visual aids, including useful photographs and screenshots. But that is not all Java for Dummies has to offer; it goes above and beyond to provide a high-quality reading experience. The book comprises nineteen chapters: the first provides readers with professional advice on how to make the most of their time with the book, while the last lists the ten best websites available to Java programmers.
Along the way, readers become familiar with the enhanced features and tools introduced in Java 9, learn approaches and strategies for integrating smaller applications into larger ones, and acquire a thorough understanding of Java objects and code reuse. The book also offers helpful guidance on managing events and exceptions, so readers will be well equipped to tackle programming difficulties after finishing it. Overall, Java for Dummies is a book worth reading for anyone who wants to become proficient in Java programming and push their abilities to the next level.

3. Java Programming for Beginners

Java Programming for Beginners, by Mark Lassoff, is a great way to get started in the world of Java programming. It walks you through the fundamentals of Java syntax as well as the more complex parts of object-oriented programming. By the end of the book, you will have a thorough grasp of Java SE and be able to create GUI-based Java programs that run on Windows, macOS, and Linux computers. The book is packed with information that is both informative and entertaining, along with challenging exercises and hundreds of code examples that can be executed and used as a learning resource. Reading it takes you from Java's data types through loops and conditionals, and then on to functions, classes, and file handling. The last chapter provides instructions on working with XML and examines the process of developing graphical user interfaces (GUIs). The book offers a practical approach to navigating the Java environment and covers all of the fundamental subjects necessary for a Java programmer.

4. Java: A Beginner's Guide

Herbert Schildt's Java: A Beginner's Guide is widely regarded as one of the best introductions to the Java programming language. At more than 700 pages, this gem covers all the fundamentals in an easy-to-read way and deserves a place as an aspiring programmer's primary reference. The book starts with the fundamentals of Java syntax, compiling, and application planning, but it moves on to more sophisticated subjects quickly. You'll get right into practical, hands-on lessons that push you to think carefully about the fundamental ideas behind Java programming. In addition, there is a test at the end of each chapter, so you'll have plenty of opportunities to put what you've learned into practice and confirm that you understand it. What sets this book apart from others on the market, however, are the helpful insights and suggestions provided by Java programmers with years of experience. These professionals share their insights, covering everything from everyday quirks to massive challenges, to help you overcome whatever stands in your way. Java: A Beginner's Guide may be too dense for some readers, but it's ideal for those who are prepared to put in the work and search for answers as they go along. So why wait? With the help of this invaluable book, you can get started on your path to becoming a Java expert right now.

5. Sams Teach Yourself Java

Sams Teach Yourself Java is distinguished not only by its outstanding writing style but also by its promise to help readers comprehend the language in less than 24 hours. Okay, maybe 24 hours is a bit of a stretch, but the fact remains that this book is an excellent way to learn Java quickly.
The activities are broken down into manageable chunks, and the explanations are both thorough and easy to follow. The book walks you through the full process of building a program, breaking it down into stages that are simple to comprehend and guiding you step by step. You'll learn how to examine the process and apply key ideas to future tasks, which helps you understand the language better overall. A solid understanding of the theory behind Java is one of the most essential components of writing code in the language, and this is where the book really shines: it makes you think about the whole process before you write a single line of code. Doing so puts you in a position to tackle even the most difficult programming problems with confidence. Sams Teach Yourself Java is a wonderful option for anybody who wants a deeper grasp of the language, whether they are beginners or intermediate coders.

Is It Worth Learning Java in 2023?

Are you considering learning Java in 2023? There is no need to keep investigating, because the answer is an emphatic yes. Java is becoming ever more crucial for software developers as the world's focus shifts toward mobile applications and convenience. It has been the third most popular language among employers for the past two years, and that doesn't look like it will slow down anytime soon. Although the pandemic clearly affected the number of available jobs, the demand for Java developers remains considerable. In fact, there are many compelling reasons to study Java in 2023.

Reasons Why You Should Seriously Consider Learning Java in 2023

Java Is Friendly for Beginners

Java has an open-door policy for beginners. It is a fantastic language for getting your feet wet in the realm of coding and navigating the complex landscape of software development. In addition, since Java programmers earn a wage that is on average higher than programmers in many other languages, Java is an excellent choice for new programmers as they extend their language skills and advance their careers.

Use of Java Is Not Going Away Anytime Soon

In recent years, Java has stayed in a remarkably stable position, with at least 60,000 jobs open at any given time. Python has made significant progress, but that has not displaced Java from its dominant position among programming languages in use today. Java has earned its reputation as the "workhorse" language of the programming industry for good reason. Looking into the future, it is a safe bet that Java will continue to be regarded as a highly effective programming language for many years to come. Because of its reliability and adaptability, it is an excellent investment for any programmer or company that aims to develop systems that will stand the test of time. You can relax knowing that Java will not be disappearing any time soon.

Versatile and Flexible Language

Companies were confronted with a significant obstacle during the pandemic when workers were required to work from home. Because many businesses did not have the appropriate infrastructure and equipment to support remote work, their workers were forced to use their own personal devices, such as laptops, mobile phones, and tablets.
However, the trend toward remote work began long before the pandemic and will continue even after it has passed.

Good News for Those Who Code in Java

Java is a very versatile and adaptable programming language that can operate on any operating system, including macOS, Windows, and even Android. Java allows businesses to build their own private software with the peace of mind that it will function faultlessly across all of the devices used by their workers, while maintaining high levels of safety, security, and reliability. Java is a strong answer for businesses that want to keep up with the times and give their workers the resources they need to do their jobs from any location, at any time.

Strong Support From the Community

Java has been around for decades and is one of the oldest programming languages still in widespread use. Many developers use Java for many kinds of challenges, so there is a good probability that solutions to most issues are already available, tried, and proven. Additionally, there are a large number of Java communities and groups on the internet and social media, where experienced developers and newcomers alike will find peers eager to lend a helping hand and work through the problems they are experiencing.

Multiple Open-Source Libraries Are Available for Java

The materials in open-source libraries may be copied, studied, modified, and shared. Java has a number of open-source libraries and tools, including JHipster, Maven, Google Guava, and Apache Commons, which can make Java development simpler, more affordable, and more efficient.

Java Has Powerful Development Tools

Java is more than just a programming language; its Integrated Development Environments (IDEs) make it a software development juggernaut. Industry-leading tools like Eclipse, NetBeans, and IntelliJ IDEA give developers all the resources they need to produce top-notch applications. These IDEs provide a wide variety of features, ranging from code completion and automatic refactoring to debugging and syntax highlighting. With Java, writing code is not only simpler but also quicker. When it comes to back-end application development, Java is the go-to solution for 90 percent of the organizations that make up the Fortune 500. Java serves as the basis for the Android operating system, is essential for cloud computing platforms such as Amazon Web Services and Microsoft's Windows Azure, and plays a key role in data processing with Apache Hadoop.

Java Can Run on a Wide Variety of Platforms

Java is a platform-independent programming language: the Java compiler translates source code into bytecode, and that bytecode can then be run on any platform with a Java Virtual Machine. Because it can operate across platforms this way, Java is often called a WORA language, for "write once, run anywhere." Thanks to this platform independence, many Java programs are developed in a Windows environment even though they are ultimately deployed and run on UNIX platforms.
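This portability is easy to see in practice. A minimal sketch (the file name Hello.java and its class are illustrative): compile a source file once, and the resulting .class file runs unchanged on any operating system with a JVM.

Shell
# Compile once, on any OS; javac emits platform-neutral bytecode (Hello.class)
javac Hello.java
# Run the same bytecode anywhere a JVM is installed, e.g., a UNIX server
java Hello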
Conclusion

To summarize, Java is a reliable and extensively used programming language that is important for developing a broad variety of software applications. Having the right resources can make a substantial difference in a beginner's ability to learn the language effectively. This article presented an overview of the five Java books considered among the best for beginners, all highly recommended by Java professionals and industry experts. Because they cover the fundamentals of programming as well as object-oriented programming, data structures, and algorithms, these books are a good place for beginners to start. By following the instructions and examples offered in these books and using them as a guide, beginners can build a strong foundation in Java programming and grow their ability to develop complex software.
Generics in Java and Their Implementation


By Bikash Jain
Generics in Java

In the Java programming language, generics were introduced in J2SE 5 to provide type-safe objects. Generics let the compiler detect bugs at compile time, which makes code more stable. Before generics were introduced, a collection could store any object type; with generics, programmers are required to store a particular object type.

Advantages of Java Generics

Three main advantages of using generics are given below:

1. Type-Safety

Generics allow a collection to store only a single object type; different object types cannot be stored. Without generics, any type of object can be stored:

Java
// declaring a raw list with the name dataList
List dataList = new ArrayList();
// adding an integer into the dataList
dataList.add(10);
// adding string data into the dataList
dataList.add("10");

With generics, we need to declare the object type we want to store:

Java
// declaring a list with the name dataList that may only hold Integer elements
List<Integer> dataList = new ArrayList<Integer>();
// adding an integer into the dataList
dataList.add(10);
// adding string data into the dataList
dataList.add("10"); // but this statement gives a compile-time error

2. No Need for Type Casting

With generics, object type casting is not required. Before generics, casting was necessary:

Java
// declaring a raw list with the name dataList
List dataList = new ArrayList();
// adding an element to the dataList
dataList.add("hello");
// typecasting is required to read it back
String s = (String) dataList.get(0);

With generics, there is no need for the cast:

Java
// declaring a list with the name dataList
List<String> dataList = new ArrayList<String>();
// adding an element to the dataList
dataList.add("hello");
// typecasting is not required
String s = dataList.get(0);

3. Checking at Compile Time

Issues will not occur at run time because they are checked at compile time. And according to good programming practice, handling a problem at compile time is far better than handling it at run time:

Java
// declaring a list with the name dataList
List<String> dataList = new ArrayList<String>();
// adding an element into the dataList
dataList.add("hello");
// trying to add an integer to the dataList gives a compile-time error
dataList.add(32);

Syntax

A Java generic collection can be used as:

Java
ClassOrInterface<Type>

An example of how generics are used in Java:

Java
ArrayList<String>

Example Program of Java Generics

The ArrayList class is used in this example, but in its place any class of the collection framework can be used, like Comparator, HashMap, TreeSet, HashSet, LinkedList, ArrayList, etc.
Java
// importing packages
import java.util.*;

// creating a class with the name GenericsExample
class GenericsExample {
    // main method
    public static void main(String args[]) {
        // declaring a list with the name dataList to store String elements
        ArrayList<String> dataList = new ArrayList<String>();
        // adding an element into the dataList
        dataList.add("hina");
        // adding an element into the dataList
        dataList.add("rina");
        // if we try to add an integer into the dataList, it will give a compile-time error
        //dataList.add(32); //compile-time error

        // accessing an element from dataList
        String s = dataList.get(1); // no need for type casting
        // printing an element of the list
        System.out.println("element is: " + s);

        // for iterating over the dataList elements
        Iterator<String> itr = dataList.iterator();
        // iterating and printing the elements of the list
        while (itr.hasNext()) {
            System.out.println(itr.next());
        }
    }
}

Output:

element is: rina
hina
rina

Java Generics Example Using Map

Here we use a map to demonstrate generics. A map stores data in the form of key-value pairs.

Java
// importing packages
import java.util.*;

// creating a class with the name GenericsExample
class GenericsExample {
    // main method
    public static void main(String args[]) {
        // declaring a map for storing keys of Integer type with String values
        Map<Integer, String> dataMap = new HashMap<Integer, String>();
        // adding some key-value pairs into the dataMap
        dataMap.put(3, "seema");
        dataMap.put(1, "hina");
        dataMap.put(4, "rina");

        // using dataMap.entrySet()
        Set<Map.Entry<Integer, String>> set = dataMap.entrySet();
        // creating an iterator for iterating over the dataMap
        Iterator<Map.Entry<Integer, String>> itr = set.iterator();
        // iterating and printing every key-value pair of the map
        while (itr.hasNext()) {
            // type casting is not required
            Map.Entry e = itr.next();
            System.out.println(e.getKey() + " " + e.getValue());
        }
    }
}

Output:

1 hina
3 seema
4 rina

Generic Class

A generic class is a class that can refer to any type. Here, to create a generic class, we use a type parameter T. A generic class declaration looks like a non-generic class declaration, except that the class name is followed by a type parameter section, which may declare one or more type parameters. Because it accepts type parameters, such a class is also called a parameterized type or parameterized class. The example below demonstrates the creation of a generic class.

Generic Class Creation

Java
class GenericClassExample<T> {
    T object;

    void addElement(T object) {
        this.object = object;
    }

    T get() {
        return object;
    }
}

Here the type T indicates that the class can refer to any type, such as Employee, String, or Integer. The type you specify for the class is used for data storage and retrieval.
Generic Class Implementation

Let us see an example for a better understanding of generic class usage:

Java
// creating a class with the name GenericExample
class GenericExample {
    // main method
    public static void main(String args[]) {
        // using the generic class created in the above example with the Integer type
        GenericClassExample<Integer> m = new GenericClassExample<Integer>();
        // calling addElement on m
        m.addElement(6);
        // if we try to call addElement with a String element, it will give a compile-time error
        //m.addElement("hina"); //compile-time error
        System.out.println(m.get());
    }
}

Output:

6

Generic Method

Similar to the generic class, generic methods can also be created, and a generic method can accept any type of argument. The declaration of a generic method is similar to that of a generic type, but the scope of the type parameter is limited to the method where it is declared. Generic methods are allowed to be both static and non-static. Let us understand generic methods in Java with an example that prints the elements of an array, where E is used to represent the elements:

Java
// creating a class with the name GenericExample
public class GenericExample {
    // creating a generic method for printing the elements of an array
    public static <E> void printElements(E[] elements) {
        // iterating over elements for printing the elements of the array
        for (E curElement : elements) {
            System.out.println(curElement);
        }
        System.out.println();
    }

    // main method
    public static void main(String args[]) {
        // declaring an array having Integer-type elements
        Integer[] arrayOfIntegerElements = { 10, 20, 30, 40, 50 };
        // declaring an array having Character-type elements
        Character[] arrayOfCharacterElements = { 'J', 'A', 'V', 'A', 'T', 'P', 'O', 'I', 'N', 'T' };

        System.out.println("Printing the elements of an Integer array");
        // calling the generic method printElements for the Integer array
        printElements(arrayOfIntegerElements);

        System.out.println("Printing the elements of a Character array");
        // calling the generic method printElements for the Character array
        printElements(arrayOfCharacterElements);
    }
}

Output:

Printing the elements of an Integer array
10
20
30
40
50

Printing the elements of a Character array
J
A
V
A
T
P
O
I
N
T

Wildcard in Java Generics

Wildcard elements in generics are represented by the question mark (?) symbol, and it stands for any type. If we write <? extends Number>, this means any child class of Number, such as Double, Float, or Integer; Number class methods can then be called through an instance of any of those child classes. A wildcard can be used as the type of a local variable, a return type, a field, or a parameter. However, wildcards cannot be used as type arguments for the invocation of a generic method or the creation of a generic instance.
Let us understand wildcards in Java generics with the help of the example given below:

Java
// importing packages
import java.util.*;

// creating an abstract class with the name Animal
abstract class Animal {
    // creating an abstract method with the name eat
    abstract void eat();
}

// creating a class with the name Cat which inherits the Animal class
class Cat extends Animal {
    void eat() {
        System.out.println("Cat can eat");
    }
}

// creating a class with the name Dog which inherits the Animal class
class Dog extends Animal {
    void eat() {
        System.out.println("Dog can eat");
    }
}

// creating a class for testing the wildcards of Java generics
class GenericsExample {
    // creating a method which accepts only child classes of Animal
    public static void animalEat(List<? extends Animal> lists) {
        for (Animal a : lists) {
            // Animal class method called through the instance of the child class
            a.eat();
        }
    }

    // main method
    public static void main(String args[]) {
        // creating a list of type Cat
        List<Cat> list = new ArrayList<Cat>();
        list.add(new Cat());
        list.add(new Cat());
        list.add(new Cat());

        // creating a list of type Dog
        List<Dog> list1 = new ArrayList<Dog>();
        list1.add(new Dog());
        list1.add(new Dog());

        // calling animalEat for list
        animalEat(list);
        // calling animalEat for list1
        animalEat(list1);
    }
}

Output:

Cat can eat
Cat can eat
Cat can eat
Dog can eat
Dog can eat

Upper Bounded Wildcards

The main objective of using upper-bounded wildcards is to relax the restrictions on a variable. An upper-bounded wildcard restricts an unknown type to be a particular type or a subtype of that type. It is written as a question mark symbol followed by the extends keyword (for a class) or the implements keyword (for an interface), followed by the upper bound.

Syntax of Upper Bounded Wildcard

? extends Type

Example of Upper Bounded Wildcard

Let us understand the upper-bounded wildcard with an example. Here we use an upper-bounded wildcard to write a method that works for both List<Double> and List<Integer>:

Java
// importing packages
import java.util.ArrayList;

// creating a class with the name UpperBoundWildcardExample
public class UpperBoundWildcardExample {
    // creating a method by using an upper-bounded wildcard
    private static Double sum(ArrayList<? extends Number> list) {
        double add = 0.0;
        for (Number n : list) {
            add = add + n.doubleValue();
        }
        return add;
    }

    // main method
    public static void main(String[] args) {
        // creating a list of Integer type
        ArrayList<Integer> list1 = new ArrayList<Integer>();
        // adding elements to list1
        list1.add(30);
        list1.add(40);
        // calling the sum method and printing the sum
        System.out.println("Sum is= " + sum(list1));

        // creating a list of Double type
        ArrayList<Double> list2 = new ArrayList<Double>();
        list2.add(10.0);
        list2.add(20.0);
        // calling the sum method and printing the sum
        System.out.println("Sum is= " + sum(list2));
    }
}

Output:

Sum is= 70.0
Sum is= 30.0

Unbounded Wildcards

An unbounded wildcard, such as List<?>, specifies a list of an unknown type.

Example of Unbounded Wildcards

Java
// importing packages
import java.util.Arrays;
import java.util.List;

// creating a class with the name UnboundedWildcardExample
public class UnboundedWildcardExample {
    // creating a method displayElements by using an unbounded wildcard
    public static void displayElements(List<?> list) {
        for (Object n : list) {
            System.out.println(n);
        }
    }

    // main method
    public static void main(String[] args) {
        // creating a list of type Integer
        List<Integer> list1 = Arrays.asList(6, 7, 8);
        System.out.println("printing the values of the integer list");
        // calling displayElements for list1
        displayElements(list1);

        // creating a list of type String
        List<String> list2 = Arrays.asList("six", "seven", "eight");
        System.out.println("printing the values of the string list");
        // calling displayElements for list2
        displayElements(list2);
    }
}

Output:

printing the values of the integer list
6
7
8
printing the values of the string list
six
seven
eight

Lower Bounded Wildcards

A lower-bounded wildcard restricts an unknown type to be a particular type or a supertype of that type. It is written as a question mark symbol followed by the super keyword, followed by the lower bound.

Syntax of Lower Bounded Wildcard

? super Type

Example of Lower Bounded Wildcard

Java
// importing packages
import java.util.*;

// creating a class with the name LowerBoundWildcardExample
public class LowerBoundWildcardExample {
    // creating a method by using a lower-bounded wildcard
    private static void displayElements(List<? super Integer> list) {
        for (Object n : list) {
            System.out.println(n);
        }
    }

    // main method
    public static void main(String[] args) {
        // creating a list of type Integer
        List<Integer> list1 = Arrays.asList(6, 7, 8);
        System.out.println("printing the values of the integer list");
        // calling displayElements for list1
        displayElements(list1);

        // creating a list of type Number (a supertype of Integer)
        List<Number> list2 = Arrays.asList(8.0, 9.8, 7.6);
        System.out.println("printing the values of the number list");
        // calling displayElements for list2
        displayElements(list2);
    }
}

Output:

printing the values of the integer list
6
7
8
printing the values of the number list
8.0
9.8
7.6

Conclusion

With generics, introduced into the Java programming language in J2SE 5, programmers are required to store particular object types. Type safety, no need for type casting, and checking at compile time are the three main advantages of using generics. A generic class is a class that can refer to any type, and similar to generic classes, generic methods can be created that accept any type of argument. Wildcard elements in generics are represented by the question mark (?) symbol, and upper-bounded, lower-bounded, and unbounded are the three types of wildcards in Java generics.
Does the OCP Exam Still Make Sense?
By Jasper Sprengers CORE
Dynamic Data Processing Using Serverless Java With Quarkus on AWS Lambda by Enabling SnapStart (Part 2)
By Daniel Oh CORE
Revolutionize JSON Parsing in Java With Manifold
By Shai Almog CORE
Java Concurrency: Condition

Previously we checked on ReentrantLock and its fairness. One of the things we can stumble upon is the creation of a Condition. By using a Condition, we can create mechanisms that allow threads to wait for specific conditions to be met before proceeding with their execution.

Java
public interface Condition {

    void await() throws InterruptedException;

    void awaitUninterruptibly();

    long awaitNanos(long nanosTimeout) throws InterruptedException;

    boolean await(long time, TimeUnit unit) throws InterruptedException;

    boolean awaitUntil(Date deadline) throws InterruptedException;

    void signal();

    void signalAll();
}

The closest we have come to this so far is the Object monitor's wait method. A Condition is bound to a Lock, and a thread cannot interact with a Condition and its methods if it does not hold that Lock. Also, a Condition uses the underlying Lock mechanisms: for example, signal and signalAll will use the underlying queue of threads maintained by the Lock and will notify them to wake up.

One of the obvious things to implement using Conditions is a BlockingQueue: publisher threads dispatch data onto a queue, worker threads process data from the queue, and workers should wait when there is no data in the queue.

For a worker thread, if the condition is met, the flow is the following:

1. Acquire the lock
2. Check the condition
3. Process data
4. Release the lock

If the condition is not met, the flow changes slightly:

1. Acquire the lock
2. Check the condition
3. Wait until the condition is met
4. Re-acquire the lock
5. Process data
6. Release the lock

The publisher thread, whenever it adds a message, should notify the threads waiting on the condition. Its workflow is:

1. Acquire the lock
2. Publish data
3. Notify the workers
4. Release the lock

Obviously, this functionality already exists through the BlockingQueue interface and the LinkedBlockingDeque and ArrayBlockingQueue implementations. We will proceed with an implementation for the sake of the example.
Let’s see the message queue:

Java
package com.gkatzioura.concurrency.lock.condition;

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MessageQueue<T> {

    private Queue<T> queue = new LinkedList<>();

    private Lock lock = new ReentrantLock();
    private Condition hasMessages = lock.newCondition();

    public void publish(T message) {
        lock.lock();
        try {
            queue.offer(message);
            hasMessages.signal();
        } finally {
            lock.unlock();
        }
    }

    public T receive() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                hasMessages.await();
            }
            return queue.poll();
        } finally {
            lock.unlock();
        }
    }
}

Now let’s put it into action:

Java
MessageQueue<String> messageQueue = new MessageQueue<>();

@Test
void testPublish() throws InterruptedException {
    Thread publisher = new Thread(() -> {
        for (int i = 0; i < 10; i++) {
            String message = "Sending message num: " + i;
            log.info("Sending [{}]", message);
            messageQueue.publish(message);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    });

    Thread worker1 = new Thread(() -> {
        for (int i = 0; i < 5; i++) {
            try {
                String message = messageQueue.receive();
                log.info("Received: [{}]", message);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    });

    Thread worker2 = new Thread(() -> {
        for (int i = 0; i < 5; i++) {
            try {
                String message = messageQueue.receive();
                log.info("Received: [{}]", message);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    });

    publisher.start();
    worker1.start();
    worker2.start();

    publisher.join();
    worker1.join();
    worker2.join();
}

That’s it! Our workers processed the expected messages and waited when the queue was empty.
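A natural extension of this queue, not part of the original example, is a timed receive. The sketch below follows the standard awaitNanos pattern from the Condition Javadoc: it waits only for the remaining time and gives up with null once the deadline passes (it would be added to MessageQueue, importing java.util.concurrent.TimeUnit):

Java
public T receive(long timeout, TimeUnit unit) throws InterruptedException {
    long nanos = unit.toNanos(timeout);
    lock.lock();
    try {
        while (queue.isEmpty()) {
            if (nanos <= 0L) {
                // Deadline elapsed while the queue stayed empty
                return null;
            }
            // awaitNanos returns an estimate of the remaining time to wait
            nanos = hasMessages.awaitNanos(nanos);
        }
        return queue.poll();
    } finally {
        lock.unlock();
    }
}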

By Emmanouil Gkatziouras CORE
Ultrafast Application Development With MicroStream and PostgreSQL Integration Using Jakarta EE

Integrating MicroStream and PostgreSQL, leveraging the new Jakarta EE specification known as Jakarta Data, presents a powerful solution for developing ultrafast applications with SQL databases. MicroStream is a high-performance, in-memory object graph persistence library that enables efficient data storage and retrieval. At the same time, PostgreSQL is a widely used, robust SQL database system known for its reliability and scalability. Developers can achieve remarkable application performance and efficiency by combining these technologies' strengths and harnessing Jakarta Data's capabilities.

This article will explore the integration between MicroStream and PostgreSQL, focusing on leveraging Jakarta Data to enhance the development process and create ultrafast applications. We will delve into the key features and benefits of MicroStream and PostgreSQL, highlighting their respective strengths and use cases. Furthermore, we will dive into the Jakarta Data specification, which provides a standardized approach to working with data in Jakarta EE applications, and see how it enables seamless integration between MicroStream and PostgreSQL. By the end of this article, you will have a comprehensive understanding of how to leverage MicroStream, PostgreSQL, and Jakarta Data to build high-performance applications that combine the benefits of in-memory storage and SQL databases.

Facing the Java and SQL Integration Challenge

The biggest challenge in integrating SQL databases with Java applications is the impedance mismatch between the object-oriented programming (OOP) paradigm used in Java and the relational model used by SQL. This impedance mismatch refers to the fundamental differences in how data is structured and manipulated in the two paradigms, which force conversion and mapping between the object-oriented world and the relational database world.

Java is known for its powerful OOP features, such as encapsulation, polymorphism, and inheritance, which enable developers to create modular, maintainable, and readable code. However, these concepts do not directly translate to the relational database model, where data is stored in tables with rows and columns. As a result, when working with SQL databases, developers often have to perform the tedious and error-prone tasks of mapping Java objects to database tables and converting between their respective representations.

This impedance mismatch not only hinders productivity but also consumes significant computational power. According to some estimates, up to 90% of computing power can be consumed by the conversion and mapping processes between Java objects and SQL databases. It impacts performance and increases the cost of cloud resources, making it a concern for organizations following FinOps practices.

MicroStream addresses this challenge with its in-memory object graph persistence approach by eliminating the need for a separate SQL database and the associated mapping process. With MicroStream, Java objects can be stored directly in memory, without the overhead of conversions to and from a relational database. This results in significant performance improvements and reduces the power consumption required for data mapping. By using MicroStream, developers can leverage the natural OOP capabilities of Java, such as encapsulation and polymorphism, without extensive mapping and conversion. This leads to cleaner and more maintainable code and reduces the complexity and cost of managing a separate database system.
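To make the mismatch concrete, here is a minimal sketch of the kind of hand-written JDBC mapping code it forces; the Plane record, the plane table, and its columns are illustrative, not part of any example in this article:

Java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

record Plane(String id, String model, int year, String manufacturer) {}

class PlaneJdbcMapper {
    // Every query needs this by-hand translation between rows and objects
    List<Plane> findByManufacturer(Connection connection, String manufacturer) throws SQLException {
        String sql = "SELECT id, model, year, manufacturer FROM plane WHERE manufacturer = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, manufacturer);
            try (ResultSet rows = statement.executeQuery()) {
                List<Plane> result = new ArrayList<>();
                while (rows.next()) {
                    // Each column is converted back into an object field manually
                    result.add(new Plane(
                            rows.getString("id"),
                            rows.getString("model"),
                            rows.getInt("year"),
                            rows.getString("manufacturer")));
                }
                return result;
            }
        }
    }
}

It is exactly this translation layer that MicroStream's direct object storage removes.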
In the context of a cloud environment, the reduction in power consumption provided by MicroStream translates to cost savings, aligning with the principles of the FinOps culture. Organizations can optimize their cloud infrastructure usage and reduce operational expenses by minimizing the resources needed for data mapping and conversion. Overall, MicroStream helps alleviate the impedance mismatch between SQL databases and Java, enabling developers to build high-performance applications that take advantage of OOP's natural design and readability while reducing the power consumption and costs associated with data mapping.

While addressing the impedance mismatch between SQL databases and Java applications can bring several advantages, it is vital to consider the trade-offs involved:

• Increased complexity: Working with an impedance mismatch adds complexity to the development process. Developers need to manage and maintain the mapping between the object-oriented model and the relational database model, which can introduce additional layers of code and increase the overall complexity of the application.
• Performance overhead: The conversion and mapping process between Java objects and SQL databases can introduce performance overhead. The need to transform data structures and execute database queries can impact overall application performance, especially when dealing with large datasets or complex data models.
• Development time and effort: Addressing the impedance mismatch often requires writing additional code for mapping and conversion, which adds to the development time and effort. Developers need to implement and maintain the logic that synchronizes data between the object-oriented model and the relational database, which increases the development effort and introduces potential sources of errors.
• Maintenance challenges: When an impedance mismatch exists, any change to the object-oriented model or the database schema may require updates to the mapping and conversion logic. This creates maintenance challenges, as modifications to one side of the system may necessitate adjustments on the other side to ensure consistency and proper data handling.
• Learning curve: Dealing with the impedance mismatch typically requires understanding the intricacies of both the object-oriented paradigm and the relational database model. Developers must have a good grasp of SQL, database design, and mapping techniques, which may mean a learning curve for those accustomed to working solely in the object-oriented domain.

It is essential to weigh these trade-offs against the benefits and the specific requirements of the application. Different scenarios may prioritize different aspects, such as performance, development speed, or long-term maintenance. Alternative solutions like MicroStream can help mitigate these trade-offs by providing a direct object storage approach and reducing the complexity and performance overhead associated with the impedance mismatch.

Enough theory for today; let's see this integration in practice. It will be a simple application using Java, Maven, and Java SE. The first step is to have PostgreSQL installed.
To make it easier, let's use Docker and run the following command:

Shell
docker run --rm=true --name postgres-instance -e POSTGRES_USER=micronaut \
  -e POSTGRES_PASSWORD=micronaut -e POSTGRES_DB=airplane \
  -p 5432:5432 postgres:14.1

Ultrafast With PostgreSQL and MicroStream

In this example, let's use an airplane domain where we'll have several planes and models that we'll filter by manufacturer. The first step of our project is the Maven dependencies. Besides CDI, we need to include the MicroStream Jakarta Data integration, the MicroStream relational (SQL) integration, and the PostgreSQL driver:

XML
<dependency>
    <groupId>expert.os.integration</groupId>
    <artifactId>microstream-jakarta-data</artifactId>
    <version>${microstream.data.version}</version>
</dependency>
<dependency>
    <groupId>one.microstream</groupId>
    <artifactId>microstream-afs-sql</artifactId>
    <version>${microstream.version}</version>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.14</version>
</dependency>

The second step is to overwrite the configuration to use a relational database. First we create a DataSource, which we then inject and use in the StorageManager:

Java
@ApplicationScoped
class DataSourceSupplier implements Supplier<DataSource> {

    private static final String JDBC = "microstream.postgresql.jdbc";
    private static final String USER = "microstream.postgresql.user";
    private static final String PASSWORD = "microstream.postgresql.password";

    @Override
    @Produces
    @ApplicationScoped
    public DataSource get() {
        Config config = ConfigProvider.getConfig();
        PGSimpleDataSource dataSource = new PGSimpleDataSource();
        dataSource.setUrl(config.getValue(JDBC, String.class));
        dataSource.setUser(config.getValue(USER, String.class));
        dataSource.setPassword(config.getValue(PASSWORD, String.class));
        return dataSource;
    }
}

@Alternative
@Priority(Interceptor.Priority.APPLICATION)
@ApplicationScoped
class SQLSupplier implements Supplier<StorageManager> {

    @Inject
    private DataSource dataSource;

    @Override
    @Produces
    @ApplicationScoped
    public StorageManager get() {
        SqlFileSystem fileSystem = SqlFileSystem.New(
                SqlConnector.Caching(
                        SqlProviderPostgres.New(dataSource)
                )
        );
        return EmbeddedStorage.start(fileSystem.ensureDirectoryPath("microstream_storage"));
    }

    public void close(@Disposes StorageManager manager) {
        manager.close();
    }
}

With the configuration ready, the next step is to create the entity and its repository. In our sample, we'll make Airplane the entity and Airport the repository:

Java
@Repository
public interface Airport extends CrudRepository<Airplane, String> {

    List<Airplane> findByModel(String model);
}

@Entity
public class Airplane {

    @Id
    private String id;

    @Column("title")
    private String model;

    @Column("year")
    private Year year;

    @Column
    private String manufacturer;
}

The last step is executing the application, creating airplanes, and filtering by manufacturer. Thanks to the Jakarta EE and MicroProfile specifications, the integration works with microservices and monoliths alike.
Java
public static void main(String[] args) {
    try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
        Airplane airplane = Airplane.id("1").model("777").year(1994).manufacturer("Boeing");
        Airplane airplane2 = Airplane.id("2").model("767").year(1982).manufacturer("Boeing");
        Airplane airplane3 = Airplane.id("3").model("747-8").year(2010).manufacturer("Boeing");
        Airplane airplane4 = Airplane.id("4").model("E-175").year(2023).manufacturer("Embraer");
        Airplane airplane5 = Airplane.id("5").model("A319").year(1995).manufacturer("Airbus");

        Airport airport = container.select(Airport.class).get();
        airport.saveAll(List.of(airplane, airplane2, airplane3, airplane4, airplane5));

        var boeings = airport.findByModel(airplane.getModel());
        var all = airport.findAll().toList();

        System.out.println("The Boeings: " + boeings);
        System.out.println("The Boeing models available: " + boeings.size());
        System.out.println("The airport total: " + all.size());
    }
    System.exit(0);
}

Conclusion

In conclusion, the impedance mismatch between SQL databases and Java applications presents significant challenges in terms of complexity, performance, development effort, maintenance, and the learning curve. However, by understanding these trade-offs and exploring alternative solutions, such as MicroStream, developers can mitigate these challenges and achieve better outcomes.

MicroStream offers a powerful approach to addressing the impedance mismatch by eliminating the need for a separate mapping layer between objects and the database, reducing the complexity of mapping and conversion processes. With MicroStream, developers can leverage the natural benefits of object-oriented programming in Java without sacrificing performance or increasing computational overhead. By storing Java objects directly in memory, MicroStream enables efficient data storage and retrieval, resulting in improved application performance. It eliminates the need for complex mapping logic and reduces the development effort required to synchronize data between the object-oriented model and the relational database.

Moreover, MicroStream aligns with the principles of the FinOps culture by reducing power consumption, which translates into cost savings in cloud environments. By optimizing resource usage and minimizing the need for data mapping and conversion, MicroStream contributes to a more cost-effective and efficient application architecture.

While there are trade-offs associated with the impedance mismatch, such as increased complexity and maintenance challenges, MicroStream offers a viable solution that balances these trade-offs and enables developers to build ultrafast applications with SQL databases. By leveraging the power of the Jakarta Data specification and MicroStream's in-memory object graph persistence, developers can achieve a harmonious integration between Java and SQL databases, enhancing application performance and reducing development complexity.

In the rapidly evolving application development landscape, understanding the challenges and available solutions for the impedance mismatch is crucial. With MicroStream, developers can embrace the advantages of object-oriented programming while seamlessly integrating with SQL databases, paving the way for efficient, scalable, and high-performance applications.

Source: MicroStream Integration on GitHub

By Otavio Santana CORE
Preventing Data Loss With Kafka Listeners in Spring Boot

Data loss is one of the biggest problems developers face when building distributed systems. Whether due to network issues or code bugs, data loss can have serious consequences for enterprises. In this article, we'll look at how to build Kafka listeners with Spring Boot and how to use Kafka's acknowledgment mechanisms to prevent data loss and ensure the reliability of our systems.

Apache Kafka

Apache Kafka is a distributed message platform used to store and deliver messages. Once a message is written to Kafka, it will be kept there according to a retention policy. The consumer groups mechanism is used to read out messages. The offset for each consumer group reflects the stage of message processing and keeps track of the progress of each consumer group in reading messages from a partition. This allows each consumer group to independently read messages from a topic and resume reading from where it left off in case of failures or restarts. After successfully processing a message, a consumer sends an acknowledgment to Kafka, and the offset pointer for that consumer group is shifted. As mentioned, each consumer group stores its own offset values in the message broker, allowing messages to be read independently.

When we talk about high-reliability systems that must guarantee no data loss, we must consider all possible scenarios. Apache Kafka, by design, already has features to ensure reliability. We, as consumers of messages, must also provide proper reliability. But what can go wrong?

• The consumer receives the message and crashes before it can process it
• The consumer receives the message, processes it, and then crashes
• Any network problems

These failures can happen for reasons beyond our control: temporary network unavailability, an incident on the instance, pod eviction in a K8s cluster, etc. Kafka guarantees message delivery using the acknowledgment mechanism: at-least-once delivery. It means that the message will be delivered at least once, but under certain circumstances, it can be delivered several times. All we need to do is configure Apache Kafka correctly and be able to react to duplicate messages if needed. Let's try to implement this in practice.

Run Apache Kafka

To start the message broker, we also need ZooKeeper. The easiest way to run both is with docker-compose. Create the file docker-compose.yml:

YAML
---
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.3
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:7.3.3
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1

Create a new topic:

Shell
docker exec broker \
  kafka-topics --bootstrap-server broker:9092 \
  --create \
  --topic demo

To produce messages, you can run the command:

Shell
docker exec -ti broker \
  kafka-console-producer --bootstrap-server broker:9092 \
  --topic demo

Each line is a new message. When finished, press Ctrl+C:

Shell
>first
>second
>third
>^C%

Messages have been written and will be stored in Apache Kafka.
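To verify that the messages arrived, you can read them back with the console consumer that ships with the same image (assuming the broker container from the compose file above):

Shell
docker exec -ti broker \
  kafka-console-consumer --bootstrap-server broker:9092 \
  --topic demo \
  --from-beginning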
Spring Boot Application

Create a Gradle project and add the necessary dependencies to build.gradle:

Groovy
plugins {
    id 'java'
    id 'org.springframework.boot' version '2.7.10'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '17'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    implementation 'org.springframework.kafka:spring-kafka'
    compileOnly 'org.projectlombok:lombok:1.18.26'
    annotationProcessor 'org.projectlombok:lombok:1.18.26'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.springframework.kafka:spring-kafka-test'
    testCompileOnly 'org.projectlombok:lombok:1.18.26'
    testAnnotationProcessor 'org.projectlombok:lombok:1.18.26'
}

application.yml:

YAML
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: demo-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Let's write an event handler:

Java
@Component
@Slf4j
public class DemoListener {

    @KafkaListener(topics = "demo", groupId = "demo-group")
    void processKafkaEvents(ConsumerRecord<String, String> record) {
        log.info("Try to process message");
        // Some code
        log.info("Processed value: " + record.value());
    }
}

Execution result:

Shell
Try to process message
Processed value: first
Try to process message
Processed value: second
Try to process message
Processed value: third

But what if an error happens during message processing? In that case, we need to handle it correctly. If the error is related to an invalid message, we can write it to the log or place the message in a separate topic, a DLT (dead letter topic), for further analysis. And what if processing involves calling another microservice, but that microservice doesn't answer? In this case, we may need a retry mechanism, which we can set up by configuring a DefaultErrorHandler:

Java
@Configuration
@Slf4j
public class KafkaConfiguration {

    @Bean
    public DefaultErrorHandler errorHandler() {
        BackOff fixedBackOff = new FixedBackOff(5000, 3);
        DefaultErrorHandler errorHandler = new DefaultErrorHandler((consumerRecord, exception) -> {
            log.error("Couldn't process message: {}; {}", consumerRecord.value().toString(), exception.toString());
        }, fixedBackOff);
        errorHandler.addNotRetryableExceptions(NullPointerException.class);
        return errorHandler;
    }
}

Here we have specified that in case of an error, we will retry (a maximum of three times) at intervals of five seconds. But if we get an NPE, we won't retry; we'll just write a message to the log and skip the record. If we want more flexibility in error handling, we can manage acknowledgments manually:

YAML
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: demo-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        enable.auto.commit: false
    listener:
      ack-mode: MANUAL

Here we set spring.kafka.consumer.properties.enable.auto.commit=false (when true, the consumer's offset is periodically committed in the background at the interval given by auto.commit.interval.ms, which defaults to 5000 ms) and spring.kafka.listener.ack-mode=MANUAL, which means we want to control the commit mechanism ourselves.
Now we can control the sending of the acknowledgment ourselves:

Java
@KafkaListener(topics = "demo", groupId = "demo-group")
void processKafkaEvents(ConsumerRecord<String, String> record, Acknowledgment acknowledgment) {
    log.info("Try to process message");
    try {
        // Some code
        log.info("Processed value: " + record.value());
        acknowledgment.acknowledge();
    } catch (SocketTimeoutException e) {
        log.error("Error while processing message. Try again later");
        acknowledgment.nack(Duration.ofSeconds(5));
    } catch (Exception e) {
        log.error("Error while processing message: {}", record.value());
        acknowledgment.acknowledge();
    }
}

The Acknowledgment object allows you to explicitly acknowledge or reject (nack) the message. By calling acknowledge(), you are telling Kafka that the message has been successfully processed and its offset can be committed. By calling nack(), you are telling Kafka that the message should be redelivered for processing after a specified delay (e.g., when another microservice isn't responding).

Conclusion

Data loss prevention is critical for Kafka consumer applications. In this article, we looked at some best practices for exception handling and data loss prevention with Spring Boot. By following these practices, you can make your application more resilient to failures, so that it gracefully recovers from errors without losing data, giving you a robust and reliable Kafka consumer application.
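A postscript on the dead letter topic mentioned above: with spring-kafka you don't have to write that routing by hand. Here is a minimal sketch (assuming a KafkaTemplate with suitable producer serializers is configured in the context; spring-kafka's default naming publishes failed records to a topic named after the original one with a .DLT suffix):

Java
@Bean
public DefaultErrorHandler dltErrorHandler(KafkaOperations<Object, Object> kafkaTemplate) {
    // After the retries are exhausted, publish the failed record to <topic>.DLT
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
    return new DefaultErrorHandler(recoverer, new FixedBackOff(5000, 3));
}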

By Viacheslav Shago
Java String Templates Today

In our last post, we introduced you to the Manifold project and how it offers a revolutionary set of language extensions for Java, including the ability to parse and process JSON files seamlessly in Java. Today, we will take a look at another exciting feature of the Manifold project: string templates.

But before we get to that, some of the feedback I got from the previous post was that I was unclear about what Manifold is. Manifold is a combination of an IDE plugin and plugins to Maven or Gradle. Once it is set up, we can enhance the Java language (or environment) almost seamlessly, in a fluid way. A frequent question was, "How is it different from something like Lombok?" There are many similarities and, in fact, if you understand Lombok, then you are on your way to understanding Manifold. Lombok is a great solution for some problems in the Java language. It is a bandaid on the verbosity of Java and some of its odd limitations (I mean bandaid as a compliment, no hate mail). Manifold differs from Lombok in several critical ways:

- It's modular: All the extensions built into Manifold are separate from one another. You can activate a particular feature or leave it out of the compiler toolchain.
- It's bigger: Lombok has many features, but Manifold's scope is fantastic and far more ambitious.
- It tries to do the "right thing": Lombok is odd. We declare private fields but then use getters and setters as if they aren't private. Manifold uses properties (which we will discuss later) that more closely resemble what the Java language "should have offered."

Manifold also has some drawbacks:

- It only works as a compiler toolchain and only in one way. Lombok code can be compiled back to plain Java source and removed.
- It only supports one IDE: IntelliJ.

These drawbacks partially relate to the age of Manifold, which is a newer project by comparison. But they also relate to a different focus: Manifold concentrates on language functionality and a single, fluid working result.

JEP 430 String Interpolation

One of the big features coming to JDK 21 is JEP 430, a string interpolation language change. It will allow writing code like this:

Java
String name = "Joan";
String info = STR."My name is \{name}";

In this case, info will have the value "My name is Joan". This is just the tip of the iceberg in this JEP, as the entire architecture is pluggable. I will discuss this in a future video, but for now, the basic functionality we see here is pretty fantastic. Unfortunately, it will take years before we can use this in production. It will be in preview in JDK 21, then it will need to be approved. We will wait for an LTS, and then wait for the LTS to reach critical mass. In the meantime, can we use something as nice as this today?

Maven Dependencies

Before we dive into the code, I want to remind you that all the code for this and other videos in this series is available on GitHub (feel free to star it and follow). String templating itself has no dependencies: we still need to make changes to the pom file, but all that's needed is the compiler plugin. I'm adding one dependency here only for the advanced templates we will discuss soon. That means string templates are a compile-time feature and have no runtime impact!
XML
<dependencies>
    <dependency>
        <groupId>systems.manifold</groupId>
        <artifactId>manifold-templates-rt</artifactId>
        <version>${manifold.version}</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.8.0</version>
            <configuration>
                <source>19</source>
                <target>19</target>
                <encoding>UTF-8</encoding>
                <compilerArgs>
                    <!-- Configure manifold plugin -->
                    <arg>-Xplugin:Manifold</arg>
                </compilerArgs>
                <!-- Add the processor path for the plugin -->
                <annotationProcessorPaths>
                    <path>
                        <groupId>systems.manifold</groupId>
                        <artifactId>manifold-strings</artifactId>
                        <version>${manifold.version}</version>
                    </path>
                    <path>
                        <groupId>systems.manifold</groupId>
                        <artifactId>manifold-templates</artifactId>
                        <version>${manifold.version}</version>
                    </path>
                </annotationProcessorPaths>
            </configuration>
        </plugin>
    </plugins>
</build>

Manifold String Interpolation

To begin, we can create a new variable that we can use to get external input. In the second line, we integrate that variable into the printout:

Java
String world = args.length > 0 ? args[0] : "world";
System.out.println("Hello $world! I can write \$world as the variable...");

The backslash escape disables the templating behavior, just like other escape sequences in Java strings. This will print "Hello world! I can write $world as the variable...". There's something you can't really see in the code; you need to look at a screenshot of the same code. It's subtle: the $world expression is colored differently. It's no longer just a string but a variable embedded in a string. This means we can control-click it and go to the variable declaration, rename it, or find its usages.

There's another way to escape a string: we can use the @DisableStringLiteralTemplates annotation on a method or a class to disable this functionality in the respective block of code. This can be useful if we use the dollar sign frequently in a block of code:

Java
@DisableStringLiteralTemplates
private static void noTemplate(String word) {
    System.out.println("Hello $world!");
}

Templates

The Manifold project allows us to create JSP-like templates without all of the baggage. We can assign a base class to a template to create generic code for the templates and place common functionality in a single location. We can create a file called HelloTemplate.html.mtl in the resources/templates directory with the following content. Notice the params we define in the template file can be anything:

<%@ params(String title, String body) %>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>${title}</title>
</head>
<body>
${body}
</body>
</html>

This will seem very familiar to those of us with a JSP background. We can then use the file in Java code like this: we pass the parameters, and they replace the appropriate blocks in the HTML file.

Java
System.out.println(HelloTemplate.render("My Title", "My Body"));

Notice that the template is compiled to a class, similar to JSP. Unlike JSP, this template isn't a servlet and can be used in any context: a local application, a server, etc. The templating language is more lightweight and doesn't depend on various server APIs. It is also less mature. The main value is in using such an API to generate arbitrary files like Java source files or configuration files. The templating capabilities are powerful yet simple. Just like in JSP, we can embed Java code into the template, e.g.,
we can include control flow and similar constructs just like we could in JSP:

<% if(body != null) {%>
${body}
<% } %>

Why Not JSP, Velocity, Thymeleaf, or Freemarker?

There are many templating languages in Java already; adding yet another one seems like needless duplication. I think all of those are great, and this isn't meant to replace them, at least not yet. Their focus is very much on web generation, and they might not be ideal for more fluid use cases like code generation or web frameworks like Spark. Another big advantage is size and performance. All of those frameworks have many dependencies and a lot of runtime overhead; even JSP performs its initial compilation at runtime by default. This templating support is compiled and Spartan, in a good way. It's fast, simple, and deeply integrated into the application flow.

Import

We can import Java packages just like we can in every Java class, using code like this:

<%@ import com.debugagent.stringtemplates.* %>

Once imported, we can use any class within the code. Notice that this import statement must come above the other lines in the code, just like a regular import statement.

Include

We can use include to include another template into the current template, allowing us to assemble sophisticated templates from parts like headers and footers. If we want to generate a complex Java class, we can wrap the boilerplate in a generic template and include it. We can conditionally include a template using an if statement and use a for loop to include multiple entries:

<%@ include JavaCode("ClassName", classBody) %>

Notice that we can include an entry with parameters and pass them along to the underlying template. We can pass hardcoded strings or variables along the include chain.

A Lot More

I skipped extends because of a documentation issue, which has since been fixed; it has a lot of potential. There's also layout functionality with a lot of potential, but it is missing parameter passing at the moment. The main value, though, is in the simplicity and total integration. When I define a dependency on a class and then remove that class from the code, the error appears even in the template file. This doesn't happen in Thymeleaf.

Final Word

In conclusion, with the Manifold project, we can write fluent text-processing code today without waiting for a future JVM enhancement. String templates help Java developers generate files that aren't part of a web application, which is useful in several cases, e.g., where code generation is needed. Manifold allows us to create JSP-like templates without all of the baggage and generate any arbitrary file we want. With the inclusion of sophisticated options like layout, the sky's the limit. There's a lot more to Manifold, and we will dig deeper into it as we move forward.
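As a postscript to the code-generation point: here is a minimal sketch of a template that emits a Java source file. The file name (JavaClass.java.mtl under resources/templates), params, and rendered values are hypothetical, following the same conventions as HelloTemplate above:

<%@ params(String packageName, String className) %>
package ${packageName};

public class ${className} {
}

Rendering it from Java would then look like:

Java
System.out.println(JavaClass.render("com.example.generated", "MyDto"));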

By Shai Almog CORE
How To Create a GraalVM Docker Image

In this post, you will learn how to create a Docker image for your GraalVM native image. By means of some hands-on experiments, you will learn that it is a bit trickier than what you are used to when creating Docker images. Enjoy!

Introduction

In a previous post, you learned how to create a GraalVM native image for a Spring Boot 3 application. Nowadays, applications are often distributed as Docker images, so it is interesting to verify how this is done for a GraalVM native image. A GraalVM native image does not need a JVM, so can you use a more minimal Docker base image, for example? You will execute some experiments during this blog and learn by doing. The sources used in this blog are available on GitHub. The information provided in the GraalVM documentation is a good starting point and useful reference material when reading this blog.

As an example application, you will use the Spring Boot application from the previous post. The application contains one basic RestController, which just returns a hello message. The RestController also includes some code in order to execute tests in combination with Reflection, but that part was added for the previous post.

Java
@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello() {
        // Builds the "Hello GraalVM!" message via Reflection
        String helloMessage = "Default message";
        try {
            Class<?> helloClass = Class.forName("com.mydeveloperplanet.mygraalvmplanet.Hello");
            Method helloSetMessageMethod = helloClass.getMethod("setMessage", String.class);
            Method helloGetMessageMethod = helloClass.getMethod("getMessage");
            Object helloInstance = helloClass.getConstructor().newInstance();
            helloSetMessageMethod.invoke(helloInstance, "Hello GraalVM!");
            helloMessage = (String) helloGetMessageMethod.invoke(helloInstance);
        } catch (ClassNotFoundException | InvocationTargetException | InstantiationException
                 | IllegalAccessException | NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
        return helloMessage;
    }
}

Build the application:

Shell
$ mvn clean verify

Run the application from the root of the repository:

Shell
$ java -jar target/mygraalvmplanet-0.0.1-SNAPSHOT.jar

Test the endpoint:

Shell
$ curl http://localhost:8080/hello
Hello GraalVM!

You are now ready to Dockerize this application!

Prerequisites

Prerequisites for this blog are:

- Basic Linux knowledge (Ubuntu 22.04 is used during this post)
- Basic Java and Spring Boot knowledge
- Basic GraalVM knowledge
- Basic Docker knowledge
- Basic SDKMAN knowledge

Create Docker Image for Spring Boot Application

In this section, you will create a Dockerfile for the Spring Boot application. This is a very basic Dockerfile, not to be used for production code. See the previous posts "Docker Best Practices" and "Spring Boot Docker Best Practices" for tips and tricks for production-ready Docker images. The Dockerfile you will be using is the following:

Dockerfile
FROM eclipse-temurin:17.0.5_8-jre-alpine
COPY target/mygraalvmplanet-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]

You use a Docker base image containing a Java JRE, copy the JAR file into the image, and, in the end, run the JAR file. Build the Docker image:

Shell
$ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT

Verify the size of the image. It is 188MB.
Shell
$ docker images
REPOSITORY                          TAG              IMAGE ID       CREATED          SIZE
mydeveloperplanet/mygraalvmplanet   0.0.1-SNAPSHOT   be12e1deda89   33 seconds ago   188MB

Run the Docker image:

Shell
$ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT
...
2023-02-26T09:20:48.033Z  INFO 1 --- [           main] c.m.m.MyGraalVmPlanetApplication         : Started MyGraalVmPlanetApplication in 2.389 seconds (process running for 2.981)

As you can see, the application started in about 2.4 seconds. Test the endpoint again. First, find the IP address of your Docker container. In the output below, the IP address is 172.17.0.2, but it will probably be something else on your machine.

Shell
$ docker inspect mygraalvmplanet | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",

Invoke the endpoint with that IP address and verify that it works:

Shell
$ curl http://172.17.0.2:8080/hello
Hello GraalVM!

In order to continue, stop the container, remove it, and also remove the image. Do this after each experiment; this way, you can be sure that you start from a clean situation each time.

Shell
$ docker rm mygraalvmplanet
$ docker rmi mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT

Create Docker Image for GraalVM Native Image

Let's do the same for the GraalVM native image. First, switch to using GraalVM:

Shell
$ sdk use java 22.3.r17-nik

Create the native image:

Shell
$ mvn -Pnative native:compile

Create a similar Dockerfile (Dockerfile-native-image). This time, you use an Alpine Docker base image without a JVM. You do not need a JVM for running a GraalVM native image, as it is an executable and not a JAR file.

Dockerfile
FROM alpine:3.17.1
COPY target/mygraalvmplanet mygraalvmplanet
ENTRYPOINT ["/mygraalvmplanet"]

Build the Docker image, this time with an extra --file argument because the file name deviates from the default:

Shell
$ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT --file Dockerfile-native-image

Verify the size of the Docker image. It is now only 76.5MB instead of the 188MB from earlier.

Shell
$ docker images
REPOSITORY                          TAG              IMAGE ID       CREATED          SIZE
mydeveloperplanet/mygraalvmplanet   0.0.1-SNAPSHOT   4f7c5c6a9b29   25 seconds ago   76.5MB

Run the container and note that it does not start correctly:

Shell
$ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT
exec /mygraalvmplanet: no such file or directory

What is wrong here? Why does this not work? It is a vague error, but the cause is that the Alpine Linux Docker image uses musl as its standard C library, whereas the native image was compiled on an Ubuntu Linux distro, which uses glibc. Let's change the Docker base image to Ubuntu. The Dockerfile is Dockerfile-native-image-ubuntu:

Dockerfile
FROM ubuntu:jammy
COPY target/mygraalvmplanet mygraalvmplanet
ENTRYPOINT ["/mygraalvmplanet"]

Build the Docker image:

Shell
$ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT --file Dockerfile-native-image-ubuntu

Verify the size of the Docker image: it is now 147MB.

Shell
$ docker images
REPOSITORY                          TAG              IMAGE ID       CREATED       SIZE
mydeveloperplanet/mygraalvmplanet   0.0.1-SNAPSHOT   1fa90b1bfc54   3 hours ago   147MB

Run the container, and it starts successfully in less than 200ms:

Shell
$ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT
...
2023-02-26T12:48:26.140Z  INFO 1 --- [           main] c.m.m.MyGraalVmPlanetApplication         : Started MyGraalVmPlanetApplication in 0.131 seconds (process running for 0.197)

Create Docker Image Based on Distroless Image

The size of the Docker image built with the Ubuntu base image is 147MB. But the Ubuntu image contains a lot of tooling that is not needed. Can we reduce the size of the image by using a distroless image, which is very small in size? Create a Dockerfile named Dockerfile-native-image-distroless and use a distroless base image:

Dockerfile
FROM gcr.io/distroless/base
COPY target/mygraalvmplanet mygraalvmplanet
ENTRYPOINT ["/mygraalvmplanet"]

Build the Docker image:

Shell
$ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT --file Dockerfile-native-image-distroless

Verify the size of the Docker image: it is now 89.9MB.

Shell
$ docker images
REPOSITORY                          TAG              IMAGE ID       CREATED         SIZE
mydeveloperplanet/mygraalvmplanet   0.0.1-SNAPSHOT   6fd4d44fb622   9 seconds ago   89.9MB

Run the container and see that it fails to start. Several necessary libraries are not present in the distroless image:

Shell
$ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT
/mygraalvmplanet: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory

When Googling this error message, you will find threads that suggest copying the required libraries from other images (e.g., the Ubuntu image), but you will just run into one error after another. This is a difficult and time-consuming path to follow. See, for example, this thread. A solution for using distroless images can be found here.

Create Docker Image Based on Oracle Linux

Another approach for creating Docker images can be found on the GraalVM GitHub page: build the native image in a Docker container and use a multistage build to create the target image. The Dockerfile being used is copied from there and can be found in the repository as Dockerfile-oracle-linux. Create a new file Dockerfile-native-image-oracle-linux, copy the contents of Dockerfile-oracle-linux into it, and change the following:

- Update the Maven SHA and DOWNLOAD_URL.
- Change L36 in order to compile the native image as you did before: mvn -Pnative native:compile
- Change L44 and L45 in order to copy and use the mygraalvmplanet native image.
The resulting Dockerfile is the following:

Dockerfile
FROM ghcr.io/graalvm/native-image:ol8-java17-22 AS builder

# Install tar and gzip to extract the Maven binaries
RUN microdnf update \
 && microdnf install --nodocs \
    tar \
    gzip \
 && microdnf clean all \
 && rm -rf /var/cache/yum

# Install Maven
# Source:
# 1) https://github.com/carlossg/docker-maven/blob/925e49a1d0986070208e3c06a11c41f8f2cada82/openjdk-17/Dockerfile
# 2) https://maven.apache.org/download.cgi
ARG USER_HOME_DIR="/root"
ARG SHA=1ea149f4e48bc7b34d554aef86f948eca7df4e7874e30caf449f3708e4f8487c71a5e5c072a05f17c60406176ebeeaf56b5f895090c7346f8238e2da06cf6ecd
ARG MAVEN_DOWNLOAD_URL=https://dlcdn.apache.org/maven/maven-3/3.9.0/binaries/apache-maven-3.9.0-bin.tar.gz

RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
 && curl -fsSL -o /tmp/apache-maven.tar.gz ${MAVEN_DOWNLOAD_URL} \
 && echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
 && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
 && rm -f /tmp/apache-maven.tar.gz \
 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"

# Set the working directory to /build
WORKDIR /build

# Copy the source code into the image for building
COPY . /build

# Build
RUN mvn -Pnative native:compile

# The deployment image
FROM docker.io/oraclelinux:8-slim

EXPOSE 8080

# Copy the native executable into the container
COPY --from=builder /build/target/mygraalvmplanet .
ENTRYPOINT ["/mygraalvmplanet"]

Build the Docker image. Relax, this will take quite some time.

Shell
$ docker build . --tag mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT -f Dockerfile-native-image-oracle-linux

This image is 177MB in size.

Shell
$ docker images
REPOSITORY                          TAG              IMAGE ID       CREATED         SIZE
mydeveloperplanet/mygraalvmplanet   0.0.1-SNAPSHOT   57e0fda006f0   9 seconds ago   177MB

Run the container, and it starts in 55ms:

Shell
$ docker run --name mygraalvmplanet mydeveloperplanet/mygraalvmplanet:0.0.1-SNAPSHOT
...
2023-02-26T13:13:50.188Z  INFO 1 --- [           main] c.m.m.MyGraalVmPlanetApplication         : Started MyGraalVmPlanetApplication in 0.055 seconds (process running for 0.061)

So, this works just fine. This is the way to go when creating Docker images for your GraalVM native image:

- Prepare a Docker image based on your target base image;
- Install the necessary tooling (in the case of this application, GraalVM and Maven);
- Use a multistage Docker build in order to create the target image.

Conclusion

Creating a Docker image for your GraalVM native image is possible, but you need to be aware of what you are doing. Using a multistage build is the best option. If you need to shrink the image further by using a distroless image, you will have to add the required libraries to the image yourself.
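A final aside on the earlier Alpine failure: GraalVM can also link a native image statically against musl, which makes an Alpine (or even scratch) base image viable. This is a sketch, not part of the experiments above; it assumes a musl toolchain is available on the build machine, and the flags are passed to the native-maven-plugin as build arguments:

XML
<plugin>
    <groupId>org.graalvm.buildtools</groupId>
    <artifactId>native-maven-plugin</artifactId>
    <configuration>
        <buildArgs>
            <!-- Link statically against musl so the binary can run on Alpine -->
            <buildArg>--static</buildArg>
            <buildArg>--libc=musl</buildArg>
        </buildArgs>
    </configuration>
</plugin>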

By Gunter Rotsaert CORE
Creating Scalable OpenAI GPT Applications in Java

One of the more notable aspects of ChatGPT is its engine, which not only powers the web-based chatbot but can also be integrated into your Java applications. Whether you prefer reading or watching, let's review how to start using the OpenAI GPT engine in your Java projects in a scalable way, by sending prompts to the engine only when necessary.

Budget Journey App

Imagine you want to visit a city and have a specific budget in mind. How should you spend the money and make your trip memorable? This is an excellent question to delegate to the OpenAI engine. Let's help users get the most out of their trips by building a simple Java application called BudgetJourney. The app can suggest multiple points of interest within a city, tailored to fit specific budget constraints.

The architecture of the BudgetJourney app looks as follows:

1. The users open a BudgetJourney web UI that runs on Vaadin.
2. Vaadin connects to a Spring Boot backend when users want to get recommendations for a specific city and budget.
3. Spring Boot connects to a YugabyteDB database instance to check if there are already any suggestions for the requested city and budget. If the data is already in the database, the response is sent back to the user.
4. Otherwise, Spring Boot connects to the OpenAI APIs to get recommendations from the neural network. The response is stored in YugabyteDB for future reference and sent back to the user.

Now, let's see how the app communicates with the OpenAI engine (step 4) and how using the database (step 3) makes the solution scalable and cost-effective.

OpenAI Java Library

The OpenAI engine can be queried via the HTTP API. You need to create an account, get your token (i.e., API key), and use that token while sending requests to one of the OpenAI models. A model, in the context of OpenAI, is a computational construct trained on a large dataset to recognize patterns, make predictions, or perform specific tasks based on input data. Presently, the service supports several models that can understand and generate natural language or code, generate images, or convert audio into text. Our BudgetJourney app uses the GPT-3.5 model, which understands and generates natural language and code. The app asks the model to suggest several points of interest within a city while considering budget constraints. The model then returns the suggestions in a JSON format.

The open-source OpenAI Java library implements the GPT-3.5 HTTP APIs, making it easy to communicate with the service via well-defined Java abstractions. Here's how you get started with the library:

Add the latest OpenAI Java artifact to your pom.xml file:

XML
<dependency>
    <groupId>com.theokanning.openai-gpt3-java</groupId>
    <artifactId>service</artifactId>
    <version>${version}</version>
</dependency>

Create an instance of the OpenAiService class by providing your token and a timeout for requests between the app and the OpenAI engine:

Java
OpenAiService openAiService = new OpenAiService(
    apiKey, Duration.ofSeconds(apiTimeout));

Easy! Next, let's see how you can work with the GPT-3.5 model via the OpenAiService instance.

Sending Prompts to the GPT-3.5 Model

You communicate with the OpenAI models by sending text prompts that tell the model what you expect it to do. The model behaves best when your instructions are clear and include examples.
To build a prompt for the GPT-3.5 model, you use the ChatCompletionRequest API of the OpenAI Java library:

Java
ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest
    .builder()
    .model("gpt-3.5-turbo")
    .temperature(0.8)
    .messages(
        List.of(
            new ChatMessage("system", SYSTEM_TASK_MESSAGE),
            new ChatMessage("user", String.format(
                "I want to visit %s and have a budget of %d dollars", city, budget))))
    .build();

- model("gpt-3.5-turbo") is an optimized version of the GPT-3.5 model.
- temperature(...) controls how much randomness and creativity to expect in the model's response. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more deterministic.
- messages(...) are the actual instructions or prompts to the model. There are "system" messages that instruct the model to behave a certain way, "assistant" messages that store previous responses, and "user" messages that carry user requests.

The SYSTEM_TASK_MESSAGE of the BudgetJourney app looks as follows:

You are an API server that responds in a JSON format. Don't say anything else. Respond only with the JSON. The user will provide you with a city name and available budget. While considering that budget, you must suggest a list of places to visit. Allocate 30% of the budget to restaurants and bars. Allocate another 30% to shows, amusement parks, and other sightseeing. Dedicate the remainder of the budget to shopping. Remember, the user must spend 90-100% of the budget. Respond in a JSON format, including an array named 'places'. Each item of the array is another JSON object that includes 'place_name' as a text, 'place_short_info' as a text, and 'place_visit_cost' as a number. Don't add anything else after you respond with the JSON.

Although wordy and in need of optimization, this system message conveys the desired behavior: suggest multiple points of interest, use up most of the budget, and respond in JSON format, which is essential for the rest of the application. Once you have created the prompt (ChatCompletionRequest), providing both the system and user messages as well as other parameters, you can send it via the OpenAiService instance:

Java
OpenAiService openAiService = ... // created earlier

StringBuilder builder = new StringBuilder();

openAiService.createChatCompletion(chatCompletionRequest)
    .getChoices().forEach(choice -> {
        builder.append(choice.getMessage().getContent());
    });

String jsonResponse = builder.toString();

The jsonResponse object is then further processed by the rest of the application logic, which prepares a list of points of interest and displays them with the help of Vaadin. For example, suppose a user is visiting Tokyo and wants to spend up to $900 in the city.
The model will strictly follow our instructions from the system message and respond with the following JSON:

JSON
{
  "places": [
    {
      "place_name": "Tsukiji Fish Market",
      "place_short_info": "Famous fish market where you can eat fresh sushi",
      "place_visit_cost": 50
    },
    {
      "place_name": "Meiji Shrine",
      "place_short_info": "Beautiful Shinto shrine in the heart of Tokyo",
      "place_visit_cost": 0
    },
    {
      "place_name": "Shibuya Crossing",
      "place_short_info": "Iconic pedestrian crossing with bright lights and giant video screens",
      "place_visit_cost": 0
    },
    {
      "place_name": "Tokyo Skytree",
      "place_short_info": "Tallest tower in the world, offering stunning views of Tokyo",
      "place_visit_cost": 30
    },
    {
      "place_name": "Robot Restaurant",
      "place_short_info": "Unique blend of futuristic robots, dancers, and neon lights",
      "place_visit_cost": 80
    }
    // More places
  ]
}

This JSON is then converted into a list of points of interest and shown to the user.

NOTE: The GPT-3.5 model was trained on data up to September 2021. Therefore, it can't provide 100% accurate and relevant trip recommendations. However, this inaccuracy can be improved with the help of OpenAI plugins that give models access to real-time data. For instance, once the Expedia plugin for OpenAI becomes publicly available as an API, it will let you improve the BudgetJourney app further.

Scaling With a Database

As you can see, it's straightforward to integrate the neural network into your Java applications and communicate with it in a way similar to other third-party APIs. You can also tune the API behavior, such as requesting a desired output format. But this is still a third-party API that charges you for every request: the more prompts you send, and the longer they are, the more you pay. Nothing comes for free.

Plus, it takes time for the model to process your prompts. For instance, it can take 10-30 seconds before the BudgetJourney app receives a complete list of recommendations from OpenAI. This is too slow, especially if different users send similar prompts. To make OpenAI GPT applications scalable, it's worth storing the model responses in a database. That database allows you to:

- Reduce the volume of requests to the OpenAI API and, therefore, the associated costs.
- Serve user requests with low latency by returning previously processed (or preloaded) recommendations from the database.

The BudgetJourney app uses the YugabyteDB database due to its ability to scale globally and store the model responses close to user locations. With the geo-partitioned deployment mode, you can have a single database cluster with data automatically pinned to and served from various geographies with low latency. A custom geo-partitioning column (the region column in the entity below) lets the database decide on a target row location. For instance, suppose the database nodes in Europe already store recommendations for a trip to Miami on a $1,500 budget. If a user from Europe then asks for a Miami trip on that budget, the application can respond within a few milliseconds by getting the recommendations straight from the database nodes in the same geography.
The BudgetJourney app uses the following JPA repository to get recommendations from the YugabyteDB cluster:

Java
@Repository
public interface CityTripRepository extends JpaRepository<CityTrip, Integer> {
    @Query("SELECT pointsOfInterest FROM CityTrip WHERE cityName=?1 and budget=?2 and region=?3")
    String findPointsOfInterest(String cityName, Integer budget, String region);
}

With an Entity class looking as follows:

Java
@Entity
public class CityTrip {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "landmark_generator")
    @SequenceGenerator(name = "landmark_generator", sequenceName = "landmark_sequence", allocationSize = 5)
    int id;

    @NotEmpty
    String cityName;

    @NotNull
    Integer budget;

    @NotEmpty
    @Column(columnDefinition = "text")
    String pointsOfInterest;

    @NotEmpty
    String region;

    // The rest of the logic
}

So, all you need to do is query the database first and fall back on the OpenAI API only when relevant suggestions are not yet available in the database. As your application grows in popularity, more and more local recommendations will be available, making this approach even more cost-effective over time.

Wrapping Up

A ChatGPT web-based chatbot is an excellent way to demonstrate the OpenAI engine's capabilities. Explore the engine's powerful models and start building new types of Java applications. Just make sure you do it in a scalable way!
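As a postscript, the database-first flow described above could look roughly like this at the service layer. This is a sketch: OpenAiClient, suggestPlaces, and the CityTrip setters are hypothetical names (the article does not show them); the repository is the one defined earlier.

Java
@Service
public class TripService {

    private final CityTripRepository repository;
    private final OpenAiClient openAiClient; // hypothetical wrapper around the OpenAiService call shown earlier

    public TripService(CityTripRepository repository, OpenAiClient openAiClient) {
        this.repository = repository;
        this.openAiClient = openAiClient;
    }

    public String pointsOfInterest(String city, Integer budget, String region) {
        // Serve from the database when a previous answer exists (cache-aside)
        String cached = repository.findPointsOfInterest(city, budget, region);
        if (cached != null) {
            return cached;
        }
        // Otherwise ask the model once, store the answer, and reuse it next time
        String fresh = openAiClient.suggestPlaces(city, budget);
        CityTrip trip = new CityTrip();
        trip.setCityName(city);       // setters assumed generated, e.g., via Lombok
        trip.setBudget(budget);
        trip.setRegion(region);
        trip.setPointsOfInterest(fresh);
        repository.save(trip);
        return fresh;
    }
}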

By Denis Magda CORE
Solve the Java Spring Boot Enum

I had a hard time solving the org.hibernate.type.descriptor.java.EnumJavaTypeDescriptor.fromOrdinal(EnumJavaTypeDescriptor.java:76) error that I got while reading a value from the database.

Background

While using the Spring Boot application, I saved some values in the database as an INT that I had to map back to the enum UserRole. The challenge: I used id and name in the enum, but instead of starting the id value from 0, I started it from 1. That is where the trouble began. The actual problem: when populating the Java object from the database, Hibernate did not map the stored value back through the enum's id; it read it as the enum's ordinal, i.e., the constant's position in the declaration. With only two constants (ordinals 0 and 1), a stored value of 2 is used as an index into a two-element array, hence the ArrayIndexOutOfBoundsException below. Even after debugging, I felt clueless. Every blog I read suggested starting the id at 0 so that the underlying exception (shown below) could be avoided.

Java
ERROR 13992 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.ArrayIndexOutOfBoundsException: Index 2 out of bounds for length 2] with root cause
java.lang.ArrayIndexOutOfBoundsException: Index 2 out of bounds for length 2
...

My POJO mappings were:

User POJO

Java
@Entity
@Table(name = "USER")
@Getter
@Setter
@NoArgsConstructor(force = true)
public class User {

    @Id
    @Column(name = "ID")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "ROLE")
    @NotNull(message = "Role is required")
    @JsonAlias("roleId")
    private UserRole roleId;

    @Column(name = "NAME")
    @NotNull(message = "User Name is required")
    private String name;
}

UserRole POJO

Java
public enum UserRole {
    ADMIN(1, "Admin"),
    USER(2, "User");

    private final int id;
    private final String name;

    private UserRole(final int id, final String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public static UserRole valueOf(final Integer id) {
        if (id == null) {
            return null;
        }
        for (UserRole type : UserRole.values()) {
            if (type.id == id) {
                return type;
            }
        }
        return null;
    }
}

Solution

When such a situation arises where you have to map the DB values through your enum's own id, you can use a javax.persistence.Converter. Here's how!

I added the following code to the User POJO:

Java
@Convert(converter = UserRole.UserRoleConverter.class)
@Column(name = "ROLE")
@NotNull(message = "Role is required")
@JsonAlias("roleId")
private UserRole roleId;

And the below code to the UserRole POJO:

Java
@Converter(autoApply = true)
public static class UserRoleConverter implements AttributeConverter<UserRole, Integer> {

    @Override
    public Integer convertToDatabaseColumn(UserRole attribute) {
        if (attribute == null) {
            throw new BadRequestException("Please provide a valid User Role.");
        }
        return attribute.getId();
    }

    @Override
    public UserRole convertToEntityAttribute(Integer dbData) {
        return UserRole.valueOf(dbData);
    }
}

This converter maps the value fetched from the DB to the enum constant by calling the enum's custom valueOf(Integer) method. I was facing this error, and this approach solved the problem for me. I am pretty sure it will solve your problem too. Do let me know in the comments if you find the solution useful. :)
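A postscript: a quick way to convince yourself the mapping is right is a plain unit test of the converter. This is a sketch assuming JUnit 5 on the classpath (not shown in the original article):

Java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class UserRoleConverterTest {

    private final UserRole.UserRoleConverter converter = new UserRole.UserRoleConverter();

    @Test
    void storesIdsNotOrdinals() {
        // ADMIN has ordinal 0 but id 1; the converter must write the id
        assertEquals(Integer.valueOf(1), converter.convertToDatabaseColumn(UserRole.ADMIN));
        assertEquals(Integer.valueOf(2), converter.convertToDatabaseColumn(UserRole.USER));
        // Reading maps the id back to the constant via UserRole.valueOf(Integer)
        assertEquals(UserRole.USER, converter.convertToEntityAttribute(2));
    }
}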

By Ajay Sodhi
Guide to Creating and Containerizing Native Images

Native Image technology is gaining traction among developers whose primary goal is to accelerate the startup time of applications. In this article, we will learn how to turn Java applications into native images and then containerize them for further deployment in the cloud. We will use:

- Spring Boot 3.0, with baked-in support for Native Image, as the framework for our Java application;
- Liberica Native Image Kit (NIK) as a native-image compiler;
- Alpaquita Stream as a base image.

Building Native Images from Spring Boot Apps

Installing Liberica NIK

It is best to use a powerful computer with several gigabytes of RAM to work with native images. Opt for a cloud service such as Amazon's, or a workstation, so as not to overload your laptop. We will be using Linux bash commands further on, because bash is a convenient way of accessing the code remotely. macOS commands are similar. As for Windows, you can use any alternative, for instance, the bash included in the Git package for Windows.

Download Liberica Native Image Kit for your system. Choose a Full version for our purposes. Unpack the tar.gz with:

Shell
tar -xzvf ./bellsoft-liberica.tar.gz

Now, put the compiler on $PATH with:

Shell
GRAALVM_HOME=/home/user/opt/bellsoft-liberica
export PATH=$GRAALVM_HOME/bin:$PATH

Check that Liberica NIK is installed:

Shell
java -version
openjdk version "17.0.5" 2022-10-18 LTS
OpenJDK Runtime Environment GraalVM 22.3.0 (build 17.0.5+8-LTS)
OpenJDK 64-Bit Server VM GraalVM 22.3.0 (build 17.0.5+8-LTS, mixed mode, sharing)

native-image --version
GraalVM 22.3.0 Java 17 CE (Java Version 17.0.5+8-LTS)

If you get the error "java: No such file or directory" on Linux, you installed the binary for Alpine Linux, not Linux. Check the binary carefully.

Creating a Spring Boot Project

The easiest way to create a new Spring Boot project is to generate one with Spring Initializr. Select Java 17, Maven, JAR, and the Spring SNAPSHOT version (3.0.5 at the time of writing this article), then fill in the fields for project metadata. We don't need any dependencies. Add the following code to your main class:

Java
System.out.println("Hello from Native Image!");

Spring has a separate plugin for native compilation, which utilizes multiple context-dependent parameters under the hood. Let's add the required configuration to our pom.xml file:

XML
<profiles>
    <profile>
        <id>native</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.graalvm.buildtools</groupId>
                    <artifactId>native-maven-plugin</artifactId>
                    <executions>
                        <execution>
                            <id>build-native</id>
                            <goals>
                                <goal>compile-no-fork</goal>
                            </goals>
                            <phase>package</phase>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

Let's build the project with the following command:

Shell
./mvnw clean package -Pnative

The resulting native image is in the target directory.

Write a Dockerfile

We need a Dockerfile to generate a Docker image. Put the following file into the application folder:

Dockerfile
FROM bellsoft/alpaquita-linux-base:stream-musl
COPY target/native-image-demo .
CMD ["./native-image-demo"]

Here we:

- Create an image from the Alpaquita Linux base image (the native image doesn't need a JVM to execute);
- Copy the app into the new image;
- Run the program inside the container.

We can also skip the Liberica NIK installation step and build the native image straight in a container, which is useful when the development and deployment architectures are different.
For that purpose, create another folder and put there your application and the following Dockerfile:

Dockerfile
FROM bellsoft/liberica-native-image-kit-container:jdk-17-nik-22.3-stream-musl as builder
WORKDIR /home/myapp
ADD native-image-demo /home/myapp/native-image-demo
RUN cd native-image-demo && ./mvnw clean package -Pnative

FROM bellsoft/alpaquita-linux-base:stream-musl
WORKDIR /home/myapp
COPY --from=builder /home/myapp/native-image-demo/target/native-image-demo .
CMD ["./native-image-demo"]

Here we:

- Specify the base image for Native Image generation;
- Point to the directory where the build will run inside Docker;
- Copy the program into that directory;
- Build a native image;
- Create another image from the Alpaquita Linux base image (the native image doesn't need a JVM to execute);
- Specify the executable directory;
- Copy the app into the new image;
- Run the program inside the container.

Build a Native Image Container

To generate a native image and containerize it, run:

Shell
docker build .

Note that if you use an Apple M1 machine, you may experience trouble building a native image inside a container. Check that the image was created with the following command:

Shell
docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
<none>       <none>    8ebc2a97ef8e   18 seconds ago   45.2MB

Tag the newly created image:

Shell
docker tag 8ebc2a97ef8e nik-example

Now you can run the image with:

Shell
docker run -it --rm 8ebc2a97ef8e
Hello from Native Image!

Conclusion

Containerizing native images is as simple as creating Docker container images of standard Java apps. Much trickier is migrating a Java application to Native Image in the first place. We used a simple program that didn't require any manual configuration, but dynamic Java features (Reflection, JNI, Serialization, etc.) are invisible to GraalVM's static analysis, so you have to make the native-image tool aware of them.
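As a postscript to that last point: the standard way to make the native-image tool aware of reflective access is GraalVM's reachability metadata, e.g., a reflect-config.json under src/main/resources/META-INF/native-image/. This is a sketch; the class name is hypothetical:

JSON
[
  {
    "name": "com.example.demo.Hello",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true
  }
]

GraalVM can even record this for you: running the application on a regular JVM with -agentlib:native-image-agent=config-output-dir=<dir> writes the configuration files based on the reflection actually exercised at runtime.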

By Dmitry Chuyko
Solving Java Multithreading Challenges in My Google Photos Clone [Video]

We want to turn our Google Photos clone from single-threaded to multithreaded, to generate thumbnails much faster than before, but there are a couple of challenges along the way. We'll learn that using Java's parallel file streams seems to be a buggy endeavor, so we'll fall back on good, old ExecutorServices. But how many threads should we use to generate thumbnails? How can threads get in conflict with each other? How do we make our program fail-safe against threading issues? Find out in this episode!

What's in the Video

00:00 Intro

We start off with a quick recap. In the previous episodes, we built a tiny application that can take a folder full of images and turn those images into thumbnails, by spawning external ImageMagick processes. We did that sequentially, spawning the next process as soon as a thumbnail conversion process had finished. But we can surely make this faster, and utilize our system resources (CPU/IO) better, by doing the thumbnail conversion multithreaded, spawning multiple ImageMagick processes at the very same time. We'll try to figure out how to do that in this episode.

00:24 Java's Parallel Streams

The first idea would be to use Java's built-in parallel streams feature, as we are reading in the files as a stream anyway. Interestingly enough, the API lets you do this just fine, and it even works flawlessly on my machine, but as soon as we deploy our application to a different server, it stops working. Why is that? We'll need to do a bit of benchmarking and fumbling around to notice that parallel file streams aren't really supported in JDKs < 19. So, depending on the Java version, you'll get different behavior. Hence, we cannot work with parallel streams for now.

03:32 Java's ExecutorService

Given that parallel streams are not an option, we will resort to using a good old ExecutorService. An ExecutorService lets us define how many threads we want to open and then work off n tasks in parallel. Figuring out the API is not that difficult, but the real question is: How many threads specifically should we open up simultaneously? We'll cover that question in detail during this segment.

06:12 Performance Benchmarking

After having implemented multithreading, we also need to benchmark our changes. Will we get a 2x/3x speed improvement? Or maybe even a speed reduction? During this segment, we'll run and time our application locally, as well as on my NAS, and see how different hardware configurations might affect the final result.

08:10 File Storage and Hashing

Last but not least, we'll have to figure out how to store our thumbnails. So far, we created thumbnails with the same filename as the original image and put all the files into the same directory. That doesn't work for a huge number of files, with potential file clashes and multithreading conflicts. Hence, we will start hashing our files with the BLAKE3 algorithm, store the files under that hash, and also use a directory layout similar to what Git uses internally to store its files.

16:52 Up Next

We did a ton of multithreading work in this episode. Up next, it is time to add a database to our application and store the information about all converted thumbnails there. Stay tuned!
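As a rough illustration of the ExecutorService approach discussed at 03:32, here is a sketch. The names are hypothetical, and the real episode code spawns external ImageMagick processes where this placeholder method sits:

Java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

public class Thumbnails {

    public static void main(String[] args) throws Exception {
        // One thread per core is a sensible starting point for CPU-bound work
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        try (Stream<Path> images = Files.list(Path.of("photos"))) {
            images.forEach(image -> pool.submit(() -> convertToThumbnail(image)));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void convertToThumbnail(Path image) {
        // Placeholder: the episode spawns an external ImageMagick process here
    }
}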

By Marco Behler CORE
Spring Boot vs Eclipse MicroProfile: Resident Set Size (RSS) and Time to First Request (TFR) Comparative

In this article, we're going to compare some essential metrics of web applications using two different Java stacks: Spring Boot and Eclipse MicroProfile. More precisely, we'll implement the same web application in Spring Boot 3.0.2 and Eclipse MicroProfile 4.2, the most recent releases at the time of this writing. Since there are several implementations of Eclipse MicroProfile, we'll be using one of the most famous: Quarkus. At the time of this writing, the most recent Quarkus release is 2.16.2.

This mention is important regarding Eclipse MicroProfile because, as opposed to Spring Boot, which isn't based on any specification and for which, consequently, the question of the implementation doesn't arise, Eclipse MicroProfile has largely been adopted by many editors who provide different implementations, among which Quarkus, WildFly, Open Liberty, and Payara are some of the most prominent.

In this article, we will implement the same web application using two different technologies, Spring Boot and Quarkus, in order to compare two essential metrics: RSS (Resident Set Size) and TFR (Time to First Request).

The Use Case

The use case that we've chosen for the web application is quite standard: a microservice responsible for managing press releases. A press release is an official statement delivered to members of the news media for the purpose of providing information, creating an official statement, or making a public announcement. In our simplified case, a press release consists of a set of data like a unique name describing its subject, an author, and a publisher.

The microservice used to manage press releases is very straightforward. As with any microservice, it exposes a REST API allowing CRUD operations on press releases. All the required layers, like domain, model, entities, DTOs, mapping, persistence, and service, are present as well. Our point here is not to discuss the microservice's structure and modus operandi but to propose a common use case to be implemented in the two similar technologies, Spring Boot and Quarkus, so that we can compare their respective performance through the mentioned metrics.

Resident Set Size (RSS)

RSS is the amount of RAM occupied by a process and consists of the sum of the following JVM spaces:

- Heap space
- Class metadata
- Thread stacks
- Compiled code
- Garbage collection data

RSS is a very accurate metric, and comparing applications based on it is a very reliable way to measure their associated performance and footprint.

Time to First Request (TFR)

There is a common concern about measuring and comparing applications' startup times; however, just reading the startup time from the logs, which is how this is generally done, isn't enough. The time you see in your log file as the application startup time isn't accurate, because it represents the time your application or web server took to start, not the time until your application can receive requests. Application and web servers, or servlet containers, might start in a couple of milliseconds, but this doesn't mean your application can process requests. These platforms often defer work until later in the startup process, and this lazy initialization can give a misleading indication of the TFR. Hence, to determine the TFR accurately, this report uses Clément Escoffier's script time.js, found here in the GitHub repository that illustrates the excellent book Reactive Systems in Java by Clément Escoffier and Ken Finnigan.
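If you'd rather not depend on Node.js, the idea behind time.js can be approximated with a few lines of bash. This is a rough sketch, not the original script; it starts the application and polls until the first successful response:

Shell
#!/bin/bash
# Usage: ./tfr.sh "java -jar target/metrics.jar" "http://localhost:8080/"
start=$(date +%s%N)
$1 &
until curl --silent --fail --output /dev/null "$2"; do
  :
done
end=$(date +%s%N)
echo "$(( (end - start) / 1000000 )) ms"
kill %1  # stop the application we started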
Spring Boot Implementation

To compare the metrics presented above for the two implementations, you need to clone and run the two projects. Here are the steps required to experience the Spring Boot implementation:

Shell
$ git clone https://github.com/nicolasduminil/Comparing-Resident-Size-Set-Between-Spring-Boot-and-Quarkus.git metrics
$ cd metrics
$ git checkout spring-boot
$ mvn package
$ java -jar target/metrics.jar

You start by cloning the Git repository and, once this operation is finished, you go into the project's root directory and do a Maven build. Then you start the Spring Boot application by running the über JAR created by the spring-boot-maven-plugin. Now you can test the application via its exposed Swagger UI interface by going here. Please take a moment to use the "Try it out" feature that Swagger UI offers. The order of operations is as follows:

1. First, use the POST endpoint to create a press release. Please use the editor to modify the JSON payload proposed by default. While doing this, you should leave the field pressReleaseId with a value of "0", as this is the primary key that will be generated by the insert operation. Below, you can see an example of how to customize this payload:

JSON
{
  "pressReleaseId": 0,
  "name": "AWS Lambda",
  "author": "Nicolas DUMINIL",
  "publisher": "ENI"
}

2. Next, a GET /all followed by a GET /id to check that the previous operation has successfully created a press release.
3. A PUT to modify the current press release.
4. A DELETE /id to clean up.

Note: Since the ID is automatically generated by a sequence, as explained, the first record will have the value "1". You can use this value in the GET /id and DELETE /id requests. Notice that the press release name must be unique.

Now, once you have experienced your microservice, let's see its associated RSS. Proceed as follows:

Shell
$ ps aux | grep metrics
nicolas  31598  3.5  1.8 13035944 598940 pts/1 Sl+  19:03  0:21 java -jar target/metrics.jar
nicolas  31771  0.0  0.0   9040   660 pts/2  S+   19:13  0:00 grep --color=auto metrics
$ ps -o pid,rss,command -p 31598
  PID    RSS COMMAND
31598 639380 java -jar target/metrics.jar

Here, we get the PID of our microservice by looking up its name, and once we have it, we display its associated RSS. Notice that the ps -o command above displays the PID, the RSS, and the starting command of the process whose PID is passed as the -p argument. As you can see, the RSS for our process is 624 MB (639380 KB). If you're hesitating about how to calculate this value, you can use the following command:

Shell
$ echo 639380/1024 | bc
624

As for the TFR, all you need to do is run the script time.js, as follows:

Shell
node time.js "java -jar target/metrics.jar" "http://localhost:8080/"
173 ms

To summarize, our Spring Boot microservice has an RSS of 624 MB and a TFR of 173 ms.

Quarkus Implementation

We need to perform the same operations to experience our Quarkus microservice. Here are the required operations:

Shell
$ git checkout quarkus
$ mvn package quarkus:dev

Once our Quarkus microservice has started, you may use the Swagger UI interface here. And if you're too tired to use the graphical interface, you may use the curl scripts provided in the repository (post.sh, get.sh, etc.), as shown below:

Shell
java -jar target/quarkus-app/quarkus-run.jar &
./post.sh
./get.sh
./get-1.sh 1
./update.sh
...
Now, let's see how we do concerning our RSS and TFR:

Shell
$ ps aux | grep quarkus-run
nicolas  24776 20.2  0.6 13808088 205004 pts/3 Sl+  16:27  0:04 java -jar target/quarkus-app/quarkus-run.jar
nicolas  24840  0.0  0.0   9040   728 pts/5  S+   16:28  0:00 grep --color=auto quarkus-run
$ ps -o pid,rss,command -p 24776
  PID    RSS COMMAND
24776 175480 java -jar target/quarkus-app/quarkus-run.jar
$ echo 175480/1024 | bc
168
$ node time.js "java -jar target/quarkus-app/quarkus-run.jar" "http://localhost:8081/q/swagger-ui"
121 ms

As you can see, our Quarkus microservice has an RSS of 168MB, i.e., almost 500MB less than the 624MB measured for Spring Boot. The TFR is also somewhat lower (121ms vs. 173ms).

Conclusion

Our exercise compared the RSS and TFR metrics for the two microservices executed on the HotSpot JVM (Oracle JDK 17). Both Spring Boot and Quarkus support compilation into native executables through GraalVM. It would have been interesting to compare these same metrics for native replicas of the two microservices; we didn't do that here because Spring Boot relies heavily on Java introspection and, consequently, it's significantly more difficult to generate native Spring Boot microservices than Quarkus ones. But stay tuned; it will come soon. The source code may be found here. The Git repository has a master branch and two dedicated branches, labeled spring-boot and quarkus. Enjoy!
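A postscript for readers who want to try the native comparison before that follow-up article: below are the usual build commands for each stack. This is a sketch; both assume a local GraalVM installation, and neither was part of the measurements above.

Shell
# Quarkus: the generated project ships with a native profile
$ ./mvnw package -Dnative

# Spring Boot 3: via the native profile of the native-maven-plugin
$ mvn -Pnative native:compile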

By Nicolas Duminil CORE

Top Java Experts


Nicolas Fränkel

Head of Developer Advocacy,
Api7

Developer Advocate with 15+ years of experience consulting for many different customers in a wide range of contexts (such as telecoms, banking, insurance, large retail, and the public sector). Usually working on Java/Java EE and Spring technologies, but with focused interests like Rich Internet Applications, Testing, CI/CD, and DevOps. Also doubles as a trainer and triples as a book author.

Shai Almog

OSS Hacker, Developer Advocate and Entrepreneur,
Codename One

Software developer with ~30 years of professional experience in a multitude of platforms/languages. JavaOne rockstar/highly rated speaker, author, blogger and open source hacker. Shai has extensive experience in the full stack of backend, desktop and mobile. This includes going all the way into the internals of VM implementation, debuggers etc. Shai started working with Java in 96 (the first public beta) and later on moved to VM porting/authoring/internals and development tools. Shai is the co-founder of Codename One, an Open Source project allowing Java developers to build native applications for all mobile platforms in Java. He's the coauthor of the open source LWUIT project from Sun Microsystems and has developed/worked on countless other projects both open source and closed source. Shai is also a developer advocate at Lightrun.

Marco Behler

Hi, I'm Marco. Say hello, I'd like to get in touch! twitter: @MarcoBehler

Ram Lakshmanan

yCrash - Chief Architect

Want to become a Java performance expert? Attend my master class: https://ycrash.io/java-performance-training

The Latest Java Topics

Microservices With Apache Camel and Quarkus (Part 2)
Take a look at a scenario to deploy and run locally the simplified money transfer application presented in part 1 as Quarkus standalone services.
June 3, 2023
by Nicolas Duminil CORE
· 1,689 Views · 1 Like
Structured Logging
This post introduces Structured Logging and the rationale behind its use. Some simple examples are provided to reinforce understanding.
June 2, 2023
by Karthik Viswanathan
· 2,482 Views · 1 Like
Effective Java Collection Framework: Best Practices and Tips
In this blog, we learn how to use the Java Collection Framework effectively, considering factors like utilizing the enhanced for loop, using generics, and avoiding raw types.
June 1, 2023
by Shailendra Bramhvanshi
· 2,494 Views · 2 Likes
article thumbnail
Microservices With Apache Camel and Quarkus
This post proposes a microservices deployment model based on Camel, using a Java development stack, Quarkus as a runtime, and K8s as a cloud-native platform.
May 31, 2023
by Nicolas Duminil CORE
· 4,189 Views · 3 Likes
article thumbnail
How To Approach Java, Databases, and SQL [Video]
Learn how to save thumbnail data to a database, render our pictures on a nice HTML gallery page, and finish the proof of concept for our Google Photos clone.
June 2, 2023
by Marco Behler CORE
· 2,961 Views · 1 Like
article thumbnail
Database Integration Tests With Spring Boot and Testcontainers
In this tutorial, we'll show you how to use Testcontainers for integration testing with Spring Data JPA and a PostgreSQL database.
May 31, 2023
by Andrei Rogalenko
· 2,591 Views · 1 Like
article thumbnail
Operator Overloading in Java
Write expressions like (myBigDecimalMap[ObjectKey] * 5 > 20) in Java... Manifold makes that happen. Expressions like "5 mph * 3 hr" produce a distance!
May 31, 2023
by Shai Almog CORE
· 2,104 Views · 2 Likes
article thumbnail
Reactive Programming
In this article, the reader will learn how to take advantage of Reactive Programming with Java and the Spring Framework.
May 31, 2023
by Elyes Ben Trad
· 2,221 Views · 1 Like
article thumbnail
Transactional Outbox Patterns Step by Step With Spring and Kotlin
A step-by-step implementation guide to the Transactional Outbox pattern for distributed microservices, using Reactive Spring and Kotlin with coroutines.
May 30, 2023
by Alexander Bryksin
· 3,073 Views · 3 Likes
article thumbnail
What Is a Monad? Basic Theory for a Java Developer
Are you a Java developer who wants to know the theory behind Monads? Here you will find a step-by-step tutorial that will help you understand them.
Updated December 8, 2021
by Bartłomiej Żyliński CORE
· 36,608 Views · 22 Likes
article thumbnail
What Is Applicative? Basic Theory for Java Developers
Are you a Java developer who wants to know the theory behind Applicatives? Here you will find a step-by-step tutorial that will help you understand them.
January 20, 2022
by Bartłomiej Żyliński CORE
· 7,987 Views · 8 Likes
article thumbnail
What Is a Functor? Basic Theory for Java Developers
Are you a Java developer who wants to know the theory behind Functors? Here you will find a step-by-step tutorial that will help you understand them.
Updated December 9, 2021
by Bartłomiej Żyliński CORE
· 11,245 Views · 12 Likes
article thumbnail
Effortlessly Streamlining Test-Driven Development and CI Testing for Kafka Developers
Here’s how you can simplify test-driven development and continuous integration with Redpanda, Testcontainers, and Quarkus.
May 25, 2023
by Christina Lin CORE
· 5,514 Views · 6 Likes
article thumbnail
Automating the Migration From JS to TS for the ZK Framework
Migrate the JavaScript codebase for the ZK framework to TypeScript with automated tools like jscodeshift, typescript-eslint, AST Explorer, and the TSDoc parser.
May 26, 2023
by Gordon Hsu
· 4,923 Views · 4 Likes
article thumbnail
Hibernate Get vs. Load
In this article, the reader will learn about the difference between Hibernate's get() and load() methods.
May 22, 2023
by Jay Ponnam
· 2,113 Views · 1 Like
article thumbnail
Extending Java APIs: Add Missing Features Without the Hassle
Do you ever pull your hair out in frustration, asking why something isn't part of the Java API? Thanks to Manifold, you can solve that problem for everyone.
May 24, 2023
by Shai Almog CORE
· 4,167 Views · 5 Likes
article thumbnail
Mocking With the Mockito Framework and Testing REST APIs [Video]
This video tutorial covers the Mockito framework and testing REST APIs/services.
October 30, 2018
by Tarun Telang CORE
· 36,792 Views · 6 Likes
article thumbnail
Migrate Serialized Java Objects with XStream and XMT
Java serialization is a convenient way to store the state of Java objects. However, serialized data has some drawbacks:

  • It is not human-readable.
  • It is Java-specific and cannot be exchanged with other programming languages.
  • It is not migratable if fields of the associated Java class have been changed.

These drawbacks make Java serialization an impractical approach to storing object states in real-world projects. In a product we developed recently, we use XStream to serialize/deserialize Java objects, which solves the first and second problems. The third problem is addressed with XMT, an open source tool we developed to migrate XStream-serialized XMLs. This article introduces the tool with some examples.

The XStream Deserialization Problem When a Class Evolves

Assume a Task class with a prioritized field indicating whether it is a prioritized task:

Java
package example;

public class Task {
    public boolean prioritized;
}

With XStream, we can serialize objects of this class to XML like below:

Java
import com.thoughtworks.xstream.XStream;

public class Test {
    public static void main(String args[]) {
        Task task = new Task();
        task.prioritized = true;
        String xml = new XStream().toXML(task);
        saveXMLToFileOrDatabase(xml);
    }

    private static void saveXMLToFileOrDatabase(String xml) {
        // save XML to file or database here
    }
}

The resulting XML will be:

XML
<example.Task>
  <prioritized>true</prioritized>
</example.Task>

And you can deserialize the XML to get the task object back:

Java
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.DomDriver;

public class Test {
    public static void main(String args[]) {
        String xml = readXMLFromFileOrDatabase();
        Task task = (Task) new XStream(new DomDriver()).fromXML(xml);
    }

    private static String readXMLFromFileOrDatabase() {
        // read XML from file or database here
        return null;
    }
}

Everything is fine. Now we find that a prioritized flag is not enough, so we enhance the Task class to distinguish between high, medium, and low priority:

Java
package example;

public class Task {
    enum Priority {HIGH, MEDIUM, LOW}
    public Priority priority;
}

However, deserialization of previously saved XML is no longer possible, since the new Task class is not compatible with the previous version.
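To make the failure concrete, here is a minimal sketch of the broken round trip (the exception type named in the comment is an assumption on my part and may vary across XStream versions):

Java
package example;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.DomDriver;

public class BrokenDeserialization {
    public static void main(String args[]) {
        // XML produced by the old Task class, before the enum change
        String oldXml = "<example.Task><prioritized>true</prioritized></example.Task>";
        // With the new Task class on the classpath, this line throws; recent
        // XStream versions report an UnknownFieldException, since example.Task
        // no longer declares a field named "prioritized".
        Task task = (Task) new XStream(new DomDriver()).fromXML(oldXml);
    }
}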
How Does XMT Address the Problem

XMT comes to the rescue: it introduces the class VersionedDocument to version serialized XMLs and handle the migration. With XMT, serialization of the task object can be written as:

Java
package example;

import com.pmease.commons.xmt.VersionedDocument;

public class Test {
    public static void main(String args[]) {
        Task task = new Task();
        task.prioritized = true;
        String xml = VersionedDocument.fromBean(task).toXML();
        saveXMLToFileOrDatabase(xml);
    }

    private static void saveXMLToFileOrDatabase(String xml) {
        // save XML to file or database here
    }
}

For the old version of the Task class, the resulting XML will be:

XML
<example.Task version="0">
  <prioritized>true</prioritized>
</example.Task>

Compared with the XML generated previously with XStream, an additional version attribute is added to the root element, indicating the version of the XML. The value is set to "0" unless there are migration methods defined in the class, as we will see below.

When the Task class is evolved to use the enum-based priority field, we add a migration method like the one below:

Java
package example;

import java.util.Stack;

import org.dom4j.Element;

import com.pmease.commons.xmt.VersionedDocument;

public class Task {
    enum Priority {HIGH, MEDIUM, LOW}
    public Priority priority;

    @SuppressWarnings("unused")
    private void migrate1(VersionedDocument dom, Stack<Integer> versions) {
        Element element = dom.getRootElement().element("prioritized");
        element.setName("priority");
        if (element.getText().equals("true"))
            element.setText("HIGH");
        else
            element.setText("LOW");
    }
}

Migration methods need to be declared as private methods with names of the form "migrateXXX", where "XXX" is a number indicating the current version of the class. Here, migrate1 indicates that the current version of the Task class is "1", and the method migrates the XML from version "0" to version "1". The XML to be migrated is passed as a VersionedDocument object, which implements the dom4j Document interface, so you can use dom4j to migrate it to be compatible with the current version of the class. In this migration method, we read back the "prioritized" element of version "0", rename it to "priority", and set the value to "HIGH" if the task was originally prioritized; otherwise, we set the value to "LOW".

With this migration method defined, you can now safely deserialize the task object from XML:

Java
package example;

import com.pmease.commons.xmt.VersionedDocument;

public class Test {
    public static void main(String args[]) {
        String xml = readXMLFromFileOrDatabase();
        Task task = (Task) VersionedDocument.fromXML(xml).toBean();
    }

    private static String readXMLFromFileOrDatabase() {
        // read XML from file or database here
        return null;
    }
}

The deserialization works not only for XML of the old version but also for XML of the new version: at deserialization time, XMT compares the version of the XML (recorded in the version attribute, as mentioned earlier) with the current version of the class (the maximum suffix number of the various migrate methods) and runs the applicable migrate methods one by one. In this case, if an XML of version "0" is read, migrate1 will be called; if an XML of version "1" is read, no migration methods will be called, since it is already up to date.
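As a mental model of that dispatch, here is a purely illustrative sketch; this is not XMT's actual implementation, and the helper class below, along with its assumption that the dom argument's runtime type matches the migrate methods' declared parameter type, is mine:

Java
import java.lang.reflect.Method;
import java.util.Stack;

// Hypothetical sketch of versioned-migration dispatch: starting just above
// the document's recorded version, invoke each migrateN method in turn
// until no higher-numbered method exists on the bean class.
public class MigrationDispatcherSketch {
    public static void runMigrations(Object bean, Object dom, int docVersion) throws Exception {
        Stack<Integer> versions = new Stack<>();
        for (int v = docVersion + 1; ; v++) {
            Method m;
            try {
                m = bean.getClass().getDeclaredMethod(
                        "migrate" + v, dom.getClass(), Stack.class);
            } catch (NoSuchMethodException e) {
                break; // no migrateN for this version: the document is up to date
            }
            m.setAccessible(true); // migration methods are declared private
            m.invoke(bean, dom, versions);
            versions.push(v);
        }
    }
}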
As the class keeps evolving, more migration methods can be added by increasing the suffix number of the latest migration method. For example, let's further enhance our Task class so that the priority field takes a numeric value ranging from "1" to "10". We add another migration method to the Task class to embrace the change:

Java
@SuppressWarnings("unused")
private void migrate2(VersionedDocument dom, Stack<Integer> versions) {
    Element element = dom.getRootElement().element("priority");
    if (element.getText().equals("HIGH"))
        element.setText("10");
    else if (element.getText().equals("MEDIUM"))
        element.setText("5");
    else
        element.setText("1");
}

This method only handles the migration from version "1" to version "2"; we no longer need to care about version "0", since an XML of version "0" will first be migrated to version "1" by migrate1 before this method runs. With this change, you will be able to deserialize the task object from XML of the current version and of any previous version.

This article demonstrates the idea of migrating field changes of classes. XMT can handle many more complicated scenarios, such as migrating data defined in multiple tiers of a class hierarchy, addressing class hierarchy changes, etc. For more information on XMT, please visit http://wiki.pmease.com/display/xmt/Documentation+Home
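To close with a worked trace of the chain, using the classes above: a document saved as <example.Task version="0"><prioritized>true</prioritized></example.Task> first has its prioritized element renamed to priority and set to "HIGH" by migrate1, and then rewritten to "10" by migrate2, after which it maps cleanly onto the current Task class.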
August 13, 2022
by Robin Shen
· 19,112 Views · 1 Like
article thumbnail
Microservices: Quarkus vs Spring Boot
In the era of containers (the "Docker Age"), Java is still on top, but which is better: Spring Boot or Quarkus?
Updated October 13, 2021
by Ualter Junior CORE
· 143,451 Views · 42 Likes
article thumbnail
Microservices Arrived at Your Home
Mixing a few JBoss tools, Apache Camel, OpenShift Enterprise, and a little secret sauce to make your home smarter.
Updated May 11, 2016
by Martin Večeřa
· 20,279 Views · 21 Likes
