Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with the knowledge about frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code that is leveraged in the development process by providing ready-made components. Through the use of frameworks, architectural patterns and structures are created, which help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility, and it is often the default choice for front-end work unless a task calls for a more specialized language. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used to build frameworks, and they can be used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.
In Java Persistence API (JPA) development, the flexibility and dynamism of queries play a pivotal role, especially when dealing with dynamic search interfaces or scenarios where the query structure is known only at runtime. The JPA Criteria Query emerges as a powerful tool for constructing such dynamic queries, allowing developers to define complex search criteria programmatically. One critical aspect of real-world applications, particularly those providing a search interface for specific records, is the implementation of pagination. Pagination, a technique where query results are split into manageable chunks, not only enhances the user experience by presenting results in manageable portions but also contributes to resource optimization on the application side. The following discussion delves into the steps involved in implementing pagination using the JPA Criteria Query, providing a practical understanding of this essential aspect of Java persistence.

Note: This explanation assumes a working knowledge of the JPA Criteria API.

Implementation Steps

Step 1: Fetching Records

Java
public List<Post> filterPosts(Integer offset, Integer size) {
    CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
    CriteriaQuery<Post> criteriaQuery = criteriaBuilder.createQuery(Post.class);
    Root<Post> root = criteriaQuery.from(Post.class);

    // Optional: Add selection criteria/predicates
    // List<Predicate> predicates = new ArrayList<>();
    // predicates.add(criteriaBuilder.equal(root.get("status"), "published"));
    // criteriaQuery.where(predicates.toArray(new Predicate[0]));

    List<Post> postList = entityManager
            .createQuery(criteriaQuery)
            .setFirstResult(offset)
            .setMaxResults(size)
            .getResultList();
    return postList;
}

In this step, we use the CriteriaBuilder and CriteriaQuery to construct a query for the desired entity (Post, in this case). The from method is used to specify the root of the query. If needed, you can add selection criteria or predicates to narrow down the result set. Finally, the setFirstResult and setMaxResults methods are used for pagination, where offset specifies the start position and size specifies the maximum number of results.
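For reference, the article never shows the entity itself. A minimal Post entity might look as follows; the fields are hypothetical (only the entity name Post and the status attribute used in the commented-out predicate appear in the article), so adapt it to your own model. Note also that callers usually compute the offset from a zero-based page index, i.e., offset = page * size.

Java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;

@Entity
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    // Referenced by the commented-out predicate in Step 1
    private String status;

    // Getters and setters omitted for brevity
}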
Step 2: Count All Records

Java
private int totalItemsCount(Predicate finalPredicate) {
    try {
        CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
        CriteriaQuery<Long> criteriaQuery = criteriaBuilder.createQuery(Long.class);
        Root<Post> root = criteriaQuery.from(Post.class);

        // Optional: If joins are involved, you need to specify them here as well
        // Join<Post, Comments> joinComments = root.join("comments");

        return Math.toIntExact(entityManager.createQuery(criteriaQuery
                        .select(criteriaBuilder.count(root))
                        .where(finalPredicate))
                .getSingleResult());
    } catch (Exception e) {
        log.error("Error fetching total count: {}", e.getMessage());
    }
    return 0;
}

In this step, we define a method to count all records that satisfy the criteria. The criteriaBuilder is used to construct a CriteriaQuery of type Long to perform the count. Note that the count query needs its own root (and any joins the filtering predicate relies on). The count query is constructed using the select and where methods, and the result is obtained using getSingleResult. This implementation provides insight into how the JPA Criteria Query can be utilized for efficient pagination; a small sketch combining the two steps follows below. I hope it helped.
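Putting the two steps together, a service method can return the requested page along with the total count, so that a UI can render paging controls. The following is a minimal sketch under the same assumptions as above; the PageResult record and the "match everything" predicate are illustrative additions, not part of the original article.

Java
public record PageResult<T>(List<T> content, int totalItems, int page, int size) {}

public PageResult<Post> findPosts(int page, int size) {
    CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
    // No filtering in this sketch: a conjunction with no restrictions matches all records
    Predicate finalPredicate = criteriaBuilder.conjunction();
    List<Post> content = filterPosts(page * size, size);
    int totalItems = totalItemsCount(finalPredicate);
    return new PageResult<>(content, totalItems, page, size);
}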
On the 19th of September, 2023, Java 21 was released. It is time to take a closer look at the changes since the last LTS release, which is Java 17. In this blog, some of the changes between Java 17 and Java 21 are highlighted, mainly by means of examples. Enjoy!

Introduction

First of all, the short introduction above is not entirely correct, because it mentions Java 21 in one sentence with being an LTS release. An elaborate explanation is given in this blog post by Nicolai Parlog. In short, Java 21 is a set of specifications defining the behaviour of the Java language, the API, the virtual machine, etc. A reference implementation of Java 21 is implemented by OpenJDK. Updates to the reference implementation are made in the OpenJDK repository. After the release, a fork, jdk21u, is created. This jdk21u fork is maintained and will receive updates for a longer time than the regular 6-month cadence. Even with jdk21u, there is no guarantee that fixes will be made over a longer time period. This is where the different vendor implementations make a difference: they build their own JDKs and make them freely available, often with commercial support. So, it is better to say "JDK 21 is a version for which many vendors offer support."

What has changed between Java 17 and Java 21? A complete list of the JEPs (JDK Enhancement Proposals) can be found on the OpenJDK website, where you can read the nitty-gritty details of each JEP. For a complete list of what has changed per release since Java 17, the Oracle release notes give a good overview. In the next sections, some of the changes are explained by example, but it is mainly up to you to experiment with these new features in order to get acquainted with them. Do note that no preview or incubator JEPs are considered here. The sources used in this post are available on GitHub. Check out an earlier blog if you want to know what has changed between Java 11 and Java 17. The last thing to mention in this introduction is the availability of a Java playground, where you can experiment with Java from within your browser.

Prerequisites

Prerequisites for this blog are:

You must have JDK 21 installed;
You need some basic Java knowledge.

JEP444: Virtual Threads

Let's start with the most important new feature in JDK 21: virtual threads. Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. Up till now, threads were implemented as wrappers around operating system (OS) threads. OS threads are costly, and if you send an HTTP request to another server, you block this thread until you have received the answer from the server. The processing part (creating the request and processing the answer) is just a small portion of the entire time the thread is blocked: sending the request and waiting for the answer take up much more time than the processing. A way to circumvent this is to use an asynchronous style; the disadvantage of that approach is the more complex implementation. This is where virtual threads come to the rescue: you are able to keep the implementation as simple as before and still get the scalability of the asynchronous style. The Java application PlatformThreads.java demonstrates what happens when creating 1,000, 10,000, 100,000 and 1,000,000 threads concurrently. The threads only wait for one second. Depending on your machine, you will get different results, because the threads are bound to OS threads.
Java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class PlatformThreads {

    public static void main(String[] args) {
        testPlatformThreads(1000);
        testPlatformThreads(10_000);
        testPlatformThreads(100_000);
        testPlatformThreads(1_000_000);
    }

    private static void testPlatformThreads(int maximum) {
        long time = System.currentTimeMillis();
        try (var executor = Executors.newCachedThreadPool()) {
            IntStream.range(0, maximum).forEach(i -> {
                executor.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1));
                    return i;
                });
            });
        }
        time = System.currentTimeMillis() - time;
        System.out.println("Number of threads = " + maximum + ", Duration(ms) = " + time);
    }
}

The output of running this application is the following:

Shell
Number of threads = 1000, Duration(ms) = 1094
Number of threads = 10000, Duration(ms) = 1625
Number of threads = 100000, Duration(ms) = 5292
[21,945s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f8525d00000-0x00007f8525d04000).
#
# A fatal error has been detected by the Java Runtime Environment:
# Native memory allocation (mprotect) failed to protect 16384 bytes for memory to guard stack pages
# An error report file with more information is saved as:
# /home/<user_dir>/MyJava21Planet/hs_err_pid8277.log
[21,945s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f8525c00000-0x00007f8525c04000).
[thread 82370 also had an error]
[thread 82371 also had an error]
[21,946s][warning][os,thread] Failed to start thread "Unknown thread" - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
[21,946s][warning][os,thread] Failed to start the native thread for java.lang.Thread "pool-4-thread-32577"
...

What do you see here? The application takes about 1s for 1,000 threads, 1.6s for 10,000 threads, and 5.3s for 100,000 threads, and it crashes with 1,000,000 threads. The boundary for the maximum number of OS threads on my machine lies somewhere between 100,000 and 1,000,000 threads.

Change the application by replacing Executors.newCachedThreadPool with the new Executors.newVirtualThreadPerTaskExecutor (VirtualThreads.java).

Java
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, maximum).forEach(i -> {
        executor.submit(() -> {
            Thread.sleep(Duration.ofSeconds(1));
            return i;
        });
    });
}

Run the application again. The output is the following:

Shell
Number of threads = 1000, Duration(ms) = 1020
Number of threads = 10000, Duration(ms) = 1056
Number of threads = 100000, Duration(ms) = 1106
Number of threads = 1000000, Duration(ms) = 1806
Number of threads = 10000000, Duration(ms) = 22010

The application takes about 1s for 1,000 threads (similar to the OS threads), 1s for 10,000 threads (better than OS threads), 1.1s for 100,000 threads (also better), and 1.8s for 1,000,000 threads (it does not crash). Even 10,000,000 threads are no problem, taking about 22s to execute. This is quite amazing, isn't it?

JEP431: Sequenced Collections

Sequenced collections address the lack of a collection type that represents a sequence of elements with a defined encounter order, as well as the absence of a uniform set of operations that apply to such collections. There have been quite a few complaints from the community about this topic, and it is now solved by the introduction of some new collection interfaces. An overview is shown in the following image, which is based on the overview created by Stuart Marks. Besides the newly introduced interfaces, some unmodifiable wrappers are now available.
Java
Collections.unmodifiableSequencedCollection(sequencedCollection)
Collections.unmodifiableSequencedSet(sequencedSet)
Collections.unmodifiableSequencedMap(sequencedMap)

The next sections will show these new interfaces based on the application SequencedCollections.java.

SequencedCollection

A sequenced collection is a Collection whose elements have a defined encounter order. The new interface SequencedCollection is:

Java
interface SequencedCollection<E> extends Collection<E> {
    // new method
    SequencedCollection<E> reversed();
    // methods promoted from Deque
    void addFirst(E);
    void addLast(E);
    E getFirst();
    E getLast();
    E removeFirst();
    E removeLast();
}

In the following example, a list is created and reversed. The first and last items are retrieved, and a new first and last item are added.

Java
private static void sequencedCollection() {
    List<String> sc = Stream.of("Alpha", "Bravo", "Charlie", "Delta")
            .collect(Collectors.toCollection(ArrayList::new));
    System.out.println("Initial list: " + sc);
    System.out.println("Reversed list: " + sc.reversed());
    System.out.println("First item: " + sc.getFirst());
    System.out.println("Last item: " + sc.getLast());
    sc.addFirst("Before Alpha");
    sc.addLast("After Delta");
    System.out.println("Added new first and last item: " + sc);
}

The output is:

Shell
Initial list: [Alpha, Bravo, Charlie, Delta]
Reversed list: [Delta, Charlie, Bravo, Alpha]
First item: Alpha
Last item: Delta
Added new first and last item: [Before Alpha, Alpha, Bravo, Charlie, Delta, After Delta]

As you can see, there are no real surprises here; it just works.

SequencedSet

A sequenced set is a Set that is a SequencedCollection containing no duplicate elements. The new interface is:

Java
interface SequencedSet<E> extends Set<E>, SequencedCollection<E> {
    SequencedSet<E> reversed(); // covariant override
}

In the following example, a SortedSet is created and reversed. The first and last items are retrieved, and an attempt is made to add a new first and last item.

Java
private static void sequencedSet() {
    SortedSet<String> sortedSet = new TreeSet<>(Set.of("Charlie", "Alpha", "Delta", "Bravo"));
    System.out.println("Initial list: " + sortedSet);
    System.out.println("Reversed list: " + sortedSet.reversed());
    System.out.println("First item: " + sortedSet.getFirst());
    System.out.println("Last item: " + sortedSet.getLast());
    try {
        sortedSet.addFirst("Before Alpha");
    } catch (UnsupportedOperationException uoe) {
        System.out.println("addFirst is not supported");
    }
    try {
        sortedSet.addLast("After Delta");
    } catch (UnsupportedOperationException uoe) {
        System.out.println("addLast is not supported");
    }
}

The output is:

Shell
Initial list: [Alpha, Bravo, Charlie, Delta]
Reversed list: [Delta, Charlie, Bravo, Alpha]
First item: Alpha
Last item: Delta
addFirst is not supported
addLast is not supported

The only difference with a SequencedCollection is that the elements are sorted alphabetically in the initial list, and that the addFirst and addLast methods are not supported. This is obvious, because you cannot guarantee that the first element will remain the first element when it is added to the set (it will be sorted again anyway).

SequencedMap

A sequenced map is a Map whose entries have a defined encounter order.
The new interface is:

Java
interface SequencedMap<K,V> extends Map<K,V> {
    // new methods
    SequencedMap<K,V> reversed();
    SequencedSet<K> sequencedKeySet();
    SequencedCollection<V> sequencedValues();
    SequencedSet<Entry<K,V>> sequencedEntrySet();
    V putFirst(K, V);
    V putLast(K, V);
    // methods promoted from NavigableMap
    Entry<K, V> firstEntry();
    Entry<K, V> lastEntry();
    Entry<K, V> pollFirstEntry();
    Entry<K, V> pollLastEntry();
}

In the following example, a LinkedHashMap is created, some elements are added, and the map is reversed. The first and last entries are retrieved, and new first and last entries are added.

Java
private static void sequencedMap() {
    LinkedHashMap<Integer, String> hm = new LinkedHashMap<>();
    hm.put(1, "Alpha");
    hm.put(2, "Bravo");
    hm.put(3, "Charlie");
    hm.put(4, "Delta");
    System.out.println("== Initial List ==");
    printMap(hm);
    System.out.println("== Reversed List ==");
    printMap(hm.reversed());
    System.out.println("First item: " + hm.firstEntry());
    System.out.println("Last item: " + hm.lastEntry());
    System.out.println(" == Added new first and last item ==");
    hm.putFirst(5, "Before Alpha");
    hm.putLast(3, "After Delta");
    printMap(hm);
}

The output is:

Shell
== Initial List ==
1 Alpha
2 Bravo
3 Charlie
4 Delta
== Reversed List ==
4 Delta
3 Charlie
2 Bravo
1 Alpha
First item: 1=Alpha
Last item: 4=Delta
== Added new first and last item ==
5 Before Alpha
1 Alpha
2 Bravo
4 Delta
3 After Delta

Also here, no surprises. Note that putLast(3, "After Delta") moves the existing key 3 to the end and replaces its value.

JEP440: Record Patterns

Record patterns enhance the Java programming language with the ability to deconstruct record values. This makes it easier to navigate into the data. Let's see how this works with the application RecordPatterns.java. Assume the following GrapeRecord, which consists of a color and a number of pits.

Java
record GrapeRecord(Color color, Integer nbrOfPits) {}

When you needed to access the number of pits, you first had to test and cast the object to GrapeRecord (here via the instanceof type pattern), and you were able to access the nbrOfPits member using the grape variable.

Java
private static void singleRecordPatternOldStyle() {
    Object o = new GrapeRecord(Color.BLUE, 2);
    if (o instanceof GrapeRecord grape) {
        System.out.println("This grape has " + grape.nbrOfPits() + " pits.");
    }
}

With record patterns, you can add the record members as part of the instanceof check and access them directly.

Java
private static void singleRecordPattern() {
    Object o = new GrapeRecord(Color.BLUE, 2);
    if (o instanceof GrapeRecord(Color color, Integer nbrOfPits)) {
        System.out.println("This grape has " + nbrOfPits + " pits.");
    }
}

Introduce a record SpecialGrapeRecord, which consists of a record GrapeRecord and a boolean.

Java
record SpecialGrapeRecord(GrapeRecord grape, boolean special) {}

You have created a nested record. Record patterns also support nested records, as can be seen in the following example:

Java
private static void nestedRecordPattern() {
    Object o = new SpecialGrapeRecord(new GrapeRecord(Color.BLUE, 2), true);
    if (o instanceof SpecialGrapeRecord(GrapeRecord grape, boolean special)) {
        System.out.println("This grape has " + grape.nbrOfPits() + " pits.");
    }
    if (o instanceof SpecialGrapeRecord(GrapeRecord(Color color, Integer nbrOfPits), boolean special)) {
        System.out.println("This grape has " + nbrOfPits + " pits.");
    }
}

JEP441: Pattern Matching for Switch

Pattern matching for instanceof has been available since Java 16. Pattern matching for switch expressions allows you to test expressions against a number of patterns.
This leads to several new and interesting possibilities, as demonstrated in the application PatternMatchingSwitch.java.

Pattern Matching Switch

When you wanted to verify whether an object is an instance of a particular type, you used to write something like the following:

Java
private static void oldStylePatternMatching(Object obj) {
    if (obj instanceof Integer i) {
        System.out.println("Object is an integer:" + i);
    } else if (obj instanceof String s) {
        System.out.println("Object is a string:" + s);
    } else if (obj instanceof FruitType f) {
        System.out.println("Object is a fruit: " + f);
    } else {
        System.out.println("Object is not recognized");
    }
}

This is quite verbose, and the reason is that you could not test whether a value is of a particular type in a switch expression. With the introduction of pattern matching for switch, you can refactor the code above into the following, less verbose code:

Java
private static void patternMatchingSwitch(Object obj) {
    switch (obj) {
        case Integer i -> System.out.println("Object is an integer:" + i);
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f -> System.out.println("Object is a fruit: " + f);
        default -> System.out.println("Object is not recognized");
    }
}

Switches and Null

When the object argument in the previous example happens to be null, a NullPointerException will be thrown. Therefore, you needed to check for null values before evaluating the switch expression. The following code uses pattern matching for switch, but if obj is null, a NullPointerException is thrown.

Java
private static void oldStyleSwitchNull(Object obj) {
    try {
        switch (obj) {
            case Integer i -> System.out.println("Object is an integer:" + i);
            case String s -> System.out.println("Object is a string:" + s);
            case FruitType f -> System.out.println("Object is a fruit: " + f);
            default -> System.out.println("Object is not recognized");
        }
    } catch (NullPointerException npe) {
        System.out.println("NullPointerException thrown");
    }
}

However, it is now possible to test against null and determine in your switch what to do when the value happens to be null.

Java
private static void switchNull(Object obj) {
    switch (obj) {
        case Integer i -> System.out.println("Object is an integer:" + i);
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f -> System.out.println("Object is a fruit: " + f);
        case null -> System.out.println("Object is null");
        default -> System.out.println("Object is not recognized");
    }
}

Case Refinement

What if you need to add extra checks based on a specific FruitType in the previous example? This would lead to extra if-statements in order to determine what to do.

Java
private static void inefficientCaseRefinement(Object obj) {
    switch (obj) {
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f -> {
            if (f == FruitType.APPLE) {
                System.out.println("Object is an apple");
            }
            if (f == FruitType.AVOCADO) {
                System.out.println("Object is an avocado");
            }
            if (f == FruitType.PEAR) {
                System.out.println("Object is a pear");
            }
            if (f == FruitType.ORANGE) {
                System.out.println("Object is an orange");
            }
        }
        case null -> System.out.println("Object is null");
        default -> System.out.println("Object is not recognized");
    }
}

This type of problem is solved by allowing when-clauses in switch blocks, which add guards to pattern case labels. The case label is called a guarded case label, and the boolean expression is called the guard. The above code becomes the following, much more readable code.
Java
private static void caseRefinement(Object obj) {
    switch (obj) {
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f when (f == FruitType.APPLE) -> System.out.println("Object is an apple");
        case FruitType f when (f == FruitType.AVOCADO) -> System.out.println("Object is an avocado");
        case FruitType f when (f == FruitType.PEAR) -> System.out.println("Object is a pear");
        case FruitType f when (f == FruitType.ORANGE) -> System.out.println("Object is an orange");
        case null -> System.out.println("Object is null");
        default -> System.out.println("Object is not recognized");
    }
}

Enum Constants

Enum types can be used in switch expressions, but formerly the evaluation was limited to the enum constants of one specific type. What if you want to evaluate based on multiple enum constants? Introduce a new enum CarType.

Java
public enum CarType {
    SUV, CABRIO, EV
}

Now that it is possible to use case refinement, you could write something like the following.

Java
private static void inefficientEnumConstants(Object obj) {
    switch (obj) {
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f when (f == FruitType.APPLE) -> System.out.println("Object is an apple");
        case FruitType f when (f == FruitType.AVOCADO) -> System.out.println("Object is an avocado");
        case FruitType f when (f == FruitType.PEAR) -> System.out.println("Object is a pear");
        case FruitType f when (f == FruitType.ORANGE) -> System.out.println("Object is an orange");
        case CarType c when (c == CarType.CABRIO) -> System.out.println("Object is a cabrio");
        case null -> System.out.println("Object is null");
        default -> System.out.println("Object is not recognized");
    }
}

This code would be more readable with a separate case for every enum constant instead of a long list of guarded patterns. That turns the code above into the following, much more readable code.

Java
private static void enumConstants(Object obj) {
    switch (obj) {
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType.APPLE -> System.out.println("Object is an apple");
        case FruitType.AVOCADO -> System.out.println("Object is an avocado");
        case FruitType.PEAR -> System.out.println("Object is a pear");
        case FruitType.ORANGE -> System.out.println("Object is an orange");
        case CarType.CABRIO -> System.out.println("Object is a cabrio");
        case null -> System.out.println("Object is null");
        default -> System.out.println("Object is not recognized");
    }
}

JEP413: Code Snippets

Code snippets simplify the inclusion of example source code in API documentation. Up to now, code snippets were often added by means of the <pre> HTML tag. See the application Snippets.java for the complete source code.

Java
/**
 * this is an example in Java 17
 * <pre>{@code
 * if (success) {
 *     System.out.println("This is a success!");
 * } else {
 *     System.out.println("This is a failure");
 * }
 * }
 * </pre>
 * @param success
 */
public void example1(boolean success) {
    if (success) {
        System.out.println("This is a success!");
    } else {
        System.out.println("This is a failure");
    }
}

Generate the javadoc:

Shell
$ javadoc src/com/mydeveloperplanet/myjava21planet/Snippets.java -d javadoc

In the root of the repository, a directory javadoc is created. Open the index.html file with your favourite browser and click the snippets URL. The above code has the following javadoc.
There are some shortcomings to this approach:

no source code validation;
no way to add comments, because the fragment is already located in a comment block;
no code syntax highlighting;
etc.

Inline Snippets

In order to overcome these shortcomings, a new @snippet tag has been introduced. The code above can be rewritten as follows.

Java
/**
 * this is an example for inline snippets
 * {@snippet :
 * if (success) {
 *     System.out.println("This is a success!");
 * } else {
 *     System.out.println("This is a failure");
 * }
 * }
 *
 * @param success
 */
public void example2(boolean success) {
    if (success) {
        System.out.println("This is a success!");
    } else {
        System.out.println("This is a failure");
    }
}

The generated javadoc is the following. You notice here that the code snippet is visibly marked as source code, and a copy source code icon is added. As an extra test, you can remove a semicolon in the javadoc of methods example1 and example2, introducing a compiler error. In example1, the IDE just accepts this compiler error. However, in example2, the IDE will prompt you about the compiler error.

External Snippets

An interesting feature is the ability to move your code snippets to an external file. Create a directory snippet-files in the package com.mydeveloperplanet.myjava21planet. Create a class SnippetsExternal in this directory and mark the code snippets by means of an @start tag and an @end tag. With the region parameter, you can give a code snippet a name to refer to. The example4 method also contains the @highlight tag, which allows you to highlight certain elements in the code. Many more formatting and highlighting options are available; too many to cover them all here.

Java
public class SnippetsExternal {

    public void example3(boolean success) {
        // @start region=example3
        if (success) {
            System.out.println("This is a success!");
        } else {
            System.out.println("This is a failure");
        }
        // @end
    }

    public void example4(boolean success) {
        // @start region=example4
        if (success) {
            System.out.println("This is a success!"); // @highlight substring="println"
        } else {
            System.out.println("This is a failure");
        }
        // @end
    }
}

In your code, you refer to the SnippetsExternal file and the region you want to include in your javadoc.

Java
/**
 * this is an example for external snippets
 * {@snippet file="SnippetsExternal.java" region="example3" }
 *
 * @param success
 */
public void example3(boolean success) {
    if (success) {
        System.out.println("This is a success!");
    } else {
        System.out.println("This is a failure");
    }
}

/**
 * this is an example for highlighting
 * {@snippet file="SnippetsExternal.java" region="example4" }
 *
 * @param success
 */
public void example4(boolean success) {
    if (success) {
        System.out.println("This is a success!");
    } else {
        System.out.println("This is a failure");
    }
}

When you generate the javadoc as before, you will notice in the output that the javadoc tool cannot find the SnippetsExternal file.

Shell
src/com/mydeveloperplanet/myjava21planet/Snippets.java:48: error: file not found on source path or snippet path: SnippetsExternal.java
 * {@snippet file="SnippetsExternal.java" region="example3" }
   ^
src/com/mydeveloperplanet/myjava21planet/Snippets.java:62: error: file not found on source path or snippet path: SnippetsExternal.java
 * {@snippet file="SnippetsExternal.java" region="example4" }

You need to add the path to the snippet files by means of the --snippet-path argument.
Shell
$ javadoc src/com/mydeveloperplanet/myjava21planet/Snippets.java -d javadoc --snippet-path=./src/com/mydeveloperplanet/myjava21planet/snippet-files

The javadoc for method example3 contains the defined snippet. The javadoc for method example4 contains the highlighted section.

JEP408: Simple Web Server

Simple Web Server is a minimal HTTP server for serving a single directory hierarchy. The goal is to provide a web server for computer science students, for testing or prototyping purposes. Create an httpserver directory in the root of the repository, containing a simple index.html file.

HTML
Welcome to Simple Web Server

You can start the web server programmatically as follows (see SimpleWebServer.java). The path to the directory must be the absolute path of the directory.

Java
private static void startFileServer() {
    var server = SimpleFileServer.createFileServer(
            new InetSocketAddress(8080),
            Path.of("/<absolute path>/MyJava21Planet/httpserver"),
            SimpleFileServer.OutputLevel.VERBOSE);
    server.start();
}

Verify the output.

Shell
$ curl http://localhost:8080
Welcome to Simple Web Server

You can change the contents of the index.html file on the fly, and the server will serve the new contents immediately after a refresh of the page. It is also possible to create a custom HttpHandler in order to intercept the response and change it.

Java
class MyHttpHandler implements com.sun.net.httpserver.HttpHandler {

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        if ("GET".equals(exchange.getRequestMethod())) {
            OutputStream outputStream = exchange.getResponseBody();
            String response = "It works!";
            exchange.sendResponseHeaders(200, response.length());
            outputStream.write(response.getBytes());
            outputStream.flush();
            outputStream.close();
        }
    }
}

Start the web server on a different port and add a context path and the HttpHandler.

Java
private static void customFileServerHandler() {
    try {
        var server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/custom", new MyHttpHandler());
        server.start();
    } catch (IOException ioe) {
        System.out.println("IOException occurred");
    }
}

Run this application and verify the output.

Shell
$ curl http://localhost:8081/custom
It works!

Conclusion

In this blog, you took a quick look at some features added since the last LTS release, Java 17. It is now up to you to start thinking about your migration plan to Java 21, and about a way to learn more about these new features and how you can apply them in your daily coding habits. Tip: IntelliJ will help you with that!
Uploading massive datasets to Amazon S3 can be daunting, especially when dealing with gigabytes of information. However, a solution exists within reach: we can streamline this process by harnessing the streaming capabilities of a Node.js TypeScript application. Streaming enables us to transfer substantial data to AWS S3 with remarkable efficiency, all while conserving memory resources and ensuring scalability. In this article, we develop a Node.js TypeScript application that uploads gigabytes of data to AWS S3 using streaming.

Setting up the Node.js Application

Let's start by setting up a new Node.js project:

Shell
mkdir aws-s3-upload
cd aws-s3-upload
npm init -y

Next, install the necessary dependencies. Note that express, aws-sdk, multer, multer-s3, and uuid are runtime dependencies, while the TypeScript tooling and type definitions are development dependencies:

Shell
npm install express aws-sdk multer multer-s3 uuid
npm install --save-dev typescript ts-node @types/express @types/multer @types/uuid

Configuring AWS SDK and Multer

In this section, we'll configure the AWS SDK to enable communication with Amazon S3. Ensure you have your AWS credentials ready.

JavaScript
import express from 'express';
import { S3 } from 'aws-sdk';
import multer from 'multer';
import multerS3 from 'multer-s3';
import { v4 as uuidv4 } from 'uuid';

const app = express();
const port = 3000;

const s3 = new S3({
  accessKeyId: 'YOUR_AWS_ACCESS_KEY_ID',
  secretAccessKey: 'YOUR_AWS_SECRET_ACCESS_KEY',
  region: 'YOUR_AWS_REGION',
});

We'll also set up Multer to handle file uploads directly to S3. Define the storage configuration and create an upload middleware instance.

JavaScript
const upload = multer({
  storage: multerS3({
    s3,
    bucket: 'YOUR_S3_BUCKET_NAME',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    acl: 'public-read',
    key: (req, file, cb) => {
      cb(null, `uploads/${uuidv4()}_${file.originalname}`);
    },
  }),
});

Creating the File Upload Endpoint

Now, let's create a POST endpoint for handling file uploads, and start the server:

JavaScript
app.post('/upload', upload.single('file'), (req, res) => {
  if (!req.file) {
    return res.status(400).json({ message: 'No file uploaded' });
  }
  const uploadedFile = req.file;
  console.log('File uploaded successfully. S3 URL:', uploadedFile.location);
  res.json({
    message: 'File uploaded successfully',
    url: uploadedFile.location,
  });
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});

Testing the Application

To test the application, you can use tools like Postman or cURL. Ensure you set the Content-Type header to multipart/form-data and include a file in the request body under the field name 'file'.

Choosing Between Database Storage and Cloud Storage

Whether to store files in a database or an S3 bucket depends on your specific use case and requirements. Here's a brief overview:

Database Storage

Data integrity: Ideal for ensuring data integrity and consistency between structured data and associated files, thanks to ACID transactions.
Security: Provides fine-grained access control mechanisms, including role-based access control.
File size: Suitable for small to medium-sized files in terms of performance and storage cost.
Transactional workflows: Useful for applications with complex transactions involving both structured data and files.
Backup and recovery: Facilitates inclusion of files in database backup and recovery processes.

S3 Bucket Storage

Scalability: Perfect for large files and efficient file storage, scaling to gigabytes, terabytes, or petabytes of data.
Performance: Optimized for fast file storage and retrieval, especially for large media files or binary data.
Cost-efficiency: Cost-effective for large volumes of data compared to databases, with competitive pricing.
Simplicity: Offers straightforward file management, versioning, and easy sharing via public or signed URLs.
Use cases: Commonly used for storing static assets, for content delivery, and as a scalable backend for web and mobile file uploads.
Durability and availability: Ensures high data durability and availability, suitable for critical data storage.

Hybrid Approach

In some cases, metadata and references to files are stored in a database, while the actual files are stored in an S3 bucket, combining the strengths of both approaches. The choice should align with your application's needs, considering factors like file size, volume, performance requirements, data integrity, access control, and budget constraints.

Multer vs. Formidable — Choosing the Right File Upload Middleware

When building Node.js applications with Express, choosing a suitable file upload middleware is essential. Let's compare two popular options: Multer and Formidable.

Multer With Express

Express integration: Seamlessly integrates with Express for easy setup and usage.
Abstraction layer: Provides a higher-level abstraction for handling file uploads, reducing boilerplate code.
Middleware chain: Easily fits into Express middleware chains, enabling selective usage on specific routes or endpoints.
File validation: Supports built-in file validation, enhancing security and control over uploaded content.
Multiple file uploads: Handles multiple file uploads within a single request efficiently.
Documentation and community: Benefits from extensive documentation and an active community.
File renaming and storage control: Allows customization of file naming conventions and storage locations.

Formidable With Express

Versatility: Works across various HTTP server environments, not limited to Express, offering flexibility.
Streaming: Capable of processing incoming data streams, ideal for handling huge files efficiently.
Customization: Provides granular control over the parsing process, supporting custom logic.
Minimal dependencies: Keeps your project lightweight with minimal external dependencies.
Widely adopted: A well-established library in the Node.js community.

Choose between Multer and Formidable based on your project's requirements and your familiarity with each library. Multer is excellent for seamless integration with Express, built-in validation, and a straightforward approach. Formidable is preferred when you need more customization, versatility, or streaming capabilities for large files.

Conclusion

In conclusion, this article has demonstrated how to develop a Node.js TypeScript application for efficiently uploading large datasets to Amazon S3 using streaming. Streaming is a memory-efficient and scalable approach, particularly when dealing with gigabytes of data. Following the steps outlined in this guide can enhance your data upload capabilities and help you build more robust applications.
As software applications grow in complexity, the need for efficient concurrency management becomes increasingly important. Traditional threading models can be resource-intensive and difficult to manage, especially when dealing with a large number of threads. This challenge has led to the development of virtual threads, a lightweight alternative that simplifies concurrent programming. In this article, we will explore the concept of virtual threads from a developer-agnostic perspective, discussing their benefits and potential use cases. While our examples will focus on Java 21 and Project Loom, the concepts discussed are applicable to other languages and platforms that support similar lightweight concurrency models.

Understanding Virtual Threads

Virtual threads, also known as fibers or lightweight threads, are a new approach to concurrency that aims to reduce the overhead associated with traditional threads. Unlike traditional threads, which are backed by operating system threads, virtual threads are scheduled and managed by the runtime environment (e.g., the Java Virtual Machine, or JVM). This allows for the creation and management of a large number of virtual threads with minimal overhead, making it easier to write highly concurrent applications.

Benefits of Virtual Threads

Scalability: Virtual threads consume significantly fewer resources than traditional threads, enabling a single runtime instance to handle millions of virtual threads without running out of memory or causing performance degradation.
Simplified Programming Model: Virtual threads eliminate the need for complex synchronization constructs like locks, semaphores, and thread pools, allowing developers to write straightforward, sequential code that is both easier to understand and maintain.
Improved Error Handling: With virtual threads, errors and exceptions can be propagated and caught just like regular exceptions in sequential code, streamlining error handling.
Better Utilization of Hardware Resources: Virtual threads allow the runtime environment to efficiently distribute work across available CPU cores, ensuring optimal use of hardware resources.

Real-World Scenario: Web Server Request Handling

To better understand the benefits of virtual threads, consider a web server that handles incoming HTTP requests. In a traditional threading model, each incoming request would be assigned to a separate thread, which could lead to resource exhaustion and performance issues if the number of requests grows significantly. Using virtual threads, the web server can handle each incoming request with a lightweight virtual thread, dramatically reducing the overhead associated with thread management and allowing the server to handle a much larger number of concurrent requests.
Here's an example using Java 21 and Project Loom:

Java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class WebServer {

    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket socket = serverSocket.accept();
                // Handle each connection on its own virtual thread
                Thread.startVirtualThread(() -> handleRequest(socket));
            }
        }
    }

    private static void handleRequest(Socket socket) {
        try (socket;
             var inputStream = socket.getInputStream();
             var outputStream = socket.getOutputStream()) {
            // Read the request, process it, and generate a response
            String response = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nHello, World!";
            outputStream.write(response.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            // Handle exceptions
            e.printStackTrace();
        }
    }
}

In this example, we create a simple web server that listens on port 8080 and responds with a "Hello, World!" message. For each incoming request, a new virtual thread is created to handle the request, allowing the server to efficiently manage a large number of concurrent connections.

Conclusion

Virtual threads offer a powerful and efficient way to handle concurrency in modern software applications. By allowing the runtime environment to manage scheduling and resource allocation, virtual threads enable developers to write simpler, more scalable code. While our examples focused on Java 21 and Project Loom, the principles behind virtual threads are applicable to other languages and platforms that support lightweight concurrency models. By understanding and leveraging the benefits of virtual threads, developers can create more efficient and responsive applications that are better suited to meet the demands of today's complex software ecosystems.
The general appearance of a piece of writing, a picture, or another medium is designed to appeal to the viewer and to aid in understanding what they are looking at. For instance, Computer Hope has a distinctive layout that is identifiable to its visitors, making it easier for them to move around the website.

What Is an HTML Layout?

An HTML layout is a template for organizing web pages in a specific way. Using HTML tags, it is straightforward to use, understand, and adjust web design elements. A proper HTML layout is essential for any website and will significantly enhance its visual appeal. Because HTML layouts are often responsive by default, pages will also be appropriately formatted on mobile devices.

A page layout determines how a website looks. An HTML layout is a structure that makes it simple for users to move between web pages; it is a method for creating web pages with straightforward HTML tags. The layout of the web pages is the most crucial aspect to consider while developing a website, so that it can look fantastic and professional. For building layouts for responsive and dynamic websites, you can also employ JavaScript- and CSS-based frameworks.

The above image shows a typical layout of an HTML page.

Elements in an HTML Layout

A web page's structure is defined by a variety of HTML elements. Some of them are given below.

Header

The webpage's logo or symbol, the heading element, the introduction, and the author information are all found in the header. Web pages' header sections are made using the <header> element.

<header>: This tag is used to define the header section of an HTML document.

Example

HTML
<header>
  <h1> This is an Html Layout Example !! </h1>
</header>

Navbar

The primary block of navigation links is contained within the navbar. It may have links within that page or to different pages. To create a navbar on a webpage, the <nav> tag is used.

<nav>: Establishes a group of navigation links.

Example

The following is an example of how the <nav> tag is used along with some other HTML tags to create the navbar of a website.

HTML
<nav>
  <ul>
    <li><a href="index.html">Home</a></li>
    <li><a href="about.html">About</a></li>
    <li><a href="contact.html">Contact</a></li>
    <li><a href="#">Other Link</a></li>
    <li><a href="#">Link 2</a></li>
  </ul>
</nav>

Main Section

The main section of the webpage can be divided into multiple sections using elements like <article> and <section>.

<article>: The HTML element <article> denotes a self-contained composition that is meant to be independently distributable or reusable within a document, page, application, or website. Examples include an interactive widget or gadget, a blog entry, a product card, a user-submitted comment, a forum post, or a magazine or newspaper story.

Example

HTML
<article>
  <h2> This is the article section </h2>
  <p> Write your content here </p>
</article>

<section>: The HTML <section> element designates a distinct portion of a website that has similar items grouped together. With very few exceptions, sections should always have a heading. A section might contain text, pictures, tables, videos, etc.

Example

HTML
<section>
  <h2> Introduction to HTML section Element... </h2>
  <p> Lorem ipsum, dolor perspiciatis voluptas deserunt sit amet consectetur adipisicing elit.
  Illum modi eos eveniet facere delectus sint autem perspiciatis voluptas deserunt velit labore, in fugit mollitia culpa quas, alias similique ratione adipisci! </p>
</section>

SideNav

This section of the webpage contains a side navbar that can be used to define other links present on the website, or to define the indexes of the current page. We can create a side navbar in the webpage using the <aside> HTML tag.

<aside>: The HTML element <aside> designates a section of a page whose content is only loosely connected to the document's primary text. Asides are frequently presented as sidebars or call-out boxes.

Example

HTML
<aside>
  <h2>Side Bar Section</h2>
  <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quisquam, quae.</p>
  <ul>
    <li><a href="#introduction">Introduction</a></li>
    <li><a href="#Our-team">Our Team</a></li>
    <li><a href="#">Other Link</a></li>
    <li><a href="">Link 2</a></li>
  </ul>
</aside>

Footer

The footer of an HTML document is specified using the <footer> tag. The information located in this section includes author information, copyright information, carriers, etc. The footer tag is used within the body tag. The <footer> tag is new in HTML 5. Both a start tag and an end tag are necessary for the footer element.

Example

HTML
<footer>
  <p> This is an example of what the footer section of the page would look like..... </p>
  <p> © 2022 abcd </p>
  <p> Author: xyz</p>
  <a href="#navbar"> Back to top </a>
</footer>

HTML Layout Techniques

There are numerous frameworks and ways of generating layouts; however, in this article, we'll focus on basic methods. Multicolumn layouts can be made using the techniques listed below:

CSS float property
CSS flexbox
CSS grid
CSS framework

Besides these, there are also some other methods to create a layout, for example table-based layouts and layouts using only div tags, but using tables to create a layout is not recommended.

1. CSS Float Property

The CSS float property is frequently used to create complete web layouts. Learning float is simple; all you need to do is keep in mind how the float and clear properties operate.

Example

HTML
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>Html Layout based on CSS float property</title>
  <style>
    div.container { width: 100%; border: 1px solid gray; }
    header, footer { padding: 1em; color: rgb(255, 255, 255); background-color: #b4607c; clear: left; text-align: center; }
    nav { float: left; max-width: 160px; margin: 0; padding: 1em; }
    nav ul { list-style-type: none; padding: 0; }
    nav ul a { text-decoration: none; }
    article { margin-left: 170px; border-left: 1px solid gray; padding: 1em; overflow: hidden; }
  </style>
</head>
<body>
  <div class="container">
    <header>
      <h1>Html Layout based on CSS float property</h1>
    </header>
    <nav>
      <ul>
        <li><a href="#">Link1</a></li>
        <li><a href="#">Link 2</a></li>
        <li><a href="#">Link 3</a></li>
      </ul>
    </nav>
    <article>
      <h1> Layout </h1>
      <p> Molestias veniam expedita aliquid alias unde ipsam porro sequi vel, dolor rem esse soluta Lorem ipsum dolor sit amet consectetur adipisicing elit. voluptas eligendi nostrum voluptatem sapiente consectetur adipisicing elit. error aliquid alias unde ipsam fugit eveniet! </p>
      <p> Molestias veniam expedita aliquid alias unde ipsam porro sequi vel, dolor rem esse soluta Lorem ipsum dolor sit amet consectetur adipisicing elit.
      </p>
    </article>
    <footer>Copyright © xyz</footer>
  </div>
</body>
</html>

Output

2. CSS Flexbox

The use of flexbox guarantees that elements behave consistently when the page layout must accommodate various screen sizes and display devices.

Example

HTML
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>Html Layout based on CSS flexbox property</title>
  <style>
    .flex-container { display: -webkit-flex; display: flex; -webkit-flex-flow: row wrap; flex-flow: row wrap; text-align: center; }
    .flex-container > * { padding: 15px; -webkit-flex: 1 100%; flex: 1 100%; }
    .article { text-align: left; }
    header { background: #b4607c; color: white; }
    footer { background: #b4607c; color: white; }
    .nav { background: #eee; }
    .nav ul { list-style-type: none; padding: 0; }
    .nav ul a { text-decoration: none; }
    @media all and (min-width: 768px) {
      .nav { text-align: left; -webkit-flex: 1 auto; flex: 1 auto; -webkit-order: 1; order: 1; }
      .article { -webkit-flex: 5 0px; flex: 5 0px; -webkit-order: 2; order: 2; }
      footer { -webkit-order: 3; order: 3; }
    }
  </style>
</head>
<body>
  <div class="flex-container">
    <header>
      <h1>Html Layout based on CSS flexbox property</h1>
    </header>
    <nav class="nav">
      <ul>
        <li><a href="#">link1</a></li>
        <li><a href="#">Link2</a></li>
        <li><a href="#">Link3</a></li>
      </ul>
    </nav>
    <article class="article">
      <h1>Flexbox</h1>
      <p> Molestias veniam expedita aliquid alias unde ipsam porro sequi vel, dolor rem esse soluta Lorem ipsum dolor sit amet consectetur adipisicing elit. voluptas eligendi nostrum voluptatem sapiente consectetur adipisicing elit. error aliquid alias unde ipsam fugit eveniet! </p>
      <p> ipsum dolor sit amet consectetur adipisicing elit. voluptas eligendi nostrum voluptatem sapiente consectetur adipisicing elit. </p>
      <p><strong>Resize this page and see what happens!</strong></p>
    </article>
    <footer>Copyright © xyz</footer>
  </div>
</body>
</html>

Output

3. CSS Grid

The CSS Grid Layout Module provides a grid-based layout system with rows and columns, making it simpler to design web pages without using floats and positioning.

4. CSS Framework

CSS frameworks make it easy to build websites that work with different browsers and browser versions. This lessens the possibility that errors will emerge during cross-browser testing. Because they come with ready-to-use stylesheets, utilizing these frameworks enables quicker and more efficient web development.

Conclusion

Designing the layout of a webpage is the most crucial part of building it, because the layout is the first thing a user will see on your website. There are several ways to design the layout of a page. We can use CSS-based frameworks like Bootstrap, Material, and Tailwind, and there are also many JavaScript-based frameworks available.
In the ever-evolving landscape of software engineering, the database stands as a cornerstone for storing and managing an organization's critical data. From ancient caves and temples that symbolize the earliest forms of information storage to today's distributed databases, the need to persistently store and retrieve data has been a constant in human history. In modern applications, the significance of a well-managed database is indispensable, especially as we navigate the complexities of cloud-native architectures and application modernization.

Why a Database?

1. State Management in Microservices and Stateless Applications

In the era of microservices and stateless applications, the database plays a pivotal role in housing the state and is crucial for user information and stock management. Despite the move towards stateless designs, certain aspects of an application still require a persistent state, making the database an integral component.

2. Seizing Current Opportunities

The database is not just a storage facility; it encapsulates the current opportunities vital for an organization's success. Whether it's customer data, transaction details, or real-time analytics, the database houses the pulse of the organization's present, providing insights and supporting decision-making processes.

3. Future-Proofing for Opportunities Ahead

As organizations embrace technologies like Artificial Intelligence (AI) and Machine Learning (ML), the database becomes the bedrock for unlocking new opportunities. Future-proofing involves not only storing current data efficiently but also structuring the database to facilitate seamless integration with emerging technologies.

The Challenges of Database Management

Handling a database is not without its challenges. The complexity arises from various factors, including modeling, migration, and the constant evolution of products.

1. Modeling Complexity

The initial modeling phase is crucial and is often conducted when a product is in its infancy or when the organization lacks the maturity to perform optimally. The challenge lies in foreseeing the data requirements and relationships accurately.

2. Migration Complexity

Unlike code refactoring on the application side, database migration introduces complexity that surpasses application migration. The need for structural changes, data transformations, and ensuring data integrity makes database migration a challenging endeavor.

3. Product Evolution

Products evolve, and so do their data requirements. The challenge is to manage the evolving data effectively, ensuring that the database structure remains aligned with the changing needs of the application and the organization.

Polyglot Persistence: Exploring Database Options

In the contemporary software landscape, the concept of polyglot persistence comes into play, allowing organizations to choose databases that best suit their specific scenarios. This approach involves exploring relational databases, NoSQL databases, and NewSQL databases based on the application's unique needs.

Integrating Database and Application: Bridging Paradigms

One of the critical challenges in mastering Java persistence lies in integrating the database with the application. This integration becomes complex due to the mismatch between the programming paradigms of Java and database systems.

Patterns for Integration

Several design patterns aid in smoothing the integration process.
Patterns like Driver, Active Record, Data Mapper, Repository, DAO (Data Access Object), and DTO (Data Transfer Object) provide blueprints for bridging the gap between the Java application and the database (a minimal sketch of the Repository pattern appears at the end of this article).

Data-Oriented vs. Object-Oriented Programming
While Java embraces object-oriented programming principles like inheritance, polymorphism, encapsulation, and types, the database world revolves around normalization, denormalization, and structural considerations. Bridging these paradigms requires a thoughtful approach.

Principles of Data-Oriented Programming
Separating Code (Behavior) From Data: Encourage a clean separation between business logic and data manipulation.
Representing Data With Generic Data Structures: Use generic structures to represent data, ensuring flexibility and adaptability.
Treating Data as Immutable: Embrace immutability to enhance data consistency and reliability.
Separating Data Schema From Data Representation: Decouple the database schema from the application's representation of data to facilitate changes without affecting the entire system.

Principles of Object-Oriented Programming
Expose Behavior and Hide Data: Maintain a clear distinction between the functionality of objects and their underlying data.
Abstraction: Utilize abstraction to simplify complex systems and focus on essential features.
Polymorphism: Leverage polymorphism to create flexible and reusable code.

Conclusion
Mastering Java Persistence requires a holistic understanding of these principles, patterns, and paradigms. The journey involves selecting the proper database technologies and integrating them seamlessly with Java applications while ensuring adaptability to future changes. In this dynamic landscape, success stories, documentation, and a maturity model serve as guiding beacons, aiding developers and organizations in their pursuit of efficient and robust database management for cloud-native applications and modernization initiatives.
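To ground the integration patterns listed above, here is a minimal, hypothetical sketch of one of them, the Repository, written against plain JPA. The Post entity and the EntityManager wiring are illustrative assumptions, and frameworks such as Spring Data generate much of this boilerplate for you; treat it as a sketch of the idea rather than a definitive implementation.

Java
// jakarta.persistence on current stacks; javax.persistence on older ones
import jakarta.persistence.EntityManager;
import java.util.List;
import java.util.Optional;

// A Repository hides persistence plumbing behind a collection-like interface,
// keeping business code free of JPA details.
public class PostRepository {

    private final EntityManager entityManager;

    public PostRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    public Optional<Post> findById(Long id) {
        return Optional.ofNullable(entityManager.find(Post.class, id));
    }

    public List<Post> findAll() {
        return entityManager
                .createQuery("SELECT p FROM Post p", Post.class)
                .getResultList();
    }

    public Post save(Post post) {
        entityManager.persist(post);
        return post;
    }
}

Application code now talks to PostRepository in domain terms while persistence details stay encapsulated, which is exactly the separation of behavior and data that the principles above advocate.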
The Windows Subsystem for Linux (WSL), which unites the Windows and Linux operating systems, has completely changed how users and developers interact with these systems. First released by Microsoft in 2016, WSL offers a compatibility layer for Windows that enables users to run native Linux command-line tools and applications on their Windows systems. This potent feature has eliminated the gap between two historically separate operating systems, opening a whole new world of opportunities for developers and enthusiasts alike. By allowing a full-fledged Linux environment to run directly on Windows, WSL brings the strength, adaptability, and extensive ecosystem of Linux to the Windows operating system. Let us explore the world of WSL, including its advantages, its applications, and how it has merged two different platforms.

What Is WSL?
WSL stands for Windows Subsystem for Linux. It is a compatibility layer created by Microsoft that lets users run native Linux command-line tools and applications directly on the Windows operating system. Without dual booting or virtual machines, WSL enables programmers, system administrators, and users to work seamlessly with Linux-based software in a Windows environment.

There Are Two Major Versions of WSL
WSL 1: In the first version of WSL, Linux binaries interact with a translation layer provided by the WSL core. This translation layer intercepts Linux system calls and translates them into corresponding Windows system calls. WSL 1 does not include a Linux kernel but relies on the Windows kernel for execution. It provides a Linux-compatible environment and allows users to run Linux command-line tools and applications, access the file system, and execute shell scripts.
WSL 2: WSL 2 introduces a significant architectural change by incorporating a lightweight virtual machine (VM). In this version, a full Linux kernel runs within the VM, providing improved compatibility, performance, and support for more Linux-specific features. WSL 2 utilizes the Virtual Machine Platform built into Windows and leverages the Hyper-V hypervisor to run the Linux kernel. It also introduces a more efficient file system protocol (9P) for faster file system access.

WSL offers integration with the Windows environment, allowing users to access and work with files seamlessly between Windows and Linux. It supports various Linux distributions, such as Ubuntu, Debian, and Fedora, which can be installed directly from the Microsoft Store or by importing custom distributions (the example commands below show how). WSL also enables users to install and use Linux package managers, run Linux servers, develop and test cross-platform applications, and perform system administration tasks within the Windows ecosystem. Overall, WSL provides a powerful tool for developers and users who need to work with both Windows and Linux environments, offering a convenient and streamlined experience for running Linux software on Windows machines.

The WSL Architecture
At its core, WSL comprises two distinct versions: WSL 1 and WSL 2. WSL 1 leverages a translation layer that interprets Linux system calls into Windows equivalents, enabling Linux binaries to run on Windows.
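Before digging further into the architecture, it is worth seeing how distributions and versions are managed in practice. The following standard wsl.exe commands cover the everyday tasks just described; they assume a recent Windows 10 or Windows 11 build, and Ubuntu is used purely as an example distribution name.

PowerShell
# List the distributions available in the online catalog
wsl --list --online

# Install a distribution from the catalog
wsl --install -d Ubuntu

# Show installed distributions and the WSL version each one uses
wsl --list --verbose

# Convert an installed distribution between WSL 1 and WSL 2
wsl --set-version Ubuntu 2

# Make WSL 2 the default for future installations
wsl --set-default-version 2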
Where WSL 1 relies on that translation layer, WSL 2 utilizes a lightweight virtual machine (VM) to run a full Linux kernel, offering improved performance and full system call compatibility. The Windows Subsystem for Linux (WSL) architecture consists of several components that enable the integration of Linux functionality within the Windows operating system. Here is an overview of the WSL architecture:

WSL Core: At the heart of WSL is the WSL core, which is responsible for managing the Linux system call interface and, in WSL 1, translating Linux system calls into their Windows equivalents. This component provides the compatibility layer that allows Linux binaries to run on Windows.
WSL Distro: A WSL distribution, such as Ubuntu, Debian, or Fedora, is a package that includes a root file system containing the Linux user-space environment, libraries, and binaries. Each WSL distribution runs on the compatibility layer (WSL 1) or inside the lightweight virtual machine (WSL 2), depending on the version of WSL being used.
WSL 1: In WSL 1, the WSL core operates as a translation layer that intercepts Linux system calls made by Linux binaries and translates them into Windows system calls. It does not include a Linux kernel and relies on the Windows kernel for execution. The file system is accessed through the DrvFs file system driver, which translates between Linux and Windows file systems.
WSL 2: WSL 2 introduces a significant architectural change by utilizing a lightweight VM to run a full Linux kernel. In this version, the WSL core interacts with the Linux kernel directly, resulting in improved compatibility and performance compared to WSL 1. The file system is accessed through the 9P protocol, allowing for faster file system operations.
Virtual Machine Platform: WSL 2 utilizes the Virtual Machine Platform, a lightweight virtualization technology built into Windows. This platform hosts the virtual machine that runs the Linux kernel within WSL 2. It provides isolation between the Linux kernel and the Windows host, enabling better compatibility and performance.
Windows Kernel: The Windows kernel forms the underlying foundation of the WSL architecture. It provides the system services and resources required for WSL to function, including hardware access, process management, and file system operations.
Windows Console: The Windows Console is the terminal interface for interacting with WSL. It provides the command-line interface (CLI) where users can execute Linux commands, run Linux applications, and manage their WSL environment. The Windows Console supports various terminal emulators and can be customized with different shells and tools.

The WSL architecture allows for seamless integration between the Windows and Linux environments, enabling users to leverage Linux tools, utilities, and applications within the Windows operating system. Whether running Linux binaries through the WSL core in WSL 1 or utilizing a lightweight virtual machine with a full Linux kernel in WSL 2, WSL provides a bridge between two traditionally distinct operating systems, expanding the possibilities for developers, system administrators, and users.

WSL Versions and Features
WSL 1 was the initial release, featuring a compatibility layer translating Linux system calls to the Windows kernel. While it provided substantial improvements over traditional compatibility layers, it lacked full Linux kernel functionality.
WSL 2, introduced with the Windows 10 May 2020 Update, brought significant enhancements by employing a lightweight virtualization technology. This update replaced the compatibility layer with a complete Linux kernel, running in a lightweight virtual machine, thus delivering a more authentic Linux environment and improved performance. Benefits The Windows Subsystem for Linux (WSL) offers numerous benefits to users, developers, and system administrators. Here are some of the key advantages of using WSL: Seamless Integration: WSL seamlessly integrates Linux and Windows environments, allowing users to run Linux command-line tools and applications directly on their Windows machines. This eliminates the need for dual booting or running virtual machines, streamlining the development and execution of cross-platform projects. Compatibility and Portability: WSL ensures compatibility between Windows and Linux by providing a Linux-compatible environment within Windows. This enables developers to write, test, and run Linux-specific code or scripts on their Windows machines, ensuring the applications work smoothly across different platforms. It also facilitates easier collaboration among teams with diverse operating system preferences. Access to Linux Ecosystem: WSL provides access to the vast Linux ecosystem, allowing users to install and run various Linux distributions, such as Ubuntu, Debian, and Fedora, directly from the Microsoft Store. This enables users to leverage the extensive range of software, development frameworks, and libraries available in the Linux community, enhancing their development capabilities. Improved Development Environment: WSL enhances the development environment on Windows machines. Developers can utilize familiar Linux tools, utilities, and workflows, such as bash, grep, sed, awk, and package managers like apt and yum, without leaving the Windows ecosystem. This flexibility improves productivity and enables developers to leverage the strengths of both operating systems. Docker Integration: WSL greatly enhances Docker workflows on Windows. With WSL 2, users can run Docker containers natively, resulting in improved performance and eliminating the need for resource-intensive virtualization solutions like Hyper-V. This allows developers to seamlessly work with Dockerized applications, enabling efficient container-based development and deployment. System Administration and Troubleshooting: WSL offers a powerful environment for system administrators and IT professionals. It allows them to leverage Linux-oriented tools, scripts, and utilities to manage and troubleshoot Windows systems effectively. They can perform tasks such as scripting, network diagnostics, and automation using the vast array of Linux tools available within WSL. Learning and Skill Development: WSL serves as an excellent learning tool for individuals seeking to gain familiarity with Linux or enhance their Linux skills. It provides a risk-free environment for experimenting, practicing, and acquiring proficiency in Linux command-line operations, scripting, and administration, all within the comfort of a Windows machine. Performance and Resource Efficiency: With the introduction of WSL 2, which utilizes a lightweight virtual machine, users can experience improved performance and resource efficiency compared to WSL 1. The full Linux kernel running within the virtual machine ensures better system call compatibility and enhanced execution speed for Linux applications. 
Enhanced File System Integration: WSL seamlessly integrates Windows and Linux file systems, enabling easy access to files and directories across both environments. This allows users to work on files using their preferred Windows or Linux tools without the need for complex file sharing or conversion processes. Use Cases Web Development: WSL is widely used by web developers who work with both Windows and Linux stacks. It allows them to seamlessly switch between Windows-based development tools like Visual Studio and Linux-based servers, ensuring consistency and minimizing the need for separate development environments. Developers can run popular web development tools, such as Node.js, Nginx, and Apache, directly within WSL, enabling them to build and test web applications efficiently. System Administration: WSL provides system administrators with a powerful toolset to manage Windows-based systems while leveraging their Linux expertise. Administrators can utilize Linux-specific command-line tools and scripts to perform various system administration tasks, such as network configuration, package management, and troubleshooting. This allows for efficient system management and automation within the Windows environment. Data Science and Machine Learning: WSL has gained popularity among data scientists and researchers in the field of machine learning. It allows them to run Linux-based frameworks, such as TensorFlow, PyTorch, and scikit-learn, seamlessly on their Windows machines. Data scientists can leverage the computational power of their Windows hardware while accessing the rich ecosystem of data science libraries and tools available in Linux. WSL enables them to develop and experiment with machine learning models, process large datasets, and perform data analysis tasks efficiently. DevOps and Continuous Integration/Continuous Deployment (CI/CD): WSL is a valuable asset for DevOps teams working with mixed operating system environments. It enables seamless integration and collaboration between developers using different platforms. With WSL, developers can test and deploy applications built for Linux environments directly on their Windows machines, ensuring compatibility and reducing deployment issues. WSL can be combined with popular CI/CD tools like Jenkins, GitLab CI/CD, or Azure DevOps, enabling efficient build and deployment pipelines for cross-platform applications. Education and Learning: WSL serves as an excellent learning tool for individuals interested in exploring Linux and acquiring Linux skills. It provides a safe and accessible environment for students, enthusiasts, and newcomers to practice and experiment with Linux command-line operations, scripting, and system administration. WSL’s integration within Windows simplifies the learning process by eliminating the need for separate hardware or virtual machines. Cross-Platform Development: WSL enables developers to create cross-platform applications with ease. By utilizing the Linux environment within WSL, developers can ensure their applications work seamlessly on both Windows and Linux operating systems. They can test and debug their code in the Linux environment, ensuring compatibility before deployment. Software Testing: WSL offers a convenient platform for software testing. Testers can utilize WSL to run automated tests, perform compatibility testing, and validate software behavior in a Linux environment while working on a Windows machine. 
This allows for efficient testing and debugging without the need for dedicated Linux hardware or virtual machines.
Research and Experimentation: WSL provides researchers and enthusiasts with a flexible platform for experimentation and prototyping. Whether it's exploring new technologies, testing novel software configurations, or conducting academic research, WSL's compatibility and access to the Linux ecosystem enable researchers to work in a familiar and versatile environment.

Limitations and Future Developments
While WSL has brought remarkable Linux integration to Windows, it is essential to acknowledge its limitations: not all Linux applications or graphical user interfaces (GUIs) are fully compatible with WSL, and certain hardware or system-level functionality may not be accessible within the WSL environment. Microsoft is actively addressing these limitations with regular updates, and with planned improvements in GUI application support, file system performance, kernel updates, integration with Windows development tools, and enhanced networking and GPU support, WSL is poised to become an even more powerful and versatile tool for bridging the gap between Windows and Linux environments. Let's explore the current limitations first:

Limitations of WSL
Graphical User Interface (GUI) Limitations: WSL primarily focuses on providing a command-line interface, and running Linux GUI applications directly within WSL is not supported. While there are workarounds available, such as running an X server on Windows, the GUI experience can be limited and may not offer the same level of integration as native applications.
Performance Considerations: While WSL 2 offers improved performance compared to WSL 1, it is still not bare-metal Linux: the virtual machine adds overhead, and file I/O that crosses the Windows/Linux boundary (for example, working under /mnt/c) is markedly slower than I/O on the distribution's native file system (the short experiment after this list illustrates the difference). In scenarios where performance is critical, such as heavy computational workloads or high I/O operations, running native or conventionally virtualized Linux may provide better performance.
Kernel Compatibility: WSL 1 provides a Linux-compatible environment but no Linux kernel, so certain kernel-specific features or behaviors may be unavailable or behave differently. WSL 2 does ship a real kernel, but it is a Microsoft-maintained build, so Linux applications or system components with specific kernel or kernel-module requirements may still not be satisfied within WSL.
Limited Linux Kernel Access: WSL 2 operates as a lightweight virtual machine (VM) with its own Linux kernel, isolated from the host Windows kernel. This isolation can limit direct access to hardware resources or kernel-level functionality that requires direct interaction with the host kernel. It may impact scenarios where deep integration with hardware or specialized kernel features is necessary.
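One way to feel the file system boundary cost mentioned above is a quick, unscientific benchmark from inside a WSL 2 distribution. The paths and sizes below are illustrative assumptions (C:\Temp must exist on the Windows side), and absolute numbers vary by machine, but the gap between the two locations is usually obvious:

Bash
# Write 256 MB inside the Linux root file system (the fast, native path)
time dd if=/dev/zero of="$HOME/testfile" bs=1M count=256

# Write the same amount on the Windows drive via the cross-OS bridge (the slow path)
time dd if=/dev/zero of=/mnt/c/Temp/testfile bs=1M count=256

# Clean up both test files
rm "$HOME/testfile" /mnt/c/Temp/testfile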
Future Developments for WSL
GUI Application Support: Microsoft has acknowledged the demand for running Linux GUI applications within WSL and is actively working on improving the graphical capabilities of WSL. In the future, we can expect better support for running Linux GUI applications directly within the Windows environment, providing a more integrated and seamless experience.
File System Performance Enhancements: Microsoft has identified file system performance as an area for improvement within WSL. Future developments aim to optimize file system operations, reduce latency, and improve overall performance when accessing files between the Windows and Linux environments.
Kernel Updates: Microsoft is committed to keeping WSL up to date with the latest Linux kernel improvements. This includes supporting newer kernel versions, enhancing system call compatibility, and incorporating bug fixes and security patches to ensure a more robust and reliable Linux environment within WSL.
Integration with Windows Development Tools: Microsoft is working on tighter integration between WSL and Windows development tools, such as Visual Studio and Windows Terminal. This integration aims to let developers switch smoothly between Windows and Linux development workflows within a unified environment.
Enhanced Networking and GPU Support: Microsoft is exploring ways to improve networking capabilities within WSL, enabling more advanced networking scenarios and smoother integration with Windows networking features. Additionally, there are plans to improve GPU support within WSL, allowing Linux applications to leverage the full power of dedicated GPUs on Windows machines.

Conclusion
The Windows Subsystem for Linux (WSL) has revolutionized the development landscape by enabling developers to merge the power of Linux with the familiarity of Windows. Without giving up the Windows operating system, users gain a rare chance to learn Linux, access its robust ecosystem, and utilize its tools and utilities. As WSL continues to advance and receive updates, it promises an even more feature-rich and robust experience, bridging the gap between Windows and Linux like never before.

Thanks to its seamless integration, compatibility, and access to Linux tools and libraries, WSL has become an essential tool for programmers, system administrators, researchers, and students alike. It supports a wide range of use cases, including web development, system administration, data science, DevOps, and education, combining the best of the Windows and Linux ecosystems into a hybrid environment in which individuals and teams working across diverse operating systems can collaborate effectively.
Welcome back to the series exploring the synergy between Ballerina and GraalVM. In the previous article, 'Ballerina Code to GraalVM Executable,' we delved into the seamless integration of Ballerina and GraalVM, seeing how Ballerina applications can be built into GraalVM native executables to achieve improved performance and reduced memory consumption. In this continuation, we take the next step in our journey, exploring how to containerize a Ballerina GraalVM executable. If you have not read the previous article, I recommend you do so before continuing with this one. We will use the same Conference Service application to build a Docker image containing the GraalVM executable. The code for this application can be found at the following link: GitHub: TharmiganK/conference-service-ballerina, a RESTful conference service written in Ballerina (github.com).

We will look into the following ways to create the Docker image:
1. Using a custom Dockerfile
2. Using the Ballerina Code to Cloud feature

Using a Custom Dockerfile
As we already know, a GraalVM native executable is platform-dependent. If you are a Linux user, you can build the GraalVM executable locally and copy it into the simplest slim container. If you are using macOS or Windows to build a Docker image containing the GraalVM executable, then you have to build the executable in a Docker container. In this post, I am using macOS, so I need to build the executable in a Docker container that has the GraalVM native-image tool. The GraalVM community already provides container images with the native-image tool; they can be found on the GraalVM container page. Since Ballerina Swan Lake Update 7 works with Java 11, I have chosen this image: ghcr.io/graalvm/native-image:ol8-java11-22.3.3.

Let's start by building the application and obtaining the JAR file.

$ bal build
Compiling source
        tharmigan/conference_service:1.0.0

Generating executable
        target/bin/conference_service.jar

Use the following Dockerfile to build the GraalVM executable in the graalvm/native-image container and run the executable in a debian:stable-slim container.

Dockerfile
FROM ghcr.io/graalvm/native-image:ol8-java11-22.3.3 as build
WORKDIR /app/build
COPY target/bin/conference_service.jar .
RUN native-image -jar conference_service.jar --no-fallback

FROM debian:stable-slim
WORKDIR /home/ballerina
COPY --from=build /app/build/conference_service .
CMD echo "time = $(date +"%Y-%m-%dT%H:%M:%S.%3NZ") level = INFO module = tharmigan/conference_service message = Executing the Ballerina application" && "./conference_service"

Build the Docker image.

$ docker build . -t ktharmi176/conference-service:1.0.0

Use the following Docker Compose file to run the conference_service and the mock country_service in the host network.

YAML
version: '2'
services:
  conference-service:
    image: 'ktharmi176/conference-service:1.0.0'
    ports:
      - '8102:8102'
    volumes:
      - ./Config.toml:/home/ballerina/Config.toml
    depends_on:
      country-service:
        condition: service_started
    network_mode: "host"
  country-service:
    image: 'ktharmi176/country-service:latest'
    hostname: country-service
    container_name: country-service
    ports:
      - '9000:9000'
    network_mode: "host"

Check the image names and run the following command:

$ docker compose up

Now, the two services have been started. Test the service using the request.http file.
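The exact contents of the request.http file live in the repository and are not reproduced here. For orientation only, a smoke test against the service takes roughly this shape; the /conferences path is an illustrative assumption, so consult the file in the repo for the real endpoints:

Plain Text
### Hypothetical smoke test; see request.http in the repository for the real requests
GET http://localhost:8102/conferences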
Using the Ballerina Code to Cloud Feature

Default Mode
The Code to Cloud feature in Ballerina enables developers to quickly deploy their Ballerina applications to cloud platforms without extensive configuration or manual setup. It aims to reduce the complexity of cloud-native development and streamline the deployment process. We can simply run the following command to build the GraalVM executable in a Docker container.

$ bal build --graalvm --cloud=docker
Compiling source
        tharmigan/conference_service:1.0.0

Generating artifacts

Building the native image. This may take a while

[+] Building 331.1s (13/13) FINISHED docker:default
 => [internal] load build definition from Dockerfile 0.0s
 => => transferring dockerfile: 439B 0.0s
 => [internal] load .dockerignore 0.0s
 => => transferring context: 2B 0.0s
 => [internal] load metadata for gcr.io/distroless/base:latest 3.0s
 => [internal] load metadata for ghcr.io/graalvm/native-image:ol8-java11-22.3.3 3.7s
 => [build 1/4] FROM ghcr.io/graalvm/native-image:ol8-java11-22.3.3@sha256:c0b4d9c31013d4fd91c4dec25f8772602e851ee67b8510d21bfdab532da4c17c 0.0s
 => [stage-1 1/3] FROM gcr.io/distroless/base@sha256:73deaaf6a207c1a33850257ba74e0f196bc418636cada9943a03d7abea980d6d 0.0s
 => [internal] load build context 0.4s
 => => transferring context: 42.65MB 0.4s
 => CACHED [stage-1 2/3] WORKDIR /home/ballerina 0.0s
 => CACHED [build 2/4] WORKDIR /app/build 0.0s
 => [build 3/4] COPY conference_service.jar . 0.1s
 => [build 4/4] RUN native-image -jar conference_service.jar -H:Name=conference_service --no-fallback -H:+StaticExecutableWithDynamicLibC 326.3s
 => [stage-1 3/3] COPY --from=build /app/build/conference_service . 0.3s
 => exporting to image 0.3s
 => => exporting layers 0.3s
 => => writing image sha256:4a6e1223a8d5a0446b688b110522bdc796027bfc1bc4fe533c62be649900ee05 0.0s
 => => naming to docker.io/library/conference_service:latest 0.0s

Execute the below command to run the generated Docker image:

$ docker run -d conference_service:latest

The auto-generated Dockerfile can be found at the following path: target/docker/conference_service

Dockerfile
# Auto Generated Dockerfile
FROM ghcr.io/graalvm/native-image:ol8-java11-22.3.3 as build
WORKDIR /app/build
COPY conference_service.jar .
RUN native-image -jar conference_service.jar -H:Name=conference_service --no-fallback -H:+StaticExecutableWithDynamicLibC

FROM gcr.io/distroless/base
WORKDIR /home/ballerina
COPY --from=build /app/build/conference_service .
CMD ["./conference_service"]

Note: By default, Ballerina builds a mostly static native image and packs it in a distroless container. For more information on GraalVM mostly-static images, see Static and Mostly Static Images.

Now, let's run docker compose after changing the image name of the conference_service.

YAML
version: '2'
services:
  conference-service:
    image: 'conference_service:latest'
    ports:
      - '8102:8102'
    volumes:
      - ./Config.toml:/home/ballerina/Config.toml
    depends_on:
      country-service:
        condition: service_started
    network_mode: "host"
  country-service:
    image: 'ktharmi176/country-service:latest'
    hostname: country-service
    container_name: country-service
    ports:
      - '9000:9000'
    network_mode: "host"

Test the service using the request.http file.

Configure Mode
The Code to Cloud feature supports overriding the default mode, where we can configure the following:
- The GraalVM build image
- The native-image build command
- The base image for the deployment
This can be achieved by providing the configurations via the Cloud.toml file.
The following shows an example that provides the same configurations used in the custom Dockerfile.

TOML
[container.image]
name = "conference-service"
repository = "ktharmi176"
tag = "1.0.0"
base = "debian:stable-slim"

[graalvm.builder]
base = "ghcr.io/graalvm/native-image:ol8-java11-22.3.3"
buildCmd = "native-image -jar conference_service.jar --no-fallback"

Run the following command to build the Docker image with the above configurations.

$ bal build --graalvm --cloud=docker
Compiling source
        tharmigan/conference_service:1.0.0

Generating artifacts

Building the native image. This may take a while

[+] Building 310.0s (14/14) FINISHED docker:default
 => [internal] load build definition from Dockerfile 0.0s
 => => transferring dockerfile: 372B 0.0s
 => [internal] load .dockerignore 0.0s
 => => transferring context: 2B 0.0s
 => [internal] load metadata for docker.io/library/debian:stable-slim 3.9s
 => [internal] load metadata for ghcr.io/graalvm/native-image:ol8-java11-22.3.3 2.5s
 => [auth] library/debian:pull token for registry-1.docker.io 0.0s
 => [build 1/4] FROM ghcr.io/graalvm/native-image:ol8-java11-22.3.3@sha256:c0b4d9c31013d4fd91c4dec25f8772602e851ee67b8510d21bfdab532da4 0.0s
 => [stage-1 1/3] FROM docker.io/library/debian:stable-slim@sha256:6fe30b9cb71d604a872557be086c74f95451fecd939d72afe3cffca3d9e60607 0.0s
 => [internal] load build context 0.3s
 => => transferring context: 42.65MB 0.3s
 => CACHED [stage-1 2/3] WORKDIR /home/ballerina 0.0s
 => CACHED [build 2/4] WORKDIR /app/build 0.0s
 => [build 3/4] COPY conference_service.jar . 0.1s
 => [build 4/4] RUN native-image -jar conference_service.jar --no-fallback 305.1s
 => [stage-1 3/3] COPY --from=build /app/build/conference_service . 0.2s
 => exporting to image 0.3s
 => => exporting layers 0.3s
 => => writing image sha256:1f5b5b30653a48a6d27258f785d93a1654dde25d2e70899e14f2b61996e01996 0.0s
 => => naming to docker.io/ktharmi176/conference-service:1.0.0 0.0s

Execute the below command to run the generated Docker image:

$ docker run -d ktharmi176/conference-service:1.0.0

The auto-generated Dockerfile with the above configurations will look like this.

Dockerfile
# Auto Generated Dockerfile
FROM ghcr.io/graalvm/native-image:ol8-java11-22.3.3 as build
WORKDIR /app/build
COPY conference_service.jar .
RUN native-image -jar conference_service.jar --no-fallback

FROM debian:stable-slim
WORKDIR /home/ballerina
COPY --from=build /app/build/conference_service .
CMD ["./conference_service"]

This is the same as the one we wrote manually. Run docker compose and check the functionality using the request.http file.

In conclusion, we have built a GraalVM executable for a Ballerina application and containerized it with Docker. GraalVM and Ballerina, together with the Code to Cloud feature, simplify developing and deploying Ballerina GraalVM native images in the cloud, and they make it easy to adopt cloud-native technologies with GraalVM without in-depth knowledge of the underlying tooling.
At AINIRO.IO we've just created a new release of Magic, where the most important feature is the ability to dynamically compile C# code and load the resulting IL code into the AppDomain, almost turning C# into an "interpreted language" due to an execution model that is more similar to PHP and JavaScript than to traditionally compiled languages. This has a lot of benefits, especially for Business Process Workflows, since it allows you to use C# in a dynamic runtime with dynamic actions that are executed from Hyperlambda, which acts as a high-level execution orchestration runtime. Below is some example code illustrating the idea:

C#
using System;
using magic.node;
using magic.node.extensions;
using magic.signals.contracts;

[Slot(Name = "foo")]
public class Foo : ISlot
{
    public void Signal(ISignaler signaler, Node input)
    {
        input.Value = $"Hello {input.GetEx<string>()}, najs to meet you";
    }
}

The point of the above code, of course, is that it implements the ISlot interface, which allows me to interact with it from Hyperlambda, as illustrated below.

Hyperlambda
foo:Thomas Hansen

The above Hyperlambda will invoke my C# slot, passing in "Thomas Hansen", and the slot will do some simple string concatenation, returning the result to the caller. If you save the above C# code as "/etc/csharp/foo.cs", you can execute the following Hyperlambda code to dynamically compile the file and execute the slot.

Hyperlambda
// Loading file.
io.file.load:/etc/csharp/foo.cs

// Compiling file into an assembly.
system.compile
   references
      .:netstandard
      .:System.Runtime
      .:System.ComponentModel
      .:System.Private.CoreLib
      .:magic.node
      .:magic.node.extensions
      .:magic.signals.contracts
   code:x:@io.file.load
   assembly-name:foo.dll

// Loading assembly as plugin now that we've created it.
system.plugin.load:x:@system.compile

// Invoking dynamically created C# slot.
.name:John Doe
foo:x:@.name

// Unloading plugin.
system.plugin.unload:foo.dll

Notice that the above [system.compile] never saves the assembly but returns it as a byte[]. To save the compiled code, you can use, for instance, [io.file.save.binary]. In the video below, I demonstrate some features related to this and show how you can almost treat C# as if it were a 100% dynamic scripting language, due to the dynamic nature of the process.

This has a lot of advantages, especially for BPW, or Business Process Workflows, where you've got tons of smaller building blocks or composables that you need to orchestrate dynamically, without going through an entire deployment process and its more rigid procedures. It allows you to dynamically orchestrate C# snippets together, where Hyperlambda becomes the orchestration tool, loosely coupling building blocks of C# code that together perform a larger task. And because Hyperlambda lets you build anything from scheduled tasks to HTTP endpoints, this is especially interesting for complex domains where the end state of your system is in constant flux, for instance when integrating with hundreds of separate, frequently changing applications, making statically compiled code sub-optimal. Statically compiled code is amazing, and you should of course prefer it when you can. However, there are problem domains it is fundamentally incompatible with, workflows being one example.
Now, with the ability to compile C# code on the fly in Hyperlambda, this is no longer a problem, and you can use statically compiled C# as much as you wish for such problems, as long as you obey the Hyperlambda contract, the ISlot interface, which allows Hyperlambda to orchestrate your code together.
After laying the groundwork in our previous article on the basics of Unity's coroutines, we're now ready to delve deeper into the mechanics that drive coroutine execution. This article aims to explore two key aspects that make coroutines a powerful tool in Unity: the concept of yielding and the coroutine's relationship with Unity's main game loop.

Yielding is a cornerstone of coroutine functionality, allowing a coroutine to pause its execution and yield control to other routines. This feature enables you to write asynchronous code that can wait for specific conditions to be met, such as time delays or external data, before resuming its execution. We'll explore the different types of yield statements available in Unity, like yield return null and yield return new WaitForSeconds(), and discuss their implications on coroutine behavior.

Moreover, understanding how coroutines fit into Unity's main game loop is crucial for leveraging their full potential. Unlike standard methods that execute all their code at once, coroutines have the ability to pause and resume, interleaving their execution with the main game loop. This allows for more flexible and efficient code, especially in scenarios like animations, AI behaviors, and timed events.

To illustrate these concepts, we'll provide Unity C# code examples that demonstrate how yielding works and how coroutines are executed in relation to the main game loop. By the end of this article, you'll have a deeper understanding of coroutine mechanics, setting the stage for our discussion on practical use cases and advanced coroutine patterns in Unity. So, let's dive in and unravel the intricacies of coroutine execution in Unity.

Yielding Execution
One of the most powerful features of coroutines in Unity is the ability to yield execution. This means that a coroutine can pause its operation, allowing other functions or coroutines to run, and then resume from where it left off. This is particularly useful for breaking up tasks that would otherwise block the main thread, making your game unresponsive.

The concept of yielding is central to how coroutines function. When a coroutine yields, it effectively says, "I have reached a point where I can pause, so go ahead and run other tasks." This is done using a yield return statement in C#, whose returned value specifies the condition under which the coroutine should resume. Here's a simple example that uses yield return null, which means the coroutine will resume on the next frame:

C#
using System.Collections;
using UnityEngine;

public class SimpleYieldExample : MonoBehaviour
{
    IEnumerator Start()
    {
        Debug.Log("Coroutine started: " + Time.time);
        yield return null;
        Debug.Log("Coroutine resumed: " + Time.time);
    }
}

In this example, the coroutine starts and logs the current time. It then yields, allowing other functions and coroutines to execute. On the next frame, it resumes and logs the time again, showing that it paused for approximately one frame.
Different Types of Yield Statements
Unity provides several types of yield statements, each with its own use case:
- yield return null: Pauses the coroutine until the next frame
- yield return new WaitForSeconds(float seconds): Pauses the coroutine for a specified number of seconds
- yield return new WaitForEndOfFrame(): Pauses the coroutine until the end of the frame, after all graphical rendering is done
- yield return new WaitForFixedUpdate(): Pauses the coroutine until the next fixed update (physics) step

Each of these yield statements serves a different purpose and can be crucial for tasks like animations, loading, or other time-sensitive operations. Understanding the concept of yielding and the different types of yield statements available can significantly enhance your ability to write efficient and effective coroutines in Unity. In the next section, we'll explore how these coroutines fit into Unity's main game loop, providing a more holistic understanding of coroutine execution.

Coroutine Execution Flow
Understanding how coroutines operate within Unity's main game loop is crucial for mastering their behavior and capabilities. While it's easy to think of coroutines as separate threads running in parallel, they are actually executed within Unity's main game loop. However, their ability to pause and resume sets them apart and allows for more complex and flexible behavior.

How Coroutines Run in Conjunction With Unity's Main Game Loop
Coroutines in Unity are not separate threads but are instead managed by Unity's main game loop. When a coroutine yields, it essentially steps out of the game loop temporarily, allowing other game processes to take place. It then re-enters the loop either on the next frame or after a specified condition is met. Here's a simplified example to demonstrate this:

C#
using System.Collections;
using UnityEngine;

public class CoroutineFlowExample : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(MyCoroutine());
    }

    IEnumerator MyCoroutine()
    {
        Debug.Log("Coroutine started at frame: " + Time.frameCount);
        yield return null;
        Debug.Log("Coroutine resumed at frame: " + Time.frameCount);
    }
}

In this example, the coroutine starts and logs the current frame count. It then yields, stepping out of the game loop. On the next frame, it resumes and logs the frame count again. You'll notice that the frame count will have incremented, indicating that the game loop continued while the coroutine was paused.

An Illustration of the Flow of Execution in Coroutines
To further illustrate how a coroutine's execution is interleaved with the main game loop, consider the following pseudo-code that represents a simplified Unity game loop:

Plain Text
Game Loop:
1. Update Physics
2. Run Coroutines
3. Render Frame
4. Repeat

Now, let's say we have a coroutine that performs some logic, waits for 2 seconds, and then continues:

C#
IEnumerator MyWaitingCoroutine()
{
    Debug.Log("Logic Part 1: Frame " + Time.frameCount);
    yield return new WaitForSeconds(2);
    Debug.Log("Logic Part 2: Frame " + Time.frameCount);
}

In this scenario, "Logic Part 1" would execute during the "Run Coroutines" step of the game loop. The coroutine would then yield, waiting for 2 seconds. During this time, the game loop would continue to cycle through its steps, updating physics and rendering frames. After approximately 2 seconds, the coroutine would resume, executing "Logic Part 2" during the "Run Coroutines" step. Understanding this interleaved execution is key to mastering coroutines in Unity.
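To observe this scheduling yourself, here is a small probe script, a sketch you can attach to any active GameObject; it logs when each yield type resumes, making the game-loop phases visible in the Console:

C#
using System.Collections;
using UnityEngine;

public class YieldTimingProbe : MonoBehaviour
{
    IEnumerator Start()
    {
        Debug.Log("Started on frame " + Time.frameCount);

        // Resumes during the next physics (fixed update) step
        yield return new WaitForFixedUpdate();
        Debug.Log("After WaitForFixedUpdate, frame " + Time.frameCount);

        // Resumes once rendering for the current frame has finished
        yield return new WaitForEndOfFrame();
        Debug.Log("After WaitForEndOfFrame, frame " + Time.frameCount);

        // Resumes roughly two seconds of game time later
        yield return new WaitForSeconds(2f);
        Debug.Log("After WaitForSeconds(2), frame " + Time.frameCount);
    }
}

Comparing the logged frame numbers against the pseudo game loop above shows exactly where in each cycle the coroutine is resumed.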
This understanding allows you to write code that is both efficient and easy to manage, as you can break up tasks into smaller parts without blocking the main game loop. In the next section, we'll explore some practical use cases where this capability is particularly beneficial.

Use Cases for Coroutines
Coroutines are a versatile tool in Unity, capable of handling a wide range of scenarios that require asynchronous or time-dependent behavior. Their ability to pause and resume makes them particularly useful for tasks that are too complex or time-consuming to be executed in a single frame. In this section, we'll explore some common use cases where coroutines shine and provide practical Unity C# examples to demonstrate their utility.

Timed Events
Coroutines are excellent for managing events that need to happen after a certain amount of time has passed. For example, you might want to delay a game character's action or trigger an event after a countdown.

C#
IEnumerator TriggerTimedEvent()
{
    yield return new WaitForSeconds(5);
    Debug.Log("Timed event triggered!");
}

In this example, the message "Timed event triggered!" will be logged after a 5-second delay.

Animations
Coroutines can also be used to control animations, especially those that require precise timing or sequencing.

C#
IEnumerator AnimateObject(Vector3 targetPosition)
{
    Vector3 startPosition = transform.position;
    float journeyLength = Vector3.Distance(startPosition, targetPosition);
    float startTime = Time.time;
    float speed = 1.0f;
    float fractionOfJourney = 0f;

    while (fractionOfJourney < 1)
    {
        float distanceCovered = (Time.time - startTime) * speed;
        fractionOfJourney = distanceCovered / journeyLength;
        transform.position = Vector3.Lerp(startPosition, targetPosition, fractionOfJourney);
        yield return null;
    }
}

Here, the object will move from its current position to a target position, interpolating its position over time.

AI Behaviors
Coroutines can be used to manage complex AI behaviors, such as decision-making processes that occur over multiple frames.

C#
IEnumerator AIDecisionMaking()
{
    Debug.Log("AI thinking...");
    yield return new WaitForSeconds(2);
    Debug.Log("AI made a decision!");
}

In this example, the AI "thinks" for 2 seconds before making a decision, represented by the log statements.

A Practical Example in Unity
Consider a game where a player's health regenerates over time. A coroutine can manage this efficiently:

C#
// Assumes an int playerHealth field on this MonoBehaviour
IEnumerator RegenerateHealth()
{
    while (true)
    {
        if (playerHealth < 100)
        {
            playerHealth++;
            Debug.Log("Health: " + playerHealth);
        }
        yield return new WaitForSeconds(1);
    }
}

In this example, the player's health increases by 1 every second until it reaches 100, at which point the coroutine will still run but won't increase the health.

Understanding these practical applications of coroutines can significantly improve the way you approach problem-solving in Unity. Whether it's managing time-dependent events, controlling animations, or implementing complex AI behaviors, coroutines offer a flexible and efficient way to achieve your goals. In the next article, we'll delve deeper into best practices, performance considerations, and more advanced coroutine patterns.

Conclusion
As we've explored in this article, understanding the mechanics of coroutine execution is not just an academic exercise; it's a practical skill that can significantly enhance your Unity projects.
Coroutines offer a robust and flexible way to manage asynchronous and time-dependent tasks, from simple timed events and animations to more complex AI behaviors. For instance, we've seen how you can use coroutines to manage health regeneration in a game:

C#
IEnumerator RegenerateHealth()
{
    while (true)
    {
        if (playerHealth < 100)
        {
            playerHealth++;
            Debug.Log("Health: " + playerHealth);
        }
        yield return new WaitForSeconds(1);
    }
}

This example demonstrates that coroutines can be an effective way to handle game mechanics that are dependent on time or other asynchronous events. The yield return new WaitForSeconds(1); line is a powerful yet straightforward way to introduce a delay, allowing other game processes to continue running smoothly.

But this is just scratching the surface. As you become more comfortable with coroutines, you'll find that they can be used for much more than simple delays and animations. They can manage complex state machines for AI, handle user input in a non-blocking manner, and even manage resource-intensive tasks by spreading the workload over multiple frames.

In the next article, we'll delve deeper into the world of coroutines, exploring best practices to optimize your usage of this feature. We'll look at performance considerations, such as how to avoid common pitfalls that can lead to frame rate drops. We'll also explore advanced coroutine patterns, like nested coroutines, and how to manage multiple coroutines efficiently.

By mastering coroutines, you're adding a powerful tool to your Unity development toolkit. Whether you're developing a simple mobile game or a complex virtual reality experience, coroutines can help you create more efficient and responsive games. So stay tuned for our next piece, where we'll take your coroutine skills to the next level.