People have been emailing and asking me to write something on some soft topics for beginners, as I write mostly for mid-level or senior developers. So, here is a new article for beginners, especially for UI developers. Today, let's explore the common challenges developers face when working with Tailwind CSS and how to overcome them using the powerful combination of Tailwind Merge and clsx.

What's the Problem?

When using Tailwind CSS, you often want to pass custom class names to your components, just like you would with a native HTML element, which allows you to style your components dynamically and override the default styles. However, this can lead to conflicts when the custom class names clash with the base Tailwind classes.

The problem with Tailwind is that these conflicts are not predictable. Whether you put the custom class at the front of the class list or at the end, when there is a conflict you don't reliably get the result you expect. The default behavior of Tailwind doesn't always align with our intuition, where we expect the last class to take precedence in case of a conflict.

Introducing Tailwind Merge

The cleanest solution to this tricky problem is a utility function called Tailwind Merge. This function intelligently merges conflicting Tailwind classes and makes sure that the last class wins, which aligns with our expectations.

```javascript
import { twMerge } from 'tailwind-merge';

const containerClasses = twMerge(
  'bg-blue-500 text-white px-4 py-2 rounded',
  'bg-red-500'
);
```

In the example above, the twMerge function takes the base Tailwind classes and the custom class name as arguments and returns the merged result. This way, the bg-red-500 class will override the bg-blue-500 class, as expected.

Handling Conditional Classes

Another common scenario is when you need to apply different classes based on a condition, such as a component's state. Tailwind Merge makes this easy to manage as well:

```javascript
const buttonClasses = twMerge(
  'bg-blue-500 text-white px-4 py-2 rounded',
  'bg-green-500',
  isLoading && 'bg-gray-500'
);
```

In this case, if the isLoading variable is true, the bg-gray-500 class will be added to the final class string.

Introducing clsx

While Tailwind Merge solves the problem of conflicting classes, some developers prefer to use an object-based syntax for conditional classes. This is where the clsx library comes in handy.

```javascript
import clsx from 'clsx';
import { twMerge } from 'tailwind-merge';

const buttonClasses = twMerge(
  clsx({
    'bg-blue-500 cursor-not-allowed': !loading,
    'bg-gray-500 cursor-pointer': loading,
  }),
  'text-white px-4 py-2 rounded'
);
```

By using clsx, you can now define your conditional classes in an object-based format, which some developers find more intuitive.

Combining the Powers of Tailwind Merge and clsx

To get the best of both worlds, you can combine Tailwind Merge and clsx using a custom utility function:

```typescript
import { twMerge } from 'tailwind-merge';
import clsx, { type ClassValue } from 'clsx';

export const cn = (...inputs: ClassValue[]) => {
  return twMerge(clsx(inputs));
};
```

This cn (short for "class names") function first passes the input classes through clsx, which handles the object-based conditional classes, and then passes the result to Tailwind Merge to resolve any conflicts. Now, you can use this cn function in your components with both syntaxes:

```javascript
const buttonClasses = cn(
  {
    'bg-blue-500': !pending,
    'bg-gray-500': pending,
  },
  'text-white px-4 py-2 rounded'
);
```

Or:

```javascript
const buttonClasses = cn(
  'text-white px-4 py-2 rounded',
  pending ? 'bg-blue-500' : 'bg-gray-500'
);
```

This approach allows you to leverage the strengths of both Tailwind Merge and clsx together, providing a flexible and intuitive way to manage your component styles.

Conclusion

Understanding and mastering Tailwind Merge and clsx can greatly improve your experience when working with Tailwind CSS. By combining these tools, you can effectively manage class conflicts and conditional styles, and create reusable, well-structured components.
There is a great post on c2.com. c2.com is one of those golden blogs of the past, just like Coding Horror and Joel on Software. You might have stumbled upon them before, especially if you have been around for a long time. In the past, it was the norm to encourage individuals to read the source code and be able to figure out how things work. I see a trend against it from time to time, including ranting on open-source software and its documentation, which feels weird since having the source code available is essentially the ultimate form of documentation. Apart from being encouraged as a good practice, I believe it's the natural way for your troubleshooting to evolve. Over the last few years, I've caught myself relying mostly on reading the source code, instead of Stack Overflow, a generative AI solution, or a Google search. Going straight to the repository of interest has been waaaaaay faster. There are various reasons for that.

Your Problems Get More Niche

One of the reasons we get the search results we get is popularity. More individuals are searching for Spring Data JPA repositories than for NamedQueries in Hibernate. The further the software product you develop advances, the more specific the issues you need to tackle become. If you want to understand how the Pub/Sub thread pool is used, chances are you will get tons of search results on getting started with Pub/Sub but none answering your question. And that's OK: the more things advance, the more niche a situation gets. The same thing applies to Gen AI-based solutions. These solutions have been of great help, especially the ones that crunched vast amounts of open-source repositories, but still, the results are influenced by the average data they have ingested. We could spend hours battling with search and prompts, but going for the source would be way faster.

Buried Under Search Engine Optimization

The moment you go for the second page on a search engine, you know it's over. The information you are looking for is nowhere to be found. On top of that, you get bombarded with sites popping up with information irrelevant to your request. This affects your attention span, but it's also frustrating, since a hefty amount of time is spent sorting through the results in the hope of maybe getting your answer.

You Want the Truth

LLMs are great. We are privileged to have this technology in this era. Getting a result from an LLM is based on the training data used. Since ChatGPT has crunched GitHub, the results can be way closer to what I am looking for. This can get me far in certain cases, but not in cases where accuracy is needed. LLMs make stuff up, and that's OK; we are responsible adults, and it's our duty to validate a prompt's response as well as extract the value that is there. If you are interested in how many streams the BigQuery connector for Apache Beam opens on the stream API, there's no alternative to reading the source code. The source code is the source of truth. The same applies to that exotic tool you recently found out about, which synchronizes data between two cloud buckets. When you want to know how many operations occur so you can keep the bills low, you have no alternative to checking the source code.

The Quality of the Code Is Great

It's mind-blowing how easy it is to navigate the source code of open-source projects nowadays. High-quality code and good practices are widespread. Most projects have a structure that is pretty much predictable.
Also, the presence of extensive testing assists a lot, since the test cases act as a specification of how a component should behave. If you think about it, on the one hand I have the choice of issuing multiple search requests or various prompts and then refining them until I get the result I want; on the other hand, all I have to do is search a project with a predictable structure.

There's Too Much Software Out There

Overall, there is way too much software out there; documenting all of it fully would be a Herculean effort. Also, no matter how many software blogs are out there, they won't focus on that specific need of yours. The more specialized a piece of software is, the less likely it is to be widely documented. I don't see this as a negative; actually, it's a positive that we have software components available to tackle niche use cases. Having that software is already a win; having to read its source is part of using it.

Devil Is in the Details

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."

It is common to assume that a software component operates in a specific way and to build on that assumption. The same assumption can also be found in other sources. But what if that module you thought was thread-safe is not? What if you have to commit a transaction while you assume the transaction is auto-committed once you exit a block? Usually, if something is not spelled out in the documentation in bold letters, we fall back on assumptions. Checking the source is the one thing that can protect you from false assumptions. It's all about understanding how things work and respecting their peculiarities.

Overall, the more I embraced checking the source code, the less frustrating things became. Somehow it has become my shortcut of choice. Tools and search can fail you, but the source code can't let you down; it's the source of truth, after all.
Lately, I have been playing with JBang and PicoCLI, and I am pretty amazed at what we can do with these tools. I needed to create a script that would go to a specified repository on GitHub, check the commit range, and verify whether any tickets were associated with the commits. Additionally, I wanted to check if the ticket was accepted and if the commit was approved or not. The idea was to integrate this script into the CI/CD pipeline. While the traditional approach might involve using bash scripts or Python, as a Java developer, I feel more at home doing this in Java. This is where JBang comes into the picture. And since I want this to be a command-line tool, PicoCLI comes in handy. In this article, I will show you how to create a script with JBang and PicoCLI to generate release notes.

Step 1: Install JBang

If you don't already have JBang installed, you can install it by following these steps:

On macOS:

```shell
brew install jbangdev/tap/jbang
```

On Linux:

```shell
curl -Ls https://sh.jbang.dev | bash -s - app setup
```

After installing JBang, you can verify the installation by running:

```shell
jbang --version
```

Step 2: Initialize Your JBang Script

First, we need to initialize our JBang script. You can do this by running the following command:

```shell
jbang init release-notes.java
```

This will create a basic Java file. It starts with a shebang line. In Unix-like environments (macOS, Linux, etc.), this line tells the operating system how to execute the script when you run it directly from the terminal: it instructs the shell to use JBang to run the script, making it behave like a standalone command. Even without explicitly calling JBang, your script will execute seamlessly, with JBang handling the dependencies and running the Java code.

To open it in your IDE, you can use:

```shell
jbang edit --sandbox release-notes.java
```

This creates a sandbox environment and sets up a Gradle project for you. You can then open it in your favorite IDE.

Step 3: Add Dependencies

JBang's //DEPS directive makes dependency management a breeze. You just need to specify the dependencies at the top of your Java file:

```java
///usr/bin/env jbang "$0" "$@" ; exit $?
//JAVA 21+
//DEPS org.projectlombok:lombok:1.18.30
//DEPS info.picocli:picocli:4.6.2
//DEPS commons-io:commons-io:2.15.1
//DEPS com.fasterxml.jackson.core:jackson-databind:2.16.1
//DEPS com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.16.1
//DEPS io.github.openfeign:feign-java11:11.8
//DEPS io.github.openfeign:feign-jackson:11.8
//DEPS ch.qos.logback:logback-classic:1.5.6
```

When working with JBang, you can easily add dependencies to your script using the //DEPS directive. This format allows you to include external libraries directly in your script, simplifying the process of managing dependencies.

Step 4: Set Up Logging

Let's combine Logback with colorized output for those who love visual feedback. This involves setting up a custom appender to enhance your logging experience.
```java
private static void configureLogback() {
    LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

    PatternLayoutEncoder encoder = new PatternLayoutEncoder();
    encoder.setContext(context);
    encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n");
    encoder.start();

    PicoCLIColorizedAppender appender = new PicoCLIColorizedAppender();
    appender.setContext(context);
    appender.setEncoder(encoder);
    appender.start();

    Logger rootLogger = (Logger) LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
    rootLogger.detachAndStopAllAppenders();
    rootLogger.addAppender(appender);
    rootLogger.setLevel(Level.DEBUG);
}
```

For this, I need a custom appender.

```java
static class PicoCLIColorizedAppender extends ConsoleAppender<ILoggingEvent> {

    @Override
    protected void append(ILoggingEvent event) {
        String formattedMessage = new String(encoder.encode(event));
        String colorizedMessage = getColorizedMessage(event, formattedMessage);
        System.out.print(colorizedMessage);
    }

    private String getColorizedMessage(ILoggingEvent event, String formattedMessage) {
        String template = switch (event.getLevel().toInt()) {
            case Level.DEBUG_INT -> "@|blue %s|@";   // Blue for DEBUG
            case Level.INFO_INT -> "@|green %s|@";   // Green for INFO
            case Level.WARN_INT -> "@|yellow %s|@";  // Yellow for WARN
            case Level.ERROR_INT -> "@|red %s|@";    // Red for ERROR
            default -> "%s";
        };
        return CommandLine.Help.Ansi.AUTO.string(String.format(template, formattedMessage));
    }

    public Encoder<ILoggingEvent> getEncoder() {
        return encoder;
    }

    public void setEncoder(Encoder<ILoggingEvent> encoder) {
        this.encoder = encoder;
    }
}
```

Step 5: Configure ObjectMapper

Next, we configure the ObjectMapper for JSON serialization and deserialization:

```java
public class release_notes {

    static final ObjectMapper objectMapper = new ObjectMapper()
            .registerModule(new JavaTimeModule())
            .setPropertyNamingStrategy(PropertyNamingStrategies.SNAKE_CASE)
            .setDefaultPropertyInclusion(JsonInclude.Include.NON_NULL)
            .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);

    // Other code...
}
```

Step 6: Feign-tastic GitHub Client

We'll leverage Feign to create a GitHub client, making API interactions smooth. This involves defining an interface (GitHubClient) and implementing functions to fetch project details and commits.

```java
public class release_notes {

    static GitHubClient gitHubClient = Feign.builder()
            .decoder(new JacksonDecoder(objectMapper))
            .encoder(new JacksonEncoder(objectMapper))
            .requestInterceptor(request -> request.header("Authorization", "Bearer " + getApiToken()))
            .target(GitHubClient.class, "https://api.github.com");

    // Other code...
}
```
```java
interface GitHubClient {

    @RequestLine("GET /repos/{owner}/{repo}")
    @Headers("Accept: application/vnd.github+json")
    GithubProject getProject(@Param("owner") String owner, @Param("repo") String repo);

    @RequestLine("GET /repos/{owner}/{repo}/commits?sha={sha}&page={page}")
    @Headers("Accept: application/vnd.github+json")
    List<Commit> getCommitsPage(@Param("owner") String owner, @Param("repo") String repo,
                                @Param("sha") String sha, @Param("page") int page);

    default List<Commit> getCommits(String owner, String repo, String sha) {
        return fetchAllPages(page -> getCommitsPage(owner, repo, sha, page));
    }

    default <T> List<T> fetchAllPages(IntFunction<List<T>> pageFunction) {
        List<T> allResults = new ArrayList<>();
        List<T> curPageData;
        for (int curPageNum = 1; (curPageData = pageFunction.apply(curPageNum)).size() > 0; curPageNum++) {
            allResults.addAll(curPageData);
        }
        return allResults;
    }
}

// Records for GitHub responses
record GithubProject(String defaultBranch, String name, String description, String htmlUrl, OffsetDateTime updatedAt) {}
record Commit(String sha, CommitDetails commit, String htmlUrl) {}
record CommitDetails(String message, Author author) {}
record Author(String email, Instant date) {}
```

Note that we called a method getApiToken() when creating the client. We need to implement this.

```java
static String apiTokenCache;

static String getApiToken() {
    if (apiTokenCache != null) {
        return apiTokenCache;
    }
    try {
        Process statusProcess = new ProcessBuilder("gh", "auth", "status", "-t")
                .redirectOutput(PIPE)
                .redirectError(PIPE)
                .start();
        String statusOutput = IOUtils.toString(statusProcess.getInputStream(), Charset.defaultCharset());
        String statusError = IOUtils.toString(statusProcess.getErrorStream(), Charset.defaultCharset());

        if (statusError.contains("You are not logged into any GitHub hosts.")) {
            new ProcessBuilder("gh", "auth", "login")
                    .inheritIO()
                    .start()
                    .waitFor();
        } else if (!statusOutput.contains("Logged in to github.com account")) {
            throw new GitHubCliProcessException("Unrecognized GitHub CLI auth status:\n" + statusOutput + statusError);
        }

        Matcher tokenMatcher = GH_CLI_STATUS_TOKEN_REGEX.matcher(statusOutput);
        if (tokenMatcher.find()) {
            apiTokenCache = tokenMatcher.group(1);
            return apiTokenCache;
        } else {
            throw new GitHubCliProcessException("Unable to extract token from output: " + statusOutput);
        }
    } catch (IOException | InterruptedException e) {
        if (e instanceof InterruptedException) {
            Thread.currentThread().interrupt();
        }
        throw new GitHubCliProcessException("GitHub CLI process error: " + e.getMessage(), e);
    }
}
```

This code fetches your GitHub API token securely. It first checks if a cached token exists. If not, it uses the "gh" command-line tool to get your authentication status. It launches the "gh" login process if you're not logged in. Once logged in, it extracts your API token from the "gh" output and caches it for future use. If there are any errors during this process, it throws an exception.

Important Note: This script relies on the GitHub CLI (gh). If you haven't already installed it, you can find instructions for your operating system.
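The getApiToken() method above references a regex constant (GH_CLI_STATUS_TOKEN_REGEX) and a custom exception (GitHubCliProcessException) that are not shown in the article. A minimal sketch of what they might look like inside the release_notes class, assuming `gh auth status -t` prints a line such as "Token: gho_xxxx" (the exact output varies between GitHub CLI versions, so treat the pattern as an illustration):

```java
// Hypothetical helpers referenced by getApiToken(); not part of the original listing.
// Requires: import java.util.regex.Pattern;
static final Pattern GH_CLI_STATUS_TOKEN_REGEX = Pattern.compile("Token:\\s*(\\S+)");

// Simple unchecked exception used to signal GitHub CLI failures.
static class GitHubCliProcessException extends RuntimeException {
    GitHubCliProcessException(String message) {
        super(message);
    }

    GitHubCliProcessException(String message, Throwable cause) {
        super(message, cause);
    }
}
```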
Step 7: Create the Command Line Application

Now, the heart of the tool: PicoCLI takes over command-line argument parsing and execution of the core logic. We'll define options for GitHub user, repository, commit range, output format, and more.

```java
@Slf4j
@CommandLine.Command(name = "release_notes", mixinStandardHelpOptions = true)
class ReleaseNoteCommand implements Callable<Integer> {

    private enum OutputFormat {
        MARKDOWN, HTML
    }

    @CommandLine.Option(names = {"-u", "--user"}, description = "GitHub user", required = true)
    private String user;

    @CommandLine.Option(names = {"-r", "--repo"}, description = "GitHub repository", required = true)
    private String repo;

    @CommandLine.Option(names = {"-s", "--since"}, description = "Since commit", required = true)
    private String sinceCommit;

    @CommandLine.Option(names = {"-ut", "--until"}, description = "Until commit", required = true)
    private String untilCommit;

    @CommandLine.Option(names = {"-f", "--file"}, description = "Output file for release notes (optional)")
    private File outputFile;

    @CommandLine.Option(names = {"-v", "--version"}, description = "Release version (optional)", defaultValue = "v1.0.0")
    private String version;

    @CommandLine.Option(names = {"-o", "--output-format"}, description = "Output format (default: MARKDOWN)", defaultValue = "MARKDOWN")
    private OutputFormat outputFormat;

    @Override
    public Integer call() {
        try {
            GithubProject project = release_notes.gitHubClient.getProject(user, repo);
            List<Commit> commits = getCommitsInRange(release_notes.gitHubClient, sinceCommit, untilCommit, user, repo);
            String releaseNotes = generateReleaseNotes(commits, project, version, outputFormat);

            File outputFileWithExtension;
            if (outputFile != null) {
                String extension = (outputFormat == OutputFormat.HTML) ? ".html" : ".md";
                outputFileWithExtension = new File(outputFile.getAbsolutePath() + extension);
                try (PrintWriter writer = new PrintWriter(outputFileWithExtension, StandardCharsets.UTF_8)) {
                    writer.print(releaseNotes);
                    log.info("Release notes saved to: {}", outputFileWithExtension.getAbsolutePath());
                } catch (IOException e) {
                    log.error("Error writing release notes to file: {}", e.getMessage(), e);
                    return 1;
                }
            } else {
                log.info(releaseNotes);
            }
        } catch (Exception e) {
            log.error("Error fetching commits: {}", e.getMessage(), e);
            return 1;
        }
        return 0;
    }
}
```

This Java code defines a command-line tool (ReleaseNoteCommand) for generating release notes from a GitHub repository. It uses PicoCLI to handle command-line arguments, such as GitHub user, repository, commit range, output format, and optional version and output file. It fetches commit data using a GitHubClient, processes it to categorize changes (features, bug fixes, other), and then formats the information into either Markdown or HTML release notes. Finally, it either saves the release notes to a specified file or prints them to the console. (Note: Some methods used in this code, such as getCommitsInRange, generateReleaseNotes, and other helpers, are not shown here but can be found in the complete code here.)

Step 8: Running the Show: Main Method

Finally, implement the main method to execute the command:

```java
import picocli.CommandLine;

import static java.lang.System.exit;

public class release_notes {

    public static void main(String... args) {
        configureLogback();
        int exitCode = new CommandLine(new ReleaseNoteCommand()).execute(args);
        exit(exitCode);
    }

    // Other methods...
}
```

Your CLI script is ready! To put this creation to work, run it with the following command (adjusting the arguments to match your repository):

```shell
./release_notes.java -u rokon12 -r cargotracker -s 44e55ce -ut 50814d1 -f release -o HTML
```

This will generate an HTML file in your root directory.
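For readers curious about the helper methods referenced above, here is a rough, hypothetical sketch of what getCommitsInRange and generateReleaseNotes could do. The signatures are simplified, and the real implementations in the complete source will differ:

```java
// Hypothetical sketch, not the article's implementation. The GitHub commits API
// returns commits newest-first, so we walk the list starting at "until" and stop
// once we reach the "since" commit.
static List<Commit> getCommitsInRange(GitHubClient client, String since, String until,
                                      String owner, String repo) {
    List<Commit> range = new ArrayList<>();
    for (Commit commit : client.getCommits(owner, repo, until)) {
        if (commit.sha().startsWith(since)) {
            break; // reached the lower bound of the range
        }
        range.add(commit);
    }
    return range;
}

// Hypothetical sketch: render the commits as a simple Markdown list.
static String generateReleaseNotes(List<Commit> commits, GithubProject project, String version) {
    StringBuilder notes = new StringBuilder("# Release notes for " + project.name() + " " + version + "\n\n");
    for (Commit commit : commits) {
        String firstLine = commit.commit().message().lines().findFirst().orElse("");
        notes.append("- ").append(firstLine)
             .append(" (").append(commit.sha(), 0, 7).append(")\n");
    }
    return notes.toString();
}
```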
It also provides excellent help output. For example:

```
./release_notes.java
Missing required options: '--user=<user>', '--repo=<repo>', '--since=<sinceCommit>', '--until=<untilCommit>'
Usage: release_notes [-f=<outputFile>] [-o=<outputFormat>] -r=<repo> -s=<sinceCommit> -u=<user>
                     -ut=<untilCommit> [-v=<version>]
  -f, --file=<outputFile>      Output file for release notes (optional)
  -o, --output-format=<outputFormat>
                               Output format (default: MARKDOWN)
  -r, --repo=<repo>            GitHub repository
  -s, --since=<sinceCommit>    Since commit
  -u, --user=<user>            GitHub user
  -ut, --until=<untilCommit>   Until commit
  -v, --version=<version>      Release version (optional)
```

If we don't specify an output file, the release notes are simply printed to the terminal. That's it.

Conclusion

Congratulations! You've built a versatile release notes generator powered by JBang and PicoCLI. This tool, easily integrated into your CI/CD pipelines, empowers you to create detailed, informative release notes straight from GitHub while enjoying the comfort and familiarity of Java. Feel free to tailor it further to match your specific workflow. Let me know if you'd like me to elaborate on any specific code section or aspect! Don't forget to share this post!
The topic of note-taking remains relevant today. We know the benefits it provides to the author. We are familiar with various approaches to note-taking and tools that can be used, and we have choices. Imagine you have found your approach, your tool, and your note base is growing and pleasing to the eye. What next? I want to discuss one path of development in this area. This article is dedicated to the concept of a digital garden — the philosophy of publicly maintaining personal notes. My Path For the past 20 years, I have kept notes using methods that lacked systematization, reliability, or usefulness: paper notebooks, text files, Evernote, and other applications whose names have faded from memory. Three years ago, I started using the Zettelkasten style (as I understood and adapted it for myself). I learned about this approach from Rob Muhlestein whom I encountered on Twitch. Experiments led me to my current note-taking method: in the form of Markdown files stored in a Git repository. On my computer, I work with them in VSCode, and on my phone, I use the default "Notes" app. Why not Obsidian? I tried it and decided to stick with VSCode for the following reasons: VSCode is a basic editor that is always open and used for work files. I could not set up Obsidian synchronization via Git, and iCloud synchronization caused app crashes on my phone. I need a quickly accessible app on my phone to jot down fleeting thoughts before, for example, reaching for a roll of toilet paper and experiencing discomfort from waiting. At the same time, I need the note base on my phone much less frequently than the need to urgently write something down. Separating apps for sudden thoughts and the main base has an advantage: it creates a ritual of transferring notes from the phone app to the main base. I review fresh notes, tag them, and add details. This allows me to revisit the recorded thought and increase the chances of not forgetting it. What Is a Digital Garden? The phrase "digital garden" is a metaphor describing an approach to note-taking. It is not just a set of tools like WordPress plugins or Jekyll templates. The idea of a garden is familiar to all of us — it is a place where something grows. Gardens can be very personal and filled with gnome figurines, or they can be sources of food and vitality. And who knows what a sudden visitor to your garden might see? Are you in a lovely pajama with a glass of fresh juice under an apple tree? Or perhaps standing upside down, trying to bring a bit of order and pull out weeds? The digital gardening metaphor emphasizes the slow growth of ideas through writing, rewriting, editing, and revisiting thoughts in a public space. Instead of fixed opinions that never change, this approach allows ideas to develop over time. The goal of digital gardening is to use the collective intelligence of your network to create constructive feedback loops. If done right, you will have an accessible representation of your thoughts that can be "sent out" into the world, and people will be able to respond to it. Even for the most raw ideas, it helps to create a feedback loop to strengthen and fully develop the idea. Core Principles of Gardening Connections over timelines: Gardens are organized around contextual and associative connections; concepts and themes within each note define how they relate to others. The publication date is not the most important aspect of the text. Continuous growth: Gardens never end; they are constantly growing, evolving, and changing, like a real garden. 
Imperfection: Gardens are inherently imperfect. They do not hide their rough edges and do not claim to be a permanent source of truth. Learning in public: To create constructive feedback loops Personal and experimental: Gardens are inherently heterogeneous. You may plant the same seeds as your neighbor but get a different arrangement of plants. You organize the garden around ideas and means that fit your way of thinking rather than a standard template. Independent ownership: Gardening is about creating your own little corner of the internet that you fully control. Learning in Public This point may raise questions, as it suggests sharing something for free. The approach implies that you publicly document your steps, thoughts, mistakes, and successes in mastering a new topic or skill. This allows you not only to share the result but also to demonstrate the thinking and learning process, which can be useful to others. Besides altruistic motives, personal interest is worth noting. Sometimes I am too lazy to clearly formulate thoughts in notes, which then backfires — I can't understand what I meant. It's amusing how I always rely on my future superpower to decipher my own nonsensical notes — a belief that remains unshakable, though entirely unfounded. But when it comes to others, I don’t allow such naivety. Knowing that my note might not only attract attention but also genuinely help someone motivates me to make an effort and articulate the thought properly. How To Share Today, convenient tools have made creating a fully customizable website much easier. Services like Netlify and Vercel have removed deployment complexities. Static site generators like Jekyll, Gatsby, 11ty, and Next simplify creating complex sites that automatically generate pages and handle load time, image optimization, and SEO. Obsidian offers the ability to publish notes through its subscription platform. Using this service does not feel like "independent ownership." I chose Quartz for myself. It is a free static generator based on Markdown content. Quartz is designed primarily as a tool for publishing digital gardens on the internet. It is simple enough for people without technical experience but powerful enough for customization by experienced developers. My Digital Garden As mentioned, I store my notes in a public Git repository. Most of it is in Russian, with some in English. Changes are automatically published on GitHub Pages and available on my digital garden page. The design, solution scheme, and GitHub Actions scripts are available for review in this note if you want to create something similar. Some use RSS for updates, while I use a Telegram channel for this purpose. It is specifically an update channel, messages are not posted separately. Materials for Further Study The concept has a philosophy and history; I won't retell them but will provide links where you can read more: Maggie Appleton Website GitHub Jacky Zhao GitHub Examples can also be found through a GitHub search if the repository is tagged with the relevant topic. See here for more. Conclusion Establishing a digital garden has been a logical continuation of my journey, starting with the Zettelkasten method. How has it affected me? After the initial effort to set up and deploy the system, I hardly maintain it except for occasional issues. And now I continue to push my notes to Git. The only thing is that I have started to make them more understandable. 
Thank you for reading the article, and good luck in your quest for organizing thoughts and creating an effective space for ideas!
Basic Retrieval-Augmented Generation (RAG) data pipelines often rely on hard-coded steps, following a predefined path every time they run. There is no real-time decision-making in these systems, and they do not dynamically adjust actions based on input data. This limitation can reduce flexibility and responsiveness in complex or changing environments, highlighting a major weakness in traditional RAG systems.

LlamaIndex resolves this limitation by introducing agents. Agents are a step beyond our query engines in that they can not only "read" from a static source of data, but can dynamically ingest and modify data from various tools. Powered by an LLM, these agents are designed to perform a series of actions to accomplish a specified task by choosing the most suitable tools from a provided set. These tools can be as simple as basic functions or as complex as comprehensive LlamaIndex query engines. They process user inputs or queries, make internal decisions on how to handle these inputs, and decide whether additional steps are necessary or if a final result can be delivered. This ability to perform automated reasoning and decision-making makes agents highly adaptable and efficient for complex data processing tasks.

(Diagram source: LlamaIndex.) The diagram illustrates the workflow of LlamaIndex agents: how they generate steps, make decisions, select tools, and evaluate progress to dynamically accomplish tasks based on user inputs.

Core Components of a LlamaIndex Agent

There are two main components of an agent in LlamaIndex: AgentRunner and AgentWorker.

(Diagram source: LlamaIndex.)

Agent Runner

The Agent Runner is the orchestrator within LlamaIndex. It manages the state of the agent, including conversational memory, and provides a high-level interface for user interaction. It creates and maintains tasks and is responsible for running steps through each task. Here's a detailed breakdown of its functionalities:

Task creation: The Agent Runner creates tasks based on user queries or inputs.
State management: It stores and maintains the state of the conversation and tasks.
Memory management: It manages conversational memory internally, ensuring context is maintained across interactions.
Task execution: It runs steps through each task, coordinating with the Agent Worker.

Unlike LangChain agents, which require developers to manually define and pass memory, LlamaIndex agents handle memory management internally.

(Diagram source: LlamaIndex.)

Agent Worker

The Agent Worker controls the step-wise execution of a task given by the Agent Runner. It is responsible for generating the next step in a task based on the current input. Agent Workers can be customized to include specific reasoning logic, making them highly adaptable to different tasks. Key aspects include:

Step generation: Determines the next step in the task based on current data.
Customization: Can be tailored to handle specific types of reasoning or data processing.

The Agent Runner manages task creation and state, while the Agent Worker carries out the steps of each task, acting as the operational unit under the Agent Runner's direction.

Types of Agents in LlamaIndex

LlamaIndex offers different kinds of agents designed for specific tasks and functions.

Data Agents

Data Agents are specialized agents designed to handle various data tasks, including retrieval and manipulation. They can operate in both read and write modes and interact seamlessly with different data sources.
Data Agents can search, retrieve, update, and manipulate data across various databases and APIs. They support interaction with platforms like Slack, Shopify, Google, and more, allowing for easy integration with these services. Data Agents can handle complex data operations such as querying databases, calling APIs, updating records, and performing data transformations. Their adaptable design makes them suitable for a wide range of applications, from simple data retrieval to intricate data processing pipelines.

```python
from llama_index.agent import OpenAIAgent, ReActAgent
from llama_index.llms import OpenAI

# import and define tools
...

# initialize llm
llm = OpenAI(model="gpt-3.5-turbo")

# initialize openai agent
agent = OpenAIAgent.from_tools(tools, llm=llm, verbose=True)

# initialize ReAct agent
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)

# use agent
response = agent.chat("What is (121 * 3) + 42?")
```

Custom Agents

Custom Agents give you a lot of flexibility and customization options. By subclassing CustomSimpleAgentWorker, you can define specific logic and behavior for your agents. This includes handling complex queries, integrating multiple tools, and implementing error-handling mechanisms. You can tailor Custom Agents to meet specific needs by defining step-by-step logic, retry mechanisms, and integrating various tools. This customization lets you create agents that manage sophisticated tasks and workflows, making them highly adaptable to different scenarios. Whether managing intricate data operations or integrating with unique services, Custom Agents provide the tools you need to build specialized, efficient solutions.

Tools and Tool Specs

Tools are the most important component of any agent. They allow the agent to perform various tasks and extend its functionality. By using different types of tools, an agent can execute specific operations as needed. This makes the agent highly adaptable and efficient.

Function Tools

Function Tools let you convert any Python function into a tool that an agent can use. This feature is useful for creating custom operations, enhancing the agent's ability to perform a wide range of tasks. You can transform simple functions into tools that the agent incorporates into its workflow. This can include mathematical operations, data processing functions, and other custom logic. You can convert your Python function into a tool like this:

```python
from llama_index.core.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)
```

The FunctionTool method in LlamaIndex allows you to convert any Python function into a tool that an agent can use. The name of the function becomes the name of the tool, and the function's docstring serves as the tool's description.

QueryEngine Tools

QueryEngine Tools wrap existing query engines, allowing agents to perform complex queries over data sources. These tools integrate with various databases and APIs, enabling the agent to retrieve and manipulate data efficiently. They enable agents to interact with specific data sources, execute complex queries, and retrieve relevant information. This integration allows the agent to use the data effectively in its decision-making processes.
To convert any query engine to a query engine tool, you can use the following code:

```python
from llama_index.core.tools import QueryEngineTool
from llama_index.core.tools import ToolMetadata

query_engine_tools = QueryEngineTool(
    query_engine="your_index_as_query_engine_here",
    metadata=ToolMetadata(
        name="name_your_tool",
        description="Provide the description",
    ),
)
```

The QueryEngineTool method allows you to convert a query engine into a tool that an agent can use. The ToolMetadata class helps define the name and description of this tool. The name of the tool is set by the name attribute, and the description is set by the description attribute.

Note: The description of the tool is extremely important because it helps the LLM decide when to use that tool.

Building an AI Agent Using MyScaleDB and LlamaIndex

Let's build an AI agent using both a Query Engine Tool and a Function Tool to demonstrate how these tools can be integrated and utilized effectively.

Install the Necessary Libraries

First, install the required libraries by running the following command in your terminal:

```shell
pip install myscale-client llama-index
```

We will use MyScaleDB as a vector search engine to develop the query engine. It's an advanced SQL vector database that has been specially designed for scalable applications.

Get the Data for the Query Engine

We will use the Nike catalog dataset for this example. Download and prepare the data using the following code:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
import requests

url = 'https://niketeam-asset-download.nike.net/catalogs/2024/2024_Nike%20Kids_02_09_24.pdf?cb=09302022'
response = requests.get(url)

with open('Nike_Catalog.pdf', 'wb') as f:
    f.write(response.content)

reader = SimpleDirectoryReader(input_files=["Nike_Catalog.pdf"])
documents = reader.load_data()
```

This code will download the Nike catalog PDF and load the data for use in the query engine.

Connecting With MyScaleDB

Before using MyScaleDB, we need to establish a connection:

```python
import clickhouse_connect

client = clickhouse_connect.get_client(
    host='your_host_here',
    port=443,
    username='your_username_here',
    password='your_password_here'
)
```

To learn how to get the cluster details and read more about MyScale, you can refer to the MyScaleDB quickstart guide.

Create the Query Engine Tool

Let's build the first tool for our agent: the query engine tool. For that, we first develop the query engine using MyScaleDB and add the Nike catalog data to the vector store.

```python
from llama_index.vector_stores.myscale import MyScaleVectorStore
from llama_index.core import StorageContext

vector_store = MyScaleVectorStore(myscale_client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
query_engine = index.as_query_engine()
```

Once the data is ingested into the vector store, an index is created. The next step is to transform the query engine into a tool. For that, we will use the QueryEngineTool method of LlamaIndex.

```python
from llama_index.core.tools import QueryEngineTool
from llama_index.core.tools import ToolMetadata

query_engine_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="nike_data",
        description="Provide information about the Nike products. Use a detailed plain text question as input to the tool."
    ),
)
```

The QueryEngineTool takes query_engine and metadata as arguments.
In the metadata, we define the name of the tool and its description.

Create the Function Tool

Our next tool is a simple Python function that multiplies two numbers. This method will be transformed into a tool using the FunctionTool of LlamaIndex.

```python
from llama_index.core.tools import FunctionTool

# Define a simple Python function
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

# Change the function into a tool
multiply_tool = FunctionTool.from_defaults(fn=multiply)
```

After this, we are done with the tools. LlamaIndex agents take tools as a Python list, so let's add the tools to a list.

```python
tools = [multiply_tool, query_engine_tool]
```

Define the LLM

Let's define the LLM, the heart of any LlamaIndex agent. The choice of LLM is crucial because the better the understanding and performance of the LLM, the more effectively it can act as a decision-maker and handle complex problems. We will use the gpt-3.5-turbo model from OpenAI.

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")
```

Initialize the Agent

As we saw earlier, an agent consists of an Agent Runner and an Agent Worker. These are the two building blocks of an agent. Now, we will explore how they work in practice. We have implemented the code below in two ways:

Custom agent: The first method is to initialize the Agent Worker with the tools and LLM, and then pass the Agent Worker to the Agent Runner to handle the complete agent. Here, you import the necessary modules and compose your own agent.

```python
from llama_index.core.agent import AgentRunner
from llama_index.agent.openai import OpenAIAgentWorker

# Method 1: Initialize AgentRunner with an OpenAIAgentWorker
openai_step_engine = OpenAIAgentWorker.from_tools(tools, llm=llm, verbose=True)
agent1 = AgentRunner(openai_step_engine)
```

Use a predefined agent: The second method is to use the agents that are subclasses of AgentRunner and bundle an OpenAIAgentWorker under the hood. Therefore, we do not need to define the AgentRunner or AgentWorker ourselves, as they are implemented on the backend.

```python
from llama_index.agent.openai import OpenAIAgent

# Method 2: Initialize the OpenAIAgent directly
agent = OpenAIAgent.from_tools(tools, llm=llm, verbose=True)
```

Note: When verbose=True is set, we gain insight into the model's thought process, allowing us to understand how it arrives at its answers by providing detailed explanations and reasoning.

Regardless of the initialization method, you can test the agents in the same way. Let's test the predefined agent first:

```python
# Call the predefined agent
response = agent.chat("What's the price of BOYS NIKE DF STOCK RECRUIT PANT DJ573?")
```

The agent answers by retrieving the relevant information from the Nike catalog index. Now, let's call the custom agent with a math operation.

```python
# Call the custom agent
response = agent1.chat("What's 2+2?")
```

Upon calling the custom agent and asking for a math operation, you will get the computed answer.

The potential for AI agents to handle complex tasks autonomously is expanding, making them invaluable in business settings where they can manage routine tasks and free up human workers for higher-value activities. As we move forward, the adoption of AI agents is expected to grow, further revolutionizing how we interact with technology and optimize our workflows.

Conclusion

LlamaIndex agents offer a smart way to manage and process data, going beyond traditional RAG systems. Unlike static data pipelines, these agents make real-time decisions, adjusting their actions based on incoming data.
This automated reasoning makes them highly adaptable and efficient for complex tasks. They integrate various tools, from basic functions to advanced query engines, to intelligently process inputs and deliver optimized results.
Navigating toward a cloud-native architecture can be both exciting and challenging. The expectation of learning valuable lessons should always be top of mind as design becomes a reality. In this article, I wanted to focus on an example where my project seemed like a perfect serverless use case, one where I’d leverage AWS Lambda. Spoiler alert: it was not. Rendering Fabric.js Data In a publishing project, we utilized Fabric.js — a JavaScript HTML5 canvas library — to manage complex metadata and content layers. These complexities included spreads, pages, and templates, each embedded with fonts, text attributes, shapes, and images. As the content evolved, teams were tasked with updates, necessitating the creation of a publisher-quality PDF after each update. We built a Node.js service to run Fabric.js, generating PDFs and storing resources in AWS S3 buckets with private cloud access. During a typical usage period, over 10,000 teams were using the service, with each individual contributor sending multiple requests to the service as a result of manual page saves or auto-saves driven by the Angular client. The service was set up to run as a Lambda in AWS. The idea of paying at the request level seemed ideal. Where Serverless Fell Short We quickly realized that our Lambda approach wasn’t going to cut it. The spin-up time turned out to be the first issue. Not only was there the time required to start the Node.js service but preloading nearly 100 different fonts that could be used by those 10,000 teams caused delays too. We were also concerned about Lambda’s processing limit of 250 MB of unzipped source code. The initial release of the code was already over 150 MB in size, and we still had a large backlog of feature requests that would only drive this number higher. Finally, the complexity of the pages — especially as more elements were added — demanded increased CPU and memory to ensure quick PDF generation. After observing the usage for first-generation page designs completed by the teams, we forecasted the need for nearly 12 GB of RAM. Currently, AWS Lambdas are limited to 10 GB of RAM. Ultimately, we opted for dedicated EC2 compute resources to handle the heavy lifting. Unfortunately, this decision significantly increased our DevOps management workload. Looking for a Better Solution Although I am no longer involved with that project, I’ve always wondered if there was a better solution for this use case. While I appreciate AWS, Google, and Microsoft providing enterprise-scale options for cloud-native adoption, what kills me is the associated learning curve for every service. The company behind the project was a smaller technology team. Oftentimes teams in that position struggle with adoption when it comes to using the big three cloud providers. The biggest challenges I continue to see in this regard are: A heavy investment in DevOps or CloudOps to become cloud-native. Gaining a full understanding of what appears to be endless options. Tech debt related to cost analysis and optimization. Since I have been working with the Heroku platform, I decided to see if they had an option for my use case. Turns out, they introduced large dynos earlier this year. For example, with their Performance-L RAM Dyno, my underlying service would get 50x the compute power of a standard Dyno and 30 GB of RAM. The capability to write to AWS S3 has been available from Heroku for a long time too. 
V2 Design in Action

Using the Performance-L RAM dyno in Heroku would be no different (at least operationally) than using any other dyno in Heroku. To run my code, I just needed the following items:

A Heroku account
The Heroku command-line interface (CLI) installed locally

After navigating to the source code folder, I would issue a series of commands to log in to Heroku, create my app, set up my AWS-related environment variables, and run up to five instances of the service using the Performance-L dyno with auto-scaling in place:

```shell
heroku login
heroku apps:create example-service
heroku config:set AWS_ACCESS_KEY_ID=MY-ACCESS-ID AWS_SECRET_ACCESS_KEY=MY-ACCESS-KEY
heroku config:set S3_BUCKET_NAME=example-service-assets
heroku ps:scale web=5:Performance-L-RAM
git push heroku main
```

Once deployed, my example-service application can be called via standard RESTful API calls. As needed, the auto-scaling technology in Heroku could launch up to five instances of the Performance-L dyno to meet consumer demand. I would have gotten all of this without having to spend a lot of time understanding a complicated cloud infrastructure or worrying about cost analysis and optimization.

Projected Gains

As I thought more about the CPU and memory demands of our publishing project — during standard usage seasons and peak usage seasons — I saw how these performance dynos would have been exactly what we needed. Instead of crippling our CPU and memory when the requested payload included several Fabric.js layers, we would have had enough horsepower to generate the expected image, often before the user navigated to the page containing the preview images. We wouldn't have had size constraints on our application source code, which we would inevitably have hit with AWS Lambda's limitations within the next 3 to 4 sprints. The time required for our DevOps team to learn Lambdas first and then switch to EC2 hit our project's budget pretty noticeably. And even then, those services weren't cheap, especially when spinning up several instances to keep up with demand. But with Heroku, the DevOps investment would be considerably reduced and placed into the hands of software engineers working on the use case. Just like any other dyno, it's easy to use and scale up the performance dynos either with the CLI or the Heroku dashboard.

Conclusion

My readers may recall my personal mission statement, which I feel can apply to any IT professional:

"Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else." — J. Vester

In this example, I had a use case that required a large amount of CPU and memory to process complicated requests made by over 10,000 consumer teams. I walked through what it would have looked like to fulfill this use case using Heroku's large dynos, and all I needed was a few CLI commands to get up and running. Burning out your engineering and DevOps teams is not your only option. There are alternatives available to relieve the strain. By taking the Heroku approach, you avoid the steep learning curve that often comes with cloud adoption from the big three. Even better, the tech debt associated with cost analysis and optimization never sees the light of day. In this case, Heroku adheres to my personal mission statement, allowing teams to focus on what is likely a mountain of feature requests to help product owners meet their objectives.

Have a really great day!
The explicit behavior of IaC version managers is quite crucial. It is especially critical in the realm of Terraform and OpenTofu because tool upgrades might destroy or corrupt all managed infrastructure. To protect users from unexpected updates, all version managers have to work clearly and without any internal wizardry that cannot be explained without a deep dive into the sources.

Tenv is a versatile version manager for OpenTofu, Terraform, Terragrunt, and Atmos, written in Go and developed by the tofuutils team. This tool simplifies the complexity of handling different versions of these powerful tools, ensuring developers and DevOps professionals can focus on what matters most — building and deploying efficiently. Tenv is the successor of tofuenv and tfenv.

In the process of tenv development, our team discovered quite an unpleasant surprise involving Terragrunt and tenv, which could have created serious issues. On a fresh Linux install, when one of our users attempted to run Terragrunt, the execution ended up utilizing OpenTofu instead of Terraform, with no warning in advance. In a production environment, this might have caused serious Terraform state corruption, but luckily it was a testing environment.

Before we look at the root cause of this issue, I need to explain how tenv works. Tenv manages all tools by wrapping them in an additional binary that serves as a proxy for the original tool. This means you can't install Terraform or OpenTofu on an ordinary Linux machine alongside tenv (except in the NixOS case). For each tool (Terraform / OpenTofu / Terragrunt / Atmos), we supply a binary with the same name, within which we implement the proxy pattern. This was required since it simplifies version management and allows us to add new capabilities such as automatic version discovery and installation handling.

So, knowing that tenv is based on a downstream proxy architecture, we are ready to return to the problem. Why was our user's execution performed using OpenTofu rather than Terraform? The answer has two parts:

Terragrunt started to use OpenTofu as the default IaC tool. However, this was not a major release; instead, it was provided as a patch, and users didn't expect any differences in behavior. The original problem may be found here.
When Terragrunt called OpenTofu under the new default behavior, it used tenv's proxy to check the required version of OpenTofu and install it automatically.

Although the TERRAGRUNT_TFPATH setting might control the behavior, users were unaware of the Terragrunt breaking change and were surprised to see OpenTofu at the end of the execution. But why did OpenTofu execute if users did not have it on their system? Here we are dealing with the second issue that arose. At the start of tenv development, we replicated many features from the tfenv tool. One of these features was automatic tool installation, which is controlled by the TFENV_AUTO_INSTALL environment variable and is enabled by default. Tenv also has the TENV_AUTO_INSTALL variable, which was also true by default until the case described above was discovered. Users who used Terraform / OpenTofu without Terragrunt via tenv may have encountered the auto-install when, for example, switching the version of the tool with the following commands:

```shell
tenv tf use 1.5.3
tenv tofu use 1.6.1
```

The use command installed the required version even if it wasn't present locally on the system.
After a brief GitHub discussion, our team decided to disable auto-install by default and release this small change as a new major version of tenv. We made no major changes to the program, did not update the language version or the frameworks, and only changed the variable's default value, deciding that users should understand that one of the most frequently used and crucial behaviors had changed. It's interesting that during the discussion we disagreed on whether users would read the README.md or the documentation, but whether you like it or not, it's true that people don't read the docs unless they're in difficulty. As the tofuutils team, we cannot accept the possibility that a user will mistakenly utilize OpenTofu in a real-world production environment and break the state or the cloud environment.

Finally, I'd like to highlight a few points once more:

Implement intuitive behavior in your tool. Consider user experience and keep in mind that many people don't read manuals.
Do not worry about releasing a major version if you made a breaking change.
In programming, explicit is preferable to implicit, especially when dealing with state-sensitive tools.
Every day, developers are pushed to evaluate and use different tools, cloud provider services, and follow complex inner-development loops. In this article, we look at how the open-source Dapr project can help Spring Boot developers build more resilient and environment-agnostic applications. At the same time, they keep their inner development loop intact. Meeting Developers Where They Are A couple of weeks ago at Spring I/O, we had the chance to meet the Spring community face-to-face in the beautiful city of Barcelona, Spain. At this conference, the Spring framework maintainers, core contributors, and end users meet yearly to discuss the framework's latest additions, news, upgrades, and future initiatives. While I’ve seen many presentations covering topics such as Kubernetes, containers, and deploying Spring Boot applications to different cloud providers, these topics are always covered in a way that makes sense for Spring developers. Most tools presented in the cloud-native space involve using new tools and changing the tasks performed by developers, sometimes including complex configurations and remote environments. Tools like the Dapr project, which can be installed on a Kubernetes cluster, push developers to add Kubernetes as part of their inner-development loop tasks. While some developers might be comfortable with extending their tasks to include Kubernetes for local development, some teams prefer to keep things simple and use tools like Testcontainers to create ephemeral environments where they can test their code changes for local development purposes. With Dapr, developers can rely on consistent APIs across programming languages. Dapr provides a set of building blocks (state management, publish/subscribe, service Invocation, actors, and workflows, among others) that developers can use to code their application features. Instead of spending too much time describing what Dapr is, in this article, we cover how the Dapr project and its integration with the Spring Boot framework can simplify the development experience for Dapr-enabled applications that can be run, tested, and debugged locally without the need to run inside a Kubernetes cluster. Today, Kubernetes, and Cloud-Native Runtimes Today, if you want to work with the Dapr project, no matter the programming language you are using, the easiest way is to install Dapr into a Kubernetes cluster. Kubernetes and container runtimes are the most common runtimes for our Java applications today. Asking Java developers to work and run their applications on a Kubernetes cluster for their day-to-day tasks might be way out of their comfort zone. Training a large team of developers on using Kubernetes can take a while, and they will need to learn how to install tools like Dapr on their clusters. If you are a Spring Boot developer, you probably want to code, run, debug, and test your Spring Boot applications locally. For this reason, we created a local development experience for Dapr, teaming up with the Testcontainers folks, now part of Docker. As a Spring Boot developer, you can use the Dapr APIs without a Kubernetes cluster or needing to learn how Dapr works in the context of Kubernetes. This test shows how Testcontainers provisions the Dapr runtime by using the @ClassRule annotation, which is in charge of bootstrapping the Dapr runtime so your application code can use the Dapr APIs to save/retrieve state, exchange asynchronous messages, retrieve configurations, create workflows, and use the Dapr actor model. 
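The test referenced above is not reproduced in this extract. To give a rough idea of the setup, here is a minimal sketch of a JUnit test that starts the Dapr runtime with Testcontainers and then talks to it through the Dapr Java SDK. The Testcontainers module, image name, and configuration methods are assumptions and may differ from the actual project; wiring the client to the container's mapped ports is also omitted.

```java
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.testcontainers.DaprContainer; // assumed community Testcontainers module for Dapr
import org.junit.ClassRule;
import org.junit.Test;

public class DaprLocalDevTest {

    // @ClassRule starts the Dapr runtime in a container once for all tests,
    // so no Kubernetes cluster or manual Dapr installation is needed locally.
    @ClassRule
    public static final DaprContainer DAPR = new DaprContainer("daprio/daprd")
            .withAppName("local-dapr-app"); // illustrative configuration only

    @Test
    public void canSaveAndReadStateThroughDapr() throws Exception {
        try (DaprClient client = new DaprClientBuilder().build()) {
            // Save state through the Dapr state management building block...
            client.saveState("kvstore", "order-1", "pending").block();

            // ...and read it back via the same API.
            String value = client.getState("kvstore", "order-1", String.class)
                                 .block()
                                 .getValue();
            System.out.println("stored value: " + value);
        }
    }
}
```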
How does this compare to a typical Spring Boot application? Let's say you have a distributed application that uses Redis and PostgreSQL to persist and read state, and RabbitMQ and Kafka to exchange asynchronous messages. You can find the code for this application here (under the java/ directory, you can find all the Java implementations). Your Spring Boot applications will need not only the Redis client but also the PostgreSQL JDBC driver and the RabbitMQ client as dependencies. On top of that, it is pretty standard to use Spring Boot abstractions such as Spring Data KeyValue for Redis, Spring Data JDBC for PostgreSQL, and Spring AMQP for RabbitMQ. These abstractions and libraries elevate the basic Redis, relational database, and RabbitMQ client experiences to the Spring Boot programming model: Spring Boot does more than just call the underlying clients; it manages their lifecycle and helps developers implement common use cases while promoting best practices under the covers. If we look back at the test that showed how Spring Boot developers can use the Dapr APIs, the picture changes: the Spring Boot application now depends only on the Dapr APIs. Both in that unit test and in the simplified architecture it implies, instead of connecting to the Dapr APIs directly over HTTP or gRPC, we have chosen to use the Dapr Java SDK, and no RabbitMQ client, Redis client, or JDBC driver is included in the application classpath. This approach has several advantages:
- The application has fewer dependencies; it doesn't need to include the Redis or RabbitMQ client. The application is not only smaller but also less tied to concrete infrastructure components that are specific to the environment where it is deployed. Remember that these client versions must match the component instances running in a given environment; with more and more Spring Boot applications deployed to cloud providers, it is quite common not to control which versions of databases and message brokers are available across environments, and developers typically run local versions of these components in containers, causing version mismatches with the environments where the applications run in front of customers.
- The application doesn't create connections to Redis, RabbitMQ, or PostgreSQL. Because connection pools and other connection details are closely tied to the infrastructure, pushing them away from the application code simplifies the application; these concerns are moved out of the application and consolidated behind the Dapr APIs.
- A new developer on the application doesn't need to learn how RabbitMQ, PostgreSQL, or Redis works. The Dapr APIs are self-explanatory: if you want to save the application's state, use the saveState() method; if you want to publish an event, use the publishEvent() method. Developers using an IDE can easily check which APIs are available to them.
- The teams configuring the cloud-native runtime can use their favorite tools to configure the available infrastructure. If they move from a self-managed Redis instance to Google Cloud Memorystore, they can swap their Redis instance without changing the application code. If they want to swap a self-managed Kafka instance for Google Pub/Sub or Amazon SQS/SNS, they only need to change the Dapr component configuration.
But, you ask, what about those APIs, saveState/getState and publishEvent?
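Before looking at subscriptions, here is a minimal sketch of what those calls look like with the plain Dapr Java SDK (io.dapr:dapr-sdk). The component names kvstore and pubsub, the topic name orders, and the Order record are illustrative assumptions, not names taken from the original example.

```java
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;

public class CheckoutService {

  private static final String STATE_STORE = "kvstore"; // hypothetical state store component
  private static final String PUBSUB = "pubsub";       // hypothetical pub/sub component
  private static final String TOPIC = "orders";        // hypothetical topic name

  private final DaprClient client = new DaprClientBuilder().build();

  public void placeOrder(Order order) {
    // Persist state through the Dapr state management API; whether this
    // lands in Redis, PostgreSQL, or a cloud-managed store is a deployment
    // concern, not application code.
    client.saveState(STATE_STORE, order.id(), order).block();

    // Publish an event through the Dapr pub/sub API; the broker behind it
    // (Kafka, RabbitMQ, ...) is configured outside the application.
    client.publishEvent(PUBSUB, TOPIC, order).block();
  }

  public Order readOrder(String orderId) {
    return client.getState(STATE_STORE, orderId, Order.class).block().getValue();
  }

  // Minimal data holder so the sketch is self-contained.
  public record Order(String id, String item, int quantity) {}
}
```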
What about subscriptions? How do you consume an event? Can we elevate these API calls to work better with Spring Boot so developers don't need to learn new APIs?

Tomorrow, a Unified Cross-Runtime Experience

In contrast with most technical questions, the answer here is not "it depends." The answer is a clear yes: we can follow the Spring Data and Spring Messaging approach to provide a richer Dapr experience that integrates seamlessly with Spring Boot. This, combined with a local development experience (using Testcontainers), can help teams design and code applications that run without changes across environments (local, Kubernetes, cloud provider). If you are already working with Redis, PostgreSQL, and/or RabbitMQ, you are most likely using the Spring Boot abstractions for them: Spring Data for data access, and Spring Kafka, Spring Pulsar, or Spring AMQP (RabbitMQ) for asynchronous messaging (for Spring Data KeyValue, see the post "A Guide to Spring Data Key Value"; for RabbitMQ, see also "Messaging with RabbitMQ"). With these abstractions, finding an Employee by ID is a single repository call, producing a Kafka message is one KafkaTemplate.send() call, consuming one is a method annotated with @KafkaListener, and RabbitMQ works much the same way with RabbitTemplate and @RabbitListener.

Elevating Dapr to the Spring Boot Developer Experience

The new Dapr Spring Boot starters bring the same feel to Dapr. The DaprKeyValueTemplate lets us store a Vote object with a single call and find all the stored votes by issuing a query against the KeyValue store; a sketch of both templates follows at the end of this section. Now, why does this matter? The DaprKeyValueTemplate implements the KeyValueOperations interface provided by Spring Data KeyValue, which is also implemented for tools like Redis, MongoDB, Memcached, PostgreSQL, and MySQL, among others. The big difference is that this implementation connects to the Dapr APIs and does not require any store-specific client. The same code can store data in Redis, PostgreSQL, MongoDB, or cloud provider-managed services such as AWS DynamoDB and Google Cloud Firestore. Over 30 data stores are supported in Dapr, with no changes to the application or its dependencies. Similarly, the DaprMessagingTemplate lets us publish a message or event with one call, and events can be consumed with an annotation-based approach similar to the Kafka example. An important thing to notice is that, out of the box, Dapr uses CloudEvents to exchange events (other formats are also supported), regardless of the underlying implementation. Using the @Topic annotation, our application subscribes to all events published to a specific topic on a specific Dapr PubSub component. Once again, this code works for all supported Dapr PubSub implementations, such as Kafka, RabbitMQ, Apache Pulsar, and cloud provider-managed services such as Azure Event Hubs, Google Cloud PubSub, and AWS SNS/SQS (see the Dapr Pub/sub brokers documentation). Combining the DaprKeyValueTemplate and DaprMessagingTemplate gives developers data manipulation and asynchronous messaging under a unified API that adds no infrastructure-specific dependencies to the application and is portable across environments, as the same code can run against different cloud provider services. While this already looks and feels much more like Spring Boot, more work is still required.
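As a rough illustration, here is a minimal sketch of how the two templates read in application code. It assumes the Dapr Spring Boot starter is on the classpath; the starter package names in the imports, the component name pubsub, the topic name votes, the query expression, and the Vote record are assumptions made for illustration, and the real starter API may differ in detail.

```java
import io.dapr.Topic;
import io.dapr.client.domain.CloudEvent;
import io.dapr.spring.data.DaprKeyValueTemplate;        // assumed starter package
import io.dapr.spring.messaging.DaprMessagingTemplate;   // assumed starter package
import org.springframework.data.keyvalue.core.query.KeyValueQuery;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class VoteEndpoint {

  private final DaprKeyValueTemplate keyValueTemplate;
  private final DaprMessagingTemplate<Vote> messagingTemplate;

  public VoteEndpoint(DaprKeyValueTemplate keyValueTemplate,
                      DaprMessagingTemplate<Vote> messagingTemplate) {
    this.keyValueTemplate = keyValueTemplate;
    this.messagingTemplate = messagingTemplate;
  }

  @PostMapping("/votes")
  public Vote castVote(@RequestBody Vote vote) {
    // Store the Vote through the Spring Data KeyValueOperations contract,
    // backed by the Dapr state management API instead of a concrete client.
    Vote saved = keyValueTemplate.insert(vote.id(), vote);

    // Query the stored votes; the Dapr-backed implementation translates this
    // into a state store query (query syntax assumed here).
    Iterable<Vote> matching = keyValueTemplate.find(
        new KeyValueQuery<>("'option' == 'tofu'"), Vote.class);

    // Publish the event through the Dapr pub/sub API.
    messagingTemplate.send("votes", saved);
    return saved;
  }

  // Subscription side: @Topic tells Dapr to deliver CloudEvents published on
  // the "votes" topic of the "pubsub" component to this endpoint.
  @Topic(name = "votes", pubsubName = "pubsub")
  @PostMapping("/subscribe")
  public void onVote(@RequestBody CloudEvent<Vote> event) {
    Vote vote = event.getData();
    // handle the incoming vote
  }

  public record Vote(String id, String option) {}
}
```

The point of the sketch is the programming model: the templates and annotations look like Spring Data and Spring Kafka, while the infrastructure behind them is whatever the Dapr components are wired to.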
On top of Spring Data KeyValue, the Spring Data Repository interface can be implemented to provide a CrudRepository-style experience. There are also some rough edges around testing, and documentation is needed so developers can get started with these APIs quickly.

Advantages and Trade-Offs

As with any new framework, project, or tool you add to your technology mix, understanding the trade-offs is crucial to judging how the tool will work for you specifically. One rule that helped me understand the value of Dapr is the 80%/20% rule:
- 80% of the time, applications perform simple operations against infrastructure components such as message brokers, key/value stores, and configuration servers. The application needs to store and retrieve state and to emit and consume asynchronous messages just to implement its logic. For these scenarios, you get the most value out of Dapr.
- 20% of the time, you need more advanced features that require deeper expertise in the specific message broker you are using, or you need to write a very performant query to compose a complex data structure. For these scenarios, it is okay not to use the Dapr APIs, because you probably need access to specific features of the underlying infrastructure from your application code.
When we look at a new tool, it is tempting to generalize it to fit as many use cases as possible. With Dapr, we should focus on helping developers when the Dapr APIs fit their use cases; when they don't, or when provider-specific features are needed, using provider-specific SDKs and clients is perfectly fine. With a clear understanding of when the Dapr APIs are enough to build a feature, a team can plan in advance which skills are needed to implement it; for example, do you need a RabbitMQ/Kafka expert or an SQL and domain expert to build some advanced queries? Another mistake to avoid is ignoring the impact of tools on delivery practices: if the right tools reduce friction between environments and let developers build applications locally using the same APIs and dependencies they will have on a cloud provider, teams can ship changes with far fewer environment-specific surprises. With these points in mind, let's look at the advantages and trade-offs.

Advantages
- Concise APIs to tackle cross-cutting concerns and access common behavior required by distributed applications. Developers can delegate to Dapr concerns such as resiliency (retry and circuit-breaker mechanisms), observability (logs, traces, and metrics via OpenTelemetry), and security (certificates and mTLS).
- With the new Spring Boot integration, developers can use the programming model they already know to access this functionality.
- With the Dapr and Testcontainers integration, developers don't need to worry about running or configuring Dapr, or about learning tools external to their existing inner development loop; the Dapr APIs are available for building, testing, and debugging features locally.
- The Dapr APIs save developers time when interacting with infrastructure. For example, instead of every developer learning how Kafka, Pulsar, or RabbitMQ works, they only need to learn how to publish and consume events using the Dapr APIs.
- Dapr enables portability across cloud-native environments, allowing your application to run against local or cloud-managed infrastructure without any code changes.
- Dapr provides a clear separation of concerns, enabling operations/platform teams to wire up infrastructure across a wide range of supported components.

Trade-Offs

Introducing abstraction layers such as the Dapr APIs always comes with trade-offs:
- Dapr might not be the best fit for every scenario. For those cases, nothing stops developers from separating the more complex functionality that requires specific clients or drivers into separate modules or services.
- Dapr must be present in the target environment where the application runs. Your application depends on Dapr being available and on the infrastructure it needs being wired up correctly. If your operations/platform team is already using Kubernetes, Dapr should be easy to adopt, as it is a mature CNCF project with over 3,000 contributors.
- Troubleshooting through an extra abstraction between the application and the infrastructure components can become more challenging. The quality of the Spring Boot integration can be measured by how well errors are propagated to developers when things go wrong.
Advantages and trade-offs depend on your specific context and background, so feel free to reach out if you see something missing from this list.

Summary and Next Steps

Covering the Dapr Statestore (KeyValue) and PubSub (Messaging) building blocks is just the first step; adding more advanced Dapr features to the Spring Boot programming model will help developers access more of the functionality required to create robust distributed applications. On our TODO list, Dapr Workflows for durable executions comes next, as providing a seamless experience for developing complex, long-running orchestrations across services is a common requirement. One of the reasons I was so eager to work on the Spring Boot and Dapr integration is that the Java community has worked hard to polish its developer experience, focusing on productivity and consistent interfaces. I strongly believe this accumulated knowledge can be used to take the Dapr APIs to the next level. By validating which use cases the existing APIs cover and finding the gaps, we can build better integrations and improve the developer experience across languages. You can find all the source code for the example we presented at Spring I/O linked in the "Today, Kubernetes, and Cloud-Native Runtimes" section of this article. We expect to merge the Spring Boot and Dapr integration code into the Dapr Java SDK to make this the default Dapr experience when working with Spring Boot; documentation will follow. If you want to contribute or collaborate on these projects and help us make Dapr even more integrated with Spring Boot, please contact us.
Effective monitoring and troubleshooting are critical for maintaining the performance and reliability of Atlassian products like Jira and Confluence, as well as software configuration management (SCM) tools like Bitbucket. This article explores how to leverage various monitoring tools to identify, diagnose, and resolve issues in these essential development and collaboration platforms. Before we discuss the tools themselves, let's clarify why monitoring matters. Monitoring Atlassian tools is crucial for several reasons:
- Proactive issue detection
- Performance optimization
- Capacity planning
- Security and compliance
- Minimizing downtime
By implementing robust monitoring practices, IT teams can ensure smooth operations, enhance the user experience, and maximize the value of their Atlassian investment.

Essential Monitoring Tools

1. Atlassian's Built-in Monitoring Tools
Atlassian provides several built-in tools for monitoring and troubleshooting. The Troubleshooting and Support Tools app, included by default in Atlassian products, offers features like log analysis, health checks, and support zip creation; it helps identify common issues and links to relevant knowledge base articles. The Instance Health Check, available in the administration console, scans for potential problems and offers recommendations for resolving them. Finally, Atlassian products expose various performance metrics via JMX (Java Management Extensions), which external monitoring tools can collect and examine; the short sketch below shows how these metrics can be read programmatically.
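To make the JMX option concrete, here is a minimal sketch of reading a standard JVM metric from a running instance over remote JMX. It assumes remote JMX access has been enabled on the Atlassian JVM and uses the hypothetical host jira.example.com and port 8099; in practice you would usually point Prometheus's JMX exporter or an APM agent at the same MBeans rather than hand-roll a client.

```java
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxHeapProbe {

  public static void main(String[] args) throws Exception {
    // Host and port are assumptions: the Jira/Confluence JVM must be started
    // with remote JMX enabled (e.g., com.sun.management.jmxremote.port=8099).
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://jira.example.com:8099/jmxrmi");

    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection connection = connector.getMBeanServerConnection();

      // java.lang:type=Memory is a standard JVM MBean available in any
      // Atlassian product; product-specific MBeans vary by version.
      CompositeData heap = (CompositeData) connection.getAttribute(
          new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
      MemoryUsage usage = MemoryUsage.from(heap);

      System.out.printf("Heap used: %d MB of %d MB max%n",
          usage.getUsed() / (1024 * 1024), usage.getMax() / (1024 * 1024));
    }
  }
}
```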
2. Log Analysis
Log files are a rich source of troubleshooting information. Critical log files to monitor include:
- Application logs (e.g., atlassian-jira.log, atlassian-confluence.log)
- Tomcat logs (catalina.out)
- Database logs
Log aggregation tools like the ELK Stack (Elasticsearch, Logstash, Kibana) or Splunk can centralize, search, and analyze log data from multiple sources.

3. Prometheus and Grafana
Prometheus and Grafana are popular open-source tools for monitoring and visualization: Prometheus collects and stores time-series data from configured targets, while Grafana builds dashboards and visualizations on top of the collected metrics. Atlassian provides documentation on setting up Prometheus and Grafana to monitor Jira and Confluence. This combination allows for real-time performance monitoring, custom dashboards for different stakeholders, and alerting based on predefined thresholds.

4. Application Performance Monitoring (APM) Tools
APM solutions offer comprehensive visibility into how applications are functioning and how users are experiencing them. Popular options include Dynatrace, AppDynamics, and New Relic. These tools can help identify bottlenecks, trace transactions, and determine the root cause of performance issues across the application stack.

5. Infrastructure Monitoring
Monitoring the underlying infrastructure is crucial for maintaining optimal performance. Key areas to monitor include CPU, memory, and disk usage; network performance; and database performance. Tools like Nagios, Zabbix, or cloud-native solutions (e.g., AWS CloudWatch) can cover these infrastructure components.

6. Synthetic Monitoring and User Experience
Synthetic monitoring involves simulating user interactions to identify issues proactively. Tools like Selenium or Atlassian's own Statuspage can be used to monitor critical user journeys, check availability from different geographic locations, and measure response times for crucial operations.

The section below examines some of the most frequent issues with Atlassian tools and techniques for troubleshooting them.

Troubleshooting Techniques

1. Performance Degradation
When facing performance issues:
- Check system resources (CPU, memory, disk I/O) for bottlenecks.
- Analyze application logs for errors or warnings.
- Review recent configuration changes.
- Examine database performance metrics.
- Use APM tools to identify slow transactions or API calls.

2. Out of Memory Errors
For out-of-memory errors:
- Analyze garbage collection logs.
- Review memory usage trends in monitoring tools.
- Check for memory leaks using profiling tools.
- Adjust JVM memory settings if necessary.

3. Database-Related Issues
When troubleshooting database problems:
- Monitor database connection pool metrics.
- Analyze slow query logs.
- Check for database locks or deadlocks.
- Review database configuration settings.

4. Integration and Plugin Issues
For issues related to integrations or plugins:
- Check plugin logs for errors.
- Review recent plugin updates or configuration changes.
- Disable suspect plugins to isolate the issue.
- Monitor plugin-specific metrics if available.

In the section below, let's look at some best practices for effective monitoring.

Best Practices for Effective Monitoring
- Establish baselines: Create performance baselines during normal operations so deviations can be identified quickly.
- Set up alerts: Configure alerts for critical metrics to enable rapid response to issues.
- Use dashboards: Create custom dashboards for different teams (e.g., operations, development, management) to provide relevant insights.
- Regular health checks: Perform periodic health checks using Atlassian's built-in tools and third-party monitoring solutions.
- Monitor trends: Look for long-term trends in performance metrics to address potential issues proactively.
- Correlate data: Use tools like PerfStack to correlate configuration changes with performance metrics.
- Continuous improvement: Review and refine your monitoring strategy based on lessons learned from past incidents.

Conclusion

Effective monitoring and troubleshooting of Atlassian tools require a blend of built-in features, third-party tools, and best practices. By implementing a comprehensive monitoring strategy, organizations can ensure optimal performance, minimize downtime, and provide the best possible user experience. Remember that monitoring is an ongoing process: as your Atlassian environments evolve, so should your monitoring and troubleshooting approaches. Stay up to date on new tools and techniques, and be ready to adapt your strategy as your organization's needs change.
Selecting the right reporting tool for your C# .NET applications can significantly impact your project's success. A good reporting solution should offer comprehensive features, ease of use, and flexibility. This post walks through the essential features to look for in a .NET reporting tool and provides a high-level comparison of the leading options: ActiveReports.NET, Telerik Reporting, DevExpress Reporting, Stimulsoft Reporting, and List & Label.

Essential Features of a .NET Reporting Solution

When evaluating .NET reporting tools, consider the following essential features and judge which matter most for your use case:
- Designer tools: The tool should offer designers suited to your use cases, such as an embeddable WinForms desktop designer or an embeddable JavaScript-based web designer.
- Data source support: It should support a wide range of data sources, including SQL databases, JSON, XML, and various object collections.
- Export options: The ability to export reports to various formats, like PDF, Excel, Word, and HTML, is crucial.
- Interactive reports: Features like drill-down, drill-through, and parameterized reports enhance user engagement.
- Performance: The tool should handle large datasets efficiently and ensure quick report generation.
- Custom scripting: Support for custom scripting using C# or VB.NET allows for dynamic report customization.
- API and scripting: A rich API for programmatically creating and modifying reports is essential.
- Localization: Extensive localization support is necessary for international applications.
- User-friendly interface: An intuitive, easy-to-use interface for both developers and end users.
- Cost-effectiveness: Competitive pricing and good value for the features offered.

Comparison table

ActiveReports.NET

Notable Features
- Advanced designer tools: Offers a Visual Studio-integrated designer, a standalone desktop designer, and embeddable desktop and web designers
- Rich data binding: Supports various data sources, including SQL, JSON, XML, and object collections
- Comprehensive export options: Supports export to PDF, Excel, Word, HTML, and more
- Interactive reporting: Features like drill-down, drill-through, and parameterized reports
- High performance: Efficient handling of large datasets with quick report generation
- Custom scripting: Supports C# and VB.NET for dynamic report customization
- Rich API: Provides a comprehensive API for report creation, modification, and designer customization
- Localization: Extensive support for localization

Pros
- User-friendly interface
- Many designer and viewer options
- Best performance
- Extensive export and data binding capabilities
- Most interactivity features
- Strong support and documentation
- Largest total feature set
- Most layout options
- Most import options

Cons
- Higher initial learning curve for beginners
- No bundle pricing with other MESCIUS products

Summary
ActiveReports.NET is a powerful and flexible reporting solution that excels in performance, design flexibility, and data binding capabilities. Its comprehensive feature set and high-quality support make it a top choice for developers.
Telerik Reporting

Notable Features
- Seamless integration: Integrates well with other Telerik products
- User-friendly designers: Offers both a standalone and a Visual Studio-integrated designer
- Comprehensive data source support: Supports various data sources, including SQL, OLAP cubes, and web services
- Extensive export options: Supports export to PDF, Excel, Word, CSV, and more
- Interactive reports: Features drill-down, drill-through, and report parameters

Pros
- Easy integration with other Telerik components
- User-friendly interface
- Above-average interactivity features
- Good documentation
- Good value when bundled with other Telerik components

Cons
- Fairly limited support included by default
- Limited designer options
- Lower-than-average total feature set

Summary
Telerik Reporting is a solid choice, especially for those already using Telerik products. It offers a user-friendly experience and integrates well with other components, though it can be pricier than some alternatives.

DevExpress Reporting

Notable Features
- Integration with DevExpress UI: Works seamlessly with DevExpress UI components
- Unique designers: Includes VS Code-integrated and WinUI designers
- Rich data source support: Supports SQL, XML, JSON, and Entity Framework
- Extensive export options: Includes PDF, Excel, Word, HTML, and more
- Interactive reporting: Offers drill-down, drill-through, and parameterized reports

Pros
- Seamless integration with DevExpress UI components
- Comprehensive data binding capabilities
- Extensive export options
- More chart types supported than alternatives
- Many designer and viewer options
- Above-average import options
- Good value when bundled with other DevExpress components

Cons
- Higher learning curve for advanced features
- More expensive than some alternatives
- Very limited support options

Summary
DevExpress Reporting is ideal for those already using DevExpress components. It offers powerful features and excellent performance, though it may require more time to master its advanced capabilities.

Stimulsoft Reporting

Notable Features
- Cross-platform support: Supports web, desktop, and mobile platforms
- Flexible designers: Offers embeddable and standalone designers
- Wide data source compatibility: Supports SQL, XML, JSON, and more
- Comprehensive export options: Includes PDF, Excel, Word, HTML, and more
- Interactive reports: Features drill-down, drill-through, and report parameters

Pros
- Flexible design options
- Extensive data source and export options
- Support for many platforms
- Support for platforms outside .NET, such as PHP, Java, and Flash

Cons
- Interface can be less intuitive for beginners
- Performance can lag with very large datasets
- More expensive than some alternatives

Summary
Stimulsoft Reporting is a versatile option with strong cross-platform support. It offers a rich set of features, though its interface might be less user-friendly for beginners, and performance can be an issue with very large datasets. It's a great option if you need something outside .NET, such as PHP, Java, or Flash.
List & Label

Notable Features
- Comprehensive designer tools: Offers standalone and Visual Studio-integrated designers
- Robust data source support: Supports SQL, XML, JSON, and more
- Extensive export options: Includes PDF, Excel, Word, HTML, and more
- Interactive reporting: Offers drill-down, drill-through, and parameterized reports
- Localization support: Extensive localization capabilities

Pros
- Flexible design options
- Strong data binding capabilities
- Good localization support
- Extensive export options
- Offers documentation and support in German

Cons
- Higher learning curve
- Can be more expensive than some alternatives
- Fewer designer and viewer options than alternatives
- Fewest chart types supported
- Weaker documentation than most alternatives

Summary
List & Label is a robust and flexible reporting solution with strong localization support. While it offers a comprehensive feature set, it may take more time to master and can be pricier than other options. It also offers support and documentation in German, which may be a significant point for prospective German-speaking customers.

Conclusion

Choosing the best C# .NET reporting tool means weighing your specific needs, budget, and existing technology stack. While Telerik Reporting, DevExpress Reporting, Stimulsoft Reporting, and List & Label each have their strengths, ActiveReports.NET stands out as a particularly strong contender. It excels in designer flexibility, data binding capabilities, export options, interactive features, performance, and cost-effectiveness, and its comprehensive feature set, combined with the high-quality support provided by MESCIUS, makes it an excellent choice for developers and businesses seeking a reliable and powerful reporting solution. In short, while each of the five reporting tools discussed has its merits, ActiveReports.NET is the recommendation for those seeking a feature-rich, high-performance, and cost-effective reporting solution for their .NET applications.