Kubernetes in the Enterprise
In 2014, Kubernetes' first commit was pushed to production. Ten years later, it is one of the most prolific open-source systems in the software development space. So what made Kubernetes so deeply entrenched within organizations' system architectures? Its promise of scale, speed, and delivery — and Kubernetes isn't going anywhere any time soon.

DZone's fifth annual Kubernetes in the Enterprise Trend Report dives further into the nuances and evolving requirements of the now 10-year-old platform. Our original research explored topics like architectural evolutions in Kubernetes, emerging cloud security threats, advancements in Kubernetes monitoring and observability, the impact and influence of AI, and more, results from which are featured in the research findings.

As we celebrate a decade of Kubernetes, we also look toward ushering in its future, discovering how developers and other Kubernetes practitioners are guiding the industry toward a new era. In the report, you'll find insights like these from several of our community experts; these practitioners guide essential discussions around mitigating the Kubernetes threat landscape, observability lessons learned from running Kubernetes, considerations for effective AI/ML Kubernetes deployments, and much more.
Hey, DZone Community! We have a survey in progress as part of our original research for the upcoming Trend Report. We would love for you to join us by sharing your experiences and insights (anonymously if you choose) — readers just like you drive the content that we cover in our Trend Reports. Check out the details for our research survey below.

Observability and Performance Research

DZone's annual research on application performance dives deeper into the emerging trends and techniques around monitoring and observability, both of which are must-haves to support the performance, reliability, and scalability of today's complex applications and system architectures. Our 10-minute research survey, which will help guide the narrative of our November Observability and Performance Trend Report, explores:

- Observability models, techniques, and tools
- OpenTelemetry use, benefits, and drawbacks
- Performance metrics and degradation root causes
- AI analytics capabilities for observability and monitoring

Join the Observability Research

Over the coming months, we will compile, observe, and analyze data from hundreds of respondents; results and observations will be featured in the "Key Research Findings" of our Trend Reports. Your responses help inform the narrative of our Trend Reports, so we truly cannot do this without you. Stay tuned for each report's launch and see how your insights align with the larger DZone Community. We thank you in advance for your help!

—The DZone Content and Community team
As organizations adopt microservices and containerized architectures, they often realize that they need to rethink their approach to basic operational tasks like security or observability. It makes sense: in a world where developers – rather than operations teams – are keeping applications up and running, and where systems are highly distributed, ephemeral, and interconnected, how can you take the same approach you have in the past? From a technology perspective, there has been a clear shift to open source standards, especially in the realm of observability. Protocols like OpenTelemetry and Prometheus, and agents like Fluent Bit, are now the norm – according to the 2023 CNCF survey, Prometheus usage increased to 57% adoption in production workloads, with OpenTelemetry and Fluent Bit both at 32% adoption in production. But open source tools alone can't help organizations transform their observability practices. As I've had the opportunity to work with organizations that have solved the challenge of observability at scale, I've seen a few common trends in how these companies operate their observability practices. Let's dig in.

Measure Thyself — Set Smart Goals With Service Level Objectives

Service Level Objectives were popularized by the Google SRE book in 2016 with great fanfare. But I've found that many organizations don't truly understand them, and even fewer have implemented them. This is unfortunate because they are secretly one of the best ways to predict failures. SLOs (Service Level Objectives) are specific goals that show how well a service should perform, like aiming for 99.9% uptime. SLIs (Service Level Indicators) are the actual measurements used to see if the SLOs are met — think about tracking the percentage of successful requests. Error budgeting is the process of allowing a certain amount of errors or downtime within the SLOs, which helps teams balance reliability and new features — this ensures they don't push too hard at the risk of making things unstable. Having SLOs on your key services and using error budgeting allows you to identify impending problems and act on them. One of the most mature organizations that I've seen practicing SLOs is DoorDash. For them, the steaks are high (pun intended). If they have high SLO burn for a service, that could lead to a merchant not getting a food order on time, correctly, or at all. Or it could lead to a consumer not getting their meal on time or experiencing errors in the app. Getting started with SLOs doesn't need to be daunting. My colleague recently wrote up her tips on getting started with SLOs. She advises keeping SLOs practical and achievable, starting with the goals that truly delight customers. Start small by setting an SLO for a key user journey. Collaborate with SREs and business users to define realistic targets. Be flexible and adjust SLOs as your system evolves.
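To make the error-budget arithmetic concrete, here is a minimal sketch in Java (with hypothetical request counts, not any particular vendor's API) of computing an availability SLI and the remaining budget against a 99.9% SLO:

Java

public class ErrorBudget {
    public static void main(String[] args) {
        double slo = 0.999;              // target: 99.9% of requests succeed
        long totalRequests = 1_000_000;  // hypothetical monthly traffic
        long failedRequests = 650;       // hypothetical failures observed so far

        double sli = 1.0 - (double) failedRequests / totalRequests;  // measured availability
        long budget = Math.round(totalRequests * (1.0 - slo));       // allowed failures: 1,000
        long remaining = budget - failedRequests;                    // failures left to "spend"

        System.out.printf("SLI: %.3f%%, budget used: %d/%d, remaining: %d%n",
                sli * 100, failedRequests, budget, remaining);
    }
}

A high burn rate (the budget being consumed much faster than the elapsed fraction of the SLO window) is exactly the early-warning signal described above.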
Embrace Events — The Only Constant in Your Cloud-Native Environment Is Change

In DevOps, things are always changing. We're constantly shipping new code, turning features on and off, updating our infrastructure, and more. This is great for innovation and agility, but it also introduces change, which opens the door for errors. Plus, the world outside our systems is always shifting too, from what time of day it is to what's happening in the news. All of this can make it hard to keep everything running smoothly. These everyday events that result in changes are the most common causes of issues in production systems. And the challenge is that these changes are initiated by many different types of systems, from feature flag management to CI/CD, cloud infrastructure, security, and more. Interestingly, 67% of organizations don't have the ability to identify the change(s) in their environments that caused performance issues, according to the Digital Enterprise Journal. The only way to stay on top of all of these changes is to connect them into a central hub to track them. When people talk about "events" as a fourth type of telemetry, beyond metrics, logs, and traces, this is typically what they mean. One organization I've seen do this really well is Dandy Dental. They've found that the ability to understand change in their system, and quickly correlate it to changes in behavior, has made debugging a lot faster for developers. Making a habit of understanding what changed has allowed Dandy to improve their observability effectiveness.

Adopt Hypothesis-Driven Troubleshooting — Enable Any Developer to Fix Issues Faster

When a developer begins troubleshooting an issue, they start with a hypothesis. Their goal is to quickly prove or disprove that hypothesis. The more context they have about the issue, the faster they can form a good hypothesis to test. If they have multiple hypotheses, they will need to test each one in order of likelihood to determine which one is the culprit. The faster a developer can prove or disprove a hypothesis, the faster they can solve the problem. Developers use observability tools both to form their initial hypotheses and to prove or disprove them. A good observability tool will give the developer the context they need to form a likely hypothesis. A great observability tool will make it as easy as possible for a developer with any level of expertise or familiarity with the service to quickly form a likely hypothesis and test it. Organizations that want to improve their MTTR can start by shrinking the time to create a hypothesis. Tooling that provides the on-call developer with highly contextual alerts that immediately focus them on the relevant information can help shrink this time. The other advantage of explicitly taking a hypothesis-driven troubleshooting approach is concurrency. If an issue is high severity or has significant complexity, the on-call developer may need to call in more developers to concurrently prove or disprove each hypothesis and speed up troubleshooting. An AI software company we work with uses hypothesis-driven troubleshooting. I recently heard a story about how they were investigating a high error rate on a service and used their observability tool to narrow it down to two hypotheses. Within 10 minutes they had proven their first hypothesis to be correct – that the errors were all occurring in a single region that had missed the most recent software deploy.

Taking the Next Step

If you're committed to taking your observability practice to the next level, these tried-and-true habits can help you take the initial steps forward. All three of these practices are areas that we're passionate about. If you'll be at KubeCon and want to discuss this more, please come say hello!

This article was shared as part of DZone's media partnership with KubeCon + CloudNativeCon.
I have attended several events this year, and I'm constantly keeping my ear to the ground for the latest topics and trends in technology. As a developer focused mostly on the data and database industries, I feel that this year has seen a massive expansion of data and efficiency use cases and interest. In this post, I'll highlight some of the trends I've seen throughout 2024, especially in the data, graph, and analytics technology spaces. Whether you are simply curious about what is happening in technology industries or looking to put together interesting topics for papers or presentations, this post will include the greatest hits.

Graph and Knowledge Graph

There has been an explosion of knowledge graph discussions and content this year, much of it related to AI (the next topic on our list). However, knowledge graphs are useful in solving many broader technical problems and were applied in business long before AI took the world by storm. "A knowledge graph is an organized representation of real-world entities and their relationships." – What is a Knowledge Graph. While connected data is often stored in a graph database, it is not limited to that storage medium. The value of a knowledge graph lies in the connections between the data, which make it possible to understand context and how objects are actually relevant to each other. In AI, knowledge graphs help support reasoning/logic, fact-checking, and gathering relevant information for answers provided by an LLM. There has been content introducing knowledge graphs and how to build the data and architecture into existing systems. Many sessions also highlight how they can provide visibility into the "black box" of machine learning or how network-based results improve data analytics.

AI and GenAI

Much of this year's content has focused on artificial intelligence (AI) and generative AI (GenAI) topics. 2024 has been a fast-paced year of exploration of how technology can take advantage of the latest innovations with language models, using them as chatbots, personal assistants, and much more. Earlier in the year, content applied large language models to everything, searching for the use cases with maximum value. As time has progressed, the space has adapted with specialized models (trained in certain areas), new varieties of tools (for coding, accessibility, content generation, etc.), as well as integration with existing, high-quality data stores used in retrieval augmented generation (RAG). Themes of productivity, efficiency, and human capital disruption have been woven among the technical details of building, implementing, evaluating, and maintaining AI solutions over time. I have also felt the emergence of what I call the "data scientist developer," someone who overlaps what are typically data science or development topics. On top of that, changes in the field are occurring so quickly that the learning curve can seem insurmountable, and there is always a need for higher-level content amidst the deep-dives.

GraphRAG

Combining the last two topics has sparked lots of discussion around GraphRAG. This is where graph data (usually a knowledge graph) is used in combination with an LLM to provide relevant and high-quality results in an AI application. Microsoft released a GraphRAG solution a few weeks ago, and many others have produced similar offerings addressing AI's shortcomings. In the content realm, much of the story revolves around GenAI limitations, what GraphRAG is, and how it solves specific problems in the AI technology industry.
I will be interested to see how this area evolves in 2025 and how businesses harness GraphRAG to improve their systems.

Query Language Standardization

Leaving the whirlwind of AI behind for a moment, 2024 also brought the announcement of the ISO graph query language standard! This has been in the works for many years and involved many graph database vendors discussing and outlining a query language standard for graph databases. The last new ISO database language standard before this was SQL in 1986 (with its latest revision in 2023), so this made quite a splash in the database and graph database communities this spring. While there may not be a lot of disruption to workflows from the release, we can expect convergence of graph query syntax, making it easier for new developers to learn. Existing impacted languages (Cypher, Gremlin, SPARQL, and others) will adopt any necessary changes to align with the standard.

Applications and Operations

Topics on constructing better systems and improving response to issues continue to be critical to development. While we see some disruption from AI, many systems still need non-AI solutions and approaches (or at least need to start there). Content around system architecture, DevOps, tools/frameworks, testing and logging, cybersecurity, authentication, and many other core technologies is still prevalent, helping developers run the backbone of businesses across industries. Whether you are working with the latest tech stack or trying to improve error response, you can find high-level and deep-dive content available alongside the shinier topics.

Community and Larger Themes

I have noticed an increase in joint projects, events, and content this year. While there are probably a variety of contributing factors, two that immediately come to mind are that the technology industry is focusing on efficiency and also tackling larger problems. We can do more with less by combining energy across teams, companies, and industries to produce unified results with less overall effort. Whether that's bringing together separate audiences for an event to showcase multiple technologies or integrating two technologies for optimized solutions, collaboration among differing communities offers broader learning opportunities through a combined source. I have also seen an increase in larger technical concerns around sustainability, ethics/privacy, and tech regulations. Issues of this magnitude cannot be solved by a single entity, so joint efforts are needed to effect awareness and change in these areas.

Wrapping Up!

This list is not exhaustive, but it highlights many of the topics I have seen throughout 2024's technical content in places like newsfeeds, technical blogs/videos, and conferences, such as Neo4j's free, 24-hour virtual NODES conference. With 170+ speakers lined up from all parts of the world and industries, there are sample sessions for each of the trends covered above and many more. Happy coding!
In our industry, few pairings have been as exciting and game-changing as the union of artificial intelligence (AI) and machine learning (ML) with cloud-native environments. It's a union designed for innovation, scalability, and yes, even cost efficiency. So put on your favorite Kubernetes hat and let's dive into this dynamic world where data science meets the cloud! Before we explore the synergy between AI/ML and cloud-native technologies, let's set a few definitions.

- AI: A broad concept referring to machines mimicking human intelligence.
- ML: The process of "teaching" a machine to perform specific tasks and generate accurate output through pattern identification.
- Cloud native: A design paradigm that leverages modern cloud infrastructure to build scalable, resilient, and flexible applications – picture microservices in Docker containers orchestrated by Kubernetes and continuously deployed by CI/CD pipelines.

The Convergence of AI/ML and Cloud Native

What are some of the benefits of implementing AI and ML in cloud-native environments?

Scalability

Ever tried to manually scale an ML model as it gets bombarded with a gazillion requests? Not fun. But with cloud-native platforms, scaling becomes as easy as a Sunday afternoon stroll in the park. Kubernetes, for instance, can automatically scale pods running your AI models based on real-time metrics, which means your AI model can perform well even under duress.

Agility

In a cloud-native world, a microservices architecture means your AI/ML components can be developed, updated, and deployed independently. This modularity fosters agility, which lets you innovate and iterate rapidly, and without fear of breaking the entire system. It's like being able to swap out parts of the engine of your car while driving to update them — except much safer.

Cost Efficiency

Serverless computing platforms (think AWS Lambda, Google Cloud Functions, and Azure Functions) allow you to run AI/ML workloads only when needed. No more paying for idle compute resources. It's the cloud equivalent of turning off the lights when you leave a room — simple, smart, and cost-effective. It's also particularly advantageous for intermittent or unpredictable workloads.

Collaboration

Cloud-native environments make collaboration a breeze among data scientists, developers, and operations teams. With centralized repositories, version control, and CI/CD pipelines, everyone can work harmoniously on the same ML lifecycle. It's the tech equivalent of a well-coordinated kitchen in a highly-rated-on-Yelp restaurant.

Trending Applications of AI/ML in Cloud Native

While most of the general public is familiar with AI/ML technologies through interactions with generative AI chatbots, fewer realize the extent to which AI/ML has already enhanced their online experiences.

AI-Powered DevOps (AIOps)

By supercharging DevOps processes with AI/ML, you can automate incident detection, root cause analysis, and predictive maintenance. Additionally, integrating AI/ML with your observability tools and CI/CD pipelines enables you to improve operational efficiency and reduce service downtime.

Kubernetes + AI/ML

Kubernetes, the long-time de facto platform for container orchestration, is now also the go-to for orchestrating AI/ML workloads. Projects like Kubeflow simplify the deployment and management of machine learning pipelines on Kubernetes, which means you get end-to-end support for model training, tuning, and serving.
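Circling back to the scalability benefit above: the core decision Kubernetes' Horizontal Pod Autoscaler makes boils down to a simple ratio. Here is a minimal sketch of that documented formula in plain Java (the real controller adds tolerances, stabilization windows, and min/max replica bounds):

Java

public class HpaMath {

    // Kubernetes HPA algorithm: desired = ceil(current * currentMetric / targetMetric)
    static int desiredReplicas(int currentReplicas, double currentMetric, double targetMetric) {
        return (int) Math.ceil(currentReplicas * (currentMetric / targetMetric));
    }

    public static void main(String[] args) {
        // Hypothetical: 4 pods serving a model at 180% average CPU vs. a 90% target
        System.out.println(desiredReplicas(4, 180.0, 90.0)); // prints 8: scale out
        // Load later drops to 30% average CPU across 8 pods
        System.out.println(desiredReplicas(8, 30.0, 90.0));  // prints 3: scale in
    }
}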
Edge Computing

Edge computing processes AI/ML workloads closer to where data is generated, which dramatically reduces latency. By deploying lightweight AI models at edge locations, organizations can perform real-time inference on devices such as IoT sensors, cameras, and mobile devices – even your smart fridge (because why not?).

Federated Learning

Federated learning lets organizations collaboratively train AI models without sharing raw data. It's a great solution for industries that have strict privacy and compliance regulations, such as healthcare and finance.

MLOps

MLOps integrates DevOps practices into the machine learning lifecycle. Tools like MLflow, TFX (TensorFlow Extended), and Seldon Core make continuous integration and deployment of AI models a reality. Imagine DevOps, but smarter.

Because Integration Challenges Keep Things Interesting

Of course, none of this comes without its challenges.

Complexity

Integrating AI/ML workflows with cloud-native infrastructure isn't for the faint of heart. Managing dependencies, ensuring data consistency, and orchestrating distributed training processes require a bit more than a sprinkle of magic.

Latency and Data Transfer

For real-time AI/ML applications, latency can be a critical concern. Moving tons of data between storage and compute nodes introduces delays. Edge computing solutions can mitigate this by processing data closer to its source.

Cost Management

The cloud's pay-as-you-go model is great — until uncontrolled resource allocation starts nibbling away at your budget. Implementing resource quotas, autoscaling policies, and cost monitoring tools is your financial safety net.

AI/ML Practices That Could Help Save the Day

- Modularize: Design your AI/ML applications using the principles of microservices. Decouple data preprocessing, model training, and inference components to enable independent scaling and updates.
- Leverage managed services: Cloud providers offer AI/ML services to simplify infrastructure management and accelerate development.
- Observe your models: Integrate your AI/ML workloads with observability tools – having access to metrics about resource usage, model performance, and system health can help you proactively detect and address issues.
- Secure your data and models: Use encryption, access controls, and secure storage solutions to protect sensitive data and AI models.

In Summary

The integration of AI/ML technologies in cloud-native environments offers scalability, agility, and cost efficiency, while enhancing collaboration across teams. However, navigating this landscape comes with its own set of challenges, from managing complexity to ensuring data privacy and controlling costs. There are trends to keep an eye on, such as edge computing — a literal edge of glory for real-time processing — AIOps bringing brains to DevOps, and federated learning letting organizations share the smarts without sharing the data. The key to harnessing these technologies lies in best practices: think modular design, robust monitoring, and a sprinkle of foresight through observability tools. The future of AI/ML in cloud-native environments isn't just about hopping on the newest tech bandwagon. It's about building systems so smart, resilient, and adaptable, you'd think they were straight out of a sci-fi movie (hopefully not Terminator). Keep your Kubernetes hat on tight, your algorithms sharp, and your cloud synced – and let's see what's next!
This article was shared as part of DZone's media partnership with KubeCon + CloudNativeCon.
In environments with AWS Cloud workloads, a proactive approach to vulnerability management involves shifting from traditional patching to regularly deploying updated Secure Golden Images. This approach is well-suited to a modern Continuous Integration and Continuous Delivery (CI/CD) environment, where the goal is rapid, automated deployment — and doing this with AMIs (Amazon Machine Images) ensures that every instance benefits from consistent security updates.

Creating the Golden Image

The first step to securing your EC2 environment is building a Secure Golden Image (SGI) — a pre-configured AMI that serves as the baseline for deploying secure EC2 instances. An SGI should include:

- AWS-updated kernels: Using the latest AWS-supported kernel ensures you're starting with a secure, updated OS. The latest AWS kernels also support Kernel Live Patching, which allows for updates without rebooting, minimizing downtime.
- AWS Systems Manager (SSM): Enabling SSM eliminates the need for traditional SSH access, a significant attack vector. With Session Manager, you can securely access and manage instances without SSH keys, reducing risk.
- Baseline security configurations: The image should be hardened following security best practices. This includes encryption, restrictive network access, secure IAM role configuration, and logging integration with AWS CloudTrail and Amazon GuardDuty for monitoring and alerting.

Vulnerability Scanning and Image Hardening

After building your golden image, leverage tools to scan for vulnerabilities and misconfigurations. Integrating these scans into your CI/CD pipeline ensures that every new deployment based on the golden image meets your security standards.

Keeping the Golden Image Patched and Updated

One of the most important aspects of using a golden image strategy is maintaining it. In a dynamic cloud environment, vulnerabilities evolve continuously, requiring frequent updates. Here are some key steps to keep your golden images up to date:

- Release new secure golden images at a regular cadence: Releasing new Secure Golden Images (SGIs) at a regular cadence — whether monthly or quarterly — ensures consistent security updates and a reliable fallback if issues arise. Automating the process using AWS services like EC2 Image Builder helps streamline AMI creation and management, reducing manual errors. A regular and consistent release schedule keeps your infrastructure secure and up to date, aligning with best practices for vulnerability management and continuous deployment.
- Archive and version control: It's important to maintain the version history for your AMIs. This allows for easy rollback if necessary and ensures compliance during security audits by demonstrating how you manage patching across your instances.
- Continuous monitoring: While a golden image provides a secure baseline, vulnerabilities can still emerge in running applications. Use tools to monitor the health of your deployed EC2 instances and ensure compliance with security policies.

Patching vs. Golden Image Deployment: A Thoughtful Debate

When debating whether to adopt a golden image strategy versus traditional patching, it's essential to weigh the pros and cons of both methods. Patching, while effective for quick fixes, can create inconsistencies over time, especially when patches are applied manually or across multiple servers.
This can lead to configuration drift, library drift, package drift, and so on, where each server has a slightly different configuration, making it difficult to maintain a consistent security posture across your infrastructure. Manual patching also introduces the risk of missing patches or creating security gaps if updates are not applied in time. On the other hand, golden image deployment offers consistency and uniformity. By standardizing the creation and deployment of hardened AMIs, you eliminate these drifts entirely. Every instance spun up from a golden image starts with the same secure baseline, ensuring that all EC2 instances are protected by the same set of patches and security configurations. This is particularly valuable in CI/CD environments, where automation and rapid deployment are priorities. However, golden image deployment can take longer than traditional patching, especially in environments where uptime is critical. Rebuilding and redeploying AMIs requires careful coordination and orchestration, particularly for live production environments. Automation through tools like EC2 Image Builder and blue/green deployment strategies can help reduce downtime, but the upfront effort to automate these processes is more complex than simply applying a patch. A balanced approach would be to deploy Secure Golden Images (SGIs) at regular intervals — such as monthly or quarterly — to maintain consistency and uniformity across your EC2 instances, preventing configuration drift. In between these regular SGI deployments, manual patching can be applied in special cases where critical vulnerabilities arise. This strategy combines the best of both worlds: regular, reliable updates through golden images, and the flexibility to address urgent issues through patching. In summary, patching may be faster in certain emergency situations, but over time, it can lead to inconsistencies. A golden image strategy, while requiring more initial setup and automation, ensures long-term consistency and security. For organizations with cloud-native architectures and a DevOps approach, adopting a golden image strategy aligns better with modern security and CI/CD practices.
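As a small illustration of what "always deploy from the latest SGI" can look like in practice, here is a sketch using the AWS SDK for Java v2 to find the newest golden AMI by tag. The Role=golden-image tag is a hypothetical convention; substitute whatever tagging scheme your image pipeline applies:

Java

import java.util.Comparator;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.DescribeImagesRequest;
import software.amazon.awssdk.services.ec2.model.Filter;
import software.amazon.awssdk.services.ec2.model.Image;

public class LatestGoldenAmi {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            DescribeImagesRequest request = DescribeImagesRequest.builder()
                    .owners("self") // only AMIs built by this account
                    .filters(Filter.builder().name("tag:Role").values("golden-image").build())
                    .build();

            // creationDate is an ISO-8601 string, so the lexicographic max is the newest image
            Image latest = ec2.describeImages(request).images().stream()
                    .max(Comparator.comparing(Image::creationDate))
                    .orElseThrow(() -> new IllegalStateException("No golden AMI found"));

            System.out.println("Launch new instances from: " + latest.imageId());
        }
    }
}

Wiring a lookup like this into launch templates or the deployment pipeline helps ensure new capacity always starts from the most recently hardened image rather than a stale baseline.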
There's a far smaller audience of folks who understand the intricacies of HTML document structure than those who understand the user-friendly Microsoft (MS) Word application. Automating HTML-to-DOCX conversions makes a lot of sense if we frequently need to generate well-formatted documents from dynamic web content, streamline reporting workflows, or convert any other web-based information into editable Word documents for a non-technical business audience. Automating HTML-to-DOCX conversions with APIs reduces the time and effort it takes to generate MS Word content for non-technical users. In this article, we'll review open-source and proprietary API solutions for streamlining HTML-to-DOCX conversions in Java, and we'll explore the relationship between HTML and DOCX file structures that makes this conversion relatively straightforward.

How Similar Are HTML and DOCX Structures?

HTML and DOCX documents serve very different purposes, but they have more in common than we might initially think. They're both markup-based formats with similar approaches to structuring text on a page: HTML documents use a tag-based, XML-like structure to organize how content appears in a web browser, while DOCX documents use a series of zipped XML files to collectively define how content appears in the proprietary MS Word application. Content elements in an HTML document like paragraphs (<p>), headings (<h1>, <h2>, etc.), and tables (<table>) all roughly translate into DOCX iterations of the same concept. For example, DOCX files map HTML <p> tags to <w:p> elements, and they map <h1> tags to heading styles applied via <w:pStyle> elements. Further, in a similar way to how HTML documents often reference CSS stylesheets (e.g., styles.css) for element styling, DOCX documents use an independent document.xml file to store content display elements and map them to Word styles and settings, stored in styles.xml and settings.xml files respectively within the DOCX archive.

Differences Between HTML and DOCX to Consider

It's worth noting that HTML and DOCX files do handle certain types of content quite differently, despite sharing a similar underlying structure. Much of this can be attributed to differences between how web browser applications and the MS Word application interpret information. The challenges we encounter with HTML-to-DOCX conversions are largely driven by inconsistencies in the way custom styling, media content, and dynamic elements are interpreted. The styling used in native HTML and native DOCX documents is often custom/proprietary, and custom/proprietary HTML styles (e.g., custom fonts) won't necessarily translate into identical DOCX styles when we convert content between those formats. Further, in HTML files, multimedia (e.g., images, videos) is included on any given page via links, whereas DOCX files embed media objects directly. Finally, the dynamic code elements we find on some HTML pages — usually written in JavaScript — won't translate to DOCX whatsoever, given that DOCX is a static format.

Converting HTML to DOCX

When we convert HTML to DOCX, we effectively parse content from HTML elements and subsequently map that content to appropriate DOCX elements. The same occurs in reverse when we make the opposite conversion (a process I've written about in the past). How that parsing and mapping take place depends entirely on how we structure our code — or which APIs we elect to use in our programming project.
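To make that element mapping concrete, here's roughly how a simple HTML paragraph corresponds to WordprocessingML inside a DOCX archive's document.xml (simplified; a real document.xml wraps this in namespaced body elements and run properties):

XML

<!-- HTML source -->
<p>Hello, world</p>

<!-- Approximate DOCX (WordprocessingML) equivalent -->
<w:p>
  <w:r>
    <w:t>Hello, world</w:t>
  </w:r>
</w:p>

The paragraph (<w:p>), run (<w:r>), and text (<w:t>) hierarchy is why paragraph-level HTML content survives conversion so cleanly, while styling and media need extra handling.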
Open-Source Libraries for HTML-to-DOCX Conversions

If we're looking for open-source libraries to make HTML-to-DOCX conversions, we'll go a long way with libraries like jsoup and docx4j. The jsoup library is designed to parse and clean HTML programmatically into a structure that we can easily work with, and the docx4j library offers features capable of mapping HTML tags to their corresponding DOCX elements. We can also finalize the creation of our DOCX documents with docx4j, literally organizing our mapped HTML elements into a series of XML files and zipping those with a .docx extension. The docx4j library is very similar to Microsoft's OpenXML SDK, only for Java developers instead of C#.

HTML-to-DOCX Conversion Demonstration

If we're looking to simplify HTML-to-DOCX conversions, we can turn our attention to a web API solution that gets in the weeds on our behalf, parsing and mapping HTML into a consistent DOCX result without requiring us to download multiple libraries or write a lot of extra code. The client is distributed through JitPack and is free to use, requiring only a free API key. We'll now walk through example code that we can use to structure our API call.

To begin, we'll install the client using Maven. We'll first add the repository to our pom.xml:

XML

<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

And after that, we'll add the dependency to our pom.xml:

XML

<dependencies>
    <dependency>
        <groupId>com.github.Cloudmersive</groupId>
        <artifactId>Cloudmersive.APIClient.Java</artifactId>
        <version>v4.25</version>
    </dependency>
</dependencies>

Next, we'll import the necessary classes to configure the API client, handle exceptions, etc.:

Java

// Import classes:
//import com.cloudmersive.client.invoker.ApiClient;
//import com.cloudmersive.client.invoker.ApiException;
//import com.cloudmersive.client.invoker.Configuration;
//import com.cloudmersive.client.invoker.auth.*;
//import com.cloudmersive.client.ConvertWebApi;

Now we'll configure our API client with an API key for authentication:

Java

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

Finally, we'll create the API instance, prepare our input request, and handle our conversion (while catching any exceptions, of course):

Java

ConvertWebApi apiInstance = new ConvertWebApi();
HtmlToOfficeRequest inputRequest = new HtmlToOfficeRequest(); // HTML input to convert to DOCX
try {
    byte[] result = apiInstance.convertWebHtmlToDocx(inputRequest);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling ConvertWebApi#convertWebHtmlToDocx");
    e.printStackTrace();
}

Once our conversion is complete, we can write the resulting byte[] array to a DOCX file, and we're all finished. We can perform subsequent operations with our new DOCX document, or we can store it for business users to access directly and call it a day.

Conclusion

In this article, we reviewed some of the similarities between HTML and DOCX file structures that make converting between both formats relatively simple and easy to accomplish with code.
We then discussed two open-source libraries we could use in conjunction to handle HTML-to-DOCX conversions, and we learned how to call a free proprietary API to handle all our steps in one go.
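If you'd rather try the open-source route, here is a minimal sketch of what it can look like, assuming jsoup plus docx4j's separate ImportXHTML module (docx4j-ImportXHTML) is on the classpath. The class and method names come from those projects, but treat this as a starting point under those assumptions rather than production code:

Java

import java.io.File;
import org.docx4j.convert.in.xhtml.XHTMLImporterImpl;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class HtmlToDocxOpenSource {
    public static void main(String[] args) throws Exception {
        // 1. Parse and clean the HTML with jsoup, emitting well-formed XHTML
        Document doc = Jsoup.parse("<p>Hello, <b>world</b></p>");
        doc.outputSettings().syntax(Document.OutputSettings.Syntax.xml);

        // 2. Map the XHTML into WordprocessingML via docx4j's XHTML importer
        WordprocessingMLPackage pkg = WordprocessingMLPackage.createPackage();
        XHTMLImporterImpl importer = new XHTMLImporterImpl(pkg);
        pkg.getMainDocumentPart().getContent().addAll(importer.convert(doc.html(), null));

        // 3. Zip the XML parts into a .docx archive
        pkg.save(new File("output.docx"));
    }
}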
In the ever-evolving landscape of AI, chatbots have become indispensable tools for enhancing user engagement and streamlining information delivery. This article will walk you through the process of building an interactive chatbot using Streamlit for the front end, LangChain for orchestrating interactions, and Anthropic's Claude model, powered by Amazon Bedrock, as the Large Language Model (LLM) backend. We'll dive into the code snippets for both the backend and front end and explain the key components that make this chatbot work.

Core Components

- Streamlit frontend: Streamlit's intuitive interface allows us to create a low-code, user-friendly chat interface with minimal effort. We'll explore how the code sets up the chat window, handles user input, and displays the chatbot's responses.
- LangChain orchestration: LangChain empowers us to manage the conversation flow and memory, ensuring the chatbot maintains context and provides relevant responses. We'll discuss how LangChain's ConversationSummaryBufferMemory and ConversationChain are integrated.
- Bedrock/Claude LLM backend: The true magic lies in the LLM backend. We'll look at how to leverage Amazon Bedrock's Claude foundation model to generate intelligent and contextually aware responses.

Chatbot Architecture

Conceptual Walkthrough of the Architecture

- User interaction: The user initiates the conversation by typing a message into the chat interface created by Streamlit. This message can be a question, a request, or any other form of input the user wishes to provide.
- Input capture and processing: Streamlit's chat input component captures the user's message and passes it on to the LangChain framework for further processing.
- Contextualization with LangChain memory: LangChain plays a crucial role in maintaining the context of the conversation. It combines the user's latest input with the relevant conversation history stored in its memory. This ensures that the chatbot has the necessary information to generate a meaningful and contextually appropriate response.
- Leveraging the LLM: The combined context is then sent to the Bedrock/Claude LLM. This powerful language model uses its vast knowledge and understanding of language to analyze the context and generate a response that addresses the user's input in an informative way.
- Response retrieval: LangChain receives the generated response from the LLM and prepares it for presentation to the user.
- Response display: Finally, Streamlit takes the chatbot's response and displays it in the chat window, making it appear as if the chatbot is engaging in a natural conversation with the user. This creates an intuitive and user-friendly experience, encouraging further interaction.
Code Snippets

Frontend (Streamlit)

Python

import streamlit
import chatbot_backend as demo  # the backend module below, saved as chatbot_backend.py

# 2 Set Title for Chatbot
streamlit.title("Hi, This is your Chatbot")

# 3 Add LangChain memory to the session cache - Session State
if 'memory' not in streamlit.session_state:
    streamlit.session_state.memory = demo.demo_memory()

# 4 Add the UI chat history to the session cache - Session State
if 'chat_history' not in streamlit.session_state:
    streamlit.session_state.chat_history = []

# 5 Re-render the chat history
for message in streamlit.session_state.chat_history:
    with streamlit.chat_message(message["role"]):
        streamlit.markdown(message["text"])

# 6 Enter the details for chatbot input box
input_text = streamlit.chat_input("Powered by Bedrock")
if input_text:
    with streamlit.chat_message("user"):
        streamlit.markdown(input_text)
    streamlit.session_state.chat_history.append({"role": "user", "text": input_text})

    chat_response = demo.demo_conversation(input_text=input_text,
                                           memory=streamlit.session_state.memory)

    with streamlit.chat_message("assistant"):
        streamlit.markdown(chat_response)
    streamlit.session_state.chat_history.append({"role": "assistant", "text": chat_response})

Backend (LangChain and LLM)

Python

import time

import boto3
from langchain.chains import ConversationChain
from langchain.memory import ConversationSummaryBufferMemory
from langchain_aws import ChatBedrock

# 2a Write a function for invoking the model - client connection with Bedrock with profile and model_id
def demo_chatbot():
    boto3_session = boto3.Session(
        # Your aws_access_key_id,
        # Your aws_secret_access_key,
        region_name='us-east-1'
    )
    llm = ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        client=boto3_session.client('bedrock-runtime'),
        model_kwargs={
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 20000,
            "temperature": .3,
            "top_p": 0.3,
            "stop_sequences": ["\n\nHuman:"]
        }
    )
    return llm

# 3 Create a function for ConversationSummaryBufferMemory (llm and max token limit)
def demo_memory():
    llm_data = demo_chatbot()
    memory = ConversationSummaryBufferMemory(llm=llm_data, max_token_limit=20000)
    return memory

# 4 Create a function for the conversation chain - input text + memory
def demo_conversation(input_text, memory):
    llm_chain_data = demo_chatbot()
    # Initialize ConversationChain with the proper llm and memory
    llm_conversation = ConversationChain(llm=llm_chain_data, memory=memory, verbose=True)

    # Call the invoke method and time the LLM round trip
    full_input = f" \nHuman: {input_text}"
    llm_start_time = time.time()
    chat_reply = llm_conversation.invoke({"input": full_input})
    llm_end_time = time.time()
    llm_elapsed_time = llm_end_time - llm_start_time

    memory.save_context({"input": input_text},
                        {"output": chat_reply.get('response', 'No Response')})
    return chat_reply.get('response', 'No Response')

Conclusion

We've explored the fundamental building blocks of an interactive chatbot powered by Streamlit, LangChain, and a powerful LLM backend. With both files in place, you can launch the app locally by pointing streamlit run at your frontend file. This foundation opens doors to endless possibilities, from customer support automation to personalized learning experiences. Feel free to experiment, enhance, and deploy this chatbot for your specific needs and use cases.
In this interview with Julian Fischer, CEO of the cloud computing and automation company anynines GmbH, we explore the evolving landscape of cloud-native technologies with a strong focus on the roles of Kubernetes and Cloud Foundry in modern enterprise environments.

About the Interviewee

Julian Fischer has extensive experience in Cloud Foundry and Kubernetes operations. Julian leads anynines in helping organizations operate applications at scale. Under his guidance, they're also pioneering advancements in managing data services across many Kubernetes clusters via the open-source Klutch project.

The Dominance of Kubernetes

Question: Kubernetes has dominated the container orchestration space in recent years. What key factors have contributed to its success?

Answer: "Kubernetes has indeed taken the lead in container orchestration. It's flexible, and this flexibility allows companies to customize their container deployment and management to fit their unique needs. But it's not just about flexibility. The ecosystem around Kubernetes is robust and ever-growing. Think tools, services, integrations – you name it. This expansive ecosystem is a major draw. Community support is another big factor. The Kubernetes community is large, active, and innovative. And let's not forget about multi-cloud capabilities. Kubernetes shines here. It enables consistent deployments across various cloud providers and on-premises environments. That's huge for companies with diverse infrastructure needs. Lastly, it's efficient. Kubernetes has some pretty advanced scheduling capabilities. This means optimal use of cluster resources."

Question: Despite Kubernetes' popularity, what challenges do organizations face when managing large-scale Kubernetes environments?

Answer: "Well, Kubernetes isn't without its challenges, especially at scale. Complexity is a big one. Ensuring consistent configs across multiple clusters? It's not for the faint of heart. Resource management becomes a real juggling act as you scale up. You're dealing with compute, storage, network – it all gets more complex. Monitoring is another headache. As your microservices and containers multiply, maintaining visibility becomes tougher. It's like trying to keep track of a thousand moving parts. Security is a constant concern too. Implementing and maintaining policies across a large Kubernetes ecosystem is a full-time job. And then there are all the updates and patches. Keeping a large Kubernetes environment up-to-date is like painting the Golden Gate Bridge. By the time you finish, it's time to start over. It's a never-ending process."

Question: Given Kubernetes' dominance, is there still a place for Cloud Foundry in the cloud-native ecosystem?

Answer: "Absolutely. Cloud Foundry still brings a lot to the table. It's got a different focus. While Kubernetes is all about flexibility, Cloud Foundry is about simplicity and operational efficiency for developers. It streamlines the whole process of deploying and scaling apps. That's valuable. Think about it this way. Cloud Foundry abstracts away a lot of the infrastructure complexity. Developers can focus on code, not on managing the underlying systems. That's powerful. Robust security features, proven track record in large enterprises – these things matter. And here's something interesting — in some large-scale scenarios, Cloud Foundry can actually be more economical. Especially when you're running lots of cloud-native apps. It's all about the right tool for the job."
The Relationship Between Cloud Foundry and Kubernetes

Question: How are the Cloud Foundry and Kubernetes communities working together to bridge these technologies?

Answer: "It's not a competition anymore. The communities are collaborating, and it's exciting to see. There are some really interesting projects in the works. Take Klutch, for example. It's an open-source tool that's bridging the gap between Cloud Foundry and Kubernetes for data services. Pretty cool stuff."

Figure 1. The open-source Klutch project enables centralized resource management for multi-cluster Kubernetes environments.

"Then there's Korifi. This project is ambitious. It's bringing the Cloud Foundry developer experience to Kubernetes. Imagine getting Cloud Foundry's simplicity with Kubernetes' power. That's the goal. These projects show a shift in thinking. It's not about choosing one or the other anymore. It's about leveraging the strengths of both platforms. That's the future of cloud-native tech."

Question: What factors should organizations consider when choosing between Kubernetes and Cloud Foundry?

Answer: "Great question. There's no one-size-fits-all answer here. First, look at your team. What are they comfortable with? What's their expertise? That matters a lot. Then, think about your applications. What do they need? Some apps are better suited for one platform over the other. Scalability is crucial too. How much do you need to grow? And how fast? Consider your control needs as well. Do you need the fine-grained control of Kubernetes? Or would you benefit more from Cloud Foundry's abstraction? Don't forget about your existing tools and workflows. Integration is key. You want a solution that plays nice with what you already have. It's about finding the right fit for your specific situation."

Question: Can you elaborate on the operational efficiency advantages that Cloud Foundry might offer in certain scenarios?

Answer: "Sure thing. Cloud Foundry can be a real efficiency booster in the right context. It's all about its opinionated approach. This might sound limiting, but in large-scale environments, it can be a blessing. Here's why – Cloud Foundry streamlines a lot of operational aspects. Deployment, scaling, management – it's all simplified. This means less operational overhead. In some cases, it can lead to significant cost savings. Especially when you're dealing with a large number of applications that fit well with Cloud Foundry's model. But here's the catch. This advantage is context-dependent. It's not a universal truth. You need to evaluate your specific use case. For some, the efficiency gains are substantial. For others, not so much. It's all about understanding your needs and environment."

Looking to the Future of Cloud-Native Technologies

Question: How do you see the future of cloud-native technologies evolving, particularly concerning Kubernetes and Cloud Foundry?

Answer: "The future is exciting. And diverse. We're moving away from the idea that there's one perfect solution for everything. Kubernetes will continue to dominate, no doubt. But Cloud Foundry isn't going anywhere. In fact, I see increased integration between the two. We're likely to see more hybrid approaches. Organizations leveraging the strengths of both platforms. Why choose when you can have both, right? The focus will be on creating seamless experiences. Imagine combining Kubernetes' flexibility with Cloud Foundry's developer-friendly abstractions. That's incredibly powerful, and what we're working towards.
Innovation will continue at a rapid pace. We'll see new tools, new integrations. The line between these technologies might even start to blur. It's an exciting time to be in this space."

Question: What advice would you give to organizations trying to navigate this complex cloud-native ecosystem?

Answer: "My advice? Stay flexible. And curious. This field is evolving rapidly. What works today might not be the best solution tomorrow. Start by really understanding your needs. Not just your current needs, but where you're headed. Don't view it as a binary choice. Kubernetes or Cloud Foundry – it doesn't have to be either/or. Consider how they can work together in your stack. Experiment. Start small. See what works for your specific use cases. Invest in your team. Train them on both technologies. The more versatile your team, the better positioned you'll be. And remember, it's okay to change course. Be prepared to evolve your strategy as the technologies and your needs change. The goal isn't to use the trendiest tech. It's to choose the right tools that solve your problems efficiently. Sometimes that's Kubernetes. Sometimes it's Cloud Foundry. Often, it's a combination of both. Stay focused on your business needs, and let that guide your technology choices."

This article was shared as part of DZone's media partnership with KubeCon + CloudNativeCon.
The Oracle WITH clause is one of the most commonly used techniques to simplify SQL source code and improve performance. In Oracle SQL, the WITH clause, also known as a Common Table Expression (CTE), is a powerful tool that also enhances code readability. WITH is commonly used to define temporary named result sets, also referred to as subqueries or CTEs. These temporary named sets can be referenced multiple times within the main SELECT query. CTEs act like virtual tables and are very helpful in organizing and modularizing SQL code.

Understanding the WITH Clause Syntax

The usage of the WITH clause is very simple: create a named subquery with the AS keyword followed by a SELECT query, and add as many named subqueries as you want, separated by commas. It's good practice to use meaningful names for the subqueries so they are easy to distinguish in the main SELECT. In terms of internal execution, Oracle executes each named subquery individually and caches the results in memory, which are then utilized by the main SELECT. It mimics a materialized view with intermediate results and reduces redundant calculations. In other words, Oracle optimizes SQL queries with CTEs by storing the results of the subqueries temporarily, allowing for faster retrieval and processing in subsequent parts of the query.

SQL

WITH cte_name1 AS (SELECT * FROM Table1),
     cte_name2 AS (SELECT * FROM Table2),
     ...
SELECT ...
  FROM cte_name1, cte_name2
 WHERE ...;

Use Case

In this use case, I am going to talk specifically about how you can effectively utilize inner joins alongside a WITH clause, which can tremendously help in performance tuning the process. Let's take a look at the dataset first, and the problem statement, before we delve deep into the solution. The scenario is an e-commerce retail chain for which bulk product sales price data needs to be loaded for a particular e-store location. Imagine that a product can have several price lines meant for regular prices, promotional prices, and BOGO offer prices. In this case, the user is trying to create multiple promotional price lines and is unaware of the possible mistakes he/she could commit. Through this process, we will detect duplicate data that is functionally redundant and prevent the creation of poor-quality data in the pricing system. By doing so, we will avoid interface program failures in the pricing repository staging layer, which acts as a bridge between the pricing computation engine and the pricing repository accessed by the e-commerce platform.

TABLE: e_promotions

Price_LINE | UPC_code     | Description                 | Price | Start_DT   | End_dt     | Row_num | flag
10001      | 049000093322 | Coca-Cola 12 OZ             | $6.86 | 01/01/2024 | 09/30/2024 | 1       | 0
10001      | 049000093322 | Coca-Cola 12 OZ             | $5.86 | 01/31/2024 | 03/30/2024 | 2       | 0
10001      | 049000028201 | Fanta Pineapple Soda, 20 OZ | $2.89 | 01/01/2024 | 09/30/2024 | 3       | 0
10001      | 054000150296 | Scott 1000                  | $1.19 | 01/01/2024 | 09/30/2024 | 4       | 0

PS: This is sample data, but in the real world there could be thousands or millions of price lines being updated to mark down or mark up prices on a weekly basis. The table above captures the UPC codes and the respective items within price line 10001. The issue with this data set is that the back-office user is trying to create a duplicate line as part of the same price line through an upload process, and the user does not know about the duplicate data he/she may be creating.
The intent here is to catch the duplicate record and reject both entries 1 and 2 so that the user can decide which one among the two needs to go into the pricing system to be reflected on the website. Using the code below simplifies error detection and also optimizes the stored procedure for better performance.

PLSQL

WITH price_lines AS
  (SELECT rowid AS rid, price_line, UPC, start_dt, end_dt, flag
     FROM e_promotions
    WHERE price_line = 10001
      AND flag = 0)
SELECT MIN(a.rid) AS row_id, a.price_line, a.UPC, a.start_dt, a.end_dt
  FROM price_lines a, price_lines b
 WHERE a.price_line = b.price_line
   AND a.flag = b.flag
   AND a.UPC = b.UPC
   AND a.rid <> b.rid
   AND (a.start_dt BETWEEN b.start_dt AND b.end_dt
     OR a.end_dt BETWEEN b.start_dt AND b.end_dt
     OR b.start_dt BETWEEN a.start_dt AND a.end_dt
     OR b.end_dt BETWEEN a.start_dt AND a.end_dt)
 GROUP BY a.price_line, a.UPC, a.start_dt, a.end_dt;

Against the sample data above, this flags rows 1 and 2, the two Coca-Cola 12 OZ lines with overlapping date ranges. With the code above we did two things in parallel:

- Queried the table once for the dataset we need to process using the WITH clause
- Added the inner join to detect duplicates without having to query the table a second time, hence optimizing the performance of the stored procedure

This is one of the many use cases that have given me significant performance gains in my PL/SQL and SQL coding. Have fun, and post your comments if you have any questions!
In this article, we'll address four problems covering different date-time topics. These problems are mainly focused on the Calendar API and on the JDK Date/Time API. Disclaimer: This article is an abstract from my recent book Java Coding Problems, Second Edition. Use the following problems to test your programming prowess on date and time. Remember that there usually isn't a single correct way to solve a particular problem. Also, remember that the explanations shown here include only the most interesting and important details needed to solve the problems. Download the example solutions to see additional details and to experiment with the programs.

1. Defining a Day Period

Problem: Write an application that goes beyond AM/PM flags and splits the day into four periods: night, morning, afternoon, and evening. Depending on the given date-time and time zone, generate one of these periods.

Let's imagine that we want to say hello to a friend from another country (in a different time zone) via a message such as Good morning, Good afternoon, and so on, based on their local time. So, having access to AM/PM flags is not enough, because we consider that a day (24 hours) can be represented by the following periods:

- 9:00 PM (or 21:00) – 5:59 AM = night
- 6:00 AM – 11:59 AM = morning
- 12:00 PM – 5:59 PM (or 17:59) = afternoon
- 6:00 PM (or 18:00) – 8:59 PM (or 20:59) = evening

Before JDK 16

First, we have to obtain the time corresponding to our friend's time zone. For this, we can start from our local time given as a java.util.Date, java.time.LocalTime, and so on. If we start from a java.util.Date, then we can obtain the time in our friend's time zone as follows:

Java

LocalTime lt = date.toInstant().atZone(zoneId).toLocalTime();

Here, the date is a new Date() and zoneId is a java.time.ZoneId. Of course, we can pass the zone ID as a String and use the ZoneId.of(String zoneId) method to get the ZoneId instance. If we prefer to start from LocalTime.now(), then we can obtain the time in our friend's time zone as follows:

Java

LocalTime lt = LocalTime.now(zoneId);

Next, we can define the day periods as a bunch of LocalTime instances and add some conditions to determine the current period. The following code exemplifies this statement:

Java

public static String toDayPeriod(Date date, ZoneId zoneId) {
    LocalTime lt = date.toInstant().atZone(zoneId).toLocalTime();
    LocalTime night = LocalTime.of(21, 0, 0);
    LocalTime morning = LocalTime.of(6, 0, 0);
    LocalTime afternoon = LocalTime.of(12, 0, 0);
    LocalTime evening = LocalTime.of(18, 0, 0);
    LocalTime almostMidnight = LocalTime.of(23, 59, 59);
    LocalTime midnight = LocalTime.of(0, 0, 0);

    if ((lt.isAfter(night) && lt.isBefore(almostMidnight))
            || (lt.isAfter(midnight) && lt.isBefore(morning))) {
        return "night";
    } else if (lt.isAfter(morning) && lt.isBefore(afternoon)) {
        return "morning";
    } else if (lt.isAfter(afternoon) && lt.isBefore(evening)) {
        return "afternoon";
    } else if (lt.isAfter(evening) && lt.isBefore(night)) {
        return "evening";
    }
    return "day";
}

Now, let's see how we can do this in JDK 16+.

JDK 16+

Starting with JDK 16+, we can go beyond AM/PM flags via the following strings: in the morning, in the afternoon, in the evening, and at night. These friendly outputs are available via the new pattern, B. This pattern is available starting with JDK 16+ via DateTimeFormatter and DateTimeFormatterBuilder (you can find these APIs in Chapter 1, Problem 18, in my book).
Now, let's see how we can do this in JDK 16+.

JDK 16+

Starting with JDK 16, we can go beyond AM/PM flags via the following strings: in the morning, in the afternoon, in the evening, and at night. These friendly outputs are available via the new pattern, B, which is supported by DateTimeFormatter and DateTimeFormatterBuilder (you can find these APIs in Chapter 1, Problem 18, in my book). So, the following code uses DateTimeFormatter to exemplify the usage of pattern B, representing a period of the day:

Java
public static String toDayPeriod(Date date, ZoneId zoneId) {

    ZonedDateTime zdt = date.toInstant().atZone(zoneId);
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MMM-dd [B]");

    return zdt.format(formatter);
}

Here is the output for Australia/Melbourne:

2023-Feb-04 at night

You can see more examples in the bundled code. Feel free to challenge yourself to adjust this code to reproduce the result from the first example.

2. Converting Between Date and YearMonth

Problem: Write an application that converts between java.util.Date and java.time.YearMonth and vice versa.

Converting a java.util.Date to a JDK 8 java.time.YearMonth can be done based on YearMonth.from(TemporalAccessor temporal). A TemporalAccessor is an interface (more precisely, a framework-level interface) that exposes read-only access to any temporal object, including date, time, and offset (a combination of these is also allowed). So, if we convert the given java.util.Date to a java.time.LocalDate, then the result of the conversion can be passed to YearMonth.from() as follows:

Java
public static YearMonth toYearMonth(Date date) {
    return YearMonth.from(date.toInstant()
        .atZone(ZoneId.systemDefault())
        .toLocalDate());
}

Vice versa can be obtained via Date.from(Instant instant) as follows:

Java
public static Date toDate(YearMonth ym) {
    return Date.from(ym.atDay(1).atStartOfDay(
        ZoneId.systemDefault()).toInstant());
}

Well, that was easy, wasn't it?

3. Converting Between Int and YearMonth

Problem: Let's consider that a YearMonth is given (for instance, 2023-02). Convert it to an integer representation (for instance, 24277) that can be converted back to a YearMonth.

Consider that we have YearMonth.now() and we want to convert it to an integer (for example, this can be useful for storing a year/month date in a database using a numeric field). Check out the solution:

Java
public static int to(YearMonth u) {
    return (int) u.getLong(ChronoField.PROLEPTIC_MONTH);
}

The proleptic-month is a java.time.temporal.TemporalField, which basically represents a date-time field such as month-of-year (our case) or minute-of-hour. The proleptic-month starts from 0 and counts the months sequentially from year 0. So, getLong() returns the value of the specified field (here, the proleptic-month) from this year-month as a long. We can cast this long to an int since the proleptic-month shouldn't go beyond the int domain (for instance, for 2023/2 the returned int is 24277). Vice versa can be accomplished as follows:

Java
public static YearMonth from(int t) {
    return YearMonth.of(1970, 1)
        .with(ChronoField.PROLEPTIC_MONTH, t);
}

You can start from any year/month. The choice of 1970/1 (known as the epoch, and the starting point of java.time.Instant) was arbitrary.
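Before moving on, here is a quick round-trip sketch for the two conversions above. It is only a sanity check under stated assumptions: the toYearMonth()/toDate() and to()/from() helpers are assumed to be in scope, and the sample values are illustrative:

Java
import java.time.YearMonth;
import java.util.Date;

public class YearMonthConversionsDemo {

    // assume toYearMonth(), toDate(), to(), and from() from above are defined here

    public static void main(String[] args) {
        // Date <-> YearMonth: toDate() lands on the first day of the month, start of day
        YearMonth ym = toYearMonth(new Date());
        Date firstOfMonth = toDate(ym);
        System.out.println(ym + " starts on " + firstOfMonth);

        // YearMonth <-> int via the proleptic-month (2023-02 encodes to 24277)
        int encoded = to(YearMonth.of(2023, 2));
        YearMonth decoded = from(encoded);
        System.out.println(encoded + " decodes back to " + decoded); // 24277 -> 2023-02
    }
}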
4. Converting Week/Year to Date

Problem: Consider that two integers are given, representing a week and a year (for instance, week 10, year 2023). Write a program that converts 10-2023 to a java.util.Date via Calendar and to a LocalDate via the WeekFields API. Also, do the reverse: from a given Date/LocalDate, extract the year and the week as integers.

Solutions: Let's consider the year 2023, week 10. The corresponding date is Sun Mar 05 15:15:08 EET 2023 (of course, the time component is relative). Converting the year/week to a java.util.Date can be done via the Calendar API, as in the following self-explanatory snippet of code:

Java
public static Date from(int year, int week) {

    Calendar calendar = Calendar.getInstance();
    calendar.set(Calendar.YEAR, year);
    calendar.set(Calendar.WEEK_OF_YEAR, week);
    calendar.set(Calendar.DAY_OF_WEEK, 1);

    return calendar.getTime();
}

If you prefer to obtain a LocalDate instead of a Date, then you can easily perform the corresponding conversion, or you can rely on java.time.temporal.WeekFields. This API exposes several fields for working with week-of-year, week-of-month, and day-of-week. This being said, here is the previous solution written via WeekFields to return a LocalDate:

Java
public static LocalDate from(int year, int week) {
    WeekFields weekFields = WeekFields.of(Locale.getDefault());
    return LocalDate.now()
        .withYear(year)
        .with(weekFields.weekOfYear(), week)
        .with(weekFields.dayOfWeek(), 1);
}

On the other hand, if we have a java.util.Date and we want to extract the year and the week from it, then we can use the Calendar API. Here, we extract the year:

Java
public static int getYear(Date date) {
    Calendar calendar = Calendar.getInstance();
    calendar.setTime(date);
    return calendar.get(Calendar.YEAR);
}

And here, we extract the week:

Java
public static int getWeek(Date date) {
    Calendar calendar = Calendar.getInstance();
    calendar.setTime(date);
    return calendar.get(Calendar.WEEK_OF_YEAR);
}

Getting the year and the week from a LocalDate is easy thanks to ChronoField.YEAR and ChronoField.ALIGNED_WEEK_OF_YEAR:

Java
public static int getYear(LocalDate date) {
    return date.get(ChronoField.YEAR);
}

public static int getWeek(LocalDate date) {
    return date.get(ChronoField.ALIGNED_WEEK_OF_YEAR);
}

Of course, getting the week can be accomplished via WeekFields as well:

Java
return date.get(WeekFields.of(
    Locale.getDefault()).weekOfYear());

Challenge yourself to obtain week/month and day/week from a Date/LocalDate.
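Before you take on that challenge, here is a minimal round-trip check for this problem. It assumes the Calendar-based from(), getYear(), and getWeek() helpers above sit in the same class; keep in mind that Calendar's week numbering is locale-sensitive, so the exact date printed may vary on your machine:

Java
import java.util.Date;

public class WeekYearDemo {

    // assume from(int, int), getYear(Date), and getWeek(Date) from above are defined here

    public static void main(String[] args) {
        // Year 2023, week 10 -> a Date in that week (DAY_OF_WEEK set to 1, i.e., Sunday)
        Date date = from(2023, 10);
        System.out.println(date); // e.g., Sun Mar 05 ... 2023

        // ... and back again to the original integers
        System.out.println(getYear(date)); // 2023
        System.out.println(getWeek(date)); // 10
    }
}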