Navigating the Skies
By Kellyn Gorman
This is an article from DZone's 2023 Database Systems Trend Report. For more: Read the Report.

In today's rapidly evolving digital landscape, businesses across the globe are embracing cloud computing to streamline operations, reduce costs, and drive innovation. At the heart of this digital transformation lies the critical role of cloud databases — the backbone of modern data management. With the ever-growing volume of data generated for business, education, and technology, scalability, security, and cloud services have become the deciding factors in choosing the right cloud vendor. In this article, we will delve into the world of the primary cloud vendors, taking an in-depth look at their offerings and analyzing the crucial factors that set them apart: scalability, security, and cloud services for cloud databases. Armed with this knowledge, businesses can make informed decisions as they navigate the vast skies of cloud computing and select the vendor best suited to their data management requirements.

Scaling in the Cloud

One of the fundamental advantages of cloud databases is their ability to scale in response to increasing demands for storage and processing power. Scalability can be achieved in two primary ways: horizontally and vertically. Horizontal scaling, also known as scale-out, adds more servers to a system and distributes the load across multiple nodes. Vertical scaling, or scale-up, increases the capacity of existing servers by adding resources such as CPU, memory, and storage.

Benefits of Scalability

By distributing workloads across multiple servers or increasing the resources available on a single server, cloud databases can optimize performance and prevent bottlenecks, ensuring smooth operation even during peak times. Scalability allows organizations to adapt to sudden spikes in demand or changing requirements without interrupting services; by expanding or contracting resources as needed, businesses can maintain uptime and avoid costly outages. And because resources are scaled on demand, organizations pay only for what they use, which allows for more efficient resource allocation and cost savings compared to traditional on-premises infrastructure.

Examples of Cloud Databases With Scalability

Several primary cloud vendors offer scalable cloud databases designed to meet the diverse needs of organizations. The most popular offerings span database platforms from licensed products to open-source engines such as MySQL and PostgreSQL. In the public cloud, there are three major players in the arena: Amazon, Microsoft Azure, and Google. Each of these vendors offers managed cloud databases in various flavors of both licensed and open-source database platforms. These databases scale easily in both storage and compute, although scaling is controlled through the vendor's service offerings. Scalability in the cloud is mostly about adding more power to an instance, although some cloud databases are able to scale out, too.

Figure 1: Scaling up behind the scenes in the cloud

Each cloud vendor provides various high availability and scalability options with minimal manual intervention, allowing organizations to scale instances up or down and add replicas for read-heavy workloads or maintenance offloading.

Securing Data in the Cloud

As organizations increasingly embrace cloud databases to store and manage their sensitive data, ensuring robust security has become a top priority.
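One basic protection, enforcing encrypted connections between the application and the database, is usually visible right at the client. The following is a minimal, illustrative Java/JDBC sketch that requires TLS when connecting to a managed PostgreSQL endpoint; the hostname, database name, and credentials are placeholders, and the exact SSL parameters vary by provider and driver.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Requires the org.postgresql:postgresql JDBC driver on the classpath.
public class SecureConnectionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint for a managed PostgreSQL instance.
        // sslmode=require tells the PostgreSQL JDBC driver to refuse
        // unencrypted connections (stricter modes such as verify-full
        // also validate the server certificate).
        String url = "jdbc:postgresql://example-db.cloud-provider.com:5432/appdb"
                + "?sslmode=require";

        try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version()")) {
            while (rs.next()) {
                System.out.println("Connected securely to: " + rs.getString(1));
            }
        }
    }
}
```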
While cloud databases offer numerous advantages, they also come with potential risks, such as data breaches, unauthorized access, and insider threats. In this section, we will explore the security features that cloud databases provide and discuss how they help mitigate these risks.

Common Security Risks

Data breaches aren't a question of if, but a question of when. Unauthorized access to sensitive data can result in reputational damage, financial losses, and regulatory penalties. It shouldn't surprise anyone that cloud databases are targeted by cybercriminals attempting to gain unauthorized access to data, which makes it essential to implement strict access controls at all levels — cloud, network, application, and database. And as much as we don't like to think about it, disgruntled employees or other insiders can pose a significant threat to an organization's data security: they may have legitimate access to the system but misuse it, whether maliciously or unintentionally.

Security Features in Cloud Databases

One of the largest benefits of a public cloud vendor is its range of first-party and partner security offerings, which can provide better security for cloud databases. Cloud databases offer robust access control mechanisms, such as role-based access control (RBAC) and multi-factor authentication (MFA), to ensure that only authorized users can access data. These features help prevent unauthorized access and reduce the risk of insider threats.

Figure 2: Database security in the public cloud

The second most commonly implemented protection is encryption and data-level protection. To protect data from unauthorized access, cloud databases provide encryption at several levels and layers, helping to secure data throughout its lifecycle. Encryption comes in three main forms: encryption at rest protects data stored on disk using strong encryption algorithms; encryption in transit safeguards data as it travels between the client and the server, or between components within the database service; and encryption in use protects data while it is being processed, ensuring that it remains secure even in memory.

Compliance and Regulations

Cloud database providers often adhere to strict compliance standards and regulations, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the Payment Card Industry Data Security Standard (PCI-DSS). Compliance with these regulations helps ensure that organizations meet their legal and regulatory obligations, further enhancing data security. Integrating cloud databases with identity and access management (IAM) services, such as AWS Identity and Access Management, Azure Active Directory, and Google Cloud Identity, helps enforce strict security and access control policies. This integration ensures that only authorized users can access and interact with the cloud database, enhancing overall security.

Cloud Services and Databases

Cloud databases not only provide efficient storage and management of data but can also be seamlessly integrated with various other cloud services to enhance their capabilities. By leveraging these integrations, organizations can access powerful tools for insights, analytics, security, and quality. In this section, we will explore some popular cloud services that can be integrated with cloud databases and discuss their benefits.
Cloud Machine Learning Services

Machine learning services in the cloud enable organizations to develop, train, and deploy machine learning models using their cloud databases as data sources. These services help derive valuable insights and predictions from stored data, allowing businesses to make data-driven decisions and optimize processes. With today's heavy investment in artificial intelligence (AI), it should surprise no one that AI services sit at the top of the services list. AI services in the cloud, such as natural language processing, computer vision, and speech recognition, can be integrated with cloud databases to unlock new capabilities. These integrations enable organizations to analyze unstructured data, automate decision-making, and improve user experiences.

Cloud Databases and Integration

Integrating cloud databases with data warehouse solutions, such as Amazon Redshift, Google BigQuery, Azure Synapse Analytics, and Snowflake, allows organizations to perform large-scale data analytics and reporting. This combination provides a unified platform for data storage, management, and analysis, enabling businesses to gain deeper insights from their data. Alongside AI and machine learning, cloud databases can be integrated with business intelligence (BI) tools like Tableau, Power BI, and Looker to create visualizations and dashboards. By connecting BI tools to cloud databases, organizations can easily analyze and explore data, empowering them to make informed decisions based on real-time insights. Integrating cloud databases with data streaming services like Amazon Kinesis, Azure Stream Analytics, and Google Cloud Pub/Sub enables organizations to process and analyze data in real time, providing timely insights and improving decision-making. And by integrating cloud databases with monitoring and alerting services, such as Amazon CloudWatch, Azure Monitor, and Google Cloud Monitoring, organizations gain insight into the health and performance of their databases; these services allow businesses to set up alerts, monitor key performance indicators (KPIs), and troubleshoot issues in real time.

Data Pipelines and ETL Services

The final integration category is data pipelines and ETL services, such as AWS Glue, Azure Data Factory, and Google Cloud Data Fusion. These can be integrated with relational cloud databases to automate data ingestion, transformation, and loading, ensuring seamless data flow between systems.

Conclusion

The scalability of cloud databases is an essential factor for organizations looking to manage their growing data needs effectively. Alongside scalability, security is a critical aspect of cloud databases, and it is crucial for organizations to understand the features and protections offered by their chosen provider. By leveraging robust access control, encryption, and compliance measures, businesses can significantly reduce the risks associated with data breaches, unauthorized access, and insider threats, ensuring that their sensitive data remains secure and protected in the cloud. Finally, to achieve the highest return on investment, integrating cloud databases with other services unlocks the powerful analytics and insights available in the public cloud. By leveraging these integrations, organizations can enhance the capabilities of their cloud databases and optimize their data management processes, driving innovation and growth in the digital age.
Demystifying Project Loom: A Guide to Lightweight Threads in Java
By Udit Handa
Concurrent programming is the art of juggling multiple tasks in a software application effectively. In the realm of Java, this means threading — a concept that has been both a boon and a bane for developers. Java's threading model, while powerful, has often been considered too complex and error-prone for everyday use. Enter Project Loom, a paradigm-shifting initiative designed to transform the way Java handles concurrency. In this blog, we'll embark on a journey to demystify Project Loom, a groundbreaking project aimed at bringing lightweight threads, known as fibers, into the world of Java. These fibers are poised to revolutionize the way Java developers approach concurrent programming, making it more accessible, efficient, and enjoyable. But before we dive into the intricacies of Project Loom, let's first understand the broader context of concurrency in Java. Understanding Concurrency in Java Concurrency is the backbone of modern software development. It allows applications to perform multiple tasks simultaneously, making the most of available resources, particularly in multi-core processors. Java, from its inception, has been a go-to language for building robust and scalable applications that can efficiently handle concurrent tasks. In Java, concurrency is primarily achieved through threads. Threads are lightweight sub-processes within a Java application that can be executed independently. These threads enable developers to perform tasks concurrently, enhancing application responsiveness and performance. However, traditional thread management in Java has its challenges. Developers often grapple with complex and error-prone aspects of thread creation, synchronization, and resource management. Threads, while powerful, can also be resource-intensive, leading to scalability issues in applications with a high thread count. Java introduced various mechanisms and libraries to ease concurrent programming, such as the java.util.concurrent package, but the fundamental challenges remained. This is where Project Loom comes into play. What Is Project Loom? Project Loom is an ambitious endeavor within the OpenJDK community that aims to revolutionize Java concurrency by introducing lightweight threads, known as fibers. These fibers promise to simplify concurrent programming in Java and address many of the pain points associated with traditional threads. The primary goal of Project Loom is to make concurrency more accessible, efficient, and developer-friendly. It achieves this by reimagining how Java manages threads and by introducing fibers as a new concurrency primitive. Fibers are not tied to native threads, which means they are lighter in terms of resource consumption and easier to manage. One of the key driving forces behind Project Loom is reducing the complexity associated with threads. Traditional threads require careful management of thread pools, synchronization primitives like locks and semaphores, and error-prone practices like dealing with thread interruption. Fibers simplify this by providing a more lightweight and predictable model for concurrency. Moreover, Project Loom aims to make Java more efficient by reducing the overhead associated with creating and managing threads. In traditional thread-based concurrency, each thread comes with its own stack and requires significant memory resources. Fibers, on the other hand, share a common stack, reducing memory overhead and making it possible to have a significantly larger number of concurrent tasks. 
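To make the "lightweight" claim concrete, the sketch below starts a large number of these threads and waits for them to finish. One caveat on naming: in the JDK builds that have shipped since Loom's early prototypes (preview in JDK 19 and 20, finalized in JDK 21), this capability is exposed through the standard Thread API as virtual threads rather than through a separate java.lang.Fiber class, so the example uses Thread.ofVirtual(). Treat it as an illustration of the idea rather than of a final fiber API.

```java
import java.util.ArrayList;
import java.util.List;

public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();

        // Launch 10,000 lightweight (virtual) threads. Starting this many
        // platform threads would reserve a full OS thread and stack for each;
        // virtual threads make this cheap.
        for (int i = 0; i < 10_000; i++) {
            int taskId = i;
            Thread t = Thread.ofVirtual().name("task-" + taskId).start(() -> {
                try {
                    // Blocking here does not tie up an OS thread; the JVM
                    // parks the virtual thread and reuses the carrier thread.
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            threads.add(t);
        }

        for (Thread t : threads) {
            t.join();
        }
        System.out.println("All " + threads.size() + " lightweight threads finished");
    }
}
```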
Project Loom is being developed with the idea of being backward-compatible with existing Java codebases. This means that developers can gradually adopt fibers in their applications without having to rewrite their entire codebase. It's designed to seamlessly integrate with existing Java libraries and frameworks, making the transition to this new concurrency model as smooth as possible. Fibers: The Building Blocks of Lightweight Threads Fibers are at the heart of Project Loom. They represent a new concurrency primitive in Java, and understanding them is crucial to harnessing the power of lightweight threads. Fibers, sometimes referred to as green threads or user-mode threads, are fundamentally different from traditional threads in several ways. First and foremost, fibers are not tied to native threads provided by the operating system. In traditional thread-based concurrency, each thread corresponds to a native thread, which can be resource-intensive to create and manage. Fibers, on the other hand, are managed by the Java Virtual Machine (JVM) itself and are much lighter in terms of resource consumption. One of the key advantages of fibers is their lightweight nature. Unlike traditional threads, which require a separate stack for each thread, fibers share a common stack. This significantly reduces memory overhead, allowing you to have a large number of concurrent tasks without exhausting system resources. Fibers also simplify concurrency by eliminating some of the complexities associated with traditional threads. For instance, when working with threads, developers often need to deal with issues like thread interruption and synchronization using locks. These complexities can lead to subtle bugs and make code harder to reason about. Fibers provide a more straightforward model for concurrency, making it easier to write correct and efficient code. To work with fibers in Java, you'll use the java.lang.Fiber class. This class allows you to create and manage fibers within your application. You can think of fibers as lightweight, cooperative threads that are managed by the JVM, and they allow you to write highly concurrent code without the pitfalls of traditional thread management. Getting Started With Project Loom Before you can start harnessing the power of Project Loom and its lightweight threads, you need to set up your development environment. At the time of writing, Project Loom was still in development, so you might need to use preview or early-access versions of Java to experiment with fibers. Here are the steps to get started with Project Loom: Choose the right Java version: Project Loom features might not be available in the stable release of Java. You may need to download an early-access version of Java that includes Project Loom features. Check the official OpenJDK website for the latest releases and versions that support Project Loom. Install and configure your development environment: Download and install the chosen Java version on your development machine. Configure your IDE (Integrated Development Environment) to use this version for your Project Loom experiments. Import Project Loom libraries: Depending on the Java version you choose, you may need to include Project Loom libraries in your project. Refer to the official documentation for instructions on how to do this. Create a simple fiber: Start by creating a basic Java application that utilizes fibers. Create a simple task that can run concurrently using a fiber. 
You can use the java.lang.Fiber class to create and manage fibers within your application. Compile and run your application: Compile your application and run it using the configured Project Loom-enabled Java version. Observe how fibers operate and how they differ from traditional threads. Experiment and learn: Explore more complex scenarios and tasks where fibers can shine. Experiment with asynchronous programming, I/O-bound operations, and other concurrency challenges using fibers. Benefits of Lightweight Threads in Java Project Loom's introduction of lightweight threads, or fibers, into the Java ecosystem brings forth a myriad of benefits for developers and the applications they build. Let's delve deeper into these advantages: Efficiency: Fibers are more efficient than traditional threads. They are lightweight, consuming significantly less memory, and can be created and destroyed with much less overhead. This efficiency allows you to have a higher number of concurrent tasks without worrying about resource exhaustion. Simplicity: Fibers simplify concurrent programming. With fibers, you can write code that is easier to understand and reason about. You'll find yourself writing less boilerplate code for thread management, synchronization, and error handling. Scalability: The reduced memory footprint of fibers translates to improved scalability. Applications that need to handle thousands or even millions of concurrent tasks can do so more efficiently with fibers. Responsiveness: Fibers enhance application responsiveness. Tasks that would traditionally block a thread can now yield control to the fiber scheduler, allowing other tasks to run in the meantime. This results in applications that feel more responsive and can better handle user interactions. Compatibility: Project Loom is designed to be backward-compatible with existing Java codebases. This means you can gradually adopt fibers in your applications without a full rewrite. You can incrementally update your code to leverage lightweight threads where they provide the most benefit. Resource utilization: Fibers can improve resource utilization in applications that perform asynchronous I/O operations, such as web servers or database clients. They allow you to efficiently manage a large number of concurrent connections without the overhead of traditional threads. Reduced complexity: Code that deals with concurrency often involves complex patterns and error-prone practices. Fibers simplify these complexities, making it easier to write correct and efficient concurrent code. It's important to note that while Project Loom promises significant advantages, it's not a one-size-fits-all solution. The choice between traditional threads and fibers should be based on the specific needs of your application. However, Project Loom provides a powerful tool that can simplify many aspects of concurrent programming in Java and deserves consideration in your development toolkit. Project Loom Best Practices Now that you have an understanding of Project Loom and the benefits it offers, let's dive into some best practices for working with fibers in your Java applications: Choose the right concurrency model: While fibers offer simplicity and efficiency, they may not be the best choice for every scenario. Evaluate your application's specific concurrency requirements to determine whether fibers or traditional threads are more suitable. 
Limit blocking operations: Fibers are most effective in scenarios with a high degree of concurrency and tasks that may block, such as I/O operations. Use fibers for tasks that can yield control when waiting for external resources, allowing other fibers to run. Avoid thread synchronization: One of the advantages of fibers is reduced reliance on traditional synchronization primitives like locks. Whenever possible, use non-blocking or asynchronous techniques to coordinate between fibers, which can lead to more efficient and scalable code. Keep error handling in mind: Exception handling in fibers can be different from traditional threads. Be aware of how exceptions propagate in fiber-based code and ensure you have proper error-handling mechanisms in place. Use thread pools: Consider using thread pools with fibers for optimal resource utilization. Thread pools can efficiently manage the execution of fibers while controlling the number of active fibers to prevent resource exhaustion. Stay updated: Project Loom is an evolving project, and new features and improvements are regularly introduced. Stay updated with the latest releases and documentation to take advantage of the latest enhancements. Experiment and benchmark: Before fully adopting fibers in a production application, experiment with different scenarios and benchmark the performance to ensure that fibers are indeed improving your application's concurrency. Profile and debug: Familiarize yourself with tools and techniques for profiling and debugging fiber-based applications. Tools like profilers and debuggers can help you identify and resolve performance bottlenecks and issues. Project Loom and Existing Libraries/Frameworks One of the remarkable aspects of Project Loom is its compatibility with existing Java libraries and frameworks. As a developer, you don't have to discard your existing codebase to leverage the benefits of fibers. Here's how Project Loom can coexist with your favorite Java tools: Java standard library: Project Loom is designed to seamlessly integrate with the Java standard library. You can use fibers alongside existing Java classes and packages without modification. Concurrency libraries: Popular Java concurrency libraries, such as java.util.concurrent, can be used with fibers. You can employ thread pools and other concurrency utilities to manage and coordinate fiber execution. Frameworks and web servers: Java frameworks and web servers like Spring, Jakarta EE, and Apache Tomcat can benefit from Project Loom. Fibers can improve the efficiency of handling multiple client requests concurrently. Database access: If your application performs database access, fibers can be used to efficiently manage database connections. They allow you to handle a large number of concurrent database requests without excessive resource consumption. Third-party libraries: Most third-party libraries that are compatible with Java can be used in conjunction with Project Loom. Ensure that you're using Java versions compatible with Project Loom features. Asynchronous APIs: Many Java libraries and frameworks offer asynchronous APIs that align well with fibers. You can utilize these APIs to write non-blocking, efficient code. Project Loom's compatibility with existing Java ecosystem components is a significant advantage. It allows you to gradually adopt fibers where they provide the most value in your application while preserving your investment in existing code and libraries. 
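To show how this compatibility works with the familiar java.util.concurrent types, here is a minimal sketch that submits I/O-bound tasks to an ExecutorService backed by lightweight threads. It uses Executors.newVirtualThreadPerTaskExecutor() from the finalized JDK 21 API as a stand-in for the fiber terminology used above, and the fetch method is a placeholder for a blocking call such as an HTTP request or database query.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadExecutorDemo {

    // Placeholder for an I/O-bound call (e.g., an HTTP request or DB query).
    static String fetch(int id) throws InterruptedException {
        Thread.sleep(200); // blocking is cheap on a lightweight thread
        return "result-" + id;
    }

    public static void main(String[] args) throws Exception {
        // Each submitted task gets its own lightweight (virtual) thread,
        // so no thread-pool sizing is needed for I/O-bound workloads.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < 1_000; i++) {
                int id = i;
                futures.add(executor.submit(() -> fetch(id)));
            }
            for (Future<String> f : futures) {
                f.get(); // wait for every task; results are discarded here
            }
            System.out.println("Completed " + futures.size() + " I/O-bound tasks");
        }
    }
}
```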
Future of Project Loom As Project Loom continues to evolve and make strides in simplifying concurrency in Java, it's essential to consider its potential impact on the future of Java development. Here are some factors to ponder: Increased adoption: As developers become more familiar with fibers and their benefits, Project Loom could see widespread adoption. This could lead to the creation of a vast ecosystem of libraries and tools that leverage lightweight threads. Enhancements and improvements: Project Loom is still in development, and future releases may bring further enhancements and optimizations. Keep an eye on the project's progress and be ready to embrace new features and improvements. Easier concurrency education: With the simplification of concurrency, Java newcomers may find it easier to grasp the concepts of concurrent programming. This could lead to a more significant talent pool of Java developers with strong concurrency skills. Concurrency-driven architectures: Project Loom's efficiency and ease of use might encourage developers to design and implement more concurrency-driven architectures. This could result in applications that are highly responsive and scalable. Feedback and contributions: Get involved with the Project Loom community by providing feedback, reporting issues, and even contributing to the project's development. Your insights and contributions can shape the future of Project Loom. Conclusion In this journey through Project Loom, we've explored the evolution of concurrency in Java, the introduction of lightweight threads known as fibers, and the potential they hold for simplifying concurrent programming. Project Loom represents a significant step forward in making Java more efficient, developer-friendly, and scalable in the realm of concurrent programming. As you embark on your own exploration of Project Loom, remember that while it offers a promising future for Java concurrency, it's not a one-size-fits-all solution. Evaluate your application's specific needs and experiment with fibers to determine where they can make the most significant impact. The world of Java development is continually evolving, and Project Loom is just one example of how innovation and community collaboration can shape the future of the language. By embracing Project Loom, staying informed about its progress, and adopting best practices, you can position yourself to thrive in the ever-changing landscape of Java development. More

Trend Report

Database Systems

This data-forward, analytics-driven world would be lost without its database and data storage solutions. As more organizations continue to transition their software to cloud-based systems, the growing demand for database innovation and enhancements has climbed to novel heights. We have entered a new era of the "Modern Database," where databases must both store data and ensure that data is prepped and primed securely for insights and analytics, integrity and quality, and microservices and cloud-based architectures. In our 2023 Database Systems Trend Report, we explore these database trends, assess current strategies and challenges, and provide forward-looking assessments of the database technologies most commonly used today. Further, readers will find insightful articles — written by several of our very own DZone Community experts — that cover hand-selected topics, including what "good" database design is, database monitoring and observability, and how to navigate the realm of cloud databases.


Refcard #008

Design Patterns

By Justin Albano

Refcard #388

Threat Modeling

By Apostolos Giannakidis

More Articles

Auto-Scaling DynamoDB Streams Applications on Kubernetes

This blog post demonstrates how to auto-scale your DynamoDB Streams consumer applications on Kubernetes. You will work with a Java application that uses the DynamoDB Streams Kinesis adapter library to consume change data events from a DynamoDB table. It will be deployed to an Amazon EKS cluster and scaled automatically using KEDA. The application includes an implementation of com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor that processes data from the DynamoDB stream and replicates it to another (target) DynamoDB table; this is just used as an example. We will use the AWS CLI to produce data to the DynamoDB stream and observe the scaling of the application. The code is available in this GitHub repository.

What's Covered

Introduction, horizontal scalability with the Kinesis Client Library, what KEDA is, prerequisites, setting up and configuring KEDA on EKS, configuring IAM roles, deploying the DynamoDB Streams consumer application to EKS, DynamoDB Streams consumer app autoscaling in action with KEDA, deleting resources, and the conclusion.

Introduction

Amazon DynamoDB is a fully managed database service that provides fast and predictable performance with seamless scalability. With DynamoDB Streams, you can leverage Change Data Capture (CDC) to get notified about changes to DynamoDB table data in real time. This makes it possible to easily build applications that react to changes in the underlying database without the need for complex polling or querying. DynamoDB offers two streaming models for change data capture: Kinesis Data Streams for DynamoDB and DynamoDB Streams. With Kinesis Data Streams, you can capture item-level modifications in any DynamoDB table and replicate them to a Kinesis data stream. DynamoDB Streams, on the other hand, captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. We will make use of the native DynamoDB Streams capability.

Even with DynamoDB Streams, there are multiple options to choose from when it comes to consuming the change data events: use the low-level DynamoDB Streams API to read the change data events, use an AWS Lambda trigger, or use the DynamoDB Streams Kinesis adapter library. Our application will leverage DynamoDB Streams along with the Kinesis Client Library (KCL) adapter library 1.x to consume change data events from a DynamoDB table.

Horizontal Scalability With Kinesis Client Library

The Kinesis Client Library ensures that for every shard there is a record processor running and processing that shard. KCL helps take care of many of the complex tasks associated with distributed computing and scalability. It connects to the data stream, enumerates the shards within the data stream, and uses leases to coordinate shard associations with its consumer applications. A record processor is instantiated for every shard it manages. KCL pulls data records from the data stream, pushes the records to the corresponding record processor, and checkpoints processed records. More importantly, it rebalances shard-worker associations (leases) when the worker instance count changes or when the data stream is re-sharded (shards are split or merged). This means that you can scale your DynamoDB Streams application simply by adding more instances, since KCL will automatically balance the shards across them. But you still need a way to scale your applications when the load increases. Of course, you could do it manually or build a custom solution to get this done.
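For reference, a record processor along the lines described above can be sketched as follows. This is a minimal, illustrative implementation of the KCL 1.x "v2" IRecordProcessor interface mentioned earlier; the replication to the target table is stubbed out with a log statement, and error handling is reduced to the bare minimum.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import com.amazonaws.services.kinesis.model.Record;

// Minimal record processor: KCL creates one instance per shard (lease)
// and calls processRecords as change data events arrive.
public class ReplicatingRecordProcessor implements IRecordProcessor {

    private String shardId;

    @Override
    public void initialize(InitializationInput initializationInput) {
        this.shardId = initializationInput.getShardId();
        System.out.println("Initialized processor for shard " + shardId);
    }

    @Override
    public void processRecords(ProcessRecordsInput processRecordsInput) {
        List<Record> records = processRecordsInput.getRecords();
        for (Record record : records) {
            String payload = StandardCharsets.UTF_8.decode(record.getData()).toString();
            // In the actual application this is where the change event would be
            // written to the target table (users_replica); here we just log it.
            System.out.println("Shard " + shardId + " received: " + payload);
        }
        try {
            // Checkpoint so a rebalanced or restarted worker does not
            // reprocess records this instance has already handled.
            processRecordsInput.getCheckpointer().checkpoint();
        } catch (Exception e) {
            System.err.println("Checkpoint failed: " + e.getMessage());
        }
    }

    @Override
    public void shutdown(ShutdownInput shutdownInput) {
        System.out.println("Shutting down shard " + shardId + ", reason: "
                + shutdownInput.getShutdownReason());
    }
}
```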
This need for automated scaling is where KEDA comes in.

What Is KEDA?

KEDA is a Kubernetes-based event-driven autoscaling component that can monitor event sources like DynamoDB Streams and scale the underlying Deployments (and Pods) based on the number of events needing to be processed. It's built on top of native Kubernetes primitives such as the Horizontal Pod Autoscaler and can be added to any Kubernetes cluster. Here is a high-level overview of its key components (you can refer to the KEDA documentation for a deep dive):

From the KEDA Concepts documentation

The keda-operator-metrics-apiserver component in KEDA acts as a Kubernetes metrics server that exposes metrics for the Horizontal Pod Autoscaler. A KEDA Scaler integrates with an external system (such as Redis) to fetch these metrics (e.g., the length of a list) to drive auto-scaling of any container in Kubernetes based on the number of events needing to be processed. The role of the keda-operator component is to activate and deactivate Deployments, i.e., scale them to and from zero. You will see the DynamoDB Streams scaler in action, scaling based on the shard count of a DynamoDB stream. Now let's move on to the practical part of this tutorial.

Prerequisites

In addition to an AWS account, you will need to have the AWS CLI, kubectl, and Docker installed.

Set Up an EKS Cluster and Create a DynamoDB Table

There are a variety of ways in which you can create an Amazon EKS cluster. I prefer using the eksctl CLI because of the convenience it offers. Creating an EKS cluster using eksctl can be as easy as this:

eksctl create cluster --name <cluster name> --region <region e.g. us-east-1>

For details, refer to Getting Started with Amazon EKS – eksctl.

Create a DynamoDB table with streams enabled to persist application data and access the change data feed. You can use the AWS CLI to create the table with the following command:

aws dynamodb create-table \
    --table-name users \
    --attribute-definitions AttributeName=email,AttributeType=S \
    --key-schema AttributeName=email,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES

We will need to create another table that will serve as a replica of the first table:

aws dynamodb create-table \
    --table-name users_replica \
    --attribute-definitions AttributeName=email,AttributeType=S \
    --key-schema AttributeName=email,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

Clone this GitHub repository and change to the right directory:

git clone https://github.com/abhirockzz/dynamodb-streams-keda-autoscale
cd dynamodb-streams-keda-autoscale

Ok, let's get started!

Set Up and Configure KEDA on EKS

For the purposes of this tutorial, you will use YAML files to deploy KEDA, but you could also use Helm charts. Install KEDA:

# update version 2.8.2 if required
kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.8.2/keda-2.8.2.yaml

Verify the installation:

# check Custom Resource Definitions
kubectl get crd

# check KEDA Deployments
kubectl get deployment -n keda

# check KEDA operator logs
kubectl logs -f $(kubectl get pod -l=app=keda-operator -o jsonpath='{.items[0].metadata.name}' -n keda) -n keda

Configure IAM Roles

The KEDA operator as well as the DynamoDB Streams consumer application need to invoke AWS APIs. Since both will run as Deployments in EKS, we will use IAM Roles for Service Accounts (IRSA) to provide the necessary permissions.
In our particular scenario: KEDA operator needs to be able to get information about the DynamoDB table and Stream The application (KCL 1.x library to be specific) needs to interact with Kinesis and DynamoDB - it needs a bunch of IAM permissions to do so. Configure IRSA for the KEDA Operator Set your AWS Account ID and OIDC Identity provider as environment variables: ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) #update the cluster name and region as required export EKS_CLUSTER_NAME=demo-eks-cluster export AWS_REGION=us-east-1 OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///") Create a JSON file with Trusted Entities for the role: read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:aud": "sts.amazonaws.com", "${OIDC_PROVIDER}:sub": "system:serviceaccount:keda:keda-operator" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_keda.json Now, create the IAM role and attach the policy (take a look at policy_dynamodb_streams_keda.json file for details): export ROLE_NAME=keda-operator-dynamodb-streams-role aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://trust_keda.json --description "IRSA for DynamoDB streams KEDA scaler on EKS" aws iam create-policy --policy-name keda-dynamodb-streams-policy --policy-document file://policy_dynamodb_streams_keda.json aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/keda-dynamodb-streams-policy Associate the IAM role and Service Account: kubectl annotate serviceaccount -n keda keda-operator eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/${ROLE_NAME} # verify the annotation kubectl describe serviceaccount/keda-operator -n keda You will need to restart KEDA operator Deployment for this to take effect: kubectl rollout restart deployment.apps/keda-operator -n keda # to verify, confirm that the KEDA operator has the right environment variables kubectl describe pod -n keda $(kubectl get po -l=app=keda-operator -n keda --output=jsonpath={.items..metadata.name}) | grep "^\s*AWS_" # expected output AWS_STS_REGIONAL_ENDPOINTS: regional AWS_DEFAULT_REGION: us-east-1 AWS_REGION: us-east-1 AWS_ROLE_ARN: arn:aws:iam::<AWS_ACCOUNT_ID>:role/keda-operator-dynamodb-streams-role AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token Configure IRSA for the DynamoDB Streams Consumer Application Start by creating a Kubernetes Service Account: kubectl apply -f - <<EOF apiVersion: v1 kind: ServiceAccount metadata: name: dynamodb-streams-consumer-app-sa EOF Create a JSON file with Trusted Entities for the role: read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:aud": "sts.amazonaws.com", "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:dynamodb-streams-consumer-app-sa" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust.json Now, create the IAM role and attach the policy. Update the policy.json file and enter the region and AWS account details. 
export ROLE_NAME=dynamodb-streams-consumer-app-role aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://trust.json --description "IRSA for DynamoDB Streams consumer app on EKS" aws iam create-policy --policy-name dynamodb-streams-consumer-app-policy --policy-document file://policy.json aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/dynamodb-streams-consumer-app-policy Associate the IAM role and Service Account: kubectl annotate serviceaccount -n default dynamodb-streams-consumer-app-sa eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/${ROLE_NAME} # verify the annotation kubectl describe serviceaccount/dynamodb-streams-consumer-app-sa The core infrastructure is now ready. Let's prepare and deploy the consumer application. Deploy DynamoDB Streams Consumer Application to EKS We would first need to build the Docker image and push it to ECR (you can refer to the Dockerfile for details). Build and Push the Docker Image to ECR # create runnable JAR file mvn clean compile assembly\:single # build docker image docker build -t dynamodb-streams-consumer-app . AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) # create a private ECR repo aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com aws ecr create-repository --repository-name dynamodb-streams-consumer-app --region us-east-1 # tag and push the image docker tag dynamodb-streams-consumer-app:latest $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/dynamodb-streams-consumer-app:latest docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/dynamodb-streams-consumer-app:latest Deploy the Consumer Application Update the consumer.yaml to include the Docker image you just pushed to ECR and the ARN for the DynamoDB streams for the source table. The rest of the manifest remains the same. To retrieve the ARN for the stream, run the following command: aws dynamodb describe-table --table-name users | jq -r '.Table.LatestStreamArn' The consumer.yaml Deployment manifest looks like this: apiVersion: apps/v1 kind: Deployment metadata: name: dynamodb-streams-kcl-consumer-app spec: replicas: 1 selector: matchLabels: app: dynamodb-streams-kcl-consumer-app template: metadata: labels: app: dynamodb-streams-kcl-consumer-app spec: serviceAccountName: dynamodb-streams-kcl-consumer-app-sa containers: - name: dynamodb-streams-kcl-consumer-app image: AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/dynamodb-streams-kcl-consumer-app:latest imagePullPolicy: Always env: - name: TARGET_TABLE_NAME value: users_replica - name: APPLICATION_NAME value: dynamodb-streams-kcl-app-demo - name: SOURCE_TABLE_STREAM_ARN value: <enter ARN> - name: AWS_REGION value: us-east-1 - name: INSTANCE_NAME valueFrom: fieldRef: fieldPath: metadata.name Create the Deployment: kubectl apply -f consumer.yaml # verify Pod transition to Running state kubectl get pods -w DynamoDB Streams Consumer App Autoscaling in Action With KEDA Now that you've deployed the consumer application, the KCL adapter library should jump into action. The first thing it will do is create a "control table" in DynamoDB - it should be the same as the name of the application (which in this case is dynamodb-streams-kcl-app-demo). It might take a few minutes for the initial co-ordination to happen and the table to get created. You can check the logs of the consumer application to see the progress. 
kubectl logs -f $(kubectl get po -l=app=dynamodb-streams-kcl-consumer-app --output=jsonpath={.items..metadata.name})

Once the lease allocation is complete, check the table and note the leaseOwner attribute:

aws dynamodb describe-table --table-name dynamodb-streams-kcl-app-demo

Add Data to the DynamoDB Table

Now that you've deployed the consumer application, let's add data to the source DynamoDB table (users). You can use the producer.sh script for this:

export TABLE_NAME=users
./producer.sh

Check the consumer logs to see the messages being processed:

kubectl logs -f $(kubectl get po -l=app=dynamodb-streams-kcl-consumer-app --output=jsonpath={.items..metadata.name})

Check the target table (users_replica) to confirm that the DynamoDB Streams consumer application has indeed replicated the data:

aws dynamodb scan --table-name users_replica

Notice the value of the processed_by attribute? It's the same as the name of the consumer application Pod. This will make it easier for us to verify the end-to-end autoscaling process.

Create the KEDA Scaler

Apply the scaler definition:

kubectl apply -f keda-dynamodb-streams-scaler.yaml

Here is the ScaledObject definition. Notice that it targets the dynamodb-streams-kcl-consumer-app Deployment (the one we just created) and that shardCount is set to 2:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: aws-dynamodb-streams-scaledobject
spec:
  scaleTargetRef:
    name: dynamodb-streams-kcl-consumer-app
  triggers:
    - type: aws-dynamodb-streams
      metadata:
        awsRegion: us-east-1
        tableName: users
        shardCount: "2"
        identityOwner: "operator"

A note on the shardCount attribute: we are using a shardCount value of 2. This is important because we are using the DynamoDB Streams Kinesis adapter library with KCL 1.x, which supports "up to 2 simultaneous consumers per shard." This means you cannot have more than two consumer application instances processing the same DynamoDB stream shard. This KEDA scaler configuration, however, ensures that there will be one Pod for every two shards. So, for example, if there are four shards, the application will be scaled out to two Pods; if there are six shards, there will be three Pods, and so on. Of course, you can choose to have one Pod for every shard by setting shardCount to 1. To keep track of the number of shards in the DynamoDB stream, you can run the following command (it uses a utility called jq):

aws dynamodbstreams describe-stream --stream-arn $(aws dynamodb describe-table --table-name users | jq -r '.Table.LatestStreamArn') | jq -r '.StreamDescription.Shards | length'

If you want the shard details:

aws dynamodbstreams describe-stream --stream-arn $(aws dynamodb describe-table --table-name users | jq -r '.Table.LatestStreamArn') | jq -r '.StreamDescription.Shards'

Verify DynamoDB Streams Consumer Application Auto-Scaling

We started off with one Pod of our application. But, thanks to KEDA, we should now see additional Pods coming up automatically to match the processing requirements of the consumer application. To confirm, check the number of Pods:

kubectl get pods -l=app=dynamodb-streams-kcl-consumer-app

Most likely, you will see four shards in the DynamoDB stream and two Pods. This can change (increase or decrease) depending on the rate at which data is produced to the DynamoDB table. Just like before, validate the data in the DynamoDB target table (users_replica) and note the processed_by attribute.
Since we have scaled out to additional Pods, the value will differ across records because each Pod processes a subset of the messages from the DynamoDB change stream. Also, make sure to check the dynamodb-streams-kcl-app-demo control table in DynamoDB: you should see an update to leaseOwner reflecting the fact that there are now two Pods consuming from the DynamoDB stream. Once you have verified the end-to-end solution, you can clean up the resources to avoid incurring any additional charges.

Delete Resources

Delete the EKS cluster and the DynamoDB tables:

eksctl delete cluster --name <enter cluster name>
aws dynamodb delete-table --table-name users
aws dynamodb delete-table --table-name users_replica

Conclusion

Use cases you should experiment with: Scale further up: how can you make the DynamoDB stream increase its number of shards, and what happens to the number of consumer instance Pods? Scale down: what happens when the shard capacity of the DynamoDB stream decreases? In this post, we demonstrated how to use KEDA and DynamoDB Streams together, combining two powerful techniques (Change Data Capture and auto-scaling) to build scalable, event-driven systems that adapt to the data processing needs of your application.

By Abhishek Gupta
Understanding Europe's Cyber Resilience Act and What It Means for You

IoT manufacturers in every region have a host of data privacy standards and laws to comply with — and Europe is now adding one more. The Cyber Resilience Act, or CRA, has some aspects that are simply common sense and others that overlap with already existing standards. However, other aspects are entirely new and could present challenges to IoT manufacturers and providers. Let’s explore the act and consider how it will change the world of connected devices. The Basics of the Cyber Resilience Act The CRA lays out several specific goals that the act is intended to fulfill: Goal #1: Ensuring fewer vulnerabilities and better protection in IoT devices and products Goal #2: Bringing more responsibility for cybersecurity to the manufacturer Goal #3: Increasing transparency Goal #2 leads directly to the obligations laid out for manufacturers: Cybersecurity must be an aspect of every step of the device or software development life cycle. Risks need to be documented. Manufacturers must report, handle, and patch vulnerabilities for any devices that are sold for the product’s expected lifetime or for five years, whichever comes first. The manufacturer must provide clear and understandable instructions for any products with digital elements. So, to whom does the CRA apply? The answer is anyone who deals in IoT devices, software, manufacturing, development, etc. The standard lays the responsibility for vulnerabilities squarely at the foot of the manufacturers of the IoT device or software product in a way most other standards have yet to stipulate. However, the CRA does not affect all devices and manufacturers equally. Three main categories will dictate how manufacturers apply the standard. The first is the default category, which covers roughly 90 percent of IoT products. Devices and products in this category are non-critical, like smart speakers, non-essential software, etc. The default category doesn’t require a third-party assessment of adherence to the standard, so the category just provides a basis for self-assessment that allows manufacturers to establish best practices for product security. The second and third categories are Critical Class I and Critical Class II, which apply to roughly 10 percent of IoT products. Class I includes password managers, network interfaces, firewalls, microcontrollers, and more. In other words, the vendors and designers of MCUs and the other components included in Class I will have to comply with all of the requirements for that category. Class II is for operating systems, industrial firewalls, MPUs, etc. Again, that means the vendors who develop these operating systems and microprocessors will need to ensure they meet the specific requirements for Class II. The criteria that divide the classes are based on intended functionality (like whether the software will be used for security/access management), intended use (like whether it is for an industrial environment), and breach or vulnerability likelihood, among other criteria. Both Critical Class categories require a third-party assessment for compliance purposes. Importantly, there are penalties for non-compliance, which include the possibility of a full ban on the problematic product, as well as fines of €15 million or 2.5 percent of the annual turnover of the offending company, whichever is higher. Why This Act Matters The Cyber Resilience Act is part of a longstanding and ongoing endeavor by EU governing bodies to ensure a deeper level of cybersecurity in the EU. 
This endeavor is largely in response to a marked increase in ransomware and denial of service attacks since the pandemic and especially since the start of the Russia-Ukraine War. Still, the CRA overlaps with some other standards including the upcoming NIS2 Directive, which is the EU’s blanket cybersecurity legislation. Because of that, it’s easy to think that the CRA doesn’t have much to add, but it actually does. The act is broader than a typical IoT security standard because it also applies to software that is not embedded. That is to say, it applies to the software you might use on your desktop to interact with your IoT device, rather than just applying to the software on the device itself. Since non-embedded software is where many vulnerabilities take place, this is an important change. A second important change is the requirement for five years of security updates and vulnerability reporting. Few consumers who buy an IoT device expect regular software updates and security patches for that type of time range, but both will be a requirement under the CRA. The third important point of the standard is the requirement for some sort of reporting and alerting system for vulnerabilities so that consumers can report vulnerabilities, see the status of security and software updates for devices, and be warned of any risks. The CRA also requires that manufacturers notify the European Union Agency for Cybersecurity (ENISA) of a vulnerability within 24 hours of discovery. These requirements are intended to keep consumers’ data safe, but they will also allow manufacturers to avoid costly breaches. Prepare for Compliance Today The Cyber Resilience Act is in its early stages and, even when it is approved, manufacturers will have two years to comply. So, full compliance will probably not be obligatory until 2025 or 2026. However, that doesn’t mean you shouldn’t start preparing now. When the General Data Protection Regulation (GDPR) came into force in the EU, companies had to make major changes to their operations and the way they handled consumer data, advertising, cookies, and more. This act has the potential to be just as complex and revolutionary in changing the way IoT manufacturers and software providers manage security for their products. What can manufacturers do now to avoid penalties for non-compliance in the future? For starters, there are already technologies available that can help with CRA compliance. The reporting requirements of the EU Cyber Resilience Act are time-sensitive and penalties for non-compliance are high. This means manufacturers have a vested interest in developing efficient ways to communicate discovered vulnerabilities to both consumers and ENISA, as well as to patch those vulnerabilities as quickly as possible. As a result, IoT providers who can utilize a peer-to-peer communication platform that enables remote status reports, updates, and security patches will have a competitive advantage. Additionally, such a platform can allow IoT providers to set up push notifications and security alerts for consumers, enabling the highest level of transparency and communication in case a vulnerability is discovered. It’s also important to keep up with changes to the proposal as laid out by the European Commission, since the CRA as it appears in 2026 may not be the same as the initial draft of the standard. In fact, well-known companies like Microsoft are already making recommendations and critiques of the EU Cyber Resilience Act proposal. 
Many experts believe the CRA is too broad at the moment and too hard to apply, and that it needs stronger definitions, examples, and action plans to be truly effective. If these critiques are followed, compliance could become a bit less complex and a bit easier to understand in the future, so it would be wise to keep informed of any changes. Final Thoughts While the initial shift to CRA compliance may be challenging, various technologies and cybersecurity tools are already available to help. Integrating these tools and choosing to pursue the highest levels of security in your IoT products today will put you well on your way to fulfilling the requirements of the act before it is even in effect. Good luck.

By Carsten Rhod Gregersen
Designing Databases for Distributed Systems

This is an article from DZone's 2023 Database Systems Trend Report. For more: Read the Report Database design is a critical factor in microservices and cloud-native solutions because a microservices-based architecture results in distributed data. Instead of data management happening in a single process, multiple processes can manipulate the data. The rise of cloud computing has made data even more distributed. To deal with this complexity, several data management patterns have emerged for microservices and cloud-native solutions. In this article, we will look at the most important patterns that can help us manage data in a distributed environment. The Challenges of Database Design for Microservices and the Cloud Before we dig into the specific data management patterns, it is important to understand the key challenges with database design for microservices and the cloud: In a microservices architecture, data is distributed across different nodes. Some of these nodes can be in different data centers in completely different geographic regions of the world. In this situation, it is tough to guarantee consistency of data across all the nodes. At any given point in time, there can be differences in the state of data between various nodes. This is also known as the problem of eventual consistency. Since the data is distributed, there's no central authority that manages data as in single-node monolithic systems. It's important for the various participating systems to use a mechanism (e.g., consensus algorithms) for data management. The attack surface for malicious actors is larger in a microservices architecture since there are multiple moving parts. This means we need to establish a more robust security posture while building microservices. The main promise of microservices and the cloud is scalability. While it becomes easier to scale the application processes, it is not so easy to scale the database nodes horizontally. Without proper scalability, databases can turn into performance bottlenecks. Diving Into Data Management Patterns Considering the associated challenges, several patterns are available to manage data in microservices and cloud-native applications. The main job of these patterns is to help developers address the challenges mentioned above. Let's look at each of these patterns one by one. Database per Service As the name suggests, this pattern proposes that each microservice manages its own data. This implies that no microservice can directly access or manipulate the data managed by another. Any exchange or manipulation of data can be done only by using a set of well-defined APIs. The figure below shows an example of a database-per-service pattern. Figure 1: Database-per-service pattern At face value, this pattern seems quite simple. It can be implemented relatively easily when we are starting with a brand-new application. However, when we are migrating an existing monolithic application to a microservices architecture, the demarcation between services is not so clear. Most of the functionality is written in a way that lets different parts of the system access data from other parts informally. There are two main areas we need to focus on when using a database-per-service pattern: Defining bounded contexts for each service Managing business transactions spanning multiple microservices Shared Database The next important pattern is the shared database pattern.
Though this pattern supports microservices architecture, it adopts a much more lenient approach by using a shared database accessible to multiple microservices. For existing applications transitioning to a microservices architecture, this is a much safer pattern, as we can slowly evolve the application layer without changing the database design. However, this approach takes away some benefits of microservices: Developers across teams need to coordinate schema changes to tables. Runtime conflicts may arise when multiple services are trying to access the same database resources. CQRS and Event Sourcing In the command query responsibility segregation (CQRS) pattern, an application listens to domain events from other microservices and updates a separate database for supporting views and queries. We can then serve complex aggregation queries from this separate database while optimizing the performance and scaling it up as needed. Event sourcing takes it a bit further by storing the state of the entity or the aggregate as a sequence of events. Whenever we have an update or an insert on an object, a new event is created and stored in the event store. We can use CQRS and event sourcing together to solve a lot of challenges around event handling and maintaining separate query data. This way, you can scale the writes and reads separately based on their individual requirements. Figure 2: Event sourcing and CQRS in action together On the downside, this is an unfamiliar style of building applications for most developers, and there are more moving parts to manage. Saga Pattern The saga pattern is another solution for handling business transactions across multiple microservices. For example, placing an order on a food delivery app is a business transaction. In the saga pattern, we break this business transaction into a sequence of local transactions handled by different services. For every local transaction, the service that performs the transaction publishes an event. The event triggers a subsequent transaction in another service, and the chain continues until the entire business transaction is completed. If any particular transaction in the chain fails, the saga rolls back by executing a series of compensating transactions that undo the impact of all the previous transactions. There are two types of saga implementations: Orchestration-based saga Choreography-based saga Sharding Sharding helps in building cloud-native applications. It involves separating rows of one table into multiple different tables. This is also known as horizontal partitioning, but when the partitions reside on different nodes, they are known as shards. Sharding helps us improve the read and write scalability of the database. Also, it improves the performance of queries because a particular query must deal with fewer records as a result of sharding. Replication Replication is a very important data management pattern. It involves creating multiple copies of the database. Each copy is identical and runs on a different server or node. Changes made to one copy are propagated to the other copies. This is known as replication. There are several types of replication approaches, such as: Single-leader replication Multi-leader replication Leaderless replication Replication helps us achieve high availability and boosts reliability, and it lets us scale out read operations since read requests can be diverted to multiple servers. Figure 3 below shows sharding and replication working in combination. 
Figure 3: Using sharding and replication together Best Practices for Database Design in a Cloud-Native Environment While these patterns can go a long way in addressing data management issues in microservices and cloud-native architecture, we also need to follow some best practices to make life easier. Here are a few best practices: We must try to design a solution for resilience. This is because faults are inevitable in a microservices architecture, and the design should accommodate failures and recover from them without disrupting the business. We must implement proper migration strategies when transitioning to one of the patterns. Some of the common strategies that can be evaluated are schema first versus data first, blue-green deployments, or using the strangler pattern. Don't ignore backups and well-tested disaster recovery systems. These things are important even for single-node databases. However, in a distributed data management approach, disaster recovery becomes even more important. Constant monitoring and observability are equally important in microservices or cloud-native applications. For example, techniques like sharding can lead to unbalanced partitions and hotspots. Without proper monitoring solutions, any reactions to such situations may come too late and may put the business at risk. Conclusion We can conclude that good database design is absolutely vital in a microservices and cloud-native environment. Without proper design, an application will face multiple problems due to the inherent complexity of distributed data. Multiple data management patterns exist to help us deal with data in a more reliable and scalable manner. However, each pattern has its own challenges and set of advantages and disadvantages. No pattern fits all the possible scenarios, and we should select a particular pattern only after managing the various trade-offs. This is an article from DZone's 2023 Database Systems Trend Report.For more: Read the Report
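As a brief addendum to the sharding pattern described above, here is a minimal, illustrative sketch of hash-based shard routing. The shard count, the JDBC URLs, and the ShardRouter name are assumptions made for this example only; they are not prescribed by the article or tied to any specific database product.
Java
import java.util.List;

// Routes a sharding key (for example, a customer ID) to one of N shards.
// Each shard is identified here by a JDBC URL; replicas of a shard could
// sit behind each URL without changing the routing logic.
public class ShardRouter {

    private final List<String> shardJdbcUrls;

    public ShardRouter(List<String> shardJdbcUrls) {
        this.shardJdbcUrls = shardJdbcUrls;
    }

    // floorMod keeps the index non-negative even for negative hash codes.
    public String shardFor(String shardingKey) {
        int index = Math.floorMod(shardingKey.hashCode(), shardJdbcUrls.size());
        return shardJdbcUrls.get(index);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(List.of(
                "jdbc:postgresql://shard-0:5432/app",
                "jdbc:postgresql://shard-1:5432/app"));
        System.out.println(router.shardFor("customer-42"));
    }
}
The same key always maps to the same shard, which is what lets a query touch fewer records; rebalancing when the number of shards changes is a separate concern (often handled with consistent hashing) that the article does not cover here.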

By Saurabh Dashora CORE
What Is Good Database Design?

This is an article from DZone's 2023 Database Systems Trend Report.For more: Read the Report Good database design is essential to ensure data accuracy, consistency, and integrity and that databases are efficient, reliable, and easy to use. The design must address the storing and retrieving of data quickly and easily while handling large volumes of data in a stable way. An experienced database designer can create a robust, scalable, and secure database architecture that meets the needs of modern data systems. Architecture and Design A modern data architecture for microservices and cloud-native applications involves multiple layers, and each one has its own set of components and preferred technologies. Typically, the foundational layer is constructed as a storage layer, encompassing one or more databases such as SQL, NoSQL, or NewSQL. This layer assumes responsibility for the storage, retrieval, and management of data, including tasks like indexing, querying, and transaction management. To enhance this architecture, it is advantageous to design a data access layer that resides above the storage layer but below the service layer. This data access layer leverages technologies like object-relational mapping or data access objects to simplify data retrieval and manipulation. Finally, at the topmost layer lies the presentation layer, where the information is skillfully presented to the end user. The effective transmission of data through the layers of an application, culminating in its presentation as meaningful information to users, is of utmost importance in a modern data architecture. The goal here is to design a scalable database with the ability to handle a high volume of traffic and data while minimizing downtime and performance issues. By following best practices and addressing a few challenges, we can meet the needs of today's modern data architecture for different applications. Figure 1: Layered architecture Considerations By taking into account the following considerations when designing a database for enterprise-level usage, it is possible to create a robust and efficient system that meets the specific needs of the organization while ensuring data integrity, availability, security, and scalability. One important consideration is the data that will be stored in the database. This involves assessing the format, size, complexity, and relationships between data entities. Different types of data may require specific storage structures and data models. For instance, transactional data often fits well with a relational database model, while unstructured data like images or videos may require a NoSQL database model. The frequency of data retrieval or access plays a significant role in determining the design considerations. In read-heavy systems, implementing a cache for frequently accessed data can enhance query response times. Conversely, the emphasis may be on lower data retrieval frequencies for data warehouse scenarios. Techniques such as indexing, caching, and partitioning can be employed to optimize query performance. Ensuring the availability of the database is crucial for maintaining optimal application performance. Techniques such as replication, load balancing, and failover are commonly used to achieve high availability. Additionally, having a robust disaster recovery plan in place adds an extra layer of protection to the overall database system. As data volumes grow, it is essential that the database system can handle increased loads without compromising performance. 
Employing techniques like partitioning, sharding, and clustering allows for effective scalability within a database system. These approaches enable the efficient distribution of data and workload across multiple servers or nodes. Data security is a critical consideration in modern database design, given the rising prevalence of fraud and data breaches. Implementing robust access controls, encryption mechanisms for sensitive personally identifiable information, and conducting regular audits are vital for enhancing the security of a database system. In transaction-heavy systems, maintaining consistency in transactional data is paramount. Many databases provide features such as appropriate locking mechanisms and transaction isolation levels to ensure data integrity and consistency. These features help to prevent issues like concurrent data modifications and inconsistencies. Challenges Determining the most suitable tool or technology for our database needs can be a challenge due to the rapid growth and evolving nature of the database landscape. With different types of databases emerging daily and even variations among vendors offering the same type, it is crucial to plan carefully based on your specific use cases and requirements. By thoroughly understanding our needs and researching the available options, we can identify the right tool with the appropriate features to meet our database needs effectively. Polyglot persistence is a consideration that arises from the demand of certain applications, leading to the use of multiple SQL or NoSQL databases. Selecting the right databases for transactional systems, ensuring data consistency, handling financial data, and accommodating high data volumes pose challenges. Careful consideration is necessary to choose the appropriate databases that can fulfill the specific requirements of each aspect while maintaining overall system integrity. Integrating data from different upstream systems, each with its own structure and volume, presents a significant challenge. The goal is to achieve a single source of truth by harmonizing and integrating the data effectively. This process requires comprehensive planning to ensure compatibility and future-proofing the integration solution to accommodate potential changes and updates. Performance is an ongoing concern in both applications and database systems. Every addition to the database system can potentially impact performance. To address performance issues, it is essential to follow best practices when adding, managing, and purging data, as well as properly indexing, partitioning, and implementing encryption techniques. By employing these practices, you can mitigate performance bottlenecks and optimize the overall performance of your database system. Considering these factors will contribute to making informed decisions and designing an efficient and effective database system for your specific requirements. Advice for Building Your Architecture Goals for a better database design should include efficiency, scalability, security, and compliance. In the table below, each goal is accompanied by its corresponding industry expectation, highlighting the key aspects that should be considered when designing a database for optimal performance, scalability, security, and compliance. GOALS FOR DATABASE DESIGN Goal Industry Expectation Efficiency Optimal performance and responsiveness of the database system, minimizing latency and maximizing throughput. Efficient handling of data operations, queries, and transactions. 
Scalability Ability to handle increasing data volumes, user loads, and concurrent transactions without sacrificing performance. Scalable architecture that allows for horizontal or vertical scaling to accommodate growth. Security Robust security measures to protect against unauthorized access, data breaches, and other security threats. Implementation of access controls, encryption, auditing mechanisms, and adherence to industry best practices and compliance regulations. Compliance Adherence to relevant industry regulations, standards, and legal requirements. Ensuring data privacy, confidentiality, and integrity. Implementing data governance practices and maintaining audit trails to demonstrate compliance. Table 1 When building your database architecture, it's important to consider several key factors to ensure the design is effective and meets your specific needs. Start by clearly defining the system's purpose, data types, volume, access patterns, and performance expectations. Consider clear requirements that provide clarity on the data to be stored and the relationships between the data entities. This will help ensure that the database design aligns with quality standards and conforms to your requirements. Also consider normalization, which enables efficient storage use by minimizing redundant data, improves data integrity by enforcing consistency and reliability, and facilitates easier maintenance and updates. Selecting the right database model or opting for polyglot persistence support is crucial to ensure the database aligns with your specific needs. This decision should be based on the requirements of your application and the data it handles. Planning for future growth is essential to accommodate increasing demand. Consider scalability options that allow your database to handle growing data volumes and user loads without sacrificing performance. Alongside growth, prioritize data protection by implementing industry-standard security recommendations and ensuring appropriate access levels are in place and encourage implementing IT security measures to protect the database from unauthorized access, data theft, and security threats. A good back-up system is a testament to the efficiency of a well-designed database. Regular backups and data synchronization, both on-site and off-site, provide protection against data loss or corruption, safeguarding your valuable information. To validate the effectiveness of your database design, test the model using sample data from real-world scenarios. This testing process will help validate the performance, reliability, and functionality of the database system you are using, ensuring it meets your expectations. Good documentation practices play a vital role in improving feedback systems and validating thought processes and implementations during the design and review phases. Continuously improving documentation will aid in future maintenance, troubleshooting, and system enhancement efforts. Primary and secondary keys contribute to data integrity and consistency. Use indexes to optimize database performance by indexing frequently queried fields and limiting the number of fields returned in queries. Regularly backing up the database protects against data loss during corruption, system failure, or other unforeseen circumstances. Data archiving and purging practices help remove infrequently accessed data, reducing the size of the active dataset. Proper error handling and logging aid in debugging, troubleshooting, and system maintenance. 
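As a small illustration of the indexing advice above, the sketch below adds an index on a frequently queried column using plain JDBC. The connection URL, credentials, and the orders/customer_id names are hypothetical, and the CREATE INDEX IF NOT EXISTS syntax assumes PostgreSQL.
Java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Creates an index on a column that is frequently used in WHERE clauses,
// so lookups such as "SELECT id, status FROM orders WHERE customer_id = ?"
// no longer require a full table scan.
public class CreateOrderIndex {

    public static void main(String[] args) throws SQLException {
        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/appdb", "app_user", "app_password");
             Statement statement = connection.createStatement()) {
            statement.execute(
                "CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id)");
        }
    }
}
Indexes speed up reads at the cost of slightly slower writes and extra storage, so they are best reserved for the fields that queries actually filter or join on.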
Regular maintenance is crucial for growing database systems. Plan and schedule regular backups, perform performance tuning, and stay up to date with software upgrades to ensure optimal database performance and stability. Conclusion Designing a modern data architecture that can handle the growing demands of today's digital world is not an easy job. However, if you follow best practices and take advantage of the latest technologies and techniques, it is very much possible to build a scalable, flexible, and secure database. It just requires the right mindset and your commitment to learning and improving with a proper feedback loop. Additional reading: Semantic Modeling for Data: Avoiding Pitfalls and Breaking Dilemmas by Panos Alexopoulos Learn PostgreSQL: Build and manage high-performance database solutions using PostgreSQL 12 and 13 by Luca Ferrari and Enrico Pirozzi Designing Data-Intensive Applications by Martin Kleppmann This is an article from DZone's 2023 Database Systems Trend Report.For more: Read the Report

By Manas Dash CORE
GenAI-Infused ChatGPT: A Guide To Effective Prompt Engineering

In today's world, interacting with AI systems like ChatGPT has become an everyday experience. These AI systems can understand and respond to us in a more human-like way. But how do they do it? That's where prompt engineering comes in. Think of prompt engineering as the instruction manual for AI. It tells AI systems like ChatGPT how to understand what we want and respond appropriately. It's like giving clear directions to a helpful friend. In this guide, we're going to explore prompt engineering, with a special focus on how it combines with something called GenAI. GenAI is like the secret sauce that makes AI even smarter. By mixing GenAI with ChatGPT and prompt engineering, we can make AI understand and talk to us even better. Whether you're new to this world or an expert, this guide will show you the ropes. We'll dive into the tricks of prompt design, look at what's right and wrong, and share ways to make ChatGPT perform its best with GenAI. So, let's embark on this journey to make AI, like ChatGPT, even more amazing with the help of prompt engineering and GenAI. What Is Prompt Engineering? Prompt engineering is the art of crafting clear and precise instructions or inputs given to AI models, such as ChatGPT, to guide their responses effectively. It serves as the bridge between human communication and AI understanding. Imagine you're chatting with an AI chatbot, and you want it to tell you a joke. The prompt is the message you send to the chatbot, like saying, "Tell me a funny joke." It helps the chatbot understand your request and respond with a joke. In essence, prompt engineering ensures that AI knows what to do when you talk to it. Importance of Prompt Engineering Prompt engineering plays a pivotal role in AI interactions for several key reasons: Effective communication: Good prompt engineering ensures that users can communicate their needs clearly to AI models, leading to more accurate and relevant responses. Example: Asking ChatGPT, "Can you summarize the key points of the latest climate change report for a general audience?" is a clear prompt that conveys the desired task. Bias mitigation: Well-crafted prompts can help reduce biases in AI responses by guiding models to provide fair and unbiased answers. Example: Using a prompt like "Provide an overview of the benefits and drawbacks of various renewable energy sources" ensures a balanced response. Improved performance: Proper prompts can enhance the performance of AI models, making them more useful and accurate in delivering information or completing tasks. Example: When instructing ChatGPT to "Explain the principles of machine learning in simple terms," the prompt's clarity aids in effective communication. Ethical use: Prompt engineering plays a crucial role in ensuring that AI systems are used ethically and responsibly, avoiding harmful or inappropriate responses. Example: Instructing ChatGPT to "Avoid generating offensive content or engaging in harmful discussions" sets ethical boundaries. Customization: It allows users to customize AI responses to specific tasks or contexts, making the technology more versatile and adaptable. Example: Crafting a prompt like "Summarize the key findings of the research paper on sustainable agriculture" tailors the response to a specific task. Effective Prompts: What Works A well-constructed prompt is a vital ingredient in prompt engineering. 
Here's an example of an effective prompt: Good Prompt: "Explain the principles of thermodynamics and their applications in mechanical engineering, focusing on the concept of energy conservation and providing real-world examples." In this prompt, the following elements contribute to its effectiveness: Task definition: The task is well-defined (explaining thermodynamics principles and their applications). Field specification: The specific field of study is mentioned (mechanical engineering). Contextual clarity: The user's request for real-world examples adds clarity and context, making it an effective prompt. Ineffective Prompts: What to Avoid Conversely, ineffective prompts can hinder prompt engineering efforts. Here's an example of a poorly constructed prompt: Bad Prompt: "Explain Thermodynamics?" This prompt exhibits several shortcomings: Vagueness: It's too vague and lacks clarity. It doesn't specify what aspect of thermodynamics the user is interested in or what level of detail is expected. Consequently, it's unlikely to yield a meaningful or informative response in the context of technical education. The Prompt Framework Prompt engineering is like providing a set of rules and guidelines to an AI system, enabling it to understand and execute tasks effectively. Think of it as having a manual that instructs you on how to communicate with a computer or AI system using words. This framework ensures that the AI comprehends your instructions, leading to accurate and desired outcomes. The framework essentially consists of three major principles: Subject: Define what you want the computer or AI to do. For example, if you want it to translate a sentence, you need to specify that. Example: "Emerging Quality Engineering technologies" Define the Task: Be clear about what you expect the computer or AI to achieve. If it's a summary, you should say that. Example: "Write me a blog on the Emerging Quality Engineering technologies" Clear Instruction: Give the computer clear and specific directions so it knows exactly what to do. Example: "The blog should be 500 to 700 words, in a persuasive and informative tone, and include at least seven benefits of the importance of Quality Engineers in today’s tech world." Offering Context: Sometimes, you might need to provide additional information or context to help the computer understand your request better. Example: "Imagine you are creating this blog post for people looking to start/sustain their career in the field of Quality Engineering." Prompt Framework Template (Copy/Paste this template for further use.) Subject: [Subject] Task: [Task] Instruction: The [type of content] should be between [word count range], written in a [tone], and include at least [number] [specific details]. Context: Imagine you are creating this [type of content] targeting [target audience]. Prompt Framework Sample Subject: Emerging Quality Engineering Technologies Task: Write me a blog on the Emerging Quality Engineering Technologies Instruction: The blog host should be 500 to 700 words in a persuasive and informative tone and include at least seven benefits of the importance of Quality Engineers in today’s tech world Context: Imagine you are creating this blog post for people looking to start/sustain their career in the field of Quality Engineering (Please use the above sample in ChatGPT to see amazing results that attract Quality Engineering Techies.) Prompt Iteration Ask ChatGPT questions until you have the information you need by rephrasing and giving more context. Why? 
By asking more questions, you build a better understanding between AI and you for more accurate answers. When? When you have information but don’t know how to get the best answers or formulations yourself. Sample Prompt 1: "I am a manager looking to improve my team's collaboration and communication. Can you suggest some strategies to enhance teamwork and create a more productive work environment? If I need more clarification or ideas, I’ll ask further questions." We can continue the conversation by adding additional prompts to the same conversation by typing a new prompt or clicking on the edit icon near the previous prompt. Prompt 2: "I forgot to tell you that I am a manager in a tech organization." 4th Grader Explanation Explaining in a 4th-grade style involves breaking down information or concepts into simpler terms and providing straightforward and comprehensible descriptions to help others grasp the topic easily. For example, explain Chaos theory and its benefits. Before Using 4th Grader Chaos theory is a scientific concept that studies complex systems. It helps us understand how seemingly random events in nature might have some order or predictability. Its benefit is that it allows scientists to make sense of chaotic phenomena, from weather patterns to the behavior of molecules. After Using 4th Grader Chaos theory is like solving puzzles in nature. It helps scientists understand things that look messy and unpredictable. The cool part is it lets them find hidden patterns in things like weather and tiny particles. Follow-Up Questions Follow-up questions in prompt engineering refer to additional questions or prompts that are used to provide clarification or gather more specific information when interacting with AI models or systems. These questions are asked in a conversation to guide the AI's responses and ensure that the desired information or action is obtained. Scenario 1: Virtual Assistant for Travel Planning User Prompt: "I'd like to plan a vacation in December." AI Follow-up Question: "Great choice! Could you please specify your preferred dates and any specific activities or attractions you'd like to include in your itinerary?" Scenario 2: Content Generation User Prompt: "Write an article about renewable energy sources." AI Follow-up Question: "Sure, I can do that. Do you have a specific angle or focus in mind for the article, such as solar power, wind energy, or sustainable practices?" Scenario 3: Language Translation User Prompt: "Translate this text from English to Spanish: 'The quick brown fox jumps over the lazy dog.'" AI Follow-up Question: "Got it! Is there a particular dialect or region you'd like the translation to be in, or should I provide a standard translation?" Scenario 4: Code Generation User Prompt: "Write a Python code snippet to calculate the factorial of a number." AI Follow-up Question: "Sure, I can do that. Do you have a specific number you'd like to calculate the factorial for, or should I provide a general code template?" These examples illustrate how follow-up questions can help refine user requests and gather the necessary details to generate more tailored and accurate responses from AI systems. Teach Me Instruct ChatGPT to provide step-by-step instructions or explanations on various topics, helping users learn new skills or gain knowledge. "Teach me" requests typically seek clear and informative responses to specific questions or tasks. 
For Example "Teach me as a video editor professional how to use Premiere Pro to edit marketing videos step by step, excluding details about program installation or video importing." Write Like Me "Write like me" prompts instruct ChatGPT to mimic a specific writing style, whether it's a personal style or a brand's unique voice. This approach is valuable for maintaining a consistent brand identity and creating content that resonates with the intended audience. For Example "Write a cover letter for a marketing position using the same tone and language style found in my resume and previous cover letters." Conclusion Effective prompt engineering is essential for making AI work better and understand us. With the power of GenAI, we can take AI interactions to the next level. As you explore prompt engineering and AI, remember that you hold the key to making AI smarter. Together, we can bridge the gap between humans and machines, making AI not just smart but insightful. Thank you for joining us on this journey into the world of prompt engineering, where possibilities are limitless.
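As a brief addendum for readers who assemble prompts in code rather than by hand, the Subject/Task/Instruction/Context framework described above maps naturally onto a small string-building helper. The sketch below is a hypothetical illustration only; it does not call ChatGPT or any other AI service, and the class and field names are illustrative, not part of the guide.
Java
// Renders the prompt framework (Subject, Task, Instruction, Context)
// into a single prompt string that can be pasted into ChatGPT or sent
// to whichever AI client you use.
public record PromptTemplate(String subject, String task, String instruction, String context) {

    public String render() {
        return String.join("\n",
                "Subject: " + subject,
                "Task: " + task,
                "Instruction: " + instruction,
                "Context: " + context);
    }

    public static void main(String[] args) {
        PromptTemplate prompt = new PromptTemplate(
                "Emerging Quality Engineering Technologies",
                "Write me a blog on the Emerging Quality Engineering Technologies",
                "The blog post should be 500 to 700 words, in a persuasive and informative tone, "
                        + "and include at least seven benefits of the importance of Quality Engineers in today's tech world.",
                "Imagine you are creating this blog post for people looking to start/sustain their career "
                        + "in the field of Quality Engineering.");
        System.out.println(prompt.render());
    }
}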

By Bala Murugan
A Better Web3 Experience: Account Abstraction From Flow (Part 2)

In part one of this two-part series, we looked at how walletless dApps smooth the web3 user experience by abstracting away the complexities of blockchains and wallets. Thanks to account abstraction from Flow and the Flow Wallet API, we can easily build walletless dApps that enable users to sign up with credentials that they're accustomed to using (such as social logins or email accounts). We began our walkthrough by building the backend of our walletless dApp. Here in part two, we'll wrap up our walkthrough by building the front end. Here we go! Create a New Next.js Application Let's use the Next.js framework so we have the frontend and backend in one application. On our local machine, we will use create-next-app to bootstrap our application. This will create a new folder for our Next.js application. We run the following command: Shell $ npx create-next-app flow_walletless_app Some options will appear; you can mark them as follows (or as you prefer!). Make sure to choose No for using Tailwind CSS and the App Router. This way, your folder structure and style references will match what I demo in the rest of this tutorial. Shell ✔ Would you like to use TypeScript with this project? ... Yes ✔ Would you like to use ESLint with this project? ... No ✔ Would you like to use Tailwind CSS with this project? ... No <-- IMPORTANT ✔ Would you like to use `src/` directory with this project? ... No ✔ Use App Router (recommended)? ... No <-- IMPORTANT ✔ Would you like to customize the default import alias? ... No Start the development server. Shell $ npm run dev The application will run on port 3001 because the default port (3000) is occupied by our wallet API running through Docker. Set Up Prisma for Backend User Management We will use the Prisma library as an ORM to manage our database. When a user logs in, we store their information in a database at a user entity. This contains the user's email, token, Flow address, and other information. The first step is to install the Prisma dependencies in our Next.js project: Shell $ npm install prisma --save-dev To use Prisma, we need to initialize the Prisma Client. Run the following command: Shell $ npx prisma init The above command will create two files: prisma/schema.prisma: The main Prisma configuration file, which will host the database configuration .env: Will contain the database connection URL and other environment variables Configure the Database Used by Prisma We will use SQLite as the database for our Next.js application. Open the schema.prisma file and change the datasource db settings as follows: Shell datasource db { provider = "sqlite" url = env("DATABASE_URL") } Then, in our .env file for the Next.js application, we will change the DATABASE_URL field. Because we’re using SQLite, we need to define the location (which, for SQLite, is a file) where the database will be stored in our application: Shell DATABASE_URL="file:./dev.db" Create a User Model Models represent entities in our app. The model describes how the data should be stored in our database. Prisma takes care of creating tables and fields. Let’s add the following User model in out schema.prisma file: Shell model User { id Int @id @default(autoincrement()) email String @unique name String? flowWalletJobId String? flowWalletAddress String? createdAt DateTime @default(now()) updatedAt DateTime @updatedAt } With our model created, we need to synchronize with the database. 
For this, Prisma offers a command: Shell $ npx prisma db push Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma Datasource "db": SQLite database "dev.db" at "file:./dev.db" SQLite database dev.db created at file:./dev.db -> Your database is now in sync with your Prisma schema. Done in 15ms After successfully pushing our users table, we can use Prisma Studio to track our database data. Run the command: Shell $ npx prisma studio Set up the Prisma Client That's it! Our entity and database configuration are complete. Now let's go to the client side. We need to install the Prisma client dependencies in our Next.js app. To do this, run the following command: Shell $ npm install @prisma/client Generate the client from the Prisma schema file: Shell $ npx prisma generate Create a folder named lib in the root folder of your project. Within that folder, create a file entitled prisma.ts. This file will host the client connection. Paste the following code into that file: TypeScript // lib/prisma.ts import { PrismaClient } from '@prisma/client'; let prisma: PrismaClient; if (process.env.NODE_ENV === "production") { prisma = new PrismaClient(); } else { let globalWithPrisma = global as typeof globalThis & { prisma: PrismaClient; }; if (!globalWithPrisma.prisma) { globalWithPrisma.prisma = new PrismaClient(); } prisma = globalWithPrisma.prisma; } export default prisma; Build the Next.js Application Frontend Functionality With our connection on the client part finalized, we can move on to the visual part of our app! Replace the code inside pages/index.tsx file, delete all lines of code and paste in the following code: TypeScript # pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", } > <button style={{ padding: "20px", width: 'auto' }>Sign Up</button> <button style={{ padding: "20px" }>Sign Out</button> </div> </div> </main> </> ); } In this way, we have the basics and the necessities to illustrate the creation of wallets and accounts! The next step is to configure the Google client to use the Google API to authenticate users. Set up Use of Google OAuth for Authentication We will need Google credentials. For that, open your Google console. Click Create Credentials and select the OAuth Client ID option. Choose Web Application as the application type and define a name for it. We will use the same name: flow_walletless_app. Add http://localhost:3001/api/auth/callback/google as the authorized redirect URI. Click on the Create button. A modal should appear with the Google credentials. We will need the Client ID and Client secret to use in our .env file shortly. Next, we’ll add the next-auth package. 
To do this, run the following command: Shell $ npm i next-auth Open the .env file and add the following new environment variables to it: Shell GOOGLE_CLIENT_ID= <GOOGLE CLIENT ID> GOOGLE_CLIENT_SECRET=<GOOGLE CLIENT SECRET> NEXTAUTH_URL=http://localhost:3001 NEXTAUTH_SECRET=<YOUR NEXTAUTH SECRET> Paste in your copied Google Client ID and Client Secret. The NextAuth secret can be generated via the terminal with the following command: Shell $ openssl rand -base64 32 Copy the result, which should be a random string of letters, numbers, and symbols. Use this as your value for NEXTAUTH_SECRET in the .env file. Configure NextAuth to Use Google Next.js allows you to create serverless API routes without creating a full backend server. Each file under api is treated like an endpoint. Inside the pages/api/ folder, create a new folder called auth. Then create a file in that folder, called [...nextauth].ts, and add the code below: TypeScript // pages/api/auth/[...nextauth].ts import NextAuth from "next-auth" import GoogleProvider from "next-auth/providers/google"; export default NextAuth({ providers: [ GoogleProvider({ clientId: process.env.GOOGLE_CLIENT_ID as string, clientSecret: process.env.GOOGLE_CLIENT_SECRET as string, }) ], }) Update _app.tsx file to use NextAuth SessionProvider Modify the _app.tsx file found inside the pages folder by adding the SessionProvider from the NextAuth library. Your file should look like this: TypeScript // pages/_app.tsx import "@/styles/globals.css"; import { SessionProvider } from "next-auth/react"; import type { AppProps } from "next/app"; export default function App({ Component, pageProps }: AppProps) { return ( <SessionProvider session={pageProps.session}> <Component {...pageProps} /> </SessionProvider> ); } Update the Main Page To Use NextAuth Functions Let us go back to our index.tsx file in the pages folder. We need to import the functions from the NextAuth library and use them to log users in and out. Our update index.tsx file should look like this: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; import { useSession, signIn, signOut } from "next-auth/react"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { const { data: session } = useSession(); console.log("session data",session) const signInWithGoogle = () => { signIn(); }; const signOutWithGoogle = () => { signOut(); }; return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", } > <button onClick={signInWithGoogle} style={{ padding: "20px", width: "auto" }>Sign Up</button> <button onClick={signOutWithGoogle} style={{ padding: "20px" }>Sign Out</button> </div> </div> </main> </> ); } Build the “Create User” Endpoint Let us now create a users folder underneath pages/api. Inside this new folder, create a file called index.ts. 
This file is responsible for: Creating a user (first we check if this user already exists) Calling the Wallet API to create a wallet for this user Calling the Wallet API and retrieving the jobId data if the User entity does not yet have the address created These actions are performed within the handle function, which calls the checkWallet function. Paste the following snippet into your index.ts file: TypeScript // pages/api/users/index.ts import { User } from "@prisma/client"; import { BaseNextRequest, BaseNextResponse } from "next/dist/server/base-http"; import prisma from "../../../lib/prisma"; export default async function handle( req: BaseNextRequest, res: BaseNextResponse ) { const userEmail = JSON.parse(req.body).email; const userName = JSON.parse(req.body).name; try { const user = await prisma.user.findFirst({ where: { email: userEmail, }, }); if (user == null) { await prisma.user.create({ data: { email: userEmail, name: userName, flowWalletAddress: null, flowWalletJobId: null, }, }); } else { await checkWallet(user); } } catch (e) { console.log(e); } } const checkWallet = async (user: User) => { const jobId = user.flowWalletJobId; const address = user.flowWalletAddress; if (address != null) { return; } if (jobId != null) { const request: any = await fetch(`http://localhost:3000/v1/jobs/${jobId}`, { method: "GET", }); const jsonData = await request.json(); if (jsonData.state === "COMPLETE") { const address = await jsonData.result; await prisma.user.update({ where: { id: user.id, }, data: { flowWalletAddress: address, }, }); return; } if (request.data.state === "FAILED") { const request: any = await fetch("http://localhost:3000/v1/accounts", { method: "POST", }); const jsonData = await request.json(); await prisma.user.update({ where: { id: user.id, }, data: { flowWalletJobId: jsonData.jobId, }, }); return; } } if (jobId == null) { const request: any = await fetch("http://localhost:3000/v1/accounts", { method: "POST", }); const jsonData = await request.json(); await prisma.user.update({ where: { id: user.id, }, data: { flowWalletJobId: jsonData.jobId, }, }); return; } }; POST requests to the api/users path will result in calling the handle function. We’ll get to that shortly, but first, we need to create another endpoint for retrieving existing user information. Build the “Get User” Endpoint We’ll create another file in the pages/api/users folder, called getUser.ts. This file is responsible for finding a user in our database based on their email. Copy the following snippet and paste it into getUser.ts: TypeScript // pages/api/users/getUser.ts import prisma from "../../../lib/prisma"; export default async function handle( req: { query: { email: string; }; }, res: any ) { try { const { email } = req.query; const user = await prisma.user.findFirst({ where: { email: email, }, }); return res.json(user); } catch (e) { console.log(e); } } And that's it! With these two files in the pages/api/users folder, we are ready for our Next.js application frontend to make calls to our backend. Add “Create User” and “Get User” Functions to Main Page Now, let’s go back to the pages/index.tsx file to add the new functions that will make the requests to the backend. 
Replace the contents of index.tsx file with the following snippet: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; import { useSession, signIn, signOut } from "next-auth/react"; import { useEffect, useState } from "react"; import { User } from "@prisma/client"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { const { data: session } = useSession(); const [user, setUser] = useState<User | null>(null); const signInWithGoogle = () => { signIn(); }; const signOutWithGoogle = () => { signOut(); }; const getUser = async () => { const response = await fetch( `/api/users/getUser?email=${session?.user?.email}`, { method: "GET", } ); const data = await response.json(); setUser(data); return data?.flowWalletAddress != null ? true : false; }; console.log(user) const createUser = async () => { await fetch("/api/users", { method: "POST", body: JSON.stringify({ email: session?.user?.email, name: session?.user?.name }), }); }; useEffect(() => { if (session) { getUser(); createUser(); } }, [session]); return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", } > {user ? ( <div> <h5 className={inter.className}>User Name: {user.name}</h5> <h5 className={inter.className}>User Email: {user.email}</h5> <h5 className={inter.className}>Flow Wallet Address: {user.flowWalletAddress ? user.flowWalletAddress : 'Creating address...'}</h5> </div> ) : ( <button onClick={signInWithGoogle} style={{ padding: "20px", width: "auto" } > Sign Up </button> )} <button onClick={signOutWithGoogle} style={{ padding: "20px" }> Sign Out </button> </div> </div> </main> </> ); } We have added two functions: getUser searches the database for a user with the email logged in. createUser creates a user or updates it if it does not have an address yet. We also added a useEffect that checks if the user is logged in with their Google account. If so, the getUser function is called, returning true if the user exists and has a registered email address. If not, we call the createUser function, which makes the necessary checks and calls. Test Our Next.js Application Finally, we restart our Next.js application with the following command: Shell $ npm run dev You can now sign in with your Google account, and the app will make the necessary calls to our wallet API to create a Flow Testnet address! This is the first step in the walletless Flow process! By following these instructions, your app will create users and accounts in a way that is convenient for the end user. But the wallet API does not stop there. You can do much more with it, such as execute and sign transactions, run scripts to fetch data from the blockchain, and more. Conclusion Account abstraction and walletless onboarding in Flow offer developers a unique solution. By being able to delegate control over accounts, Flow allows developers to create applications that provide users with a seamless onboarding experience. This will hopefully lead to greater adoption of dApps and a new wave of web3 users.

By Alvin Lee CORE
Monkey-Patching in Java

The JVM is an excellent platform for monkey-patching. Monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. A monkey patch (also spelled monkey-patch, MonkeyPatch) is a way to extend or modify the runtime code of dynamic languages (e.g. Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, etc.) without altering the original source code. — Wikipedia I want to demo several approaches for monkey-patching in Java in this post. As an example, I'll use a sample for-loop. Imagine we have a class and a method. We want to call the method multiple times without doing it explicitly. The Decorator Design Pattern While the Decorator Design Pattern is not monkey-patching, it's an excellent introduction to it anyway. Decorator is a structural pattern described in the foundational book, Design Patterns: Elements of Reusable Object-Oriented Software. The decorator pattern is a design pattern that allows behavior to be added to an individual object, dynamically, without affecting the behavior of other objects from the same class. — Decorator pattern Our use-case is a Logger interface with a dedicated console implementation: We can implement it in Java like this: Java public interface Logger { void log(String message); } public class ConsoleLogger implements Logger { @Override public void log(String message) { System.out.println(message); } } Here's a simple, configurable decorator implementation: Java public class RepeatingDecorator implements Logger { //1 private final Logger logger; //2 private final int times; //3 public RepeatingDecorator(Logger logger, int times) { this.logger = logger; this.times = times; } @Override public void log(String message) { for (int i = 0; i < times; i++) { //4 logger.log(message); } } } Must implement the interface Underlying logger Loop configuration Call the method as many times as necessary Using the decorator is straightforward: Java var logger = new ConsoleLogger(); var threeTimesLogger = new RepeatingDecorator(logger, 3); threeTimesLogger.log("Hello world!"); The Java Proxy The Java Proxy is a generic decorator that allows attaching dynamic behavior: Proxy provides static methods for creating objects that act like instances of interfaces but allow for customized method invocation. — Proxy Javadoc The Spring Framework uses Java Proxies a lot. It's the case of the @Transactional annotation. If you annotate a method, Spring creates a Java Proxy around the encasing class at runtime. When you call it, Spring calls the proxy instead. Depending on the configuration, it opens the transaction or joins an existing one, then calls the actual method, and finally commits (or rollbacks). 
The API is simple: We can write the following handler: Java public class RepeatingInvocationHandler implements InvocationHandler { private final Logger logger; //1 private final int times; //2 public RepeatingInvocationHandler(Logger logger, int times) { this.logger = logger; this.times = times; } @Override public Object invoke(Object proxy, Method method, Object[] args) throws Exception { if (method.getName().equals("log") && args.length == 1 && args[0] instanceof String) { //3 for (int i = 0; i < times; i++) { method.invoke(logger, args[0]); //4 } } return null; } } Underlying logger Loop configuration Check every requirement is upheld Call the initial method on the underlying logger Here's how to create the proxy: Java var logger = new ConsoleLogger(); var proxy = (Logger) Proxy.newProxyInstance( //1-2 Main.class.getClassLoader(), new Class[]{Logger.class}, //3 new RepeatingInvocationHandler(logger, 3)); //4 proxy.log("Hello world!"); Create the Proxy object We must cast to Logger as the API was created before generics, and it returns an Object Array of interfaces the object needs to conform to Pass our handler Instrumentation Instrumentation is the capability of the JVM to transform bytecode before it loads it via a Java agent. Two Java agent flavors are available: Static, with the agent passed on the command line when you launch the application Dynamic, which allows connecting to a running JVM and attaching an agent to it via the Attach API. Note that dynamic attach represents a huge security issue and has been drastically limited in the latest JDK. The Instrumentation API's surface is limited: As seen above, the API exposes the user to low-level bytecode manipulation via byte arrays. It would be unwieldy to do it directly. Hence, real-life projects rely on bytecode manipulation libraries. ASM has been the traditional library for this, but it seems that Byte Buddy has superseded it. Note that Byte Buddy uses ASM but provides a higher-level abstraction. The Byte Buddy API is outside the scope of this blog post, so let's dive directly into the code: Java public class Repeater { public static void premain(String arguments, Instrumentation instrumentation) { //1 var withRepeatAnnotation = isAnnotatedWith(named("ch.frankel.blog.instrumentation.Repeat")); //2 new AgentBuilder.Default() //3 .type(declaresMethod(withRepeatAnnotation)) //4 .transform((builder, typeDescription, classLoader, module, domain) -> builder //5 .method(withRepeatAnnotation) //6 .intercept( //7 SuperMethodCall.INSTANCE //8 .andThen(SuperMethodCall.INSTANCE) .andThen(SuperMethodCall.INSTANCE)) ).installOn(instrumentation); //3 } } Required signature; it's similar to the main method, with the added Instrumentation argument Match elements annotated with the @Repeat annotation. The DSL reads fluently even if you don't know it (I don't). Byte Buddy provides a builder to create the Java agent Match all types that declare a method with the @Repeat annotation Transform the class accordingly Transform methods annotated with @Repeat Replace the original implementation with the following Call the original implementation three times The next step is to create the Java agent package. A Java agent is a regular JAR with specific manifest attributes.
Let's configure Maven to build the agent: XML <plugin> <artifactId>maven-assembly-plugin</artifactId> <!--1--> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> <!--2--> </descriptorRefs> <archive> <manifestEntries> <Premain-Class>ch.frankel.blog.instrumentation.Repeater</Premain-Class> <!--3--> </manifestEntries> </archive> </configuration> <executions> <execution> <goals> <goal>single</goal> </goals> <phase>package</phase> <!--4--> </execution> </executions> </plugin> Use the Maven Assembly plugin Create a JAR containing all dependencies Set the Premain-Class manifest entry to our agent class Bind the assembly execution to the package phase Testing is more involved, as we need two different codebases, one for the agent and one for the regular code with the annotation. Let's create the agent first: Shell mvn install We can then run the app with the agent: Shell java -javaagent:/Users/nico/.m2/repository/ch/frankel/blog/agent/1.0-SNAPSHOT/agent-1.0-SNAPSHOT-jar-with-dependencies.jar \ #1 -cp ./target/classes #2 ch.frankel.blog.instrumentation.Main #3 Run Java with the agent created in the previous step. The JVM will run the premain method of the class configured in the agent Configure the classpath Set the main class Aspect-Oriented Programming The idea behind AOP is to apply some code across different unrelated object hierarchies - cross-cutting concerns. It's a valuable technique in languages that don't allow traits, code you can graft onto third-party objects/classes. Fun fact: I learned about AOP before Proxy. AOP relies on two main concepts: an aspect is the transformation applied to code, while a pointcut matches where the aspect applies. In Java, AOP's historical implementation is the excellent AspectJ library. AspectJ provides two approaches, known as weaving: build-time weaving, which transforms the compiled bytecode, and runtime weaving, which relies on the above instrumentation. Either way, AspectJ uses a specific format for aspects and pointcuts. Before Java 5, the format looked like Java but not quite; for example, it used the aspect keyword. With Java 5, one can use annotations in regular Java code to achieve the same goal. We need an AspectJ dependency: XML <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjrt</artifactId> <version>1.9.19</version> </dependency> Like Byte Buddy, AspectJ also uses ASM underneath. Here's the code: Java @Aspect //1 public class RepeatingAspect { @Pointcut("@annotation(repeat) && call(* *(..))") //2 public void callAt(Repeat repeat) {} //3 @Around("callAt(repeat)") //4 public Object around(ProceedingJoinPoint pjp, Repeat repeat) throws Throwable { //5 for (int i = 0; i < repeat.times(); i++) { //6 pjp.proceed(); //7 } return null; } } Mark this class as an aspect Define the pointcut; every call to a method annotated with @Repeat Bind the @Repeat annotation to the repeat name used in the annotation above Define the aspect applied to the call site; it's an @Around, meaning that we need to call the original method explicitly The signature uses a ProceedingJoinPoint, which references the original method, as well as the @Repeat annotation Loop over as many times as configured Call the original method At this point, we need to weave the aspect. Let's do it at build-time.
For this, we can add the AspectJ build plugin:

XML
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal> <!--1-->
            </goals>
        </execution>
    </executions>
</plugin>

1. Bind execution of the plugin to the compile phase

To see the demo in effect:

Shell
mvn compile exec:java -Dexec.mainClass=ch.frankel.blog.aop.Main

Java Compiler Plugin

Last, it's possible to change the generated bytecode via a Java compiler plugin, introduced in Java 6 as JSR 269. From a bird's-eye view, plugins involve hooking into the Java compiler to manipulate the AST in three phases: parse the source code into multiple ASTs, analyze them further into Elements, and potentially generate source code. The documentation could be less sparse; the most useful resource I found is the Awesome Java Annotation Processing list. Here's a simplified class diagram to get you started.

I'm too lazy to implement the same as above with such a low-level API. As the expression goes, this is left as an exercise to the reader. If you are interested, I believe the DocLint source code is a good starting point.

Conclusion

I described several approaches to monkey-patching in Java in this post: the Proxy class, instrumentation via a Java agent, AOP via AspectJ, and javac compiler plugins. To choose one over the other, consider the following criteria: build time vs. runtime, complexity, native vs. third-party, and security concerns.

To Go Further

Monkey patch
Guide to Java Instrumentation
Byte Buddy
Creating a Java Compiler Plugin
Awesome Java Annotation Processing
Maven AspectJ plugin
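Since the compiler-plugin implementation above is left as an exercise, here is only a bare entry-point skeleton of the javac plugin API (com.sun.source.util.Plugin), not the author's solution; the class name and the printed message are placeholders. Registering the class in META-INF/services/com.sun.source.util.Plugin and running javac with -Xplugin:RepeatPlugin would activate it:

Java
import com.sun.source.util.JavacTask;
import com.sun.source.util.Plugin;
import com.sun.source.util.TaskEvent;
import com.sun.source.util.TaskListener;

// Minimal, illustrative compiler-plugin skeleton: it only observes compilation
// events; rewriting methods annotated with @Repeat would happen in the listener.
public class RepeatPlugin implements Plugin {

    @Override
    public String getName() {
        // javac is invoked with -Xplugin:RepeatPlugin to activate the plugin
        return "RepeatPlugin";
    }

    @Override
    public void init(JavacTask task, String... args) {
        task.addTaskListener(new TaskListener() {
            @Override
            public void finished(TaskEvent event) {
                if (event.getKind() == TaskEvent.Kind.PARSE) {
                    // The parsed compilation unit is available at this point;
                    // AST manipulation would take place here.
                    System.out.println("Parsed: " + event.getSourceFile().getName());
                }
            }
        });
    }
}

The actual transformation, i.e., duplicating calls to methods annotated with @Repeat, is exactly the low-level part the post leaves to the reader.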

By Nicolas Fränkel CORE
How To Deploy Helidon Application to Kubernetes With Kubernetes Maven Plugin

In this article, we delve into the exciting realm of containerizing Helidon applications, followed by deploying them effortlessly to a Kubernetes environment. To achieve this, we'll harness the power of JKube's Kubernetes Maven Plugin, a versatile tool for deploying Java applications to Kubernetes that has recently been updated to version 1.14.0. What's exciting about this release is that it now supports the Helidon framework, a Java microservices gem open-sourced by Oracle in 2018. If you're curious about Helidon, we've got some blog posts to get you up to speed:

Building Microservices With Oracle Helidon
Ultra-Fast Microservices: When MicroStream Meets Helidon
Helidon: 2x Productivity With Microprofile REST Client

In this article, we will closely examine the integration between JKube's Kubernetes Maven Plugin and Helidon. Here's a sneak peek of the journey we'll embark on:

We'll kick things off by generating a Maven application from Helidon Starter.
Transform your Helidon application into a nifty Docker image.
Craft Kubernetes YAML manifests tailored for your Helidon application.
Apply those manifests to your Kubernetes cluster.
We'll bundle those Kubernetes YAML manifests into a Helm Chart.
We'll top it off by pushing that Helm Chart to a Helm registry.
Finally, we'll deploy our Helidon application to Red Hat OpenShift.

An exciting aspect worth noting is that JKube's Kubernetes Maven Plugin can be employed with previous versions of Helidon projects as well; the only requirement is to provide your custom image configuration. With this latest release, Helidon users can now easily generate opinionated container images. Furthermore, the plugin intelligently detects project dependencies and seamlessly incorporates Kubernetes health checks into the generated manifests, streamlining the deployment process.

Setting up the Project

You can either use an existing Helidon project or create a new one from Helidon Starter. If you're on JDK 17, use the 3.x version of Helidon. Otherwise, you can stick to Helidon 2.6.x, which works with older versions of Java. In the starter form, you can choose either Helidon SE or Helidon MicroProfile, choose the application type, and fill out basic details like project groupId, version, and artifactId. Once you've set up your project, you can add JKube's Kubernetes Maven Plugin to your pom.xml:

XML
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>1.14.0</version>
</plugin>

Here, the plugin version is set to 1.14.0, which is the latest version at the time of writing. You can check for the latest version on the Eclipse JKube releases page. Adding the plugin isn't strictly required if you want to execute it directly from a CI pipeline; you can just provide the fully qualified name of JKube's Kubernetes Maven Plugin while issuing a goal, like this:

Shell
$ mvn org.eclipse.jkube:kubernetes-maven-plugin:1.14.0:resource

Now that we've added the plugin to the project, we can start using it.

Creating Container Image (JVM Mode)

In order to build a container image, you do not need to provide any sort of configuration. First, you need to build your project:

Shell
$ mvn clean install

Then, you just need to run the k8s:build goal of JKube's Kubernetes Maven Plugin. By default, it builds the image using the Docker build strategy, which requires access to a Docker daemon.
If you have access to a docker daemon, run this command: Shell $ mvn k8s:build If you don’t have access to any docker daemon, you can also build the image using the Jib build strategy: Shell $ mvn k8s:build -Djkube.build.strategy=jib You will notice that Eclipse JKube has created an opinionated container image for your application based on your project configuration. Here are some key points about JKube’s Kubernetes Maven Plugin to observe in this zero configuration mode: It used quay.io/jkube/jkube-java as a base image for the container image It added some labels to the container image (picked from pom.xml) It exposed some ports in the container image based on the project configuration It automatically copied relevant artifacts and libraries required to execute the jar in the container environment. Creating Container Image (Native Mode) In order to create a container image for the native executable, we need to generate the native executable first. In order to do that, let’s build our project in the native-image profile (as specified in Helidon GraalVM Native Image documentation): Shell $ mvn package -Pnative-image This creates a native executable file in the target folder of your project. In order to create a container image based on this executable, we just need to run k8s:build goal but also specify native-image profile: Shell $ mvn k8s:build -Pnative-image Like JVM mode, Eclipse JKube creates an opinionated container image but uses a lightweight base image: registry.access.redhat.com/ubi8/ubi-minimal and exposes only the required ports by application. Customizing Container Image as per Requirements Creating a container image with no configuration is a really nice way to get started. However, it might not suit everyone’s use case. Let’s take a look at how to configure various aspects of the generated container image. You can override basic aspects of the container image with some properties like this: Property Name Description jkube.generator.name Change Image Name jkube.generator.from Change Base Image jkube.generator.tags A comma-separated value of additional tags for the image If you want more control, you can provide a complete XML configuration for the image in the plugin configuration section: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>${jkube.version}</version> <configuration> <images> <image> <name>${project.artifactId}:${project.version}</name> <build> <from>openjdk:11-jre-slim</from> <ports>8080</ports> <assembly> <mode>dir</mode> <targetDir>/deployments</targetDir> <layers> <layer> <id>lib</id> <fileSets> <fileSet> <directory>${project.basedir}/target/libs</directory> <outputDirectory>libs</outputDirectory> <fileMode>0640</fileMode> </fileSet> </fileSets> </layer> <layer> <id>app</id> <files> <file> <source>${project.basedir}/target/${project.artifactId}.jar</source> <outputDirectory>.</outputDirectory> </file> </files> </layer> </layers> </assembly> <cmd>java -jar /deployments/${project.artifactId}.jar</cmd> </build> </image> </images> </configuration> </plugin> The same is also possible by providing your own Dockerfile in the project base directory. 
Kubernetes Maven Plugin automatically detects it and builds a container image based on its content:

Dockerfile
FROM openjdk:11-jre-slim
COPY maven/target/helidon-quickstart-se.jar /deployments/
COPY maven/target/libs /deployments/libs
CMD ["java", "-jar", "/deployments/helidon-quickstart-se.jar"]
EXPOSE 8080

Pushing the Container Image to Quay.io

Once you've built a container image, you most likely want to push it to some public or private container registry. Before pushing the image, make sure you've renamed your image to include the registry name and registry user. If I want to push an image to Quay.io in the namespace of a user named rokumar, this is how I would need to rename my image:

Shell
$ mvn k8s:build -Djkube.generator.name=quay.io/rokumar/%a:%v

%a and %v correspond to the project artifactId and project version. For more information, you can check the Kubernetes Maven Plugin Image Configuration documentation. Once we've built an image with the correct name, the next step is to provide credentials for our registry to JKube's Kubernetes Maven Plugin. We can provide registry credentials via the following sources:

Docker login
The local Maven settings file (~/.m2/settings.xml) - see the sketch at the end of this article
Inline, using the jkube.docker.username and jkube.docker.password properties

Once you've configured your registry credentials, you can issue the k8s:push goal to push the image to your specified registry:

Shell
$ mvn k8s:push

Generating Kubernetes Manifests

In order to generate opinionated Kubernetes manifests, you can use the k8s:resource goal from JKube's Kubernetes Maven Plugin:

Shell
$ mvn k8s:resource

It generates Kubernetes YAML manifests in the target directory:

Shell
$ ls target/classes/META-INF/jkube/kubernetes
helidon-quickstart-se-deployment.yml  helidon-quickstart-se-service.yml

JKube's Kubernetes Maven Plugin automatically detects if the project contains the io.helidon:helidon-health dependency and adds liveness, readiness, and startup probes:

YAML
$ cat target/classes/META-INF/jkube/kubernetes/helidon-quickstart-se-deployment.yml | grep -A8 Probe
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/live
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 10
          successThreshold: 1
--
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 10
          successThreshold: 1

Applying Kubernetes Manifests

JKube's Kubernetes Maven Plugin provides the k8s:apply goal, which is equivalent to the kubectl apply command. It just applies the resources generated by k8s:resource in the previous step.

Shell
$ mvn k8s:apply

Packaging Helm Charts

Helm has established itself as the de facto package manager for Kubernetes. You can package the generated manifests into a Helm Chart and apply it to some other cluster using the Helm CLI. You can generate a Helm Chart of the generated manifests using the k8s:helm goal. The interesting thing is that JKube's Kubernetes Maven Plugin doesn't rely on the Helm CLI for generating the chart.

Shell
$ mvn k8s:helm

You'll notice the Helm Chart is generated in the target/jkube/helm/ directory:

Shell
$ ls target/jkube/helm/helidon-quickstart-se/kubernetes
Chart.yaml  helidon-quickstart-se-0.0.1-SNAPSHOT.tar.gz  README.md  templates  values.yaml

Pushing Helm Charts to Helm Registries

Usually, after generating a Helm Chart locally, you would want to push it to some Helm registry. JKube's Kubernetes Maven Plugin provides the k8s:helm-push goal for achieving this task.
But first, we need to provide registry details in the plugin configuration:

XML
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>1.14.0</version>
    <configuration>
        <helm>
            <snapshotRepository>
                <name>ChartMuseum</name>
                <url>http://example.com/api/charts</url>
                <type>CHARTMUSEUM</type>
                <username>user1</username>
            </snapshotRepository>
        </helm>
    </configuration>
</plugin>

JKube's Kubernetes Maven Plugin supports pushing Helm Charts to ChartMuseum, Nexus, Artifactory, and OCI registries. You have to provide the applicable Helm repository type and URL. You can provide the credentials via environment variables, properties, or ~/.m2/settings.xml. Once everything is set up, you can run the k8s:helm-push goal to push the chart:

Shell
$ mvn k8s:helm-push -Djkube.helm.snapshotRepository.password=yourpassword

Deploying To Red Hat OpenShift

If you're deploying to Red Hat OpenShift, you can use JKube's OpenShift Maven Plugin to deploy your Helidon application to an OpenShift cluster. It contains some add-ons specific to OpenShift, such as the S2I build strategy and support for Routes. You also need to add JKube's OpenShift Maven Plugin to your pom.xml, perhaps in a separate profile:

XML
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.eclipse.jkube</groupId>
                <artifactId>openshift-maven-plugin</artifactId>
                <version>${jkube.version}</version>
            </plugin>
        </plugins>
    </build>
</profile>

Then, you can deploy the application with a combination of these goals:

Shell
$ mvn oc:build oc:resource oc:apply -Popenshift

Conclusion

In this article, you learned how smoothly you can deploy your Helidon applications to Kubernetes using Eclipse JKube's Kubernetes Maven Plugin. We saw how effortless it is to package your Helidon application into a container image and publish it to some container image registry. We can alternatively generate Helm Charts of our Kubernetes YAML manifests and publish the Helm Charts to some Helm registry. In the end, we learned about JKube's OpenShift Maven Plugin, which is specifically designed for Red Hat OpenShift users who want to deploy their Helidon applications to Red Hat OpenShift. You can find the code used in this blog post in this GitHub repository. In case you're interested in knowing more about Eclipse JKube, you can check these links:

Documentation
GitHub
Issue Tracker
StackOverflow
YouTube Channel
Twitter
Gitter Chat
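As referenced in the registry-credentials list earlier, here is a hedged sketch of the two credential options for k8s:push. The server id, username, and password values are placeholders; to my understanding JKube looks the server up by the registry host, but double-check the JKube registry documentation for the exact conventions your setup requires:

XML
<!-- ~/.m2/settings.xml: hypothetical server entry for the quay.io example above -->
<settings>
  <servers>
    <server>
      <id>quay.io</id>
      <username>rokumar</username>
      <password>my-registry-password</password>
    </server>
  </servers>
</settings>

Alternatively, everything can be passed inline using the properties the article mentions (values here are placeholders):

Shell
$ mvn k8s:build k8s:push \
    -Djkube.generator.name=quay.io/rokumar/%a:%v \
    -Djkube.docker.username=rokumar \
    -Djkube.docker.password=my-registry-password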

By Rohan Kumar
Agile Estimation: Techniques and Tips for Success

Agile estimation plays a pivotal role in Agile project management, enabling teams to gauge the effort, time, and resources necessary to accomplish their tasks. Precise estimations empower teams to efficiently plan their work, manage expectations, and make well-informed decisions throughout the project's duration. In this article, we delve into various Agile estimation techniques and best practices that enhance the accuracy of your predictions and pave the way for your team's success. The Essence of Agile Estimation Agile estimation is an ongoing, iterative process that takes place at different levels of detail, ranging from high-level release planning to meticulous sprint planning. The primary objective of Agile estimation is to provide just enough information for teams to make informed decisions without expending excessive time on analysis and documentation. Designed to be lightweight, collaborative, and adaptable, Agile estimation techniques enable teams to rapidly adjust their plans as new information emerges or priorities shift. Prominent Agile Estimation Techniques 1. Planning Poker Planning Poker is a consensus-driven estimation technique that employs a set of cards with pre-defined numerical values, often based on the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.). Each team member selects a card representing their estimate for a specific task, and all cards are revealed simultaneously. If there is a significant discrepancy in estimates, team members deliberate their reasoning and repeat the process until a consensus is achieved. 2. T-Shirt Sizing T-shirt sizing is a relative estimation technique that classifies tasks into different "sizes" according to their perceived complexity or effort, such as XS, S, M, L, and XL. This method allows teams to swiftly compare tasks and prioritize them based on their relative size. Once tasks are categorized, more precise estimation techniques can be employed if needed. 3. User Story Points User story points serve as a unit of measurement to estimate the relative effort required to complete a user story. This technique entails assigning a point value to each user story based on its complexity, risk, and effort, taking into account factors such as workload, uncertainty, and potential dependencies. Teams can then use these point values to predict the number of user stories they can finish within a given timeframe. 4. Affinity Estimation Affinity Estimation is a technique that involves grouping tasks or user stories based on their similarities in terms of effort, complexity, and size. This method helps teams quickly identify patterns and relationships among tasks, enabling them to estimate more efficiently. Once tasks are grouped, they can be assigned a relative point value or size category. 5. Wideband Delphi The Wideband Delphi method is a consensus-based estimation technique that involves multiple rounds of anonymous estimation and feedback. Team members individually provide estimates for each task, and then the estimates are shared anonymously with the entire team. Team members discuss the range of estimates and any discrepancies before submitting revised estimates in subsequent rounds. This process continues until a consensus is reached. Risk Management in Agile Estimation Identify and Assess Risks Incorporate risk identification and assessment into your Agile estimation process. Encourage team members to consider potential risks associated with each task or user story, such as technical challenges, dependencies, or resource constraints. 
By identifying and assessing risks early on, your team can develop strategies to mitigate them, leading to more accurate estimates and a smoother project execution. Assign Risk Factors Assign risk factors to tasks or user stories based on their level of uncertainty or potential impact on the project. These risk factors can be numerical values or qualitative categories (e.g., low, medium, high) that help your team prioritize tasks and allocate resources effectively. Incorporating risk factors into your estimates can provide a more comprehensive understanding of the work involved and help your team make better-informed decisions. Risk-Based Buffering Include risk-based buffering in your Agile estimation process by adding contingency buffers to account for uncertainties and potential risks. These buffers can be expressed as additional time, resources, or user story points, and they serve as a safety net to ensure that your team can adapt to unforeseen challenges without jeopardizing the project's success. Monitor and Control Risks Continuously monitor and control risks throughout the project lifecycle by regularly reviewing your risk assessments and updating them as new information becomes available. This proactive approach allows your team to identify emerging risks and adjust their plans accordingly, ensuring that your estimates remain accurate and relevant. Learn From Risks Encourage your team to learn from the risks encountered during the project and use this knowledge to improve their estimation and risk management practices. Conduct retrospective sessions to discuss the risks faced, their impact on the project, and the effectiveness of the mitigation strategies employed. By learning from past experiences, your team can refine its risk management approach and enhance the accuracy of future estimates. By incorporating risk management into your Agile estimation process, you can help your team better anticipate and address potential challenges, leading to more accurate estimates and a higher likelihood of project success. This approach also fosters a culture of proactive risk management and continuous learning within your team, further enhancing its overall effectiveness and adaptability. Best Practices for Agile Estimation Foster Team Collaboration Efficient Agile estimation necessitates input from all team members, as each individual contributes unique insights and perspectives. Promote open communication and collaboration during estimation sessions to ensure everyone's opinions are considered and to cultivate a shared understanding of the tasks at hand. Utilize Historical Data Draw upon historical data from previous projects or sprints to inform your estimations. Examining past performance can help teams identify trends, patterns, and areas for improvement, ultimately leading to more accurate predictions in the future. Velocity and Capacity Planning Incorporate team velocity and capacity planning into your Agile estimation process. Velocity is a measure of the amount of work a team can complete within a given sprint or iteration, while capacity refers to the maximum amount of work a team can handle. By considering these factors, you can ensure that your estimates align with your team's capabilities and avoid overcommitting to work. Break Down Large Tasks Large tasks or user stories can be challenging to estimate accurately. Breaking them down into smaller, more manageable components can make the estimation process more precise and efficient. 
Additionally, this approach helps teams better understand the scope and complexity of the work involved, leading to more realistic expectations and improved planning. Revisit Estimates Regularly Agile estimation is a continuous process, and teams should be prepared to revise their estimates as new information becomes available or circumstances change. Periodically review and update your estimates to ensure they remain accurate and pertinent throughout the project lifecycle. Acknowledge Uncertainty Agile estimation recognizes the inherent uncertainty in software development. Instead of striving for flawless predictions, focus on providing just enough information to make informed decisions and be prepared to adapt as necessary. Establish a Baseline Create a baseline for your estimates by selecting a well-understood task or user story as a reference point. This baseline can help teams calibrate their estimates and ensure consistency across different tasks and projects. Pursue Continuous Improvement Consider Agile estimation as an opportunity for ongoing improvement. Reflect on your team's estimation accuracy and pinpoint areas for growth. Experiment with different techniques and practices to discover what works best for your team and refine your approach over time. Conclusion Agile estimation is a vital component of successful Agile project management. By employing the appropriate techniques and adhering to best practices, teams can enhance their ability to predict project scope, effort, and duration, resulting in more effective planning and decision-making. Keep in mind that Agile estimation is an iterative process, and teams should continuously strive to learn from their experiences and refine their approach for even greater precision in the future.
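The velocity and capacity planning idea above boils down to simple arithmetic, so a tiny sketch may help. It is purely illustrative; the sprint numbers and backlog size are made up:

Java
// Hypothetical illustration of velocity-based forecasting
public class VelocityForecast {

    public static void main(String[] args) {
        int[] completedPointsPerSprint = {21, 18, 24}; // made-up historical data
        int remainingBacklogPoints = 130;              // made-up backlog size

        double averageVelocity = 0;
        for (int points : completedPointsPerSprint) {
            averageVelocity += points;
        }
        averageVelocity /= completedPointsPerSprint.length; // (21 + 18 + 24) / 3 = 21

        // 130 / 21 is roughly 6.2, so plan for about 7 sprints
        int sprintsNeeded = (int) Math.ceil(remainingBacklogPoints / averageVelocity);
        System.out.printf("Average velocity: %.1f points/sprint%n", averageVelocity);
        System.out.println("Estimated sprints to finish backlog: " + sprintsNeeded);
    }
}

The point of the sketch is the reasoning, not the code: past throughput sets the pace, and the backlog divided by that pace gives a rough, revisable forecast rather than a commitment.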

By Arun Pandey
Revolutionizing Software Testing

This is an article from DZone's 2023 Automated Testing Trend Report.For more: Read the Report Artificial intelligence (AI) has revolutionized the realm of software testing, introducing new possibilities and efficiencies. The demand for faster, more reliable, and efficient testing processes has grown exponentially with the increasing complexity of modern applications. To address these challenges, AI has emerged as a game-changing force, revolutionizing the field of automated software testing. By leveraging AI algorithms, machine learning (ML), and advanced analytics, software testing has undergone a remarkable transformation, enabling organizations to achieve unprecedented levels of speed, accuracy, and coverage in their testing endeavors. This article delves into the profound impact of AI on automated software testing, exploring its capabilities, benefits, and the potential it holds for the future of software quality assurance. An Overview of AI in Testing This introduction aims to shed light on the role of AI in software testing, focusing on key aspects that drive its transformative impact. Figure 1: AI in testing Elastically Scale Functional, Load, and Performance Tests AI-powered testing solutions enable the effortless allocation of testing resources, ensuring optimal utilization and adaptability to varying workloads. This scalability ensures comprehensive testing coverage while maintaining efficiency. AI-Powered Predictive Bots AI-powered predictive bots are a significant advancement in software testing. Bots leverage ML algorithms to analyze historical data, patterns, and trends, enabling them to make informed predictions about potential defects or high-risk areas. By proactively identifying potential issues, predictive bots contribute to more effective and efficient testing processes. Automatic Update of Test Cases With AI algorithms monitoring the application and its changes, test cases can be dynamically updated to reflect modifications in the software. This adaptability reduces the effort required for test maintenance and ensures that the test suite remains relevant and effective over time. AI-Powered Analytics of Test Automation Data By analyzing vast amounts of testing data, AI-powered analytical tools can identify patterns, trends, and anomalies, providing valuable information to enhance testing strategies and optimize testing efforts. This data-driven approach empowers testing teams to make informed decisions and uncover hidden patterns that traditional methods might overlook. Visual Locators Visual locators, a type of AI application in software testing, focus on visual elements such as user interfaces and graphical components. AI algorithms can analyze screenshots and images, enabling accurate identification of and interaction with visual elements during automated testing. This capability enhances the reliability and accuracy of visual testing, ensuring a seamless user experience. Self-Healing Tests AI algorithms continuously monitor test execution, analyzing results and detecting failures or inconsistencies. When issues arise, self-healing mechanisms automatically attempt to resolve the problem, adjusting the test environment or configuration. This intelligent resilience minimizes disruptions and optimizes the overall testing process. What Is AI-Augmented Software Testing? AI-augmented software testing refers to the utilization of AI techniques — such as ML, natural language processing, and data analytics — to enhance and optimize the entire software testing lifecycle. 
It involves automating test case generation, intelligent test prioritization, anomaly detection, predictive analysis, and adaptive testing, among other tasks. By harnessing the power of AI, organizations can improve test coverage, detect defects more efficiently, reduce manual effort, and ultimately deliver high-quality software with greater speed and accuracy. Benefits of AI-Powered Automated Testing AI-powered software testing offers a plethora of benefits that revolutionize the testing landscape. One significant advantage lies in its codeless nature, thus eliminating the need to memorize intricate syntax. Embracing simplicity, it empowers users to effortlessly create testing processes through intuitive drag-and-drop interfaces. Scalability becomes a reality as the workload can be efficiently distributed among multiple workstations, ensuring efficient utilization of resources. The cost-saving aspect is remarkable as minimal human intervention is required, resulting in substantial reductions in workforce expenses. With tasks executed by intelligent bots, accuracy reaches unprecedented heights, minimizing the risk of human errors. Furthermore, this automated approach amplifies productivity, enabling testers to achieve exceptional output levels. Irrespective of the software type — be it a web-based desktop application or mobile application — the flexibility of AI-powered testing seamlessly adapts to diverse environments, revolutionizing the testing realm altogether. Figure 2: Benefits of AI for test automation Mitigating the Challenges of AI-Powered Automated Testing AI-powered automated testing has revolutionized the software testing landscape, but it is not without its challenges. One of the primary hurdles is the need for high-quality training data. AI algorithms rely heavily on diverse and representative data to perform effectively. Therefore, organizations must invest time and effort in curating comprehensive and relevant datasets that encompass various scenarios, edge cases, and potential failures. Another challenge lies in the interpretability of AI models. Understanding why and how AI algorithms make specific decisions can be critical for gaining trust and ensuring accurate results. Addressing this challenge requires implementing techniques such as explainable AI, model auditing, and transparency. Furthermore, the dynamic nature of software environments poses a challenge in maintaining AI models' relevance and accuracy. Continuous monitoring, retraining, and adaptation of AI models become crucial to keeping pace with evolving software systems. Additionally, ethical considerations, data privacy, and bias mitigation should be diligently addressed to maintain fairness and accountability in AI-powered automated testing. AI models used in testing can sometimes produce false positives (incorrectly flagging a non-defect as a defect) or false negatives (failing to identify an actual defect). Balancing precision and recall of AI models is important to minimize false results. AI models can exhibit biases and may struggle to generalize new or uncommon scenarios. Adequate training and validation of AI models are necessary to mitigate biases and ensure their effectiveness across diverse testing scenarios. Human intervention plays a critical role in designing test suites by leveraging their domain knowledge and insights. 
They can identify critical test cases, edge cases, and scenarios that require human intuition or creativity, while leveraging AI to handle repetitive or computationally intensive tasks. Continuous improvement would be possible by encouraging a feedback loop between human testers and AI systems. Human experts can provide feedback on the accuracy and relevance of AI-generated test cases or predictions, helping improve the performance and adaptability of AI models. Human testers should play a role in the verification and validation of AI models, ensuring that they align with the intended objectives and requirements. They can evaluate the effectiveness, robustness, and limitations of AI models in specific testing contexts. AI-Driven Testing Approaches AI-driven testing approaches have ushered in a new era in software quality assurance, revolutionizing traditional testing methodologies. By harnessing the power of artificial intelligence, these innovative approaches optimize and enhance various aspects of testing, including test coverage, efficiency, accuracy, and adaptability. This section explores the key AI-driven testing approaches, including differential testing, visual testing, declarative testing, and self-healing automation. These techniques leverage AI algorithms and advanced analytics to elevate the effectiveness and efficiency of software testing, ensuring higher-quality applications that meet the demands of the rapidly evolving digital landscape: Differential testing assesses discrepancies between application versions and builds, categorizes the variances, and utilizes feedback to enhance the classification process through continuous learning. Visual testing utilizes image-based learning and screen comparisons to assess the visual aspects and user experience of an application, thereby ensuring the integrity of its look and feel. Declarative testing expresses the intention of a test using a natural or domain-specific language, allowing the system to autonomously determine the most appropriate approach to execute the test. Self-healing automation automatically rectifies element selection in tests when there are modifications to the user interface (UI), ensuring the continuity of reliable test execution. Key Considerations for Harnessing AI for Software Testing Many contemporary test automation tools infused with AI provide support for open-source test automation frameworks such as Selenium and Appium. AI-powered automated software testing encompasses essential features such as auto-code generation and the integration of exploratory testing techniques. Open-Source AI Tools To Test Software When selecting an open-source testing tool, it is essential to consider several factors. Firstly, it is crucial to verify that the tool is actively maintained and supported. Additionally, it is critical to assess whether the tool aligns with the skill set of the team. Furthermore, it is important to evaluate the features, benefits, and challenges presented by the tool to ensure they are in line with your specific testing requirements and organizational objectives. 
A few popular open-source options include, but are not limited to:

Carina – AI-driven, free forever, scriptless approach to automate functional, performance, visual, and compatibility tests
TestProject – Offered the industry's first free Appium AI tools in 2021, expanding upon the AI tools for Selenium that they had previously introduced in 2020 for self-healing technology
Cerberus Testing – A low-code and scalable test automation solution that offers a self-healing feature called Erratum and has a forever-free plan

Designing Automated Tests With AI and Self-Testing

AI has made significant strides in transforming the landscape of automated testing, offering a range of techniques and applications that revolutionize software quality assurance. Some of the prominent techniques and algorithms are provided in the tables below, along with the purposes they serve:

KEY TECHNIQUES AND APPLICATIONS OF AI IN AUTOMATED TESTING

Key Technique | Applications
Machine learning | Analyze large volumes of testing data, identify patterns, and make predictions for test optimization, anomaly detection, and test case generation
Natural language processing | Facilitate the creation of intelligent chatbots, voice-based testing interfaces, and natural language test case generation
Computer vision | Analyze image and visual data in areas such as visual testing, UI testing, and defect detection
Reinforcement learning | Optimize test execution strategies, generate adaptive test scripts, and dynamically adjust test scenarios based on feedback from the system under test

Table 1

KEY ALGORITHMS USED FOR AI-POWERED AUTOMATED TESTING

Algorithm | Purpose | Applications
Clustering algorithms | Segmentation | k-means and hierarchical clustering are used to group similar test cases, identify patterns, and detect anomalies
Sequence generation models (recurrent neural networks or transformers) | Text classification and sequence prediction | Trained to generate sequences such as test scripts or sequences of user interactions for log analysis
Bayesian networks | Dependencies and relationships between variables | Test coverage analysis, defect prediction, and risk assessment
Convolutional neural networks | Image analysis | Visual testing
Evolutionary algorithms (genetic algorithms) | Natural selection | Optimize test case generation, test suite prioritization, and test execution strategies by applying genetic operators like mutation and crossover on existing test cases to create new variants, which are then evaluated based on fitness criteria
Decision trees, random forests, support vector machines, and neural networks | Classification | Classification of software components
Variational autoencoders and generative adversarial networks | Generative AI | Used to generate new test cases that cover different scenarios or edge cases via test data generation, creating synthetic data that resembles real-world scenarios

Table 2

Real-World Examples of AI-Powered Automated Testing

AI-powered visual testing platforms perform automated visual validation of web and mobile applications. They use computer vision algorithms to compare screenshots and identify visual discrepancies, enabling efficient visual testing across multiple platforms and devices. NLP and ML are combined to generate test cases from plain English descriptions. They automatically execute these test cases, detect bugs, and provide actionable insights to improve software quality.
Self-healing capabilities are also provided by automatically adapting test cases to changes in the application's UI, improving test maintenance efficiency. Quantum AI-Powered Automated Testing: The Road Ahead The future of quantum AI-powered automated software testing holds great potential for transforming the way testing is conducted. Figure 3: Transition of automated testing from AI to Quantum AI Quantum computing's ability to handle complex optimization problems can significantly improve test case generation, test suite optimization, and resource allocation in automated testing. Quantum ML algorithms can enable more sophisticated and accurate models for anomaly detection, regression testing, and predictive analytics. Quantum computing's ability to perform parallel computations can greatly accelerate the execution of complex test scenarios and large-scale test suites. Quantum algorithms can help enhance security testing by efficiently simulating and analyzing cryptographic algorithms and protocols. Quantum simulation capabilities can be leveraged to model and simulate complex systems, enabling more realistic and comprehensive testing of software applications in various domains, such as finance, healthcare, and transportation. Parting Thoughts AI has significantly revolutionized the traditional landscape of testing, enhancing the effectiveness, efficiency, and reliability of software quality assurance processes. AI-driven techniques such as ML, anomaly detection, NLP, and intelligent test prioritization have enabled organizations to achieve higher test coverage, early defect detection, streamlined test script creation, and adaptive test maintenance. The integration of AI in automated testing not only accelerates the testing process but also improves overall software quality, leading to enhanced customer satisfaction and reduced time to market. As AI continues to evolve and mature, it holds immense potential for further advancements in automated testing, paving the way for a future where AI-driven approaches become the norm in ensuring the delivery of robust, high-quality software applications. Embracing the power of AI in automated testing is not only a strategic imperative but also a competitive advantage for organizations looking to thrive in today's rapidly evolving technological landscape. This is an article from DZone's 2023 Automated Testing Trend Report.For more: Read the Report
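As a concrete, deliberately simplified illustration of the self-healing tests described earlier: the sketch below is not an AI implementation, only the fallback mechanism such tools automate. It assumes Selenium (which the article lists among the supported frameworks), and the locator values are placeholders:

Java
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Minimal, non-AI illustration of the "self-healing" idea: if the primary locator
// no longer matches after a UI change, fall back to alternative locators instead
// of failing the test immediately.
public class SelfHealingLookup {

    private final WebDriver driver;

    public SelfHealingLookup(WebDriver driver) {
        this.driver = driver;
    }

    public WebElement find(List<By> candidates) {
        for (By locator : candidates) {
            try {
                WebElement element = driver.findElement(locator);
                System.out.println("Resolved element with locator: " + locator);
                return element;
            } catch (NoSuchElementException e) {
                // Try the next candidate; an AI-based tool would learn and rank
                // these candidates from past runs instead of using a fixed list.
            }
        }
        throw new NoSuchElementException("No candidate locator matched");
    }
}

A call such as lookup.find(List.of(By.id("checkout"), By.cssSelector("button.checkout"))) keeps a test running when the primary id changes; an AI-powered tool automates the discovery and ranking of those alternatives rather than relying on a hand-written list.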

By Tuhin Chattopadhyay CORE
