Database Systems
This data-forward, analytics-driven world would be lost without its database and data storage solutions. As more organizations continue to transition their software to cloud-based systems, the demand for database innovation and enhancement has climbed to new heights. We are entering a new era of the "Modern Database," where databases must both store data and ensure that data is prepped and primed securely for insights and analytics, integrity and quality, and microservices and cloud-based architectures. In our 2023 Database Systems Trend Report, we explore these database trends, assess current strategies and challenges, and provide forward-looking assessments of the database technologies most commonly used today. Readers will also find insightful articles, written by several of our own DZone Community experts, that cover hand-selected topics, including what "good" database design is, database monitoring and observability, and how to navigate the realm of cloud databases.
This blog post demonstrates how to auto-scale your DynamoDB Streams consumer applications on Kubernetes. You will work with a Java application that uses the DynamoDB Streams Kinesis adapter library to consume change data events from a DynamoDB table. It will be deployed to an Amazon EKS cluster and will be scaled automatically using KEDA. The application includes an implementation of com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor that processes data from the DynamoDB stream and replicates it to another (target) DynamoDB table - this is just used as an example (a minimal sketch of such a record processor appears later in this post). We will use the AWS CLI to produce data to the DynamoDB stream and observe the scaling of the application. The code is available in this GitHub repository.

What's Covered? Introduction; Horizontal scalability with the Kinesis Client Library; What is KEDA?; Prerequisites; Set up and configure KEDA on EKS; Configure IAM roles; Deploy the DynamoDB Streams consumer application to EKS; DynamoDB Streams consumer app autoscaling in action with KEDA; Delete resources; Conclusion.

Introduction Amazon DynamoDB is a fully managed database service that provides fast and predictable performance with seamless scalability. With DynamoDB Streams, you can leverage Change Data Capture (CDC) to get notified about changes to DynamoDB table data in real time. This makes it possible to easily build applications that react to changes in the underlying database without the need for complex polling or querying. DynamoDB offers two streaming models for change data capture: Kinesis Data Streams for DynamoDB, and DynamoDB Streams. With Kinesis Data Streams, you can capture item-level modifications in any DynamoDB table and replicate them to a Kinesis data stream. DynamoDB Streams, on the other hand, captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. We will make use of the native DynamoDB Streams capability. Even with DynamoDB Streams, there are multiple options to choose from when it comes to consuming the change data events: use the low-level DynamoDB Streams API to read the change data events, use an AWS Lambda trigger, or use the DynamoDB Streams Kinesis adapter library. Our application will leverage DynamoDB Streams along with the DynamoDB Streams Kinesis adapter library (built on the Kinesis Client Library, or KCL, 1.x) to consume change data events from a DynamoDB table.

Horizontal Scalability With Kinesis Client Library The Kinesis Client Library ensures that for every shard there is a record processor running and processing that shard. KCL helps take care of many of the complex tasks associated with distributed computing and scalability. It connects to the data stream, enumerates the shards within the data stream, and uses leases to coordinate shard associations with its consumer applications. A record processor is instantiated for every shard it manages. KCL pulls data records from the data stream, pushes the records to the corresponding record processor, and checkpoints processed records. More importantly, it balances shard-worker associations (leases) when the worker instance count changes or when the data stream is re-sharded (shards are split or merged). This means that you are able to scale your DynamoDB Streams application by simply adding more instances, since KCL will automatically balance the shards across the instances. But you still need a way to scale your applications when the load increases. Of course, you could do it manually or build a custom solution to get this done.
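The repository linked above contains the full consumer; for orientation, a KCL 1.x record processor has roughly the following shape. This is a minimal sketch, with an assumed class name and the actual replication logic elided, not the repository's code:

Java
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import com.amazonaws.services.kinesis.model.Record;

public class ReplicatingRecordProcessor implements IRecordProcessor {

    @Override
    public void initialize(InitializationInput initializationInput) {
        // KCL creates one processor instance per shard (lease) it owns
        System.out.println("Initializing processor for shard " + initializationInput.getShardId());
    }

    @Override
    public void processRecords(ProcessRecordsInput processRecordsInput) {
        for (Record record : processRecordsInput.getRecords()) {
            // Each Record wraps a DynamoDB Streams change event (via the adapter);
            // the real application deserializes it and writes the item to the target table.
            System.out.println("Processing record " + record.getSequenceNumber());
        }
        try {
            // Checkpoint so a restarted or rebalanced worker resumes from this position
            processRecordsInput.getCheckpointer().checkpoint();
        } catch (Exception e) {
            // A production processor would handle shutdown/throttling exceptions explicitly
            e.printStackTrace();
        }
    }

    @Override
    public void shutdown(ShutdownInput shutdownInput) {
        // Called when the lease is lost or the shard ends; checkpoint here if appropriate
    }
}

KCL instantiates one such processor per shard and rebalances leases across workers, but it does not add or remove the worker instances themselves.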
This is where KEDA comes in.

What Is KEDA? KEDA is a Kubernetes-based event-driven autoscaling component that can monitor event sources like DynamoDB Streams and scale the underlying Deployments (and Pods) based on the number of events needing to be processed. It's built on top of native Kubernetes primitives such as the Horizontal Pod Autoscaler and can be added to any Kubernetes cluster. Here is a high-level overview of its key components (you can refer to the KEDA documentation for a deep dive). The keda-operator-metrics-apiserver component in KEDA acts as a Kubernetes metrics server that exposes metrics for the Horizontal Pod Autoscaler. A KEDA Scaler integrates with an external system (such as Redis) to fetch these metrics (e.g., the length of a list) to drive auto-scaling of any container in Kubernetes based on the number of events needing to be processed. The role of the keda-operator component is to activate and deactivate Deployments, i.e., scale to and from zero. You will see the DynamoDB Streams scaler in action; it scales based on the shard count of a DynamoDB stream. Now let's move on to the practical part of this tutorial.

Prerequisites In addition to an AWS account, you will need to have the AWS CLI, kubectl, and Docker installed.

Set Up an EKS Cluster and Create a DynamoDB Table There are a variety of ways in which you can create an Amazon EKS cluster. I prefer using the eksctl CLI because of the convenience it offers. Creating an EKS cluster using eksctl can be as easy as this: eksctl create cluster --name <cluster name> --region <region e.g. us-east-1> For details, refer to Getting Started with Amazon EKS – eksctl. Create a DynamoDB table with streams enabled to persist application data and access the change data feed. You can use the AWS CLI to create the table with the following command: aws dynamodb create-table \ --table-name users \ --attribute-definitions AttributeName=email,AttributeType=S \ --key-schema AttributeName=email,KeyType=HASH \ --billing-mode PAY_PER_REQUEST \ --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES We will also need to create another table that will serve as a replica of the first table: aws dynamodb create-table \ --table-name users_replica \ --attribute-definitions AttributeName=email,AttributeType=S \ --key-schema AttributeName=email,KeyType=HASH \ --billing-mode PAY_PER_REQUEST Clone this GitHub repository and change into the right directory: git clone https://github.com/abhirockzz/dynamodb-streams-keda-autoscale cd dynamodb-streams-keda-autoscale OK, let's get started!

Set Up and Configure KEDA on EKS For the purposes of this tutorial, you will use YAML files to deploy KEDA, but you could also use Helm charts. Install KEDA: # update version 2.8.2 if required kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.8.2/keda-2.8.2.yaml Verify the installation: # check Custom Resource Definitions kubectl get crd # check KEDA Deployments kubectl get deployment -n keda # check KEDA operator logs kubectl logs -f $(kubectl get pod -l=app=keda-operator -o jsonpath='{.items[0].metadata.name}' -n keda) -n keda

Configure IAM Roles The KEDA operator as well as the DynamoDB Streams consumer application need to invoke AWS APIs. Since both will run as Deployments in EKS, we will use IAM Roles for Service Accounts (IRSA) to provide the necessary permissions.
In our particular scenario: KEDA operator needs to be able to get information about the DynamoDB table and Stream The application (KCL 1.x library to be specific) needs to interact with Kinesis and DynamoDB - it needs a bunch of IAM permissions to do so. Configure IRSA for the KEDA Operator Set your AWS Account ID and OIDC Identity provider as environment variables: ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) #update the cluster name and region as required export EKS_CLUSTER_NAME=demo-eks-cluster export AWS_REGION=us-east-1 OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///") Create a JSON file with Trusted Entities for the role: read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:aud": "sts.amazonaws.com", "${OIDC_PROVIDER}:sub": "system:serviceaccount:keda:keda-operator" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_keda.json Now, create the IAM role and attach the policy (take a look at policy_dynamodb_streams_keda.json file for details): export ROLE_NAME=keda-operator-dynamodb-streams-role aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://trust_keda.json --description "IRSA for DynamoDB streams KEDA scaler on EKS" aws iam create-policy --policy-name keda-dynamodb-streams-policy --policy-document file://policy_dynamodb_streams_keda.json aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/keda-dynamodb-streams-policy Associate the IAM role and Service Account: kubectl annotate serviceaccount -n keda keda-operator eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/${ROLE_NAME} # verify the annotation kubectl describe serviceaccount/keda-operator -n keda You will need to restart KEDA operator Deployment for this to take effect: kubectl rollout restart deployment.apps/keda-operator -n keda # to verify, confirm that the KEDA operator has the right environment variables kubectl describe pod -n keda $(kubectl get po -l=app=keda-operator -n keda --output=jsonpath={.items..metadata.name}) | grep "^\s*AWS_" # expected output AWS_STS_REGIONAL_ENDPOINTS: regional AWS_DEFAULT_REGION: us-east-1 AWS_REGION: us-east-1 AWS_ROLE_ARN: arn:aws:iam::<AWS_ACCOUNT_ID>:role/keda-operator-dynamodb-streams-role AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token Configure IRSA for the DynamoDB Streams Consumer Application Start by creating a Kubernetes Service Account: kubectl apply -f - <<EOF apiVersion: v1 kind: ServiceAccount metadata: name: dynamodb-streams-consumer-app-sa EOF Create a JSON file with Trusted Entities for the role: read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:aud": "sts.amazonaws.com", "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:dynamodb-streams-consumer-app-sa" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust.json Now, create the IAM role and attach the policy. Update the policy.json file and enter the region and AWS account details. 
export ROLE_NAME=dynamodb-streams-consumer-app-role aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://trust.json --description "IRSA for DynamoDB Streams consumer app on EKS" aws iam create-policy --policy-name dynamodb-streams-consumer-app-policy --policy-document file://policy.json aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/dynamodb-streams-consumer-app-policy Associate the IAM role and Service Account: kubectl annotate serviceaccount -n default dynamodb-streams-consumer-app-sa eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/${ROLE_NAME} # verify the annotation kubectl describe serviceaccount/dynamodb-streams-consumer-app-sa The core infrastructure is now ready. Let's prepare and deploy the consumer application. Deploy DynamoDB Streams Consumer Application to EKS We would first need to build the Docker image and push it to ECR (you can refer to the Dockerfile for details). Build and Push the Docker Image to ECR # create runnable JAR file mvn clean compile assembly\:single # build docker image docker build -t dynamodb-streams-consumer-app . AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) # create a private ECR repo aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com aws ecr create-repository --repository-name dynamodb-streams-consumer-app --region us-east-1 # tag and push the image docker tag dynamodb-streams-consumer-app:latest $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/dynamodb-streams-consumer-app:latest docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/dynamodb-streams-consumer-app:latest Deploy the Consumer Application Update the consumer.yaml to include the Docker image you just pushed to ECR and the ARN for the DynamoDB streams for the source table. The rest of the manifest remains the same. To retrieve the ARN for the stream, run the following command: aws dynamodb describe-table --table-name users | jq -r '.Table.LatestStreamArn' The consumer.yaml Deployment manifest looks like this: apiVersion: apps/v1 kind: Deployment metadata: name: dynamodb-streams-kcl-consumer-app spec: replicas: 1 selector: matchLabels: app: dynamodb-streams-kcl-consumer-app template: metadata: labels: app: dynamodb-streams-kcl-consumer-app spec: serviceAccountName: dynamodb-streams-kcl-consumer-app-sa containers: - name: dynamodb-streams-kcl-consumer-app image: AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/dynamodb-streams-kcl-consumer-app:latest imagePullPolicy: Always env: - name: TARGET_TABLE_NAME value: users_replica - name: APPLICATION_NAME value: dynamodb-streams-kcl-app-demo - name: SOURCE_TABLE_STREAM_ARN value: <enter ARN> - name: AWS_REGION value: us-east-1 - name: INSTANCE_NAME valueFrom: fieldRef: fieldPath: metadata.name Create the Deployment: kubectl apply -f consumer.yaml # verify Pod transition to Running state kubectl get pods -w DynamoDB Streams Consumer App Autoscaling in Action With KEDA Now that you've deployed the consumer application, the KCL adapter library should jump into action. The first thing it will do is create a "control table" in DynamoDB - it should be the same as the name of the application (which in this case is dynamodb-streams-kcl-app-demo). It might take a few minutes for the initial co-ordination to happen and the table to get created. You can check the logs of the consumer application to see the progress. 
kubectl logs -f $(kubectl get po -l=app=dynamodb-streams-kcl-consumer-app --output=jsonpath={.items..metadata.name}) Once the lease allocation is complete, check the table and note the leaseOwner attribute: aws dynamodb scan --table-name dynamodb-streams-kcl-app-demo

Add Data to the DynamoDB Table Now that you've deployed the consumer application, let's add data to the source DynamoDB table (users). You can use the producer.sh script for this: export TABLE_NAME=users ./producer.sh Check the consumer logs to see the messages being processed: kubectl logs -f $(kubectl get po -l=app=dynamodb-streams-kcl-consumer-app --output=jsonpath={.items..metadata.name}) Check the target table (users_replica) to confirm that the DynamoDB Streams consumer application has indeed replicated the data: aws dynamodb scan --table-name users_replica Notice the value of the processed_by attribute? It's the same as the name of the consumer application Pod. This will make it easier for us to verify the end-to-end autoscaling process.

Create the KEDA Scaler Apply the scaler definition: kubectl apply -f keda-dynamodb-streams-scaler.yaml Here is the ScaledObject definition. Notice that it's targeting the dynamodb-streams-kcl-consumer-app Deployment (the one we just created) and that shardCount is set to 2: apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: aws-dynamodb-streams-scaledobject spec: scaleTargetRef: name: dynamodb-streams-kcl-consumer-app triggers: - type: aws-dynamodb-streams metadata: awsRegion: us-east-1 tableName: users shardCount: "2" identityOwner: "operator" A note on the shardCount attribute: We are using a shardCount value of 2. This is very important to note since we are using the DynamoDB Streams Kinesis adapter library with KCL 1.x, which supports "up to 2 simultaneous consumers per shard." This means that you cannot have more than two consumer application instances processing the same DynamoDB stream shard. However, this KEDA scaler configuration will ensure that there is one Pod for every two shards. So, for example, if there are four shards, the application will be scaled out to two Pods. If there are six shards, there will be three Pods, and so on. Of course, you can choose to have one Pod for every shard by setting shardCount to 1. To keep track of the number of shards in the DynamoDB stream, you can run the following command (I have used the jq utility here): aws dynamodbstreams describe-stream --stream-arn $(aws dynamodb describe-table --table-name users | jq -r '.Table.LatestStreamArn') | jq -r '.StreamDescription.Shards | length' If you want the shard details: aws dynamodbstreams describe-stream --stream-arn $(aws dynamodb describe-table --table-name users | jq -r '.Table.LatestStreamArn') | jq -r '.StreamDescription.Shards'

Verify DynamoDB Streams Consumer Application Auto-Scaling We started off with one Pod of our application. But, thanks to KEDA, we should now see additional Pods coming up automatically to match the processing requirements of the consumer application. To confirm, check the number of Pods: kubectl get pods -l=app=dynamodb-streams-kcl-consumer-app Most likely, you will see four shards in the DynamoDB stream and two Pods. This can change (increase/decrease) depending on the rate at which data is produced to the DynamoDB table. Just like before, validate the data in the DynamoDB target table (users_replica) and note the processed_by attribute.
Since we have scaled out to additional Pods, the value should be different for each record, because each Pod processes a subset of the messages from the DynamoDB change stream. Also, make sure to check the dynamodb-streams-kcl-app-demo control table in DynamoDB. You should see an update to the leaseOwner attribute, reflecting the fact that there are now two Pods consuming from the DynamoDB stream. Once you have verified the end-to-end solution, you can clean up the resources to avoid incurring any additional charges.

Delete Resources Delete the EKS cluster and the DynamoDB tables: eksctl delete cluster --name <enter cluster name> aws dynamodb delete-table --table-name users aws dynamodb delete-table --table-name users_replica

Conclusion Use cases you should experiment with: Scale further up - How can you make the DynamoDB stream increase its number of shards? What happens to the number of consumer instance Pods? Scale down - What happens when the number of shards in the DynamoDB stream decreases? In this post, we demonstrated how to use KEDA and DynamoDB Streams, combining two powerful techniques (Change Data Capture and auto-scaling) to build scalable, event-driven systems that can adapt based on the data processing needs of your application.
IoT manufacturers in every region have a host of data privacy standards and laws to comply with — and Europe is now adding one more. The Cyber Resilience Act, or CRA, has some aspects that are simply common sense and others that overlap with already existing standards. However, other aspects are entirely new and could present challenges to IoT manufacturers and providers. Let’s explore the act and consider how it will change the world of connected devices. The Basics of the Cyber Resilience Act The CRA lays out several specific goals that the act is intended to fulfill: Goal #1: Ensuring fewer vulnerabilities and better protection in IoT devices and products Goal #2: Bringing more responsibility for cybersecurity to the manufacturer Goal #3: Increasing transparency Goal #2 leads directly to the obligations laid out for manufacturers: Cybersecurity must be an aspect of every step of the device or software development life cycle. Risks need to be documented. Manufacturers must report, handle, and patch vulnerabilities for any devices that are sold for the product’s expected lifetime or for five years, whichever comes first. The manufacturer must provide clear and understandable instructions for any products with digital elements. So, to whom does the CRA apply? The answer is anyone who deals in IoT devices, software, manufacturing, development, etc. The standard lays the responsibility for vulnerabilities squarely at the foot of the manufacturers of the IoT device or software product in a way most other standards have yet to stipulate. However, the CRA does not affect all devices and manufacturers equally. Three main categories will dictate how manufacturers apply the standard. The first is the default category, which covers roughly 90 percent of IoT products. Devices and products in this category are non-critical, like smart speakers, non-essential software, etc. The default category doesn’t require a third-party assessment of adherence to the standard, so the category just provides a basis for self-assessment that allows manufacturers to establish best practices for product security. The second and third categories are Critical Class I and Critical Class II, which apply to roughly 10 percent of IoT products. Class I includes password managers, network interfaces, firewalls, microcontrollers, and more. In other words, the vendors and designers of MCUs and the other components included in Class I will have to comply with all of the requirements for that category. Class II is for operating systems, industrial firewalls, MPUs, etc. Again, that means the vendors who develop these operating systems and microprocessors will need to ensure they meet the specific requirements for Class II. The criteria that divide the classes are based on intended functionality (like whether the software will be used for security/access management), intended use (like whether it is for an industrial environment), and breach or vulnerability likelihood, among other criteria. Both Critical Class categories require a third-party assessment for compliance purposes. Importantly, there are penalties for non-compliance, which include the possibility of a full ban on the problematic product, as well as fines of €15 million or 2.5 percent of the annual turnover of the offending company, whichever is higher. Why This Act Matters The Cyber Resilience Act is part of a longstanding and ongoing endeavor by EU governing bodies to ensure a deeper level of cybersecurity in the EU. 
This endeavor is largely in response to a marked increase in ransomware and denial of service attacks since the pandemic and especially since the start of the Russia-Ukraine War. Still, the CRA overlaps with some other standards including the upcoming NIS2 Directive, which is the EU’s blanket cybersecurity legislation. Because of that, it’s easy to think that the CRA doesn’t have much to add, but it actually does. The act is broader than a typical IoT security standard because it also applies to software that is not embedded. That is to say, it applies to the software you might use on your desktop to interact with your IoT device, rather than just applying to the software on the device itself. Since non-embedded software is where many vulnerabilities take place, this is an important change. A second important change is the requirement for five years of security updates and vulnerability reporting. Few consumers who buy an IoT device expect regular software updates and security patches for that type of time range, but both will be a requirement under the CRA. The third important point of the standard is the requirement for some sort of reporting and alerting system for vulnerabilities so that consumers can report vulnerabilities, see the status of security and software updates for devices, and be warned of any risks. The CRA also requires that manufacturers notify the European Union Agency for Cybersecurity (ENISA) of a vulnerability within 24 hours of discovery. These requirements are intended to keep consumers’ data safe, but they will also allow manufacturers to avoid costly breaches. Prepare for Compliance Today The Cyber Resilience Act is in its early stages and, even when it is approved, manufacturers will have two years to comply. So, full compliance will probably not be obligatory until 2025 or 2026. However, that doesn’t mean you shouldn’t start preparing now. When the General Data Protection Regulation (GDPR) came into force in the EU, companies had to make major changes to their operations and the way they handled consumer data, advertising, cookies, and more. This act has the potential to be just as complex and revolutionary in changing the way IoT manufacturers and software providers manage security for their products. What can manufacturers do now to avoid penalties for non-compliance in the future? For starters, there are already technologies available that can help with CRA compliance. The reporting requirements of the EU Cyber Resilience Act are time-sensitive and penalties for non-compliance are high. This means manufacturers have a vested interest in developing efficient ways to communicate discovered vulnerabilities to both consumers and ENISA, as well as to patch those vulnerabilities as quickly as possible. As a result, IoT providers who can utilize a peer-to-peer communication platform that enables remote status reports, updates, and security patches will have a competitive advantage. Additionally, such a platform can allow IoT providers to set up push notifications and security alerts for consumers, enabling the highest level of transparency and communication in case a vulnerability is discovered. It’s also important to keep up with changes to the proposal as laid out by the European Commission, since the CRA as it appears in 2026 may not be the same as the initial draft of the standard. In fact, well-known companies like Microsoft are already making recommendations and critiques of the EU Cyber Resilience Act proposal. 
Many experts believe the CRA is too broad at the moment and too hard to apply, and that it needs stronger definitions, examples, and action plans to be truly effective. If these critiques are followed, compliance could become a bit less complex and a bit easier to understand in the future, so it would be wise to keep informed of any changes. Final Thoughts While the initial shift to CRA compliance may be challenging, various technologies and cybersecurity tools are already available to help. Integrating these tools and choosing to pursue the highest levels of security in your IoT products today will put you well on your way to fulfilling the requirements of the act before it is even in effect. Good luck.
This is an article from DZone's 2023 Database Systems Trend Report. For more: Read the Report.

Database design is a critical factor in microservices and cloud-native solutions because a microservices-based architecture results in distributed data. Instead of data management happening in a single process, multiple processes can manipulate the data. The rise of cloud computing has made data even more distributed. To deal with this complexity, several data management patterns have emerged for microservices and cloud-native solutions. In this article, we will look at the most important patterns that can help us manage data in a distributed environment.

The Challenges of Database Design for Microservices and the Cloud Before we dig into the specific data management patterns, it is important to understand the key challenges with database design for microservices and the cloud: In a microservices architecture, data is distributed across different nodes. Some of these nodes can be in different data centers in completely different geographic regions of the world. In this situation, it is tough to guarantee consistency of data across all the nodes. At any given point in time, there can be differences in the state of data between various nodes. This is also known as the problem of eventual consistency. Since the data is distributed, there's no central authority that manages data as in single-node monolithic systems. It's important for the various participating systems to use a mechanism (e.g., consensus algorithms) for data management. The attack surface for malicious actors is larger in a microservices architecture since there are multiple moving parts. This means we need to establish a more robust security posture while building microservices. The main promise of microservices and the cloud is scalability. While it becomes easier to scale the application processes, it is not so easy to scale the database nodes horizontally. Without proper scalability, databases can turn into performance bottlenecks.

Diving Into Data Management Patterns Considering the associated challenges, several patterns are available to manage data in microservices and cloud-native applications. The main job of these patterns is to help developers address the various challenges mentioned above. Let's look at each of these patterns one by one.

Database per Service As the name suggests, this pattern proposes that each microservice manages its own data. This implies that no microservice can directly access or manipulate the data managed by another microservice. Any exchange or manipulation of data can be done only by using a set of well-defined APIs. The figure below shows an example of the database-per-service pattern. Figure 1: Database-per-service pattern At face value, this pattern seems quite simple. It can be implemented relatively easily when we are starting with a brand-new application. However, when we are migrating an existing monolithic application to a microservices architecture, the demarcation between services is not so clear. Most of the functionality is written in a way where different parts of the system access data from other parts informally. There are two main areas we need to focus on when using the database-per-service pattern: defining bounded contexts for each service, and managing business transactions that span multiple microservices.

Shared Database The next important pattern is the shared database pattern.
Though this pattern supports microservices architecture, it adopts a much more lenient approach by using a shared database accessible to multiple microservices. For existing applications transitioning to a microservices architecture, this is a much safer pattern, as we can slowly evolve the application layer without changing the database design. However, this approach takes away some benefits of microservices: Developers across teams need to coordinate schema changes to tables. Runtime conflicts may arise when multiple services are trying to access the same database resources. CQRS and Event Sourcing In the command query responsibility segregation (CQRS) pattern, an application listens to domain events from other microservices and updates a separate database for supporting views and queries. We can then serve complex aggregation queries from this separate database while optimizing the performance and scaling it up as needed. Event sourcing takes it a bit further by storing the state of the entity or the aggregate as a sequence of events. Whenever we have an update or an insert on an object, a new event is created and stored in the event store. We can use CQRS and event sourcing together to solve a lot of challenges around event handling and maintaining separate query data. This way, you can scale the writes and reads separately based on their individual requirements. Figure 2: Event sourcing and CQRS in action together On the downside, this is an unfamiliar style of building applications for most developers, and there are more moving parts to manage. Saga Pattern The saga pattern is another solution for handling business transactions across multiple microservices. For example, placing an order on a food delivery app is a business transaction. In the saga pattern, we break this business transaction into a sequence of local transactions handled by different services. For every local transaction, the service that performs the transaction publishes an event. The event triggers a subsequent transaction in another service, and the chain continues until the entire business transaction is completed. If any particular transaction in the chain fails, the saga rolls back by executing a series of compensating transactions that undo the impact of all the previous transactions. There are two types of saga implementations: Orchestration-based saga Choreography-based saga Sharding Sharding helps in building cloud-native applications. It involves separating rows of one table into multiple different tables. This is also known as horizontal partitioning, but when the partitions reside on different nodes, they are known as shards. Sharding helps us improve the read and write scalability of the database. Also, it improves the performance of queries because a particular query must deal with fewer records as a result of sharding. Replication Replication is a very important data management pattern. It involves creating multiple copies of the database. Each copy is identical and runs on a different server or node. Changes made to one copy are propagated to the other copies. This is known as replication. There are several types of replication approaches, such as: Single-leader replication Multi-leader replication Leaderless replication Replication helps us achieve high availability and boosts reliability, and it lets us scale out read operations since read requests can be diverted to multiple servers. Figure 3 below shows sharding and replication working in combination. 
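Before the figure, here is a minimal, database-agnostic sketch of the same idea; the class, node names, and routing policy are purely illustrative. A client-side router hashes a key to a shard, sends writes to that shard's leader, and spreads reads across the shard's replicas:

Java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class ShardRouter {

    // Each shard has a single leader (for writes) and one or more replicas (for reads).
    public record Shard(String leader, List<String> replicas) {}

    private final List<Shard> shards;

    public ShardRouter(List<Shard> shards) {
        this.shards = shards;
    }

    // Hash-based horizontal partitioning: a given key always maps to the same shard.
    public Shard shardFor(String key) {
        return shards.get(Math.floorMod(key.hashCode(), shards.size()));
    }

    // Writes go to the shard's leader node.
    public String writeTarget(String key) {
        return shardFor(key).leader();
    }

    // Reads can be served by any replica of the shard, spreading the load.
    public String readTarget(String key) {
        List<String> replicas = shardFor(key).replicas();
        return replicas.get(ThreadLocalRandom.current().nextInt(replicas.size()));
    }
}

Real databases implement this routing, and the propagation of changes from leaders to replicas, internally; the sketch only shows how the two patterns complement each other.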
Figure 3: Using sharding and replication together Best Practices for Database Design in a Cloud-Native Environment While these patterns can go a long way in addressing data management issues in microservices and cloud-native architecture, we also need to follow some best practices to make life easier. Here are a few best practices: We must try to design a solution for resilience. This is because faults are inevitable in a microservices architecture, and the design should accommodate failures and recover from them without disrupting the business. We must implement proper migration strategies when transitioning to one of the patterns. Some of the common strategies that can be evaluated are schema first versus data first, blue-green deployments, or using the strangler pattern. Don't ignore backups and well-tested disaster recovery systems. These things are important even for single-node databases. However, in a distributed data management approach, disaster recovery becomes even more important. Constant monitoring and observability are equally important in microservices or cloud-native applications. For example, techniques like sharding can lead to unbalanced partitions and hotspots. Without proper monitoring solutions, any reactions to such situations may come too late and may put the business at risk. Conclusion We can conclude that good database design is absolutely vital in a microservices and cloud-native environment. Without proper design, an application will face multiple problems due to the inherent complexity of distributed data. Multiple data management patterns exist to help us deal with data in a more reliable and scalable manner. However, each pattern has its own challenges and set of advantages and disadvantages. No pattern fits all the possible scenarios, and we should select a particular pattern only after managing the various trade-offs. This is an article from DZone's 2023 Database Systems Trend Report.For more: Read the Report
This is an article from DZone's 2023 Database Systems Trend Report.For more: Read the Report Good database design is essential to ensure data accuracy, consistency, and integrity and that databases are efficient, reliable, and easy to use. The design must address the storing and retrieving of data quickly and easily while handling large volumes of data in a stable way. An experienced database designer can create a robust, scalable, and secure database architecture that meets the needs of modern data systems. Architecture and Design A modern data architecture for microservices and cloud-native applications involves multiple layers, and each one has its own set of components and preferred technologies. Typically, the foundational layer is constructed as a storage layer, encompassing one or more databases such as SQL, NoSQL, or NewSQL. This layer assumes responsibility for the storage, retrieval, and management of data, including tasks like indexing, querying, and transaction management. To enhance this architecture, it is advantageous to design a data access layer that resides above the storage layer but below the service layer. This data access layer leverages technologies like object-relational mapping or data access objects to simplify data retrieval and manipulation. Finally, at the topmost layer lies the presentation layer, where the information is skillfully presented to the end user. The effective transmission of data through the layers of an application, culminating in its presentation as meaningful information to users, is of utmost importance in a modern data architecture. The goal here is to design a scalable database with the ability to handle a high volume of traffic and data while minimizing downtime and performance issues. By following best practices and addressing a few challenges, we can meet the needs of today's modern data architecture for different applications. Figure 1: Layered architecture Considerations By taking into account the following considerations when designing a database for enterprise-level usage, it is possible to create a robust and efficient system that meets the specific needs of the organization while ensuring data integrity, availability, security, and scalability. One important consideration is the data that will be stored in the database. This involves assessing the format, size, complexity, and relationships between data entities. Different types of data may require specific storage structures and data models. For instance, transactional data often fits well with a relational database model, while unstructured data like images or videos may require a NoSQL database model. The frequency of data retrieval or access plays a significant role in determining the design considerations. In read-heavy systems, implementing a cache for frequently accessed data can enhance query response times. Conversely, the emphasis may be on lower data retrieval frequencies for data warehouse scenarios. Techniques such as indexing, caching, and partitioning can be employed to optimize query performance. Ensuring the availability of the database is crucial for maintaining optimal application performance. Techniques such as replication, load balancing, and failover are commonly used to achieve high availability. Additionally, having a robust disaster recovery plan in place adds an extra layer of protection to the overall database system. As data volumes grow, it is essential that the database system can handle increased loads without compromising performance. 
Employing techniques like partitioning, sharding, and clustering allows for effective scalability within a database system. These approaches enable the efficient distribution of data and workload across multiple servers or nodes. Data security is a critical consideration in modern database design, given the rising prevalence of fraud and data breaches. Implementing robust access controls, encryption mechanisms for sensitive personally identifiable information, and conducting regular audits are vital for enhancing the security of a database system. In transaction-heavy systems, maintaining consistency in transactional data is paramount. Many databases provide features such as appropriate locking mechanisms and transaction isolation levels to ensure data integrity and consistency. These features help to prevent issues like concurrent data modifications and inconsistencies. Challenges Determining the most suitable tool or technology for our database needs can be a challenge due to the rapid growth and evolving nature of the database landscape. With different types of databases emerging daily and even variations among vendors offering the same type, it is crucial to plan carefully based on your specific use cases and requirements. By thoroughly understanding our needs and researching the available options, we can identify the right tool with the appropriate features to meet our database needs effectively. Polyglot persistence is a consideration that arises from the demand of certain applications, leading to the use of multiple SQL or NoSQL databases. Selecting the right databases for transactional systems, ensuring data consistency, handling financial data, and accommodating high data volumes pose challenges. Careful consideration is necessary to choose the appropriate databases that can fulfill the specific requirements of each aspect while maintaining overall system integrity. Integrating data from different upstream systems, each with its own structure and volume, presents a significant challenge. The goal is to achieve a single source of truth by harmonizing and integrating the data effectively. This process requires comprehensive planning to ensure compatibility and future-proofing the integration solution to accommodate potential changes and updates. Performance is an ongoing concern in both applications and database systems. Every addition to the database system can potentially impact performance. To address performance issues, it is essential to follow best practices when adding, managing, and purging data, as well as properly indexing, partitioning, and implementing encryption techniques. By employing these practices, you can mitigate performance bottlenecks and optimize the overall performance of your database system. Considering these factors will contribute to making informed decisions and designing an efficient and effective database system for your specific requirements. Advice for Building Your Architecture Goals for a better database design should include efficiency, scalability, security, and compliance. In the table below, each goal is accompanied by its corresponding industry expectation, highlighting the key aspects that should be considered when designing a database for optimal performance, scalability, security, and compliance. GOALS FOR DATABASE DESIGN Goal Industry Expectation Efficiency Optimal performance and responsiveness of the database system, minimizing latency and maximizing throughput. Efficient handling of data operations, queries, and transactions. 
Scalability Ability to handle increasing data volumes, user loads, and concurrent transactions without sacrificing performance. Scalable architecture that allows for horizontal or vertical scaling to accommodate growth. Security Robust security measures to protect against unauthorized access, data breaches, and other security threats. Implementation of access controls, encryption, auditing mechanisms, and adherence to industry best practices and compliance regulations. Compliance Adherence to relevant industry regulations, standards, and legal requirements. Ensuring data privacy, confidentiality, and integrity. Implementing data governance practices and maintaining audit trails to demonstrate compliance. Table 1 When building your database architecture, it's important to consider several key factors to ensure the design is effective and meets your specific needs. Start by clearly defining the system's purpose, data types, volume, access patterns, and performance expectations. Consider clear requirements that provide clarity on the data to be stored and the relationships between the data entities. This will help ensure that the database design aligns with quality standards and conforms to your requirements. Also consider normalization, which enables efficient storage use by minimizing redundant data, improves data integrity by enforcing consistency and reliability, and facilitates easier maintenance and updates. Selecting the right database model or opting for polyglot persistence support is crucial to ensure the database aligns with your specific needs. This decision should be based on the requirements of your application and the data it handles. Planning for future growth is essential to accommodate increasing demand. Consider scalability options that allow your database to handle growing data volumes and user loads without sacrificing performance. Alongside growth, prioritize data protection by implementing industry-standard security recommendations and ensuring appropriate access levels are in place and encourage implementing IT security measures to protect the database from unauthorized access, data theft, and security threats. A good back-up system is a testament to the efficiency of a well-designed database. Regular backups and data synchronization, both on-site and off-site, provide protection against data loss or corruption, safeguarding your valuable information. To validate the effectiveness of your database design, test the model using sample data from real-world scenarios. This testing process will help validate the performance, reliability, and functionality of the database system you are using, ensuring it meets your expectations. Good documentation practices play a vital role in improving feedback systems and validating thought processes and implementations during the design and review phases. Continuously improving documentation will aid in future maintenance, troubleshooting, and system enhancement efforts. Primary and secondary keys contribute to data integrity and consistency. Use indexes to optimize database performance by indexing frequently queried fields and limiting the number of fields returned in queries. Regularly backing up the database protects against data loss during corruption, system failure, or other unforeseen circumstances. Data archiving and purging practices help remove infrequently accessed data, reducing the size of the active dataset. Proper error handling and logging aid in debugging, troubleshooting, and system maintenance. 
Regular maintenance is crucial for growing database systems. Plan and schedule regular backups, perform performance tuning, and stay up to date with software upgrades to ensure optimal database performance and stability. Conclusion Designing a modern data architecture that can handle the growing demands of today's digital world is not an easy job. However, if you follow best practices and take advantage of the latest technologies and techniques, it is very much possible to build a scalable, flexible, and secure database. It just requires the right mindset and your commitment to learning and improving with a proper feedback loop. Additional reading: Semantic Modeling for Data: Avoiding Pitfalls and Breaking Dilemmas by Panos Alexopoulos Learn PostgreSQL: Build and manage high-performance database solutions using PostgreSQL 12 and 13 by Luca Ferrari and Enrico Pirozzi Designing Data-Intensive Applications by Martin Kleppmann This is an article from DZone's 2023 Database Systems Trend Report.For more: Read the Report
In today's world, interacting with AI systems like ChatGPT has become an everyday experience. These AI systems can understand and respond to us in a more human-like way. But how do they do it? That's where prompt engineering comes in. Think of prompt engineering as the instruction manual for AI. It tells AI systems like ChatGPT how to understand what we want and respond appropriately. It's like giving clear directions to a helpful friend. In this guide, we're going to explore prompt engineering, with a special focus on how it combines with something called GenAI. GenAI is like the secret sauce that makes AI even smarter. By mixing GenAI with ChatGPT and prompt engineering, we can make AI understand and talk to us even better. Whether you're new to this world or an expert, this guide will show you the ropes. We'll dive into the tricks of prompt design, look at what's right and wrong, and share ways to make ChatGPT perform its best with GenAI. So, let's embark on this journey to make AI, like ChatGPT, even more amazing with the help of prompt engineering and GenAI. What Is Prompt Engineering? Prompt engineering is the art of crafting clear and precise instructions or inputs given to AI models, such as ChatGPT, to guide their responses effectively. It serves as the bridge between human communication and AI understanding. Imagine you're chatting with an AI chatbot, and you want it to tell you a joke. The prompt is the message you send to the chatbot, like saying, "Tell me a funny joke." It helps the chatbot understand your request and respond with a joke. In essence, prompt engineering ensures that AI knows what to do when you talk to it. Importance of Prompt Engineering Prompt engineering plays a pivotal role in AI interactions for several key reasons: Effective communication: Good prompt engineering ensures that users can communicate their needs clearly to AI models, leading to more accurate and relevant responses. Example: Asking ChatGPT, "Can you summarize the key points of the latest climate change report for a general audience?" is a clear prompt that conveys the desired task. Bias mitigation: Well-crafted prompts can help reduce biases in AI responses by guiding models to provide fair and unbiased answers. Example: Using a prompt like "Provide an overview of the benefits and drawbacks of various renewable energy sources" ensures a balanced response. Improved performance: Proper prompts can enhance the performance of AI models, making them more useful and accurate in delivering information or completing tasks. Example: When instructing ChatGPT to "Explain the principles of machine learning in simple terms," the prompt's clarity aids in effective communication. Ethical use: Prompt engineering plays a crucial role in ensuring that AI systems are used ethically and responsibly, avoiding harmful or inappropriate responses. Example: Instructing ChatGPT to "Avoid generating offensive content or engaging in harmful discussions" sets ethical boundaries. Customization: It allows users to customize AI responses to specific tasks or contexts, making the technology more versatile and adaptable. Example: Crafting a prompt like "Summarize the key findings of the research paper on sustainable agriculture" tailors the response to a specific task. Effective Prompts: What Works A well-constructed prompt is a vital ingredient in prompt engineering. 
Here's an example of an effective prompt: Good Prompt: "Explain the principles of thermodynamics and their applications in mechanical engineering, focusing on the concept of energy conservation and providing real-world examples." In this prompt, the following elements contribute to its effectiveness: Task definition: The task is well-defined (explaining thermodynamics principles and their applications). Field specification: The specific field of study is mentioned (mechanical engineering). Contextual clarity: The user's request for real-world examples adds clarity and context, making it an effective prompt.

Ineffective Prompts: What to Avoid Conversely, ineffective prompts can hinder prompt engineering efforts. Here's an example of a poorly constructed prompt: Bad Prompt: "Explain Thermodynamics?" This prompt exhibits one key shortcoming: Vagueness: It's too vague and lacks clarity. It doesn't specify what aspect of thermodynamics the user is interested in or what level of detail is expected. Consequently, it's unlikely to yield a meaningful or informative response in the context of technical education.

The Prompt Framework Prompt engineering is like providing a set of rules and guidelines to an AI system, enabling it to understand and execute tasks effectively. Think of it as having a manual that instructs you on how to communicate with a computer or AI system using words. This framework ensures that the AI comprehends your instructions, leading to accurate and desired outcomes. The framework essentially consists of four major principles: Subject: Define what you want the computer or AI to do. For example, if you want it to translate a sentence, you need to specify that. Example: "Emerging Quality Engineering technologies" Define the Task: Be clear about what you expect the computer or AI to achieve. If it's a summary, you should say that. Example: "Write me a blog on the Emerging Quality Engineering technologies" Clear Instruction: Give the computer clear and specific directions so it knows exactly what to do. Example: "The blog should be 500 to 700 words, in a persuasive and informative tone, and include at least seven benefits of the importance of Quality Engineers in today's tech world." Offering Context: Sometimes, you might need to provide additional information or context to help the computer understand your request better. Example: "Imagine you are creating this blog post for people looking to start/sustain their career in the field of Quality Engineering."

Prompt Framework Template (Copy/paste this template for further use.) Subject: [Subject] Task: [Task] Instruction: The [type of content] should be between [word count range], written in a [tone], and include at least [number] [specific details]. Context: Imagine you are creating this [type of content] targeting [target audience].

Prompt Framework Sample Subject: Emerging Quality Engineering Technologies Task: Write me a blog on the Emerging Quality Engineering Technologies Instruction: The blog post should be 500 to 700 words, in a persuasive and informative tone, and include at least seven benefits of the importance of Quality Engineers in today's tech world Context: Imagine you are creating this blog post for people looking to start/sustain their career in the field of Quality Engineering (Please use the above sample in ChatGPT to see amazing results that attract Quality Engineering techies.)

Prompt Iteration Ask ChatGPT questions until you have the information you need by rephrasing and giving more context. Why?
By asking more questions, you build a better understanding between AI and you for more accurate answers. When? When you have information but don’t know how to get the best answers or formulations yourself. Sample Prompt 1: "I am a manager looking to improve my team's collaboration and communication. Can you suggest some strategies to enhance teamwork and create a more productive work environment? If I need more clarification or ideas, I’ll ask further questions." We can continue the conversation by adding additional prompts to the same conversation by typing a new prompt or clicking on the edit icon near the previous prompt. Prompt 2: "I forgot to tell you that I am a manager in a tech organization." 4th Grader Explanation Explaining in a 4th-grade style involves breaking down information or concepts into simpler terms and providing straightforward and comprehensible descriptions to help others grasp the topic easily. For example, explain Chaos theory and its benefits. Before Using 4th Grader Chaos theory is a scientific concept that studies complex systems. It helps us understand how seemingly random events in nature might have some order or predictability. Its benefit is that it allows scientists to make sense of chaotic phenomena, from weather patterns to the behavior of molecules. After Using 4th Grader Chaos theory is like solving puzzles in nature. It helps scientists understand things that look messy and unpredictable. The cool part is it lets them find hidden patterns in things like weather and tiny particles. Follow-Up Questions Follow-up questions in prompt engineering refer to additional questions or prompts that are used to provide clarification or gather more specific information when interacting with AI models or systems. These questions are asked in a conversation to guide the AI's responses and ensure that the desired information or action is obtained. Scenario 1: Virtual Assistant for Travel Planning User Prompt: "I'd like to plan a vacation in December." AI Follow-up Question: "Great choice! Could you please specify your preferred dates and any specific activities or attractions you'd like to include in your itinerary?" Scenario 2: Content Generation User Prompt: "Write an article about renewable energy sources." AI Follow-up Question: "Sure, I can do that. Do you have a specific angle or focus in mind for the article, such as solar power, wind energy, or sustainable practices?" Scenario 3: Language Translation User Prompt: "Translate this text from English to Spanish: 'The quick brown fox jumps over the lazy dog.'" AI Follow-up Question: "Got it! Is there a particular dialect or region you'd like the translation to be in, or should I provide a standard translation?" Scenario 4: Code Generation User Prompt: "Write a Python code snippet to calculate the factorial of a number." AI Follow-up Question: "Sure, I can do that. Do you have a specific number you'd like to calculate the factorial for, or should I provide a general code template?" These examples illustrate how follow-up questions can help refine user requests and gather the necessary details to generate more tailored and accurate responses from AI systems. Teach Me Instruct ChatGPT to provide step-by-step instructions or explanations on various topics, helping users learn new skills or gain knowledge. "Teach me" requests typically seek clear and informative responses to specific questions or tasks. 
For Example "Teach me, as a professional video editor, how to use Premiere Pro to edit marketing videos step by step, excluding details about program installation or video importing." Write Like Me "Write like me" prompts instruct ChatGPT to mimic a specific writing style, whether it's a personal style or a brand's unique voice. This approach is valuable for maintaining a consistent brand identity and creating content that resonates with the intended audience. For Example "Write a cover letter for a marketing position using the same tone and language style found in my resume and previous cover letters." Conclusion Effective prompt engineering is essential for making AI work better and understand us. With the power of GenAI, we can take AI interactions to the next level. As you explore prompt engineering and AI, remember that you hold the key to making AI smarter. Together, we can bridge the gap between humans and machines, making AI not just smart but insightful. Thank you for joining us on this journey into the world of prompt engineering, where possibilities are limitless.
In part one of this two-part series, we looked at how walletless dApps smooth the web3 user experience by abstracting away the complexities of blockchains and wallets. Thanks to account abstraction from Flow and the Flow Wallet API, we can easily build walletless dApps that enable users to sign up with credentials that they're accustomed to using (such as social logins or email accounts). We began our walkthrough by building the backend of our walletless dApp. Here in part two, we'll wrap up our walkthrough by building the front end. Here we go! Create a New Next.js Application Let's use the Next.js framework so we have the frontend and backend in one application. On our local machine, we will use create-next-app to bootstrap our application. This will create a new folder for our Next.js application. We run the following command: Shell $ npx create-next-app flow_walletless_app Some options will appear; you can mark them as follows (or as you prefer!). Make sure to choose No for using Tailwind CSS and the App Router. This way, your folder structure and style references will match what I demo in the rest of this tutorial. Shell ✔ Would you like to use TypeScript with this project? ... Yes ✔ Would you like to use ESLint with this project? ... No ✔ Would you like to use Tailwind CSS with this project? ... No <-- IMPORTANT ✔ Would you like to use `src/` directory with this project? ... No ✔ Use App Router (recommended)? ... No <-- IMPORTANT ✔ Would you like to customize the default import alias? ... No Start the development server. Shell $ npm run dev The application will run on port 3001 because the default port (3000) is occupied by our wallet API running through Docker. Set Up Prisma for Backend User Management We will use the Prisma library as an ORM to manage our database. When a user logs in, we store their information in the database as a User entity. This contains the user's email, token, Flow address, and other information. The first step is to install the Prisma dependencies in our Next.js project: Shell $ npm install prisma --save-dev To use Prisma, we first need to initialize it in our project. Run the following command: Shell $ npx prisma init The above command will create two files: prisma/schema.prisma: The main Prisma configuration file, which will host the database configuration .env: Will contain the database connection URL and other environment variables Configure the Database Used by Prisma We will use SQLite as the database for our Next.js application. Open the schema.prisma file and change the datasource db settings as follows: Shell datasource db { provider = "sqlite" url = env("DATABASE_URL") } Then, in our .env file for the Next.js application, we will change the DATABASE_URL field. Because we're using SQLite, we need to define the location (which, for SQLite, is a file) where the database will be stored in our application: Shell DATABASE_URL="file:./dev.db" Create a User Model Models represent entities in our app. The model describes how the data should be stored in our database. Prisma takes care of creating tables and fields. Let's add the following User model in our schema.prisma file: Shell model User { id Int @id @default(autoincrement()) email String @unique name String? flowWalletJobId String? flowWalletAddress String? createdAt DateTime @default(now()) updatedAt DateTime @updatedAt } With our model created, we need to synchronize it with the database.
For this, Prisma offers a command: Shell $ npx prisma db push Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma Datasource "db": SQLite database "dev.db" at "file:./dev.db" SQLite database dev.db created at file:./dev.db -> Your database is now in sync with your Prisma schema. Done in 15ms After successfully pushing our users table, we can use Prisma Studio to track our database data. Run the command: Shell $ npx prisma studio Set up the Prisma Client That's it! Our entity and database configuration are complete. Now let's go to the client side. We need to install the Prisma client dependencies in our Next.js app. To do this, run the following command: Shell $ npm install @prisma/client Generate the client from the Prisma schema file: Shell $ npx prisma generate Create a folder named lib in the root folder of your project. Within that folder, create a file named prisma.ts. This file will host the client connection. Paste the following code into that file: TypeScript // lib/prisma.ts import { PrismaClient } from '@prisma/client'; let prisma: PrismaClient; if (process.env.NODE_ENV === "production") { prisma = new PrismaClient(); } else { let globalWithPrisma = global as typeof globalThis & { prisma: PrismaClient; }; if (!globalWithPrisma.prisma) { globalWithPrisma.prisma = new PrismaClient(); } prisma = globalWithPrisma.prisma; } export default prisma; Build the Next.js Application Frontend Functionality With the client connection finalized, we can move on to the visual part of our app! Open the pages/index.tsx file, delete all of its code, and paste in the following: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", }} > <button style={{ padding: "20px", width: 'auto' }}>Sign Up</button> <button style={{ padding: "20px" }}>Sign Out</button> </div> </div> </main> </> ); } In this way, we have the basics and the necessities to illustrate the creation of wallets and accounts! The next step is to configure the Google client to use the Google API to authenticate users. Set up Use of Google OAuth for Authentication We will need Google credentials. For that, open your Google console. Click Create Credentials and select the OAuth Client ID option. Choose Web Application as the application type and define a name for it. We will use the same name: flow_walletless_app. Add http://localhost:3001/api/auth/callback/google as the authorized redirect URI. Click on the Create button. A modal should appear with the Google credentials. We will need the Client ID and Client secret to use in our .env file shortly. Next, we'll add the next-auth package.
To do this, run the following command: Shell $ npm i next-auth Open the .env file and add the following new environment variables to it: Shell GOOGLE_CLIENT_ID= <GOOGLE CLIENT ID> GOOGLE_CLIENT_SECRET=<GOOGLE CLIENT SECRET> NEXTAUTH_URL=http://localhost:3001 NEXTAUTH_SECRET=<YOUR NEXTAUTH SECRET> Paste in your copied Google Client ID and Client Secret. The NextAuth secret can be generated via the terminal with the following command: Shell $ openssl rand -base64 32 Copy the result, which should be a random string of letters, numbers, and symbols. Use this as your value for NEXTAUTH_SECRET in the .env file. Configure NextAuth to Use Google Next.js allows you to create serverless API routes without creating a full backend server. Each file under api is treated like an endpoint. Inside the pages/api/ folder, create a new folder called auth. Then create a file in that folder, called [...nextauth].ts, and add the code below: TypeScript // pages/api/auth/[...nextauth].ts import NextAuth from "next-auth" import GoogleProvider from "next-auth/providers/google"; export default NextAuth({ providers: [ GoogleProvider({ clientId: process.env.GOOGLE_CLIENT_ID as string, clientSecret: process.env.GOOGLE_CLIENT_SECRET as string, }) ], }) Update _app.tsx file to use NextAuth SessionProvider Modify the _app.tsx file found inside the pages folder by adding the SessionProvider from the NextAuth library. Your file should look like this: TypeScript // pages/_app.tsx import "@/styles/globals.css"; import { SessionProvider } from "next-auth/react"; import type { AppProps } from "next/app"; export default function App({ Component, pageProps }: AppProps) { return ( <SessionProvider session={pageProps.session}> <Component {...pageProps} /> </SessionProvider> ); } Update the Main Page To Use NextAuth Functions Let us go back to our index.tsx file in the pages folder. We need to import the functions from the NextAuth library and use them to log users in and out. Our updated index.tsx file should look like this: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; import { useSession, signIn, signOut } from "next-auth/react"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { const { data: session } = useSession(); console.log("session data",session) const signInWithGoogle = () => { signIn(); }; const signOutWithGoogle = () => { signOut(); }; return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", }} > <button onClick={signInWithGoogle} style={{ padding: "20px", width: "auto" }}>Sign Up</button> <button onClick={signOutWithGoogle} style={{ padding: "20px" }}>Sign Out</button> </div> </div> </main> </> ); } Build the "Create User" Endpoint Let us now create a users folder underneath pages/api. Inside this new folder, create a file called index.ts.
This file is responsible for: Creating a user (first we check if this user already exists) Calling the Wallet API to create a wallet for this user Calling the Wallet API and retrieving the jobId data if the User entity does not yet have the address created These actions are performed within the handle function, which calls the checkWallet function. Paste the following snippet into your index.ts file: TypeScript // pages/api/users/index.ts import { User } from "@prisma/client"; import { BaseNextRequest, BaseNextResponse } from "next/dist/server/base-http"; import prisma from "../../../lib/prisma"; export default async function handle( req: BaseNextRequest, res: BaseNextResponse ) { const userEmail = JSON.parse(req.body).email; const userName = JSON.parse(req.body).name; try { const user = await prisma.user.findFirst({ where: { email: userEmail, }, }); if (user == null) { await prisma.user.create({ data: { email: userEmail, name: userName, flowWalletAddress: null, flowWalletJobId: null, }, }); } else { await checkWallet(user); } } catch (e) { console.log(e); } } const checkWallet = async (user: User) => { const jobId = user.flowWalletJobId; const address = user.flowWalletAddress; if (address != null) { return; } if (jobId != null) { const request: any = await fetch(`http://localhost:3000/v1/jobs/${jobId}`, { method: "GET", }); const jsonData = await request.json(); if (jsonData.state === "COMPLETE") { const address = await jsonData.result; await prisma.user.update({ where: { id: user.id, }, data: { flowWalletAddress: address, }, }); return; } if (jsonData.state === "FAILED") { const request: any = await fetch("http://localhost:3000/v1/accounts", { method: "POST", }); const jsonData = await request.json(); await prisma.user.update({ where: { id: user.id, }, data: { flowWalletJobId: jsonData.jobId, }, }); return; } } if (jobId == null) { const request: any = await fetch("http://localhost:3000/v1/accounts", { method: "POST", }); const jsonData = await request.json(); await prisma.user.update({ where: { id: user.id, }, data: { flowWalletJobId: jsonData.jobId, }, }); return; } }; POST requests to the api/users path will result in calling the handle function. We'll get to that shortly, but first, we need to create another endpoint for retrieving existing user information. Build the "Get User" Endpoint We'll create another file in the pages/api/users folder, called getUser.ts. This file is responsible for finding a user in our database based on their email. Copy the following snippet and paste it into getUser.ts: TypeScript // pages/api/users/getUser.ts import prisma from "../../../lib/prisma"; export default async function handle( req: { query: { email: string; }; }, res: any ) { try { const { email } = req.query; const user = await prisma.user.findFirst({ where: { email: email, }, }); return res.json(user); } catch (e) { console.log(e); } } And that's it! With these two files in the pages/api/users folder, we are ready for our Next.js application frontend to make calls to our backend. Add "Create User" and "Get User" Functions to Main Page Now, let's go back to the pages/index.tsx file to add the new functions that will make the requests to the backend.
Replace the contents of the index.tsx file with the following snippet: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; import { useSession, signIn, signOut } from "next-auth/react"; import { useEffect, useState } from "react"; import { User } from "@prisma/client"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { const { data: session } = useSession(); const [user, setUser] = useState<User | null>(null); const signInWithGoogle = () => { signIn(); }; const signOutWithGoogle = () => { signOut(); }; const getUser = async () => { const response = await fetch( `/api/users/getUser?email=${session?.user?.email}`, { method: "GET", } ); const data = await response.json(); setUser(data); return data?.flowWalletAddress != null ? true : false; }; console.log(user) const createUser = async () => { await fetch("/api/users", { method: "POST", body: JSON.stringify({ email: session?.user?.email, name: session?.user?.name }), }); }; useEffect(() => { if (session) { getUser(); createUser(); } }, [session]); return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", }} > {user ? ( <div> <h5 className={inter.className}>User Name: {user.name}</h5> <h5 className={inter.className}>User Email: {user.email}</h5> <h5 className={inter.className}>Flow Wallet Address: {user.flowWalletAddress ? user.flowWalletAddress : 'Creating address...'}</h5> </div> ) : ( <button onClick={signInWithGoogle} style={{ padding: "20px", width: "auto" }} > Sign Up </button> )} <button onClick={signOutWithGoogle} style={{ padding: "20px" }}> Sign Out </button> </div> </div> </main> </> ); } We have added two functions: getUser searches the database for a user matching the logged-in email. createUser creates a user or updates it if it does not have an address yet. We also added a useEffect that runs whenever the session changes. Once the user is logged in with their Google account, it calls getUser, which loads the user from our database (and returns true if the user already has a Flow address), and createUser, which performs the necessary checks and calls on the backend. Test Our Next.js Application Finally, we restart our Next.js application with the following command: Shell $ npm run dev You can now sign in with your Google account, and the app will make the necessary calls to our wallet API to create a Flow Testnet address! This is the first step in the walletless Flow process! By following these instructions, your app will create users and accounts in a way that is convenient for the end user. But the wallet API does not stop there. You can do much more with it, such as execute and sign transactions, run scripts to fetch data from the blockchain, and more. Conclusion Account abstraction and walletless onboarding in Flow offer developers a unique solution. By being able to delegate control over accounts, Flow allows developers to create applications that provide users with a seamless onboarding experience. This will hopefully lead to greater adoption of dApps and a new wave of web3 users.
The JVM is an excellent platform for monkey-patching. Monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. A monkey patch (also spelled monkey-patch, MonkeyPatch) is a way to extend or modify the runtime code of dynamic languages (e.g. Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, etc.) without altering the original source code. — Wikipedia I want to demo several approaches for monkey-patching in Java in this post. As an example, I'll use a sample for-loop. Imagine we have a class and a method. We want to call the method multiple times without doing it explicitly. The Decorator Design Pattern While the Decorator Design Pattern is not monkey-patching, it's an excellent introduction to it anyway. Decorator is a structural pattern described in the foundational book, Design Patterns: Elements of Reusable Object-Oriented Software. The decorator pattern is a design pattern that allows behavior to be added to an individual object, dynamically, without affecting the behavior of other objects from the same class. — Decorator pattern Our use case is a Logger interface with a dedicated console implementation. We can implement it in Java like this: Java public interface Logger { void log(String message); } public class ConsoleLogger implements Logger { @Override public void log(String message) { System.out.println(message); } } Here's a simple, configurable decorator implementation: Java public class RepeatingDecorator implements Logger { //1 private final Logger logger; //2 private final int times; //3 public RepeatingDecorator(Logger logger, int times) { this.logger = logger; this.times = times; } @Override public void log(String message) { for (int i = 0; i < times; i++) { //4 logger.log(message); } } } Must implement the interface Underlying logger Loop configuration Call the method as many times as necessary Using the decorator is straightforward: Java var logger = new ConsoleLogger(); var threeTimesLogger = new RepeatingDecorator(logger, 3); threeTimesLogger.log("Hello world!"); The Java Proxy The Java Proxy is a generic decorator that allows attaching dynamic behavior: Proxy provides static methods for creating objects that act like instances of interfaces but allow for customized method invocation. — Proxy Javadoc The Spring Framework uses Java Proxies a lot; the @Transactional annotation is a case in point. If you annotate a method, Spring creates a Java Proxy around the enclosing class at runtime. When you call it, Spring calls the proxy instead. Depending on the configuration, it opens the transaction or joins an existing one, then calls the actual method, and finally commits (or rolls back).
The API is simple; we can write the following handler: Java public class RepeatingInvocationHandler implements InvocationHandler { private final Logger logger; //1 private final int times; //2 public RepeatingInvocationHandler(Logger logger, int times) { this.logger = logger; this.times = times; } @Override public Object invoke(Object proxy, Method method, Object[] args) throws Exception { if (method.getName().equals("log") && args.length == 1 && args[0] instanceof String) { //3 for (int i = 0; i < times; i++) { method.invoke(logger, args[0]); //4 } } return null; } } Underlying logger Loop configuration Check every requirement is upheld Call the initial method on the underlying logger Here's how to create the proxy: Java var logger = new ConsoleLogger(); var proxy = (Logger) Proxy.newProxyInstance( //1-2 Main.class.getClassLoader(), new Class[]{Logger.class}, //3 new RepeatingInvocationHandler(logger, 3)); //4 proxy.log("Hello world!"); Create the Proxy object We must cast to Logger as the API was created before generics, and it returns an Object Array of interfaces the object needs to conform to Pass our handler Instrumentation Instrumentation is the capability of the JVM to transform bytecode before it loads it via a Java agent. Two Java agent flavors are available: Static, with the agent passed on the command line when you launch the application Dynamic, which allows connecting to a running JVM and attaching an agent to it via the Attach API. Note that it represents a huge security issue and has been drastically limited in the latest JDK. The Instrumentation API's surface is limited: it exposes the user to low-level bytecode manipulation via byte arrays. It would be unwieldy to do it directly. Hence, real-life projects rely on bytecode manipulation libraries. ASM has been the traditional library for this, but it seems that Byte Buddy has superseded it. Note that Byte Buddy uses ASM but provides a higher-level abstraction. The Byte Buddy API is outside the scope of this blog post, so let's dive directly into the code: Java public class Repeater { public static void premain(String arguments, Instrumentation instrumentation) { //1 var withRepeatAnnotation = isAnnotatedWith(named("ch.frankel.blog.instrumentation.Repeat")); //2 new AgentBuilder.Default() //3 .type(declaresMethod(withRepeatAnnotation)) //4 .transform((builder, typeDescription, classLoader, module, domain) -> builder //5 .method(withRepeatAnnotation) //6 .intercept( //7 SuperMethodCall.INSTANCE //8 .andThen(SuperMethodCall.INSTANCE) .andThen(SuperMethodCall.INSTANCE)) ).installOn(instrumentation); //3 } } Required signature; it's similar to the main method, with the added Instrumentation argument Match elements annotated with the @Repeat annotation. The DSL reads fluently even if you don't know it (I don't). Byte Buddy provides a builder to create the Java agent Match all types that declare a method with the @Repeat annotation Transform the class accordingly Transform methods annotated with @Repeat Replace the original implementation with the following Call the original implementation three times The next step is to create the Java agent package. A Java agent is a regular JAR with specific manifest attributes.
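A quick aside: both this agent and the AspectJ aspect shown in the next section match on a @Repeat annotation, whose definition the post doesn't show. A minimal sketch of such an annotation, assuming runtime retention (so that both the instrumentation and AOP approaches can see it at run-time), a times attribute defaulting to 3 to match the aspect's repeat.times() call, and a package matching the agent's named(...) matcher, could look like this: Java
package ch.frankel.blog.instrumentation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation read by the agent and the aspect
@Retention(RetentionPolicy.RUNTIME) // must survive until run-time to be matched
@Target(ElementType.METHOD)         // only methods can be repeated
public @interface Repeat {
    int times() default 3;          // how many times to invoke the annotated method
}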
Let's configure Maven to build the agent: XML <plugin> <artifactId>maven-assembly-plugin</artifactId> <!--1--> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> <!--2--> </descriptorRefs> <archive> <manifestEntries> <Premain-Class>ch.frankel.blog.instrumentation.Repeater</Premain-Class> <!--3--> </manifestEntries> </archive> </configuration> <executions> <execution> <goals> <goal>single</goal> </goals> <phase>package</phase> <!--4--> </execution> </executions> </plugin> Use the Maven Assembly plugin to package the agent Create a JAR containing all dependencies Set the Premain-Class manifest attribute to the agent's entry point class Bind the assembly to the package phase Testing is more involved, as we need two different codebases, one for the agent and one for the regular code with the annotation. Let's create the agent first: Shell mvn install We can then run the app with the agent: Shell java -javaagent:/Users/nico/.m2/repository/ch/frankel/blog/agent/1.0-SNAPSHOT/agent-1.0-SNAPSHOT-jar-with-dependencies.jar \ #1 -cp ./target/classes #2 ch.frankel.blog.instrumentation.Main #3 Run Java with the agent created in the previous step. The JVM will run the premain method of the class configured in the agent Configure the classpath Set the main class Aspect-Oriented Programming The idea behind AOP is to apply some code across different unrelated object hierarchies - cross-cutting concerns. It's a valuable technique in languages that don't allow traits, code you can graft on third-party objects/classes. Fun fact: I learned about AOP before Proxy. AOP relies on two main concepts: an aspect is the transformation applied to code, while a pointcut matches where the aspect applies. In Java, AOP's historical implementation is the excellent AspectJ library. AspectJ provides two approaches, known as weaving: build-time weaving, which transforms the compiled bytecode, and runtime weaving, which relies on the above instrumentation. Either way, AspectJ uses a specific format for aspects and pointcuts. Before Java 5, the format looked like Java but not quite; for example, it used the aspect keyword. With Java 5, one can use annotations in regular Java code to achieve the same goal. We need an AspectJ dependency: XML <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjrt</artifactId> <version>1.9.19</version> </dependency> Like Byte Buddy, AspectJ also uses ASM underneath. Here's the code: Java @Aspect //1 public class RepeatingAspect { @Pointcut("@annotation(repeat) && call(* *(..))") //2 public void callAt(Repeat repeat) {} //3 @Around("callAt(repeat)") //4 public Object around(ProceedingJoinPoint pjp, Repeat repeat) throws Throwable { //5 for (int i = 0; i < repeat.times(); i++) { //6 pjp.proceed(); //7 } return null; } } Mark this class as an aspect Define the pointcut; every call to a method annotated with @Repeat Bind the @Repeat annotation to the repeat name used in the pointcut above Define the aspect applied to the call site; it's an @Around, meaning that we need to call the original method explicitly The signature uses a ProceedingJoinPoint, which references the original method, as well as the @Repeat annotation Loop over as many times as configured Call the original method At this point, we need to weave the aspect. Let's do it at build-time.
For this, we can add the AspectJ build plugin: XML <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>aspectj-maven-plugin</artifactId> <executions> <execution> <goals> <goal>compile</goal> <!--1--> </goals> </execution> </executions> </plugin> Bind execution of the plugin to the compile phase To see the demo in effect: Shell mvn compile exec:java -Dexec.mainClass=ch.frankel.blog.aop.Main Java Compiler Plugin Last, it's possible to change the generated bytecode via a Java compiler plugin. Compiler hooks go back to the annotation-processing API introduced in Java 6 as JSR 269, and Java 8 added a dedicated javac Plugin API. From a bird's eye view, plugins involve hooking into the Java compiler to manipulate the AST in three phases: parse the source code into multiple ASTs, analyze them further into Elements, and potentially generate source code. The documentation could be less sparse; I found the Awesome Java Annotation Processing list helpful. A simplified class diagram can get you started. I'm too lazy to implement the same as above with such a low-level API. As the expression goes, this is left as an exercise to the reader. If you are interested, I believe the DocLint source code is a good starting point (a bare-bones plugin skeleton is also sketched after the reading list below). Conclusion I described several approaches to monkey-patching in Java in this post: the Proxy class, instrumentation via a Java Agent, AOP via AspectJ, and javac compiler plugins. To choose one over the other, consider the following criteria: build-time vs. runtime, complexity, native vs. third-party, and security concerns. To Go Further Monkey patch Guide to Java Instrumentation Byte Buddy Creating a Java Compiler Plugin Awesome Java Annotation Processing Maven AspectJ plugin
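As promised above, here is a bare-bones sketch of what such a compiler plugin could look like. Treat it as a starting point under a few assumptions: it uses the JDK's com.sun.source.util.Plugin API (available since Java 8), it has to be registered through a META-INF/services/com.sun.source.util.Plugin file and enabled with javac -Xplugin:RepeatPlugin, and it only reports methods annotated with @Repeat after the parse phase instead of rewriting their bodies, which is where the real, DocLint-style work would begin: Java
import com.sun.source.tree.MethodTree;
import com.sun.source.util.JavacTask;
import com.sun.source.util.Plugin;
import com.sun.source.util.TaskEvent;
import com.sun.source.util.TaskListener;
import com.sun.source.util.TreeScanner;

public class RepeatPlugin implements Plugin {

    @Override
    public String getName() {
        return "RepeatPlugin"; // the name referenced by -Xplugin:RepeatPlugin
    }

    @Override
    public void init(JavacTask task, String... args) {
        task.addTaskListener(new TaskListener() {
            @Override
            public void started(TaskEvent event) { /* nothing to do before a phase */ }

            @Override
            public void finished(TaskEvent event) {
                if (event.getKind() != TaskEvent.Kind.PARSE) {
                    return; // only inspect freshly parsed compilation units
                }
                // Walk the AST of the compilation unit and report annotated methods
                event.getCompilationUnit().accept(new TreeScanner<Void, Void>() {
                    @Override
                    public Void visitMethod(MethodTree method, Void unused) {
                        boolean repeated = method.getModifiers().getAnnotations().stream()
                                .anyMatch(a -> a.getAnnotationType().toString().endsWith("Repeat"));
                        if (repeated) {
                            // A real plugin would rewrite the method body here
                            System.out.println("Found @Repeat method: " + method.getName());
                        }
                        return super.visitMethod(method, unused);
                    }
                }, null);
            }
        });
    }
}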
In this article, we delve into the exciting realm of containerizing Helidon applications, followed by deploying them effortlessly to a Kubernetes environment. To achieve this, we'll harness the power of JKube's Kubernetes Maven Plugin, a versatile tool for deploying Java applications to Kubernetes that has recently been updated to version 1.14.0. What's exciting about this release is that it now supports the Helidon framework, a Java Microservices gem open-sourced by Oracle in 2018. If you're curious about Helidon, we've got some blog posts to get you up to speed: Building Microservices With Oracle Helidon Ultra-Fast Microservices: When MicroStream Meets Helidon Helidon: 2x Productivity With Microprofile REST Client In this article, we will closely examine the integration between JKube's Kubernetes Maven Plugin and Helidon. Here's a sneak peek of the exciting journey we'll embark on: We'll kick things off by generating a Maven application from Helidon Starter. Transform your Helidon application into a nifty Docker image. Craft Kubernetes YAML manifests tailored for your Helidon application. Apply those manifests to your Kubernetes cluster. We'll bundle those Kubernetes YAML manifests into a Helm Chart. We'll top it off by pushing that Helm Chart to a Helm registry. Finally, we'll deploy our Helidon application to Red Hat OpenShift. An exciting aspect worth noting is that JKube's Kubernetes Maven Plugin can be employed with previous versions of Helidon projects as well. The only requirement is to provide your custom image configuration. With this latest release, Helidon users can now easily generate opinionated container images. Furthermore, the plugin intelligently detects project dependencies and seamlessly incorporates Kubernetes health checks into the generated manifests, streamlining the deployment process. Setting up the Project You can either use an existing Helidon project or create a new one from Helidon Starter. If you're on JDK 17, use the 3.x version of Helidon. Otherwise, you can stick to Helidon 2.6.x, which works with older versions of Java. In the starter form, you can choose either Helidon SE or Helidon Microprofile, choose application type, and fill out basic details like project groupId, version, and artifactId. Once you've set up your project, you can add JKube's Kubernetes Maven Plugin to your pom.xml: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>1.14.0</version> </plugin> Here, the plugin version is set to 1.14.0, which is the latest version at the time of writing. You can check for the latest version on the Eclipse JKube releases page. Adding the plugin isn't strictly required if you want to execute it directly from some CI pipeline. You can just provide the fully qualified name of JKube's Kubernetes Maven Plugin while issuing some goals like this: Shell $ mvn org.eclipse.jkube:kubernetes-maven-plugin:1.14.0:resource Now that we've added the plugin to the project, we can start using it. Creating Container Image (JVM Mode) In order to build a container image, you do not need to provide any sort of configuration. First, you need to build your project. Shell $ mvn clean install Then, you just need to run the k8s:build goal of JKube's Kubernetes Maven Plugin. By default, it builds the image using the Docker build strategy, which requires access to a Docker daemon.
If you have access to a Docker daemon, run this command: Shell $ mvn k8s:build If you don't have access to any Docker daemon, you can also build the image using the Jib build strategy: Shell $ mvn k8s:build -Djkube.build.strategy=jib You will notice that Eclipse JKube has created an opinionated container image for your application based on your project configuration. Here are some key points about JKube's Kubernetes Maven Plugin to observe in this zero configuration mode: It used quay.io/jkube/jkube-java as a base image for the container image It added some labels to the container image (picked from pom.xml) It exposed some ports in the container image based on the project configuration It automatically copied relevant artifacts and libraries required to execute the jar in the container environment. Creating Container Image (Native Mode) In order to create a container image for the native executable, we need to generate the native executable first. To do that, let's build our project in the native-image profile (as specified in the Helidon GraalVM Native Image documentation): Shell $ mvn package -Pnative-image This creates a native executable file in the target folder of your project. In order to create a container image based on this executable, we just need to run the k8s:build goal and also specify the native-image profile: Shell $ mvn k8s:build -Pnative-image As in JVM mode, Eclipse JKube creates an opinionated container image, but it uses a lightweight base image (registry.access.redhat.com/ubi8/ubi-minimal) and exposes only the ports required by the application. Customizing Container Image as per Requirements Creating a container image with no configuration is a really nice way to get started. However, it might not suit everyone's use case. Let's take a look at how to configure various aspects of the generated container image. You can override basic aspects of the container image with some properties like these:
jkube.generator.name – Change the image name
jkube.generator.from – Change the base image
jkube.generator.tags – A comma-separated list of additional tags for the image
If you want more control, you can provide a complete XML configuration for the image in the plugin configuration section: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>${jkube.version}</version> <configuration> <images> <image> <name>${project.artifactId}:${project.version}</name> <build> <from>openjdk:11-jre-slim</from> <ports>8080</ports> <assembly> <mode>dir</mode> <targetDir>/deployments</targetDir> <layers> <layer> <id>lib</id> <fileSets> <fileSet> <directory>${project.basedir}/target/libs</directory> <outputDirectory>libs</outputDirectory> <fileMode>0640</fileMode> </fileSet> </fileSets> </layer> <layer> <id>app</id> <files> <file> <source>${project.basedir}/target/${project.artifactId}.jar</source> <outputDirectory>.</outputDirectory> </file> </files> </layer> </layers> </assembly> <cmd>java -jar /deployments/${project.artifactId}.jar</cmd> </build> </image> </images> </configuration> </plugin> The same is also possible by providing your own Dockerfile in the project base directory.
Kubernetes Maven Plugin automatically detects it and builds a container image based on its content: Dockerfile FROM openjdk:11-jre-slim COPY maven/target/helidon-quickstart-se.jar /deployments/ COPY maven/target/libs /deployments/libs CMD ["java", "-jar", "/deployments/helidon-quickstart-se.jar"] EXPOSE 8080 Pushing the Container Image to Quay.io: Once you've built a container image, you most likely want to push it to some public or private container registry. Before pushing the image, make sure you've renamed your image to include the registry name and registry user. If I want to push an image to Quay.io in the namespace of a user named rokumar, this is how I would need to rename my image: Shell $ mvn k8s:build -Djkube.generator.name=quay.io/rokumar/%a:%v %a and %v correspond to the project artifactId and project version. For more information, you can check the Kubernetes Maven Plugin Image Configuration documentation. Once we've built an image with the correct name, the next step is to provide credentials for our registry to JKube's Kubernetes Maven Plugin. We can provide registry credentials via the following sources: Docker login Local Maven Settings file (~/.m2/settings.xml) Inline, using the jkube.docker.username and jkube.docker.password properties Once you've configured your registry credentials, you can issue the k8s:push goal to push the image to your specified registry: Shell $ mvn k8s:push Generating Kubernetes Manifests In order to generate opinionated Kubernetes manifests, you can use the k8s:resource goal from JKube's Kubernetes Maven Plugin: Shell $ mvn k8s:resource It generates Kubernetes YAML manifests in the target directory: Shell $ ls target/classes/META-INF/jkube/kubernetes helidon-quickstart-se-deployment.yml helidon-quickstart-se-service.yml JKube's Kubernetes Maven Plugin automatically detects if the project contains the io.helidon:helidon-health dependency and adds liveness, readiness, and startup probes: YAML $ cat target/classes/META-INF/jkube/kubernetes/helidon-quickstart-se-deployment.yml | grep -A8 Probe livenessProbe: failureThreshold: 3 httpGet: path: /health/live port: 8080 scheme: HTTP initialDelaySeconds: 0 periodSeconds: 10 successThreshold: 1 -- readinessProbe: failureThreshold: 3 httpGet: path: /health/ready port: 8080 scheme: HTTP initialDelaySeconds: 0 periodSeconds: 10 successThreshold: 1 Applying Kubernetes Manifests JKube's Kubernetes Maven Plugin provides the k8s:apply goal, which is equivalent to the kubectl apply command. It just applies the resources generated by k8s:resource in the previous step. Shell $ mvn k8s:apply Packaging Helm Charts Helm has established itself as the de facto package manager for Kubernetes. You can package generated manifests into a Helm Chart and install it on some other cluster using the Helm CLI. You can generate a Helm Chart of the generated manifests using the k8s:helm goal. The interesting thing is that JKube's Kubernetes Maven Plugin doesn't rely on the Helm CLI for generating the chart. Shell $ mvn k8s:helm You'll notice the Helm Chart is generated in the target/jkube/helm/ directory: Shell $ ls target/jkube/helm/helidon-quickstart-se/kubernetes Chart.yaml helidon-quickstart-se-0.0.1-SNAPSHOT.tar.gz README.md templates values.yaml Pushing Helm Charts to Helm Registries Usually, after generating a Helm Chart locally, you would want to push it to some Helm registry. JKube's Kubernetes Maven Plugin provides the k8s:helm-push goal for achieving this task.
But first, we need to provide registry details in the plugin configuration: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>1.14.0</version> <configuration> <helm> <snapshotRepository> <name>ChartMuseum</name> <url>http://example.com/api/charts</url> <type>CHARTMUSEUM</type> <username>user1</username> </snapshotRepository> </helm> </configuration> </plugin> JKube's Kubernetes Maven Plugin supports pushing Helm Charts to ChartMuseum, Nexus, Artifactory, and OCI registries. You have to provide the applicable Helm repository type and URL. You can provide the credentials via environment variables, properties, or ~/.m2/settings.xml. Once everything is set up, you can run the k8s:helm-push goal to push the chart: Shell $ mvn k8s:helm-push -Djkube.helm.snapshotRepository.password=yourpassword Deploying To Red Hat OpenShift If you're deploying to Red Hat OpenShift, you can use JKube's OpenShift Maven Plugin to deploy your Helidon application to an OpenShift cluster. It contains some add-ons specific to OpenShift, like the S2I build strategy, support for Routes, etc. You also need to add JKube's OpenShift Maven Plugin to your pom.xml, perhaps in a separate profile: XML <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>${jkube.version}</version> </plugin> </plugins> </build> </profile> Then, you can deploy the application with a combination of these goals: Shell $ mvn oc:build oc:resource oc:apply -Popenshift Conclusion In this article, you learned how smoothly you can deploy your Helidon applications to Kubernetes using Eclipse JKube's Kubernetes Maven Plugin. We saw how effortless it is to package your Helidon application into a container image and publish it to some container image registry. We can alternatively generate Helm Charts of our Kubernetes YAML manifests and publish Helm Charts to some Helm registry. In the end, we learned about JKube's OpenShift Maven Plugin, which is specifically designed for Red Hat OpenShift users who want to deploy their Helidon applications to Red Hat OpenShift. You can find the code used in this blog post in this GitHub repository. In case you're interested in knowing more about Eclipse JKube, you can check these links: Documentation Github Issue Tracker StackOverflow YouTube Channel Twitter Gitter Chat
Agile estimation plays a pivotal role in Agile project management, enabling teams to gauge the effort, time, and resources necessary to accomplish their tasks. Precise estimations empower teams to efficiently plan their work, manage expectations, and make well-informed decisions throughout the project's duration. In this article, we delve into various Agile estimation techniques and best practices that enhance the accuracy of your predictions and pave the way for your team's success. The Essence of Agile Estimation Agile estimation is an ongoing, iterative process that takes place at different levels of detail, ranging from high-level release planning to meticulous sprint planning. The primary objective of Agile estimation is to provide just enough information for teams to make informed decisions without expending excessive time on analysis and documentation. Designed to be lightweight, collaborative, and adaptable, Agile estimation techniques enable teams to rapidly adjust their plans as new information emerges or priorities shift. Prominent Agile Estimation Techniques 1. Planning Poker Planning Poker is a consensus-driven estimation technique that employs a set of cards with pre-defined numerical values, often based on the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.). Each team member selects a card representing their estimate for a specific task, and all cards are revealed simultaneously. If there is a significant discrepancy in estimates, team members deliberate their reasoning and repeat the process until a consensus is achieved. 2. T-Shirt Sizing T-shirt sizing is a relative estimation technique that classifies tasks into different "sizes" according to their perceived complexity or effort, such as XS, S, M, L, and XL. This method allows teams to swiftly compare tasks and prioritize them based on their relative size. Once tasks are categorized, more precise estimation techniques can be employed if needed. 3. User Story Points User story points serve as a unit of measurement to estimate the relative effort required to complete a user story. This technique entails assigning a point value to each user story based on its complexity, risk, and effort, taking into account factors such as workload, uncertainty, and potential dependencies. Teams can then use these point values to predict the number of user stories they can finish within a given timeframe. 4. Affinity Estimation Affinity Estimation is a technique that involves grouping tasks or user stories based on their similarities in terms of effort, complexity, and size. This method helps teams quickly identify patterns and relationships among tasks, enabling them to estimate more efficiently. Once tasks are grouped, they can be assigned a relative point value or size category. 5. Wideband Delphi The Wideband Delphi method is a consensus-based estimation technique that involves multiple rounds of anonymous estimation and feedback. Team members individually provide estimates for each task, and then the estimates are shared anonymously with the entire team. Team members discuss the range of estimates and any discrepancies before submitting revised estimates in subsequent rounds. This process continues until a consensus is reached. Risk Management in Agile Estimation Identify and Assess Risks Incorporate risk identification and assessment into your Agile estimation process. Encourage team members to consider potential risks associated with each task or user story, such as technical challenges, dependencies, or resource constraints. 
By identifying and assessing risks early on, your team can develop strategies to mitigate them, leading to more accurate estimates and a smoother project execution. Assign Risk Factors Assign risk factors to tasks or user stories based on their level of uncertainty or potential impact on the project. These risk factors can be numerical values or qualitative categories (e.g., low, medium, high) that help your team prioritize tasks and allocate resources effectively. Incorporating risk factors into your estimates can provide a more comprehensive understanding of the work involved and help your team make better-informed decisions. Risk-Based Buffering Include risk-based buffering in your Agile estimation process by adding contingency buffers to account for uncertainties and potential risks. These buffers can be expressed as additional time, resources, or user story points, and they serve as a safety net to ensure that your team can adapt to unforeseen challenges without jeopardizing the project's success. Monitor and Control Risks Continuously monitor and control risks throughout the project lifecycle by regularly reviewing your risk assessments and updating them as new information becomes available. This proactive approach allows your team to identify emerging risks and adjust their plans accordingly, ensuring that your estimates remain accurate and relevant. Learn From Risks Encourage your team to learn from the risks encountered during the project and use this knowledge to improve their estimation and risk management practices. Conduct retrospective sessions to discuss the risks faced, their impact on the project, and the effectiveness of the mitigation strategies employed. By learning from past experiences, your team can refine its risk management approach and enhance the accuracy of future estimates. By incorporating risk management into your Agile estimation process, you can help your team better anticipate and address potential challenges, leading to more accurate estimates and a higher likelihood of project success. This approach also fosters a culture of proactive risk management and continuous learning within your team, further enhancing its overall effectiveness and adaptability. Best Practices for Agile Estimation Foster Team Collaboration Efficient Agile estimation necessitates input from all team members, as each individual contributes unique insights and perspectives. Promote open communication and collaboration during estimation sessions to ensure everyone's opinions are considered and to cultivate a shared understanding of the tasks at hand. Utilize Historical Data Draw upon historical data from previous projects or sprints to inform your estimations. Examining past performance can help teams identify trends, patterns, and areas for improvement, ultimately leading to more accurate predictions in the future. Velocity and Capacity Planning Incorporate team velocity and capacity planning into your Agile estimation process. Velocity is a measure of the amount of work a team can complete within a given sprint or iteration, while capacity refers to the maximum amount of work a team can handle. By considering these factors, you can ensure that your estimates align with your team's capabilities and avoid overcommitting to work. Break Down Large Tasks Large tasks or user stories can be challenging to estimate accurately. Breaking them down into smaller, more manageable components can make the estimation process more precise and efficient. 
Additionally, this approach helps teams better understand the scope and complexity of the work involved, leading to more realistic expectations and improved planning. Revisit Estimates Regularly Agile estimation is a continuous process, and teams should be prepared to revise their estimates as new information becomes available or circumstances change. Periodically review and update your estimates to ensure they remain accurate and pertinent throughout the project lifecycle. Acknowledge Uncertainty Agile estimation recognizes the inherent uncertainty in software development. Instead of striving for flawless predictions, focus on providing just enough information to make informed decisions and be prepared to adapt as necessary. Establish a Baseline Create a baseline for your estimates by selecting a well-understood task or user story as a reference point. This baseline can help teams calibrate their estimates and ensure consistency across different tasks and projects. Pursue Continuous Improvement Consider Agile estimation as an opportunity for ongoing improvement. Reflect on your team's estimation accuracy and pinpoint areas for growth. Experiment with different techniques and practices to discover what works best for your team and refine your approach over time. Conclusion Agile estimation is a vital component of successful Agile project management. By employing the appropriate techniques and adhering to best practices, teams can enhance their ability to predict project scope, effort, and duration, resulting in more effective planning and decision-making. Keep in mind that Agile estimation is an iterative process, and teams should continuously strive to learn from their experiences and refine their approach for even greater precision in the future.
This is an article from DZone's 2023 Automated Testing Trend Report.For more: Read the Report Artificial intelligence (AI) has revolutionized the realm of software testing, introducing new possibilities and efficiencies. The demand for faster, more reliable, and efficient testing processes has grown exponentially with the increasing complexity of modern applications. To address these challenges, AI has emerged as a game-changing force, revolutionizing the field of automated software testing. By leveraging AI algorithms, machine learning (ML), and advanced analytics, software testing has undergone a remarkable transformation, enabling organizations to achieve unprecedented levels of speed, accuracy, and coverage in their testing endeavors. This article delves into the profound impact of AI on automated software testing, exploring its capabilities, benefits, and the potential it holds for the future of software quality assurance. An Overview of AI in Testing This introduction aims to shed light on the role of AI in software testing, focusing on key aspects that drive its transformative impact. Figure 1: AI in testing Elastically Scale Functional, Load, and Performance Tests AI-powered testing solutions enable the effortless allocation of testing resources, ensuring optimal utilization and adaptability to varying workloads. This scalability ensures comprehensive testing coverage while maintaining efficiency. AI-Powered Predictive Bots AI-powered predictive bots are a significant advancement in software testing. Bots leverage ML algorithms to analyze historical data, patterns, and trends, enabling them to make informed predictions about potential defects or high-risk areas. By proactively identifying potential issues, predictive bots contribute to more effective and efficient testing processes. Automatic Update of Test Cases With AI algorithms monitoring the application and its changes, test cases can be dynamically updated to reflect modifications in the software. This adaptability reduces the effort required for test maintenance and ensures that the test suite remains relevant and effective over time. AI-Powered Analytics of Test Automation Data By analyzing vast amounts of testing data, AI-powered analytical tools can identify patterns, trends, and anomalies, providing valuable information to enhance testing strategies and optimize testing efforts. This data-driven approach empowers testing teams to make informed decisions and uncover hidden patterns that traditional methods might overlook. Visual Locators Visual locators, a type of AI application in software testing, focus on visual elements such as user interfaces and graphical components. AI algorithms can analyze screenshots and images, enabling accurate identification of and interaction with visual elements during automated testing. This capability enhances the reliability and accuracy of visual testing, ensuring a seamless user experience. Self-Healing Tests AI algorithms continuously monitor test execution, analyzing results and detecting failures or inconsistencies. When issues arise, self-healing mechanisms automatically attempt to resolve the problem, adjusting the test environment or configuration. This intelligent resilience minimizes disruptions and optimizes the overall testing process. What Is AI-Augmented Software Testing? AI-augmented software testing refers to the utilization of AI techniques — such as ML, natural language processing, and data analytics — to enhance and optimize the entire software testing lifecycle. 
It involves automating test case generation, intelligent test prioritization, anomaly detection, predictive analysis, and adaptive testing, among other tasks. By harnessing the power of AI, organizations can improve test coverage, detect defects more efficiently, reduce manual effort, and ultimately deliver high-quality software with greater speed and accuracy. Benefits of AI-Powered Automated Testing AI-powered software testing offers a plethora of benefits that revolutionize the testing landscape. One significant advantage lies in its codeless nature, thus eliminating the need to memorize intricate syntax. Embracing simplicity, it empowers users to effortlessly create testing processes through intuitive drag-and-drop interfaces. Scalability becomes a reality as the workload can be efficiently distributed among multiple workstations, ensuring efficient utilization of resources. The cost-saving aspect is remarkable as minimal human intervention is required, resulting in substantial reductions in workforce expenses. With tasks executed by intelligent bots, accuracy reaches unprecedented heights, minimizing the risk of human errors. Furthermore, this automated approach amplifies productivity, enabling testers to achieve exceptional output levels. Irrespective of the software type — be it a web-based desktop application or mobile application — the flexibility of AI-powered testing seamlessly adapts to diverse environments, revolutionizing the testing realm altogether. Figure 2: Benefits of AI for test automation Mitigating the Challenges of AI-Powered Automated Testing AI-powered automated testing has revolutionized the software testing landscape, but it is not without its challenges. One of the primary hurdles is the need for high-quality training data. AI algorithms rely heavily on diverse and representative data to perform effectively. Therefore, organizations must invest time and effort in curating comprehensive and relevant datasets that encompass various scenarios, edge cases, and potential failures. Another challenge lies in the interpretability of AI models. Understanding why and how AI algorithms make specific decisions can be critical for gaining trust and ensuring accurate results. Addressing this challenge requires implementing techniques such as explainable AI, model auditing, and transparency. Furthermore, the dynamic nature of software environments poses a challenge in maintaining AI models' relevance and accuracy. Continuous monitoring, retraining, and adaptation of AI models become crucial to keeping pace with evolving software systems. Additionally, ethical considerations, data privacy, and bias mitigation should be diligently addressed to maintain fairness and accountability in AI-powered automated testing. AI models used in testing can sometimes produce false positives (incorrectly flagging a non-defect as a defect) or false negatives (failing to identify an actual defect). Balancing precision and recall of AI models is important to minimize false results. AI models can exhibit biases and may struggle to generalize new or uncommon scenarios. Adequate training and validation of AI models are necessary to mitigate biases and ensure their effectiveness across diverse testing scenarios. Human intervention plays a critical role in designing test suites by leveraging their domain knowledge and insights. 
They can identify critical test cases, edge cases, and scenarios that require human intuition or creativity, while leveraging AI to handle repetitive or computationally intensive tasks. Continuous improvement would be possible by encouraging a feedback loop between human testers and AI systems. Human experts can provide feedback on the accuracy and relevance of AI-generated test cases or predictions, helping improve the performance and adaptability of AI models. Human testers should play a role in the verification and validation of AI models, ensuring that they align with the intended objectives and requirements. They can evaluate the effectiveness, robustness, and limitations of AI models in specific testing contexts. AI-Driven Testing Approaches AI-driven testing approaches have ushered in a new era in software quality assurance, revolutionizing traditional testing methodologies. By harnessing the power of artificial intelligence, these innovative approaches optimize and enhance various aspects of testing, including test coverage, efficiency, accuracy, and adaptability. This section explores the key AI-driven testing approaches, including differential testing, visual testing, declarative testing, and self-healing automation. These techniques leverage AI algorithms and advanced analytics to elevate the effectiveness and efficiency of software testing, ensuring higher-quality applications that meet the demands of the rapidly evolving digital landscape: Differential testing assesses discrepancies between application versions and builds, categorizes the variances, and utilizes feedback to enhance the classification process through continuous learning. Visual testing utilizes image-based learning and screen comparisons to assess the visual aspects and user experience of an application, thereby ensuring the integrity of its look and feel. Declarative testing expresses the intention of a test using a natural or domain-specific language, allowing the system to autonomously determine the most appropriate approach to execute the test. Self-healing automation automatically rectifies element selection in tests when there are modifications to the user interface (UI), ensuring the continuity of reliable test execution. Key Considerations for Harnessing AI for Software Testing Many contemporary test automation tools infused with AI provide support for open-source test automation frameworks such as Selenium and Appium. AI-powered automated software testing encompasses essential features such as auto-code generation and the integration of exploratory testing techniques. Open-Source AI Tools To Test Software When selecting an open-source testing tool, it is essential to consider several factors. Firstly, it is crucial to verify that the tool is actively maintained and supported. Additionally, it is critical to assess whether the tool aligns with the skill set of the team. Furthermore, it is important to evaluate the features, benefits, and challenges presented by the tool to ensure they are in line with your specific testing requirements and organizational objectives. 
A few popular open-source options include, but are not limited to:

- Carina – an AI-driven, forever-free, scriptless approach to automating functional, performance, visual, and compatibility tests
- TestProject – offered the industry's first free Appium AI tools in 2021, expanding on the AI tools for Selenium it had introduced in 2020 for self-healing technology
- Cerberus Testing – a low-code, scalable test automation solution that offers a self-healing feature called Erratum and has a forever-free plan

Designing Automated Tests With AI and Self-Testing

AI has made significant strides in transforming automated testing, offering a range of techniques and applications for software quality assurance. Some of the prominent techniques and algorithms are listed below, along with the purposes they serve.

Table 1: Key Techniques and Applications of AI in Automated Testing

- Machine learning – analyze large volumes of testing data, identify patterns, and make predictions for test optimization, anomaly detection, and test case generation
- Natural language processing – facilitate intelligent chatbots, voice-based testing interfaces, and natural language test case generation
- Computer vision – analyze image and visual data in areas such as visual testing, UI testing, and defect detection
- Reinforcement learning – optimize test execution strategies, generate adaptive test scripts, and dynamically adjust test scenarios based on feedback from the system under test

Table 2: Key Algorithms Used for AI-Powered Automated Testing

- Clustering algorithms (purpose: segmentation) – k-means and hierarchical clustering are used to group similar test cases, identify patterns, and detect anomalies
- Sequence generation models such as recurrent neural networks or transformers (purpose: text classification and sequence prediction) – trained to generate sequences such as test scripts or sequences of user interactions for log analysis
- Bayesian networks (purpose: modeling dependencies and relationships between variables) – test coverage analysis, defect prediction, and risk assessment
- Convolutional neural networks (purpose: image analysis) – visual testing
- Evolutionary algorithms such as genetic algorithms (purpose: natural selection) – optimize test case generation, test suite prioritization, and test execution strategies by applying genetic operators like mutation and crossover to existing test cases to create new variants, which are then evaluated against fitness criteria
- Decision trees, random forests, support vector machines, and neural networks (purpose: classification) – classification of software components
- Variational autoencoders and generative adversarial networks (purpose: generative AI) – generate new test cases that cover different scenarios or edge cases through test data generation, creating synthetic data that resembles real-world conditions

Real-World Examples of AI-Powered Automated Testing

AI-powered visual testing platforms perform automated visual validation of web and mobile applications. They use computer vision algorithms to compare screenshots and identify visual discrepancies, enabling efficient visual testing across multiple platforms and devices (a minimal screenshot-comparison sketch follows at the end of this section). NLP and ML are combined to generate test cases from plain English descriptions; such tools automatically execute the test cases, detect bugs, and provide actionable insights to improve software quality.
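As referenced above, here is a deliberately simple, non-ML screenshot comparison, meant only to illustrate the kind of pixel-level signal that AI-powered visual testing platforms build on; it assumes the Pillow imaging library, and the file names and tolerance value are hypothetical:

```python
# Simple baseline for screenshot comparison (not an AI model): flag a visual
# discrepancy when any pixel channel differs beyond a tolerance.
# File names and tolerance are hypothetical examples.

from PIL import Image, ImageChops

def visual_diff(baseline_path: str, current_path: str, tolerance: int = 0) -> bool:
    """Return True if the two screenshots differ beyond the per-channel tolerance."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")

    if baseline.size != current.size:
        print(f"Size changed: {baseline.size} -> {current.size}")
        return True

    diff = ImageChops.difference(baseline, current)
    max_channel_delta = max(upper for _, upper in diff.getextrema())
    if max_channel_delta > tolerance:
        print(f"Visual discrepancy detected (max channel delta = {max_channel_delta})")
        print(f"Changed region bounding box: {diff.getbbox()}")
        return True
    return False

if __name__ == "__main__":
    # Hypothetical screenshots captured before and after a UI change
    visual_diff("checkout_baseline.png", "checkout_current.png", tolerance=10)
```

Production visual testing layers computer vision on top of this kind of raw diff to ignore acceptable rendering variation (anti-aliasing, dynamic content) and to localize the changes that actually matter.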
Such platforms also provide self-healing capabilities, automatically adapting test cases to changes in the application's UI and improving test maintenance efficiency.

Quantum AI-Powered Automated Testing: The Road Ahead

The future of quantum AI-powered automated software testing holds great potential for transforming how testing is conducted.

Figure 3: Transition of automated testing from AI to Quantum AI

Quantum computing's ability to handle complex optimization problems could significantly improve test case generation, test suite optimization, and resource allocation. Quantum ML algorithms could enable more sophisticated and accurate models for anomaly detection, regression testing, and predictive analytics. The capacity to perform parallel computations could greatly accelerate the execution of complex test scenarios and large-scale test suites. Quantum algorithms could also enhance security testing by efficiently simulating and analyzing cryptographic algorithms and protocols, while quantum simulation capabilities could be leveraged to model complex systems, enabling more realistic and comprehensive testing of software in domains such as finance, healthcare, and transportation.

Parting Thoughts

AI has significantly changed the traditional testing landscape, enhancing the effectiveness, efficiency, and reliability of software quality assurance. AI-driven techniques such as ML, anomaly detection, NLP, and intelligent test prioritization enable higher test coverage, earlier defect detection, streamlined test script creation, and adaptive test maintenance. Integrating AI into automated testing not only accelerates the testing process but also improves overall software quality, leading to greater customer satisfaction and reduced time to market. As AI continues to mature, it holds real potential for further advances in automated testing, pointing toward a future in which AI-driven approaches become the norm for delivering robust, high-quality software. Embracing AI in automated testing is both a strategic imperative and a competitive advantage for organizations looking to thrive in today's rapidly evolving technological landscape.

This is an article from DZone's 2023 Automated Testing Trend Report. For more: Read the Report