Developer Experience
With tech stacks becoming increasingly diverse, and AI and automation continuing to take over everyday tasks and manual workflows, the tech industry at large is experiencing heightened demand to support engineering teams. As a result, the developer experience is changing faster than organizations can consciously maintain it.

We can no longer rely on DevOps practices or tooling alone; there is even greater power in improving workflows, investing in infrastructure, and advocating for developers' needs. This nuanced approach brings developer experience to the forefront, where devs can begin to regain control over their software systems, teams, and processes.

We are happy to introduce DZone's first-ever Developer Experience Trend Report, which assesses where the developer experience stands today, including team productivity, process satisfaction, infrastructure, and platform engineering. Taking all perspectives, technologies, and methodologies into account, we share our research and industry experts' perspectives on what it means to effectively advocate for developers while balancing quality and efficiency. Come along with us as we explore this exciting chapter in developer culture.
It is exciting how different disciplines can be merged to make processes more efficient. In 2009, the term DevOps was coined to address the friction between the development and operations teams. As a result, the industry moved toward bringing both teams together so that the development team became responsible for the entire cycle, from writing code to production deployment. After all, who would better understand a system's intricacies than the people who built it? Since this shift, we have seen features shipped more rapidly and the time to market for new features drop dramatically. DevOps also served as the foundation for many other practices, such as MLOps, DataOps, GitOps, and undoubtedly more to come. Today we will talk about one such practice that many of you might be familiar with: DevSecOps (development, security, and operations).

So, what is DevSecOps? Security has traditionally been treated as an afterthought, with teams shipping features to production first and then scrambling to deploy security remediations during a security review or an incident. With the surge in cybersecurity, supply chain, and other sophisticated attacks, the industry quickly realized that, like development and operations, security should also be part of the process. It should be embedded into the development lifecycle as early as possible, because addressing security later can be expensive (from both an architecture and an operations standpoint).

Now, let's discuss how security can be applied at various stages of the development lifecycle so that the code we ship is not only efficient but also secure. Shipping a feature usually involves:

- Development – where the feature is built
- Distribution – where the artifacts are created and prepared for delivery
- Deployment – where the feature is released into the production environment

Let's discuss the steps we can take to enhance the security posture of the feature we're building during the development phase. In this phase, a feature goes through a design review, coding, and then a pull request review. As part of the design review, the feature owner discusses what the API contracts look like, which databases are used, indexing and caching strategies, the user experience, and so on (not an exhaustive list). In a security-first culture, it is also important to perform threat modeling.

Perform Threat Modeling

Simply put, threat modeling is the process of identifying vulnerabilities, performing a risk assessment, and implementing the recommended mitigations so that the product's or organization's security posture is not compromised. Let's take an example to understand this. Imagine you are developing an API that:

- Lists products in your product catalog
- Searches for a product or product type

```
GET /api/products?search=laptop
```

A threat model can look something like this:

| Functionality | Vulnerability | Risk assessment | Recommended mitigation |
|---|---|---|---|
| Unauthenticated users can search for products | Threat actors can perform DDoS (distributed denial-of-service) attacks, overwhelming the database and API infrastructure | High – can bring down the service and reduce availability | Use an API gateway or a rate limiter to prevent DDoS attacks. |
| The user inputs a query string for the search field | An attacker can perform a SQL injection attack, for example by inserting "1=1" | High – the attacker can delete the production table | Make sure proper validation/sanitization is performed on the input. |
| The user receives product details | Exposing internal fields such as database IDs, status codes, and version numbers could give attackers critical information about the structure of the database or underlying system | Medium – the attacker can use these internal implementation details for information gathering | Only send the details the user requires. |

These are just some of the threats to consider when looking at the product endpoints.
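The second row of the table recommends validating and sanitizing the search input; in practice, the standard defense is to bind user input as query parameters rather than concatenating it into SQL. Below is a small illustrative Java sketch (not from the original article); the products table and its name column are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ProductSearchDao {

    private final Connection connection;

    public ProductSearchDao(Connection connection) {
        this.connection = connection;
    }

    // The search term is bound as a parameter, never concatenated into the SQL string,
    // so input such as "' OR 1=1 --" is treated as data rather than as executable SQL.
    public List<String> searchProducts(String searchTerm) throws SQLException {
        String sql = "SELECT name FROM products WHERE name LIKE ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, "%" + searchTerm + "%");
            try (ResultSet rs = stmt.executeQuery()) {
                List<String> results = new ArrayList<>();
                while (rs.next()) {
                    results.add(rs.getString("name"));
                }
                return results;
            }
        }
    }
}
```

Combined with input length limits and rate limiting at the gateway, parameterized queries address the highest-risk rows of the threat model above.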
The best part is that you don't need to be a security expert to recognize these vulnerabilities. Tools like the Microsoft Threat Modeling Tool and OWASP Threat Dragon can help identify them.

Example of a threat model in the Microsoft Threat Modeling Tool analysis view: the analysis view shows the different attacks that can be performed against the API. Reviewing the threat model with your team can act as a brainstorming session to identify and mitigate as many security gaps as possible.

Use Static Analysis

Static analyzers can automatically flag common security issues as the code is written, for example:

- Weak cryptographic usage. For example, use of SHA-1 or MD5 is considered weak. CA530 is an example of a warning that C# code analysis raises when a weak hash function is used in the code.
- SQL injection risks. CA2100 is an example of a rule that checks whether the code is susceptible to injection attacks.
- Hardcoded passwords, weak authentication and authorization mechanisms, and infrastructure misconfigurations can also be detected with static analyzers.

There are established tools in this space: SonarQube, CodeQL, Roslyn analyzers, OWASP Dependency-Check, and Snyk all have great offerings. Integrating static analysis into build pipelines is also important. It offers advantages like:

- A consistent vulnerability detection experience for your developers.
- An improved security posture for your service, because every production deployment must go through these checks.

Code Reviews From a Security Standpoint

While code reviews traditionally focus on identifying bugs and enforcing best practices, it is important to evaluate each change from a security perspective as well. By considering the security implications of each pull request, you can proactively prevent future threats and safeguard the integrity of your application.

Conclusion

To conclude, with the growing sophistication of the cybersecurity landscape, it is important to consider security in the early phases instead of leaving it to the end. As part of that, incorporate threat modeling and automated static analysis into your regular development cycle. In upcoming articles, we will discuss the security practices we can incorporate during distribution, including scanning container images, dynamic application security testing (DAST), and artifact signing to protect the service from supply chain attacks.
API development should be about solving business problems, not repeating the same tedious tasks over and over again. Yet, for many developers, API creation is still bogged down by inefficiencies — from writing boilerplate code to manually managing integrations, security, and documentation. The traditional approach forces developers to spend too much time on infrastructure and maintenance instead of focusing on what actually matters: delivering scalable, reliable APIs that support business needs. It doesn't have to be this way. Martini eliminates the bottlenecks with a low-code, API-first approach that accelerates development while maintaining full flexibility. Here's why traditional API development is slowing you down — and how Martini changes the game.

Why API Development Wastes So Much Time

Developers building APIs encounter the same challenges, over and over again:

- Boilerplate overload – Authentication, error handling, logging, and data transformation require writing the same repetitive code.
- Manual integrations – Connecting APIs to databases, third-party services, and internal applications is time-consuming and often requires custom scripting.
- Security overhead – Implementing OAuth, API keys, role-based access control, and rate limiting adds complexity that must be manually configured.
- Slow iteration cycles – Updating APIs for changing business needs can introduce risks, requiring extra testing and maintenance.

Traditional API development is slower than it should be, not because APIs are inherently complex, but because most tools don't optimize for speed and efficiency.

How Martini Eliminates These Bottlenecks

1. No More Boilerplate
Martini removes the need to manually code repetitive API components. Instead of spending time on auth layers, logging, and request handling, developers can focus on business logic. Authentication, security policies, and request transformations are built in — no need to reinvent the wheel.

2. Automatic API Generation
Why manually create REST and GraphQL APIs when you can generate them instantly? Martini automatically turns data models and services into fully functional APIs with endpoints that are ready to use.

3. API-First Design Without the Overhead
API-first development is great in theory, but in practice, it often means writing OpenAPI specs manually before anything gets built. Martini streamlines this process by visually defining APIs first, generating OpenAPI contracts automatically, and allowing front-end and back-end teams to work in parallel without waiting for implementation.

4. Seamless Integrations Without Custom Code
Connecting APIs to third-party services or internal databases shouldn't require a deep dive into documentation and manual scripting. Martini provides native connectors to popular services and allows custom integrations via APIs, SDKs, and webhooks — without unnecessary complexity.

5. Security Built In, Not Bolted On
Most API security configurations require extra development work, but Martini enforces best practices out of the box. Built-in OAuth, JWT authentication, role-based access, and rate limiting ensure enterprise-grade security without additional coding.

APIs Without the Implementation Headache

One of Martini's biggest strengths is abstracting APIs from their implementation. Developers don't need to worry about whether an API should be REST, GraphQL, or another format — it just works.
- The same low-code services can be instantly exposed as REST and GraphQL APIs without additional effort.
- APIs can be modified or extended without rewriting core business logic.
- Changes to API structures don't break existing integrations, ensuring backward compatibility.

With Martini, developers move faster, APIs remain flexible, and businesses can adapt to change without disruption.

Real-World Use Cases: How Developers Are Saving Time

Teams using Martini are accelerating API development without compromising security or control.

- Internal and external APIs are deployed in minutes instead of weeks.
- Third-party integrations are seamless, eliminating the need for complex middleware.
- Faster API iteration cycles allow updates without breaking existing consumers.

Instead of getting stuck in the maintenance cycle, developers can focus on delivering high-quality APIs at speed.

Rethink API Development

Stop wasting time on manual API setup, redundant coding, and integration headaches. Martini empowers developers to build, secure, and deploy APIs faster — without losing flexibility or control.
PostgreSQL logical replication provides the power and organization behind a pgEdge replication cluster, allowing you to replicate tables selectively and, on a more granular level, the changes in those tables. Whether you're using pgEdge Distributed PostgreSQL replication for real-time analytics, low latency, or high availability, tuning your replication configuration and queries helps you optimize for performance, consistency, and reliability. Logical replication is a powerful tool for replicating data between databases; unlike physical replication, it gives you more control and flexibility over what data is replicated and how it's used. This blog explores queries that make it easier to manage logical replication for your PostgreSQL database.

Monitoring Postgres Logical Replication Status

Monitoring the status of your logical replication setup is critical to ensure that replication is running smoothly. Querying the pg_stat_subscription view can help you monitor the status of all of the subscriptions in your database:

```sql
SELECT
    subname AS subscription_name,
    pid AS process_id,
    usename AS user_name,
    application_name,
    client_addr AS client_address,
    state,
    sync_state,
    sent_lsn,
    write_lsn,
    flush_lsn,
    replay_lsn,
    clock_timestamp() - write_lsn_timestamp AS replication_delay
FROM pg_stat_subscription
ORDER BY subscription_name;
```

```
 subscription_name | process_id | user_name | application_name | client_address |   state   | sync_state | sent_lsn  | write_lsn | flush_lsn | replay_lsn | replication_delay
-------------------+------------+-----------+------------------+----------------+-----------+------------+-----------+-----------+-----------+------------+-------------------
 sub1              |      23456 | postgres  | logical_rep_sub  | 192.168.1.10   | streaming | synced     | 0/3000128 | 0/3000128 | 0/3000128 | 0/3000128  | 00:00:00.12345
 sub2              |      23478 | postgres  | logical_rep_sub  | 192.168.1.11   | catchup   | async      | 0/4000238 | 0/4000200 | 0/40001F8 | 0/40001E0  | 00:00:02.67890
```

- subname – The name of the subscription.
- state – The state of the subscription process (e.g., streaming, catchup, initializing).
- sync_state – The synchronization state of the subscription.
- sent_lsn, write_lsn, flush_lsn, replay_lsn – The Log Sequence Numbers (LSNs) that indicate replication progress.
- replication_delay – The delay between the LSN being written and its application on the subscriber; crucial for identifying lag in replication.

This query provides a comprehensive overview of the logical replication status, allowing you to quickly identify issues such as replication lag or disconnected subscribers.

Analyzing Postgres Replication Lag

Understanding replication lag is essential in maintaining the consistency and freshness of data across your replicated databases.
The pg_replication_slots system view can help you calculate the replication lag between the publisher and subscriber:

```sql
SELECT
    s.slot_name,
    s.active,
    s.restart_lsn,
    pg_wal_lsn_diff(pg_current_wal_lsn(), s.restart_lsn) AS replication_lag_bytes,
    clock_timestamp() - pg_last_xact_replay_timestamp() AS replication_lag_time
FROM pg_replication_slots s
WHERE s.active = true
  AND s.plugin = 'pgoutput';
```

```
 slot_name | active | restart_lsn | replication_lag_bytes | replication_lag_time
-----------+--------+-------------+-----------------------+----------------------
 slot1     | t      | 0/3000128   |                 65536 | 00:00:00.12345
 slot2     | t      | 0/4000238   |                131072 | 00:00:02.67890
```

- slot_name – The name of the replication slot being used.
- replication_lag_bytes – The difference in bytes between the current WAL position on the publisher and the last WAL position acknowledged by the subscriber.
- replication_lag_time – The time difference between the last transaction replayed on the subscriber and the current time.

This query helps you assess the size-based and time-based lag in your logical replication, enabling you to take proactive measures if the lag exceeds acceptable thresholds.

Monitoring Replication Slot Usage

Replication slots are critical in logical replication, ensuring that WAL segments are retained until all subscribers have processed them. You can query the pg_replication_slots view to monitor the use of replication slots:

```sql
SELECT
    slot_name,
    plugin,
    slot_type,
    active,
    confirmed_flush_lsn,
    pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS slot_lag_bytes
FROM pg_replication_slots
WHERE slot_type = 'logical';
```

```
 slot_name | plugin   | slot_type | active | confirmed_flush_lsn | slot_lag_bytes
-----------+----------+-----------+--------+---------------------+----------------
 slot1     | pgoutput | logical   | t      | 0/3000128           |          65536
 slot2     | pgoutput | logical   | t      | 0/4000238           |         131072
```

- slot_name – The name of the replication slot.
- slot_lag_bytes – The lag in bytes between the current WAL position and the last position confirmed as flushed by the slot.

Monitoring replication slot usage is crucial for preventing issues related to WAL segment retention, which could potentially lead to disk space exhaustion on the publisher.

Dropping Unused Replication Slots

Over time, you may accumulate unused replication slots, especially after removing subscribers or changing replication configurations. These unused slots cause unnecessary retention of WAL files, leading to wasted disk space. The following block identifies and drops unused replication slots:

```sql
DO $$
DECLARE
    slot_record RECORD;
BEGIN
    FOR slot_record IN
        SELECT slot_name FROM pg_replication_slots WHERE active = false
    LOOP
        EXECUTE format('SELECT pg_drop_replication_slot(%L)', slot_record.slot_name);
    END LOOP;
END $$;
```

This block iterates over your inactive replication slots and uses the pg_drop_replication_slot management function to drop them. Regularly cleaning up unused replication slots keeps your database efficient and prevents potential issues with WAL file retention.

Creating Replication Slots

If you need to create a new logical replication slot, the following query is useful:

```sql
SELECT * FROM pg_create_logical_replication_slot('my_slot', 'pgoutput');
```

```
 slot_name | xlog_position
-----------+---------------
 my_slot   | 0/3000128
```

This query uses the pg_create_logical_replication_slot function to create a new logical replication slot with the specified name and output plugin (pgoutput in our example).
The query is useful when setting up new logical replication configurations; use it to confirm that the subscriber can start receiving changes from the correct point in the WAL records.

Optimizing Logical Replication With pglogical

If you're using the pglogical extension for more advanced logical replication capabilities, the following query can help you check the status of all pglogical subscriptions:

```sql
SELECT
    subscription_name,
    status,
    received_lsn,
    replay_lag,
    last_received_change,
    pending_changes
FROM pglogical.show_subscription_status();
```

```
 subscription_name | status      | received_lsn | replay_lag   | last_received_change | pending_changes
-------------------+-------------+--------------+--------------+----------------------+-----------------
 sub_pglogical1    | replicating | 0/3000128    | 00:00:01.234 | 2024-08-22 10:30:00  |               5
 sub_pglogical2    | idle        | 0/4000238    | 00:00:00.000 | 2024-08-22 10:29:30  |               0
```

- subscription_name – The name of the pglogical subscription.
- replay_lag – The lag between the last received change and the current time.
- pending_changes – The number of changes pending application on the subscriber.

This query provides a detailed overview of your pglogical subscriptions, helping you fine-tune replication settings and troubleshoot issues.

Conclusion

pgEdge Distributed PostgreSQL uses logical replication across your cluster, providing greater control and flexibility over precisely what data is replicated and how that data is stored. pgEdge continues to develop versatile tooling that offers fine-grained control over data replication processes. The queries outlined in this blog can help you effectively monitor, manage, and optimize your logical replication clusters. They help ensure data consistency, minimize replication lag, and prevent conflicts, all essential for maintaining a robust and reliable database environment. As you continue to work with logical replication, consider incorporating these queries into your regular monitoring and maintenance routines to ensure your PostgreSQL databases and pgEdge clusters perform optimally.
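As a concrete starting point for that kind of routine monitoring, the slot-lag query above can be run on a schedule from application code. Below is a minimal sketch, not part of the original article, using plain JDBC; the connection details and lag threshold are hypothetical, and it assumes the standard PostgreSQL JDBC driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReplicationLagCheck {

    // Threshold above which a slot is considered to be lagging (10 MB, arbitrary example value).
    private static final long LAG_THRESHOLD_BYTES = 10L * 1024 * 1024;

    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://publisher-host:5432/postgres"; // hypothetical connection details
        String sql = """
            SELECT slot_name,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS slot_lag_bytes
            FROM pg_replication_slots
            WHERE slot_type = 'logical'""";

        try (Connection conn = DriverManager.getConnection(url, "postgres", "secret");
             PreparedStatement stmt = conn.prepareStatement(sql);
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                String slot = rs.getString("slot_name");
                long lagBytes = rs.getLong("slot_lag_bytes");
                // Report slots whose retained WAL exceeds the threshold.
                if (lagBytes > LAG_THRESHOLD_BYTES) {
                    System.out.printf("WARNING: slot %s is lagging by %d bytes%n", slot, lagBytes);
                } else {
                    System.out.printf("OK: slot %s lag is %d bytes%n", slot, lagBytes);
                }
            }
        }
    }
}
```

A check like this could be wired into an existing scheduler or metrics exporter so that sustained lag above the threshold raises an alert.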
Hey, DZone Community! We have an exciting year ahead of research for our beloved Trend Reports. And once again, we are asking for your insights and expertise (anonymously if you choose) — readers just like you drive the content we cover in our Trend Reports. Check out the details for our research survey below.

Comic by Daniel Stori

Generative AI Research

Generative AI is revolutionizing industries, and software development is no exception. At DZone, we're diving deep into how GenAI models, algorithms, and implementation strategies are reshaping the way we write code and build software. Take our short research survey (~10 minutes) to contribute to our latest findings. We're exploring key topics, including:

- Embracing generative AI (or not)
- Multimodal AI
- The influence of LLMs
- Intelligent search
- Emerging tech

And don't forget to enter the raffle for a chance to win an e-gift card of your choice!

Join the GenAI Research

Over the coming month, we will compile and analyze data from hundreds of respondents; results and observations will be featured in the "Key Research Findings" of our Trend Reports. Your responses help inform the narrative of our Trend Reports, so we truly cannot do this without you. Stay tuned for each report's launch and see how your insights align with the larger DZone Community. We thank you in advance for your help!

—The DZone Content and Community team
AWS Step Functions Local is a useful tool for testing workflows without deploying them to the cloud. It allows developers to run state machines locally using Docker, enabling faster iteration and debugging. However, while testing our Step Function locally, we encountered significant limitations, particularly when trying to mock an http:endpoint task.

Issue: HTTP Task Mocking Not Supported in Step Functions Local

During our local testing, we successfully mocked various AWS services, such as:

- AWS Lambda
- Parallel execution tasks
- Choice states

However, when attempting to mock an http:endpoint task, we encountered the following error:

```json
{
  "Type": "ExecutionFailed",
  "PreviousEventId": 7,
  "ExecutionFailedEventDetails": {
    "Error": "States.Runtime",
    "Cause": "An error occurred while scheduling the state 'Create ******'. The provided ARN 'arn:aws:states:us-east-1:123456789012:http:invoke' is invalid. Please refer to Integrating optimized services with Step Functions - AWS Step Functions for valid service ARNs."
  }
}
```

This error indicates that AWS Step Functions Local does not support http:endpoint tasks, despite being able to mock other AWS-integrated services.

AWS Support's Response

We reached out to AWS Support regarding this limitation, and they confirmed that:

- Step Functions Local is outdated and does not fully support all features available in the cloud.
- The HTTP Task feature is not supported in the current local version.
- AWS has no specific timeline for when a new version will support this feature.
- AWS recommends using the Test State API for testing state machines and HTTP Task states instead of relying on Step Functions Local.

Other Unsupported Services in Step Functions Local

If you are relying on Step Functions Local for testing, keep in mind that several AWS services may not be fully supported or mockable. Some of these include:

- HTTP endpoints (http:endpoint task)
- DynamoDB Streams
- Certain AWS SDK integrations
- EventBridge and SNS in some scenarios

Alternative Approaches

Given these limitations, here are some alternative strategies for testing AWS Step Functions:

1. Use the AWS Test State API

AWS recently introduced the Test State API, which allows you to test individual states within a state machine. Example 3 in the official AWS documentation provides guidance on testing HTTP Task states.

2. Deploy to a Sandbox AWS Account

If you need to test full Step Function execution, consider deploying it to a dedicated AWS sandbox account. This ensures that all services are available while keeping costs low.

3. Use Local AWS Mocks (Where Applicable)

For services like Lambda, S3, and DynamoDB, you can use:

- LocalStack for simulating AWS services
- Moto (for Python) or AWS SDK mocks (for JavaScript) to mock API responses

Final Thoughts

While AWS Step Functions Local is useful for basic testing, it does not fully support all AWS services and integrations. If your workflow relies on http:endpoint or other unsupported services, you may need to use the Test State API, deploy to AWS for testing, or explore alternative mocking strategies. Have you faced similar limitations while testing AWS Step Functions locally? Let us know in the comments!
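For reference, here is a rough sketch of how the Test State API mentioned above might be called from the AWS SDK for Java v2 to exercise an HTTP Task state in isolation. The state definition, role ARN, and connection ARN are placeholders, and the builder and method names are assumptions based on the service's TestState operation, so verify them against your SDK version.

```java
import software.amazon.awssdk.services.sfn.SfnClient;
import software.amazon.awssdk.services.sfn.model.TestStateRequest;
import software.amazon.awssdk.services.sfn.model.TestStateResponse;

public class HttpTaskStateCheck {

    public static void main(String[] args) {
        // Amazon States Language definition of the single HTTP Task state to test
        // (endpoint and connection ARN are placeholders).
        String stateDefinition = """
            {
              "Type": "Task",
              "Resource": "arn:aws:states:::http:invoke",
              "Parameters": {
                "ApiEndpoint": "https://example.com/api/items",
                "Method": "GET",
                "Authentication": { "ConnectionArn": "arn:aws:events:us-east-1:123456789012:connection/example" }
              },
              "End": true
            }
            """;

        try (SfnClient sfn = SfnClient.create()) {
            // Runs the single state against the real service, without deploying a state machine.
            TestStateResponse response = sfn.testState(TestStateRequest.builder()
                    .definition(stateDefinition)
                    .roleArn("arn:aws:iam::123456789012:role/StepFunctionsHttpTaskRole") // placeholder role
                    .input("{}")
                    .build());

            System.out.println("Status: " + response.status());
            System.out.println("Output: " + response.output());
        }
    }
}
```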
This blog covers how the new visible filter option, locator.filter({ visible: true }), helps you write more precise, user-focused tests with ease.

Playwright has quickly become a go-to tool for end-to-end testing, thanks to its robust API, cross-browser support, and easy handling of modern web applications. One of its standout features is the locator API, which allows testers to precisely target elements on a page. With recent updates, Playwright has added even more power to this API, including a new visible option for the locator.filter() method. This feature simplifies the process of working with only visible elements, making your tests cleaner, more reliable, and easier to maintain. In this blog, we'll dive into what this new option does, why it matters, and how to use it effectively with a practical example.

Why Visibility Matters in Testing

In web testing, visibility is a critical factor. Elements on a page might exist in the DOM (Document Object Model) but not be visible to users due to CSS properties like display: none, visibility: hidden, or even positioning that pushes them off-screen. When writing automated tests, you often want to interact only with elements that a real user can see and interact with. Ignoring hidden elements ensures your tests reflect the actual user experience, avoiding false positives or unexpected failures. Before the visible option was added to locator.filter(), Playwright developers had to rely on workarounds like chaining additional selectors, using isVisible() checks, or filtering locators manually. Let's explore how it works and why it's a game-changer.

Introducing the Visible Option in locator.filter()

The locator.filter() method in Playwright allows you to refine a set of matched elements based on specific conditions. The addition of the visible option, set to true, now lets you filter a locator to include only elements that are visible on the page. This small but mighty addition eliminates the need for extra checks and keeps your test logic concise. To see it in action, let's walk through a practical example inspired by a common testing scenario: working with a to-do list.

Old Approach Example

Let's take the example below, where two elements are hidden:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>To-Do List</title>
</head>
<body>
  <h1>My To-Do List</h1>
  <ul>
    <li data-testid="todo-item">Buy groceries</li>
    <li data-testid="todo-item">Complete project</li>
    <li data-testid="todo-item">Java</li>
    <li data-testid="todo-item" style="display: none;">Hidden task</li> <!-- Hidden item -->
    <li data-testid="todo-item">Read a book</li>
    <li data-testid="todo-item" style="visibility: hidden;">Another hidden task</li> <!-- Hidden item -->
  </ul>
</body>
</html>
```

Without the visible option, you'd need a more complex solution, like looping through elements and checking their visibility individually — something like this:

```javascript
const allItems = await page.getByTestId('todo-item').all();
const visibleItems = [];
for (const item of allItems) {
  if (await item.isVisible()) {
    visibleItems.push(item);
  }
}
expect(visibleItems.length).toBe(4);
```

Example With the Visible Filtering Option

Imagine you're testing a to-do list application. The app displays a list of tasks, some of which are marked as "complete" and hidden from view (e.g., with display: none), while others remain visible.
Your goal is to verify that exactly four to-do items are visible to the user. Here's how you can achieve this with the new visible option in Playwright for the HTML above:

```typescript
// example.spec.ts
import { test, expect } from '@playwright/test';

test('some test', async ({ page }) => {
  // Navigate to the to-do list page
  await page.goto('http://example.com/todo');

  // Get all to-do items and filter for visible ones
  const todoItems = page.getByTestId('todo-item').filter({ visible: true });

  // Assert that there are exactly 4 visible to-do items
  await expect(todoItems).toHaveCount(4);
});
```

Let's break this down:

- Setup: The test navigates to a hypothetical to-do list page.
- Locator: page.getByTestId('todo-item') selects all elements with the data-testid="todo-item" attribute. In the example above, this matches six elements in the DOM, including both visible and hidden ones.
- Filtering: .filter({ visible: true }) narrows the selection to only those elements that are visible on the page.
- Assertion: expect(todoItems).toHaveCount(4) checks that exactly four to-do items are visible, ensuring the test aligns with the user's perspective.

Why This Matters for Testers

The addition of visible to locator.filter() improves readability and makes tests more intuitive. For teams managing large test suites, this can save time and reduce maintenance overhead. Plus, it aligns with Playwright's goal of providing tools that feel natural for testing modern, dynamic web apps.

Conclusion

Filtering visible elements with locator.filter({ visible: true }) is a simple yet transformative addition to Playwright. It empowers testers to write more precise, user-focused tests without jumping through hoops. The to-do list example we explored demonstrates its clarity and efficiency, but the possibilities extend far beyond that. Next time you're testing a web app, give it a try — your test suite will thank you!
DZone events bring together industry leaders, innovators, and peers to explore the latest trends, share insights, and tackle industry challenges. From Virtual Roundtables to Fireside Chats, our events cover a wide range of topics, each tailored to provide you, our DZone audience, with practical knowledge, meaningful discussions, and support for your professional growth.

DZone Events Happening Soon

Below, you'll find upcoming events that you won't want to miss.

DevOps for Oracle Applications: Automation and Compliance Made Easy

Date: March 11, 2025
Time: 1:00 PM ET
Register for Free!

Join Flexagon and DZone as Flexagon's CEO unveils how FlexDeploy is helping organizations future-proof their DevOps strategy for Oracle Applications and Infrastructure. Explore innovations for automation through compliance, along with real-world success stories from companies that have adopted FlexDeploy.

Make AI Your App Development Advantage: Learn Why and How

Date: March 12, 2025
Time: 10:00 AM ET
Register for Free!

The future of app development is here, and AI is leading the charge. Join OutSystems and DZone, on March 12th at 10 AM ET, for an exclusive webinar with Luis Blando, CPTO of OutSystems, and John Rymer, industry analyst at Analysis.Tech, as they discuss how AI and low-code are revolutionizing development. You will also hear from David Gilkey, Leader of Solution Architecture, Americas East at OutSystems, and Roy van de Kerkhof, Director at NovioQ. This session will give you the tools and knowledge you need to accelerate your development and stay ahead of the curve in the ever-evolving tech landscape.

Developer Experience: The Coalescence of Developer Productivity, Process Satisfaction, and Platform Engineering

Date: March 12, 2025
Time: 1:00 PM ET
Register for Free!

Explore the future of developer experience at DZone's Virtual Roundtable, where a panel will dive into key insights from the 2025 Developer Experience Trend Report. Discover how AI, automation, and developer-centric strategies are shaping workflows, productivity, and satisfaction. Don't miss this opportunity to connect with industry experts and peers shaping the next chapter of software development.

Unpacking the 2025 Developer Experience Trends Report: Insights, Gaps, and Putting It into Action

Date: March 19, 2025
Time: 1:00 PM ET
Register for Free!

We've just seen the 2025 Developer Experience Trends Report from DZone, and while it shines a light on important themes like platform engineering, developer advocacy, and productivity metrics, there are some key gaps that deserve attention. Join Cortex co-founders Anish Dhar and Ganesh Datta for a special webinar, hosted in partnership with DZone, where they'll dive into what the report gets right — and challenge the assumptions shaping the DevEx conversation. Their take? Developer experience is grounded in clear ownership. Without ownership clarity, teams face accountability challenges, cognitive overload, and inconsistent standards, ultimately hampering productivity. Don't miss this deep dive into the trends shaping your team's future.

Accelerating Software Delivery: Unifying Application and Database Changes in Modern CI/CD

Date: March 25, 2025
Time: 1:00 PM ET
Register for Free!

Want to speed up your software delivery? It's time to unify your application and database changes. Join us for Accelerating Software Delivery: Unifying Application and Database Changes in Modern CI/CD, where we'll teach you how to seamlessly integrate database updates into your CI/CD pipeline.
Petabyte Scale, Gigabyte Costs: Mezmo's ElasticSearch to Quickwit Evolution

Date: March 27, 2025
Time: 1:00 PM ET
Register for Free!

For Mezmo, scaling their infrastructure meant facing significant challenges with ElasticSearch. That's when they made the decision to transition to Quickwit, an open-source, cloud-native search engine designed to handle large-scale data efficiently. This is a must-attend session for anyone looking for insights on improving search platform scalability and managing data growth.

What's Next?

DZone has more in store! Stay tuned for announcements about upcoming Webinars, Virtual Roundtables, Fireside Chats, and other developer-focused events. Whether you're looking to sharpen your skills, explore new tools, or connect with industry leaders, there's always something exciting on the horizon. Don't miss out — save this article and check back often for updates!
Developers often look for ways to build RESTful APIs with minimal effort while maintaining clean and maintainable code. Quarkus enables a fully functional CRUD (create, read, update, delete) API using just two classes. With Hibernate ORM with Panache, Quarkus simplifies entity management, and REST Data with Panache automatically exposes a complete set of RESTful endpoints. This approach reduces the time spent on boilerplate code, allowing developers to focus on business logic rather than infrastructure. This article demonstrates building a developer management API in Java using Quarkus. We will also cover how to interact with the API using cURL commands, showcasing the efficiency of this approach.

Why Quarkus?

Quarkus is a lightweight, cloud-native Java framework built for performance and developer productivity. It reduces boilerplate code while maintaining the power of Java and Jakarta EE. We can simplify entity management with Hibernate ORM with Panache, and with REST Data with Panache, we can expose a full CRUD API without writing a controller class!

Why Use Quarkus for REST APIs?

Quarkus is designed for efficiency, productivity, and performance. It provides:

- Minimal code. Automatic REST API generation reduces the need for controllers.
- Fast development. Live coding (quarkus:dev) allows instant feedback.
- Optimized for cloud. Lower memory footprint and faster startup times.
- Seamless integration. Works effortlessly with microservices and cloud-native architectures.

With these advantages, developers can quickly build and deploy high-performance applications with minimal effort.

Step 1: Create the Entity

Our API will manage developers with basic attributes like name, email, language, and city. We define the entity using PanacheEntity, which automatically provides an id field and simplifies data access.

```java
package expert.os.videos.quarkus;

import io.quarkus.hibernate.orm.panache.PanacheEntity;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;

@Entity
public class Developer extends PanacheEntity {

    @Column
    public String name;

    @Column
    public String email;

    @Column
    public String language;

    @Column
    public String city;
}
```

How It Works

- The @Entity annotation marks this class as a JPA entity.
- Extending PanacheEntity automatically provides an id and built-in CRUD operations.
- Each field is mapped to a database column with @Column.

Since Quarkus manages the database interactions, no additional repository classes are required.

Step 2: Create the REST API in One Line

Instead of writing a full controller class, we can expose a REST API by creating an interface that extends PanacheEntityResource.

```java
package expert.os.videos.quarkus;

import io.quarkus.hibernate.orm.rest.data.panache.PanacheEntityResource;

public interface DevelopersResource extends PanacheEntityResource<Developer, Long> {
}
```

How It Works

By extending PanacheEntityResource<Developer, Long>, Quarkus automatically generates endpoints for:

- GET /developers → Retrieve all developers
- GET /developers/{id} → Retrieve a specific developer
- POST /developers → Create a new developer
- PUT /developers/{id} → Update an existing developer
- DELETE /developers/{id} → Delete a developer

With just two classes, a complete CRUD API is available.

Step 3: Run the Application

Make sure you have Quarkus installed and set up in your project. Start the Quarkus application with:

```shell
./mvnw compile quarkus:dev
```
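Before switching to cURL, the generated endpoints can also be exercised from an automated test. The sketch below is illustrative rather than part of the original walkthrough; it assumes the usual quarkus-junit5 and rest-assured test dependencies are present and that the generated POST endpoint returns the default 201 Created status.

```java
package expert.os.videos.quarkus;

import io.quarkus.test.junit.QuarkusTest;
import io.restassured.http.ContentType;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;

@QuarkusTest
class DevelopersResourceTest {

    @Test
    void shouldCreateAndListDevelopers() {
        // Create a developer through the generated POST /developers endpoint
        given()
            .contentType(ContentType.JSON)
            .body("{\"name\":\"Alice\",\"email\":\"alice@example.com\",\"language\":\"Java\",\"city\":\"Lisbon\"}")
        .when()
            .post("/developers")
        .then()
            .statusCode(201); // assumes the default 201 Created response

        // The generated GET /developers endpoint should now return the list of developers
        given()
        .when()
            .get("/developers")
        .then()
            .statusCode(200);
    }
}
```

Run the test with ./mvnw test.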
Step 4: Test the API With cURL

Once the application is running, we can interact with the API using cURL.

Create a developer (POST request):

```shell
curl -X POST http://localhost:8080/developers \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Alice",
    "email": "alice@example.com",
    "language": "Java",
    "city": "Lisbon"
  }'
```

Get all developers (GET request):

```shell
curl -X GET http://localhost:8080/developers
```

Get a single developer by ID (GET request):

```shell
curl -X GET http://localhost:8080/developers/1
```

Conclusion

Quarkus significantly simplifies building REST APIs in Java, reducing the need for extensive boilerplate code while maintaining flexibility and performance. Using Hibernate ORM with Panache and REST Data with Panache, we could expose a fully functional CRUD API using just two classes — a domain entity and a resource interface. This approach eliminates the need for manually implementing repository classes or defining explicit REST controllers, allowing developers to focus on business logic rather than infrastructure. The result is a more concise, maintainable, and efficient API that integrates seamlessly with modern cloud-native architectures. For teams and developers looking to accelerate application development while maintaining the robustness of Java, Quarkus presents a compelling solution. Organizations can reduce complexity and improve productivity without sacrificing scalability by adopting its developer-friendly approach and efficient runtime.
Garbage collection in Java is something that just happens: you don't have to worry about memory management. Or do you? The garbage collector (GC) runs in the background, quietly doing its work, but this process can have a huge impact on performance. Understanding the concepts of advanced Java GC is invaluable in tuning and troubleshooting applications. There are seven types of Java garbage collectors available in the JVM, some of which are obsolete. This article will look at the details and compare the strengths and weaknesses of each. It will also look briefly at how you would go about evaluating garbage collection performance.

GC Evaluation Criteria

GC performance is evaluated on two criteria:

- Throughput – The percentage of time an application spends on actual work as opposed to time spent on GC.
- Latency – The time during which the GC pauses the application. You should look at both average and maximum latency times when evaluating performance.

Comparison of Types of Java Garbage Collectors

The table below gives details of the seven different algorithms, including the Java version where they were introduced and the versions, if any, that use the algorithm as the default.

| Algorithm | Comments | Introduced | Used as default in |
|---|---|---|---|
| Serial GC | Original GC algorithm; single-threaded; tends to have long GC pauses; now obsolete | Java 1.2 | Java 1.2 to Java 4; also in Java 5 to 8 single-core versions |
| Parallel GC | Multi-threaded; distributed amongst cores; high throughput but long pauses | Java 5 | Java 5 to 8 in multi-core versions |
| CMS GC (Concurrent Mark & Sweep) | Most work done concurrently; minimizes pauses; no compaction, therefore occasional long pauses for full GC | Java 4; deprecated in Java 9 and removed in Java 14 | None |
| G1 GC (Garbage First) | Heap is divided into equal-sized regions; mostly concurrent; balances latency and throughput; best for heap sizes < 32 GB | Java 7 | Java 9 and above |
| Shenandoah GC | Recommended for heap sizes > 32 GB; has high CPU consumption | OpenJDK 8 and above; Oracle JDK in JDK 11 | None |
| ZGC | Recommended for heap sizes > 32 GB; prone to stalls on versions before Java 21 | Java 11 | None |
| Epsilon GC | A do-nothing GC, used only for benchmarking applications with and without GC | Java 11 | None |

G1 GC is probably the best algorithm in most cases, unless you have a very large heap (32 GB or more). If that is the case, you can use Shenandoah if it is available, or ZGC if you're using Java 21 or later. ZGC can be unstable in earlier versions, and Shenandoah may not be stable in the first few Oracle releases. CMS was deprecated from Java 9 onwards because it didn't deal well with compacting the heap: fragmentation degraded performance over time and resulted in long GC pauses for compaction.

Setting and Tuning the Garbage Collector

The table below shows the types of Java garbage collectors, along with the JVM switch you'd use to set each of the GC algorithms for an application. It also contains links to tuning guides for each algorithm.

| Algorithm | JVM switch to set it | How to tune it: links |
|---|---|---|
| Serial GC | -XX:+UseSerialGC | Tuning Serial GC |
| Parallel GC | -XX:+UseParallelGC | Tuning Parallel GC |
| CMS GC (Concurrent Mark & Sweep) | -XX:+UseConcMarkSweepGC | Tuning CMS GC |
| G1 GC (Garbage First) | -XX:+UseG1GC | Tuning G1 GC |
| Shenandoah GC | -XX:+UseShenandoahGC | Tuning Shenandoah GC |
| ZGC | -XX:+UseZGC | Tuning ZGC |
| Epsilon GC | -XX:+UseEpsilonGC | N/A |

Evaluating GC Performance

Before you commit to an algorithm for your application, it's best to evaluate its performance.
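As a quick complement to log-based evaluation, you can also ask the running JVM which collectors it selected and how much time they have spent so far, using the standard GarbageCollectorMXBean API. This is a small illustrative sketch, not part of the original article:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInspector {

    public static void main(String[] args) {
        // Lists the collectors the JVM actually selected for this run (e.g., "G1 Young Generation",
        // "G1 Old Generation"), with cumulative collection counts and accumulated collection time.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-30s collections=%d time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

These counters are useful for a quick sanity check, but a GC log remains the primary source for detailed evaluation.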
For a full evaluation, you'll need to request a GC log and then analyze it.

1. Requesting a GC Log

Use the JVM command-line switches. For Java 9 or later:

```
-Xlog:gc*:file=<gc-log-file-path>
```

For Java 8 or earlier:

```
-XX:+PrintGCDetails -Xloggc:<gc-log-file-path>
```

2. Evaluating the Log

You can open the log with any text editor, but for long-running programs, it could take hours to evaluate it. The log information looks something like this:

```
[0.082s][info][gc,heap] Heap region size: 1M
[0.110s][info][gc     ] Using G1
[0.110s][info][gc,heap,coops] Heap address: 0x00000000c8c00000, size: 884 MB, Compressed Oops mode: 32-bit
[0.204s][info][gc,heap,exit ] Heap
[0.204s][info][gc,heap,exit ]  garbage-first heap   total 57344K, used 1024K [0x00000000c8c00000, 0x0000000100000000)
[0.204s][info][gc,heap,exit ]   region size 1024K, 2 young (2048K), 0 survivors (0K)
[0.204s][info][gc,heap,exit ]  Metaspace       used 3575K, capacity 4486K, committed 4864K, reserved 1056768K
[0.204s][info][gc,heap,exit ]   class space    used 319K, capacity 386K, committed 512K, reserved 1048576K
```

A good choice for quickly obtaining meaningful stats from the GC log is the GCeasy tool. It detects memory leaks, highlights long GC pauses and inefficient GC cycles, and makes performance tuning recommendations. Below is a sample of part of the GCeasy report.

Fig: Sample of GCeasy output

Conclusion

In this article, we've looked at the different types of Java garbage collectors and learned how to invoke each algorithm on the JVM command line. We've also looked briefly at how to evaluate, monitor, and tune the garbage collector. G1 GC is a good all-around choice if you're using Java 8 and above; it was still experimental in Java 7, so it may not be stable there. If you have a very large heap size, consider Shenandoah or ZGC, though these may not be stable in earlier versions of Java. CMS was found to be problematic, as in some cases it caused GC pauses of as much as five minutes, and was therefore deprecated and finally removed from newer versions.

For more information, you may like to read these articles:

- Comparing Java GC Algorithms: Which One is Best?
- What is Java's default GC algorithm?
Low-code was supposed to be the future. It promised faster development, simpler integrations, and the ability to build complex applications without drowning in code. And for a while, it seemed like it would deliver. But then reality hit. Developers and IT teams who embraced low-code quickly found its limitations. Instead of accelerating innovation, it created bottlenecks. Instead of freeing developers, it forced them into rigid, vendor-controlled ecosystems. So, is low-code dead? The old version of it, yes. But low-code without limits? That's where the future lies.

Where Traditional Low-Code Went Wrong

The first wave of low-code tools catered to business users. They simplified app development but introduced hard limits that made them impractical for serious enterprise work. The problems became clear:

- Rigid data models. Once your needs went beyond basic CRUD operations, things fell apart.
- Vendor lock-in. Customizations meant deeper dependence on the platform, making migration nearly impossible.
- Limited extensibility. If the platform didn't support your use case, you were out of luck or stuck writing fragile workarounds.

Developers don't abandon low-code platforms because they hate them. They abandon them because they outgrow them.

Breaking the False Choice Between Simplicity and Power

For too long, developers have been forced to choose:

- Low-code for speed, but with strict limitations.
- Traditional development for control, but at the cost of slower delivery.

We reject that tradeoff. Low-code should not mean low power. It should be flexible enough to support simple applications and complex enterprise solutions without forcing developers into unnecessary constraints.

No Limits: The New Standard for Low-Code

Traditional low-code platforms treat developers as an afterthought. That's why we built a low-code platform without limits — one that gives developers full control while maintaining the speed and simplicity that makes low-code attractive.

1. Extensibility Without Barriers
If the platform doesn't support a feature, you can build it. No black-box constraints, no artificial restrictions on custom logic.

2. API-First, Not API-Restricted
Most low-code platforms force you into their way of doing things. We took the opposite approach: seamless API integrations that work with your architecture, your existing systems, and your data sources.

3. Scalability Built for Enterprises
Low-code platforms struggle with scale because they were never designed for it. Our architecture ensures multi-layered workflows, high-performance APIs, and cloud-native deployments without compromising flexibility.

4. Developer-First, Not Developer-Limited
Low-code shouldn't mean no-code. We give developers the freedom to write custom scripts, fine-tune APIs, and integrate deeply with backend systems without forcing them into a proprietary ecosystem.

Beyond Citizen Development: A Platform for Professionals

Most low-code platforms were designed for citizen developers — people with no coding experience. That's fine for simple applications, but real enterprise development requires more. We built our platform for software engineers, enterprise architects, and IT teams who need to move fast without hitting a ceiling. That means:

- Advanced data handling without the limitations of predefined models.
- Secure, scalable applications that meet enterprise compliance needs.
- The ability to build freely — not be boxed in by rigid templates.

The Future: Low Code Without Limits

Low-code isn't dead.
But the old way of doing it — closed ecosystems, locked-down workflows, and one-size-fits-all platforms — is obsolete. The next evolution of low-code is developer-first, API-driven, and enterprise-ready. It’s a platform that doesn’t just accelerate development — it empowers developers to build without constraints. If you’ve ever felt like low-code was holding you back, it’s time for something better. It’s time for Low Code, No Limits.