A framework is a collection of code that is leveraged in the development process by providing ready-made components. Through the use of frameworks, architectural patterns and structures are created, which help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
What's the Big Idea?

Building web apps with separate frontends, backends, and databases can be a headache. A monorepo puts everything in one place, making it easier to share code, develop locally, and test the whole app together. We showed how to build a simple signup dashboard using React, Node.js, PostgreSQL (with Prisma for easy database access), and optionally ClickHouse for fast data analysis, all within a monorepo structure. This setup helps you scale your app cleanly and makes life easier for your team.

In this guide, we're going to build a super simple app that shows how many people sign up each day. We'll use:

- React: To make the pretty stuff you see on the screen.
- Node.js with Express: To handle the behind-the-scenes work and talk to the database.
- PostgreSQL: Our main place to store important info.
- Prisma: A clever tool that makes talking to PostgreSQL super easy and helps avoid mistakes.
- ClickHouse (optional): A really fast way to look at lots of data later on.

All living together in one monorepo!

Why Put Everything Together?

- Everything in one spot: Issues, updates, and reviews all happen in the same place.
- Easy peasy development: One setup for everything! Shared settings and simple ways to start the app.
- Sharing is caring (code!): Easily reuse little bits of code across the whole app.
- Testing the whole thing: You can test how the frontend, backend, and database work together, all at once.
- Grows with you: Adding new parts to your app later is a breeze.

What We're Making: A Signup Tracker

Imagine a simple page that shows how many people signed up each day using a chart:

- Frontend (React): The part you see. It'll have a chart (using Chart.js) that gets the signup numbers and shows them.
- Backend (Express): The brain. It will grab the signup numbers from the database.
- Database (PostgreSQL): Where we keep the list of who signed up and when. We can also use ClickHouse later to look at this data in cool ways.
- Prisma: Our friendly helper that talks to PostgreSQL for us.

How It's All Organized

```text
visualization-monorepo/
│
├── apps/
│   ├── frontend/          # The React stuff
│   └── backend/           # The Node.js + Express + Prisma stuff
│
├── packages/
│   └── shared/            # Bits of code we use in both the frontend and backend
│
├── db/
│   └── schema.sql         # Instructions for setting up our PostgreSQL database and adding some initial info
│
├── docker-compose.yml     # Tells your computer how to run all the different parts together
├── .env                   # Where we keep secret info, like database passwords
├── package.json           # Keeps track of all the tools our project needs
└── README.md              # A file explaining what this project is all about
```

Setting Up the Database (PostgreSQL)

Here's the basic structure for our db/schema.sql file:

```sql
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email TEXT NOT NULL,
  date DATE NOT NULL
);

-- Let's add some fake signup data
INSERT INTO users (email, date) VALUES
('[email protected]', '2024-04-01'),
('[email protected]', '2024-04-01'),
('[email protected]', '2024-04-02');
```

This code creates a table called users with columns for a unique ID, email address, and the date they signed up. We also put in a few example signups.

Getting Prisma Ready (Backend)

This is our apps/backend/prisma/schema.prisma file:

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id    Int      @id @default(autoincrement())
  email String
  date  DateTime
}
```

This tells Prisma we're using PostgreSQL and where to find it.
The User model describes what our users table looks like.

Here's how we can use Prisma to get the signup counts:

```javascript
const signups = await prisma.user.groupBy({
  by: ['date'],
  _count: true,
  orderBy: { date: 'asc' },
});
```

This code asks Prisma to group the users by the date they signed up and count how many signed up on each day, ordering the results by date.

Building the Backend (Express API)

This is our apps/backend/src/routes/signups.ts file:

```typescript
import express from 'express';
import { PrismaClient } from '@prisma/client';

const router = express.Router();
const prisma = new PrismaClient();

router.get('/api/signups', async (req, res) => {
  const data = await prisma.user.groupBy({
    by: ['date'],
    _count: { id: true },
    orderBy: { date: 'asc' },
  });

  res.json(data.map(d => ({
    date: d.date.toISOString().split('T')[0],
    count: d._count.id,
  })));
});

export default router;
```

This code sets up a simple web address (/api/signups) that, when you visit it, will use Prisma to get the signup data and send it back in a format the frontend can understand (date and count).

Making the Frontend (React + Chart.js)

This is our apps/frontend/src/App.tsx file:

```typescript
import { useEffect, useState } from 'react';
import { Line } from 'react-chartjs-2';

function App() {
  const [chartData, setChartData] = useState<{ date: string; count: number }[]>([]);

  useEffect(() => {
    fetch('/api/signups')
      .then(res => res.json())
      .then(setChartData);
  }, []);

  return (
    <Line
      data={{
        labels: chartData.map(d => d.date),
        datasets: [{ label: 'Signups', data: chartData.map(d => d.count) }],
      }}
    />
  );
}

export default App;
```

This React code fetches the signup data from our backend API when the app starts and then uses Chart.js to display it as a line chart.

Sharing Code (Types)

This is our packages/shared/types.ts file:

```typescript
export interface SignupData {
  date: string;
  count: number;
}
```

We define a simple structure for our signup data. Now, both the frontend and backend can use this to make sure they're talking about the same thing:

```typescript
import { SignupData } from '@shared/types';
```

Running Everything Together (Docker Compose)

This is our docker-compose.yml file:

```yaml
version: '3.8'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - ./db:/docker-entrypoint-initdb.d

  backend:
    build: ./apps/backend
    depends_on: [db]
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/appdb

  frontend:
    build: ./apps/frontend
    ports:
      - "3000:3000"
    depends_on: [backend]
```

This file tells your computer how to run the PostgreSQL database, the backend, and the frontend all at the same time. depends_on makes sure things start in the right order.

Super Fast Data Crunching (ClickHouse)

If you have tons of data and want to analyze it really quickly, you can use ClickHouse alongside PostgreSQL. You can use tools to automatically copy data from PostgreSQL to ClickHouse.

Why ClickHouse is Cool

- Blazing fast: It's designed to quickly count and group huge amounts of data.
- Great for history: Perfect for looking at trends over long periods.
- Plays well with others: You can use it with PostgreSQL as a separate place to do your analysis.
Here's an example of how you might set up a table in ClickHouse:

```sql
CREATE TABLE signups_daily (
  date Date,
  count UInt32
) ENGINE = MergeTree()
ORDER BY date;
```

Making Development Easier

- Turborepo or Nx: Tools to speed up building and testing different parts of your monorepo.
- ESLint and Prettier: Keep your code looking consistent with shared rules.
- Husky + lint-staged: Automatically check your code for style issues before you commit it.
- tsconfig.base.json: Share TypeScript settings across your projects.

Why This Is Awesome in Real Life

- One download: You only need to download the code once to get everything.
- One place for everything: Easier to manage updates and who owns different parts of the code.
- Fewer mistakes: Sharing code types helps catch errors early.
- Easy to get started: New team members can get up and running quickly.

Wrapping Up

Using a monorepo with React, Node.js, and PostgreSQL (with Prisma) is a smart way to build full-stack apps. It keeps things organized, makes development smoother, and sets you up for growth. Adding ClickHouse later on gives you powerful tools for understanding your data. Whether you're building a small project or something that will grow big, this approach can make your life a lot easier.

What's Next?

- Add ways for users to log in securely (using tools like Clerk, Auth0, or Passport.js).
- Automatically copy data to ClickHouse for faster analysis (see the sketch below for one way to start).
- Add features like showing data in pages, saving data temporarily (caching with Redis), or letting users filter the data.
- Put your app online using services like Fly.io, Railway, Render, or Vercel (for the frontend).
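The "copy data to ClickHouse" step can start out very small. Below is a minimal sketch of a one-off sync script, assuming the official @clickhouse/client Node package, the signups_daily table shown above, and the same Prisma aggregation the API uses. The file path and connection URL are illustrative placeholders, not part of the project layout described earlier.

```typescript
// apps/backend/src/sync-clickhouse.ts (hypothetical helper script)
import { PrismaClient } from '@prisma/client';
import { createClient } from '@clickhouse/client';

const prisma = new PrismaClient();

// Assumes ClickHouse is reachable at this URL; adjust for your setup.
const clickhouse = createClient({ url: 'http://localhost:8123' });

async function syncDailySignups() {
  // Reuse the same aggregation the /api/signups endpoint performs.
  const signups = await prisma.user.groupBy({
    by: ['date'],
    _count: { id: true },
    orderBy: { date: 'asc' },
  });

  // Push the aggregated rows into the signups_daily table shown above.
  await clickhouse.insert({
    table: 'signups_daily',
    values: signups.map(s => ({
      date: s.date.toISOString().split('T')[0],
      count: s._count.id,
    })),
    format: 'JSONEachRow',
  });
}

syncDailySignups()
  .catch(console.error)
  .finally(async () => {
    await prisma.$disconnect();
    await clickhouse.close();
  });
```

In a real setup you would probably run something like this on a schedule, or switch to a change-data-capture tool once volumes grow, but the shape of the code stays roughly the same.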
The rise of low-code development platforms has ignited passionate debates within the software development community. As these tools promise to democratize application creation and accelerate development cycles, a fundamental question emerges: Are low-code platforms here to supplement professional developers, or will they eventually render traditional coding obsolete? This tension between opportunity and threat has generated numerous myths and misconceptions about low-code's place in the development ecosystem. For professional developers, the question isn't merely academic — it's existential. With organizations increasingly adopting low-code solutions to address development backlogs and resource constraints, understanding the true relationship between traditional development and low-code approaches has never been more important. This article examines the reality behind the rhetoric, offering evidence-based insights into how low-code is reshaping — not replacing — the developer profession. Understanding Low-Code Development: Evolution, Not Revolution Low-code development platforms provide visual interfaces and drag-and-drop components that enable users to create applications with minimal hand-coding. These platforms abstract away much of the underlying complexity through pre-built templates, components, and automated workflows. While often lumped together with no-code tools, low-code platforms typically allow developers to extend functionality through custom code when needed — a crucial distinction that positions them as developer tools rather than developer replacements. The concept isn't entirely new. Visual programming tools and rapid application development (RAD) environments have existed since the 1990s. What distinguishes modern low-code platforms is their sophistication, cloud-native architecture, and enterprise-grade capabilities. Platforms like OutSystems, Mendix, Microsoft Power Platform, and Appian have evolved to support complex business applications, not just departmental or prototype solutions. This evolution represents an expansion of the development toolkit rather than a paradigm shift that eliminates traditional coding. Just as compilers didn't eliminate assembly language programmers but instead elevated programming to more abstract levels, low-code platforms shift developer focus to higher-value problems while automating routine implementations. The Replacement Myth: Why Developers Aren't Becoming Obsolete The most persistent myth surrounding low-code development is that it signals the beginning of the end for professional developers. This concern stems from a fundamental misunderstanding of both low-code capabilities and the nature of software development work. Myth: Low-Code Will Eliminate the Need for Professional Developers Reality: Organizations implementing low-code platforms consistently report that their need for professional developers doesn't decrease — it transforms. A 2022 Forrester study found that 65% of enterprises using low-code platforms maintained or increased their professional development staff, with developers taking on more strategic roles in architecture, integration, and complex customizations. The relationship proves symbiotic rather than adversarial. Professional developers often become more valuable in low-code environments because they understand the underlying principles that govern effective application design. 
They ensure that applications built with low-code tools follow proper architecture patterns, security protocols, and performance optimizations that might be overlooked by citizen developers. Myth: Anyone Can Build Enterprise-Grade Applications With Low-Code Reality: While low-code platforms lower technical barriers, building robust, scalable, and maintainable enterprise applications still requires significant expertise. Understanding requirements gathering, data modeling, user experience design, testing methodologies, and deployment strategies remains essential regardless of the development platform. Low-code tools excel at simplifying implementation but don't eliminate the need for a software engineering discipline. In fact, the ease of creating applications can lead to proliferation problems without proper governance — something professional developers are well-positioned to address through established DevOps practices and architectural oversight. Myth: Low-Code Applications Can't Handle Complex Requirements Reality: Modern enterprise low-code platforms have matured significantly, enabling the development of sophisticated applications that handle complex business logic, integrations, and scalability requirements. However, this doesn't mean traditional development skills become irrelevant. Instead, the most successful implementations blend low-code efficiency with custom development where needed. A 2023 Gartner analysis found that organizations achieving the highest ROI from low-code investments maintained hybrid development approaches, with professional developers contributing to complex components while business technologists leveraged low-code for rapid iteration and user interface development. Low-Code as a Developer's Ally: Amplifying Capabilities and Focus Rather than replacing developers, low-code platforms can serve as powerful allies that address many longstanding challenges in the profession. Understanding these benefits helps clarify why forward-thinking developers often embrace rather than resist these tools. Eliminating Repetitive Tasks Professional developers typically spend significant time on repetitive implementation tasks—creating forms, implementing standard CRUD operations, setting up user authentication, and configuring basic workflows. Low-code platforms automate these routine aspects, allowing developers to focus on unique business logic and complex technical challenges. A study by IDC found that development teams using low-code platforms reduced time spent on routine coding tasks by up to 70%, freeing technical resources for innovation and architectural improvements. This shift makes development work more engaging while addressing the frequent complaint that developers spend too much time "reinventing the wheel" instead of solving novel problems. Accelerating Delivery and Reducing Backlogs Development backlogs continue to grow in most organizations, with demand for applications far outpacing available development resources. Low-code platforms help address this imbalance by accelerating delivery timeframes. Research by Nucleus Research indicated that low-code development can be up to 10 times faster than traditional coding for many business applications. This acceleration doesn't eliminate the need for developers, but it does change how they allocate time. 
Rather than working on a single project for months, developers can oversee multiple initiatives simultaneously, providing architectural guidance and handling complex components while enabling business units to progress with simpler aspects of implementation. Bridging the Business-IT Divide One persistent challenge in software development has been the communication gap between business stakeholders and technical teams. Low-code platforms provide a visual, comprehensible medium that facilitates better collaboration. Business analysts can directly demonstrate requirements through prototypes, while developers can focus on validating approaches and ensuring technical integrity. This collaborative model transforms the developer's role from an isolated implementer to a technical advisor and architect who guides less technical team members through application development. Far from diminishing the developer's importance, this evolution elevates their position to one requiring both technical and business acumen. Enabling Innovation Through Rapid Experimentation Traditional development cycles often discourage experimentation due to the significant investment required to build proof-of-concept applications. Low-code platforms dramatically lower this barrier, enabling developers to quickly test ideas and get stakeholder feedback without extensive coding. This capability aligns with modern agile and lean startup methodologies that emphasize validated learning through minimum viable products. Developers who embrace low-code tools can lead innovation initiatives by rapidly prototyping solutions and gathering real-world feedback before committing to full implementation. When Traditional Development Maintains the Advantage Despite the advantages of low-code development, certain scenarios still benefit from or require traditional coding approaches. Understanding these boundaries helps developers and organizations make informed decisions about when and how to apply low-code solutions. Performance-Critical Systems Applications with stringent performance requirements often need the optimization capabilities available through traditional development. While low-code platforms continue to improve in this area, hand-coded solutions still provide more opportunities for fine-tuning and optimization. Systems processing millions of transactions, real-time data processing applications, or computationally intensive algorithms typically benefit from custom code. Professional developers with a deep understanding of performance optimization, memory management, and algorithm efficiency remain essential for these high-performance contexts. Their expertise ensures that critical systems meet operational requirements that might be difficult to achieve through generalized low-code platforms. Highly Specialized Domains Certain domains with specialized requirements may not be well-served by the generalized components available in low-code platforms. Examples include scientific computing, advanced graphics processing, embedded systems, and specialized hardware interfaces. In these domains, the abstraction provided by low-code platforms becomes a limitation rather than an advantage. Professional developers with domain-specific expertise and deep technical knowledge continue to drive development in these specialized areas, often creating custom frameworks and tools that later inform more general-purpose platforms. 
Complex Integrations and Legacy Systems Enterprise environments typically involve complex integration challenges, particularly with legacy systems that lack modern APIs or standardized interfaces. While low-code platforms provide many pre-built connectors, the most challenging integration scenarios still require custom development. Developers with expertise in system integration, API development, and legacy technologies remain valuable in bridging these gaps. Their ability to create custom adapters and integration layers enables low-code platforms to connect with systems that might otherwise remain isolated. Unique Competitive Advantage When organizations derive competitive advantage from unique software capabilities, custom development often provides better differentiation than low-code platforms. Since low-code environments leverage standardized components, they naturally push toward common patterns rather than novel approaches. Strategic applications that deliver unique customer experiences or proprietary business processes may benefit from custom development that enables precise implementation of differentiating features. Professional developers remain essential for creating these bespoke solutions that set organizations apart from competitors. The Evolving Developer: New Skills for a Low-Code World Rather than threatening developer careers, low-code platforms are reshaping the skills that provide greatest value. Forward-thinking developers can position themselves for success by developing competencies that complement rather than compete with low-code capabilities. Architectural Expertise As application development becomes more distributed across technical and business teams, architectural oversight becomes increasingly critical. Developers who understand patterns for scalable, maintainable applications can guide low-code implementation while ensuring long-term sustainability. This architectural role requires both technical depth and the ability to communicate complex concepts to less technical stakeholders. By focusing on architecture principles rather than implementation details, developers can influence multiple projects and ensure consistent quality across the application portfolio. Integration and API Design Low-code platforms excel when they can leverage existing services and data sources through well-designed APIs. Developers who master API design, integration patterns, and service-oriented architecture become invaluable in creating an ecosystem where low-code development thrives. By building integration frameworks and reusable services, developers can create a multiplier effect where each new API or service enables countless low-code applications. This strategic role focuses on creating building blocks rather than assembling them — a higher-value activity that's less likely to be automated. Governance and DevOps As application development accelerates and diversifies through low-code platforms, governance becomes essential to prevent chaos. Developers with expertise in DevOps practices, continuous integration and continuous deployment (CI/CD), and application lifecycle management help establish guardrails that enable innovation while maintaining quality. These governance roles ensure that applications built with low-code tools follow organizational standards for security, compliance, and maintainability. Rather than restricting innovation, well-designed governance frameworks enable safe experimentation by providing clear boundaries and automated quality checks. 
Complex Problem Solving

Perhaps the most durable developer skill is the ability to solve complex technical problems that don't have pre-built solutions. While low-code platforms handle common patterns effectively, unique business requirements and technical challenges still require creative problem-solving and custom approaches.

Developers who combine deep technical knowledge with business domain understanding can identify when standard approaches are insufficient and develop custom solutions for these edge cases. This problem-solving capability remains distinctly human, making it resistant to automation through low-code platforms.

Real-World Examples: The Symbiotic Relationship

Organizations achieving the greatest success with low-code platforms typically establish symbiotic relationships between professional developers and business technologists. These case studies illustrate how this collaborative approach delivers superior outcomes compared to either traditional development or low-code in isolation.

Financial Services: Accelerating Compliance and Customer Experience

A global financial institution implemented a low-code strategy to address growing regulatory requirements while simultaneously improving customer-facing applications. Rather than replacing their development team, they reorganized it to focus on three tiers:

- Core banking systems and transaction processing remained under traditional development due to performance and security requirements.
- Customer-facing applications and internal workflows moved to a low-code platform, with professional developers creating reusable components and extensibility frameworks.
- Department-specific applications and process automation were delegated to business technologists using the same low-code platform under developer guidance.

This tiered approach reduced their application backlog by 60% while maintaining consistent architecture and security standards. Professional developers reported higher job satisfaction as they focused on complex challenges and framework development rather than routine implementation.

Healthcare: Connecting Systems and Providers

A healthcare network leveraged low-code development to bridge gaps between electronic health record systems, insurance providers, and patient engagement applications. Professional developers created secure API services that exposed data from core systems, while clinical staff used low-code tools to build specialized workflows for different departments.

The development team shifted from implementing every feature request to enabling self-service for certain categories of applications. This approach reduced wait times for new functionality from months to weeks while allowing developers to focus on integration challenges and data security that required specialized expertise. By creating clear boundaries between professional and citizen development, the organization maintained control of critical systems while accelerating innovation at the departmental level.

The Future: Partnership, Not Replacement

As low-code platforms continue to evolve, the most likely future is one of partnership rather than replacement. The history of technology consistently shows that new tools tend to shift human focus rather than eliminate it entirely. Just as calculators changed mathematics education without eliminating mathematicians, low-code platforms are changing software development without eliminating developers.
The Bureau of Labor Statistics continues to project growth in software development jobs despite increasing automation and low-code adoption. This seemingly paradoxical trend reflects what economists call the productivity paradox — as technology makes certain tasks more efficient, demand for the overall capability increases, often creating more positions with evolved responsibilities. For individual developers, the path forward involves embracing rather than resisting this evolution. Those who develop expertise in low-code platforms while maintaining traditional development skills position themselves as versatile problem-solvers who can bridge multiple approaches. This adaptability represents job security in an industry defined by constant change. For organizations, the most effective strategy isn't choosing between low-code and traditional development but creating an environment where both approaches complement each other. By establishing clear governance frameworks, training pathways, and collaboration models, companies can leverage the speed of low-code development while maintaining the depth and flexibility of traditional coding. Conclusion: Embracing the Best of Both Worlds The question of whether low-code is a developer's ally or replacement presents a false dichotomy. The reality is more nuanced, with low-code platforms serving as powerful tools that extend developer capabilities rather than replace them. By automating routine aspects of application creation, these platforms free professional developers to focus on complex challenges that truly require their expertise. For developers concerned about career implications, the evidence suggests that adapting to include low-code approaches in your toolkit represents an opportunity rather than a threat. The most valuable developers of the future will likely be those who can move fluidly between traditional coding and low-code development, applying each approach where it delivers maximum value. Organizations benefit most when they foster collaboration between professional developers and business technologists, creating governance frameworks that enable innovation while maintaining quality standards. This balanced approach addresses application backlogs while ensuring that critical systems maintain the performance, security, and scalability required for enterprise operations. Rather than asking if low-code will replace developers, a more productive question is how developers can leverage these tools to deliver greater value and focus on the most interesting challenges. By embracing this perspective, both individuals and organizations can navigate the changing technology landscape successfully.
TL;DR: When Incentives Sabotage Product Strategy

Learn why many Product Owners and Managers worry about the wrong thing: they fear saying no, when the real danger is saying yes to everything. This article reveals three systematic rejection techniques that strengthen stakeholder relationships while protecting product strategy from the organizational incentives that would otherwise sabotage it. Discover how those incentives drive feature demands, why AI prototyping complicates strategic decisions, and how transparent Anti-Product Backlog systems transform resistance into collaboration.

The Observable Problem: When Organizational Incentives Create Anti-Product Behaviors

Product Owners and Managers often encounter a puzzling dynamic: stakeholders who champion features that clearly misalign with product strategy, resisting rejection with surprising intensity. While individual stakeholder psychology gets attention, the more powerful force may be systemic incentives that reward behaviors incompatible with desired product success.

Charlie Munger's observation proves relevant here: "Never, ever, think about something else when you should be thinking about the power of incentives." Few forces shape human behavior more predictably than compensation structures, performance metrics, and career advancement criteria.

Consider the sales director pushing for a dashboard feature that serves three enterprise prospects. Their quarterly bonus depends on closing those deals, so rationalizing the feature request makes sense from their incentive perspective, even if it contradicts the product strategy. The customer support manager advocating for complex workflow automation may face performance reviews based on ticket resolution times, not customer satisfaction scores.

These aren't character flaws or political maneuvering. They're logical responses to organizational incentive structures. Until Product Owners and Managers recognize the incentive patterns driving stakeholder behavior, rejection conversations will address symptoms while ignoring causes.

The challenge compounds when organizations layer agile practices onto unchanged incentive systems. Teams practice "collaborative prioritization," while stakeholders receive bonuses for outcomes that require non-collaborative resource allocation. The resulting tension manifests as resistance to strategic rejection, which Product Owners and Managers often interpret as relationship problems rather than systems problems.

The Generative AI Complication: When Low-Cost Prototyping Enables Poor Strategy

Generative AI introduces a new dynamic that may make strategic rejection more difficult: the perceived reduction in experimentation costs. Stakeholders can now present Product Owners with quick prototypes, mockups, or even functioning code snippets, arguing that implementation costs have dropped dramatically. "Look, I already built a working prototype in an hour using Claude/ChatGPT/Copilot. How hard could it be just to integrate this?" becomes a common refrain.

This generally beneficial capability creates an illusion that feature requests now carry minimal technical debt or opportunity cost. The fallacy proves dangerous: running more experiments doesn't equate to delivering more outcomes. AI-generated prototypes may reduce initial development time, but they don't eliminate the strategic costs of unfocused Product Backlogs.
Regardless of implementation speed, every feature request still requires user research, quality assurance, maintenance, and support documentation, and, most critically, it adds to the cognitive load of users navigating increasingly complex products. Worse, the ease of prototype generation may push teams toward what you might call the "analysis-paralysis zone": endless experimentation without clear hypotheses or success criteria. When stakeholders can generate working demos quickly, or assume the product team can, the pressure to "just try it and see" intensifies, potentially undermining the strategic discipline that effective product management requires.

Product Owners need frameworks for rejecting AI-generated prototypes based on strategic criteria rather than technical feasibility. The question isn't "Can we build this quickly?" but "Does this experiment advance our strategic learning objectives?"

Questioning Assumptions About Stakeholder Collaboration

The Agile Manifesto's emphasis on "collaboration over contract negotiation" may create unintended consequences when stakeholder incentives misalign with product strategy. While collaboration generally produces better outcomes than adversarial relationships, some interpretations of collaboration may actually inhibit strategic clarity.

Consider this hypothesis: endless collaboration on fundamentally misaligned requests might be less valuable than clear, well-reasoned rejection. This approach contradicts conventional wisdom about stakeholder management, which may not account for modern incentive complexity.

The distinction between outcomes (measurable business results) and outputs (features shipped) becomes critical here. Stakeholder requests typically focus on outputs, possibly because their performance metrics reward feature delivery rather than business impact. However, optimizing for stakeholder comfort with concrete deliverables may create "feature factories," organizations that measure success by shipping velocity rather than strategic advancement.

Understanding stakeholder incentive structures seems essential for effective rejection conversations. Stakeholder requests aren't inherently problematic, but they optimize for individual stakeholder success rather than product strategy coherence. Effective rejection requires acknowledging these incentive realities while maintaining strategic focus.

The Strategic Framework: A Proven Decision-Making System to Keep Incentives from Sabotaging Product Strategy

The following Product Backlog management graphic illustrates a sophisticated and proven decision-making system that many Product Owners and Managers underutilize. It isn't a theoretical framework; it represents battle-tested approaches to strategic resource allocation under constraint.

The alignment-value pipeline concept demonstrates how ideas flow from multiple sources (stakeholder requests, user feedback, market data) through strategic filters (Product Goal, Product Vision) before reaching development resources. This systematic approach ensures that every feature request undergoes strategic evaluation rather than ad-hoc prioritization.

The framework's key strengths lie in its transparency and predictability. When the decision criteria are explicit and consistently applied, stakeholders can understand why their requests receive specific treatment. This transparency reduces political pressure and relationship friction because rejection feels systematic rather than personal. Moreover, it applies to everyone, regardless of position.
The Anti-Product Backlog component proves particularly powerful for managing stakeholder relationships during rejection conversations. Rather than dismissing ideas, this approach documents rejected requests with clear strategic rationales, demonstrating respect for stakeholder input while maintaining product focus.

The experimental validation loop directly addresses the generative AI challenge. Instead of building features because prototyping is easy, teams validate underlying hypotheses through structured experiments with measurable success criteria. This approach channels stakeholder enthusiasm for quick prototypes toward strategic learning rather than feature accumulation.

The refinement color coding (green, orange, grey, white) provides tactical communication tools for managing stakeholder expectations. When stakeholders understand that development capacity is finite and strategically allocated, they may begin self-filtering inappropriate requests and presenting others more effectively.

Technique One: Address Incentive Misalignments Before Feature Discussions

Traditional rejection conversations focus on feature merit without addressing underlying incentive structures. This approach treats symptoms while ignoring causes, often leading to recurring requests for the same misaligned features.

Consider starting rejection conversations by acknowledging stakeholder incentive realities: "I understand your quarterly goals include improving customer onboarding metrics, and this feature seems designed to address that objective. Let me explain why I think our current user activation experiments will have a greater impact on those same metrics."

This approach accomplishes several things: it demonstrates an understanding of stakeholder motivations, connects rejection to shared objectives, and redirects energy toward aligned solutions. You're working within incentive structures rather than fighting them while maintaining strategic focus.

For AI-generated prototypes, address the incentive to optimize for implementation speed over strategic value: "This prototype demonstrates technical feasibility, but before committing development resources, I need to understand the strategic hypothesis we're testing and how we'll measure success beyond technical implementation."

Document these incentive conversations as part of your Anti-Product Backlog entries. When stakeholders see their motivations acknowledged and addressed systematically, they're more likely to trust future rejection decisions and collaborate on alternative approaches.

Technique Two: Leverage Transparency as Strategic Protection

The Anti-Product Backlog system provides more than rejection documentation: it creates transparency that protects Product Owners and Managers from political pressure while educating stakeholders about strategic thinking.

Make your strategic criteria explicit and easily accessible. When stakeholders understand your decision framework before making requests, they can self-filter inappropriate ideas and present others more strategically. This transparency reduces rejection conversations by improving request quality.

For each rejected item, document:

- The strategic misalignment (how does this conflict with Product Goal/Vision?)
- The opportunity cost (what strategic work would this displace?)
- The incentive analysis (what stakeholder objectives does this serve?)
- The alternative approaches (how else might we address the underlying need?)
- The reconsideration criteria (what would need to change to revisit this?)
This systematic transparency serves multiple purposes: it demonstrates thoughtful analysis rather than arbitrary rejection, provides stakeholders with clear feedback on request quality, and creates precedent documentation that prevents the same arguments from recurring.

Address AI prototype presentations with similar transparency: "I appreciate the technical exploration, but our Product Backlog prioritization depends on strategic alignment and validated user needs rather than implementation feasibility. Let me show you how this request fits into our current strategic framework."

Technique Three: Transform Rejection into Strategic Education

Every rejection conversation represents an opportunity to educate stakeholders about strategic product thinking while addressing their underlying incentive pressures.

Connect rejection rationales to measurable outcomes that align with stakeholder objectives: "I understand you need to improve support ticket resolution times. This feature might help marginally, but our planned user onboarding improvements could reduce ticket volume by 30% based on our support analysis, which would have a greater impact on your team's performance metrics."

For AI-generated prototypes, use rejection as education about strategic experimentation: "This prototype shows what we could build, but effective product strategy requires understanding why we should build it and how we'll know if it succeeds. Before committing to development, let's define the strategic hypothesis and success criteria."

Reference the systematic process explicitly: "Our alignment-value pipeline shows 47 items in various stages representing 12 weeks of development work. This request would need to demonstrate higher strategic impact than current items to earn prioritization, and I don't see evidence for that impact yet."

This educational approach gradually shifts stakeholder mental models from feature-focused to outcome-focused thinking. When stakeholders understand the true cost of product decisions and the strategic logic behind prioritization, they begin collaborating more effectively within strategic constraints rather than trying to circumvent them.

The Incentive Reality: Systematic Causes Require Systematic Solutions

Organizational incentives create predictable stakeholder behavior patterns that individual rejection conversations cannot address. Sales teams get compensated for promises that product teams must deliver. Marketing departments face engagement metrics that feature requests could theoretically improve. Customer support managers need ticket resolution improvements that workflow automation might provide.

These incentive structures aren't necessarily wrong, but they often conflict with product strategy coherence. Effective Product Owners and Managers must navigate these realities without compromising strategic focus.

Building Systematic Rejection Capability

Individual rejection conversations matter less than systematic practices that align organizational incentives with product strategy while maintaining stakeholder relationships. Consequently, establish regular stakeholder education sessions in which you share the alignment-value pipeline framework and demonstrate how strategic decisions are made. When stakeholders understand the system, they can work more effectively within it.
Create metrics that track rejection effectiveness: ratio of strategic alignment in requests over time, stakeholder satisfaction despite rejections, value creation improvements from strategic focus, and business impact metrics from accepted features.

Use Sprint Reviews to reinforce outcome-focused thinking by presenting strategic learning and business impact rather than just feature demonstrations. This gradually shifts organizational culture from output celebration to outcome achievement.

Most importantly, recognize that strategic rejection isn't about individual skills. Instead, it's about organizational systems that either support or undermine strategic product thinking. Master systematic approaches, and you will build products that create a sustainable competitive advantage while maintaining stakeholder relationships based on mutual respect and strategic discipline, rather than diplomatic accommodation.

Conclusion: Transform Your Strategic Rejection Skills

Most Product Owners and Managers recognize these challenges but struggle with implementation. Reading frameworks doesn't change entrenched stakeholder behavior patterns; systematic practice does. Start immediately:

- Document the incentive structures driving your three most persistent stakeholder requests.
- Create your first Anti-Product Backlog entry with a strategic rationale.
- Practice direct rejection language, focusing on strategic alignment rather than diplomatic deflection.
My introduction to the world of edge AI deployment came with many tough lessons learned over five years of squeezing neural networks onto resource-constrained devices. If you're considering moving your AI models from comfortable cloud servers to the chaotic wilderness of edge devices, this article might save you some of the headaches I've endured.

The Edge AI Reality Check

Before I dive into comparing frameworks, let me share what prompted our team's journey to edge computing. We were building a visual inspection system for a manufacturing client, and everything was working beautifully... until the factory floor lost internet connectivity for three days. Our cloud-based solution became useless, and the client was not happy. That experience taught us that for many real-world applications, edge AI isn't just nice to have—it's essential.

Running models locally offers tangible benefits that I've seen transform projects:

- Latency improvements: One of our AR applications went from a noticeable 300ms lag to nearly instantaneous responses
- Privacy enhancements: Our healthcare clients could finally process sensitive patient data without regulatory nightmares
- Offline functionality: That manufacturing client? They never faced downtime again
- Cost savings: A startup I advised cut their cloud inference costs by 87% after moving to edge deployment

But these benefits come with significant challenges. I've spent countless hours optimizing models that worked perfectly in PyTorch or TensorFlow but choked on mobile devices. The three frameworks I've battled with most frequently are TensorFlow Lite, ONNX Runtime, and PyTorch Mobile. Each has made me pull my hair out in unique ways, but they've also saved the day on different occasions.

TensorFlow Lite: Google's Edge Solution That Made Me Both Curse and Cheer

My relationship with TensorFlow Lite began three years ago when we needed to deploy a custom image classification model on both Android and iOS devices.

What I've Learned About Its Architecture

TensorFlow Lite consists of three main components:

- A converter that transforms your beautiful TensorFlow models into an optimized FlatBuffer format (which sometimes feels like forcing an elephant through a keyhole)
- An interpreter that runs those compressed models
- Hardware acceleration APIs that, when they work, feel like magic

The first time I successfully quantized a model from 32-bit float to 8-bit integers and saw it run 3x faster with only a 2% accuracy drop, I nearly wept with joy. But it took weeks of experimentation to reach that point.

Where TFLite Shines in Real Projects

In my experience, TensorFlow Lite absolutely excels when:

- You're targeting Android: The integration is seamless, and performance is exceptional.
- You need serious optimization: Their quantization toolkit is the most mature I've used.
- Hardware acceleration matters: On a Pixel phone, I've achieved near-desktop performance using their GPU delegation.

During a recent project for a retail client, we deployed a real-time inventory tracking system using TFLite with Edge TPU acceleration on Coral devices. The performance was outstanding—the same model that struggled on CPU ran at 30+ FPS with the accelerator.

Where TFLite Has Caused Me Pain

However, TensorFlow Lite isn't all roses:

- Converting complex models can be maddening—I once spent three days trying to resolve compatibility issues with a custom attention mechanism.
- iOS deployment feels like an afterthought compared to Android.
- The error messages sometimes feel deliberately cryptic.
One particularly frustrating project involved a custom audio processing model with LSTM layers. What worked perfectly in regular TensorFlow required nearly a complete architecture redesign to function in TFLite.

ONNX Runtime: The Framework That Saved a Multi-Platform Project

Last year, I inherited a troubled project that needed to deploy the same computer vision pipeline across Windows tablets, Android phones, and Linux-based kiosks. The previous team had been maintaining three separate model implementations. It was a maintenance nightmare.

How ONNX Runtime Changed the Game

ONNX Runtime saved this project with its architecture built around:

- A standardized model format that creates true interoperability
- A sophisticated graph optimization engine that sometimes makes models faster than their original frameworks
- Pluggable execution providers that adapt to whatever hardware is available

Within two weeks, we had consolidated to a single model training pipeline, exporting to ONNX, and deploying across all platforms. The client was astonished that we resolved issues the previous team had struggled with for months.

When ONNX Runtime Proved Its Worth

ONNX Runtime has been my go-to solution when:

- I'm dealing with models coming from different frameworks (a data science team that uses PyTorch and a production team that prefers TensorFlow? No problem!)
- Cross-platform consistency is non-negotiable
- I need flexibility in deployment targets

On a healthcare project, we had researchers using PyTorch for model development while the production system was built around TensorFlow. ONNX bridged this gap perfectly, allowing seamless collaboration without forcing either team to abandon their preferred tools.

The ONNX Runtime Limitations That Bit Me

Despite its flexibility, ONNX Runtime has some drawbacks:

- The documentation can be fragmented and confusing—I've often had to dive into source code to understand certain behaviors
- Some cutting-edge model architectures require workarounds to convert properly
- The initial setup can be more involved than framework-specific solutions

During one project, we discovered that a particular implementation of a transformer model contained operations that weren't supported in ONNX Runtime. The workaround involved significant model surgery that took an entire sprint to resolve.

PyTorch Mobile: The New Kid That Won My Heart for Rapid Development

I was initially skeptical about PyTorch Mobile, having been burned by early versions. But on a recent project with a tight deadline, it completely changed my perspective.

What Makes PyTorch Mobile Different

PyTorch Mobile's approach centers on:

- TorchScript as an intermediate representation
- A surprisingly effective set of optimization tools
- A development experience that feels consistent with PyTorch itself

The standout feature is how it maintains PyTorch's dynamic nature where possible, which makes the development-to-deployment cycle much more intuitive for researchers and ML engineers.

When PyTorch Mobile Saved the Day

PyTorch Mobile became my framework of choice when:

- Working with research teams who live and breathe PyTorch
- Rapid prototyping and iteration are critical
- The model uses PyTorch-specific features

On an AR project that needed weekly model updates, the seamless workflow from research to deployment allowed us to iterate at a pace that would have been impossible with other frameworks. When a researcher improved the model, we could have it on testing devices the same day.
Where PyTorch Mobile Still Needs Improvement

However, PyTorch Mobile isn't perfect:

- Binary sizes tend to be larger than equivalent TFLite models—a simple classification model was 12MB in PyTorch Mobile but only 4MB in TFLite
- Hardware acceleration support isn't as extensive as TensorFlow Lite's
- Android integration feels less polished than iOS (ironically the opposite of TFLite)

A challenging project involving on-device training required us to eventually migrate from PyTorch Mobile to TFLite because of performance issues that we couldn't resolve within our timeframe.

Hands-On Comparison: Real Project Insights

After numerous deployments across these frameworks, I've developed some rules of thumb for choosing between them. Here's how they stack up in my real-world experience:

Model Compatibility Battle

| Framework | What Works Well | What's Caused Headaches |
|---|---|---|
| TensorFlow Lite | Standard CNN architectures, MobileNet variants | Custom layers, certain RNN implementations |
| ONNX Runtime | Models from multiple frameworks, traditional architectures | Cutting-edge research models, custom ops |
| PyTorch Mobile | Most PyTorch models, research code | Very large models, custom C++ extensions |

On a natural language processing project, converting our BERT-based model was straightforward with PyTorch Mobile but required significant reworking for TFLite. Conversely, a MobileNet-based detector was trivial to deploy with TFLite but needed adjustments for PyTorch Mobile.

Performance Showdown from Actual Benchmarks

These numbers come from a recent benchmarking effort on a Samsung S21 with a real-world image classification model:

| Framework | Inference Time | Memory Usage | Battery Impact |
|---|---|---|---|
| TensorFlow Lite | 23ms | 89MB | Low |
| ONNX Runtime | 31ms | 112MB | Medium |
| PyTorch Mobile | 38ms | 126MB | Medium-High |

The differences were less pronounced on iOS, with PyTorch Mobile performing closer to TFLite thanks to better CoreML integration.

Developer Experience Honest Assessment

Having trained multiple teams on these frameworks:

| Framework | Learning Curve | Debug Friendliness | Integration Effort |
|---|---|---|---|
| TensorFlow Lite | Steep for beginners | Moderate (better tools, worse errors) | Significant on iOS, minimal on Android |
| ONNX Runtime | Moderate | Challenging | Varies by platform |
| PyTorch Mobile | Gentle for PyTorch devs | Good | Straightforward but less documented |

Junior developers consistently become productive faster with PyTorch Mobile if they already know PyTorch. For TensorFlow developers, TFLite is the natural choice despite its occasional frustrations.

Hard-Earned Wisdom: Implementation Tips That Aren't in the Docs

After numerous production deployments, here are some practices that have saved me repeatedly:

Performance Optimization Secrets

- Profile before you optimize: I wasted days optimizing a model component that wasn't actually the bottleneck. Always profile on the target device first.
- Test quantization thoroughly: On a facial recognition project, quantization reduced size by 75% but introduced bias against certain skin tones—a serious issue we caught just before deployment. (A minimal sketch of this kind of before/after check appears after this list.)
- Consider model distillation: For a noise cancellation model, traditional quantization wasn't sufficient. Training a smaller model to mimic the large one resulted in better performance than compression alone.
- Hybrid execution sometimes wins: For an NLP application, we kept the embedding lookup on the device but offloaded the transformer components to the server, achieving a better balance than full on-device processing.
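The "test quantization thoroughly" advice is easier to act on with a small harness that compares the float and quantized models on the same validation inputs. Here is a minimal, framework-agnostic sketch in TypeScript; the two run functions are placeholders for whatever runtime bindings you actually use (TFLite, ONNX Runtime, or PyTorch Mobile), not real APIs, and the metrics are only examples of what you might track.

```typescript
// Hypothetical harness: compare a float model against its quantized version
// on a fixed validation set, tracking worst-case drift and top-1 agreement.
type RunModel = (input: Float32Array) => Promise<Float32Array>;

interface ComparisonReport {
  maxAbsDiff: number;    // largest per-logit deviation observed
  top1Agreement: number; // fraction of inputs where the predicted class matches
}

function argmax(xs: Float32Array): number {
  let best = 0;
  for (let i = 1; i < xs.length; i++) if (xs[i] > xs[best]) best = i;
  return best;
}

async function compareModels(
  inputs: Float32Array[],
  runFloatModel: RunModel,      // placeholder: wraps your float runtime
  runQuantizedModel: RunModel   // placeholder: wraps your quantized runtime
): Promise<ComparisonReport> {
  let maxAbsDiff = 0;
  let agreements = 0;

  for (const input of inputs) {
    const reference = await runFloatModel(input);
    const quantized = await runQuantizedModel(input);

    // Track the worst numeric drift across all outputs.
    for (let i = 0; i < reference.length; i++) {
      maxAbsDiff = Math.max(maxAbsDiff, Math.abs(reference[i] - quantized[i]));
    }
    // Track whether the final prediction changed.
    if (argmax(reference) === argmax(quantized)) agreements++;
  }

  return { maxAbsDiff, top1Agreement: agreements / inputs.length };
}
```

For fairness issues like the skin-tone bias mentioned above, you would slice the validation set by subgroup and compute the agreement per slice rather than relying on a single overall number.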
Workflow Tips That Save Time

- Create a consistent validation suite: We build a standard set of test inputs and expected outputs that we check at each stage—training, conversion, and deployment—to catch subtle issues.
- Version control everything: We've been saved multiple times by having model architecture, weights, test data, and conversion parameters in version control.
- Containerize your conversion pipeline: We use Docker to encapsulate the exact environment for model conversion, eliminating "it works on my machine" problems.
- Implement a comprehensive logging system: On devices, detailed logging of model behavior has helped us diagnose issues that weren't apparent in development.

Which Framework Should You Choose?

After all these projects, here's my pragmatic advice:

Choose TensorFlow Lite when:

- You're primarily targeting Android
- Maximum performance on limited hardware is critical
- You're comfortable in the TensorFlow ecosystem
- You need deployment on truly tiny devices (microcontrollers)

Choose ONNX Runtime when:

- You're supporting multiple platforms with one codebase
- Your organization uses a mix of ML frameworks
- Flexibility matters more than absolute performance
- You want to future-proof against framework changes

Choose PyTorch Mobile when:

- Your team consists of PyTorch researchers or developers
- Rapid iteration is more important than last-mile optimization
- You're working with models that use PyTorch-specific features
- Development speed takes priority over deployment optimization

For many of my clients, a hybrid approach has worked best: using PyTorch Mobile for rapid prototyping and ONNX Runtime or TFLite for final production deployment.

The Edge AI Landscape Continues to Evolve

The frameworks I've discussed are moving targets. Just last month, I redeployed a model that wasn't feasible on devices a year ago. The field is evolving rapidly:

- TensorFlow Lite continues to expand hardware support and optimization techniques
- ONNX Runtime is improving its tooling and documentation
- PyTorch Mobile is closing the performance gap while maintaining its developer-friendly approach

In my experience, the choice of framework is significant but not definitive. More important is understanding the fundamental challenges of edge deployment and building your expertise in optimization techniques that apply across frameworks. The most successful edge AI projects I've worked on weren't successful because of the framework choice—they succeeded because the team thoroughly understood the constraints of their target devices and designed with those limitations in mind from the beginning.

Further Study

- TensorFlow Lite Documentation
- ONNX Runtime Documentation
- PyTorch Mobile Documentation
Forget the idea that modernization has to mean rewriting everything. The real work happens in the in-between, where REST meets SOAP, where sidecars live beside WAR files, and where code changes are political before they're technical. Especially in high-stakes, compliance-bound environments like healthcare, government, and labor systems, modernization doesn’t look like a revolution. It looks like a careful negotiation. When Modernization Isn't Optional (Yet Also Isn't a Rewrite) Enterprise Java applications aren’t always a clean slate. Many developers work with monoliths that began over a decade ago, coded with Hibernate DAOs, JSP-driven UIs, and SOAP-based services stitched together with barely documented business logic. These aren’t museum pieces. They still power critical systems. The real tension arises when modern demands enter the mix: JSON interfaces, cloud triggers, and client-side rendering. Teams are often asked to retrofit REST endpoints into XML-heavy ecosystems or to integrate AWS services without disturbing a decades-old session flow. Modernization, in this context, means layering, not leaping. You’re not rewriting the system; you’re weaving new threads into a fabric that’s still holding the enterprise together. In one modernization project for a public-sector system, even a minor update to session tracking caused a chain reaction that disabled inter-agency authentication. This underscored the principle that modernization efforts must be staged, tested in sandboxed environments, and introduced gradually through abstraction layers, not pushed as big-bang releases. In a real-world healthcare analytics project for a government client, we had to preserve decades-old XML-based report generation logic while enabling modern REST-based data access for third-party dashboards. The solution wasn't to replace, but to extend: we introduced Spring Boot services that intercepted requests, repackaged outputs into JSON, and served them in parallel without touching the legacy data model. It worked, not because we transformed everything, but because we layered just enough to adapt what mattered. The Balancing Act: Spring Boot Meets Legacy WAR Files One common strategy is selective decomposition: peeling off slices of the monolith into Spring Boot services. But this isn’t a clean cut. Existing WAR deployments require coexistence strategies, especially when the primary app is deployed on traditional servlet containers like WebLogic or Tomcat. Spring Boot’s embedded Tomcat can conflict with the parent app server’s lifecycle, and developers often have to carefully configure ports, context paths, and threading models to avoid resource contention. Session management is another complication. Many legacy systems rely on stateful sessions, and introducing token-based authentication in Spring Boot may create a mismatch in how user identity is tracked across services. Teams must decide whether to retain centralized session management or introduce statelessness at the edge and deal with session bridging internally. Configuration strategies also diverge; legacy apps may rely on property files and hardcoded XML, whereas Spring Boot encourages externalized, environment-driven configs. The architectural result often looks transitional. A legacy monolith continues to function as the core engine, but adjacent REST services are spun off using Spring Boot. These services are fronted by an API gateway or reverse proxy, and internal SOAP calls may be wrapped in REST adapters. 
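To illustrate the kind of layering described above, here is a minimal sketch of a Spring Boot REST adapter that fronts a legacy XML endpoint and republishes the same data as JSON. The endpoint URL, paths, and the use of Jackson's XmlMapper are assumptions for illustration, not details from the projects mentioned in this article.

Java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class LegacyReportAdapter {

    // Hypothetical legacy endpoint; in a real system this would be externalized configuration.
    private static final String LEGACY_URL = "http://legacy-host/reports/{id}";

    private final RestTemplate restTemplate = new RestTemplate();
    private final XmlMapper xmlMapper = new XmlMapper();        // requires jackson-dataformat-xml
    private final ObjectMapper jsonMapper = new ObjectMapper();

    // Fetches the legacy XML payload, converts it to a generic tree, and serves it as JSON,
    // leaving the legacy contract and data model untouched.
    @GetMapping(value = "/api/reports/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
    public String report(@PathVariable String id) throws Exception {
        String xml = restTemplate.getForObject(LEGACY_URL, String.class, id);
        JsonNode tree = xmlMapper.readTree(xml);
        return jsonMapper.writeValueAsString(tree);
    }
}

An adapter like this can be deployed independently of the monolith and later rerouted or retired without touching the XML producer behind it.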
Anti-corruption layers come into play to keep new services loosely coupled. Done right, this architecture buys time and stability while enabling incremental modernization. A similar approach was applied in a case involving a pension management platform, where we deployed Spring Boot services alongside a WAR-deployed core app still running on WebLogic. The goal wasn't to replace workflows overnight, but to carefully intercept and extend specific endpoints, such as payment triggers and beneficiary lookups, through newer RESTful interfaces. This allowed independent deployments and cloud-based enhancements without risking the system's compliance-grade uptime. RESTful Interfaces at War With XML Contracts The move to REST isn't just about switching verbs or moving from POST to PUT. Older systems may still speak WSDL, and dropping XML isn't always viable due to integrations with legacy clients, vendors, or compliance protocols. Some government-facing systems still require signed XML payloads validated against DTDs or XSDs. To maintain compatibility, many teams dual-maintain both REST and SOAP formats. JAXB plays a key role in binding Java objects to XML. At the same time, libraries like XStream allow flexible serialization control for hybrid payloads that must meet backward compatibility while providing modern interfaces. Developers often annotate classes with JAXB and then use custom message converters or serialization logic to support both XML and JSON APIs. Here's an example of a JAXB-bound class with a fallback handler that captures unknown elements: Java @XmlRootElement(name = "User") public class User { private String name; private String email; private final List<Object> unknownFields = new ArrayList<>(); @XmlElement public String getName() { return name; } @XmlElement public String getEmail() { return email; } @XmlAnyElement public List<Object> getUnknowns() { return unknownFields; } } This setup allows parallel API exposure without breaking legacy consumers, offering a pathway to decouple the front-facing API strategy from backend contracts. From Server Rooms to the Cloud: Incremental AWS Integration You don't forklift a Java app to the cloud. You sneak it in, one event, one interface, one file upload at a time. One practical method is integrating AWS services alongside existing workflows, without replacing core components. A common pattern involves leveraging S3 buckets for file storage. A legacy Java backend can upload user files or reports to a bucket, which then triggers a Lambda function through an S3 event notification. The Lambda function might perform transformations, enrich data, or send alerts to other services using SNS or SQS. The Spring Cloud AWS toolkit simplifies this integration. Developers use the spring-cloud-starter-aws library to configure S3 clients, IAM roles, and event subscriptions. IAM roles can be applied at the EC2 instance level or dynamically assumed using STS tokens. Within the codebase, these roles provide controlled access to buckets and queues. Diagrammatically, the flow runs from the legacy backend to an S3 bucket, from the bucket's event notification to a Lambda function, and from the Lambda on to SNS or SQS consumers. This is exactly what played out in a logistics project where we needed to track shipment uploads in real time. Rather than rewrite the core Java backend, we allowed it to upload files to an S3 bucket. An AWS Lambda picked up the event, transformed the metadata, and forwarded it to a tracking dashboard, all without changing a single servlet in the monolith. It was cloud-native augmentation, not cloud-for-cloud's-sake.
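The S3-to-Lambda pattern described above needs only a small change on the legacy side: an upload call. The sketch below uses the plain AWS SDK v2 for brevity (Spring Cloud AWS can inject an equivalent client); the bucket name, key prefix, and region are illustrative assumptions.

Java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.nio.file.Path;

public class ShipmentUploadService {

    // Illustrative bucket; credentials are resolved from the instance role or STS, not hardcoded.
    private static final String BUCKET = "shipment-uploads";

    private final S3Client s3 = S3Client.builder()
            .region(Region.US_EAST_1)
            .build();

    // Uploading the file is the only new responsibility of the legacy backend; the S3 event
    // notification configured on the bucket then triggers the Lambda that enriches the metadata.
    public void upload(Path file) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(BUCKET)
                .key("incoming/" + file.getFileName())
                .build();
        s3.putObject(request, RequestBody.fromFile(file));
    }
}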
This approach decouples cloud adoption from full-stack reengineering, allowing legacy systems to coexist with modern event-driven flows. It also acts as a proving ground for broader cloud-native transitions without disrupting production workloads. A future extension could involve migrating only certain data processing steps to ECS or EKS, based on what proves stable during these early Lambda integrations. CI/CD and Monitoring for a Codebase You Didn’t Write Legacy code often resists modernization, especially when coverage is low and configuration is arcane. CI/CD pipelines offer a lifeline by enforcing baseline quality checks and deployment discipline. A typical setup involves Jenkins pipelines that trigger on Git push events. The pipeline stages include Maven builds, code linting using PMD and Checkstyle, and static analysis using SonarQube. In large codebases, teams also use Jacoco to track incremental coverage improvements, even if full coverage is impractical. Jenkins shared libraries help enforce common pipeline stages across services. Here’s a Jenkinsfile snippet from one such setup: Groovy pipeline { agent any stages { stage('Build') { steps { sh 'mvn clean install -DskipTests' } } stage('Code Quality') { steps { sh 'mvn pmd:check checkstyle:check sonar:sonar' } } } } Monitoring requires similar compromise. Tools like AppDynamics or New Relic must be configured with custom JVM arguments during startup, often injected through shell scripts or container entrypoints. With EJBs or SOAP layers, automatic instrumentation may fail, requiring explicit trace annotations or manual config. Even if the system is too fragile for full APM adoption, lightweight metrics and error tracking can surface regressions early, making the system more observable without requiring invasive rewrites. Modernization Isn’t Technical First, It’s Strategic Rushing modernity is the fastest way to break an enterprise system. What makes modernization sustainable isn’t tooling; it’s judgment. Whether we’re layering Spring Boot next to a WAR-deployed monolith, adapting REST interfaces over legacy XML, or routing data through S3 buckets instead of adding endpoints, the same principle applies: progress happens at the edges first. Modernization doesn’t begin with code. It begins with the courage to leave working systems untouched, and the clarity to change only what will move the system forward. That’s not just modernization. That’s stewardship.
In modern enterprise applications, effective logging and traceability are critical for debugging and monitoring business processes. Mapped Diagnostic Context (MDC) provides a mechanism to enrich logging statements with contextual information, making it easier to trace requests across different components. This article explores the challenges of MDC propagation in Spring Integration and presents strategies to ensure that the diagnostic context remains intact as messages traverse its channels. Let's start with a very brief overview of both technologies. If you are already familiar with them, you can go straight to the 'Marry Spring Integration with MDC' section.

Mapped Diagnostic Context

Mapped Diagnostic Context (MDC) plays a crucial role in logging by providing a way to enrich log statements with contextual information specific to a request, transaction, or process. This enhances traceability, making it easier to correlate logs across different components in a distributed system.

Java { MDC.put("SOMEID", "xxxx"); runSomeProcess(); MDC.clear(); }

All the logging calls invoked inside runSomeProcess will have "SOMEID" in the context, and it can be added to log messages with the appropriate pattern in the logger configuration. I will use log4j2, but SLF4J also supports MDC.

XML pattern="%d{HH:mm:ss} %-5p [%X{SOMEID}] [%X{TRC_ID}] - %m%n"

The %X placeholder in log4j2 outputs MDC values (in this case, SOMEID and TRC_ID). Output:

Plain Text 18:09:19 DEBUG [SOMEIDVALUE] [] SomClass:XX - log message text

Here we can see that TRC_ID was substituted with an empty string, as it was not set in the MDC context (so a missing value does not affect operations running outside of the context). And here are logs that are a terrible mess of interleaved threads:
src length: 2 19:54:04 52 DEBUG Service2:22 - result: [77, 81, 61, 61] 19:54:04 52 DEBUG DirectChannel:191 - preSend on channel 'bean 'demoWorkflow.channel#4'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=MQ==, headers={SOMEID=30, id=abbff9b1-1273-9fc8-127d-ca78ffaae07a, timestamp=1747500844111}] 19:54:04 52 INFO IntegrationConfiguration:81 - Result: MQ== 19:54:04 52 DEBUG DirectChannel:191 - postSend (sent=true) on channel 'bean 'demoWorkflow.channel#4'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=MQ==, headers={SOMEID=30, id=abbff9b1-1273-9fc8-127d-ca78ffaae07a, timestamp=1747500844111}] 19:54:04 52 DEBUG QueueChannel:191 - postReceive on channel 'bean 'queueChannel-Q'; defined in: 'class path resource [com/fbytes/mdcspringintegration/integration/IntegrationConfiguration.class]'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.queueChannelQ()'', message: GenericMessage [payload=1, headers={SOMEID=31, id=d0b6c58d-457e-876c-a240-c36d36f7e4f5, timestamp=1747500838034}] 19:54:04 52 DEBUG PollingConsumer:313 - Poll resulted in Message: GenericMessage [payload=1, headers={SOMEID=31, id=d0b6c58d-457e-876c-a240-c36d36f7e4f5, timestamp=1747500838034}] 19:54:04 52 DEBUG ServiceActivatingHandler:313 - ServiceActivator for [org.springframework.integration.handler.MethodInvokingMessageProcessor@1907874b] (demoWorkflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#4) received message: GenericMessage [payload=1, headers={SOMEID=31, id=d0b6c58d-457e-876c-a240-c36d36f7e4f5, timestamp=1747500838034}] 19:54:04 52 DEBUG Service2:16 - encoding 1 19:54:04 49 DEBUG Service1:24 - words processed: 1 19:54:04 49 DEBUG QueueChannel:191 - preSend on channel 'bean 'queueChannel-Q'; defined in: 'class path resource [com/fbytes/mdcspringintegration/integration/IntegrationConfiguration.class]'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.queueChannelQ()'', message: GenericMessage [payload=1, headers={id=6a67a5b4-724b-6f54-4e9f-acdeb2a7a235, timestamp=1747500844114}] 19:54:04 49 DEBUG QueueChannel:191 - postSend (sent=true) on channel 'bean 'queueChannel-Q'; defined in: 'class path resource [com/fbytes/mdcspringintegration/integration/IntegrationConfiguration.class]'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.queueChannelQ()'', message: GenericMessage [payload=1, headers={SOMEID=37, id=07cf749d-741e-640c-eb4f-f9bcd293dbcd, timestamp=1747500844114}] 19:54:04 49 DEBUG DirectChannel:191 - postSend (sent=true) on channel 'bean 'demoWorkflow.channel#3'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=gd, headers={id=e7aedd50-8075-fa2a-9dd3-c11956e0d296, timestamp=1747500843637}] 19:54:04 49 DEBUG DirectChannel:191 - postSend (sent=true) on channel 'bean 'demoWorkflow.channel#2'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=gd, headers={id=e7aedd50-8075-fa2a-9dd3-c11956e0d296, timestamp=1747500843637}] 19:54:04 49 DEBUG DirectChannel:191 - postSend (sent=true) on channel 'bean 'demoWorkflow.channel#1'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=(37,gd), 
headers={id=3048a04c-ff44-e2ce-98a4-c4a84daa0656, timestamp=1747500843636}] 19:54:04 49 DEBUG DirectChannel:191 - postSend (sent=true) on channel 'bean 'demoWorkflow.channel#0'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=(37,gd), headers={id=d76dff34-3de5-e830-1f6b-48b337e0c658, timestamp=1747500843636}] 19:54:04 49 DEBUG SourcePollingChannelAdapter:313 - Poll resulted in Message: GenericMessage [payload=(38,g), headers={id=495fe122-df04-2d57-dde2-7fc045e8998f, timestamp=1747500844114}] 19:54:04 49 DEBUG DirectChannel:191 - preSend on channel 'bean 'demoWorkflow.channel#0'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=(38,g), headers={id=495fe122-df04-2d57-dde2-7fc045e8998f, timestamp=1747500844114}] 19:54:04 49 DEBUG ServiceActivatingHandler:313 - ServiceActivator for [org.springframework.integration.handler.LambdaMessageProcessor@7efd28bd] (demoWorkflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#0) received message: GenericMessage [payload=(38,g), headers={id=495fe122-df04-2d57-dde2-7fc045e8998f, timestamp=1747500844114}] 19:54:04 49 DEBUG DirectChannel:191 - preSend on channel 'bean 'demoWorkflow.channel#1'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=(38,g), headers={id=1790d3d8-9501-f479-c5ee-6b9232295313, timestamp=1747500844114}] 19:54:04 49 DEBUG MessageTransformingHandler:313 - bean 'demoWorkflow.transformer#0' for component 'demoWorkflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#1'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()' received message: GenericMessage [payload=(38,g), headers={id=1790d3d8-9501-f479-c5ee-6b9232295313, timestamp=1747500844114}] 19:54:04 49 DEBUG DirectChannel:191 - preSend on channel 'bean 'demoWorkflow.channel#2'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=g, headers={id=e2f69d41-f760-2f4d-87c2-4e990beefdaa, timestamp=1747500844114}] 19:54:04 49 DEBUG MessageFilter:313 - bean 'demoWorkflow.filter#0' for component 'demoWorkflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#2'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()' received message: GenericMessage [payload=g, headers={id=e2f69d41-f760-2f4d-87c2-4e990beefdaa, timestamp=1747500844114}] 19:54:04 49 DEBUG DirectChannel:191 - preSend on channel 'bean 'demoWorkflow.channel#3'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=g, headers={id=e2f69d41-f760-2f4d-87c2-4e990beefdaa, timestamp=1747500844114}] 19:54:04 49 DEBUG ServiceActivatingHandler:313 - ServiceActivator for [org.springframework.integration.handler.MethodInvokingMessageProcessor@1e469dfd] (demoWorkflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#3) received message: GenericMessage [payload=g, headers={id=e2f69d41-f760-2f4d-87c2-4e990beefdaa, timestamp=1747500844114}] 19:54:04 49 DEBUG Service1:17 - process1. src length: 1 19:54:04 49 DEBUG Service1:24 - words processed: 1 It will become readable, and even the internal Spring Integration messages are attached to specific SOMEID processing. 
Plain Text 19:59:44 49 DEBUG [19] [] Service1:17 - process1. src length: 3 19:59:45 52 DEBUG [6] [] Service2:22 - result: [77, 119, 61, 61] 19:59:45 52 DEBUG [6] [] DirectChannel:191 - preSend on channel 'bean 'demoWorkflow.channel#4'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=Mw==, headers={SOMEID=6, id=b19eb8b6-7c5b-aa5a-31d0-dc9b940e4cd9, timestamp=1747501185064}] 19:59:45 52 INFO [6] [] IntegrationConfiguration:81 - Result: Mw== 19:59:45 52 DEBUG [6] [] DirectChannel:191 - postSend (sent=true) on channel 'bean 'demoWorkflow.channel#4'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=Mw==, headers={SOMEID=6, id=b19eb8b6-7c5b-aa5a-31d0-dc9b940e4cd9, timestamp=1747501185064}] 19:59:45 52 DEBUG [6] [] QueueChannel:191 - postReceive on channel 'bean 'queueChannel-Q'; defined in: 'class path resource [com/fbytes/mdcspringintegration/integration/IntegrationConfiguration.class]'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.queueChannelQ()'', message: GenericMessage [payload=2, headers={SOMEID=7, id=5e4f9113-6520-c20c-afc8-f8e1520bf9e9, timestamp=1747501177082}] 19:59:45 52 DEBUG [7] [] PollingConsumer:313 - Poll resulted in Message: GenericMessage [payload=2, headers={SOMEID=7, id=5e4f9113-6520-c20c-afc8-f8e1520bf9e9, timestamp=1747501177082}] 19:59:45 52 DEBUG [7] [] ServiceActivatingHandler:313 - ServiceActivator for [org.springframework.integration.handler.MethodInvokingMessageProcessor@5d21202d] (demoWorkflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#4) received message: GenericMessage [payload=2, headers={SOMEID=7, id=5e4f9113-6520-c20c-afc8-f8e1520bf9e9, timestamp=1747501177082}] 19:59:45 52 DEBUG [7] [] Service2:16 - encoding 2 19:59:45 53 DEBUG [] [] QueueChannel:191 - postReceive on channel 'bean 'queueChannel-Q'; defined in: 'class path resource [com/fbytes/mdcspringintegration/integration/IntegrationConfiguration.class]'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.queueChannelQ()'', message: GenericMessage [payload=2, headers={SOMEID=8, id=37400675-0f79-8a89-de36-dacf2feb106e, timestamp=1747501177343}] 19:59:45 53 DEBUG [8] [] PollingConsumer:313 - Poll resulted in Message: GenericMessage [payload=2, headers={SOMEID=8, id=37400675-0f79-8a89-de36-dacf2feb106e, timestamp=1747501177343}] 19:59:45 53 DEBUG [8] [] ServiceActivatingHandler:313 - ServiceActivator for [org.springframework.integration.handler.MethodInvokingMessageProcessor@5d21202d] (demoWorkflow.org.springframework.integration.config.ConsumerEndpointFactoryBean#4) received message: GenericMessage [payload=2, headers={SOMEID=8, id=37400675-0f79-8a89-de36-dacf2feb106e, timestamp=1747501177343}] 19:59:45 53 DEBUG [8] [] Service2:16 - encoding 2 19:59:45 52 DEBUG [7] [] Service2:22 - result: [77, 103, 61, 61] 19:59:45 52 DEBUG [7] [] DirectChannel:191 - preSend on channel 'bean 'demoWorkflow.channel#4'; from source: 'com.fbytes.mdcspringintegration.integration.IntegrationConfiguration.demoWorkflow()'', message: GenericMessage [payload=Mg==, headers={SOMEID=7, id=bbb9f71f-37d8-8bc4-90c3-bfb813430e4a, timestamp=1747501185469}] 19:59:45 52 INFO [7] [] IntegrationConfiguration:81 - Result: Mg== Under the hood, MDC uses ThreadLocal storage, tying the context to the current thread. 
This works seamlessly in single-threaded flows but requires special handling in multi-threaded scenarios, such as Spring Integration's queue channels.

Spring Integration

Spring Integration is a great part of the Spring portfolio that enables a new level of service decoupling: you build an application workflow in which data is passed between services as messages and declare which service method to invoke for each processing step, rather than making direct service-to-service calls.

Java IntegrationFlow flow = IntegrationFlow.from("sourceChannel") .handle("service1", "runSomeProcess") .filter(....) .transform(...) .split() .channel("serviceInterconnect") .handle("service2", "runSomeProcess") .get();

Here we:
- Get data from "sourceChannel" (assuming a bean with such a name is already registered);
- Invoke service1.runSomeProcess, passing the data (unwrapped from Spring Integration's Message<?>);
- The returned result (whatever it is) is wrapped back into a Message and undergoes some filtering and transformations;
- The result (assuming it is some array or Stream) is split for per-entry processing;
- Entries (wrapped in Message) are passed to the "serviceInterconnect" channel;
- Entries are processed by service2.runSomeProcess.

Spring Integration provides message channels of several types. What is important here is that some of them run the consumer process on the producer's thread, while others (e.g., the Queue channel) delegate the processing to separate consumer threads. There, the thread-local MDC context will be lost, so we need to find a way to propagate it down the workflow.

Marry Spring Integration With MDC

While micrometer-tracing propagates MDC between microservices, it doesn't handle Spring Integration's queue channels, where thread switches occur. To maintain the MDC context, it must be stored in message headers on the producer side and restored on the consumer side. Below are three methods to achieve this:
- Use Spring Integration Advice;
- Use a Spring-AOP @Aspect;
- Use a Spring Integration ChannelInterceptor.

1. Using Spring Integration Advice

Java @Service class MdcAdvice implements MethodInterceptor { @Autowired IMDCService mdcService; @Override public Object invoke(MethodInvocation invocation) throws Throwable { Message<?> message = (Message<?>) invocation.getArguments()[0]; Map<String, String> mdcMap = (Map<String, String>) message.getHeaders().entrySet().stream() .filter(...) .collect(Collectors.toMap(Map.Entry::getKey, entry -> String.valueOf(entry.getValue()))); mdcService.set(mdcMap); try { return invocation.proceed(); } finally { mdcService.clear(mdcMap); } } }

It should be directly specified for the handler in the workflow, e.g.:

Java .handle("service1", "runSomeProcess", epConfig -> epConfig.advice(mdcAdvice))

Disadvantages
- It covers only the handler. The context is cleared right after it, so logging of the processes between handlers will have no context.
- It has to be added manually to every handler.

2. Using Spring-AOP @Aspect

Java @Aspect @Component public class MdcAspect { @Autowired IMDCService mdcService; @Around("execution(* org.springframework.messaging.MessageHandler.handleMessage(..))") public Object aroundHandleMessage(ProceedingJoinPoint joinPoint) throws Throwable { Message<?> message = (Message<?>) joinPoint.getArgs()[0]; Map<String, String> mdcMap = (Map<String, String>) message.getHeaders().entrySet().stream() .filter(...)
.collect(Collectors.toMap(Map.Entry::getKey, entry -> (String) entry.getValue())); mdcService.setContextMap(mdcMap); try { return joinPoint.proceed(); } finally { mdcService.clear(mdcMap); } } }

Disadvantages
- It is invoked automatically, but only for "stand-alone" MessageHandlers. It won't work for handlers defined inline, e.g., because in that case the handler is not a proxied bean:

Java .handle((msg, headers) -> { return service1.runSomeProcess(); })

- It also covers only the handlers.

3. Using Spring Integration ChannelInterceptor

First, we need to clear the context at the end of the processing. It can be done by defining a custom TaskDecorator:

Java @Service public class MdcClearingTaskDecorator implements TaskDecorator { private static final Logger logger = LogManager.getLogger(MdcClearingTaskDecorator.class); private final MDCService mdcService; public MdcClearingTaskDecorator(MDCService mdcService) { this.mdcService = mdcService; } @Override public Runnable decorate(Runnable runnable) { return () -> { try { runnable.run(); } finally { logger.debug("Cleaning the MDC context"); mdcService.clearMDC(); } }; } }

And set it for all TaskExecutors:

Java @Bean(name = "someTaskExecutor") public TaskExecutor someTaskExecutor() { ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor(); executor.setTaskDecorator(mdcClearingTaskDecorator); executor.initialize(); return executor; }

Used by pollers:

Java @Bean(name = "somePoller") public PollerMetadata somePoller() { return Pollers.fixedDelay(Duration.ofSeconds(30)) .taskExecutor(someTaskExecutor()) .getObject(); }

Inline:

Java .from(consoleMessageSource, c -> c.poller(p -> p.fixedDelay(1000).taskExecutor(someTaskExecutor())))

Now, we need to save and restore the context as it passes the Pollable channels.

Java @Service @GlobalChannelInterceptor(patterns = {"*-Q"}) public class MdcChannelInterceptor implements ChannelInterceptor { private static final Logger logger = LogManager.getLogger(MdcChannelInterceptor.class); @Value("${mdcspringintegration.mdc_header}") private String mdcHeader; @Autowired private MDCService mdcService; @Override public Message<?> preSend(Message<?> message, MessageChannel channel) { if (!message.getHeaders().containsKey(mdcHeader)) { return MessageBuilder.fromMessage(message) .setHeader(mdcHeader, mdcService.fetch(mdcHeader)) // Add a new header .build(); } if (channel instanceof PollableChannel) { logger.trace("Cleaning the MDC context for PollableChannel"); mdcService.clearMDC(); // clear MDC in producer's thread } return message; } @Override public Message<?> postReceive(Message<?> message, MessageChannel channel) { if (channel instanceof PollableChannel) { logger.trace("Setting MDC context for PollableChannel"); Map<String, String> mdcMap = message.getHeaders().entrySet().stream() .filter(entry -> entry.getKey().equals(mdcHeader)) .collect(Collectors.toMap(Map.Entry::getKey, entry -> (String) entry.getValue())); mdcService.setMDC(mdcMap); } return message; } }

- preSend is invoked on the producer thread before the message is added to the queue and cleans the context (of the producer's thread).
- postReceive is invoked on the consumer thread before the message is processed by the consumer.

This approach covers not only the handlers, but also the workflow (interrupting on queues only). @GlobalChannelInterceptor(patterns = {"*-Q"}) – automatically attaches the interceptor to all channels that match the pattern(s). A few words about the cleaning section of preSend.
At first sight, this cleanup may look unnecessary, but consider the thread's path when it encounters the split. The producer thread iterates over the split items and therefore still carries the context after sending each document to the queue; without the cleanup, the context would leak from doc1 processing into doc2 processing, and from doc2 into doc3. That's it. We get an MDC context end-to-end in the Spring Integration workflow. Do you know a better way? Please share in the comments.

Example Code

https://github.com/Sevick/MdcSpringIntegrationDemo
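For reference, the MDCService helper used throughout the snippets above could look roughly like the sketch below. This is a guess at its shape based on the calls shown in this article (setMDC, fetch, clearMDC); the actual implementation lives in the linked example repository.

Java
import org.slf4j.MDC;
import org.springframework.stereotype.Service;

import java.util.Map;

@Service
public class MDCService {

    // Copies every propagated header value into the current thread's MDC.
    public void setMDC(Map<String, String> contextMap) {
        contextMap.forEach(MDC::put);
    }

    // Reads a single value from the current thread's context, e.g. to copy it into a message header.
    public String fetch(String key) {
        return MDC.get(key);
    }

    // Drops the whole context for this thread, typically called from a TaskDecorator or interceptor.
    public void clearMDC() {
        MDC.clear();
    }
}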
In this article, I will discuss in a practical and objective way the integration of the Spring framework with the resources of the OpenAI API, one of the main artificial intelligence products on the market. The use of artificial intelligence resources is becoming increasingly necessary in several products, and therefore, presenting its application in a Java solution through the Spring framework allows a huge number of projects currently in production to benefit from this resource. All of the code used in this project is available via GitHub. To download it, simply run the following command: git clone https://github.com/felipecaparelli/openai-spring.git or via SSL git clone. Note: It is important to notice that there is a cost in this API usage with the OpenAI account. Make sure that you understand the prices related to each request (it will vary by tokens used to request and present in the response). Assembling the Project 1. Get API Access As defined in the official documentation, first, you will need an API key from OpenAI to use the GPT models. Sign up at OpenAI's website if you don’t have an account and create an API key from the API dashboard. Going to the API Keys page, select the option Create new secret key. Then, in the popup, set a name to identify your key (optional) and press Create secret key.Now copy the API key value that will be used in your project configuration. 2. Configure the Project Dependencies The easiest way to prepare your project structure is via the Spring tool called Spring Initializr. It will generate the basic skeleton of your project, adding the necessary libraries, the configuration, and also the main class to start your application. You must select at least the Spring Web dependency. In the Project type, I've selected Maven and Java 17. I've also included the httpclient5 library because it will be necessary to configure our SSL connector. Follow the snipped of the pom.xml generated: XML <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>3.3.2</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>br.com.erakles</groupId> <artifactId>spring-openai</artifactId> <version>0.0.1-SNAPSHOT</version> <name>spring-openai</name> <description>Demo project to explain the Spring and OpenAI integration</description> <properties> <java.version>17</java.version> <spring-ai.version>1.0.0-M1</spring-ai.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.apache.httpcomponents.client5</groupId> <artifactId>httpclient5</artifactId> <version>5.3.1</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> 3. Basic Configuration On your configuration file (application.properties), set the OpenAI secret key in the property, openai.api.key. 
You can also replace the model version on the properties file to use a different API version, like gpt-4o-mini. Properties files spring.application.name=spring-openai openai.api.url=https://api.openai.com/v1/chat/completions openai.api.key=YOUR-OPENAI-API-KEY-GOES-HERE openai.api.model=gpt-3.5-turbo A tricky part about connecting with this service via Java is that it will, by default, require your HTTP client to use a valid certificate while executing this request. To fix it, we will skip this validation step. 3.1 Skip the SSL validation To disable the requirement for a security certificate required by the JDK for HTTPS requests, you must include the following modifications in your RestTemplate bean via a configuration class: Java import org.apache.hc.client5.http.classic.HttpClient; import org.apache.hc.client5.http.impl.classic.HttpClients; import org.apache.hc.client5.http.impl.io.BasicHttpClientConnectionManager; import org.apache.hc.client5.http.socket.ConnectionSocketFactory; import org.apache.hc.client5.http.socket.PlainConnectionSocketFactory; import org.apache.hc.client5.http.ssl.NoopHostnameVerifier; import org.apache.hc.client5.http.ssl.SSLConnectionSocketFactory; import org.apache.hc.core5.http.config.Registry; import org.apache.hc.core5.http.config.RegistryBuilder; import org.apache.hc.core5.ssl.SSLContexts; import org.apache.hc.core5.ssl.TrustStrategy; import org.springframework.boot.web.client.RestTemplateBuilder; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.http.client.HttpComponentsClientHttpRequestFactory; import org.springframework.web.client.RestTemplate; import javax.net.ssl.SSLContext; @Configuration public class SpringOpenAIConfig { @Bean public RestTemplate secureRestTemplate(RestTemplateBuilder builder) throws Exception { // This configuration allows your application to skip the SSL check final TrustStrategy acceptingTrustStrategy = (cert, authType) -> true; final SSLContext sslContext = SSLContexts.custom() .loadTrustMaterial(null, acceptingTrustStrategy) .build(); final SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE); final Registry<ConnectionSocketFactory> socketFactoryRegistry = RegistryBuilder.<ConnectionSocketFactory> create() .register("https", sslsf) .register("http", new PlainConnectionSocketFactory()) .build(); final BasicHttpClientConnectionManager connectionManager = new BasicHttpClientConnectionManager(socketFactoryRegistry); HttpClient client = HttpClients.custom() .setConnectionManager(connectionManager) .build(); return builder .requestFactory(() -> new HttpComponentsClientHttpRequestFactory(client)) .build(); } } 4. Create a Service to Call the OpenAI API Now that we have all of the configuration ready, it is time to implement a service that will handle the communication with the ChatGPT API. I am using the Spring component, RestTemplate, which allows the execution of the HTTP requests to the OpenAI endpoint. 
Java import org.springframework.beans.factory.annotation.Value; import org.springframework.http.HttpEntity; import org.springframework.http.HttpHeaders; import org.springframework.http.HttpMethod; import org.springframework.http.MediaType; import org.springframework.stereotype.Service; import org.springframework.web.client.RestTemplate; @Service public class JavaOpenAIService { @Value("${openai.api.url}") private String apiUrl; @Value("${openai.api.key}") private String apiKey; @Value("${openai.api.model}") private String modelVersion; private final RestTemplate restTemplate; public JavaOpenAIService(RestTemplate restTemplate) { this.restTemplate = restTemplate; } /** * @param prompt - the question you are expecting to ask ChatGPT * @return the response in JSON format */ public String ask(String prompt) { HttpEntity<String> entity = new HttpEntity<>(buildMessageBody(modelVersion, prompt), buildOpenAIHeaders()); return restTemplate .exchange(apiUrl, HttpMethod.POST, entity, String.class) .getBody(); } private HttpHeaders buildOpenAIHeaders() { HttpHeaders headers = new HttpHeaders(); headers.set("Authorization", "Bearer " + apiKey); headers.set("Content-Type", MediaType.APPLICATION_JSON_VALUE); return headers; } private String buildMessageBody(String modelVersion, String prompt) { return String.format("{ \"model\": \"%s\", \"messages\": [{\"role\": \"user\", \"content\": \"%s\"}]}", modelVersion, prompt); } } 5. Create Your REST API Then, you can create your own REST API to receive the questions and redirect them to your service. Java import org.springframework.http.ResponseEntity; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; import br.com.erakles.springopenai.service.JavaOpenAIService; @RestController public class SpringOpenAIController { private final JavaOpenAIService javaOpenAIService; SpringOpenAIController(JavaOpenAIService javaOpenAIService) { this.javaOpenAIService = javaOpenAIService; } @GetMapping("/chat") public ResponseEntity<String> sendMessage(@RequestParam String prompt) { return ResponseEntity.ok(javaOpenAIService.ask(prompt)); } } Conclusion These are the steps required to integrate your web application with the OpenAI service. You can improve it later by adding more features like sending voice, images, and other files to their endpoints. After starting your Spring Boot application (./mvnw spring-boot:run), to test your web service, request the following URL: http://localhost:8080/chat?prompt={add-your-question}. If you did everything right, you will be able to read the result in your response body as follows: JSON { "id": "chatcmpl-9vSFbofMzGkLTQZeYwkseyhzbruXK", "object": "chat.completion", "created": 1723480319, "model": "gpt-3.5-turbo-0125", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Scuba stands for \"self-contained underwater breathing apparatus.\" It is a type of diving equipment that allows divers to breathe underwater while exploring the underwater world. Scuba diving involves using a tank of compressed air or other breathing gas, a regulator to control the flow of air, and various other accessories to facilitate diving, such as fins, masks, and wetsuits.
Scuba diving allows divers to explore the underwater environment and observe marine life up close.", "refusal": null }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 12, "completion_tokens": 90, "total_tokens": 102 }, "system_fingerprint": null } I hope this tutorial helped in your first interaction with the OpenAI service and makes your life easier while diving deeper into your AI journey. If you have any questions or concerns don't hesitate to send me a message.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code. The rise of low-code and no-code (LCNC) platforms has sparked a debate about their impact on the role of developers. Concerns about skill devaluation are understandable; after all, if anyone can build an app, what happens to the specialized knowledge of experienced programmers? While some skepticism toward low-code platforms remains, particularly concerning their suitability for large-scale, enterprise-level applications, it's important to recognize that these platforms are constantly evolving and improving. Many platforms now offer robust features like model-driven development, automated testing, and advanced data modeling, making them capable of handling complex business requirements. In addition, the ability to incorporate custom code modules ensures that specialized functionalities can still be implemented when needed. Yes, these tools are revolutionizing software creation, but it's time to move beyond the debate of their impact on the development landscape and delve into the practical realities. Instead of being a sales pitch of codeless platforms, this article aims to equip developers with a realistic understanding of what these tools can and cannot do, how they can change developer workflows, and most importantly, how you can harness their power to become more efficient and valuable in an AI-supported, LCNC-driven world. Leveraging Modern LCNC Platforms for Developer Workflows The financial benefits of LCNC platforms are undeniable. Reduced development costs, faster time to market, and a lighter burden on IT are compelling arguments. But it's the strategic advantage of democratizing application development by empowering individuals to develop solutions without any coding experience that drives innovation and competitive edge. For IT, it means less time fixing minor problems and more time on the big, important stuff. For teams outside of IT, it's like having a toolbox to build your own solutions. Need a way to track project deadlines? There's an app for that. Want to automate a tedious report? You can probably build it yourself. This shift doesn't mean that traditional coding skills are obsolete, though. In fact, they become even more valuable. Experienced developers can now focus on building reusable components, creating templates and frameworks for citizen developers, and ensuring that their LCNC solutions integrate seamlessly with existing systems. This shift is crucial as organizations can increasingly adopt a "two-speed IT" approach, balancing the need for rapid, iterative development with the maintenance and enhancement of complex core systems. Types of Tasks Suitable for LCNC vs. Traditional Development To understand how various tasks of traditional development would differ from using a codeless solution, consider the following table of typical tasks in a developer workflow: Table 1. Developer workflow tasks: LCNC vs. 
traditional development Task Category LCNC Traditional (Full-Code) Recommended Tool Developer Involvement Simple form building Ideal; drag-and-drop interfaces, pre-built components Possible but requires more manual coding and configuration LCNC Minimal; drag-and-drop, minimal configuration Data visualization Excellent with built-in charts/graphs, customizable with some code More customization options, requires coding libraries or frameworks LCNC or hybrid (if customization is needed) Minimal to moderate, depending on complexity Basic workflow automation Ideal; visual workflow builders, easy integrations Requires custom coding and integration logic LCNC Minimal to moderate; integration may require some scripting Front-end app development Suitable for basic UI, but complex interactions require coding Full control over UI/UX but more time consuming Hybrid Moderate; requires front-end development skills Complex integrations Limited to pre-built connectors, custom code often needed Flexible and powerful but requires expertise Full-code or hybrid High; deep understanding of APIs and data formats Custom business logic Not ideal; may require workarounds or limited custom code Full flexibility to implement any logic Full-code High; strong programming skills and domain knowledge Performance optimization Limited options, usually handled by the platform Full control over code optimization but requires deep expertise Full-code High; expertise in profiling and code optimization API development Possible with some platforms but limited in complexity Full flexibility but requires API design and coding skills Full-code or hybrid High; API design and implementation skills Security-critical apps Depends on platform's security features, may not be sufficient Full control over security implementation but requires expertise Full-code High; expertise in security best practices and secure coding Getting the Most Out of an LCNC Platform Whether you are building your own codeless platform or adopting a ready-to-use solution, the benefits can be immense. But before you begin, remember that the core of any LCNC platform is the ability to transform a user's visual design into functional code. This is where the real magic happens, and it's also where the biggest challenges lie. For an LCNC platform to help you achieve success, you need to start with a deep understanding of your target users. What are their technical skills? What kind of applications do they want to use? The answers to these questions will inform every aspect of your platform's design, from the user interface/user experience (UI/UX) to the underlying architecture. The UI/UX is crucial for the success of any LCNC platform, but it is just the tip of the iceberg. Under the hood, you'll need a powerful engine that can translate visual elements into clean, efficient code. This typically involves complex AI algorithms, data structures, and a deep understanding of various programming languages. You'll also need to consider how your platform will handle business logic, integrations with other systems, and deployment to different environments. Figure 1. A typical LCNC architecture flow Many organizations already have a complex IT landscape, and introducing a new platform can create compatibility issues. Choosing an LCNC platform that offers robust integration options, whether through APIs, webhooks, or pre-built connectors, is crucial. 
You'll also need to decide whether to adopt a completely codeless (no-code) solution or a low-code solution that allows for some custom coding. Additional factors to consider are how you'll handle version control, testing, and debugging. Best Practices to Empower Citizen Developers With LCNC LCNC platforms empower developers with powerful features, but it's the knowledge of how to use those tools effectively that truly unleashes their potential. The following best practices offer guidance on how to make the most of LCNC's capabilities while aligning with broader organizational goals. Leverage Pre-Built Components and Templates Most LCNC platforms offer pre-built components and templates as ready-made elements — from form fields and buttons to entire page layouts. These building blocks can help you bypass tedious manual coding and focus on the unique aspects of your application. While convenient, pre-built components may not always fit your exact requirements. Assess if customization is necessary and feasible within the platform. Begin with a pre-built application template that aligns with your overall goal. This can save significant time and provide a solid foundation. Explore the available components before diving into development. If a pre-built component doesn't quite fit, explore customization options within the platform before resorting to complex workarounds. Prioritize the User Experience Remember, even the most powerful application is useless if it's too confusing or frustrating to use. LCNC platforms are typically designed for rapid application development. Prioritizing core features first aligns with this philosophy, allowing for faster delivery of a functional product that can then be iterated upon based on user feedback. Before you start building, take the time to understand your end users' needs and pain points. Sketch out potential workflows, gather feedback from colleagues, and test your prototype with potential users. To avoid clutter and unnecessary features, the rule of thumb should be to focus on first developing the core functionalities that users need. Use clear labels, menus, and search functionality. A visually pleasing interface can significantly enhance user engagement and satisfaction. Align With Governance and Standards Your organization likely has established guidelines for data usage, security protocols, and integration requirements. Adhering to these standards not only ensures the safety and integrity of your application but also paves the way for smoother integration with existing systems and a more cohesive IT landscape. Be aware of any industry-specific regulations or data privacy laws that may apply to your application. Adhere to established security protocols, data-handling guidelines, and coding conventions to minimize risk and ensure a smooth deployment process. Formulate an AI-based runbook that mandates getting IT approval for your application before going live, especially if it involves sensitive data or integrations with critical systems. Conclusion Instead of viewing low code and traditional coding as an either/or proposition, developers should embrace them as complementary tools. Low-code platforms excel at rapid prototyping, building core application structures, and handling common functionalities; meanwhile, traditional coding outperforms in areas like complex algorithms, bespoke integrations, and granular control. A hybrid approach offers the best of both paradigms. 
It is also important to note that this is not the end of the developer's role but rather a new chapter. LCNC and AI are here to stay, and the smart developer recognizes that resisting this change is futile. Instead, embracing these tools opens up new avenues for career growth and impact. Embracing change, upskilling, and adapting to the evolving landscape can help developers thrive in an AI-based LCNC era, unlocking new levels of productivity, creativity, and impact. This is an excerpt from DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.Read the Free Report
The race to implement AI technologies has created a significant gap between intention and implementation, particularly in governance. According to recent data from the IAPP and Credo AI's 2025 report, while 77% of organizations are working on AI governance, only a fraction have mature frameworks in place. This disconnect between aspirational goals and practical governance has real consequences, as we've witnessed throughout 2024-2025 with high-profile failures and data breaches. I've spent the last decade working with organizations implementing AI solutions, and the pattern is distressingly familiar: enthusiasm for AI capabilities outpaces the willingness to establish robust guardrails. This article examines why good intentions are insufficient, how AI governance failures manifest in today's landscape, and offers a practical roadmap for governance frameworks that protect stakeholders while enabling innovation. Whether you're a CTO, AI engineer, or compliance officer, these insights will help bridge the critical gap between AI aspirations and responsible implementation. The Growing Gap Between AI Governance Intention and Implementation "We're taking AI governance seriously" — a claim I hear constantly from tech leaders. Yet the evidence suggests a troubling reality. A 2025 report from Zogby Analytics found that while 96% of organizations are already using AI for business operations, only 5% have implemented any AI governance framework. This staggering disconnect isn't just a statistical curiosity; it represents real organizational risk. Why does this gap persist? Fear of slowing innovation: Teams worry that governance will stifle creativity or delay launches. In reality, well-designed guardrails accelerate safe deployment and reduce costly rework.Unclear ownership: Governance often falls between IT, legal, and data science, resulting in inertia.Lack of practical models: Many organizations have high-level principles but struggle to translate them into day-to-day processes, especially across diverse AI systems. AI Governance Maturity Model The Cost of Governance Failure: Real-World Consequences The consequences of inadequate AI governance are no longer theoretical. Throughout 2024 to 2025, we've witnessed several high-profile failures that demonstrate how good intentions without robust governance frameworks can lead to significant harm. Paramount’s Privacy Lawsuit (2025) In early 2025, Paramount faced a $5 million class action lawsuit for allegedly sharing users’ viewing data with third parties without their consent. The root cause? Invisible data flows are not caught by any governance review, despite the company’s stated commitment to privacy. Change Healthcare Data Breach (2024) A breach at Change Healthcare exposed millions of patient records and halted payment systems nationwide. Investigations revealed a lack of oversight over third-party integrations and insufficient data access controls, failures that robust governance could have prevented. Biased Credit Scoring Algorithms (2024) A major credit scoring provider was found to have algorithms that systematically disadvantaged certain demographic groups. The company had invested heavily in AI but neglected to implement controls for fairness or bias mitigation. What these cases reveal is not a failure of technology, but a failure of governance. In each instance, organizations prioritized technological implementation over establishing robust governance frameworks. 
While technology moved quickly, governance lagged behind, creating vulnerabilities that eventually manifested as legal, financial, and ethical problems. AI Risk Assessment Heat Map Beyond Compliance: Why Regulatory Frameworks Aren't Enough The regulatory landscape for AI has evolved significantly in 2024 and 2025, with divergent approaches emerging globally. The EU AI Act officially became law in August 2024, with implementation staggered from early 2025 onwards. Its risk-based approach categorizes AI systems based on their potential harm, with high-risk applications facing stringent requirements for transparency, human oversight, and documentation. Meanwhile, in the United States, the regulatory landscape shifted dramatically with the change in administration. In January 2025, President Trump signed Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," which eliminated key federal AI oversight policies from the previous administration. This deregulatory approach emphasizes industry-led innovation over government oversight. These contrasting approaches highlight a critical question: Is regulatory compliance sufficient for effective AI governance? My work with organizations across both jurisdictions suggests the answer is a resounding no. Compliance-only approaches suffer from several limitations: They establish minimum standards rather than optimal practicesThey often lag behind technological developmentsThey may not address organization-specific risks and use casesThey focus on avoiding penalties rather than creating value A more robust approach combines regulatory compliance with principles-based governance frameworks that can adapt to evolving technologies and use cases. Organizations that have embraced this dual approach demonstrate significant advantages in risk management, innovation speed, and stakeholder trust. Consider the case of a multinational financial institution with which I worked in early 2025. Despite operating in 17 jurisdictions with different AI regulations, they developed a unified governance framework based on core principles such as fairness, transparency, and accountability. This principles-based approach allowed them to maintain consistent standards across regions while adapting specific controls to local regulatory requirements. The result was more efficient compliance management and greater confidence in deploying AI solutions globally. Effective AI governance goes beyond ticking regulatory boxes; it establishes a foundation for responsible innovation that builds trust with customers, employees, and society. Building an Effective AI Governance Structure Establishing a robust AI governance structure requires more than creating another committee. It demands thoughtful design that balances oversight with operational effectiveness. In January 2025, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) released ISO/IEC 42001, the first international standard specifically focused on AI management systems. This landmark standard provides a comprehensive framework for organizations to design, implement, and maintain effective AI governance. Based on this standard and my work with organizations implementing governance structures, here are the key components of effective AI governance: Executive Sponsorship and Leadership Governance starts at the top. 
According to McKinsey's "The State of AI 2025" report, companies with CEO-led AI governance are significantly more likely to report positive financial returns from AI investments. Executive sponsorship sends a clear message that governance is a strategic priority, not a compliance afterthought. This leadership manifests in concrete ways:

Allocating resources for governance activities
Regularly reviewing key risk metrics and governance performance
Modeling responsible decision making around AI deployment

Cross-Functional Representation

Effective AI governance requires diverse perspectives. A model governance committee structure includes:

Legal and compliance experts to address regulatory requirements
Ethics specialists to evaluate value alignment and societal impact
Security professionals to assess and mitigate technical risks
Business leaders to ensure governance aligns with strategic objectives
Technical experts who understand model capabilities and limitations

This cross-functional approach ensures governance decisions incorporate multiple viewpoints and expertise, leading to more robust outcomes.

Maturity Models and Assessment Frameworks

Rather than treating governance as a binary state (present or absent), leading organizations use maturity models to guide progressive development. A typical AI governance maturity model includes five stages:

Initial/Ad-hoc: Reactive approach with minimal formal processes
Developing: Basic governance processes established but inconsistently applied
Defined: Standardized processes with clear roles and responsibilities
Managed: Quantitative measurement of governance effectiveness
Optimized: Continuous improvement based on performance metrics

By assessing current maturity and mapping a path to higher levels, organizations can implement governance in manageable phases rather than attempting a comprehensive overhaul all at once.

Tailored to Organizational Context

While frameworks and standards provide valuable structure, effective governance must be tailored to your organization's specific context, including:

Industry-specific risks and requirements
Organizational culture and decision-making processes
AI maturity and use case portfolio
Resource constraints and competing priorities

A mid-sized healthcare provider I advised developed a streamlined governance process, specifically focused on patient data protection and clinical decision support, for their two highest-risk AI applications. This targeted approach allowed them to implement robust governance within resource constraints while addressing their most critical concerns. Building effective governance isn't about creating bureaucracy; it's about establishing the right structures to enable responsible innovation. When designed thoughtfully, governance accelerates AI deployment by increasing confidence in outcomes and reducing the need for rework.

Ethical Frameworks and Control Mechanisms

Moving from abstract principles to practical implementation is where many AI governance efforts falter. The key is translating ethical frameworks into concrete control mechanisms that guide day-to-day decisions and operations.

Operationalizing AI Ethics

Leading organizations operationalize ethical principles through structured processes that impact the entire AI lifecycle. Key approaches include:

Ethical impact assessments: These structured evaluations, similar to privacy impact assessments, help identify and address ethical concerns before deployment.
They typically examine potential impacts on various stakeholders, with particular attention to vulnerable groups and edge cases.
Value-sensitive design: This approach incorporates ethical considerations into the technology design process itself, rather than treating ethics as a separate compliance check. By considering values like fairness, accountability, and transparency from the outset, teams create more robust systems with fewer ethical blind spots.
Ethics review boards: For high-risk AI applications, dedicated review boards provide expert evaluation of ethical implications. These boards often include external experts to incorporate diverse perspectives and challenge organizational assumptions.

Human-in-the-Loop Requirements

Human oversight remains critical for responsible AI deployment. Effective governance frameworks specify when and how humans should be involved in AI systems, particularly for consequential decisions. A practical human-in-the-loop framework considers:

Decision impact: Higher-impact decisions require greater human involvement
Model confidence: Lower confidence predictions trigger human review
Edge cases: Unusual scenarios outside normal patterns receive human attention
Feedback mechanisms: Clear protocols for humans to correct or override AI decisions

One financial services organization I worked with implemented a tiered approach to credit decisions. Their AI system autonomously approved applications with high confidence scores and clear approval indicators. Applications with moderate confidence or mixed indicators were routed to human reviewers with AI recommendations. Finally, unusual or high-risk applications received full human review with AI providing supporting analysis only. This approach balanced efficiency with appropriate human oversight.

Continuous Monitoring and Feedback

Static governance quickly becomes outdated as AI systems and their operating environment evolve. Effective governance includes mechanisms for ongoing monitoring and improvement:

Performance dashboards that track key metrics like accuracy, fairness, and user feedback
Automated alerts for unusual patterns or potential drift
Regular reviews of model behavior and decision outcomes
Clear channels for stakeholder concerns or complaints

These mechanisms ensure that governance remains responsive to changing circumstances and emerging risks.

Accountability Structures

Clear accountability is essential for effective governance. This includes:

Defined roles and responsibilities for AI development, deployment, and monitoring
Documentation requirements that create an audit trail for decisions
Incident response protocols for addressing issues when they arise
Consequences for bypassing governance requirements

Without accountability, even well-designed governance frameworks can devolve into performative compliance rather than substantive risk management. The organizations that excel at ethical AI implementation don't treat ethics as a separate concern from technical development. Instead, they integrate ethical considerations throughout the AI lifecycle, supported by concrete processes, tools, and accountability mechanisms.

Practical Steps for Implementation: From Theory to Practice

Transitioning from governance theory to effective implementation requires a pragmatic approach that acknowledges organizational realities.
Here are practical steps for implementing AI governance based on successful patterns I've observed:

Start Small and Focused

Rather than attempting to implement comprehensive governance across all AI initiatives simultaneously, begin with a focused pilot program. Select a specific AI use case with moderate risk and strategic importance, high enough stakes to matter, but not so critical that failure would be catastrophic. This approach allows you to:

Test governance processes in a controlled environment
Demonstrate value to skeptical stakeholders
Refine approaches before broader deployment
Build internal expertise and champions

For example, a retail organization I advised began with governance for their product recommendation AI, an important but not mission-critical system. This allowed them to address governance challenges before tackling more sensitive applications, such as fraud detection or employee performance evaluation.

Build Cross-Functional Teams with Clear Roles

Effective governance requires collaboration across disciplines, but without clear roles and responsibilities, cross-functional teams can become inefficient talking shops rather than decision-making bodies. Define specific roles such as:

Governance chair: Oversees the governance process and facilitates decision-making
Risk owner: Accountable for identifying and assessing potential harms
Compliance liaison: Ensures alignment with regulatory requirements
Technical reviewer: Evaluates technical implementation and controls
Business value advocate: Represents business objectives and user needs

Clarify which decisions require consensus versus which can be made by individual role-holders. This balance prevents both analysis paralysis and unilateral decisions on important matters.

Leverage Visual Frameworks and Tools

Visual tools can dramatically improve governance implementation by making abstract concepts concrete and accessible. Key visual frameworks include:

AI risk assessment heat maps: These visualizations plot potential AI risks based on likelihood and impact, with color-coding to indicate severity. They help prioritize governance attention on the most significant concerns.
Governance maturity dashboards: Visual representations of governance maturity across different dimensions help organizations track progress and identify improvement areas.
Advanced cloud tools: Platforms like Amazon Bedrock Guardrails, SageMaker Clarify, and FmEval support bias detection, safety checks, and explainability. Automated CI/CD pipelines and monitoring (e.g., CloudWatch) ensure governance is embedded in deployment.

These visual tools not only improve understanding but also facilitate communication across technical and non-technical stakeholders, a critical success factor for governance implementation.

Embrace Progressive Maturity

Implement governance in stages, progressively increasing sophistication as your organization builds capability and comfort. A staged approach might look like:

Foundation: Establish a basic inventory of AI systems and a risk assessment framework
Standardization: Develop consistent governance processes and documentation
Integration: Embed governance into development workflows and decision processes
Measurement: Implement metrics to track governance effectiveness
Optimization: Continuously improve based on performance data and feedback

This progressive approach prevents the perfect from becoming the enemy of the good.
Rather than postponing governance until a comprehensive system can be implemented (which rarely happens), you can begin realizing benefits immediately while building toward more sophisticated approaches.

Practical Example: Financial Services Governance Implementation

A mid-sized financial institution implemented AI governance using this progressive approach in early 2025. They began with a focused pilot for their customer churn prediction model, which was important enough to justify governance attention but not directly involved in lending decisions. Their implementation sequence:

Created a simple governance committee with representatives from data science, compliance, customer experience, and information security
Developed a basic risk assessment template specifically for customer-facing AI systems
Established monthly reviews of model performance with attention to fairness metrics
Implemented a customer feedback mechanism to identify potential issues
Gradually expanded governance to additional AI use cases using lessons from the pilot

Within six months, they had established governance processes covering 80% of their AI portfolio, with clear risk reduction and improved stakeholder confidence. By starting small and focusing on practical implementation rather than perfect design, they achieved meaningful progress where previous governance initiatives had stalled in the planning phase. The key lesson: Perfect governance implemented someday is far less valuable than good governance implemented today. Start where you are, use what you have, and build capability progressively.

Conclusion

The gap between AI governance intentions and real-world outcomes is more than a compliance issue; it's a business imperative. As recent failures show, the cost of insufficient governance can be measured in lawsuits, lost trust, and operational chaos. But the solution isn't to slow down innovation; it's to build governance frameworks that enable responsible, scalable deployment. Start small, build cross-functional teams, use visual and automated tools, and progress iteratively. The organizations that master both the "why" and the "how" of AI governance will not only avoid harm; they'll also lead the next wave of sustainable AI innovation. How is your organization bridging the gap between AI hype and responsible governance? Share your experiences or questions in the comments below.
Graph databases are increasingly popular in modern applications because they can model complex relationships natively. From recommendation systems to fraud detection, graphs provide a more natural representation of connected data. Our previous articles explored graph databases broadly and delved into Neo4j. In this third part, we focus on JanusGraph, a scalable and distributed graph database. Unlike Neo4j, JanusGraph supports multiple backends and leverages Apache TinkerPop, a graph computing framework that introduces a standard API and query language (Gremlin) for various databases. This abstraction makes JanusGraph a flexible choice for enterprise applications.

Understanding JanusGraph and TinkerPop

JanusGraph is an open-source, distributed graph database that handles huge volumes of transactional and analytical data. It supports different storage and indexing backends, including Cassandra, HBase, BerkeleyDB, and Elasticsearch. It implements the TinkerPop framework, which provides two main components:

Gremlin: A graph traversal language (both declarative and imperative).
TinkerPop API: A set of interfaces for working with graph databases across different engines.

This allows developers to write database-agnostic code that runs on any TinkerPop-compatible engine. Gremlin is a functional, step-based language for querying graph structures. It focuses on traversals: the act of walking through a graph. Gremlin supports OLTP (real-time) and OLAP (analytics) use cases across more than 30 graph database vendors.

Feature | SQL | Gremlin
Entity Retrieval | SELECT * FROM Book | g.V().hasLabel('Book')
Filtering | WHERE name = 'Java' | has('name','Java')
Join/Relationship | JOIN Book_Category ON ... | g.V().hasLabel('Book').out('is').hasLabel('Category')
Grouping & Count | GROUP BY category_id | group().by('category').by(count())
Schema Flexibility | Fixed schema | Dynamic properties, schema-optional

JanusGraph supports both embedded and external configurations. To get started quickly using Cassandra and Elasticsearch, run this docker-compose file:

YAML
version: '3.8'
services:
  cassandra:
    image: cassandra:3.11
    ports:
      - "9042:9042"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
  janusgraph:
    image: janusgraph/janusgraph:latest
    depends_on:
      - cassandra
      - elasticsearch
    ports:
      - "8182:8182"
    environment:
      - gremlin.graph=org.janusgraph.core.ConfiguredGraphFactory
      - storage.backend=cql
      - storage.hostname=cassandra
      - index.search.backend=elasticsearch
      - index.search.hostname=elasticsearch

Alternatively, you can avoid external dependencies entirely for local development or embedded environments. JanusGraph supports embedded mode using BerkeleyDB Java Edition (berkeleyje) for local graph storage, and Lucene as the indexing engine. This is especially useful for quick prototyping or running unit tests without setting up infrastructure. BerkeleyJE is a fast, embeddable key-value store written in Java, which stores your graph data directly on the local filesystem.
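If you prefer to work with the native JanusGraph API directly rather than through Eclipse JNoSQL, a minimal sketch of opening such an embedded graph might look like the following; the properties file path and the Book label are illustrative assumptions, and the file's contents would mirror the embedded configuration shown next:

Java
// Minimal sketch: open an embedded, BerkeleyJE-backed JanusGraph and run a single Gremlin traversal.
// The properties file path is an assumption; point it at an embedded configuration like the one below.
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public final class EmbeddedGraphSketch {

    public static void main(String[] args) throws Exception {
        // Opens (or creates) the graph described by the properties file.
        try (JanusGraph graph = JanusGraphFactory.open("src/main/resources/janusgraph-embedded.properties")) {
            GraphTraversalSource g = graph.traversal();
            // Adds one vertex and reads it back with a Gremlin traversal.
            g.addV("Book").property("name", "Effective Java").iterate();
            long books = g.V().hasLabel("Book").count().next();
            System.out.println("Books stored: " + books);
            graph.tx().commit();
        }
    }
}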
Here’s an example configuration for embedded mode:

Properties files
storage.backend=berkeleyje
storage.directory=../target/jnosql/berkeleyje
index.search.backend=lucene
index.search.directory=../target/jnosql/lucene

If you want to run against Cassandra instead, update the properties as follows:

Properties files
jnosql.graph.database=janusgraph
storage.backend=cql
storage.hostname=localhost
index.search.backend=elasticsearch
index.search.hostname=localhost

Before modeling the domain, we need to define the structure of the entities that will represent our graph. In this case, we use two vertex types: Book and Category. Each entity is annotated using Jakarta NoSQL annotations and contains a unique ID and a name. These entities will form the foundation of our graph, allowing us to define relationships between books and their associated categories.

Java
@Entity
public class Book {

    @Id
    private Long id;

    @Column
    private String name;
}

@Entity
public class Category {

    @Id
    private Long id;

    @Column
    private String name;
}

Once the entities are defined, the next step is to persist and retrieve them from the database. For this purpose, Eclipse JNoSQL provides the TinkerpopTemplate, which is a specialization of the generic Template interface specifically designed for graph operations using Apache TinkerPop. The service layer encapsulates the logic of querying the database for existing books or categories and inserting new ones if they don't exist. This pattern helps maintain idempotency when saving data, ensuring duplicates aren't created.

Java
@ApplicationScoped
public class BookService {

    @Inject
    private TinkerpopTemplate template;

    public Book save(Book book) {
        return template.select(Book.class).where("name").eq(book.getName()).<Book>singleResult()
                .orElseGet(() -> template.insert(book));
    }

    public Category save(Category category) {
        return template.select(Category.class).where("name").eq(category.getName()).<Category>singleResult()
                .orElseGet(() -> template.insert(category));
    }
}

The BookApp class shows full execution: inserting entities, creating relationships (edges), and executing Gremlin queries:

Java
var architectureBooks = template.gremlin("g.V().hasLabel('Category').has('name','Architecture').in('is')").toList();
var highRelevanceBooks = template.gremlin("g.E().hasLabel('is').has('relevance', gte(9)).outV().hasLabel('Book').dedup()").toList();

You can also chain traversals with .traversalVertex() for more fluent pipelines:

Java
List<String> softwareBooks = template.traversalVertex().hasLabel("Category")
        .has("name", "Software")
        .in("is").hasLabel("Book").<Book>result()
        .map(Book::getName).toList();

The BookApp class introduces the TinkerpopTemplate capability, acting as the bridge between Java and the JanusGraph database:

Java
public final class BookApp {

    private BookApp() {
    }

    public static void main(String[] args) {
        try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
            var template = container.select(TinkerpopTemplate.class).get();
            var service = container.select(BookService.class).get();

            var software = service.save(Category.of("Software"));
            var java = service.save(Category.of("Java"));
            var architecture = service.save(Category.of("Architecture"));
            var performance = service.save(Category.of("Performance"));

            var effectiveJava = service.save(Book.of("Effective Java"));
            var cleanArchitecture = service.save(Book.of("Clean Architecture"));
            var systemDesign = service.save(Book.of("System Design Interview"));
            var javaPerformance = service.save(Book.of("Java Performance"));
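            // The edge(...) calls below connect each Book vertex to its Category vertices through an "is" edge.
            // Every edge also carries a numeric "relevance" property, which the Gremlin queries further down
            // filter on (for example, g.E().hasLabel('is').has('relevance', gte(9))).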
            template.edge(Edge.source(effectiveJava).label("is").target(java).property("relevance", 10).build());
            template.edge(Edge.source(effectiveJava).label("is").target(software).property("relevance", 9).build());
            template.edge(Edge.source(cleanArchitecture).label("is").target(software).property("relevance", 8).build());
            template.edge(Edge.source(cleanArchitecture).label("is").target(architecture).property("relevance", 10).build());
            template.edge(Edge.source(systemDesign).label("is").target(architecture).property("relevance", 9).build());
            template.edge(Edge.source(systemDesign).label("is").target(software).property("relevance", 7).build());
            template.edge(Edge.source(javaPerformance).label("is").target(performance).property("relevance", 8).build());
            template.edge(Edge.source(javaPerformance).label("is").target(java).property("relevance", 9).build());

            List<String> softwareCategories = template.traversalVertex().hasLabel("Category")
                    .has("name", "Software")
                    .in("is").hasLabel("Category").<Category>result()
                    .map(Category::getName)
                    .toList();

            List<String> softwareBooks = template.traversalVertex().hasLabel("Category")
                    .has("name", "Software")
                    .in("is").hasLabel("Book").<Book>result()
                    .map(Book::getName)
                    .toList();

            List<String> softwareNoSQLBooks = template.traversalVertex().hasLabel("Category")
                    .has("name", "Software")
                    .in("is")
                    .has("name", "NoSQL")
                    .in("is").<Book>result()
                    .map(Book::getName)
                    .toList();

            System.out.println("The software categories: " + softwareCategories);
            System.out.println("The software books: " + softwareBooks);
            System.out.println("The software and NoSQL books: " + softwareNoSQLBooks);

            System.out.println("\nBooks in 'Architecture' category:");
            var architectureBooks = template.gremlin("g.V().hasLabel('Category').has('name','Architecture').in('is')").toList();
            architectureBooks.forEach(doc -> System.out.println(" - " + doc));

            System.out.println("Categories with more than one book:");
            var commonCategories = template.gremlin("g.V().hasLabel('Category').where(__.in('is').count().is(gt(1)))").toList();
            commonCategories.forEach(doc -> System.out.println(" - " + doc));

            var highRelevanceBooks = template.gremlin(
                    "g.E().hasLabel('is').has('relevance', gte(9))" +
                    ".outV().hasLabel('Book').dedup()").toList();
            System.out.println("Books with high relevance:");
            highRelevanceBooks.forEach(doc -> System.out.println(" - " + doc));

            System.out.println("\nBooks with name: 'Effective Java':");
            var effectiveJavaBooks = template.gremlin("g.V().hasLabel('Book').has('name', @name)",
                    Collections.singletonMap("name", "Effective Java")).toList();
            effectiveJavaBooks.forEach(doc -> System.out.println(" - " + doc));
        }
    }
}

To complement the use of TinkerpopTemplate, Eclipse JNoSQL supports the Jakarta Data specification by enabling repository-based data access. This approach allows developers to define interfaces, such as BookRepository and CategoryRepository, that automatically provide CRUD operations and support custom graph traversals through the @Gremlin annotation. By combining standard method name queries (e.g., findByName) with expressive Gremlin scripts, we gain both convenience and fine-grained control over graph traversal logic. These repositories are ideal for clean, testable, and declarative access patterns in graph-based applications.
Java
@Repository
public interface BookRepository extends TinkerPopRepository<Book, Long> {

    Optional<Book> findByName(String name);

    @Gremlin("g.V().hasLabel('Book').out('is').hasLabel('Category').has('name','Architecture').in('is').dedup()")
    List<Book> findArchitectureBooks();

    @Gremlin("g.E().hasLabel('is').has('relevance', gte(9)).outV().hasLabel('Book').dedup()")
    List<Book> highRelevanceBooks();
}

@Repository
public interface CategoryRepository extends TinkerPopRepository<Category, Long> {

    Optional<Category> findByName(String name);

    @Gremlin("g.V().hasLabel('Category').where(__.in('is').count().is(gt(1)))")
    List<Category> commonCategories();
}

After defining the repositories, we can build a full application that leverages them to manage data and run queries. The BookApp2 class illustrates this repository-driven execution flow. It uses the repositories to create or fetch vertices (Book and Category) and falls back to GraphTemplate only when inserting edges, since Jakarta Data currently supports querying vertices but not edge creation. This hybrid model provides a clean separation of concerns and reduces boilerplate, making it easier to read, test, and maintain.

Java
public final class BookApp2 {

    private BookApp2() {
    }

    public static void main(String[] args) {
        try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
            var template = container.select(GraphTemplate.class).get();
            var bookRepository = container.select(BookRepository.class).get();
            var repository = container.select(CategoryRepository.class).get();

            var software = repository.findByName("Software").orElseGet(() -> repository.save(Category.of("Software")));
            var java = repository.findByName("Java").orElseGet(() -> repository.save(Category.of("Java")));
            var architecture = repository.findByName("Architecture").orElseGet(() -> repository.save(Category.of("Architecture")));
            var performance = repository.findByName("Performance").orElseGet(() -> repository.save(Category.of("Performance")));

            var effectiveJava = bookRepository.findByName("Effective Java").orElseGet(() -> bookRepository.save(Book.of("Effective Java")));
            var cleanArchitecture = bookRepository.findByName("Clean Architecture").orElseGet(() -> bookRepository.save(Book.of("Clean Architecture")));
            var systemDesign = bookRepository.findByName("System Design Interview").orElseGet(() -> bookRepository.save(Book.of("System Design Interview")));
            var javaPerformance = bookRepository.findByName("Java Performance").orElseGet(() -> bookRepository.save(Book.of("Java Performance")));

            template.edge(Edge.source(effectiveJava).label("is").target(java).property("relevance", 10).build());
            template.edge(Edge.source(effectiveJava).label("is").target(software).property("relevance", 9).build());
            template.edge(Edge.source(cleanArchitecture).label("is").target(software).property("relevance", 8).build());
            template.edge(Edge.source(cleanArchitecture).label("is").target(architecture).property("relevance", 10).build());
            template.edge(Edge.source(systemDesign).label("is").target(architecture).property("relevance", 9).build());
            template.edge(Edge.source(systemDesign).label("is").target(software).property("relevance", 7).build());
            template.edge(Edge.source(javaPerformance).label("is").target(performance).property("relevance", 8).build());
            template.edge(Edge.source(javaPerformance).label("is").target(java).property("relevance", 9).build());

            System.out.println("Books in 'Architecture' category:");
            var architectureBooks = bookRepository.findArchitectureBooks();
            architectureBooks.forEach(doc -> System.out.println(" - " + doc));

            System.out.println("Categories with more than one book:");
            var commonCategories = repository.commonCategories();
            commonCategories.forEach(doc -> System.out.println(" - " + doc));

            var highRelevanceBooks = bookRepository.highRelevanceBooks();
            System.out.println("Books with high relevance:");
            highRelevanceBooks.forEach(doc -> System.out.println(" - " + doc));

            var bookByName = bookRepository.findByName("Effective Java");
            System.out.println("Book by name: " + bookByName);
        }
    }
}

JanusGraph, backed by Apache TinkerPop and Gremlin, offers a highly scalable and portable way to model and traverse complex graphs. With Eclipse JNoSQL and Jakarta Data, Java developers can harness powerful graph capabilities while enjoying a clean and modular API. JanusGraph adapts to your architecture, whether embedded or distributed, while keeping your queries expressive and concise.

References
JanusGraph Project
Apache TinkerPop
Gremlin Tutorial
Eclipse JNoSQL
Jakarta Data
Sample Repository