A microservice system can have a large number of components with complex interactions. It is important to reduce this complexity, at least from the standpoint of the clients interacting with the system. A gateway hides the microservices from the external world: it represents a single entry point and implements common cross-cutting requirements. In this article, you will learn how to configure a gateway component for a Spring Boot application using the Spring Cloud Gateway package.

Spring Cloud Gateway

Spring Cloud provides a gateway implementation through the Spring Cloud Gateway project. It is based on Spring Boot, Spring WebFlux, and Reactor. Since it is based on Spring WebFlux, it must run on a Netty environment, not a usual servlet container.

The main function of a gateway is to route requests from external clients to the microservices. Its main components are:

- Route: The basic entity. It is configured with an ID, a destination URI, one or more predicates, and filters.
- Predicate: Based on a Java function predicate, it represents a condition that must match the headers or parameters of the request.
- Filter: Allows you to modify the request or the response.

We can identify the following sequence of events:

1. A client makes a call through the gateway.
2. The gateway decides whether the request matches a configured route.
3. If there is a match, the request is sent to a gateway web handler.
4. The web handler sends the request through a chain of filters that can execute logic against the request or the response and apply changes to them.
5. The target service is executed.

Spring Cloud Gateway Dependencies

To implement our Spring Boot application as a gateway, we must first provide the spring-cloud-starter-gateway dependency, after having defined the release train as in the configuration fragments below:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>2023.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

```xml
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
  </dependency>
  ...
</dependencies>
```

Spring Cloud Gateway Configuration

We can configure our gateway component in the application.yaml file, using either fully expanded arguments or shortcuts to define predicates and filters. In the fully expanded form, we define a name and an args field. The args field can contain one or more key-value pairs:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: route-example
          uri: https://example.com
          predicates:
            - name: Cookie
              args:
                name: predicateName
                regexp: predicateRegexpValue
```

In the example above, we define a route with an ID of "route-example", a destination URI of "https://example.com", and a Cookie predicate with two args, "name" and "regexp". With the shortcut mode, we write the predicate name followed by the "=" character and then a list of values separated by commas. We can rewrite the previous example as follows:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: route-example
          uri: https://example.com
          predicates:
            - Cookie=predicateName,predicateRegexpValue
```

A specific factory class implements each predicate and filter type. There are several built-in predicate and filter factories available; the Cookie predicate shown above is one example. We will list some of them in the following sub-sections.
Predicate Built-In Factories

- The After predicate factory matches requests that happen after a specific time: After=2007-12-03T10:15:30+01:00 Europe/Paris
- The Before predicate factory matches requests that happen before a specific time: Before=2007-12-03T10:15:30+01:00 Europe/Paris
- The Method predicate factory specifies the HTTP method types to match: Method=GET,POST

Filter Built-In Factories

- The AddRequestHeader filter factory adds an HTTP header to the request by its name and value: AddRequestHeader=X-Request-Foo, Bar
- The AddRequestParameter filter factory adds a parameter to the request by its name and value: AddRequestParameter=foo, bar
- The AddResponseHeader filter factory adds an HTTP header to the response by its name and value: AddResponseHeader=X-Response-Foo, Bar

To implement a custom predicate or filter factory, we have to provide an implementation of a specific factory interface. The following sections show how.

Custom Predicate Factories

To create a custom predicate factory, we can extend AbstractRoutePredicateFactory, an abstract implementation of the RoutePredicateFactory interface. In the example below, we define an inner static class Configuration to pass its properties to the apply method and compare them against the request.

```java
@Component
public class CustomPredicateFactory extends AbstractRoutePredicateFactory<CustomPredicateFactory.Configuration> {

    public CustomPredicateFactory() {
        super(Configuration.class);
    }

    @Override
    public Predicate<ServerWebExchange> apply(Configuration config) {
        return exchange -> {
            ServerHttpRequest request = exchange.getRequest();
            // compare the request with the config properties
            return matches(config, request);
        };
    }

    private boolean matches(Configuration config, ServerHttpRequest request) {
        // implement the matching logic
        return false;
    }

    public static class Configuration {
        // implement custom configuration properties
    }
}
```

Custom Filter Factories

To create a custom filter factory, we can extend AbstractGatewayFilterFactory, an abstract implementation of the GatewayFilterFactory interface. In the examples below, you can see a filter factory that modifies the request and another that changes the response, using the properties passed in a Configuration object.

```java
@Component
public class PreCustomFilterFactory extends AbstractGatewayFilterFactory<PreCustomFilterFactory.Configuration> {

    public PreCustomFilterFactory() {
        super(Configuration.class);
    }

    @Override
    public GatewayFilter apply(Configuration config) {
        return (exchange, chain) -> {
            ServerHttpRequest.Builder builder = exchange.getRequest().mutate();
            // use the builder to modify the request
            return chain.filter(exchange.mutate().request(builder.build()).build());
        };
    }

    public static class Configuration {
        // implement the configuration properties
    }
}

@Component
public class PostCustomFilterFactory extends AbstractGatewayFilterFactory<PostCustomFilterFactory.Configuration> {

    public PostCustomFilterFactory() {
        super(Configuration.class);
    }

    @Override
    public GatewayFilter apply(Configuration config) {
        return (exchange, chain) -> chain.filter(exchange).then(Mono.fromRunnable(() -> {
            ServerHttpResponse response = exchange.getResponse();
            // change the response
        }));
    }

    public static class Configuration {
        // implement the configuration properties
    }
}
```
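Once declared as beans, custom factories can be referenced from application.yaml like the built-in ones. The fragment below is a hypothetical sketch, not part of the article's code: it assumes Spring Cloud Gateway's naming convention, in which the configuration name is derived from the factory class name (factories named XxxRoutePredicateFactory or XxxGatewayFilterFactory are referenced simply as Xxx), and someProperty stands in for whatever fields you add to the Configuration class.

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: custom-route
          uri: https://example.com
          predicates:
            - name: CustomPredicateFactory   # name derived from the factory class
              args:
                someProperty: someValue      # hypothetical Configuration field
          filters:
            - name: PreCustomFilterFactory
```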
Spring Cloud Gateway Example

We will show a practical and simple example to see how the gateway works in a real scenario. You will find a link to the source code at the end of the article. The example is based on the following stack:

- Spring Boot: 3.2.1
- Spring Cloud: 2023.0.0
- Java 17

We consider a minimal microservice system that implements a library with only two services: a book service and an author service. The book service calls the author service to retrieve an author's information by passing an authorName parameter. The implementation of the two applications is based on an embedded in-memory H2 database and uses the JPA ORM to map and query the Book and Author tables. From the standpoint of this demonstration, the most important part is the /getAuthor REST endpoint exposed by the BookController class of the book service:

```java
@RestController
@RequestMapping("/library")
public class BookController {

    Logger logger = LoggerFactory.getLogger(BookController.class);

    @Autowired
    private BookService bookService;

    @GetMapping(value = "/getAuthor", params = { "authorName" })
    public Optional<Author> getAuthor(@RequestParam("authorName") String authorName) {
        return bookService.getAuthor(authorName);
    }
}
```

The two applications register themselves in a Eureka discovery server and are configured as discovery clients. The final component is the gateway. The gateway should not register itself with the service discovery server, because it is called only by external clients, not by internal microservices. On the other hand, it can be configured as a discovery client to fetch the other services automatically and implement more dynamic routing. We don't do that here, to keep things simple.

In this example, we want to show two things:

- How the routing mechanism works, based on the predicate value
- How to modify the request with a filter, by adding a header

The gateway's configuration is the following:

```yaml
spring:
  application:
    name: gateway-service
  cloud:
    gateway:
      routes:
        - id: add_request_header_route
          uri: http://localhost:8082
          predicates:
            - Path=/library/**
          filters:
            - AddRequestHeader=X-Request-red, red
```

We have defined a route with the ID "add_request_header_route" and a URI value of "http://localhost:8082", the base URI of the book service. We then have a Path predicate with a "/library/**" value. Every call starting with "http://localhost:8080/library/" will be matched and routed toward the book service, with URIs starting with "http://localhost:8082/library/".

Running the Example

To run the example, you can start each component by executing the "mvn spring-boot:run" command from the component's base directory. You can then test it by calling the "http://localhost:8080/library/getAuthor?authorName=Goethe" URI. The result will be a JSON value containing information about the author. If you inspect the request received by the book service, you will also find that an X-Request-red header has been added with a value of "red".
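As a side note, the same route could also be defined programmatically with the Java DSL instead of YAML. The snippet below is an illustrative sketch of that equivalent, using Spring Cloud Gateway's RouteLocatorBuilder; it is not part of the example project:

```java
@Configuration
public class GatewayRoutesConfig {

    @Bean
    public RouteLocator bookServiceRoutes(RouteLocatorBuilder builder) {
        // same behavior as the YAML route: match /library/** and add a request header
        return builder.routes()
                .route("add_request_header_route", r -> r
                        .path("/library/**")
                        .filters(f -> f.addRequestHeader("X-Request-red", "red"))
                        .uri("http://localhost:8082"))
                .build();
    }
}
```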
Conclusion

Implementing a gateway with the Spring Cloud Gateway package is the natural choice in the Spring Boot framework. It reduces the complexity of a microservice environment by placing a single facade component in front of it. It also gives you great flexibility in implementing cross-cutting concerns like authentication, authorization, aggregated logging, tracing, and monitoring. You can find the source code of this article's example on GitHub.

When we, developers, find bugs in our logs, it can sometimes be worse than a dragon fight! Let's start with the basics. Log severity levels are ordered, from most detailed to no output at all:

- TRACE
- DEBUG
- INFO
- WARN
- ERROR
- FATAL
- OFF

The default severity level for your classes is INFO, so you don't need to change your configuration file (application.yaml):

```yaml
logging:
  level:
    root: INFO
```

Let's create a sample controller to test some of the severity levels:

```java
@RestController
@RequestMapping("/api")
public class LoggingController {

    private static final Logger logger = LoggerFactory.getLogger(LoggingController.class);

    @GetMapping("/test")
    public String getTest() {
        testLogs();
        return "Ok";
    }

    public void testLogs() {
        System.out.println(" ==== LOGS ==== ");
        logger.error("This is an ERROR level log message!");
        logger.warn("This is a WARN level log message!");
        logger.info("This is an INFO level log message!");
        logger.debug("This is a DEBUG level log message!");
        logger.trace("This is a TRACE level log message!");
    }
}
```

We can test it with HTTPie or any other REST client:

```shell
$ http GET :8080/api/test
HTTP/1.1 200

Ok
```

Checking the Spring Boot logs, we will see something like this:

```
 ==== LOGS ==== 
2024-09-08T20:50:15.872-03:00 ERROR 77555 --- [nio-8080-exec-5] LoggingController : This is an ERROR level log message!
2024-09-08T20:50:15.872-03:00  WARN 77555 --- [nio-8080-exec-5] LoggingController : This is a WARN level log message!
2024-09-08T20:50:15.872-03:00  INFO 77555 --- [nio-8080-exec-5] LoggingController : This is an INFO level log message!
```

If we need to change the level to DEBUG for all com.boaglio classes, we add this to the application.yaml file and restart the application:

```yaml
logging:
  level:
    com.boaglio: DEBUG
```

Repeating the test, we will see a new debug line:

```
 ==== LOGS ==== 
2024-09-08T20:56:35.082-03:00 ERROR 81780 --- [nio-8080-exec-1] LoggingController : This is an ERROR level log message!
2024-09-08T20:56:35.082-03:00  WARN 81780 --- [nio-8080-exec-1] LoggingController : This is a WARN level log message!
2024-09-08T20:56:35.083-03:00  INFO 81780 --- [nio-8080-exec-1] LoggingController : This is an INFO level log message!
2024-09-08T20:56:35.083-03:00 DEBUG 81780 --- [nio-8080-exec-1] LoggingController : This is a DEBUG level log message!
```

This is good, but sometimes we are running in production and need to change from INFO to TRACE, just for a quick investigation. This is possible with the LoggingSystem class. Let's add a POST API to our controller that changes all logs to TRACE:

```java
@Autowired
private LoggingSystem loggingSystem;

@PostMapping("/trace")
public void setLogLevelTrace() {
    loggingSystem.setLogLevel("com.boaglio", LogLevel.TRACE);
    logger.info("TRACE active");
    testLogs();
}
```

We are using the LoggingSystem.setLogLevel method to change all logging from the com.boaglio package to TRACE. Let's call our POST API to enable TRACE:

```shell
$ http POST :8080/api/trace
HTTP/1.1 200
```

Now we can check that TRACE is finally enabled:

```
2024-09-08T21:04:03.791-03:00  INFO 82087 --- [nio-8080-exec-3] LoggingController : TRACE active
 ==== LOGS ==== 
2024-09-08T21:04:03.791-03:00 ERROR 82087 --- [nio-8080-exec-3] LoggingController : This is an ERROR level log message!
2024-09-08T21:04:03.791-03:00  WARN 82087 --- [nio-8080-exec-3] LoggingController : This is a WARN level log message!
2024-09-08T21:04:03.791-03:00  INFO 82087 --- [nio-8080-exec-3] LoggingController : This is an INFO level log message!
2024-09-08T21:04:03.791-03:00 DEBUG 82087 --- [nio-8080-exec-3] LoggingController : This is a DEBUG level log message!
2024-09-08T21:04:03.791-03:00 TRACE 82087 --- [nio-8080-exec-3] LoggingController : This is a TRACE level log message!
```

And a bonus tip here: to enable DEBUG or TRACE just for the Spring Boot framework (which is great sometimes to understand what is going on under the hood), we can simply add this to our application.yaml:

```yaml
debug: true
```

or

```yaml
trace: true
```

Let the game of trace begin!
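One postscript worth mentioning: if Spring Boot Actuator is in your application, log levels can also be changed at runtime without writing any endpoint of your own. This is a sketch assuming the spring-boot-starter-actuator dependency is present and the loggers endpoint is exposed (management.endpoints.web.exposure.include includes "loggers"); with HTTPie the calls would look like this:

```shell
# change the com.boaglio package to TRACE at runtime
$ http POST :8080/actuator/loggers/com.boaglio configuredLevel=TRACE

# verify the currently configured level
$ http GET :8080/actuator/loggers/com.boaglio
```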
This project implements a simple LangChain language correctness detector that detects grammatical errors, sentiment, and aggressiveness, and provides solutions for the errors in the text.

Features

- Detects grammatical errors in the text
- Analyzes the sentiment of the text
- Measures the aggressiveness of the text
- Provides solutions for the detected errors

Stack Used

- Node.js: JavaScript runtime environment
- TypeScript: Typed superset of JavaScript
- LangChain: Language processing library
- OpenAI API: For language model capabilities
- Google Cloud: For additional language processing services

Installation

1. Clone the repository:

```shell
git clone https://github.com/xavidop/langchain-example.git
cd langchain-example
```

2. Install the dependencies:

```shell
yarn install
```

3. Create a .env file in the root directory and add your OpenAI API key and Google Application credentials:

```
OPENAI_API_KEY="your-openai-api-key"
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
LLM_PROVIDER='OPENAI'
```

Usage

1. Build the project:

```shell
yarn run build
```

2. Start the application:

```shell
yarn start
```

3. For development, you can use:

```shell
yarn run dev
```

Code Explanation

Imports and Environment Setup

```typescript
import { ChatOpenAI, ChatOpenAICallOptions } from "@langchain/openai";
import { ChatVertexAI } from "@langchain/google-vertexai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import * as dotenv from "dotenv";

// Load environment variables from .env file
dotenv.config();
```

- Imports: The code imports necessary modules from LangChain, Zod for schema validation, and dotenv for environment variable management.
- Environment setup: Loads environment variables from a .env file.

System Template and Schema Definition

```typescript
const systemTemplate =
  "You are an expert in {language}, you have to detect grammar problems sentences";

const classificationSchema = z.object({
  sentiment: z
    .enum(["happy", "neutral", "sad", "angry", "frustrated"])
    .describe("The sentiment of the text"),
  aggressiveness: z
    .number()
    .int()
    .min(1)
    .max(10)
    .describe("How aggressive the text is on a scale from 1 to 10"),
  correctness: z
    .number()
    .int()
    .min(1)
    .max(10)
    .describe("How the sentence is correct grammatically on a scale from 1 to 10"),
  errors: z
    .array(z.string())
    .describe(
      "The errors in the text. Specify the proper way to write the text and where it is wrong. Explain it in a human-readable way. Write each error in a separate string"
    ),
  solution: z
    .string()
    .describe("The solution to the errors in the text. Write the solution in {language}"),
  language: z.string().describe("The language the text is written in"),
});
```

- System template: Defines a template for the system message, indicating the language and the task of detecting grammar problems.
- Classification schema: Uses Zod to define a schema for the expected output, including sentiment, aggressiveness, correctness, errors, solution, and language.

Prompt Template and Model Selection

```typescript
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", systemTemplate],
  ["user", "{text}"],
]);

let model: any;
if (process.env.LLM_PROVIDER == "OPENAI") {
  model = new ChatOpenAI({
    model: "gpt-4",
    temperature: 0,
  });
} else {
  model = new ChatVertexAI({
    model: "gemini-1.5-pro-001",
    temperature: 0,
  });
}
```

- Prompt template: Creates a prompt template using the system message and the user input.
- Model selection: Selects the language model based on the LLM_PROVIDER environment variable; it can be either OpenAI's GPT-4 or Google's Vertex AI.
Main Function

```typescript
export const run = async () => {
  const llmWithStructuredOutput = model.withStructuredOutput(classificationSchema, {
    name: "extractor",
  });

  const chain = await promptTemplate.pipe(llmWithStructuredOutput);

  const result = await chain.invoke({ language: "Spanish", text: "Yo soy enfadado" });
  console.log({ result });
};

run();
```

- Structured output: Configures the model to use the defined classification schema.
- Pipeline: Creates a pipeline by combining the prompt template and the structured-output model.
- Invocation: Invokes the pipeline with a sample text in Spanish and logs the result.

Prompts Used for Detecting Correctness

The following prompts are used to detect the correctness of the text:

- Grammatical errors: "Please check the following text for grammatical errors: {text}"
- Sentiment analysis: "Analyze the sentiment of the following text: {text}"
- Aggressiveness detection: "Measure the aggressiveness of the following text: {text}"
- Error solutions: "Provide solutions for the errors found in the following text: {text}"

Examples

This project can be used with different language models to detect language correctness. Here are some examples using the OpenAI and Gemini models.

OpenAI

With OpenAI's GPT-4 model, the system can detect grammatical errors, sentiment, and aggressiveness in the text.

Input:

```
{ language: "Spanish", text: "Yo soy enfadado" }
```

Output:

```
{
  result: {
    sentiment: 'angry',
    aggressiveness: 2,
    correctness: 7,
    errors: [
      "The correct form of the verb 'estar' should be used instead of 'ser' when expressing emotions or states."
    ],
    solution: 'Yo estoy enfadado',
    language: 'Spanish'
  }
}
```

Gemini

With Google's Vertex AI Gemini model, the output is quite similar:

Input:

```
{ language: "Spanish", text: "Yo soy enfadado" }
```

Output:

```
{
  result: {
    sentiment: 'angry',
    aggressiveness: 1,
    correctness: 8,
    errors: [
      'The correct grammar is "estoy enfadado" because "ser" is used for permanent states and "estar" is used for temporary states. In this case, being angry is a temporary state.'
    ],
    solution: 'Estoy enfadado',
    language: 'Spanish'
  }
}
```

Conclusion

This project demonstrates how to use LangChain to detect language correctness using different language models. By combining the system template, classification schema, prompt template, and language model, you can create a powerful language processing system. The OpenAI and Gemini models provide accurate results for detecting grammatical errors, sentiment, and aggressiveness in the text. You can find the full code of this example in the GitHub repository.

License

This project is licensed under the Apache License, Version 2.0. See the LICENSE file for more details.

Contributing

Contributions are welcome! Please open an issue or submit a pull request for any changes.

Resources

- LangChain
- Vertex AI
- OpenAI API

Happy coding!
"DX", aka Developer Experience

One of the goals of Stalactite is to make developers aware of the impact of their entity mapping onto the database and, as a consequence, on performance. To fulfill this goal, the developer's experience as a user of the Mapping API is key to helping them express their intent. The idea is to guide user-developers in the choices they can make while describing their persistence. As you may already know, Stalactite doesn't use annotations or XML files for that. It proposes a fluent API that constrains user choices according to the context. To clarify: the methods available after a call to mapOneToOne(..) are not the same as the ones available after mapOneToMany(..). This capability can be implemented in different ways. Stalactite chose to leverage Java proxies for it, combined with the multiple-inheritance capability of interfaces.

Contextualized Options

Let's start with a simple goal: we want to help a developer express the option of aliasing a column in a request, as well as the option of casting it. Usually, we would find something like:

```java
select()
    .add("id", new ColumnOptions().as("countryId").cast("int"))
    .from(..);
```

It would be smarter to have this:

```java
select()
    .add("id").as("countryId").cast("int")
    .add("name").as("countryName")
    .add("population").cast("long")
    .from(..);
```

The former is kind of trivial to implement, and many examples can be found on the Internet, in particular Spring with its Security DSL or its MockMVC DSL. The latter is trickier because we have to locally mix the main API (select().from(..).where(..)) with a local one (as(..).cast(..)) on the return type of the add(..) method. This means that if the main API is brought by the FluentSelect interface, and the column options by the FluentColumnOptions interface, the method add(String) must return a third one that inherits from both: the FluentSelectColumnOptions interface.

```java
/** The main API */
interface FluentSelect {
    FluentSelect add(String columnName);
    FluentFrom from(String tableName);
}

/** The column options API */
interface FluentColumnOptions {
    FluentColumnOptions as(String alias);
    FluentColumnOptions cast(String type);
}

/** The main API with column options as an argument to make it more fluent */
interface EnhancedFluentSelect extends FluentSelect {
    /** The return type of this method is overridden to return an enhanced version */
    FluentSelectColumnOptions add(String columnName);
    FluentFrom from(String tableName);
}

/** The mashup between the main API and the column options API */
interface FluentSelectColumnOptions extends
        EnhancedFluentSelect, // we inherit from it to be capable of coming back to from(..) or chaining another add(..)
        FluentColumnOptions {
    /** we override the return types to make the interface capable of chaining with itself */
    FluentSelectColumnOptions as(String alias);
    FluentSelectColumnOptions cast(String type);
}
```

This can be done with standard Java code, but it brings boilerplate that is cumbersome to maintain. An elegant way to address it is to create a "method dispatcher" that redirects the main methods to the object that supports the main API, and the option methods to the object that supports the options.

Creating a Method Dispatcher: Java Proxy

Luckily, the Java Proxy API helps in being aware of method invocations on an object.
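Before applying this to the DSL, here is a minimal, self-contained sketch of the interception mechanism. The Greeter interface and the printed messages are invented for illustration; the point is that every call on the proxy funnels through a single InvocationHandler:

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {

    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        Greeter greeter = (Greeter) Proxy.newProxyInstance(
                ProxyDemo.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, methodArgs) -> {
                    // every method invoked on the proxy lands here
                    System.out.println("intercepted: " + method.getName());
                    return "Hello, " + methodArgs[0];
                });

        // prints "intercepted: greet" followed by "Hello, world"
        System.out.println(greeter.greet("world"));
    }
}
```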
As a reminder, a Proxy can be created as such:

```java
Proxy.newProxyInstance(classLoader, interfaces, methodHandler)
```

- It returns an instance (a "magic" one, thanks to the JVM) that can be cast to any of the interfaces passed as a parameter (the Proxy implements all the given interfaces).
- All methods of all the interfaces are intercepted by InvocationHandler.invoke(..) (even equals/hashCode/toString!).

So, our goal can be fulfilled if we're capable of returning a Proxy that implements our interfaces (or the mashup one) and creating an InvocationHandler that propagates the calls of the main API to the "main object" and the calls of the options to the "options object." Since InvocationHandler.invoke(..) gets the invoked Method as an argument, we can easily check whether it belongs to one or the other of the aforementioned interfaces. This gives the following naïve implementation for our example:

```java
public static void main(String[] args) {
    String mySelect = MyFluentQueryAPI.newQuery()
            .select("a").as("A").cast("char")
            .add("b").as("B").cast("int")
            .toString();
    System.out.println(mySelect);
    // will print "[a as A cast char, b as B cast int]", see createFluentSelect() on case "toString"
}

interface Select {
    Select add(String s);
    Select select(String s);
}

interface ColumnOptions {
    ColumnOptions as(String s);
    ColumnOptions cast(String s);
}

interface FluentSelect extends Select, ColumnOptions {
    FluentSelect as(String s);
    FluentSelect cast(String s);
    FluentSelect add(String s);
    FluentSelect select(String s);
}

public static class MyFluentQueryAPI {

    public static FluentSelect newQuery() {
        return new MyFluentQueryAPI().createFluentSelect();
    }

    private final SelectSupport selectSupport = new SelectSupport();

    public FluentSelect createFluentSelect() {
        return (FluentSelect) Proxy.newProxyInstance(getClass().getClassLoader(), new Class[] { FluentSelect.class }, new InvocationHandler() {
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                switch (method.getName()) {
                    case "as":
                    case "cast":
                        // we look for the "as" or "cast" method on the ColumnOptions class
                        Method optionMethod = ColumnOptions.class.getMethod(method.getName(), method.getParameterTypes());
                        // we apply the "as" or "cast" call on the element being created
                        optionMethod.invoke(selectSupport.getCurrentElement(), args);
                        break;
                    case "add":
                    case "select":
                        // we look for the "add" or "select" method on the Select class
                        Method selectMethod = Select.class.getMethod(method.getName(), method.getParameterTypes());
                        // we apply the "add" or "select" call on the final result (the select instance)
                        selectMethod.invoke(selectSupport, args);
                        break;
                    case "toString":
                        return selectSupport.getElements().toString();
                }
                return proxy;
            }
        });
    }
}

/** Basic implementation of Select */
static class SelectSupport implements Select {

    private final List<SelectedElement> elements = new ArrayList<>();

    private SelectedElement currentElement;

    @Override
    public Select add(String s) {
        this.currentElement = new SelectedElement(s);
        this.elements.add(currentElement);
        return this;
    }

    @Override
    public Select select(String s) {
        return add(s);
    }

    public SelectedElement getCurrentElement() {
        return currentElement;
    }

    public List<SelectedElement> getElements() {
        return elements;
    }
}

/** Basic representation of an element of the select clause, implements ColumnOptions */
static class SelectedElement implements ColumnOptions {

    private String clause;

    public SelectedElement(String clause) {
        this.clause = clause;
    }

    @Override
    public ColumnOptions as(String s) {
        clause += " as " + s;
        return this;
    }

    @Override
    public ColumnOptions cast(String s) {
        clause += " cast " + s;
        return this;
    }

    @Override
    public String toString() {
        return clause;
    }
}
```

To become a robust solution, this proof of concept still needs to take inheritance into account, as well as argument type compatibility, and more. Stalactite invested that time and created the MethodDispatcher class in the standalone library named Reflection; the final DX for an SQL query definition is available here, and its usage is here. The Stalactite DSL for persistence mapping definition is even more complex. That's the caveat of all this: creating all the composite interfaces and redirecting all the method calls correctly is a bit complex. That's why, as the last stage of the rocket, the MethodReferenceDispatcher was created: it lets one redirect a method reference to some lambda expression, avoiding extra code for small interfaces. Its usage can be seen here.

Conclusion

Implementing a naïve DSL can be straightforward, but it doesn't really guide the user-developer. On the other hand, implementing a robust DSL can be cumbersome, so Stalactite helped itself by creating an engine for it. While it's not easy to master, it really helps deliver a good user-developer experience. Since the engine lives in the Reflection library, outside of Stalactite, it can be used in other projects.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.

Although the traditional software development lifecycle (SDLC) is slow and incapable of addressing dynamic business needs, it has long been the most popular way companies build applications — that was until low- and no-code (LCNC) tools entered the fray. These tools simplify the coding process so that advanced developers and non-technical users can contribute, enabling organizations to respond rapidly to market needs by shortening the SDLC. Read on to learn how software development is changing thanks to LCNC tools, how to integrate them into your operations, and the challenges that might arise when integrating them.

Understanding Low- and No-Code Development

Low- and no-code development environments let people build apps through visual interfaces, drag-and-drop tools, and reusable components without writing code by hand. Low-code development platforms are visual development environments that empower developers of any skill level to drop components onto a palette and connect them to create a mobile or web app. No-code development platforms target users with little or no coding experience.

So how can you leverage these platforms to enhance the conventional SDLC? Suppose you are a designer with basic coding skills. Using an LCNC platform, you can quickly create a prototype using reusable components without writing a single line of code. This will expedite the software development process and ensure that the final product meets user needs.

Planning and Assessment for Low- and No-Code Integration

Though the SDLC varies across companies due to different SDLC models, it often comprises these stages: project planning, requirements gathering and analysis, design, testing, deployment, and maintenance. This process ensures a high level of detail but slows the development cycle and uses substantial resources. Low- and no-code tools address this challenge. For instance, during the design stage, HR teams can quickly design their recruitment portal using reusable components or pre-made templates provided by an LCNC platform to easily track candidates, job postings, and interview scheduling. That said, before integrating LCNC platforms into your existing workflows, consider your team's expertise, the compatibility of your IT infrastructure with your chosen platform, and the platform's security features.

Steps for Low- and No-Code Integration

To integrate low- and no-code tools into your operations, follow these steps:

Figure 1. Streamlined steps for seamless LCNC integration

Table 1 expands upon these steps, including an example for each:

Table 1. Steps for integrating LCNC tools

| Step | Description | Example |
|------|-------------|---------|
| 1. Define objectives and goals | Clearly define what you want to achieve and make it specific. Objectives could include speeding up development, reducing costs, or boosting your team's productivity. | Reduce app development time by 40% within six months using pre-built templates. |
| 2. Choose the right LCNC platforms | Don't settle. Evaluate various platforms and match them with your needs and objectives. Consider user friendliness, security features, and compatibility with your existing systems. | Choose a no-code platform primarily for its ease of use and educational support if most team members aren't tech-savvy. |
| 3. Train and onboard your team members | Ensure that all team members, tech-savvy or not, can use your platform. | Set up lectures and webinars, or even sign up your team members for professional courses offered by LCNC platforms. |
| 4. Design your integration architecture | Ensure your platform is designed to be easily integrated with your current systems. | Map how data flows between the current system and the preferred platform. |
| 5. Implement an integration framework | Create a framework for integrating LCNC platforms within your SDLC. At its simplest level, this may involve creating guidelines for selecting tools for each stage of the SDLC. | Integrate LCNC platforms to collect customer survey responses, product launch feedback, or contact form submissions. Without these tools, developers would need to build the features from scratch, requiring extensive front-end development and storage integration, leading to higher costs and longer development cycles. |
| 6. Conduct testing and quality assurance | Conduct rigorous testing of your LCNC tools using a mix of testing approaches, such as unit, integration, and acceptance testing. | You can perform acceptance testing to ensure your app meets end users' needs and expectations. |
| 7. Manage deployments and releases | Deploy your application to end users in a structured manner using a deployment strategy (e.g., rolling deployment) and include a rollback plan in case of unforeseen issues. | You can use cloud solutions to automate deployment. |
| 8. Monitor and maintain | Monitor the app's performance after deployment to detect potential issues. | During maintenance, you may encounter bugs. Scheduling periodic bug fixes can maintain your app's stability, functionality, and security. |

Integrating Low and No Code: Practices to Implement and Avoid

While low-code and no-code platforms streamline the SDLC, implementing them requires a structured approach. Next, we will explore best practices and counterproductive practices to consider during integration.

Implement: Incremental Adoption

Gradually integrating low- and no-code platforms into existing processes and systems can minimize disruptions to ongoing operations. Begin by implementing LCNC solutions for non-critical projects that can act as a sandbox for refining integration strategies. For instance, developers can migrate LCNC processes incrementally, starting with non-critical, easily packaged processes and gradually scaling up over time. Non-critical processes, such as email notices, are more conducive to a slow, iterative rollout to a smaller portion of customers.

Implement: Collaborative Development

Collaborative development is a methodology that emphasizes teamwork. It brings together the various stakeholders involved in the SDLC, such as project managers, business analysts, UX/UI designers, software developers, and other technical and non-technical personnel. This approach considers every stakeholder's input, resulting in the delivery of high-quality applications. Encourage collaboration by establishing clear roles and responsibilities for every stakeholder involved in the SDLC.

Implement: Hybrid Development Models

Combining low- and no-code platforms with traditional coding offers a balanced approach. While LCNC platforms can accelerate development, complex functionalities may require custom code. Embracing a hybrid approach promotes flexibility and maintains the integrity of applications without sacrificing the enhanced functionalities that traditional coding provides.
Implement: Continuous Feedback Loops

Low-code and no-code tools accelerate feedback loops, allowing teams to build prototypes rapidly, gather user feedback early and often, and refine applications based on the feedback received. This approach ensures that final products align with user needs and expectations and adapt quickly to dynamic business requirements.

Avoid: Over-Reliance on Low- and No-Code Platforms

Low-code and no-code tools aren't meant to replace traditional coding. Complex logic and performance-critical tasks still require conventional software development approaches. As a result, businesses should adopt a hybrid development model.

Avoid: Lack of Proper Training and Education

If misused, low code and no code can do more harm than good. Poor deployment can result in downtime, which rapidly increases costs in terms of lost customers and reputational damage (e.g., where many customers are being served) — even one second of unavailability has immense costs. The ability to benefit from these groundbreaking platforms relies wholly on providing technical and non-technical users with the proper training to avoid cumulative abnormalities.

Avoid: Neglecting Security and Compliance Concerns

Low- and no-code platforms eliminate various obstacles associated with conventional SDLC processes. However, they bring about security concerns, primarily because your chosen platform hosts your data. Assess the security features of your selected low- or no-code platform to ensure that it meets your organization's data protection requirements and applicable industry regulations (e.g., GDPR, HIPAA, CCPA) to avoid security breaches and legal issues.

Avoid: Ignoring Scalability and Customization Requirements

Not all low- and no-code platforms scale well or allow sufficient customization. For instance, some platforms cap the number of team members using them, while others have storage restrictions. This can be a massive obstacle for growing businesses or those with particular needs. Assess whether the platform you're considering can scale and be customized to meet long-term business goals before settling on one.

Low- and No-Code Challenges and Mitigation Strategies

Incorporating low- and no-code tools into existing processes presents a few distinctive obstacles. Table 2 describes common challenges associated with integrating LCNC tools into the SDLC and their respective mitigation strategies:

Table 2. Integrating low and no code: challenges and mitigation strategies

| Obstacle | Challenge | Mitigation Strategy |
|----------|-----------|---------------------|
| Change | Often the most widespread challenge; employees fear that LCNC tools will have a steep learning curve | Implement extensive training programs to equip team members with the necessary skill sets |
| Incompatibility and interoperability | Can hinder the integration of LCNC platforms (e.g., due to incompatibility with outdated database protocols) | Rigorously evaluate platforms to ensure compatibility with existing systems or that they can connect to systems not connected via APIs |
| Technical limitations | Can prevent the integration of LCNC platforms (e.g., lack of scalability) | Select platforms that are scalable from the start or that provide a hybrid development approach |

Future of Low Code and No Code in the SDLC

As low- and no-code platforms evolve, we can expect a significant transformation in software development practices.
While LCNC tools won't make traditional coding obsolete, they'll accelerate development, lower costs, minimize technical debt, and democratize app development — allowing more people to build software without advanced programming skills. Low- and no-code development tools aren't just a passing trend. They are here to stay and will change how we develop and maintain software. By 2025, Gartner estimates, 70% of all new applications that enterprises develop will use LCNC technologies. Existing trends in the emerging LCNC space suggest that these platforms will grow to support increasingly complex features, such as advanced workflows and integrations. Most importantly, AI will be central to this evolution. AI-enhanced LCNC platforms that offer digital chatbots, image recognition, personalization, and other advanced features are already on the market.

Conclusion

Forrester says that low-code and no-code tools are "redefining how organizations build software"; the low-code market alone is expected to reach nearly $30 billion in value by 2028. If you want your organization to keep up, you can't afford to ignore LCNC platforms. By implementing these steps, organizations can effectively integrate LCNC solutions:

- Organizations should set clear goals about what they wish to accomplish with LCNC solutions. Then, they should select suitable platforms based on their specific needs.
- Organizations should train their teams on the selected platform.
- Teams should carefully integrate the new system with existing systems and test it thoroughly before deploying it.

Ultimately, a successful integration depends on adopting best practices (e.g., incremental adoption, collaborative development, hybrid development) and avoiding counterproductive practices (e.g., heavy reliance on LCNC tools, failure to consider security and scalability). Not using low- and no-code tools yet? Introduce them into your existing workflow to support your SDLC process.

Additional resources:

- Building Low-Code Applications with Mendix: Discover Best Practices and Expert Techniques to Simplify Enterprise Web Development by Bryan Kenneweg, Imran Kasam, and Micah McMullen
- Cost of Data Center Outages by Ponemon Institute
- "Gartner Says Cloud Will Be the Centerpiece of New Digital Experiences" by Gartner
- "The Low-Code Market Could Approach $50 Billion By 2028" by Forrester

This is an excerpt from DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code. Read the Free Report
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.

The advent of large language models (LLMs) has led to a rush to shoehorn artificial intelligence (AI) into every product where it makes sense, as well as into quite a few where it doesn't. But there is one area where AI has already proven to be a powerful and useful addition: low- and no-code software development. Let's look at how and why AI makes building applications faster and easier, especially with low- and no-code tools.

AI's Role in Development

First, let's discuss two of the most common roles AI plays in simplifying and speeding up the development process:

- Generating code
- Acting as an intelligent assistant

AI code generators and assistants use LLMs trained on massive codebases that teach them the syntax, patterns, and semantics of programming languages. These models predict the code needed to fulfill a prompt — the same way chatbots use their training to predict the next word in a sentence.

Automated Code Generation

AI code generators create code based on input. These prompts take the form of natural language input or code in an integrated development environment (IDE) or on the command line. Code generators speed up development by freeing programmers from writing repetitive code. They can reduce common errors and typographical mistakes, too. But just like the LLMs used to generate text, code generators require scrutiny and can make their own errors. Developers need to be careful when accepting code generated by AI, and they must test not just whether it builds but also whether it does what the user asks. gpt-engineer is an open-source AI code generator that accepts natural language prompts to build entire codebases. It works with ChatGPT or custom LLMs like Llama.

Intelligent Assistants for Development

Intelligent assistants provide developers with real-time help as they work. They are a form of AI code generator, but instead of relying on natural language prompts alone, they can autocomplete, provide in-line documentation, and accept specialized commands. These assistants can work inside programming tools like Eclipse and Microsoft's VS Code, on the command line, or in all three. They offer many of the same benefits as code generators, including shorter development times, fewer errors, and reduced typos. They also serve as learning tools, since they provide developers with programming information as they work. But like any AI tool, AI assistants are not foolproof — they require close and careful monitoring. GitHub's Copilot is a popular AI programming assistant. It uses models built on public GitHub repositories, so it supports a very wide variety of languages and plugs into all the most popular programming tools. Microsoft's Power Platform and Amazon Q Developer are two popular commercial options, while Refact.ai is an open-source alternative.

AI and Low and No Code: Perfect Together

Low code and no code developed in response to a need for tools that allow newcomers and non-technologists to quickly customize software for their needs. AI takes this one step further by making it even easier to translate ideas into software.

Democratizing Development

AI code generators and assistants democratize software development by making coding more accessible, enhancing productivity, and facilitating continuous learning. These tools lower the entry barriers for newcomers to programming.
A novice programmer can use them to quickly build working applications while learning on the job. For example, Microsoft Power Apps includes Copilot, which generates application code for you and then works with you to refine it.

How AI Enhances Low- and No-Code Platforms

There are several important ways that AI enhances low- and no-code platforms. We've already covered AI's ability to generate code snippets from natural language prompts or from the context in a code editor. You can use LLMs like ChatGPT and Gemini to generate code for many low-code platforms, while many no-code platforms like AppSmith and Google AppSheet use AI to generate integrations based on text that describes what you want the integration to do. You can use AI to automate preparing, cleaning, and analyzing data, too. This makes it easier to integrate and work with large datasets that need tuning before they're suitable for use with your models. Tools like Amazon SageMaker use AI to ingest, sort, organize, and streamline data. Some platforms use AI to help create user interfaces and populate forms. For example, Microsoft's Power Platform uses AI to enable users to build user interfaces and automate processes through conversational interactions with its copilot. All these features help make low- and no-code development faster, including in terms of scalability, since more team members can take part in the development process.

How Low and No Code Enable AI Development

While AI is invaluable for generating code, it's also useful in your low- and no-code applications. Many low- and no-code platforms allow you to build and deploy AI-enabled applications, abstracting away the complexity of adding capabilities like natural language processing, computer vision, and AI APIs to your app. Users expect applications to offer features like voice prompts, chatbots, and image recognition. Developing these capabilities from scratch takes time, even for experienced developers, so many platforms offer modules that make it easy to add them with little or no code. For example, Microsoft has low-code tools for building Power Virtual Agents (now part of its Copilot Studio) on Azure. These agents can plug into a wide variety of skills backed by Azure services and drive them through a chat interface. Low- and no-code platforms like Amazon SageMaker and Google's Teachable Machine manage tasks like preparing data, training custom machine learning (ML) models, and deploying AI applications. And Zapier harnesses voice-to-text from Amazon's Alexa and directs the output to many different applications.

Figure 1. Building low-code AI-enabled apps with building blocks

Examples of AI-Powered Low- and No-Code Tools

The following table contains a list of widely used low- and no-code platforms that support AI code generation, AI-enabled application extensions, or both:
Table 1. AI-powered low- and no-code tools

| Application | Type | Primary Users | Key Features | AI/ML Capabilities |
|-------------|------|---------------|--------------|--------------------|
| Amazon CodeWhisperer | AI-powered code generator | Developers | Real-time code suggestions, security scans, broad language support | ML-powered code suggestions |
| Amazon SageMaker | Fully managed ML service | Data scientists, ML engineers | Ability to build, train, and deploy ML models; fully integrated IDE; support for MLOps | Pre-trained models, custom model training and deployment |
| GitHub Copilot | AI pair programmer | Developers | Code suggestions, multi-language support, context-aware suggestions | Generative AI model for code suggestions |
| Google Cloud AutoML | No-code AI | Data scientists, developers | High-quality custom ML models can be trained with minimal effort; support for various data types, including images, text, and audio | Automated ML model training and deployment |
| Microsoft Power Apps | Low-code app development | Business users, developers | Custom business apps can be built; support for many diverse data sources; automated workflows | AI builder for app enhancement |
| Microsoft Power Platform | Low-code platform | Business analysts, developers | Business intelligence, app development, app connectivity, robotic process automation | AI app builder for enhancing apps and processes |

Pitfalls of Using AI for Development

AI's ability to improve low- and no-code development is undeniable, but so are its risks. Any use of AI requires proper training and comprehensive governance:

- LLMs' tendency to "hallucinate" answers to prompts applies to code generation, too. So while AI tools lower the barrier to entry for novice developers, you still need experienced programmers to review, verify, and test code before you deploy it to production.
- Developers use AI by submitting prompts and receiving responses. Depending on the project, those prompts may contain sensitive information. If the model belongs to a third-party vendor or isn't correctly secured, your developers expose that information.
- When it works, AI suggests code that is likely to fulfill the prompt it's evaluating. The code is correct, but it's not necessarily the best solution. So a heavy reliance on AI to generate code can lead to code that is difficult to change and represents a large amount of technical debt.

AI is already making important contributions toward democratizing programming and speeding up low- and no-code development. As LLMs gradually improve, AI tools for creating software will only get better. Even as these tools improve, though, IT leaders still need to proceed cautiously. AI offers great power, but that power comes with great responsibility. Any and all use of AI requires comprehensive governance and complete safeguards that protect organizations from errors, vulnerabilities, and data loss.

Conclusion

Integrating AI into low- and no-code development platforms has already revolutionized software development. It has democratized access to advanced coding and empowered non-experts so that they can build sophisticated applications. AI-driven tools and intelligent assistants have reduced development times, improved development scalability, and helped minimize common errors. But these powerful capabilities come with risks and responsibilities. Developers and IT leaders need to establish robust governance, testing regimes, and validation systems if they want to safely harness AI's full potential. AI technologies and models continue to improve, and it's probable that they will become the cornerstone of innovative, efficient, and secure software development.
See how AI can help your organization widen its development efforts via low- and no-code tools.

This is an excerpt from DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code. Read the Free Report
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.

When software development teams face pressure to deliver high-quality applications rapidly, low-code platforms offer the needed support for rapidly evolving business requirements and complex integrations. Integrating intelligent automated testing (IAT), intelligent process automation (IPA), and robotic process automation (RPA) solutions, which can adapt to changes more readily, ensures that testing and automation keep pace with the evolving applications and processes. In a low-code development environment, as shown in Figure 1, IAT, IPA, and RPA can reduce manual effort and improve test coverage, accuracy, and efficiency in the SDLC and in process automation.

Figure 1. Low-code development environment

Using IAT, IPA, and RPA with low-code platforms can also achieve faster time to market, reduced costs, and increased productivity. The intersection of IAT, IPA, RPA, and low code is a paradigm shift in modern software development and process automation, and its impact extends to industries like professional services, consumer goods, banking, and beyond.

This article explores all three integrations. For each integration, we will highlight advantages and disadvantages, explore factors to consider when deciding whether to integrate, present a use case, and highlight key implementation points. The use cases presented are popular examples of how these technologies can be applied in specific scenarios. They do not imply that each integration is limited to the mentioned domains, nor do they suggest that the integrations cannot be used differently within the same domains. The flexibility and versatility of the three integrations explored in this article allow for a wide range of applications across different industries and processes.

IAT With Low-Code Development

AI-driven test case generation in intelligent automated testing can explore more scenarios, edge cases, and application states, leading to better test coverage and higher application quality. This is particularly beneficial in low-code environments, where complex integrations and rapidly evolving requirements can make comprehensive testing challenging. By automating testing tasks such as test case generation, execution, and maintenance, IAT can significantly reduce the manual effort required, leading to increased efficiency and cost savings. This is advantageous in low-code development, where citizen developers with limited testing expertise are involved, as it minimizes the need for dedicated testing resources.

Low-code platforms enable rapid application development, but testing can become a bottleneck. Automated testing and IAT can provide rapid feedback on application quality and potential issues, enabling quicker identification and resolution of defects. This may accelerate the overall development and delivery cycle, and it may allow organizations to leverage the speed of low code while maintaining quality standards. Keep in mind, though, that not all low-code platforms integrate with all IAT solutions, and IAT solutions may require access to sensitive application data, logs, and other information for training AI/ML models and generating test cases.
In cases where training and software engineering skill development are necessary for AI/ML in IAT, we also need to consider costs like maintenance and support, as well as customization and infrastructure. The decision on whether to integrate IAT with a low-code platform involves a number of factors, highlighted in the table below:

Table 1. Integrating IAT with low-code development

| When to Integrate | When Not to Integrate |
|-------------------|-----------------------|
| Rapid development is critical, but only citizen developers with limited testing experience are available | Simple applications have limited functionality, and the low-code platform already provides sufficient testing capabilities |
| Applications built on low-code platforms have good options for IAT integration | Complexity and learning curve are high, and a deep understanding of AI/ML is required |
| Complex applications need comprehensive test coverage, requiring extensive testing | There are compatibility, interoperability, and data silo issues |
| Frequent release cycles have well-established CI/CD pipelines | Data security and regulatory compliance are challenges |
| Enhanced decision-making for the testing process is needed | There are budget constraints |

Use Case: Professional Services

A low-code platform will be used to develop custom audit applications. Since IAT tools can be integrated to automate the testing of these applications, a professional services company will leverage IAT to enhance the accuracy, speed, efficiency, and effectiveness of its audit and assurance services. The main implementation points are summarized in Figure 2 below:

Figure 2. IAT with low-code development for a custom audit app

In this professional services use case for integrating IAT with low code, custom audit applications could also be developed for industries such as healthcare or finance, where automated testing can improve compliance and risk management.

IPA With Low-Code Development

Intelligent process automation can significantly enhance efficiency by automating various aspects of the software development and testing lifecycle. Low-code environments can benefit from IPA's advanced AI technologies, such as machine learning, natural language processing (NLP), and cognitive computing. These enhancements allow low-code platforms to automate more complex and data-intensive tasks that go beyond simple rule-based processes. IPA is not limited to simple rule-based tasks; it incorporates cognitive automation capabilities, which make it able to handle more complex scenarios involving unstructured data and decision-making. IPA can learn from data patterns and make decisions based on historical data and trends. This is particularly useful for testing scenarios that involve complex logic and variable outcomes. For example, IPA can handle unstructured data like text documents, images, and emails by using NLP and optical character recognition.

IPA may be used to automate complex workflows and decision-making processes, reducing the need for manual intervention. End-to-end workflows and business processes can be automated, including approvals, notifications, and escalations. Automated decision-making can handle tasks such as credit scoring, risk assessment, and eligibility verification without human involvement, based on predefined criteria and real-time data analysis. With IPA, low-code testing can go beyond testing applications, since we can test entire processes across different verticals of an organization.
As IPA can support a wide range of integration scenarios across verticals, security and regulatory compliance may be an issue. If the low-code platform does not fully support the wide range of integrations available through IPA, then we need to consider alternatives. Infrastructure setup, data migration, data integration, licensing, and customization are examples of the costs involved. The following table summarizes the factors to consider before integrating IPA:

Table 2. Integrating IPA with low-code development

When to integrate:
- Stringent compliance and regulatory requirements exist that change in an adaptable, detailed, and easy-to-automate fashion
- Repetitive processes exist across verticals where efficiency and accuracy can be enhanced
- Rapid development and deployment of scalable automation solutions is necessary
- End-to-end business processes can be streamlined
- Decision-making for complex process optimization is necessary

When not to integrate:
- Regulatory and security compliance frameworks are too rigid, having security/compliance gaps with potential legal issues, leading to challenges and uncertainties
- There are no clear optimization goals, and manual processes are sufficient
- The low-code platform has limited customization for IPA
- There is limited IT expertise
- There are high initial implementation costs

Use Case: Consumer Goods

A leading consumer goods company wants to utilize IPA to enhance its supply chain management and business operations. They will use a low-code platform to develop supply chain applications, and the platform will have the option to integrate IPA tools to automate and optimize supply chain processes. Such an integration will allow the company to improve supply chain efficiency, reduce operational costs, and enhance product delivery times. Implementation main points are summarized in Figure 3 below:

Figure 3. IPA with low-code development for a consumer goods company

This example of integrating IPA with low code in the consumer goods sector could be adapted for industries like retail or manufacturing, where inventory management, demand forecasting, and production scheduling can be optimized.

RPA With Low-Code Development

Robotic process automation and low-code development have a complementary relationship, as they can be combined to enhance the overall automation and application development capabilities within an organization. For example, RPA can be used to automate repetitive tasks and integrate with various systems, while low-code platforms can be leveraged to build custom applications and workflows quickly, which may result in faster time to market. The rapid development capabilities of low-code platforms, combined with the automation power of RPA, may enable organizations to quickly build and deploy applications.

By automating repetitive tasks with RPA and rapidly building custom applications with low-code platforms, organizations can significantly improve their overall operational efficiency and productivity. RPA in a low-code environment can lead to cost savings by minimizing manual effort, reducing development time, and enabling citizen developers to contribute to application development.

Both RPA and low-code platforms offer scalability and flexibility, allowing organizations to adapt to changing business requirements and scale their applications and automated processes as needed. RPA bots can dynamically scale to handle varying volumes of customer queries; during peak times, additional bots can be deployed to manage the increased workload, ensuring consistent service levels.
RPA tools often come with cross-platform compatibility, allowing them to interact with various applications and systems and enhancing the flexibility of low-code platforms. Data sensitivity may be an issue here, as RPA bots may directly access proprietary or sensitive data.

For processes that are unstable, difficult to automate, or unpredictable, RPA may not provide the expected gains. RPA relies on structured data and predefined rules to execute tasks. Frequently changing, unstable, and unstructured processes that lack clear and consistent repetitive patterns may pose significant challenges for RPA bots. Processes that are complex to automate often involve multiple decision points, exceptions, and dependencies. While RPA can handle some level of complexity, it is not designed for tasks requiring deep context understanding or sophisticated decision-making capabilities. The following table summarizes the factors to consider before integrating RPA:

Table 3. Integrating RPA with low-code development

When to integrate:
- Existing system integrations can be further enhanced via automation
- Repetitive tasks and processes exist where manual processing is inefficient
- Cost savings are expected by automating heavy loads of structured and repetitive tasks
- Scalability and flexibility of RPA can be leveraged by the low-code platform
- Time to market is important

When not to integrate:
- Tasks to be automated involve unstructured data and complex decision-making
- Rapidly changing and complex processes must be automated
- Implementation and maintenance costs of the integration are high
- There is a lack of technical expertise
- RPA bots would operate on sensitive data without safeguards

Use Case: Banking

A banking organization aims to streamline its data entry processes by integrating RPA with low-code development platforms to automate repetitive and time-consuming tasks, such as form filling, data extraction, and data transfer between legacy and new systems. The integration is expected to enhance operational efficiency, reduce manual errors, ensure data accuracy, and increase customer satisfaction. Additionally, it will allow the bank to handle increased volumes of customer data with greater speed and reliability. The low-code platform will provide the flexibility to rapidly develop and deploy custom applications tailored to the bank's specific needs, while RPA will handle the automation of back-end processes, ensuring seamless and secure data management. Implementation main points are summarized in Figure 4 below:

Figure 4. RPA with low-code development for a banking organization

In this banking example for integrating RPA with low code, RPA is used to automate back-end processes such as data entry and transfer, but it can also automate front-end processes like customer service interactions and loan processing. Additionally, low code with RPA can be applied in domains such as insurance or telecommunications to automate claims processing and customer onboarding, respectively.

Conclusion

The value of technological integration lies in its ability to empower society and organizations to evolve, stay competitive, and thrive in a changing landscape — a landscape that calls for innovation and productivity to address market needs and societal changes. By embracing IAT, IPA, RPA, and low-code development, businesses can unlock new levels of agility, efficiency, and innovation. This will enable them to deliver exceptional customer experiences while driving sustainable growth and success.
As the digital transformation journey continues to unfold, the integration of IAT, IPA, and RPA with low-code development will play a pivotal role and shape the future of software development, process automation, and business operations across industries.

This is an excerpt from DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.

The rise of low-code and no-code (LCNC) platforms has sparked a debate about their impact on the role of developers. Concerns about skill devaluation are understandable; after all, if anyone can build an app, what happens to the specialized knowledge of experienced programmers? While some skepticism toward low-code platforms remains, particularly concerning their suitability for large-scale, enterprise-level applications, it's important to recognize that these platforms are constantly evolving and improving. Many platforms now offer robust features like model-driven development, automated testing, and advanced data modeling, making them capable of handling complex business requirements. In addition, the ability to incorporate custom code modules ensures that specialized functionalities can still be implemented when needed.

Yes, these tools are revolutionizing software creation, but it's time to move beyond the debate about their impact on the development landscape and delve into the practical realities. Rather than being a sales pitch for codeless platforms, this article aims to equip developers with a realistic understanding of what these tools can and cannot do, how they can change developer workflows, and, most importantly, how you can harness their power to become more efficient and valuable in an AI-supported, LCNC-driven world.

Leveraging Modern LCNC Platforms for Developer Workflows

The financial benefits of LCNC platforms are undeniable. Reduced development costs, faster time to market, and a lighter burden on IT are compelling arguments. But it's the strategic advantage of democratizing application development by empowering individuals to develop solutions without any coding experience that drives innovation and competitive edge.

For IT, it means less time fixing minor problems and more time on the big, important stuff. For teams outside of IT, it's like having a toolbox to build your own solutions. Need a way to track project deadlines? There's an app for that. Want to automate a tedious report? You can probably build it yourself.

This shift doesn't mean that traditional coding skills are obsolete, though. In fact, they become even more valuable. Experienced developers can now focus on building reusable components, creating templates and frameworks for citizen developers, and ensuring that their LCNC solutions integrate seamlessly with existing systems. This shift is crucial as organizations increasingly adopt a "two-speed IT" approach, balancing the need for rapid, iterative development with the maintenance and enhancement of complex core systems.

Types of Tasks Suitable for LCNC vs. Traditional Development

To understand how various tasks of traditional development would differ from using a codeless solution, consider the following table of typical tasks in a developer workflow:
Table 1. Developer workflow tasks: LCNC vs. traditional development

Simple form building
- LCNC: Ideal; drag-and-drop interfaces, pre-built components
- Traditional (full-code): Possible but requires more manual coding and configuration
- Recommended tool: LCNC
- Developer involvement: Minimal; drag-and-drop, minimal configuration

Data visualization
- LCNC: Excellent with built-in charts/graphs, customizable with some code
- Traditional (full-code): More customization options, requires coding libraries or frameworks
- Recommended tool: LCNC or hybrid (if customization is needed)
- Developer involvement: Minimal to moderate, depending on complexity

Basic workflow automation
- LCNC: Ideal; visual workflow builders, easy integrations
- Traditional (full-code): Requires custom coding and integration logic
- Recommended tool: LCNC
- Developer involvement: Minimal to moderate; integration may require some scripting

Front-end app development
- LCNC: Suitable for basic UI, but complex interactions require coding
- Traditional (full-code): Full control over UI/UX but more time consuming
- Recommended tool: Hybrid
- Developer involvement: Moderate; requires front-end development skills

Complex integrations
- LCNC: Limited to pre-built connectors, custom code often needed
- Traditional (full-code): Flexible and powerful but requires expertise
- Recommended tool: Full-code or hybrid
- Developer involvement: High; deep understanding of APIs and data formats

Custom business logic
- LCNC: Not ideal; may require workarounds or limited custom code
- Traditional (full-code): Full flexibility to implement any logic
- Recommended tool: Full-code
- Developer involvement: High; strong programming skills and domain knowledge

Performance optimization
- LCNC: Limited options, usually handled by the platform
- Traditional (full-code): Full control over code optimization but requires deep expertise
- Recommended tool: Full-code
- Developer involvement: High; expertise in profiling and code optimization

API development
- LCNC: Possible with some platforms but limited in complexity
- Traditional (full-code): Full flexibility but requires API design and coding skills
- Recommended tool: Full-code or hybrid
- Developer involvement: High; API design and implementation skills

Security-critical apps
- LCNC: Depends on platform's security features, may not be sufficient
- Traditional (full-code): Full control over security implementation but requires expertise
- Recommended tool: Full-code
- Developer involvement: High; expertise in security best practices and secure coding

Getting the Most Out of an LCNC Platform

Whether you are building your own codeless platform or adopting a ready-to-use solution, the benefits can be immense. But before you begin, remember that the core of any LCNC platform is the ability to transform a user's visual design into functional code. This is where the real magic happens, and it's also where the biggest challenges lie.

For an LCNC platform to help you achieve success, you need to start with a deep understanding of your target users. What are their technical skills? What kind of applications do they want to use? The answers to these questions will inform every aspect of your platform's design, from the user interface/user experience (UI/UX) to the underlying architecture.

The UI/UX is crucial for the success of any LCNC platform, but it is just the tip of the iceberg. Under the hood, you'll need a powerful engine that can translate visual elements into clean, efficient code. This typically involves complex AI algorithms, data structures, and a deep understanding of various programming languages. You'll also need to consider how your platform will handle business logic, integrations with other systems, and deployment to different environments.

Figure 1. A typical LCNC architecture flow

Many organizations already have a complex IT landscape, and introducing a new platform can create compatibility issues. Choosing an LCNC platform that offers robust integration options, whether through APIs, webhooks, or pre-built connectors, is crucial.
You'll also need to decide whether to adopt a completely codeless (no-code) solution or a low-code solution that allows for some custom coding. Additional factors to consider are how you'll handle version control, testing, and debugging.

Best Practices to Empower Citizen Developers With LCNC

LCNC platforms empower developers with powerful features, but it's the knowledge of how to use those tools effectively that truly unleashes their potential. The following best practices offer guidance on how to make the most of LCNC's capabilities while aligning with broader organizational goals.

Leverage Pre-Built Components and Templates

Most LCNC platforms offer pre-built components and templates as ready-made elements — from form fields and buttons to entire page layouts. These building blocks can help you bypass tedious manual coding and focus on the unique aspects of your application. While convenient, pre-built components may not always fit your exact requirements, so assess whether customization is necessary and feasible within the platform.

Begin with a pre-built application template that aligns with your overall goal; this can save significant time and provide a solid foundation. Explore the available components before diving into development, and if a pre-built component doesn't quite fit, explore customization options within the platform before resorting to complex workarounds.

Prioritize the User Experience

Remember, even the most powerful application is useless if it's too confusing or frustrating to use. LCNC platforms are typically designed for rapid application development. Prioritizing core features first aligns with this philosophy, allowing for faster delivery of a functional product that can then be iterated upon based on user feedback.

Before you start building, take the time to understand your end users' needs and pain points. Sketch out potential workflows, gather feedback from colleagues, and test your prototype with potential users. To avoid clutter and unnecessary features, the rule of thumb should be to focus first on developing the core functionalities that users need. Use clear labels, menus, and search functionality. A visually pleasing interface can significantly enhance user engagement and satisfaction.

Align With Governance and Standards

Your organization likely has established guidelines for data usage, security protocols, and integration requirements. Adhering to these standards not only ensures the safety and integrity of your application but also paves the way for smoother integration with existing systems and a more cohesive IT landscape.

Be aware of any industry-specific regulations or data privacy laws that may apply to your application. Adhere to established security protocols, data-handling guidelines, and coding conventions to minimize risk and ensure a smooth deployment process. Formulate an AI-based runbook that mandates getting IT approval for your application before going live, especially if it involves sensitive data or integrations with critical systems.

Conclusion

Instead of viewing low code and traditional coding as an either/or proposition, developers should embrace them as complementary tools. Low-code platforms excel at rapid prototyping, building core application structures, and handling common functionalities; meanwhile, traditional coding outperforms in areas like complex algorithms, bespoke integrations, and granular control. A hybrid approach offers the best of both paradigms.
It is also important to note that this is not the end of the developer's role but rather a new chapter. LCNC and AI are here to stay, and the smart developer recognizes that resisting this change is futile. Instead, embracing these tools opens up new avenues for career growth and impact. Embracing change, upskilling, and adapting to the evolving landscape can help developers thrive in an AI-based LCNC era, unlocking new levels of productivity, creativity, and impact.

This is an excerpt from DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.
Generative AI (GenAI) is currently a hot topic in the tech world. It's a subset of artificial intelligence that focuses on creating new content, such as text, images, or music. One popular type of GenAI component is the Large Language Model (LLM), which can generate human-like text based on a prompt. Retrieval-Augmented Generation (RAG) is a technique that enhances the accuracy and reliability of generative AI models by grounding them in external knowledge sources. While most GenAI applications and related content are centered around Python and its ecosystem, what if you want to write a GenAI application in Java? In this blog post, we'll look at how to write GenAI applications in Java using the Spring AI framework and how to utilize RAG to improve answers.

What Is Spring AI?

Spring AI is a framework for building generative AI applications in Java. It provides a set of tools and utilities for working with generative AI models and architectures, such as large language models (LLMs) and Retrieval-Augmented Generation (RAG). Spring AI is built on top of the Spring Framework, a popular Java framework for building enterprise applications, which allows those already familiar with or involved in the Spring ecosystem to incorporate GenAI strategies into their existing applications and workflows. There are also other options for GenAI in Java, such as Langchain4j, but we'll focus on Spring AI for this post.

Creating a Project

To get started with Spring AI, you'll need to either create a new project or add the appropriate dependencies to an existing project. You can create a new project using the Spring Initializr, which is a web-based tool for generating Spring Boot projects. When creating a new project, you'll need to add the following dependencies:

- Spring Web
- OpenAI (or another LLM model, such as Mistral, Ollama, etc.)
- Neo4j Vector Database (other vector database options are also available)
- Spring Data Neo4j

If you're adding these dependencies manually to an existing project, you can see the dependency details in today's related GitHub repository.

The Spring Web dependency allows us to create a REST API for our GenAI application. We need the OpenAI dependency to access the OpenAI model, which is a popular LLM. The Neo4j Vector Database dependency allows us to store and query vectors, which are used for similarity searches. Finally, the Spring Data Neo4j dependency provides support for working with Neo4j databases in Spring applications, allowing us to run Cypher queries in Neo4j and map entities to Java objects.

Go ahead and generate the project, and then open it in your favorite IDE. Looking at the pom.xml file, you should see that the milestone repository is included. Since Spring AI is not a general-availability release yet, we need to include the milestone repository to get the pre-release version of the dependencies.

A Bit of Boilerplate

The first thing that we need is a Neo4j database. I like to use the Neo4j Aura free tier because the instance is managed for me, but there are also Docker images and other methods. Depending on the LLM model you chose, you will also need an API key. For OpenAI, you can get one by signing up at OpenAI. Once you have a Neo4j database and an API key, you can set up the config in the application.properties file.
Here's an example of what that might look like:

Properties files
spring.ai.openai.api-key=<YOUR API KEY HERE>
spring.neo4j.uri=<NEO4J URI HERE>
spring.neo4j.authentication.username=<NEO4J USERNAME HERE>
spring.neo4j.authentication.password=<NEO4J PASSWORD HERE>
spring.data.neo4j.database=<NEO4J DATABASE NAME HERE>

Note: It's a good idea to keep sensitive information like API keys and passwords in environment variables or other locations external to the application. To create environment variables, you can use the export command in the terminal or set them in your IDE.

We can set up Spring beans for the OpenAI client and the Neo4j vector store that will allow us to access the necessary components wherever we need them in our application. We can put these in our SpringAiApplication class by adding the following code to the class:

Java
@Bean
public EmbeddingClient embeddingClient() {
    return new OpenAiEmbeddingClient(new OpenAiApi(System.getenv("SPRING_AI_OPENAI_API_KEY")));
}

@Bean
public Neo4jVectorStore vectorStore(Driver driver, EmbeddingClient embeddingClient) {
    return new Neo4jVectorStore(driver, embeddingClient,
            Neo4jVectorStore.Neo4jVectorStoreConfig.builder()
                    .withLabel("Review")
                    .withIndexName("review-embedding-index")
                    .build());
}

The EmbeddingClient bean creates a client for the OpenAI API and passes in our API key environment variable. The Neo4jVectorStore bean configures Neo4j as the store for embeddings (vectors). We customize the configuration by specifying the label for the nodes that will store the embeddings, as Spring's default looks for Document entities, and by specifying our index name for the embeddings (the default is spring-ai-document-index).

Data Set

For this example, we'll use a dataset of books and reviews from Goodreads. You can pull a curated version of the dataset from here. The dataset contains information about books, as well as related reviews. I have already generated embeddings using OpenAI's API, so if you want to generate your own, you will need to comment out the final Cypher statement in the script and instead run the generate-embeddings.py script (or your custom version) to generate and load the review embeddings to Neo4j.

Application Model

Next, we need to create a domain model in our application to map to our database model. In this example, we'll create a Book entity that represents a book node, and a Review entity that represents a review of a book. The Review entity will have an embedding (vector) associated with it, which we'll use for similarity searches. These entities are standard Spring Data Neo4j code, so I won't show the code here; the full code for each class is available in the GitHub repository (Book class, Review class).

We also need a repository interface defined so that we can interact with the database. We will need to define a custom query, but we'll come back and add that a bit later.

Java
public interface BookRepository extends Neo4jRepository<Book, String> {
}

Next, the core of this application, where all the magic happens, is the controller class. This class will contain the logic for taking a search phrase provided by the user and calling the Neo4jVectorStore to calculate and return the most similar reviews. We can then pass those similar reviews into a Neo4j query to retrieve connected entities, providing additional context in the prompt for the LLM. It will use all the information provided to respond with some similar book recommendations for the original searched phrase.
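Since the entity classes are omitted above, here is a minimal, hypothetical sketch of what they could look like, purely for orientation. The field names are inferred from the sample output shown later in this post, and the annotations are standard Spring Data Neo4j; the authoritative versions live in the linked GitHub repository and may differ:

Java
// Hypothetical sketch only -- the real Book and Review classes are in the
// article's GitHub repository and may differ in fields, types, and accessors.
import java.util.List;

import org.springframework.data.neo4j.core.schema.Id;
import org.springframework.data.neo4j.core.schema.Node;
import org.springframework.data.neo4j.core.schema.Relationship;

@Node("Book")
public class Book {

    @Id
    private String book_id;   // assumed identifier, matching the dataset's book_id
    private String title;
    private String isbn;
    private String isbn13;

    // Reviews point at books via WRITTEN_FOR, so the relationship is incoming here
    @Relationship(value = "WRITTEN_FOR", direction = Relationship.Direction.INCOMING)
    private List<Review> reviewList;

    // constructors, getters/setters, and toString() omitted for brevity
}

@Node("Review")
class Review {

    @Id
    private String id;
    private String text;
    private Long rating;      // assumed numeric type
    // The embedding property is written by the Neo4jVectorStore, so it is not mapped here.
}

Note that the toString() implementations matter later: the controller joins the bookList items into the prompt's CONTEXT section, so whatever toString() renders is what the LLM sees.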
Controller

Our controller class contains a couple of common annotations, to start. We'll also inject the Neo4jVectorStore and BookRepository beans that we defined earlier, as well as the OpenAiChatClient for calling the LLM.

The next thing is to define a string for our prompt. This is the text that we will pass to the LLM to generate the response. We'll use the search phrase provided by the user and the similar reviews we find in the database to populate our prompt parameters in a few minutes. Next, we define the constructor for the controller class, which will inject the necessary beans.

Java
@RestController
@RequestMapping("/")
public class BookController {

    private final OpenAiChatClient client;
    private final Neo4jVectorStore vectorStore;
    private final BookRepository repo;

    String prompt = """
            You are a book expert with high-quality book information in the CONTEXT section.
            Answer with every book title provided in the CONTEXT.
            Do not add extra information from any outside sources.
            If you are unsure about a book, list the book and add that you are unsure.

            CONTEXT:
            {context}

            PHRASE:
            {searchPhrase}
            """;

    public BookController(OpenAiChatClient client, Neo4jVectorStore vectorStore, BookRepository repo) {
        this.client = client;
        this.vectorStore = vectorStore;
        this.repo = repo;
    }

    //Retrieval Augmented Generation with Neo4j - vector search + retrieval query for related context
    @GetMapping("/rag")
    public String generateResponseWithContext(@RequestParam String searchPhrase) {
        List<Document> results = vectorStore.similaritySearch(
                SearchRequest.query(searchPhrase).withTopK(5).withSimilarityThreshold(0.8));

        //more code shortly!
    }
}

Finally, we define a method that will be called when a user makes a GET request to the /rag endpoint. This method takes a search phrase as a query parameter and passes it to the vector store's similaritySearch() method to find similar reviews. I have also added a couple of customization filters to the query, limiting it to the top five results (.withTopK(5)) and only pulling the most similar results (.withSimilarityThreshold(0.8)). The actual implementation of Spring AI's similaritySearch() method is below.

Java
@Override
public List<Document> similaritySearch(SearchRequest request) {
    Assert.isTrue(request.getTopK() > 0, "The number of documents to returned must be greater than zero");
    Assert.isTrue(request.getSimilarityThreshold() >= 0 && request.getSimilarityThreshold() <= 1,
            "The similarity score is bounded between 0 and 1; least to most similar respectively.");

    var embedding = Values.value(toFloatArray(this.embeddingClient.embed(request.getQuery())));
    try (var session = this.driver.session(this.config.sessionConfig)) {
        StringBuilder condition = new StringBuilder("score >= $threshold");
        if (request.hasFilterExpression()) {
            condition.append(" AND ")
                    .append(this.filterExpressionConverter.convertExpression(request.getFilterExpression()));
        }
        String query = """
                CALL db.index.vector.queryNodes($indexName, $numberOfNearestNeighbours, $embeddingValue)
                YIELD node, score
                WHERE %s
                RETURN node, score""".formatted(condition);

        return session
                .run(query,
                        Map.of("indexName", this.config.indexName,
                                "numberOfNearestNeighbours", request.getTopK(),
                                "embeddingValue", embedding,
                                "threshold", request.getSimilarityThreshold()))
                .list(Neo4jVectorStore::recordToDocument);
    }
}

Then, we map the similar Review nodes back to Document entities because Spring AI expects a general document type.
The Neo4jVectorStore class contains methods to convert a Document to a custom record, as well as the reverse record-to-Document conversion. The actual implementations of those methods are shown next.

Java
private Map<String, Object> documentToRecord(Document document) {
    var embedding = this.embeddingClient.embed(document);
    document.setEmbedding(embedding);

    var row = new HashMap<String, Object>();
    row.put("id", document.getId());

    var properties = new HashMap<String, Object>();
    properties.put("text", document.getContent());

    document.getMetadata().forEach((k, v) -> properties.put("metadata." + k, Values.value(v)));
    row.put("properties", properties);
    row.put(this.config.embeddingProperty, Values.value(toFloatArray(embedding)));
    return row;
}

private static Document recordToDocument(org.neo4j.driver.Record neoRecord) {
    var node = neoRecord.get("node").asNode();
    var score = neoRecord.get("score").asFloat();
    var metaData = new HashMap<String, Object>();
    metaData.put("distance", 1 - score);
    node.keys().forEach(key -> {
        if (key.startsWith("metadata.")) {
            metaData.put(key.substring(key.indexOf(".") + 1), node.get(key).asObject());
        }
    });

    return new Document(node.get("id").asString(), node.get("text").asString(), Map.copyOf(metaData));
}

Back in our controller method for book recommendations, we now have similar reviews for the user's searched phrase. But reviews (and their accompanying text) aren't really helpful in giving us book recommendations, so now we need to run a query in Neo4j to retrieve the related books for those reviews. This is the retrieval-augmented generation (RAG) piece of the application. Let's write the query in the BookRepository interface to find the books associated with those reviews.

Java
public interface BookRepository extends Neo4jRepository<Book, String> {

    @Query("MATCH (b:Book)<-[rel:WRITTEN_FOR]-(r:Review) " +
            "WHERE r.id IN $reviewIds " +
            "AND r.text <> 'RTC' " +
            "RETURN b, collect(rel), collect(r);")
    List<Book> findBooks(List<String> reviewIds);
}

In the query, we pass in the IDs of the reviews from the similarity search ($reviewIds) and pull the Review -> Book pattern for those reviews. We also filter out any reviews that have the text 'RTC' (a placeholder for reviews that don't have text). We then return the Book nodes, the relationships, and the Review nodes.

Now we need to call that method in our controller and pass the results to a prompt template. We will pass that to the LLM to generate a response with a book recommendation list based on the user's search phrase (we hope!). :)

Java
//Retrieval Augmented Generation with Neo4j - vector search + retrieval query for related context
@GetMapping("/rag")
public String generateResponseWithContext(@RequestParam String searchPhrase) {
    List<Document> results = vectorStore.similaritySearch(
            SearchRequest.query(searchPhrase).withTopK(5).withSimilarityThreshold(0.8));

    List<Book> bookList = repo.findBooks(
            results.stream().map(Document::getId).collect(Collectors.toList()));

    var template = new PromptTemplate(prompt,
            Map.of("context", bookList.stream().map(b -> b.toString()).collect(Collectors.joining("\n")),
                    "searchPhrase", searchPhrase));

    System.out.println("----- PROMPT -----");
    System.out.println(template.render());

    return client.call(template.create().getContents());
}

Starting right after the similarity search, we call our new findBooks() method and pass in the list of review IDs from the similarity search. The retrieval query returns a list of books called bookList.
Next, we create a prompt template with the prompt string, the context data from the graph, and the user's search phrase, mapping the context and searchPhrase prompt parameters to the graph data (a list with each item on a new line) and the user's search phrase, respectively. I have also added a System.out.println() to print the prompt to the console so that we can see what is getting passed to the LLM.

Finally, we call the template's create() method and pass its contents to the chat client, which calls the LLM and returns the response string with the list of book recommendations based on the user's search phrase. Let's test it out!

Running the Application

To run our Goodreads AI application, you can use the ./mvnw spring-boot:run command in the terminal. Once the application is running, you can make a GET request to the /rag endpoint with a search phrase as a query parameter. Some examples are included next.

Shell
http ":8080/rag?searchPhrase=happy%20ending"
http ":8080/rag?searchPhrase=encouragement"
http ":8080/rag?searchPhrase=high%20tech"

Sample Call and Output + Full Prompt

Call and returned book recommendations:

Shell
jenniferreif@elf-lord springai-goodreads % http ":8080/rag?searchPhrase=encouragement"
The Cross and the Switchblade
The Art of Recklessness: Poetry as Assertive Force and Contradiction
I am unsure about 90 Minutes in Heaven: A True Story of Death and Life
The Greatest Gift: The Original Story That Inspired the Christmas Classic It's a Wonderful Life
I am unsure about Aligned: Volume 1 (Aligned, #1)

Application log output:

Shell
----- PROMPT -----
You are a book expert with high-quality book information in the CONTEXT section.
Answer with every book title provided in the CONTEXT.
Do not add extra information from any outside sources.
If you are unsure about a book, list the book and add that you are unsure.

CONTEXT:
Book[book_id=772852, title=The Cross and the Switchblade, isbn=0515090255, isbn13=9780515090253, reviewList=[Review[id=f70c68721a0654462bcc6cd68e3259bd, text=encouraging, rating=4]]]
Book[book_id=89375, title=90 Minutes in Heaven: A True Story of Death and Life, isbn=0800759494, isbn13=9780800759490, reviewList=[Review[id=85ef80e09c64ebd013aeebdb7292eda9, text=inspiring & hope filled, rating=5]]]
Book[book_id=1488663, title=The Greatest Gift: The Original Story That Inspired the Christmas Classic It's a Wonderful Life, isbn=0670862045, isbn13=9780670862047, reviewList=[Review[id=b74851666f2ec1841ca5876d977da872, text=Inspiring, rating=4]]]
Book[book_id=7517330, title=The Art of Recklessness: Poetry as Assertive Force and Contradiction, isbn=1555975623, isbn13=9781555975623, reviewList=[Review[id=2df3600d488e182a3ef06bff7fc82eb8, text=Great insight, great encouragement, and great company., rating=4]]]
Book[book_id=27802572, title=Aligned: Volume 1 (Aligned, #1), isbn=1519114796, isbn13=9781519114792, reviewList=[Review[id=60b9aa083733e751ddd471fa1a77535b, text=healing, rating=3]]]

PHRASE: encouragement

We can see that the LLM generated a response with a list of book recommendations based on the books found in the database (the CONTEXT section of the prompt). The results of the similarity search plus the graph retrieval query for the user's search phrase are in the prompt, and the LLM's answer uses that data for a response.
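If you'd rather exercise the endpoint from Java instead of httpie or curl, a minimal smoke-test sketch using the JDK's built-in HTTP client could look like the following (the host, port, and endpoint match the defaults above; the class name is just for illustration):

Java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Illustrative client for manually smoke-testing the /rag endpoint.
public class RagSmokeTest {

    public static void main(String[] args) throws Exception {
        // Encode the phrase for use in a query string ("happy ending" -> "happy+ending")
        String phrase = URLEncoder.encode("happy ending", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/rag?searchPhrase=" + phrase))
                .GET()
                .build();

        // The response body is the recommendation text returned by the controller
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}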
Wrapping Up

In today's post, you learned how to build a GenAI application with Spring AI in Java. We used the OpenAI model to generate book recommendations based on a user's search phrase, and we used the Neo4j Vector Database to store and query vectors for similarity searches. We also mapped the domain model to our database model, wrote a repository interface to interact with the database, and created a controller class to handle user requests and generate responses.

I hope this post helps to get you started with Spring AI and beyond. Happy coding!

Resources
- Documentation: Spring AI
- Webpage: Spring AI project
- API: Spring AI - Neo4jVectorStore
In today's security landscape, OAuth2 has become a standard for securing APIs, providing a more robust and flexible approach than basic authentication. My journey into this domain began with a critical solution architecture decision: migrating from basic authentication to OAuth2 client credentials for obtaining access tokens. While Spring Security offers strong support for both authentication methods, I encountered a significant challenge: I could not find a declarative approach that seamlessly integrated basic authentication and JWT authentication within the same application. This gap in functionality motivated me to explore and develop a solution that not only meets the authentication requirements but also supports comprehensive integration testing.

This article shares my findings and provides a detailed guide on setting up Keycloak, integrating it with Spring Security and Spring Boot, and utilizing the Spock Framework for repeatable integration tests. By the end of this article, you will clearly understand how to configure and test your authentication mechanisms effectively with Keycloak as an identity provider, ensuring a smooth transition to OAuth2 while maintaining the flexibility to support basic authentication where necessary.

Prerequisites

Before you begin, ensure you have met the following requirements:

- You have installed Java 21.
- You have a basic understanding of Maven and Java.

This guide uses taptech-code-accelerator, the parent project for the taptech-code-accelerator modules. It manages common dependencies and configurations for all the child modules. You can get it from here: taptech-code-accelerator.

Building taptech-code-accelerator

To build the taptech-code-accelerator project, follow these steps:

1. Clone the project from the repository:
   git clone https://github.com/glawson6/taptech-code-accelerator.git
2. Open a terminal and change the current directory to the root directory of the taptech-code-accelerator project:
   cd path/to/taptech-code-accelerator
3. Run the following command to build the project:
   ./build.sh

This command cleans the project, compiles the source code, runs any tests, packages the compiled code into a JAR or WAR file, and installs the packaged code in your local Maven repository. It also builds the local Docker image that will be used later. Please ensure you have the necessary permissions to execute these commands.

Keycloak Initial Setup

Setting up Keycloak for integration testing involves several steps. This guide will walk you through creating a local environment configuration, starting Keycloak with Docker, configuring realms and clients, verifying the setup, and preparing a PostgreSQL dump for your integration tests.

Step 1: Create a local.env File

First, navigate to the taptech-common/src/test/resources/docker directory. Create a local.env file to store environment variables needed for the Keycloak service. Here's an example of what the local.env file might look like:

POSTGRES_DB=keycloak
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=admin
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin
KC_DB_USERNAME=keycloak
KC_DB_PASSWORD=keycloak
SPRING_PROFILES_ACTIVE=secure-jwk
KEYCLOAK_ADMIN_CLIENT_SECRET=DCRkkqpUv3XlQnosjtf8jHleP7tuduTa
IDP_PROVIDER_JWKSET_URI=http://172.28.1.90:8080/realms/offices/protocol/openid-connect/certs

Step 2: Start the Keycloak Service

Next, start the Keycloak service using the provided docker-compose.yml file and the ./start-services.sh script. The docker-compose.yml file should define the Keycloak and PostgreSQL services.
version: '3.8'

services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
      #- ./dump:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: ${KC_DB_USERNAME}
      POSTGRES_PASSWORD: ${KC_DB_PASSWORD}
    networks:
      node_net:
        ipv4_address: 172.28.1.31

  keycloak:
    image: quay.io/keycloak/keycloak:23.0.6
    command: start #--import-realm
    environment:
      KC_HOSTNAME: localhost
      KC_HOSTNAME_PORT: 8080
      KC_HOSTNAME_STRICT_BACKCHANNEL: false
      KC_HTTP_ENABLED: true
      KC_HOSTNAME_STRICT_HTTPS: false
      KC_HEALTH_ENABLED: true
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://172.28.1.31/keycloak
      KC_DB_USERNAME: ${KC_DB_USERNAME}
      KC_DB_PASSWORD: ${KC_DB_PASSWORD}
    ports:
      - 8080:8080
    volumes:
      - ./realms:/opt/keycloak/data/import
    restart: always
    depends_on:
      - postgres
    networks:
      node_net:
        ipv4_address: 172.28.1.90

volumes:
  postgres_data:
    driver: local

networks:
  node_net:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16

Then, use the ./start-services.sh script to start the services.

Step 3: Access the Keycloak Admin Console

Once Keycloak has started, log in to the admin console at http://localhost:8080 using the configured admin username and password (the default is admin/admin).

Step 4: Create a Realm and Client

Create a realm:
1. Log in to the Keycloak admin console.
2. In the left-hand menu, click on "Add Realm".
3. Enter the name of the realm (e.g., offices) and click "Create".

Create a client:
1. Select your newly created realm from the left-hand menu.
2. Click on "Clients" in the left-hand menu.
3. Click on "Create" in the right-hand corner.
4. Enter the client ID (e.g., offices), choose openid-connect as the client protocol, and click "Save."

Extract the admin-cli client secret:
1. Follow the directions in the doc EXTRACTING-ADMIN-CLI-CLIENT-SECRET.md to extract the admin-cli client secret.
2. Save the client secret for later use.

Step 5: Verify the Setup With HTTP Requests

To verify the setup, you can use HTTP requests to obtain tokens. Get an access token:

http -a admin-cli:[client secret] --form POST http://localhost:8080/realms/master/protocol/openid-connect/token grant_type=password username=admin password=Pa55w0rd

Step 6: Create a PostgreSQL Dump

After verifying the setup, create a PostgreSQL dump of the Keycloak database to use for seeding the database during integration tests.

Create the dump:

docker exec -i docker-postgres-1 /bin/bash -c "PGPASSWORD=keycloak pg_dump --username keycloak keycloak" > dump/keycloak-dump.sql

Save the keycloak-dump.sql file locally; it will be used to seed the database for integration tests.

Following these steps, you will have a Keycloak instance configured and ready for integration testing with Spring Security and the Spock Framework.

Spring Security and Keycloak Integration Tests

This section will set up integration tests for Spring Security and Keycloak using Spock and Testcontainers. This involves configuring dependencies, setting up Testcontainers for Keycloak and PostgreSQL, and creating a base class to hold the necessary configurations.

Step 1: Add Dependencies

First, add the necessary dependencies to your pom.xml file. Ensure that Spock, Testcontainers for Keycloak and PostgreSQL, and other required libraries are included (check here).

Step 2: Create the Base Test Class

Create a base class to hold the configuration for your integration tests.
package com.taptech.common.security.keycloak

import com.taptech.common.security.user.InMemoryUserContextPermissionsService
import com.fasterxml.jackson.databind.ObjectMapper
import dasniko.testcontainers.keycloak.KeycloakContainer
import org.keycloak.admin.client.Keycloak
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.testcontainers.containers.Network
import org.testcontainers.containers.PostgreSQLContainer
import org.testcontainers.containers.output.Slf4jLogConsumer
import org.testcontainers.containers.wait.strategy.ShellStrategy
import org.testcontainers.utility.DockerImageName
import org.testcontainers.utility.MountableFile
import spock.lang.Shared
import spock.lang.Specification
import spock.mock.DetachedMockFactory

import java.time.Duration
import java.time.temporal.ChronoUnit

class BaseKeyCloakInfraStructure extends Specification {

    private static final Logger logger = LoggerFactory.getLogger(BaseKeyCloakInfraStructure.class);

    static String jdbcUrlFormat = "jdbc:postgresql://%s:%s/%s"
    static String keycloakBaseUrlFormat = "http://%s:%s"

    public static final String OFFICES = "offices";
    public static final String POSTGRES_NETWORK_ALIAS = "postgres";

    @Shared
    static Network network = Network.newNetwork();

    @Shared
    static PostgreSQLContainer<?> postgres = createPostgresqlContainer()

    protected static PostgreSQLContainer createPostgresqlContainer() {
        PostgreSQLContainer container = new PostgreSQLContainer<>("postgres")
                .withNetwork(network)
                .withNetworkAliases(POSTGRES_NETWORK_ALIAS)
                .withCopyFileToContainer(MountableFile.forClasspathResource("postgres/keycloak-dump.sql"),
                        "/docker-entrypoint-initdb.d/keycloak-dump.sql")
                .withUsername("keycloak")
                .withPassword("keycloak")
                .withDatabaseName("keycloak")
                .withLogConsumer(new Slf4jLogConsumer(logger))
                .waitingFor(new ShellStrategy()
                        .withCommand("psql -q -o /dev/null -c \"SELECT 1\" -d keycloak -U keycloak")
                        .withStartupTimeout(Duration.of(60, ChronoUnit.SECONDS)))

        return container
    }

    public static final DockerImageName KEYCLOAK_IMAGE = DockerImageName.parse("bitnami/keycloak:23.0.5");

    @Shared
    public static KeycloakContainer keycloakContainer;

    @Shared
    static String adminCC = "admin@cc.com"

    def setup() { }   // run before every feature method

    def cleanup() {}  // run after every feature method

    def setupSpec() { // run before the first feature method
        postgres.start()
        String jdbcUrl = String.format(jdbcUrlFormat, POSTGRES_NETWORK_ALIAS, 5432, postgres.getDatabaseName());

        keycloakContainer = new KeycloakContainer("quay.io/keycloak/keycloak:23.0.6")
                .withNetwork(network)
                .withExposedPorts(8080)
                .withEnv("KC_HOSTNAME", "localhost")
                .withEnv("KC_HOSTNAME_PORT", "8080")
                .withEnv("KC_HOSTNAME_STRICT_BACKCHANNEL", "false")
                .withEnv("KC_HTTP_ENABLED", "true")
                .withEnv("KC_HOSTNAME_STRICT_HTTPS", "false")
                .withEnv("KC_HEALTH_ENABLED", "true")
                .withEnv("KEYCLOAK_ADMIN", "admin")
                .withEnv("KEYCLOAK_ADMIN_PASSWORD", "admin")
                .withEnv("KC_DB", "postgres")
                .withEnv("KC_DB_URL", jdbcUrl)
                .withEnv("KC_DB_USERNAME", "keycloak")
                .withEnv("KC_DB_PASSWORD", "keycloak")

        keycloakContainer.start()

        String authServerUrl = keycloakContainer.getAuthServerUrl();
        String adminUsername = keycloakContainer.getAdminUsername();
        String adminPassword = keycloakContainer.getAdminPassword();
        logger.info("Keycloak getExposedPorts: {}", keycloakContainer.getExposedPorts())

        String keycloakBaseUrl = String.format(keycloakBaseUrlFormat,
                keycloakContainer.getHost(), keycloakContainer.getMappedPort(8080));
        //String keycloakBaseUrl = "http://localhost:8080"
        logger.info("Keycloak authServerUrl: {}", authServerUrl)
        logger.info("Keycloak URL: {}", keycloakBaseUrl)
        logger.info("Keycloak adminUsername: {}", adminUsername)
        logger.info("Keycloak adminPassword: {}", adminPassword)
        logger.info("JDBC URL: {}", jdbcUrl)

        System.setProperty("spring.datasource.url", jdbcUrl)
        System.setProperty("spring.datasource.username", postgres.getUsername())
        System.setProperty("spring.datasource.password", postgres.getPassword())
        System.setProperty("spring.datasource.driverClassName", "org.postgresql.Driver");
        System.setProperty("POSTGRES_URL", jdbcUrl)
        System.setProperty("POSRGRES_USER", postgres.getUsername())
        System.setProperty("POSRGRES_PASSWORD", postgres.getPassword());
        System.setProperty("idp.provider.keycloak.base-url", authServerUrl)
        System.setProperty("idp.provider.keycloak.admin-client-secret", "DCRkkqpUv3XlQnosjtf8jHleP7tuduTa")
        System.setProperty("idp.provider.keycloak.admin-client-id", KeyCloakConstants.ADMIN_CLI)
        System.setProperty("idp.provider.keycloak.admin-username", adminUsername)
        System.setProperty("idp.provider.keycloak.admin-password", adminPassword)
        System.setProperty("idp.provider.keycloak.default-context-id", OFFICES)
        System.setProperty("idp.provider.keycloak.client-secret", "x9RIGyc7rh8A4w4sMl8U5rF3HuNm2wOC3WOD")
        System.setProperty("idp.provider.keycloak.client-id", OFFICES)
        System.setProperty("idp.provider.keycloak.token-uri", "/realms/offices/protocol/openid-connect/token")
        System.setProperty("idp.provider.keycloak.jwkset-uri", authServerUrl + "/realms/offices/protocol/openid-connect/certs")
        System.setProperty("idp.provider.keycloak.issuer-url", authServerUrl + "/realms/offices")
        System.setProperty("idp.provider.keycloak.admin-token-uri", "/realms/master/protocol/openid-connect/token")
        System.setProperty("idp.provider.keycloak.user-uri", "/admin/realms/{realm}/users")
        System.setProperty("idp.provider.keycloak.use-strict-jwt-validators", "false")
    }

    def cleanupSpec() { // run after the last feature method
        keycloakContainer.stop()
        postgres.stop()
    }

    @Autowired
    Keycloak keycloak

    @Autowired
    KeyCloakAuthenticationManager keyCloakAuthenticationManager

    @Autowired
    InMemoryUserContextPermissionsService userContextPermissionsService

    @Autowired
    KeyCloakManagementService keyCloakService

    @Autowired
    KeyCloakIdpProperties keyCloakIdpProperties

    @Autowired
    KeyCloakJwtDecoderFactory keyCloakJwtDecoderFactory

    def test_config() {
        expect:
        keycloak != null
        keyCloakAuthenticationManager != null
        keyCloakService != null
    }

    static String basicAuthCredsFrom(String s1, String s2) {
        return "Basic " + toBasicAuthCreds(s1, s2);
    }

    static toBasicAuthCreds(String s1, String s2) {
        return Base64.getEncoder().encodeToString((s1 + ":" + s2).getBytes());
    }

    @Configuration
    @EnableKeyCloak
    public static class TestConfig {

        @Bean
        ObjectMapper objectMapper() {
            return new ObjectMapper();
        }

        DetachedMockFactory mockFactory = new DetachedMockFactory()
    }
}

In the BaseKeyCloakInfraStructure class, a method named createPostgresqlContainer() is used to set up a PostgreSQL test container. This method configures the container with various settings, including network settings, username, password, and database name. This class sets up the entire PostgreSQL and Keycloak environment. One of the key steps in this method is the use of a PostgreSQL dump file to populate the database with initial data.
This is done using the withCopyFileToContainer() method, which copies a file from the classpath to a specified location within the container. If you have problems starting, you might need to restart the Docker Compose file and extract the client secret again, as explained in EXTRACTING-ADMIN-CLI-CLIENT-SECRET. The code snippet for this is:

.withCopyFileToContainer(MountableFile.forClasspathResource("postgres/keycloak-dump.sql"), "/docker-entrypoint-initdb.d/keycloak-dump.sql")

Step 3: Extend the Base Class and Run Your Tests

package com.taptech.common.security.token

import com.taptech.common.EnableCommonConfig
import com.taptech.common.security.keycloak.BaseKeyCloakInfraStructure
import com.taptech.common.security.keycloak.EnableKeyCloak
import com.taptech.common.security.keycloak.KeyCloakAuthenticationManager
import com.taptech.common.security.user.UserContextPermissions
import com.taptech.common.security.utils.SecurityUtils
import com.fasterxml.jackson.databind.ObjectMapper
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.reactive.WebFluxTest
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.oauth2.client.registration.InMemoryReactiveClientRegistrationRepository
import org.springframework.test.context.ContextConfiguration
import org.springframework.test.web.reactive.server.EntityExchangeResult
import org.springframework.test.web.reactive.server.WebTestClient
import spock.mock.DetachedMockFactory
import org.springframework.boot.autoconfigure.security.reactive.ReactiveSecurityAutoConfiguration

@ContextConfiguration(classes = [TestApiControllerConfig.class])
@WebFluxTest(/*controllers = [TokenApiController.class],*/
        properties = [
                "spring.main.allow-bean-definition-overriding=true",
                "openapi.token.base-path=/",
                "idp.provider.keycloak.initialize-on-startup=true",
                "idp.provider.keycloak.initialize-realms-on-startup=false",
                "idp.provider.keycloak.initialize-users-on-startup=true",
                "spring.test.webtestclient.base-url=http://localhost:8888"
        ],
        excludeAutoConfiguration = ReactiveSecurityAutoConfiguration.class)
class TokenApiControllerTest extends BaseKeyCloakInfraStructure {

    private static final Logger logger = LoggerFactory.getLogger(TokenApiControllerTest.class);

    /*
    ./mvnw clean test -Dtest=TokenApiControllerTest
    ./mvnw clean test -Dtest=TokenApiControllerTest#test_public_validate
    */

    @Autowired
    TokenApiApiDelegate tokenApiDelegate

    @Autowired
    KeyCloakAuthenticationManager keyCloakAuthenticationManager

    @Autowired
    private WebTestClient webTestClient

    @Autowired
    TokenApiController tokenApiController

    InMemoryReactiveClientRegistrationRepository clientRegistrationRepository

    def test_configureToken() {
        expect:
        tokenApiDelegate
    }

    def test_public_jwkkeys() {
        expect:
        webTestClient.get().uri("/public/jwkKeys")
                .exchange()
                .expectStatus().isOk()
                .expectBody()
    }

    def test_public_login() {
        expect:
        webTestClient.get().uri("/public/login")
                .headers(headers -> {
                    headers.setBasicAuth(BaseKeyCloakInfraStructure.adminCC, "admin")
                })
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .jsonPath(".access_token").isNotEmpty()
                .jsonPath(".refresh_token").isNotEmpty()
    }

    def test_public_login_401() {
        expect:
        webTestClient.get().uri("/public/login")
                .headers(headers -> {
                    headers.setBasicAuth(BaseKeyCloakInfraStructure.adminCC, "bad")
                })
                .exchange()
                .expectStatus().isUnauthorized()
    }
    def test_public_refresh_token() {
        given:
        def results = keyCloakAuthenticationManager
                .passwordGrantLoginMap(BaseKeyCloakInfraStructure.adminCC, "admin", OFFICES)
                .toFuture().join()
        def refreshToken = results.get("refresh_token")

        expect:
        webTestClient.get().uri("/public/refresh")
                .headers(headers -> {
                    headers.set("Authorization", SecurityUtils.toBearerHeaderFromToken(refreshToken))
                    headers.set("contextId", OFFICES)
                })
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .jsonPath(".access_token").isNotEmpty()
                .jsonPath(".refresh_token").isNotEmpty()
    }

    def test_public_validate() {
        given:
        def results = keyCloakAuthenticationManager
                .passwordGrantLoginMap(BaseKeyCloakInfraStructure.adminCC, "admin", OFFICES)
                .toFuture().join()
        def accessToken = results.get("access_token")

        expect:
        EntityExchangeResult<UserContextPermissions> entityExchangeResult = webTestClient.get().uri("/public/validate")
                .headers(headers -> {
                    headers.set("Authorization", SecurityUtils.toBearerHeaderFromToken(accessToken))
                })
                .exchange()
                .expectStatus().isOk()
                .expectBody(UserContextPermissions.class)
                .returnResult()

        logger.info("entityExchangeResult: {}", entityExchangeResult.getResponseBody())
    }

    @Configuration
    @EnableCommonConfig
    @EnableKeyCloak
    @EnableTokenApi
    public static class TestApiControllerConfig {

        @Bean
        ObjectMapper objectMapper() {
            return new ObjectMapper();
        }

        DetachedMockFactory mockFactory = new DetachedMockFactory()
    }
}

Conclusion

With this setup, you have configured Testcontainers to run Keycloak and PostgreSQL within a Docker network, seeded the PostgreSQL database with a dump file, and created a base test class to manage the lifecycle of these containers. You can now write your integration tests extending this base class to ensure your Spring Security configuration works correctly with Keycloak.