DZone
Thanks for visiting DZone today,
Edit Profile
  • Manage Email Subscriptions
  • How to Post to DZone
  • Article Submission Guidelines
Sign Out View Profile
  • Post an Article
  • Manage My Drafts
Over 2 million developers have joined DZone.
Log In / Join
Refcards Trend Reports Events Over 2 million developers have joined DZone. Join Today! Thanks for visiting DZone today,
Edit Profile Manage Email Subscriptions Moderation Admin Console How to Post to DZone Article Submission Guidelines
View Profile
Sign Out
Refcards
Trend Reports
Events
Zones
Culture and Methodologies Agile Career Development Methodologies Team Management
Data Engineering AI/ML Big Data Data Databases IoT
Software Design and Architecture Cloud Architecture Containers Integration Microservices Performance Security
Coding Frameworks Java JavaScript Languages Tools
Testing, Deployment, and Maintenance Deployment DevOps and CI/CD Maintenance Monitoring and Observability Testing, Tools, and Frameworks
Partner Zones AWS Cloud
by AWS Developer Relations
Culture and Methodologies
Agile Career Development Methodologies Team Management
Data Engineering
AI/ML Big Data Data Databases IoT
Software Design and Architecture
Cloud Architecture Containers Integration Microservices Performance Security
Coding
Frameworks Java JavaScript Languages Tools
Testing, Deployment, and Maintenance
Deployment DevOps and CI/CD Maintenance Monitoring and Observability Testing, Tools, and Frameworks
Partner Zones
AWS Cloud
by AWS Developer Relations
Securing Your Software Supply Chain with JFrog and Azure
Register Today

Integration

Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly more complex ways, integration makes it all possible facilitating app-to-app communication. Learn more about this necessity for modern software development by keeping a pulse on the industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.

icon
Latest Refcards and Trend Reports
Trend Report
Software Integration
Software Integration
Refcard #249
GraphQL Essentials
GraphQL Essentials
Refcard #303
API Integration Patterns
API Integration Patterns

DZone's Featured Integration Resources

RBAC With API Gateway and Open Policy Agent (OPA)

RBAC With API Gateway and Open Policy Agent (OPA)

By Bobur Umurzokov
With various access control models and implementation methods available, constructing an authorization system for backend service APIs can still be challenging. However, the ultimate goal is to ensure that the correct individual has appropriate access to the relevant resource. In this article, we will discuss how to enable the Role-based access control (RBAC) authorization model for your API with open-source API Gateway Apache APISIX and Open Policy Agent (OPA). What Is RBAC? Role-based access control (RBAC)and attribute-based access control (ABAC) are two commonly used access control models used to manage permissions and control access to resources in computer systems. RBAC assigns permissions to users based on their role within an organization. In RBAC, roles are defined based on the functions or responsibilities of users, and permissions are assigned to those roles. Users are then assigned to one or more roles, and they inherit the permissions associated with those roles. In the API context, for example, a developer role might have permission to create and update API resources, while an end-user role might only have permission to read or execute API resources. Basically, RBAC assigns permissions based on user roles, while ABAC assigns permissions based on attributes associated with users and resources. In RBAC, a policy is defined by the combination of a user’s assigned role, the actions they are authorized to perform, and the resources on which they can perform those actions. What Is OPA? OPA (Open Policy Agent) is a policy engine and a set of tools that provide a unified approach to policy enforcement across an entire distributed system. It allows you to define, manage, and enforce policies centrally from a single point. By defining policies as code, OPA enables easy review, editing, and roll-back of policies, facilitating efficient policy management. OPA provides a declarative language called Rego, which allows you to create and enforce policies throughout your stack. When you request a policy decision from OPA, it uses the rules and data that you have provided in a .rego file to evaluate the query and produce a response. The query result is then sent back to you as the policy decision. OPA stores all the policies and the data it needs in its in-memory cache. As a result, OPA returns results quickly. Here is an example of a simple OPA Rego file: package example default allow = false allow { input.method == "GET" input.path =="/api/resource" input.user.role == "admin" } In this example, we have a package called “example” that defines a rule called “allow”. The “allow” rule specifies that the request is allowed if the input method is “GET”, the requested path is /api/resource, and the user's role is "admin". If these conditions are met, then the "allow" rule will evaluate as "true", allowing the request to proceed. Why Use OPA and API Gateway for RBAC? API Gateway provides a centralized location to configure and manage API, and API consumers. It can be used as a centralized authentication gateway by avoiding having each individual service implement authentication logic inside the service itself. On the other hand, OPA adds an authorization layer and decouples the policy from the code by creating a distinct benefit for authorization. With this combination, you can add permissions for an API resource to a role. Users might be associated with one or more user roles. Each user role defines a set of permissions (GET, PUT, DELETE) on RBAC resources (defined by URI paths). 
In the next section, let’s learn how to achieve RBAC using these two. How to Implement RBAC With OPA and Apache APISIX In Apache APISIX, you can configure routes and plugins to define the behavior of your API. You can use the APISIX opa plugin to enforce RBAC policies by forwarding requests to OPA for decision-making. Then OPA makes an authorization decision based on users’ roles and permissions in real-time. Assume that we have Conference API where you can retrieve/edit event sessions, topics, and speaker information. A speaker can only read their own sessions and topics while the admin can add/edit more sessions and topics. Or attendees can leave their feedback about the speaker’s session via a POST request to /speaker/speakerId/session/feedback and the speaker can only see by requesting the GET method of the same URI. The below diagram illustrates the whole scenario: API consumer requests a route on the API Gateway with its credential such as a JWT token in the authorization header. API Gateway sends consumer data with a JWT header to the OPA engine. OPA evaluates if the consumer has a right to access the resource by using policies (roles and permissions) we specify in the .rego file. If the OPA decision is allowed, then the request will be forwarded to the upstream Conference service. Next, we install, configure APISIX, and define policies in OPA. Prerequisites Docker is used to installing the containerized etcd and APISIX. curl is used to send requests to APISIX Admin API. You can also use tools such as Postman to interact with the API. Step 1: Install Apache APISIX APISIX can be easily installed and started with the following quickstart script: curl -sL https://run.api7.ai/apisix/quickstart | sh Step 2: Configure the Backend Service (Upstream) To route requests to the backend service for the Conference API, you’ll need to configure it by adding an upstream server in Apache APISIX via the Admin API. curl http://127.0.0.1:9180/apisix/admin/upstreams/1 -X PUT -d ' { "name":"Conferences API upstream", "desc":"Register Conferences API as the upstream", "type":"roundrobin", "scheme":"https", "nodes":{ "conferenceapi.azurewebsites.net:443":1 } }' Step 3: Create an API Consumer Next, we create a consumer (a new speaker) with the username jack in Apache APISIX. It sets up the jwt-auth plugin for the consumer with the specified key and secret. This will allow the consumer to authenticate using a JSON Web Token (JWT). curl http://127.0.0.1:9180/apisix/admin/consumers -X PUT -d ' { "username": "jack", "plugins": { "jwt-auth": { "key": "user-key", "secret": "my-secret-key" } } }' Step 4: Create a Public Endpoint to Generate a JWT Token You also need to set up a new Route that generates and signs the token using the public-api plugin. In this scenario, API Gateway acts as an identity provider server to create and verify the token with our consumer jack’s key. The identity provider can be also any other third-party services such as Google, Okta, Keycloak, and Ory Hydra. curl http://127.0.0.1:9180/apisix/admin/routes/jas -X PUT -d ' { "uri": "/apisix/plugin/jwt/sign", "plugins": { "public-api": {} } }' Step 5: Claim a New JWT Token for the API Consumer Now we can get a new token for our speaker Jack from the API Gateway using the public endpoint we created. The following curl command generates a new token with Jack’s credentials and assigns role, and permission in the payload. 
curl -G --data-urlencode 'payload={"role":"speaker","permission":"read"}' http://127.0.0.1:9080/apisix/plugin/jwt/sign?key=user-key -i After you run the above command, you will receive a token as a response. Save this token somewhere — later we are going to use this token to access our new API Gateway endpoint. Step 6: Create a New Plugin Config This step involves configuring APISIX’s 3 plugins: proxy-rewrite, jwt-auth, and opa plugins. curl "http://127.0.0.1:9180/apisix/admin/plugin_configs/1" -X PUT -d ' { "plugins":{ "jwt-auth":{ }, "proxy-rewrite":{ "host":"conferenceapi.azurewebsites.net" } } }' The proxy-rewrite plugin is configured to proxy requests to the conferenceapi.azurewebsites.net host. OPA authentication plugin is configured to use the OPA policy engine running at http://localhost:8181/v1/data/rbacExample. Also, APISIX sends all consumer-related information to OPA. We will add this policy .rego file in the Opa configuration section. Step 7: Create a Route for Conference Sessions The final step is to create a new route for Conferences API speaker sessions: curl "http://127.0.0.1:9180/apisix/admin/routes/1" -X PUT -d ' { "name":"Conferences API speaker sessions route", "desc":"Create a new route in APISIX for the Conferences API speaker sessions", "methods": ["GET", "POST"], "uris": ["/speaker/*/topics","/speaker/*/sessions"], "upstream_id":"1", "plugin_config_id":1 }' The payload contains information about the route, such as its name, description, methods, URIs, upstream ID, and plugin configuration ID. In this case, the route is configured to handle GET and POST requests for two different URIs, /speaker/topics and /speaker/sessions. The "upstream_id" field specifies the ID of the upstream service that will handle incoming requests for this route, while the "plugin_config_id" field specifies the ID of the plugin configuration to be used for this route. Step 8: Test the Setup Without OPA So far, we have set up all the necessary configurations for APISIX to direct incoming requests to Conference API endpoints, only allowing authorized API consumers. Now, each time an API consumer wants to access an endpoint, they must provide a JWT token to retrieve data from the Conference backend service. You can verify this by hitting the endpoint and the domain address we are requesting now is our custom API Gateway but not an actual Conference service: curl -i http://127.0.0.1:9080/speaker/1/topics -H 'Authorization: {API_CONSUMER_TOKEN}' Step 9: Run OPA Service The other two steps are we run the OPA service using Docker and upload our policy definition using its API which can be used to evaluate authorization policies for incoming requests. docker run -d --network=apisix-quickstart-net --name opa -p 8181:8181 openpolicyagent/opa:latest run -s This Docker command runs a container of the OPA image with the latest version. It creates a new container on the existing APISIX network apisix-quickstart-netwith the name opaand exposes port 8181. So, APISIX can send policy check requests to OPA directly using the address [http://opa:8181](http://opa:8181) Note that OPA and APISIX should run in the same docker network. Step 10: Define and Register the Policy The second step on the OPA side is you need to define the policies that will be used to control access to API resources. These policies should define the attributes required for access (which users have which roles) and the permission (which roles have which permissions) that are allowed or denied based on those attributes. 
For example, in the below configuration, we are saying to OPA, check the user_roles table to find the role for jack. This information is sent by APISIX inside input.consumer.username. Also, we are verifying the consumer’s permission by reading the JWT payload and extracting token.payload.permission from there. The comments describe the steps clearly. curl -X PUT '127.0.0.1:8181/v1/policies/rbacExample' \ -H 'Content-Type: text/plain' \ -d 'package rbacExample # Assigning user rolesuser_roles := { "jack": ["speaker"], "bobur":["admin"] } # Role permission assignments role_permissions := { "speaker": [{"permission": "read"}], "admin": [{"permission": "read"}, {"permission": "write"}] } # Helper JWT Functions bearer_token := t { t := input.request.headers.authorization } # Decode the authorization token to get a role and permission token = {"payload": payload} { [_, payload, _] := io.jwt.decode(bearer_token) } # Logic that implements RBAC default allow = falseallow { # Lookup the list of roles for the user roles := user_roles[input.consumer.username] # For each role in that list r := roles[_] # Lookup the permissions list for role r permissions := role_permissions[r] # For each permission p := permissions[_] # Check if the permission granted to r matches the users request p == {"permission": token.payload.permission} }' Step 11: Update the Existing Plugin Config With the OPA Plugin Once we defined policies on the OPA service, we need to update the existing plugin config for the route to use the OPA plugin. We specify in the policy attribute of the OPA plugin. curl "http://127.0.0.1:9180/apisix/admin/plugin_configs/1" -X PATCH -d ' { "plugins":{ "opa":{ "host":"http://opa:8181", "policy":"rbacExample", "with_consumer":true } } }' Step 12: Test the Setup With OPA Now we can test all setups we did with OPA policies. If you try to run the same curl command to access the API Gateway endpoint, it first checks the JWT token as the authentication process and sends consumer and JWT token data to OPA to verify the role and permission as the authorization process. Any request without a JWT token in place or allowed roles will be denied. curl -i http://127.0.0.1:9080/speaker/1/topics -H 'Authorization: {API_CONSUMER_TOKEN}' Conclusion In this article, we learned how to implement RBAC with OPA and Apache APISIX. We defined a simple custom policy logic to allow/disallow API resource access based on our API consumer’s role and permissions. Also, this tutorial demonstrated how we can extract API consumer-related information in the policy file from the JWT token payload or consumer object sent by APISIX. More
Testing Applications With JPA Buddy and Testcontainers

Testing Applications With JPA Buddy and Testcontainers

By Andrey Belyaev CORE
Testing is a cornerstone of any application lifecycle. Integration testing is a type of testing that helps to ensure that an application is functioning correctly with all of its external services, such as a database, authorization server, message queue, and so on. With Testcontainers, creating such an environment for integration testing becomes easier. However, setting just the environment is not enough for proper testing. Preparing test data is also an essential task. In this article, we will review the process of preparing application business logic tests. We will see how to set up Testcontainers for the application and explain some challenges we can meet during test data preparation. This article also has a companion video that guides you through the process of application testing with Testcontainers and JPA Buddy. Introduction: Application To Test Let’s review a small application allowing users to manage product stock. The application uses a “standard” technology stack: Spring Boot, Spring Data JPA, and PostgreSQL as a data store. It also contains a simple business logic: we can count a product amount for every product type. The project source code layout follows the default layout used by Gradle: Plain Text project-root - src - main - java - resources - test - java - resources We will refer to this layout later in the article. The data model consists of two entities and looks like this: Java @Entity @Table(name = "product_type") public class ProductType { @Id @Column(name = "id", nullable = false) private UUID id; @Column(name = "name", nullable = false) private String name; //Getters and setters removed for brevity } @Entity @Table(name = "product") public class Product { @Id @Column(name = "id", nullable = false) private UUID id; @Column(name = "name", nullable = false) private String name; @ManyToOne(fetch = FetchType.LAZY, optional = false) @JoinColumn(name = "product_type_id", nullable = false) private ProductType productType; //Getters and setters removed for brevity } The test data will consist of one product category (Perfume) and three products for this category. For this data, we create a simple test to verify business logic: Java @Test void quantityByProductTypeTest() { assertThat(stockService.getQuantityByProductType("Perfume")).isEqualTo(3L); } Business Logic Testing and Data Access Layer For business logic testing, we have two options: Implement mocks for the data access layer (Spring Data repositories for our case) Perform “proper” integration testing using a test database or similar setup with Testcontainers Mocking is faster to execute, does not require infrastructure setup, and allows us to isolate business logic from the other components of our application. On the other side, mocking requires much coding to prepare and support sample data to simulate the response. In addition, if we use JPA, we won’t be able to catch some edge case issues. For instance, the @Transactional annotation becomes useless in the case of mocks, so we won’t get LazyInitException in tests but can get it in production. Mocked data differs from “live” JPA entities with all these proxies, etc. Integration testing with a test database is closer to the real world; we use the same data access layer code that will run in the production. To perform the testing, we need to set up the test environment and prepare test data. As was said before, Testcontainers greatly simplify environment setup; this is what we will demonstrate later. 
As for the test data, we’ll use SQL scripts (or something similar) to add test data to the database. Let’s go through the preparation process and see how we can set up the environment for testing. Environment Setup: Database and Connection We are going to use JUnit5 and Testcontainers for our PostgreSQL database. Let's add the required dependencies. Groovy testImplementation 'org.junit.jupiter:junit-jupiter:5.9.2' testImplementation 'org.testcontainers:postgresql:1.18.0' testImplementation 'org.testcontainers:junit-jupiter:1.18.0' testImplementation 'org.testcontainers:testcontainers:1.18.0' We will use the test class usually generated by the start.spring.io utility as a base. We can find it in the root package in the test sources folder. Java @SpringBootTest public class StockManagementApplicationTests { @Test void contextLoads(){ } } To set up the application context, Spring Boot uses application.properties/.yml files. Creating separate files for different contexts (prod and test) is feasible; it allows us to separate production and test environments explicitly. Hence, our tests will use a dedicated properties file named application-test.properties. This file is usually located in the resources folder in the test sources section. project-root … - test - java - com.jpabuddy.example StockManagementApplicationTests.java - resources application-test.properties Now we can set up the rest of the application environment, namely the PostgreSQL database. There are several options to do it using Testcontainers. First, we can do it implicitly by specifying a particular DB connection URL in the application-test.properties file: spring.datasource.url=jdbc:tc:postgresql:alpine:///shop This URL instructs Testcontainers to start a database using the postgres:alpine image. Then Spring will connect to this database and use this connection for the datasource. This option allows using one shared container for all tests in the class. We do not need to specify anything but the URL in the application settings file and this file name in the test annotation. The database container will start automatically and be available for all tests specified in the class. Java @SpringBootTest @TestPropertySource(locations = "classpath:application-test.properties") public class StockManagementApplicationTests { @Test void contextLoads(){ } } If we need to fine-tune the container, we can use another option – create the container explicitly in the test code. For this case, we do not specify the connection URL for the datasource in the properties file but get it from the container in the code and put it into the Spring context using the @DynamicPropertiesSource annotation. The test code will look like this: Java @SpringBootTest @Testcontainers @TestPropertySource(locations = "classpath:application-test.properties") public class StockManagementApplicationTests { @Container static PostgreSQLContainer<?> postgreSQLContainer = new PostgreSQLContainer<>("postgres:alpine"); @DynamicPropertySource static void setProperties(DynamicPropertyRegistry registry) { registry.add("spring.datasource.url", postgreSQLContainer::getJdbcUrl); registry.add("spring.datasource.username", postgreSQLContainer::getUsername); registry.add("spring.datasource.password", postgreSQLContainer::getPassword); } @Test void contextLoads(){ } } Note that the container instance is static. It means that one container will be used for all tests specified in this class. If we need the container created for every test, we must make this property non-static. 
Creating a separate container for each test allows us to isolate tests from each other properly, but it dramatically affects test execution time. So, if we have many tests in one class, it would be preferable to run one container for all tests. As we can see, creating a test database instance can be a simple setup process, containerization solves this problem for us. Environment Setup: DB Schema Now we need to initialize our database: create the schema for JPA entities and insert some data. Let's start with the schema. HBM2DDL The simplest option is to add the spring.jpa.hibernate.ddl-auto property in the application-test.properties file and set its value to create-drop. Hibernate will recreate the schema every time in this case. However, this solution is far from ideal and very limited. First, with Hibernate 5, you cannot control what types will be generated for your columns, and these may differ from what you have in the production environment. Hibernate 6 solves this problem, but its adoption rate for production systems is not very high. Secondly, this solution will not work if you use non-standard mapping types with Hibernate Types or JPA Converters. Finally, you may need to generate other database objects like triggers or views for your test, which is obviously impossible. However, using the validate value for spring.jpa.hibernate.ddl-auto is always a good idea. For this case, Hibernate will check if your model is compatible with tables in the database. Hence, we can add it to our ‘application-test.properties’ file and continue to other options for the DB schema creation. spring.jpa.hibernate.ddl-auto=validate Spring Data Init Script Spring Boot provides us with an additional way to define the database schema. We can create a schema.sql file in the resources root location, which will be used to initialize the database. project-root … - test - java - com.jpabuddy.example StockManagementApplicationTests.java - resources application-test.properties schema.sql To execute this script during the context bootstrap, we need to set the spring.sql.init.mode property to always tell the application to execute this script. spring.datasource.url=jdbc:tc:postgresql:alpine:///shop spring.sql.init.mode=always To create a proper DDL script to initialize the database, we can use JPA Buddy. In the JPA Structure tool window, select + action and then invoke the Generate DDL by entities menu as shown in the picture: After that, select DB schema initialization as the DDL type and PostgreSQL as the target database. That’s it. We can review the generated SQL in the window and save it to the file schema.sql or copy it to the clipboard. There is more information on init script generation in the JPA Buddy documentation. Database Testcontainer Init Script We can also use Testcontainers to initialize the DB as described in the Testcontainers manual. In this case, we need to disable Spring Data's automatic script execution and specify the path to the script in the JDBC URL in the application-test.properties file: spring.datasource.url=jdbc:tc:postgresql:11.1:///shop?TC_INITSCRIPT=schema.sql spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect spring.sql.init.mode=never As for the script itself, we can generate it in the same way we did in the previous section. DB Versioning Tools The final option to create a test database schema is to use DB versioning tools (Liquibase/Flyway). This approach makes our test database identical to what we will have in production. 
Also, Spring Boot will execute all migration scripts automatically on test context startup, so no additional configuration is required. However, applying all migrations may be time-consuming compared to previous options. We do not need all migrations; we need a final schema to perform tests. There is an article showing how to “squash” DB migrations for Testcontainers and which gains we can get. The squashing process requires some additional coding, and we are not going to discuss it here.In general, by using DB versioning tools, we won’t get any advantages over Spring Data init script or Testcontainers init script, but test execution takes more time. DB Schema Setup: Conclusion When we need a DB schema for tests, running all DB migrations for each test class and container startup is unnecessary. All we need to do is to get the final DB initialization script and execute it. To do it, we have the following options: Spring Data built-in engine, a.k.a. “init.sql” Testcontainers “init script” parameter Think twice before deciding to use the other options: Liquibase/Flyway is a good option, but it is time-consuming due to the sequential execution of all migrations HBM2DDL is not recommended due to its inaccuracy, as described before. So, we have a test DB schema created one way or another. The next step – is the test data. Environment Setup: Test Data When implementing tests, we need to consider the following: Tests should be repeatable. It means they should return the same result if the input data is kept the same. Tests should be isolated. It is essential if we use a database with test data. Tests should spoil other tests’ data. Test execution order should not affect test results. All above means that we should prepare test data carefully, considering all operations that can be performed during test execution. Also, we need to clean up (or fully recreate) test data after the test run to ensure that our test does not affect others. Test data can be split into two parts: common data like cities, countries, or product categories, in our case. This referenced information is usually static and shared between tests. On the other hand, we have data required by the test itself, which can be changed during test execution. For our test, it will be a list of products for a particular category. Adding Common Test Data If we keep a single container running until all tests in a class are executed, it makes sense to create the shared data once before all tests are started. If we use Spring Data to create the test database, we need to add the data.sql file with INSERT statements with the shared data right next to the schema.sql in the test resources root folder. This script will be executed after the schema creation once the spring.sql.init.mode property is set to always. That’s it. project-root … - test - java - com.jpabuddy.example StockManagementApplicationTests.java - resources application-test.properties schema.sql data.sql If we use the Testcontainers init script, we’ll need to add this shared test data into the DB schema init script schema.sql after schema creation DDL. So, Testcontainers will create both schema and shared data on the container and start by executing the script. Regarding DB versioning tools, it is essential to separate test data from the production one. For Liquibase, we can use contexts. For every test changeset, we can append a context tag enabled for test execution only, as described in the corresponding article. In the migration scripts, this test data will look like this. 
XML <changeSet id="1" author="jpabuddy" context="test-data-common"> <sql> INSERT INTO product_type (id, name) VALUES ('7af0c1a4-f61d-439a-991a-6c2c5d510e14', 'Perfume'); </sql> </changeSet> We can specify the context tag in the application-test.properties file similar to this: spring.liquibase.contexts=test-data-common The only problem with this approach is separating Liquibase scripts with test data from ones containing prod data. If we move changesets with test data into test resources, we need to create and support an additional Liquibase master file in test resources that will include schema creation script from the main codebase and test data creation from the test one. For example, consider the following application resources layout: project-root - main - java - resources - db.changelog db.changelog-master.xml db.changelog-create-schema.xml db.changelog-production-data.xml - test - java - resources - db.changelog db.changelog-test-master.xml db.changelog-common-test-data.xml application-test.properties So, we need to keep two master files in sync: in the main folder and test one. Other than that, Liquibase contexts work fine. With Flyway, we can use different paths to versioning scripts for the test and production databases. We should use different .properties files to run various scripts and enable them using profiles or @TestPropertySource annotation. For example, in the application.properties file, we can have this entry: spring.flyway.locations=classpath:db/migration,classpath:db/data For tests, we can use other paths in our application-test.properties: spring.flyway.locations=classpath:db/migration,classpath:db/test-data So, we can put schema creation and prod data to the src/main/resources/db/migration and src/main/resources/db/data, respectively, but test data is stored in src/test/resources/db/test-data. project-root - main - java - resources - db - migration V0__create_schema.sql - data V1__add_prod_data.sql - test - java - resources - db - test-data V1__add_test_data.sql application-test.properties Again, like Liquibase, the support process for these scripts is critical; there are yet to be tools to help you with proper test data script arrangement. We must track migration version numbers carefully and prevent prod data from leaking to tests and vice versa. In conclusion: Spring Data’s data.sql or Tectcontainers’ init script adds the minimum maintenance work to add “common” test data. Suppose we decide to use DB versioning tools solely. In that case, we’ll need to remember about Liquibase contexts or keep tracking Flyway DB versions for different databases (test/prod), which is mundane and error-prone work. Adding Test-Specific Data Creating test data for every test in the class is even more challenging. We should insert test data before every test and delete it after test execution to prevent test data contamination. Of course, if we put all required data into the same script as test data and recreate a container after every test, it will resolve the problem, but we’ll spend more time on test execution. So, which options do we have? In JUnit 5, the “standard” way to do an action before a test is to put it into a method annotated with @BeforeEach and @AfterEach. With test data, this may do the trick, but we need to remember the following: @Transactional annotation does not work for test lifecycle methods as stated in the documentation. So, we need to use TransactionTemplate or something similar for transaction management while creating test data before each test. 
Adding @Transactional on a class level won’t help much. In this case, all data manipulations will be executed in a single transaction, so Hibernate will use its L1 cache and won’t even flush data to the DB. For example, the @DataJpaTest annotation works this way. It is a meta-annotation. Among many others, it is marked with the @Transactional annotation. So, all tests marked with @DataJpaTest will open a single transaction for the whole test class. Methods annotated with @BeforeEach are executed before each test method, obviously. It means we need to know which exact method is executed and initialize its data set. It means a lot of “if” statements in the method and problems with supporting them. Also, we can try using the @BeforeAll and @AfterAll methods to initialize all test data at once. This approach also has some disadvantages: These methods are static, so that Spring annotation-based injection won’t work for them. It means that we’ll need to get required beans like EntityManager manually. Transactions still won’t work properly. We’ll need to design test data for all tests in the test suite so that it won’t interfere. It is challenging, especially if we have several tests checking contradicting cases. @Before* and @After* methods work fine if we use DB migration tools to apply test-specific scripts. For example, for the Flyway, we can write something like this: Java @Autowired private FlywayProperties flywayProperties; @Autowired private DataSource dataSource; @BeforeEach void setUp(TestInfo testInfo) { String testMethodName = testInfo.getTestMethod().orElseThrow().getName(); List<String> locations = flywayProperties.getLocations(); locations.add("classpath:db/%s/insert".formatted(testMethodName)); Flyway flyway = Flyway.configure() .dataSource(dataSource) .locations(locations.toArray(new String[0])) .load(); flyway.migrate(); } In this code snippet, we create an empty Flyway bean and add the path to test-specific migrations into its configuration. Please note that DB schema and “common” test data should be created beforehand. So, in the application-test.properties, we still need to provide paths to these migration scripts as stated in the previous section: spring.flyway.locations=classpath:db/migration,classpath:db/test-data To delete test data, we’ll need to write a similar code in the @AfterEach method and execute migrations to remove test data. As mentioned before, these methods would be executed for all tests in a test class, so we’ll need to specify the exact script based on the current method name. Spring framework provides another way to create test data for one test - the @Sql annotation. We can add it to a test method and specify the path to a desired script that should be executed. In addition to this, we can set the execution time for each script. It means it is possible to define data insert and cleanup scripts and use them for every test. The test code will look like this: Java @Test @Sql(scripts = "insert-products.sql", executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD) @Sql(scripts = "delete-products.sql", executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD) void quantityByProductTypeTest() { assertThat(stockService.getQuantityByProductType("Perfume")).isEqualTo(3L); } This approach does not require special transaction management or “if” statements. As a downside, we’ll need to manage and support many small SQL scripts for every test. For the code above, every script should be placed in the same package as the test class. 
Conclusion With Testcontainers, integration testing becomes much easier. There is no need to mock external services like databases to test business logic. When we need to do this, we can simply set up the database test container by specifying a particular URL or right in the test class. Creating test data is a bit more challenging process. The simplest option is to recreate a test container with all test data for every test, but this approach increases test execution time. If we want to share one test container between different tests, we need to create test data and clean it up after tests. It looks like the most efficient way to do this is as follows: Create DB schema using Spring Data schema.sql or Testcontainers init script. Insert shared data that is not changed by tests using Spring Data data.sql or Testcontainers init script. Add test-specific data, create and cleanup scripts using @Sql test annotation. To generate schema initialization scripts, we can use JPA Buddy – it dramatically simplifies this job. Also, do not forget to enable schema validation for Hibernate. It will let us be sure that we have the latest DB version used for our tests. Managing DB versioning scripts to create test data for the shared container looks a bit more complex than the one described above. Although possible to implement, it is easy to mix prod and test data or get confused with migration scripts execution orders. Also, it requires additional code (hence support) in unit tests. More
Integrating AWS With Salesforce Using Terraform
Integrating AWS With Salesforce Using Terraform
By Vladislav Bilay
Apache Kafka vs. Message Queue: Trade-Offs, Integration, Migration
Apache Kafka vs. Message Queue: Trade-Offs, Integration, Migration
By Kai Wähner CORE
Health Check Response Format for HTTP APIs
Health Check Response Format for HTTP APIs
By Nicolas Fränkel CORE
IBM App Connect Enterprise CI Builds With On-Demand Service Provisioning
IBM App Connect Enterprise CI Builds With On-Demand Service Provisioning

The industry-wide move to continuous integration (CI) build, and test presents a challenge for the integration world due to the number and variety of resource dependencies involved, such as databases, MQ-based services, REST endpoints, etc. While it is quite common to automate testing using dedicated services (a “test” or “staging” database, for example), the fixed number of these services limits the number of builds that can be tested and therefore limits the agility of integration development. Containerization provides a way to increase CI build scalability without compromising quality by allowing a database to be created for each run and then deleted again after testing is complete. This does not require the integration solution to be deployed into containers in production and is compatible with deploying to integration nodes: only the CI pipeline needs to be able to use containers and only for dependent services. Quick summary: Start a containerized database, configure ACE policy, and run tests; a working example can be found at ot4i. Background Integration flows often interact with multiple external services such as databases, MQ queue managers, etc., and testing the flows has historically required live services to be available. This article is focused on databases, but other services follow the same pattern. A common pattern in integration development relies on development staff building applications and doing some level of testing locally, followed by checking the resulting source into a source-code management system (git, SVN, etc.). The source is built into a BAR file in a build pipeline and then deployed to an integration node for further testing, followed by promotion to the next stage, etc. While the names and quantity of the stages differ between different organizations, the overall picture looks something like this: This style of deployment pipeline allows organizations to ensure their applications behave as expected and interact with other services correctly but does not usually allow for changes to be delivered both quickly and safely. The key bottlenecks tend to be in the test stages of the pipeline, with build times as a less-common source of delays. While it is possible to speed up delivery by cutting back on testing (risky) or adding large numbers of QA staff (expensive), the industry has tended towards a different solution: continuous integration (CI) builds with finer-grained testing at earlier stages to catch errors quickly. With a CI pipeline enabled and automated finer-grained testing added, the picture changes to ensure more defects are found early on. QA and the other stages are still essential, but these stages do not see the same level of simple coding bugs and merge failures that might have been seen with the earlier pipelines; such bugs should be found in earlier stages, leaving the QA teams able to focus on the more complex scenarios and performance testing that might be harder to achieve in earlier stages. A simple CI pipeline (which could be Jenkins, Tekton, or many other tools) might look something like this: Note that (as discussed above) there would usually be environments to the right of the pipeline, such as staging or pre-prod, that are not shown in order to keep the diagram simple. The target of the pipeline could be containers or integration nodes, with both shown in the diagram; the DB2 database used by the pipeline could also be in either infrastructure. 
The pipeline steps labeled “Unit Test” and “Integration Test” are self-explanatory (with the services used by integration testing not shown), but “Component Test” is more unusual. The term “component test” was used in the ACE product development pipeline to mean “unit tests that use external services” and is distinct from integration testing because component tests only focus on one service. See ACE unit and component tests for a discussion of the difference between test styles in integration. This pipeline benefits from being able to shift testing “left,” with more testing being automated and running faster: the use of ACE v12 test capabilities (see JUnit support for flow testing for details) allows developers to run the tests on their own laptops from the toolkit as well as the tests being run automatically in the pipeline, and this can dramatically reduce the time required to validate new or modified code. This approach is widely used in other languages and systems, relying heavily on unit testing to achieve better outcomes, and can also be used in integration. This includes the use of component tests to verify interactions with services, resulting in the creation of large numbers of quick-to-run tests to cover all the required use cases. However, while shifting left is an improvement, it is still limited by the availability of live services to call during the tests. As development agility becomes more important and the frequency of build/test cycles becomes greater due to mandatory security updates as well as code changes, the need to further speed up testing becomes more pressing. While it is possible to do this while still using pre-provisioned infrastructure (for example, creating a larger fixed set of databases to be used in testing), there are still limits on how much testing can be performed at one time: providing enough services for all tests to be run in parallel might be theoretically possible but cost-prohibitive, and the next section describes a cheaper solution. On-Demand Database Provisioning While the Wikipedia article on shift-left testing says the “transition to traditional shift-left testing has largely been completed,” this does not appear to be true in the integration world (and is debatable in the rest of the industry). As the point of a lot of integration flows is to connect systems together, the availability of these systems for test purposes is a limiting factor in how far testing can be shifted left in practice. Fortunately, it is possible to run many services in containers, and these services can then be created on-demand for a pipeline run. Using a DB2 database as an example, the pipeline picture above would now look as follows: This pipeline differs from the previous picture in that it now includes creating a database container for use by tests. This requires the database to be set up (schemas, tables, etc. created) either during the test or in advance, but once the scripts and container images are in place, then the result can be scaled without the need for database administrators to create dedicated resources. Note the target could still be either integration nodes or containers. Creating a new database container every time means that there will never be data left in tables from previous runs, nor any interference from other tests being run by other pipeline runs at the same time; the tests will be completely isolated from each other in both space and time. 
Access credentials can also be single-use, and multiple databases can be created if needed for different tests that need greater isolation (including integration testing). While isolation may not seem to be relevant if the tests do nothing, but trigger reads from a database, the benefits become apparent when inserting new data into the database: a new database will always be in a clean state when testing starts, and so there is no need to keep track of entries to clean up after testing is complete. This is especially helpful when the flow code under test is faulty and inserts incorrect data or otherwise misbehaves, as (hopefully) tests will fail, and the whole database (including the garbage data) will be deleted at the end of the run. While it might be possible to run cleanup scripts with persistent databases to address these problems, temporary databases eliminate the issue entirely (along with the effort required to write and maintain cleanup scripts). More Test Possibilities Temporary databases combined with component testing also make new styles of testing feasible, especially in the error-handling area. It can be quite complicated to trigger the creation of invalid database table contents from external interfaces (the outer layers of the solution will hopefully refuse to accept the data in most cases), and yet the lower levels of code (common libraries or sub-flows) should be written to handle error situations where the database contains unexpected data (which could come from code bugs in other projects unrelated to integration). Writing a targeted component test to drive the lower level of code using a temporary database with invalid data (either pre-populated or created by the test code) allows error-handling code to be validated automatically in an isolated way. Isolated component testing of this sort lowers the overall cost of a solution over time: without automated error testing, the alternatives tend to be either manually testing the code once and then hoping it carries on working (fast but risky) or else having to spend a lot of developer time inspecting code and conducting thought experiments (“what happens if this happens and then that happens?”) before changing any of the code (slow but safer). Targeted testing with on-demand service provision allows solution development to be faster and safer simultaneously. The underlying technology that allows both faster and safer development is containerization and the resulting ease with which databases and other services can be instantiated when running tests. This does not require Kubernetes, and in fact, almost any technology would work (docker, Windows containers, etc.) as containers are significantly simpler than VMs when it comes to on-demand service provision. Cloud providers can also offer on-demand databases, and those would also be an option as long as the startup time is acceptable; the only critical requirements are that the DB be dynamic and network-visible. Startup Time Considerations On-demand database containers clearly provide isolation, but what about the time taken to start the container? If it takes too long, then the pipeline might be slowed down rather than sped up and consume more resources (CPU, memory, disk, etc.) than before. Several factors affect how long a startup will take and how much of a problem it is: The choice of database (DB2, Postgres, etc.) makes a lot of difference, with some database containers taking a few seconds to start while others take several minutes. 
This is not usually something that can be changed for existing applications, though for new use cases, it might be a factor in choosing. It is possible to test with a different type of database that is used in production, but this seriously limits the tests. The amount of setup needed (tables, stored procedures, etc.) to create a useful database once the database has started. This could be managed by the tests themselves in code, but normally it is better to use the existing database scripts responsible for creating production or test databases (especially if the database admins also do CI builds). Using real scripts helps ensure the database looks as it should, but also requires more work up-front before the tests can start. Available hardware resources can also make a big difference, especially if multiple databases are needed to isolate tests. This is also affected by the choice of database, as some databases are more resource-intensive than others. The number of tests to be run and how long they take affect how much the startup time actually matters. For a pipeline with ten minutes of database testing, a startup time of one minute is less problematic than it would be for a pipeline with only thirty seconds of testing. Some of these issues can be mitigated with a small amount of effort: database container images can be built in advance and configured with the correct tables and then stored (using docker commit if needed) as a pre-configured image that will start more quickly during the pipeline runs. The database can also be started at the beginning of the pipeline so it has a chance to start while the compile and unit test phases are running; the example in the ACE demo pipeline repo (see below) does this with a DB2 container. Limitations While on-demand databases are useful for functional testing, performance testing is harder: database containers are likely to be sharing hardware resources with other containers, and IO may be unpredictable at times. Security may also be hard to validate, depending on the security configuration of the production databases. These styles of testing may be better left to later environments that use pre-provisioned resources, but the earlier pipeline stages should have found most of the functional coding errors before then. To be most effective, on-demand provisioning requires scripts to create database objects. These may not always be available if the database has been built manually over time, though with the moves in the industry towards database CI, this should be less of a problem in the future. Integration Node Deployments Although the temporary database used for testing in the pipeline is best to run as a container, this does not mean that the pipeline must also end with deployment into container infrastructure such as Kubernetes. The goal of the earlier pipeline stages is to find errors in code and configuration as quickly as possible, and this can be achieved even if the application will be running in production in an integration node. The later deployment environments (such as pre-prod) should match the production deployment topology as closely as possible, but better pipeline-based testing further left should mean that fewer bugs are found in the later environments: simpler code and configuration issues should be caught much earlier. This will enhance agility overall even if the production topology remains unchanged. 
In fact, it is often better to improve the earlier pipeline stages first, as improved development efficiency can allow more time for work such as containerization. Example of Dynamic Provisioning The ACE demo pipeline on OT4i (google “ace demo pipeline”) has been extended to include the use of on-demand database provision. The demo pipeline uses Tekton to build, test, and deploy a database application (see description here), and the component tests can use a DB2 container during the pipeline run: The pipeline uses the DB2 Community Edition as the database container (see DB2 docs) and can run the IBM-provided container due to not needing to set up database objects before running tests (tables are created by the tests). Due to the startup time for the container, the database is started in the background before the build and unit test step, and the pipeline will wait if needed, for the database to finish starting before running the component tests. A shutdown script is started on a timer in the database container to ensure that it does not keep running if the pipeline is destroyed for any reason; this is less of a concern in a demo environment where resources are free (and limited!) but would be important in other environments. Note that the DB2 Community Edition license is intended for development uses but still has all the capabilities of the production-licensed code (see DB2 docs here), and as such, is a good way to validate database code; other databases may require licenses (or be completely free to use). Summary CI pipelines for integration applications face challenges due to large numbers of service interactions, but these issues can be helped by the use of on-demand service provisioning. This is especially true when combined with targeted testing using component-level tests on subsections of a solution, allowing for faster and safer development cycles. This approach is helped by the widespread availability of databases and other services in containers that can be used in pipelines without requiring a wholesale move to containers in production. Used appropriately, the resulting shift of testing to the left has the potential to help many integration organizations develop high-quality solutions with less effort, even without a wholesale move to containers in all environments.

By Trevor Dolby
RAML vs. OAS: Which Is the Best API Specification for Your Project?
RAML vs. OAS: Which Is the Best API Specification for Your Project?

Designing and documenting APIs well is essential to working on API projects. APIs should be easy to use, understand, and maintain. Ensure your API design is clearly and effectively communicated to your users and teammates. You need to generate the entirety of your API design, together with documentation, code, assessments, mocks, and so forth. What is the excellent way to do all that? An API specification language is one way to do that. An API spec language is a way of writing down your API design using a general layout that humans and machines can read. It lets you write your API layout in a simple and structured manner, which you may use to create all varieties of awesome things. There are many API spec languages. However, the popular ones are RAML and OAS. RAML stands for RESTful API Modeling Language, and it's a language that uses YAML to put into writing your APIs. OAS stands for OpenAPI Specification, a language that uses JSON or YAML to write down your APIs. In this post, I will evaluate RAML and OAS and tell you what they can do for you. Features RAML and OAS have some standard features. They both: Support RESTful APIs, which are a way of making APIs that follow some rules and conventions for how they work on the web. Let you write the resources, methods, parameters, headers, responses, schemas, examples, security schemes, and more of your APIs. Let you reuse stuff across your APIs, like data types, traits, resource types, parameters, responses, etc., which you can write once and use many times. Let you split your API spec into multiple files and import them using references or includes. Let you check your API spec for errors and make sure it follows the language rules. Let you make docs for your APIs from your API spec that your users can see and try. But RAML and OAS also have some differences. For example: RAML uses YAML as its format, while OAS uses JSON or YAML. YAML is shorter and friendlier to look at than JSON, but JSON is more common and supported by more tools and platforms. RAML has annotations, which are extra things that you can add to any part of your API spec. Annotations can give you more info or data about your APIs that the language doesn't cover. For example, you can use annotations to mark some parts as old or new. OAS doesn't have annotations by itself, Benefits RAML and OAS can both help you with your API design in different ways. They both: Help you make consistent APIs and follow RESTful APIs' best practices and conventions. Help you share your API design with your users and teammates. Help you make all kinds of cool things from your API specs, like docs, code, tests, mocks, etc., which can save you time and hassle. Help you work with different tools and platforms that support the language rules, like editors, frameworks, libraries, etc., which can make your work easier and better. But RAML and OAS also have some unique benefits that make them stand out from each other. For example: RAML is more expressive and flexible than OAS. It lets you write your APIs more naturally and easily, using features like annotations, overlays, extensions, libraries, etc. It also lets you customize your API spec to fit your needs and style. OAS is more interoperable and compatible than RAML. It lets you write your APIs in a more universal and standard way, using features like OpenAPI extensions, callbacks, links, etc. It also enables you to use the OpenAPI ecosystem, which has a significant and active community of developers, vendors, tools, etc. 
Drawbacks

RAML and OAS also come with some downsides. They both:

- Require some learning and skill to use well. You need to understand the language's format and rules as well as RESTful concepts and principles.
- May not cover every situation you want to express or document for your APIs, so you may need other tools or methods to complete your API spec.
- May only be compatible with specific tools or platforms for API development or consumption, so you may need converters or adapters to switch between languages or formats.

Each language also has drawbacks of its own. For example:

- RAML is less popular and mature than OAS. It has a smaller and less active community of developers, vendors, and tools, which may limit its growth and improvement, and it has fewer features and options than OAS, particularly around security and linking.
- OAS is more complex and lengthy than RAML. Its language rules are more extensive and diverse, which can make specs harder to read and write, and its larger feature set can be overwhelming when you only need the basics.

Use Cases

Depending on your goals and preferences, you can use RAML and OAS for very different API projects. Both:

- Can be used for any type of RESTful API, whether public or private, simple or complex, internal or external.
- Can be used at any stage of the API lifecycle, whether design, development, testing, deployment, or management.
- Can be used by any size of API team, whether solo or collaborative, small or large, local or remote.

There are also scenarios where one is a better fit than the other. For example:

- RAML suits API projects that call for more creativity and flexibility: experimental, innovative, or customized APIs, and APIs that focus more on the user experience and business logic than on technical details.
- OAS suits API projects that call for more interoperability and compatibility: standard, stable, or compliance-driven APIs, and APIs that focus more on technical details and integration than on user experience and business logic.

Which One Do I Opt For, RAML or OAS?

Well, to be honest, I like both. Each has strengths and weaknesses and can help with my API design in different ways. But if I had to choose one, I would go with RAML. Why? Because RAML is more fun and less complicated to use than OAS. It lets me write my APIs in a simple and expressive way using YAML and annotations, and it lets me tailor my APIs to my needs and style with overlays, extensions, libraries, and so on. Don't get me wrong: OAS is great too. I appreciate how OAS lets me describe my APIs in a universal, standard way, using JSON or YAML and OpenAPI extensions, and how it plugs me into the OpenAPI ecosystem, with all of its excellent tools and resources. But sometimes I find OAS too complex and lengthy for my taste: it makes me write a lot of detail for my APIs and leaves less room for creativity and flexibility. Of course, this is just my view. You may reach a different conclusion based on your experience and use cases. The best API spec language is the one that works best for you.

Conclusion

In this post, I compared RAML and OAS, two popular API spec languages, and showed what each can do for your API design in terms of features, benefits, drawbacks, and use cases.

By Madhu Mallisetty
Harnessing the Power of Integration Testing

Integration testing plays a pivotal role in ensuring that these interconnected pieces work seamlessly together, validating their functionality, communication, and data exchange. Explore the benefits and challenges of integration testing here! In the ever-evolving world of software development, creating complex systems that seamlessly integrate various components and modules has become the norm. As developers strive to deliver robust, reliable, and high-performing software, the significance of integration testing cannot be overstated. Integration testing plays a pivotal role in ensuring that the interconnected components of software work seamlessly together, validating their functionality, communication, and data exchange. In this blog, we will embark on a journey to explore the realm of integration testing. We'll delve into its benefits and also examine the complexities and potential roadblocks that can arise during this critical phase of the software development life cycle. Let's explore the approaches, conquer the challenges, and unlock the potential of this testing approach for delivering robust and reliable software systems.

What Is Integration Testing?

Integration testing is a software testing approach that focuses on verifying the interactions between different components or modules of a system. It is performed after unit testing and before system testing. The goal of integration testing is to identify defects that may arise from the combination of these components and ensure that they work correctly together as a whole. During integration testing, individual modules or components are combined and tested as a group, often using automated testing tools. The purpose is to validate their interactions, data flow, and communication protocols. This testing level is particularly important in complex systems where multiple components need to collaborate to accomplish a specific functionality. Integration testing can be approached in different ways:

Big Bang Integration: All the components are integrated simultaneously, and the system is tested as a whole. This method is suitable for small systems with well-defined components and clear interfaces.

Top-Down Integration: Testing starts with the higher-level components, simulating the missing lower-level ones with stubs or dummy modules. Lower-level components are then integrated and tested step by step until the entire system is covered. This method helps identify issues with the overall system architecture early on.

Bottom-Up Integration: Testing begins with the lower-level components, using driver modules to simulate the higher-level components. Higher-level components are progressively integrated and tested until the complete system is covered. This method is useful when the lower-level components are more stable and less prone to change.

Sandwich (or Hybrid) Integration: This approach combines the top-down and bottom-up strategies, integrating and testing subsets of modules from top to bottom and bottom to top simultaneously until the complete system is tested. It balances the advantages of both approaches.

By using integration testing tools, teams can find defects like interface mismatches, data corruption, incorrect data flow, errors in inter-component communication, and functionality issues that emerge when components interact.
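To make the stub and driver ideas above concrete, here is a minimal, self-contained Kotlin sketch. The OrderService, PaymentGateway, and the in-memory stub are invented for illustration; in a real top-down test you would substitute a stub for whichever lower-level module is not ready yet.

// Lower-level dependency that may not be ready during top-down integration testing.
interface PaymentGateway {
    fun charge(orderId: String, amount: Double): Boolean
}

// Higher-level component under test.
class OrderService(private val gateway: PaymentGateway) {
    fun placeOrder(orderId: String, amount: Double): String =
        if (gateway.charge(orderId, amount)) "CONFIRMED" else "REJECTED"
}

// Stub standing in for the real payment gateway (top-down approach).
class PaymentGatewayStub(private val succeed: Boolean) : PaymentGateway {
    override fun charge(orderId: String, amount: Double) = succeed
}

fun main() {
    // Integration of OrderService with a stubbed lower-level module.
    val service = OrderService(PaymentGatewayStub(succeed = true))
    check(service.placeOrder("order-1", 49.99) == "CONFIRMED")

    val failing = OrderService(PaymentGatewayStub(succeed = false))
    check(failing.placeOrder("order-2", 10.0) == "REJECTED")
    println("Top-down integration sketch passed")
}

In a bottom-up approach the roles would be reversed: the real PaymentGateway implementation would be exercised through a small driver, and the higher-level OrderService would be the piece that is simulated.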
By detecting and resolving these issues early in the development lifecycle, this testing approach helps ensure that the system's components function harmoniously together. This ensures improved software quality and reliability, thus increasing the overall software application's performance.

Uncovering the Benefits of Integration Testing

After knowing what integration testing is, we'll explore its benefits now. Here are some of the benefits of integration testing in the software development process:

Early Detection of Integration Issues: Integration testing helps identify and resolve issues that arise when multiple components or modules interact with each other. By detecting and addressing integration problems early on, you can prevent them from escalating into more significant issues during later stages of development or in the production environment.

Improved System Reliability: Integration testing ensures that different components work together seamlessly, enhancing the overall reliability of the system. By verifying the correctness of data exchanges, communication protocols, and interdependencies between modules, you can reduce the risk of system failures and errors.

Enhanced System Performance: Integration testing allows you to evaluate the performance of the integrated system as a whole and monitor software applications' performance. By measuring response times, resource utilization, and system throughput during integration, you can identify performance bottlenecks, optimize resource allocation, and ensure that the system meets performance requirements.

Increased Confidence in System Behavior: Integration testing helps build confidence in the behavior and functionality of the system. By validating the end-to-end flow of data and functionality across different components, you can ensure that the system behaves as expected and that business processes are executed correctly.

Facilitates Collaboration and Communication: Integration testing encourages collaboration among development teams responsible for different components. It provides a platform for teams to communicate, coordinate, and resolve integration-related issues, fostering better teamwork and reducing silos in the development process.

Cost and Time Savings: Detecting and fixing integration issues early in the development cycle reduces the cost and effort required for rework and bug fixing. By identifying issues before they propagate to later stages of production, you can save time and resources that would otherwise be spent on troubleshooting and resolving complex problems.

Integration testing plays a crucial role in ensuring the robustness, reliability, and performance of a system by addressing integration challenges early and ensuring smooth interoperability between different components.

Navigating the Complexities: Challenges in Integration Testing

Integration testing can be a complex and challenging task due to the various factors involved in testing interactions between different components. Here are some of the common challenges faced in integration testing:

Dependency Management: Integration testing involves testing interactions between different components or modules. Managing dependencies between these components can be challenging, especially when changes in one component affect others.

Environment Setup: Integration testing often requires a complex setup of various environments, such as databases, APIs, servers, and third-party services.
Setting up and configuring these environments correctly can be time-consuming and error-prone.

Stability and Consistency: Integration tests may fail due to the instability of external systems or components. Ensuring consistent and reliable results can be difficult, especially when dealing with third-party systems that may not be under your control.

Debugging and Troubleshooting: Identifying the root cause of issues detected during integration testing can be challenging, as failures can result from complex interactions between components. Debugging and troubleshooting such issues requires a deep understanding of the system architecture and potential failure points.

Versioning and Compatibility: Integration testing becomes more challenging when dealing with different versions of components or systems. Ensuring compatibility between versions and managing version dependencies can be complex, particularly in distributed or microservices architectures.

Final Thoughts

Integration testing allows us to catch defects that may go unnoticed during unit testing and provides valuable insights into the system's behavior in real-world scenarios. As a result, it helps mitigate risks, enhances system robustness, and ensures the proper functioning of the system. To effectively harness the power of integration testing, it is crucial to adopt best practices such as maintaining a stable test environment, establishing clear test objectives, and employing automation testing tools and frameworks. Collaboration between development, testing, and operations teams is also vital to address challenges promptly and foster a culture of quality throughout the software development lifecycle. As software systems continue to grow in complexity, integration testing remains an essential discipline that enables us to build reliable and scalable solutions. Integration testing helps identify issues that may arise due to interactions between different components, such as compatibility problems, communication failures, or incorrect data flow. By embracing the approaches and conquering the challenges discussed in this blog, teams can elevate their integration testing practices and unlock the full potential of their software systems.

By Ruchita Varma
What Are Events? Process, Data, and Application Integrators

I first started developing SAP Workflows 25 years ago: five years prior to Gartner announcing that Event-Driven Architecture (EDA) would be the "next big thing" to follow from Service-Oriented Architecture (SOA) – SOA likewise still in its infancy in 2003. What's interesting about this is that workflows themselves are, and always have been, "event-driven." The advancement of any given workflow from one step to the next step always depends upon a particular event (or events) occurring, such as PurchaseOrder.Approved. As you might already know, SAP has been developing ERP software since 1972, making it by far one of the most experienced software vendors in the world to have built its product offerings around business processes, around Process Integration. What's also interesting to note in this regard is that SAP very recently confirmed the tight link between workflows and Process Integration by regrouping and rebranding their cloud-based "Workflow Management" solution into a new offering: "Process Automation" — apparently the first time in 25 years that their Workflow solution carries a name other than "Workflow." SAP also prides itself on its expertise in the field of Data Integration: SAP actually stands for Systems, Applications, and Products in data processing, and we are told that their ERPs "touch 77% of global transaction revenue" – resulting in an enormous need for Data Integrations. SAP's very latest offering in this domain is the poorly named "Data Intelligence Cloud: ... A solid foundation for your data fabric architecture to connect, discover, profile, prepare, and orchestrate all your enterprise data assets." SAP has more than 50 years of experience in integration, and they want to share their knowledge with their customers. They created a guide called the Integration Solution Advisory Methodology, which is a 150-page template that shows how to integrate different systems. SAP identified five different integration styles: Process Integration, Data Integration, Analytics Integration, User Integration, and IoT/Thing Integration. Each style has a primary trigger, such as an application event, schedule, user event, or thing event. SAP updated this information only nine months ago. The important thing to note is that SAP says all the integration styles they identified can be triggered/driven by events. You may have also noticed the strange absence of Application Integration from this list. This is particularly strange given that if you research SAP's latest iPaaS offering on the web today, you are likely to read that it will enable you to: "Integrate Applications, Data, and Processes seamlessly across your enterprise." Part of the reason for SAP's apparently only partial embrace of Application Integration's potential is that they are actually quite far behind in this domain in comparison to the other IT giants. This is hardly surprising given that its flagship product – its ERP – is and always has been a monolith; Application Integration being an almost non-subject in the ancient world of monolithic applications. SAP is so far behind in the domain of Event-Driven Architecture in particular that it opted to resell two separate Solace products – PubSub+ Event Broker and PubSub+ Platform – as its own. To drive the point even further, SAP has rebadged these two products as "Event Mesh" and "Advanced Event Mesh," yet neither of the products actually corresponds with Solace's own definition of an Event Mesh, suggesting that SAP doesn't have a solid grasp of what an Event Mesh represents. 
Had SAP instead developed their EDA offerings in-house and consequently matured in their understanding of what good EDA looks like – certainly not like an Event Mesh – they might just have noticed a very interesting point: a point that I concede appears to have been missed by more-or-less everyone up until now. In the context of EDA, there is absolutely no difference between Process, Data, and Application integration: the "Event" should be the pulse of all well-architected integrations today. While this might not surprise you greatly in the case of Process and Application integration, as I first wrote in December 2020, Events are additionally "the mother of all data" and, as such, should also be used as the basis for all modern, real-time, data integrations.
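The claim above, that in an event-driven world process, data, and application integration all hang off the same event, can be sketched in a few lines of Kotlin. The PurchaseOrderApproved event, the workflow step, and the data-lake writer below are all invented for illustration; the point is simply that one published event can drive a process engine and feed a data pipeline at the same time.

// A business event, such as the PurchaseOrder.Approved example mentioned earlier.
data class PurchaseOrderApproved(val orderId: String, val amount: Double)

// "Process integration": the event advances a workflow to its next step.
class ApprovalWorkflow {
    fun onEvent(event: PurchaseOrderApproved) =
        println("Workflow: advancing order ${event.orderId} to fulfilment")
}

// "Data integration": the same event is appended to an analytical store.
class DataLakeWriter {
    private val records = mutableListOf<PurchaseOrderApproved>()
    fun onEvent(event: PurchaseOrderApproved) {
        records += event
        println("Data lake: stored ${records.size} approval record(s)")
    }
}

fun main() {
    val subscribers = listOf<(PurchaseOrderApproved) -> Unit>(
        ApprovalWorkflow()::onEvent,
        DataLakeWriter()::onEvent,
    )
    // One event, published once, consumed by both integration styles.
    val event = PurchaseOrderApproved(orderId = "PO-1001", amount = 2500.0)
    subscribers.forEach { it(event) }
}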

By Cameron HUNT
Integration Architecture Guiding Principles, A Reference

The Integration Architecture guiding principles are guidelines to increase the consistency and quality of technology decision-making for integration solutions. They describe the big picture of the enterprise within the context of its technology intent and impact on the organization. In the context of Integration Architecture, the guiding principles drive the definition of the target state of the integration landscape. Each principle contains a statement, a rationale, and its implications.

1. Centralized Governance

Statement: Any integration-related initiative should conform to the standards and recommendations of a centralized Integration Centre of Excellence (CoE); any design decision or strategy related to integrations needs to be driven, reviewed, and approved by the Integration CoE.

Rationale: The Integration CoE will:

- Support all the Lines of Business (LoBs) to provide centralized governance by assessing all integration-related initiatives and ensuring their compliance with any Enterprise Architecture (EA) principles.
- Help ensure consistency, which reduces the complexity of managing different integration solutions, improves business satisfaction, and enables reusability of the existing integration capabilities and services. This maximizes Return on Investment (ROI) and reduces costs. In addition, adhering to the Integration CoE governance model will allow for support from multiple vendors, technical SMEs, and delivery teams, reducing the cost associated with vendor lock-in or the overhead of hiring resources.
- Leverage a shared services model to ensure optimum resource use. It will involve staff members who understand the business, applications, data models, and underlying technologies.

The following diagram describes the evolution model for the Integration CoE to address any initiatives and support continuous improvement.

Implications: The Integration CoE is expected to provide the following core service offerings:

- Integration Architecture roadmap
- Opportunistic interface/service identification
- Relevant training
- Guidance on applicable standards and reusable services
- Patterns, standards, and templates
- Registry/repository services
- Project support
- Integration design, build, and test expertise
- Stakeholder involvement and communications
- Governance
- Retrospectives and continuous improvement

Development teams should be able to deploy rapid changes for new or changed business requirements and functionality. The deployments should be measured and evaluated to help create continuous improvement. Businesses should be provided with reasonable costs for the service offerings, which will result in continuous improvement by eliminating wasteful practices and technical debt and promoting reusability. Each deployment should be measured, and the costs should be analyzed, including hardware, software, design, development, testing, administration, and deployment; Total Cost of Ownership (TCO) analysis for ongoing support and maintenance should be factored into these analyses. The Integration CoE should enforce controls and processes to maintain the integrity of the integration platform, application components, and underlying information. It should provide a culturally inclusive approach to the systems' environment and focus on discipline, agility, and simplification. As a result, it should be easy to demonstrate that a utility within the Integration CoE will experience a much lower TCO.

Diagram 1 — A Reference Integration CoE Governance Model
2. Application Decoupling

Statement: Achieve technological application independence through middleware, reusable microservices/APIs (Application Programming Interfaces), and asynchronous message-oriented integrations.

Rationale: Integration architecture must be planned to reduce the impact of technology changes and vendor dependence on the business through decoupling. Decoupling should involve seamless integrations between information systems through middleware, microservices/APIs, and asynchronous messaging systems. Every decision made concerning the technologies that enable the integration of information systems adds more dependency on those technologies. Therefore, this principle intends to ensure that the information systems are not dependent on specific technologies to integrate. The independence of applications from the supporting technology allows applications to be developed, upgraded, and operated under the best cost-to-benefit ratio. Otherwise, technology, subject to continual obsolescence and vendor dependence, becomes the driver rather than the user requirements. Avoid point-to-point integration, as it involves technological dependence on the respective information systems and tight coupling between the systems involved.

Implications:

- Reusable microservices/APIs will be developed to enable legacy applications to interoperate with applications and operating environments developed under the enterprise architecture. The microservices should be developed and deployed independently, thus enabling agility.
- Middleware should be used to decouple applications from specific software solutions. Problematic parts of the application can then be isolated, fixed, and maintained while the larger part continues without any change.
- One should be able to add more infrastructure resources to a specific microservice so that it performs better, without adding servers to the whole application.
- This principle also implies documenting technology standards, API specifications, and metrics to better understand the operational cost. Industry benchmarks should be adopted to provide comparator metrics for efficiency and ROI.
- A specific microservice can be implemented with higher availability than the other components (for instance, 24 hours x 7 days), while the remaining parts of the application run with lower availability to save on resources. Because each microservice runs autonomously, it can be architected to have its own level of availability.
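A minimal Kotlin sketch of the decoupling idea above: the consuming application depends only on a small interface it owns, while vendor- or protocol-specific detail lives behind an adapter that can be swapped without touching business code. The names (CrmClient, InMemoryCrmAdapter, WelcomeMailer) are invented for illustration.

// The application owns this technology-neutral port.
interface CrmClient {
    fun customerEmail(customerId: String): String?
}

// Adapter that hides a specific vendor API or transport; it can be replaced
// (REST, messaging, another vendor) without changing the business logic.
class InMemoryCrmAdapter(private val emails: Map<String, String>) : CrmClient {
    override fun customerEmail(customerId: String) = emails[customerId]
}

// Business logic depends on the port, not on any concrete technology.
class WelcomeMailer(private val crm: CrmClient) {
    fun welcome(customerId: String): String =
        crm.customerEmail(customerId)?.let { "Sending welcome mail to $it" }
            ?: "No email on record for $customerId"
}

fun main() {
    val mailer = WelcomeMailer(InMemoryCrmAdapter(mapOf("c-1" to "jane@example.com")))
    println(mailer.welcome("c-1"))
    println(mailer.welcome("c-2"))
}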
3. Data and Application Integration

Statement: Categorize integration use cases as either data integration or application integration and follow the respective integration lifecycle processes for each category.

Rationale: Data integration is focused on reconciling disparate data sources and data sets into a single view of data shared across the company. It is often a prerequisite to other processes, including analysis, reporting, and forecasting. Application integration is focused on achieving operational efficiency by providing (near) real-time data to the applications and keeping business processes running. It doesn't involve reconciling different data sources into a coherent, shared data model to get a holistic view of all datasets (for example, finance, sales, etc.). Both data integration and application integration may use Extract, Transform, and Load (ETL), a tripartite process in which data is extracted (collected), generally in bulk, from the raw sources, transformed (cleaned), and loaded (or saved) to a data destination. The ETL data pipeline, an umbrella term for all data movement and cleaning, enables piping data from non-unified data sources into a single destination.

Implications: Data integration should provide the following capabilities to an enterprise:

- Customer 360 view and streamlined operations
- Centralized Data Warehouse
- Data Quality Management
- Cross-departmental collaboration
- Big Data Integration

For data integration use cases, standard tools available within the enterprise (e.g., Azure Data Factory, Informatica) should be used to perform ETL and data analysis on large volumes of data. Data storage services (e.g., SQL/NoSQL databases, a Data Warehouse, a Data Lake) should be made available to data integration processes for persisting data for analysis and reporting.

Application integration should provide the following capabilities to an enterprise:

- Real-time or near real-time data exchange between applications.
- Keeping data up to date across different applications.
- Providing reusable interfaces for exchanging data with different applications.
- Recovery of failed transactions in (near) real-time through retry mechanisms or reliable messaging flows.

For application integration, consider using APIs/microservices, message-based middleware, or workflow services for (near) real-time data exchange patterns; technologies that enable these services include MuleSoft, AWS, and IBM MQ. ETLs may be used for application integration; however, they should be configured to process small volumes of data at high frequency so that data exchange between the applications approaches near real-time transfer.
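As a toy illustration of the extract-transform-load flow described above, here is a small Kotlin sketch. The in-memory "sources" and "warehouse" stand in for real systems such as databases or files, and the record shapes are invented.

// Extract: collect raw records from two non-unified sources.
data class RawSale(val id: String, val amountText: String)
data class Sale(val id: String, val amount: Double)

fun extract(): List<RawSale> = listOf(
    RawSale("s-1", "19.90"),      // e.g., from a CSV export
    RawSale("s-2", " 250.00 "),   // e.g., from a legacy system with messy formatting
)

// Transform: clean and convert into the shared, typed model.
fun transform(raw: List<RawSale>): List<Sale> =
    raw.mapNotNull { r -> r.amountText.trim().toDoubleOrNull()?.let { Sale(r.id, it) } }

// Load: persist into the destination (here, just an in-memory "warehouse").
fun load(warehouse: MutableList<Sale>, sales: List<Sale>) {
    warehouse += sales
}

fun main() {
    val warehouse = mutableListOf<Sale>()
    load(warehouse, transform(extract()))
    println("Loaded ${warehouse.size} rows, total = ${warehouse.sumOf { it.amount }}")
}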
4. Event-Driven Architecture

Statement: Design systems to transmit and/or consume events to facilitate responsiveness.

Rationale: In an event-driven architecture, the client generates an event and can immediately move on to its next task. Different parts of the application then respond to the event as needed, which improves the responsiveness of the application. The publisher emits an event, which the event bus or message queue (MQ) acknowledges; the event bus or MQ then routes the event to subscribers, which process it with self-contained business logic. There is no direct communication between publishers and subscribers. This is called a publish-subscribe model, as described in the following diagram. The publish-subscribe model allows multiple subscribers for the same event; thus, different subscribers may execute different business logic or involve different systems. In MQ terminology, the MQ component that supports multiple subscribers is called a topic. Migrating to an event-driven architecture allows the handling of unpredictable traffic, as the processes involved can all run independently at different rates. Event-driven architectures enable processes to perform error handling and retries efficiently because the events are persisted across different processes. They also promote development team independence due to the loose coupling between publishers and subscribers: applications can subscribe to events with routing requirements and business logic separate from the publisher and other subscribers, so publishers and subscribers can change independently, providing more flexibility to the overall architecture.

Implications:

- Applications that do not need immediate responses from the services and can allow asynchronous processing should opt for event-driven architectures.
- Events should be persisted in the messaging queues (MQ), which act as temporary stores for the events so that they can be processed at the configured rates of the consumer services and no event is lost. Once the events are processed, they may be purged from the MQ accordingly.
- The services involved should be configured to reprocess and retry events in case of a recoverable technical issue. For example, when a downstream system goes down, events will fail to be delivered; however, once the system comes back, the failed events can be reprocessed automatically. The services should retry periodically until the events are processed successfully.
- The services involved should not reprocess or retry events in case of a business or functional error. For example, if the data format is wrong, the events will keep failing no matter how many times they are reprocessed, creating poison events that block the entire process. To avoid this scenario, business or functional errors should be identified and the failed events routed to a failed-event queue. (A queue is generally a component that stores events temporarily for processing.)
- Consider adopting event-streaming systems that can handle a continuous stream of events (e.g., Kafka or Microsoft Azure Event Hub). Event streaming is the process of capturing data in the form of streams of events in real time from event sources such as databases, IoT devices, or others.

Diagram 2 — Synchronous vs. Asynchronous Pattern Example
Diagram 3 — MQ Broadcasting Pattern Example
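A stripped-down sketch of the publish-subscribe model described in this principle, in plain Kotlin. The in-memory Topic below stands in for a real broker (an MQ topic, Kafka, etc.), and the event and subscriber names are invented; the point is that the publisher never talks to the subscribers directly.

data class OrderPlaced(val orderId: String)

// A tiny in-memory "topic": one event fans out to every registered subscriber.
class Topic<E> {
    private val subscribers = mutableListOf<(E) -> Unit>()
    fun subscribe(handler: (E) -> Unit) { subscribers += handler }
    fun publish(event: E) = subscribers.forEach { it(event) }
}

fun main() {
    val orders = Topic<OrderPlaced>()

    // Independent subscribers, each with self-contained business logic.
    orders.subscribe { println("Billing: invoicing ${it.orderId}") }
    orders.subscribe { println("Shipping: scheduling ${it.orderId}") }

    // The publisher emits the event and moves on; it knows nothing about the subscribers.
    orders.publish(OrderPlaced("PO-2001"))
}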
5. Real-Time or Near Real-Time Integration

Statement: Streamline processes to send data frequently through message-based integrations.

Rationale: Messages form a well-defined, technology-neutral interface between applications and enable loose coupling between them. An enterprise may have multiple applications, built independently with different languages and platforms. Messaging provides many options for performance, tuning, and scaling, for example:

- Deployment of the requester and the processing service on different infrastructures.
- Multiple requesters sharing a single server.
- Multiple requesters sharing multiple servers.

The various messaging middleware products implement the common messaging patterns independently of the applications:

- Request/Reply
- Fire and Forget
- Publish/Subscribe

Middleware solutions handle message transformations (for example, JSON to XML and XML to JSON) and can decompose a single large request into several smaller requests. Asynchronous messaging is fundamentally a pragmatic reaction to the problems of distributed systems: a message can be sent without both systems being up and ready simultaneously. Communicating asynchronously forces developers to recognize that working with a remote application is slower, which encourages the design of components with high cohesion (lots of work locally) and low adhesion (selective work remotely).

Implications:

- The enterprise should responsively share data and processes via message-based integrations, and applications should be informed when shared data is ready for consumption.
- Latency in data sharing must be factored into the integration design; the longer sharing can take, the more opportunity there is for shared data to become stale and the more complex the integration becomes.
- Messaging patterns should be used to transfer data packets frequently, immediately, reliably, and asynchronously, using customizable formats.
- Applications should be integrated using messaging channels (Enterprise Service Bus, Message Queues, etc.) to work together and exchange information in real-time or near real-time.

6. Microservices

Statement: Publish and promote enterprise APIs/microservices to facilitate a scalable, extensible, reusable, and secure integration architecture.

Rationale: Monolithic architectures add risk to application availability because many dependent and tightly coupled processes increase the impact of a single process failure. With a microservices architecture, an application is built as independent components that run each application process as a service, and these services communicate via well-defined interfaces using lightweight APIs. Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. Microservices allow each service to be scaled independently to meet the demand for the application feature it supports. This enables teams to right-size infrastructure needs, accurately measure the cost of a feature, and maintain availability if a service experiences a spike in demand. Microservices enable continuous integration and delivery, making it easy to try out new ideas and roll back if something doesn't work. The low cost of failure encourages experimentation, makes it easier to update code, and accelerates time-to-market for new features. Microservices architectures don't follow a "one size fits all" approach, thus enabling technological freedom: teams building microservices can choose the best tool for each job. Dividing software into small, well-defined modules enables teams to use functions for multiple purposes. A service written for a specific function can be used as a building block for another feature, allowing an application to bootstrap itself, as developers can create new capabilities without writing code from scratch. Service independence also increases an application's resistance to failure. In a monolithic architecture, if a single component fails, it can cause the entire application to fail; with microservices, applications handle a total service failure by degrading functionality rather than crashing the entire application. This will support agencies in meeting intra-agency commitments, enable inter-agency collaboration and integration, and secure the network to meet the digital, cyber, and citizen commitments fundamental to trust.

Implications:

- Technologies that provide the capability to build APIs, such as MuleSoft, Amazon Web Services (AWS), and Microsoft Azure, should be fully leveraged to implement the microservices architecture pattern.
- APIs should be documented following documentation standards like RESTful API Modeling Language (RAML) or Swagger so that consumers can understand the APIs' methods, operations, and functionality. APIs should also follow naming and versioning standards, which should be reflected in the API documentation.
- APIs should be published into API catalogs/repositories/portals and made discoverable so that project teams (developers, architects, and business analysts) can find them and promote their reusability.
- A multi-layered API architecture may be followed that leverages APIs at different levels with separation of concerns, providing building blocks within different business domains or across LoBs.
This architecture pattern is popularly known through the concept of API-led connectivity recommended by MuleSoft. Below are descriptions of the three layers of the API-led connectivity pattern (a small code sketch of how the layers stack follows at the end of this principle):

System Layer: System APIs provide a means of accessing underlying systems of record and exposing that data, often in a canonical format, while providing downstream insulation from any interface changes or rationalization of those systems. These APIs change infrequently and are governed by central IT, given the importance of the underlying systems.

Process Layer: The underlying processes that interact with and shape this data should be strictly independent of the source systems from which the data originates. For example, in a purchase order process, some logic is common across products, geographies, and channels and should be distilled into a single service that can then be called by product-, geography-, or channel-specific parent services. These APIs perform specific functions, provide access to non-central data, and may be built by central IT or project teams.

Experience Layer: Experience APIs are how data can be reconfigured so that it is most easily consumed by its intended audience, all from a shared data source, rather than setting up separate point-to-point integrations for each channel.

Diagram 4 — MuleSoft Recommended API-led Connectivity Pattern

For the API-led approach to be successful, it must be realized across the whole enterprise. APIs should be secured as per the organization's information security standards so that API security is maintained during the whole lifecycle of the APIs, from design to development to publishing for consumption. Point-to-point integrations that bypass the API-led approach should be avoided, as they create technological dependencies on end systems and technical debt that are hard to manage in the long term.
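A minimal Kotlin sketch of the three layers above. The functions and data are invented and purely illustrative: a system API wraps the system of record, a process API composes it with shared logic, and an experience API reshapes the result for one particular channel.

// System layer: canonical access to the system of record (here, a stub map).
data class Customer(val id: String, val firstName: String, val lastName: String)

class CustomerSystemApi(private val systemOfRecord: Map<String, Customer>) {
    fun findCustomer(id: String): Customer? = systemOfRecord[id]
}

// Process layer: channel-agnostic business logic built on the system API.
class GreetingProcessApi(private val customers: CustomerSystemApi) {
    fun greetingFor(id: String): String? =
        customers.findCustomer(id)?.let { "Welcome back, ${it.firstName}" }
}

// Experience layer: reshapes the process result for one audience (e.g., a mobile app).
class MobileExperienceApi(private val process: GreetingProcessApi) {
    fun homeScreenBanner(id: String): String =
        process.greetingFor(id) ?: "Welcome, guest"
}

fun main() {
    val system = CustomerSystemApi(mapOf("c-7" to Customer("c-7", "Ada", "Lovelace")))
    val experience = MobileExperienceApi(GreetingProcessApi(system))
    println(experience.homeScreenBanner("c-7"))
    println(experience.homeScreenBanner("c-404"))
}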
7. Cloud Enablement

Statement: Always consider cloud-based services for integration platform and application architecture solutions to ensure a proper balance of service, security, and costs.

Rationale: Technology services can be delivered effectively via the cloud, a viable option compared to on-premises services. Cloud services are designed to support dynamic, agile, and scalable processing environments. The timing for implementation and the operational costs can be significantly different for on-premises versus cloud services. Serverless architecture allows building and running applications in the cloud without the overhead of managing infrastructure: it removes architecture responsibilities such as provisioning, scaling, and maintenance from the workload, scaling can be automatic, and payment is made only for what is used.

Implications:

- An evaluation of on-premises versus cloud services should be conducted for each integration platform and application architecture solution, with Software as a Service (SaaS) as the first preferred option, Platform as a Service (PaaS) as the second, and Infrastructure as a Service (IaaS) as the least preferred option.
- Managing a Cloud Service Provider will be different from managing a vendor whose product is hosted on-premises. The agreement should capture specific support parameters, response times, and SLAs.
- The security policies and services for the network, data, and hosting premises should be clearly defined and disclosed by the Cloud Service Provider.
- The Cloud Service Provider should clearly define data ownership and access to information assets, and should offer assurances that it provides secure isolation between the assets of each of its clients.
- Cloud services should provide mechanisms to capture resource allocation and consumption and produce measurement data.
- Cloud services should seamlessly handle infrastructure failures and address how performance-related SLAs will be met.
- Business continuity and disaster recovery plans, services, and testing of Cloud Service Providers should be analyzed and reviewed in detail.
- The fit of serverless architectures should be evaluated for every integration use case to optimize costs, achieve scalability, and remove the overhead of maintaining any infrastructure.

8. Operations Management

Statement: Integrate with the organization's operations architecture for audit, logging, error handling, monitoring, and scheduling.

Rationale: Operations architecture is developed to provide ongoing support and management of the IT services infrastructure of an enterprise. It ensures that the systems within the enterprise perform as expected by centrally unifying the control of operational procedures and automating the execution of operational tasks, and it reports on the performance of the IT infrastructure and applications. The implementation of an operations architecture consists of a dedicated set of tools and processes designed to provide centralization and automation. Operations architecture within an enterprise generally provides the following capabilities: scheduling, performance monitoring, network monitoring, event management, auditing, logging, error handling, Service Level Agreements (SLAs), and Operating Level Agreements (OLAs).

Implications:

- The operations architecture should provide the ability to perform critical operational tasks like auditing, logging, error handling, monitoring, and scheduling.
- It should be able to produce reports and statistics to identify anomalies, enabling the support team to take proactive action before any major failure.
- It should provide visibility across the on-premises, cloud, and serverless infrastructures and platforms.
- SLAs should be agreed upon between the support group and the customer regarding the different aspects of the services, like quality, availability, and responsibilities; SLAs ensure that the services are provided to the customer as agreed upon in the contract.
- OLAs should be agreed upon to describe the responsibilities of each internal support group toward other support groups, including the process and timeframe for delivery of their services.

9. End-to-End Security

Statement: All technologies, solutions, tools, designs, applications, and methods used within the end-to-end target integration architecture must adhere to the organization's security and privacy policies, procedures, guidelines, and standards.

Rationale:

- It will help maintain the integrity of data and systems as well as data transport and transmission methods. Failure to secure the information exchanged through the integration layer may lead to direct financial costs and damage to the organization's reputation.
- It will help prevent unauthorized access to sensitive information.
- It will help prevent disruption of integration services/APIs, e.g., by distributed denial-of-service (DDoS) attacks.
- It will protect the integration platform and application components from exploitation by outsiders.
- It will keep downtime to a minimum, ensuring business continuity.
Implications:

- Security controls for all aspects of the target-state integration architecture should be considered to ensure compliance with the organization's security regulations.
- Security policies should be created, expanded, and/or reviewed for each integration solution to cover all items within the scope of this principle.
- Periodic auditing of the integration platform and application components should be performed to confirm compliance with this principle.
- Proper controls around authorization and access should be enforced on the interfaces/APIs exposed on the integration layer to mitigate risk and ensure trust.
- Monitoring and auditing tools should be run regularly on the integration platforms and application components, and the respective platform owners should evaluate the outcomes.

By Susmit Dey
Integration Testing Tutorial: A Comprehensive Guide With Examples And Best Practices

Integration testing is an approach where different components or modules of a software application are tested as a combined entity. You can run integration tests seamlessly regardless of whether one programmer or several programmers coded these modules. Before a release, a software application undergoes various operations like extensive testing, iteration, and exception handling to identify bugs and make the software business-ready. Integration testing is a primary phase of the software testing process, performed while the application is still at the development stage.

What Is Integration Testing?

When separate modules are combined and tested as a whole, this software testing phase is referred to as integration testing. It takes place before the verification and validation process and after unit testing. What makes integration testing essential is its ability to check the behavior of different units of a system together. Taken individually, these units may function correctly with almost no errors, but when they are brought together, they uncover incorrect behavior if it exists. Integration testing is crucial because it is done at an early stage of development and helps prevent serious issues that would otherwise be expensive to fix later. You should run integration tests every time you change the existing code.

What Is the Purpose of Integration Testing?

Initially, software testing did not depend on integration testing, and nobody had thought about building an advanced testing phase capable of finding issues during the development process. But with the growing digital sphere, the demand for integration testing has increased. Here are some major reasons why integration testing is crucial:

To analyze integrated software modules: Analyzing the working of integrated software modules is the primary objective of integration testing. As per the test plan requirements, integration testing ensures connectivity between individual modules by examining the rendered values and implementing them logically.

To ensure seamless integration between third-party tools and different modules: It's crucial to ensure the data accepted by the API is correct so that the response generated is as per the requirement. Integration testing of the interaction between modules and third-party tools helps to ensure that the data is correct.

To fix exception handling: Before releasing the final build, it is crucial to pinpoint the weak spots and red-flag them to minimize exception handling defects as much as possible. Missing these defects in the initial or development stage will be expensive to fix after the release.

Difference Between Unit Testing and Integration Testing

- Unit testing is a white-box testing process; integration testing is a black-box testing process.
- Unit testing is performed by developers; integration testing is performed by testers.
- In unit testing, finding defects is easy as each unit is tested individually; in integration testing, finding defects is harder as all modules are tested together.
- Unit testing is always performed first, before any other testing process; integration testing is performed after unit testing and before system testing.
- In unit testing, developers are aware of the internal design of the software while testing; in integration testing, testers are not aware of the internal test design of the software.

Difference Between Integration Testing and System Testing

- Integration testing ensures all combined units can work together without errors; system testing ensures that the total build fulfills the business requirements and specifications.
- Integration testing is black-box testing; system testing combines white-box and black-box testing (grey-box testing).
- Integration testing doesn't fall into the acceptance testing class and performs functional types of tests; system testing falls into the acceptance testing class and performs both functional and non-functional tests.
- Integration testing is level-two testing; system testing is level-three testing.
- Integration testing mainly identifies interface errors; system testing helps to identify system errors.

Benefits of Integration Testing

- Integration testing helps expose any defects that can arise when components are integrated and need to interact with each other.
- It makes sure that integrated modules work correctly as expected.
- It is a quick testing approach, so once the modules are available, the tester can start testing them.
- It detects all errors that are related to the interfaces between modules.
- It helps modules interact with third-party tools and, most importantly, different APIs.
- It is more efficient because it typically covers a large volume of the system.
- It increases test coverage and also improves the reliability of tests.

Types of Integration Testing

Integration testing is performed by combining different functional units and testing them to examine the results. It is broadly divided into the following types, each of which tests the software differently:

- Incremental integration testing
- Non-incremental (Big Bang) integration testing

Incremental Integration Testing

In the incremental testing approach, all logically related modules are integrated, and then testing is done to check the proper functionality of the application as per the requirement. After this, the other related modules are integrated incrementally, and the process continues until all the integrated, logically related modules are tested successfully. The incremental approach is carried out by three different methods:

- Top-Down Approach
- Bottom-Up Approach
- Sandwich Approach

Top-Down Approach

The top-down integration testing approach involves testing top-level units first, with lower-level units tested step by step afterwards. Test stubs are needed to simulate lower-level units, which may not be available during the initial phases.

Advantages:
- It requires little planning.
- It is convenient for small systems.
- It covers all the modules.

Disadvantages:
- The top-down testing approach is not recommended for large-scale systems, as fault localization is complicated.
- As the prerequisite of this approach is completing all modules, the testing team remains extremely time-bound when executing the test.
- Since all the modules are tested simultaneously, you can't test modules based on priority or critical functionality.

Non-Incremental/Big Bang Testing

In this non-incremental testing approach, all the developed modules are tested individually and then integrated and tested once again. This is also known as big bang integration testing.

Big Bang Integration Testing

This type of integration testing involves coupling most of the developed modules into a larger system, which is then tested as a whole. This method is very effective for saving time. Test cases and their results must be recorded correctly to streamline the integration process and allow the testing team to achieve its goals.

Advantages:
- Good for testing small systems.
- Allows for finding errors very quickly and thus saves a lot of time.

Disadvantages:
- Fault localization is tough.
- Finding the root cause of the problem is quite difficult.
How Is Integration Testing Done?

When the system is ready and the units have been successfully tested individually, they can be integrated and tested. The complete process of integration testing includes several steps and involves a range of frameworks and continuous integration. Here's how you can perform integration testing:

- First, prepare an integration test plan and the required frameworks.
- Decide on the integration testing approach: Bottom-Up, Top-Down, Sandwich, or Big Bang.
- Design test cases, scripts, and scenarios.
- Deploy the chosen components and run the integration tests.
- Track and record the testing results, whether errors and bugs surface or the test runs bug-free.
- Finally, repeat the same process until the entire system is tested.

Entry and Exit Criteria for Integration Testing

Integration testing has both entry and exit criteria that one should know before starting.

Entry Criteria:
- Approval: The integration test plan document has been signed off and approved.
- Preparation: Integration test cases have been prepared.
- Data creation: Test data has been created.
- Unit testing: Unit testing of each developed module/component is complete.
- Dealing with defects: All the high-priority and critical defects are closed.
- Test environment: The test environment is set up for integration testing.

Exit Criteria:
- All the integration test cases on different parameters have been successfully executed.
- All critical and priority P1 and P2 defects are closed.
- The test report has been prepared.

Example: Integration Test Cases

Integration test cases mainly focus on the data transfer between modules that are already unit tested, the interfaces between the modules, and the integrated links. For example, let's take integration test cases for the LinkedIn application:

- Verify the interface link between the login page and the home page: when a user enters the correct login credentials, they should be directed to the homepage.
- Verify the interface link between the home page and the profile page: when the user selects the profile option, the profile page should open up.
- Verify the interface link between the network page and the connections page: on clicking the accept button for a received invitation on the network page, the accepted invitation should appear on the connections page.
- Verify the interface link between the notifications page and the "say congrats" button: on clicking the "say congrats" button, the user should be directed to a new message window.

These are the kinds of integration test cases that cover how LinkedIn's pages work together.
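As a minimal sketch of the first test case above (the login page leading to the home page), here is a self-contained Kotlin example. The LoginService and Navigator classes are invented stand-ins for two separately built modules; the integration check exercises the interface between them rather than either module in isolation.

// Module A: authentication.
class LoginService(private val validUsers: Map<String, String>) {
    fun login(user: String, password: String): Boolean = validUsers[user] == password
}

// Module B: navigation, which consumes the result of module A.
class Navigator(private val loginService: LoginService) {
    fun pageAfterLogin(user: String, password: String): String =
        if (loginService.login(user, password)) "home" else "login-error"
}

// Integration check: the two modules wired together behave as expected.
fun main() {
    val navigator = Navigator(LoginService(mapOf("jane" to "s3cret")))
    check(navigator.pageAfterLogin("jane", "s3cret") == "home") { "valid login should land on home" }
    check(navigator.pageAfterLogin("jane", "wrong") == "login-error") { "invalid login should not reach home" }
    println("Login-to-home integration test passed")
}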
Manual and Automated Integration Testing

Integration testing usually doesn't require specific tools. These tests are often run manually by QA teams. In most cases, it happens in parallel with the development process, which is the most efficient approach. First, individual software units are created and checked by the development team. After successful checks, QA engineers start combining different units and inspecting them, focusing first on the interfaces and then on the connections between these units. QA engineers don't require specific tools to inspect these features, even if they are separate. For automated testing, Selenium is the most widely used framework for integration testing. If you are starting with integration testing, don't waste time setting up expensive in-house test infrastructure; opt for cloud-based testing platforms like LambdaTest. Using LambdaTest's online browser farm, you can run integration tests on 3000+ browser, device, and OS combinations. Its simple onboarding process makes it easy to perform mobile app and web testing. LambdaTest supports automated testing tools like Selenium, Cypress, Playwright, Puppeteer, Appium, Espresso, and XCUITest, among others. Devs and testers can also leverage LambdaTest's HyperExecute, an end-to-end test orchestration cloud, to run automated tests up to 70% faster than traditional cloud grids.

Integration Testing Tools

With the help of the automated tools available, integration testing can be applied effectively across the various modules of a software application. These tools simplify the process and make it more agile. Here are some of the best integration testing tools:

Selenium: Selenium is the leading large-scale open-source test automation framework for automating integration test suites for your web applications. Here are some primary features and highlights that make Selenium a popular tool:
- It supports multiple languages: C#, Java, JavaScript, PHP, Python, Ruby, and Perl.
- It runs in different system environments: Mac, Windows, and Linux.
- It works with all popular browsers, including Firefox, Safari, Chrome, and headless browsers.
- W3C standardization makes testing and scripting seamless.
- It allows running parallel tests with different hybrid test data.

Pytest: Pytest is widely used for writing and running test code for Python automation testing. It also scales up well and works well for testing complex libraries and applications. Here are some features that make pytest an excellent choice for automated integration testing:
- Pytest can significantly reduce the overall testing time by running tests in parallel.
- If test files and features are not directly indicated, pytest will discover them automatically.
- Pytest has built-in command-line support and test discovery support.

RFT: RFT stands for IBM Rational Functional Tester. It is a popular tool that makes it easy to create scripts that mimic the behavior of human testers. To enhance your testing experience, IBM offers other software solutions that you can integrate with RFT. Beyond maintaining test scripts, RFT provides several other features:
- Storyboard mode simplifies editing and test visualization, in particular through screenshots.
- Recording tools make test scripting easy.
- Data-driven testing applies the same series of actions to varying data sets.
- It integrates with other software for collaborative SDLC management.

VectorCAST: The VectorCAST software testing platform is one of the best in the market for automating testing activities across the software development lifecycle. The advantages of using VectorCAST are:
- A focus on embedded systems.
- Support for continuous and collaborative testing.
- It works with your existing software development tools.
- Embedded developers can use this highly automated unit and integration test tool to validate safety- and business-critical embedded systems.

LDRA: LDRA drives the market for software tools that automate code analysis and testing for safety-, mission-, and business-critical needs. With LDRA, you get:
- Customer-focused certification services.
- Consultancy offerings.
- LDRA tools for early error identification and elimination.
- Requirements tracing through static and dynamic analysis to unit testing.
- Verification for various hardware and software platforms.
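Since Selenium is called out above as the most common choice for automating these tests, here is a small, hedged sketch of what a browser-level integration check might look like from Kotlin. It assumes the selenium-java dependency and a matching ChromeDriver are available on the machine, and the URL and element locators are purely hypothetical placeholders for your own application.

import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.chrome.ChromeDriver

fun main() {
    // Assumes a local ChromeDriver installation; swap in another WebDriver if needed.
    val driver: WebDriver = ChromeDriver()
    try {
        // Hypothetical app under test: verify the login page hands off to the home page.
        driver.get("https://example.test/login")
        driver.findElement(By.id("username")).sendKeys("jane")
        driver.findElement(By.id("password")).sendKeys("s3cret")
        driver.findElement(By.id("login-button")).click()

        // The interface check: after a valid login, the browser should land on /home.
        check(driver.currentUrl.contains("/home")) { "expected to land on the home page" }
        println("Login-to-home browser check passed")
    } finally {
        driver.quit()
    }
}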
Challenges of Integration Testing

Like any other testing technique, integration testing has challenges that testers and developers encounter. These include:

Managing integration testing can be complex because of factors like databases, platforms, and environments.
Integrating a new system with one or two legacy systems requires significant change and testing effort.
Compatibility between systems developed by different companies is quite challenging for programmers.
There are many different paths and permutations to cover when testing integrated systems.

Best Practices for Integration Testing

Before starting your integration testing, you should follow a few best practices.

Run unit tests before integration tests: It's crucial to discover bugs early in the development cycle because the later a bug is found, the more expensive it is to fix. For a smooth development cycle, get the individual pieces right first before stepping up to "big things" like integration testing.

Avoid re-testing business logic: Unit tests are typically very fast, so they are run for every build triggered in the CI environment. Since they target the fundamental correctness of the code, running them frequently is critical for detecting business-logic bugs early so that the developer who introduced a bug can fix it immediately; integration tests should focus on the interactions between modules instead.

Keep your testing suites separate: Integration tests should not be run together with unit tests. Developers working on specific business logic must be able to run the unit tests and get near-immediate feedback to ensure they haven't broken anything before committing code (see the sketch after this section).

Log extensively: A unit test has a narrow scope and exercises a tiny piece of your application, so when it fails, it is usually easy to understand why and fix the problem. An integration test spans several modules and systems, so detailed logging is often the only practical way to pinpoint which interaction caused a failure.

All in All

The main objective of integration testing is to ensure that the entire software system works flawlessly when it is put together. If any critical aspects are overlooked during the unit testing phase, integration testing highlights them so they can be corrected before the final launch.
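To make the suite-separation practice above concrete, here is a minimal, hedged sketch using JUnit 5 tags. The class and method names are hypothetical, and the build-tool wiring is assumed rather than shown.

Java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class OrderFlowTest {

    @Test
    @Tag("unit")          // fast, pure in-memory logic; run on every build
    void totalIsSumOfLineItems() {
        // ...exercise business logic with no external systems
    }

    @Test
    @Tag("integration")   // slower; run in a dedicated CI stage
    void orderIsPersistedAndRetrievable() {
        // ...talks to a real (or containerized) database
    }
}

Build tools can then include or exclude tags per stage; JUnit 5 tag filtering is typically exposed through Maven Surefire/Failsafe groups or Gradle's includeTags/excludeTags options.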

By Praveen Mishra
WireMock: The Ridiculously Easy Way (For Spring Microservices)

Using WireMock for integration testing of Spring-based (micro)services can be hugely valuable. However, it usually requires significant effort to write and maintain the stubs needed for WireMock to take a real service's place in tests. What if generating WireMock stubs was as easy as adding @GenerateWireMockStub to your controller? Like this:

Kotlin
@GenerateWireMockStub
@RestController
class MyController {
    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

What if that meant that you then just instantiate your producer's controller stub in consumer-side tests…

Kotlin
val myControllerStub = MyControllerStub()

Stub the response…

Kotlin
myControllerStub.getData(MyServerResponse("id", "message"))

And verify calls to it with no extra effort?

Kotlin
myControllerStub.verifyGetData()

Surely, it couldn't be that easy?! Before I explain the framework that does this, let's first look at the various approaches to creating WireMock stubs.

The Standard Approach

While working on a number of projects, I observed that WireMock stubs are most commonly written on the consumer side. What I mean by this is that the project that consumes the API contains the stub setup code required to run tests. The benefit is that it's easy to implement: there is nothing else the consuming project needs to do. Just import the stubs into the WireMock server in tests, and the job is done.

However, there are also some significant downsides to this approach. For example, what if the API changes? What if the resource mapping changes? In most cases, the tests for the service will still pass, and the project may get deployed only to fail when it actually uses the API (hopefully during the build's automated integration or end-to-end tests). Limited visibility of the API can lead to incomplete stub definitions as well. Another downside is duplicated maintenance effort: in the worst-case scenario, each client ends up updating the same stub definitions. There is also leakage of API-specific, and in particular sensitive, information from the producer to the consumer, which makes consumers aware of API characteristics they shouldn't be, such as the endpoint mappings or, sometimes even worse, API security keys. Maintaining stubs on the client side can also increase test setup complexity.

The Less Common Approach

A more sophisticated approach that addresses some of the above disadvantages is to make the producer of the API responsible for providing the stubs. So, how does it work when the stubs live on the producer side? In a poly-repo environment, where each microservice has its own repository, the producer generates an artifact containing the stubs and publishes it to a common repository (e.g., Nexus) so that the clients can import it and use it. In a mono-repo, the dependencies on the stubs may not require the artifacts to be published in this way, but this will depend on how your project is set up.

The stub source code is written manually and subsequently published to a repository as a JAR file.
The client imports the JAR as a dependency and downloads it from the repository.
Depending on what is in the JAR, the test either loads the stub directly into WireMock or instantiates the dynamic stub (see the next section for details) and uses it to set up WireMock stubs and verify the calls.

This approach improves the accuracy of the stubs and removes the duplicated-effort problem since there is only one set of stubs to maintain.
There is no issue with visibility either, since the stubs are written with full access to the API definition, which ensures better understanding. Consistency is ensured because consumers always load the latest version of the published stubs every time the tests are executed.

However, preparing stubs manually on the producer's side has its own shortcomings. It tends to be quite laborious and time-consuming. As with any handwritten code intended to be used by third parties, the stubs should be tested, which adds even more effort to development and maintenance. Another problem that may occur is inconsistency: different developers may write the stubs in different ways, which may mean different ways of using them. This slows development down when developers maintaining different services need to first learn how the stubs have been written, in the worst-case scenario, uniquely for each service. Also, when writing stubs on the consumer's side, all that is required are stubs for the specific parts of the API that the consumer actually uses. Providing them on the producer's side means preparing all of them for the entire API as soon as the API is ready, which is great for the client but not so great for the provider.

Overall, writing stubs on the provider side has several advantages over the client-side approach. For example, if the stub publishing and API testing are well integrated into the CI pipeline, it can serve as a simpler version of Consumer-Driven Contracts, but it is also important to consider the possible implications, like the requirement for the producer to keep the stubs in sync with the API.

Dynamic Stubbing

Some developers may define stubs statically in the form of JSON, which is additional maintenance. Alternatively, you can create helper classes that introduce a layer of abstraction: an interface that determines what stubbing is possible. Usually, they are written in one of the higher-level languages like Java/Kotlin. Such stub helpers enable the clients to set up stubs within the constraints set out by the author, usually by supplying various values of various types. Hence, I call them dynamic stubs for short. An example of such a dynamic stub could be a function with a signature along the lines of:

Kotlin
fun get(url: String, response: String)

One could expect that such a method could be called like this:

Kotlin
get(url = "/someResource", response = "{ \"key\": \"value\" }")

And a potential implementation using the WireMock Java library:

Kotlin
fun get(url: String, response: String) {
    stubFor(get(urlPathEqualTo(url))
        .willReturn(aResponse().withBody(response)))
}

Such dynamic stubs provide a foundation for the solution described below.

Auto-Generating Dynamic WireMock Stubs

I have been working predominantly in the Java/Kotlin Spring environment, which relies on the Spring MVC library to support HTTP endpoints. The newer versions of the library provide the @RestController annotation to mark classes as REST endpoint providers. It's these endpoints that I tend to stub most often using the dynamic approach described above. I came to the realization that the dynamic stubs should provide only as much functionality as set out by the definition of the endpoints. For example, if a controller defines a GET endpoint with a query parameter and a resource name, the code enabling you to dynamically stub that endpoint should only allow the client to set the value of the parameter, the HTTP status code, and the body of the response.
There is no point in stubbing a POST method on that endpoint if the API doesn't provide it. With that in mind, I believed there was an opportunity to automate the generation of the dynamic stubs by analyzing the definitions of the endpoints described in the controllers.

Obviously, nothing is ever easy. A proof of concept showed how little I knew about the build tool I have been using for years (Gradle), the Spring MVC library, and Java annotation processing. Nevertheless, in spite of the steep learning curve, I managed to achieve the following:

parse the smallest meaningful subset of the relevant annotations (e.g., a single basic resource)
design and build a data model of the dynamic stubs
generate the source code of the dynamic stubs (in Java)
make Gradle build an artifact containing only the generated code and publish it (I also tested the published artifact by importing it into another project)

In the end, here is what was achieved:

The annotation processor iterates through all relevant annotations and generates the dynamic stub source code.
Gradle compiles and packages the generated source into a JAR file and publishes it to an artifact repository (e.g., Nexus).
The client imports the JAR as a dependency and downloads it from the repository.
The test instantiates the generated stubs and uses them to set up WireMock stubs and verify the calls made to WireMock.

With a mono-repo, the situation is slightly simpler since there is no need to package the generated code and upload it to a repository. The compiled stubs become available to the depending subprojects immediately. These end-to-end scenarios proved that it could work.

The Final Product

I developed a library with a custom annotation, @GenerateWireMockStub, that can be applied to a class annotated with @RestController. The annotation processor included in the library generates the Java code for dynamic stub creation in tests. The stubs can then be published to a repository or, in the case of a mono-repo, used directly by the project(s). For example, adding the following dependencies (Kotlin project):

Groovy
kapt 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'com.github.tomakehurst:wiremock:2.27.2'

and annotating a controller having a basic GET mapping with @GenerateWireMockStub:

Kotlin
@GenerateWireMockStub
@RestController
class MyController {
    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

will result in generating a stub class with the following methods:

Java
public class MyControllerStub {
    public void getData(MyServerResponse response) { ... }
    public void getData(int httpStatus, String errorResponse) { ... }
    public void verifyGetData() { ... }
    public void verifyGetData(final int times) { ... }
    public void verifyGetDataNoInteraction() { ... }
}

The first two methods set up stubs in WireMock, whereas the other methods verify the calls depending on the expected number of calls: either once, the given number of times, or no interaction at all. That stub class can be used in a test like this:

Kotlin
//Create the stub for the producer’s controller
val myControllerStub = MyControllerStub()

//Stub the controller method with the response
myControllerStub.getData(MyServerResponse("id", "message"))

callConsumerThatTriggersCallToProducer()

myControllerStub.verifyGetData()

The framework now supports most HTTP methods, with a variety of ways to verify interactions.
@GenerateWireMockStub makes maintaining these dynamic stubs effortless. It increases accuracy and consistency, making maintenance easier and enabling your build to easily catch breaking changes to APIs before your code hits production. More details can be found on the project’s website. A full example of how the library can be used in a multi-project setup and in a mono-repo: spring-wiremock-stub-generator-example spring-wiremock-stub-generator-monorepo-example Limitations The library’s limitations mostly come from the WireMock limitations. More specifically, multi-value and optional request parameters are not quite supported by WireMock. The library uses some workarounds to handle those. For more details, please check out the project’s README. Note The client must have access to the API classes used by the controller. Usually, it is achieved by exposing them in separate API modules that are published for consumers to use. Acknowledgments I would like to express my sincere gratitude to the reviewers who provided invaluable feedback and suggestions to improve the quality of this article and the library. Their input was critical in ensuring the article’s quality. A special thank you to Antony Marcano for his feedback and repeated reviews, and direct contributions to this article. This was crucial in ensuring that the article provides clear and concise documentation for the spring-wiremock-stub-generator library. I would like to extend my heartfelt thanks to Nick McDowall and Nauman Leghari for their time, effort, and expertise in reviewing the article and providing insightful feedback to improve its documentation and readability. Finally, I would also like to thank Ollie Kennedy for his careful review of the initial pull request and his suggestions for improving the codebase.

By Lukasz Gryzbon
Implementing RBAC in Quarkus

REST APIs are the heart of any modern software application. Securing access to REST APIs is critical for preventing unauthorized actions and protecting sensitive data. Additionally, companies must comply with regulations and standards to operate successfully. This article describes how we can protect REST APIs using Role-based access control (RBAC) in the Quarkus Java framework. Quarkus is an open-source, full-stack Java framework designed for building cloud-native, containerized applications. The Quarkus Java framework comes with native support for RBAC, which will be the initial focus of this article. Additionally, the article will cover building a custom solution to secure REST endpoints. Concepts Authentication: Authentication is the process of validating a user's identity and typically involves utilizing a username and password. (However, other approaches, such as biometric and two-factor authentication, can also be employed). Authentication is a critical element of security and is vital for protecting systems and resources against unauthorized access. Authorization: Authorization is the process of verifying if a user has the necessary privileges to access a particular resource or execute an action. Usually, authorization follows authentication. Several methods, such as role-based access control and attribute-based access control, can be employed to implement authorization. Role-Based Access Control: Role-based access control (RBAC) is a security model that grants users access to resources based on the roles assigned to them. In RBAC, users are assigned to specific roles, and each role is given permissions that are necessary to perform their job functions. Gateway: In a conventional software setup, the gateway is responsible for authenticating the client and validating whether the client has the necessary permissions to access the resource. Gateway authentication plays a critical role in securing microservices-based architectures, as it allows organizations to implement centralized authentication. Token-based authentication: This is a technique where the gateway provides an access token to the client following successful authentication. The client then presents the access token to the gateway with each subsequent request. JWT: JSON Web Token (JWT) is a widely accepted standard for securely transmitting information between parties in the form of a JSON object. On successful login, the gateway generates a JWT and sends it back to the client. The client then includes the JWT in the header of each subsequent request to the server. The JWT can include required permissions that can be used to allow or deny access to APIs based on the user's authorization level. Example Application Consider a simple application that includes REST APIs for creating and retrieving tasks. The application has two user roles: Admin — allowed to read and write. Member — allowed to read-only. Admin and Member can access the GET API; however, only Admins are authorized to use the POST API. Java @Path("/task") public class TaskResource { @GET @Produces(MediaType.TEXT_PLAIN) public String getTask() { return "Task Data"; } @POST @Produces(MediaType.TEXT_PLAIN) public String createTask() { return "Valid Task received"; } } Configure Quarkus Security Modules In order to process and verify incoming JWTs in Quarkus, the following JWT security modules need to be included. 
For a Maven-based project, add the following to pom.xml:

XML
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-jwt</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-jwt-build</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-security-jwt</artifactId>
    <scope>test</scope>
</dependency>

For a Gradle-based project, add the following:

Groovy
implementation("io.quarkus:quarkus-smallrye-jwt")
implementation("io.quarkus:quarkus-smallrye-jwt-build")
testImplementation("io.quarkus:quarkus-test-security-jwt")

Implementing RBAC

Quarkus provides built-in RBAC support to protect REST APIs based on user roles. This can be done in a few steps.

Step 1

The first step in utilizing Quarkus' built-in RBAC support is to annotate the APIs with the roles that are allowed to access them. The annotation to add is @RolesAllowed, a JSR 250 security annotation indicating that the given endpoint is accessible only if the user belongs to one of the specified roles.

Java
@GET
@RolesAllowed({"Admin", "Member"})
@Produces(MediaType.TEXT_PLAIN)
public String getTask() {
    return "Task Data";
}

@POST
@RolesAllowed({"Admin"})
@Produces(MediaType.TEXT_PLAIN)
public String createTask() {
    return "Valid Task received";
}

Step 2

The next step is to configure the issuer URL and the public key. This enables Quarkus to verify the JWT and ensure it has not been tampered with. Add the following properties to the application.properties file located in the /resources folder.

Properties files
mp.jwt.verify.publickey.location=publicKey.pem
mp.jwt.verify.issuer=https://myapp.com/issuer
quarkus.native.resources.includes=publicKey.pem

mp.jwt.verify.publickey.location - This configuration specifies the location of the public key to Quarkus, which must be on the classpath. The default location Quarkus looks in is the /resources folder.
mp.jwt.verify.issuer - This property represents the issuer of the token, i.e., who created it and signed it with their private key.
quarkus.native.resources.includes - This property informs Quarkus to include the public key as a resource in the native executable.

Step 3

The last step is to add your public key to the application. Create a file named publicKey.pem and save the public key in it. Copy the file to the /resources folder located in the /src directory.

Testing

Quarkus offers robust support for unit testing to ensure code quality, particularly when it comes to RBAC. Using the @TestSecurity annotation, user roles can be defined, and a JWT can be generated to call REST APIs from within unit tests.

Java
@Test
@TestSecurity(user = "testUser", roles = "Admin")
public void testTaskPostEndpoint() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(200)
        .body(is("Valid Task received"));
}

Custom RBAC Implementation

As the application grows and incorporates additional features, the built-in RBAC support may become insufficient. A well-written application allows users to create custom roles with specific permissions associated with them. It is important to decouple roles and permissions and avoid hardcoding them in the code. A role can be considered a collection of permissions, and each API can be labeled with the permissions required to access it. To decouple roles and permissions and provide flexibility to users, let's expand our example application to include two permissions for tasks.
task:read: this permission allows users to read tasks.
task:write: this permission allows users to create or modify tasks.

We can then associate these permissions with the two roles, "Admin" and "Member":

Admin: assigned both read and write: ["task:read", "task:write"]
Member: has read only: ["task:read"]

Step 1

To associate each API with a permission, we need a custom annotation that simplifies its usage and application. Let's create a new annotation called @Permissions, which accepts an array of permission strings, at least one of which the user must have in order to call the API.

Java
@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface Permissions {
    String[] value();
}

Step 2

The @Permissions annotation can be added to the task APIs to specify the permissions required for accessing them. The GET task API can be accessed if the user has either the task:read or the task:write permission, while the POST task API can only be accessed if the user has the task:write permission.

Java
@GET
@Permissions({"task:read", "task:write"})
@Produces(MediaType.TEXT_PLAIN)
public String getTask() {
    return "Task Data";
}

@POST
@Permissions("task:write")
@Produces(MediaType.TEXT_PLAIN)
public String createTask() {
    return "Valid Task received";
}

Step 3

The last step involves adding a filter that intercepts API requests and verifies that the included JWT has the necessary permissions to call the REST API. The JWT must include the userId as part of the claims, which is the case in a typical application, since some form of user identification is included in the JWT. The Reflection API is used to determine the invoked method and its associated annotation. In the provided code, the user -> role mapping and the role -> permissions mapping are stored in HashMaps. In a real-world scenario, this information would be retrieved from a database and cached to allow for faster access.
Java
@Provider
public class PermissionFilter implements ContainerRequestFilter {

    @Context
    ResourceInfo resourceInfo;

    @Inject
    JsonWebToken jwt;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        Method method = resourceInfo.getResourceMethod();
        Permissions methodPermAnnotation = method.getAnnotation(Permissions.class);

        if(methodPermAnnotation != null && checkAccess(methodPermAnnotation)) {
            System.out.println("Verified permissions");
        } else {
            requestContext.abortWith(Response.status(Response.Status.FORBIDDEN).build());
        }
    }

    /**
     * Verify if JWT permissions match the API permissions
     */
    private boolean checkAccess(Permissions perm) {
        boolean verified = false;
        if(perm == null) {
            //If no permission annotation, verification fails
            verified = false;
        } else if(jwt.getClaim("userId") == null) {
            //Don’t support anonymous users
            verified = false;
        } else {
            String userId = jwt.getClaim("userId");
            String role = getRolesForUser(userId);
            String[] userPermissions = getPermissionForRole(role);

            if(Arrays.asList(userPermissions).stream()
                    .anyMatch(userPerm -> Arrays.asList(perm.value()).contains(userPerm))) {
                verified = true;
            }
        }
        return verified;
    }

    // role -> permission mapping
    private String[] getPermissionForRole(String role) {
        Map<String, String[]> rolePermissionMap = new HashMap<>();
        rolePermissionMap.put("Admin", new String[] {"task:write", "task:read"});
        rolePermissionMap.put("Member", new String[] {"task:read"});
        return rolePermissionMap.get(role);
    }

    // userId -> role mapping
    private String getRolesForUser(String userId) {
        Map<String, String> userMap = new HashMap<>();
        userMap.put("1234", "Admin");
        userMap.put("6789", "Member");
        return userMap.get(userId);
    }
}

Testing

Similar to testing the built-in RBAC, the @TestSecurity annotation can be utilized to create a JWT for testing purposes. Additionally, the Quarkus library offers the @JwtSecurity annotation, which enables the addition of extra claims to the JWT, including the userId claim.

Java
@Test
@TestSecurity(user = "testUser", roles = "Admin")
@JwtSecurity(claims = {
    @Claim(key = "userId", value = "1234")
})
public void testTaskPostEndpoint() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(200)
        .body(is("Valid Task received"));
}

@Test
@TestSecurity(user = "testUser", roles = "Admin")
@JwtSecurity(claims = {
    @Claim(key = "userId", value = "6789")
})
public void testTaskPostMember() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(403);
}

Conclusion

As cyber attacks continue to rise, protecting REST APIs is becoming increasingly crucial. A security breach can result in massive financial losses and reputational damage for a company. While Quarkus is a versatile Java framework that provides built-in RBAC support for securing REST APIs, its native support may be inadequate in certain scenarios, particularly for fine-grained access control. This article covered both the implementation of the built-in RBAC support in Quarkus and the development and testing of a custom role-based access control solution.
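As a complement to the @TestSecurity-based tests above, it can be handy to mint a real token when exercising the endpoints manually, for example with curl. Here is a hedged sketch that assumes the quarkus-smallrye-jwt-build dependency added earlier and a signing key configured via smallrye.jwt.sign.key.location; the issuer and user values are placeholders taken from the example, and the key setup itself is not shown.

Java
import io.smallrye.jwt.build.Jwt;
import java.util.Set;

public class TokenMinter {

    public static String adminToken() {
        return Jwt.issuer("https://myapp.com/issuer")   // must match mp.jwt.verify.issuer
                .upn("testUser")
                .groups(Set.of("Admin"))                // role consumed by @RolesAllowed
                .claim("userId", "1234")                // claim consumed by PermissionFilter
                .sign();                                // signs with the configured private key
    }

    public static void main(String[] args) {
        // Paste the printed token into an "Authorization: Bearer <token>" header
        // when calling the /task endpoints manually.
        System.out.println(adminToken());
    }
}

The groups claim feeds Quarkus' built-in @RolesAllowed checks, while the custom userId claim is what the PermissionFilter above looks up.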

By Aashreya Shankar
Leverage the Richness of HTTP Status Codes

If you're not a REST expert, you probably use the same HTTP codes over and over in your responses, mostly 200, 404, and 500. If using authentication, you might perhaps add 401 and 403; if using redirects, 301 and 302; and that might be all. But the range of possible status codes is much broader than that and can improve semantics a lot. While many discussions about REST focus on entities and methods, using the correct response status codes can make your API stand out.

201: Created

Many applications allow creating entities: accounts, orders, what have you. In general, HTTP status code 200 is used, and that's good enough. However, the 201 code is more specific and fits better:

The HTTP 201 Created success status response code indicates that the request has succeeded and has led to the creation of a resource. The new resource is effectively created before this response is sent back, and the new resource is returned in the body of the message, its location being either the URL of the request, or the content of the Location header. - MDN web docs

205: Reset Content

Form-based authentication can either succeed or fail. When failing, the usual behavior is to display the form again with all fields cleared. Guess what? The 205 status code is dedicated to that:

The HTTP 205 Reset Content response status tells the client to reset the document view, so for example to clear the content of a form, reset a canvas state, or to refresh the UI. - MDN web docs

428: Precondition Required

When using optimistic locking, validation might fail during an update because the data has already been updated by someone else. By default, frameworks (such as Hibernate) throw an exception in that case. Developers, in turn, catch it and display a nice information box asking the user to reload the page and re-enter the data. Let's check the 428 status code:

The origin server requires the request to be conditional. Intended to prevent the 'lost update' problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict. - MDN web docs

The code describes exactly the conflict case in optimistic locking! Note that RFC 6585 mentions the word conditional and shows an example using the If-Match header. However, it doesn't state exactly how to achieve that condition.

409: Conflict

Interestingly enough, the 409 code states:

The HTTP 409 Conflict response status code indicates a request conflict with current state of the server. - MDN web docs

It can also apply to the previous case but is more general. For example, a typical use case would be updating a resource that has been deleted.

410: Gone

Most of the time, when you GET a resource that is not found, the server returns a 404 code. But what if the resource existed before and doesn't anymore? The semantics of the returned HTTP code could tell the client that, and there is a status code precisely for that use case: 410.

The HTTP 410 Gone client error response code indicates that access to the target resource is no longer available at the origin server and that this condition is likely to be permanent. If you don't know whether this condition is temporary or permanent, a 404 status code should be used instead. - MDN web docs

300: Multiple Choices

WARNING: This one seems a bit far-fetched, but the IETF specification fits the case. HATEOAS-driven applications offer a root page, which is an entry point allowing navigating further.
For example, this is the response when accessing the Spring Boot actuator:

JSON
{
  "_links": {
    "self": { "href": "http://localhost:8080/manage", "templated": false },
    "beans": { "href": "http://localhost:8080/manage/beans", "templated": false },
    "health": { "href": "http://localhost:8080/manage/health", "templated": false },
    "metrics": { "href": "http://localhost:8080/manage/metrics", "templated": false }
  }
}

No regular resource is present at this location. The server provides a set of resources, each with a dedicated identifier. It looks like a match for the 300 status code:

[...] the server SHOULD generate a payload in the 300 response containing a list of representation metadata and URI reference(s) from which the user or user agent can choose the one most preferred. - IETF HTTP 1.1: Semantics and Content

Conclusion

Generally, specific HTTP statuses only make sense when a REST backend is accessed by a JavaScript frontend. For example, resetting the form (205) doesn't make sense if the server generates the page. The issue with those codes is about the semantics: they are subject to a lot of interpretation. Why would you choose 409 over 428? It may be a matter of my interpretation over yours in the end. If you offer a public REST API, you'll have a combination of those codes (and others) and headers. You'll need full-fledged, detailed documentation in all cases to refine the general semantics in your context. That shouldn't stop you from using them, as they offer a rich set from which you can choose.

To Go Further:
HTTP response status codes
List of HTTP status codes
Series of posts on HTTP status codes
The HTTP Status Codes Problem
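To tie a couple of these codes back to everyday framework code, here is a minimal, hedged sketch using Spring MVC (which the actuator example above already assumes). The controller, entity, and lookup logic are hypothetical placeholders, not something from the article; the point is only that the status code becomes an explicit, deliberate part of each response.

Java
import java.net.URI;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/accounts")
class AccountController {

    @PostMapping
    ResponseEntity<Void> create(@RequestBody AccountRequest request) {
        String id = "42"; // would normally come from the persistence layer
        // 201 Created with the new resource's location, instead of a generic 200
        return ResponseEntity.created(URI.create("/accounts/" + id)).build();
    }

    @GetMapping("/{id}")
    ResponseEntity<String> get(@PathVariable String id) {
        boolean existedButDeleted = true; // placeholder for a real lookup
        if (existedButDeleted) {
            return ResponseEntity.status(410).build(); // Gone, rather than a plain 404
        }
        return ResponseEntity.ok("{\"id\":\"" + id + "\"}");
    }

    record AccountRequest(String name) {}
}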

By Nicolas Fränkel CORE
The Ultimate API Development Guide: Strategy, Tools, Best Practices

APIs play a prominent part in the mobile app development domain. Businesses are building their own APIs to make developers' work easier and to increase their customer base. For example, Google's Maps API is embedded in multiple third-party apps. Further, businesses are exploring new innovations through API development, and the start-up economy is gaining a boost by using APIs from multiple tech giants. Hence, understanding APIs in detail becomes necessary for technical and non-technical audiences alike.

This blog will help you understand APIs in depth. You will learn how APIs work and how enterprises benefit from API development. You will also learn which practices API-driven businesses should implement in their API teams, along with the relevant tools and terminology, and more.

Understanding APIs

The aim of this section is to deliver a brief introduction to APIs. The intended audience is readers who have little or no knowledge about APIs.

What Is an API?

An application programming interface (API) allows products or services to communicate with other products and services. During the communication process, the backend implementation stays hidden, which delivers flexibility and keeps design, administration, and usage simple. The main goal of API providers is to deliver innovation and make it easier for other enterprises to build products. McKinsey Digital says, "For companies who know how to implement them, they can cut costs, improve efficiency, and help the bottom line."

How Do APIs Work?

Think of an API as a contract or agreement. Just as an agreement allows two parties to communicate with each other or exchange services, an API works similarly: Party 1 sends a remote request to Party 2, and Party 2 responds to the request with the response or answer to the query.

API Types and Release

An API can be released in three ways: private, partner, or public. Together with composite APIs, these are also known as the types of APIs:

Private: Built to be used internally.
Partner: Built to be used only with specific business partners.
Public: Available for everyone, also known as an Open API.
Composite API: Combines different service and data APIs.

API Development Terminologies

Endpoint: An API connection has two ends; each end is an endpoint, typically the URL at which the API can be reached.
API Key: The requester has to supply an authorization code, known as an API key, for the request to be accepted.
JSON: A common data format used for API requests and responses.
GET: The HTTP method for obtaining resources.
POST: The HTTP method for creating resources.
REST: An architectural style for implementations that enhances communication efficiency.
SOAP: A messaging protocol for sharing information. It is based on XML and works with application layer protocols.
Latency: The time taken by an API to provide a response.
API throttling: Regulating how an API is used is called throttling. It is monitored to analyze the performance of the API.

How Do Businesses Gain Benefits by Building APIs?

Through API development, businesses can innovate, gain opportunities, and grow rapidly. Here is an example of how businesses innovate with APIs. Imagine you are a digital navigation solution provider like Google Maps. You have your own app with your own user base. A food delivery business wants to integrate map functionality to let customers and delivery partners view each other's exact locations. In this case, they have two options: build their own map system from scratch, or embed your services into their app. If we consider CASE 1, then building another robust solution from scratch will cost them massively.
If we consider CASE 2, then you, as a navigation business, have to provide them with an easy way to integrate. Whatever code or app you share must be compatible with their OS platform; your iOS app will not be compatible with their Android app. So, the ultimate solution is to build an API that acts as a communication system between your app and theirs. They can embed your system inside their app and achieve high quality and efficiency on the lowest budget. Hence, an enterprise gets the following benefits by building an API:

Accelerating new startups, which save costs by using the API.
Collaboration opportunities that bring growth.
Marketing its services in multiple third-party apps.
Attracting customers from third parties to its main apps.
Facilitating open innovation and providing a channel to accelerate other enterprises.
Expanding the reach of the brand, bringing growth.

Mastering the API Development Strategy

You have read a brief intro to APIs; now, we are ready to talk about API development. However, before the actual development, you have to determine the API development strategy. An API development strategy requires you to answer three questions: why, what, and how. Let's understand them.

The "Why" of API Development Strategy

The main objective of the "why" is to focus on the value API development brings to the business. Typically, you might have these values associated with your API development:

Growing B2C or B2B ecosystems
Content distribution
Bringing a new and innovative business model
Development innovation for internal use
Simplifying back-end systems
Participation in digital innovation

Take the example of Flickr, the social photo-sharing sensation: it engaged with multiple partners to build trust. Once you have identified the why related to API development, you can head to the "what."

The "What" of API Development Strategy

What will the API do that impacts the overall business strategy? To identify the "what" of API development, you have to define the internal and external views of the organization.

Internal view: Valuable assets in the organization's processes.
External view: Market, trends, competitors, consumers, and everything outside the organization.

As an example, mapping APIs have been sold to multiple organizations and governments to deliver navigation and planning.

The "How" of API Development Strategy

Now that you have determined the why and what of your API development strategy, you must consider the how: how you are going to build an API program that achieves your business values and objectives. Here you may try to figure out the following elements related to your API development:

Design
Maintenance
Promotion strategy (internal or external)
The API team
Success monitoring

Building the API Team

Just as a team is important for several other tasks in organizations, API development, too, requires a team. The team takes care of building, deploying, operating, and optimizing the API for your enterprise. You must:

Hire a project leader.
Hire designers.
Get experienced developers.
Hire testers for quality assurance.
Hire security experts.

API programs can be large, and it's highly important to ensure the team works collaboratively.

Best Practices from Successful API Development Teams

Once you have laid down the strategy and assembled your team, it's time to build the API. When building APIs, you must prioritize certain practices to ensure success. Here are the best practices from the most successful API development teams.
Concentrate on the Value of the API

While determining the strategy for API development, we talked about values in the "why" factor. During the development process, value again remains a critical factor to consider. The Director of Platforms for Ford Motor Company stated that an API program must:

Offer a valuable service.
Determine a plan and business model.
Achieve simplicity, flexibility, and ease of adoption.
Be easily measured and organized.
Deliver the best support to developers, easing their coding work.

If no user group (consumers, businesses, or developers) gets specific value from your API, your API won't be sustainable. To achieve the above results with API development, you can follow certain steps:

Identify a problem for users and developers.
Analyze the pain points that the targeted user base faces to improve your solution.
Determine what benefits users get from your API.

Have a Clear Vision of the Business Model

Aligning the business model to fit your API is never a sustainable option; it will add costs rapidly in the end. Hence, build a business model and align your API development with it. To determine a business model, you need to have a clear vision of the following:

Market needs
Customer segments
Distribution channels to reach customers
The revenue model of your company

You can use the Business Model Canvas by Strategyzer to get a clear business model. Even Netflix did the same: in 2013, Netflix shut down its public API to realign its APIs with its new business model, online data streaming. Netflix now gives access to its private APIs only to a very limited number of apps that collaborate with Netflix.

Keep Users in Mind While Designing and Implementing the API

Have you noticed something about cars? The gears, accelerator, steering, and most of the other driving controls remain the same in every car you buy. Why? Because automakers know that a driver must be able to drive any car and face no issues when switching from one brand to another. The same applies to API development. Your API must be:

Simple, by implementing easy data formats, method structures, data models, and authentication.
Flexible in delivering the most valuable and feasible use cases.
Easily adopted by developers.

API Operations Should Be at the Top

API operations must live up to the expectations of the developers who will use the API to gain value, ease, and flexibility. The API operations donut shows how to keep your API operations at the top. Here is how it looks:

API Operations Donut (Operations Management book by Slack, Chambers & Johnston, 2007)

Build an Engaging Developer Experience

Developers are the first users of your API. If developers do not find working with your API an engaging experience, they might look for alternatives. According to Musser, you can follow these practices to increase developer engagement:

Clear info about the API's goals.
A quick sign-up that is free and smooth.
A clear display of pricing.
Crisp and clear documentation.

Further, you can build a developer platform where developers can post their queries and get answers if they run into issues while using your API. Additionally, a developer program will provide clear value to developers and great brand value for your business. Here are the elements of a developer program as presented by Red Hat.

Step Beyond Marketing

Some enterprises build APIs but market them only at technical portals and events like hackathons.
You must market your API just like you market any other product. However, the main point is to market it to the right people. Some entrepreneurs who run IT businesses do not come from technical backgrounds at all, but they might still be interested in your API. Market your API by doing the following:

Perform proper segmentation.
Evaluate the targeted market.
Position the API correctly in the minds of consumers.

By performing these steps, you can set the right marketing process for your API in motion.

Don't Forget Maintenance and Updates for Your API

After a heavy design, development, and marketing process, your API may reach the right users. But if your API does not evolve in line with developers' feedback, it won't survive for long. Specifically, ensure that you:

Fix bugs regularly.
Keep optimizing your API.
Add new methods and functions to keep it fluid.
Remove unwanted methods that demand more resources.
Roll out new versions of your API.

Maintained well, your API will have a longer lifecycle.

Technical Tips to Keep in Mind While Building APIs

API specification framework: Stick to specifications like OpenAPI/Swagger for better tooling interoperability. Also, update SDKs, UI touchpoints, and documentation every time your code changes.

Versioning: Include versioning information in your APIs so that users can see if they are running on an old version. Generally, version information is given in the URL, like this:

HTTP
/api/v1/customers

Filtering and Pagination: Use LIMIT and OFFSET statements in queries for filtering and pagination. Here is a MySQL statement example to return a slice:

MySQL
SELECT * FROM customers LIMIT 5, 10

And a JSON response:

JSON
"_links": {
  "first": "/api/v1/customers?page=1",
  "prev": "/api/v1/customers?page=1",
  "next": "/api/v1/customers?page=3",
  "last": "/api/v1/customers?page=9"
}

Use REST and HATEOAS: Apply design considerations like exposing a list of orders at the endpoint:

HTTP
GET /api/v1/orders

Secure endpoints: Ensure HTTPS connections for secure communication. (A small client-side sketch tying several of these tips together follows at the end of this article.)

API Development Tools

Apigee: Google's API management tool, which assists users in boosting their digital transformation.
API Science: A tool that aims to evaluate the performance of internal and external APIs.
Postman: An API toolchain that empowers developers to test APIs and evaluate their performance.
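To tie the versioning, pagination, and API-key terminology above together from the consumer's side, here is a hedged sketch using the JDK's built-in java.net.http client. The host, header name, and query parameters are hypothetical placeholders; real providers document their own conventions.

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CustomersClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                // version in the path, pagination as query parameters
                .uri(URI.create("https://api.example.com/api/v1/customers?page=2&limit=10"))
                .header("X-Api-Key", "YOUR_API_KEY")   // API key supplied by the provider
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // e.g., 200
        System.out.println(response.body());       // JSON payload, possibly including _links for HATEOAS
    }
}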

By Ishan Gupta

Top Integration Experts


John Vester

Lead Software Engineer,
Marqeta @JohnJVester

Information Technology professional with 30+ years of expertise in application design and architecture, feature development, project management, system administration, and team supervision. Currently focusing on enterprise architecture/application design utilizing object-oriented programming languages and frameworks. Prior expertise building (Spring Boot) Java-based APIs against React and Angular client frameworks. CRM design, customization, and integration with Salesforce. Additional experience using both C# (.NET Framework) and J2EE (including Spring MVC, JBoss Seam, Struts Tiles, JBoss Hibernate, Spring JDBC).

Colin Domoney

Chief Technology Evangelist,
42Crunch


Saurabh Dashora

Founder,
ProgressiveCoder


Cameron HUNT

Integration Architect,
TeamWork France

An Event-Driven anarchist, motivated by the conviction that monolithic applications can no longer meet the ever-changing needs of businesses in an increasingly complex world. The next generation of (Distributed) Enterprise Applications must instead be composed of best-of-breed 'software components'; and just what is best-of-breed changes from month to month, so integration agility becomes key.

The Latest Integration Topics

Introduction to API Gateway in Microservices Architecture
API gateway simplifies managing microservices spread over multiple Kubernetes clusters and clouds. Read on to understand its architecture, features, and benefits.
June 8, 2023
by Anas T
· 1,516 Views · 3 Likes
Deploy an ERC-721 Smart Contract on Linea with Infura and Truffle
Let's look at Linea, a new Ethereum L2, and deploy a smart contract using the industry-standard tools Infura, MetaMask, Solidity, OpenZeppelin, and Truffle.
June 6, 2023
by Michael Bogan CORE
· 1,243 Views · 2 Likes
Overcoming Challenges in UI/UX Application Modernization
Learn how to update UI/UX applications to meet modern user needs and avoid outdated design with proven techniques.
June 8, 2023
by Hiren Dhaduk
· 1,329 Views · 1 Like
New ORM Framework for Kotlin
The article introduces a Kotlin API for some ORMs to simplify database operations by providing a lightweight and intuitive interface.
June 5, 2023
by Pavel Ponec
· 1,760 Views · 1 Like
How to Supplement SharePoint Site Drive Security With Java Code Examples
This article advocates for expanding upon built-in SharePoint Online Site Drive security by integrating an external security API solution.
June 7, 2023
by Brian O'Neill CORE
· 1,948 Views · 2 Likes
Microservices With Apache Camel and Quarkus (Part 3)
In Parts 1 and 2, you've seen how to run microservices as Quarkus local processes. Let's now look at some K8s-based deployments, starting with Minikube.
June 7, 2023
by Nicolas Duminil CORE
· 2,802 Views · 1 Like
Understanding the Role of ERP Systems in Modern Software Development
ERP systems are crucial in modern software development for seamless integration, enhanced functionality, efficient resource management, and streamlined processes.
June 7, 2023
by Anushree Gupta
· 1,900 Views · 1 Like
IDE Changing as Fast as Cloud Native
A look at how IDEs have evolved and the benefits and challenges of convergence. Will we see VS Code become all dominant?
June 7, 2023
by Phil Wilkins
· 1,874 Views · 3 Likes
A Comprehensive Guide To Testing and Debugging AWS Lambda Functions
Learn how to ensure your AWS Lambda functions are running smoothly with this comprehensive guide to testing Lambda functions.
June 7, 2023
by Satrajit Basu CORE
· 2,019 Views · 1 Like
How To Read File Into Char Array in C
The article provides a comprehensive guide on how to read files into character arrays in C, with easy-to-follow steps.
June 7, 2023
by Ankur Ranpariya
· 1,659 Views · 1 Like
Idempotent Liquibase Changesets
Here are two ways of writing idempotent Liquibase changesets: a best practice that allows having more robust and easy-to-maintain applications.
June 6, 2023
by Horatiu Dan
· 1,386 Views · 1 Like
How To Avoid “Schema Drift”
This article will explain the existing solutions and strategies to mitigate the challenge and avoid schema drift, including data versioning using LakeFS.
February 3, 2023
by Yaniv Ben Hemo
· 8,026 Views · 3 Likes
GraphQL vs. REST: Differences, Similarities, and Why To Use Them
In this article, readers will learn about the differences and similarities between GraphQL and REST and why and how to use them, along with guide visuals.
February 9, 2023
by Shay Bratslavsky
· 10,713 Views · 4 Likes
Structured Logging
This post introduces Structured Logging and the rationale behind its use. Some simple examples are provided to reinforce understanding.
June 2, 2023
by Karthik Viswanathan
· 6,208 Views · 6 Likes
Leveraging FastAPI for Building Secure and High-Performance Banking APIs
Explore the importance of FastAPI for developing banking APIs and how it can empower financial institutions to deliver efficient and secure services to their customers.
June 5, 2023
by Amlan Patnaik
· 1,720 Views · 1 Like
GraphQL vs. Protobuf: Differences, Similarities, and Uses
In this article, readers will learn about GraphQL and Protobuf, including background info, advantages and disadvantages, and differences and similarities.
February 16, 2023
by Shay Bratslavsky
· 4,375 Views · 1 Like
Integrate Cucumber in Playwright With Java
Integrating Cucumber with Playwright combines natural language scenarios with browser automation, boosting project performance.
June 5, 2023
by Kailash Pathak
· 2,681 Views · 1 Like
Strategies for Reducing Total Cost of Ownership (TCO) For Integration Solutions
In this article, we explore effective approaches to reducing TCO for integration solutions and strategies for cost-effective implementations.
June 5, 2023
by Susmit Dey CORE
· 1,971 Views · 4 Likes
Cucumber Selenium Tutorial: A Comprehensive Guide With Examples and Best Practices
Want to learn automation testing with Cucumber? Check out our detailed guide on Cucumber Selenium tutorial with examples! Get the most out of it!
June 5, 2023
by Sarah Elson
· 2,150 Views · 3 Likes
Microservices With Apache Camel and Quarkus (Part 2)
Take a look at a scenario to deploy and run locally the simplified money transfer application presented in part 1 as Quarkus standalone services.
June 3, 2023
by Nicolas Duminil CORE
· 6,884 Views · 1 Like
