
Software Design and Architecture

Software design and architecture focus on the development decisions made to improve a system's overall structure and behavior in order to achieve essential qualities such as modifiability, availability, and security. The Zones in this category are available to help developers stay up to date on the latest software design and architecture trends and techniques.

Functions of Software Design and Architecture

Cloud Architecture


Cloud architecture refers to how technologies and components are built in a cloud environment. A cloud environment comprises a network of servers that are located in various places globally, and each serves a specific purpose. With the growth of cloud computing and cloud-native development, modern development practices are constantly changing to adapt to this rapid evolution. This Zone offers the latest information on cloud architecture, covering topics such as builds and deployments to cloud-native environments, Kubernetes practices, cloud databases, hybrid and multi-cloud environments, cloud computing, and more!

Containers


Containers allow applications to run more quickly across many different development environments, and a single container encapsulates everything needed to run an application. Container technologies have exploded in popularity in recent years, leading to diverse use cases as well as new and unexpected challenges. This Zone offers insights into how teams can solve these challenges through its coverage of container performance, Kubernetes, testing, container orchestration, the use of microservices to build and deploy containers, and more.

Integration


Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly complex ways, integration makes it all possible by facilitating app-to-app communication. Learn more about this necessity for modern software development by keeping a pulse on industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.

Microservices


A microservices architecture is a development method for designing applications as modular services that seamlessly adapt to a highly scalable and dynamic environment. Microservices help solve complex issues such as speed and scalability, while also supporting continuous testing and delivery. This Zone will take you through breaking down the monolith step by step and designing a microservices architecture from scratch. Stay up to date on the industry's changes with topics such as container deployment, architectural design patterns, event-driven architecture, service meshes, and more.

Performance


Performance refers to how well an application conducts itself compared to an expected level of service. Today's environments are increasingly complex and typically involve loosely coupled architectures, making it difficult to pinpoint bottlenecks in your system. Whatever your performance troubles, this Zone has you covered with everything from root cause analysis, application monitoring, and log management to anomaly detection, observability, and performance testing.

Security


The topic of security covers many different facets within the SDLC. From focusing on secure application design to designing systems to protect computers, data, and networks against potential attacks, it is clear that security should be top of mind for all developers. This Zone provides the latest information on application vulnerabilities, how to incorporate security earlier in your SDLC practices, data governance, and more.

Latest Refcards and Trend Reports

Trend Report: Enterprise Application Security
Refcard #387: Getting Started With CI/CD Pipeline Security
Refcard #344: Kubernetes Multi-Cluster Management and Governance
Trend Report: Kubernetes in the Enterprise

DZone's Featured Software Design and Architecture Resources

Trend Report

Microservices and Containerization

According to our 2022 Microservices survey, 93% of our developer respondents work for an organization that runs microservices. This number is up from 74% when we asked this question in our 2021 Containers survey. With most organizations running microservices and leveraging containers, we no longer have to discuss the need to adopt these practices, but rather how to scale them to benefit organizations and development teams. So where do adoption and scaling practices of microservices and containers go from here? In DZone's 2022 Trend Report, Microservices and Containerization, our research and expert contributors dive into various cloud architecture practices, microservice orchestration techniques, security, and advice on design principles. The goal of this Trend Report is to explore the current state of microservices and containerized environments to help developers face the challenges of complex architectural patterns.

How To Check Docker Images for Vulnerabilities


By Gunter Rotsaert CORE
Regularly checking for vulnerabilities in your pipeline is very important. One of the steps to execute is to perform a vulnerability scan of your Docker images. In this blog, you will learn how to perform the vulnerability scan, how to fix the vulnerabilities, and how to add it to your Jenkins pipeline. Enjoy! 1. Introduction In a previous blog from a few years ago, it was described how you could scan your Docker images for vulnerabilities. A follow-up blog showed how to add the scan to a Jenkins pipeline. However, Anchore Engine, which was used in the previous blogs, is not supported anymore. An alternative solution is available with grype, which is also provided by Anchore. In this blog, you will take a closer look at grype, how it works, how you can fix the issues, and how you can add it to your Jenkins pipeline. But first of all, why check for vulnerabilities? You have to stay up-to-date with the latest security fixes nowadays. Many security vulnerabilities are publicly available and therefore can be exploited quite easily. It is therefore a must-have to fix security vulnerabilities as fast as possible in order to minimize your attack surface. But how do you keep up with this? You are mainly focused on business and do not want to have a full-time job fixing security vulnerabilities. That is why it is important to scan your application and your Docker images automatically. Grype can help with scanning your Docker images. Grype will check operating system vulnerabilities but also language-specific packages such as Java JAR files for vulnerabilities and will report them. This way, you have a great tool that will automate the security checks for you. Do note that grype is not limited to scanning Docker images. It can also scan files and directories and can therefore be used for scanning your sources. In this blog, you will create a vulnerable Docker image containing a Spring Boot application. You will install and use grype in order to scan the image and fix the vulnerabilities. In the end, you will learn how to add the scan to your Jenkins pipeline. The sources used in this blog can be found on GitHub. 2. Prerequisites The prerequisites needed for this blog are: Basic Linux knowledge Basic Docker knowledge Basic Java and Spring Boot knowledge 3. Vulnerable Application Navigate to Spring Initializr and choose a Maven build, Java 17, Spring Boot 2.7.6, and the Spring Web dependency. This will not be a very vulnerable application because Spring already ensures that you use the latest Spring Boot version. Therefore, change the Spring Boot version to 2.7.0. The Spring Boot application can be built with the following command, which will create the jar file for you: Shell $ mvn clean verify You are going to scan a Docker image, therefore a Dockerfile needs to be created. You will use a very basic Dockerfile which just contains the minimum instructions needed to create the image. If you want to create production-ready Docker images, do read the posts Docker Best Practices and Spring Boot Docker Best Practices. Dockerfile FROM eclipse-temurin:17.0.1_12-jre-alpine WORKDIR /opt/app ARG JAR_FILE COPY target/${JAR_FILE} app.jar ENTRYPOINT ["java", "-jar", "app.jar"] At the time of writing, the latest eclipse-temurin base image for Java 17 is version 17.0.5_8. Again, use an older one in order to make it vulnerable. For building the Docker image, a fork of the dockerfile-maven-plugin of Spotify will be used. The following snippet is therefore added to the pom file. 
XML <plugin> <groupId>com.xenoamess.docker</groupId> <artifactId>dockerfile-maven-plugin</artifactId> <version>1.4.25</version> <configuration> <repository>mydeveloperplanet/mygrypeplanet</repository> <tag>${project.version}</tag> <buildArgs> <JAR_FILE>${project.build.finalName}.jar</JAR_FILE> </buildArgs> </configuration> </plugin> The advantage of using this plugin is that you can easily reuse the configuration. Creating the Docker image can be done by a single Maven command. Building the Docker image can be done by invoking the following command: Shell $ mvn dockerfile:build You are now all set up to get started with grype. 4. Installation Installation of grype can be done by executing the following script: Shell $ curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin Verify the installation by executing the following command: Shell $ grype version Application: grype Version: 0.54.0 Syft Version: v0.63.0 BuildDate: 2022-12-13T15:02:51Z GitCommit: 93499eec7e3ce2704755e9f51457181b06b519c5 GitDescription: v0.54.0 Platform: linux/amd64 GoVersion: go1.18.8 Compiler: gc Supported DB Schema: 5 5. Scan Image Scanning the Docker image is done by calling grype followed by docker:, indicating that you want to scan an image from the Docker daemon, the image, and the tag: Shell $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT Application: grype Version: 0.54.0 Syft Version: v0.63.0 Vulnerability DB [updated] Loaded image Parsed image Cataloged packages [50 packages] Scanned image [42 vulnerabilities] NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY busybox 1.34.1-r3 1.34.1-r5 apk CVE-2022-28391 High jackson-databind 2.13.3 java-archive CVE-2022-42003 High jackson-databind 2.13.3 java-archive CVE-2022-42004 High jackson-databind 2.13.3 2.13.4 java-archive GHSA-rgv9-q543-rqg4 High jackson-databind 2.13.3 2.13.4.1 java-archive GHSA-jjjh-jjxp-wpff High java 17.0.1+12 binary CVE-2022-21248 Low java 17.0.1+12 binary CVE-2022-21277 Medium java 17.0.1+12 binary CVE-2022-21282 Medium java 17.0.1+12 binary CVE-2022-21283 Medium java 17.0.1+12 binary CVE-2022-21291 Medium java 17.0.1+12 binary CVE-2022-21293 Medium java 17.0.1+12 binary CVE-2022-21294 Medium java 17.0.1+12 binary CVE-2022-21296 Medium java 17.0.1+12 binary CVE-2022-21299 Medium java 17.0.1+12 binary CVE-2022-21305 Medium java 17.0.1+12 binary CVE-2022-21340 Medium java 17.0.1+12 binary CVE-2022-21341 Medium java 17.0.1+12 binary CVE-2022-21360 Medium java 17.0.1+12 binary CVE-2022-21365 Medium java 17.0.1+12 binary CVE-2022-21366 Medium libcrypto1.1 1.1.1l-r7 apk CVE-2021-4160 Medium libcrypto1.1 1.1.1l-r7 1.1.1n-r0 apk CVE-2022-0778 High libcrypto1.1 1.1.1l-r7 1.1.1q-r0 apk CVE-2022-2097 Medium libretls 3.3.4-r2 3.3.4-r3 apk CVE-2022-0778 High libssl1.1 1.1.1l-r7 apk CVE-2021-4160 Medium libssl1.1 1.1.1l-r7 1.1.1n-r0 apk CVE-2022-0778 High libssl1.1 1.1.1l-r7 1.1.1q-r0 apk CVE-2022-2097 Medium snakeyaml 1.30 java-archive GHSA-mjmj-j48q-9wg2 High snakeyaml 1.30 1.31 java-archive GHSA-3mc7-4q67-w48m High snakeyaml 1.30 1.31 java-archive GHSA-98wm-3w3q-mw94 Medium snakeyaml 1.30 1.31 java-archive GHSA-c4r9-r8fh-9vj2 Medium snakeyaml 1.30 1.31 java-archive GHSA-hhhw-99gj-p3c3 Medium snakeyaml 1.30 1.32 java-archive GHSA-9w3m-gqgf-c4p9 Medium snakeyaml 1.30 1.32 java-archive GHSA-w37g-rhq8-7m4j Medium spring-core 5.3.20 java-archive CVE-2016-1000027 Critical ssl_client 1.34.1-r3 1.34.1-r5 apk CVE-2022-28391 High zlib 1.2.11-r3 1.2.12-r0 apk CVE-2018-25032 High zlib 1.2.11-r3 
1.2.12-r2 apk CVE-2022-37434 Critical What does this output tell you? NAME: The name of the vulnerable package INSTALLED: Which version is installed FIXED-IN: In which version the vulnerability is fixed TYPE: The type of dependency, e.g., binary for the JDK, etc. VULNERABILITY: The identifier of the vulnerability; with this identifier, you are able to get more information about the vulnerability in the CVE database SEVERITY: Speaks for itself and can be negligible, low, medium, high, or critical. As you take a closer look at the output, you will notice that not every vulnerability has a confirmed fix. So what do you do in that case? Grype provides an option in order to show only the vulnerabilities with a confirmed fix. Adding the --only-fixed flag will do the trick. Shell $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed Vulnerability DB [no update available] Loaded image Parsed image Cataloged packages [50 packages] Scanned image [42 vulnerabilities] NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY busybox 1.34.1-r3 1.34.1-r5 apk CVE-2022-28391 High jackson-databind 2.13.3 2.13.4 java-archive GHSA-rgv9-q543-rqg4 High jackson-databind 2.13.3 2.13.4.1 java-archive GHSA-jjjh-jjxp-wpff High libcrypto1.1 1.1.1l-r7 1.1.1n-r0 apk CVE-2022-0778 High libcrypto1.1 1.1.1l-r7 1.1.1q-r0 apk CVE-2022-2097 Medium libretls 3.3.4-r2 3.3.4-r3 apk CVE-2022-0778 High libssl1.1 1.1.1l-r7 1.1.1n-r0 apk CVE-2022-0778 High libssl1.1 1.1.1l-r7 1.1.1q-r0 apk CVE-2022-2097 Medium snakeyaml 1.30 1.31 java-archive GHSA-3mc7-4q67-w48m High snakeyaml 1.30 1.31 java-archive GHSA-98wm-3w3q-mw94 Medium snakeyaml 1.30 1.31 java-archive GHSA-c4r9-r8fh-9vj2 Medium snakeyaml 1.30 1.31 java-archive GHSA-hhhw-99gj-p3c3 Medium snakeyaml 1.30 1.32 java-archive GHSA-9w3m-gqgf-c4p9 Medium snakeyaml 1.30 1.32 java-archive GHSA-w37g-rhq8-7m4j Medium ssl_client 1.34.1-r3 1.34.1-r5 apk CVE-2022-28391 High zlib 1.2.11-r3 1.2.12-r0 apk CVE-2018-25032 High zlib 1.2.11-r3 1.2.12-r2 apk CVE-2022-37434 Critical Note that the vulnerabilities for the Java JDK have disappeared, although there exists a more recent update for the Java 17 JDK. However, this might not be a big issue, because the other (non-java-archive) vulnerabilities show you that the base image is outdated. 6. Fix Vulnerabilities Fixing the vulnerabilities is quite easy in this case. First of all, you need to update the Docker base image. Change the first line in the Docker image: Dockerfile FROM eclipse-temurin:17.0.1_12-jre-alpine into: Dockerfile FROM eclipse-temurin:17.0.5_8-jre-alpine Build the image and run the scan again: Shell $ mvn dockerfile:build ... $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed Vulnerability DB [no update available] Loaded image Parsed image Cataloged packages [62 packages] Scanned image [14 vulnerabilities] NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY jackson-databind 2.13.3 2.13.4 java-archive GHSA-rgv9-q543-rqg4 High jackson-databind 2.13.3 2.13.4.1 java-archive GHSA-jjjh-jjxp-wpff High snakeyaml 1.30 1.31 java-archive GHSA-3mc7-4q67-w48m High snakeyaml 1.30 1.31 java-archive GHSA-98wm-3w3q-mw94 Medium snakeyaml 1.30 1.31 java-archive GHSA-c4r9-r8fh-9vj2 Medium snakeyaml 1.30 1.31 java-archive GHSA-hhhw-99gj-p3c3 Medium snakeyaml 1.30 1.32 java-archive GHSA-9w3m-gqgf-c4p9 Medium snakeyaml 1.30 1.32 java-archive GHSA-w37g-rhq8-7m4j Medium As you can see in the output, only the java-archive vulnerabilities are still present. The other vulnerabilities have been solved. 
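One small aside before moving on to the application dependencies: grype keeps a local vulnerability database that it normally refreshes automatically at the start of a scan. On an offline or long-running build agent, you can check and refresh it explicitly:

Shell
$ grype db status
$ grype db update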
Next, fix the Spring Boot dependency vulnerability. Change the version of Spring Boot from 2.7.0 to 2.7.6 in the POM. XML <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.7.6</version> <relativePath/> <!-- lookup parent from repository --> </parent> Build the JAR file, build the Docker image, and run the scan again: Shell $ mvn clean verify ... $ mvn dockerfile:build ... $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed Vulnerability DB [no update available] Loaded image Parsed image Cataloged packages [62 packages] Scanned image [10 vulnerabilities] NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY snakeyaml 1.30 1.31 java-archive GHSA-3mc7-4q67-w48m High snakeyaml 1.30 1.31 java-archive GHSA-98wm-3w3q-mw94 Medium snakeyaml 1.30 1.31 java-archive GHSA-c4r9-r8fh-9vj2 Medium snakeyaml 1.30 1.31 java-archive GHSA-hhhw-99gj-p3c3 Medium snakeyaml 1.30 1.32 java-archive GHSA-9w3m-gqgf-c4p9 Medium snakeyaml 1.30 1.32 java-archive GHSA-w37g-rhq8-7m4j Medium So, you got rid of the jackson-databind vulnerability, but not of the snakeyaml vulnerability. So, in which dependency is snakeyaml 1.30 being used? You can find out by means of the dependency:tree Maven command. For brevity purposes, only a part of the output is shown here: Shell $ mvnd dependency:tree ... com.mydeveloperplanet:mygrypeplanet:jar:0.0.1-SNAPSHOT [INFO] +- org.springframework.boot:spring-boot-starter-web:jar:2.7.6:compile [INFO] | +- org.springframework.boot:spring-boot-starter:jar:2.7.6:compile [INFO] | | +- org.springframework.boot:spring-boot:jar:2.7.6:compile [INFO] | | +- org.springframework.boot:spring-boot-autoconfigure:jar:2.7.6:compile [INFO] | | +- org.springframework.boot:spring-boot-starter-logging:jar:2.7.6:compile [INFO] | | | +- ch.qos.logback:logback-classic:jar:1.2.11:compile [INFO] | | | | \- ch.qos.logback:logback-core:jar:1.2.11:compile [INFO] | | | +- org.apache.logging.log4j:log4j-to-slf4j:jar:2.17.2:compile [INFO] | | | | \- org.apache.logging.log4j:log4j-api:jar:2.17.2:compile [INFO] | | | \- org.slf4j:jul-to-slf4j:jar:1.7.36:compile [INFO] | | +- jakarta.annotation:jakarta.annotation-api:jar:1.3.5:compile [INFO] | | \- org.yaml:snakeyaml:jar:1.30:compile ... The output shows us that the dependency is part of the spring-boot-starter-web dependency. So, how do you solve this? Strictly speaking, Spring has to solve it. But if you do not want to wait for a solution, you can solve it by yourself. Solution 1: You do not need the dependency. This is the easiest fix and is low risk. Just exclude the dependency from the spring-boot-starter-web dependency in the pom. XML <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <exclusions> <exclusion> <groupId>org.yaml</groupId> <artifactId>snakeyaml</artifactId> </exclusion> </exclusions> </dependency> Build the JAR file, build the Docker image, and run the scan again: Shell $ mvn clean verify ... $ mvn dockerfile:build ... $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed Vulnerability DB [no update available] Loaded image Parsed image Cataloged packages [61 packages] Scanned image [3 vulnerabilities] No vulnerabilities found No vulnerabilities are found anymore. Solution 2: You do need the dependency. You can replace this transitive dependency by means of dependencyManagement in the pom. 
This is a bit more tricky because the updated transitive dependency is not tested with the spring-boot-starter-web dependency. It is a trade-off whether you want to do this or not. Add the following section to the pom: XML <dependencyManagement> <dependencies> <dependency> <groupId>org.yaml</groupId> <artifactId>snakeyaml</artifactId> <version>1.32</version> </dependency> </dependencies> </dependencyManagement> Build the jar file, build the Docker image, and run the scan again: Shell $ mvn clean verify ... $ mvn dockerfile:build ... $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed Vulnerability DB [no update available] Loaded image Parsed image Cataloged packages [62 packages] Scanned image [3 vulnerabilities] No vulnerabilities found Again, no vulnerabilities are present anymore. Solution 3: This is the solution when you do not want to do anything or whether it is a false positive notification. Create a .grype.yaml file where you exclude the vulnerability with High severity and execute the scan with the --config flag followed by the .grype.yaml file containing the exclusions. The .grype.yaml file looks as follows: YAML ignore: - vulnerability: GHSA-3mc7-4q67-w48m Run the scan again: Shell $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed Vulnerability DB [no update available] Loaded image Parsed image Cataloged packages [62 packages] Scanned image [10 vulnerabilities] NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY snakeyaml 1.30 1.31 java-archive GHSA-98wm-3w3q-mw94 Medium snakeyaml 1.30 1.31 java-archive GHSA-c4r9-r8fh-9vj2 Medium snakeyaml 1.30 1.31 java-archive GHSA-hhhw-99gj-p3c3 Medium snakeyaml 1.30 1.32 java-archive GHSA-9w3m-gqgf-c4p9 Medium snakeyaml 1.30 1.32 java-archive GHSA-w37g-rhq8-7m4j Medium The High vulnerability is not shown anymore. 7. Continuous Integration Now you know how to manually scan your Docker images. However, you probably want to scan the images as part of your continuous integration pipeline. In this section, a solution is provided when using Jenkins as a CI platform. The first question to answer is how you will be notified when vulnerabilities are found. Up until now, you only noticed the vulnerabilities by looking at the standard output. This is not a solution for a CI pipeline. You want to get notified and this can be done by failing the build. Grype has the --fail-on <severity> flag for this purpose. You probably do not want to fail the pipeline when a vulnerability with severity negligible has been found. Let’s see what happens when you execute this manually. First of all, introduce the vulnerabilities again in the Spring Boot application and in the Docker image. Build the JAR file, build the Docker image and run the scan with flag --fail-on: Shell $ mvn clean verify ... $ mvn dockerfile:build ... $ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed --fail-on high ... 1 error occurred: * discovered vulnerabilities at or above the severity threshold Not all output has been shown here, but only the important part. And, as you can see, at the end of the output, a message is shown that the scan has generated an error. This will cause your Jenkins pipeline to fail and as a consequence, the developers are notified that something went wrong. In order to add this to your Jenkins pipeline, several options exist. Here it is chosen to create the Docker image and execute the grype Docker scan from within Maven. 
There is no separate Maven plugin for grype, but you can use the exec-maven-plugin for this purpose. Add the following to the build-plugins section of the POM.

XML <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>3.1.0</version> <configuration> <executable>grype</executable> <arguments> <argument>docker:mydeveloperplanet/mygrypeplanet:${project.version}</argument> <argument>--scope</argument> <argument>all-layers</argument> <argument>--fail-on</argument> <argument>high</argument> <argument>--only-fixed</argument> <argument>-q</argument> </arguments> </configuration> </plugin> </plugins> </build>

Two extra flags are added here:
--scope all-layers: This will scan all layers involved in the Docker image.
-q: This will use quiet logging and will show only the vulnerabilities and possible failures.

You can invoke this with the following command:

Shell $ mvnd exec:exec

You can add this to your Jenkinsfile inside the withMaven wrapper:

Plain Text withMaven() { sh 'mvn dockerfile:build dockerfile:push exec:exec' }

8. Conclusion

In this blog, you learned how to scan your Docker images by means of grype. Grype has some interesting, user-friendly features which allow you to add it to your Jenkins pipeline efficiently. Also, installing grype is quite easy. Grype is definitely a great improvement over Anchore Engine.
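If you also want to archive the scan results as a build artifact, grype can write machine-readable reports in addition to the table output shown above; a minimal sketch (the report file name is just an example):

Shell
$ grype docker:mydeveloperplanet/mygrypeplanet:0.0.1-SNAPSHOT --only-fixed -o json > grype-report.json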

Refcard #385

Observability Maturity Model

By Lodewijk Bogaards
How Observability Is Redefining Developer Roles
By Hiren Dhaduk
Deploying Java Serverless Functions as AWS Lambda
By Nicolas Duminil CORE
OpenID Connect Flows

In this article, I will describe and explain the OpenID Connect Flows. The authentication processes are described in the OpenID Connect specification. As OpenID Connect is built upon OAuth, some of the concepts below have the same meaning as in OAuth.

What Is an OpenID Connect Flow?

A Flow is the OpenID Connect counterpart of an OAuth Grant Type: a process of obtaining an Access Token. It describes the exact sequence of steps involved in handling a particular request and, as a result, determines how the applications involved in handling that request communicate with one another. Everything is more or less similar to OAuth Grant Types; however, the abstract protocol works slightly differently in OpenID Connect:

1. The RP (Client) sends a request to the OpenID Provider (OP).
2. The OP authenticates the End-User and obtains authorization.
3. The OP responds with an ID Token and usually an Access Token.
4. The RP can send a request with the Access Token to the UserInfo Endpoint.
5. The UserInfo Endpoint returns Claims about the End-User.

As for the abbreviations and concepts used in the above description:

Claim: a piece of information about the requesting Entity.
RP (Relying Party): an OAuth 2.0 Client requiring End-User Authentication and Claims from an OpenID Provider.
OP (OpenID Provider): an OAuth 2.0 Authorization Server that is capable of authenticating the End-User and provides Claims to a Relying Party about the Authentication event and the End-User.
UserInfo Endpoint: a protected Resource that, when presented with an Access Token by the Client, returns authorized information about the End-User represented by the corresponding Authorization Grant. The UserInfo Endpoint URL MUST use HTTPS and MAY contain a port, path, and query parameter components.

OpenID Connect Flows

Whereas OAuth is an authorization protocol, OpenID Connect is an authentication protocol. It relies extensively on pseudo-authentication, a mechanism of authentication available in OAuth. The current OpenID Connect specification defines three flows:

Authorization Code Flow
Implicit Flow
Hybrid Flow

The value of the response_type parameter from the Authorization Request determines the Flow used for the current process; the specification contains a table illustrating how particular values map to Flows. The biggest difference between the flows is the "place" where we get our Access Tokens: in the Authorization Code Flow we get them from the Token Endpoint, in the Implicit Flow we get them from the Authentication Response, while in the Hybrid Flow we can choose the source of our tokens. The specification also provides a table that is very useful when picking the Flow you want to use: its Property column contains a set of features, and the remaining columns specify whether a particular Flow supports each feature. Additionally, unlike OAuth, there were no major changes here; no Flows were deprecated, and all three are still recommended.

Flows Lexicon

Authorization Code Flow

This flow works by exchanging an Authorization Code, obtained from the Authorization Endpoint, directly for an ID Token and an Access Token at the Token Endpoint. Because we exchange data directly, we do not expose any details to malicious applications that may have access to the User Agent. Furthermore, authentication itself can be done before exchanging the code for a token.

Therefore, this flow is best suited for Clients that can securely maintain a Client Secret between themselves and the Authorization Server. All tokens are returned from the Token Endpoint when using the Authorization Code Flow.

Implicit Flow

In contrast to the Authorization Code Flow, here we get our tokens from the Authorization Endpoint. The Access and ID Tokens are returned directly to the client, exposing them to any application with access to the End-User's User Agent. Thanks to this direct return, this flow is best suited for Clients implemented in a browser. Moreover, the Authorization Server does not perform Client Authentication, and the Token Endpoint is not used.

Hybrid Flow

This is the most complex of all three flows. Here, the Access Token can be returned from the Authorization Endpoint, from the Token Endpoint, or from both of them at the same time. Interestingly, the tokens returned from each endpoint are not guaranteed to be the same. Because this flow combines the two previously mentioned, both the Authorization and Token Endpoints inherit parts of their original behavior. There are also differences, mostly in the process of handling and validating the ID Token.

Summary

The OpenID Connect specification describes fewer procedures than OAuth; however, it is more detailed about how the flows should work. I hope that this humble lexicon of OIDC Flows will come in handy to you at some point. Thank you for your time.
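For reference, the response_type values defined in the OpenID Connect Core specification map to the flows as follows:

code - Authorization Code Flow
id_token - Implicit Flow
id_token token - Implicit Flow
code id_token - Hybrid Flow
code token - Hybrid Flow
code id_token token - Hybrid Flow

And, as an illustration in the style of the specification's examples, an Authorization Code Flow starts with an Authentication Request like the one below (the client_id, state, and redirect_uri values are placeholders):

GET /authorize?response_type=code&scope=openid%20profile&client_id=s6BhdRkqt3&state=af0ifjsldkj&redirect_uri=https%3A%2F%2Fclient.example.org%2Fcb HTTP/1.1
Host: server.example.com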

By Bartłomiej Żyliński CORE
Microservices Discovery With Eureka

Gaining complexity in a microservices system certainly isn't for the faint of heart (though neither is complexity in monoliths!). When there are many services that need to communicate with one another, we might need to coordinate multiple services communicating with multiple other services. We also might code for varying environments such as local, development server, or the cloud. How do services know where to find one another? How can we avoid problems when a service is unavailable? How do we handle requests when we scale up or down certain parts of our system? This is where something like Spring Cloud Netflix Eureka comes into play. Eureka is a service discovery project that helps services interact with one another without hardwiring in instance-specific or environment-dependent details. Architecture We have carefully built up a system of microservices from generic application chatter to a system of services communicating among one another. In the last article, we migrated most of our standalone services into Docker Compose so that it could orchestrate startup and shutdown as a unit. In this article, we will add a service discovery component, so that services can find and talk to one another without hard-coding host and port information into applications or environments. Docker Compose manages most of the services (in dark gray area), with each containerized service encompassed in a light gray box. Neo4j is the only component managed externally with Neo4j's database-as-a-service (AuraDB). Interactions between services are shown using arrows, and the types of data objects passed to numbered services (1-4) are depicted next to each. Spring Cloud Netflix Eureka Spring Cloud Netflix originally contained a few open-sourced projects from Netflix, including Eureka, Zuul, Hystrix, and Ribbon. Since then, most of those have been migrated into other Spring projects, except for Eureka. Eureka handles service registry and discovery. A Eureka server is a central place for services to register themselves. Eureka clients register with the server and are able to find and communicate with other services on the registry without referencing hostname and port information within the service itself. Config + Eureka Architecture Decision I had to make a decision on architecture when using Spring Cloud Config and Eureka together in a microservices system. There are a couple of options: 1. Config-first approach. Applications (services1-4) will reach out to config server first before gathering up other properties. In this approach, the config server does not register with Eureka. 2. Discovery-first approach. Applications will register with Eureka before connecting to config and gathering properties. In this approach, config server becomes a Eureka client and registers with it. There is an excellent blog post that provides a clear explanation of each, along with pros and cons. I'd highly encourage checking that out! I opted for the config-first approach because there is already a bit of delay starting up applications in Docker Compose (see blog post detailing this). Going with discovery-first would mean an extra step in the chain before applications could connect to config and contact databases. Since I didn't want to slow this step down any further, I decided not to register the config server app with Eureka, leaving it separate. Without further ado, let's start coding! Applications: Eureka Server We will use the Spring Initializr at start.spring.io to set up the outline for our Eureka server application. 
On the form, we choose Maven for the Project, then leave Language and Spring Boot version fields defaulted. Under the Project Metadata section, I updated the group name for my personal projects, but you are welcome to leave it defaulted. I named the artifact eureka-server, though naming is up to you, as long as we map it properly where needed. All other fields in this section can remain as they are. Under the Dependencies section, we need only Eureka Server. Finally, we can click the Generate button at the bottom to download the project. The project will download as a zip, so we can unzip it and move it to our project folder with the other services. Open the project in your favorite IDE and let's get coding! The pom.xml contains the dependencies and software versions we set up on the Spring Initializr, so we can move to the application.properties file in the src/main/resources folder. Properties files server.port=8761 eureka.client.register-with-eureka=false eureka.client.fetch-registry=false We need to specify a port number for this application to use so that its traffic doesn't conflict with our other services. The default port for Spring Cloud Eureka server is 8761, so we will use that. Next, we don't need to register the server itself with Eureka (useful in systems with multiple Eureka servers), so we will set the eureka.client.register-with-eureka value to false. The last property is set to false because we also don't need this server to pull the registry from other sources (like other Eureka servers). A StackOverflow question and answer addresses these settings well. In the EurekaServerApplication class, we only need to add the annotation @EnableEurekaServer to set this up as a Eureka server. Let's test this locally by starting the application in our IDE and navigating a web browser window to localhost:8761. This should show us a page like the one below, which gives details about the server and a section for Instances currently registered with Eureka. Since we haven't connected any other services with Eureka, we don't have any services registered with the server. That's it for the server, so let's start retrofitting our other services as Eureka clients. Applications: Service1 We don't have many changes to add for Spring Cloud Eureka. Starting in the pom.xml, we need to add a dependency. XML <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency> This dependency enables the application as a Eureka client. Most recommendations would also have us adding an annotation like @EnableEurekaClient (Eureka-specific) or @EnableDiscoveryClient (project-agnostic) to the main application class. However, that is not a necessary requirement, as it is defaulted to enabling this functionality when you add the dependency to the pom.xml. To run the service locally, we will also need to add a property to the `application.properties` file. Properties files eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka This tells the application where to look for the Eureka server. We will move this property to the config server file for this application, so we can comment this one out when we test everything together. However, for testing a siloed application, you will need it enabled here. Let's start on changes to service2, which interacts with service1. Applications: Service2 Just like with service1, we need to add the Eureka client dependency to service2's pom.xml to enable service discovery. 
XML <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-config</artifactId> </dependency> We also want to have this application use Spring Cloud Config for referencing the Eureka server, so we can retrofit that by adding the dependency. We will walk through the config file changes in a bit. Again, if we test locally, we would also need to add the following property to the application.properties file. Properties files eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka Since we will test everything together, it is commented out in the application for now. Instead, we will add a properties file for Spring Cloud Config to host, similar to our other services (next section). Next, we need to make some adjustments to the main application class to utilize Eureka over previously-defined hostname and port locations. Java public class Service2Application { public static void main(String[] args) { SpringApplication.run(Service2Application.class, args); } @Bean @LoadBalanced WebClient.Builder createLoadBalancedBuilder() { return WebClient.builder(); } @Bean WebClient client(WebClient.Builder builder) { return builder.baseUrl("http://mongo-client").build(); } } Eureka lets calling applications reference an application name, and it will map the hostname/port details behind-the-scenes, no matter where the application is running. This is where we see the mongo-client referenced in the second @Bean definition (11th line of above code). We also need to create a load-balanced bean (only required when using Eureka). Step-by-step, I created a WebClient.Builder bean, load balanced it with the @LoadBalanced annotation, then used that to create the actual WebClient bean that gets injected for use in method calls (in the BookController class). Applications: Service3 and Service4 Next, we need to add our other services to Eureka using the steps below. 1. Add the dependency to each pom.xml file. 2. For local testing, add the commented out property in the application.properties file. Now let's add the Eureka property to the Spring Cloud Config files for our applications! Spring Cloud Config For each config file the server hosts, we will need to add the following: YAML eureka: client: serviceUrl: defaultZone: http://goodreads-eureka:8761/eureka This tells the application where to look so it can register with Eureka. Full sample code for each config file is located in the related Github repository folder. We also need to create a whole new config file for service2 to use the config server. YAML spring: application: name: goodreads-client eureka: client: serviceUrl: defaultZone: http://goodreads-eureka:8761/eureka A sample is provided on the Github repository, but this file is created in a local repository initialized with git, and then referenced in the config server properties file for that project to serve up. More information on that is in a previous blog post. Let's make a few changes to the docker-compose.yml! docker-compose.yml We need to remove the dynamic environment property for service2 and to add the Eureka server project for Docker Compose to manage. YAML goodreads-svc2: #other properties... 
environment: - SPRING_APPLICATION_NAME=goodreads-client - SPRING_CONFIG_IMPORT=configserver:http://goodreads-config:8888 - SPRING_PROFILES_ACTIVE=docker We added environment variables for application name, config server location, and spring profiles like we see in our other services. Next, we need to add our Eureka server application to the compose file. YAML goodreads-eureka: container_name: goodreads-eureka image: jmreif/goodreads-eureka ports: - "8761:8761" environment: - EUREKA_CLIENT_REGISTER-WITH-EUREKA=false - EUREKA_CLIENT_FETCH-REGISTRY=false volumes: - $HOME/Projects/docker/goodreads/config-server/logs:/logs networks: - goodreads For our last step, we need to build all of the updated applications and create the Docker images. To do that we can execute the following commands from the project folder: Shell cd service1 mvn clean package -DskipTests=true cd ../service2 mvn clean package -DskipTests=true cd ../service3 mvn clean package -DskipTests=true cd ../service4 mvn clean package -DskipTests=true cd ../eureka-server mvn clean package Note: the Docker Compose file is using my pre-built images with Apple silicon architecture. If your machine has a different chip, you will need to do one of the following: 1) utilize the build option in the docker-compose.yml file (comment out image option), 2) create your own Docker images and publish to DockerHub (plus modify the docker-compose.yml file image options). We can run our system with the same command we have been using. Shell docker-compose up -d Note: If you are building local images with the `build` field in docker-compose.yml, then use the command `docker-compose up -d --build`. This will build the Docker containers each time on startup from the directories. Next, we can test all of our endpoints. Goodreads-config (mongo): command line with curl localhost:8888/mongo-client/docker. Goodreads-eureka: web browser with localhost:8761 and note the applications (might take a few minutes for everything to register). Goodreads-svc1: command line with curl localhost:8081/db, curl localhost:8081/db/books, and curl localhost:8081/db/book/623a1d969ff4341c13cbcc6b. Goodreads-svc2: command line with curl localhost:8080/goodreads and curl localhost:8080/goodreads/books. Goodreads-svc3: curl localhost:8082/db, curl localhost:8082/db/authors, and curl localhost:8082/db/author/623a48c1b6575ea3e899b164. Goodreads-config (neo4j): command line with curl localhost:8888/neo4j-client/docker. Neo4j database: ensure AuraDB instance is running (free instances are automatically paused after 3 days). Goodreads-svc4: curl localhost:8083/neo, curl localhost:8083/neo/reviews, and curl localhost:8083/neo/reviews/178186 or web browser with only URL. Bring everything back down again with the below command. Shell docker-compose down Wrapping Up! In this iteration of the project, we integrated service discovery through the Spring Cloud Netflix Eureka project. We created a Eureka server project, and then retrofitted our other services as Eureka clients with an added dependency. Finally, we integrated the new Eureka server project to Docker Compose and updated some of the options for the other services. We tested all of our changes by spinning up the entire microservices system and checking each of our endpoints. Keep following along in this journey to find out what comes next (or review previous iterations to see what we have accomplished). Happy coding! 
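For reference, the Eureka server main class described earlier needs only one extra annotation on top of a standard Spring Boot application class. A minimal sketch (the package name below is a placeholder; yours will match what you chose on the Spring Initializr):

Java
package com.example.eurekaserver; // placeholder package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Marks this Spring Boot application as the central Eureka service registry
@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}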
Resources

GitHub: microservices-level10 repository
Blog post: Baeldung's guide to Spring Cloud Netflix Eureka
Blog post: Config First vs. Discovery First
Documentation: Spring Cloud Netflix
Interview questions: What is Spring Cloud Netflix?

By Jennifer Reif CORE
How To Validate Three Common Document Types in Python

Regardless of the industry vertical we work in – whether that be somewhere in the technology, e-commerce, manufacturing, or financial fields (or some Venn diagram of them all) – we mostly rely on the same handful of standard document formats to share critical information between our internal teams and with external organizations. These documents are almost always on the move, bouncing across our networks as fast as our servers will allow them to. Many internal business automation workflows, for example – whether written with custom code or pieced together through an enterprise automation platform – are designed to process standard PDF invoice documents step by step as they pass from one stakeholder to another. Similarly, customized reporting applications are typically used to access and process Excel spreadsheets which the financial stakeholders of a given organization (internal or external) rely on. All the while, these documents remain beholden to strictly enforced data standards, and each application must consistently uphold these standards. That’s because every document, no matter how common, is uniquely capable of defying some specific industry regulation, containing an unknown error in its encoding, or even - at the very worst - hiding a malicious security threat behind a benign façade. As rapidly evolving business applications continue to make our professional lives more efficient, business users on any network place more and more trust in the cogs that turn within their suite of assigned applications to uphold high data standards on their behalf. As our documents travel from one location to another, the applications they pass through are ultimately responsible for determining the integrity, security, and compliance of each document’s contents. If an invalid PDF file somehow reaches its end destination, the application which processes it – and, by extension, those stakeholders responsible for creating, configuring, deploying, and maintaining the application in the first place – will have some difficult questions to answer. It’s important to know upfront, right away, whether there are any issues present within the documents our applications are actively processing. If we don’t have a way of doing that, we run the risk of allowing our own applications to shoot us in the foot. Thankfully, it’s straightforward (and standard) to solve this problem with layers of data validation APIs. In particular, document validation APIs are designed to fit seamlessly within the architecture of a file processing application, providing a quick feedback loop on each individual file they encounter to ensure the application runs smoothly when valid documents pass through and halting its process immediately when invalid documents are identified. There are dozens of common document types which require validation in a file processing application, and many of the most common among those, including PDF, Excel, and DOCX (which this article seeks to highlight), are all compressed and encoded in very unique ways, making it particularly vital to programmatically identify whether their contents are structured correctly and securely. Document Validation APIs The purpose of this article is to highlight three API solutions that can be used to validate three separate and exceedingly common document types within your various document processing applications: PDF, Excel XLSX, and Microsoft Word DOCX. 
These APIs are all free to use, requiring a free-tier API key and only a few lines of code (provided below in Python for your convenience) to call their services. While the process of validating each document type listed above is unique, the response body provided by each API is standardized, making it efficient and straightforward to identify whether an error was found within each document type and if so, what warnings are associated with that error. Below, I’ll quickly outline the general body of information supplied in each of the above document validation API's response: DocumentIsValid – This response contains a simple Boolean value indicating whether the document in question is valid based on its encoding. PasswordProtected – This response provides a Boolean value indicating whether the document in question contains password protection (which – if unexpected – can indicate an underlying security threat). ErrorCount – This response provides an integer reflecting the number of errors detected within the document in question. WarningCount – This response indicates the number of warnings produced by the API response independently of the error count. ErrorsAndWarnings – This response category includes more detailed information about each error identified within a document, including an error description, error path, error URI (uniform resource identifier, such as URL or URN), and IsError Boolean. Demonstration To use any of the three APIs referred to above, the first step is to install the Python SDK with a pip command provided below: pip install cloudmersive-convert-api-client With installation complete, we can turn our attention to the individual functions which call each individual API’s services. To call the PDF validation API, we can use the following code: Python from __future__ import print_function import time import cloudmersive_convert_api_client from cloudmersive_convert_api_client.rest import ApiException from pprint import pprint # Configure API key authorization: Apikey configuration = cloudmersive_convert_api_client.Configuration() configuration.api_key['Apikey'] = 'YOUR_API_KEY' # create an instance of the API class api_instance = cloudmersive_convert_api_client.ValidateDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration)) input_file = '/path/to/inputfile' # file | Input file to perform the operation on. try: # Validate a PDF document file api_response = api_instance.validate_document_pdf_validation(input_file) pprint(api_response) except ApiException as e: print("Exception when calling ValidateDocumentApi->validate_document_pdf_validation: %s\n" % e) To call the Microsoft Excel XLSX validation API, we can use the following code instead: Python from __future__ import print_function import time import cloudmersive_convert_api_client from cloudmersive_convert_api_client.rest import ApiException from pprint import pprint # Configure API key authorization: Apikey configuration = cloudmersive_convert_api_client.Configuration() configuration.api_key['Apikey'] = 'YOUR_API_KEY' # create an instance of the API class api_instance = cloudmersive_convert_api_client.ValidateDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration)) input_file = '/path/to/inputfile' # file | Input file to perform the operation on. 
try: # Validate a Excel document (XLSX) api_response = api_instance.validate_document_xlsx_validation(input_file) pprint(api_response) except ApiException as e: print("Exception when calling ValidateDocumentApi->validate_document_xlsx_validation: %s\n" % e) And finally, to call the Microsoft Word DOCX validation API, we can use the final code snippet supplied below: Python from __future__ import print_function import time import cloudmersive_convert_api_client from cloudmersive_convert_api_client.rest import ApiException from pprint import pprint # Configure API key authorization: Apikey configuration = cloudmersive_convert_api_client.Configuration() configuration.api_key['Apikey'] = 'YOUR_API_KEY' # create an instance of the API class api_instance = cloudmersive_convert_api_client.ValidateDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration)) input_file = '/path/to/inputfile' # file | Input file to perform the operation on. try: # Validate a Word document (DOCX) api_response = api_instance.validate_document_docx_validation(input_file) pprint(api_response) except ApiException as e: print("Exception when calling ValidateDocumentApi->validate_document_docx_validation: %s\n" % e) Please note that while these APIs do provide some basic security benefits during their document validation processes (i.e., identifying unexpected password protection on a file, which is a common method for sneaking malicious files through a network - the password can be supplied to an unsuspecting downstream user at a later date), they do not constitute fully formed security APIs, such as those that would specifically hunt for viruses, malware, and other forms of malicious content hidden within a file. Any document – especially those that originated outside of your internal network – should always be thoroughly vetted through specific security-related services (i.e., services equipped with virus and malware signatures) before entering or leaving your file storage systems.
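Since all three APIs return the standardized response body outlined above, a single helper can interpret whichever result comes back. A minimal sketch, assuming the Python SDK exposes the response fields with snake_case attribute names (adjust if your SDK version differs):

Python
def report_validation_result(api_response):
    # api_response is the object returned by any of the three validation calls above
    if api_response.document_is_valid:
        print("Document is valid")
        return
    print("Errors: %d, warnings: %d" % (api_response.error_count, api_response.warning_count))
    if api_response.password_protected:
        print("Note: the document is password protected")
    for issue in (api_response.errors_and_warnings or []):
        # each entry carries a description and an is_error flag, per the fields described above
        print("- %s (is_error=%s)" % (issue.description, issue.is_error))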

By Brian O'Neill CORE
How To Build a Node.js API Proxy Using http-proxy-middleware

A proxy is something that acts on behalf of something else. Your best friend giving your attendance in the boring lecture you bunked during college is a real-life example of proxying. When it comes to API development, a proxy is an interface between the client and the API server. The job of the interface is to proxy incoming requests to the real server.

The Use of a Proxy

But why might you need an API proxy in the first place?

The real API server may be external to your organization and unstable. A proxy can provide a more stable interface to the client.
The response from the API server might not be compatible with the client's expectations, and you want to modify the response in some form (for example, converting XML to JSON).
The real API server may be a temporary arrangement, and you don't want the clients to get impacted by any future changes.

There are several uses of an API proxy depending on the situation. In this post, you will learn how to build a Node.js API proxy using the http-proxy-middleware package. Just to make things clear, an API proxy is different from a forward proxy.

1. Node.js API Proxy Project Setup

First, you need to initialize the project by executing the below command in a project directory:

$ npm init -y

This will generate a basic package.json file with metadata about the project such as name, version, author, and scripts. Next, install a couple of packages for developing the Node.js API proxy:

$ npm install --save express http-proxy-middleware

express is a minimalistic web framework you can use to build API endpoints.
http-proxy-middleware is a simple Node.js package to create an API proxy.

After the package installation, define a start command for the project within the package.json file. You can use this command to start the application. Your project's package.json should look similar to the below example.

{ "name": "express-proxy-demo", "version": "1.0.0", "description": "Demo Application for Proxy Implementation in Node.js", "main": "index.js", "scripts": { "start": "node index.js" }, "author": "Saurabh Dashora", "license": "ISC", "dependencies": { "express": "^4.18.2", "http-proxy-middleware": "^2.0.6" } }

2. Creating a Node.js Proxy Using http-proxy-middleware

Time to create the actual application. The example application will proxy incoming requests to an API hosted elsewhere. For demonstration purposes, I recommend using the fake APIs hosted at JSONPlaceholder. See the below illustration: Node.js Proxy Setup

Check the below code from the index.js file that contains the logic for proxying requests.

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

const PORT = 3000;
const HOST = "localhost";
const API_URL = "https://jsonplaceholder.typicode.com";

app.get("/status", (req, res, next) => {
  res.send('This is a proxy service');
});

const proxyOptions = {
  target: API_URL,
  changeOrigin: true,
  pathRewrite: {
    [`^/api/posts`]: '/posts',
  },
}

const proxy = createProxyMiddleware(proxyOptions);

app.use('/api/posts', proxy)

app.listen(PORT, HOST, () => {
  console.log(`Proxy Started at ${HOST}:${PORT}`)
});

Let's understand each step in the above program:

Step 1: The first segment of the code contains the import statements for express and http-proxy-middleware.

Step 2: The next statement creates an application instance using a call to the express() function, followed by declaring a few important constants such as PORT, HOST, and API_URL.
Step 3: Implement an endpoint /status to describe the role of the application. This endpoint has nothing to do with proxying requests and simply provides a way to test our application.

Step 4: Next, declare an object proxyOptions. This is the configuration object for our API proxy. It contains a few important properties:

- target - It defines the target host where you want to proxy requests. In our case, this is https://jsonplaceholder.typicode.com.
- changeOrigin - This is set to true since we are proxying to a different origin.
- pathRewrite - This is a very important property where you define the rules for rewriting the path. For example, the expression [`^/api/posts`]: '/posts' routes all incoming requests directed at /api/posts to just /posts. In other words, it removes the /api prefix from the path.

Step 5: After declaring the configuration object, create the proxy object by calling the createProxyMiddleware() function with the proxyOptions object as input.

Step 6: Next, create a request handler for the path /api/posts and pass the proxy object as the handler for incoming requests.

Step 7: At the very end, start the Node.js API proxy server to listen on the port and host declared earlier.

You can start the application using the command npm run start:

> express-proxy-demo@1.0.0 start
> node index.js

[HPM] Proxy created: /  -> https://jsonplaceholder.typicode.com
[HPM] Proxy rewrite rule created: "^/api/posts" ~> "/posts"
Proxy Started at localhost:3000

Messages about the proxy setup indicate that the proxy is configured properly. If you visit the URL http://localhost:3000/api/posts/1 in the browser, you will get the response from the JSONPlaceholder APIs as below:

{
  "userId": 1,
  "id": 1,
  "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
  "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
}

This means that the Node.js API proxy is doing its job by proxying requests to the mock APIs hosted by JSONPlaceholder.

3. Node.js API Proxy Context Matching

The http-proxy-middleware package uses the path for proxying requests. For example, in the request http://localhost:3000/api/posts?title=test, the section /api/posts is the actual path. According to the official documentation of the http-proxy-middleware package, there are various ways in which context matching for the path takes place:

Path Matching

- createProxyMiddleware({...}) matches all paths. This means all requests will be proxied.
- createProxyMiddleware('/', {...}) also matches all paths.
- createProxyMiddleware('/api', {...}) only matches paths starting with /api.

Multiple Path Matching

- createProxyMiddleware(['/api', '/test', '/otherpath'], {...}) can be used to match multiple paths to a particular proxy configuration (a runnable sketch follows the wildcard examples below).

Wildcard Path Matching

For more fine-grained control, you can also use wildcards to match paths:

- createProxyMiddleware('**', {...}) matches any path, and all requests are proxied.
- createProxyMiddleware('**/*.html', {...}) matches any path which ends with .html.
- createProxyMiddleware('/api/**/*.html', {...}) matches requests ending with .html within the overall path /api.
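To tie these matching options together, below is a minimal sketch that mounts a single proxy for two path prefixes. The /auth path and the port number are arbitrary choices for this illustration, and the target reuses the JSONPlaceholder URL from earlier:

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// One proxy instance handles every request whose path starts with /api or /auth.
const multiPathProxy = createProxyMiddleware(['/api', '/auth'], {
  target: 'https://jsonplaceholder.typicode.com',
  changeOrigin: true,
});

app.use(multiPathProxy);

app.listen(3000, () => console.log('Proxy listening on port 3000'));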
Custom Path Matching

For even greater control, you can also provide a custom function to match the path for the API proxy. See the below example:

const filter = function (pathname, req) {
  return pathname.match('^/api') && req.method === 'GET';
};

const apiProxy = createProxyMiddleware(filter, {
  target: 'https://jsonplaceholder.typicode.com',
});

In the above example, only GET requests to the path /api are proxied.

Conclusion

With this post, you have built a very simple version of a Node.js API proxy. You can extend it further based on specific requirements. http-proxy-middleware is a simple but powerful library for building a Node.js API proxy server. The library provides several configurable properties to handle the proxy functionality, and there are many more options that you can leverage for your needs. The code for this demo is available on GitHub.

If you found the post useful, consider sharing it with friends and colleagues. In case of any queries, write them in the comments section below.

By Saurabh Dashora CORE
Debugging Threads and Asynchronous Code

This week, we’ll discuss one of the harder problems in programming: threading. In many cases, threading issues aren’t that difficult to debug, at least not at higher abstractions. Asynchronous programming is supposed to simplify the threading model, but oftentimes it makes a bad situation worse by detaching us from the core context. We discuss why that is and how debuggers can solve that problem. We also explain how you can create custom asynchronous APIs that are almost as easy to debug as synchronous applications!

Transcript

Welcome back to the seventh part of debugging at scale, where we don’t treat debugging like taking out the garbage.

Concurrency and parallelism are some of the hardest problems in computer science. But debugging them doesn’t have to be so hard. In this section, we’ll review some of the IDE capabilities related to threading, as well as various tricks and asynchronous code features.

Thread Views

Let’s start by discussing some of the elements we can enable in terms of the thread view. In the stack frame, we can look at all the current threads in the combo box above the stack frame. We can toggle the currently selected thread and see the stack for that thread and the thread status. Notice that, here, we chose to suspend all threads on this breakpoint. If the threads were running, we wouldn’t be able to see their stack as it’s constantly changing.

We can enable the threads view on the right-hand side pull-down menu to see more… As you can see, viewing the stack is more convenient in this state when we’re working with many threads. Furthermore, we can customize this view even more by going into the customize thread view and enabling additional options. The thread groups option is probably the most obvious change, as it arranges all the threads based on their groups and provides a pretty deep view of the hierarchy. Since most frameworks arrange their threads based on categories in convenient groups, this is often very useful when debugging many threads. Other than that, we can show additional information such as the file name, line number, class name, and argument types. I personally like showing everything, but this does create a somewhat noisy view that might not be as helpful.

Now that we switched on the grouping, we can see the hierarchy of the threads. This mode is a bit of a double-edged sword since you might miss out on an important thread in this case, but, if you have a lot of threads in a specific group, it might be the only way you can possibly work. I think we’ll see more features like this as Project Loom becomes the standard and the thread count increases exponentially. I’m sure this section will see a lot of innovation moving forward.

Debugging a Race Condition

Next, we’ll discuss debugging race conditions. The first step of debugging a race condition is a method breakpoint. I know what I said about them, but in this case we need it. Notice the return statement in this method includes a lot of code. If I place a breakpoint on the last line, it will happen before that code executes, and my coverage won’t include that part. So, let’s open the breakpoint dialog and expand it to the fully customizable dialog. Now we need to define the method breakpoint. I type the message and then get the thread name. I only use the method breakpoint for the exit portion because, if I used it for both, I’d have no way to distinguish between exit and enter events. I make this a tracepoint by unchecking the suspend option.
So now, we have a tracepoint that prints the name of the thread that just exited the method. I now do the exact same thing for a line breakpoint on the first line in the method. A line breakpoint is fine since entry to the method makes sense here. I change the label and make it also into a tracepoint instead of a breakpoint. Now we look at the console. I copy the name of the thread from the first printout in the console and add a condition to reduce the noise. If there’s a race condition, there must be at least one other thread, right? So, let’s remove one thread to be sure. Going down the list, it’s obvious that multiple threads enter the code. That means there’s a risk of a race condition. Now it means I need to read the logs and see if an enter for one thread happened before the exit of another thread. This is a bit of work, but it is doable.

Debugging a Deadlock

Next, let’s discuss deadlocks. Here we have two threads, each waiting on a monitor held by the other thread. This is a trivial deadlock, but debugging is trivial even for more complex cases. Notice the bottom two threads have a MONITOR status. This means they’re waiting on a lock and can’t continue until it’s released. Typically, you’d see this in Java as a thread waiting on a synchronized block. You can expand these threads and see what’s going on and which monitor is held by each thread. If you’re able to reproduce a deadlock or a race in the debugger, they are both simple to fix.

Asynchronous Stack Traces

Stack traces are amazing in synchronous code, but what do we do when we have asynchronous callbacks? Here we have a standard async example from JetBrains that uses a list of tasks and just sends them to the executor to perform on a separate thread. Each task sleeps and prints a random number. Nothing to write home about. As far as demos go, this is pretty trivial. Here’s where things get interesting. As you can see, there’s a line that separates the async stack from the current stack on the top. The IDE detected the invocation of a separate thread and kept the stack trace on the side. Then, when it needed the information, it took the stack trace from before and glued it to the bottom. The lower part of the stack trace is from the main thread, and the top portion is on the executor thread. Notice that this works seamlessly with Swing, executors, Spring’s Async annotation, etc. Very cool!

Asynchronous Annotations

That’s pretty cool, but there’s still a big problem. How does that work, and what if I have custom code? It works by saving the stack trace in places where we know an asynchronous operation is happening and then placing it later when needed. How does it connect the right traces? It uses variable values. In this demo, I created a simple listener interface. You’ll notice it has no asynchronous elements in the stack trace. By adding the async schedule and async execute annotations, I can determine the point where async code might launch, which is the schedule marker. I can place it on a variable to indicate the variable I want to use to look up the right stack trace. I do the same thing with execute and get custom async stack traces. I can put the annotations on a method, and the current object will be used instead.

Final Word

In the next video, we’ll discuss memory debugging. This goes beyond what the profiler provides; the debugger can be a complementary surgical tool you can use to pinpoint a specific problem and find its root cause. If you have any questions, please use the comments section. Thank you!

By Shai Almog CORE
Top 5 Node.js REST API Frameworks

Node.js has seen meteoric growth in recent years, making it one of the most popular programming languages on the web. By combining JavaScript on the front end with Node.js for backend development, JS developers can create powerful and scalable apps that offer benefits not found elsewhere.

How to Pick an API Framework

If you’re a Node.js developer looking to create a REST API with Node.js, there are many different JavaScript frameworks you can choose from. With so many options available, it can be difficult to know which one is right for your app development. In this article, we’ll go over the top 5 Node.js REST API frameworks and help you decide which one is best for your application programming interface (API) development.

When choosing a Node.js REST API framework, there are a few things to keep in mind. First, consider what kind of functionality you need from your API. Do you need a simple CRUD API or something more complex? Second, think about how much control you want over the structure of your API. Some Node.js frameworks provide more flexibility than others. Finally, take into account the size and scope of your application. Some frameworks are better suited for large web apps, while others work better for small ones.

- Ease of use: How easy is the framework to use? Is it well-documented?
- Performance: How fast is the framework? Does it scale well?
- Features: What features does the framework offer? Does it support everything you need?
- Community: Is there a large and active web developer community around the framework?

With all that in mind, let’s take a look at some of the top Node.js REST API frameworks:

Express

The Express framework is a popular Node.js framework for building web and mobile applications. It’s most commonly used as a router to create single-page, multi-page, and hybrid applications. Express.js is built on top of Node.js and provides an all-in-one package for managing servers, routes, and more.

Pros
- Links to databases like MySQL, MongoDB, etc.
- Uses middleware for request handling
- Asynchronous
- Express provides dynamic rendering of HTML pages by passing arguments to the template
- Open-source framework

Cons
- Issues with callbacks
- Errors are challenging to understand
- Unable to efficiently handle CPU-bound tasks that require large amounts of processing power

To learn more about the Express framework, you can check out the docs here.
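To give a sense of Express's routing style, here is a minimal sketch of a small REST endpoint using Express 4; the in-memory users resource and the port number are made up purely for illustration:

const express = require('express');

const app = express();
app.use(express.json());

// A single in-memory resource, just to show the routing style.
const users = [{ id: 1, name: 'Ada' }];

app.get('/users', (req, res) => res.json(users));

app.post('/users', (req, res) => {
  const user = { id: users.length + 1, ...req.body };
  users.push(user);
  res.status(201).json(user);
});

app.listen(3000, () => console.log('API listening on port 3000'));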
FeathersJS

FeathersJS is a JavaScript framework used for highly responsive real-time apps. It simplifies JavaScript development while still being advanced. FeathersJS enables JS developers to control data through RESTful resources, meaning they don’t need external data stores or databases. Node.js developers can also create REST APIs with Feathers commands, so it’s easier to enable your web app to communicate with third-party applications and services like Twilio or Stripe. You can also integrate FeathersJS into various JavaScript frameworks.

Pros
- Real-time API support
- Good documentation for the development process
- Supports both the JavaScript and TypeScript programming languages
- CLI scaffolding tool
- Supports both relational and non-relational databases

Cons
- It uses PassportJS, which does not provide SAML authentication out of the box
- Larger-scale real-time applications in FeathersJS could run into WebSockets issues

To learn more about the FeathersJS framework, you can check out the docs here.

LoopBack

LoopBack is a Node.js framework that can be used by JS developers and businesses to build on top of the service with TypeScript packages. It offers multiple advantages for application development, including the following:

- Health checks for monitoring
- Metrics for collecting data about system performance
- Distributed tracing for tracing issues across microservices
- Logging, so you can gather insights about what’s going on within your applications
- Built-in Docker files, so you can quickly build new projects without having to worry about any of the infrastructure

All this combined makes LoopBack one of the few Node.js frameworks that support proprietary databases like Oracle, Microsoft SQL Server, IBM Db2, etc. It also provides an easy bridge to SOAP services, making it one of only a handful of Node.js frameworks offering SOAP integration.

Pros
- Code is modular and structured
- Good ORM with available connectors
- Built-in user and access role feature
- Built-in API Explorer via Swagger

Cons
- Monolithic architecture
- Opinionated architecture
- Not as much community support
- Steep learning curve

To learn more about the LoopBack framework, you can check out the docs here.

NestJS

Nest is a framework for building modern Node.js applications with a high-performance architecture that takes advantage of the latest JavaScript features by using progressive JavaScript (TypeScript), functional programming principles, and reactive programming. It combines the best of object-oriented programming and functional reactive programming approaches, so you can choose your preference without being forced to conform to one particular ideology.

Pros
- NestJS includes a built-in dependency injection container, which makes it easier to keep your code modular and readable
- Can create software solutions where components can be taken out and changed, meaning there is no strong coupling between them
- The use of modular structures simplifies the division of a project into separate blocks and makes it easier to use external libraries in a project
- Easy to write simple API endpoints

Cons
- Developers know less about what’s going on under the hood, which means debugging is trickier and takes longer
- NestJS may be lacking in features compared to frameworks in other languages, such as Spring in Java or .NET in C#
- Complicated development process

To learn more about the NestJS framework, you can check out the docs here.

Moleculer

Moleculer is a Node.js framework that helps you build out microservices quickly and efficiently. It also gives you tools for fast recovery in the event of failure, so your services can continue running efficiently and reliably. Health monitoring ensures everything is up to date and any problems are quickly detected and fixed.

Pros
- Fast performance
- Open-source framework
- Durability
- Fault-tolerant framework with circuit breaker and load-balancer features

Cons
- Lack of documentation
- Lack of community support
- Limited options for setting up enterprise-grade APIs, along with other restrictions
- Not as feature-rich as other frameworks

To learn more about the Moleculer framework, you can check out the docs here.

Adding in API Analytics and Monetization

Building an API is only the start. Once your API endpoint is built, you’ll want to make sure that you are monitoring and analyzing incoming traffic. By doing this, you can identify potential issues and security flaws and determine how your API is being used. These can all be crucial aspects of growing and supporting your APIs.
As your API platform grows, you may shift your focus to API products. This is the move from simply building APIs into the domain of using the API as a business tool. Much like a more formal product, an API product needs to be managed and will likely be monetized. Building revenue from your APIs can be a great way to expand your business’s bottom line.

With a reliable API monetization solution, you can achieve all of the above. Such a solution can easily be purchased as part of an API analytics package. Or, if you have developer hours to spare, many companies opt to build their own solution. Ideally, whatever solution you choose will allow you to track API usage and sync it to a billing provider like Stripe, Recurly, or Chargebee.

Wrapping Up

In this article, we covered five of the best Node.js frameworks for developing RESTful APIs with the JavaScript programming language. We looked at a high-level overview of each and listed some points for consideration. We also discussed some key factors in deciding which API framework to use. Whichever framework you choose, we encourage you to examine the unique needs of your app before making the decision.

By Preet Kaur
AWS Fargate: Deploying Jakarta EE Applications on Serverless Infrastructures

Jakarta EE is a widely adopted and probably the most popular Java enterprise-grade software development framework. With the industry-wide adoption of microservices-based architectures, its popularity is skyrocketing, and over the last years it has become the preferred framework for professional enterprise software applications and services development in Java.

Jakarta EE applications have traditionally been deployed in run-times or application servers like Wildfly, GlassFish, Payara, JBoss EAP, WebLogic, WebSphere, and others, which have sometimes been criticized for their apparent heaviness and high costs. With the advent and ubiquity of the cloud, these constraints are becoming less restrictive, especially thanks to serverless technology, which provides increased flexibility at low, predictable costs. This article demonstrates how to lighten Jakarta EE run-times, servers, and applications by deploying them on AWS serverless infrastructure.

Overview of AWS Fargate

As documented in the User Guide, AWS Fargate is a serverless paradigm used in conjunction with AWS ECS (Elastic Container Service) to run containerized applications. In a nutshell, this concept allows us to:

- Package applications in containers
- Specify the host operating system, the CPU architecture and capacity, the memory requirements, the network, and the security policies
- Execute the whole resulting stack in the cloud

Running containers with AWS ECS requires handling a so-called launch type (i.e., an abstraction layer) defining the way to execute standalone tasks and services. There are several launch types that might be defined for AWS ECS-based containers, and Fargate is one of them. It represents the serverless way to host AWS ECS workloads and consists of components like clusters, tasks, and services, as explained in the AWS Fargate User Guide. The figure below, extracted from the AWS Fargate documentation, highlights its general architecture.

As the figure above shows, in order to deploy serverless applications running as ECS containers, we need a fairly complex infrastructure consisting of:

- A VPC (Virtual Private Cloud)
- An ECR (Elastic Container Registry)
- An ECS cluster
- A Fargate launch type per ECS cluster node
- One or more tasks per Fargate launch type
- An ENI (Elastic Network Interface) per task

Now, if we want to deploy Jakarta EE applications in the AWS serverless cloud as ECS-based containers, we need to:

- Package the application as a WAR.
- Create a Docker image containing the Jakarta EE-compliant run-time or application server with the WAR deployed.
- Register this Docker image with the ECR service.
- Define a task to run the Docker container built from the previously defined image.

The AWS console allows us to perform all these operations in a user-friendly way; nevertheless, the process is quite time-consuming and laborious. Using AWS CloudFormation, or even the AWS CLI, we could automate it, of course, but the good news is that we have a much better alternative, as explained below.

Overview of AWS Copilot

AWS Copilot is a CLI (Command Line Interface) tool that provides application-first, high-level commands to simplify modeling, creating, releasing, and managing production-ready containerized applications on Amazon ECS from a local development environment.
The figure below shows its software architecture.

Using AWS Copilot, developers can easily manage the required AWS infrastructure from their local machine by executing simple commands, which result in the creation of deployment pipelines fulfilling all the required resources enumerated above. In addition, AWS Copilot can also create extra resources like subnets, security groups, load balancers, and others. Here is how.

Deploying Payara 6 Applications on AWS Fargate

Installing AWS Copilot is as easy as downloading and unzipping an archive, as the documentation guides you through. Once installed, run the command below to check whether everything works:

~$ copilot --version
copilot version: v1.24.0

The first thing to do in order to deploy a Jakarta EE application is to develop and package it. A very simple way to do that for test purposes is by using the Maven archetype jakartaee10-basic-archetype, as shown below:

mvn -B archetype:generate \
  -DarchetypeGroupId=fr.simplex-software.archetypes \
  -DarchetypeArtifactId=jakartaee10-basic-archetype \
  -DarchetypeVersion=1.0-SNAPSHOT \
  -DgroupId=com.exemple \
  -DartifactId=test

This Maven archetype generates a simple, complete Jakarta EE 10 project with all the required dependencies and artifacts to be deployed on Payara 6. It also generates all the required components to perform integration tests of the exposed JAX-RS API (for more information on this archetype, please see here). Among other generated artifacts, the following Dockerfile will be of real help in our AWS Fargate cluster setup:

Dockerfile

FROM payara/server-full:6.2022.1
COPY ./target/test.war $DEPLOY_DIR

Now that we have our test Jakarta EE application, as well as the Dockerfile required to run Payara Server 6 with this application deployed, let's use AWS Copilot to start the process of creating the serverless infrastructure. Simply run the following command:

Shell

$ copilot init
Note: It's best to run this command in the root of your Git repository.
Welcome to the Copilot CLI! We're going to walk you through some questions
to help you get set up with a containerized application on AWS.
An application is a collection of containerized services that operate together.

Application name: jakarta-ee-10-app
Workload type: Load Balanced Web Service
Service name: lb-ws
Dockerfile: test/Dockerfile
parse EXPOSE: no EXPOSE statements in Dockerfile test/Dockerfile
Port: 8080
Ok great, we'll set up a Load Balanced Web Service named lb-ws in application jakarta-ee-10-app listening on port 8080.

* Proposing infrastructure changes for stack jakarta-ee-10-app-infrastructure-roles
- Creating the infrastructure for stack jakarta-ee-10-app-infrastructure-roles [create complete] [76.2s]
- A StackSet admin role assumed by CloudFormation to manage regional stacks [create complete] [34.0s]
- An IAM role assumed by the admin role to create ECR repositories, KMS keys, and S3 buckets [create complete] [33.3s]
* The directory copilot will hold service manifests for application jakarta-ee-10-app.
* Wrote the manifest for service lb-ws at copilot/lb-ws/manifest.yml
Your manifest contains configurations like your container size and port (:8080).
- Update regional resources with stack set "jakarta-ee-10-app-infrastructure" [succeeded] [0.0s]

All right, you're all set for local development.
Deploy: No

No problem, you can deploy your service later:
- Run `copilot env init` to create your environment.
- Run `copilot deploy` to deploy your service.
- Be a part of the Copilot community !
Ask or answer a question, submit a feature request...
Visit https://aws.github.io/copilot-cli/community/get-involved/ to see how!

The process of serverless infrastructure creation conducted by AWS Copilot is based on a dialog during which the utility asks questions and accepts your answers. The first question concerns the name of the serverless application to be deployed. We choose to name it jakarta-ee-10-app.

In the next step, AWS Copilot asks for the workload type of the new service to be deployed and proposes a list of such workload types, from which we need to select Load Balanced Web Service. The name of this new service is lb-ws.

Next, AWS Copilot looks for Dockerfiles in the local workspace and displays a list from which you either have to choose one, create a new one, or use an already existing image, in which case you need to provide its location (i.e., a DockerHub URL). We choose the Dockerfile we just created previously, when we ran the Maven archetype. It only remains for us to define the TCP port number that the newly created service will use for HTTP communication. By default, AWS Copilot proposes TCP port 80, but we override it with 8080.

Now all the required information has been collected, and the process of infrastructure generation may start. This process consists of creating two CloudFormation stacks, as follows:

- A first CloudFormation stack containing the definition of the required IAM security roles
- A second CloudFormation stack containing the definition of a template whose execution creates a new ECS cluster

In order to check the result of the execution of the AWS Copilot initialization phase, you can connect to your AWS console and go to the CloudFormation service, where you will see something similar to this:

As you can see, the two mentioned CloudFormation stacks appear on the screen copy above, and you can click on them in order to inspect the details. We have just finished the initialization phase of our serverless infrastructure creation driven by AWS Copilot. Now, let's create our development environment:

Shell

$ copilot env init
Environment name: dev
Credential source: [profile default]
Default environment configuration? Yes, use default.
* Manifest file for environment dev already exists at copilot/environments/dev/manifest.yml, skipping writing it.
- Update regional resources with stack set "jakarta-ee-10-app-infrastructure" [succeeded] [0.0s]
- Update regional resources with stack set "jakarta-ee-10-app-infrastructure" [succeeded] [128.3s]
- Update resources in region "eu-west-3" [create complete] [128.2s]
- ECR container image repository for "lb-ws" [create complete] [2.2s]
- KMS key to encrypt pipeline artifacts between stages [create complete] [121.6s]
- S3 Bucket to store local artifacts [create in progress] [99.9s]
* Proposing infrastructure changes for the jakarta-ee-10-app-dev environment.
- Creating the infrastructure for the jakarta-ee-10-app-dev environment. [create complete] [65.8s]
- An IAM Role for AWS CloudFormation to manage resources [create complete] [25.8s]
- An IAM Role to describe resources in your environment [create complete] [27.0s]
* Provisioned bootstrap resources for environment dev in region eu-west-3 under application jakarta-ee-10-app.
Recommended follow-up actions:
- Update your manifest copilot/environments/dev/manifest.yml to change the defaults.
- Run `copilot env deploy --name dev` to deploy your environment.
AWS Copilot starts by asking us what name we want to give to our development environment and continues by proposing to use either the current user's default credentials or some temporary credentials created for the purpose. We choose the first alternative. Then, AWS Copilot creates a new stack set, named jakarta-ee-10-app-infrastructure, containing the following infrastructure elements:

- An ECR container image repository to hold the Docker image resulting from the execution of the build operation on the Dockerfile selected during the previous step
- A new KMS (Key Management Service) key, to be used for encrypting the artifacts belonging to our development environment
- An S3 (Simple Storage Service) bucket, to be used to store the artifacts belonging to our development environment
- A new dedicated CloudFormation IAM role that aims at managing resources
- A new dedicated IAM role to describe the resources

This operation may take a significant time, depending on your bandwidth, and, once finished, the development environment, named jakarta-ee-10-app-dev, is created. You can see its details in the AWS console, as shown below.

Notice that the environment creation can also be performed as an additional step of the first initialization phase. As a matter of fact, the copilot init command, as shown above, ends by asking whether you want to create a test environment. Answering yes to this question allows you to proceed immediately with the test environment creation and initialization. For pedagogical reasons, here we preferred to separate these two actions.

The next phase is the deployment of our development environment:

Shell

$ copilot env deploy
Only found one environment, defaulting to: dev
* Proposing infrastructure changes for the jakarta-ee-10-app-dev environment.
- Creating the infrastructure for the jakarta-ee-10-app-dev environment.
[update complete] [74.2s]
- An ECS cluster to group your services [create complete] [2.3s]
- A security group to allow your containers to talk to each other [create complete] [0.0s]
- An Internet Gateway to connect to the public internet [create complete] [15.5s]
- Private subnet 1 for resources with no internet access [create complete] [5.4s]
- Private subnet 2 for resources with no internet access [create complete] [2.6s]
- A custom route table that directs network traffic for the public subnets [create complete] [11.5s]
- Public subnet 1 for resources that can access the internet [create complete] [2.6s]
- Public subnet 2 for resources that can access the internet [create complete] [2.6s]
- A private DNS namespace for discovering services within the environment [create complete] [44.7s]
- A Virtual Private Cloud to control networking of your AWS resources [create complete] [12.7s]

The CloudFormation template created during the previous step is now executed, and it results in the creation and initialization of the following infrastructure elements:

- A new ECS cluster, grouping all the required stateless artifacts
- An IAM security group to allow communication between containers
- An Internet Gateway so that the new service is publicly accessible
- Two private and two public subnets
- A new routing table with the required rules to allow traffic between the public and private subnets
- A private Route 53 (DNS) namespace
- A new VPC (Virtual Private Cloud) that controls the networking of the AWS resources created during this step

Take some time to navigate through your AWS console pages and inspect the infrastructure that AWS Copilot has created for you. As you can see, it is an extensive one, and it would have been laborious and time-consuming to create it manually.

The sharp-eyed reader has certainly noticed that creating and deploying an environment, like our development one, doesn't attach any service to it. In order to do that, we need to proceed with our last step: the service deployment. Simply run the command below:

Shell

$ copilot deploy
Only found one workload, defaulting to: lb-ws
Only found one environment, defaulting to: dev
Sending build context to Docker daemon 13.67MB
Step 1/2 : FROM payara/server-full:6.2022.1
 ---> ada23f507bd2
Step 2/2 : COPY ./target/test.war $DEPLOY_DIR
 ---> Using cache
 ---> f1b0fe950252
Successfully built f1b0fe950252
Successfully tagged 495913029085.dkr.ecr.eu-west-3.amazonaws.com/jakarta-ee-10-app/lb-ws:latest
WARNING! Your password will be stored unencrypted in /home/nicolas/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Using default tag: latest
The push refers to repository [495913029085.dkr.ecr.eu-west-3.amazonaws.com/jakarta-ee-10-app/lb-ws]
d163b73cdee1: Pushed
a9c744ad76a8: Pushed
4b2bb262595b: Pushed
b1ed0705067c: Pushed
b9e6d039a9a4: Pushed
99413601f258: Pushed
d864802c5436: Pushed
c3f11d77a5de: Pushed
latest: digest: sha256:cf8a116279780e963e134d991ee252c5399df041e2ef7fc51b5d876bc5c3dc51 size: 2004
* Proposing infrastructure changes for stack jakarta-ee-10-app-dev-lb-ws
- Creating the infrastructure for stack jakarta-ee-10-app-dev-lb-ws [create complete] [327.9s]
- Service discovery for your services to communicate within the VPC [create complete] [2.5s]
- Update your environment's shared resources [update complete] [144.9s]
- A security group for your load balancer allowing HTTP traffic [create complete] [3.8s]
- An Application Load Balancer to distribute public traffic to your services [create complete] [124.5s]
- A load balancer listener to route HTTP traffic [create complete] [1.3s]
- An IAM role to update your environment stack [create complete] [25.3s]
- An IAM Role for the Fargate agent to make AWS API calls on your behalf [create complete] [25.3s]
- A HTTP listener rule for forwarding HTTP traffic [create complete] [3.8s]
- A custom resource assigning priority for HTTP listener rules [create complete] [3.5s]
- A CloudWatch log group to hold your service logs [create complete] [0.0s]
- An IAM Role to describe load balancer rules for assigning a priority [create complete] [25.3s]
- An ECS service to run and maintain your tasks in the environment cluster [create complete] [119.7s]
Deployments
Revision  Rollout      Desired  Running  Failed  Pending
PRIMARY   1 [completed]  1        1        0       0
- A target group to connect the load balancer to your service [create complete] [0.0s]
- An ECS task definition to group your containers and run them on ECS [create complete] [0.0s]
- An IAM role to control permissions for the containers in your tasks [create complete] [25.3s]
* Deployed service lb-ws.
Recommended follow-up action:
- You can access your service at http://jakar-Publi-H9B68022ZC03-1756944902.eu-west-3.elb.amazonaws.com over the internet.

The listing above shows the creation of all the resources required to produce our serverless infrastructure containing Payara Server 6, together with the test Jakarta EE 10 application deployed into it. This infrastructure consists of a CloudFormation stack named jakarta-ee-10-app-dev-lb-ws containing, among others, security groups, listeners, IAM roles, dedicated CloudWatch log groups, and, most importantly, an ECS task definition with a Fargate launch type that runs the Payara Server 6 platform. This makes our test application, and its associated exposed JAX-RS API, available at its public URL. You can test it by simply running the curl utility:

Shell

curl http://jakar-Publi-H9B68022ZC03-1756944902.eu-west-3.elb.amazonaws.com/test/api/myresource
Got it !

Here we have appended to the public URL the relative URN of our JAX-RS API, as displayed by AWS Copilot. You may perform the same test using your preferred browser. Also, if you prefer to run the provided integration test, you may slightly adapt it by amending the service URL. Don't hesitate to go to your AWS console to inspect in detail the serverless infrastructure created by AWS Copilot.
And once finished, don't forget to clean up your workspace by running the command below which removes the CloudFormation stack jakarta-ee-10-app-lb-ws with all its associated resources: Shell $ copilot app delete Sure? Yes * Delete stack jakarta-ee-10-app-dev-lb-ws - Update regional resources with stack set "jakarta-ee-10-app-infrastructure" [succeeded] [12.4s] - Update resources in region "eu-west-3" [update complete] [9.8s] * Deleted service lb-ws from application jakarta-ee-10-app. * Retained IAM roles for the "dev" environment * Delete environment stack jakarta-ee-10-app-dev * Deleted environment "dev" from application "jakarta-ee-10-app". * Cleaned up deployment resources. * Deleted regional resources for application "jakarta-ee-10-app" * Delete application roles stack jakarta-ee-10-app-infrastructure-roles * Deleted application configuration. * Deleted local .workspace file. Enjoy!

By Nicolas Duminil CORE
Experience the Future of Communication With GPT-3: Consume GPT-3 API Through MuleSoft

ChatGPT

ChatGPT is a chatbot that uses the GPT-3 (Generative Pre-trained Transformer 3) language model developed by OpenAI to generate human-like responses to user input. The chatbot is designed to be able to carry on conversations with users in a natural, conversational manner and can be used for a variety of purposes, such as customer service, online support, and virtual assistance.

OpenAI API

OpenAI offers a number of Application Programming Interfaces (APIs) that allow developers to access the company's powerful AI models and use them in their own projects. These APIs provide access to a wide range of capabilities, including natural language processing, computer vision, and robotics. The APIs are accessed via HTTP requests, which can be sent from a variety of programming languages, including Python and Java, and from integration platforms such as MuleSoft. OpenAI also provides client libraries for several popular programming languages to make it easy for developers to get started. Most interestingly, you can train the model on your own dataset and make it work best for a specific domain. Fine-tuning the model will make it more accurate, perform better, and give you the best results.

MuleSoft

MuleSoft is a company that provides integration software for connecting applications, data, and devices. Its products include a platform for building APIs (Application Programming Interfaces) and integrations, as well as tools for managing and deploying those APIs and integrations. MuleSoft's products are designed to help organizations connect their systems and data, enabling them to more easily share information and automate business processes.

Steps To Call the GPT-3 Model API in MuleSoft

Account: Create an account using your email id or continue with a Google or Microsoft account.

Authentication: The OpenAI API uses API keys for authentication. Click API Keys to generate the API key. Do not share your API key with others or expose it in the browser or other client-side code. In order to protect the security of your account, OpenAI may also automatically rotate any API key that has been found to have leaked publicly. (Source: documentation)

The OpenAI API is powered by a family of models with different capabilities and price points. The highly efficient GPT-3 model is categorized into four models based on the power level suitable for their task:

Model | Description | Max Request (Tokens) | Training Data (Up to)
text-davinci-003 | Most capable GPT-3 model. Performs any task with higher quality, longer output, and better instruction-following than the other models | 4000 | Jun-21
text-curie-001 | Very capable, but faster and lower cost than Davinci | 2048 | Jun-19
text-babbage-001 | Very fast and lower cost; performs straightforward tasks | 2048 | Jul-19
text-ada-001 | Performs simple tasks | 2048 | Aug-19

Code snippet to call the GPT-3 model (OpenAI API) in a Mule application:

XML

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core" xmlns:http="http://www.mulesoft.org/schema/mule/http"
  xmlns="http://www.mulesoft.org/schema/mule/core"
  xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd">
  <http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="e5a1354b-1cf2-4963-a89f-36d035c95045" >
    <http:listener-connection host="0.0.0.0" port="8091" />
  </http:listener-config>
  <flow name="chatgptdemoFlow" doc:id="b5747310-6c6d-4e1a-8bab-6fdfb1d6db3d" >
    <http:listener doc:name="Listener" doc:id="1819cccd-1751-4e9e-8e71-92a7c187ad8c" config-ref="HTTP_Listener_config" path="completions"/>
    <logger level="INFO" doc:name="Logger" doc:id="3049e8f0-bbbb-484f-bf19-ab4eb4d83cba" message="Calling completaion API of openAI"/>
    <http:request method="POST" doc:name="Calling completaions API" doc:id="390d1af1-de73-4640-b92c-4eaed6ff70d4" url="https://api.openai.com/v1/completions?oauth_consumer_key&oauth_token&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1673007532&oauth_nonce=WKkU9q&oauth_version=1.0&oauth_signature=RXuuOb4jqCef9sRbTmhSfRwXg4I=">
      <http:headers ><![CDATA[#[output application/java
---
{
  "Authorization" : "Bearer sk-***",
  "Content-Type" : "application/json"
}]]]></http:headers>
    </http:request>
    <ee:transform doc:name="Parse Response" doc:id="061cb180-48c9-428e-87aa-f4f55a39a6f2" >
      <ee:message >
        <ee:set-payload ><![CDATA[%dw 2.0
import * from dw::core::Arrays
output application/json
---
(((payload.choices[0].text splitBy ".") partition ((item) -> item startsWith "\n" ))[1] ) map "$$":$]]></ee:set-payload>
      </ee:message>
    </ee:transform>
  </flow>
</mule>

Make a call to the Mule application through an API client. For example, I am using Postman.

Request payload:

{
  "model": "text-davinci-003",
  "prompt": "Create a list of 5 advantages of MuleSoft:",
  "max_tokens": 150
}

- model: The OpenAI API is powered by a family of models with different capabilities and price points.
- prompt: The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
- max_tokens: The maximum number of tokens to generate in the completion.

For more details, refer to the API reference.
Response payload:

[
  {
    "0": " Easy Integration: Mulesoft provides easy integration with existing systems and technologies, making it faster and easier to start projects or add new services or technologies to systems"
  },
  {
    "1": " Flexible Architecture: Mulesoft offers a highly configurable and flexible architecture that allows users to quickly and easily customize their solutions and make changes when needed"
  },
  {
    "2": " High Performance: Mulesoft's rapid response times and high throughputs make it ideal for mission critical applications"
  },
  {
    "3": " Cloud Ready: Mulesoft supports cloud friendly approaches such as microservices, containers and serverless integration architectures"
  },
  {
    "4": " Efficient Development Cycles: The Mulesoft Anypoint platform includes a range of tools and services that speed up and streamline development cycles, helping to reduce the time and effort associated with creating applications"
  }
]

Where GPT-3 Could Potentially Be Used

- Content Creation: The API can be utilized to generate written materials and translate them from one language to another.
- Software Code Generation: The API can be used to generate software code and simplify complicated code, making it more comprehensible for new developers.
- Sentiment Analysis: The API can be used to determine the sentiment of text, allowing businesses and organizations to understand the public's perception of their products, services, or brand.
- Complex Computation: The API can assist in large data processing and handle complex calculations efficiently.

Limitations

Like all natural language processing (NLP) systems, GPT-3 has limitations in its ability to understand and respond to user inputs. Here are a few potential limitations of GPT-3:

- Reliability: Chatbots may not always produce acceptable outputs, and determining the cause can be difficult. Complex queries may not be understood or responded to appropriately.
- Interpretability: The chatbot may not recognize all variations of user input, leading to misunderstandings or inappropriate responses if not trained on diverse data.
- Accuracy: A chatbot, being a machine learning model, can make mistakes. Regular review and testing are needed to ensure proper functioning.
- Efficiency: GPT-3's test-time sample efficiency is close to that of humans, but pre-training still involves a large amount of text.

Overall, GPT-3 is a useful tool that can improve communication and productivity, but it is important to be aware of its limitations and use it appropriately.

Conclusion

As chatbots continue to advance, they have the potential to significantly improve how we communicate and interact with technology. This is just an example of the various innovative approaches that are being developed to enhance chatbot performance and capabilities. As we continue to explore the potential of GPT-3 and other language generation tools, it's important that we remain authentic and responsible in our use of these tools. This means being transparent about their role in the writing process and ensuring that the resulting text is of high quality and accuracy.

References

OpenAI website
Beta OpenAPI website

By Mukesh Thakur CORE
Essential Protocols for Python Developers to Prevent SQL Injection Attacks

You are going to encounter a number of issues as a Python developer. Mastering the syntax of coding isn't enough to write functioning, stable applications. You also have to familiarize yourself with the different challenges the final application might face, including Python security risks.

Many of the discussions about developing secure applications focus on using machine learning to protect customers, such as helping them avoid holiday scams. However, it is equally important to ensure the applications themselves are not vulnerable to cybercriminals. One of the challenges that Python developers must cope with is guarding their applications against cyberattacks.

One of the biggest security problems that haunt software developers is SQL injection. Bitninja reports that SQL injections account for over 50% of all application attacks. This technique can allow SQL code to run through the client application without any restrictions, and thus silently alter data. System administrators often don't notice these changes until it is too late. This is a very serious security risk that can have daunting consequences if it is not prevented. Therefore, it is important to understand the risks of SQL injection and the steps that should be taken to protect Python applications from these types of attacks.

What Are SQL Injections and What Threat Do They Pose to Python Applications?

As mentioned earlier, SQL injections allow SQL code to be executed from the client application directly against the database. These attacks allow hackers to change data unrestrictedly and without the consent of the system administrators. This is a very serious problem that can seriously compromise the security of a system if it is not thwarted in time. SQL attacks can have a wide range of consequences for companies, such as:

- Website damage: An attacker can delete or modify a company's database and consequently destroy the website.
- Data theft or leakage: Many attacks aim to steal confidential data such as trade secrets, sensitive information, intellectual property, and — more often — information about the company's users or customers. This information can then be sold to competitors to gain commercial advantage.
- Privilege escalation: An attacker could use the contents of a breached database to gain access to other parts of a company's internal network.
- Loss of reputation and trustworthiness: It is often difficult for a company to regain the trust of its customers after a cyberattack.

An analysis by the Open Web Application Security Project shows that there were over 274,000 SQL injection attacks on applications in 2021. That figure is likely growing each year.

To better understand this problem, let's take a practical example. Imagine a simple application that updates customers by passing their name and age. Focusing only on the back end of the application and without any SQL injection checking, the code responsible for updating customers would basically be as follows:

name = "'John'"
cursor.execute(f"UPDATE customer SET name={name} WHERE idcustomer=13")

The above code will update the name of the customer with id 13 to "John". It appears to work effectively so far. However, it has some serious security risks bubbling under the surface. At first glance, it seems that we are just updating the name of a customer in our database.
However, imagine that instead of passing just 'John' to the name variable, we pass some SQL code:

name = "'Carlos' , age = 80"
cursor.execute(f"UPDATE customer SET name={name} WHERE idcustomer=13")

The above code will allow the name and the age of the customer with an id of 13 to be changed simultaneously, without the permission or consent of the system administrator. It may seem harmless to let a customer's age be edited this way, but imagine a banking system with the same flaw allowing the balance value to be changed by the user. This is a complex situation, which can have untold consequences if it is not remedied. But what steps can be taken to resolve it?

What Can Python Developers Do to Prevent SQL Injection Attacks?

To solve the SQL injection problem, we need to parameterize the queries used in our program. We need to make sure that we do not allow SQL code to be executed from the client side of the application. To do this, we alter the query as follows:

name = "'Carlos' , age = 80"
cursor.execute("UPDATE customer SET name=%(name)s WHERE idcustomer=13", ({'name': name, }))

With the above code, we will no longer execute the code present in the name variable as part of the UPDATE. Instead, the entire contents of the variable will be stored as the name of the customer with id 13. Therefore, the name of the customer with id 13 will be "'Carlos' , age = 80" and his age will remain unchanged.

Conclusion

This way, we no longer allow the fields of a particular table to be changed without the system's permission, and thus ensure much more security for our application.

By Ryan Kh
OAuth Grant Types Guide

In today’s text, I will describe and explain OAuth Grant Types — the processes of authorization. I will start with a quick recap of the most basic OAuth roles.

Recap of OAuth Roles

Although you probably know them already, I want to make a quick recap so that everyone reading this is on the same page. There are four main concepts in OAuth:

- Resource owner — the entity capable of granting access to the protected resource. When the resource owner is a person, they are referred to as an end-user.
- Resource server — the server hosting the protected resources. It is capable of accepting and responding to protected resource requests using access tokens.
- Client — the application making protected resource requests on behalf of the resource owner and with its authorization.
- Authorization server — the server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization.

What Is a Grant Type?

A Grant Type is a process of obtaining an access token. It describes the exact sequence of steps involved in handling a particular request. The Grant Type affects how the applications involved in handling a particular request communicate with one another. Because of that, an OAuth service must be configured to support a particular Grant Type before a client application can initiate it. The client application specifies which grant type it wants to use in the initial authorization request it sends to the OAuth service. In most cases, the chosen Grant Type is passed in the grant_type parameter of the authorization request. Besides, the Grant Type determines the shape and form of the access token.

Although the original authorization protocol described in the OAuth specification looks quite complex, modern tools do a lot of work behind the scenes. Thanks to that, we usually have to send only one request similar in structure to the one below:

https://api.authorization-server.com/token?
  grant_type=authorization_code&
  code=AUTH_CODE_HERE&
  redirect_uri=REDIRECT_URI&
  client_id=CLIENT_ID&
  client_secret=SECRET
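As an illustration of the request above, here is a minimal Node.js sketch (assuming Node 18+ for the global fetch API) that performs the same token exchange; the endpoint, redirect URI, and client credentials are the placeholders from the example, not a real provider:

// Exchange an authorization code for an access token (authorization_code grant).
async function exchangeCode(code) {
  const response = await fetch('https://api.authorization-server.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code,
      redirect_uri: 'REDIRECT_URI',
      client_id: 'CLIENT_ID',
      client_secret: 'SECRET',
    }),
  });
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  // Typically contains access_token, token_type, expires_in, and optionally refresh_token.
  return response.json();
}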
As for naming conventions, Grant Types are often referred to as "Flows" or "Specifications." Remember that these three terms may describe the same thing.

OAuth Grant Types

The original OAuth specification describes four different grant types:

- Authorization Code
- Implicit (or Implicit Flow)
- Resource Owner Password Credentials (or Password Grant)
- Client Credentials

We can also count Refresh Token as a separate grant type, because it has its own process description and relies on exchanging refresh tokens for access tokens. Moreover, OAuth provides a mechanism for creating new custom grant types, so if you have enough determination, you can create your own grant type.

Over the years, a few things have changed. The Password and Implicit grants became obsolete: they are still supported, but they are no longer recommended, and using them is considered a bad practice. We also got a new grant type, Device Code, and a security extension called Proof Key for Code Exchange (PKCE).

Grant Types Lexicon

Authorization Code

Authorization Code is a grant type used to obtain access and refresh tokens. It works by exchanging an authorization code for an access token. The Authorization Code grant is optimized for confidential clients but can also work with public clients. It is one of the grants that require redirection to work. The client must be able to interact with the resource owner's user agent (typically a web browser) and receive incoming requests (via redirection) from the authorization server. The authorization code itself is embedded in a redirect URL from the resource owner's agent and is used to request an access token. What is notable here is that the resource owner's credentials are never shared with the client. Moreover, the direct transmission of the access token to the client, without passing it through the resource owner's user agent, eliminates the possibility of exposing our valuable tokens to unwanted agents.

Implicit

This grant type is a simplified version of the Authorization Code grant. It is optimized for clients running in the browser. It immediately returns an access token, omitting the authorization code exchange step. The Implicit grant can improve the responsiveness and efficiency of some clients because it reduces the overall number of steps needed to achieve the final result. On the other hand, this grant type does not authenticate the client, and the token can be exposed to any other agent with access to the resource owner's user app. It means that we can expose our app to many new security attacks. This is the reason why the Implicit grant is no longer recommended by the OAuth creators. Instead, they favor the Authorization Code grant with the PKCE extension. What is more, the OAuth 2.0 Security Best Practice recommends not using the Implicit grant at all.

Resource Owner Password Credentials

In my opinion, it is the simplest grant type around, as it requires only one request to retrieve an access token. It works by exchanging user credentials (for example, username and password) for an access token. This grant type can eliminate the need for the client to store the resource owner's credentials for future use by exchanging the credentials for a long-lived access token or refresh token. Previously it was believed that it should be used only in cases where there is a high degree of trust between the resource owner and the client and when using other grant types is not possible. However, because of the direct access to user credentials, this grant type is no longer recommended. The latest OAuth 2.0 Security Best Practice disallows the Resource Owner Password Credentials grant entirely.

Client Credentials

It is a grant type used to obtain an access token without user context — using only client credentials. It is the best grant for use cases where the authorization scope is limited to the protected resources under the control of the client or where the protected resources were previously arranged with the authorization server. Moreover, this grant requires the use of confidential clients.

Refresh Token

This grant type is probably the most crucial one. It allows the client to achieve continuous usage of valid tokens without further interaction from the user's side. As you might expect, it works by exchanging a refresh token for an access token that has expired. According to the OAuth spec, a refresh token is a token optionally issued to the client by the authorization server alongside an access token. It is a string representing the authorization granted to the client by the resource owner and is usually opaque to the client. Refresh tokens are intended to be used only with authorization servers and are never sent to resource servers.
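Since the refresh exchange has the same shape as the earlier token request, here is a minimal Node.js sketch of it (again with the placeholder endpoint and credentials, assuming Node 18+ for fetch):

// Exchange a refresh token for a new access token (refresh_token grant).
async function refreshAccessToken(refreshToken) {
  const response = await fetch('https://api.authorization-server.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      refresh_token: refreshToken,
      client_id: 'CLIENT_ID',
      client_secret: 'SECRET',
    }),
  });
  return response.json(); // a new access_token and, often, a rotated refresh_token
}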
Device Authorization Grant

The Device Authorization Grant, or Device Flow, is an extension to OAuth 2.0 described in a single RFC, RFC 8628. Its main point is to make it possible to obtain an access token for devices with no browser or with limited input capabilities. A device you want to use with the Device Authorization Grant must meet a few requirements:

First, the device is already connected to the Internet.
Second, the device is able to make outbound HTTPS requests.
Third, the device is able to display or otherwise communicate a URI and code sequence to the user.
Finally, the user has a secondary device (for example, a smartphone) from which they can process the request.

Unlike other OAuth grant types, this grant does not require two-way communication between the OAuth client and the user agent on the same device, so it can support several use cases that other grants are not able to cover. What is more, it is the only grant that requires a secondary device to approve the access request. Remember that this grant type is not intended to replace browser-based OAuth in apps on devices like smartphones; those apps should use yet another grant specified in a separate RFC.

Device Code

Device Code is an extension grant, built on the extension mechanism defined in RFC 6749, that plays an important part in the Device Authorization Grant: it exchanges the previously obtained device code for an access token. When performing such a request, the grant_type parameter must be set to "urn:ietf:params:oauth:grant-type:device_code".

Proof Key for Code Exchange

Proof Key for Code Exchange, or PKCE, was presented in RFC 7636. It is an extension to the Authorization Code grant type that aims to provide additional protection against CSRF and authorization code injection attacks. PKCE also turned out to be useful for web apps that use a client secret, and it then became a general recommendation in many other OAuth grant types. (A minimal sketch of generating a PKCE verifier and challenge pair follows the summary below.)

Summary

The OAuth specification describes grant types for many popular use cases, from simple client-server authorization through mobile apps to two-factor authorization for more complex systems. Everyone can find something interesting in such a collection. I hope that this humble lexicon of flows will come in handy to you at some point. Thank you for your time.
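As mentioned in the PKCE section above, here is a minimal, hypothetical sketch of generating a PKCE code verifier and the matching S256 code challenge. It uses only the Python standard library and assumes no particular provider.

import base64
import hashlib
import secrets

def make_pkce_pair():
    # A high-entropy, URL-safe code verifier (43-128 characters, per RFC 7636).
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode("ascii").rstrip("=")
    # The S256 code challenge is the Base64URL-encoded SHA-256 hash of the verifier.
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")
    return code_verifier, code_challenge

The challenge is sent with the initial authorization request, and the verifier is sent later with the token request, which lets the authorization server confirm that both requests come from the same client.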

By Bartłomiej Żyliński CORE

The Latest Software Design and Architecture Topics

Real-Time Stream Processing With Hazelcast and StreamNative
In this article, readers will learn about real-time stream processing with Hazelcast and StreamNative in a shorter time, along with demonstrations and code.
January 27, 2023
by Timothy Spann
· 1,708 Views · 2 Likes
Cloud Native London Meetup: 3 Pitfalls Everyone Should Avoid With Cloud Data
Explore this session from Cloud Native London that highlights top lessons learned as developers transitioned their data needs into cloud-native environments.
January 27, 2023
by Eric D. Schabell CORE
· 1,274 Views · 3 Likes
Unit of Work With Generic Repository Implementation Using .NET Core 6 Web API
This article reviews the Unit of Work design pattern using a generic repository and a step-by-step implementation using the .NET Core 6 Web API.
January 27, 2023
by Jaydeep Patil
· 1,218 Views · 1 Like
AWS Cloud Migration: Best Practices and Pitfalls to Avoid
This article discusses the best practices and common pitfalls to avoid when migrating to the AWS cloud.
January 27, 2023
by Rahul Nagpure
· 1,602 Views · 1 Like
The 31 Flavors of Data Lineage and Why Vanilla Doesn’t Cut It
This article goes over the four critical reasons why your data quality solution needs to have data lineage.
January 27, 2023
by Lior Gavish
· 1,375 Views · 1 Like
The Quest for REST
This post focuses on listing some of the lurking issues in the "Glory of REST" and provides hints at ways to solve them.
January 26, 2023
by Nicolas Fränkel CORE
· 2,095 Views · 3 Likes
Fraud Detection With Apache Kafka, KSQL, and Apache Flink
Exploring fraud detection case studies and architectures with Apache Kafka, KSQL, and Apache Flink, with examples, guide images, and informative details.
January 26, 2023
by Kai Wähner CORE
· 2,386 Views · 1 Like
Playwright vs. Cypress: The King Is Dead, Long Live the King?
QA automation tools are an essential part of the software development process. Let's compare Cypress and Playwright.
January 26, 2023
by Serhii Zabolenny
· 1,777 Views · 1 Like
Artificial Intelligence in Drug Discovery
This article explores how TypeDB empowers scientists to make the next breakthroughs in medicine possible. This is shown with guide code examples and visuals.
January 26, 2023
by Tomás Sabat
· 1,704 Views · 2 Likes
Easy Smart Contract Debugging With Truffle’s Console.log
If you’re a Solidity developer, you’ll be excited to hear that Truffle now supports console logging in Solidity smart contracts. Let's look at how.
January 26, 2023
by Michael Bogan CORE
· 1,994 Views · 2 Likes
CQRS and MediatR Pattern Implementation Using .NET Core 6 Web API
In this article, we discuss how the CQRS and MediatR patterns work, along with a step-by-step implementation using .NET Core 6 Web API.
January 26, 2023
by Jaydeep Patil
· 1,635 Views · 1 Like
DevOps Roadmap for 2022
[Originally published February 2022] In this post, I will share some notes from my mentoring session that can help you, as a DevOps engineer or platform engineer, learn where to focus.
January 26, 2023
by Anjul Sahu
· 17,938 Views · 6 Likes
What Is Policy-as-Code? An Introduction to Open Policy Agent
Learn the benefits of policy as code and start testing your policies for cloud-native environments.
January 26, 2023
by Tiexin Guo
· 2,985 Views · 1 Like
Data Mesh vs. Data Fabric: A Tale of Two New Data Paradigms
Data Mesh vs. Data Fabric: Are these two paradigms really in contrast with each other? What are their differences and their similarities? Find out!
January 26, 2023
by Paolo Martinoli
· 2,102 Views · 1 Like
Commonly Occurring Errors in Microsoft Graph Integrations and How to Troubleshoot Them (Part 3)
This third article explains common integration errors that may be seen in the transition from EWS to Microsoft Graph for the To Do Tasks resource type.
January 25, 2023
by Constantin Kwiatkowski
· 2,115 Views · 1 Like
Handling Automatic ID Generation in PostgreSQL With Node.js and Sequelize
In this article, readers will learn four ways to handle automatic ID generation in Sequelize and Node.js for PostgreSQL, which includes simple guide code.
January 25, 2023
by Brett Hoyer
· 2,029 Views · 3 Likes
Key Considerations When Implementing Virtual Kubernetes Clusters
In this article, readers will receive key considerations to examine when implementing virtual Kubernetes clusters, along with essential questions and answers.
January 25, 2023
by Hanumantha (Hemanth) Kavuluru
· 3,030 Views · 3 Likes
Choosing the Best Cloud Provider for Hosting DevOps Tools
Discover the best cloud provider for your DevOps tools hosting needs. Also, learn which provider is best suited to help grow your business.
January 25, 2023
by Ryan Kh
· 2,309 Views · 1 Like
Beginners’ Guide to Run a Linux Server Securely
This article explains the essential considerations for tackling common security risks when running a Linux server.
January 25, 2023
by Hadi Samadzad
· 1,913 Views · 2 Likes
The Role of Data Governance in Data Strategy: Part II
This article explains how data is cataloged and classified and how classified data is used to group and correlate the data to an individual.
January 25, 2023
by Satish Gaddipati
· 2,147 Views · 5 Likes
