Containers
The proliferation of containers in recent years has increased the speed, portability, and scalability of software infrastructure and deployments across all kinds of application architectures and cloud-native environments. Now, with more and more organizations migrated to the cloud, what's next? The subsequent need to efficiently manage and monitor containerized environments remains a crucial task for teams. With organizations looking to better leverage their containers — and some still working to migrate out of their own monolithic environments — the path to containerization and architectural modernization remains a perpetual climb. In DZone's 2023 Containers Trend Report, we will explore the current state of containers, key trends and advancements in global containerization strategies, and constructive content for modernizing your software architecture. This will be examined through DZone-led research, expert community articles, and other helpful resources for designing and building containerized applications.
With the advent of cloud computing, managing network traffic and ensuring optimal performance have become critical aspects of system architecture. Amazon Web Services (AWS), a leading cloud service provider, offers a suite of load balancers to manage network traffic effectively for applications running on its platform. Two such offerings are the Application Load Balancer (ALB) and Network Load Balancer (NLB). This extensive guide aims to provide an in-depth comparison between these two types of load balancers, helping you choose the most suitable option for your application's needs. Overview The primary role of a load balancer is to distribute network traffic evenly among multiple servers or 'targets' to ensure smooth performance and prevent any single server from being overwhelmed. AWS provides three types of load balancers: Classic Load Balancer (CLB), Application Load Balancer (ALB), and Network Load Balancer (NLB). The ALB operates at Layer 7 of the OSI model, handling HTTP/HTTPS traffic. It offers advanced request routing based on the content of the request, making it ideal for complex web applications. On the other hand, the NLB operates at Layer 4, dealing with TCP traffic. It's designed for extreme performance and low latencies, offering static IP addresses per Availability Zone (AZ). Choosing the right load balancer is crucial as it directly impacts your application’s performance, availability, security, and cost. For instance, if your application primarily handles HTTP requests and requires sophisticated routing rules, an ALB would be more appropriate. Conversely, if your application requires high throughput, low latency, or a static IP address, you should opt for an NLB. Fundamentals of Load Balancing The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. Unpredictable traffic patterns do not affect its performance, thanks to its ability to handle sudden and volatile traffic. Furthermore, it supports long-lived TCP connections that are ideal for WebSocket-type applications. The Application Load Balancer, on the other hand, is best suited for load balancing HTTP and HTTPS traffic. It operates at the request level, allowing advanced routing, microservices, and container-based architecture. It can route requests to different services based on the content of the request, which is ideal for modern, complex web applications. Key Features and Capabilities The NLB provides several important features, such as static IP support, zonal isolation, and low-latency performance. It distributes traffic across multiple targets within one or more AZs, ensuring a robust and reliable performance. Furthermore, it offers connection multiplexing and stickiness, enabling efficient utilization of resources. On the other hand, the ALB comes with built-in features like host and path-based routing, SSL/TLS decryption, and integration with AWS WAF, protecting your applications from various threats. It also supports advanced routing algorithms, slow start mode for new targets, and integration with container services. These features make it ideal for modern, modular, and microservices-based applications. Both ALB and NLB offer unique advantages. While ALB's strength lies in flexible application management and advanced routing features, NLB shines in areas of extreme performance and support for static IP addresses. 
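To make the comparison concrete, here is a minimal sketch of how each load balancer type is created with the AWS CLI; the names, subnet IDs, and security group ID below are placeholders rather than values from this article:

Shell
# Application Load Balancer (Layer 7, HTTP/HTTPS)
$ aws elbv2 create-load-balancer --name my-alb --type application \
    --scheme internet-facing --subnets subnet-aaaa subnet-bbbb \
    --security-groups sg-0123456789abcdef0

# Network Load Balancer (Layer 4, TCP/UDP)
$ aws elbv2 create-load-balancer --name my-nlb --type network \
    --scheme internet-facing --subnets subnet-aaaa subnet-bbbb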
It's also worth noting that while ALB can handle HTTP/1, HTTP/2, and gRPC protocols, NLB is designed for lower-level TCP and UDP traffic. Performance and Efficiency NLB excels in terms of performance due to its design. As it operates at the transport layer (Layer 4), it merely forwards incoming TCP or UDP connections to a target without inspecting the details of every request. This makes NLB significantly faster and more efficient in forwarding incoming requests, reducing latency. In contrast, ALB operates at the application layer (Layer 7), inspecting details of every incoming HTTP/HTTPS request. While this introduces a slight overhead compared to NLB, it allows ALB to perform advanced routing based on the content of the request, providing flexibility and control. When it comes to raw performance and low latency, NLB has an advantage due to its simple operation at Layer 4. However, ALB offers additional flexibility and control at Layer 7, which can lead to more efficient request handling in complex applications. Handling Traffic Spikes NLB is designed to handle sudden and massive spikes in traffic without requiring any pre-warming or scaling. This is because NLB does not need to scale the number of nodes processing incoming connections, allowing it to adapt instantly to increased traffic. ALB, on the other hand, adapts to an increase in connections and requests automatically. However, this scaling process takes some time, so during sudden, substantial traffic spikes, ALB might not be able to handle all incoming requests immediately. In such cases, AWS recommends informing them in advance about expected traffic spikes so they can pre-warm the ALB. While both NLB and ALB can handle traffic spikes, NLB's design allows it to respond more quickly to sudden increases in traffic, making it a better choice for applications with unpredictable or highly volatile traffic patterns. However, with proper planning and communication with AWS, ALB can also effectively manage large traffic spikes. Security NLB provides robust security features, including TLS termination and integration with VPC security groups. However, it lacks some advanced security features, such as support for AWS WAF and user authentication, which are available in ALB. ALB offers advanced security features like integration with AWS WAF, SSL/TLS termination, and user authentication using OpenID Connect and SAML. It also allows the creation of custom security policies, making it more flexible in terms of security. Both NLB and ALB offer robust security features, but ALB provides additional flexibility and control with its support for AWS WAF and user authentication. However, the choice between the two should be based on your specific security requirements. If your application primarily deals with HTTP/HTTPS traffic and requires advanced security controls, ALB would be a better choice. On the other hand, for applications requiring high throughput and low latency, NLB might be a more suitable option despite its limited advanced security features. Costs and Pricing The cost of using an NLB is largely dependent on the amount of data processed, the duration of usage, and whether you use additional features like cross-zone load balancing. While NLB pricing is relatively lower than ALB, it can cause more connections and hence, a higher load on targets, potentially leading to increased costs. Like NLB, the cost of ALB is based on the amount of data processed and the duration of usage. 
However, due to its additional features, ALB generally has a higher cost than NLB. However, it's important to note that ALB's sophisticated routing and management features could lead to more efficient resource usage, potentially offsetting its higher price. While NLB may appear cheaper at first glance, the total cost of operation should take into account the efficiency of resource usage, which is where ALB excels with its advanced routing and management features. Ultimately, the most cost-effective choice will depend on your application's specific needs and architecture. Integration and Compatibility NLB integrates seamlessly with other AWS services, such as AWS Auto Scaling Groups, Amazon EC2 Container Service (ECS), and Amazon EC2 Spot Fleet. It also works well with containerized applications and supports both IPv4 and IPv6 addresses. ALB offers extensive integration options with a wide range of AWS services, including AWS Auto Scaling Groups, Amazon ECS, AWS Fargate, and AWS Lambda. It also supports both IPv4 and IPv6 addresses and integrates with container-based and serverless architectures. Both NLB and ALB integrate seamlessly into existing AWS infrastructure. They support various AWS services, making them versatile choices for different application architectures. However, with its additional features and capabilities, ALB may require slightly more configuration than NLB. Conclusion While both ALB and NLB are powerful tools for managing network traffic in AWS, they cater to different needs and scenarios. ALB operates at the application layer, handling HTTP/HTTPS traffic with advanced request routing capabilities, making it suitable for complex web applications. NLB operates at the transport layer, dealing with TCP/UDP traffic, providing high performance and low latency, making it ideal for applications requiring high throughput. The choice between ALB and NLB depends on your specific application requirements. If your application handles HTTP/HTTPS traffic and requires advanced routing capabilities, ALB is the right choice. If your application requires high performance, low latency, and static IP addresses, then NLB is more suitable. For microservices architecture or container-based applications that require advanced routing and flexible management, go for ALB. For applications requiring high throughput and low latency, such as multiplayer gaming, real-time streaming, or IoT applications, choose NLB. As always, the best choice depends on understanding your application's requirements and choosing the tool that best fits those needs.
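As a closing illustration of the advanced routing that tips the balance toward ALB, here is a hedged sketch of a path-based listener rule using the AWS CLI; the listener and target group ARNs are placeholders:

Shell
# Forward /api/* requests to a dedicated target group on an existing ALB listener
$ aws elbv2 create-rule --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
    --priority 10 \
    --conditions Field=path-pattern,Values='/api/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/api-tg/...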
In this article, we delve into the exciting realm of containerizing Helidon applications, followed by deploying them effortlessly to a Kubernetes environment. To achieve this, we'll harness the power of JKube’s Kubernetes Maven Plugin, a versatile tool for Java applications for Kubernetes deployments that has recently been updated to version 1.14.0. What's exciting about this release is that it now supports the Helidon framework, a Java Microservices gem open-sourced by Oracle in 2018. If you're curious about Helidon, we've got some blog posts to get you up to speed: Building Microservices With Oracle Helidon Ultra-Fast Microservices: When MicroStream Meets Helidon Helidon: 2x Productivity With Microprofile REST Client In this article, we will closely examine the integration between JKube’s Kubernetes Maven Plugin and Helidon. Here's a sneak peek of the exciting journey we'll embark on: We'll kick things off by generating a Maven application from Helidon Starter Transform your Helidon application into a nifty Docker image. Craft Kubernetes YAML manifests tailored for your Helidon application. Apply those manifests to your Kubernetes cluster. We'll bundle those Kubernetes YAML manifests into a Helm Chart. We'll top it off by pushing that Helm Chart to a Helm registry. Finally, we'll deploy our Helidon application to Red Hat OpenShift. An exciting aspect worth noting is that JKube’s Kubernetes Maven Plugin can be employed with previous versions of Helidon projects as well. The only requirement is to provide your custom image configuration. With this latest release, Helidon users can now easily generate opinionated container images. Furthermore, the plugin intelligently detects project dependencies and seamlessly incorporates Kubernetes health checks into the generated manifests, streamlining the deployment process. Setting up the Project You can either use an existing Helidon project or create a new one from Helidon Starter. If you’re on JDK 17 use 3.x version of Helidon. Otherwise, you can stick to Helidon 2.6.x which works with older versions of Java. In the starter form, you can choose either Helidon SE or Helidon Microprofile, choose application type, and fill out basic details like project groupId, version, and artifactId. Once you’ve set your project, you can add JKube’s Kubernetes Maven Plugin to your pom.xml: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>1.14.0</version> </plugin> Also, the plugin version is set to 1.14.0, which is the latest version at the time of writing. You can check for the latest version on the Eclipse JKube releases page. It’s not really required to add the plugin if you want to execute it directly from some CI pipeline. You can just provide a fully qualified name of JKube’s Kubernetes Maven Plugin while issuing some goals like this: Shell $ mvn org.eclipse.jkube:kubernetes-maven-plugin:1.14.0:resource Now that we’ve added the plugin to the project, we can start using it. Creating Container Image (JVM Mode) In order to build a container image, you do not need to provide any sort of configuration. First, you need to build your project. Shell $ mvn clean install Then, you just need to run k8s:build goal of JKube’s Kubernetes Maven Plugin. By default, it builds the image using the Docker build strategy, which requires access to a Docker daemon. 
If you have access to a docker daemon, run this command: Shell $ mvn k8s:build If you don’t have access to any docker daemon, you can also build the image using the Jib build strategy: Shell $ mvn k8s:build -Djkube.build.strategy=jib You will notice that Eclipse JKube has created an opinionated container image for your application based on your project configuration. Here are some key points about JKube’s Kubernetes Maven Plugin to observe in this zero configuration mode: It used quay.io/jkube/jkube-java as a base image for the container image It added some labels to the container image (picked from pom.xml) It exposed some ports in the container image based on the project configuration It automatically copied relevant artifacts and libraries required to execute the jar in the container environment. Creating Container Image (Native Mode) In order to create a container image for the native executable, we need to generate the native executable first. In order to do that, let’s build our project in the native-image profile (as specified in Helidon GraalVM Native Image documentation): Shell $ mvn package -Pnative-image This creates a native executable file in the target folder of your project. In order to create a container image based on this executable, we just need to run k8s:build goal but also specify native-image profile: Shell $ mvn k8s:build -Pnative-image Like JVM mode, Eclipse JKube creates an opinionated container image but uses a lightweight base image: registry.access.redhat.com/ubi8/ubi-minimal and exposes only the required ports by application. Customizing Container Image as per Requirements Creating a container image with no configuration is a really nice way to get started. However, it might not suit everyone’s use case. Let’s take a look at how to configure various aspects of the generated container image. You can override basic aspects of the container image with some properties like this: Property Name Description jkube.generator.name Change Image Name jkube.generator.from Change Base Image jkube.generator.tags A comma-separated value of additional tags for the image If you want more control, you can provide a complete XML configuration for the image in the plugin configuration section: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>${jkube.version}</version> <configuration> <images> <image> <name>${project.artifactId}:${project.version}</name> <build> <from>openjdk:11-jre-slim</from> <ports>8080</ports> <assembly> <mode>dir</mode> <targetDir>/deployments</targetDir> <layers> <layer> <id>lib</id> <fileSets> <fileSet> <directory>${project.basedir}/target/libs</directory> <outputDirectory>libs</outputDirectory> <fileMode>0640</fileMode> </fileSet> </fileSets> </layer> <layer> <id>app</id> <files> <file> <source>${project.basedir}/target/${project.artifactId}.jar</source> <outputDirectory>.</outputDirectory> </file> </files> </layer> </layers> </assembly> <cmd>java -jar /deployments/${project.artifactId}.jar</cmd> </build> </image> </images> </configuration> </plugin> The same is also possible by providing your own Dockerfile in the project base directory. 
Kubernetes Maven Plugin automatically detects it and builds a container image based on its content: Dockerfile FROM openjdk:11-jre-slim COPY maven/target/helidon-quickstart-se.jar /deployments/ COPY maven/target/libs /deployments/libs CMD ["java", "-jar", "/deployments/helidon-quickstart-se.jar"] EXPOSE 8080 Pushing the Container Image to Quay.io: Once you’ve built a container image, you most likely want to push it to some public or private container registry. Before pushing the image, make sure you’ve renamed your image to include the registry name and registry user. If I want to push an image to Quay.io in the namespace of a user named rokumar, this is how I would need to rename my image: Shell $ mvn k8s:build -Djkube.generator.name=quay.io/rokumar/%a:%v %a and %v correspond to project artifactId and project version. For more information, you can check the Kubernetes Maven Plugin Image Configuration documentation. Once we’ve built an image with the correct name, the next step is to provide credentials for our registry to JKube’s Kubernetes Maven Plugin. We can provide registry credentials via the following sources: Docker login Local Maven Settings file (~/.m2/settings.xml) Provide it inline using jkube.docker.username and jkube.docker.password properties Once you’ve configured your registry credentials, you can issue the k8s:push goal to push the image to your specified registry: Shell $ mvn k8s:push Generating Kubernetes Manifests In order to generate opinionated Kubernetes manifests, you can use k8s:resource goal from JKube’s Kubernetes Maven Plugin: Shell $ mvn k8s:resource It generates Kubernetes YAML manifests in the target directory: Shell $ ls target/classes/META-INF/jkube/kubernetes helidon-quickstart-se-deployment.yml helidon-quickstart-se-service.yml JKube’s Kubernetes Maven Plugin automatically detects if the project contains io.helidon:helidon-health dependency and adds liveness, readiness, and startup probes: YAML $ cat target/classes/META-INF/jkube/kubernetes//helidon-quickstart-se-deployment. yml | grep -A8 Probe livenessProbe: failureThreshold: 3 httpGet: path: /health/live port: 8080 scheme: HTTP initialDelaySeconds: 0 periodSeconds: 10 successThreshold: 1 -- readinessProbe: failureThreshold: 3 httpGet: path: /health/ready port: 8080 scheme: HTTP initialDelaySeconds: 0 periodSeconds: 10 successThreshold: 1 Applying Kubernetes Manifests JKube’s Kubernetes Maven Plugin provides k8s:apply goal that is equivalent to kubectl apply command. It just applies the resources generated by k8s:resource in the previous step. Shell $ mvn k8s:apply Packaging Helm Charts Helm has established itself as the de facto package manager for Kubernetes. You can package generated manifests into a Helm Chart and apply it on some other cluster using Helm CLI. You can generate a Helm Chart of generated manifests using k8s:helm goal. The interesting thing is that JKube’s Kubernetes Maven Plugin doesn’t rely on Helm CLI for generating the chart. Shell $ mvn k8s:helm You’d notice Helm Chart is generated in target/jkube/helm/ directory: Shell $ ls target/jkube/helm/helidon-quickstart-se/kubernetes Chart.yaml helidon-quickstart-se-0.0.1-SNAPSHOT.tar.gz README.md templates values.yaml Pushing Helm Charts to Helm Registries Usually, after generating a Helm Chart locally, you would want to push it to some Helm registry. JKube’s Kubernetes Maven Plugin provides k8s:helm-push goal for achieving this task. 
But first, we need to provide registry details in plugin configuration: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>1.14.0</version> <configuration> <helm> <snapshotRepository> <name>ChartMuseum</name> <url>http://example.com/api/charts</url> <type>CHARTMUSEUM</type> <username>user1</username> </snapshotRepository> </helm> </configuration> </plugin> JKube’s Kubernetes Maven Plugin supports pushing Helm Charts to ChartMuseum, Nexus, Artifactory, and OCI registries. You have to provide the applicable Helm repository type and URL. You can provide the credentials via environment variables, properties, or ~/.m2/settings.xml. Once you’ve all set up, you can run k8s:helm-push goal to push chart: Shell $ mvn k8s:helm-push -Djkube.helm.snapshotRepository.password=yourpassword Deploying To Red Hat OpenShift If you’re deploying to Red Hat OpenShift, you can use JKube’s OpenShift Maven Plugin to deploy your Helidon application to an OpenShift cluster. It contains some add-ons specific to OpenShift like S2I build strategy, support for Routes, etc. You also need to add the JKube’s OpenShift Maven Plugin plugin to your pom.xml. Maybe you can add it in a separate profile: XML <profile> <id>openshift</id> <build> <plugins> <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>${jkube.version}</version> </plugin> </plugins> </build> </profile> Then, you can deploy the application with a combination of these goals: Shell $ mvn oc:build oc:resource oc:apply -Popenshift Conclusion In this article, you learned how smoothly you can deploy your Helidon applications to Kubernetes using Eclipse JKube’s Kubernetes Maven Plugin. We saw how effortless it is to package your Helidon application into a container image and publish it to some container image registry. We can alternatively generate Helm Charts of our Kubernetes YAML manifests and publish Helm Charts to some Helm registry. In the end, we learned about JKube’s OpenShift Maven Plugin, which is specifically designed for Red Hat OpenShift users who want to deploy their Helidon applications to Red Hat OpenShift. You can find the code used in this blog post in this GitHub repository. In case you’re interested in knowing more about Eclipse JKube, you can check these links: Documentation Github Issue Tracker StackOverflow YouTube Channel Twitter Gitter Chat
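Before moving on, here is a condensed recap of the goals used throughout this walkthrough; a minimal sketch that assumes the registry credentials, Helm repository, and openshift profile configured earlier in the article:

Shell
# Build the image, generate Kubernetes manifests, and apply them to the current cluster
$ mvn clean install k8s:build k8s:resource k8s:apply

# Package the manifests as a Helm Chart and push it to the configured repository
$ mvn k8s:helm k8s:helm-push -Djkube.helm.snapshotRepository.password=yourpassword

# OpenShift equivalent using the openshift profile
$ mvn oc:build oc:resource oc:apply -Popenshift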
What Is Kubernetes RBAC? Often, when organizations start their Kubernetes journey, they look to implement least-privilege roles and proper authorization to secure their infrastructure. That's where Kubernetes RBAC comes in: it secures Kubernetes resources that hold sensitive data, including deployment details, persistent storage settings, and secrets. Kubernetes RBAC provides the ability to control who can access each API resource and with what kind of access. You can use RBAC for both human users (individuals or groups) and non-human users (service accounts) to define their types of access to various Kubernetes resources. For example, three different environments, Dev, Staging, and Production, may need to be made accessible to different members of the team, such as developers, DevOps, SREs, app owners, and product managers. Before we get started, we would like to stress that we will treat users and service accounts the same at a level of abstraction: every request, whether from a user or a service account, is ultimately an HTTP request. Yes, we understand users and service accounts (for non-human users) are different in nature in Kubernetes.

How To Enable Kubernetes RBAC
One can enable RBAC in Kubernetes by starting the API server with the --authorization-mode flag including RBAC. The Kubernetes resources used to apply RBAC to users are Role, ClusterRole, RoleBinding, and ClusterRoleBinding, together with the Service Account. To manage users, Kubernetes provides an authentication mechanism, but it is usually advisable to integrate Kubernetes with your enterprise identity management system, such as Active Directory or LDAP. When it comes to non-human users (machines or services) in a Kubernetes cluster, the concept of a Service Account comes into the picture. For example, Kubernetes resources may need to be accessed by a CD application such as Spinnaker or Argo to deploy applications, or one pod of service A needs to talk to another pod of service B. In such cases, a Service Account is used to create an account for a non-human user and specify the required authorization (using RoleBinding or ClusterRoleBinding). You can create a Service Account with a YAML like the one below:

YAML
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa
automountServiceAccountToken: false

And then apply it.

Shell
$ kubectl apply -f nginx-sa.yaml
serviceaccount/nginx-sa created

And now you can reference the ServiceAccount for pods in the Deployment resource.

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      serviceAccountName: nginx-sa
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80

If you don't specify serviceAccountName in the Deployment resource, the pods will use the default Service Account. Note that there is a default Service Account in each namespace. All the default authorization policies attached to the default Service Account will be applied to pods where no Service Account is specified. In the next section, we will see how to assign various permissions to a Service Account using RoleBinding and ClusterRoleBinding.

Role and ClusterRole
Role and ClusterRole are the Kubernetes resources used to define the list of actions a user can perform within a namespace or a cluster, respectively. In Kubernetes, the actors, such as users, groups, or ServiceAccounts, are called subjects. A subject's actions, such as create, read, write, update, and delete, are called verbs.
YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: dev-namespace
rules:
- apiGroups:
  - ""
  resources: ["*"]
  verbs:
  - get
  - list
  - watch

In the above Role resource, we have specified that the read-only role applies only to the dev-namespace namespace and to all the resources inside that namespace. Any ServiceAccount or user bound to the read-only role can take these actions: get, list, and watch. Similarly, the ClusterRole resource allows you to create roles that apply to the whole cluster. An example is given below:

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chief-role
rules:
- apiGroups:
  - ""
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

Any user/group/ServiceAccount bound to the chief-role will be able to take any of these actions across the cluster. In the next section, we will see how to grant roles to subjects using RoleBinding and ClusterRoleBinding. Also, note that Kubernetes allows you to configure custom roles using Role resources or use default user-facing roles such as the following:
cluster-admin: For cluster administrators, Kubernetes provides this superuser Role. The cluster admin can perform any action on any resource in a cluster. One can use it in a ClusterRoleBinding to grant full control over every resource in the cluster (and in all namespaces) or in a RoleBinding to grant full control over every resource in the respective namespace.
admin: Kubernetes provides the admin Role to permit unlimited read/write access to resources within a namespace. The admin role can create roles and role bindings within a particular namespace. It does not permit write access to the namespace itself. It is intended to be used in a RoleBinding.
edit: The edit role grants read/write access within a given Kubernetes namespace. It cannot view or modify roles or role bindings.
view: The view role allows read-only access within a given namespace. It does not allow viewing or modifying roles or role bindings.

RoleBinding and ClusterRoleBinding
To apply the Role to a subject (user/group/ServiceAccount), you must define a RoleBinding. This gives the user least-privileged access to the required resources within the namespace, with the permissions defined in the Role configuration.

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-binding-dev
  namespace: dev-namespace
roleRef:
  kind: Role
  name: read-only # The role name you defined in the Role configuration
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: User
  name: Roy # The name of the user to give the role to
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: nginx-sa # The ServiceAccount created earlier (in the default namespace)
  namespace: default

Similarly, ClusterRoleBinding resources can be created to define cluster-wide roles for users. Note that here we have used cluster-admin, the default superuser ClusterRole provided by Kubernetes, instead of our custom role. This can be applied to cluster administrators.

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: superuser-binding
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: User
  name: Aditi
  apiGroup: rbac.authorization.k8s.io

Benefits of Kubernetes RBAC
The advantage of Kubernetes RBAC is that it allows you to "natively" implement least privilege for the various users and machines in your cluster.
The key benefits are:

Proper Authorization
By granting least-privilege access to Kubernetes resources for users and Service Accounts, DevOps teams and architects can implement one of the main pillars of zero trust. Organizations can reduce the risk of data breaches and data leakage and also avoid internal employees accidentally deleting or manipulating critical resources.

Separation of Duties
Applying RBAC to Kubernetes resources facilitates separation of duties among users such as developers, DevOps, testers, and SREs in an organization. For example, developers should not have to depend on an admin to create or delete a resource in a dev environment. Similarly, deploying new applications to test servers and deleting the pods after testing should not be a bottleneck for DevOps or testers. Scoping authorization and permissions for users such as developers and CI/CD deployment agents to their respective workspaces (namespaces or clusters) reduces dependencies and delays.

Adherence to Compliance
Many industry regulations, such as HIPAA, GDPR, and SOX, demand tight authentication and authorization mechanisms. Using Kubernetes RBAC, DevOps teams and architects can quickly implement access controls in their Kubernetes clusters and improve their posture toward those standards.

Disadvantages of Kubernetes RBAC
For small and medium enterprises, native Kubernetes RBAC is usually sufficient, but relying on it alone has drawbacks at larger scale for the reasons below:
There can be many users and machines, and Kubernetes RBAC can become cumbersome to implement and maintain.
Granular visibility into who performed what operation is difficult. For example, large enterprises typically need information such as violations or malicious attempts against RBAC permissions.
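One native aid for the visibility concern above is kubectl's built-in permission check. Here is a minimal sketch using the ServiceAccount and namespace from the earlier examples; it assumes the caller has permission to impersonate:

Shell
# List everything the nginx-sa ServiceAccount may do in dev-namespace
$ kubectl auth can-i --list --as=system:serviceaccount:default:nginx-sa -n dev-namespace

# Check a single verb/resource pair
$ kubectl auth can-i delete pods --as=system:serviceaccount:default:nginx-sa -n dev-namespace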
If you're looking for a top player in the cloud industry, AWS (Amazon Web Services) is a great choice. One of its many offerings is AWS Amplify, a comprehensive set of tools and services that can help developers create, deploy, and manage full-stack web and mobile applications on the AWS platform. Amplify is known for providing complete AWS solutions to mobile and front-end web developers, simplifying the development process with its features. Backend development: Amplify can create and manage serverless backend APIs, authentication and authorization, storage, databases, and other standard services. Frontend development: Amplify provides a variety of libraries and tools for developing frontends in popular frameworks such as React, Angular, Vue.js, and Flutter. Hosting and deployment: Amplify offers a fully managed web hosting service with continuous deployment, so developers can focus on building their apps without worrying about infrastructure. Benefits of Using AWS There are many benefits to using AWS Amplify, including: Scalability: AWS is easily scalable to meet the demands of the application. This means an end-user can ensure the application can handle even the most intense traffic spikes. Performance: AWS provides high performance that can be achieved through various techniques, such as caching, load balancing, and autoscaling for real-time applications. Reliability: AWS is known as a highly reliable platform due to its vast infrastructure and commitment to uptime. It enables customers to deploy applications and data quickly and securely. Security: AWS is a secure platform. It offers a variety of features to protect data, such as encryption, access control, and intrusion detection. Cost-effectiveness: Being a cost-effective platform, the user only pays for the resources he uses, ensuring no overpayment issue. Use Cases of AWS Amplify AWS Amplify can be used to build a wide variety of web and mobile applications, including: E-commerce apps: Amplify can be used to build e-commerce apps with features such as product catalogues, shopping carts, and payment processing. Social media apps: Amplify can be used to build social media apps with features such as user profiles, posts, and comments. Gaming apps: Amplify can be used to build gaming apps with features such as leaderboards, multiplayer games, and in-app purchases. Business apps: Amplify can be used to build business apps with features such as CRM, ERP, and HR systems. Real-Time Applications Netflix uses AWS to stream movies and TV shows to its customers. Call of Duty Warzone uses Amazon Web Services to power its real-time multiplayer mode. This allows players to experience low latency and high throughput connections, even when millions of players are online. WhatsApp uses AWS to store and process its chat messages. This allows WhatsApp to scale its service to billions of users and provide a reliable and secure chat experience. Zerodha is a discount brokerage firm that uses AWS to host its trading platform. This platform allows Zerodha clients to trade stocks, options, futures, and currencies. Features of AWS Amplify In addition to the core features listed above, AWS Amplify also provides a variety of other features to simplify development, including: Code generation: Amplify can automatically generate code for common tasks, such as creating backend APIs and connecting to AWS services. Local development: Amplify provides a local development environment so that developers can test their applications before deploying them to production. 
Continuous integration and delivery (CI/CD): Amplify can be integrated with CI/CD tools to automate the building, testing, and deployment process.

Getting Started
To start with AWS Amplify, developers must create an AWS account and install the Amplify CLI. Once the CLI is installed, developers can create a new Amplify project and select the services they want to use (a minimal CLI walkthrough is sketched after this article's conclusion). Once the project is created, developers can build their frontend and backend applications. Amplify provides a variety of libraries and tools to simplify development, as well as documentation and tutorials to get developers started.

Deploying Applications
Once an application is developed, it can be deployed to AWS Amplify Hosting using the Amplify CLI. Amplify Hosting provides a fully managed web hosting service with continuous deployment, so developers can focus on building their apps without worrying about infrastructure.

Additional Details
Here are some additional details about AWS Amplify:
Amplify Libraries: Amplify provides a variety of libraries for connecting to AWS services, including Cognito, S3, DynamoDB, Lambda, and API Gateway. These libraries add standard application features like authentication, storage, databases, and serverless APIs.
Amplify Studio: Amplify Studio is a visual development environment that helps developers build full-stack applications without writing code. Studio provides a variety of pre-built components and templates for building standard features such as login screens, product catalogues, and shopping carts.
Amplify CLI: The Amplify CLI is a command-line tool that helps developers configure and deploy Amplify projects. The CLI provides a variety of commands for creating new projects, adding services, and deploying applications.
Amplify Hosting: Amplify Hosting is a fully managed web hosting service with continuous deployment. Developers can use Amplify Hosting to host their static and server-side rendered web applications.
AWS Amplify is a good choice for developers of all skill levels, from beginners to experienced professionals. It is a powerful tool to help developers build and deploy full-stack web and mobile applications on AWS.

Most Popular AWS Services
AWS offers a broad selection of services, including computing, storage, networking, and machine learning. This is a list of the most popular AWS services, grouped by category.
1. Compute
Amazon Elastic Compute Cloud (EC2) provides virtual machines that you can use to run your applications.
AWS Lambda provides serverless computing, so you don't have to worry about managing servers.
AWS Elastic Beanstalk makes deploying and managing applications easy.
2. Storage
Amazon Simple Storage Service (S3) provides scalable object storage.
Amazon Elastic Block Store (EBS) provides block storage for EC2 instances.
Amazon Elastic File System (EFS) provides file storage for EC2 instances.
3. Networking
Amazon Virtual Private Cloud (VPC) allows you to create your own private network in the cloud.
Amazon Route 53 is a scalable DNS service.
Amazon CloudFront is a content delivery network that can help you improve the performance of your web applications.
4. Machine Learning
Amazon SageMaker is a managed machine learning service that makes building and deploying machine learning models easy.
Amazon Rekognition can detect objects and faces in images and videos.
Amazon Lex can create conversational interfaces for your applications.
5. Databases
Amazon Relational Database Service (RDS) provides managed relational databases.
Amazon DynamoDB provides a NoSQL database.
Amazon Redshift provides a data warehouse.
6. Analytics
Amazon Athena is a serverless query service for data in S3.
Amazon QuickSight is a business intelligence service.
Amazon Elasticsearch Service (now Amazon OpenSearch Service) is a search and analytics service.
7. Developer Tools
AWS CodePipeline is a continuous delivery service.
AWS CodeBuild is a continuous integration service.
AWS CodeDeploy automates application deployments.
8. Security
AWS Identity and Access Management (IAM) manages user access.
AWS CloudTrail is a service for logging AWS events.
Amazon GuardDuty is a service for detecting threats.
9. Compliance
AWS Artifact is a service for managing compliance artifacts.
AWS Control Tower sets up and manages a compliant AWS environment.
AWS Audit Manager helps you audit your AWS environment.
Conclusion
AWS Amplify is a robust set of tools and services that can help developers build, deploy, and manage full-stack web and mobile apps on AWS. It provides various features to simplify development, including backend development, frontend development, hosting, and deployment.
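As referenced in the Getting Started section above, a typical Amplify CLI session might look like the following. This is a hedged sketch: the exact prompts and categories depend on your project, and Node.js/npm are assumed to be installed.

Shell
# Install and configure the Amplify CLI
$ npm install -g @aws-amplify/cli
$ amplify configure

# Initialize a project, add backend categories, and push them to AWS
$ amplify init
$ amplify add auth
$ amplify add api
$ amplify push

# Add hosting and publish the frontend
$ amplify add hosting
$ amplify publish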
When choosing a user authentication method for your application, you usually have several options: develop your own system for identification, authentication, and authorization, or use a ready-made solution. A ready-made solution means that the user already has an account on an external system such as Google, Facebook, or GitHub, and you use the appropriate mechanism, most likely OAuth, to provide limited access to the user’s protected resources without transferring the username and password to it. The second option with OAuth is easier to implement, but there is a risk for your user if the user's account is blocked and the user will lose access to your site. Also, if I, as a user, want to enter a site that I do not trust, I have to provide my personal information, such as my email and full name, sacrificing my anonymity. In this article, we’ll build an alternative login method for Spring using the MetaMask browser extension. MetaMask is a cryptocurrency wallet used to manage Ethereum assets and interact with the Ethereum blockchain. Unlike the OAuth provider, only the necessary set of data can be stored on the Ethereum network. We must take care not to store secret information in the public data, but since any wallet on the Ethereum network is in fact a cryptographic strong key pair, in which the public key determines the wallet address and the private key is never transmitted over the network and is known only by the owner, we can use asymmetric encryption to authenticate users. Authentication Flow Connect to MetaMask and receive the user’s address. Obtain a one-time code (nonce) for a user address. Sign a message containing nonce with a private key using MetaMask. Authenticate the user by validating the user's signature on the back end. Generate a new nonce to prevent your signature from being compromised. Step 1: Project Setup To quickly build a project, we can use Spring Initializr. Let’s add the following dependencies: Spring Web Spring Security Thymeleaf Lombok Download the generated project and open it with a convenient IDE. In the pom.xml, we add the following dependency to verify the Ethereum signature: XML <dependency> <groupId>org.web3j</groupId> <artifactId>core</artifactId> <version>4.10.2</version> </dependency> Step 2: User Model Let’s create a simple User model containing the following fields: address and nonce. The nonce, or one-time code, is a random number we will use for authentication to ensure the uniqueness of each signed message. 
Java public class User { private final String address; private Integer nonce; public User(String address) { this.address = address; this.nonce = (int) (Math.random() * 1000000); } // getters } To store users, for simplicity, I’ll be using an in-memory Map with a method to retrieve User by address, creating a new User instance in case the value is missing: Java @Repository public class UserRepository { private final Map<String, User> users = new ConcurrentHashMap<>(); public User getUser(String address) { return users.computeIfAbsent(address, User::new); } } Let's define a controller allowing users to fetch nonce by their public address: Java @RestController public class NonceController { @Autowired private UserRepository userRepository; @GetMapping("/nonce/{address}") public ResponseEntity<Integer> getNonce(@PathVariable String address) { User user = userRepository.getUser(address); return ResponseEntity.ok(user.getNonce()); } } Step 3: Authentication Filter To implement a custom authentication mechanism with Spring Security, first, we need to define our AuthenticationFilter. Spring filters are designed to intercept requests for certain URLs and perform some actions. Each filter in the chain can process the request, pass it to the next filter in the chain, or not pass it, immediately sending a response to the client. Java public class MetaMaskAuthenticationFilter extends AbstractAuthenticationProcessingFilter { protected MetaMaskAuthenticationFilter() { super(new AntPathRequestMatcher("/login", "POST")); } @Override public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) throws AuthenticationException { UsernamePasswordAuthenticationToken authRequest = getAuthRequest(request); authRequest.setDetails(this.authenticationDetailsSource.buildDetails(request)); return this.getAuthenticationManager().authenticate(authRequest); } private UsernamePasswordAuthenticationToken getAuthRequest(HttpServletRequest request) { String address = request.getParameter("address"); String signature = request.getParameter("signature"); return new MetaMaskAuthenticationRequest(address, signature); } } Our MetaMaskAuthenticationFilter will intercept requests with the POST "/login" pattern. In the attemptAuthentication(HttpServletRequest request, HttpServletResponse response) method, we extract address and signature parameters from the request. Next, these values are used to create an instance of MetaMaskAuthenticationRequest, which we pass as a login request to the authentication manager: Java public class MetaMaskAuthenticationRequest extends UsernamePasswordAuthenticationToken { public MetaMaskAuthenticationRequest(String address, String signature) { super(address, signature); super.setAuthenticated(false); } public String getAddress() { return (String) super.getPrincipal(); } public String getSignature() { return (String) super.getCredentials(); } } Step 4: Authentication Provider Our MetaMaskAuthenticationRequest should be processed by a custom AuthenticationProvider, where we can validate the user's signature and return a fully authenticated object. 
Let’s create an implementation of AbstractUserDetailsAuthenticationProvider, which is designed to work with UsernamePasswordAuthenticationToken instances: Java @Component public class MetaMaskAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider { @Autowired private UserRepository userRepository; @Override protected UserDetails retrieveUser(String username, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException { MetaMaskAuthenticationRequest auth = (MetaMaskAuthenticationRequest) authentication; User user = userRepository.getUser(auth.getAddress()); return new MetaMaskUserDetails(auth.getAddress(), auth.getSignature(), user.getNonce()); } @Override protected void additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException { MetaMaskAuthenticationRequest metamaskAuthenticationRequest = (MetaMaskAuthenticationRequest) authentication; MetaMaskUserDetails metamaskUserDetails = (MetaMaskUserDetails) userDetails; if (!isSignatureValid(authentication.getCredentials().toString(), metamaskAuthenticationRequest.getAddress(), metamaskUserDetails.getNonce())) { logger.debug("Authentication failed: signature is not valid"); throw new BadCredentialsException("Signature is not valid"); } } ... } The first method, retrieveUser(String username, UsernamePasswordAuthenticationToken authentication) should load the User entity from our UserRepository and compose the UserDetails instance containing address, signature, and nonce: Java public class MetaMaskUserDetails extends User { private final Integer nonce; public MetaMaskUserDetails(String address, String signature, Integer nonce) { super(address, signature, Collections.emptyList()); this.nonce = nonce; } public String getAddress() { return getUsername(); } public Integer getNonce() { return nonce; } } The second method, additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication) will do the signature verification using the Elliptic Curve Digital Signature Algorithm (ECDSA). The idea of this algorithm is to recover the wallet address from a given message and signature. If the recovered address matches our address from MetaMaskUserDetails, then the user can be authenticated. 1. Get the message hash by adding a prefix to make the calculated signature recognizable as an Ethereum signature: Java String prefix = "\u0019Ethereum Signed Message:\n" + message.length(); byte[] msgHash = Hash.sha3((prefix + message).getBytes()); 2. Extract the r, s and v components from the Ethereum signature and create a SignatureData instance: Java byte[] signatureBytes = Numeric.hexStringToByteArray(signature); byte v = signatureBytes[64]; if (v < 27) {v += 27;} byte[] r = Arrays.copyOfRange(signatureBytes, 0, 32); byte[] s = Arrays.copyOfRange(signatureBytes, 32, 64); Sign.SignatureData data = new Sign.SignatureData(v, r, s); 3. Using the method Sign.recoverFromSignature(), retrieve the public key from the signature: Java BigInteger publicKey = Sign.signedMessageHashToKey(msgHash, sd); 4. Finally, get the wallet address and compare it with the initial address: Java String recoveredAddress = "0x" + Keys.getAddress(publicKey); if (address.equalsIgnoreCase(recoveredAddress)) { // Signature is valid. } else { // Signature is not valid. 
} There is a complete implementation of isSignatureValid(String signature, String address, Integer nonce) method with nonce: Java public boolean isSignatureValid(String signature, String address, Integer nonce) { // Compose the message with nonce String message = "Signing a message to login: %s".formatted(nonce); // Extract the ‘r’, ‘s’ and ‘v’ components byte[] signatureBytes = Numeric.hexStringToByteArray(signature); byte v = signatureBytes[64]; if (v < 27) { v += 27; } byte[] r = Arrays.copyOfRange(signatureBytes, 0, 32); byte[] s = Arrays.copyOfRange(signatureBytes, 32, 64); Sign.SignatureData data = new Sign.SignatureData(v, r, s); // Retrieve public key BigInteger publicKey; try { publicKey = Sign.signedPrefixedMessageToKey(message.getBytes(), data); } catch (SignatureException e) { logger.debug("Failed to recover public key", e); return false; } // Get recovered address and compare with the initial address String recoveredAddress = "0x" + Keys.getAddress(publicKey); return address.equalsIgnoreCase(recoveredAddress); } Step 5: Security Configuration In the Security Configuration, besides the standard formLogin setup, we need to insert our MetaMaskAuthenticationFilter into the filter chain before the default: Java @Bean public SecurityFilterChain filterChain(HttpSecurity http, AuthenticationManager authenticationManager) throws Exception { return http .authorizeHttpRequests(customizer -> customizer .requestMatchers(HttpMethod.GET, "/nonce/*").permitAll() .anyRequest().authenticated()) .formLogin(customizer -> customizer.loginPage("/login") .failureUrl("/login?error=true") .permitAll()) .logout(customizer -> customizer.logoutUrl("/logout")) .csrf(AbstractHttpConfigurer::disable) .addFilterBefore(authenticationFilter(authenticationManager), UsernamePasswordAuthenticationFilter.class) .build(); } private MetaMaskAuthenticationFilter authenticationFilter(AuthenticationManager authenticationManager) { MetaMaskAuthenticationFilter filter = new MetaMaskAuthenticationFilter(); filter.setAuthenticationManager(authenticationManager); filter.setAuthenticationSuccessHandler(new MetaMaskAuthenticationSuccessHandler(userRepository)); filter.setAuthenticationFailureHandler(new SimpleUrlAuthenticationFailureHandler("/login?error=true")); filter.setSecurityContextRepository(new HttpSessionSecurityContextRepository()); return filter; } To prevent replay attacks in case the user’s signature gets compromised, we will create the AuthenticationSuccessHandler implementation, in which we change the user’s nonce and make the user sign the message with a new nonce next login: Java public class MetaMaskAuthenticationSuccessHandler extends SimpleUrlAuthenticationSuccessHandler { private final UserRepository userRepository; public MetaMaskAuthenticationSuccessHandler(UserRepository userRepository) { super("/"); this.userRepository = userRepository; } @Override public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication authentication) throws ServletException, IOException { super.onAuthenticationSuccess(request, response, authentication); MetaMaskUserDetails principal = (MetaMaskUserDetails) authentication.getPrincipal(); User user = userRepository.getUser(principal.getAddress()); user.changeNonce(); } } Java public class User { ... 
public void changeNonce() { this.nonce = (int) (Math.random() * 1000000); } } We also need to configure the AuthenticationManager bean injecting our MetaMaskAuthenticationProvider: Java @Bean public AuthenticationManager authenticationManager(List<AuthenticationProvider> authenticationProviders) { return new ProviderManager(authenticationProviders); } Step 6: Templates Java @Controller public class WebController { @RequestMapping("/") public String root() { return "index"; } @RequestMapping("/login") public String login() { return "login"; } } Our WebController contains two templates: login.html and index.html: 1. The first template will be used to authenticate with MetaMask. To prompt a user to connect to MetaMask and receive a wallet address, we can use the eth_requestAccounts method: JavaScript const accounts = await window.ethereum.request({method: 'eth_requestAccounts'}); const address = accounts[0]; Next, having connected the MetaMask and received the nonce from the back end, we request the MetaMask to sign a message using the personal_sign method: JavaScript const nonce = await getNonce(address); const message = `Signing a message to login: ${nonce}`; const signature = await window.ethereum.request({method: 'personal_sign', params: [message, address]}); Finally, we send the calculated signature with the address to the back end. There is a complete template templates/login.html: HTML <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org" lang="en"> <head> <title>Login page</title> <meta charset="utf-8"/> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous"> <link href="https://getbootstrap.com/docs/4.0/examples/signin/signin.css" rel="stylesheet" crossorigin="anonymous"/> </head> <body> <div class="container"> <div class="form-signin"> <h3 class="form-signin-heading">Please sign in</h3> <p th:if="${param.error}" class="text-danger">Invalid signature</p> <button class="btn btn-lg btn-primary btn-block" type="submit" onclick="login()">Login with MetaMask</button> </div> </div> <script th:inline="javascript"> async function login() { if (!window.ethereum) { console.error('Please install MetaMask'); return; } // Prompt user to connect MetaMask const accounts = await window.ethereum.request({method: 'eth_requestAccounts'}); const address = accounts[0]; // Receive nonce and sign a message const nonce = await getNonce(address); const message = `Signing a message to login: ${nonce}`; const signature = await window.ethereum.request({method: 'personal_sign', params: [message, address]}); // Login with signature await sendLoginData(address, signature); } async function getNonce(address) { return await fetch(`/nonce/${address}`) .then(response => response.text()); } async function sendLoginData(address, signature) { return fetch('/login', { method: 'POST', headers: {'content-type': 'application/x-www-form-urlencoded'}, body: new URLSearchParams({ address: encodeURIComponent(address), signature: encodeURIComponent(signature) }) }).then(() => window.location.href = '/'); } </script> </body> </html> 2. 
The second templates/index.html template will be protected by our Spring Security configuration and displays the Principal name, i.e., the wallet address, once the user has signed in:
HTML
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org" xmlns:sec="http://www.thymeleaf.org/extras/spring-security" lang="en">
<head>
    <title>Spring Authentication with MetaMask</title>
    <meta charset="utf-8"/>
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous">
    <link href="https://getbootstrap.com/docs/4.0/examples/signin/signin.css" rel="stylesheet" crossorigin="anonymous"/>
</head>
<body>
<div class="container" sec:authorize="isAuthenticated()">
    <form class="form-signin" method="post" th:action="@{/logout}">
        <h3 class="form-signin-heading">This is a secured page!</h3>
        <p>Logged in as: <span sec:authentication="name"></span></p>
        <button class="btn btn-lg btn-secondary btn-block" type="submit">Logout</button>
    </form>
</div>
</body>
</html>
The full source code is provided on GitHub. In this article, we developed an alternative authentication mechanism with Spring Security and MetaMask based on asymmetric cryptography. This method can fit into your application, but only if your target audience uses cryptocurrency and has the MetaMask extension installed in their browser.
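One piece the login page relies on but that is not shown above is the endpoint behind GET /nonce/{address}, which the security configuration explicitly permits and which the getNonce(address) function calls before asking MetaMask to sign. The following is only a minimal sketch of what such a controller could look like; it is not taken from the article's source code, and it assumes that User exposes a getNonce() accessor and that unknown addresses are handled elsewhere (for example, by registering a user with a freshly generated nonce). It returns the nonce as plain text, matching the response.text() call in the login page. Note also that a cryptographically secure source such as SecureRandom is preferable to Math.random() for nonce generation, since the nonce is precisely what prevents a captured signature from being replayed.
Java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller backing GET /nonce/{address}; not part of the original source.
@RestController
public class NonceController {

    private final UserRepository userRepository;

    public NonceController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Returns the current nonce for the given wallet address; the front end
    // embeds it in the message that MetaMask signs.
    @GetMapping("/nonce/{address}")
    public String nonce(@PathVariable String address) {
        return String.valueOf(userRepository.getUser(address).getNonce());
    }
}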
In today's rapidly evolving world of software development and deployment, containerization has emerged as a transformative technology. It has revolutionized the way applications are built, packaged, and deployed, providing agility, scalability, and consistency to development and operations teams alike. Two of the most popular containerization tools, Docker and Kubernetes, play pivotal roles in this paradigm shift. In this blog, we'll dive deep into containerization technologies, explore how Docker and Kubernetes work together, and understand their significance in modern application deployment.
Understanding Containerization
Containerization is a lightweight form of virtualization that allows you to package an application and its dependencies into a single, portable unit called a container. Containers are isolated, ensuring that an application runs consistently across different environments, from development to production. Unlike traditional virtual machines (VMs), containers share the host OS kernel, which makes them extremely efficient in terms of resource utilization and startup times.
Example: Containerizing a Python Web Application
Let's consider a Python web application using Flask, a micro web framework. We'll containerize this application using Docker, a popular containerization tool.
Step 1: Create the Python Web Application
Python
# app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello, Containerization!"

if __name__ == '__main__':
    # Listen on port 80 so the EXPOSE directive and the -p 4000:80 mapping below line up
    app.run(debug=True, host='0.0.0.0', port=80)
Step 2: Create a Dockerfile
Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt (it should list flask)
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
Step 3: Build and Run the Docker Container
Shell
# Build the Docker image
docker build -t flask-app .

# Run the Docker container, mapping host port 4000 to container port 80
docker run -p 4000:80 flask-app
This demonstrates containerization by encapsulating the Python web application and its dependencies within a Docker container. The containerized app can be run consistently across various environments, promoting portability and ease of deployment. Containerization simplifies application deployment, ensures consistency, and optimizes resource utilization, making it a crucial technology in modern software development and deployment pipelines.
Docker: The Containerization Pioneer
Docker, released in 2013, is widely regarded as the pioneer of containerization technology. It introduced a simple yet powerful way to create, manage, and deploy containers. Here are some key Docker components:
Docker Engine
The Docker Engine is the core component responsible for running containers. It includes the Docker daemon, which manages containers, and the Docker CLI (Command Line Interface), which allows users to interact with Docker.
Docker Images
Docker images are lightweight, stand-alone, and executable packages that contain all the necessary code and dependencies to run an application. They serve as the blueprints for containers.
Docker Containers
Containers are instances of Docker images. They are isolated environments where applications run.
Containers are highly portable and can be executed consistently across various environments. Docker's simplicity and ease of use made it a go-to choice for developers and operators. However, managing a large number of containers at scale and ensuring high availability required a more sophisticated solution, which led to the rise of Kubernetes. Kubernetes: Orchestrating Containers at Scale Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform originally developed by Google. It provides a framework for automating the deployment, scaling, and management of containerized applications. Here's a glimpse of Kubernetes' core components: Master Node The Kubernetes master node is responsible for controlling the cluster. It manages container orchestration, scaling, and load balancing. Worker Nodes Worker nodes, also known as Minions, host containers and run the tasks assigned by the master node. They provide the computing resources needed to run containers. Pods Pods are the smallest deployable units in Kubernetes. They can contain one or more containers that share the same network namespace, storage, and IP address. Services Kubernetes services enable network communication between different sets of pods. They abstract the network and ensure that applications can discover and communicate with each other reliably. Deployments Deployments in Kubernetes allow you to declaratively define the desired state of your application and ensure that the current state matches it. This enables rolling updates and automatic rollbacks in case of failures. The Docker-Kubernetes Synergy Docker and Kubernetes are often used together to create a comprehensive containerization and orchestration solution. Docker simplifies the packaging and distribution of containerized applications, while Kubernetes takes care of their deployment and management at scale. Here's how Docker and Kubernetes work together: Building Docker Images: Developers use Docker to build and package their applications into Docker images. These images are then pushed to a container registry, such as Docker Hub or Google Container Registry. Kubernetes Deployments: Kubernetes takes the Docker images and orchestrates the deployment of containers across a cluster of nodes. Developers define the desired state of their application using Kubernetes YAML manifests, including the number of replicas, resource requirements, and networking settings. Scaling and Load Balancing: Kubernetes can automatically scale the number of container replicas based on resource utilization or traffic load. It also manages load balancing to ensure high availability and efficient resource utilization. Service Discovery: Kubernetes services enable easy discovery and communication between different parts of an application. Services can be exposed internally or externally, depending on the use case. Rolling Updates: Kubernetes supports rolling updates and rollbacks, allowing applications to be updated with minimal downtime and the ability to revert to a previous version in case of issues. The Significance in Modern Application Deployment The adoption of Docker and Kubernetes has had a profound impact on modern application deployment practices. Here's why they are crucial: Portability: Containers encapsulate everything an application needs, making it highly portable. Developers can build once and run anywhere, from their local development environment to a public cloud or on-premises data center. 
Efficiency: Containers are lightweight and start quickly, making them efficient in terms of resource utilization and time to deployment. Scalability: Kubernetes allows applications to scale up or down automatically based on demand, ensuring optimal resource allocation and high availability. Consistency: Containers provide consistency across different environments, reducing the "it works on my machine" problem and streamlining the development and operations pipeline. DevOps Enablement: Docker and Kubernetes promote DevOps practices by enabling developers and operators to collaborate seamlessly, automate repetitive tasks, and accelerate the software delivery lifecycle. Conclusion In conclusion, Docker and Kubernetes are at the forefront of containerization and container orchestration technologies. They have reshaped the way applications are developed, deployed, and managed in the modern era. By combining the simplicity of Docker with the power of Kubernetes, organizations can achieve agility, scalability, and reliability in their application deployment processes. Embracing these technologies is not just a trend but a strategic move for staying competitive in the ever-evolving world of software development. As you embark on your containerization journey with Docker and Kubernetes, remember that continuous learning and best practices are key to success. Stay curious, explore new features, and leverage the vibrant communities surrounding these technologies to unlock their full potential in your organization's quest for innovation and efficiency. Containerization is not just a technology; it's a mindset that empowers you to build, ship, and run your applications with confidence in a rapidly changing digital landscape.
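To make the declarative deployment model described above a bit more concrete, here is a small sketch that expresses a Deployment for the flask-app image from the earlier example as a Java object and prints the equivalent YAML manifest. It uses the Fabric8 Kubernetes client (io.fabric8:kubernetes-client), which is just one of several ways to produce such a manifest; in practice, most teams write the YAML by hand or generate it with their build tooling, so treat this purely as an illustration of the desired-state idea, not as a required step.
Java
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.utils.Serialization;

public class FlaskAppManifest {
    public static void main(String[] args) {
        // Desired state: three replicas of the flask-app image, listening on port 80.
        // In a real cluster, the image would first be pushed to a registry.
        Deployment deployment = new DeploymentBuilder()
            .withNewMetadata().withName("flask-app").endMetadata()
            .withNewSpec()
                .withReplicas(3)
                .withNewSelector().addToMatchLabels("app", "flask-app").endSelector()
                .withNewTemplate()
                    .withNewMetadata().addToLabels("app", "flask-app").endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("flask-app")
                            .withImage("flask-app:latest")
                            .addNewPort().withContainerPort(80).endPort()
                        .endContainer()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();

        // Print the YAML manifest that kubectl apply -f would consume
        System.out.println(Serialization.asYaml(deployment));
    }
}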
In today's interconnected business landscape, where organizations rely on a plethora of disparate systems and applications, seamless data exchange and collaboration are paramount. Enterprise Integration Patterns (EIPs) have emerged as a powerful solution to address the challenges of integrating various systems, enabling businesses to achieve streamlined processes, enhanced operational efficiency, and improved decision-making. Enterprise Integration Patterns (EIPs) are a collection of best practices and design tenets that are employed to address typical problems with integrating various systems within an enterprise. With the help of these patterns, problems with data transformation, routing, communication, and cooperation between various applications, services, and platforms can be addressed in a consistent manner. Regardless of the underlying technologies, protocols, or data formats used by the systems, the objective of EIPs is to enable seamless and effective data exchange and collaboration between them. Organizations can improve the interoperability, flexibility, scalability, and maintainability of their integration solutions by implementing EIPs. In this article, we delve into the world of Enterprise Integration Patterns, exploring their significance, common patterns, and their role in transforming businesses. Understanding Enterprise Integration Patterns Enterprise Integration Patterns, introduced by Gregor Hohpe and Bobby Woolf in their book of the same name, provide a catalog of time-tested design solutions for connecting and synchronizing systems within an enterprise. These patterns act as a common language for software architects, developers, and stakeholders, facilitating effective communication and collaboration across diverse teams. Importance of Enterprise Integration Patterns Seamless Data Exchange: EIPs enable the smooth flow of data between different systems, irrespective of their disparate architectures and technologies. They ensure data consistency, integrity, and reliability while maintaining a high level of interoperability. Scalability and Flexibility: EIPs promote scalability by allowing organizations to add or modify systems without disrupting existing integrations. They provide a flexible framework that can accommodate changes in business requirements, supporting growth and evolution. Cost Optimization: By leveraging EIPs, businesses can avoid costly point-to-point integrations and adopt a more centralized and modular approach. This reduces maintenance efforts, minimizes development time, and optimizes resource allocation. Key Concepts in Enterprise Integration Patterns Messages: Messages represent units of data exchanged between systems. They can be structured in various formats such as XML, JSON, or plain text. Messages carry information from one system to another, enabling communication and data synchronization. Channels: Channels serve as communication pathways or conduits through which messages flow. They provide a medium for sending and receiving messages between systems. Channels can be implemented using message queues, publish-subscribe mechanisms, or other communication protocols. Message Endpoints: Message endpoints are the integration points where systems interact with each other by sending or receiving messages. Endpoints define the interfaces and protocols used for message exchange, ensuring that messages are correctly transmitted and received by the intended systems. 
Message Routing: Message routing involves directing messages from a source system to one or more destination systems based on certain criteria. Routing can be based on content, metadata, or specific rules defined in the integration solution. It ensures that messages reach the appropriate systems for processing. Message Transformation: Message transformation involves modifying the structure or format of messages to ensure compatibility between systems. It includes activities like data mapping, validation, enrichment, and conversion from one data format to another. Transformation ensures that data is correctly interpreted and processed by the receiving system. Message Splitting and Aggregation: Sometimes, it is necessary to break down or split large messages into smaller, more manageable parts for processing. Conversely, message aggregation involves combining multiple smaller messages into a single message for further processing or analysis. Splitting and aggregation enable efficient data processing and collaboration between systems. Benefits of Enterprise Integration Patterns Standardization: EIPs provide a standardized approach to integration, allowing organizations to establish a common language and understanding among architects, developers, and stakeholders. This promotes better collaboration and communication, reducing complexity and enabling effective teamwork. Reusability: EIPs encapsulate proven design solutions to common integration challenges. By leveraging these patterns, organizations can build reusable components and frameworks, reducing development effort and promoting code reuse across different integration projects. Scalability and Flexibility: EIPs enable organizations to build scalable and flexible integration solutions. The patterns support the addition of new systems, modification of existing systems, and handling increased data volume without disrupting the overall integration architecture. This allows businesses to adapt to changing requirements and scale their integration infrastructure as needed. Maintainability: EIPs promote modular and decoupled integration solutions, making it easier to maintain and update individual components without affecting the entire system. This simplifies troubleshooting, debugging, and maintenance activities, resulting in improved system reliability and stability. Performance and Efficiency: By employing message routing, filtering, and transformation techniques, EIPs help optimize performance and reduce unnecessary data processing. Messages are selectively processed and delivered to the appropriate systems, improving system efficiency and response times. Common Enterprise Integration Patterns Publish-Subscribe: This pattern enables systems to publish messages to specific channels, and other systems that have subscribed to those channels receive the messages. It facilitates broadcasting information to multiple systems simultaneously. Request-Reply: In this pattern, a system sends a request message to another system and expects a reply message in response. It enables synchronous communication between systems, where the requester waits for a response before proceeding further. Message Translator: This pattern focuses on transforming messages from one data format or protocol to another. It enables interoperability between systems that use different data representations, allowing them to understand and process messages correctly. 
Message Filter: This pattern enables the selective filtering of messages based on specific criteria, allowing systems to process only the relevant information. It enhances system performance by reducing the amount of unnecessary data being processed. The message filter pattern allows systems to selectively process messages based on predefined criteria. It filters out messages that do not meet the specified conditions, ensuring that only relevant messages are processed. Content-Based Router: This pattern routes messages to different destinations based on the content of the messages. It examines the content of incoming messages and determines the appropriate destination or processing path based on predefined rules or conditions. Message Splitter: The message splitter pattern divides a single message into multiple smaller messages. It is useful when a system needs to process individual parts of a large message separately or when distributing work among multiple systems or processes. Message Aggregator: This pattern combines multiple smaller messages into a single larger message. It is used when multiple systems produce related messages that need to be aggregated and processed as a whole. Message Broker: The message broker pattern acts as an intermediary between sender and receiver systems. It receives messages from sender systems, stores them temporarily, and ensures reliable delivery to the appropriate receiver systems. It decouples systems and provides asynchronous message exchange. Event-Driven Consumer: This pattern enables systems to react to events or messages asynchronously. Instead of actively requesting or polling for new messages, systems listen for events or messages and respond accordingly when they occur. Service Activator: The service activator pattern triggers a service or system to perform a specific action in response to an incoming message. It invokes the appropriate service or component to process the message and generate a response if required. Message Routing: This pattern deals with the flow and transformation of messages between systems. It includes filters, content-based routers, and dynamic routers, enabling messages to be selectively delivered based on content, destination, or other parameters. Message Transformation: This pattern facilitates the transformation of data formats and structures to ensure compatibility between systems. It includes techniques such as message enrichment, translation, and normalization. Message Endpoint: This pattern represents the integration point where systems send or receive messages. It encompasses concepts like publish-subscribe, request-reply, and message-driven beans, enabling asynchronous communication and decoupling of systems. Message Construction: This pattern focuses on constructing complex messages from simpler ones. It includes techniques like message aggregation, composition, and splitting, allowing systems to collaborate efficiently by exchanging composite messages. Message Routing Channels: This pattern establishes channels that facilitate communication between systems. Channels can be implemented as message queues, publish-subscribe topics, or message brokers, providing reliable and scalable integration solutions. Integration Frameworks and Tools Several integration frameworks and tools have been developed to implement Enterprise Integration Patterns effectively. 
Apache Camel, Spring Integration, and MuleSoft are some popular frameworks that provide extensive support for designing, implementing, and managing integration solutions. These frameworks offer a wide range of connectors, processors, and adapters, simplifying the development process and reducing time to market. Conclusion Enterprise Integration Patterns have become a key building block for developing reliable and scalable integration solutions in today's complex business environment. EIPs give businesses the tools they need to overcome the difficulties of integrating dissimilar systems, ensuring smooth data exchange, and promoting collaboration. They do this by offering a comprehensive catalog of tested design solutions. By embracing EIPs and utilizing integration frameworks, businesses can achieve operational efficiency, agility, and innovation and thereby gain a competitive edge in the digital landscape. Enterprise Integration Patterns are essential for achieving effective and seamless integration of various systems within an organization. By implementing these patterns, organizations can get past the difficulties associated with data transformation, routing, and coordination, enabling them to create scalable, adaptable, and maintainable integration solutions. Organizations can streamline their operations, improve collaboration, and gain a competitive edge in today's interconnected business environment by utilizing the advantages of standardization, reusability, and performance optimization.
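To give a flavor of how these patterns look in practice, here is a small sketch using the Java DSL of Apache Camel, one of the frameworks mentioned above. The endpoint URIs and the order XML structure are invented for illustration only; the point is simply that a Content-Based Router and a Message Filter can each be expressed in a few lines once a framework provides the pattern vocabulary.
Java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Content-Based Router: inspect each incoming order and send it to a
        // different channel depending on its type attribute.
        from("file:orders/inbox")
            .choice()
                .when(xpath("/order[@type = 'priority']"))
                    .to("jms:queue:priorityOrders")
                .otherwise()
                    .to("jms:queue:standardOrders");

        // Message Filter: only orders above a given total are forwarded to the
        // auditing channel; everything else is simply dropped by the filter.
        from("jms:queue:standardOrders")
            .filter(xpath("/order[total > 1000]"))
            .to("jms:queue:auditedOrders");
    }
}
Running such routes requires a CamelContext (for example, one provided by a Spring Boot or Quarkus runtime) and the corresponding component dependencies (camel-file, camel-jms); the routing logic itself stays independent of those choices.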
Learn how to record SSH sessions on a Red Hat Enterprise Linux VSI in a private VPC network using in-built packages. The VPC private network is provisioned through Terraform and the RHEL packages are installed using Ansible automation.
What Is Session Recording and Why Is It Required?
As noted in "Securely record SSH sessions on RHEL in a private VPC network," a Bastion host and a jump server are both security mechanisms used in network and server environments to control and enhance security when connecting to remote systems. They serve similar purposes but have some differences in their implementation and use cases. The Bastion host is placed in front of the private network to take SSH requests from public traffic and pass them to the downstream machine. Bastion hosts and jump servers are vulnerable to intrusion as they are exposed to public traffic. Session recording helps the administrator of a system audit user SSH sessions and comply with regulatory requirements. In the event of a security breach, you as an administrator would like to audit and analyze the user sessions. This is critical for a security-sensitive system. Before deploying the session recording solution, you need to provision a private VPC network following the instructions in the article, "Architecting a Completely Private VPC Network and Automating the Deployment." Alternatively, if you are planning to use your own VPC infrastructure, you need to attach a floating IP to the virtual server instance and a public gateway to each of the subnets. Additionally, you need to allow inbound network traffic from the public internet.
Deploy Session Recording Using Ansible
To be able to deploy the session recording solution, you need to have the following packages installed on the RHEL VSI:
tlog
SSSD
cockpit-session-recording
The packages will be installed through Ansible automation on all the VSIs, both the bastion hosts and the RHEL VSI. If you haven't done so yet, clone the GitHub repository and move to the Ansible folder.
Shell
git clone https://github.com/VidyasagarMSC/private-vpc-network
cd ansible
Create hosts.ini from the template file.
Shell
cp hosts_template.ini hosts.ini
Update the hosts.ini entries as per your VPC IP addresses.
Plain Text
[bastions]
10.10.0.13
10.10.65.13
[servers]
10.10.128.13
[bastions:vars]
ansible_port=22
ansible_user=root
ansible_ssh_private_key_file=/Users/vmac/.ssh/ssh_vpc
packages="['tlog','cockpit-session-recording','systemd-journal-remote']"
[servers:vars]
ansible_port=22
ansible_user=root
ansible_ssh_private_key_file=/Users/vmac/.ssh/ssh_vpc
ansible_ssh_common_args='-J root@10.10.0.13'
packages="['tlog','cockpit-session-recording','systemd-journal-remote']"
Run the Ansible playbook to install the packages from an IBM Cloud private mirror/repository.
Shell
ansible-playbook main_playbook.yml -i hosts.ini --flush-cache
Running Ansible playbooks
As you can see in the image, after you SSH into the RHEL machine, you will see a note saying that the current session is being recorded.
Check the Session Recordings, Logs, and Reports
If you closely observe the messages post SSH, you will see a URL to the web console that can be accessed using the machine name or the private IP over port 9090. To allow traffic on port 9090, change the value of the allow_port_9090 variable to true in the Terraform code and run terraform apply. The latest terraform apply will add ACL and security group rules to allow traffic on port 9090. Now, open a browser and navigate to http://10.10.128.13:9090.
To access the web console using the VSI name, you need to set up a private DNS (out of scope for this article). You need the root password to access the web console.
RHEL web console
Navigate to Session Recording to see the list of session recordings. Along with session recordings, you can check the logs, diagnostic reports, etc.
Session recording on the Web console
Recommended Reading
How to use Schematics - Terraform UI to provision the cloud resources
In part three of this series, we have seen how to deploy our Quarkus/Camel-based microservices in Minikube, which is one of the most commonly used local Kubernetes implementations. While such a local Kubernetes implementation is very practical for testing purposes, its single-node nature doesn't satisfy real production environment requirements. Hence, in order to check our microservices' behavior in a production-like environment, we need a multi-node Kubernetes implementation. And one of the most common is OpenShift.
What Is OpenShift?
OpenShift is an open-source, enterprise-grade platform for container application development, deployment, and management based on Kubernetes. Developed by Red Hat as a component layer on top of a Kubernetes cluster, it comes both as a commercial product and as a free platform, and it can run both on-premises and on cloud infrastructure. The figure below depicts this architecture. As with any Kubernetes implementation, OpenShift has its complexities, and installing it as a standalone on-premise platform isn't a walk in the park. Using it as a managed platform on a dedicated cloud like AWS, Azure, or GCP is a more practical approach, at least in the beginning, but it requires a certain enterprise organization. For example, ROSA (Red Hat OpenShift Service on AWS) is a commercial solution that facilitates the rapid creation and simple management of a full Kubernetes infrastructure, but it isn't really a developer-friendly environment that allows developers to quickly develop, deploy, and test cloud-native services. For this latter use case, Red Hat offers the OpenShift Developer's Sandbox, a development environment that gives immediate access to OpenShift without any heavy installation or subscription process and where developers can start practicing their skills and their learning cycle, even before having to work on real projects. This totally free service, which doesn't require any credit card but only a Red Hat account, provides a private OpenShift environment in a shared, multi-tenant Kubernetes cluster that is pre-configured with a set of developer tools like Java, Node.js, Python, Go, and C#, a catalog of Helm charts, the s2i build tool, and OpenShift Dev Spaces. In this post, we'll be using the OpenShift Developer's Sandbox to deploy our Quarkus/Camel microservices.
Deploying on OpenShift
In order to deploy on OpenShift, Quarkus applications need to include the OpenShift extension. This might be done using the Quarkus CLI, of course, but given that our project is a multi-module Maven one, a more practical way of doing it is to directly include the following dependencies in the master POM:
XML
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-openshift</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-openshift</artifactId>
</dependency>
This way, all the sub-modules will inherit the dependencies. OpenShift is supposed to work with vanilla Kubernetes resources; hence, our previous recipe, where we deployed our microservices on Minikube, should also apply here. After all, both Minikube and OpenShift are implementations of the same de facto standard: Kubernetes. If we look back at part three of this series, our Jib-based build and deploy process was generating vanilla Kubernetes manifest files (kubernetes.yaml), as well as Minikube ones (minikube.yaml). Then, we had the choice between using the vanilla-generated Kubernetes resources or the more specific Minikube ones, and we preferred the latter alternative.
While the Minikube-specific manifest files can only work when deployed on Minikube, the vanilla Kubernetes ones are supposed to work the same way on Minikube as well as on any other Kubernetes implementation, like OpenShift. However, in practice, things are a bit more complicated and, in my case, I failed to successfully deploy on OpenShift the vanilla Kubernetes manifests generated by Jib. What I needed to do was to rename most of the properties matching the pattern quarkus.kubernetes.* to quarkus.openshift.*. Also, some vanilla Kubernetes properties, for example quarkus.kubernetes.ingress.expose, have a completely different name for OpenShift; in this case, quarkus.openshift.route.expose. But with the exception of these almost cosmetic alterations, everything else remains the same as in our previous recipe from part three. Now, in order to deploy our microservices on the OpenShift Developer's Sandbox, proceed as follows.
Log in to OpenShift Developer's Sandbox
Here are the required steps to log in to the OpenShift Developer Sandbox:
Fire up your preferred browser and go to the OpenShift Developer's Sandbox site.
Click on the Login link in the upper right corner (you need to already have registered with the OpenShift Developer Sandbox).
Click on the red button labeled Start your sandbox for free in the center of the screen.
In the upper right corner, unfold your user name and click on the Copy login command button.
In the new dialog labeled Log in with ..., click on the DevSandbox link.
A new page is displayed with a link labeled Display Token. Click on this link.
Copy and execute the displayed oc command, for example:
Shell
$ oc login --token=... --server=https://api.sandbox-m3.1530.p1.openshiftapps.com:6443
Clone the Project From GitHub
Here are the steps required to clone the project's GitHub repository:
Shell
$ git clone https://github.com/nicolasduminil/aws-camelk.git
$ cd aws-camelk
$ git checkout openshift
Create the OpenShift Secret
In order to connect to AWS resources, like S3 buckets and SQS queues, we need to provide AWS credentials. These credentials are the Access Key ID and the Secret Access Key. There are several ways to provide these credentials, but here, we chose to use Kubernetes secrets. Here are the required steps:
First, encode your Access Key ID and Secret Access Key in Base64 as follows:
Shell
$ echo -n <your AWS access key ID> | base64
$ echo -n <your AWS secret access key> | base64
Edit the file aws-secret.yaml and amend the following lines, replacing ... with the Base64-encoded values:
Shell
AWS_ACCESS_KEY_ID: ...
AWS_SECRET_ACCESS_KEY: ...
Create the OpenShift secret containing the AWS access key ID and secret access key:
Shell
$ kubectl apply -f aws-secret.yaml
Start the Microservices
In order to start the microservices, run the following script:
Shell
$ ./start-ms.sh
This script is the same as the one in our previous recipe in part three:
Shell
#!/bin/sh
./delete-all-buckets.sh
./create-queue.sh
sleep 10
mvn -DskipTests -Dquarkus.kubernetes.deploy=true clean install
sleep 3
./copy-xml-file.sh
The copy-xml-file.sh script that is used here in order to trigger the Camel file poller has been amended slightly:
Shell
#!/bin/sh
aws_camel_file_pod=$(oc get pods | grep aws-camel-file | grep -wv -e build -e deploy | awk '{print $1}')
cat aws-camelk-model/src/main/resources/xml/money-transfers.xml | oc exec -i $aws_camel_file_pod -- sh -c "cat > /tmp/input/money-transfers.xml"
Here, we replaced the kubectl commands with the oc ones.
Also, given that OpenShift has the particularity of creating pods not only for the microservices but also for the build and deploy commands, we need to filter out of the list of running pods the ones whose names contain build or deploy. Running this script might take some time. Once it finishes, make sure that all the required OpenShift controllers are running:
Shell
$ oc get is
NAME IMAGE REPOSITORY TAGS UPDATED
aws-camel-file default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-file 1.0.0-SNAPSHOT 17 minutes ago
aws-camel-jaxrs default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-jaxrs 1.0.0-SNAPSHOT 9 minutes ago
aws-camel-s3 default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-s3 1.0.0-SNAPSHOT 16 minutes ago
aws-camel-sqs default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/aws-camel-sqs 1.0.0-SNAPSHOT 13 minutes ago
openjdk-11 default-route-openshift-image-registry.apps.sandbox-m3.1530.p1.openshiftapps.com/nicolasduminil-dev/openjdk-11 1.10,1.10-1,1.10-1-source,1.10-1.1634738701 + 46 more... 18 minutes ago

$ oc get pods
NAME READY STATUS RESTARTS AGE
aws-camel-file-1-build 0/1 Completed 0 19m
aws-camel-file-1-d72w5 1/1 Running 0 18m
aws-camel-file-1-deploy 0/1 Completed 0 18m
aws-camel-jaxrs-1-build 0/1 Completed 0 14m
aws-camel-jaxrs-1-deploy 0/1 Completed 0 10m
aws-camel-jaxrs-1-pkf6n 1/1 Running 0 10m
aws-camel-s3-1-76sqz 1/1 Running 0 17m
aws-camel-s3-1-build 0/1 Completed 0 18m
aws-camel-s3-1-deploy 0/1 Completed 0 17m
aws-camel-sqs-1-build 0/1 Completed 0 17m
aws-camel-sqs-1-deploy 0/1 Completed 0 14m
aws-camel-sqs-1-jlgkp 1/1 Running 0 14m

$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aws-camel-jaxrs ClusterIP 172.30.192.74 <none> 80/TCP 11m
modelmesh-serving ClusterIP None <none> 8033/TCP,8008/TCP,8443/TCP,2112/TCP 18h
As shown in the listing above, all the required image streams have been created, and all the pods are either completed or running. The completed pods are the ones associated with the build and deploy operations, while the running ones are associated with the microservices. There is only one service running: aws-camel-jaxrs. This service makes it possible to communicate with the pod running the aws-camel-jaxrs microservice by exposing the route to it. This happens automatically as an effect of the quarkus.openshift.route.expose=true property. The aws-camel-sqs microservice needs to communicate with aws-camel-jaxrs and, consequently, it needs to know the route to it. To get this route, you may proceed as follows:
Shell
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
aws-camel-jaxrs aws-camel-jaxrs-nicolasduminil-dev.apps.sandbox-m3.1530.p1.openshiftapps.com aws-camel-jaxrs http None
Now open the application.properties file associated with the aws-camel-sqs microservice and modify the rest-uri property so that it reads as follows:
Properties files
rest-uri=aws-camel-jaxrs-nicolasduminil-dev.apps.sandbox-m3.1530.p1.openshiftapps.com/xfer
Here, you have to replace the namespace nicolasduminil-dev with the value that applies to your case. Now, you need to stop the microservices and start them again:
Shell
$ ./kill-ms.sh
...
$ ./start-ms.sh
...
Your microservices should now run as expected, and you can check the log files with commands like:
Shell
$ oc logs aws-camel-jaxrs-1-pkf6n
As you can see, in order to get the route to the aws-camel-jaxrs service, we need to start our microservices, stop them, and start them again. This solution is far from elegant, but I didn't find another one, and I'm counting on interested readers to help me improve it. It is probably possible to use the OpenShift Java client to do, in Java code, what the oc get routes command does, but I didn't find out how, and the documentation isn't very explicit. My apologies for not being able to provide the complete solution here, but enjoy it nevertheless!
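One possible direction, offered here only as an untested sketch and not as the missing piece of the project, is the Fabric8 OpenShift client, which can read the same Route objects that oc get routes displays. Assuming a recent 6.x version of the io.fabric8:openshift-client dependency is on the classpath and that the client picks up the kubeconfig created by the oc login command above, something along these lines could look up the host of the aws-camel-jaxrs route programmatically:
Java
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.openshift.api.model.Route;
import io.fabric8.openshift.client.OpenShiftClient;

public class RouteLookup {
    public static void main(String[] args) {
        // Builds a client from the local kubeconfig created by the oc login command
        try (OpenShiftClient client = new KubernetesClientBuilder().build()
                .adapt(OpenShiftClient.class)) {
            Route route = client.routes()
                .inNamespace("nicolasduminil-dev")   // replace with your own namespace
                .withName("aws-camel-jaxrs")
                .get();
            // This host is what the rest-uri property of aws-camel-sqs needs
            System.out.println(route.getSpec().getHost());
        }
    }
}
Whether this can be wired cleanly into the build or the startup of the microservices is left as an exercise; it is mentioned only because it maps directly onto the oc get routes step described above.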
In our contemporary cybersecurity landscape, sneaky custom content threats are beginning to penetrate our email security policies and firewalls/virus-scanning network proxies with greater consistency. Aptly disguised files can easily wind their way into our inboxes and our most sensitive file storage locations, and they can lurk there for extended periods, waiting patiently for unsuspecting victims to download and execute their malicious payloads. Seemingly, the faster we rush to understand and mitigate one iteration of a hidden content threat, the quicker that threat evolves into something entirely new, catching us by surprise again and again.
In recent years, Office file formats, URLs, and executables have stolen the spotlight as the most commonly pursued hosts for latent email and storage-based attack vectors alike. Links to compromised websites are frequently encountered in our email inboxes, as are malicious macros and various executables. Invalid files, password-protected files, or even OLE-enabled (object linking and embedding) files with malicious content can often be found scattered throughout our cloud storage instances. Amid all of this, an even stealthier form of malware host has begun to gain ground over its contemporaries: archive file formats like ZIP and RAR. According to research conducted over a three-month period in 2022, more than 40% of malware attacks used ZIP and RAR formats to deliver malicious content to a client device. That exceeds the usage of many long-established Office formats over the same period, and while that might seem surprising at first, at a closer look, it's not hard to see why. File compression formats can harness powerful encryption algorithms to safeguard their contents, and there's not much a regular virus and malware scanning service can do when it can't decrypt the files it needs to scan.
As if an archive's encryption algorithms weren't already a difficult enough obstacle for virus and malware scanning solutions, making matters worse is the ease with which these archive formats can be smuggled past security policies inside the body of disguised, invalid file types. For example, some recent attacks have buried archives within HTML documents, and these HTML documents have been designed to convincingly mimic the online PDF viewers (complete with an apparent PDF file extension and a seemingly normal document thumbnail) we're accustomed to opening in our browsers. If we let our eyes deceive us and download an HTML mimic file, we might unknowingly decrypt and subsequently inject the contents of an externally stored malicious ZIP or RAR archive directly onto our device, allowing an attacker to establish a direct link with our computer and initiate a fully-fledged cyberattack.
As pure virus and malware detection policies become increasingly inadequate sentinels on their own, it's more important than ever that we simultaneously deploy content-validation-centric policies against inbound files. Detecting a stray ZIP, RAR, or invalid file type in a sensitive location can be the difference between the success and failure of a latent cyberattack. One way we can accomplish this is with the help of simple document validation APIs, and I've provided a few free-to-use options in the demonstration portion of this article.
Demonstration The API solutions provided below are free to use (with a free-tier API key), and they’re easy to call via ready-to-run Java code examples supplied further down the page, beginning with Java SDK installation instructions. They’re designed to perform the following actions, respectively: Validate if a file is a ZIP archive. Validate if a file is a RAR archive. Automatically detect the contents of a common file type (i.e., PDF, HTML, XLSX, etc.) and perform in-depth content verification against the file’s extension. After processing each file, these solutions will return a “DocumentIsValid” Boolean response, making it straightforward to flag or divert common content threat types away from sensitive locations within our system. Additionally, all these solutions will identify whether a file has password-protection measures in place (this is often a further indication of malicious content — especially when a file in question originates from an untrustworthy source), and they'll identify any overt errors or warnings associated with the document in question. As a reminder, these APIs are NOT designed to detect or flag virus or malware signatures; their utility will depend on where you elect to deploy them. They can just as easily be deployed as simple data validation steps in the workflow of any regular file-processing application. Further down the page, I've linked a previous article that highlights an API solution that scans, validates, and verifies content all in one step. To begin structuring our API calls, let’s install the SDK with Maven by first adding a reference to the repository in pom.xml: XML <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> After that, let’s add a reference to the dependency in pom.xml: XML <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> We can then call the ZIP File Validation API using the below code: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ValidateDocumentApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); ValidateDocumentApi apiInstance = new ValidateDocumentApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. 
try { DocumentValidationResult result = apiInstance.validateDocumentZipValidation(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ValidateDocumentApi#validateDocumentZipValidation"); e.printStackTrace(); } We can call the RAR File Validation API using the code below: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ValidateDocumentApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); ValidateDocumentApi apiInstance = new ValidateDocumentApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. try { DocumentValidationResult result = apiInstance.validateDocumentRarValidation(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ValidateDocumentApi#validateDocumentRarValidation"); e.printStackTrace(); } Lastly, we can call the Automatic Content Validation API using the final code examples below: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ValidateDocumentApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); ValidateDocumentApi apiInstance = new ValidateDocumentApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. try { AutodetectDocumentValidationResult result = apiInstance.validateDocumentAutodetectValidation(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ValidateDocumentApi#validateDocumentAutodetectValidation"); e.printStackTrace(); } Hopefully, with a few additional content validation policies in place, we can rest assured that we’ll be aware when common threat vectors enter our system. Scan, Verify, and Validate Content All at Once To take advantage of an API solution designed to simultaneously identify viruses, malware, and custom content threats (with full content verification and custom content restriction policies), feel free to check out my previous article, "How to Protect .NET Web Applications from Viruses and Zero Day Threats." Since that article applies to .NET application development, I've provided comparable Java code examples below for Java application development. 
First, add the following reference to the repository in pom.xml: XML <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> Then add the following reference to the dependency in pom.xml: XML <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> Finally, use the below Java code examples to structure your API call, and once again, utilize a free-tier API key to authorize your requests. As outlined in the linked article, you can use Booleans to set custom restrictions against a variety of custom content threat types (macros, password-protected files, malicious archives, HTML, scripts, etc.), and you can custom-restrict unwanted file types by supplying a comma-separated list of accepted file extensions (e.g., .docx,.pdf,.xlsx) in the string restrictFileTypes parameter. Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ScanApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); ScanApi apiInstance = new ScanApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. Boolean allowExecutables = true; // Boolean | Set to false to block executable files (program code) from being allowed in the input file. Default is false (recommended). Boolean allowInvalidFiles = true; // Boolean | Set to false to block invalid files, such as a PDF file that is not really a valid PDF file, or a Word Document that is not a valid Word Document. Default is false (recommended). Boolean allowScripts = true; // Boolean | Set to false to block script files, such as a PHP files, Python scripts, and other malicious content or security threats that can be embedded in the file. Set to true to allow these file types. Default is false (recommended). Boolean allowPasswordProtectedFiles = true; // Boolean | Set to false to block password protected and encrypted files, such as encrypted zip and rar files, and other files that seek to circumvent scanning through passwords. Set to true to allow these file types. Default is false (recommended). Boolean allowMacros = true; // Boolean | Set to false to block macros and other threats embedded in document files, such as Word, Excel and PowerPoint embedded Macros, and other files that contain embedded content threats. Set to true to allow these file types. Default is false (recommended). Boolean allowXmlExternalEntities = true; // Boolean | Set to false to block XML External Entities and other threats embedded in XML files, and other files that contain embedded content threats. Set to true to allow these file types. Default is false (recommended). Boolean allowInsecureDeserialization = true; // Boolean | Set to false to block Insecure Deserialization and other threats embedded in JSON and other object serialization files, and other files that contain embedded content threats. Set to true to allow these file types. Default is false (recommended). 
Boolean allowHtml = true; // Boolean | Set to false to block HTML input in the top level file; HTML can contain XSS, scripts, local file accesses and other threats. Set to true to allow these file types. Default is false (recommended) [for API keys created prior to the release of this feature default is true for backward compatability]. String restrictFileTypes = "restrictFileTypes_example"; // String | Specify a restricted set of file formats to allow as clean as a comma-separated list of file formats, such as .pdf,.docx,.png would allow only PDF, PNG and Word document files. All files must pass content verification against this list of file formats, if they do not, then the result will be returned as CleanResult=false. Set restrictFileTypes parameter to null or empty string to disable; default is disabled. try { VirusScanAdvancedResult result = apiInstance.scanFileAdvanced(inputFile, allowExecutables, allowInvalidFiles, allowScripts, allowPasswordProtectedFiles, allowMacros, allowXmlExternalEntities, allowInsecureDeserialization, allowHtml, restrictFileTypes); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ScanApi#scanFileAdvanced"); e.printStackTrace(); }
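As a final, optional step, you may want to act on these results programmatically. The helper below is only a sketch: it assumes the generated model class lives in com.cloudmersive.client.model and exposes the CleanResult field through a getCleanResult() accessor, which is the usual convention for these generated clients, so verify both against the client version you install. The quarantine directory is likewise a placeholder for whatever your own pipeline does with rejected files.
Java
import com.cloudmersive.client.model.VirusScanAdvancedResult;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ScanResultGate {

    // Sketch only: verify the accessor name in the model class shipped with your client version.
    public static void handle(VirusScanAdvancedResult result, File inputFile, Path quarantineDir)
            throws IOException {
        if (Boolean.TRUE.equals(result.getCleanResult())) {
            // The file passed both the threat scan and the content restrictions
            System.out.println(inputFile.getName() + " passed scanning and content restrictions");
        } else {
            // Divert anything suspicious away from sensitive storage locations
            Files.move(inputFile.toPath(), quarantineDir.resolve(inputFile.getName()),
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }
}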