Containers
The proliferation of containers in recent years has increased the speed, portability, and scalability of software infrastructure and deployments across all kinds of application architectures and cloud-native environments. Now, with more and more organizations having migrated to the cloud, what's next? Efficiently managing and monitoring containerized environments remains a crucial task for teams. With organizations looking to better leverage their containers — and some still working to migrate out of their own monolithic environments — the path to containerization and architectural modernization remains a perpetual climb. In DZone's 2023 Containers Trend Report, we will explore the current state of containers, key trends and advancements in global containerization strategies, and constructive content for modernizing your software architecture. This will be examined through DZone-led research, expert community articles, and other helpful resources for designing and building containerized applications.
At work, I regularly train people on the subject of continuous integration and continuous delivery, where I predominantly utilize GitHub Actions for the workshop assignments. This choice is motivated by GitHub's extensive adoption within the developer community and the generous offering of approximately 2,000 minutes, or 33 hours, of free build time per month. During one of my recent workshops, a participant raised a question regarding the possibility of locally testing workflows before pushing them to GitHub. They pointed out the inconvenience of waiting for a runner to pick up their pipeline or workflow, which negatively impacts the developer experience. At that time, I was unaware of any local options for GitHub Actions. However, I have since come across a solution called act that addresses this issue.

What Is Act?

act is a command-line utility that emulates the GitHub Actions environment, allowing you to test your GitHub Actions workflows on your developer laptop instead of on GitHub's hosted runners. You can install act by using, for instance, Homebrew on the Mac.

Shell
$ brew install act

Running Workflows Locally

act enables you to execute and debug GitHub Actions workflows locally, providing a faster feedback loop during development. Running the act command will pick up the workflows in your .github/workflows folder and try to execute them. Using act can be as simple as:

Shell
$ act

act uses Docker to create an isolated environment that closely resembles the GitHub Actions execution environment. This ensures consistency in the execution of actions and workflows. If you don't have Docker installed, you can use Docker Desktop or Colima, an easy way to run container runtimes on macOS.

Runners

When defining the workflow, you can specify a runner based on a specific virtual machine/environment for performing your steps.

YAML
jobs:
  Build:
    runs-on: ubuntu-latest
    steps:
      ...

By default, act has a mapping to a specific Docker image when you specify the ubuntu-latest runner. When running act for the first time, it will ask you to pick a default image for ubuntu-latest. You can choose from three types of base images that can be mapped to ubuntu-latest:

Micro Docker Image (node:16-buster-slim)
Medium Docker Image (catthehacker/ubuntu:act-latest)
Large Docker Image (catthehacker/ubuntu:full-latest)

Don't worry if you're not happy with the one you selected. You can always change the default selection by editing the following file in your user home directory: ~/.actrc. The large Docker image is around 18 GB! So I initially picked the medium-sized image, as it should contain most of the commonly used system dependencies. I soon learned that it contains quite a few libraries, but when I tried to run a Java + Maven-based project, I found that it did not contain Apache Maven, while the regular ubuntu-latest runner on GitHub does.

[CI/Build] Run Main Build
[CI/Build] docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/2] user= workdir=
| /var/run/act/workflow/2: line 2: mvn: command not found
[CI/Build] Failure - Main Build
[CI/Build] exitcode '127': command not found, please refer to https://github.com/nektos/act/issues/107 for more information

I didn't want to switch to an 18 GB Docker image just to be able to run Maven, so I ended up finding an existing image by Jamez Perkins. It simply takes the original act image and adds Maven version 3.x to it. You can easily run your workflow with a custom image by providing the platform parameter.
Shell
$ act -P ubuntu-latest=quay.io/jamezp/act-maven

After using that image, my workflow ran without any errors.

Working With Multiple Jobs/Stages

Your GitHub Actions workflow usually consists of one or more jobs that separate different stages of your workflow. You might, for instance, have a build, test, and deploy stage. Usually, you build your application in the build job and use the resulting artifact in the deploy job. Jobs can run on different runners, so in a GitHub Actions environment, you will probably be using the upload/download artifact actions, which use centralized storage for sharing the artifacts between different runners. When using act and sharing artifacts, you will need to be specific about where the artifacts need to be stored. You can do so by providing a specific parameter named --artifact-server-path.

Shell
$ act -P ubuntu-latest=quay.io/jamezp/act-maven \
    --artifact-server-path /tmp/act-artifacts

Working With Secrets

It's a best practice to always separate your secrets from your workflow definition and only reference them from a specific secret store. When using GitHub Actions, you can store your secrets in the built-in secret management functionality. To provide an action with a secret, you can use the secrets context to access secrets you've created in your repository.

YAML
jobs:
  staticanalysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0
      - name: SonarQube Scan
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_URL }}

act does not have a UI where you can specify secrets, so you will need to explicitly provide those values from the command line or store them in a .env-formatted file when testing your workflow. If you only have a few secrets, you can easily add them from the command line using the -s option; alternatively, you can point to a secrets file.

Shell
$ act -s SONAR_TOKEN=somevalue
$ act --secret-file my.secrets

Working With Environment Variables

Similar to secrets, you sometimes make use of environment variables inside your workflow. For a single environment variable, you can use --env myenv=foo, or if you have a set of environment variables, you can create a dotenv file and provide a reference to it from the CLI with the --env-file parameter.

Shell
$ act --env-file my.env

The .env file follows a simple, standard file format that contains a set of key-value pairs divided by new lines.

Plain Text
MY_ENV_VAR=MY_ENV_VAR_VALUE
MY_2ND_ENV_VAR="my 2nd env var value"

Event Simulation

Events are a fundamental part of workflows. Workflows start due to some specific event happening within GitHub, like a push, the creation of a pull request, etc. With act, you can simulate such an event to trigger your workflow(s). You can provide the event as an argument.

Shell
$ act pull_request

Events are usually more complex than just a simple string, so if you want to be specific, you can provide a reference to an event payload:

Shell
$ act --eventpath pull_request.json

JSON
{
  "pull_request": {
    "head": { "ref": "sample-head-ref" },
    "base": { "ref": "sample-base-ref" }
  }
}

By providing your events from the command line, you can test different scenarios and observe how your workflows respond to those events.

Summary

Using act is straightforward and can significantly help in the initial phase of developing your workflow. act offers a significant advantage in terms of a swift feedback loop.
It enables developers to perform tests locally and iterate rapidly until they achieve the desired outcome, eliminating the need to wait for GitHub's runners to finish the workflow. act additionally helps developers avoid wasting resources on GitHub's runners. By conducting local tests, developers can ensure the proper functioning of their workflows before pushing code changes to the repository and initiating a workflow on GitHub's runners. If you're working with GitHub Actions, I would recommend assessing act as a tool for your development team.
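To tie the pieces together, here is a minimal, hypothetical Maven workflow and a single act invocation that combines the options discussed above. The workflow content, image, secret file, and env file names are illustrative placeholders, not prescriptions.

YAML
# .github/workflows/ci.yml: a minimal, illustrative build workflow
name: CI
on: [push, pull_request]
jobs:
  Build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Main Build
        run: mvn -B package

Shell
# Run the workflow locally with a Maven-enabled image, local artifact storage,
# secrets and environment variables from files, and a simulated pull_request event
$ act pull_request \
    -P ubuntu-latest=quay.io/jamezp/act-maven \
    --artifact-server-path /tmp/act-artifacts \
    --secret-file my.secrets \
    --env-file my.env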
We have reached the final installment of our Manifold series, but not the end of its remarkable capabilities. Throughout this series, we have delved into various aspects of Manifold, highlighting its unique features and showcasing how it enhances Java development. In this article, we will cover some of the remaining features of Manifold, including its support for GraphQL, integration with JavaScript, and the utilization of a preprocessor. By summarizing these features and reflecting on the knowledge gained throughout the series, I hope to demonstrate the power and versatility of Manifold.

Expanding Horizons With GraphQL Support

GraphQL, a relatively young technology, has emerged as an alternative to REST APIs. It introduced a specification for requesting and manipulating data between client and server, offering an arguably more efficient and streamlined approach. However, GraphQL can pose challenges for static languages like Java. Thankfully, Manifold comes to the rescue by mitigating these challenges and making GraphQL accessible and usable within Java projects. By removing the rigidness of Java and providing seamless integration with GraphQL, Manifold empowers developers to leverage this modern API style. For example, take this GraphQL file from the Manifold repository:

query MovieQuery($genre: Genre!, $title: String, $releaseDate: Date) {
  movies(genre: $genre, title: $title, releaseDate: $releaseDate) {
    id
    title
    genre
    releaseDate
  }
}

query ReviewQuery($genre: Genre) {
  reviews(genre: $genre) {
    id
    stars
    comment
    movie {
      id
      title
    }
  }
}

mutation ReviewMutation($movie: ID!, $review: ReviewInput!) {
  createReview(movie: $movie, review: $review) {
    id
    stars
    comment
  }
}

extend type Query {
  reviewsByStars(stars: Int) : [Review!]!
}

We can write this sort of fluent code:

var query = MovieQuery.builder(Action).build();
var result = query.request(ENDPOINT).post();
var actionMovies = result.getMovies();
for (var movie : actionMovies) {
  out.println(
    "Title: " + movie.getTitle() + "\n" +
    "Genre: " + movie.getGenre() + "\n" +
    "Year: " + movie.getReleaseDate().getYear() + "\n");
}

None of these objects need to be declared in advance. All we need are the GraphQL files.

Achieving Code Parity With JavaScript Integration

In some cases, regulatory requirements demand identical algorithms in both client and server code. This is common for cases like interest rate calculations, where in the past, we used Swing applications to calculate and display the rate. Since both the backend and front end were in Java, it was simple to have a single algorithm. However, this can be particularly challenging when the client-side implementation relies on JavaScript. Manifold provides a solution by enabling the integration of JavaScript within Java projects. By placing JavaScript files alongside the Java code, developers can invoke JavaScript functions and classes seamlessly using Manifold. Under the hood, Manifold utilizes Rhino to execute JavaScript, ensuring compatibility and code parity across different environments. For example, this JavaScript snippet:

JavaScript
function calculateValue(total, year, rate) {
  var interest = rate / 100 + 1;
  return parseFloat((total * Math.pow(interest, year)).toFixed(4));
}

Can be invoked from Java as if it were a static method:

Java
var interest = Calc.calculateValue(4, 1999, 5);

Preprocessor for Java

While preprocessor-like functionality may seem unnecessary in Java due to its portable nature and JIT compilation, there are scenarios where conditional code becomes essential.
For instance, when building applications that require different behavior in on-premises and cloud environments, configuration alone may not suffice. It would technically work, but it might leave proprietary bytecode in on-site deployments, and that isn’t something we would want. There are workarounds for this, but they are often very heavy-handed for something relatively simple. Manifold addresses this need by offering a preprocessor-like capability. By defining values in build.properties or through environment variables and compiler arguments, developers can conditionally execute specific code paths. This provides flexibility and control without resorting to complex build tricks or platform-specific code. With Manifold, we can write preprocessor code such as: C# #if SERVER_ON_PREM onPremCode(); #elif SERVER_CLOUD cloudCode(); #else #error “Missing definition: SERVER_ON_PREM or SERVER_CLOUD” Reflecting on Manifold's Power Throughout this series, we have explored the many features of Manifold, including type-safe reflection, extension methods, operator overloading, property support, and more. These features demonstrate Manifold's commitment to enhancing Java development and bridging the gap between Java and modern programming paradigms. By leveraging Manifold, developers can achieve cleaner, more expressive code while maintaining the robustness and type safety of the Java language. Manifold is an evolving project with many niche features I didn’t discuss in this series, including the latest one SQL Support. In a current Spring Boot project that I’m developing, I chose to use Manifold over Lombok. My main reasoning was that this is a startup project, so I’m more willing to take risks. Manifold lets me tailor itself to my needs. I don’t need many of the manifold features and, indeed didn’t add all of them. I will probably need to interact with GraphQL, though, and this was a big deciding factor when picking Manifold over Lombok. So far, I am very pleased with the results, and features such as entity beans work splendidly with property annotations. I do miss the Lombok constructor's annotations, though; I hope something like that will eventually make its way into Manifold. Alternatively, if I find the time, I might implement this myself. Final Word As we conclude this journey through Manifold, it's clear that this library offers a rich set of features that elevate Java development to new heights. Whether it's simplifying GraphQL integration, ensuring code parity with JavaScript, or enabling conditional compilation through a preprocessor-like approach, Manifold empowers developers to tackle complex challenges with ease. We hope this series has provided valuable insights and inspired you to explore the possibilities that Manifold brings to your Java projects. Don’t forget to check out the past installments in this series to get the full scope of Manifold’s power.
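As a parting example of the extension methods recalled above, here is roughly what a Manifold extension class looks like. Treat it as a sketch based on the Manifold documentation: the class and method names are made up for illustration, and it assumes the manifold-ext dependency and annotation processor are configured in your build (the annotation package shown matches recent Manifold releases).

Java
// Package convention: "extensions." followed by the fully qualified name of the extended type
package extensions.java.lang.String;

import manifold.ext.rt.api.Extension;
import manifold.ext.rt.api.This;

@Extension
public final class MyStringExtensions {

  // Adds a hypothetical echo(int) method to java.lang.String
  public static String echo(@This String thiz, int times) {
    return thiz.repeat(times);
  }
}

With this class on the classpath, "abc".echo(3) compiles as if echo were declared on String itself.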
Integrating Java applications with NoSQL databases has become increasingly important in modern software development. To address the growing demands of this realm, Eclipse JNoSQL, a comprehensive framework for Java developers, has recently unveiled its highly anticipated 1.0.0 version. Packed with many new features and bug fixes, this update aims to streamline the integration process between Java and NoSQL, offering developers a more efficient and seamless experience. NoSQL databases are popular due to their flexible data models, scalability, and high performance. However, integrating these databases with Java applications involves intricate configuration and complex coding practices. Eclipse JNoSQL has been specifically designed to alleviate these challenges, providing developers with a powerful toolkit to simplify the interaction with various NoSQL databases. The release of Eclipse JNoSQL 1.0.0 marks a significant milestone in the framework's evolution, introducing a host of new features that enhance its usability and versatility. With this update, developers can use various advanced functionalities to optimize their Java and NoSQL integration workflow. From improved data mapping and query capabilities to enhanced support for different NoSQL databases, Eclipse JNoSQL empowers developers to harness the full potential of NoSQL technology within their Java applications. In addition to the exciting new features, the 1.0.0 version of Eclipse JNoSQL brings an array of bug fixes, addressing various issues reported by the developer community. By rectifying these bugs and enhancing the overall stability of the framework, Eclipse JNoSQL strives to provide developers with a more reliable and efficient development environment. This article dives into the details of the Eclipse JNoSQL 1.0.0 release, exploring the key features that make it a compelling choice for Java developers seeking simplified integration with NoSQL databases. We will examine the benefits of the new functionalities, highlight the bug fixes that enhance the framework's stability, and discuss the potential impact of this release on the Java and NoSQL development landscape. As the Java and NoSQL ecosystems evolve, Eclipse JNoSQL remains at the forefront of providing developers with practical tools and frameworks to streamline their workflows. The 1.0.0 version represents a significant step in achieving seamless integration between Java and NoSQL, empowering developers to build robust and scalable applications more efficiently. Features of New Eclipse JNoSQL The latest Eclipse JNoSQL, 1.0.0, introduces exciting features that enhance the framework's capabilities and simplify the integration between Java and NoSQL databases: More straightforward database configuration: One of the notable enhancements in Eclipse JNoSQL 1.0.0 is the introduction of simplified database configuration. With the new version, developers can now easily configure and connect to NoSQL databases without the need for complex and time-consuming setup procedures. This feature significantly reduces the initial setup overhead and allows developers to focus more on the core aspects of their application development. Enhanced Java Record support: Eclipse JNoSQL 1.0.0 brings improved support for Java Records, a feature introduced in Java 14. Java Records provide a concise and convenient way to define immutable data objects. 
With the updated version of Eclipse JNoSQL, developers can seamlessly map Java Records to NoSQL data structures, enabling efficient and effortless data handling. This enhancement improves code readability, maintainability, and overall development productivity. Several bug fixes: Alongside introducing new features, Eclipse JNoSQL 1.0.0 addresses several bugs reported by the developer community. Enhanced repository interfaces: Eclipse JNoSQL 1.0.0 enhances the repository interfaces, which bridge Java applications and NoSQL databases. These interfaces provide a high-level abstraction for developers to interact with the database, simplifying data retrieval, storage, and query operations. The updated repository interfaces in Eclipse JNoSQL offer improved functionality, enabling developers to perform database operations with greater ease and flexibility. These new features in Eclipse JNoSQL 1.0.0 significantly contribute to making the integration between Java and NoSQL smoother and more efficient. By streamlining database configuration, enhancing Java Record support, addressing bug fixes, and improving repository interfaces, Eclipse JNoSQL empowers developers to leverage the full potential of NoSQL databases within their Java applications. With these advancements, developers can now focus more on building innovative solutions rather than grappling with the complexities of database integration.

Show Me the Code

To demonstrate how the new version works, we will create a MongoDB application sample where we will handle pets. The first step is running a MongoDB instance. We can either use MongoDB Atlas in the cloud or run it locally with Docker. With Docker, you can start it using the following command:

Shell
docker run -d --name mongodb-instance -p 27017:27017 mongo

Once we have a MongoDB instance running, the next step is creating a Maven project. The simplicity of database configuration is one feature we will explore today. To use MongoDB, we need to add the database dependency.

XML
<dependency>
    <groupId>org.eclipse.jnosql.databases</groupId>
    <artifactId>jnosql-mongodb</artifactId>
    <version>1.0.0</version>
</dependency>

If you want to use another of the available databases, check the Eclipse JNoSQL databases documentation for more information. We also provide demos on Java SE and Web profiles. Besides this dependency, you must have CDI, JSON-B, and Eclipse MicroProfile Config as minimum requirements, taken from a Jakarta EE and Eclipse MicroProfile vendor. The configuration comes down to two properties, the database name and the MongoDB host, which we can override in production to take advantage of The Twelve-Factor App:

Properties files
jnosql.document.database=pets
jnosql.mongodb.host=localhost:27017

The next step is creating the entity types for pets: two record structures and a sealed interface.

Java
public sealed interface Pet permits Cat, Dog {
    String name();
    String breed();
}

@Entity
public record Dog(@Id String id, @Column String name, @Column String breed) implements Pet {
}

@Entity
public record Cat(@Id String id, @Column String name, @Column String breed) implements Pet {
}

With this configuration, we can start to use the Java and MongoDB integration.

Java
@Inject
Template template;

var faker = new Faker();
var cat = Cat.create(faker);
template.insert(cat);
Optional<Cat> optional = template.find(Cat.class, cat.id());
System.out.println("The result: " + optional);

But you can do more and explore the repository features.
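Before moving on, a note on the snippet above: Cat.create(faker) is a convenience factory that the article does not show. A hypothetical version, assuming the Java Faker library's cat provider and a random ID, could look like this:

Java
// Hypothetical factory method declared inside the Cat record for the demo
public static Cat create(Faker faker) {
    return new Cat(UUID.randomUUID().toString(), // generate an id for the new document
            faker.cat().name(),                  // random cat name from the faker provider
            faker.cat().breed());                // random breed
}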
As a new capability, you can create reusable query interfaces, such as queries by name and breed, and plug them into your repository interfaces. Furthermore, you can compose several such components to build queries and other database operations.

Java
public interface PetQueries<T extends Pet> {

    List<T> findByName(String name);

    List<T> findByBreed(String breed);
}

@Repository
public interface DogRepository extends PageableRepository<Dog, String>, PetQueries<Dog> {

    default Dog register(Dog dog, Event<Pet> event) {
        event.fire(dog);
        return this.save(dog);
    }
}

We can also explore default methods for decorating, aliasing, and enhancing database operations. We will create a PetQueries interface to centralize queries by name and breed, and in DogRepository, we will include an event for dogs, exploring CDI.

Java
@Inject
DogRepository repository;

@Inject
Event<Pet> event;

var faker = new Faker();
var dog = Dog.create(faker);
System.out.println("The register result: " + repository.register(dog, event));
var optional = repository.findByBreed(dog.breed());
System.out.println("The result: " + optional);

Conclusion

Eclipse JNoSQL 1.0.0 revolutionizes the Java and NoSQL integration landscape by introducing new features and addressing crucial bug fixes. This version provides developers with an optimized workflow, enabling them to integrate NoSQL databases into their Java applications seamlessly. With the framework's enhanced usability, productivity gains, and improved stability, Eclipse JNoSQL stands as a valuable tool for Java developers seeking to harness the power of NoSQL databases in their software projects. As the Java and NoSQL ecosystems evolve, Eclipse JNoSQL remains at the forefront of empowering developers to build robust, scalable, and efficient applications.

References

Source code
JNoSQL source code
NoSQL spec
In today's rapidly evolving software development landscape, architects and IT leaders face the critical challenge of designing systems that can adapt, scale, and evolve effectively. As modern architectural practices emphasize the decoupling of domains to achieve a decentralized architecture, it becomes increasingly clear that the topology of development teams and their interactions plays a vital role in the success of architectural design. The interplay between team topologies and software architecture is an essential factor that architects must consider. A well-structured team topology sets the stage for efficient collaboration, effective communication, and streamlined delivery of software solutions. Without careful consideration of team dynamics and organizational structure, even the most well-conceived architectural designs may encounter obstacles and fall short of their full potential. Conway's Law, formulated by Melvin Conway in 1968, states that the structure of a software system will mimic the communication structures of the organization that produces it. This principle highlights the critical relationship between team organization and software architecture. When development teams are siloed, lack clear communication channels, or have fragmented ownership, the resulting software architecture tends to reflect these inefficiencies and complexities. Building upon this understanding, the book "Team Topologies" by Matthew Skelton and Manuel Pais provides valuable insights into the principles and best practices for shaping team structures that foster successful architectural outcomes. Let's explore the significant insights that have a direct impact on software architecture and software development efficiency: Leveraging Conway's Law as a Strategic Advantage "Team Topologies" suggests leveraging Conway's Law as a strategic advantage in software architecture. The book proposes that architects can encourage or discourage certain types of designs by shaping the organization and team structures. As Ruth Malan points out, "If we have managers deciding which services will be built, by which teams, we implicitly have managers deciding on the system architecture." This reinforces the critical role of architects and engineering professionals in actively structuring team topologies and their communications and responsibilities. Unfortunately, in many companies, team topologies are determined without adequately considering the expertise of architects and engineering professionals. This lack of involvement can lead to architectural misalignments and inefficiencies. To ensure successful architectural outcomes, it is crucial for organizations to actively involve architects and engineering professionals in decisions related to team topologies. Their knowledge and insights can help shape team structures that align with architectural goals and foster effective communication and collaboration. Clear Ownership of Software Components Every part of the software system should be owned by exactly one team. This ownership ensures accountability and responsibility for designing, developing, and maintaining specific components. When ownership is unclear or fragmented, it can lead to coordination challenges, a lack of ownership-driven innovation, and architectural complexities. Assigning clear ownership to teams can create a sense of ownership, foster better collaboration, and ensure a more efficient and effective software system design. 
Clear ownership also enables teams to deeply understand their assigned components, making them better equipped to make informed architectural decisions and drive improvements. It promotes autonomy and a sense of ownership, which are essential for maintaining and evolving the software system over time. Different Types of Teams in Team Topologies The book discusses different types of teams and their implications for software development. One key consideration is the ownership and responsibility of an entire end-to-end user experience involving various technology stacks, such as mobile apps, cloud processing, and embedded software for a device. The book highlights the challenges of assigning a stream-aligned team to own the entire end-to-end user experience across different tech stacks. This scenario presents difficulties in finding the required skill mix and imposes a high cognitive load and context-switching burden on the team members. Making changes that span these diverse technology domains can result in suboptimal technical and architectural outcomes, leading to increased technical debt and potentially a poor user experience for customers. Organizations should consider alternative team structures to address these challenges. For instance, they may opt for a platform team responsible for providing shared capabilities and APIs across the technology stacks, while specialized feature teams focus on specific components or layers. This approach allows for more focused expertise, reduces cognitive load, and promotes architectural clarity and efficiency. In addition to these considerations, the book also introduces other types of teams that play distinct roles in software development: Stream-Aligned Teams: Stream-aligned teams focus on delivering end-to-end customer value within specific business domains or streams. They have a clear focus on a particular set of user experiences and work closely with product owners and stakeholders to ensure the delivery of valuable features. Enabling Teams: Enabling teams provide technical expertise, support, and guidance to streamline development. They focus on creating and maintaining platforms, infrastructure, and tooling that empower stream-aligned teams. Complicated-Subsystem Teams: These teams specialize in managing and improving complex subsystems or components within the software system. They ensure the reliability and continuous enhancement of critical parts of the architecture. Platform Teams: Platform teams focus on creating and maintaining shared capabilities and services that enable other teams to deliver software more efficiently. They provide standardized APIs, tooling, and infrastructure to enhance collaboration and consistency. Effective Team Communication and Collaboration In addition to team structures, establishing a well-defined communication framework is crucial for successful software development. Effective communication channels facilitate collaboration, streamline information flow, and promote architectural clarity. Several communication models and practices can significantly impact team collaboration: Collaboration as a Service: Teams treat each other's capabilities as services, providing well-defined interfaces and services for seamless integration and collaboration. Platform as a Product: Adopting a product mindset, the platform team delivers valuable services and functionalities to other teams, enabling smooth coordination and promoting architectural clarity. 
"X-as-a-Service": Teams treat another team's component or service as an external service, similar to consuming services from third-party providers. Well-defined interfaces and agreements ensure smooth integration and promote modularization. End-to-End Responsibility and Team Autonomy Granting teams end-to-end responsibility in the software development cycle is crucial for efficient and high-quality outcomes. Teams should have the autonomy to handle the entire lifecycle of a software product, from development to deployment and maintenance. This includes the ability to make decisions about architectural design, execute automated tests, and deploy software changes. Organizations foster a culture of accountability, innovation, and continuous improvement by allowing teams to take ownership of the entire process. Teams can quickly adapt to changing requirements, optimize workflows, and deliver software solutions that meet customer needs. Agile Budgeting for Development Teams The traditional model of budget allocation by project development teams can have significant negative impacts. It often leads to inefficiencies, lack of specialization, and limited adaptability. However, by adopting an Agile and product-oriented approach to resource allocation, teams can overcome these challenges and realize substantial benefits for architectural design and business outcomes. The traditional project-based budgeting model often results in teams working across multiple projects, lacking a defined domain and specialized expertise. This leads to coordination challenges, reduced productivity, and architectural complexities. It hampers the ability to make informed architectural decisions and inhibits ownership-driven innovation. Additionally, the traditional model may lead to delays in delivery, increased technical debt, and a poor user experience for customers. On the other hand, an Agile and product-oriented resource allocation model allows teams to specialize in their respective areas, mitigating the negative impacts of traditional budgeting. Specialized teams can focus on specific domains, develop deep expertise, and take ownership of their components. This fosters collaboration, improves productivity, and streamlines architectural design and development processes. Teams can make informed decisions, drive continuous improvements, and deliver high-quality software solutions more efficiently. From a business perspective, Agile and product-oriented resource allocation brings several benefits. It promotes innovation and flexibility, enabling teams to adapt to market changes and customer demands. By delivering value more rapidly, businesses can seize opportunities, stay ahead of competitors, and drive business growth. The Agile approach allows for incremental funding and adjustments based on project progress and evolving architectural needs, ensuring efficient resource allocation and avoiding unnecessary context switching. Conclusion Adopting appropriate team topologies, clear component ownership, effective communication, and Agile resource allocation is crucial for achieving efficient software architecture. These practices empower development teams and enhance business agility, innovation, and the ability to deliver value rapidly. By embracing these principles and practices, organizations can position themselves for success in a dynamic and competitive software development landscape.
Blockchain is an emerging technology field, and several companies have dedicated research teams exploring its wide range of applications. One such field that could take advantage of this technology is risk assessment. Blockchain technology can help in creating a secure and decentralized system that can be used to manage risks. These assessments, if performed, have the potential to be considered more accurate and trustworthy than any external audits. Risk assessment is an important activity that is often listed as part of an organization's security strategy, policies, and procedures. It starts with the analysis of the company's various assets, resulting in the identification of potential risks and vulnerabilities. The likelihood and impact of the identified risks are evaluated. The security team then develops strategies to mitigate or manage them. The risk assessment process requires extensive collaboration with multiple stakeholders and is both time-consuming and resource-intensive. Blockchain technology promises new ways to conduct risk assessments; it helps to create a distributed, transparent, and tamper-proof system for assessing risks. Not only can this standardize and streamline the process, but it can also improve the accuracy and reliability of the results. A point to note is that blockchain can only increase accuracy and make the process more efficient. It cannot replace human judgment and auditing expertise. It can enhance the auditing process by ensuring the integrity of transactions' and events' records. To understand how blockchain can help in this area, it is important to understand the technicalities behind this technology.

Decentralized Data Storage

This means that the data is stored across a distributed network of nodes instead of a centralized database or server. Decentralized data storage eliminates the chances of a single point of failure, along with reducing the risk of data loss or corruption. One of the key advantages of using blockchain technology is that it allows for decentralized data storage. During risk assessments, the information collected can be stored on the blockchain, making it more secure and less vulnerable to attack. Additionally, the distributed nature of blockchain technology means that multiple stakeholders can access and update the data, improving collaboration and ensuring that everyone is working from the same information.

Immutable Audit Trail

This means that every transaction that occurs on a blockchain is recorded and verified by the network of nodes. Once the transaction is recorded, no one can alter this data or delete it, ensuring the permanent and tamper-proof recording of every network activity. For risk assessments, potential risks and vulnerabilities can hence be recorded and made tamper-proof. This enhances transparency and introduces accountability; every stakeholder can have the capability to review the audit log. Auditors can therefore rely on this information and the risk assessment process without much scrutiny.

Smart Contracts

These are self-executing contracts that are coded using programming languages and typically run on a blockchain network. This can help automate business processes like risk assessment. Using smart contracts, risk assessments can be managed by an automated, secure, and transparent process. They are designed to operate in a decentralized environment, where trust is established using cryptography and consensus mechanisms.
Once the terms of the contract have been met, the smart contract automatically executes, removing the need for intermediaries or other third parties. One example can be an addition of a new asset. Using smart contracts, automatic tasks can be assigned to various stakeholders who can then provide risk assessment results. These results can then be recorded, and findings can be logged to track. This will ensure error reduction and a standardized, scalable, and reliable process. So, the contracts can be designed to automatically trigger specific actions based on pre-defined criteria, such as alerts or notifications for identified risks. Tokenization In the blockchain world, tokenization refers to converting a physical or digital asset into a token. In a risk assessment process, a token could be used to represent a specific risk or vulnerability in an organization's environment. Any risk or vulnerability status, including actions to mitigate or manage it, can be done using this token. Hence providing better transparency and accountability due to increased visibility across stakeholders. Distributed Ledger Once the analysis is completed, the risk assessment data needs to be safely stored and distributed. This can be done using distributed ledger architecture of blockchain that provides a decentralized platform. All the nodes within the network will have the same information, which means that even if one node is corrupted, it will be extremely difficult for the hackers to challenge the integrity of this data. This is because this database is shared and synchronized across multiple network nodes or computers. The data is stored in blocks which in turn are records of multiple transactions. They could neither be modified nor be blocked once it becomes a part of the ledger, hence making it tamper-proof. This is a secure way of record-keeping with no single point of failure. Consensus Mechanisms This is a feature used by distributed ledgers and relies on a consensus algorithm that uses rules to decide how will the nodes reach consensus on the ledger state. This helps to maintain blockchain integrity. To check the validity of transactions and the state of the blockchain, the nodes reach a consensus hence reducing the fraud risk. There are different consensus mechanisms that can be used in a blockchain: Proof of Work (PoW): Used by Bitcoin, PoW prompts miners to solve complex mathematical problems. If a solution is achieved, a new block is added to the chain, and miners get new coins. Proof of Stake (PoS): Depending on the cryptocurrency, validators can create new blocks or put up some of their own coins as collateral. If any malicious activity is detected, they lose their collateral. PoS is less energy intensive than PoW but leads to centralization if validators with the most coins are the ones chosen to create new blocks. Delegated Proof of Stake (DPoS): This is created to overcome the risk of PoS. Here coin holders vote for delegates responsible for creating new blocks. The delegates are incentivized to act in the best interests of the network since they can be voted out if they act maliciously. However, here too, centralization can happen if a small number of delegates control most of the voting power. Practical Byzantine Fault Tolerance (PBFT): If the node is trusted, then PBFT is used. In this permission blockchain, random nodes are chosen to propose new blocks. They then vote to decide whether to add the block to the chain or not. 
Only if the majority wins, which is usually two-thirds, is a block added. This is the fastest among all four consensus mechanisms, but it requires high trust in the nodes that are a part of the network. Cryptography This ensures that the data stored is secure, thus ensuring the confidentiality and integrity of data. The use of cryptography in blockchain also ensures the authentication of users and devices. For instance, the use of hashing, which is a process of converting the data into a fixed string size, ensures the integrity of data on the blocks. Scalability A major issue with blockchain is scalability because of the impact on performance with an increase in blockchain transactions. Both vertical and horizontal scaling could be useful. Processes like sharding and off-chain transactions could overcome these issues. Different solutions follow different approaches; Bitcoin uses Segwit, which increases the block size, and Ethereum uses the PoS consensus mechanism. Any risk assessment intends to ensure their digital assets' security. With sophisticated cyber threats, traditional risk assessment methods need to be replaced with advanced technologies like blockchain. It can eliminate the need for intermediaries and reduce fraud risk and human error. With its decentralized and distributed architecture, blockchain offers a more secure and transparent way of conducting risk assessments, reducing the possibility of data breaches, cyber-attacks, and other security threats. However, blockchain also has its limitations, as its implementation in risk assessment requires a high level of technical expertise and investment. The regulatory and legal frameworks around blockchain are still evolving, which further adds to the complexity. Risk assessment using blockchain technology is an ongoing research topic. As blockchain technology continues to mature, it can transform the risk assessment approach. It can make it more secure, trustworthy, and cost-effective.
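To make the earlier point about hashing concrete, the sketch below computes a SHA-256 digest of a risk-assessment record. Changing even one character of the record produces a completely different digest, which is what makes tampering evident in an audit trail. This is a standalone, illustrative example (the record format is invented), not code from any particular blockchain platform.

Java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class RiskRecordHash {

    public static void main(String[] args) throws Exception {
        // An invented risk-assessment entry; on a blockchain it would live inside a block
        String record = "asset=payment-api;risk=SQL injection;likelihood=medium;impact=high";

        // SHA-256 converts the record into a fixed-size, 32-byte digest
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(record.getBytes(StandardCharsets.UTF_8));

        // Any change to 'record' yields a completely different digest (HexFormat requires Java 17+)
        System.out.println(HexFormat.of().formatHex(hash));
    }
}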
Docker technology has revolutionized the infrastructure management landscape in such a way that Docker has now become a synonym for containers. It is important to understand that all Docker containers are containers, but not all containers are Docker containers. While Docker is the most commonly used container technology, there are several other alternatives to Docker. In this blog, we will explore Docker alternatives for your SaaS application.

What Is Docker?

Docker is an application containerization platform that is quite popular in IT circles. This open-source software enables developers to easily package applications along with their dependencies, OS, libraries, and other run-time-related resources in containers and automatically deploy them on any infrastructure. With cloud-native architecture and multi-cloud environments becoming popular choices for most organizations, Docker is the most convenient choice for building, sharing, deploying, and managing containers using APIs and simple commands in these environments.

How Does It Work?

Docker was initially created for the Linux platform. However, it now supports Apple OS X and Windows environments. Unlike virtual machines that encapsulate the entire OS, Docker isolates the resources in the OS kernel, enabling you to run multiple containers on the same operating system. Docker Engine is the main component of the Docker ecosystem. The Docker engine creates a server-side daemon and a client-side CLI. The server-side daemon hosts containers, images, data volumes, and networks, while the client-side CLI enables you to communicate with the server using APIs. The build instructions for Docker images are defined in files called Dockerfiles.

What Are Docker's Features and Benefits?

Docker offers multiple benefits to organizations. Here are some of the key benefits offered by the tool:

Increased Productivity
Seamless Movement Across Infrastructures
Lightweight Containers
Container Creation Automation
Optimize Costs
Extensive Community Support

Increased Productivity

Docker containers are easy to build, deploy, and manage compared to virtual machines. They complement the cloud-native architecture and DevOps-based CI/CD pipelines, allowing developers to deliver quality software faster.

Seamless Movement Across Infrastructures

Contrary to Linux containers that use machine-specific configurations, Docker containers are machine-agnostic, platform-agnostic, and OS-agnostic. As such, they are easily portable across any infrastructure.

Lightweight Containers

Each Docker container contains a single process, making it extremely lightweight. At the same time, it allows you to update the app granularly. You can edit/modify a single process without taking down the application.

Container Creation Automation

Docker can take your application source code and automatically build a container. It can also take an existing container as a base image template and recreate containers, enabling you to reuse containers. It also comes with a versioning mechanism, meaning each Docker image can be easily rolled back.

Optimize Costs

The ability to run more code on each server allows you to increase productivity with minimal costs. Optimized utilization of resources ultimately results in cost savings. In addition, standardized operations allow automation and save time and human resources, saving costs.

Extensive Community Support

Docker enjoys large and vibrant community support. You enjoy the luxury of thousands of user-uploaded containers in the open-source registry instead of spending time reinventing the wheel.
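To make the client/daemon workflow described above concrete, here is a short, illustrative session; it assumes a local Docker installation, and the image and container names are only examples.

Shell
# Each CLI command below is sent to the Docker daemon over its API
$ docker pull nginx:alpine                           # fetch an image from the registry
$ docker run -d --name web -p 8080:80 nginx:alpine   # start a container from that image
$ docker ps                                          # the daemon lists running containers
$ docker logs web                                    # inspect the container's output
$ docker rm -f web                                   # stop and remove the container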
Why Is Microservices Better Than Monolith Architecture? Microservices architecture has become the mainstream architecture in recent times. Before understanding the importance of Microservices, it is important to know the downsides of a monolith architecture. Traditionally organizations used a monolithic architecture to build applications. This architecture uses a waterfall model for software development wherein the software is designed and developed first. The code is then sent to the QA team for testing purposes. When bugs are found, the code is sent back to the developers. After successful testing, the code is pushed to a testing environment and then to a live production environment. You must repeat the entire process for any code changes or updates. When you look at monolithic software from a logical perspective, you’ll find 3 layers: the front-end layer, the business layer, and the data layer. When a user makes a request, the business layer runs the business logic, the data layer manages the data, and the presentation layer displays it to the user. Code related to all 3 layers is maintained in a single codebase. Everyone commits changes to the same codebase. As the codebase grows, the complexity of managing it grows as well. When a developer is working on a single feature, he has to pull out the entire code to the local machine to make it work. Moreover, for every change, all artifacts have to be generated. The biggest problem is seamless coordination between teams. Monolithic architecture is not flexible, scalable, and is expensive. Microservices architecture solves all these challenges. Microservices architecture facilitates a cloud-native software development approach wherein the software is developed as loosely-coupled, independently deployable microservices that communicate with each other via APIs. Each service comes with its technology stack that can be categorized by business capability, allowing you to update or scale components with ease independently. Microservices uses a cloud-native architecture which is highly suitable for DevOps-based continuous delivery. As each app runs inside a container, you can easily make changes to the app inside the container without distributing the underlying infrastructure, gaining 99.99% uptime. CI/CD environments and the ability to easily move apps between various environments bring faster time to market. It also gives the flexibility to monitor market trends and quickly make changes to your application to always stay competitive. As each app runs in a separate container, developers have the luxury of choosing a diverse technology stack to build quality software instead of getting stuck with a specialized tool for a specific function. It also optimizes costs. Microservices and Docker While microservices architecture offers multiple benefits to organizations, it comes with certain challenges. Firstly, tracking services that are distributed across multiple hosts is a challenge. Secondly, as the microservices architecture scales, the number of services grows. As such, you need to allocate resources for each small host carefully. Moreover, certain services are so small that they don’t fully utilize the AWS EC2 instance. So, wasted resources can increase your overall costs. Thirdly, the microservices architecture comprises multiple services that are developed using multiple programming languages, technologies, and frameworks. 
When it comes to deploying microservices code, different sets of libraries and frameworks increase the complexity and costs. Docker technology solves all these challenges while delivering more. Docker enables you to package each microservice into a separate container. You can run multiple containers for a single instance, eliminating overprovisioning issues. Docker helps you abstract data storage by hosting data on a container and referencing it from other containers. Another advantage of this approach is persistent data storage, which is stored separately even after you destroy the container. The same approach can be applied to programming languages. You can group libraries and frameworks required for a language, package them inside a container, and link them to the required containers to efficiently manage cross-platform solutions. Using a log monitoring tool, you can monitor logs of individual containers to get clear insights into data flow and app performance. Why Do Some IT Managers Look For Docker Alternatives? While Docker is the most popular containerization technology, few IT managers are looking for Docker alternatives. Here are some reasons for them to do so. Docker is not easy to use. There is a steep learning curve. There are several issues that administrators have to handle. For instance, application performance monitoring doesn’t come out of the box. While Docker offers basic statistics, you need to integrate 3rd party tools for this purpose. Persistent data storage is not straightforward, so you must move data outside the container and securely store it. Container orchestration requires considerable expertise in configuring and managing an orchestration tool such as Docker Swarm, Kubernetes, or Apache Mesos. Docker containers require more layers to be secured when compared with a traditional stack. All these factors add up to the administrative burden. Without properly understanding the tool, running Docker becomes complex and expensive. However, the benefits of Docker outweigh these minor disadvantages. Moreover, these challenges will also greet you when you use alternatives to Docker. The time and effort spent in understanding Docker will reward you big time in the long run. In case you are still curious about alternatives to Docker, here are the top 10 Docker alternatives for your SaaS application: Docker Alternatives 1: Serverless Architecture Serverless architecture is a popular alternative to Docker containerization technology. As the name points out, a serverless architecture eliminates the need to manage a server or the underlying infrastructure to run an application. It doesn’t mean that servers are not needed but the cloud vendor handles that job. Developers can simply write an application code, package it and deploy it on any platform. They can choose to buy specific backend services needed for the app and deploy them on the required platform. Serverless architecture removes infrastructure management burdens or Docker/Kubernetes configuration complexities, scalability, and upgrades to deliver faster time to market. The ability to trigger events makes it a good choice for sequenced workflows and CI/CD pipelines. One of the biggest advantages of serverless computing is that you can extend applications beyond the cloud provider capacities. The flexibility to purchase specific functionalities required for the application significantly reduces costs. 
For instance, when you run docker containers and experience unpredictable traffic spikes, you’ll have to increase the ECS environment capacity. However, you’ll be paying more for the extra service containers and container instances. With a SaaS business, cost optimization is always a priority. When you implement serverless architecture using AWS Lambda, you will only scale functions that are required at the application runtime and not the entire infrastructure. That way, you can optimize costs. Moreover, it streamlines the deployment process allowing you to deploy multiple services without the configuration hassles. As you can run code from anywhere, you can use the nearest server to reduce latency. On the downside, application troubleshooting gets complex as the application grows, as you don’t know what’s happening inside. For this reason, serverless is termed as a black box technology. It is important to design the right app strategy. Otherwise, you will pay more for the expensive overhead human resource costs. Autodesk, Droplr, PhotoVogue, and AbstractAI are a few examples of companies using a serverless model. Docker Alternatives 2: Virtual Machines (VMs) from VMware Deploying virtual machines from VMware is another alternative for Docker. VMware is the leader in the virtualization segment. While Docker abstracts resources at the OS level, VMware virtualizes the hardware layer. One of the important offerings of VMware is the vSphere suite that contains different tools for facilitating cloud computing virtualization OS. vSphere uses ESXi, which is the hypervisor that enables multiple OSs to run on a single host. So, each OS runs with its dedicated resources. When it comes to containerization, VMware virtualizes the hardware along with underlying resources which means they are not fully isolated. Compared to Docker, VMware VMs are more resource-intensive and not lightweight and portable. For apps that require a full server, VMware works best. Though Docker apps are lightweight and run faster, VMware is quickly catching up. The current ESXi versions equal or outperform bare-metal machines. There are multiple options to use VMware for containerization tasks. For instance, you can install VMware vSphere ESXi hypervisor and then install any OS on top of it. Photon is an open-source, container-focused OS offered by VMware. It is optimized for cloud platforms such as Google Compute Engine and Amazon Elastic Compute. It offers a lifecycle management system called tdnf that is package-based and yum-compatible. Photon apps are lightweight, boot faster and consume a lesser footprint. Alternatively, you can run any Linux distributions on top of ESXi and run containers inside the OS. Docker containers contain more layers to be secured compared to VMware virtual machines. VMware is a good choice for environments requiring high security and persistent storage. VMware VMs are best suited for machine virtualization in an IaaS environment. While VMware VMs can be used as alternatives to Docker, they are not competing technologies and complement each other. To get the best of both worlds, you can run Docker containers inside a VMware virtual machine, making it ultra-lightweight and highly portable. Docker Alternatives 3: Monolithic Instances From AWS, Azure, and GCP Another alternative to Docker is to deploy your monolithic applications using AWS instances or Azure and GCP VMs. When you implement an AWS EC2 instance, it will install the basic components of the OS and other required packages. 
You can use an Amazon Machine Image (AMI) to launch EC2 instances. An AMI contains the information required to launch an instance, and developers specify in AWS which AMI to use. There are preconfigured AMIs for specific use cases. You can use Amazon ECS for orchestration purposes. AMIs are not lightweight when compared with Docker containers. Docker Alternatives 4: Apache Mesos Apache Mesos is an open-source container and data center management software developed by the Apache Software Foundation. It was formerly known as Nexus. Mesos is written in C++. It acts as an abstraction tool separating virtual resources from the physical hardware and provides resources to apps running on it. You can run apps such as Kubernetes, Elasticsearch, Hadoop, Spark, etc., on top of Mesos. Mesos was created as a cluster management tool similar to Tupperware and Borg but differs in that it is open-source. It uses a modular architecture. An important feature of Mesos is that it abstracts data center resources while grouping them into a single pool, enabling administrators to efficiently manage resource allocation tasks while delivering a consistent and superior user experience. It offers higher extensibility wherein you can add new applications and technologies without disturbing the clusters. It comes with a self-healing and fault-tolerant environment powered by ZooKeeper. It reduces footprint and optimizes resources by allowing you to run diverse workloads on the same infrastructure. For instance, you can run traditional applications, distributed data systems, or stateless microservices on the same infrastructure while individually managing workloads. Apache Mesos allows you to run a diverse set of workloads on top of it, including container orchestration. For container orchestration, Mesos uses an orchestration framework called Marathon. It can easily run and manage mission-critical workloads, which makes it a favorite for enterprise architecture. Mesos doesn't support service discovery on its own. However, you can use apps running on Mesos, such as Kubernetes, for this purpose. It is best suited for data center environments that involve the complex configuration of several Kubernetes clusters. It is categorized as a cluster management tool and enables organizations to build, run, and manage resource-efficient distributed systems. Mesos allows you to isolate tasks within Linux containers and rapidly scales to thousands of nodes. Easy scaling is what differentiates it from Docker. If you want to run a mission-critical and diverse set of workloads on a reliable platform along with portability across clouds and data centers, Mesos is a good choice. Twitter, Uber, Netflix, and Apple (Siri) are some of the popular enterprises that use Apache Mesos. Docker Alternatives 5: Cloud Foundry Container Technology Cloud Foundry is an open-source Platform-as-a-Service (PaaS) offering that the Cloud Foundry Foundation manages. The tool was written in Ruby, Go, and Java by VMware engineers and released in 2011. Cloud Foundry is popular for its continuous delivery support, facilitating product life cycle management. Its container-based architecture is famous for multi-cloud environments as it facilitates the deployment of containers on any platform while allowing you to seamlessly move workloads without disturbing the application. The key feature of Cloud Foundry is its ease of use, which allows rapid prototyping.
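That ease of use shows up in the deployment workflow itself: deploying an application is typically a login followed by a single push. A rough sketch (the API endpoint and application name below are placeholders):
Shell
$ cf login -a https://api.example.com   # authenticate against your Cloud Foundry API endpoint
$ cf push my-app                        # upload, stage, and run the app; a buildpack is auto-detected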
It allows you to write and edit code from any local IDE and deploy containerized apps to the cloud. Cloud Foundry picks the right buildpack and automatically configures it for simple apps. The tool limits the number of open ports for increased security. It supports dynamic routing for high performance. Application health monitoring and service discovery come out of the box. Cloud Foundry uses its own container format called Garden and a container orchestration engine called Diego. However, as Docker gained popularity and the majority of users started using Docker containers, Cloud Foundry had to support Docker. To do so, it encapsulated Docker containers in the Garden image format. However, moving those containers to other orchestration engines was not easy. Another challenge for Cloud Foundry came in the form of Kubernetes. While Cloud Foundry supported stateless applications, Kubernetes was flexible enough to support stateful and stateless applications. Bowing down to user preferences, Cloud Foundry replaced its orchestration engine Diego with Kubernetes. Without its own container runtime and orchestration platform, the Cloud Foundry container ecosystem became less relevant. The failure of Cloud Foundry emphasizes the importance of making an organization future-proof. It also emphasizes the importance of using Docker and Kubernetes solutions. Docker Alternatives 6: Rkt from CoreOS Rkt from CoreOS is a popular alternative to Docker container technology. Rkt was introduced in 2014 as an interoperable, open-source, and secure containerization technology. It was formerly known as CoreOS Rocket. Rkt comes with a robust ecosystem and offers end-to-end container support, making it a strong contender in the containerization segment. The initial releases of Docker ran as root, enabling hackers to gain super-user privileges when the system was compromised. Rkt was designed with security and fine-grained control in mind. Rkt uses the appc container format and can be easily integrated with other solutions. It uses pods for container configuration and a gRPC-based API service. Kubernetes support comes out of the box. You can visually manage containers. Rkt offers a comprehensive container technology ecosystem. However, there is a steep learning curve. The community support is good. While the tool is open-source and free, Rkt charges for support. For instance, Kubernetes support is $3000 for 10 servers. Verizon, Salesforce.com, CA Technologies, and Viacom are prominent enterprises using CoreOS Rkt. Though Rkt quickly became popular, its future is now unclear. In 2018, Red Hat acquired CoreOS. Since then, Rkt lost its direction. Adding to its woes is the withdrawal of support by the Cloud Native Computing Foundation (CNCF) in 2019. The Rkt GitHub page shows that the project has ended. Being open-source, anyone can fork it and continue development on their own. Docker Alternatives 7: LXD LXD is a container and virtual machine manager that is powered by the Linux Containers technology (LXC) and is managed by Canonical Ltd., a UK-based software company. It enables administrators to deliver a unified and superior user experience across the Linux ecosystem of VMs and containers. LXD is written in Go and uses a privileged daemon that can be accessed from the CLI via REST APIs using simple commands. LXD focuses on OS-level virtualization, allowing you to run multiple processes inside a single system container. For instance, you can run Linux, Apache, MySQL, and PHP servers inside a single container.
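As an illustration, launching such a system container and installing a LAMP-style stack inside it can be as simple as the following sketch (the image alias and package list are only examples):
Shell
$ lxc launch ubuntu:22.04 lamp            # create and start a system container from an Ubuntu image
$ lxc exec lamp -- apt-get update
$ lxc exec lamp -- apt-get install -y apache2 mysql-server php
$ lxc list                                # show the container and its network address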
You can also run nested Docker containers. As it runs VM-like containers that start quickly, it is cost-effective compared to regular VMs. LXD is more like a standalone OS that offers the benefits of both containers and VMs. As it uses a full OS image with network and storage dependencies, it is less portable when compared with Docker. LXD offers limited options when it comes to interoperability. You can integrate it with fewer technologies, such as OpenNebula or OpenStack. LXD runs only on Linux distributions and doesn't support the Windows platform. LXD uses the Ubuntu and Ubuntu-daily image repositories for Ubuntu distributions. For other distributions, it uses a public image server. Docker Alternatives 8: Podman Podman is a popular containerization technology that is rapidly maturing to compete with Docker. Unlike Docker, which uses a daemon for managing containers, Podman takes a daemon-less approach, using a monitoring tool called Conmon that handles the tasks of creating containers, storing their state, pulling container images, etc. This ability to manage multiple containers out of the box using pod-level commands is what makes Podman special. Compared to Docker's daemon, Conmon has a smaller memory footprint. To create a pod declaratively, you can write a manifest file in the YAML data serialization format; Kubernetes consumes such manifests for its container orchestration framework. Podman is similar to Docker, which means you can interact with Podman containers using Docker commands. Being daemon-less, Podman is more secure with a smaller attack surface. To remotely access all supported resources, you can use REST APIs. Moreover, Podman containers don't need root access, which means you can keep them from being run as the host's root user for greater security. Another ability that separates Podman from Docker is that you can group containers as pods and efficiently manage a cluster of containers. To use this feature in Docker, you need to create a docker-compose YAML file. The ability to efficiently manage pods is what gives Podman an advantage over other containerization tools. Here is the link to the Podman site. Docker Alternatives 9: Containerd Containerd is not a replacement for Docker; it is actually a part of Docker technology. Containerd is a container runtime that creates, manages, and destroys containers in real time, implementing the Container Runtime Interface (CRI) specifications. It is a kernel abstraction layer that abstracts OS functionality, or syscalls. It pulls Docker images and hands them to the low-level runtime called runC, which manages containers. When Docker was released in 2013, it was a comprehensive containerization tool that helped organizations create and run containers. But it lacked a container orchestration system. So, Kubernetes was introduced in 2014 to simplify container orchestration processes. However, Kubernetes had to use Docker to interact with containers. As Kubernetes only needed the components that are required to manage containers, it was looking for a way to bypass certain aspects of the tool. The result was Dockershim. While Kubernetes developed Dockershim to bypass Docker, Docker came up with a container orchestration tool called Docker Swarm that performed the tasks of Kubernetes. As containerization technology evolved and multiple third-party integrations came into existence, managing Docker containers became a complex task. To standardize container technology, the Open Container Initiative (OCI) was introduced.
The job of OCI was to define specifications for container and runtime standards. To make Docker technology more modular, Docker extracted its runtime into another tool called containerd, which was later donated to the Cloud Native Computing Foundation (CNCF). With containerd, Kubernetes was able to access low-level container components without Docker. In today's distributed network systems, containerd helps administrators abstract syscalls and provide users with what they need. The latest containerd version comes with a complete storage and distribution system supporting Docker images and OCI formats. To summarize, containerd helps you build a container platform without worrying about the underlying OS. To learn more about containerd, visit this link. Docker Alternatives 10: runC Similar to containerd, runC is a part of the Docker container ecosystem that provides low-level functionality for containers. In 2015, Docker released runC as a standalone container runtime tool. As Docker is a comprehensive containerization technology that runs distributed apps on various platforms and environments, it uses a sandboxing environment to abstract the required components of the underlying host without rewriting the app. To make this abstraction possible, Docker integrated these features into a unified low-level container runtime component called runC. runC is highly portable, secure, lightweight, and scalable, making it suitable for large deployments. As there is no dependency on the Docker platform, runC makes standard containers available everywhere. runC offers native support for Windows and Linux containers, for hardware manufacturers such as Arm, IBM, Intel, and Qualcomm, and for bleeding-edge hardware features such as TPM and DPDK. The runC container configuration format is governed by the Open Container Initiative. It is OCI-compliant and implements the OCI specs. Extra Docker Alternative: Vagrant Vagrant is an open-source software tool from HashiCorp that helps organizations build and manage portable software development environments on top of providers such as VirtualBox. With its easy workflow and automation, Vagrant enables developers to set up portable development environments automatically. While Docker can cost-effectively run software on a containerized Windows, Linux, and macOS system, it doesn't offer full functionality on certain operating systems such as BSD. When you are deploying apps to BSD environments, Vagrant's production parity is better than Docker's. However, Vagrant doesn't offer full containerization features. In microservices-heavy environments, Vagrant lacks full functionality. So, Vagrant is useful when you are looking for consistent and easy development workflows or when BSD deployments are involved. The most direct alternative to Docker technology is serverless architecture. However, it makes organizations heavily dependent on cloud providers, and it doesn't suit long-running applications well. VMware doesn't offer a comprehensive containerization system. Rkt and Cloud Foundry are heading toward a dead end. Apache Mesos was on the verge of becoming obsolete but got the support of its members at the last hour. Containerd and runC are low-level tools and work well with high-level container software such as Docker. Most of the Docker alternatives are developer-focused. Docker offers a comprehensive and robust container ecosystem that suits DevOps, microservices, and cloud-native architectures!
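As a side note before moving on to orchestration: the pod-level workflow mentioned in the Podman section above can be tried in a couple of commands. A minimal sketch (the pod name, port mapping, and image are just examples):
Shell
$ podman pod create --name web-pod -p 8080:80                   # create a pod with a published port
$ podman run -d --pod web-pod docker.io/library/nginx:alpine    # run a container inside that pod
$ podman pod ps                                                 # list pods and their status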
Container Orchestration Solutions When you use containers, you need a container orchestration tool to manage deployments of container clusters. Container orchestration is about automating container management tasks such as scheduling, deploying, scaling, and monitoring containers. For instance, in a containerized environment, each server runs multiple applications that are written in different programming languages using different technologies and frameworks. When you scale this setup to hundreds and thousands of deployments, it becomes a challenge to maintain operational efficiencies and security. And if you have to move them between on-premises, cloud, and multi-cloud environments, the complexity adds up. Identifying overprovisioning of resources, load-balancing across multiple servers, handling updates and rollbacks, and implementing organization security standards across the infrastructure are some additional challenges you face. Manually performing these operations for enterprise-level deployments is not feasible. A container orchestration tool helps you in this regard. Container orchestration uses a declarative programming model wherein you define the required outcome, and the platform will ensure that the environment is maintained at that desired state. It means your deployments always match the predefined state. When you deploy containers, the orchestration tool will automatically schedule those deployments, choosing the best available host. It simplifies container management operations, boosts resilience, and adds security to operations. Kubernetes, Docker Swarm, and Apache Mesos are some of the popular container orchestration tools available in the market. Kubernetes has become so popular in recent times that many container management tools were built on top of Kubernetes, such as Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), etc. Container Orchestration Solution 1: Kubernetes Kubernetes, or K8s, is the most popular container orchestration tool that helps organizations efficiently manage containers at a massive scale. It was released in 2014 by Google engineers and is now offered as an open-source tool. The tool is written in Go and uses declarative programming and YAML-based deployment. Kubernetes is a comprehensive container management and container orchestration engine. It offers load-balancing, auto-scaling, secrets management, and volume management out of the box. It uses 'pods' that allow you to group containers and provision resources based on predefined values. It also offers a web UI to view and manage clusters of containers. Kubernetes can run serverless workloads, is vendor-agnostic, and comes with built-in security features. It offers comprehensive support for Docker containers. It also supported the rkt engine from CoreOS in earlier versions. Kubernetes enjoys vibrant community support. Google Kubernetes Engine (GKE) natively supports Kubernetes. Similarly, Azure and Red Hat OpenShift also support Kubernetes. However, Kubernetes is not easy to configure and use. There is a steep learning curve. Container Orchestration Solution 2: Amazon ECS Amazon Elastic Container Service (ECS) is a comprehensive container orchestration tool offered by Amazon for Docker containers. It allows organizations to efficiently run clusters of VMs on the Amazon cloud while being able to manage container groups on these VMs across the infrastructure easily.
Running a serverless architecture, ECS deploys and manages containers for you, so you can operate containers without worrying about managing VMs. You can define apps as tasks using JSON. The biggest USP of ECS is its simplicity and ease of use. Deployments can be made right from the AWS management console. It is free to use. ECS comes integrated with an extensive set of AWS tools such as CloudWatch, IAM, CloudFormation, ELB, etc., which means you don't have to look elsewhere for container management tasks. You can write code to programmatically manage container operations, perform health checks, or easily access other AWS services. Leveraging the immutable nature of containers, you can use AWS spot instances and save up to 90% in costs. All containers are launched inside a virtual private cloud so that you can enjoy added security out of the box. Container Orchestration Solution 3: Amazon EKS Amazon Elastic Kubernetes Service (EKS) is another powerful offering from AWS to efficiently manage Kubernetes running on the AWS cloud. It is a certified Kubernetes tool, meaning you can run all the tools used in the Kubernetes ecosystem. It supports hybrid and multi-cloud environments. While AWS ECS is easy to use, EKS can take some time to get used to, as deploying and configuring CloudFormation or kops templates is a complex task. However, it allows more customization and portability across multi-cloud and hybrid environments and best suits large deployments. Amazon EKS adds $144 per cluster per month to your AWS bill. Container Orchestration Solution 4: Azure Kubernetes Service Azure Kubernetes Service (AKS) is a managed Kubernetes service offered by Azure. Formerly, it was called Azure Container Service and supported Docker Swarm, Mesos, and Kubernetes. The best thing with AKS is that the tool is quickly updated in line with newer Kubernetes releases compared with EKS and GKE. If you are a strong Microsoft user, AKS works well for you, as you can easily integrate it with other Microsoft services. For instance, you can have seamless integration with Azure Active Directory. Azure Monitor and Application Insights help you monitor and log environmental issues. Azure Policy is integrated with AKS. Automatic node health repair is a useful feature of the tool. A Kubernetes extension in Visual Studio Code allows you to edit and deploy Kubernetes from the editor. The developer community is good. AKS charges only for the nodes; the control plane is free. On the downside, AKS offers a 99.9% SLA only when paired with the chargeable Azure Availability Zones. For free clusters, the uptime SLA is 99.5%. Container Orchestration Solution 5: Google Kubernetes Engine Google Kubernetes Engine is the managed Kubernetes service offered by Google. As Google engineers developed Kubernetes, Google was the first to introduce a managed Kubernetes service in the form of GKE. Moreover, it offers the most advanced solutions compared to EKS and AKS. It automatically updates master and node machines. CLI support is available. You can use the Stackdriver tool for resource monitoring. Autoscaling is available out of the box. It supports node pools wherein you can choose the best available resource to deploy each service. When it comes to pricing, cluster management is free; you will be charged for the resources used. EKS vs. AKS vs. GKE: Which Is the Best Tool for Container Orchestration?
Using the right technology stack, you can efficiently schedule containers, gain high availability, perform health checks, and handle load balancing and service discovery. When it comes to containerization technology, Docker offers the most comprehensive and feature-rich container ecosystem, and it is the de facto containerization standard. When it comes to container orchestration tools, Kubernetes is the best choice. It offers robust performance to efficiently manage thousands of clusters while allowing you to move workloads between different platforms seamlessly. Going for a Docker alternative can be risky. As mentioned above, organizations that used Cloud Foundry and Rkt had to realign their containerization strategies. I recommend using AWS ECS or EKS with Docker! AWS ECS with Docker is a powerful and cost-effective choice for organizations that implement simple app deployments. If your organization deals with containerization at a massive scale, AWS EKS with Docker is a good choice. AWS is the leading provider of cloud platform solutions. AWS EKS comes with high interoperability and flexibility and is cost-effective. So, AWS ECS or EKS with Docker gives you the best of breed! Conclusion As businesses aggressively embrace cloud-native architecture and move workloads to the cloud, containerization has become mainstream in recent times. With its robust standalone ecosystem, Docker has become the de facto standard for containerization solutions. Though Docker is implemented by millions of users across the globe, there are other containerization tools available in the market that cater to specific needs. However, when exploring Docker alternatives, it is important to clearly identify your containerization requirements and evaluate each alternative's host OS support and use cases before making a decision.
This is a continuation of the previous article, which described how to add support for the Postgres JSON functions with Hibernate 5. In this article, we will focus on how to use JSON operations in projects that use the Hibernate framework in version 6. Native Support Hibernate 6 already has some good support for querying by JSON attributes, as the example below presents. We have our normal entity class that has one JSON property: Java import jakarta.persistence.Column; import jakarta.persistence.Entity; import jakarta.persistence.Id; import jakarta.persistence.Table; import org.hibernate.annotations.JdbcTypeCode; import org.hibernate.annotations.Type; import org.hibernate.type.SqlTypes; import java.io.Serializable; @Entity @Table(name = "item") public class Item implements Serializable { @Id private Long id; @JdbcTypeCode(SqlTypes.JSON) @Column(name = "jsonb_content") private JsonbContent jsonbContent; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public JsonbContent getJsonbContent() { return jsonbContent; } public void setJsonbContent(JsonbContent jsonbContent) { this.jsonbContent = jsonbContent; } } The JsonbContent type looks like the one below: Java import jakarta.persistence.Embeddable; import jakarta.persistence.Enumerated; import jakarta.persistence.EnumType; import org.hibernate.annotations.Struct; import java.io.Serializable; import java.util.List; @Embeddable public class JsonbContent implements Serializable{ private Integer integer_value; private Double double_value; @Enumerated(EnumType.STRING) private UserTypeEnum enum_value; private String string_value; //Getters and Setters } When we have such a model, we can, for example, query by the string_value attribute. Java public List<Item> findAllByStringValueAndLikeOperatorWithHQLQuery(String expression) { TypedQuery<Item> query = entityManager.createQuery("from Item as item_ where item_.jsonbContent.string_value like :expr", Item.class); query.setParameter("expr", expression); return query.getResultList(); } Important! - Currently, there is a limitation in the support for querying by attributes: we cannot query by complex types like arrays. As you can see, the JsonbContent type has the Embeddable annotation, which means that if you try to add a property that is a list, you will see an exception with the following message, because the type that is supposed to be serialized as JSON cannot have complex types as its properties: Aggregate components currently may only contain simple basic values and components of simple basic values. When our JSON type does not need properties with complex types, native support is enough. Please check the below links for more information: Stack Overflow: Hibernate 6.2 and json navigation Hibernate ORM 6.2 - Composite aggregate mappings GitHub: hibernate6-tests-native-support-1 However, sometimes it is worth having the possibility to query by array attributes. Of course, we can use native SQL queries in Hibernate and use the Postgres JSON functions which were presented in the previous article. But it would also be useful to have such a possibility in HQL queries or when using predicates programmatically. This second approach is even more useful when you have to implement dynamic query functionality. Although dynamically concatenating a string into an HQL query might be easy, the better practice is to use predicates. This is where the posjsonhelper library comes in handy.
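For comparison, the native-SQL route mentioned above could look roughly like the sketch below. It reuses the Item entity and the jsonb_content column from this article; the method name is just an illustration.
Java
@SuppressWarnings("unchecked")
public List<Item> findAllByStringValueWithNativeQuery(String expression) {
    // jsonb_extract_path_text is a standard Postgres function, equivalent to the #>> operator;
    // this is a plain native query, so no HQL or criteria support is involved
    return entityManager.createNativeQuery(
                    "SELECT * FROM item WHERE jsonb_extract_path_text(jsonb_content, 'string_value') LIKE :expr",
                    Item.class)
            .setParameter("expr", expression)
            .getResultList();
}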
Posjsonhelper The project is available in the Maven Central repository, so you can easily add it as a dependency to your Maven project. XML <dependency> <groupId>com.github.starnowski.posjsonhelper</groupId> <artifactId>hibernate6</artifactId> <version>0.2.1</version> </dependency> Register FunctionContributor To use the library, we have to attach the FunctionContributor component. We can do it in two ways. The first and most recommended is to create a file with the name org.hibernate.boot.model.FunctionContributor under the resources/META-INF/services directory. As the content of the file, just put the posjsonhelper implementation of the org.hibernate.boot.model.FunctionContributor type. Plain Text com.github.starnowski.posjsonhelper.hibernate6.PosjsonhelperFunctionContributor The alternative solution is to use the com.github.starnowski.posjsonhelper.hibernate6.SqmFunctionRegistryEnricher component during application start-up, as in the example below that uses the Spring Framework. Java import com.github.starnowski.posjsonhelper.hibernate6.SqmFunctionRegistryEnricher; import jakarta.persistence.EntityManager; import org.hibernate.query.sqm.NodeBuilder; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.ApplicationListener; import org.springframework.context.annotation.Configuration; import org.springframework.context.event.ContextRefreshedEvent; @Configuration public class FunctionDescriptorConfiguration implements ApplicationListener<ContextRefreshedEvent> { @Autowired private EntityManager entityManager; @Override public void onApplicationEvent(ContextRefreshedEvent event) { NodeBuilder nodeBuilder = (NodeBuilder) entityManager.getCriteriaBuilder(); SqmFunctionRegistryEnricher sqmFunctionRegistryEnricher = new SqmFunctionRegistryEnricher(); sqmFunctionRegistryEnricher.enrich(nodeBuilder.getQueryEngine().getSqmFunctionRegistry()); } } For more details please check "How to attach FunctionContributor." Example Model Our model looks like the example below: Java package com.github.starnowski.posjsonhelper.hibernate6.demo.model; import io.hypersistence.utils.hibernate.type.json.JsonType; import jakarta.persistence.Column; import jakarta.persistence.Entity; import jakarta.persistence.Id; import jakarta.persistence.Table; import org.hibernate.annotations.JdbcTypeCode; import org.hibernate.annotations.Type; import org.hibernate.type.SqlTypes; @Entity @Table(name = "item") public class Item { @Id private Long id; @JdbcTypeCode(SqlTypes.JSON) @Type(JsonType.class) @Column(name = "jsonb_content", columnDefinition = "jsonb") private JsonbContent jsonbContent; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public JsonbContent getJsonbContent() { return jsonbContent; } public void setJsonbContent(JsonbContent jsonbContent) { this.jsonbContent = jsonbContent; } } Important!: In this example, the JsonbContent property is a custom type (as below), but it could also be the String type.
Java package com.github.starnowski.posjsonhelper.hibernate6.demo.model; import jakarta.persistence.*; import org.hibernate.annotations.JdbcTypeCode; import org.hibernate.type.SqlTypes; import java.io.Serializable; import java.util.List; public class JsonbContent implements Serializable{ private List<String> top_element_with_set_of_values; private Integer integer_value; private Double double_value; @Enumerated(EnumType.STRING) private UserTypeEnum enum_value; private String string_value; private Child child; // Setters and Getters } DDL operations for the table: SQL create table item ( id bigint not null, jsonb_content jsonb, primary key (id) ) For presentation purposes, let's assume that our database contains such records: SQL INSERT INTO item (id, jsonb_content) VALUES (1, '{"top_element_with_set_of_values":["TAG1","TAG2","TAG11","TAG12","TAG21","TAG22"]}'); INSERT INTO item (id, jsonb_content) VALUES (2, '{"top_element_with_set_of_values":["TAG3"]}'); INSERT INTO item (id, jsonb_content) VALUES (3, '{"top_element_with_set_of_values":["TAG1","TAG3"]}'); INSERT INTO item (id, jsonb_content) VALUES (4, '{"top_element_with_set_of_values":["TAG22","TAG21"]}'); INSERT INTO item (id, jsonb_content) VALUES (5, '{"top_element_with_set_of_values":["TAG31","TAG32"]}'); -- item without any properties, just an empty json INSERT INTO item (id, jsonb_content) VALUES (6, '{}'); -- int values INSERT INTO item (id, jsonb_content) VALUES (7, '{"integer_value": 132}'); INSERT INTO item (id, jsonb_content) VALUES (8, '{"integer_value": 562}'); INSERT INTO item (id, jsonb_content) VALUES (9, '{"integer_value": 1322}'); -- double values INSERT INTO item (id, jsonb_content) VALUES (10, '{"double_value": 353.01}'); INSERT INTO item (id, jsonb_content) VALUES (11, '{"double_value": -1137.98}'); INSERT INTO item (id, jsonb_content) VALUES (12, '{"double_value": 20490.04}'); -- enum values INSERT INTO item (id, jsonb_content) VALUES (13, '{"enum_value": "SUPER"}'); INSERT INTO item (id, jsonb_content) VALUES (14, '{"enum_value": "USER"}'); INSERT INTO item (id, jsonb_content) VALUES (15, '{"enum_value": "ANONYMOUS"}'); -- string values INSERT INTO item (id, jsonb_content) VALUES (16, '{"string_value": "this is full sentence"}'); INSERT INTO item (id, jsonb_content) VALUES (17, '{"string_value": "this is part of sentence"}'); INSERT INTO item (id, jsonb_content) VALUES (18, '{"string_value": "the end of records"}'); -- inner elements INSERT INTO item (id, jsonb_content) VALUES (19, '{"child": {"pets" : ["dog"]}}'); INSERT INTO item (id, jsonb_content) VALUES (20, '{"child": {"pets" : ["cat"]}}'); INSERT INTO item (id, jsonb_content) VALUES (21, '{"child": {"pets" : ["dog", "cat"]}}'); INSERT INTO item (id, jsonb_content) VALUES (22, '{"child": {"pets" : ["hamster"]}}'); Using Criteria Components Below is an example of the same query presented at the beginning, but created with SQM components and the criteria builder: Java public List<Item> findAllByStringValueAndLikeOperator(String expression) { CriteriaBuilder cb = entityManager.getCriteriaBuilder(); CriteriaQuery<Item> query = cb.createQuery(Item.class); Root<Item> root = query.from(Item.class); query.select(root); query.where(cb.like(new JsonBExtractPathText(root.get("jsonbContent"), singletonList("string_value"), (NodeBuilder) cb), expression)); return entityManager.createQuery(query).getResultList(); } Hibernate is going to generate the SQL code as below: SQL select i1_0.id, i1_0.jsonb_content from item i1_0 where jsonb_extract_path_text(i1_0.jsonb_content,?) like ?
escape '' The jsonb_extract_path_text is a Postgres function that is equivalent to the #>> operator (please check the Postgres documentation linked earlier for more details). Operations on Arrays The library supports a few Postgres JSON function operators, such as: ?& - This checks if all of the strings in the text array exist as top-level keys or array elements. So, generally, if we have a JSON property that contains an array, you can check whether it contains all of the elements that you are searching for. ?| - This checks if any of the strings in the text array exist as top-level keys or array elements. So, generally, if we have a JSON property that contains an array, you can check whether it contains at least one of the elements that you are searching for. Apart from executing native SQL queries, Hibernate 6 does not have support for the above operations. Required DDL Changes The operators above cannot be used in HQL because of their special characters. That is why we need to wrap them, for example, in custom SQL functions. The posjsonhelper library requires two custom SQL functions that wrap those operators. With the default settings, these functions have the implementation below. SQL CREATE OR REPLACE FUNCTION jsonb_all_array_strings_exist(jsonb, text[]) RETURNS boolean AS $$ SELECT $1 ?& $2; $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION jsonb_any_array_strings_exist(jsonb, text[]) RETURNS boolean AS $$ SELECT $1 ?| $2; $$ LANGUAGE SQL; For more information on how to customize or programmatically add the required DDL, please check the section "Apply DDL changes." "?&" Wrapper The code example below illustrates how to create a query that finds records whose JSON array property contains all of the string elements that we are searching for. Java public List<Item> findAllByAllMatchingTags(Set<String> tags) { CriteriaBuilder cb = entityManager.getCriteriaBuilder(); CriteriaQuery<Item> query = cb.createQuery(Item.class); Root<Item> root = query.from(Item.class); query.select(root); query.where(new JsonbAllArrayStringsExistPredicate(hibernateContext, (NodeBuilder) cb, new JsonBExtractPath(root.get("jsonbContent"), (NodeBuilder) cb, singletonList("top_element_with_set_of_values")), tags.toArray(new String[0]))); return entityManager.createQuery(query).getResultList(); } In case tags contains two elements, Hibernate would generate the SQL below: SQL select i1_0.id, i1_0.jsonb_content from item i1_0 where jsonb_all_array_strings_exist(jsonb_extract_path(i1_0.jsonb_content,?),array[?,?]) "?|" Wrapper The code in the example below illustrates how to create a query that finds records whose JSON array property contains at least one of the string elements that we are searching for.
Java public List<Item> findAllByAnyMatchingTags(HashSet<String> tags) { CriteriaBuilder cb = entityManager.getCriteriaBuilder(); CriteriaQuery<Item> query = cb.createQuery(Item.class); Root<Item> root = query.from(Item.class); query.select(root); query.where(new JsonbAnyArrayStringsExistPredicate(hibernateContext, (NodeBuilder) cb, new JsonBExtractPath(root.get("jsonbContent"), (NodeBuilder) cb, singletonList("top_element_with_set_of_values")), tags.toArray(new String[0]))); return entityManager.createQuery(query).getResultList(); } In case tags contains two elements, Hibernate would generate the SQL below: SQL select i1_0.id, i1_0.jsonb_content from item i1_0 where jsonb_any_array_strings_exist(jsonb_extract_path(i1_0.jsonb_content,?),array[?,?]) For more examples of how to use numeric operators, please check the demo DAO object and the DAO tests. Why Use the posjsonhelper Library When Hibernate Has Some Support for Querying JSON Attributes Besides those two operators that support the array types mentioned above, the library has two additional useful operators. The jsonb_extract_path and jsonb_extract_path_text functions are wrappers for the #> and #>> operators. Hibernate itself supports the ->> operator. To see the difference between those operators, please check the Postgres documentation linked earlier. However, as you read at the beginning of the article, the native query support for JSON attributes is only allowed when the JSON class has properties with simple types. And more importantly, you cannot query by an attribute if it is not mapped to a property in the JSON type. That might be a problem if you assume that your JSON structure can be more dynamic and have an elastic structure not defined by any schema. With the posjsonhelper operators, you don't have this problem. You can query by any attribute you want; it does not have to be defined as a property in the JSON type. Furthermore, the property in our entity that stores the JSON column does not have to be a complex object like JsonbContent in our examples. It can be a simple string in Java. Conclusion As was mentioned in the previous article, in some cases, Postgres JSON types and functions can be good alternatives to NoSQL databases. This could save us from having to add a NoSQL solution to our technology stack, which would bring more complexity and additional costs. It also gives us flexibility when we need to store unstructured data in our relational database, along with the possibility to query those structures.
Terraform is a popular Infrastructure as Code tool that simplifies the process of creating, managing, and updating infrastructure components. In this blog post, I'll explore how to use Terraform to effectively provision and configure distributed YugabyteDB Managed clusters. I will guide you through the process of configuring the YugabyteDB Managed Terraform provider, defining variables, initializing the Terraform project, and adjusting configurations as needed. Let's dive in! Steps to Configure YugabyteDB Clusters With Terraform Before we begin, ensure you have: Access to a YugabyteDB Managed account Terraform CLI installed on your local machine Step 1: Configure YugabyteDB Managed Terraform Provider and Authentication Token To use YugabyteDB Managed clusters in Terraform, we must first configure the YugabyteDB Managed Terraform provider using an authentication token. You can create an authentication token using the YugabyteDB Managed Access Control Panel. Follow the steps in the API keys documentation to generate a token. Once you have your token, create a file called main.tf with the following configuration: HCL terraform { required_providers { ybm = { source = "yugabyte/ybm" version = "1.0.2" } } } variable "auth_token" { type = string description = "API authentication token" sensitive = true } provider "ybm" { host = "cloud.yugabyte.com" use_secure_host = true auth_token = var.auth_token } Then, create a file named terraform.tfvars to store your authentication token securely: HCL auth_token = "your-authentication-token" Finally, initialize your Terraform project by running the following command: Shell terraform init You should see the message below if everything is initialized correctly: Shell Initializing the backend... Initializing provider plugins... - Finding yugabyte/ybm versions matching "1.0.2"... - Installing yugabyte/ybm v1.0.2... - Installed yugabyte/ybm v1.0.2 (self-signed, key ID 0409E86E13F86B59) Terraform has been successfully initialized! Step 2: Provision a YugabyteDB Managed Cluster Now that we have configured the YugabyteDB Managed Terraform provider, let's create a three-node cluster in the US-East region of Google Cloud Platform. Add the following configuration snippet to your main.tf file, replacing the cluster credentials with the ones you'd like to use: HCL resource "ybm_cluster" "single_region_cluster" { cluster_name = "my-terraform-cluster" cloud_type = "GCP" cluster_type = "SYNCHRONOUS" cluster_region_info = [ { region = "us-east1" num_nodes = 3 } ] cluster_tier = "PAID" fault_tolerance = "ZONE" node_config = { num_cores = 2 disk_size_gb = 50 } credentials = { username = "myUsername" password = "mySuperStrongPassword" } } After updating the configuration, apply the changes using the following command: Shell terraform apply Your multi-zone YugabyteDB Managed cluster should be up and running within a few minutes. Step 3: Scale Your YugabyteDB Managed Cluster Scaling the cluster (both horizontally and vertically) is a breeze with the YugabyteDB Terraform provider. Let's look at how you can scale the cluster to six nodes and provision more CPUs and disk space for each node instance. To begin, update your main.tf file by changing cluster_region_info.num_nodes to 6 nodes, node_config.num_cores to 4 CPUs, and node_config.disk_size_gb to 100 GB: HCL resource "ybm_cluster" "single_region_cluster" { # ... cluster_region_info = [ { # ... num_nodes = 6 } ] # ... node_config = { num_cores = 4 disk_size_gb = 100 } # ...
} After updating the configuration, apply the changes again: Shell terraform apply Your six-node cluster with more CPUs and storage per instance will be upgraded and ready in just a few minutes. Note: the infrastructure upgrade happens as a rolling upgrade, without impacting your applications. In Conclusion This short guide shows how easy it is to use Terraform to provision and manage distributed YugabyteDB clusters. With just a few configuration changes, we can configure the YugabyteDB Managed Terraform provider, set up a three-node cluster in the US-East region of Google Cloud Platform, and scale the cluster to six nodes while adjusting for the increased load. To find out more about the YugabyteDB Managed Terraform provider, you can check the technical documentation.
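One closing tip that the steps above do not cover: if you created the cluster only to experiment, Terraform can also tear it down so you are not billed for idle nodes. A quick sketch:
Shell
$ terraform plan      # preview the changes Terraform would make before applying them
$ terraform destroy   # remove every resource created by this configuration, including the cluster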
Scrum is a methodology used in project management that was first introduced by Ken Schwaber and Jeff Sutherland in the 1990s. It has since become a popular approach to software development, but it can be used in other fields as well. In this article, we will explore what Scrum is, how it works, and the benefits it can bring to your organization. What Is Scrum? The word "scrum" originally referred to a formation used in rugby, where players come together and work together to move the ball forward. The term was adopted by Schwaber and Sutherland to describe their new approach to software development, where teams work collaboratively and iteratively to create and deliver high-quality products. Scrum is an agile framework for project management that focuses on delivering value to customers in a flexible and adaptable way. Scrum's core principles include transparency, control, and alignment. The framework is designed to allow teams to work together to achieve a common goal by breaking complex tasks into smaller, manageable pieces. Scrum is based on the idea of iterative development, which means teams work in short sprints of two to four weeks to deliver a working product increment. During each sprint, the team focuses on a specific set of tasks that bring the product closer to completion. At the end of each sprint, the team holds a review and retrospective meeting to assess its progress and identify opportunities for improvement. How Does Scrum Work? Scrum consists of three roles: the Product Owner, the Scrum Master, and the Development Team. Each of these roles has specific responsibilities that contribute to the success of the project. The Product Owner is responsible for defining the product vision and ensuring that the team is building the right product. They work closely with stakeholders to identify requirements and prioritize tasks based on the value they bring to the customer. The Product Owner also maintains the product backlog, which is a prioritized list of tasks that need to be completed. The Scrum Master is responsible for facilitating the Scrum process and ensuring that the team is following the Scrum framework. They coach the team on Scrum principles and help them to overcome any obstacles that may be hindering their progress. The Scrum Master also facilitates the daily stand-up meetings, which are short meetings where the team members share their progress and identify any issues. The Development Team is responsible for building the product increment during each sprint. The team is self-organizing and cross-functional, which means that each member has a specific set of skills that contribute to the success of the project. The Development Team works closely with the Product Owner to ensure that they are building the right product and with the Scrum Master to ensure that they are following the Scrum framework. The scrum framework also includes a number of artifacts, such as the product backlog, sprint backlog, and product increment. The product backlog is a prioritized list of features and requirements for the product, while the sprint backlog is a list of tasks that the team will work on during the current sprint. The product increment is a working version of the product that is delivered at the end of each sprint, and it should be fully functional and meet the definition of done. One of the key principles of scrum is the sprint, which is a time-boxed period of one to four weeks during which the development team creates a working product increment. 
Each sprint begins with a sprint planning meeting, where the team selects items from the product backlog and creates a sprint backlog. During the sprint, the team meets daily for a 15-minute stand-up meeting, where each team member gives a brief update on their progress and identifies any obstacles they are facing. At the end of each sprint, the team holds a sprint review meeting, where they demonstrate the product increment to the stakeholders and receive feedback. They also hold a sprint retrospective meeting, where they reflect on the sprint and identify areas for improvement. The Scrum process consists of several events that take place during each sprint. These events include: Sprint Planning: At the beginning of each sprint, the team holds a sprint planning meeting to discuss the tasks that need to be completed and how they will be accomplished. The team reviews the product backlog and selects the tasks that they will work on during the sprint. Daily Stand-up: The team holds a daily stand-up meeting to discuss their progress and identify any issues that may be hindering their progress. The meeting is short and focused, with each team member answering three questions: What did you do yesterday? What will you do today? Are there any obstacles in your way? Sprint Review: At the end of each sprint, the team holds a sprint review meeting to demonstrate the product increment that they have built. The Product Owner and stakeholders provide feedback on the product and identify any changes that need to be made. Sprint Retrospective: The team holds a sprint retrospective meeting to reflect on their performance during the sprint. The team discusses what went well, what didn't go well, and identifies areas for improvement. Scrum is designed to be flexible and adaptable, and it can be customized to fit the needs of different teams and projects. However, there are some common practices and techniques that are often used in Scrum, such as user stories, burndown charts, and velocity tracking. User stories are a way of capturing requirements in a simple, user-focused format. They describe a specific feature or requirement from the perspective of the user, and they are often written on index cards or sticky notes. User stories are used to create the product backlog, and they help the team to stay focused on the needs of the user throughout the development process. Burndown charts are a visual representation of the team's progress during the sprint. They show how much work has been completed and how much work remains, and they can be used to identify potential problems or delays. Velocity tracking is a way of measuring the team's productivity over time. It involves tracking the amount of work the team completes during each sprint and using that data to estimate how much work they can complete in future sprints. The Benefits of Scrum Scrum offers several benefits to organizations that use it to manage their projects. Some of these benefits include: Increased Productivity: Scrum helps teams to be more productive by breaking down complex tasks into smaller, manageable chunks. This makes it easier for team members to focus on the specific tasks they need to accomplish during each sprint, which can lead to higher productivity and faster delivery times. Improved Collaboration: Scrum promotes collaboration between team members, as well as with stakeholders and customers. 
By working together to achieve a common goal, team members are able to share ideas and insights, leading to better decision-making and higher-quality products. Greater Flexibility: Scrum is a flexible framework that allows teams to adapt to changing requirements and circumstances. By working in short sprints, teams are able to quickly adjust their priorities and focus on the tasks that will deliver the most value to the customer. Increased Transparency: Scrum promotes transparency by making the progress of the project visible to all stakeholders. Through regular meetings and reviews, team members are able to provide updates on their progress, and stakeholders are able to provide feedback and make suggestions for improvements. Better Risk Management: Scrum helps teams to identify and mitigate risks early on in the project. By breaking down tasks into smaller chunks and regularly reviewing progress, teams are able to identify potential issues and address them before they become major problems. Implementing Scrum in Your Organization If you're interested in implementing Scrum in your organization, there are several steps you can take to get started. Educate Your Team: Before you begin implementing Scrum, it's important to educate your team on the principles and practices of the framework. There are many resources available, including books, online courses, and training programs, that can help your team get up to speed. Identify Your Product Owner and Scrum Master: The Product Owner and Scrum Master are key roles in the Scrum framework. It's important to identify individuals who have the right skills and experience to fill these roles and ensure that they receive the necessary training and support. Create Your Product Backlog: The product backlog is a prioritized list of tasks that need to be completed to achieve the product vision. Work with your Product Owner to create a backlog that is focused on delivering value to the customer. Plan Your Sprint: Once you have your product backlog, you can begin planning your sprint. Work with your team to select the tasks that will be completed during the sprint and create a sprint goal that defines what the team hopes to accomplish. Hold Regular Meetings: Regular meetings, including daily stand-ups, sprint reviews, and retrospectives, are critical to the success of Scrum. Make sure that these meetings are scheduled and attended by all team members. Challenges in Scrum Scrum is not without its challenges, however. One of the biggest challenges is that it requires a high level of discipline and commitment from all team members. This can be difficult to achieve in environments where there are competing priorities or a lack of executive support. Another challenge is that Scrum can be difficult to implement in organizations accustomed to more traditional approaches to project management. It can require significant changes in the way teams and stakeholders work together, as well as a mindset shift toward more agile and collaborative ways of working. To overcome these challenges, it is important that teams have a clear understanding of Scrum principles and practices and a commitment to continuous improvement. They should also have strong leadership and management support, as well as a culture that values transparency, collaboration, and innovation. Conclusion Scrum is a powerful framework that can help organizations to deliver high-quality products in a flexible and adaptable way. 
By breaking down complex tasks into smaller chunks and promoting collaboration between team members, Scrum can lead to increased productivity, improved quality, and greater customer satisfaction. If you're considering implementing Scrum in your organization, be sure to educate your team, identify your key roles, and create a product backlog and sprint plan that is focused on delivering value to the customer. In conclusion, Scrum is a powerful methodology that can help organizations to manage their projects more efficiently and effectively. Its focus on flexibility, adaptability, and continuous improvement makes it an ideal choice for software development projects, but it can also be used in other fields. By using Scrum, organizations can increase productivity, improve communication, and deliver value to the customer faster.
This article is a step-by-step guide aimed at demonstrating an interface-based approach to using Spring's type conversion system. Spring 3 introduced a core.convert package that provides a general type conversion system. The system defines an SPI to implement type conversion logic and an API to perform type conversions at runtime. In the proposed implementation, the registration of Converters with the ConversionService happens via dependency injection through an interface default method. This results in a quite extensible and encapsulated solution, where the onboarding of new Converter instances takes place during their initialization as Spring beans. ConversionService Initialization To get started, let us create an instance of ConversionService by extending its DefaultConversionService implementation: Java package com.mycompany.converter; import org.springframework.core.convert.support.DefaultConversionService; import org.springframework.stereotype.Component; @Component class MyConversionService extends DefaultConversionService { } Extending Converter API Next up is extending the Converter interface by adding a default method which will be used to register implemented instances of MyConverter by auto-wiring our MyConversionService bean (the parameter is typed as MyConversionService because the plain ConversionService interface does not expose the addConverter method): Java package com.mycompany.converter; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.core.convert.converter.Converter; interface MyConverter<S, T> extends Converter<S, T> { @Autowired default void onboardConverter(MyConversionService myConversionService) { myConversionService.addConverter(this); } } Converter Implementation The final step is adding a concrete implementation of MyConverter: Java package com.mycompany.converter; import org.springframework.stereotype.Component; @Component class SomeConverter implements MyConverter<String, Integer> { public Integer convert(String source) { return Integer.valueOf(source); } } Take notice that all the mentioned classes (MyConversionService, MyConverter, SomeConverter) are placed in the com.mycompany.converter package and have package-level access in the interest of encapsulation, so that the created components are exposed to the rest of the system only through Spring's interfaces. Another point worth highlighting is that SomeConverter is a bean. This lets individual converters, if needed, implement more complicated business logic with other dependencies, which can be easily injected. And at the same time, at this level of abstraction, we are not bothered by the registration of our SomeConverter with MyConversionService, which happens behind the scenes. What is beneficial from a maintenance standpoint is that other converters can be easily added or removed, and existing ones refactored, without having any impact on the registration logic. ConversionService Usage The usage is the same as described in Spring documentation 3.4.6. Using a ConversionService Programmatically: Java package com.mycompany.service; import org.springframework.stereotype.Service; import org.springframework.core.convert.ConversionService; @Service public class MyService { private final ConversionService myConversionService; public MyService(ConversionService myConversionService) { this.myConversionService = myConversionService; } public void doIt() { this.myConversionService.convert(...) } } The myConversionService bean should just be injected as a dependency to perform conversion operations through the registered converters. Final Word That is basically what I wanted to cover here.
Hopefully, someone will find it useful the next time they are dealing with Spring's type conversion system. From my point of view, using default interface methods for dependency injection is a rather nonstandard approach that may not be obvious or even widely known. We should not always follow this direction. Still, I do believe it is useful to know about such a possibility, which in certain circumstances can lead to quite an elegant solution. It is the same as with recursion: you should know about it and its alternatives to be able to decide whether it suits your concrete situation or not.