Internal Developer Portals are reshaping the developer experience. What's the best way to get started? Do you build or buy? Tune in to see.
Agentic AI. It's everywhere. But what does that mean for developers? Learn to leverage agentic AI to improve efficiency and innovation.
Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with the knowledge about frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code that is leveraged in the development process by providing ready-made components. Through the use of frameworks, architectural patterns and structures are created, which help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility, and it is often the default choice for front-end work unless a more specialized tool is required. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used to build frameworks, and they can be used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.
As part of the Android Application Security series, we are going to understand the security controls provided by the Android OS (operating system) to protect the applications running on the device. Without these security controls in place, the data on the device or transmitted by apps could easily be accessed by other apps or by devices on the network. Before getting started, if you haven't read the first part of this series, I highly recommend reading it.

Before the rise of mobile devices, OS (operating system) security primarily focused on desktop computers, servers, and enterprise systems. Below are a few important controls implemented by operating systems:

- Role-based access control (RBAC) and multi-user environments
- Network security to protect against DoS (Denial of Service) attacks, firewalls, antivirus, etc.
- Memory protection mechanisms to prevent buffer overflow (BoF) attacks, including DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization)
- File system security, including file permissions, file encryption, and disk encryption

Like these, different operating systems (Windows, Linux, Unix, Solaris, etc.) had come up with several protection mechanisms by the time the mobile device era started. Mobile operating systems like Android and iOS built their platforms with these security controls in mind. Still, in the early days, not many of these controls were available on Android/iOS; the required controls were added fairly quickly.

Defense in Depth

Android security follows a defense-in-depth model to protect confidentiality, integrity, and availability (CIA). In the defense-in-depth approach, several controls are implemented. Below are the important ones:

- Android users and groups
- SELinux (Security-Enhanced Linux)
- Permissions
- DEP, ASLR, etc.
- SECCOMP
- App sandbox
- Device encryption
- Trusted Execution Environment (TEE)

The list below shows the improvements over time:

- Android 4.2 (API level 17), November 2012: introduction of SELinux
- Android 4.3 (API level 18), July 2013: SELinux became enabled by default
- Android 4.4 (API level 19), October 2013: several new APIs and ART introduced
- Android 5.0 (API level 21), November 2014: ART used by default and many other features added
- Android 6.0 (API level 23), October 2015: many new features and improvements, including granular permissions granted at runtime rather than all or nothing during installation
- Android 7.0 (API levels 24-25), August 2016: new JIT compiler on ART
- Android 8.0 (API levels 26-27), August 2017: a lot of security improvements
- Android 9 (API level 28), August 2018: restriction of background usage of the mic or camera, introduction of lockdown mode, default HTTPS for all apps
- Android 10 (API level 29), September 2019: location access "only while using the app," device tracking prevention, improved secure external storage

Android App Security Features

Apart from these regular operating system controls, Android provides the following security controls for an app running on Android OS:

- App sandboxing
- Permissions
- App signing
- Keystore

App Sandboxing

Application sandboxing is one of the critical features provided by the Android OS. It keeps each app's data and processes in a sandboxed environment and does not allow other apps (malicious or not) to access them. This protects the data inside the app. Linux-based security mechanisms, file system permissions, and runtime restrictions ensure that apps operate independently, without unauthorized access to system resources or other apps.
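To make the sandbox model concrete, here is a minimal sketch (my own illustration, not code from the article; the class and file names are invented) of an app writing data to its private internal storage. Because every app runs under its own Linux UID, a file created with Context.MODE_PRIVATE lands under the app's own /data/data/<package>/files/ directory and cannot be read by other apps.

Java
// Minimal sketch: each Android app runs under its own Linux UID, so files written
// to internal storage with MODE_PRIVATE are readable only by that app.
// Class and file names are illustrative, not from the article.
import android.content.Context;

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public final class SandboxedStorageExample {

    // Writes a value to /data/data/<package>/files/session_token, which other apps
    // cannot read because of per-app UIDs and file system permissions.
    public static void saveToken(Context context, String token) {
        try (FileOutputStream out =
                     context.openFileOutput("session_token", Context.MODE_PRIVATE)) {
            out.write(token.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            // In a real app, handle or log the failure appropriately.
            throw new IllegalStateException("Could not write private file", e);
        }
    }
}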
As we saw in the previous article, application data is stored in the /data/data/ folder.

Shell
barbet:/data/data # ls
alpha.mydevices.in                         com.google.android.apps.internal.betterbug
android                                    com.google.android.apps.maps
android.auto_generated_rro_product__       com.google.android.apps.messaging
android.auto_generated_rro_vendor__        com.google.android.apps.nbu.files
android.autoinstalls.config.google.nexus   com.google.android.apps.nexuslauncher
android.myshop.release                     com.google.android.apps.photos

While installing an app on the device, the OS generates a unique value for the app, Base64-encodes it, and creates a folder with that name under /data/app/. Before Oreo, Android simply used the package name; starting with Oreo, Android began creating these unique folder names to provide better uniqueness and security.

Shell
barbet:/data/data # ls /data/app/
~~-09YTqMLYcn7FwT_vUBMOA==  ~~AK82hlWJIrFJf2cgbVRR3g==  ~~KvDoAodAiN0LD2UILHaetg==  ~~QOHLg7084hj8rNBvVYHYeg==  ~~gndAwbGNwp0mzMUTP556Sg==  ~~uoiS3mnMS4L3QfsR0uwqPA==
~~1GBi7xnRkkWDvnnXZaZHPQ==  ~~CeEiKh1AzGKF9Y9x76zs5A==  ~~L6i9NBP_E_AWx0xH9YqlTA==  ~~Qdd7zzYfVflMMaAGBQ8ZMw==  ~~hfTx_FzGJ_VJ0ixIGudCZg==  ~~vD_rgwS55aJpet3R2BZmPw==
~~21GSMTYpPGbmfo2J0pktSQ==  ~~DBpe3alZqtqzhzbsPsiIMg==  ~~LBtCo2pnLZRNu_bG1KmjIQ==  ~~RBZl37VivVPAu4ovxRpX3Q==  ~~j4XtEDAErb_X2lAXlXgHvA==  ~~vM80pt2jacKCCUjiKCh9UA==
~~2WLzdU9faNRtTWTH9veuiw==  ~~DbXAPRM1sjmHHYWO4BER0w==  ~~MSw6x4JmrypY_E2G71wchw==  ~~TU9zLt0XNPdlo26BAr_aIw==  ~~jMmQ6FtaqWQ_HmLd85T_pQ==  ~~vqp1MM_cfzUyOEucWflhDg==
~~2eFNNEK0J5-bfUpxfOnNGw==  ~~DxOUwLmCdEqkM_2UXXDo-g==  ~~Ma7GQ-mgVbviiz1NVUECiQ==  ~~U4EbGiTCND1jmF3wZdkRYw==  ~~j_HCTKzJrbJ7OVjNxm45Dw==  ~~vvRXqZtaVpDagq6KtxMXQg==
~~2zp6xn7KRnCY7KP-eL1uDw==  ~~En-qhd9ZYwBn2WR5JTYBcg==  ~~MhXEKEfd9PxBFoHtcXMAMA==  ~~WmA6pQdCwVlAa5kxwhRaYA==  ~~kAhQi7P28dfkIgqlK8Ytmw==  ~~wTH3BJqtk3T8SePQmQw3Zg==

Permissions

Android permissions are critical controls that dictate how an app interacts with system resources, such as accessing the internet, reading contacts, or accessing files on the device. Apps must declare the permissions they require in the AndroidManifest.xml file. Users are informed about the requested permissions at installation (or at runtime for dangerous permissions) and can decide whether to grant them, ensuring transparency and control over app behavior.

- Install-time permissions
- Normal permissions
- Special permissions
- Permission groups

App Signing

To install any app (APK file) on a device, the APK has to be digitally signed. Without proper signature verification, the OS won't allow the installation. This helps users verify the authenticity of the developer or organization.

Keystore

A Keystore in Android is a secure container used to store cryptographic keys and certificates. It plays a vital role in app signing, securing sensitive data, and enabling cryptographic operations like encryption, decryption, and authentication.

Conclusion

In upcoming posts, we will see how these controls help Android applications protect themselves from malicious apps.
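As a concrete illustration of the Keystore control described above, here is a minimal sketch (my own illustration; the key alias and parameters are invented) that generates an AES key inside the AndroidKeyStore provider, so the raw key material never leaves the secure keystore.

Java
// Minimal sketch (illustrative alias and parameters): generating an AES key that is
// created and kept inside the AndroidKeyStore, so the raw key bytes never leave it.
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public final class KeystoreExample {

    public static SecretKey generateAesKey() throws Exception {
        KeyGenerator keyGenerator = KeyGenerator.getInstance(
                KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");

        keyGenerator.init(new KeyGenParameterSpec.Builder(
                "my_app_key", // illustrative key alias
                KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                .build());

        // The key is generated inside the Keystore; the app only receives a handle,
        // and cipher operations with it are performed on the app's behalf.
        return keyGenerator.generateKey();
    }
}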
SQL Server Dynamic Data Masking is a feature that allows you to obscure sensitive data from non-privileged accounts, improving data security and compliance. Rather than showing credit card numbers, passwords, or personal identifiers in cleartext, you can define masking rules at the column level so that specific users see only masked values, while others with elevated permissions see the actual data.

When to Use Dynamic Data Masking

- Lower environments (development, QA). Typically, developers and testers do not need access to actual sensitive information. Masking ensures that they can work with realistic datasets without risking exposure of PII.
- Third-party access. When sharing data with external consultants or analytics vendors, masked data prevents inadvertent or malicious disclosure of sensitive content.
- Regulatory compliance. For environments where regulations like GDPR, HIPAA, or PCI-DSS apply, dynamic masking helps ensure only authorized personnel can view sensitive data in cleartext.

Prerequisites

- SQL Server version. Dynamic Data Masking is available in SQL Server 2016 and later.
- Permissions and roles. To create or modify masking rules, you must have the ALTER ANY MASK and ALTER permissions on the table. End users who only have SELECT permission on the table or view are automatically served masked data if they do not have the UNMASK permission.
- Assessment of sensitive fields. Identify which columns contain PII or sensitive data. Typical candidates:
  - Email addresses
  - Phone numbers
  - National identifiers (e.g., SSN)
  - Credit card numbers
  - Passwords or security answers

How to Implement Dynamic Data Masking

1. Identify Columns to Mask

Review each column and decide which requires masking using the query below:

MS SQL
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Customers'
ORDER BY TABLE_NAME, COLUMN_NAME;

2. Choose a Masking Function

SQL Server provides a few built-in masking functions:

- default() – masks the entire value using a default value depending on the data type
- email() – masks the email format (e.g., xxx@example.com)
- partial(prefix, padding, suffix) – allows partial masking of a string; for example, show the first character, mask the middle, and show the last character
- random([start range], [end range]) – for numeric columns, returns a random number in the specified range

Example masking scenarios:

- Name fields (e.g., FirstName, LastName). Use partial() to show, say, the first letter and mask the rest.
- Email address. Use email() for a consistent masked pattern.
- Credit card number. Use partial() to show only the last four digits.
- Password columns. Use default() to mask fully.

3. Apply Masking to the Table

For example, consider a Customers table with the columns FirstName, LastName, Email, CreditCardNumber, and Password.
Below are some sample queries:

MS SQL
-- Mask the first name to show only the first letter
ALTER TABLE Customers
ALTER COLUMN FirstName ADD MASKED WITH (FUNCTION = 'partial(1, "****", 0)');

-- Mask the last name similarly
ALTER TABLE Customers
ALTER COLUMN LastName ADD MASKED WITH (FUNCTION = 'partial(1, "*****", 0)');

-- Mask the email using built-in email masking
ALTER TABLE Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Mask the credit card number to show only the last 4 digits
ALTER TABLE Customers
ALTER COLUMN CreditCardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"****-****-****-",4)');

-- Mask the password fully
ALTER TABLE Customers
ALTER COLUMN Password ADD MASKED WITH (FUNCTION = 'default()');

Important: You must be a member of the db_owner role or have the ALTER ANY MASK permission in the database.

4. Create a Non-Privileged User or Role to Test Masking

Use the queries below:

MS SQL
CREATE USER MaskedUser WITHOUT LOGIN;
GRANT SELECT ON Customers TO MaskedUser;

When MaskedUser queries the table:

MS SQL
EXECUTE AS USER = 'MaskedUser';
SELECT FirstName, LastName, Email, CreditCardNumber, Password FROM Customers;
REVERT;

they will see masked data. If an administrator with the UNMASK permission runs the same query, they will see the real data.

5. Monitoring and Verification

- Data audits. Regularly audit queries and logins to ensure no unauthorized UNMASK permissions are granted.
- Test masking patterns. Confirm that the masked output meets your compliance and business requirements. For instance, ensure the displayed format (like ****-****-****-1234 for a credit card) is correct.
- Documentation. Maintain a data dictionary or schema documentation that notes which columns are masked and how, so that team members understand what they see in downstream environments.

Example Result

Original data (admins with UNMASK):

FirstName | LastName | Email               | CreditCardNumber    | Password
Alice     | Nadella  | alice.n@example.com | 4111-1111-1111-1231 | MySecretPass%
John      | Yesu     | john.y@example.com  | 5555-6666-7777-8899 | Password123$

View for non-privileged users:

FirstName | LastName | Email        | CreditCardNumber    | Password
A****     | N*****   | xxx@xxxx.com | ****-****-****-1231 | ****
J****     | Y*****   | xxx@xxxx.com | ****-****-****-8899 | ****

Conclusion

Implementing Dynamic Data Masking in SQL Server is one meaningful step toward what many people have begun to call a "privacy-first" architecture. The implementation aligns with basic data protection principles, such as those in GDPR and CCPA, allowing only correctly authorized users to see sensitive data in full, while other users get masked or partially masked values, decreasing the possibility of unauthorized disclosure of personal information.

Data Minimization and Access Control

GDPR and CCPA emphasize data minimization, meaning only the data necessary for a particular task should be provided. Dynamic masking ensures that you show only a minimal, masked version of sensitive data to non-privileged roles, thus adhering to these regulations.

Improved Protection and Exposure Reduction

Dynamic masking minimizes the risk of personal data exposure by returning sensitive data in masked form as it is read from the database. Should unauthorized access or a data breach occur at the application or reporting layer, the data already shown is masked, so such an event would have minimal impact on the data subjects.

Audit and Compliance Readiness

Well-documented masking rules and role-based permissions support the accountability principle of GDPR and the transparency requirements of CCPA.
Auditors can easily verify that your organization has technical measures in place to safeguard personal information, helping demonstrate compliance and due diligence.

Ease of Implementation in Development and Testing

For lower environments, where developers and testers often need "realistic" data, dynamic masking provides a systematic way to ensure personal information is never exposed. This approach helps maintain privacy protections throughout the data lifecycle.

Scalability and Consistency

Because data masking is applied dynamically at the database layer, it scales across multiple applications, services, and analytic tools. This uniformity supports clearly defined compliance policies and reduces the chance of policy drift or mistakes in bespoke masking logic across different codebases.

Incorporating dynamic data masking into your general privacy-by-design strategy allows you to protect data subjects' privacy while lowering the risk of regulatory fines and building customer trust. This fits with the GDPR's focus on privacy by design and by default, and with the CCPA's demand for reasonable security measures to safeguard consumer data.
React, introduced by Facebook (now Meta) in 2013, forever changed how developers build user interfaces. At that time, the front-end ecosystem already had heavyweights like AngularJS, Backbone.js, and jQuery, each solving specific needs. Yet React's approach — treating the UI as a function of state — stood out. Instead of manually orchestrating data and DOM updates, React let developers describe how the UI should look given certain conditions. Then, using an internal mechanism called the Virtual DOM, it efficiently computed and applied the necessary changes. This innovation, along with React's component-based architecture and a massive community, catapulted it to the forefront of front-end development.

Since its debut, React has evolved significantly. Version after version introduced incremental improvements, with major shifts like the Fiber rewrite, Hooks, Concurrent Mode previews, and upcoming Server Components. The result is a library that stays modern while preserving backward compatibility. In what follows, we'll explore the factors that made React dominant, how it overcame early criticisms, and why it's likely to remain the top UI library for years to come.

Seeds of Popularity

React started internally at Facebook to address frequent UI updates on its newsfeed. Traditional frameworks at the time often struggled to manage data flow and performance efficiently. Those using two-way binding had to track changes across models, templates, and controllers, leading to complex debugging. React's solution was a one-way data flow, letting developers push state down the component tree while React reconciled differences in the DOM behind the scenes.

The Virtual DOM was a key selling point. Instead of updating the browser DOM every time something changed, React created a lightweight, in-memory representation. After comparing this representation to the prior state, it would issue minimal updates to the real DOM. This approach boosted performance while making code more predictable.

Another reason for early adoption was Facebook's own usage. Seeing the tech giant leverage React in production gave other companies confidence. Meanwhile, open-source licensing meant a growing community could adopt, extend, and improve React, ensuring a constant feedback loop between Facebook and open-source contributors.

The Virtual DOM Advantage

At first, many developers were skeptical of React's claims about the Virtual DOM. The concept of re-rendering an entire component tree whenever state changed seemed wildly inefficient. Yet React's approach — in which it performs a "diff" of two Virtual DOM trees and updates only what's changed — proved both performant and simpler to reason about.

This workflow reduced complex DOM manipulation to "just set state." In older paradigms, a developer often had to orchestrate exactly which elements in the DOM should update and when. React effectively said, "Don't worry about it; we'll figure out the most efficient way." This let developers focus on writing declarative code, describing final states rather than the step-by-step manipulations required to reach them.

Moreover, testability improved. With a predictable input (props and state) and output (rendered markup), React components felt like pure functions — at least from the standpoint of rendering. Side effects could be managed more centrally, paving the way for robust testing strategies and simpler debugging.
Declarative, Component-Based Philosophy

React's embrace of a component-based architecture is one of its most powerful ideas. Instead of forcing code into "template + logic + style" silos, React components combine markup (via JSX), logic (in JavaScript), and optional styling (through various methods) into cohesive units. Each component is responsible for rendering a specific part of the UI, making it easy to reuse in multiple places.

Encapsulation and Reuse

Once a component is built, you can drop it into any part of the application. As long as you pass the appropriate props, the component behaves predictably. This approach helps create consistent design systems and accelerates development. When a bug is fixed in a shared component, the fix automatically propagates across the application.

Readability

Declarative code means developers describe the final UI rather than orchestrate how to get there step by step. If a component's props or state changes, React re-renders just that part. Combined with a unidirectional data flow — where data moves from parent to child — this clarity reduces accidental side effects that can plague large projects.

JSX

JSX, which lets developers write HTML-like syntax in JavaScript files, flew in the face of conventional web development wisdom that demanded strict separation of HTML, CSS, and JS. Yet many developers quickly realized that JSX actually collocated concerns — logic, markup, and sometimes style — rather than conflating them.

Why JSX Works

- Familiarity. Developers used to writing HTML find JSX relatively easy to pick up, even if it initially looks unusual.
- Integration with JS. Because it's essentially syntactic sugar for React.createElement, you can embed complex JavaScript logic right in your markup. Loops, conditionals, and variable interpolations become more natural.
- Tooling. Modern editors and IDEs offer syntax highlighting and error checking for JSX, and many design systems are built around componentization that aligns well with this pattern.

Over time, the community embraced JSX so thoroughly that even those who once disliked it acknowledged its power. Now, single-file component structures are common in other frameworks (Vue, Svelte, Angular with inline templates) as well, proving React was ahead of its time.

Thriving Ecosystem and Community

One of React's undeniable strengths is its extensive ecosystem and the community-driven approach to problem-solving. Because React focuses narrowly on the view layer, developers can pick and choose solutions for routing, state management, testing, data fetching, and more. This flexibility spawned specialized libraries that are now considered best in class:

- State management. Redux popularized a single-store approach for predictable state updates. Others like MobX, Zustand, and Recoil provide alternatives, each addressing different developer preferences.
- Routing. React Router is the go-to for client-side routing, though frameworks like Next.js have their own integrated routing solutions.
- Styling. From plain CSS to CSS Modules to CSS-in-JS (Styled Components, Emotion), React doesn't force a single path. Developers can choose what fits their use case.
- Full frameworks. Next.js and Gatsby turned React into a platform for server-side rendering, static site generation, and advanced deployments.

This community grew so large that it became self-sustaining. Chances are, if you face a React-related issue, someone has already documented a solution.
The synergy between official tools (like Create React App) and third-party libraries ensures new and seasoned developers alike can find robust, time-tested approaches to common problems.

Performance and Scalability

While React's Virtual DOM is a core aspect of its performance story, the library also has advanced techniques to ensure scalability for large applications:

- React Fiber. Introduced around React 16, Fiber was a rewrite of React's reconciliation engine. It improved how React breaks rendering work into small units that can be paused, resumed, or abandoned. This means smoother user experiences, especially under heavy load.
- Concurrent mode (experimental). Aims to let React work on rendering without blocking user interactions. Though still evolving, it sets React apart for high-performance UI tasks.
- Memoization and pure components. React's API encourages developers to use React.memo and memoization Hooks (useMemo, useCallback) to skip unnecessary re-renders. This leads to apps that handle large data sets or complex updates gracefully.

Big-name products with massive traffic — Facebook, Instagram, Netflix, Airbnb — run on React. This track record convinces companies that React can scale effectively in real-world scenarios.

React Hooks: A Paradigm Shift

When React Hooks arrived in version 16.8 (2019), they fundamentally changed how developers write React code. Prior to Hooks, class components were the primary way to manage state and side effects like fetching data or subscribing to events. Although classes worked, they introduced complexities around the binding of this and spread logic across multiple lifecycle methods.

Simplified State and Side Effects

- useState – lets functional components track state in a cleaner way
- useEffect – centralizes side effects like data fetching or setting up subscriptions. Instead of scattering logic among componentDidMount, componentDidUpdate, and componentWillUnmount, everything can live in one place with clear control over dependencies.

Custom Hooks

Possibly the most powerful outcome is custom Hooks. You can extract stateful logic (e.g., form handling, WebSocket connections) into reusable functions. This fosters code reuse and modularity without complex abstractions. It also helped quell skepticism about React's reliance on classes, making it more approachable to those coming from purely functional programming backgrounds.

Hooks revitalized developer enthusiasm. People who had moved on to frameworks like Vue or Angular gave React another look, and many new developers found Hooks-based React easier to learn.

Backing by Facebook (Meta)

A key factor ensuring React's long-term stability is its corporate sponsorship by one of the world's largest tech companies:

- Dedicated engineering team. Facebook employs a team that works on React, guaranteeing regular updates and bug fixes.
- Reliability. Companies choosing React know it's used in mission-critical apps like Facebook and Instagram. This track record instills confidence that React won't be abandoned.
- Open-source collaborations. Facebook's involvement doesn't stop community contributions. Instead, it fuels a cycle where user feedback and corporate resources shape each release.

While other libraries have strong community backing (e.g., Vue) or big-company sponsorship (e.g., Angular by Google), React's synergy with Meta's vast ecosystem has helped it remain stable and well-resourced.

Why React Will Keep Leading

With the front-end world evolving rapidly, how has React maintained its top spot, and why is it likely to stay there?
Mature Ecosystem and Tooling

React is more than a library: it's the center of a vast ecosystem. From code bundlers to full-stack frameworks, thousands of third-party packages revolve around React. Once a technology hits critical mass in package managers, online tutorials, and job postings, dislodging it becomes very difficult. This "network effect" means new projects often default to React simply because it's a safe, well-understood choice.

Constant Innovation

React's willingness to break new ground keeps it relevant. Major changes like Fiber, Hooks, and the upcoming Server Components show that React doesn't rest on past success. Each time a significant development arises in front-end architecture (e.g., SSR, offline-first PWAs, concurrency), React either provides an official solution, or the community quickly creates one.

Developer and Business Momentum

Employers often list React experience as a top hiring priority. This job demand incentivizes developers to learn React, thus growing the talent pool. Meanwhile, businesses know they can find engineers familiar with React, making it less risky to adopt. This cycle continues to reinforce React's position as the go-to library.

Adaptability

React started off focusing primarily on client-side rendering, but it's now used for:

- SSR. Next.js handles server-side rendering and API routes.
- SSG. Gatsby and Next.js can statically generate pages for performance and SEO.
- Native apps. React Native allows developers to build mobile apps using React's paradigm.

By expanding across platforms and rendering strategies, React adapts to practically any use case, making it a one-stop shop for many organizations.

Addressing the Competition

React is not alone. Angular, Vue, Svelte, SolidJS, and others each have loyal followers and unique strengths. Vue, for example, is lauded for its gentle learning curve and integrated reactivity. Angular is praised for its out-of-the-box, feature-complete solution, appealing to enterprises that prefer structure over flexibility. Svelte and SolidJS take innovative approaches to compilation and reactivity, potentially reducing runtime overhead. However, React's dominance persists due to factors like:

- Early adopter advantage. React's head start, plus support from Facebook, made it the first choice for many.
- Tooling and community. The sheer volume of React-related content, tutorials, and solutions far exceeds what other ecosystems have.
- Corporate trust. React is deeply entrenched in the product stacks of numerous Fortune 500 companies.

While it's possible that the front-end space evolves in ways we can't predict, React's community-driven nature and proven record indicate it will remain a pillar in web development for the foreseeable future.

Recognized Pitfalls and Criticisms

No technology is perfect. React's critics point out a few recurring issues:

- Boilerplate and setup. Configuring a new React project for production can be confusing — bundlers, Babel, linting, SSR, code splitting. Tools like Create React App (CRA) help, but advanced setups still require build expertise.
- Fragmented approach. React itself is just the UI library. Developers still have to choose solutions for routing, global state, and side effects, which can be overwhelming for newcomers.
- Frequent changes. React has introduced large updates like Hooks, forcing developers to migrate or learn new patterns. The React team does maintain backward compatibility, but staying on top of best practices can feel like a never-ending task.
Ultimately, these issues haven't slowed React's growth significantly. The community addresses most pain points quickly, and official documentation remains excellent. As a result, even critics acknowledge that React's strengths outweigh its shortcomings, especially for large-scale projects.

Conclusion

React's journey from a nascent library at Facebook to the world's leading front-end technology is marked by visionary ideas, robust engineering, and a dynamic community. Its distinctive approach — combining declarative rendering, components, and the Virtual DOM — set a new standard in how developers think about building UIs. Over time, iterative improvements like Fiber, Hooks, and concurrent features showed that React could continually reinvent itself without sacrificing stability.

Why will React continue to lead? Its massive ecosystem, encompassing everything from integrated frameworks like Next.js to specialized state managers like Redux or Recoil, offers a level of flexibility that appeals to startups, mid-sized companies, and enterprises alike. Ongoing innovations ensure React won't become stagnant: upcoming features like Server Components will simplify data fetching and enable even more seamless user experiences. Plus, backed by Meta's resources and used in production by world-class platforms, React has unmatched proof of scalability and performance in real-world conditions.

While new frameworks challenge React's reign, none so far have unseated it as the default choice for countless developers. Its thriving community, mature tooling, and steady corporate backing create a self-reinforcing cycle of adoption. In a field where frameworks come and go, React has not only stood the test of time but has grown stronger with each passing year. For these reasons, it's hard to imagine React's momentum slowing anytime soon. Indeed, it has become more than just a library: it's an entire ecosystem and philosophy for crafting modern interfaces — and it shows no signs of giving up the throne.
Aspect-oriented programming (AOP) is a programming paradigm that enables the modularisation of concerns that cut across multiple types and objects. It provides additional behavior to existing code without modifying the code itself. AOP can solve many problems in a graceful way that is easy to maintain. One such common problem is adding new behavior to a controller (@Controller) so that it works "outside" the main logic of the controller. In this article, we will look at how to use AOP to add logic when an application returns a successful response (HTTP 200): an entity should be deleted after it is returned to a client. This can relate to applications that, for some reason (e.g., legal), cannot store data for a long time and should delete it once it is processed.

We will be using AspectJ in a Spring application. AspectJ is an implementation of AOP for Java and has good integration with Spring. You can find more about AOP in Spring here.

Possible Solutions

To achieve our goal and delete an entity after the logic in the controller has executed, we can use several approaches. We can implement an interceptor (HandlerInterceptor) or a filter (OncePerRequestFilter). These Spring components can be leveraged to work with HTTP requests and responses, but this requires some study and understanding of that part of Spring. Another way to solve the problem is to use AOP and its Java implementation, AspectJ. AOP makes it possible to reach a solution in a concise way that is very easy to implement and maintain, and it allows you to avoid digging into Spring internals to solve this trivial task. AOP acts as middleware and complements Spring.

Implementation

Let's say we have a CardInfo entity that contains sensitive information that we cannot store for a long time in the database, and we are obliged to delete the entity after we process it. For simplicity, by "processing" we will understand just returning the data to a client that makes a REST request to our application. We want the entity to be deleted right after it is successfully read with a GET request. We need to create a Spring component and annotate it with @Aspect.

Java
@Aspect
@Component
@RequiredArgsConstructor
@ConditionalOnExpression("${aspect.cardRemove.enabled:false}")
public class CardRemoveAspect {

    private final CardInfoRepository repository;

    @Pointcut("execution(* com.cards.manager.controllers.CardController.getCard(..)) && args(id)")
    public void cardController(String id) {
    }

    @AfterReturning(value = "cardController(id)", argNames = "id")
    public void deleteCard(String id) {
        repository.deleteById(id);
    }
}

- @Component – marks the class as a Spring component so that it can be managed by the Spring IoC container.
- @Aspect – indicates that this class is an aspect. It is automatically detected by Spring and used to configure Spring AOP.
- @Pointcut – declares a predicate that matches join points (points during the execution of a program).
- execution() – matches the execution of methods within the defined package (in our case, the exact method name is set).
- @AfterReturning – advice to be run after a join point completes normally (without throwing an exception).

I also annotated the class with @ConditionalOnExpression to be able to switch this functionality on and off from properties. This small piece of code with a couple of one-liner methods does the job we are interested in. The cardController(String id) method defines the exact place/moment where the logic defined in the deleteCard(String id) method is executed.
In our case, it is the getCard() method in the CardController class, which is located in the com.cards.manager.controllers package. deleteCard(String id) contains the logic of the advice; in this case, we call CardInfoRepository to delete the entity by id. Since CardRemoveAspect is a Spring component, other components can easily be injected into it.

Java
@Repository
public interface CardInfoRepository extends CrudRepository<CardInfoEntity, String> {
}

@AfterReturning indicates that the logic should be executed after a successful exit from the method defined in cardController(String id). CardController looks as follows:

Java
@RestController
@RequiredArgsConstructor
@RequestMapping("/api/cards")
public class CardController {

    private final CardService cardService;
    private final CardInfoConverter cardInfoConverter;

    @GetMapping("/{id}")
    public ResponseEntity<CardInfoResponseDto> getCard(@PathVariable("id") String id) {
        return ResponseEntity.ok(cardInfoConverter.toDto(cardService.getCard(id)));
    }
}

Conclusion

AOP is a very powerful approach for solving many problems that would otherwise be hard to implement or difficult to maintain. It provides a convenient way to work with and around the web layer without the need to dig into Spring configuration details. To view the full example application where AOP was used as shown in this article, read my other article on creating a service for sensitive data using Spring and Redis. The source code of the full version of this service is available on GitHub.
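For comparison with the interceptor option mentioned under "Possible Solutions," here is a minimal sketch (my own illustration, not code from the article; it assumes the same CardInfoRepository and the GET /api/cards/{id} endpoint) of how roughly the same behavior could be expressed with a HandlerInterceptor:

Java
// Sketch of the HandlerInterceptor alternative mentioned under "Possible Solutions".
// Assumes the same CardInfoRepository and the GET /api/cards/{id} endpoint; the class name is illustrative.
// Unlike the aspect, this interceptor must also be registered via WebMvcConfigurer#addInterceptors.
import jakarta.servlet.http.HttpServletRequest;   // use javax.servlet.* on Spring Boot 2.x
import jakarta.servlet.http.HttpServletResponse;

import java.util.Map;

import org.springframework.stereotype.Component;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.HandlerMapping;

@Component
public class CardRemoveInterceptor implements HandlerInterceptor {

    private final CardInfoRepository repository;

    public CardRemoveInterceptor(CardInfoRepository repository) {
        this.repository = repository;
    }

    @Override
    @SuppressWarnings("unchecked")
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                Object handler, Exception ex) {
        // Only delete when a GET to /api/cards/{id} completed successfully (HTTP 200) without an exception.
        if (ex == null && response.getStatus() == 200 && "GET".equals(request.getMethod())
                && request.getRequestURI().startsWith("/api/cards/")) {
            Map<String, String> pathVariables = (Map<String, String>)
                    request.getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
            if (pathVariables != null && pathVariables.containsKey("id")) {
                repository.deleteById(pathVariables.get("id"));
            }
        }
    }
}

Compared with the aspect, this version needs explicit registration and manual extraction of the path variable, which is part of why the article favors the AOP approach for this task.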
Problem statement: Ensuring the resilience of a microservices-based e-commerce platform.

System resilience is a key requirement for e-commerce platforms during scaling operations, keeping services operational and delivering consistent performance to users. We have built a microservices-based platform that encounters sporadic failures when faced with heavy traffic. Degraded service availability and revenue impact occur mainly because of Kubernetes pod crashes, resource exhaustion, and network disruptions during peak shopping seasons. The organization plans to use the CNCF-incubated project Litmus to assess and improve the platform's resilience. Simulated failure tests with Litmus make our weak points clearer by triggering real-world failure situations such as pod terminations, network delays, and resource limits. The experiments let us validate autoscaling, test disaster recovery procedures, and tune Kubernetes settings for overall system reliability. This creates a solid foundation for enduring failures and handling busy traffic periods without degrading the user experience. Applying chaos engineering proactively to our infrastructure reduces risk, increases observability, and allows us to develop automated recovery methods that strengthen the platform's resilience under all operational conditions.

Set Up the Chaos Experiment Environment

Install LitmusChaos in your Kubernetes cluster:

Shell
helm repo add litmuschaos https://litmuschaos.github.io/litmus-helm/
helm repo update
helm install litmus litmuschaos/litmus

Verify the installation:

Shell
kubectl get pods -n litmus

Note: Ensure your cluster is ready for chaos experiments.

Define the Chaos Experiment

Create a ChaosExperiment YAML file to simulate a pod delete scenario. Example (pod-delete.yaml):

YAML
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosExperiment
metadata:
  name: pod-delete
  namespace: litmus
spec:
  definition:
    scope: Namespaced
    permissions:
      - apiGroups: ["*"]
        resources: ["*"]
        verbs: ["*"]
    image: "litmuschaos/go-runner:latest"
    args:
      - -c
      - ./experiments/generic/pod_delete/pod_delete.test
    command:
      - /bin/bash

Install ChaosOperator and Configure the Service Account

Deploy ChaosOperator to manage experiments:

Shell
kubectl apply -f https://raw.githubusercontent.com/litmuschaos/litmus/master/litmus-operator/cluster-k8s.yml

Note: Create a ServiceAccount to grant the necessary permissions.
Inject Chaos into the Target Application

Label the application namespace for chaos:

Shell
kubectl label namespace <target-namespace> litmuschaos.io/chaos=enabled

Deploy a ChaosEngine to trigger the experiment. Example (chaosengine.yaml):

YAML
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: pod-delete-engine
  namespace: <target-namespace>
spec:
  appinfo:
    appns: '<target-namespace>'
    applabel: 'app=<your-app-label>'
    appkind: 'deployment'
  chaosServiceAccount: litmus-admin
  monitoring: false
  experiments:
    - name: pod-delete

Apply the ChaosEngine:

Shell
kubectl apply -f chaosengine.yaml

Monitor the Experiment

View the progress:

Shell
kubectl describe chaosengine pod-delete-engine -n <target-namespace>

Check the status of the chaos pods:

Shell
kubectl get pods -n <target-namespace>

Analyze the Results

After the experiment, review logs and metrics to determine whether the application recovered automatically or failed under stress. Here are some metrics to monitor:

- Application response time
- Error rates during and after the experiment
- Time taken for pods to recover

Solution

Root cause identified: during high traffic, pods failed due to an insufficient number of replicas in the deployment and improper resource limits.

Fixes applied:

- Increased the number of replicas in the deployment to handle higher traffic
- Configured proper resource requests and limits for CPU and memory in the pod specification
- Implemented a Horizontal Pod Autoscaler (HPA) to handle traffic spikes dynamically

Conclusion

By using LitmusChaos to simulate pod failures, we identified key weaknesses in the e-commerce platform's Kubernetes deployment. The chaos experiment demonstrated that resilience can be significantly improved with scaling and resource allocation adjustments. Chaos engineering enabled proactive system hardening, leading to better uptime and customer satisfaction.
In Terraform, comments are lines or sections of code that are ignored during execution but are useful for providing context, explanations, or notes within the code. They ensure team members can quickly grasp the purpose and functionality of configurations, reducing confusion and improving efficiency. In this article, we'll cover the types of comments in Terraform, how to use them effectively, and best practices for writing clear, concise annotations.

Types of Comments in Terraform

There are two main types of comments in Terraform. They are used to annotate the configuration by providing context and explanations:

- Single-line comments. Start with # or // and are used for brief explanations or disabling specific lines of code.
- Multi-line comments. Enclosed in a comment block between /* and */ and used for longer explanations or commenting out blocks of code.

Regardless of type, comments are ignored by the Terraform parser and do not affect the actual execution of the code. When using custom tooling or integrations, you may encounter references to /// comments. These are not officially supported as part of Terraform's syntax but are sometimes utilized in specialized workflows. For example, some documentation generation tools or linters may interpret lines beginning with /// as markers for extracting structured documentation, similar to how certain programming languages use triple-slash comments.

When to Use Terraform Comments

Comments should enhance understanding and collaboration without cluttering the codebase. Here are some common scenarios when you should include comments in your configurations:

- Explain the purpose. Describe the purpose of a resource, variable, or module to provide context.
- Document assumptions. Highlight assumptions made during the configuration.
- Mark TODOs. Use comments to note areas that need further attention or future updates (e.g., # TODO: Add monitoring).
- Provide references. Link to documentation or ticket numbers related to the code.
- Versioning. Comment on the configuration to reflect the Terraform or provider version compatibility.

How to Add Single-Line Comments in Terraform Files

Single-line comments are added in Terraform code using the # or // symbols. Both styles are supported, and you can use them interchangeably, depending on your preference.

Note: Using # is considered the default comment style and is more commonly used in Terraform configurations. It is the standard style in most Terraform examples and documentation.

Let's see some examples.

Single-line and inline comments using the hash symbol:

Plain Text
# Define an AWS EC2 instance
resource "aws_instance" "example" {
  ami           = "ami-12345678" # Amazon Machine Image ID
  instance_type = "t2.micro"     # Instance type
}

Single-line and inline comments using double slashes:

Plain Text
// Define an AWS S3 bucket
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket-name" // Name of the S3 bucket
  acl    = "private"             // Access control list for the bucket
}

Inline comments can follow any valid Terraform configuration line. Ensure there is a space between the code and the comment for readability.

How to Add Multiline Comments in Terraform Files

For multi-line comments, Terraform supports standard block comments using the /* ... */ syntax. Everything between /* and */ is treated as a comment, similar to multi-line comments in many programming languages like Java or C. For example:

Plain Text
/*
This is a multiline comment in Terraform.
Use this to document:
- Configuration details
- Explanation of resources
- Notes for other team members
Block comments are clean and versatile.
*/
resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

Alternatively, you can use the # symbol at the beginning of each line for multiline comments. While this isn't a true "block comment," it works for multiple lines.

Plain Text
# This is a multiline comment in Terraform.
# Use this format when you prefer single-line hash comments:
# - Each line starts with a hash (#).
# - Provides clear separation for each line.

Multiline comments are particularly useful for temporarily commenting out sections of your Terraform code or providing detailed documentation directly in your configuration files.

Best Practices for Commenting in Terraform Code

Follow the best practices below to create clear, maintainable, and professional Terraform configurations.

- Standardize comment style. Use a consistent format for comments across your infrastructure to improve readability and reduce confusion when working with large teams.
- Avoid over-commenting. Too many comments, especially redundant ones, can clutter the codebase and reduce readability.
- Avoid overusing inline comments. Excessive inline comments can break code flow and make configurations harder to read. Use inline comments sparingly and only for clarifications that are immediately relevant.
- Keep sensitive information out of comments. Comments may inadvertently expose passwords, API keys, or other sensitive details, leading to security vulnerabilities. Avoid including sensitive information in comments or configuration files; use environment variables or secret management tools instead.
- Focus on intent. Terraform code is typically self-explanatory. Comments should add value by explaining why the configuration exists, not how it works. Focus on explaining the reasoning behind design choices, constraints, or any non-obvious decisions.
- Document critical resources. Highlight resources with high impact (e.g., production databases, IAM roles, or security groups) with notes about dependencies, limitations, or risks.
- Keep comments up to date. Update comments when making changes to the code. Outdated comments can mislead other contributors.
- Indicate manual processes. Some configurations require manual steps, and documenting these ensures they are not overlooked. Use comments to flag any manual actions or processes required before or after deployment.
- Automate comment checks. Tools like TFLint can be customized to include comment checks to ensure consistency, accuracy, and compliance with your team's coding standards.

Key Points

Using comments effectively can improve collaboration within teams and make Terraform configurations easier to understand and maintain over time. However, it's important to strike a balance — comments should enhance understanding without cluttering the codebase.
When a button is pressed, a sensor detects a temperature change, or a transaction flows through a system, we call it an event. An event is an action or state change that is important to an application. Event stream processing (ESP) refers to a technique for processing data in real time as it flows through a system. The main objective of ESP is to take action on the data as it arrives. This enables real-time analytics and action, which is important in scenarios where low-latency response is a prerequisite, e.g., fraud detection, monitoring, and automated decision-making systems. Patterns play a big role in ESP, as they help spot important sequences or behaviors in data that keeps flowing non-stop.

What Does an Event Stream Processing Pattern Look Like?

In the world of ESP, a "pattern" is a recurrent sequence or combination of events that is discovered and processed in real time from continuously flowing data. Patterns can be classified into the following categories.

Condition-Based Patterns

These are recognized when a set of event stream conditions is met within a certain period of time. For example, a smart home automation system could identify that there has been no motion in any room for the last two hours, all doors and windows are closed, and it is after 10 pm. In this case, the system may decide to turn off all the lights.

Aggregation Patterns

Aggregation patterns are detected when a group of events reaches a specific threshold. One example would be recognizing when a specific quantity of clicks on an advertisement within a specified period of time triggers a campaign or marketing alert.

Time-Related or Temporal Patterns

Finding event sequences within a given time frame is known as temporal pattern detection. For instance, if multiple temperature sensors show notable variations in a brief period of time, this could point to a possible issue like overheating.

Abnormality or Anomaly Detection Patterns

The purpose of anomaly patterns is to identify exceptional or unexpected data behavior. For example, an abrupt increase in online traffic can be interpreted as a sign of system congestion or a possible security risk.

How Beneficial Is Pattern Recognition in ESP?

To analyze, comprehend, and react in real time to the flood of massive amounts of streaming data, ESP systems need pattern matching. Patterns can be regarded as snapshot abstractions derived from event streams that help recognize important sequences or behaviors within continuous streams of data. Since the stream is coming at us in "real time," it cannot stop and wait for us. Data waits for no one! In fact, more keeps arriving every few seconds or milliseconds, depending on the expected volume. Thus, we need a methodology that automatically finds useful patterns in incoming event streams, so that as soon as an interesting trend, anomaly, or event occurs in the stream, we become aware of it and can act or decide immediately.

Instantaneous Decision-Making

Businesses can make decisions immediately rather than waiting for manual analysis by spotting recurring patterns as they appear. For instance, a manufacturing plant's automatic cooling system could be set to react when it detects a trend of rising temperatures, preventing harm to the machinery.

Enhanced Automation

Automated reactions to particular events or conditions are made possible by patterns.
This reduces the need for human intervention and allows systems to self-manage in response to detected anomalies, trends, or events. For example, based on recognized fraud trends, an online payment system may automatically identify and block questionable transactions.

Improved Predictive Skills

Future occurrences can be predicted with the aid of pattern recognition. Systems can predict trends, customer behavior, or possible system problems by examining historical behaviors. For example, patterns in user behavior on an e-commerce site can predict future purchases, enabling targeted promotions.

Enhanced User Experience

Identifying user behavior patterns in applications that interact with customers enables a smooth and customized experience. For instance, identifying browsing or purchase trends allows for tailored recommendations, which raises user engagement and happiness. Additionally, patterns aid in the detection of inconsistency or irregularity, which may be a sign of dangers or failures. Businesses can take quick action to reduce risks by identifying patterns of anomalous activity in cybersecurity, which aids in the real-time detection of possible breaches or attacks.

The Role of Apache Flink's FlinkCEP Library

FlinkCEP, a library built on Apache Flink, helps users spot complex patterns in event streams. Apache Flink provides a strong foundation for stream processing, while FlinkCEP focuses on complex event processing (CEP) for endless data streams. To use FlinkCEP in Apache Flink for event stream processing, we need to follow a few main steps: setting up the environment, defining event patterns, and processing events based on those patterns.

The Pattern API allows us to define patterns over the event stream. With this API, we can build complex pattern sequences to extract from the input stream. Each complex pattern sequence consists of multiple simple patterns, i.e., patterns looking for individual events with the same properties. Patterns come in two types: singleton and looping. Singleton patterns match one event, while looping patterns can match multiple events. For instance, we might want to create a pattern that finds a sequence where a large transaction (over 50k) happens before a smaller one (see the sketch at the end of this section). To connect the event stream and the pattern, we use the PatternStream API. After applying the pattern, we can use the select() function to find events that match it. This allows us to do something with the matching patterns, such as sending an alert or triggering some other kind of action. FlinkCEP supports more complex patterns like loops, time windows, and branches (i.e., executing one pattern if another has matched). We may need to tune for performance as our patterns become more complex.

Note: You can read here to learn more about examples and implementations in Java and Scala from the Apache Flink project.

To Wrap Things Up

Applying patterns to event stream processing is very valuable, as it allows companies to automate things, improve operational efficiency, and make faster, more accurate decisions. With the FlinkCEP library, we don't have to track the relationships between different events ourselves. Instead, we get a powerful declarative interface to define patterns over event streams and capture complex sequences of events over time, such as an order of actions or rare combinations. There are several challenges and limitations that we may encounter when using FlinkCEP, such as complexity in defining patterns, event time handling, and performance overhead.
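The article defers full code samples to the Apache Flink documentation; to make the steps above concrete, here is a minimal, illustrative Java sketch of the "large transaction followed by a smaller one" pattern using the Pattern API, CEP.pattern(), and select(). The Transaction class, field names, and thresholds are my own, the flink-cep dependency is assumed, and exact API details can vary between Flink versions, so treat this as a sketch rather than the article's implementation.

Java
// Minimal FlinkCEP sketch (illustrative event type and thresholds): detect a transaction
// over 50k followed by a smaller one, then react to each match.
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.List;
import java.util.Map;

public class LargeThenSmallTransactionJob {

    // Simple POJO event; public fields and a no-arg constructor keep Flink's serializer happy.
    public static class Transaction {
        public String accountId;
        public double amount;

        public Transaction() { }

        public Transaction(String accountId, double amount) {
            this.accountId = accountId;
            this.amount = amount;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Transaction> transactions = env.fromElements(
                new Transaction("acc-1", 60_000),
                new Transaction("acc-1", 150));

        // Two singleton patterns chained with next(): a "large" event strictly followed by a "small" one.
        Pattern<Transaction, ?> largeThenSmall = Pattern.<Transaction>begin("large")
                .where(new SimpleCondition<Transaction>() {
                    @Override
                    public boolean filter(Transaction t) {
                        return t.amount > 50_000;
                    }
                })
                .next("small")
                .where(new SimpleCondition<Transaction>() {
                    @Override
                    public boolean filter(Transaction t) {
                        return t.amount <= 50_000;
                    }
                });

        // Connect the stream and the pattern, then select the matching event sequences.
        PatternStream<Transaction> patternStream = CEP.pattern(transactions, largeThenSmall);

        patternStream.select(new PatternSelectFunction<Transaction, String>() {
            @Override
            public String select(Map<String, List<Transaction>> match) {
                return "Suspicious sequence on account " + match.get("large").get(0).accountId;
            }
        }).print();

        env.execute("FlinkCEP pattern example");
    }
}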
Please give this write-up a thumbs up and share it if you think it's helpful!
When developers set up and integrate services, they often face challenges that can take up a lot of time. Starters help simplify this process by organizing code and making it easier to manage. Let's take a look at creating two starters, configuring their settings automatically, and using them in a service.

So, what are Spring Boot starters, exactly? What benefits do they provide? Spring Boot starters are packages that streamline the process of incorporating libraries and components into Spring projects, making it simpler and more efficient to manage dependencies while cutting down development time significantly.

Benefits of Using Spring Boot Starters

Integration of Libraries

- Starters include all the dependencies needed for specific technologies. For example, spring-boot-starter-web provides everything for building web applications, while spring-boot-starter-data-jpa helps with JPA database work.
- By adding these starters to a project, developers can start working with the desired technology without worrying about compatibility issues or version differences.

Focus on Business Logic

- Developers can concentrate on creating business logic instead of dealing with infrastructure code.
- This approach speeds up development and feature deployment, ultimately boosting team productivity.

Using Configurations

Using predefined setups helps ensure consistency in setting up and organizing projects, making it easier to maintain and advance the code. Moreover, it aids in onboarding team members to the project by offering a clear code structure and setup.

Project Enhancements

- Using starters that include well-known libraries simplifies updating dependencies and integrating new Spring Boot versions.
- The support from the Spring team and community behind these starters also helps resolve any questions or obstacles that might come up during development.

Task Description

In this article, we will address the issue of consolidating data from sources such as REST and GraphQL services. This problem is often encountered in projects with a microservice architecture, where it is necessary to combine data coming from different services.

When it comes to solutions in a microservices setup, it's possible to establish a microservice for each integration. This approach is justifiable when the integration is extensive and there are resources for its maintenance. However, in scenarios like working with a monolith or lacking the resources to support multiple microservices, opting for starters could be more practical.

The rationale behind selecting a library starter includes:

- Business logic segmentation. Starters facilitate the separation of business logic and integration configuration.
- Following the SOLID principles. Breaking down functionality into modules aligns with principles enhancing code maintainability and scalability.
- Simplified setup. Starters streamline the process of configuring services by minimizing the required amount of configuration code.
- Ease of use. Integrating a service becomes more straightforward by adding a dependency and configuring essential parameters.

Our Scenario

Let's illustrate the solution with an example involving a tour aggregator that gathers data from tour operators and merges it. To start off, we will develop two starters (tour-operator-one-starter and tour-operator-two-starter), both of which will use a shared module (common-model) containing the fundamental models and interfaces. These starter libraries will connect to the aggregator service (tour-aggregator).
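The contents of common-model are not listed at this point in the article, so the following is an assumed sketch of what the shared contract used by both starters could look like. Only the names TourOperatorService, TourOperatorRequest, and TourOperatorResponse appear in the code shown later; the TourDeal type and all fields are illustrative.

Java
// Assumed sketch of the shared common-model module (field names and TourDeal are illustrative).
// The article's code calls TourOperatorResponse.builder(), which suggests Lombok's @Builder in the real module.
import java.util.List;

public interface TourOperatorService {
    TourOperatorResponse makeRequest(TourOperatorRequest request);
}

class TourOperatorRequest {
    private final String destination;
    private final int nights;

    TourOperatorRequest(String destination, int nights) {
        this.destination = destination;
        this.nights = nights;
    }

    String getDestination() { return destination; }
    int getNights() { return nights; }
}

class TourDeal {
    private final String hotelName;
    private final double price;

    TourDeal(String hotelName, double price) {
        this.hotelName = hotelName;
        this.price = price;
    }

    String getHotelName() { return hotelName; }
    double getPrice() { return price; }
}

class TourOperatorResponse {
    private final List<TourDeal> deals;

    TourOperatorResponse(List<TourDeal> deals) { this.deals = deals; }

    List<TourDeal> getDeals() { return deals; }
}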
Creating tour-operator-one-starter

This starter is designed to integrate with the first tour operator and fetch data via its REST API.

All official starters use the naming scheme spring-boot-starter-*, where * denotes a specific type of application. Third-party starters should not start with spring-boot, as that prefix is reserved for official starters from the Spring team. Typically, third-party starters begin with the project name. For example, my starter will be named tour-operator-one-spring-boot-starter.

1. Create pom.xml

Add dependencies.

XML
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-configuration-processor</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>com.common.model</groupId>
        <artifactId>common-model</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

2. Create TourOperatorOneProperties

These are the properties we will set in tour-aggregator to configure our starter.

Java
@ConfigurationProperties(prefix = "tour-operator.one.service")
public class TourOperatorOneProperties {

    private final Boolean enabled;
    private final String url;
    private final Credentials credentials;

    public TourOperatorOneProperties(
            Boolean enabled,
            String url,
            Credentials credentials) {
        this.enabled = enabled;
        this.url = url;
        this.credentials = credentials;
    }

    //getters

    public static class Credentials {

        private final String username;
        private final String password;

        public Credentials(String username, String password) {
            this.username = username;
            this.password = password;
        }

        //getters
    }
}

3. Create TourOperatorOneAutoConfiguration

@AutoConfiguration – indicates that this class is a configuration class for Spring Boot auto-configuration.
@ConditionalOnProperty – activates the configuration if the property tour-operator.one.service.enabled is set to true. If the property is missing, the configuration is also activated due to matchIfMissing = true.
@EnableConfigurationProperties(TourOperatorOneProperties.class) – enables support for @ConfigurationProperties for the TourOperatorOneProperties class.
Java
@AutoConfiguration
@ConditionalOnProperty(prefix = "tour-operator.one.service", name = "enabled", havingValue = "true", matchIfMissing = true)
@EnableConfigurationProperties(TourOperatorOneProperties.class)
public class TourOperatorOneAutoConfiguration {

    private static final Logger log = LoggerFactory.getLogger(TourOperatorOneAutoConfiguration.class);

    private final TourOperatorOneProperties properties;

    public TourOperatorOneAutoConfiguration(TourOperatorOneProperties properties) {
        this.properties = properties;
    }

    @Bean("operatorOneRestClient")
    public RestClient restClient(RestClient.Builder builder) {
        log.info("Configuration operatorRestClient: {}", properties);
        return builder
                .baseUrl(properties.getUrl())
                .defaultHeaders(httpHeaders -> {
                    if (null != properties.getCredentials()) {
                        httpHeaders.setBasicAuth(
                                properties.getCredentials().getUsername(),
                                properties.getCredentials().getPassword());
                    }
                })
                .build();
    }

    @Bean("tourOperatorOneService")
    public TourOperatorOneServiceImpl tourOperatorService(TourOperatorOneProperties properties,
                                                          @Qualifier("operatorOneRestClient") RestClient restClient) {
        log.info("Configuration tourOperatorService: {} and restClient: {}", properties, restClient);
        return new TourOperatorOneServiceImpl(restClient);
    }
}

In this example, I use @ConditionalOnProperty, but there are many other conditional annotations:

@ConditionalOnBean – creates a bean only when a specified bean exists in the BeanFactory
@ConditionalOnMissingBean – creates a bean only if a particular bean is not found in the BeanFactory
@ConditionalOnClass – creates a bean only when a specific class is present in the classpath
@ConditionalOnMissingClass – acts oppositely to @ConditionalOnClass

You should choose what suits your needs best. You can learn more about conditional annotations here.

4. Create TourOperatorOneServiceImpl

In this class, we implement the base interface and lay down the main business logic for retrieving data from the first tour operator and standardizing it according to the common interface.

Java
public class TourOperatorOneServiceImpl implements TourOperatorService {

    private final RestClient restClient;

    public TourOperatorOneServiceImpl(@Qualifier("operatorOneRestClient") RestClient restClient) {
        this.restClient = restClient;
    }

    @Override
    public TourOperatorResponse makeRequest(TourOperatorRequest request) {
        // transformation of our request into the one that the tour operator will understand
        var tourRequest = mapToOperatorRequest(request);

        var responseList = restClient
                .post()
                .body(tourRequest)
                .retrieve()
                .toEntity(new ParameterizedTypeReference<List<TourProposition>>() {
                });

        return TourOperatorResponse.builder()
                .deals(responseList
                        .getBody()
                        .stream()
                        .map(ModelUtils::mapToCommonModel)
                        .toList())
                .build();
    }
}

5. Create Auto-Configuration File

To register the auto-configuration, we create the file resources/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports.

Plain Text
com.tour.operator.one.autoconfiguration.TourOperatorOneAutoConfiguration

This file contains a collection of configurations. In my scenario, only one configuration is listed. If you have multiple configurations, make sure that each configuration is listed on a separate line.

By creating this file, you are informing Spring Boot that it should load and use the TourOperatorOneAutoConfiguration class for setup when the conditions specified by the @ConditionalOnProperty annotation are satisfied.
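To make one of the conditional annotations above concrete: a common pattern is to guard the starter's bean with @ConditionalOnMissingBean, so a consuming service can supply its own implementation. This is not part of the article's starter, just a hedged sketch of how the bean definition inside TourOperatorOneAutoConfiguration could be adapted:

Java
// Variant of the bean definition above: the starter only contributes its default
// TourOperatorOneServiceImpl when the consuming application has not already
// declared a bean named "tourOperatorOneService" of its own.
@Bean("tourOperatorOneService")
@ConditionalOnMissingBean(name = "tourOperatorOneService")
public TourOperatorOneServiceImpl tourOperatorService(
        @Qualifier("operatorOneRestClient") RestClient restClient) {
    return new TourOperatorOneServiceImpl(restClient);
}

With this guard in place, an application that defines its own tourOperatorOneService bean simply wins, and the starter's default stays out of the context.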
Thus, we have set up the integration with the first tour operator by creating configuration classes and beans and leveraging properties.

Creating tour-operator-two-starter

Next up is tour-operator-two-starter, a kit designed to integrate with the second tour operator and retrieve data from a GraphQL server through a straightforward HTTP request. Let's follow the same process we used for tour-operator-one-starter.

1. Create pom.xml

Add dependencies.

XML
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-configuration-processor</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>com.common.model</groupId>
        <artifactId>common-model</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

2. Create TourOperatorTwoProperties

These are the properties we will set in tour-aggregator to configure our starter.

Java
@ConfigurationProperties(prefix = "tour-operator.two.service")
public class TourOperatorTwoProperties {

    private final Boolean enabled;
    private final String url;
    private final String apiKey;

    public TourOperatorTwoProperties(
            Boolean enabled,
            String url,
            String apiKey) {
        this.enabled = enabled;
        this.url = url;
        this.apiKey = apiKey;
    }

    //getters
}

3. Create TourOperatorTwoAutoConfiguration

Java
@AutoConfiguration
@ConditionalOnProperty(prefix = "tour-operator.two.service", name = "enabled", havingValue = "true", matchIfMissing = true)
@EnableConfigurationProperties(TourOperatorTwoProperties.class)
public class TourOperatorTwoAutoConfiguration {

    private static final Logger log = LoggerFactory.getLogger(TourOperatorTwoAutoConfiguration.class);

    private final TourOperatorTwoProperties properties;

    public TourOperatorTwoAutoConfiguration(TourOperatorTwoProperties properties) {
        log.info("Configuration with: {}", properties);
        this.properties = properties;
    }

    @Bean("operatorTwoRestClient")
    public RestClient restClient(RestClient.Builder builder) {
        log.info("Configuration operatorRestClient: {}", properties);
        return builder
                .baseUrl(properties.getUrl())
                .defaultHeaders(httpHeaders -> {
                    httpHeaders.set("X-Api-Key", properties.getApiKey());
                })
                .build();
    }

    @Bean("tourOperatorTwoService")
    public TourOperatorTwoServiceImpl tourOperatorService(TourOperatorTwoProperties properties,
                                                          @Qualifier("operatorTwoRestClient") RestClient restClient) {
        log.info("Configuration tourOperatorService: {} and restClient: {}", properties, restClient);
        return new TourOperatorTwoServiceImpl(restClient);
    }
}

4. Create TourOperatorTwoServiceImpl

This class receives data from the second tour operator.
Java
public class TourOperatorTwoServiceImpl implements TourOperatorService {

    private static final String QUERY = """
            query makeTourRequest($request: TourOperatorRequest) {
                makeTourRequest(request: $request) {
                    id
                    startDate
                    endDate
                    price
                    currency
                    days
                    hotel {
                        hotelName
                        hotelRating
                        countryCode
                    }
                }
            }
            """;

    private final RestClient restClient;

    public TourOperatorTwoServiceImpl(@Qualifier("operatorTwoRestClient") RestClient restClient) {
        this.restClient = restClient;
    }

    @Override
    public TourOperatorResponse makeRequest(TourOperatorRequest request) {
        var tourRequest = mapToOperatorRequest(request);
        var variables = Map.ofEntries(Map.entry("request", tourRequest));
        var requestBody = Map.ofEntries(
                Map.entry("query", QUERY),
                Map.entry("variables", variables));

        var response = restClient
                .post()
                .body(requestBody)
                .retrieve()
                .toEntity(QueryResponse.class);

        return TourOperatorResponse.builder()
                .deals(response.getBody()
                        .data()
                        .makeTourRequest()
                        .stream()
                        .map(ModelUtils::mapToCommonModel).toList())
                .build();
    }
}

5. Create Auto-Configuration File

Create the file resources/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports.

Plain Text
com.tour.operator.two.autoconfiguration.TourOperatorTwoAutoConfiguration

Creating and Using the Aggregator Service

The aggregator service is designed to gather data from the tour operators. This involves connecting the starters, configuring their parameters, and using the beans through a shared interface.

1. Connect Starter Libraries

Include the dependencies for the two libraries in the pom.xml.

XML
<dependencies>
    ...
    <dependency>
        <groupId>com.tour.operator</groupId>
        <artifactId>tour-operator-one-spring-boot-starter</artifactId>
        <version>0.0.2-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.tour.operator</groupId>
        <artifactId>tour-operator-two-spring-boot-starter</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
    ...
</dependencies>

2. Configure Parameters in application.yaml

Specify the necessary data, such as URLs and connection parameters, in application.yaml.

YAML
spring:
  application:
    name: tour-aggregator

tour-operator:
  one:
    service:
      enabled: true
      url: http://localhost:8090/api/tours
      credentials:
        username: user123
        password: pass123
  two:
    service:
      enabled: true
      url: http://localhost:8091/graphql
      api-key: 11d1de45-5743-4b58-9e08-f6038fe05c8f

3. Use the Services

We use the resulting beans, which implement the TourOperatorService interface, within the TourServiceImpl class. This class outlines the process of retrieving and aggregating data from the various tour operators.
Java
@Service
public class TourServiceImpl implements TourService {

    private static final Logger log = LoggerFactory.getLogger(TourServiceImpl.class);

    private final List<TourOperatorService> tourOperatorServices;
    private final Executor tourOperatorExecutor;
    private final Integer responseTimeout;

    public TourServiceImpl(List<TourOperatorService> tourOperatorServices,
                           @Qualifier("tourOperatorTaskExecutor") Executor tourOperatorExecutor,
                           @Value("${app.response-timeout:5}") Integer responseTimeout) {
        this.tourOperatorServices = tourOperatorServices;
        this.tourOperatorExecutor = tourOperatorExecutor;
        this.responseTimeout = responseTimeout;
    }

    public List<TourOffer> getTourOffers(@RequestBody TourOperatorRequest request) {
        log.info("Send request: {}", request);

        var futures = tourOperatorServices.stream()
                .map(tourOperator -> CompletableFuture.supplyAsync(() -> tourOperator.makeRequest(request), tourOperatorExecutor)
                        .orTimeout(responseTimeout, TimeUnit.SECONDS)
                        .exceptionally(ex -> TourOperatorResponse.builder().deals(List.of()).build())
                )
                .toList();

        var response = futures.stream()
                .map(CompletableFuture::join)
                .map(TourOperatorResponse::getDeals)
                .filter(Objects::nonNull)
                .flatMap(List::stream)
                .toList();

        return response;
    }
}

Allocating Resources for Calls

It's good practice to allocate separate resources for these calls, allowing better thread management and performance optimization.

Java
@Configuration
public class ThreadPoolConfig {

    private final Integer threadCount;

    public ThreadPoolConfig(@Value("${app.thread-count:5}") Integer threadCount) {
        this.threadCount = threadCount;
    }

    @Bean(name = "tourOperatorTaskExecutor")
    public Executor tourOperatorTaskExecutor() {
        return Executors.newFixedThreadPool(threadCount);
    }
}

This code ensures efficient management of asynchronous tasks and helps avoid blocking the main thread, thereby improving overall system performance.

Conclusion

In this article, we created two starters for reaching out to tour operators through REST and GraphQL interfaces. Each starter includes all the configuration and components needed to simplify its usage. Afterward, we wired them into a service that calls them asynchronously and aggregates the data.

This approach solved several problems:

Simplified integration and setup. By using auto-configuration and properties instead of manual configuration code, we saved time during development.
Improved flexibility and usability. Separating functions into starters improved the code structure and simplified maintenance.
System flexibility. We can easily add new integrations without breaking the existing logic.

Now, our system is better equipped to adapt and scale effortlessly while being easier to manage, leading to improvements in its architecture and performance. Here's the full code. I appreciate you reading this article. I look forward to hearing your thoughts and feedback!
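As a small addendum to the walkthrough above: TourOperatorTwoServiceImpl deserializes the GraphQL response into a QueryResponse type that is referenced but not shown. A minimal sketch, assuming the standard GraphQL response envelope and the field name used in the query, could look like this:

Java
import java.util.List;

// Hypothetical wrapper mirroring the GraphQL envelope:
// { "data": { "makeTourRequest": [ ...tour propositions... ] } }
public record QueryResponse(Data data) {

    public record Data(List<TourProposition> makeTourRequest) {
    }
}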
Hi, engineers! Have you ever been asked to implement a retry algorithm for your Java code? Or maybe you saw something similar in the codebase of your project?

Java
public void someActionWithRetries() {
    int maxRetries = 3;
    int attempt = 0;

    while (true) {
        attempt++;
        try {
            System.out.println("attempt number = " + attempt);
            performTask();
            System.out.println("Task completed");
            break;
        } catch (Exception e) {
            System.out.println("Failure: " + e.getMessage());
            if (attempt >= maxRetries) {
                System.out.println("Max retries attempt");
                throw new RuntimeException("Unable to complete task after " + maxRetries + " attempts", e);
            }
            System.out.println("Retrying");
        }
    }
}

We can see that the code above executes a while loop until the task is successfully performed or the maximum number of retry attempts is reached. In that case, an exception is thrown, and the method execution terminates. But what if I told you that this code could be reduced to a one-line method with a single annotation? This is the moment Spring Retry enters the room.

Let's first answer this simple question: When do we need retries?

API integration. Our downstream service might be unavailable for short periods or might be throttling requests, and we want to retry in either of these scenarios.
DB connections. A DB transaction may fail because of, for instance, replica switching or a short-lived peak in DB load. We want to implement retries to cover these scenarios as well.
Message processing. We want to make sure that, when consuming messages, our service does not fail processing on the first error. Our goal is to give a second chance before sending a message to a dead letter queue.

Implementing Spring Retry

To add Spring Retry to your application, you need to add two dependencies first: Spring Retry and Spring AOP. As of writing this article, the versions below are the latest.

XML
<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
    <version>2.0.11</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
    <version>6.2.2</version>
</dependency>

We also need to enable retries using the @EnableRetry annotation. I'm adding it above the @SpringBootApplication annotation.

Java
@EnableRetry
@SpringBootApplication
public class RetryApplication {

    public static void main(String[] args) {
        SpringApplication.run(RetryApplication.class, args);
    }
}

Remember the code I started with? Let's create a new service and put this code into it. Also, let's add an implementation of the performTask method, which usually throws an exception.

Java
@Service
public class RetriableService {

    public void someActionWithRetries() {
        int maxRetries = 3;
        int attempt = 0;

        while (true) {
            attempt++;
            try {
                System.out.println("attempt number = " + attempt);
                performTask();
                System.out.println("Task completed");
                break;
            } catch (Exception e) {
                System.out.println("Failure: " + e.getMessage());
                if (attempt >= maxRetries) {
                    System.out.println("Max retries attempt");
                    throw new RuntimeException("Unable to complete task after " + maxRetries + " attempts", e);
                }
                System.out.println("Retrying");
            }
        }
    }

    private static void performTask() throws RuntimeException {
        double random = Math.random();
        System.out.println("Random =" + random);
        if (random < 0.9) {
            throw new RuntimeException("Random Exception");
        }
        System.out.println("Exception was not thrown");
    }
}

And let's add this service execution to our application entry point.
Java
@EnableRetry
@SpringBootApplication
public class RetryApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(RetryApplication.class, args);
        RetriableService bean = ctx.getBean(RetriableService.class);
        bean.someActionWithRetries();
    }
}

Our main goal is to execute performTask without exceptions. We implemented a simple retry strategy using a while loop, manually managing the number of retries and the behavior in case of errors. Additionally, we updated our main method just to make the code executable (you may execute it any way you prefer; it does not really matter). When we run our application, we may see a log similar to this:

Plain Text
attempt number = 1
Random =0.2026321848196292
Failure: Random Exception
Retrying
attempt number = 2
Random =0.28573469016365216
Failure: Random Exception
Retrying
attempt number = 3
Random =0.25888484319397653
Failure: Random Exception
Max retries attempt
Exception in thread "main" java.lang.RuntimeException: Unable to complete task after 3 attempts

As we can see, we tried three times and threw an exception when all three attempts failed. Our application is working, but just to execute a one-line method, we added a lot of lines of code. And what if we want to cover another method with the same retry mechanism? Do we need to copy-paste our code? We see that even though our solution works, it does not seem to be an optimal one. Is there a way to make it better? Yes, there is. We are going to add Spring Retry to our logic.

We've already added all the necessary dependencies and enabled retries with the @EnableRetry annotation. Now, let's make our method retriable using Spring Retry. We just need to add the following annotation and provide the maximum number of attempts:

Plain Text
@Retryable(maxAttempts = 3)

In the second step, we delete all the now-redundant code, and this is how our service looks:

Java
@Retryable(maxAttempts = 3)
public void someActionWithRetries() {
    performTask();
}

private static void performTask() throws RuntimeException {
    double random = Math.random();
    System.out.println("Random =" + random);
    if (random < 0.9) {
        throw new RuntimeException("Random Exception");
    }
    System.out.println("Exception was not thrown");
}

When we execute our code, we will see the following log:

Plain Text
Random =0.04263677120823861
Random =0.6175610369948504
Random =0.226853770441114
Exception in thread "main" java.lang.RuntimeException: Random Exception

The code is still trying to perform the task and fails after three unsuccessful attempts. We have already improved our code by adding an aspect to handle the retries. But we can also make our retries more efficient by introducing an exponential backoff strategy.

What Is Exponential Backoff?

Imagine you are calling an external API for which you have some quota. Sometimes, when you reach your quota, this API throws a throttling exception saying your quota has been exceeded and you need to wait some time before you can make an API call again. You know that the quota should be reset quite soon, but you don't know exactly when that will happen. So, you decide to keep making API calls until a successful response is received, but you increase the delay before every subsequent call. For instance:

Plain Text
1st call - failure
Wait 100ms
2nd call - failure
Wait 200ms
3rd call - failure
Wait 400ms
4th call - success

You can see how the delays between calls increase exponentially.
This is exactly what the exponential backoff strategy is about -> retrying with an exponentially increasing delay. And yes, we can easily implement this strategy using Spring Retry. Let's extend our code:

Java
@Retryable(maxAttempts = 10, backoff = @Backoff(delay = 100, multiplier = 2.0, maxDelay = 1000))
public void someActionWithRetries() {
    performTask();
}

We've increased the maxAttempts value to 10 and added a backoff configuration with the following params:

delay – the delay in milliseconds before the first retry
multiplier – the multiplier applied to the delay for the second and subsequent retries. In our case, the second retry will happen 200ms after the first retry fails. If the second retry fails, the third will be executed after 400ms, etc.
maxDelay – the upper limit in milliseconds for the delay. Once the delay reaches the maxDelay value, it stops increasing.

Let's add one more log line to track the current timestamp in milliseconds inside the performTask method, and execute our code:

Java
private static void performTask() throws RuntimeException {
    System.out.println("Current timestamp=" + System.currentTimeMillis() % 100000);
    ........
}

Plain Text
Current timestamp=41935
Random =0.5630325878313412
Current timestamp=42046
Random =0.3049870877017091
Current timestamp=42252
Random =0.6046786246149355
Current timestamp=42658
Random =0.35486866685708773
Current timestamp=43463
Random =0.5374704153455458
Current timestamp=44469
Random =0.922956819951388
Exception was not thrown

We can see that it took six attempts (five retries) to perform the task without an exception. We can also see that the difference between the first and second executions is about 100 ms, as configured. The difference between the second and third executions is about 200 ms, confirming that the multiplier of 2 is working as expected. Pay attention to the delay before the last execution. It is not 1,600 ms, as we might have expected from doubling again, but 1,000 ms, because we set that as the upper limit.

Conclusion

We successfully implemented an exponential backoff strategy using Spring Retry. It helped us get rid of boilerplate utility code and make our retry strategy more manageable. We also discussed the scenarios where retries are most commonly used, and now we are more aware of when to apply them. The functionality I showed in this article is only about 30% of what Spring Retry allows us to do, and we will see more advanced approaches in the next article; one of them, the @Recover fallback, is sketched below.
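The sketch below is not from the article; it only illustrates, using the same RetriableService setup, how a @Recover method can act as a fallback once all retry attempts are exhausted, instead of letting the exception propagate to the caller:

Java
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class RetriableService {

    @Retryable(maxAttempts = 3)
    public void someActionWithRetries() {
        performTask();
    }

    // Invoked by Spring Retry after the final failed attempt; the exception
    // parameter and the return type must be compatible with the retryable method.
    @Recover
    public void recover(RuntimeException e) {
        System.out.println("All retries failed, falling back: " + e.getMessage());
    }

    private static void performTask() throws RuntimeException {
        double random = Math.random();
        if (random < 0.9) {
            throw new RuntimeException("Random Exception");
        }
        System.out.println("Exception was not thrown");
    }
}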
Most e-commerce applications have zero tolerance for downtime. Any impact on application resources can affect the overall availability metrics of the site. Azure Cosmos DB is one of the major NoSQL databases used across the industry. Though Azure Cosmos DB itself provides a minimum of 99.99% availability for a single region without availability zones, how do we further improve database availability with the options Azure Cosmos DB offers?

Multi-Region Read and Write

Single-region reads limit availability and also create a single point of failure. So, read-heavy applications should at least have multi-region reads enabled, even if multi-region writes are not an option for the application. Multi-region writes, however, provide greater availability for both read- and write-heavy applications. With multi-region write capability, you can enable multi-master replication, where all configured regions can serve as write endpoints.

Best Practices

Select regions close to the region where the application is deployed.
Configure multiple preferred regions based on the application's requirements to enhance availability.
Set more than one preferred region in the application for reads and writes to improve availability and reduce latency.
List the preferred regions with the application's current or nearest region first.

Application Deployed in West US 2

Java
// Configure an application deployed in West US 2 as below
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import java.util.Arrays;
// ...

CosmosClientBuilder clientBuilder = new CosmosClientBuilder()
        .endpoint(accountEndpoint)
        .key(accountKey)
        .preferredRegions(Arrays.asList("West US 2", "East US"));
CosmosClient client = clientBuilder.buildClient();

Application Deployed in East US

Java
// Configure an application deployed in East US as below
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import java.util.Arrays;
// ...

CosmosClientBuilder clientBuilder = new CosmosClientBuilder()
        .endpoint(accountEndpoint)
        .key(accountKey)
        .preferredRegions(Arrays.asList("East US", "West US 2"));
CosmosClient client = clientBuilder.buildClient();

Conclusion

Though enabling multi-region reads and writes provides greater availability, configuring the application's reads and writes closer to the region where it is deployed and providing more than one preferred region helps the application fall back immediately to an available region without any manual intervention.

Consistency Levels

Select consistency levels based on the application's requirements. Higher consistency expectations typically result in reduced availability. If the application demands strong data consistency, ensure it can tolerate potentially higher latencies. Conversely, if weaker consistency is acceptable, the application can benefit from improved throughput and availability.

Conclusion

Choosing the right consistency level purely depends on the application's needs, and though stronger consistency can have an impact on availability, choosing a stronger consistency level does not have to compromise the application's overall availability.

Failover

Manual Failover

Developers or associates can log in to the portal and manually fail over to the next available region during an outage in the region the application is currently connected to. Though this option provides availability to some extent, it requires manual intervention to fail over, which can impact the overall site availability metrics.
Service-Managed Failover

Enabling service-managed failover allows Cosmos DB to automatically switch to the next available region based on the priority configured in the portal. This option eliminates the need for any application changes during the failover process.

Conclusion

Though both options provide increased availability, service-managed failover gives the flexibility of failing over to the next available region without worrying about the application deployment.

Partition Key and Indexes

Defining a partition key in Azure Cosmos DB is crucial before running any application on it. Cosmos DB is highly efficient for read-intensive applications, so it's essential to consider the lookup criteria and define the queries for reading records from the database before integrating Cosmos DB into your application.
By default, every item in a Cosmos DB container is automatically indexed. However, excluding certain items or fields from indexing can help reduce the consumption of Request Units (RUs). It is just as important to add fields for indexing as it is to remove indexes on fields that don't need to be indexed.
Avoid storing excessively large items in Azure Cosmos DB. Minimize cross-partition queries whenever possible. Ensure queries include filters to improve efficiency. Avoid querying the same partition key repeatedly; rather, implement a caching layer for such use cases.

Throughput Autoscale

Azure Cosmos DB supports both standard (manual) and autoscale throughput at the container level.

Manual Throughput

The application decides the RU/s allowed; once the provisioned RU/s are maxed out, requests are throttled for the configured time. Increasing the throughput requires manual intervention.

Autoscale Throughput

The application configures the maximum throughput it supports, and Cosmos DB scales itself based on the traffic received. On exceeding the autoscale maximum, requests are throttled for the configured time. (A code sketch of configuring an autoscale container appears at the end of this article.)

Conclusion

Though both provide increased availability, autoscale throughput gives the flexibility of handling varying traffic without throttling or impacting availability.

Backup and Restore

Azure Cosmos DB enables periodic backups by default for all accounts.

Periodic Backup

Backups are taken at a configured interval, with a minimum of 1 hour and a maximum of 24 hours. It also provides options to keep the backup storage redundant at the geo, zone, or local level. The application team needs to reach out to support to restore a backup.

Continuous Backup

The continuous backup option keeps the backup storage in the regions where the Cosmos database is configured, and it allows retention of data from the last 7 days or the last 30 days. It also provides point-in-time restoration.

Conclusion

Opting for continuous backup ensures faster restoration of the database. This eliminates the need for back-and-forth interactions with support to restore the database and allows applications to restore it to any region (where backups exist) at a specific point in time.

In conclusion, while availability metrics are crucial for any application, they come at a cost. Options that offer higher availability than the standard configuration incur additional expenses. Moreover, the above-mentioned options may not be necessary or suitable for all applications using Cosmos DB. However, it is essential to adopt and implement best practices in Azure Cosmos DB to optimize availability effectively.
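To make the autoscale option above concrete, here is a minimal sketch using the Azure Cosmos Java SDK v4; the database name, container name, partition key path, and environment variable names are placeholder assumptions, not values from the article:

Java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ThroughputProperties;

public class AutoscaleContainerSetup {

    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildClient();

        // Placeholder container partitioned by /customerId; with autoscale, Cosmos DB
        // scales the container between 10% and 100% of the configured maximum RU/s.
        CosmosContainerProperties containerProperties =
                new CosmosContainerProperties("orders", "/customerId");
        ThroughputProperties autoscaleThroughput =
                ThroughputProperties.createAutoscaledThroughput(4000);

        client.getDatabase("ecommerce")
                .createContainerIfNotExists(containerProperties, autoscaleThroughput);

        client.close();
    }
}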