Data is at the core of software development. Think of it as information stored in anything from text documents and images to entire software programs, and these bits of information need to be processed, read, analyzed, stored, and transported throughout systems. In this Zone, you'll find resources covering the tools and strategies you need to handle data properly.
Developers commonly reach for if-else statements to handle differing conditions. However, this approach becomes troublesome as the number of conditions grows: folding additional business requirements into these chains invites errors and makes the code more complicated than necessary. It pays to design solutions that can change and grow without painful updates, not only to keep the system robust but also to let it adapt to unforeseen circumstances. Our code then remains both powerful and easy to evolve.

In this article, we'll look at ways of managing operations in a calculator, using Java in all examples. The aim is to improve how operations (such as addition, subtraction, multiplication, and division) are processed in our code. Using a sample calculator that receives a request with an operation type and two values, we'll cover if-else statements, switch cases, and the strategy design pattern, focusing on the concepts and benefits of each approach.

Request Structure

First, let's define the request structure, which will be used to demonstrate the different approaches:

Java
public class CalculationRequest {
    private final Operation operation;
    private final int first;
    private final int second;
    // Constructor, getters, and setters
}

enum Operation {
    ADD, SUBTRACTION, DIVISION, MULTIPLICATION;
}

If-Else Statement

If-else statements are one of the simplest and most commonly used constructs for handling conditions in programming. They allow the execution of a specific block of code depending on whether a condition is met. In the context of a calculator, if-else statements can be used to handle operations like addition, subtraction, multiplication, and division. Consider the following example demonstrating the use of if-else statements for performing these operations:

Java
public static Integer calculate(CalculationRequest request) {
    var first = request.getFirst();
    var second = request.getSecond();
    var operation = request.getOperation();
    if (Operation.ADD.equals(operation)) {
        return first + second;
    } else if (Operation.SUBTRACTION.equals(operation)) {
        return first - second;
    } else if (Operation.DIVISION.equals(operation)) {
        if (second == 0) {
            throw new IllegalArgumentException("Can't be zero");
        }
        return first / second;
    } else if (Operation.MULTIPLICATION.equals(operation)) {
        return first * second;
    } else {
        throw new IllegalStateException("Operation not found");
    }
}

Pros
Simplicity: Easy to understand and implement.
Clarity: The code clearly shows what happens in each condition.

Cons
Maintenance: Changing logic requires modifications in multiple places, increasing the risk of errors. For example, if a new operation is added, another condition must be added to the if-else chain, increasing the risk of missing a condition. Numerous changes in different places also complicate debugging and testing the code.
Scalability: Adding new operations requires changing existing code, violating the open/closed principle (OCP) from SOLID. Each new condition requires modifying existing code, making it less flexible and resilient to changes. This can lead to increased technical debt and reduced code quality in the long term.

Switch Statement

Switch statements can be more readable and convenient in some cases compared to if-else chains. They allow better structuring of the code and avoid long chains of conditions.
Let's consider using switch statements.

Java
public static Integer calculate(CalculationRequest request) {
    var first = request.getFirst();
    var second = request.getSecond();
    var operation = request.getOperation();
    return switch (operation) {
        case ADD -> first + second;
        case SUBTRACTION -> first - second;
        case DIVISION -> {
            if (second == 0) {
                throw new IllegalArgumentException("Can't be zero");
            }
            yield first / second;
        }
        case MULTIPLICATION -> first * second;
    };
}

Pros
Readability: More structured compared to a long if-else chain. The code becomes more compact and easy to read.
Simplification: Clear separation of different cases, making the code neater.

Cons
Scalability: Like if-else, adding new operations requires changing existing code, violating the open/closed principle (OCP) from SOLID.
Flexibility: Switch statements can be less flexible than some other approaches. For example, they do not easily integrate complex logic or states that may be necessary in some operations. This makes them less suitable for advanced use cases where more complex processing is required.

Strategy Pattern

The strategy pattern allows defining a family of algorithms, encapsulating each one, and making them interchangeable. This lets clients use different algorithms without changing their code. In the context of a calculator, each operation (addition, subtraction, multiplication, division) can be represented by a separate strategy. This improves the extensibility and maintainability of the code, as new operations can be added without changing existing code.

Pros
Scalability: Easy to add new strategies without changing existing code. This is especially useful when new functions need to be supported or added in the future.
SOLID support: The pattern supports the single responsibility principle (SRP), as each strategy is responsible for its specific operation. It also supports the open/closed principle (OCP), as new strategies can be added without changing existing classes.
Flexibility: Algorithms can be easily changed at runtime by substituting the appropriate strategy. This makes the system more flexible and adaptable to changing requirements.

Cons
Complexity: Can add additional complexity to the code, especially when implementing multiple strategies. The number of classes increases, which can make project management difficult.

Let's look at different implementation options:

Enum

In this example, we create an enum Operation with an abstract apply method. Each enum element provides its own implementation of this method. This encapsulates the logic of each operation in separate enumeration elements, making the code more organized and maintainable.
Java
public enum Operation {
    ADD {
        @Override
        Integer apply(int first, int second) {
            return first + second;
        }
    },
    SUBTRACTION {
        @Override
        Integer apply(int first, int second) {
            return first - second;
        }
    },
    DIVISION {
        @Override
        Integer apply(int first, int second) {
            if (second == 0) {
                throw new IllegalArgumentException("Can't be zero");
            }
            return first / second;
        }
    },
    MULTIPLICATION {
        @Override
        Integer apply(int first, int second) {
            return first * second;
        }
    };

    abstract Integer apply(int first, int second);
}

Usage:

Java
public static Integer calculate(CalculationRequest request) {
    var first = request.getFirst();
    var second = request.getSecond();
    var operation = request.getOperation();
    return operation.apply(first, second);
}

Map of Objects

The OperationStrategy interface defines the apply method to be implemented for each operation, creating a single contract for all operations and simplifying the addition of new strategies.

Java
public interface OperationStrategy {
    Integer apply(int first, int second);
}

Each operation is implemented as a separate class that implements the OperationStrategy interface, with its apply method performing the corresponding operation.

Java
class AddOperationStrategy implements OperationStrategy {
    @Override
    public Integer apply(int first, int second) {
        return first + second;
    }
}

class SubtractionOperationStrategy implements OperationStrategy {
    @Override
    public Integer apply(int first, int second) {
        return first - second;
    }
}

class DivisionOperationStrategy implements OperationStrategy {
    @Override
    public Integer apply(int first, int second) {
        if (second == 0) {
            throw new IllegalArgumentException("Can't be zero");
        }
        return first / second;
    }
}

class MultiplicationOperationStrategy implements OperationStrategy {
    @Override
    public Integer apply(int first, int second) {
        return first * second;
    }
}

We create a map STRATEGY_OBJECT_MAP where the keys are values of the Operation enum and the values are the corresponding OperationStrategy implementations. This allows the necessary strategy for each operation to be found and used quickly.

Java
public static final Map<Operation, OperationStrategy> STRATEGY_OBJECT_MAP = Map.ofEntries(
    Map.entry(Operation.ADD, new AddOperationStrategy()),
    Map.entry(Operation.SUBTRACTION, new SubtractionOperationStrategy()),
    Map.entry(Operation.DIVISION, new DivisionOperationStrategy()),
    Map.entry(Operation.MULTIPLICATION, new MultiplicationOperationStrategy())
);

The calculate method retrieves the necessary strategy from the map and performs the operation by calling its apply method:

Java
public static Integer calculate(CalculationRequest request) {
    var first = request.getFirst();
    var second = request.getSecond();
    var operation = request.getOperation();
    return STRATEGY_OBJECT_MAP.get(operation).apply(first, second);
}

Map of Functions

This approach uses functional interfaces for each operation and creates a map where keys are operations and values are functions. This avoids creating separate classes for each strategy, simplifying the code and making it more compact.
Java
public static final Map<Operation, BiFunction<Integer, Integer, Integer>> STRATEGY_FUNCTION_MAP;

static {
    STRATEGY_FUNCTION_MAP = Map.ofEntries(
        Map.entry(Operation.ADD, (first, second) -> first + second),
        Map.entry(Operation.SUBTRACTION, (first, second) -> first - second),
        Map.entry(Operation.DIVISION, (first, second) -> {
            if (second == 0) {
                throw new IllegalArgumentException("Can't be zero");
            }
            return first / second;
        }),
        Map.entry(Operation.MULTIPLICATION, (first, second) -> first * second)
    );
}

public static Integer calculate(CalculationRequest request) {
    var first = request.getFirst();
    var second = request.getSecond();
    var operation = request.getOperation();
    return STRATEGY_FUNCTION_MAP.get(operation).apply(first, second);
}

Using a map of functions is well suited to cases where you need to implement a set of operations as quickly and simply as possible, without creating separate classes for each one. Object strategies are a better fit for more complex scenarios.

Conclusion

Every method has its advantages and disadvantages that need to be considered in software development decisions. If-else and switch statements are straightforward and approachable at first but become hard to organize as the number of conditions grows. They don't adapt well to change and make it difficult to add new operations without altering existing code. The strategy pattern provides an adaptable approach to managing operations while adhering to SOLID principles. It makes it simple to add operations without disrupting existing code, which promotes maintainability and scalability, and it allows the system to adapt to evolving business requirements without strain. Although initially more complex, the benefits of an extensible code base prove worthwhile; the sketch below shows how little code a new operation requires with this approach.
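To illustrate that extensibility, here is a minimal sketch of adding a hypothetical MODULO operation on top of the map-of-objects example above. The ModuloOperationStrategy class and the MODULO constant are assumptions for illustration; the existing calculate method stays untouched.

Java
// Hypothetical new operation: one new enum constant...
// enum Operation { ADD, SUBTRACTION, DIVISION, MULTIPLICATION, MODULO; }

// ...one new strategy class implementing the existing OperationStrategy contract...
class ModuloOperationStrategy implements OperationStrategy {
    @Override
    public Integer apply(int first, int second) {
        if (second == 0) {
            throw new IllegalArgumentException("Can't be zero");
        }
        return first % second; // remainder of the integer division
    }
}

// ...and one new registration in the strategy map:
// Map.entry(Operation.MODULO, new ModuloOperationStrategy())
// Neither the calculate method nor the existing strategies change.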
Data management is undergoing a rapid transformation and is emerging as a critical factor in distinguishing success within the Software as a Service (SaaS) industry. With the rise of AI, SaaS leaders are increasingly turning to AI-driven solutions to optimize data pipelines, improve operational efficiency, and maintain a competitive edge. However, effectively integrating AI into data systems goes beyond simply adopting the latest technologies. It requires a comprehensive strategy that tackles technical challenges, manages complex real-time data flows, and ensures compliance with regulatory standards. This article will explore the journey of building a successful AI-powered data pipeline for a SaaS product. We will cover everything from initial conception to full-scale adoption, highlighting the key challenges, best practices, and real-world use cases that can guide SaaS leaders through this critical process.

1. The Beginning: Conceptualizing the Data Pipeline

Identifying Core Needs
The first step in adopting AI-powered data pipelines is understanding the core data needs of your SaaS product. This involves identifying the types of data the product will handle, the specific workflows involved, and the problems the product aims to solve. Whether offering predictive analytics, personalized recommendations, or automating operational tasks, each use case will influence the design of the data pipeline and the AI tools required for optimal performance.

Data Locality and Compliance
Navigating the complexities of data locality and regulatory compliance is one of the initial hurdles for SaaS companies implementing AI-driven data pipelines. Laws such as the GDPR in Europe impose strict guidelines on how companies handle, store, and transfer data. SaaS leaders must ensure that both the storage and processing locations of data comply with regulatory standards to avoid legal and operational risks.

Data Classification and Security
Managing data privacy and security involves classifying data based on sensitivity (e.g., personally identifiable information or PII vs. non-PII) and applying appropriate access controls and encryption. These practices (classification, access controls, and encryption) are the key elements of a robust data protection strategy. By addressing these challenges, SaaS companies can build AI-driven data pipelines that are secure, compliant, and resilient.

2. The Build: Integrating AI into Data Pipelines

Leveraging Cloud for Scalable and Cost-Effective AI-Powered Data Pipelines
To build scalable, efficient, and cost-effective AI-powered data pipelines, many SaaS companies turn to the cloud. Cloud platforms offer a wide range of tools and services that enable businesses to integrate AI into their data pipelines without the complexity of managing on-premises infrastructure. By leveraging cloud infrastructure, companies gain flexibility, scalability, and the ability to innovate rapidly, all while minimizing operational overhead and avoiding vendor lock-in.

Key Technologies in Cloud-Powered AI Pipelines
An end-to-end AI-powered data pipeline in the cloud typically follows a series of core stages, each supported by a set of cloud services:

1. Data Ingestion
The first step in the pipeline is collecting raw data from various sources. Cloud services allow businesses to easily ingest data in real time from internal systems, customer interactions, IoT devices, and third-party APIs. These services can handle both structured and unstructured data, ensuring that no valuable data is left behind.
2. Data Storage
Once data is ingested, it needs to be stored in a manner optimized for processing and analysis. Cloud platforms provide flexible storage options, such as:
Data Lakes: For storing large volumes of raw, unstructured data that can later be analyzed or processed.
Data Warehouses: For storing structured data, performing complex queries, and reporting.
Scalable Databases: For storing key-value or document data that needs fast and efficient access.

3. Data Processing
After data is stored, it needs to be processed. The cloud offers both batch and real-time data processing capabilities:
Batch Processing: For historical data analysis, generating reports, and performing large-scale computations.
Stream Processing: For real-time data processing, enabling quick decision-making and time-sensitive applications, such as customer support or marketing automation.

4. Data Consumption
The final stage of the data pipeline is delivering processed data to end users or business applications. Cloud platforms offer various ways to consume the data, including:
Business Intelligence Tools: For creating dashboards, reports, and visualizations that help business users make informed decisions.
Self-Service Analytics: Enabling teams to explore and analyze data independently.
AI-Powered Services: Delivering real-time insights, recommendations, and predictions to users or applications.

Ensuring a Seamless Data Flow
A well-designed cloud-based data pipeline ensures smooth data flow from ingestion through to storage, processing, and final consumption. By leveraging cloud infrastructure, SaaS companies can scale their data pipelines as needed, ensuring they can handle increasing volumes of data while delivering real-time AI-driven insights and improving customer experiences. Cloud platforms provide a unified environment for all aspects of the data pipeline — ingestion, storage, processing, machine learning, and consumption — allowing SaaS companies to focus on innovation rather than managing complex infrastructure. This flexibility, combined with the scalability and cost-efficiency of the cloud, makes it easier than ever to implement AI-driven solutions that can evolve alongside a business's growth and needs.

3. Overcoming Challenges: Real-Time Data and AI Accuracy

Real-Time Data Access
For many SaaS applications, real-time data processing is crucial. AI-powered features need to respond to new inputs as they're generated, providing immediate value to users. For instance, in customer support, AI must instantly interpret user queries and generate accurate, context-aware responses based on the latest data. Building a real-time data pipeline requires robust infrastructure, such as Apache Kafka or AWS Kinesis, to stream data as it's created, ensuring that the SaaS product remains responsive and agile (a minimal consumer sketch follows at the end of this article).

Data Quality and Context
The effectiveness of AI models depends on the quality and context of the data they process. Poor data quality can result in inaccurate predictions, a phenomenon often referred to as "hallucinations" in machine learning models. To mitigate this:
Implement data validation systems to ensure data accuracy and relevance.
Train AI models on context-aware data to improve prediction accuracy and generate actionable insights.

4. Scaling for Long-Term Success

Building for Growth
As SaaS products scale, so does the volume of data, which places additional demands on the data pipeline. To ensure that the pipeline can handle future growth, SaaS leaders should design their AI systems with scalability in mind.
Cloud platforms like AWS, Google Cloud, and Azure offer scalable infrastructure to manage large datasets without the overhead of maintaining on-premise servers. Automation and Efficiency AI can also be leveraged to automate various aspects of the data pipeline, such as data cleansing, enrichment, and predictive analytics. Automation improves efficiency and reduces manual intervention, enabling teams to focus on higher-level tasks. Permissions & Security As the product scales, managing data permissions becomes more complex. Role-based access control (RBAC) and attribute-based access control (ABAC) systems ensure that only authorized users can access specific data sets. Additionally, implementing strong encryption protocols for both data at rest and in transit is essential to protect sensitive customer information. 5. Best Practices for SaaS Product Leaders Start Small, Scale Gradually While the idea of designing a fully integrated AI pipeline from the start can be appealing, it’s often more effective to begin with a focused, incremental approach. Start by solving specific use cases and iterating based on real-world feedback. This reduces risks and allows for continuous refinement before expanding to more complex tasks. Foster a Growth Mindset AI adoption in SaaS requires ongoing learning, adaptation, and experimentation. Teams should embrace a culture of curiosity and flexibility, continuously refining existing processes and exploring new AI models to stay competitive. Future-Proof Your Pipeline To ensure long-term success, invest in building a flexible, scalable pipeline that can adapt to changing needs and ongoing regulatory requirements. This includes staying updated on technological advancements, improving data security, and regularly revisiting your compliance strategies. 6. Conclusion Integrating AI into SaaS data pipelines is no longer optional — it’s a critical component of staying competitive in a data-driven world. From ensuring regulatory compliance to building scalable architectures, SaaS leaders must design AI systems that can handle real-time data flows, maintain high levels of accuracy, and scale as the product grows. By leveraging open-source tools, embracing automation, and building flexible pipelines that meet both operational and regulatory needs, SaaS companies can unlock the full potential of their data. This will drive smarter decision-making, improve customer experiences, and ultimately fuel sustainable growth. With the right strategy and mindset, SaaS leaders can turn AI-powered data pipelines into a significant competitive advantage, delivering greater value to customers while positioning themselves for future success.
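As a follow-up to the real-time data access discussion above, here is a minimal sketch of a streaming ingestion consumer using the Apache Kafka Java client. It is illustrative only: the broker address, the user-activity-events topic, and the enrichAndStore step are hypothetical placeholders rather than part of any specific product architecture.

Java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class UserEventStreamConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("group.id", "ai-pipeline-ingestion");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("user-activity-events")); // hypothetical topic name
            while (true) {
                // Poll for new events and hand them to the rest of the pipeline
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    enrichAndStore(record.value());
                }
            }
        }
    }

    private static void enrichAndStore(String rawEvent) {
        // Placeholder: in a real pipeline this would validate the event, add context,
        // and write it to a data lake or warehouse for downstream AI workloads.
        System.out.println("Ingested event: " + rawEvent);
    }
}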
In a world where companies rely heavily on data for insights about their performance, potential issues, and areas for improvement, logging comprehensively is crucial, but it comes at a cost. If logs are not stored properly, they become cumbersome to maintain and query, and expensive overall. Logging detailed user activities like time spent on various apps, which interface users are active on, navigation paths, app start-up times, crash reports, country of login, etc. can be vital in understanding user behavior, but we can easily end up with billions of rows of data, which quickly becomes a problem if scalable solutions are not implemented at the time of logging. In this article, we will discuss how we can efficiently store data in an HDFS system and use some of Presto's functionality to query massive datasets with ease, reducing compute costs drastically in data pipelines.

Partitioning

Partitioning is a technique where logically similar data is grouped together and stored in a single file, making retrieval quicker. For example, let's consider an app like YouTube. It would be useful to group data belonging to the same date and country into one file, which would result in multiple smaller files, making scanning easier. Just by looking at the metadata, Presto can figure out which specific files need to be scanned based on the query the user provides. Internally, a folder called youtube_user_data would be created, within which multiple subfolders would be created for each partition by date and country (e.g., login_date=2023-10-01/country=US). If the app was launched in 2 countries and has been active for 2 days, then the number of files generated would be 2*2 = 4 (the cartesian product of the unique values in the partition columns). Hence, choosing columns with low cardinality is essential. For example, if we add interface as another partition column, with three possible values (ios, android, desktop), it would increase the number of files to 2×2×3 = 12. Based on this partitioning strategy, the data would be stored in a directory structure along the lines of youtube_user_data/login_date=2023-10-01/country=US/, youtube_user_data/login_date=2023-10-01/country=CA/, and so on for each date/country combination.

Below is an example query showing how to create a table with login_date and country as partition columns:

SQL
CREATE TABLE youtube_user_data (
    user_id BIGINT,
    age INT,
    video_id BIGINT,
    login_unixtime BIGINT,
    interface VARCHAR,
    ip_address VARCHAR,
    login_date VARCHAR,
    country VARCHAR
    ...
)
WITH (
    partitioned_by = ARRAY['login_date', 'country'],
    format = 'DWRF',
    oncall = 'your_oncall_name',
    retention_days = 60
);

Ad Hoc Querying

When querying a partitioned table, specifying only the needed partitions can greatly reduce query wall time.

SQL
SELECT
    SUM(1) AS total_users_above_30
FROM youtube_user_data
WHERE login_date = '2023-10-01'
    AND country = 'US'
    AND age > 30

By specifying the partition columns as filters in the query, Presto will jump directly to the 2023-10-01 and US folder and retrieve only the file within it, skipping the scanning of other files completely.

Scheduling Jobs

If the source table is partitioned by country, then setting up daily ETL jobs also becomes easier, as we can now run them in parallel.
For example:

Python
# Sample Dataswarm job scheduling that does parallel processing,
# taking advantage of partitions in the source table
insert_task = {}
wait_for = {}
for country in ["US", "CA"]:
    # wait for job
    wait_for[country] = WaitforOperator(
        table="youtube_user_data",
        partitions=f"login_date=<DATEID>/country={country}"
    )
    # insert job
    insert_task[country] = PrestoOperator(
        dep_list=[wait_for[country]],
        input_data={"in": input.table("youtube_user_data").col("login_date").eq("<DATEID>")
            .col("country").eq(country)},
        output_data={"out": output.table("output_table_name").col("login_date").eq("<DATEID>")
            .col("country").eq(country)},
        select="""
            SELECT
                user_id,
                SUM(1) AS total_count
            FROM <in:youtube_user_data>
        """
    )

Note: The above uses Dataswarm as an example for processing/inserting data.

Here, there will be two parallel running tasks — insert_task[US] and insert_task[CA] — which will query only the data pertaining to those partitions and load them into a target table that is also partitioned on country and date. Another benefit is that the WaitforOperator can be set up to check whether the particular partition of interest has landed rather than waiting for the whole table. If, say, CA data is delayed but US data has landed, then we can start the US insert task first and kick off the CA insert job later, once the CA upstream data lands. The result is a simple DAG in which each country's wait task gates its corresponding insert task.

Bucketing

If frequent GROUP BY and JOIN operations are to be performed on a table, then we can further optimize the storage using bucketing. Bucketing organizes data into smaller chunks within a file based on a key column (e.g., user_id), so when querying, Presto knows in which bucket a specific ID will be present.

How to Implement Bucketing
Choose a bucketing column: Pick a key column that is commonly used for joins and group bys.
Define buckets: Specify the number of buckets to divide the data into.

SQL
CREATE TABLE youtube_user_data (
    user_id BIGINT,
    age INT,
    video_id BIGINT,
    login_unixtime BIGINT,
    interface VARCHAR,
    ip_address VARCHAR,
    login_date VARCHAR,
    country VARCHAR
    ...
)
WITH (
    partitioned_by = ARRAY['login_date', 'country'],
    format = 'DWRF',
    oncall = 'your_oncall_name',
    retention_days = 60,
    bucket_count = 1024,
    bucketed_by = ARRAY['user_id']
);

Note: The bucket count should be a power of 2. In the above example, we chose 1024 (2^10).

Before Bucketing
Data for a partition is stored in a single file, requiring a full scan to locate a specific user_id.

After Bucketing
User IDs are put into smaller buckets based on the range they fall under. You'll notice that user IDs are assigned to specific buckets based on their value. For example, a new user ID of 1567 would be placed in Bucket 1:
Bucket 1: 1000 to 1999
Bucket 2: 2000 to 2999
Bucket 3: 3000 to 3999
Etc.

When performing a join with another table — say, to retrieve user attributes like gender and birthdate for a particular user (e.g., 4592) — the lookup is much quicker, as Presto knows which bucket (bucket 4) that user falls under, so it can jump directly to that specific bucket and skip scanning the others. It would still need to search for that user within the bucket. We can speed up that step as well by sorting the data on the key ID while storing it within each bucket, which we will explore in a later section.
SQL
SELECT
    a.user_id,
    b.gender,
    b.birthdate
FROM youtube_user_data a
JOIN dim_user_info b
    ON a.user_id = b.user_id
WHERE a.login_date = '<DATEID>'
    AND a.country = 'US'
    AND b.date = '<DATEID>'

Hidden $bucket Column

For bucketed tables, there is a hidden column that lets you specify the buckets you want to read data from. For example, the following query will count over bucket #17 (bucket IDs start from 0):

SQL
SELECT
    SUM(1) AS total_count
FROM youtube_user_data
WHERE login_date = '2023-05-01'
    AND "$bucket" = 17

The following query will count over roughly 10% of the data for a table with 1024 buckets:

SQL
SELECT
    SUM(1) AS total_count
FROM youtube_user_data
WHERE login_date = '2023-05-01'
    AND "$bucket" BETWEEN 0 AND 100

Sorting

To further optimize the buckets, we can sort the data while inserting it, so query speeds improve further: Presto can jump directly to the specific index within a specific bucket within a specific partition to fetch the data needed.

How to Enable Sorting
Choose a sorting column: Typically, this is the same column used for bucketing, such as user_id.
Sort data during insertion: Ensure that data is sorted as it is inserted into each bucket.

SQL
CREATE TABLE youtube_user_data (
    user_id BIGINT,
    age INT,
    video_id BIGINT,
    login_unixtime BIGINT,
    interface VARCHAR,
    ip_address VARCHAR,
    login_date VARCHAR,
    country VARCHAR
    ...
)
WITH (
    partitioned_by = ARRAY['login_date', 'country'],
    format = 'DWRF',
    oncall = 'your_oncall_name',
    retention_days = 60,
    bucket_count = 1024,
    bucketed_by = ARRAY['user_id'],
    sorted_by = ARRAY['user_id']
);

In a sorted bucket, the user IDs are inserted in order, which makes retrieval efficient. This becomes very handy when we have to join large tables or perform aggregations across billions of rows of data.

Conclusion

Partitioning: For large datasets, partition the table on low-cardinality columns like date, country, and interface, which results in smaller HDFS files. Presto can then query only the needed files by looking up the metadata/file names.
Bucketing and sorting: If a table is used frequently in joins or group bys, then it is beneficial to bucket and sort the data within each partition, further reducing key lookup time.
Caveat: There is an initial compute cost for bucketing and sorting, as Presto has to maintain the order of the key while inserting. However, this one-time cost can be justified by the savings in repeated downstream queries. A short sketch of querying such a table from an application follows below.
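As referenced in the conclusion above, here is a minimal sketch of how an application might run the partition-pruned query shown earlier through JDBC. It assumes a Presto JDBC driver on the classpath; the coordinator host, catalog, schema, and user name in the connection URL are hypothetical placeholders.

Java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PartitionedQueryExample {

    public static void main(String[] args) throws SQLException {
        // Hypothetical coordinator host, catalog, and schema
        String url = "jdbc:presto://presto-coordinator.example.com:8080/hive/default";

        String sql = "SELECT SUM(1) AS total_users_above_30 "
                   + "FROM youtube_user_data "
                   + "WHERE login_date = ? AND country = ? AND age > 30";

        try (Connection conn = DriverManager.getConnection(url, "analytics_user", null);
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            // Filtering on the partition columns lets Presto prune all other partitions
            stmt.setString(1, "2023-10-01");
            stmt.setString(2, "US");
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Users above 30: " + rs.getLong("total_users_above_30"));
                }
            }
        }
    }
}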
Microservices have become a popular architectural style for building scalable and modular applications. However, setting up a microservice from scratch can still feel complicated, especially when juggling frameworks, templates, and version support. ServiceBricks aims to simplify this by allowing you to quickly generate fully functional, open-source microservices based on a simple prompt using artificial intelligence and source code generation. In this tutorial, we'll walk through the entire process — from signing up to running your newly created microservice locally.

Getting Started

Prerequisites
A web browser
Visual Studio 2022

Step 1: Register for an Account
Begin by registering for an account on the ServiceBricks.com website. Once you've signed up and logged in, you'll have access to the platform's microservice generation features and receive 10 free credits to evaluate the product.

Step 2: Create a New Microservice
After logging in:
Locate and click the button to "Create Microservice."
Enter a prompt or description of what you want your microservice to accomplish. For example, this could be "Create a microservice to manage a grocery list."
Click the "Generate" button.

Step 3: Download the Generated Source Code
The generation process will create a tailored microservice solution for you. On the summary screen:
Review the key details provided to ensure the microservice meets your requirements.
Click the "Download Source Code" button. This downloads a zip file with a complete solution containing multiple application projects, including unit and integration tests, for different .NET Core versions.

Running Your Microservice Locally

Once you have the source code, open the solution in Visual Studio 2022.

Step 1: Set the Startup Project
In the Solution Explorer, find the project for your chosen .NET Core version (e.g., WebAppNet8).
Right-click the project and select "Set as Startup Project."

Step 2: Run the Application
Once the startup project is set, click the "Start" button (or press F5) in Visual Studio.
Visual Studio will build and run the application. By default, it should start running at https://localhost:7000.

Step 3: Explore the Home Page Instructions
With the microservice running locally, the home page provides instructions on how to interact with your new microservice, as well as details on configuring storage providers or other backend services. By default, the web application uses the InMemory database provider. To use any of the relational database providers, you will need to create migrations so that the database can be created.

Step 4: Create Migrations for Relational Databases
Run the Batch File
Within the source code folder of your project, run the CreateMigrations.cmd file. This will:
Trigger a solution build.
Generate database migrations for the following providers: Postgres, SQLite, and SQL Server.

Now that your microservice is up and running, you can begin experimenting by sending requests to the endpoints defined in your service. You can also modify the underlying code to add new features, integrate custom logic, or switch out the default storage provider.

Conclusion

By following these steps, you've seen how to take a prompt and transform it into a running microservice, ready for integration into a larger application ecosystem. You've learned how to register, generate code, download the source, run the application, and manage database migrations.
With this solid starting point in place, you can now focus on refining and extending your service’s functionality, adjusting storage providers, and tailoring it to fit the unique requirements of your project.
Voice-controlled frontend applications have gained immense traction due to the rising popularity of smart devices, virtual assistants, and hands-free interfaces. Natural Language Processing (NLP) lies at the heart of these systems, enabling human-like understanding and generation of speech. This white paper presents an in-depth examination of NLP methodologies for voice-controlled frontend applications, reviewing the state-of-the-art in speech recognition, natural language understanding, and generation techniques, as well as their architectural integration into modern web frontends. It also discusses relevant use cases, technical challenges, ethical considerations, and emerging directions such as multimodal interaction and zero-shot learning. By synthesizing recent research, best practices, and open challenges, this paper aims to guide developers, researchers, and industry professionals in leveraging NLP for inclusive, responsive, and efficient voice-controlled frontend applications. Introduction The shift from traditional graphical interfaces to more natural, intuitive methods of human-computer interaction has accelerated over the past decade. Voice-controlled frontend applications — encompassing virtual assistants, voice-enabled search, and smart home interfaces — are at the forefront of this transformation. These applications promise hands-free, eyes-free interaction, dramatically expanding accessibility for users with disabilities and delivering more streamlined user experiences in scenarios where visual attention is limited (e.g., driving, cooking). At the core of these voice-controlled systems lies Natural Language Processing (NLP), a multidisciplinary field combining linguistics, computer science, and artificial intelligence. NLP enables machines to interpret, understand, and generate human language. When integrated into frontend applications, NLP powers speech recognition, semantic understanding, and context-aware response generation — all crucial to building interfaces that feel human-like and intuitive. This paper provides a comprehensive analysis of NLP’s role in voice-controlled front-end architectures. We explore foundational components, such as Automatic Speech Recognition (ASR), Natural Language Understanding (NLU), Natural Language Generation (NLG), and Text-to-Speech (TTS) synthesis. Beyond these fundamentals, we delve into advanced topics like large pre-trained language models, edge computing, and multilingual support. We discuss practical applications, such as accessibility tools, smart home controls, e-commerce platforms, and gaming interfaces. Furthermore, the paper highlights current challenges — such as scalability, bias in NLP models, and privacy — and surveys emerging research directions, including emotion recognition and zero-shot learning. By synthesizing existing literature, case studies, and best practices, we aim to offer a roadmap for the future development and deployment of NLP-based voice-controlled frontends. Key Components of Voice-Controlled Frontend Applications Speech Recognition The first step in any voice-controlled system is converting spoken language into text. Automatic Speech Recognition (ASR) models leverage deep learning architectures like Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and more recently, transformer-based architectures. These models are trained on large corpora of spoken language, enabling them to accurately transcribe input speech even in noisy environments. 
Modern APIs (e.g., Google Speech-to-Text, Amazon Transcribe, Microsoft Azure Speech) offer robust ASR capabilities, while open-source solutions like Kaldi and Wav2Vec 2.0 (Baevski et al., 2020) enable developers to train custom models. Challenges persist in handling domain-specific jargon, diverse accents, and low-resource languages. Contextual biasing and custom language models have emerged as solutions, allowing ASR systems to dynamically adapt to application-specific vocabularies and user-specific preferences. Natural Language Understanding (NLU) NLU transforms raw text into structured semantic representations that encapsulate user intent and context. Core NLU tasks include tokenization, part-of-speech tagging, named entity recognition (NER), intent classification, and sentiment analysis. Early NLU systems relied on handcrafted rules and statistical methods, but contemporary approaches often involve deep learning models fine-tuned on large pre-trained language models (e.g., BERT, Devlin et al., 2019). NLU frameworks like Rasa, Dialogflow, and spaCy simplify development by providing tools to classify user intents and extract entities. Maintaining context over multi-turn conversations remains a challenge, as does handling ambiguous or implied user requests. Techniques such as Transformer-based contextual encoders and memory-augmented architectures help preserve conversational context over extended dialogues. Natural Language Generation (NLG) NLG focuses on producing coherent, contextually relevant responses to user queries. With the rise of large language models such as GPT-3 (Brown et al., 2020) and GPT-4, generating human-like responses has become more achievable. These models can be fine-tuned for specific domains, ensuring that generated text aligns with the brand voice, domain constraints, and user expectations. Key challenges in NLG include producing factually correct outputs, avoiding repetitive or nonsensical responses, and maintaining a consistent persona. Recent research on controlled text generation enables more predictable, factual, and stylistically consistent responses. In voice-controlled frontends, NLG quality directly impacts the user experience, influencing trust and perceived intelligence of the system. Speech Synthesis (Text-to-Speech, TTS) TTS converts textual responses into synthetic speech. Early systems used concatenative synthesis, while modern approaches rely on neural models like Tacotron 2 (Shen et al., 2018) and WaveNet (Oord et al., 2016) to produce more natural prosody and intonation. Advances in TTS allow for customization of voice attributes (e.g., pitch, speed, timbre) and multilingual capabilities. High-quality TTS enhances user engagement, accessibility, and the overall user experience. Ongoing challenges include emotional expressiveness, quick adaptation to new voices, and maintaining naturalness in code-switched dialogues. Technical Architecture for Voice-Controlled Frontends Voice-controlled frontends typically employ a client-server model. The client interface — implemented in JavaScript or framework-specific code — captures audio input through browser APIs (e.g., the Web Speech API) and streams it to a backend service. The backend performs ASR, NLU, NLG, and returns synthesized speech back to the client. Frontend Integration The frontend layer uses modern web standards and APIs to handle audio input and output. The Web Speech API in browsers like Chrome provides basic speech recognition and synthesis, enabling rapid prototyping. 
However, for production systems requiring higher accuracy or domain adaptation, the front end may rely on cloud-based APIs. Libraries such as Annyang simplify common tasks like voice command mapping, while custom JavaScript code can manage UI state in response to recognized commands. Performance considerations include managing latency, ensuring smooth audio capture, and handling network issues. On weaker devices, local processing may be limited, raising the need for cloud or edge-based strategies. Backend NLP Pipelines The backend is where the heavy lifting occurs. When a voice input is received, the backend's pipeline typically involves: ASR: Transcribe audio into text. NLU: Classify intent and extract entities. Business Logic: Query databases or APIs as needed. NLG: Generate a suitable response text. TTS: Convert the response text into synthetic speech. These steps can be orchestrated using microservices or serverless functions, ensuring scalability and modularity. Cloud providers like AWS, Google Cloud, and Azure offer NLP services that integrate seamlessly with web applications. Containerization (Docker) and orchestration (Kubernetes) enable scaling services based on traffic patterns. Hybrid Architectures and Edge Computing Relying solely on cloud services can introduce latency, privacy concerns, and dependency on network connectivity. Hybrid architectures, wherein some NLP tasks run on-device while others run in the cloud, improve responsiveness and protect user data. For instance, a frontend device could locally handle wake-word detection ("Hey Siri") and basic NLU tasks, while offloading complex queries to the cloud. Edge computing frameworks allow the deployment of lightweight NLP models on smartphones or IoT devices using libraries like TensorFlow Lite. This approach reduces round-trip time and can function offline, catering to scenarios like voice commands in low-connectivity environments (e.g., remote industrial settings and rural areas). Applications of NLP in Voice-Controlled Frontends Accessibility Voice-controlled frontends significantly improve accessibility for users with visual impairments, motor disabilities, or cognitive challenges. Conversational interfaces reduce reliance on complex GUIs. For instance, voice-enabled navigation on news websites, educational portals, or workplace tools can empower individuals who struggle with traditional input methods. Research from the World Wide Web Consortium (W3C) and A11Y communities highlights how inclusive voice interfaces support independent living, learning, and employment. Smart Homes and IoT Smart home adoption is accelerating, and NLP-driven voice controls are integral to this growth. Users can command lights, thermostats, and security systems through natural language instructions. Virtual assistants (Alexa, Google Assistant, Apple Siri) integrate seamlessly with third-party devices, offering a unified voice interface for a broad ecosystem. Recent research explores adaptive language models that learn user preferences over time, providing proactive suggestions and energy-saving recommendations. E-Commerce and Customer Support Voice-enabled e-commerce platforms offer hands-free shopping experiences. Users can search for products, check order statuses, and reorder items using voice commands. Integrations with recommendation systems and NLU-driven chatbots enable personalized product suggestions and simplified checkout processes.
Studies have shown improved customer satisfaction and reduced friction in conversational commerce experiences. Voice-enabled customer support systems, integrated with NLU backends, can handle frequently asked questions, guide users through troubleshooting steps, and escalate complex issues to human agents. The result is improved operational efficiency, reduced wait times, and a more user-friendly support experience. Gaming and Entertainment Voice control in gaming offers immersive, hands-free interactions. Players can issue commands, navigate menus, and interact with non-player characters through speech. This enhances realism and accessibility. Similarly, entertainment platforms (e.g., streaming services) allow voice navigation for selecting shows, adjusting volume, or searching content across languages. The synergy of NLP and 3D interfaces in AR/VR environments promises even more engaging and intuitive experiences. Challenges and Limitations Despite the progress in NLP-driven voice frontends, several challenges persist: Language Diversity and Multilingual Support Most NLP models are predominantly trained on high-resource languages (English, Mandarin, Spanish), leaving many languages and dialects underserved. Low-resource languages, characterized by limited annotated data, present difficulty for both ASR and NLU. Research on transfer learning, multilingual BERT-based models (Pires et al., 2019), and unsupervised pre-training aims to extend coverage to a wider range of languages. Solutions like building language-agnostic sentence embeddings and leveraging cross-lingual transfer techniques hold promise for truly global, inclusive voice interfaces. Contextual Understanding and Memory Maintaining conversation context is non-trivial. Users expect the system to remember previous turns, references, and implied information. Sophisticated approaches — such as Transformer models with attention mechanisms — help track dialogue history. Dialogue state tracking and knowledge-grounded conversation models (Dinan et al., 2019) enable more coherent multi-turn conversations. However, achieving human-level contextual reasoning remains an open research problem. Privacy and Security Voice data is sensitive. Continuous listening devices raise concerns about data misuse, unauthorized access, and user profiling. Developers must ensure strong encryption, consent-based data collection, and clear privacy policies. Privacy-preserving machine learning (differential privacy, federated learning) allows on-device model updates without sending raw voice data to the cloud. Regulatory frameworks like GDPR and CPRA push for transparent handling of user data. Scalability and Performance Voice-controlled frontends must handle potentially millions of concurrent requests. Scaling NLP services cost-effectively demands efficient load balancing, caching strategies for frequently accessed data, and model optimization techniques (quantization, pruning, distillation) to accelerate inference. Techniques such as GPU acceleration, model parallelism, and distributed training help manage computational overhead. Advancements and Opportunities Pre-Trained Language Models and Fine-Tuning The advent of large pre-trained models like BERT, GPT-3/4, and T5 has revolutionized NLP. These models, trained on massive corpora, have strong generalization capabilities. 
For voice applications, fine-tuning these models for domain-specific tasks — such as specialized medical vocabularies or technical support dialogues — improves understanding and response quality. OpenAI’s GPT-4, for example, can reason more accurately over complex instructions, enhancing both NLU and NLG for voice interfaces. Edge Computing and On-Device NLP Running NLP models directly on devices offers latency reductions, offline functionality, and improved privacy. Accelerators like Google’s Coral or Apple’s Neural Engine support efficient inference at the edge. Research focuses on compression and optimization techniques (mobileBERT, DistilBERT) to shrink model sizes without significantly degrading accuracy. This approach enables personalized voice experiences that adapt to the user’s environment and context in real time. Multimodal Interaction Future voice interfaces will not rely solely on audio input. Combining speech with visual cues (e.g., AR overlays), haptic feedback, or gesture recognition can create richer, more intuitive interfaces. Multimodal NLP (Baltrušaitis et al., 2019) merges language understanding with vision and other sensory data, allowing systems to ground commands in the physical world. This synergy can improve disambiguation, accessibility, and situational awareness. Personalization and User Modeling Incorporating user-specific preferences, interaction history, and personalization is a key frontier. Reinforcement learning-based approaches can optimize dialogue strategies based on user feedback. Adaptive language models, trained incrementally on user data (with privacy safeguards), can refine vocabulary, style, and responses. Such personalization leads to more satisfying experiences, reduces friction, and encourages sustained engagement. Ethical Considerations Bias and Fairness Large language models trained on web-scale data inherit societal biases present in the data. This leads to potential unfair treatment or exclusion of certain demographic groups. Voice-controlled systems must mitigate biases by curating training corpora, applying bias detection algorithms, and conducting thorough bias and fairness audits. Academic and industry efforts, including the Partnership on AI’s fairness guidelines, aim to develop standardized benchmarks and best practices. Transparency and Explainability Users should understand how voice-controlled systems make decisions. Explainable NLP techniques help surface system reasoning processes, indicating which parts of a query influenced a particular response. While neural models often function as “black boxes,” research on attention visualization and interpretable embeddings attempts to shed light on model decisions. Regulatory bodies may require such transparency for compliance and user trust. User Consent and Data Governance Users must be informed about how their voice data is collected, stored, and used. Applications should provide opt-in mechanisms, allow data deletion, and offer clear privacy statements. Data governance frameworks must align with local regulations, ensure secure data handling, and minimize the risk of data breaches or unauthorized surveillance. Case Studies Voice Assistants in Healthcare In healthcare settings, voice-controlled interfaces facilitate patient triage, symptom checks, and medication reminders. For example, conversational agents integrated with Electronic Health Record (EHR) systems can assist clinicians in retrieving patient data hands-free, improving workflow efficiency and reducing administrative burden. 
Studies (Shickel et al., 2018) show that voice interfaces can enhance patient engagement and adherence to treatment plans, though privacy and data compliance (HIPAA) remain critical. Voice Commerce Retailers integrate voice search and ordering capabilities to reduce friction in the shopping experience. For instance, Walmart’s voice-shopping feature allows users to add items to their carts by simply stating product names. Research indicates that streamlined voice interactions can improve conversion rates and user satisfaction, especially when paired with recommendation engines that leverage NLU to comprehend user preferences. Smart Cities Voice-controlled kiosks, public information systems, and transportation hubs can guide citizens and visitors through unfamiliar environments. Tourists might ask for restaurant recommendations, bus schedules, or directions to landmarks. Combining NLP with geospatial data and public APIs fosters intuitive, inclusive urban experiences. Pilot projects in cities like Seoul and Barcelona have explored voice-enabled access to public services, improving accessibility for non-technical populations. Future Directions Low-Resource Languages and Code-Switching Developing robust NLP solutions for languages with scarce training data remains a pressing challenge. Transfer learning, multilingual embeddings, and unsupervised pre-training on unlabeled text corpora aim to bridge this gap. Code-switching — when speakers alternate between languages within a single conversation — further complicates the NLP pipeline. Research on code-switching corpora and models is critical for voice applications in linguistically diverse regions. Emotion and Sentiment Recognition Detecting user emotions can enable more empathetic and context-sensitive responses. Emotion recognition in speech (Schuller et al., 2018) involves analyzing prosody, pitch, and energy, while sentiment analysis on textual transcriptions provides additional cues. Emotion-aware interfaces could, for example, adjust their tone or offer calming responses in stressful situations (e.g., technical support sessions). Real-Time Multilingual NLP As global connectivity increases, real-time multilingual NLP could allow seamless communication between speakers of different languages. Advances in neural machine translation, combined with on-the-fly ASR and TTS, enable voice interfaces to serve as universal translators. This capability can foster cross-cultural collaboration and improve accessibility in international contexts. Zero-Shot and Few-Shot Learning Zero-shot learning allows models to handle tasks without direct training examples. In voice applications, zero-shot NLU could interpret novel commands or domain-specific requests without prior fine-tuning. Few-shot learning reduces the amount of annotated data needed to adapt models to new domains. These paradigms promise more agile development cycles, lowering barriers for custom voice interfaces. Conclusion Natural Language Processing forms the bedrock of voice-controlled frontend applications, empowering more natural, inclusive, and intuitive human-computer interactions. Advances in ASR, NLU, NLG, and TTS, combined with scalable architectures, have made it possible to deploy voice interfaces across diverse domains — ranging from smart homes and healthcare to e-commerce and urban services. The journey is far from complete. 
Ongoing research addresses challenges in handling language diversity, maintaining conversational context, ensuring user privacy, and scaling NLP systems efficiently. Ethical considerations, such as bias mitigation and explainability, remain paramount as these technologies become increasingly pervasive in daily life. Looking ahead, innovations in edge computing, multimodal interaction, and personalization will further enhance the capabilities and reach of voice-controlled frontends. Zero-shot learning and real-time multilingual NLP will break down language barriers, and emotion recognition will lead to more empathetic and user-centric experiences. By continuing to invest in research, responsible development, and inclusive design principles, we can realize the full potential of NLP for voice-controlled front-end applications — ultimately making digital services more accessible, natural, and empowering for everyone.

References
Baevski, A., Zhou, Y., Mohamed, A., & Auli, M. (2020). wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. Advances in Neural Information Processing Systems (NeurIPS).
Baltrušaitis, T., Ahuja, C., & Morency, L.-P. (2019). Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 423-443.
Brown, T., Mann, B., Ryder, N., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems (NeurIPS).
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of NAACL-HLT.
Dinan, E., Roller, S., Shuster, K., et al. (2019). Wizard of Wikipedia: Knowledge-Powered Conversational Agents. International Conference on Learning Representations (ICLR).
Oord, A. v. d., Dieleman, S., Zen, H., et al. (2016). WaveNet: A Generative Model for Raw Audio. ArXiv:1609.03499.
Pires, T., Schlinger, E., & Garrette, D. (2019). How multilingual is Multilingual BERT? Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Schuller, B., Batliner, A., Steidl, S., & Seppi, D. (2018). Recognising Realistic Emotions and Affect in Speech: State of the Art and Lessons Learnt from the First Challenge. Speech Communication, 53(9–10), 1062–1087.
Shen, J., Pang, R., Weiss, R. J., et al. (2018). Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. ICASSP 2018.
Shickel, B., Tighe, P. J., Bihorac, A., & Rashidi, P. (2018). Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis. IEEE Journal of Biomedical and Health Informatics, 22(5), 1589-1604.
Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems (NeurIPS).
World Wide Web Consortium (W3C). (n.d.). Web Accessibility Initiative (WAI). [Online].
Since the Java platform adopted a six-month release cycle, we've moved past the perennial questions such as "Will Java die this year?" or "Is it worth migrating to the new version?". Despite 28 years since its first release, Java continues to thrive and remains a popular choice as the primary programming language for many new projects. Java 17 was a significant milestone, but Java 21 has now taken 17's place as the next long-term support release (LTS). It's essential for Java developers to stay informed about the changes and new features this version brings. Inspired by my colleague Darek, who detailed Java 17 features in his article, I've decided to discuss JDK 21 in a similar fashion (I've also analyzed Java 23 features in a follow-up piece, so check it out too). JDK 21 comprises a total of 15 JEPs (JDK Enhancement Proposals). You can review the complete list on the official Java site. In this article, I'll highlight several Java 21 JEPs that I believe are particularly noteworthy, namely: String Templates, Sequenced Collections, Pattern Matching for switch and Record Patterns, and Virtual Threads. Without further delay, let's delve into the code and explore these updates. String Templates (Preview) The String Templates feature is still in preview mode. To use it, you have to add the --enable-preview flag to your compiler args. However, I've decided to mention it despite its preview status. Why? Because I get very irritated every time I have to write a log message or SQL statement that contains many arguments, or have to decipher which placeholder will be replaced with a given arg. And String Templates promises to help me (and you) with that. As the JEP documentation says, the purpose of String Templates is to "simplify the writing of Java programs by making it easy to express strings that include values computed at run time." Let's check if it really is simpler. The "old way" would be to use the formatted() method on a String object: var msg = "Log message param1: %s, param2: %s".formatted(p1, p2); Now, with StringTemplate.Processor (STR), it looks like this: var interpolated = STR."Log message param1: \{p1}, param2: \{p2}"; With a short text like the one above, the benefit may not be that visible — but believe me, when it comes to big text blocks (JSONs, SQL statements, etc.), named parameters will help you a lot. Sequenced Collections Java 21 introduced a new Java Collection Hierarchy. Look at the diagram below and compare it to what you probably learned during your programming classes. You'll notice that three new interfaces have been added (highlighted in green). Image source: JEP 431 Sequenced collections introduce a new built-in Java API, enhancing operations on ordered datasets. This API allows not only convenient access to the first and last elements of a collection but also enables efficient traversal, insertion at specific positions, and retrieval of sub-sequences. These enhancements make operations that depend on the order of elements simpler and more intuitive, improving both performance and code readability when working with lists and similar data structures.
This is the full listing of the SequencedCollection interface: Java public interface SequencedCollection<E> extends Collection<E> { SequencedCollection<E> reversed(); default void addFirst(E e) { throw new UnsupportedOperationException(); } default void addLast(E e) { throw new UnsupportedOperationException(); } default E getFirst() { return this.iterator().next(); } default E getLast() { return this.reversed().iterator().next(); } default E removeFirst() { var it = this.iterator(); E e = it.next(); it.remove(); return e; } default E removeLast() { var it = this.reversed().iterator(); E e = it.next(); it.remove(); return e; } } So, now, instead of: Java var first = myList.stream().findFirst().get(); var anotherFirst = myList.get(0); var last = myList.get(myList.size() - 1); We can just write: Java var first = sequencedCollection.getFirst(); var last = sequencedCollection.getLast(); var reversed = sequencedCollection.reversed(); This is a small change, but IMHO, it's such a convenient and usable feature. Pattern Matching and Record Patterns Because of the similarity of Pattern Matching for switch and Record Patterns, I will describe them together. Record Patterns are a fresh feature: they were introduced in Java 19 (as a preview). Pattern Matching for switch, on the other hand, is more of a continuation of the extended instanceof expression. It brings new syntax for switch statements that lets you express complex data-oriented queries more easily. Let's forget about the basics of OOP for the sake of this example and deconstruct the employee object manually (employee is a POJO class). Before Java 21, it looked like this: Java if (employee instanceof Manager e) { System.out.printf("I'm dealing with manager of %s department%n", e.department); } else if (employee instanceof Engineer e) { System.out.printf("I'm dealing with %s engineer.%n", e.speciality); } else { throw new IllegalStateException("Unexpected value: " + employee); } What if we could get rid of the ugly instanceof? Well, now we can, thanks to the power of Pattern Matching in Java 21: Java switch (employee) { case Manager m -> System.out.printf("Manager of %s department%n", m.department); case Engineer e -> System.out.printf("I'm dealing with %s engineer.%n", e.speciality); default -> throw new IllegalStateException("Unexpected value: " + employee); } While talking about the switch statement, we can also discuss the Record Patterns feature. When dealing with a Java record, it allows us to do much more than with a standard Java class: Java switch (shape) { // shape is a record case Rectangle(int a, int b) -> System.out.printf("Area of rectangle [%d, %d] is: %d.%n", a, b, shape.calculateArea()); case Square(int a) -> System.out.printf("Area of square [%d] is: %d.%n", a, shape.calculateArea()); default -> throw new IllegalStateException("Unexpected value: " + shape); } As the code shows, with that syntax, record fields are easily accessible. Moreover, we can put additional logic into our case statements: Java switch (shape) { case Rectangle(int a, int b) when a < 0 || b < 0 -> System.out.printf("Incorrect values for rectangle [%d, %d].%n", a, b); case Square(int a) when a < 0 -> System.out.printf("Incorrect values for square [%d].%n", a); default -> System.out.println("Created shape is correct."); } We can use a similar syntax for if statements.
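For instance, a record pattern can be combined with an ordinary boolean guard directly in the condition. Here is a minimal sketch, reusing the Rectangle record from the examples above:
Java
// a and b are bound only when the pattern matches, so the guard can use them safely
if (shape instanceof Rectangle(int a, int b) && a > 0 && b > 0) {
    System.out.printf("Valid rectangle [%d, %d].%n", a, b);
}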
Also, in the example below, we can see that Record Patterns also work for nested records: Java if (r instanceof Rectangle(ColoredPoint(Point p, Color c), ColoredPoint lr)) { // do something } Virtual Threads The Virtual Threads feature is probably the hottest of all the Java 21 features — or at least the one Java developers have been waiting for the most. As the JEP documentation (linked in the previous sentence) says, one of the goals of virtual threads was to "enable server applications written in the simple thread-per-request style to scale with near-optimal hardware utilization". However, does this mean we should migrate all of our code that uses java.lang.Thread? First, let's examine the problem with the approach that existed before Java 21 (in fact, pretty much since Java's first release). We can approximate that one java.lang.Thread consumes (depending on the OS and configuration) about 2 to 8 MB of memory. However, the important thing here is that one Java Thread is mapped 1:1 to a kernel thread. For simple web apps that use a "one thread per request" approach, we can easily calculate that either our machine will be "killed" when traffic increases (it won't be able to handle the load) or we'll be forced to purchase a device with more RAM, and our AWS bills will increase as a result. Of course, virtual threads are not the only way to handle this problem. We have asynchronous programming (frameworks like WebFlux or native Java APIs like CompletableFuture). However, for some reason — maybe because of the "unfriendly API" or high entry threshold — these solutions aren't that popular. Virtual Threads aren't overseen or scheduled by the operating system. Rather, their scheduling is handled by the JVM. While real tasks must be executed in a platform thread, the JVM employs so-called carrier threads — essentially platform threads — to "carry" any virtual thread when it is due for execution. Virtual Threads are designed to be lightweight and use much less memory than standard platform threads. The diagram below shows how Virtual Threads are connected to platform and OS threads: So, to see how Virtual Threads are used by Platform Threads, let's run code that starts one more virtual thread than the machine has CPU cores (in my case, 8 cores, so 9 virtual threads). Java var numberOfCores = 8; final ThreadFactory factory = Thread.ofVirtual().name("vt-", 0).factory(); try (var executor = Executors.newThreadPerTaskExecutor(factory)) { IntStream.range(0, numberOfCores + 1) .forEach(i -> executor.submit(() -> { var thread = Thread.currentThread(); System.out.println(STR."[\{thread}] VT number: \{i}"); try { sleep(Duration.ofSeconds(1L)); } catch (InterruptedException e) { throw new RuntimeException(e); } })); } The output looks like this: Plain Text [VirtualThread[#29,vt-6]/runnable@ForkJoinPool-1-worker-7] VT number: 6 [VirtualThread[#26,vt-4]/runnable@ForkJoinPool-1-worker-5] VT number: 4 [VirtualThread[#30,vt-7]/runnable@ForkJoinPool-1-worker-8] VT number: 7 [VirtualThread[#24,vt-2]/runnable@ForkJoinPool-1-worker-3] VT number: 2 [VirtualThread[#23,vt-1]/runnable@ForkJoinPool-1-worker-2] VT number: 1 [VirtualThread[#27,vt-5]/runnable@ForkJoinPool-1-worker-6] VT number: 5 [VirtualThread[#31,vt-8]/runnable@ForkJoinPool-1-worker-6] VT number: 8 [VirtualThread[#25,vt-3]/runnable@ForkJoinPool-1-worker-4] VT number: 3 [VirtualThread[#21,vt-0]/runnable@ForkJoinPool-1-worker-1] VT number: 0 So, the ForkJoinPool-1-worker-X Platform Threads are our carrier threads that manage our virtual threads.
We observe that Virtual Threads number 5 and 8 are using the same carrier thread, number 6. The last thing about Virtual Threads I want to show you is how they can help you with blocking I/O operations. Whenever a Virtual Thread encounters a blocking operation, such as I/O tasks, the JVM efficiently detaches it from the underlying physical thread (the carrier thread). This detachment is critical because it frees up the carrier thread to run other Virtual Threads instead of being idle, waiting for the blocking operation to complete. As a result, a single carrier thread can multiplex many Virtual Threads, which could number in the thousands or even millions, depending on the available memory and the nature of the tasks performed. Let's try to simulate this behavior. To do this, we will force the virtual thread scheduler to use only one carrier thread and start only 2 virtual threads — for better clarity. Java System.setProperty("jdk.virtualThreadScheduler.parallelism", "1"); System.setProperty("jdk.virtualThreadScheduler.maxPoolSize", "1"); System.setProperty("jdk.virtualThreadScheduler.minRunnable", "1"); Thread 1: Java Thread v1 = Thread.ofVirtual().name("long-running-thread").start( () -> { var thread = Thread.currentThread(); while (true) { try { Thread.sleep(250L); System.out.println(STR."[\{thread}] - Handling http request ...."); } catch (InterruptedException e) { throw new RuntimeException(e); } } } ); Thread 2: Java Thread v2 = Thread.ofVirtual().name("entertainment-thread").start( () -> { try { Thread.sleep(1000L); } catch (InterruptedException e) { throw new RuntimeException(e); } var thread = Thread.currentThread(); System.out.println(STR."[\{thread}] - Executing when 'http-thread' hit 'sleep' function"); } ); Execution: Java v1.join(); v2.join(); Result: Plain Text [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#23,entertainment-thread]/runnable@ForkJoinPool-1-worker-1] - Executing when 'http-thread' hit 'sleep' function [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... [VirtualThread[#21,long-running-thread]/runnable@ForkJoinPool-1-worker-1] - Handling http request .... We observe that both Virtual Threads (long-running-thread and entertainment-thread) are carried by only one Platform Thread, ForkJoinPool-1-worker-1. To summarize, this model enables Java applications to achieve high levels of concurrency and scalability with much lower overhead than traditional thread models, where each thread maps directly to a single operating system thread. It's worth noting that virtual threads are a vast topic, and what I've described is only a small fraction. I strongly encourage you to learn more about the scheduling, pinned threads, and the internals of VirtualThreads.
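As a small taste of the pinning topic mentioned above: in JDK 21, when a virtual thread blocks inside a synchronized block, the JVM cannot unmount it, so the carrier thread stays occupied. Below is a minimal sketch of how to observe this; the class and thread names are purely illustrative, and running it with -Djdk.tracePinnedThreads=full makes the JDK print a stack trace whenever a virtual thread blocks while pinned.
Java
import java.util.concurrent.locks.ReentrantLock;

public class PinningDemo {
    private static final Object MONITOR = new Object();
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // Sleeping inside a synchronized block pins the virtual thread to its carrier
        Thread pinned = Thread.ofVirtual().name("pinned-vt").start(() -> {
            synchronized (MONITOR) {
                sleepQuietly();
            }
        });

        // Sleeping while holding a ReentrantLock lets the JVM unmount the virtual thread
        Thread friendly = Thread.ofVirtual().name("friendly-vt").start(() -> {
            LOCK.lock();
            try {
                sleepQuietly();
            } finally {
                LOCK.unlock();
            }
        });

        pinned.join();
        friendly.join();
    }

    private static void sleepQuietly() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}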
Summary: The Future of the Java Programming Language The features described above are the ones I consider to be the most important in Java 21. Most of them aren't as groundbreaking as some of the things introduced in JDK 17, but they're still very useful, nice-to-have quality-of-life (QOL) changes. However, you shouldn't discount the other JDK 21 improvements either — I highly encourage you to analyze the complete list and explore all the features further. For example, one thing I consider particularly noteworthy is the Vector API, which lets you express vector computations that compile to optimized instructions on supported CPU architectures — something that wasn't directly expressible before (a small sketch of what it looks like follows this summary). Currently, it's still in incubator/experimental status (which is why I didn't highlight it in more detail here), but it holds great promise for the future of Java. Overall, the advancements Java has made in various areas signal the team's ongoing commitment to improving efficiency and performance in high-demand applications.
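For the curious, here is a minimal sketch of what element-wise multiplication with the incubating Vector API can look like. The class and method names are my own illustration, the program needs the --add-modules jdk.incubator.vector flag, and the API may still change while it incubates.
Java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorMultiply {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // c[i] = a[i] * b[i], processed SPECIES.length() lanes at a time
    static void multiply(float[] a, float[] b, float[] c) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.mul(vb).intoArray(c, i);
        }
        // Scalar loop for the remaining tail elements
        for (; i < a.length; i++) {
            c[i] = a[i] * b[i];
        }
    }
}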
OneLake in Microsoft Fabric aims to give an enterprise a consolidated analytical foundation by bringing its data and tools together into one logical lake. OneLake, which is automatically available across all Microsoft Fabric tenants, enables users to manage large volumes of data without building separate databases or storage overlays, encouraging data reuse across the analytical ecosystem. Overview of Medallion Architecture Medallion Architecture, a systematic data management approach, offers a three-tier structure for data processing: Bronze, Silver, and Gold. Bronze Layer This foundational layer ingests diverse data types, including unstructured, semi-structured, and structured data. The aim here is to preserve data in its raw form as it is stored so that it can be reprocessed in the future. Silver Layer Data in this middle layer is cleaned and transformed to ensure uniformity and fitness for use. Data from each source is made suitable for analysis through cleaning, joining, and filtering methods that allow data from different sources to be unified. The layer therefore resolves errors and non-standardized structures and improves data quality, making it easier for end users to understand the data. Gold Layer The final layer holds the most refined and consolidated data, suited to particular business needs such as reporting, machine learning, or analytics. It applies business logic and requirements and delivers aggregated data suitable for operational queries and dashboards. Why Use OneLake With Medallion Architecture? Several important factors support the integration of Microsoft OneLake with the Medallion Architecture in the Fabric environment. Scalability OneLake provides a unified logical data hub that keeps pace with growing data volumes. As data progresses from the Bronze layer to the Silver and Gold layers of the Medallion Architecture, this scalability ensures that there is always sufficient capacity for processing and storage requirements. Data Governance By organizing data into layers with different characteristics and access levels, OneLake and the Medallion structure allow for greater data control. Such a clear-cut framework reduces the processes and procedures needed to meet legal and corporate compliance requirements while ensuring standardized data handling and storage. Security OneLake applies multi-layered security, offering fine-grained access permissions for folders, items, and workspaces. Organizations can use role-based access permissions for every layer of the Medallion Architecture to protect sensitive information throughout the data lifecycle. How Medallion Architecture Works in OneLake With Microsoft OneLake, data is divided into the three Medallion Architecture levels, Bronze, Silver, and Gold, in increasing order of refinement. The Bronze layer ingests and retains unprocessed data from various sources without modifying it. This data undergoes cleansing and transformation in the Silver layer to ensure consistency and quality, and the Gold layer then exposes curated, business-ready data. Structuring data this way in OneLake makes access easier across the organization, improves data quality, creates more controlled processes, and ensures that reporting is based on clear, purposeful quantitative insights. Security and Governance in OneLake Built-In Security Mechanisms A well-designed security model within OneLake secures data in all layers.
In this model, Role-Based Access Control (RBAC) allows administrators to grant specific access rights according to the user's purpose. This makes it possible to prevent unauthorized users from reading sensitive information at the various levels of the Medallion Architecture. Moreover, data is encrypted both in transit and at rest to prevent unauthorized access. Data Lineage and Governance As part of Fabric, data lineage tracking tools make data transparency and compliance possible throughout the data pipeline. Data lineage tracking lets users trace data back to its creation and through every data processing phase. This is very important for meeting regulatory requirements, as it simplifies auditing tasks and the processes involved in confirming how data has moved. In addition, Fabric provides the tools needed to control and monitor data quality in the Bronze, Silver, and Gold tiers, enabling timely governance and sustained trust in the data used for decision-making and analytics. Best Practices for OneLake Medallion Architecture Below are some Medallion Architecture best practices for OneLake: Optimizing Performance Across Layers All layers should employ low-latency techniques for storing and configuring data sets in OneLake's Medallion Architecture. One such technique is partitioning large data sets, which speeds up information retrieval and processing. Moreover, dynamically adjusting compute resources in response to workload demands helps guarantee that performance requirements are met during busy periods. This elasticity allows computing capacity to match data processing requirements without over-provisioning, improving performance relative to resource consumption. Cost Optimization Cost management in OneLake does not happen by itself; effort must be made to ensure that resources are scaled efficiently. With the help of the cost management tools available in Fabric, resource requirements can be estimated, and expenditure on storage and compute can be monitored and controlled to minimize waste. Maintenance and Monitoring A well-defined maintenance and continuous monitoring plan is one of the most pivotal factors in sustaining an effective data pipeline in OneLake. Pipeline performance is measurable through built-in monitoring tools, which help users pinpoint bottlenecks and ensure seamless data movement between layers. To keep the pipeline in good working order, it is recommended that performance evaluations and adjustments be carried out frequently. This also helps prevent problems before they occur, so the architecture can guarantee access to data at any time and in the required quality. Accessing OneLake and Data Consumption Using OneLake File Explorer for Data Access OneLake File Explorer enables users to browse and manipulate OneLake data through a familiar File Explorer interface. This tool provides a file-system view that allows users to browse, view, and work with data files. OneLake File Explorer lets its users work with data from the different levels (Bronze, Silver, and Gold), including data that has been archived. The tool also helps users access the data and investigate problems, such as issues in the data pipeline.
Consumption of Data in OneLake Data in OneLake can be consumed through many tools, such as Power BI, Synapse Analytics, and other external tools. Power BI's integration with OneLake lets users access OneLake directly to visualize and analyze data and to produce interactive reports and dashboards. Synapse, in turn, provides comprehensive analytics and transformation capabilities within the lake. Conclusion Combining OneLake and the Medallion Architecture in a Fabric environment creates a secure, efficient, and easily scalable framework for data management. This layered approach, in turn, improves data governance, quality, and accessibility and gives organizations the confidence to make data-centered decisions.
Back-end development can feel like you’re constantly putting out fires — one messy query here, a crashing API call there. But it doesn’t have to be that way! By using well-established design patterns, you can make your codebase more organized, scalable, and easier to maintain. Plus, it’ll keep your boss impressed and your weekends stress-free. Here are some essential back-end patterns every developer should know, with examples in Java to get you started. 1. Repository Pattern: Tidy Up Your Data Layer If your application’s data access logic is scattered across your codebase, debugging becomes a nightmare. The Repository Pattern organizes this mess by acting as an intermediary between the business logic and the database. It abstracts data access so you can switch databases or frameworks without rewriting your app logic. Why It’s Useful Simplifies testing by decoupling business logic from data access.Reduces repetitive SQL or ORM code.Provides a single source of truth for data access. Example in Java Java public interface UserRepository { User findById(String id); List<User> findAll(); void save(User user); } public class UserRepositoryImpl implements UserRepository { private EntityManager entityManager; public UserRepositoryImpl(EntityManager entityManager) { this.entityManager = entityManager; } @Override public User findById(String id) { return entityManager.find(User.class, id); } @Override public List<User> findAll() { return entityManager.createQuery("SELECT u FROM User u", User.class).getResultList(); } @Override public void save(User user) { entityManager.persist(user); } } 2. CQRS Pattern: Give Reads and Writes Their Space The Command Query Responsibility Segregation (CQRS) pattern is all about separating read and write operations into different models. This allows you to optimize each one independently. For example, you could use an optimized database for reads (like Elasticsearch) and a transactional database for writes (like PostgreSQL). Why It’s Awesome Optimizes performance for read-heavy or write-heavy systems.Simplifies scalability by isolating workloads.Allows different data structures for reading and writing. Example in Java Java // Command: Writing data public void createOrder(Order order) { entityManager.persist(order); } // Query: Reading data public Order getOrderById(String id) { return entityManager.find(Order.class, id); } 3. Builder Pattern: Create Complex Objects With Ease Constructing objects with multiple optional parameters can lead to bloated constructors. The Builder Pattern solves this problem by providing a step-by-step approach to creating objects. Why You’ll Love It Keeps constructors clean and readable.Makes object creation more modular and flexible.Simplifies debugging and testing. Example in Java Java public class Order { private String id; private double amount; private Order(Builder builder) { this.id = builder.id; this.amount = builder.amount; } public static class Builder { private String id; private double amount; public Builder setId(String id) { this.id = id; return this; } public Builder setAmount(double amount) { this.amount = amount; return this; } public Order build() { return new Order(this); } } } // Usage Order order = new Order.Builder() .setId("123") .setAmount(99.99) .build(); 4. Event-Driven Architecture: Let Services Communicate Smoothly Microservices thrive on asynchronous communication. The Event-Driven Architecture pattern allows services to publish events that other services can subscribe to. 
It decouples systems and ensures they remain independent yet coordinated. Why It Works Simplifies scaling individual services.Handles asynchronous workflows like notifications or audit logs.Makes your architecture more resilient to failures. Example in Java Java // Event publisher public class EventPublisher { private final List<EventListener> listeners = new ArrayList<>(); public void subscribe(EventListener listener) { listeners.add(listener); } public void publish(String event) { for (EventListener listener : listeners) { listener.handle(event); } } } // Event listener public interface EventListener { void handle(String event); } // Usage EventPublisher publisher = new EventPublisher(); publisher.subscribe(event -> System.out.println("Received event: " + event)); publisher.publish("OrderCreated"); 5. Saga Pattern: Keep Distributed Transactions in Check When multiple services are involved in a single transaction, things can get messy. The Saga Pattern coordinates distributed transactions by breaking them into smaller steps. If something goes wrong, it rolls back previously completed steps gracefully. Why It’s Essential Ensures data consistency in distributed systems.Simplifies failure handling with compensating actions.Avoids the need for a central transaction manager. Example in Java Java public class OrderSaga { public boolean processOrder(Order order) { try { createOrder(order); deductInventory(order); processPayment(order); return true; } catch (Exception e) { rollbackOrder(order); return false; } } private void createOrder(Order order) { // Create order logic } private void deductInventory(Order order) { // Deduct inventory logic } private void processPayment(Order order) { // Payment processing logic } private void rollbackOrder(Order order) { System.out.println("Rolling back transaction for order: " + order.getId()); // Rollback logic } } Wrapping Up: Patterns Are Your Best Friend Design patterns aren’t just fancy concepts — they’re practical solutions to everyday back-end challenges. Whether you’re managing messy data access, handling distributed transactions, or just trying to keep your codebase sane, these patterns are here to help. So, the next time someone asks how you built such an efficient, maintainable backend, just smile and say, “It’s all about the patterns.”
AI/ML workflows excel on structured, reliable data pipelines. To streamline these processes, DBT and Snowpark offer complementary capabilities: DBT is for modular SQL transformations, and Snowpark is for programmatic Python-driven feature engineering. Here are some key benefits of using DBT, Snowpark, and Snowflake together: Simplifies SQL-based ETL with DBT’s modularity and tests.Handles complex computations with Snowpark’s Python UDFs.Leverages Snowflake’s high-performance engine for large-scale data processing. Here’s a step-by-step guide to installing, configuring, and integrating DBT and Snowpark into your workflows. Step 1: Install DBT In a shell, you can use Python’s pip command to install the packages. Assuming Python is already installed and added to your PATH, follow these steps: Shell # Set up a Python virtual environment (recommended): python3 -m venv dbt_env source dbt_env/bin/activate # Install DBT and the Snowflake adapter: pip install dbt-snowflake # Verify DBT installation dbt --version Step 2: Install Snowpark Shell # Install Snowpark for Python pip install snowflake-snowpark-python # Install additional libraries for data manipulation pip install pandas numpy # Verify Snowpark installation python -c "from snowflake.snowpark import Session; print('successful Snowpark installation')" Step 3: Configuring DBT for Snowflake DBT requires a profiles.yml file to define connection settings for Snowflake. Locate the DBT Profiles Directory By default, DBT expects the profiles.yml file in the ~/.dbt/ directory. Create the directory if it doesn’t exist: Shell mkdir -p ~/.dbt Create the profiles.yml File Define your Snowflake credentials in the following format: YAML my_project: outputs: dev: type: snowflake account: your_account_identifier user: your_username password: your_password role: your_role database: your_database warehouse: your_warehouse schema: your_schema target: dev Replace placeholders like your_account_identifier with your Snowflake account details. Test the Connection Run the following command to validate your configuration: Shell dbt debug If the setup is correct, you’ll see a success message confirming the connection. Step 4: Setting Up Snowpark Ensure Snowflake Permissions Before using Snowpark, ensure your Snowflake user has the following permissions: Access to the warehouse and schema.Ability to create and register UDFs (User-Defined Functions). Create a Snowpark Session Set up a Snowpark session using the same credentials from profiles.yml: Python from snowflake.snowpark import Session def create_session(): connection_params = { "account": "your_account_identifier", "user": "your_username", "password": "your_password", "role": "your_role", "database": "your_database", "warehouse": "your_warehouse", "schema": "your_schema", } return Session.builder.configs(connection_params).create() session = create_session() print("Snowpark session created successfully") Register a Sample UDF Here’s an example of registering a simple Snowpark UDF for text processing (a permanent UDF needs a stage to hold its code, so replace @your_stage with an existing stage in your account): Python from snowflake.snowpark.types import StringType def clean_text(input_text): return input_text.strip().lower() session.udf.register( func=clean_text, name="clean_text_udf", input_types=[StringType()], return_type=StringType(), is_permanent=True, stage_location="@your_stage" ) print("UDF registered successfully") Step 5: Integrating DBT With Snowpark Suppose you have a DBT model named raw_table that contains raw data.
raw_table DBT Model Definition SQL -- models/raw_table.sql SELECT * FROM my_database.my_schema.source_table Use Snowpark UDFs in DBT Models Once you’ve registered a UDF in Snowflake using Snowpark, you can call it directly from your DBT models. SQL -- models/processed_data.sql WITH raw_data AS ( SELECT id, text_column FROM {{ ref('raw_table') }} ), cleaned_data AS ( SELECT id, clean_text_udf(text_column) AS cleaned_text FROM raw_data ) SELECT * FROM cleaned_data; Run DBT Models Execute your DBT models to apply the transformation: Shell dbt run --select processed_data Step 6: Advanced AI/ML Use Case For AI/ML workflows, Snowpark can handle tasks like feature engineering directly in Snowflake. Here’s an example of calculating text embeddings: Create an Embedding UDF Using Python and a pre-trained model, you can generate text embeddings (as with the earlier UDF, a permanent function needs a stage location): Python from transformers import pipeline from snowflake.snowpark.types import ArrayType, StringType def generate_embeddings(text): model = pipeline("feature-extraction", model="bert-base-uncased") return model(text)[0] session.udf.register( func=generate_embeddings, name="generate_embeddings_udf", input_types=[StringType()], return_type=ArrayType(), is_permanent=True, stage_location="@your_stage" ) Integrate UDF in DBT Call the embedding UDF in a DBT model to create features for ML: SQL -- models/embedding_data.sql WITH raw_text AS ( SELECT id, text_column FROM {{ ref('raw_table') }} ), embedded_text AS ( SELECT id, generate_embeddings_udf(text_column) AS embeddings FROM raw_text ) SELECT * FROM embedded_text; Best Practices Use DBT for reusable transformations: Break down complex SQL logic into reusable models.Optimize Snowpark UDFs: Write lightweight, efficient UDFs to minimize resource usage.Test Your Data: Leverage DBT’s testing framework for data quality.Version Control Everything: Track changes in DBT models and Snowpark scripts for traceability. Conclusion By combining DBT’s SQL-based data transformations with Snowpark’s advanced programming capabilities, you can build AI/ML pipelines that are both scalable and efficient. This setup allows teams to collaborate effectively while leveraging Snowflake’s computational power to process large datasets. Whether you’re cleaning data, engineering features, or preparing datasets for ML models, the DBT-Snowpark integration provides a seamless workflow to unlock your data’s full potential.
Requirements An organization's employee journey involves a number of events, some of which need to be reflected in various systems within the company. Workday will notify downstream systems, such as D365, DocuSign, and Salesforce, of events such as: ON_BOARDING, OFF_BOARDING, DEPARTMENT_CHANGE, OFFICE_LOCATION, MANAGER_CHANGE, and INDIVIDUAL_LEAVE. Some of the events on the list are necessary for particular downstream systems, while others are not. Here, learn how to design a flow that can guarantee 100% delivery to the necessary systems. Generic Data Model A generic data model is as follows: A set of data that can be reused across systems for various reasons.It offers relation types that are standardized.Any object can be classified using generic data models, and part-whole relations can be specified for any object.An infinite number of facts can be expressed by generic data models. Dynamic Data Model Consistency within an organization can be ensured by sharing dynamic data between platforms and departments. A flexible data structure is one that doesn't have to be predefined in a strict schema and can change to accommodate evolving needs or data kinds. Workday Events Workday can generate a number of events related to an employee, for example, ON_BOARDING, OFF_BOARDING, DEPARTMENT_CHANGE, OFFICE_LOCATION, MANAGER_CHANGE, INDIVIDUAL_LEAVE, etc. Based on their requirements, downstream systems (i.e., D365, DocuSign, Salesforce) can adopt all of the events or only a subset of them. To design a generic, dynamic, simple, and easily adaptable data model, Workday publishes events with the same message structure: action, companyCode, employeeId, email, customField1, customField2, etc. Event Data Model Examples Onboarding Event JSON { "action": "ON_BOARDING", "companyCode": "ANKURAN", "employeeId": "ABC125", "email": "bhuyan@ankur-online.com", "customField1": { "Name": "firstName", "Value": "Ankur" }, "customField2": { "Name": "lastName", "Value": "Bhuyan" }, "customField3": { "Name": "status", "Value": "Active" } } Offboarding Event JSON { "action": "OFF_BOARDING", "companyCode": "ANKURAN", "employeeId": "ABC125", "email": "bhuyan@ankur-online.com", "customField1": { "Name": "status", "Value": "Close" }, "customField2": { "Name": "terminationDate", "Value": "21-08-2024" } } Individual Leave JSON { "action": "INDIVIDUAL_LEAVE", "companyCode": "ANKURAN", "employeeId": "ABC125", "email": "bhuyan@ankur-online.com", "customField1": { "Name": "managerId", "Value": "XYZ123" }, "customField2": { "Name": "managerEmail", "Value": "manager@ankur-online.com" }, "customField3": { "Name": "leaveType", "Value": "private" }, "customField4": { "Name": "effectiveDate", "Value": "21-08-2024" }, "customField5": { "Name": "durationInHour", "Value": "8" } } Solution Design Workday Experience API Capture the "action" from the Workday input data and add it as anypoint-mq:properties, along with the "correlationId", while publishing.
XML <sub-flow name="update-user" doc:id="3e7ee53b-be64-45e9-93c8-c2fc1cf71a52"> <ee:transform doc:name="Prepare request parameter" doc:id="3ed4576c-6234-4053-b23e-c80a3733d53e" > <ee:variables > <ee:set-variable variableName="userProperties" > <![CDATA[%dw 2.0 output application/java --- { "correlationId" : vars.baselogger.correlationId default "", "action" : payload.action default "" }]]> </ee:set-variable> </ee:variables> </ee:transform> <logger level="INFO" doc:name="INFO : Published data" doc:id="33b3a36c-4cce-4133-885d-fa90e5265318" message="Published data : #[payload]"/> <logger level="INFO" doc:name="INFO : Published user properties" doc:id="82252294-19a3-4bdb-af58-b0409348198d" message="Published user properties : #[vars.userProperties]"/> <anypoint-mq:publish doc:name="Publish to X-USER-DEACTIVATE" doc:id="8fd1aebc-93e8-4d30-a77a-4dae339f7d27" config-ref="Anypoint_MQ_Config" destination="${anypoint.exchange}"> <anypoint-mq:body > <![CDATA[#[%dw 2.0 output application/json --- payload]]]> </anypoint-mq:body> <anypoint-mq:properties > <![CDATA[#[vars.userProperties]]]> </anypoint-mq:properties> </anypoint-mq:publish> </sub-flow> Anypoint MQ Exchange Configuration Configure the routing rules based on user properties (for example, action). Based on the configured rules, the data will be filtered and routed to the respective downstream systems. Matcher Configuration Property Type: String Equals: The value in this rule exactly matches the value in the message property.Prefix: The value of the message property starts with the value in this rule.Any of: The value of the message property matches any of the values in this rule.None of: The value of the message property matches none of the values in this rule.Exists: A property in the message has the same name as the property in this rule. Property Type: Number Equals: The value in this rule is equal to the value of the message property.Less than: The value in the message property is lower than the value in this rule.Less than or equal: The value in the message property is lower than or equal to the value in this rule.Greater than: The value in the message property is greater than the value in this rule.Greater than or equal: The value in the message property is greater than or equal to the value in this rule.Range: The value of the message property falls into the range of values allowed by this rule.None of: The value of the message property matches none of the values in this rule.Exists: A property in the message has the same name as the property in this rule. Conclusion The purpose of this article is to demonstrate the definition of a generic and dynamic data model that can be used for various publisher-generated events. Both the source and destination systems must agree on the specified data model in order to implement this strategy. Adopting this event-driven strategy is straightforward, and it reduces the organization's development effort because downstream systems no longer depend on one another. We have also seen how to configure rules on message exchanges in Anypoint MQ so that messages are routed to the right subscribers.