Open source refers to non-proprietary software whose source code anyone can view, modify, or enhance. It enables programmers to work on and collaborate on projects created by different teams, companies, and organizations.
When we are running servers, or even our local computers, different applications may install the same piece of software multiple times. For example, it is not uncommon to accidentally end up with two versions of Node.js installed on a server or computer. With multiple versions of Node.js installed, it can be confusing to know which version is running, or which will be used when we run the node command in a terminal window. If we want to know where a command comes from, we can use the which command to find where it is installed.

The which command has the following syntax, where [x], [y], and [z] are the commands we want to check:

Shell
which [x] [y] [z]

How to Use the which Command on Linux or Mac

Let's use our Node.js example to start with. If we want to know which Node.js is being used, we can simply type the following:

Shell
which node

This will then return something like this:

/root/.nvm/versions/node/v14.15.1/bin/node

Checking Multiple Commands With the which Command on Linux or Mac

If we want to check the location of multiple commands on Linux or Mac, we can use the usual which syntax, but separate each item we want to check with a space. For example, the command below checks both node and postfix:

Shell
which node postfix

And for me, it returns this:

/root/.nvm/versions/node/v14.15.1/bin/node
/usr/sbin/postfix
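If you suspect more than one copy of a command is installed, as in the Node.js example above, most common implementations of which (both the GNU and BSD versions) also accept the -a flag, which prints every match found on your PATH rather than just the first. The second path below is illustrative, not output from the machine used in this article:

Shell
which -a node

/root/.nvm/versions/node/v14.15.1/bin/node
/usr/bin/node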
What Is Playwright?

Playwright by Microsoft is the newest addition to the headless browser testing frameworks in popular use. Built by the same team that created Puppeteer (the headless browser testing framework for Google Chrome), Playwright, too, is an open-source Node.js-based framework. However, it provides wider coverage for cross-browser testing by supporting Chromium, Firefox, and WebKit, while Puppeteer supports only Chrome and Chromium. Playwright is compatible with Windows, Linux, and macOS, and can be integrated with major CI/CD servers such as Jenkins, CircleCI, Azure Pipelines, and TravisCI, in addition to testing frameworks like Jest, Jasmine, and Mocha. Besides JavaScript, Playwright also supports multiple programming languages such as Python, Java, and .NET C#, giving QAs more options when writing test scripts.

Playwright is highly useful for performing cross-browser testing on complex applications due to its wide coverage, accuracy, and high speed. It offers end-to-end testing through a high-level API that allows the tester to control headless browsers. When the tester runs a Playwright test script, the UI is readied in the backend before the test interacts with web elements. While other frameworks require testers to write explicit wait code, Playwright waits automatically, making it easier to write concise test scripts. Its capabilities also cover a wide range of complex scenarios, enabling comprehensive and flexible testing.

The auto-wait feature in Playwright performs all relevant checks for an element, and the requested action is performed only when the checks pass. This ensures that elements behave as expected and that test results are more accurate. Some actionability checks performed by Playwright include Attached, Visible, Stable, Receives Events, and Enabled.

Playwright also supports the execution of simultaneous tests (also known as parallel testing) through browser contexts. This scales up testing and comes in handy when multiple web pages have to be tested simultaneously. Here, one browser instance is used to create multiple, concurrent, and isolated browser contexts, which can be closed when no longer needed. Each of these browser contexts can host multiple web pages simultaneously. Scaling up when the volume is high and down when it is not ensures optimal usage of resources.

How To Run Playwright Tests

While Playwright launches browsers in headless mode by default, it can also run browsers in headful mode. By passing a flag when the browser is launched, Playwright can be used to run browsers in headful mode for tests. The following code can be used to launch a headful browser:

const { chromium } = require('playwright');
// to launch the headful browser for Firefox or WebKit, replace chromium with firefox or webkit
const browser = await chromium.launch({ headless: false });

For Linux systems, xvfb is essential for launching headful browsers. Since xvfb is pre-installed in the Playwright Docker image and GitHub Action, running xvfb before the Node.js command allows the browsers to run in headful mode:

xvfb-run node index.js

What Is Selenium?

Selenium is an open-source automation testing suite that is widely used for automation testing of web applications. It automates browsers and interacts with UI elements to replicate user actions in order to test whether a web application is functioning as expected.
Through its single interface, the Selenium framework allows the tester to write test scripts in different languages such as Java, Ruby, Perl, C#, Node.js, Python, and PHP, offering flexibility. Selenium supports a wide range of browsers and their different versions to enable cross-browser testing of web applications. It is the most popular framework used to test websites and ensure seamless, consistent user experiences across different browser and device combinations, which is why Selenium is one of the most trusted automated testing suites in the software industry.

Playwright vs Selenium

| Criteria | Playwright | Selenium |
| --- | --- | --- |
| Language | Supports multiple languages such as JavaScript, Java, Python, and .NET C# | Supports multiple languages such as Java, Python, C#, Ruby, Perl, PHP, and JavaScript |
| Ease of installation | Easy to install | Easy to install |
| Test runner frameworks supported | Mocha, Jest, Jasmine | Mocha, Jest, Jasmine, Protractor, and WebDriverIO |
| Prerequisites | Node.js must be installed | Java, Eclipse IDE, Selenium Standalone Server, client language bindings, and browser drivers must be installed |
| Operating systems supported | Windows, Linux, and macOS | Windows, Linux, Solaris, and macOS |
| Open source | Open source and free | Open source and free |
| Architecture | Headless browser with event-driven architecture | Layered architecture based on the JSON Wire Protocol |
| Browsers supported | Chromium, Firefox, and WebKit | Chrome, Firefox, IE, Edge, Opera, Safari, and more |
| Support | Since Playwright is fairly new, community support is limited compared to Selenium | Commercial support via sponsors in the Selenium ecosystem, self-support documentation, and strong community support from professionals across the world |
| Real device support | Does not support real devices but supports emulators | Supports real device clouds and remote servers |

Which One Is Preferred: Playwright or Selenium?

Both Playwright and Selenium have their own advantages and limitations, which means choosing between them depends on the scenario in which they will be used. Although Playwright offers fast testing of complex web applications with a headless architecture and requires only Node.js as a prerequisite, it is fairly new and lacks support on various levels such as community, browsers, real devices, language options, and integrations, all of which Selenium has to offer. However, each supports CI/CD for a software project with due accuracy. Playwright has the upper hand for complex web applications but has limited coverage; Selenium offers wide coverage, scalability, and flexibility, along with strong community support.
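Whichever framework you choose, trying Playwright requires very little setup. The following is a minimal, hedged sketch assuming Node.js and npm are already installed; the package name and browser-install command reflect Playwright's standard npm distribution at the time of writing, and index.js refers to a script like the headful example shown earlier:

npm init -y
npm i -D playwright
npx playwright install
node index.js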
Red Hat® OpenShift® is a widely adopted container platform powered by Kubernetes. As enterprise adoption of OpenShift grows, operators are often faced with the need to automatically update or generate configuration as well as ensure security and enforce best practices. Essentially, they are looking to provide guardrails so that developers can continue to use OpenShift without impacting other applications or introducing security vulnerabilities via misconfigurations. Kyverno, a Kubernetes-native policy engine, is perfect for this task and is often used to address these challenges. In this post, I will discuss how you can get started with Kyverno on the OpenShift Container Platform.

Red Hat OpenShift

Red Hat® OpenShift® Container Platform is the industry-leading hybrid cloud platform powered by containers and Kubernetes. Using the OpenShift Container Platform simplifies and accelerates the development, delivery, and lifecycle management of a hybrid mix of applications, consistently anywhere across on-premises, public clouds, and edge. OpenShift Container Platform is designed to deliver continuous innovation and speed at any scale, helping organizations be ready for today and build for the future.

Kyverno

Kyverno is an ideal solution for enabling automation, governance, and security on any Kubernetes-based platform, including OpenShift Container Platform. Kyverno runs as a dynamic admission controller in the cluster. It receives validating and mutating admission webhook HTTP callbacks from the kube-apiserver and applies matching policies to return results that enforce admission policies or reject requests. Kyverno policies are written in Kubernetes-native YAML, significantly reducing the learning curve required to write custom policies. Kyverno policies can match resources using the resource kind, name, and label selectors to trigger actions such as validate, mutate, generate, and image verification for container signing and software supply chain attestations.

Getting Started

In order to get started, you will need the following:

- OpenShift Container Platform 4.8 or higher installed
- Helm version 3.2 or greater installed and configured to access your OpenShift cluster
- kubectl installed and configured to access your OpenShift cluster

Once you have all the components, you can get started with the following steps:

1. Installing Kyverno
2. Installing Kyverno policies
3. Viewing the Policy Violation Report

Installing Kyverno

You will need cluster-admin permissions to install Kyverno. The latest instructions to install Kyverno can be found here. First, add the Kyverno Helm repository and update it.

helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

Next, install Kyverno on your OpenShift cluster. Note that the namespace kyverno will automatically be created.

helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace

Once the Helm chart is installed, check if the Kyverno pod is running.

kubectl get pods -n kyverno

Note: Depending on the size of your OpenShift cluster, i.e., the number of resources in your cluster, it may be necessary to increase the memory and CPU limits for the Kyverno deployment. You should also increase the number of replicas to 2 so that Kyverno is deployed in high-availability mode.
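The exact Helm values for sizing and replicas can differ between Kyverno chart versions, so treat the following as a sketch and confirm the value names with helm show values kyverno/kyverno before applying it:

helm upgrade kyverno kyverno/kyverno --namespace kyverno \
  --set replicaCount=2 \
  --set resources.limits.cpu=1 \
  --set resources.limits.memory=512Mi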
On OpenShift clusters, if you want to prevent the scanning and validation of resources in the system namespaces (the ones starting with openshift), you can update the Kyverno config map to include the following entry:

webhooks: '[{"namespaceSelector":{"matchExpressions":[{"key":"openshift.io/run-level","operator":"NotIn", "values": ["0","1"]}]}]'

Once the Kyverno pod is running, it will automatically create the necessary admission webhooks. You can also check the CRDs that are installed for Kyverno using this command:

kubectl get crds | grep kyverno

Installing Kyverno Policies

Now that Kyverno is installed, you can install the policies. When installing policies for the first time, it is recommended that the policies are configured to run in "audit" mode so that none of the incoming requests to your OpenShift cluster are blocked. You can check whether a policy is configured as "audit" by checking the validationFailureAction property in the policy manifest. Install the sample policies using the command:

helm install kyverno-policies kyverno/kyverno-policies --namespace kyverno

Next, you can check if the policies are installed using the command:

kubectl get clusterpolicies

The output will look like this:

NAME                             BACKGROUND   ACTION   READY
deny-privilege-escalation        true         audit    true
disallow-add-capabilities        true         audit    true
disallow-host-namespaces         true         audit    true
disallow-host-path               true         audit    true
disallow-host-ports              true         audit    true
disallow-privileged-containers   true         audit    true
disallow-selinux                 true         audit    true
require-default-proc-mount       true         audit    true
require-non-root-groups          true         audit    true
require-run-as-non-root          true         audit    true
restrict-apparmor-profiles       true         audit    true
restrict-seccomp                 true         audit    true
restrict-sysctls                 true         audit    true
restrict-volume-types            true         audit    true

Note that the policy state READY indicates that the policy is ready to process any incoming requests or perform background tasks.

Viewing the Policy Violation Report

Once the policies are installed and ready, they will start generating policy violations. Policy violations can be viewed by fetching the policy reports. To fetch the policy reports for all namespaces, use the command:

kubectl get policyreports -A

To fetch the policy violations at the cluster scope, use the command:

kubectl get clusterpolicyreports

You can also view detailed policy results using the kubectl describe command.

Issues and Troubleshooting

Kyverno Pod Constantly Crashes

Check whether the crash is caused by the pod not getting enough memory, and increase the memory limit if so.

Policies Are Not Applied

Check whether the validating and mutating webhooks were created correctly.

kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations

You should see:

NAME                                                                                                  WEBHOOKS   AGE
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-validating-webhook-cfg     1          46m
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-validating-webhook-cfg   1          46m
validatingwebhookconfiguration.admissionregistration.k8s.io/autoscaling.openshift.io                  2          17d
validatingwebhookconfiguration.admissionregistration.k8s.io/multus.openshift.io                       1          17d

NAME                                                                                               WEBHOOKS   AGE
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-mutating-webhook-cfg      1          46m
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-mutating-webhook-cfg    1          46m
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-verify-mutating-webhook-cfg      1          46m

Also, check that the Kyverno service is configured correctly.
kubectl get services -n kyverno

You should see:

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kyverno-svc           ClusterIP   172.30.97.254    <none>        443/TCP    13d
kyverno-svc-metrics   ClusterIP   172.30.110.252   <none>        8000/TCP   13d

For other troubleshooting, refer to the Kyverno documentation.

Summary

As you can see, it is extremely easy to get started with Kyverno on your OpenShift cluster. Once Kyverno is installed and policies are being applied, you can learn how to write new policies for your deployment. An OpenShift installation includes several custom resource definitions, so if you need to validate any custom resources, a Kyverno policy can be written for them. You can also find several policies contributed by the Kyverno community and apply them to your clusters.
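Once you are comfortable with the audit results, individual policies can be switched from audit to enforce mode. The following is a hedged sketch using one of the sample policies installed earlier; confirm the validationFailureAction field path against the Kyverno documentation for your version before relying on it:

kubectl patch clusterpolicy require-run-as-non-root --type merge \
  -p '{"spec":{"validationFailureAction":"enforce"}}'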
What Is Clean Code?

This quote from Bjarne Stroustrup, inventor of the C++ programming language, clearly explains what clean code means: "I like my code to be elegant and efficient. The logic should be straightforward to make it hard for bugs to hide, the dependencies minimal to ease maintenance, error handling complete according to an articulated strategy, and performance close to optimal so as not to tempt people to make the code messy with unprincipled optimizations. Clean code does one thing well."

From the quote, we can pick out some of the qualities of clean code:

- Clean code is focused. Each function, class, or module should do one thing and do it well.
- Clean code is easy to read and reason about. According to Grady Booch, author of Object-Oriented Analysis and Design with Applications, clean code reads like well-written prose.
- Clean code is easy to debug.
- Clean code is easy to maintain. That is, it can easily be read and enhanced by other developers.
- Clean code is highly performant.

A developer is free to write their code however they please, because there is no fixed or binding rule compelling them to write clean code. However, bad code can lead to technical debt, which can have severe consequences for the company. That is the real case for writing clean code. In this article, we will look at some patterns that help us write clean code in Python. Let's learn about them in the next section.

Patterns for Writing Clean Code in Python

Naming Conventions

Naming conventions are one of the most useful and important aspects of writing clean code. When naming variables, functions, classes, etc., use meaningful, intention-revealing names. This means favoring long descriptive names over short, ambiguous ones. Below are some examples.

1. Use long descriptive names that are easy to read. This removes the need for unnecessary comments, as seen below:

Python
# Not recommended
# The au variable is the number of active users
au = 105

# Recommended
total_active_users = 105

2. Use descriptive, intention-revealing names. Other developers should be able to figure out what your variable stores from its name. In a nutshell, your code should be easy to read and reason about.

Python
# Not recommended
c = ["UK", "USA", "UAE"]
for x in c:
    print(x)

# Recommended
countries = ["UK", "USA", "UAE"]
for country in countries:
    print(country)

3. Avoid using ambiguous shorthand. A variable should have a long descriptive name rather than a short, confusing one.

Python
# Not recommended
fn = 'John'
ln = 'Doe'
cre_tmstp = 1621535852

# Recommended
first_name = 'John'
last_name = 'Doe'
creation_timestamp = 1621535852

4. Always use the same vocabulary. Be consistent with your naming convention. Maintaining a consistent naming convention is important to eliminate confusion when other developers work on your code. This applies to naming variables, files, functions, and even directory structures.

Python
# Not recommended
client_first_name = 'John'
customer_last_name = 'Doe'

# Recommended
client_first_name = 'John'
client_last_name = 'Doe'

Also, consider this example:

Python
# Not recommended
def fetch_clients(response, variable):
    # do something
    pass

def fetch_posts(res, var):
    # do something
    pass

# Recommended
def fetch_clients(response, variable):
    # do something
    pass

def fetch_posts(response, variable):
    # do something
    pass

5. Start tracking codebase issues in your editor. A major component of keeping your Python codebase clean is making it easy for engineers to track and see issues in the code itself.
Tracking codebase issues in the editor allows engineers to:

- Get full visibility of technical debt
- See the context for each codebase issue
- Reduce context switching
- Solve technical debt continuously

You can use various tools to track your technical debt, but the quickest and easiest way to get started is to use the free Stepsize extensions for VS Code or JetBrains, which integrate with Jira, Linear, Asana, and other project management tools.

6. Don't use magic numbers. Magic numbers are numbers with special, hardcoded semantics that appear in code but do not have any meaning or explanation. Usually, these numbers appear as literals in more than one location in our code.

Python
import random

# Not recommended
def roll_dice():
    return random.randint(1, 4)  # what is 4 supposed to represent?

# Recommended
DICE_SIDES = 4

def roll_dice():
    return random.randint(1, DICE_SIDES)

Functions

7. Be consistent with your function naming convention. As with the variables above, stick to a naming convention when naming functions. Using different naming conventions will confuse other developers.

Python
# Not recommended
def get_users():
    # do something
    pass

def fetch_user(id):
    # do something
    pass

def get_posts():
    # do something
    pass

def fetch_post(id):
    # do something
    pass

# Recommended
def fetch_users():
    # do something
    pass

def fetch_user(id):
    # do something
    pass

def fetch_posts():
    # do something
    pass

def fetch_post(id):
    # do something
    pass

8. Functions should do one thing and do it well. Write short and simple functions that perform a single task. A good rule of thumb is that if your function name contains "and", you may need to split it into two functions.

Python
# Not recommended
def fetch_and_display_users():
    users = []  # result from some API call
    for user in users:
        print(user)

# Recommended
def fetch_users():
    users = []  # result from some API call
    return users

def display_users(users):
    for user in users:
        print(user)

9. Do not use Boolean flags. Boolean flags are variables that hold a Boolean value — true or false. These flags are passed to a function and are used by the function to determine its behavior.

Python
text = "Python is a simple and elegant programming language."

# Not recommended
def transform_text(text, uppercase):
    if uppercase:
        return text.upper()
    else:
        return text.lower()

uppercase_text = transform_text(text, True)
lowercase_text = transform_text(text, False)

# Recommended
def transform_to_uppercase(text):
    return text.upper()

def transform_to_lowercase(text):
    return text.lower()

uppercase_text = transform_to_uppercase(text)
lowercase_text = transform_to_lowercase(text)

Classes

10. Do not add redundant context. This can happen when unnecessary prefixes are added to attribute names when working with classes.

Python
# Not recommended
class Person:
    def __init__(self, person_username, person_email, person_phone, person_address):
        self.person_username = person_username
        self.person_email = person_email
        self.person_phone = person_phone
        self.person_address = person_address

# Recommended
class Person:
    def __init__(self, username, email, phone, address):
        self.username = username
        self.email = email
        self.phone = phone
        self.address = address

In the example above, since we are already inside the Person class, there's no need to add the person_ prefix to every class attribute.

Bonus: Modularize your code. To keep your code organized and maintainable, split your logic into different files or classes called modules.
A module in Python is simply a file that ends with the .py extension, and each module should be focused on doing one thing and doing it well. You can also follow basic object-oriented programming (OOP) principles such as encapsulation, abstraction, inheritance, and polymorphism.

Conclusion

Writing clean code comes with a lot of advantages: it improves your software quality and code maintainability and eliminates technical debt. In this article, you learned about clean code in general and some patterns for writing clean code using the Python programming language. These patterns can be replicated in other programming languages too. I hope that by reading this article, you have learned enough about clean code and some useful patterns for writing it.
cat, short for concatenate, is used on Linux and Unix-based systems like macOS for reading the contents of a file, concatenating those contents with other files, and creating new, concatenated files. It's also frequently used to copy the contents of files. The syntax for cat is shown below, where x is the file name and [OPTIONS] are optional settings that alter how cat works:

Shell
cat [OPTIONS] x

Getting the Contents of a File Using cat on Linux or macOS

Using the cat command along with one file name, we can get the entire text content of a file. For example, the command below will output the content of my-file.txt to the terminal:

Shell
cat my-file.txt

Similarly, we can see the contents of many files by separating them with a space. For example, the line below takes the content of my-file.txt and my-new-file.txt, merges the content, and shows it in the terminal:

Shell
cat my-file.txt my-new-file.txt

Getting the Contents of Files With Line Numbers on Linux or macOS

We can use the -n option to show line numbers. For example, the following command merges our two files, my-file.txt and my-new-file.txt, and outputs the merged content with line numbers, which is pretty useful when comparing files:

Shell
cat -n my-file.txt my-new-file.txt

The output will look something like this:

Plain Text
1 Content from my-file.txt
2 Some more content from my-new-file.txt

Concatenating Two Files Into a New File on Linux and macOS

Since cat can output the contents of two files, we can use the redirection operator > to merge two files into a totally new file. The example below takes my-file.txt and my-new-file.txt, merges their content, and puts it into a new file called my-combined-file.txt:

Shell
cat my-file.txt my-new-file.txt > my-combined-file.txt

Putting Content From One File Into Another With Linux or macOS

If all we want to do is put the contents of one file at the end of another, we can instead use >>. For example, the command below will take the content from my-file.txt and append it to the end of my-new-file.txt, thus merging both files into my-new-file.txt:

Shell
cat my-file.txt >> my-new-file.txt

Line Numbers Note: if you use >> or > with the -n option, the line numbers will also be merged into your new concatenated file!

Creating an Empty File on Linux or macOS With cat

Since it's so easy to create files with cat, we often use it to make new files. For example, the command below will create a blank file called my-file.txt, as we are redirecting empty input into it (press Ctrl+D to finish):

Shell
cat > my-file.txt

How to Show Nonprintable Characters on Linux or macOS

Some documents or files may contain nonprintable characters. These are used to signal to applications how a file should be formatted, but they can sometimes mess up the formatting of files. To show nonprintable characters when using cat, we can use the -v option. This will show all nonprintable characters using caret notation, so that we can view them easily.

Shell
cat -v my-file.txt

Nonprintable Characters: nonprintable characters are signals for things like character encoding. You can find a full list of nonprintable characters, along with the caret notation cat uses, here.

All Options for cat on Linux or macOS

There are a bunch of other options that help us get the output we want from cat. We've already discussed -n for line numbers and -v for nonprintable characters, but here are the others:

-b - numbers only non-empty output lines, overriding -n.
-E - displays a $ at the end of every line.
-s - suppresses repeated, empty lines.
-T - displays tabs as ^I, making it easy to distinguish them from spaces.
-A - equivalent to writing -vET.
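To make the line-numbers note above concrete, here is a short, hedged example using the placeholder files from earlier in this article: because -n is applied before the output is redirected, the numbers become part of the new file.

Shell
cat -n my-file.txt my-new-file.txt > my-combined-file.txt
cat my-combined-file.txt

Plain Text
1 Content from my-file.txt
2 Some more content from my-new-file.txt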
Great programmers, architects, and founders always have a clear vision for the future of a programming language before they start building their application. When selecting a particular language for building an application, the most important things developers consider are how long support for that language will continue to exist and whether it will be easy to port their code if the language gradually becomes obsolete. When Facebook was built, PHP was one of the most popular and powerful choices for web development. However, if you asked the Facebook team to build their now-famous website today, they would probably use a language like Ruby, Scala, or Python. Quora, one of the leading question-and-answer sites, was built on Python, and its CEO, Adam D'Angelo, has stated that more than 5 years after Quora was developed, he is happy with the choice of language.

So what are the key factors that most programmers consider? Dependency upon other stacks is one for sure. A language such as Python has many frameworks and programming models that have improved the open source stack. The Python programming model enables you to write your own open source code based purely on the native Python language. Anyone who codes in Python will tell you that coding in Python is relatively simple compared to some other options. Thus, coding in Python can enhance reusability, as it is simple to refactor code and use it again in different projects. Let's look at some of the factors, models, and frameworks that will improve open source for good.

1. Python-Based Languages

There are numerous implementations of Python, and some of them are relatively fast and better to use in comparison with pure Python. However, these implementations are bound to have some dependencies, such as libraries that belong to other open source languages. Thus, in the end, using an official Python implementation like CPython, which has existed for quite a while, will always be a top choice for other open source implementations.

2. Open Source Web Development Frameworks

There are many open source frameworks built using the native Python language that have few dependencies on third-party libraries or frameworks. For example, Django makes the open source platform better to use due to its minimal dependencies on other parties. The greatest advantage of using such Python-based frameworks is the compatibility and support they offer for a multitude of databases and Python versions.

3. Development Interfaces

Python can be used to build open source stacks for system administration like OpenStack and Salt. Automation platforms like Ansible, which are, again, open source, can be built on Python. GUI interfaces such as Tkinter and wxPython are some more examples of how Python's programming modules improve the available open source resources.

Conclusion

Apart from what I've enumerated above, the biggest resource and most positive aspect Python has is its community. There is an ample number of Python experts who keep contributing to the community by building new versions of tools and frameworks and helping fix bugs. This is something that every lead programmer or planner keeps an eye on. In the end, open source can be improved by addressing its current shortcomings. For example, when Ruby was invented, its creator mentioned that he wanted to build a language more object-oriented than Python.
Cadence is an open-source workflow orchestration service. It is a scalable, fault-tolerant, developer-focused platform that centralizes business logic and allows you to build complex workflows that integrate with your distributed service architecture.

Developers face steep difficulty in building modern high-scale distributed applications, which require them to wrangle long-running business processes, internal services, and third-party APIs to make them play well together. Designing those convoluted interactions forces developers to solve puzzles and carefully build numerous connections, with complex states to track, responses to asynchronous events to prepare, and communications to establish with external dependencies that often aren't as reliable as hoped.

Developers normally answer these challenges by meeting complexity with complexity, putting together sprawling systems that rig up stateless services, databases, retry algorithms, and job scheduling queues to ensure the application can deliver its functionality. However, that monstrous complexity creates major issues when something goes wrong. Depending on unreliable distributed components is a recipe for availability issues. The business logic of these systems is buried under their own tremendous complexity, making remediation of those issues all the more difficult and time-consuming. From a productivity perspective, developers must often turn their focus to keeping these elaborate systems in operation, killing momentum on any other forward-looking projects.

Solutions like Cadence and Temporal exist to eliminate those hardships. Available as free and open-source software, these solutions abstract away the difficult complexity of creating high-scale distributed applications by storing the full application state in virtual memory and catching up and replaying any interrupted workflows via that stored state.

How to Migrate to Open-Source Cadence From Temporal

The Temporal project was forked from Cadence and is maintained by some of the original Cadence developers. Since the fork, Temporal has announced that it has ceased supporting the Cadence project. Some people interpreted this to mean that the Cadence project was replaced. This is not the case: a long-term commitment to and support for Cadence has been announced, driven primarily by a team at Uber, and Instaclustr is working in partnership with them to grow the project. As of today, the projects have begun to drift apart slowly, but the client SDKs are still very similar.

We know there is a section of the community who are running Temporal and are interested in trying Cadence but are not sure how much work it will take to migrate their existing activities and workflows. Migrating is a two-step process:

1. First, you will need a Cadence cluster — for this example, we can use the Instaclustr managed service.
2. Second, you will need to convert your client code to use the Cadence client SDK.

Step 2 is the focus of this article, and our example will use the Java SDK.

The Orders Processing Workflow

Before we get started, let's quickly define the workflow we are converting — in this case, a simple order processing workflow. Let's imagine an online store that generates orders and sends them to a processing backend.
When an order is generated, the workflow takes the following steps:

1. Check whether the order is in stock.
2. If it's not:
   - Get the estimated restock date.
   - Notify the customer of the delay.
   - Wait for restock.
   - Start the order process again.
3. If it is in stock:
   - Package and send the order to the nominated address.
   - Notify the customer of the pending delivery with a tracking reference.
4. Complete.

We can visualize the workflow like this:

Order workflow

As we can see, it's a simple workflow with a single decision, an optional loop depending on whether the item is in stock, and a possible wait.

Demonstration Pull Request

We have created a public repository on GitHub and opened a pull request to show the changes that our project would require. The base project is our workflow developed to run on the Temporal SDK, and the pull request submits the changes required for the Cadence SDK. The project includes the client code to start a workflow, activity and workflow code to run it, and common code shared across both projects. You can view the pull request here: https://github.com/johndelcastillo/orders-demo/pull/1/files

Worker Project

In both Temporal and Cadence, the worker is responsible for executing the workflow and activity logic. Here is a folder comparison showing which files changed in our project; the files with changes are in orange.

Change the Dependencies

The first thing to do is change our dependencies and add the Cadence library. In our project, this is the Gradle config build.gradle.kts.

Maven

Update your pom.xml:

XML
<dependency>
  <groupId>com.uber.cadence</groupId>
  <artifactId>cadence-client</artifactId>
  <version>3.6.2</version>
</dependency>

Gradle

Update your build.gradle:

Groovy
compile group: 'com.uber.cadence', name: 'cadence-client', version: '3.6.2'

Domain Objects

Our domain objects are passed around as parameters and return values from activities. Moving from Temporal to Cadence, we can remove the annotations that are used by the Jackson JSON engine.

Activity and Workflow Interfaces

These interface files are almost identical; the only difference is that Temporal requires an additional annotation on the interface that we don't need for Cadence.

Activity Implementation

No changes! It certainly may depend on how you organize your solution, but in our case, since the Cadence-specific code is abstracted away by the interfaces, we can port it across directly.

Workflow Implementation

Again, barely any change between these two files. Both Temporal and Cadence require configuring an activity stub, and they differ slightly, but not much. The workflow logic comes across without any change, including the calls to Workflow.sleep and Workflow.continueAsNew.

Worker Startup

A few changes here; the primary difference is in how the clients are configured. The Temporal project renamed a few paradigms (domains, task lists) when it forked, but functionally they are the same thing, and we can keep our config values as they are. The remaining code to register the worker and implementation classes remains unchanged.

Client Project

Client code is normally part of a larger application, but this example still represents the scope of changes that are required.

Change the Dependencies

Same as the worker code: swap out Temporal for Cadence.
Maven

Update your pom.xml:

XML
<dependency>
  <groupId>com.uber.cadence</groupId>
  <artifactId>cadence-client</artifactId>
  <version>3.6.2</version>
</dependency>

Gradle

Update your build.gradle:

Groovy
compile group: 'com.uber.cadence', name: 'cadence-client', version: '3.6.2'

Client

Finally, here is the client code side by side. Much like setting up the worker, we need to adjust our connection code to move it from the Temporal SDK to Cadence. Once we have our clients configured, the remaining code to create the workflow stub and start the workflow is identical.

Finishing Up

We can see that the amount of change required to convert this project is small and straightforward. Since the main workflow operators and commands are like-for-like, even a large activity and workflow implementation should convert across with little to no change. The connectivity code does need updating, but those changes are small and understandable. I hope this example encourages developers out there to try switching from Temporal to Cadence and get the benefits of the open source community that has grown around the Cadence project.
Rightsizing resource requests is an increasing challenge for teams using Kubernetes, and it becomes especially critical as they scale their environments. Overprovisioning CPU and memory leads to costly overspending, but underprovisioning risks CPU throttling and out-of-memory errors if requested resources aren't sufficient. Dev and engineering teams that don't thoroughly understand the live performance profile of their containers will usually play it safe and request vastly more CPU and memory than required, often with significant budget waste.

The open source Kubecost tool (https://github.com/kubecost) has long included a Request Sizing dashboard to help Kubernetes users bring more cost efficiency to their resource requests. One of the tool's most popular optimization features, the dashboard identifies over-requested resources, offers recommendations for appropriate per-container resource requests, and estimates the cost-savings impact of implementing those recommendations. The dashboard utilizes actual usage data from live containers to provide accurate recommendations. However, leveraging the dashboard has involved some hurdles, requiring users to manually update YAML requests to align resource requests with Kubecost recommendations, or to introduce integrations using a CD tool.

The newly released Kubecost v1.93 eliminates those hurdles by introducing 1-Click Request Sizing. With this feature added to the open source tool, dev and engineering teams can click a button to apply container request right-sizing recommendations automatically. The following step-by-step example introduces overprovisioned Kubernetes workloads and uses 1-Click Request Sizing to bring those requests to an optimized size.

Before we begin, you'll need a Kubernetes cluster to work with. While this example uses Civo Kubernetes, Kubecost request sizing is available for any Kubernetes environment. To create an example cluster (if needed), use the Civo CLI:

Shell
civo k3s create request-sizing-demo --region LON1
The cluster request-sizing-demo (84c6c595-505e-4e35-8e38-61364a1a80bc) has been created

Now, let's get started.

1) Install Kubecost and Enable Cluster Controller

If you are using a previous Kubecost installation, enable Cluster Controller using the Helm value below. Kubecost ensures a transparent permission model by keeping all cluster modification capabilities in the separate Cluster Controller component. The 1-Click Request Sizing APIs reside in Cluster Controller, since Kubernetes API write permission is required to edit container requests. Here, we'll install Kubecost and enable Cluster Controller:

Shell
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update
helm upgrade \
  -i \
  --create-namespace kubecost \
  kubecost/cost-analyzer \
  --namespace kubecost \
  --version "v1.94.0-rc.1" \
  --set clusterController.enabled=true

After waiting a few minutes for the containers to get up and running, check the Kubecost namespace:

Shell
→ kubectl get deployment -n kubecost
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
kubecost-cluster-controller   1/1     1            1           2m12s
kubecost-cost-analyzer        1/1     1            1           2m12s
kubecost-grafana              1/1     1            1           2m12s
kubecost-kube-state-metrics   1/1     1            1           2m12s
kubecost-prometheus-server    1/1     1            1           2m12s

Here we see that Kubecost is installed and running correctly.

2) Make a Sample Overprovisioned Workload

We'll purposefully create a workload that requests more resources than it needs, enabling 1-Click Request Sizing to come to the rescue.
The following bash creates an "rsizing" namespace holding a 2-replica NGINX deployment with considerable container resource requests:

Shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: rsizing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: rsizing
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        resources:
          requests:
            cpu: 300m
            memory: 500Mi
EOF

We'll check that this deployment is scheduled and running correctly:

Shell
→ kubectl get pod -n rsizing
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-bd6c697bf-qxtvk   1/1     Running   0          10s
nginx-deployment-bd6c697bf-b2zml   1/1     Running   0          11s

Next, we'll use a JSONPath expression to check on the running Pods and the requests of their containers:

Shell
→ kubectl get pod -n rsizing -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{range .spec.containers[*]}{.name}{'\t'}{.resources.requests}{'\n'}{end}{'\n'}{end}"
nginx-deployment-bd6c697bf-qxtvk  nginx  {"cpu":"300m","memory":"500Mi"}
nginx-deployment-bd6c697bf-b2zml  nginx  {"cpu":"300m","memory":"500Mi"}

Just as we planned, the containers are making outsized resource requests. Next, we'll fix those issues.

3) View Kubecost Recommendations and Put Them Into Action

Access Kubecost's frontend with kubectl's port-forward:

Shell
kubectl port-forward -n kubecost service/kubecost-cost-analyzer 9090

Allow Kubecost a few minutes to collect usage profiling data and prepare its recommendations for request sizing. Then go to the request sizing recommendation page at http://localhost:9090/request-sizing.html?filters=namespace%3Arsizing. Note that this link includes a filter to show only recommendations for the "rsizing" namespace. With Cluster Controller enabled, the "Automatically implement recommendations" button will be available on this page as well.

The NGINX deployment isn't getting any traffic, causing it to be severely overprovisioned. Kubecost recognizes this and suggests shifting to a 10m CPU request and a 20MiB RAM request. Click the "Automatically implement recommendations" button and you'll get a confirmation message. These recommendations are filtered to the "rsizing" namespace, so clicking the Yes option will apply recommendations for this filtered set.

Now check the status of the cluster:

Shell
→ k get pod -n rsizing
NAME                                READY   STATUS        RESTARTS   AGE
nginx-deployment-574cd8ff7f-5czgz   1/1     Running       0          16s
nginx-deployment-574cd8ff7f-srt8j   1/1     Running       0          9s
nginx-deployment-bd6c697bf-qxtvk    0/1     Terminating   0          53m
nginx-deployment-bd6c697bf-b2zml    0/1     Terminating   0          53m

After the old Pod versions have terminated, use the JSONPath expression again to check the new Pods:

Shell
→ kubectl get pod -n rsizing -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{range .spec.containers[*]}{.name}{'\t'}{.resources.requests}{'\n'}{end}{'\n'}{end}"
nginx-deployment-574cd8ff7f-5czgz  nginx  {"cpu":"10m","memory":"20971520"}
nginx-deployment-574cd8ff7f-srt8j  nginx  {"cpu":"10m","memory":"20971520"}

Kubecost has successfully resized the container requests!
And at both the Pod and NGINX deployment levels:

Shell
→ k get deploy -n rsizing nginx-deployment -o=jsonpath='{.spec.template.spec.containers[0].resources}' | jq
{
  "requests": {
    "cpu": "10m",
    "memory": "20971520"
  }
}

4) Remove the Demo Cluster

Don't forget to clean up after this demonstration by removing the test cluster (avoiding any unnecessary costs):

Shell
→ civo k3s remove request-sizing-demo --region LON1

Discover More About Kubecost's 1-Click Request Sizing

This example demonstrated how Kubernetes users can easily and automatically optimize their resource utilization with 1-Click Request Sizing from the open source Kubecost tool. To learn more, additional documentation is available here:

- 1-Click Request Sizing feature guide
- 1-Click Request Sizing API reference
- Cluster Controller advanced setup and reference
- Request sizing recommendation API reference

Kubernetes costs can easily spiral out of control at scale if they are not carefully monitored, and if unexpected cost centers or errors with the potential to spur runaway expenses aren't swiftly addressed and remediated. Teams using Kubernetes need the visibility to see the complete picture of their Kubernetes spending in real time. That visibility must include the ability to zoom out to a holistic view that accounts for external cloud services and infrastructure costs, and to zoom in and assign costs to each specific deployment, service, and namespace. Teams then need the tools to take action and pursue cost-efficiency optimization across their Kubernetes implementations. In this vein, 1-Click Request Sizing adds a powerful tool to Kubernetes users' arsenal, making it that much simpler to keep Kubernetes budgets in check.
This guide is for developers and engineers of all levels looking to understand how polling works in Cadence, the relatively new (and fully open source) fault-tolerant stateful code platform originally developed by Uber (and now supported by others, including us at Instaclustr). This guide provides "Hello, World!"-style examples based on simple scenarios and use cases.

What You Will Learn

How to set up a simple Cadence polling application on Instaclustr's Managed Service platform.

What You Will Need

- A free sign-up account on the Instaclustr platform
- Basic Java 11 and Gradle installation
- IntelliJ Community Edition (or any other IDE with Gradle support)
- Docker (optional: only needed to run the Cadence command-line client)

So, What's So Great About Cadence?

A large number of use cases span beyond a single request-reply, require tracking of a complex state, respond to asynchronous events, and communicate with external unreliable dependencies. The usual approach to building such applications is a hodgepodge of stateless services, databases, cron jobs, and queuing systems. This negatively impacts developer productivity, as most of the code is dedicated to plumbing, obscuring the actual business logic behind a myriad of low-level details.

Cadence is a fully open-source orchestration framework that helps developers write fault-tolerant, long-running applications, also known as workflows. In essence, it provides durable virtual memory that is not linked to a specific process and preserves the full application state, including function stacks with local variables, across all sorts of host and software failures. This allows you to write code using the full power of a programming language, while Cadence takes care of the durability, availability, and scalability of the application.

What Is Polling?

Polling is executing a periodic action to check for a state change. Examples are pinging a host, calling a REST API, or listing a storage bucket for newly uploaded files.

Fig 1: Flow diagram for a polling process

Polling should be avoided where possible (favoring instead an event-triggered interrupt), as busy waiting typically eats a lot of CPU cycles unnecessarily, unless either:

- You are only going to poll for a short time, or
- You can afford to sleep for a reasonable time in your polling loop.

To a computer, it is the equivalent of asking every 5 minutes how far you are from your destination on a long trip. Nonetheless, there are times when it's the only option available. Cadence's support for durable timers, long-running activities, and unlimited retries makes it a good fit.

Polling External Services With Cadence

There are several ways to implement a polling mechanism. We will focus on polling external services and how we can benefit from Cadence in doing so. To begin with, let's briefly explain some Cadence concepts.

Cadence's core abstraction is a fault-oblivious stateful workflow. What that means is that the state of the workflow code, including local variables and any threads it creates, is immune to process and Cadence service failures. This is a very powerful concept, as it encapsulates state, processing threads, durable timers, and event handlers. In order to fulfill deterministic execution requirements, workflows are not allowed to call any external API directly. Instead, they orchestrate the execution of activities. An activity is a business-level function that implements application logic, such as calling a service or transcoding a media file.
Cadence does not recover the activity state in case of failures; therefore, an activity function is allowed to contain any code without restrictions.

Writing Our Polling Loop

The code itself is pretty simple, and we'll go through it explaining what each part does:

State polledState = externalServiceActivities.getState();
while (!expectedState.equals(polledState)) {
    Workflow.sleep(Duration.ofSeconds(30));
    polledState = externalServiceActivities.getState();
}
// We've reached our expected state!

We start by calling an activity; in this case, an external service, which could be a REST API. We then have our condition, matching the diamond in Fig 1. If the desired state hasn't been reached yet, we schedule a sleep of 30 seconds. This isn't any ordinary sleep; it's a durable timer. In this case, our polling waits for a brief period of time, but it could be longer, and in those cases you wouldn't want to wait for the whole interval again if your execution were to fail. Cadence solves this by persisting timers as events and alerting the corresponding worker (the service that hosts the workflow and activity implementations) once the timer has completed. These timers can manage intervals ranging from seconds to minutes, hours, days, and even months or years! Finally, we refresh our state by calling our external service again. It's as easy as that!

Before we continue, let's take a quick look at what Cadence is actually doing behind the scenes, in order to avoid potential issues.

Important: Cadence History and Polling Caveats

How does Cadence achieve fault-oblivious stateful workflows? The secret lies in how Cadence persists workflow execution. Workflow state recovery utilizes event sourcing, which puts a few restrictions on how the code is written. Event sourcing persists state as a sequence of state-changing events. Whenever the state of our workflow changes, a new event is appended to its history of events. Cadence then reconstructs a workflow's current state by replaying the events. That is why all communication with the external world should happen through activities, and Cadence APIs must be used to get the current time, sleep, and create new threads.

Why Be Careful When Polling?

Polling requires looping over a condition over and over again. Since each activity call and timer event is persisted, you can imagine how a short polling interval can result in a huge timeline. Let's study what our polling snippet's history could look like:

1. We start by scheduling the activity needed to poll our external service.
2. The activity is started by a worker.
3. The activity completes and returns its result.
4. The condition is not met yet, so a timer is started.
5. Once the time passes, an event is triggered to wake up the workflow.
6. Steps 1 to 5 are repeated until the condition is met.
7. The final poll confirms the condition is met (no need to set the timer).
8. The workflow is marked as complete.

Fig 2. Cadence history of events for our polling snippet code

If the workflow were to fail somewhere in the middle and its history had to be replayed, this could mean going through a huge list of events. There are several ways to keep it under control: avoid using short polling periods, set reasonable timeouts on your workflows, and limit polling to a certain number of polls. Bottom line: remember that all actions are persisted and may need to be replayed by workers.

Setting Up Activity Retries

What happens if our external service fails for some reason? We need to try, try, try again!
We briefly mentioned how Cadence uses activities for non-deterministic actions, where something may fail unexpectedly, such as consuming an API. This allows Cadence to record activity results and resume workflows seamlessly, while also adding support for extra functionality like retry logic. Below is an example of an activity configuration with retry options enabled:

private final ExternalServiceActivities externalServiceActivities =
    Workflow.newActivityStub(ExternalServiceActivities.class,
        new ActivityOptions.Builder()
            .setRetryOptions(new RetryOptions.Builder()
                .setInitialInterval(Duration.ofSeconds(10))
                .setMaximumAttempts(3)
                .build())
            .setScheduleToCloseTimeout(Duration.ofMinutes(5))
            .build());

By doing so, we tell Cadence that actions present in ExternalServiceActivities should be retried at most 3 times, with an interval of 10 seconds between each try. Each call to an external service activity will then be transparently retried without the need to write any retry logic.

Use Case Example: Instafood Meets MegaBurgers

In order to see this pattern in action, we'll go through a fictional polling integration in our sample project.

Instafood Brief

Instafood is an online app-based meal delivery service. Customers can place an order for food from their favorite local restaurants via Instafood's mobile app. Orders can be for pickup or delivery. If delivery is chosen, Instafood will organize to have one of their many delivery drivers pick up the order from the restaurant and deliver it to the customer. Instafood provides each restaurant a kiosk/tablet which is used for communication between Instafood and the restaurant. Instafood notifies the restaurant when an order is placed, and then the restaurant can accept the order, provide an ETA, mark it as ready, etc. For delivery orders, Instafood will coordinate to have a delivery driver pick up the order based on the ETA.

Polling "MegaBurgers"

MegaBurgers is a large multinational fast-food hamburger chain. They have an existing mobile app and website that use a back-end REST API for customers to place orders. Instafood and MegaBurgers have come to an agreement where Instafood customers can place MegaBurger orders through the Instafood app for pickup and delivery. Instead of installing Instafood kiosks at all MegaBurger locations, it has been agreed that Instafood's backend order workflow system will special-case MegaBurgers and will directly integrate with MegaBurgers's existing REST-based ordering system to place orders and receive updates.

Fig 3. MegaBurger's order state machine

MegaBurger's REST API has no push-style mechanism (WebSockets, WebHooks, etc.) for receiving order status updates. Instead, periodic GET requests need to be made to determine order status, and the result of these polls may cause the order workflow to progress on the Instafood side (such as scheduling a delivery driver for pickup).

Setting Up the Instafood Project

In order to run the sample project yourself, you'll need to set up a Cadence cluster. In this example, we'll be using the Instaclustr platform to do so.

Step 1: Creating Instaclustr Managed Clusters

A Cadence cluster requires an Apache Cassandra cluster to connect to for its persistence layer. In order to set up both the Cadence and Cassandra clusters, we'll follow the "Creating a Cadence Cluster" documentation. The following operations are handled automatically for you:

- Firewall rules will automatically get configured on the Cassandra cluster for Cadence nodes.
- Authentication between Cadence and Cassandra will get configured, including client encryption settings.
- The Cadence default and visibility keyspaces will be created automatically in Cassandra.
- A link will be created between the two clusters, ensuring you don't accidentally delete the Cassandra cluster before Cadence.
- A load balancer will be created. It is recommended to use the load balancer address to connect to your cluster.

Step 2: Setting Up the Cadence Domain

Cadence is backed by a multi-tenant service where the unit of isolation is called a domain. In order to get our Instafood application running, we first need to register a domain for it.

1. In order to interact with our Cadence cluster, we need to install its command-line interface client.

macOS

If using a macOS client, the Cadence CLI can be installed with Homebrew as follows:

brew install cadence-workflow
# run command line client
cadence <command> <arguments>

Other Systems

If not, the CLI can be used via the Docker Hub image ubercadence/cli:

# run command line client
docker run --network=host --rm ubercadence/cli:master <command> <arguments>

For the rest of the steps, we'll use cadence to refer to the client.

2. In order to connect, it is recommended to use the load balancer address of your cluster. This can be found at the top of the Connection Info tab and will look like this: "ab-cd12ef23-45gh-4baf-ad99-df4xy-azba45bc0c8da111.elb.us-east-1.amazonaws.com". We'll call this the <cadence_host>.

3. We can now test our connection by listing the current domains:

cadence --ad <cadence_host>:7933 admin domain list

4. Add the instafood domain:

cadence --ad <cadence_host>:7933 --do instafood domain register --global_domain=false

5. Check that it was registered accordingly:

cadence --ad <cadence_host>:7933 --do instafood domain describe

Step 3: Run the Instafood Sample Project

1. Clone the Gradle project from the Instafood project Git repository.

2. Open the property file at instafood/src/main/resources/instafood.properties and replace the cadenceHost value with your load balancer address:

cadenceHost=<cadence_host>

3. You can now run the app with:

cadence-cookbooks-instafood/instafood$ ./gradlew run

or by executing the InstafoodApplication main class from your IDE.

4. Check that it is running by looking at its terminal output.

Looking Into MegaBurger's API

Before looking into how Instafood integrates with MegaBurger, let's first have a quick look at MegaBurger's API.

Run the MegaBurger Server

Let's start by running the server. This can be accomplished by running:

cadence-cookbooks-instafood/megaburger$ ./gradlew run

or MegaburgerRestApplication from your IDE. This is a simple Spring Boot REST API with an in-memory persistence layer intended for demo purposes. All data is lost when the application closes.

MegaBurger's Orders API

MegaBurger exposes its Orders API in order to track and update the state of each food order.

POST /orders

This creates an order and returns its id.

Request:

curl -X POST localhost:8080/orders -H "Content-Type: application/json" --data '{"meal": "Vegan Burger", "quantity": 1}'

Response:

{
  "id": 1,
  "meal": "Vegan Burger",
  "quantity": 1,
  "status": "PENDING",
  "eta_minutes": null
}

GET /orders

This returns a list of all orders.

Request:

curl -X GET localhost:8080/orders

Response:

[
  {
    "id": 0,
    "meal": "Vegan Burger",
    "quantity": 1,
    "status": "PENDING",
    "eta_minutes": null
  },
  {
    "id": 1,
    "meal": "Onion Rings",
    "quantity": 2,
    "status": "PENDING",
    "eta_minutes": null
  }
]

GET /orders/{orderId}

This returns the order with id equal to orderId.
Request:
curl -X GET localhost:8080/orders/1

Response:
{"id": 1, "meal": "Onion Rings", "quantity": 2, "status": "PENDING", "eta_minutes": null}

PATCH /orders/{orderId}
This updates the order with id equal to orderId.

Request:
curl -X PATCH localhost:8080/orders/1 -H "Content-Type: application/json" --data '{"status": "ACCEPTED"}'

Response:
{"id": 1, "meal": "Onion Rings", "quantity": 2, "status": "ACCEPTED", "eta_minutes": null}

MegaBurger Polling Integration Review
Now that we have everything set up, let's look at the actual integration between Instafood and MegaBurger.

Polling Workflow
We begin by defining a new workflow, MegaBurgerOrderWorkflow:

public interface MegaBurgerOrderWorkflow {

    @WorkflowMethod
    void orderFood(FoodOrder order);

    // ...
}

This workflow has an orderFood method which will send and track the corresponding FoodOrder by integrating with MegaBurger. Let's look at its implementation:

public class MegaBurgerOrderWorkflowImpl implements MegaBurgerOrderWorkflow {

    // ...

    @Override
    public void orderFood(FoodOrder order) {
        OrderWorkflow parentOrderWorkflow = getParentOrderWorkflow();
        Integer orderId = megaBurgerOrderActivities.createOrder(mapMegaBurgerFoodOrder(order));
        updateOrderStatus(parentOrderWorkflow, OrderStatus.PENDING);

        // Poll until order is accepted/rejected
        OrderStatus currentStatus = pollOrderStatusTransition(orderId, OrderStatus.PENDING);
        updateOrderStatus(parentOrderWorkflow, currentStatus);
        if (OrderStatus.REJECTED.equals(currentStatus)) {
            throw new RuntimeException("Order with id " + orderId + " was rejected");
        }

        // Send ETA to parent workflow
        parentOrderWorkflow.updateEta(getOrderEta(orderId));

        // Poll until order is cooking
        updateOrderStatus(parentOrderWorkflow, pollOrderStatusTransition(orderId, OrderStatus.ACCEPTED));
        // Poll until order is ready
        updateOrderStatus(parentOrderWorkflow, pollOrderStatusTransition(orderId, OrderStatus.COOKING));
        // Poll until order is delivered
        updateOrderStatus(parentOrderWorkflow, pollOrderStatusTransition(orderId, OrderStatus.READY));
    }

    // ...
}

The workflow starts by obtaining its parent workflow. Our MegaBurgerOrderWorkflow only handles the integration with MegaBurger; getting the order delivered to the client is managed by a separate workflow, which means we are working with a child workflow.

We then create the order through an activity and obtain an order id. This activity is just a wrapper for an API client which performs the POST to /orders. After creating the order, the parent workflow is notified by a signal (an external asynchronous request to a workflow) that the order is now PENDING.

Now we must wait until the order transitions from PENDING to either ACCEPTED or REJECTED. This is where polling comes into play. Let's look at what our function pollOrderStatusTransition does:

private OrderStatus pollOrderStatusTransition(Integer orderId, OrderStatus orderStatus) {
    OrderStatus polledStatus = megaBurgerOrderActivities.getOrderById(orderId).getStatus();
    while (orderStatus.equals(polledStatus)) {
        Workflow.sleep(Duration.ofSeconds(30));
        polledStatus = megaBurgerOrderActivities.getOrderById(orderId).getStatus();
    }
    return polledStatus;
}

This is very similar to the polling loop we presented in the introduction of this article. The only difference is that instead of waiting for a specific state, it polls until there is a transition. Once again, the actual API call used to get an order by id is hidden behind an activity that has retries enabled. If the order is rejected, a runtime exception is thrown, failing the workflow.
If it is accepted, the parent is notified of MegaBurger's ETA (this is used by the parent workflow to estimate delivery dispatching). Finally, each of the remaining states shown in Fig 3 is transitioned through, until the order is marked as delivered.

Running a Happy-Path Scenario
To wrap up, let's run a whole order scenario. This scenario is part of the test suite included with our sample project. The only requirement is running both the Instafood app and the MegaBurger server as described in the previous steps. This test case describes a client ordering MegaBurger's new Vegan Burger through Instafood for pickup:

cadence-cookbooks-instafood/instafood$ ./gradlew test

or InstafoodApplicationTest from your IDE:

class InstafoodApplicationTest {

    // ...

    @Test
    public void givenAnOrderItShouldBeSentToMegaBurgerAndBeDeliveredAccordingly() {
        FoodOrder order = new FoodOrder(Restaurant.MEGABURGER, "Vegan Burger", 2, "+54 11 2343-2324", "Díaz velez 433, La lucila", true);

        // Client orders food
        WorkflowExecution workflowExecution = WorkflowClient.start(orderWorkflow::orderFood, order);

        // Wait until order is pending Megaburger's acceptance
        await().until(() -> OrderStatus.PENDING.equals(orderWorkflow.getStatus()));

        // Megaburger accepts order and sends ETA
        megaBurgerOrdersApiClient.updateStatusAndEta(getLastOrderId(), "ACCEPTED", 15);
        await().until(() -> OrderStatus.ACCEPTED.equals(orderWorkflow.getStatus()));

        // Megaburger starts cooking order
        megaBurgerOrdersApiClient.updateStatus(getLastOrderId(), "COOKING");
        await().until(() -> OrderStatus.COOKING.equals(orderWorkflow.getStatus()));

        // Megaburger signals order is ready
        megaBurgerOrdersApiClient.updateStatus(getLastOrderId(), "READY");
        await().until(() -> OrderStatus.READY.equals(orderWorkflow.getStatus()));

        // Megaburger signals order has been picked up
        megaBurgerOrdersApiClient.updateStatus(getLastOrderId(), "RESTAURANT_DELIVERED");
        await().until(() -> OrderStatus.RESTAURANT_DELIVERED.equals(orderWorkflow.getStatus()));

        await().until(() -> workflowHistoryHasEvent(workflowClient, workflowExecution, EventType.WorkflowExecutionCompleted));
    }
}

We have three actors in this scenario: Instafood, MegaBurger, and the client.
The client sends the order to Instafood.
Once the order reaches MegaBurger (order status is PENDING), MegaBurger marks it as ACCEPTED and sends an ETA.
We then have the whole sequence of status updates:
MegaBurger marks the order as COOKING.
MegaBurger marks the order as READY (this means it's ready for delivery/pickup).
MegaBurger marks the order as RESTAURANT_DELIVERED.
Since this order was created for pickup, once the client has picked it up, the workflow is complete.

Wrapping Up
In this article, we got first-hand experience with Cadence and how to use it for polling. We also showed you how to get a Cadence cluster running with our Instaclustr platform and how easy it is to get an application connected to it.
This article will cover Python frameworks, their types, and which Python framework is best: Django or Flask. But before that, let's define what a framework is.

What Is a Framework?
A framework is software that has already been developed to help us build our applications. It is reusable code that provides specific functionality we can draw on while writing our own programs, so a framework saves you from reinventing the wheel. A few examples of frameworks in Python are Django and Flask. They provide a short and easy way to write code and allow you to focus on your program rather than worrying about low-level details. A framework has usually been worked on by many different programmers; it has been tested and optimized, so it will likely perform more efficiently than code we produce entirely by ourselves. It can take a long time for any programmer to reach the level of quality found in these frameworks.

What Is Flask?
Flask is a lightweight web application framework written in Python. Flask was developed by Armin Ronacher, who leads an international group of Python enthusiasts named Pocco. Flask is based on the WSGI standard and the Jinja2 template engine. We can use Flask for building web applications and web APIs, and also for developing machine learning applications and other end-to-end projects.

What Is Django?
Django is a free, open-source web application framework. It provides a general technique for quick and effective web application development. It supports you in building and maintaining quality web applications, making the development process smooth and saving you time. It is a high-level web application framework that allows rapid development, and its main aim is to make it easier to build complex, database-driven websites.

Differences Between Flask and Django
Flask was created in 2010, whereas Django was created in 2005.
Flask supports APIs out of the box, whereas Django does not offer built-in API support.
Flask supports visual debugging, but Django does not.
Flask lets you choose from multiple types of databases, but Django does not.
Flask is a Python web framework built for rapid development, whereas Django is built for straightforward, conventional projects.
Flask offers a diversified working style; Django offers a monolithic working style.
Flask's URL dispatching is based on RESTful request routing, whereas Django's URL dispatcher is based on controller-regex patterns.
Flask is a Web Server Gateway Interface (WSGI) framework, whereas Django is a full-stack web development framework.
Flask is a young platform, while Django is a mature framework.
The Flask web framework does not provide support for third-party applications, but the Django web framework supports a huge number of third-party applications.
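To make the getting-started contrast concrete, here is a minimal Flask "Hello, World" sketch. It uses Flask's standard routing API; the module name app.py is only an assumption for illustration, not part of the original article:

Python
# app.py: a minimal Flask application (illustrative sketch)
from flask import Flask

# Create the WSGI application object
app = Flask(__name__)

# Map the root URL to a view function with a route decorator
@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    # Start Flask's built-in development server with the debugger enabled
    app.run(debug=True)

Running python app.py starts the development server on http://127.0.0.1:5000/ with the visual debugging mentioned in the comparison above.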
Advantages of Flask
Here are some advantages of using Flask:
Higher compatibility with the latest technologies
Room for technical experimentation
Easier to use for simple cases
The codebase size is relatively small
Highly scalable for simple applications
Easy to build a quick prototype
Routing URLs is easy
Easy to develop and maintain applications
Database integration is easy
Small core that is easily extensible
A minimal but powerful platform
Lots of resources available online, particularly on GitHub

Advantages of Django
Below are a few advantages of the Django framework:
Django is simple to set up and run
It supports multilingual websites through its integrated internationalization system
Django permits end-to-end application testing
It also lets you document your API with an HTML output
The Django REST framework has rich provisions for numerous authentication policies
It supports rate-limiting API requests from a single user
Helps you define patterns for the URLs in your application
Offers an integrated authentication system
The cache framework comes with more than one cache mechanism
A high-level framework for rapid web application development
An entire stack of tools
Data modeled with Python classes

Disadvantages of Flask
Here are a few drawbacks of the Flask framework:
Slower MVP development in most cases
Higher maintenance costs for more complex systems
Complicated maintenance for large implementations
Asynchronous support can be a bit of a problem
Lack of a built-in database layer and ORM
Setting up a large project requires some prior knowledge of the framework
Offers limited support and a smaller community compared to Django

Disadvantages of Django
Below, some disadvantages of Django are given:
It is a monolithic platform
Poor compatibility with the latest technologies
A higher entry point for simple solutions
The code base is large
Auto reload restarts the entire server
It only allows you to handle a single request at a time
Routing requires some knowledge of regular expressions
Components can be deployed together, which can create confusion

Flask vs. Django: Which One to Choose?
There are numerous websites built on Flask that perform well and are comparable to those built on Django. Quite a few basic principles are similar in both Django and Flask. Django is complex and large and has a steep learning curve, so if you just want to get a feel for a web framework, begin with Flask and move on to Django afterwards; or master one first and switch when your project calls for it, instead of studying everything at once. With Flask, you can begin with just a few essential programming skills, while Django requires you to do some homework up front to build even a Hello World program, as the sketch after this section illustrates. But as your project grows, you will find that adding new features is harder in Flask, while in Django it can feel like a breeze.
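For comparison, the sketch below shows roughly what the same Hello World looks like as a single-file Django application. It relies only on documented Django APIs (settings.configure, path, HttpResponse, execute_from_command_line); the file name hello_django.py and the SECRET_KEY value are placeholder assumptions for illustration:

Python
# hello_django.py: a minimal single-file Django application (illustrative sketch)
import sys

from django.conf import settings
from django.core.management import execute_from_command_line
from django.http import HttpResponse
from django.urls import path

# Django needs a minimum of explicit settings before it can serve anything
settings.configure(
    DEBUG=True,
    SECRET_KEY="placeholder-secret-key",  # placeholder value for local experiments only
    ROOT_URLCONF=__name__,                # this module also acts as the URL configuration
)

# A view takes an HttpRequest and returns an HttpResponse
def hello(request):
    return HttpResponse("Hello, World!")

# URL patterns map the root URL to the view
urlpatterns = [
    path("", hello),
]

if __name__ == "__main__":
    # Reuse Django's manage.py entry point, e.g. `python hello_django.py runserver`
    execute_from_command_line(sys.argv)

Even in its smallest form, Django asks for explicit settings and a URL configuration before it will serve a page, which is the up-front homework referred to above; Flask's equivalent fits in a handful of lines.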
Mark Gardner
Independent Contractor,
The Perl Shop
Nuwan Dias
VP and Deputy CTO,
WSO2
Radivoje Ostojic
Principal Software Engineer,
BrightMarbles
Adam Houghton
Senior Software Developer,
SAS Institute