DZone's Featured JavaScript Resources

Component Library With Lerna Monorepo, Vite, and Storybook
By Anton Kalik
Building components and reusing them across different packages led me to conclude that the content of these projects needs to be organized in a single structure. The build tooling should be shared as well, including the testing environment, lint rules, and efficient resource allocation for component libraries. I was looking for tools that could give me an efficient and effective way to build a robust, powerful combination. As a result, a formidable trio emerged. In this article, we will create several packages with all those tools.

Tools

Before we start, let's examine what each of these tools does.

- Lerna: Manages JavaScript projects with multiple packages; it optimizes the workflow around managing multipackage repositories with Git and NPM.
- Vite: A build tool providing rapid hot module replacement, out-of-the-box ES Module support, and extensive feature and plugin support for React.
- Storybook: An open-source tool for developing and organizing UI components in isolation, which also serves as a platform for visual testing and creating interactive documentation.

Lerna Initial Setup

The first step is to set up the Lerna project. Create a folder named lerna_vite_monorepo and, inside it, run `npx lerna init` from the terminal — this creates the essentials for the Lerna project. It generates two files — lerna.json and package.json — and an empty packages folder.

- lerna.json — This file enables Lerna to streamline your monorepo configuration, providing directives on how to link dependencies, locate packages, implement versioning strategies, and execute additional tasks.

Vite Initial Setup

Once the installation is complete, a packages folder will be available. Our next step is to create several additional projects inside the packages folder:

- vite-common
- vite-header
- vite-body
- vite-footer

To create those projects, run `npm init vite` with each project name. Choose React as the framework and TypeScript as the variant.
Those projects will use the same lint rules, build process, and React version. This process generates the following files and folders in each package:

```
├── .eslintrc.cjs
├── .gitignore
├── index.html
├── package.json
├── public
│   └── vite.svg
├── src
│   ├── App.css
│   ├── App.tsx
│   ├── assets
│   │   └── react.svg
│   ├── index.css
│   ├── main.tsx
│   └── vite-env.d.ts
├── tsconfig.json
├── tsconfig.node.json
└── vite.config.ts
```

Storybook Initial Setup

Time to set up Storybook for each of our packages. Go to one of the package folders and run `npx storybook@latest init` to install Storybook. For the question about eslint-plugin-storybook, input Y to install it. After that, the dependency installation will start. This generates a .storybook folder with configs, and a stories folder in src. Let's remove the stories folder because we will build our own components. Now run `npx sb init --builder @storybook/builder-vite` — it will help you build your stories with Vite, for fast startup and HMR. Assume each package gets the same configuration. Once the installation is done, you can run `yarn storybook` inside the package folder to start Storybook.

Initial Configurations

The idea is to reuse common settings across all of our packages. Let's remove some files that we don't need in each repository. Ultimately, each package folder should contain the following set of folders and files:

```
├── package.json
├── src
│   └── vite-env.d.ts
├── tsconfig.json
└── vite.config.ts
```

Now, let's take all devDependencies from package.json in one of our package folders, cut them, and put them into devDependencies in the root package.json.
Run `npx storybook@latest init` in the root and fix the stories property in the main config:

```javascript
stories: [
  "../packages/*/src/**/*.mdx",
  "../packages/*/src/**/*.stories.@(js|jsx|ts|tsx)"
],
```

And remove two scripts from the root package.json:

```json
"storybook": "storybook dev -p 6006",
"build-storybook": "storybook build"
```

Add a components folder with an index.tsx file to each package folder:

```
├── package.json
├── src
│   ├── components
│   │   └── index.tsx
│   ├── index.tsx
│   └── vite-env.d.ts
├── tsconfig.json
└── vite.config.ts
```

We can establish common configurations that apply to all packages. This includes settings for Vite, Storybook, Jest, Babel, and Prettier, which can be universally configured. The root folder has to contain the following files:

```
├── .eslintrc.cjs
├── .gitignore
├── .nvmrc
├── .prettierignore
├── .prettierrc.json
├── .storybook
│   ├── main.ts
│   ├── preview-head.html
│   └── preview.ts
├── README.md
├── babel.config.json
├── jest.config.ts
├── lerna.json
├── package.json
├── packages
│   ├── vite-body
│   ├── vite-common
│   ├── vite-footer
│   └── vite-header
├── test.setup.ts
├── tsconfig.json
├── tsconfig.node.json
└── vite.config.ts
```

We won't be covering the Babel, Jest, and Prettier settings here.

Lerna Configuration

First, let's examine the Lerna configuration file that helps manage our monorepo project with multiple packages.

```json
{
  "$schema": "node_modules/lerna/schemas/lerna-schema.json",
  "useWorkspaces": true,
  "packages": ["packages/*"],
  "version": "independent"
}
```

First of all, "$schema" provides structure and validation for the Lerna configuration. When "useWorkspaces" is true, Lerna uses Yarn workspaces for better linkage and management of dependencies across packages; if false, Lerna manages interpackage dependencies in the monorepo itself. "packages" defines where Lerna can find the packages in the project.
"version", when set to "independent", allows each package within the monorepo to have its own version number, providing flexibility in releasing updates for individual packages.

Common Vite Configuration

Now, let's examine the necessary elements within the vite.config.ts file.

```typescript
import path from "path";
import { defineConfig } from "vite";
import pluginReact from "@vitejs/plugin-react";

const isExternal = (id: string) => !id.startsWith(".") && !path.isAbsolute(id);

export const getBaseConfig = ({ plugins = [], lib }) =>
  defineConfig({
    plugins: [pluginReact(), ...plugins],
    build: {
      lib,
      rollupOptions: {
        external: isExternal,
        output: {
          globals: {
            react: "React",
          },
        },
      },
    },
  });
```

This file exports the common Vite config, with extra plugins and library settings that we will reuse in each package. defineConfig serves as a utility function in Vite's configuration file. While it doesn't directly execute any logic or alter the passed configuration object, its primary role is to enhance type inference and facilitate autocompletion in certain code editors. rollupOptions allows you to specify custom Rollup options. Rollup is the module bundler that Vite uses under the hood for its build process. By providing options directly to Rollup, developers get more fine-grained control over the build. The external option within rollupOptions specifies which modules should be treated as external dependencies. In general, the external option helps reduce the size of your bundle by excluding dependencies already present in the environment where your code will run. The output option with globals: { react: "React" } in Rollup's configuration means that in the generated bundle, any import statements for react will be replaced with the global variable React. Essentially, it assumes that React is already present in the user's environment and should be accessed as a global variable rather than included in the bundle.
```json
{
  "compilerOptions": {
    "composite": true,
    "skipLibCheck": true,
    "module": "ESNext",
    "moduleResolution": "node",
    "allowSyntheticDefaultImports": true
  },
  "include": ["vite.config.ts"]
}
```

The tsconfig.node.json file specifically controls how TypeScript transpiles the vite.config.ts file, ensuring it's compatible with Node.js. Vite, which serves and builds frontend assets, runs in a Node.js environment. This separation is needed because the Vite configuration file may require different TypeScript settings than your frontend code, which is intended to run in a browser.

```json
{
  "compilerOptions": {
    // ...
    "types": ["vite/client", "jest", "@testing-library/jest-dom"],
    // ...
  },
  "references": [{ "path": "./tsconfig.node.json" }]
}
```

Including "types": ["vite/client"] in tsconfig.json is necessary because Vite provides some additional properties on the import.meta object that are not part of the standard JavaScript or TypeScript libraries, such as import.meta.env and import.meta.glob.

Common Storybook Configuration

The .storybook directory defines Storybook's configuration, add-ons, and decorators. It's essential for customizing and configuring how Storybook behaves.

```
├── main.ts
└── preview.ts
```

For the general configs, there are two files. Let's check them both. main.ts is the main configuration file for Storybook and allows you to control its behavior. As you can see, we're just exporting common configs, which we're going to reuse in each package.
```typescript
import type { StorybookConfig } from "@storybook/react-vite";

const config: StorybookConfig = {
  addons: [
    {
      name: "@storybook/preset-scss",
      options: {
        cssLoaderOptions: {
          importLoaders: 1,
          modules: {
            mode: "local",
            auto: true,
            localIdentName: "[name]__[local]___[hash:base64:5]",
            exportGlobals: true,
          },
        },
      },
    },
    {
      name: "@storybook/addon-styling",
      options: {
        postCss: {
          implementation: require("postcss"),
        },
      },
    },
    "@storybook/addon-links",
    "@storybook/addon-essentials",
    "@storybook/addon-interactions",
    "storybook-addon-mock",
  ],
  framework: {
    name: "@storybook/react-vite",
    options: {},
  },
  docs: {
    autodocs: "tag",
  },
};

export default config;
```

The preview.ts file allows us to wrap stories with decorators, which we can use to provide context or set styles globally across our stories. We can also use this file to configure global parameters. It, too, exports its general configuration for package usage.

```typescript
import type { Preview } from "@storybook/react";

const preview: Preview = {
  parameters: {
    actions: { argTypesRegex: "^on[A-Z].*" },
    options: {
      storySort: (a, b) =>
        a.title === b.title
          ? 0
          : a.id.localeCompare(b.id, undefined, { numeric: true }),
    },
    layout: "fullscreen",
    controls: {
      matchers: {
        color: /(background|color)$/i,
        date: /Date$/,
      },
    },
  },
};

export default preview;
```

Note that the { numeric: true } options object is the third argument to localeCompare; the second argument is the locale.

Root package.json

In a Lerna monorepo project, the package.json serves a similar role as in any other JavaScript or TypeScript project. However, some aspects are unique to monorepos.
```json
{
  "name": "root",
  "private": true,
  "workspaces": ["packages/*"],
  "scripts": {
    "start:vite-common": "lerna run --scope vite-common storybook --stream",
    "build:vite-common": "lerna run --scope vite-common build --stream",
    "test:vite-common": "lerna run --scope vite-common test --stream",
    "start:vite-body": "lerna run --scope vite-body storybook --stream",
    "build": "lerna run build --stream",
    "test": "NODE_ENV=test jest --coverage"
  },
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "devDependencies": {
    "@babel/core": "^7.22.1",
    "@babel/preset-env": "^7.22.2",
    "@babel/preset-react": "^7.22.3",
    "@babel/preset-typescript": "^7.21.5",
    "@storybook/addon-actions": "^7.0.18",
    "@storybook/addon-essentials": "^7.0.18",
    "@storybook/addon-interactions": "^7.0.18",
    "@storybook/addon-links": "^7.0.18",
    "@storybook/addon-styling": "^1.0.8",
    "@storybook/blocks": "^7.0.18",
    "@storybook/builder-vite": "^7.0.18",
    "@storybook/preset-scss": "^1.0.3",
    "@storybook/react": "^7.0.18",
    "@storybook/react-vite": "^7.0.18",
    "@storybook/testing-library": "^0.1.0",
    "@testing-library/jest-dom": "^5.16.5",
    "@testing-library/react": "^14.0.0",
    "@types/jest": "^29.5.1",
    "@types/react": "^18.0.28",
    "@types/react-dom": "^18.0.11",
    "@typescript-eslint/eslint-plugin": "^5.57.1",
    "@typescript-eslint/parser": "^5.57.1",
    "@vitejs/plugin-react": "^4.0.0",
    "babel-jest": "^29.5.0",
    "babel-loader": "^8.3.0",
    "eslint": "^8.41.0",
    "eslint-plugin-react-hooks": "^4.6.0",
    "eslint-plugin-react-refresh": "^0.3.4",
    "eslint-plugin-storybook": "^0.6.12",
    "jest": "^29.5.0",
    "jest-environment-jsdom": "^29.5.0",
    "lerna": "^6.5.1",
    "path": "^0.12.7",
    "prettier": "^2.8.8",
    "prop-types": "^15.8.1",
    "sass": "^1.62.1",
    "storybook": "^7.0.18",
    "storybook-addon-mock": "^4.0.0",
    "ts-jest": "^29.1.0",
    "ts-node": "^10.9.1",
    "typescript": "^5.0.2",
    "vite": "^4.3.2"
  }
}
```

Scripts will manage the monorepo: running tests across all packages, building all packages, and so on.
This package.json also includes development dependencies shared across multiple packages in the monorepo, such as testing libraries and build tools. The private field is usually set to true in this package.json to prevent it from being accidentally published. The scripts can, of course, be extended for other packages for testing, building, and so on, like:

```json
"start:vite-footer": "lerna run --scope vite-footer storybook --stream",
```

Package Level Configuration

Since we exported all configs from the root for reuse, let's apply them at the package level. The package's Vite configuration uses the root Vite configuration: we just import the getBaseConfig function and provide it with lib. This configuration is used to build our component package as a standalone library. It specifies our package's entry point, library name, and output file name. With this configuration, Vite will generate a compiled file that exposes our component package under the specified library name, allowing it to be used in other projects or distributed separately.

```typescript
import * as path from "path";
import { getBaseConfig } from "../../vite.config";

export default getBaseConfig({
  lib: {
    entry: path.resolve(__dirname, "src/index.ts"),
    name: "ViteFooter",
    fileName: "vite-footer",
  },
});
```

For the .storybook folder, we use the same approach. We just import the common configs:

```typescript
import commonConfigs from "../../../.storybook/main";

const config = {
  ...commonConfigs,
  stories: ["../src/**/*.mdx", "../src/**/*.stories.@(js|jsx|ts|tsx)"],
};

export default config;
```

And the preview as well:

```typescript
import preview from "../../../.storybook/preview";

export default preview;
```

For the last one from the .storybook folder, we need to add preview-head.html:

```html
<script>
  window.global = window;
</script>
```

And the best part is that we have a pretty clean package.json without dependencies; we use them all from the root for all packages.
```json
{
  "name": "vite-footer",
  "private": true,
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "tsc && vite build",
    "lint": "eslint src --ext ts,tsx --report-unused-disable-directives --max-warnings 0",
    "preview": "vite preview",
    "storybook": "storybook dev -p 6006",
    "build-storybook": "storybook build"
  },
  "dependencies": {
    "vite-common": "^2.0.0"
  }
}
```

The only difference is vite-common, which is the dependency we're using in the Footer component.

Components

By organizing our component packages in this manner, we can easily manage and publish each package independently while sharing the common dependencies and infrastructure provided by our monorepo. Let's look at the src folder of the Footer component. The other components are identical; only the configuration differs.

```
├── assets
│   └── flow.svg
├── components
│   ├── Footer
│   │   ├── Footer.stories.tsx
│   │   └── index.tsx
│   └── index.ts
├── index.ts
└── vite-env.d.ts
```

The vite-env.d.ts file in the src folder helps TypeScript understand and provide accurate type checking for Vite-related code in our project. It ensures that TypeScript can recognize and validate Vite-specific properties, functions, and features.
```typescript
/// <reference types="vite/client" />
```

In the src folder, index.ts has:

```typescript
export * from "./components";
```

And the component that consumes vite-common components looks like this:

```tsx
import { Button, Links } from "vite-common";

export interface FooterProps {
  links: {
    label: string;
    href: string;
  }[];
}

export const Footer = ({ links }: FooterProps) => {
  return (
    <footer>
      <Links links={links} />
      <Button label="Click Button" backgroundColor="green" />
    </footer>
  );
};

export default Footer;
```

Here's what the stories look like for the component:

```tsx
import { StoryFn, Meta } from "@storybook/react";
import { Footer } from ".";

export default {
  title: "Example/Footer",
  component: Footer,
  parameters: {
    layout: "fullscreen",
  },
} as Meta<typeof Footer>;

const mockedLinks = [
  { label: "Home", href: "/" },
  { label: "About", href: "/about" },
  { label: "Contact", href: "/contact" },
];

const Template: StoryFn<typeof Footer> = (args) => <Footer {...args} />;

export const FooterWithLinks = Template.bind({});
FooterWithLinks.args = {
  links: mockedLinks,
};

export const FooterWithOneLink = Template.bind({});
FooterWithOneLink.args = {
  links: [mockedLinks[0]],
};
```

We use four packages in this example, but the approach is the same for any number. Once you create all the packages, you should be able to build, run, and test them independently. At the root level, run `yarn install` and then `yarn build` to build all packages, or run `yarn build:vite-common` to build just that package and start using it in your other packages.

Publish

To publish all the packages in our monorepo, we can use the `npx lerna publish` command. This command guides us through versioning and publishing each package based on the changes made.

```
lerna notice cli v6.6.2
lerna info versioning independent
lerna info Looking for changed packages since vite-body@1.0.0
? Select a new version for vite-body (currently 1.0.0) Major (2.0.0)
? Select a new version for vite-common (currently 2.0.0) Patch (2.0.1)
? Select a new version for vite-footer (currently 1.0.0) Minor (1.1.0)
? Select a new version for vite-header (currently 1.0.0)
❯ Patch (1.0.1)
  Minor (1.1.0)
  Major (2.0.0)
  Prepatch (1.0.1-alpha.0)
  Preminor (1.1.0-alpha.0)
  Premajor (2.0.0-alpha.0)
  Custom Prerelease
  Custom Version
```

Lerna will ask us for each package version, and then you can publish:

```
lerna info execute Skipping releases
lerna info git Pushing tags...
lerna info publish Publishing packages to npm...
lerna success All packages have already been published.
```

Conclusion

I was looking for a solid architectural solution for organizing front-end components at the company I work for. Now, for each project, we have a powerful, efficient development environment with shared rules that keep the packages independent. This combination gives me streamlined dependency management, isolated component testing, and simplified publishing.

References

- Repository
- Vite with Storybook
taichi.js: Painless WebGPU Programming
By Dunfan Lu
As a computer graphics and programming languages geek, I am delighted to have found myself working on several GPU compilers in the past two years. This began in 2021, when I started to contribute to taichi, a Python library that compiles Python functions into GPU kernels in CUDA, Metal, or Vulkan. Later on, I joined Meta and started working on SparkSL, the shader language that powers cross-platform GPU programming for AR effects on Instagram and Facebook. Aside from personal pleasure, I have always believed, or at least hoped, that these frameworks are genuinely useful: they make GPU programming more accessible to non-experts, empowering people to create fascinating graphics content without having to master complex GPU concepts.

In my latest installment of compilers, I turned my eyes to WebGPU, the next-generation graphics API for the web. WebGPU promises to bring high-performance graphics via low CPU overhead and explicit GPU control, aligning with the trend started by Vulkan and D3D12 some seven years ago. Just like Vulkan, the performance benefits of WebGPU come at the cost of a steep learning curve. Although I'm confident that this won't stop talented programmers around the world from building amazing content with WebGPU, I wanted to give people a way to play with WebGPU without having to confront its complexity. This is how taichi.js came to be.

Under the taichi.js programming model, programmers don't have to reason about WebGPU concepts such as devices, command queues, bind groups, etc. Instead, they write plain JavaScript functions, and the compiler translates those functions into WebGPU compute or render pipelines. This means that anyone can write WebGPU code via taichi.js, as long as they are familiar with basic JavaScript syntax. The remainder of this article will demonstrate the programming model of taichi.js via a "Game of Life" program.
As you will see, with less than 100 lines of code, we will create a fully parallel WebGPU program containing three GPU compute pipelines plus a render pipeline. The full source code of the demo can be found here, and if you want to play with the code without having to set up any local environment, go to this page.

The Game

The Game of Life is a classic example of a cellular automaton, a system of cells that evolve over time according to simple rules. It was invented by the mathematician John Conway in 1970 and has since become a favorite of computer scientists and mathematicians alike. The game is played on a two-dimensional grid, where each cell can be either alive or dead. The rules of the game are simple:

- If a living cell has fewer than two or more than three living neighbors, it dies.
- If a dead cell has exactly three living neighbors, it becomes alive.

Despite its simplicity, the Game of Life can exhibit surprising behavior. Starting from any random initial state, the game often converges to a state where a few patterns are dominant, as if these were "species" that survived through evolution.

Simulation

Let's dive into the Game of Life implementation using taichi.js. To begin with, we import the taichi.js library under the shorthand ti and define an async main() function, which will contain all of our logic. Within main(), we begin by calling ti.init(), which initializes the library and its WebGPU contexts.

```javascript
import * as ti from "path/to/taichi.js"

let main = async () => {
    await ti.init();
    ...
};

main()
```

Following ti.init(), let's define the data structures needed by the "Game of Life" simulation:

```javascript
let N = 128;
let liveness = ti.field(ti.i32, [N, N])
let numNeighbors = ti.field(ti.i32, [N, N])
ti.addToKernelScope({ N, liveness, numNeighbors });
```

Here, we defined two variables, liveness and numNeighbors, both of which are ti.fields.
In taichi.js, a "field" is essentially an n-dimensional array, whose dimensionality is provided in the second argument to ti.field(). The element type of the array is defined in the first argument. In this case, we have ti.i32, indicating 32-bit integers. However, field elements may also be other, more complex types, including vectors, matrices, and even structures.

The next line of code, ti.addToKernelScope({...}), ensures that the variables N, liveness, and numNeighbors are visible in taichi.js "kernels," which are GPU compute and/or render pipelines defined in the form of JavaScript functions. As an example, the following init kernel is used to populate our grid cells with initial liveness values, where each cell has a 20% chance of being alive initially:

```javascript
let init = ti.kernel(() => {
    for (let I of ti.ndrange(N, N)) {
        liveness[I] = 0
        let f = ti.random()
        if (f < 0.2) {
            liveness[I] = 1
        }
    }
})

init()
```

The init() kernel is created by calling ti.kernel() with a JavaScript lambda as the argument. Under the hood, taichi.js will look at the JavaScript string representation of this lambda and compile its logic into WebGPU code. Here, the lambda contains a for-loop whose loop index I iterates through ti.ndrange(N, N). This means that I will take NxN different values, ranging from [0, 0] to [N-1, N-1].

Here comes the magical part: in taichi.js, all top-level for-loops in a kernel are parallelized. More specifically, for each possible value of the loop index, taichi.js will allocate one WebGPU compute shader thread to execute it. In this case, we dedicate one GPU thread to each cell in our "Game of Life" simulation, initializing it to a random liveness state. The randomness comes from the ti.random() function, which is one of the many functions provided in the taichi.js library for kernel use. A full list of these built-in utilities is available in the taichi.js documentation.
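To build intuition for the index space the parallelized loop covers, here is a plain JavaScript sketch of what iterating over ti.ndrange(N, N) means. The helper ndrangeIndices is hypothetical (my own illustration, not part of taichi.js); in a real kernel, each index is handled by its own GPU thread rather than by a sequential loop.

```javascript
// Hypothetical helper (not part of taichi.js): enumerate the n x m index
// space that a top-level `for (let I of ti.ndrange(n, m))` loop covers.
function ndrangeIndices(n, m) {
  const indices = [];
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < m; j++) {
      indices.push([i, j]);
    }
  }
  return indices;
}

const indices = ndrangeIndices(3, 3);
console.log(indices.length);           // 9
console.log(indices[0], indices[8]);   // [ 0, 0 ] [ 2, 2 ]
```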
Having created the initial state of the game, let's move on to define how the game evolves. These are the two taichi.js kernels defining this evolution:

```javascript
let countNeighbors = ti.kernel(() => {
    for (let I of ti.ndrange(N, N)) {
        let neighbors = 0
        for (let delta of ti.ndrange(3, 3)) {
            let J = (I + delta - 1) % N
            if ((J.x != I.x || J.y != I.y) && liveness[J] == 1) {
                neighbors = neighbors + 1;
            }
        }
        numNeighbors[I] = neighbors
    }
});

let updateLiveness = ti.kernel(() => {
    for (let I of ti.ndrange(N, N)) {
        let neighbors = numNeighbors[I]
        if (liveness[I] == 1) {
            if (neighbors < 2 || neighbors > 3) {
                liveness[I] = 0;
            }
        } else {
            if (neighbors == 3) {
                liveness[I] = 1;
            }
        }
    }
})
```

Same as the init() kernel we saw before, these two kernels have top-level for-loops iterating over every grid cell, which are parallelized by the compiler. In countNeighbors(), for each cell, we look at the eight neighboring cells and count how many of them are alive. The number of live neighbors is stored in the numNeighbors field. Notice that when iterating through neighbors, the loop for (let delta of ti.ndrange(3, 3)) {...} is not parallelized, because it is not a top-level loop. The loop index delta ranges from [0, 0] to [2, 2] and is used to offset the original cell index I. We avoid out-of-bounds accesses by taking a modulo with N. (For the topologically inclined reader, this essentially means the game has toroidal boundary conditions.)

Having counted the number of live neighbors for each cell, we move on to update their liveness states in the updateLiveness() kernel. This is a simple matter of reading the liveness state of each cell and its current number of live neighbors, and writing back a new liveness value according to the rules of the game. As usual, this process applies to all cells in parallel. This essentially concludes the implementation of the game's simulation logic.
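The two kernels above can be cross-checked against a plain JavaScript reference implementation of one evolution step. This sketch is my own illustration, not taichi.js code; it uses the same toroidal wrap-around and the same survival and birth rules:

```javascript
// One Game of Life step on an n x n grid of 0/1 values, with toroidal
// (wrap-around) boundaries, mirroring countNeighbors + updateLiveness.
function step(grid) {
  const n = grid.length;
  const next = grid.map((row) => row.slice());
  for (let x = 0; x < n; x++) {
    for (let y = 0; y < n; y++) {
      let neighbors = 0;
      for (let dx = -1; dx <= 1; dx++) {
        for (let dy = -1; dy <= 1; dy++) {
          if (dx === 0 && dy === 0) continue;
          // ((v % n) + n) % n keeps indices in range even when negative.
          const nx = (((x + dx) % n) + n) % n;
          const ny = (((y + dy) % n) + n) % n;
          neighbors += grid[nx][ny];
        }
      }
      if (grid[x][y] === 1) {
        next[x][y] = neighbors < 2 || neighbors > 3 ? 0 : 1;
      } else {
        next[x][y] = neighbors === 3 ? 1 : 0;
      }
    }
  }
  return next;
}

// A "blinker" oscillates between a vertical and a horizontal bar,
// so two steps bring it back to its starting state.
const blinker = [
  [0, 0, 0, 0, 0],
  [0, 0, 1, 0, 0],
  [0, 0, 1, 0, 0],
  [0, 0, 1, 0, 0],
  [0, 0, 0, 0, 0],
];
console.log(JSON.stringify(step(step(blinker))) === JSON.stringify(blinker)); // true
```

Running known oscillators like the blinker through such a reference step is a cheap way to convince yourself the GPU version computes the same evolution.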
Next, we will see how to define a WebGPU render pipeline to draw the game's evolution onto a webpage.

Rendering

Writing rendering code in taichi.js is slightly more involved than writing general-purpose compute kernels, and it does require some understanding of vertex shaders, fragment shaders, and rasterization pipelines in general. However, you will find that the simple programming model of taichi.js makes these concepts extremely easy to work with and reason about.

Before drawing anything, we need access to a canvas to draw onto. Assuming that a canvas named result_canvas exists in the HTML, the following lines of code create a ti.CanvasTexture object, which represents a texture that can be rendered onto by a taichi.js render pipeline.

```javascript
let htmlCanvas = document.getElementById('result_canvas');
htmlCanvas.width = 512;
htmlCanvas.height = 512;
let renderTarget = ti.canvasTexture(htmlCanvas);
```

On our canvas, we will render a square, and we will draw the game's 2D grid onto this square. In GPUs, geometries to be rendered are represented in the form of triangles. In this case, the square we are trying to render will be represented as two triangles. These two triangles are defined in a ti.field, which stores the coordinates of each of the six vertices of the two triangles:

```javascript
let vertices = ti.field(ti.types.vector(ti.f32, 2), [6]);
await vertices.fromArray([
    [-1, -1],
    [1, -1],
    [-1, 1],
    [1, -1],
    [1, 1],
    [-1, 1],
]);
```

As we did with the liveness and numNeighbors fields, we need to explicitly declare the renderTarget and vertices variables to be visible in taichi.js kernels:

```javascript
ti.addToKernelScope({ vertices, renderTarget });
```

Now, we have all the data we need to implement our render pipeline.
Here's the implementation of the pipeline itself:

```javascript
let render = ti.kernel(() => {
    ti.clearColor(renderTarget, [0.0, 0.0, 0.0, 1.0]);
    for (let v of ti.inputVertices(vertices)) {
        ti.outputPosition([v.x, v.y, 0.0, 1.0]);
        ti.outputVertex(v);
    }
    for (let f of ti.inputFragments()) {
        let coord = (f + 1) / 2.0;
        let cellIndex = ti.i32(coord * (liveness.dimensions - 1));
        let live = ti.f32(liveness[cellIndex]);
        ti.outputColor(renderTarget, [live, live, live, 1.0]);
    }
});
```

Inside the render() kernel, we begin by clearing the renderTarget with an all-black color, represented in RGBA as [0.0, 0.0, 0.0, 1.0]. Next, we define two top-level for-loops, which, as you already know, are loops that are parallelized in WebGPU. However, unlike the previous loops where we iterated over ti.ndrange objects, these loops iterate over ti.inputVertices(vertices) and ti.inputFragments(), respectively. This indicates that these loops will be compiled into WebGPU "vertex shaders" and "fragment shaders," which work together as a render pipeline.

The vertex shader has two responsibilities:

1. For each triangle vertex, compute its final location on the screen (or, more accurately, its "clip space" coordinates). In a 3D rendering pipeline, this normally involves a series of matrix multiplications that transform the vertex's model coordinates into world space, then into camera space, and finally into clip space. However, for our simple 2D square, the input coordinates of the vertices are already at their correct clip-space values, so we can avoid all of that. All we have to do is append a fixed z value of 0.0 and a fixed w value of 1.0 (don't worry if you don't know what those are; they're not important here!).

```javascript
ti.outputPosition([v.x, v.y, 0.0, 1.0]);
```

2. For each vertex, generate data to be interpolated and then passed into the fragment shader.
In a render pipeline, after the vertex shader is executed, a built-in process known as "rasterization" is executed on all the triangles. This is a hardware-accelerated process that computes, for each triangle, which pixels are covered by that triangle. These pixels are also known as "fragments." For each triangle, the programmer is allowed to generate additional data at each of the three vertices, which will be interpolated during the rasterization stage. For each fragment, its corresponding fragment shader thread will receive the values interpolated according to the fragment's location within the triangle.

In our case, the fragment shader only needs to know the location of the fragment within the 2D square so it can fetch the corresponding liveness value of the game. For this purpose, it suffices to pass the 2D vertex coordinate into the rasterizer, which means the fragment shader will receive the interpolated 2D location of the pixel itself:

```javascript
ti.outputVertex(v);
```

Moving on to the fragment shader:

```javascript
for (let f of ti.inputFragments()) {
  let coord = (f + 1) / 2.0;
  let cellIndex = ti.i32(coord * (liveness.dimensions - 1));
  let live = ti.f32(liveness[cellIndex]);
  ti.outputColor(renderTarget, [live, live, live, 1.0]);
}
```

The value f is the interpolated pixel location passed on from the vertex shader. Using this value, the fragment shader looks up the liveness state of the cell that covers this pixel. This is done by first converting the pixel coordinates f into the [0, 0] ~ [1, 1] range and storing the result in the coord variable. This is then multiplied by the dimensions of the liveness field, which produces the index of the covering cell. We then fetch the live value of this cell, which is 0 if it is dead and 1 if it is alive. Finally, we output the RGBA value of this pixel onto the renderTarget, where the R, G, and B components are all equal to live, and the A component is equal to 1, for full opacity.
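To make the coordinate math concrete, here is a plain-JavaScript sketch of the same mapping the fragment shader performs. The function name and grid size below are illustrative, not part of the taichi.js API:

```javascript
// Map an interpolated clip-space coordinate in [-1, 1] to a cell index
// in an n x n grid, mirroring the fragment shader's lookup logic.
function clipSpaceToCellIndex(fx, fy, n) {
  // Convert from [-1, 1] to [0, 1].
  const cx = (fx + 1) / 2.0;
  const cy = (fy + 1) / 2.0;
  // Scale to [0, n - 1] and truncate to an integer, like ti.i32().
  return [Math.trunc(cx * (n - 1)), Math.trunc(cy * (n - 1))];
}

// The bottom-left corner of the square maps to cell [0, 0],
// and the top-right corner maps to cell [n - 1, n - 1].
console.log(clipSpaceToCellIndex(-1, -1, 128)); // [0, 0]
console.log(clipSpaceToCellIndex(1, 1, 128));   // [127, 127]
```

Note that the interpolated coordinate varies continuously across the square, so neighboring pixels that fall within the same cell produce the same index and thus the same color.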
With the render pipeline defined, all that's left is to put everything together by calling the simulation kernels and the render pipeline every frame:

```javascript
async function frame() {
  countNeighbors();
  updateLiveness();
  await render();
  requestAnimationFrame(frame);
}
await frame();
```

And that's it! We have completed a WebGPU-based "Game of Life" implementation in taichi.js. If you run the program, you should see an animation where 128x128 cells evolve for around 1400 generations before converging to a few species of stabilized organisms.

Exercises

I hope you found this demo interesting! If you did, then I have a few extra exercises and questions that I invite you to experiment with and think about. (By the way, for quickly experimenting with the code, go to this page.)

[Easy] Add an FPS counter to the demo! What FPS value can you obtain with the current setting, where N = 128? Try increasing the value of N and see how the framerate changes. Would you be able to write a vanilla JavaScript program that obtains this framerate without taichi.js or WebGPU?

[Medium] What would happen if we merged countNeighbors() and updateLiveness() into a single kernel and kept the neighbors counter as a local variable? Would the program still always work correctly?

[Hard] In taichi.js, ti.kernel(..) always produces an async function, regardless of whether it contains compute pipelines or render pipelines. If you had to guess, what is the meaning of this async-ness? And what is the meaning of calling await on these async calls? Finally, in the frame function defined above, why did we put await only on the render() call, but not the other two?

The last two questions are especially interesting, as they touch on the inner workings of the compiler and runtime of the taichi.js framework, as well as the principles of GPU programming. Let me know your answers!

Resources

Of course, this Game of Life example only scratches the surface of what you can do with taichi.js.
From real-time fluid simulations to physically based renderers, there are many other taichi.js programs for you to play with, and even more for you to write yourself. For additional examples and learning resources, check out:

Github page
Docs
Playground

Happy coding!

More
The Role of JavaScript in Front-End and Back-End Development
By Sam Allen
Unleashing the Power of React Hooks
By Atul Naithani
Mastering Node.js: The Ultimate Guide
By Ashokkumar Gurusamy
Next.js vs. React: The Ultimate Guide To Choosing the Right Framework

React and Next.js are among the most popular technologies in front-end development for creating high-quality websites and modern, dynamic web applications. Top streaming apps like Hulu and Netflix rely on React, which is extremely fast and delivers immersive experiences; so if you're familiar with React, you already have a head start. Next.js, on the other hand, is more feature-rich and opinionated than React, even though both help create high-performance, effective web user interfaces. Next.js is especially useful for websites that care deeply about search engine optimization or pre-rendering. Each has unique features and use cases that make it well suited to building modern, dynamic web applications.

In this guide, we will make a detailed comparison between Next.js and React and help you decide which framework best suits your needs by explaining the differences between them. If you are a developer or a business seeking a detailed comparison of Next.js vs. React, you have come to the right place. So, let's get started with the basics!

Next.js: An Overview

Next.js is a robust, flexible, open-source framework built on top of React and used as a production-ready tool that eases server-side rendering (SSR) and static site generation (SSG). With its minimalistic design and performance optimization, Next.js is a popular choice for large-scale applications that need improved scalability and simplicity. React websites are often built on Next.js to simplify server-side rendering, since Next.js offers all the functionality you need to create a website that works right out of the box.
Next.js has comprehensive documentation, along with various tutorials, guides, and training videos that make it easy for beginners and new developers to get started with the platform quickly and efficiently.

Advantages of Next.js

Here are some of the top advantages of using Next.js for your web applications.

Static Site Generation (SSG): Next.js supports SSG, which allows developers to pre-render static web pages at build time. This method is best suited for content-heavy websites, since it speeds up performance and reduces loading time and the burden on the server. It takes more time to build a static site from a React codebase than a single-page React application; with static content, however, the payoff is being able to serve and cache content at the maximum possible speed without any additional computation overhead.

Server-Side Rendering (SSR): Next.js fully supports SSR, which is one of the most significant advantages of using this framework. By delivering fully formed HTML documents, your web pages will rank better in search engine results and load faster for the user. This leads to an improved user experience and reduced load on the server.

Easy Deployment: It is quite easy to deploy Next.js applications to platforms like Netlify and Vercel within a few clicks. The deployment process is straightforward, as Next.js comes with built-in support for server-side rendering and routing, so developers can build and deploy their web applications quickly, saving time and effort.

Rich Ecosystem: Next.js gives developers access to a wide range of tools, libraries, and extensions, enabling them to use the power of React while benefiting from Next.js' additional features and capabilities.
Community Support: Even though fewer tutorials and support resources are available on the Internet, the Next.js community is very active. When it comes to dedicated community support, Next.js is almost comparable to React.

Use Cases for Next.js

Following are a few use cases of Next.js:

Enterprise Applications: Next.js is an ideal platform for building large-scale, complex, data-driven applications for large enterprises.

Content-Heavy Blogs: The static site generation capabilities of Next.js can significantly improve the performance of a blog with a lot of content, making it a perfect choice for content-rich sites.

E-commerce Websites: Next.js is a great fit for e-commerce websites that require improved search engine optimization, fast page loading times, and dynamic user interfaces. If you are planning to build an e-commerce store with various custom features and functionalities, Next.js is a solid choice.

React: An Overview

React is a JavaScript library developed and maintained by Facebook and widely popular for building interactive user interfaces. React is one of the most popular front-end tools for building modern, fast-loading web applications. It uses a component-based architecture, making it easy to design and develop complex UIs by dividing them into smaller, reusable parts.

Advantages of React

Here are some of the top benefits of using React for your business web apps.

Component-Based Approach: React's component-based architecture encourages code reusability, maintainability, and scalability, so developers can quickly create individual components and combine them to build sophisticated user interfaces.
Virtual DOM: React performs well thanks to a lightweight virtual DOM, which allows React to update only the parts of the real DOM that actually need to change.

Reusable Components: Reusable components can be loaded on different pages repeatedly while keeping their characteristics; all pages reflect the changes as soon as you make a single change to the component code.

Mobile App Development: Building cross-platform mobile applications with React Native is similar to building web applications with React, letting you build Android and iOS applications from a single codebase quickly and efficiently.

Large Community Support: Developing apps with React is easier thanks to its large and active community of developers and designers. You can find many resources, tutorials, and libraries that make the development process faster and easier.

Use Cases of React

Below are some use cases of React:

Single-Page Applications (SPAs): Real-time updates and dynamic content are critical requirements of SPAs, and React is ideal for building high-performance single-page applications.

Interactive Dashboards: You can create real-time analytics interfaces and dashboards using React.

Web Components: React's component-based approach makes it easy to create reusable UI elements for any web application.

Next.js vs. React: Which One To Choose

Let's get to know both frameworks better with the following detailed comparison.

Documentation

When comparing React and Next.js, documentation is often a topic of discussion. The frameworks' home pages look appealing, but you will need tutorials, books, and articles to use them effectively. You can find various tutorials for both React and Next.js on the Internet. Next.js is fairly easy to learn: it has a set of "learn-by-doing" documentation that walks you through creating components and routing.
For those still new to React, there are a few exercises readily available to guide you through the basics. In addition, you should read the official documentation of both frameworks to gain a deeper understanding of React and Next.js.

Search Engine Optimization

Search engines can crawl and index websites more easily and quickly thanks to Next.js's speed and pre-rendering capabilities, improving search engine optimization and the overall user experience. Websites with better SEO appear higher in search engine results, which is why SEO matters so much to many businesses and websites. Hence, Next.js brings improved SEO, higher performance, and an enhanced user experience.

Performance

Performance is one of the biggest differences between Next.js and React. Next.js sites are typically faster because the framework offers features such as server-side rendering, image optimization, and static generation that make pages load quickly across devices. Because plain React relies on client-side rendering, React sites often have comparatively slower initial loading times and are not as well suited for SEO. With Next.js, you get high-performance sites thanks to code splitting and automatic server-side rendering.

Beginner Friendliness

Next.js is an ideal choice for developers who are new to or just getting started with React. Much like Create React App, it saves developers time and effort in configuring their toolset, and it lets them use pre-built templates for different app categories or build their own from scratch. You therefore no longer need to wire up an application from the ground up.

Speed and Ease of Coding

In React, you create components and then add them to the router when creating a page.
Next.js, however, simply requires you to add a link to the component at the top of each page you create. This simplifies the developer's life and enables them to build products or applications much faster, with minimal coding and configuration.

Setup

React can be difficult to configure unless you eject from Create React App. Next.js, by contrast, needs minimal configuration out of the box, and configuration files such as .babelrc, jest.config, and .eslintrc are available for customizing a Next.js template. Hence, the setup process of Next.js is straightforward compared to React.

Conclusion

It is important to consider the project requirements when choosing between Next.js and React for your web application, since developers often choose a framework based on its convenience, performance, and seamlessness. Next.js and React both offer a lot of flexibility to developers: React has greater resources, while Next.js has a more turbo-charged feature set. Next.js is the better fit when you want to use a lot of APIs for custom features and functionality; on the other hand, you can go for React if you want to create a simple static website. Ultimately, the framework you choose will depend on several factors, including the scope, complexity, functionality, performance, and scalability requirements of your project. So, always put your needs first before deciding between Next.js and React for your business.

By Avani Trivedi
Advanced React JS Concepts: A Deep Dive

The Basics of React JS

Before we explore the advanced concepts, let's quickly revisit the basics of React JS.

Components and JSX

React applications are built using components. Components are like building blocks that encapsulate the logic and UI of a part of the application. They can be reused, allowing developers to create complex user interfaces by composing smaller components together. In React, JSX (JavaScript XML) is used to describe the structure of components. It provides a syntax that looks similar to HTML, making it easier for developers to visualize UI components.

State and Props

In React, state and props are used to manage data within components.

State: It represents the local state of a component and can change over time. When the state updates, React automatically re-renders the component to reflect the changes.

Props: Short for "properties," props are used to pass data from a parent component to a child component. Props are read-only and cannot be changed by the child component.

Virtual DOM

React uses a virtual DOM to optimize the rendering process. The virtual DOM is a lightweight copy of the actual DOM, and any changes made to the UI are first applied to the virtual DOM. React then calculates the difference between the previous and updated virtual DOMs and efficiently updates only the necessary parts of the actual DOM, reducing rendering time.

Advanced React JS Concepts

Now that we have covered the basics, let's dive into some advanced concepts that can enhance your React JS skills.

React Hooks

Introduced in React 16.8, React Hooks are functions that allow developers to use state and other React features without writing a class. Hooks such as useState and useEffect enable functional components to have stateful logic and side effects. Hooks make code more concise and readable, and they provide an elegant solution for managing state in functional components.
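To build intuition for how a hook like useState can "remember" values between renders of a plain function, here is a heavily simplified plain-JavaScript model. This is an illustration only, not React's actual implementation:

```javascript
// A toy model of useState: hook values live in an array outside the
// component, indexed by call order -- which is why real hooks must be
// called in the same order on every render.
const hookStates = [];
let hookIndex = 0;

function useState(initialValue) {
  const index = hookIndex++;
  if (hookStates[index] === undefined) hookStates[index] = initialValue;
  const setState = (value) => { hookStates[index] = value; };
  return [hookStates[index], setState];
}

// "Rendering" resets the hook index, as React does before each render.
function render(component) {
  hookIndex = 0;
  return component();
}

function Counter() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

let ui = render(Counter);
console.log(ui.count); // 0
ui.increment();
ui = render(Counter);
console.log(ui.count); // 1
```

The key idea carried over to real React is that state lives outside the function and is re-associated with each useState call by position, which is why hooks may not be called conditionally.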
Context API

The Context API is a way to share data across the component tree without explicitly passing props at every level. It allows developers to create a global state that can be accessed by any component within the tree. Using the Context API eliminates the need for "prop drilling," making the data flow more efficient and organized.

React Router

React Router is a popular library used for handling navigation in React applications. It allows developers to create multiple routes, enabling users to navigate between different pages or views in a single-page application. With React Router, developers can implement dynamic, client-side routing, providing a seamless user experience.

Error Boundaries

Error Boundaries are a React feature that helps catch errors that occur during rendering, in lifecycle methods, and in the constructors of the whole component tree. By using Error Boundaries, developers can prevent the entire application from crashing when an error occurs in a specific component. Error Boundaries improve the overall stability of the application and provide better error handling.

React Performance Optimization

As React applications grow in complexity, performance optimization becomes crucial. Let's explore some techniques for optimizing React applications.

Memoization

Memoization is a technique used to optimize expensive calculations or functions by caching the results. In React, the useMemo hook can be used to memoize the result of a function and recompute it only if the dependencies change. By memoizing calculations, React can avoid unnecessary recalculations and improve rendering performance.

Lazy Loading

Lazy loading is a method used to defer the loading of non-essential resources until they are needed. In React, components can be lazy-loaded using the React.lazy function and the Suspense component. Lazy loading reduces the initial bundle size, resulting in faster load times for the initial page.
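Returning to memoization: the dependency-checking idea behind useMemo can be sketched in plain JavaScript. The helper below is a simplified illustration, not React's implementation:

```javascript
// Recompute only when the dependency array changes, caching the last result.
function createMemo() {
  let lastDeps = null;
  let lastResult;
  let computeCount = 0;

  return {
    memo(compute, deps) {
      const changed =
        lastDeps === null ||
        deps.length !== lastDeps.length ||
        deps.some((d, i) => !Object.is(d, lastDeps[i]));
      if (changed) {
        lastResult = compute();
        lastDeps = deps;
        computeCount++;
      }
      return lastResult;
    },
    get computeCount() { return computeCount; },
  };
}

const cell = createMemo();
const square = (n) => cell.memo(() => n * n, [n]);

console.log(square(4)); // 16 (computed)
console.log(square(4)); // 16 (cached -- compute not called again)
console.log(square(5)); // 25 (deps changed, recomputed)
console.log(cell.computeCount); // 2
```

As with useMemo, the comparison is shallow (Object.is per dependency), so object and array dependencies should be stable references to benefit from the cache.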
Code Splitting

Code splitting involves breaking the application's code into smaller chunks or bundles that are loaded on demand. This technique reduces the initial loading time of the application. React applications can benefit from code splitting, especially those with large codebases.

Debouncing and Throttling

Debouncing and throttling are techniques used to control the rate at which a function is called. Debouncing delays the execution of a function until a specified time has passed since it was last invoked. Throttling limits the number of times a function can be called over a certain period. By using these techniques, developers can improve performance by reducing unnecessary function calls.

React Testing

Testing is a crucial aspect of software development. In React, testing can be done at different levels.

Unit Testing With Jest

Jest is a popular testing framework that is widely used for unit testing React components. It allows developers to write test cases that ensure individual components behave as expected. Unit testing helps identify and fix bugs early in the development process.

Integration Testing With React Testing Library

The React Testing Library provides utilities for testing React components in a more realistic way by simulating user interactions. Integration testing ensures that different components work together as intended and helps validate the application's overall functionality.

React Best Practices

Following best practices is essential for writing maintainable and scalable React applications.

Folder Structure

A well-organized folder structure can make a significant difference in the development process. Grouping related components, styles, and utilities together makes it easier to locate and update code.

DRY Principle (Don't Repeat Yourself)

The DRY principle advocates avoiding code duplication. In React, developers should strive to reuse components and logic whenever possible.
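Returning to debouncing and throttling: both can be written in a few lines of plain JavaScript. This is a minimal sketch of the common pattern, not any particular library's API:

```javascript
// Debounce: run fn only after `delay` ms have passed since the last call.
function debounce(fn, delay) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

// Throttle: run fn at most once per `interval` ms (leading edge).
function throttle(fn, interval) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= interval) {
      last = now;
      fn(...args);
    }
  };
}

let throttledCalls = 0;
const tick = throttle(() => throttledCalls++, 1000);
tick(); tick(); tick(); // fires only once within the 1000 ms window
console.log(throttledCalls); // 1

let debouncedCalls = 0;
const save = debounce(() => debouncedCalls++, 50);
save(); save(); save(); // nothing fires yet -- the timer keeps resetting
console.log(debouncedCalls); // 0
```

A typical React use is debouncing a search-input handler so an API request fires only after the user pauses typing, and throttling a scroll handler so it runs at a bounded rate.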
Stateless Functional Components

Stateless functional components, also known as functional or presentational components, are a recommended best practice in React. These components do not maintain state and only receive data through props. Using stateless functional components makes the code more modular and easier to test.

Using PropTypes

PropTypes is a library that helps type-check the props passed to components. By specifying the expected data types and whether certain props are required, developers can catch bugs and ensure that components receive the correct data.

Advanced Styling in React

Styling is an essential aspect of creating appealing user interfaces. React offers various methods for styling components.

CSS Modules

CSS Modules allow developers to write modular, scoped CSS for their components. The CSS rules defined within a component apply only to that component, preventing unintended styling conflicts. CSS Modules enhance code maintainability and make it easier to manage styles in larger applications.

Styled Components

Styled Components is a popular library that enables developers to write CSS directly within their JavaScript code. It uses tagged template literals to create styled components. Styled Components offer a more dynamic and flexible approach to styling, making it easy to adjust component styles based on props and state.

React State Management

As React applications grow in complexity, managing state across multiple components becomes challenging. State management libraries can help address this issue.

Redux

Redux is a predictable state management library that follows the Flux architecture. It centralizes the application's state in a single store and allows components to access and modify the state using reducers and actions. Redux provides a clear separation of concerns and simplifies data flow in large applications.
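The store/reducer/action flow described above can be sketched in a few lines of plain JavaScript. This is a toy store for illustration, not the real Redux library:

```javascript
// A minimal Redux-style store: state changes only via dispatched actions,
// and a pure reducer computes the next state from (state, action).
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach((listener) => listener());
    },
    subscribe(listener) {
      listeners.push(listener);
    },
  };
}

const counterReducer = (state, action) => {
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'DECREMENT': return { count: state.count - 1 };
    default: return state;
  }
};

const store = createStore(counterReducer, { count: 0 });
store.subscribe(() => console.log('state is now', store.getState()));
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState().count); // 2
```

Because the reducer is a pure function, the same sequence of actions always produces the same state, which is what makes Redux-style state "predictable" and easy to test.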
MobX

MobX is another popular state management library that offers a more flexible, reactive approach to managing state. It automatically tracks the dependencies between observables and updates components when the state changes. MobX is known for its simplicity and ease of integration with React applications.

Server-Side Rendering (SSR) With React

Server-side rendering is a technique in which a React application is rendered on the server before being sent to the client. This improves initial loading times and enhances SEO by providing search engines with fully rendered HTML content. SSR can be achieved using libraries like Next.js, which simplifies the process of implementing server-side rendering in React applications.

React Security Best Practices

Web application security is of utmost importance for protecting user data and preventing attacks. React developers should follow these best practices:

XSS Prevention: Cross-Site Scripting (XSS) is a common security vulnerability that allows attackers to inject malicious scripts into web pages. Developers can prevent XSS attacks by properly sanitizing user input and using libraries like DOMPurify to sanitize HTML.

CSRF Protection: Cross-Site Request Forgery (CSRF) is another security threat in which an attacker tricks users into unknowingly performing actions on a website. To protect against CSRF attacks, developers should use CSRF tokens and enforce strict CORS policies.

The Future of React

React continues to evolve, and its future looks promising. Some trends and developments to watch for include:

React Concurrent Mode: Concurrent Mode is an upcoming feature that will allow React to perform rendering in a more incremental and interruptible way. This will result in smoother user experiences, especially for applications with complex UIs.

React Server Components: Server Components aim to take server-side rendering to the next level.
They will allow developers to offload component rendering to the server, leading to even faster load times.

Improved React Performance: The React team is continually working on optimizing React's performance, making it faster and more efficient.

Conclusion

React JS is a powerful, versatile library that enables developers to build sophisticated web applications. In this article, we explored some advanced concepts in React, including React Hooks, the Context API, React Router, performance optimization, testing, state management, and more. By mastering these advanced concepts and following best practices, developers can create scalable, maintainable, high-performing React applications that deliver exceptional user experiences.

By Sam Allen
Three Best React Form Libraries

How can we simplify our work as our React project's forms become increasingly intricate? Creating and handling forms in React can be challenging and time-consuming. Fortunately, third-party libraries can help. Many exceptional form libraries are available that can simplify the process and make React form development more efficient and enjoyable. The primary question then becomes: which form library is the best? In this blog post, we'll discuss three of the top React form libraries that every React developer should know.

Formik

Formik is a widely used and highly popular form library for React applications. It simplifies form management by providing developers with a set of tools and utilities that handle form state, validation, and submission seamlessly.

Formik's Key Features:

Declarative Approach: Developers can use Formik to create forms with a declarative syntax that minimizes boilerplate code and keeps the form structure concise.

Form State Management: Formik keeps track of changes made to form elements and simplifies handling different interactions with the form.

Validation: Developers can easily define validation rules and error messages for form fields using Formik's built-in validation capabilities.

Form Submission: Formik makes it easier to submit forms by handling asynchronous tasks, like API requests, during submission.

Integration With Third-Party Libraries: Formik integrates effortlessly with well-known libraries such as Yup for schema-based validation and UI libraries like Material-UI.

If you're looking for a reliable way to create intricate, ever-changing forms in your React applications, Formik is worth considering. Its robust community and development team are constantly working to improve it.
Formik Usage:

Here's a basic overview of how to use Formik.

Installation: To begin, install Formik and its peer dependency, Yup (used for form validation), with either npm or yarn:

```shell
npm install formik yup
# or
yarn add formik yup
```

Import and Setup: Import the required components from Formik and Yup, and set up your form using the Formik component:

```jsx
import React from 'react';
import { Formik, Form, Field, ErrorMessage } from 'formik';
import * as Yup from 'yup';

const initialValues = {
  name: '',
  email: '',
};

const validationSchema = Yup.object({
  name: Yup.string().required('Name is required'),
  email: Yup.string().email('Invalid email address').required('Email is required'),
});

const onSubmit = (values) => {
  console.log(values);
};

const MyForm = () => (
  <Formik
    initialValues={initialValues}
    validationSchema={validationSchema}
    onSubmit={onSubmit}
  >
    <Form>
      <div>
        <label htmlFor="name">Name</label>
        <Field type="text" id="name" name="name" />
        <ErrorMessage name="name" component="div" />
      </div>
      <div>
        <label htmlFor="email">Email</label>
        <Field type="email" id="email" name="email" />
        <ErrorMessage name="email" component="div" />
      </div>
      <button type="submit">Submit</button>
    </Form>
  </Formik>
);

export default MyForm;
```

Field and ErrorMessage: The Field component is used for inputting information, while the ErrorMessage component displays any validation errors related to the data entered in the field.

Form Submission: Formik takes care of form state, validation, and submission when the form is submitted. If validation succeeds, the onSubmit function defined on the Formik component is called with the form values.

Accessing Formik State: To access Formik's state and helpers within your form components, you can use either the useFormik hook or the withFormik higher-order component.

Formik offers various other features, including form reset handling, asynchronous submissions, dynamic form fields, and more.
The above example covers the fundamental aspects, but you can explore the Formik documentation to discover advanced usage and customization options.

React Hook Form

React Hook Form is a highly effective form library that uses React's hooks to manage form state and behavior. It prioritizes performance and aims to minimize the number of re-renders, ensuring the best possible user experience.

React Hook Form Key Features:

Minimal Re-renders: React Hook Form is optimized to reduce unnecessary re-renders through techniques such as uncontrolled components and targeted updates.

Custom Hooks: Developers are encouraged to use custom hooks for managing form state. This approach allows form logic to be modularized and reused in different parts of the application.

Validation: React Hook Form is versatile and adaptable for validating forms. It offers built-in and custom validation methods, allowing flexibility in handling different validation scenarios.

Async Validation: It supports asynchronous validation, which simplifies validating data against remote APIs or performing intricate validation checks.

Performance: React Hook Form is ideal for high-performance applications, as it minimizes the frequency of re-renders.

React Hook Form Usage:

Here's how you can use React Hook Form in your project.

Installation: To begin, install the library using either npm or yarn:

```shell
npm install react-hook-form
# or
yarn add react-hook-form
```

Primary Usage: Import the necessary functions from the library, then create a form component.
```jsx
import { useForm, Controller } from 'react-hook-form';

function MyForm() {
  const { control, handleSubmit, formState } = useForm();

  const onSubmit = (data) => {
    console.log(data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <label>Name</label>
      <Controller
        name="name"
        control={control}
        defaultValue=""
        render={({ field }) => <input {...field} />}
      />
      <button type="submit">Submit</button>
    </form>
  );
}
```

Field Validation: React Hook Form has built-in validation support that can use schema-based validation libraries such as Yup (through the yupResolver from the @hookform/resolvers package) or inline validation functions.

```jsx
import { useForm, Controller } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as yup from 'yup';

const schema = yup.object().shape({
  name: yup.string().required('Name is required'),
});

function MyForm() {
  const { control, handleSubmit, formState } = useForm({
    resolver: yupResolver(schema),
  });

  // ...

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <label>Name</label>
      <Controller
        name="name"
        control={control}
        defaultValue=""
        render={({ field, fieldState }) => (
          <div>
            <input {...field} />
            {fieldState.error && <p>{fieldState.error.message}</p>}
          </div>
        )}
      />
      {/* ... */}
    </form>
  );
}
```

Advanced Usage: Explore advanced features of React Hook Form, such as dynamic fields, custom inputs, and array fields, in the official React Hook Form documentation.

Accessing Form State: To access the form state and errors, use the formState object provided by the useForm hook:

```jsx
const { handleSubmit, formState: { errors, isSubmitting } } = useForm();
```

Performance Optimization: React Hook Form is designed to improve performance by minimizing re-renders and unnecessary updates. This is achieved through uncontrolled components and internal handling of the form state, so there is less need for controlled-component re-renders.
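Both libraries above build on the same underlying contract: a validation step takes the current values and produces an errors object, where an empty object means the form is valid. Here is a framework-free sketch of that idea; the rule format below is illustrative, not any library's actual API:

```javascript
// A minimal values -> errors validator, mirroring the contract used by
// React form libraries: an empty errors object means the form is valid.
function validate(values, rules) {
  const errors = {};
  for (const [field, rule] of Object.entries(rules)) {
    const value = values[field];
    if (rule.required && (value === undefined || value === '')) {
      errors[field] = `${field} is required`;
    } else if (rule.pattern && !rule.pattern.test(value)) {
      errors[field] = `${field} is invalid`;
    }
  }
  return errors;
}

const rules = {
  name: { required: true },
  email: { required: true, pattern: /^[^@\s]+@[^@\s]+$/ },
};

console.log(validate({ name: '', email: 'not-an-email' }, rules));
// { name: 'name is required', email: 'email is invalid' }
console.log(validate({ name: 'Ada', email: 'ada@example.com' }, rules));
// {}
```

Schema libraries like Yup generalize exactly this pattern, adding composable rule types, custom messages, and async checks on top of the same values-in, errors-out shape.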
It is essential to refer to the official documentation and examples to ensure that you have the most current information and are using the best practices when implementing React Hook Form in your projects. React Final Form React Final Form is a widely-used form library that effectively manages forms in React applications. It offers a solid and adaptable approach to managing form state, validation, and submission. React Final Form is user-friendly and efficient and provides advanced features for complicated scenarios. React Final Form Key Features: Declarative API: React Final Form adopts a declarative approach for defining the structure and behavior of forms. Using React components, you can easily describe the form fields and validation rules, simplifying comprehension and upkeep. Form State Management: The 'Final Form State' object manages the form state within the library. This state encompasses the values of the form fields, validation status, and fields that have been interacted with. Validation: With React Final Form, you can use synchronous and asynchronous validation to ensure your forms are error-free. Define your validation rules for each field; any errors will be automatically displayed in the user interface. Field-Level Control: You can precisely control every form field, from accessing individual field values to checking for validation errors and other properties. Submission: Managing form submissions is a simple process. You need to create a submission function that will receive the input values and can be activated by clicking a button or other similar events. Form Rendering: With React Final Form, you can select how you want to present your form components. You are not restricted in appearance, thus enabling you to design unique and personalized form layouts. Performance Optimization: The library is optimized for performance, reducing unnecessary re-renders and updates to enhance the application's speed. 
Third-Party Components: You can effortlessly incorporate React Final Form with third-party libraries and components, which enables you to utilize well-known UI frameworks such as Material-UI or Ant Design for your form inputs. Extensibility: The library has a highly flexible plugin system, allowing you to customize and extend its functionality according to your unique requirements. When beginning with React Final Form, you must install it as a dependency in your project and import the required components. Afterward, use JSX syntax to define your form and utilize the provided props and methods to manage form interactions. React Final Form Usage: Here's an essential guide on React Final Form: Installation: Begin by installing the necessary packages: JSX npm install react-final-form final-form Creating a Form Component: To create a new form component using React Final Form, you can follow this simple example: JSX import React from 'react'; import { Form, Field } from 'react-final-form'; const MyForm = () => { const onSubmit = (values) => { console.log('Form values:', values); }; const validate = (values) => { const errors = {}; if (!values.firstName) { errors.firstName = 'Required'; } if (!values.lastName) { errors.lastName = 'Required'; } return errors; }; return ( <Form onSubmit={onSubmit} validate={validate} render={({ handleSubmit }) => ( <form onSubmit={handleSubmit}> <div> <label>First Name</label> <Field name="firstName" component="input" type="text" /> </div> <div> <label>Last Name</label> <Field name="lastName" component="input" type="text" /> </div> <button type="submit">Submit</button> </form> )} /> ); }; export default MyForm; In this example, we're using the Form component to wrap our form and the Field component to define individual form fields. Field Components: The Field component defines form fields. It has various built-in components (input, textarea, select, etc.), or you can create custom components. 
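Because the record-level validate function in the form example above is plain JavaScript, its behavior can be checked outside of React entirely. Here it is extracted on its own:

```javascript
// The same record-level validate function passed to <Form validate={...}>
// in the example above, extracted as a standalone function. It receives
// all form values and returns an object keyed by field name.
const validate = (values) => {
  const errors = {};
  if (!values.firstName) {
    errors.firstName = 'Required';
  }
  if (!values.lastName) {
    errors.lastName = 'Required';
  }
  return errors;
};

console.log(validate({ firstName: 'Ada' }));
// { lastName: 'Required' }
console.log(validate({ firstName: 'Ada', lastName: 'Lovelace' }));
// {}
```

Keeping validation in pure functions like this is what allows React Final Form to display errors automatically: an empty returned object means the form is valid and submission may proceed.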
You can also access the form state and validation information using the meta prop. Validation: React Final Form allows you to define validation functions to validate form values. The validate prop of the Form component accepts a validation function that returns an object with validation errors. Submission: When using the Form component, the onSubmit prop should include a function to execute upon submitting the form. This function can handle form submissions, make API calls, or perform other necessary actions. Initial Values: You can set initial values for the form using the initialValues prop of the Form component. Form State: React Final Form provides a FormSpy component that allows you to subscribe to and access the form's state, which can be helpful for more advanced scenarios. Decorators: You can enhance the functionality of your form using decorators, which are higher-order components that can modify the behavior of your form. Field-Level Validation: You can define field-level validation by passing a validation function as a prop to the Field component. Submitting and Resetting: You can control form submission and reset by using the submit and reset functions provided by the Form component. Here is a simple guide on using React Final Form. This library offers numerous features and customization choices to manage intricate application forms. Refer to the official React Final Form Documentation for more comprehensive information and examples. Conclusion When building forms in React applications, libraries like Formik, React Hook Form, and React Final Form can significantly simplify the process. Each library has its features and advantages, which can cater to various project requirements and development preferences. When you add these libraries to your React projects, you can simplify form creation, improve user experiences, and dedicate more time to developing inventive features instead of struggling with complicated form setups. 
Thank you for taking the time to read this article. I hope that you have found it to be helpful. Best of luck with your coding endeavors!

By Hardik Thakker
How to Use IP Geolocation in React

Most websites these days leverage IP geolocation to accomplish various goals. It can be used to localize website contents or display the correct translations based on the web visitor’s geolocation. IP geolocation data can be retrieved from geolocation databases or web services just by using the web visitor’s IP address. Commonly used geolocation data includes the country, region, and city of the website visitor. This tutorial demonstrates how to implement an IP geolocation service, using the service from IP2Location.io, in a website built with React. To accomplish this, you will need to install the React, Node.js, Express, and IP2Location.io modules. Below are brief descriptions of each component. React will be used for front-end development, that is, the web pages. It is a popular JavaScript library for building user interfaces that allows developers to build reusable UI components and manage the state of an application effectively. Node.js is an open-source server environment that allows developers to run JavaScript on the server side. It's built on the V8 JavaScript engine and provides a powerful set of libraries and tools for building scalable and high-performance web applications. Express is a popular Node.js web application framework that simplifies the process of building robust APIs, web applications, and other types of server-side applications. This is the framework used to communicate between React and Node.js. IP2Location.io is a geolocation service that provides real-time IP address to location data. It can be used to identify the geographic location, ISP, domain, usage type, proxy, and other important information about an IP address, which can be useful for a variety of applications.

Step 1: Install and Set Up Node.js

Create a new project called "my-project".
Inside the project folder, run the following command to initialize the Node.js project:

Shell
npm init

Step 2: Install Express Module

Next, we will install the Express package. Please run the following command to install it:

Shell
npm install express

Once it completes successfully, create a new file named app.js inside the my-project folder. Then, open the app.js file and enter the code below.

JavaScript
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.json({description: "Hello My World"});
});

app.listen(9000, () => {
  console.log("Server started at port 9000");
});

The code above creates a REST API with the endpoint "/". This Node project listens for REST API requests on port 9000. Upon receiving a request, it returns a JSON string containing a single key-value pair: {description: "Hello My World"}. Run the command below to launch the code for testing:

Shell
node app.js

Then, open a browser and enter "http://localhost:9000" if you are testing in the localhost environment. You should see the JSON string displayed on the screen.

Step 3: Implement IP Geolocation Using IP2Location.io

Stop the Node.js project with the CTRL+C command. Then, execute the command below inside the my-project folder to install the IP2Location.io Node.js package:

Shell
npm install ip2location-io-nodejs

Once the installation has completed successfully, you can modify the app.js code to add the geolocation features to the project.
JavaScript
const express = require('express');
const {Configuration, IPGeolocation} = require('ip2location-io-nodejs');
const app = express();

app.get('/', (req, res) => {
  let mykey = "YOUR_API_KEY";
  let config = new Configuration(mykey);
  let ipl = new IPGeolocation(config);
  let myip = req.query.ip;

  ipl.lookup(myip)
    .then((data) => {
      // reply with the data in JSON format
      res.json({"Country Name": data.country_name, "Region Name": data.region_name, "City Name": data.city_name});
    })
    .catch((error) => {
      // reply with the error
      res.status(500).json({'Error found': error.message});
    });
});

app.listen(9000, () => {
  console.log("Server started at port 9000");
});

The code above performs the IP geolocation lookup using the visitor’s IP obtained from req.query.ip when the user invokes the API call. It then responds to the caller with three pieces of geolocation data: the country name, region name, and city name. Please note you will need an API key for the above code section to function, and you can subscribe for a free account. Also, in this example, I only return the country, region, and city information; you may visit the developer documentation if you wish to get other information, such as ISP, domain, usage type, proxy, and more. Below is the screenshot of the above implementation, assuming your IP address is from US, California, Mountain View:

Step 4: Set Up Front End Using React

Now, it’s time to set up the front end. Inside the my-project folder, run the command below to create the React project named my-app:

Shell
npx create-react-app my-app

Inside the my-app project, install the axios package. This package is required to fetch the result from the Node.js project:

Shell
npm install axios

Then, paste the code below into the App.js file inside the my-app folder.
JavaScript
import './App.css';
import React, { useEffect, useState } from "react";
import axios from 'axios';

function App() {
  const [data, setData] = useState({});

  useEffect(() => {
    axios.get('http://127.0.0.1:9000/')
      .then(response => {
        setData(response.data);
      })
      .catch(error => {
        console.error(error);
      });
  }, []);

  return (
    <div>
      <p>This is the result from IP2Location.io</p>
      <p>Country Name: {data["Country Name"]}</p>
      <p>Region Name: {data["Region Name"]}</p>
      <p>City Name: {data["City Name"]}</p>
    </div>
  );
};

export default App;

Please note that the code above utilizes the useState and useEffect hooks to fetch data from a REST API using the axios library. The keys read from data match the JSON payload returned by the Express endpoint ("Country Name", "Region Name", and "City Name"). This data is retrieved from the backend (Node.js + Express) and displayed on the webpage. Once done, you can run the React project by entering the command below:

Shell
npm start

You should see the result displayed if it works. If you encounter a network error when running React, you may need to enable the cors package in your Node.js project. To do so, go back to your project and stop the node process. Then, install cors using the following command:

Shell
npm install cors

Once done, open the app.js file and add the cors package as below:

JavaScript
const express = require('express');
const {Configuration, IPGeolocation} = require('ip2location-io-nodejs');
const cors = require('cors');
const app = express();
app.use(cors());

Voila! You should now see the geolocation result displayed on the screen.

Conclusion

In conclusion, this tutorial provides an in-depth guide on how to successfully implement IP geolocation using IP2Location.io in a React-built website. The tutorial not only covers the installation and setup of Node.js and Express, but also provides detailed instructions on how to install the IP2Location.io module.
Furthermore, it includes a comprehensive walkthrough on how to set up the front end using React, including the use of the useEffect and useState hooks to fetch data from the backend. By following the step-by-step instructions provided in this tutorial, you will not only be able to successfully implement IP geolocation in your React-built website, but also gain a deeper understanding of its potential applications and benefits.

By camimi Morales
jQuery vs. Angular: Common Differences You Must Know

A robust digital presence is essential in today's business landscape. Web development evolves constantly with new frameworks and libraries for dynamic web applications. These platforms connect with your audience and boost business productivity. Embracing these advancements is vital for success in a competitive market. For a successful online presence, prioritize a visually appealing UI, seamless navigation, top-notch content, mobile responsiveness, user-centric features, and efficient time-to-market. Choose the proper framework to reduce development time and ensure effectiveness. In web development, two powerful contenders have emerged: jQuery and Angular. Embraced worldwide, they offer top-notch efficiency and popularity. This blog compares their strengths, weaknesses, and use cases, helping you choose the best framework for your project.

Understanding jQuery

Introduced in 2006, jQuery has swiftly become a favorite among web developers due to its exceptional speed, lightweight design, and array of powerful features. This JavaScript library excels at streamlining DOM manipulation and event handling, simplifying complex tasks for programmers. jQuery's popularity soared thanks to its seamless cross-browser compatibility and user-friendly syntax. The library has revolutionized web development by offering a more intuitive approach to interacting with HTML elements, making tasks like animations, AJAX calls, and event management effortlessly achievable. The jQuery library has many great features, including:

Manipulating HTML/DOM
Handling Events
Manipulating CSS
Controlling Animations and Effects
Enabling compatible coding across different web browsers
Keeping the final code clean and lightweight

Additionally, jQuery supports JSON for Ajax calls.

Pros of jQuery

Simplicity: With its straightforward and user-friendly syntax, jQuery empowers developers to implement various functionalities swiftly.
This ease of use makes it ideal for efficiently adding features to web applications and websites. Comprehensive Browser Support: jQuery ensures consistent behavior across browsers, abstracting away quirks. It simplifies development, allowing seamless, reliable web applications without browser headaches. Plugins: Leveraging jQuery plugins, developers elevate projects, adding seamless features. These tools expand functionality with pre-built solutions, enabling dynamic animations and interactive elements for sophisticated web applications.

Cons of jQuery

Performance: Due to its direct DOM manipulation, jQuery may exhibit lower efficiency when handling complex applications and larger datasets. Limited Structure: Because jQuery does not enforce any specific application architecture, managing large-scale projects can become challenging. Not Suitable for SPAs: While jQuery excels in DOM manipulation, Single Page Applications (SPAs) demand a more comprehensive structure than it provides.

Understanding Angular

Angular, a robust JavaScript framework developed and maintained by Google, is tailored for creating dynamic and feature-rich Single Page Applications (SPAs). AngularJS was completely rewritten to become Angular (version 2 and above). As of June 2023, the most recent stable version is 16.1. This modern framework embraces a structured approach to web development, leveraging TypeScript as its primary language for enhanced functionality and maintainability. Angular, an open-source framework, boasts several standout features, including:

Cross-platform capabilities
High speed and optimal performance
Accessibility built into Angular apps
Support for declarative templates
Efficient two-way data binding
Angular CLI (Command Line Interface)
The framework requires less code.
Dependency Injection An Angular Route Resolver Pros of Angular Component-Based Architecture: With its component-based architecture, Angular simplifies the process of building and maintaining intricate applications, offering an efficient and organized approach to web development. Two-Way Data Binding: Angular's two-way data binding effortlessly synchronizes data between the model and the view, significantly reducing the need for manual updates. Dependency Injection: Angular's dependency injection system streamlines the management of component dependencies and encourages code reusability, making it easier to build modular and maintainable applications. Cons of Angular Learning Curve: Angular's rich set of features and adherence to strict conventions contribute to a steeper learning curve in comparison to jQuery. Performance Overhead: Angular's wealth of features and abstractions can lead to performance overhead, particularly in smaller projects. Size: Including Angular's core library and additional modules can increase application size, impacting the initial load times. Use Cases and Project Suitability jQuery Perfect for smaller projects, basic animations, and scenarios where existing code utilizes jQuery, making it a seamless integration. Additionally, it is well-suited for projects requiring swift and uncomplicated DOM manipulation without the complexity of application structures. Angular It is recommended for more extensive, complex applications, particularly SPAs, where you need a well-organized and maintainable codebase. Angular shines when building dynamic, data-driven applications that require advanced features, robust architecture, and scalability. What Are the Main Differences Between jQuery and Angular? jQuery and Angular stand out as prominent JavaScript frameworks employed in web development; however, they possess unique traits and serve diverse purposes. 
Below are some key distinctions between jQuery and Angular: Purpose jQuery: jQuery simplifies DOM manipulation and offers utilities for handling events, animations, AJAX, and other everyday JavaScript tasks. It is ideal for small to medium-sized projects and ensures enhanced browser compatibility and efficient DOM operations. Angular: Angular is a full-featured framework for creating robust single-page applications (SPAs). It offers routing, data binding, dependency injection, and form-handling tools, making it ideal for large-scale applications with a structured architecture. Modularity and Structure jQuery: jQuery allows flexible usage without enforcing specific structure or modularity. It can seamlessly integrate with other libraries or native JavaScript code, allowing developers to organize their projects. Angular: Angular follows the MVC (Model-View-Controller) pattern, fostering a structured and modular development approach with clearly defined components. Data Binding jQuery: jQuery lacks data binding capabilities, requiring manual handling of any updates or changes to the data in the DOM. Angular: Angular offers robust two-way data binding, automatically synchronizing changes between the model and view, reducing the need for manual DOM manipulation. Dependency Injection jQuery: jQuery lacks an inherent dependency injection mechanism. Angular: Angular's robust dependency injection system efficiently manages component dependencies, promoting code reusability and simplified testing. Community and Support jQuery: jQuery boasts a long-standing presence with a vast community, extensive documentation, and numerous plugins. However, its maintenance could be more active compared to some other frameworks. Angular: Angular, backed by Google, enjoys a vibrant community and receives regular updates. It offers extensive official documentation and community-driven resources. 
Learning Curve jQuery: jQuery has a beginner-friendly learning curve, suitable for developers with basic JavaScript knowledge. Angular: Angular's steeper learning curve is attributed to its comprehensive features and complex architecture, requiring more time for developers to become proficient. Size jQuery: jQuery's lightweight nature and small file size benefit projects with limited resources or that require quick page load times. Angular: Angular is a full-featured framework with a larger file size, which may concern projects with strict size constraints. jQuery or Angular for Your Web App Development? Now that you understand the technical differences between Angular and jQuery, the question remains: which one should you choose, jQuery or Angular? Choose jQuery: If you have a small project with straightforward needs, requiring basic DOM manipulation and event handling, and prefer lightweight, quick development, jQuery is a suitable choice. Additionally, if your project already relies heavily on jQuery, rewriting it might not be practical or cost-effective. Choose Angular: If you're tackling a large and complex web application and seeking a structured framework that enforces best practices and maintainability, Angular is an excellent choice. It offers valuable features like two-way data binding, dependency injection, and routing. Moreover, you'll benefit from strong community support and extensive documentation. The web development landscape is ever-evolving, and new frameworks and libraries emerge regularly, so research the latest state of jQuery and Angular against your project's specific needs. Conclusion When deciding between jQuery and Angular, it's crucial to consider your web development project's specific needs and scope. For smaller projects or basic DOM manipulation, jQuery's lightweight and user-friendly nature might be ideal.
On the other hand, for larger, feature-rich applications, Angular's robustness and extensive features offer a more maintainable and scalable solution. Understanding these distinctions enables developers to make informed decisions that align with their project's requirements and objectives.

By Hardik Thakker
Build a To-Do Application With React and Firebase

To-do applications are one of the ways you can manage a set of tasks. As developers, learning how to build a to-do application will also help you understand certain concepts, one of which is how to build an application with a database. In this article, you will learn how to build a to-do web app by making use of React.js and the Firebase Database.

Table of Contents

Prerequisites
How to Set Up the Firebase Project
Creating the Firestore Database
How to Create the React Project
Setting up the Project Structure
How to Integrate Firebase in React
Integrate Bootstrap 5 into React
Designing the User Interface
Adding Data to the Firestore in React
Fetching Data from the Firestore in React
Deleting Data from the Firestore in React
Updating Data in the Firestore in React
How to Integrate the Checkbox Functionality
How to Order Data by Timestamp in Firebase
Conclusion

Prerequisites

Node.js
VS Code
Firebase Console

To install the npm packages needed for this React application, such as Firebase, you need to have Node.js downloaded. Visual Studio Code serves as the code editor we will use to build this project. The Firebase Console serves as the backend-as-a-service database that helps us to store and manage our data, through the use of the Cloud Firestore.

How to Set Up the Firebase Project

To set up Firebase, head to the Firebase console website and create a new project using your Google account. Once you are logged into Firebase, you will see any existing projects and can click on the add project button, as seen below. Firebase dashboard After clicking on the add project button, we get navigated to a new page which requires 3 steps before the Firebase project is created: The first step requires us to name the Firebase project, which we will call todo. The second step asks if we want to enable Google Analytics; you should disable it by using the toggle button.
Finally, we can now click on the create project button. Once the project is created, we click on the continue button, which navigates us to the next screen: our default Firebase project dashboard. Firebase dashboard We have now completed the creation of a new Firebase project.

Creating the Firestore Database

Inside the Firebase dashboard, on the left-hand panel, we take the following steps: Click on the Build dropdown. Within the Build dropdown, select Firestore Database; this displays a page where we can click on the Create database button. Next, a modal pops up asking if we want Production mode or Test mode. You can choose Test mode since the app is currently in the development stage. The next step asks where we want our Cloud Firestore to be located; you can choose the location closest to your area to reduce latency. Once we click on enable, we get redirected to the Cloud Firestore page, which will currently have an empty Collection.

How to Create the React Project

We are going to create a new React project by making use of CRA (Create React App). Since we have Node.js installed, we can create a new React project with the following commands:

Shell
npx create-react-app todo
cd todo
code .

We have now created a new React project called todo, navigated into the project directory, and opened the project in Visual Studio Code. We can now begin the setup of the React application.

Setting up the Project Structure

To set up the architecture of our React project, you can implement the steps below: In the src directory, create two folders named components and services. The components folder will contain two new files. The first js file is called Todo.js, while the second file is called EditTodo.js. In the services folder, create a js file called firebase.config.js.
This file will contain the Firebase configuration, which we can export to the different components. Finally, still within the src directory, we head to the App.js file. Here, we clear the boilerplate inside the div with the className of App and then import the Todo.js component, as seen in the following lines of code:

JavaScript
import './App.css';
import Todo from './components/Todo';

function App() {
  return (
    <div className="App">
      <Todo ></Todo>
    </div>
  );
}

export default App;

With the above, we now have the basic structure of our React project set up.

How to Integrate Firebase in React

To add the Firebase web SDK to our new app, the first thing we need to do is run the following command inside our terminal:

Shell
npm install firebase

Next, you open up the firebase.config.js file inside of the services folder and then use the following imports to configure Firebase in the React app:

JavaScript
import { initializeApp } from "firebase/app";
import { getFirestore } from "firebase/firestore";

Furthermore, you need to grab the project configuration settings from the Firebase dashboard. On the left-hand panel of the dashboard, click on project overview and select project settings. Scroll to the bottom of the page and select the web icon as shown below: Web icon for Firebase project settings Once the web icon gets selected, a new page shows up asking you to give the app a nickname. You can proceed to call it todo or any other word you prefer, then click on register app. Firebase will now generate the Firebase configuration settings, which contain the apiKey, storageBucket, authDomain, projectId, appId, etc.
as seen below: Firebase config settings We can now grab this and paste it inside our firebase.config.js file: JavaScript import { initializeApp } from "firebase/app"; import { getFirestore } from "firebase/firestore"; // Your web app's Firebase configuration const firebaseConfig = { apiKey: "AIzaSyC5u80wO6iaPl8E9auM0IRXliYGKyDQHfU", authDomain: "todo-b74fc.firebaseapp.com", projectId: "todo-b74fc", storageBucket: "todo-b74fc.appspot.com", messagingSenderId: "872116099545", appId: "1:872116099545:web:9bb66d12ca15f2f39521c8" }; The final step needed to complete the Firebase configuration is to initialize Firebase by making use of the config variable and then export it so it becomes available in all our components, as seen in the following lines of code: JavaScript import { initializeApp } from "firebase/app"; import { getFirestore } from "firebase/firestore"; // Your web app's Firebase configuration const config = { apiKey: "AIzaSyC5u80wO6iaPl8E9auM0IRXliYGKyDQHfU", authDomain: "todo-b74fc.firebaseapp.com", projectId: "todo-b74fc", storageBucket: "todo-b74fc.appspot.com", messagingSenderId: "872116099545", appId: "1:872116099545:web:9bb66d12ca15f2f39521c8" }; const app = initializeApp(config); export const db = getFirestore(app); With this, our Firebase configuration is successfully created and we do not need to make use of any other Firebase services. Integrate Bootstrap 5 into React To integrate Bootstrap 5 you will need to head over to the Bootstrap 5 website and grab the CDN link for both the CSS and JavaScript. You can then head back to the React project in VS Code, open the public directory, and proceed to select the index.html file. In the index.html file, we can paste the CDN links for both the CSS and JavaScript within the head section. 
With that, we should have the result below: HTML <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous"> Now we have access to all Bootstrap 5 classes across our components. Designing the User Interface To implement the design for the React project you will start by clearing the boilerplate code inside of the App.css file. You can now proceed to open the index.css file and then paste the following styles: CSS body{ margin-top:20px; background: #f8f8f8; } .todo-list { margin: 10px 0 } .todo-list .todo-item { padding: 15px; margin: 5px 0; border-radius: 0; } div.checker { width: 18px; height: 18px } div.checker input{ width: 18px; height: 18px } div.checker { display: inline-block; vertical-align: middle; } .done { text-decoration: line-through; } Above, what we did was ensure all the elements that will be displayed on the browser are properly arranged. 
Next, proceed to the Todo.js file and paste the code below:

JSX
import React from 'react';
import EditTodo from './EditTodo';

const Todo = () => {
  return (
    <>
      <div className="container">
        <div className="row">
          <div className="col-md-12">
            <div className="card card-white">
              <div className="card-body">
                <button data-bs-toggle="modal" data-bs-target="#addModal" type="button" className="btn btn-info">Add Todo</button>
                <div className="todo-list">
                  <div className="todo-item">
                    <hr />
                    <span>
                      <div className="checker">
                        <span className="">
                          <input type="checkbox" />
                        </span>
                      </div>
                      Go hard or Go Home<br />
                      <i>10/11/2022</i>
                    </span>
                    <span className="float-end mx-3"><EditTodo ></EditTodo></span>
                    <button type="button" className="btn btn-danger float-end">Delete</button>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
      {/* Modal */}
      <div className="modal fade" id="addModal" tabIndex="-1" aria-labelledby="addModalLabel" aria-hidden="true">
        <div className="modal-dialog">
          <form className="d-flex">
            <div className="modal-content">
              <div className="modal-header">
                <h5 className="modal-title" id="addModalLabel">Add Todo</h5>
                <button type="button" className="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
              </div>
              <div className="modal-body">
                <input type="text" className="form-control" placeholder="Add a Todo" />
              </div>
              <div className="modal-footer">
                <button className="btn btn-secondary" data-bs-dismiss="modal">Close</button>
                <button className="btn btn-primary">Create Todo</button>
              </div>
            </div>
          </form>
        </div>
      </div>
    </>
  );
};

export default Todo;

In the above code: The div with the className of .container displays the card that contains all items in our Todo list, while the div with the id of addModal contains the modal where a new todo can be created using the Create Todo button. Note that we import the EditTodo component, since it is rendered next to each todo item. The final part of our design is in the EditTodo.js file. The EditTodo.js file will only contain the modal that allows us to edit each to-do list item.
The code can be seen below: JavaScript import React from 'react' const EditTodo = () => { return ( <> <button type="button" className="btn btn-primary" data-bs-toggle="modal" data-bs-target="#exampleModal" > Edit Todo </button> <div className="modal fade" id="exampleModal" tabIndex="-1" aria-labelledby="editLabel" aria-hidden="true"> <div className="modal-dialog"> <div className="modal-content"> <div className="modal-header"> <h5 className="modal-title" id="editLabel"> Update Todo Details </h5> <button type="button" className="btn-close" data-bs-dismiss="modal" aria-label="Close"> </button> </div> <div className="modal-body"> <form className="d-flex"> <input type="text" className="form-control" /> </form> </div> <div className="modal-footer"> <button type="button" className="btn btn-secondary" data-bs-dismiss="modal">Close </button> <button type="button" className="btn btn-primary" >Update Todo</button> </div> </div> </div> </div> </> ) } export default EditTodo The design for the application is now complete. If you run the command npm start in the terminal and the code compiles, you should see the result below in the browser: Complete page design Adding Data to the Firestore in React To implement the Add data functionality in our todo application, we start by importing some modules from firebase/firestore, as seen below: JavaScript import { collection, addDoc, serverTimestamp } from 'firebase/firestore' A Collection is a folder that contains Documents and Data. All the data saved in our todo application will be stored in a Collection called todo, which we will create soon. addDoc is a high-level method used to add data to a Collection. The serverTimestamp records the date and time at which each Document in a Collection is created. 
We then need to import the firebase.config.js file in our Todo.js file to give us access to the Firebase methods: JavaScript import { db } from '../services/firebase.config' Using the import from our firebase.config.js file, we can now instantiate a reference to our Collection: JavaScript const collectionRef = collection(db, 'todo'); As seen in the code above, we created a variable called collectionRef. The collectionRef variable holds the return value of the collection method. This method takes two arguments: the first, db, references the Firebase service, while the second creates a new Collection called todo, which will contain all the Documents we create. Next, we create two variables using the useState hook: JavaScript const [createTodo, setCreateTodo] = useState("") The first thing we did here is to import the state hook in React: JavaScript import React, { useState } from 'react' Then we created a getter and a setter called createTodo and setCreateTodo, respectively. To proceed, we move to the modal created within the JSX and implement the next couple of things: Within the form tag, we create an onSubmit event handler called submitTodo. HTML <form className="d-flex" onSubmit={submitTodo}> In the input tag within the form tag, we create an `onChange` event handler that allows us to get the value typed inside of the form: JavaScript onChange={(e) => setCreateTodo(e.target.value)} The final implementation we need to make before adding data to the database becomes functional is to configure the onSubmit event handler previously created. The code for this can be seen below: JavaScript //Add Todo Handler const submitTodo = async (e) => { e.preventDefault(); try { await addDoc(collectionRef, { todo: createTodo, isChecked: false, timestamp: serverTimestamp() }) window.location.reload(); } catch (err) { console.log(err); } } Above, we made the submitTodo function asynchronous by making use of the async/await keywords in JavaScript. 
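To make the getter/setter pattern concrete outside of React, here is a minimal, hypothetical stand-in for the useState pair. Note the assumption: the real useState returns the current value itself (not a getter function) and triggers a re-render when the setter is called; this sketch only mimics the read/write mechanics.

```javascript
// Hypothetical stand-in for React's useState, for illustration only.
// It returns a getter function instead of a plain value, unlike real useState.
function useStateLike(initial) {
  let value = initial;
  const get = () => value;
  const set = (next) => { value = next; };
  return [get, set];
}

const [createTodo, setCreateTodo] = useStateLike('');

// Equivalent of onChange={(e) => setCreateTodo(e.target.value)}
// with a faked event object standing in for the browser's event.
const fakeEvent = { target: { value: 'Go hard or Go Home' } };
setCreateTodo(fakeEvent.target.value);

console.log(createTodo());
```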
We then gave the arrow function a parameter called e, which serves as the event. This ensures we are able to make use of the e.preventDefault() method, which prevents the form from reloading after submission. Next, within the try/catch block, we call the addDoc method, which takes two arguments. The first argument is the collectionRef variable we created previously, while the second argument contains the object to be passed into the Firestore database. This object includes the todo value inside of the input field, the checkbox value which is currently set as false, and the timestamp at which the todo was created in the database. We then make use of the window.location.reload() function in JavaScript to refresh the page upon successful submission, while making use of catch to handle the error. With this, we can now create a new to-do and view it in our database. Creating the Todo Fetching Data from the Firestore in React To fetch the data from the Firestore in Firebase, we need to make two imports in our Todo.js file: the useEffect hook from React and the getDocs method from firebase/firestore: JavaScript import React, { useState, useEffect } from 'react' import { collection, addDoc, serverTimestamp, getDocs } from 'firebase/firestore' We then need to create the setter (setTodo) and getter (todos) variables to help us access the data from the Firestore: JavaScript const [todos, setTodo] = useState([]) The data can now be fetched inside of the useEffect hook: JavaScript useEffect(() => { const getTodo = async () => { await getDocs(collectionRef).then((todo) => { let todoData = todo.docs.map((doc) => ({ ...doc.data(), id: doc.id })) setTodo(todoData) }).catch((err) => { console.log(err); }) } getTodo() }, []) Inside the useEffect hook, we created a variable called getTodo that takes in an asynchronous arrow function. Then we called the getDocs method from Firebase. The getDocs method requires an argument, so we pass in the collectionRef. 
The getDocs method returns a promise that we chain to using .then. The promise resolves with a snapshot whose docs array we map through to access the required data from the database, which are the todo list items as well as the id. The todoData variable holds the data coming from the database. To have access to the data in our JSX, we then pass todoData as an argument to our setter, setTodo. We then handle any error using the catch keyword before we finally call the getTodo function to initialize it on page load. Now that we have access to our data from the database, we need to make it visible on the page. The data we have comes in the form of an array, and we need to loop through it in the JSX. This will be done within the div with the className of todo-list as seen below: JavaScript {todos.map(({ todo, id }) => <div className="todo-list" key={id}> <div className="todo-item"> <hr /> <span> <div className="checker" > <span className="" > <input type="checkbox" /> </span> </div> {todo}<br /> <i>10/11/2022</i> </span> <span className=" float-end mx-3"> <EditTodo ></EditTodo> </span> <button type="button" className="btn btn-danger float-end" >Delete </button> </div> </div> )} In the code above, we call the todos getter that contains our data, which comes in an array format. Next, we make use of the map array method to loop through the data, destructure each item by making use of curly brackets {}, and then extract the todo as well as the id. Finally, we set the key attribute in React, passing in the id so as to enable React to track the data loaded on the page. The static todo text also gets cleared and replaced with the todo data from the database. We can now proceed to save our changes. Deleting Data from the Firestore in React To implement the delete functionality, we need to import two Firestore functions: doc and deleteDoc. 
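The mapping step above can be sketched in plain JavaScript, using hypothetical stand-in objects that mimic the shape of Firestore's snapshot docs (each doc exposes an id and a data() method):

```javascript
// Hypothetical objects mimicking a Firestore QuerySnapshot's docs array.
const fakeSnapshot = {
  docs: [
    { id: 'a1', data: () => ({ todo: 'Go hard or Go Home', isChecked: false }) },
    { id: 'b2', data: () => ({ todo: 'Write the report', isChecked: true }) },
  ],
};

// Spread each document's data and attach its id, exactly like the
// todoData mapping inside the useEffect hook.
const todoData = fakeSnapshot.docs.map((doc) => ({ ...doc.data(), id: doc.id }));

console.log(todoData);
```

The result is a flat array of plain objects, which is why the JSX can destructure { todo, id } directly while mapping.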
JavaScript import { collection, addDoc, serverTimestamp, getDocs, doc, deleteDoc } from 'firebase/firestore' Next, we create a function called deleteTodo: JavaScript //Delete Handler const deleteTodo = async (id) => { try { if (window.confirm("Are you sure you want to delete this Task!")) { const documentRef = doc(db, "todo", id); await deleteDoc(documentRef) window.location.reload() } } catch (err) { console.log(err); } } Within the try block, we start by displaying a prompt asking the user whether they want to delete the todo. We then create a new variable called documentRef by calling the doc method, which requires three arguments: the Firebase service (db), the collection name, and the id of the todo we want to delete. Next, we call the deleteDoc method from Firestore and pass in the documentRef as an argument. This enables the specific todo to be deleted from the database. Once this is done, we refresh the page by calling the window.location.reload() function. We then use the catch block to handle any possible error by logging it to the console. Now that our delete function is ready, all we have to do is initialize it inside our delete button as seen below: HTML <button type="button" className="btn btn-danger float-end" onClick={() => deleteTodo(id)} >Delete </button> All we did was make use of the onClick event handler to call the deleteTodo() function anytime the delete button for a specific todo is clicked. It should also be noted that the id parameter passed into the function is the same id used in the documentRef variable we created earlier. 
Updating Data in the Firestore in React To implement the functionality to update data, we need to pass the data coming from the database as props to our EditTodo.js component: HTML {todos.map(({ todo, id }) => <div className="todo-list" key={id}> <div className="todo-item"> <hr /> <span> <div className="checker" > <span className="" > <input type="checkbox" /> </span> </div> {todo}<br /> <i>10/11/2022</i> </span> <span className=" float-end mx-3"> <EditTodo todo={todo} id={id} ></EditTodo> </span> <button type="button" className="btn btn-danger float-end" onClick={() => deleteTodo(id)} >Delete </button> </div> </div> )} We passed both the todo data as well as the id as props, and they now become accessible in the EditTodo.js file. Inside EditTodo.js, we extract the props by making use of curly brackets {}: JavaScript const EditTodo = ({ todo, id }) => { We then make the necessary imports required to update the data, which include the following React and Firebase items: useState (React Hook) db (The Firebase Service instance) doc (The Firestore Document reference) updateDoc (Used to update a Document inside of a Collection) The extracted data can now be set in the state using the useState hook: JavaScript const [todos, setTodos] = useState([todo]) As seen above, we created new variables called todos and setTodos, and then set the initial value in useState to the todo data coming from the database. Next, we create the function that implements the update todo: JavaScript const updateTodo = async (e) => { e.preventDefault() try { const todoDocument = doc(db, "todo", id); await updateDoc(todoDocument, { todo: todos }); window.location.reload(); } catch (err) { console.log(err); } } Above, we created a variable called updateTodo, which holds an asynchronous arrow function. Next, we called e.preventDefault() to prevent the form from reloading. Then we create a try/catch block. Within the try block, we create a variable called todoDocument. 
This variable holds the document reference, which requires three arguments: db, the collection name (todo), and the id. With this, we are able to update a specific todo in the Firebase database using its id. In the next line, we call the updateDoc method, which updates data in the Firestore. This method takes two arguments. The first argument is the todoDocument reference, while the second is the updated todos text value, the getter that was created earlier with the useState hook. Lastly, we refresh the page when the request is successful or display an error on the console if one occurs. There are two implementations left to do. One is making the modal dynamic, while the other is calling our update data function in the JSX. To make the modal dynamic, we need to change the values of the id in the JSX, so we add the following code: HTML <button type="button" className="btn btn-primary" data-bs-toggle="modal" data-bs-target={`#id${id}`} > Edit Todo </button> <div className="modal fade" id={`id${id}`} tabIndex="-1" aria-labelledby="editLabel" aria-hidden="true"> <div className="modal-dialog"> <div className="modal-content"> <div className="modal-header"> <h5 className="modal-title" id="editLabel">Update Todo Details</h5> <button type="button" className="btn-close" data-bs-dismiss="modal" aria-label="Close"> </button> </div> We begin by replacing the static text in the data-bs-target attribute, #exampleModal, with the dynamic id coming from the Firestore. Within our modal, we replace the static id="exampleModal" with the id from our Firestore. The modal is now dynamic. Next, to update the data, we need to call the setTodos setter inside the input field using the onChange event handler in React: HTML <input type="text" className="form-control" defaultValue={todo} onChange={e => setTodos(e.target.value)} /> The defaultValue helps to refill the form with the existing to-do in the database. The onChange event handler gets the value of the input field and saves it into the setTodos setter. 
Finally, we can call the updateTodo function inside our submit button: HTML <div className="modal-footer"> <button type="button" className="btn btn-secondary" data-bs-dismiss="modal"> Close </button> <button type="button" className="btn btn-primary" onClick={e => updateTodo(e)} >Update Todo </button> </div> As seen above, we can now successfully update a specific todo using the onClick event handler in React. How to Integrate the Checkbox Functionality To begin the checkbox implementation, we will create a variable using the useState hook: JavaScript const [checked, setChecked] = useState([]); Next, we pass the data from the Firestore into the setter called setChecked inside of the useEffect: JavaScript useEffect(() => { const getTodo = async () => { await getDocs(collectionRef).then((todo) => { let todoData = todo.docs.map((doc) => ({ ...doc.data(), id: doc.id })) setTodo(todoData) setChecked(todoData) }).catch((err) => { console.log(err); }) } getTodo() }, []) In the JSX, we will extract the isChecked value from the database and use it to conditionally set the CSS class that lines through a specific todo, signifying the todo is done, as seen below: HTML {todos.map(({ todo, id, isChecked }) => <div className="todo-list" key={id}> <div className="todo-item"> <hr /> <span className={`${isChecked === true ? 'done' : ''}`}> We proceed to configure the input field used for our checkbox with the following code: HTML <input type="checkbox" defaultChecked={isChecked} name={id} onChange={(event) => checkHandler(event, todo)} /> Above, we use the defaultChecked attribute to set the default value of the checkbox coming from the Firestore, which is a Boolean. Next, we pass the id into the name attribute. 
Using the onChange event handler, we pass the event and the todo data to a function called checkHandler, which we will create and configure below: JavaScript //Checkbox Handler const checkHandler = async (event, todo) => { setChecked(state => { const indexToUpdate = state.findIndex(checkBox => checkBox.id.toString() === event.target.name); let newState = state.slice() newState.splice(indexToUpdate, 1, { ...state[indexToUpdate], isChecked: !state[indexToUpdate].isChecked, }) setTodo(newState) return newState }); In summary, the above function tracks the state of a specific checkbox when it is checked and unchecked by using the `findIndex` method in JavaScript. We then create a variable called newState. This variable makes use of the slice() and splice() methods in JavaScript. The slice() method returns a copy of the array, while the splice() method replaces the matched item with a version whose isChecked flag is flipped. Next, we save the newly modified array into the setTodo setter before returning it with the return keyword. The final step we need to take is to save the selected checkbox values in the database. We will do this by making use of the runTransaction method from Firestore. The runTransaction method will be called within the checkHandler function: JavaScript // Persisting the checked value try { const docRef = doc(db, "todo", event.target.name); await runTransaction(db, async (transaction) => { const todoDoc = await transaction.get(docRef); if (!todoDoc.exists()) { throw "Document does not exist!"; } const newValue = !todoDoc.data().isChecked; transaction.update(docRef, { isChecked: newValue }); }); console.log("Transaction successfully committed!"); } catch (error) { console.log("Transaction failed: ", error); } The first thing we need to do is import runTransaction from firebase/firestore. We then check the database to see whether the particular document being queried exists or not. 
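The findIndex/slice/splice toggling logic above can be sketched in plain JavaScript, outside React state. The sample array and the 'a1' id are hypothetical stand-ins; the checkbox's name attribute holds the document id of the todo being toggled:

```javascript
// Hypothetical checkbox state, mirroring the todoData shape from the tutorial.
const state = [
  { id: 'a1', todo: 'Go hard or Go Home', isChecked: false },
  { id: 'b2', todo: 'Write the report', isChecked: true },
];

// In checkHandler, event.target.name carries the toggled todo's id.
const eventTargetName = 'a1';

// Find the index of the item whose id matches the event target's name.
const indexToUpdate = state.findIndex(
  (checkBox) => checkBox.id.toString() === eventTargetName
);

// Copy the array, then replace the matched item with a version
// whose isChecked flag is flipped.
const newState = state.slice();
newState.splice(indexToUpdate, 1, {
  ...state[indexToUpdate],
  isChecked: !state[indexToUpdate].isChecked,
});

console.log(newState);
```

Because slice() produces a new array, the original state object is left untouched, which is what React's state updater pattern expects.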
If it doesn't exist, we throw a message that says "Document does not exist!". If the document exists, we call the transaction.update method and pass in the flipped value of the isChecked field, which then updates it in the Firestore database. How to Order Data by Timestamp in Firebase To order the data in our Firestore database, the first thing to do is to clear all the data we currently have by deleting the entire collection in the Firebase database. This will allow the newly created documents to be ordered correctly. We then need to import two methods from firebase/firestore called orderBy and query, using the following imports: JavaScript import { collection, addDoc, serverTimestamp, getDocs, doc, deleteDoc, runTransaction, orderBy, query } from 'firebase/firestore' The orderBy method sorts the data coming from the database in ascending or descending order, or by the timestamp at which the data was created. In our case, we will be making use of the latter, while the query method, as the name implies, allows us to query data from the database. Next, within the useEffect, we will remove the collectionRef we passed as an argument into the getDocs method and replace it with a newly created query variable as seen below: JavaScript const q = query(collectionRef, orderBy('timestamp')) await getDocs(q).then((todo) =>{ Above, we create a variable called q, which uses the orderBy method to sort the data by the timestamp at which they were created. We then pass this variable to the getDocs method that gets the data from the database. To conclude this project, we will display the date and time at which the todo was created on the browser: JavaScript {todos.map(({ todo, id, isChecked, timestamp }) => We begin by extracting the timestamp value from the database. 
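For illustration only: orderBy('timestamp') sorts on the Firestore server, but its effect is equivalent to the client-side sort below on the timestamp's seconds value (the sample documents and their epoch values are made up). It also previews the seconds-to-Date conversion used in the next step:

```javascript
// Hypothetical documents with Firestore-style timestamp objects.
const docs = [
  { todo: 'Second todo', timestamp: { seconds: 1668100000 } },
  { todo: 'First todo', timestamp: { seconds: 1668000000 } },
];

// Ascending sort by creation time, like orderBy('timestamp').
const ordered = [...docs].sort(
  (a, b) => a.timestamp.seconds - b.timestamp.seconds
);

// Firestore timestamps store seconds; JavaScript Dates use milliseconds.
const firstCreated = new Date(ordered[0].timestamp.seconds * 1000);

console.log(ordered.map((d) => d.todo));
console.log(firstCreated.toLocaleString());
```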
We then need to clear the static date value inside the italics (<i></i>) tag: JavaScript <i>{new Date(timestamp.seconds * 1000).toLocaleString()}</i> We created a Date object and got the seconds from the timestamp while multiplying by 1000. This is because JavaScript works with time in milliseconds. We then use the toLocaleString() method to return the date object as a string. With this, we have the result below: Results showing the todos ordered by timestamp, as well as the dates created Conclusion You have now completed the tutorial! You now know the basics of how to create a to-do application using React and the Firebase database. Now you can proceed to build your own web app from scratch and then integrate additional functionality like Firebase login and sign-in, Firebase tools, Firebase deployment, and other Firebase services into your React application. Also, the complete source code of this article can be found in this GitHub repository.

By deji adesoga CORE
Best Practices for Using Cypress for Front-End Automation Testing

Best automation practices refer to a set of guidelines or recommendations for creating effective and efficient automated tests. These practices cover various aspects of the testing process, including planning, designing, implementing, executing, and maintaining automated tests. Cypress is a popular testing tool that has gained significant popularity in recent years due to its user-friendly testing framework and built-in features that simplify the testing process. But if you don't use it correctly and don't follow Cypress's best practices, your tests' performance will suffer significantly, you'll make unnecessary mistakes, and your test code will be unreliable and flaky. As a test automation engineer, you should adhere to certain best practices in a Cypress project so that the code is more effective, reusable, readable, and maintainable. Cypress Best Practices In this blog, we'll talk about the best practices a QA Engineer should know when utilizing Cypress for front-end automation testing. Here are some best practices for using Cypress: 1. Avoid Unnecessary Wait It is generally considered a bad practice to use static wait commands like cy.wait(timeout) in your Cypress tests because they introduce unnecessary delays and can make your tests less reliable. cy.visit('https://talent500.co/') cy.wait(5000) In this code snippet, the cy.visit() command is used to navigate to the website, and the cy.wait(5000) command is used to pause the test execution for five seconds before continuing to the following command. Even if the site loads in two seconds, the test still waits the full five seconds. The better way to handle this problem is to use a dynamic wait using cy.intercept(). JavaScript cy.intercept('/api/data').as('getData') cy.visit('https://example.com/') cy.wait('@getData') .its('response.statusCode') .should('eq', 200) In this example, we are intercepting a network request to /api/data and giving it an alias of getData. 
Then, we navigate to the example page and wait for the getData request to complete using cy.wait('@getData'). Finally, we check that the response has a status code of 200. 2. Use Hooks Wisely In Cypress, we have four hooks, i.e., before(), beforeEach(), after(), and afterEach(). Writing repetitive code is not a smart idea. Suppose we have five test cases, and each test case shares some common set of lines. A better way to handle the duplicate code is to use a hook; in this case, you can use beforeEach(). Use before() and after() hooks sparingly: Since these hooks are executed once for the entire suite, they can be slow and make it harder to isolate test failures. Instead, try to use beforeEach() and afterEach() hooks to set up and clean up the test scenario. Example: In the below example, you can see we have used the hooks before() and after(). In this scenario, these hooks are the best match, but we don't always need to use them; it depends entirely on the requirement. JavaScript describe("Login into talent500.", () => { before(() => { cy.visit("https://talent500.co/auth/signin"); cy.viewport(1280,800) }); it("Login into talent500", () => { cy.get('[name="email"]').type("applitoolsautomation@yopmail.com",{force: true} ); cy.get('[name="password"]').type("Test@123",{force: true}); cy.get('[data-id="submit-login-btn"]').click({force: true}); cy.contains("John"); }); it("Verify user is logged in by verifying text 'Contact us' ", () => { cy.contains("Contact us"); }); after(() => { cy.get('[data-id="nav-dropdown-logout"]').click({force: true}); }); }); Below is a screenshot of the test case execution of the above script. Use beforeEach() and afterEach() hooks to set up and clean up the test scenario: By using these hooks to reset the state between tests, you can make your tests more independent and reduce the risk of side effects and flaky tests. 
Example In this example, the beforeEach() hook is used to visit the login page, fill out the email and password fields, and click the login button. The it() test checks that the "Contact us" link is visible on the dashboard. The afterEach() hook is used to click the logout button. JavaScript describe("Login Test Suite", () => { beforeEach(() => { cy.visit("https://talent500.co/auth/signin"); cy.get('[name="email"]').type("applitoolsautomation@yopmail.com", { force: true, }); cy.get('[name="password"]').type("Test@123", { force: true }); cy.get('[data-id="submit-login-btn"]').click({ force: true }); }); it("should display the dashboard", () => { cy.contains("Contact us").should("be.visible"); }); afterEach(() => { cy.get('[data-id="nav-dropdown-logout"]').click({ force: true }); }); }); 3. Set baseUrl In Cypress Configuration File Hard-coding the baseUrl using cy.visit() in the before() block of each spec file is not the best approach, as it leads to redundant code and makes your tests harder to maintain. A better approach is to set the baseUrl in the Cypress configuration file (cypress.json in older versions, or cypress.config.js in Cypress 10 and later) and use the cy.visit() command with relative URLs in your test files. For example, in your configuration file, you can set the baseUrl to the login page URL. JavaScript { "baseUrl": "https://www.example.com/login" } Then, in your test files, you can use relative URLs to navigate to other pages of your application. For example: JavaScript describe('My Test Suite', () => { beforeEach(() => { cy.visit('/') }) it('should perform some test', () => { // Perform some test }) }) 4. Use the data-cy Attribute for Identifying Locators Using data-cy attributes for identifying locators is a best practice in Cypress, as it provides a reliable and consistent way to target elements in your application. The data-cy attribute is a custom attribute that you can add to any element in your application to make it easier to target that element in your Cypress tests. 
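How a relative path combines with a baseUrl can be pictured with the standard WHATWG URL API. This is an illustration of the resolution idea only, not Cypress's internal implementation:

```javascript
// Sketch: resolving relative paths against a baseUrl with the URL API.
// The host is the example value from the article, not a real test target.
const baseUrl = 'https://www.example.com/login';

const resolve = (path) => new URL(path, baseUrl).href;

console.log(resolve('/'));          // root of the site
console.log(resolve('/dashboard')); // another page, no hard-coded host
```

If the host ever changes (say, between staging and production), only the one baseUrl value needs updating, which is the point of the practice above.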
Here's an example of how you can use data-cy attributes to target a login form in your application: Suppose we have the following HTML code for a login form. HTML <form> <label for="email">Email:</label> <input type="email" id="email" name="email"> <label for="password">Password:</label> <input type="password" id="password" name="password"> <button type="submit">Login</button> </form> To use data-cy attributes to identify locators, add the data-cy attribute to each element we want to select in our tests. For example: HTML <form> <label for="email">Email:</label> <input type="email" id="email" name="email" data-cy="email-input"> <label for="password">Password:</label> <input type="password" id="password" name="password" data-cy="password-input"> <button type="submit" data-cy="login-button">Login</button> </form> You can now use these data-cy attributes to select the elements in our Cypress tests like this: JavaScript describe('Login form', () => { it('should allow a user to log in', () => { cy.visit('/login') cy.get('[data-cy="email-input"]').type('testuser@example.com') cy.get('[data-cy="password-input"]').type('password123') cy.get('[data-cy="login-button"]').click() }) }) 5. Isolate it() Blocks The best practice is to use independent it() blocks that do not depend on the outcome of any other it() block to run successfully. Each it() block should have its own setup and teardown steps and should not rely on the state of the application from any previous test. Here are some benefits of using independent it() blocks in Cypress: Tests run in isolation: By using independent it() blocks, you can ensure that each test runs in isolation without depending on the state of the application from any previous test. This makes our tests more reliable and easier to maintain. Tests are easier to understand: Independent it() blocks are easier to understand since each block represents a specific test case. This makes it easier to troubleshoot and fix issues. 
Tests run faster: Since each it() block runs in isolation, the tests run faster, as there is no need to set up the state of the application for each test. Here's an example of how to use independent it() blocks in Cypress: JavaScript describe("Login into talent500 With invalid credentials", () => { beforeEach(() => { cy.visit('/signin') cy.viewport(1280, 800); }); it("Login into talent500 when Email is incorrect", () => { cy.get('[name="email"]').type("applitoolsautomation11@yopmail.com", { force: true, }); cy.get('[name="password"]').type("Test@1234", { force: true }); cy.get('[data-id="submit-login-btn"]').click({ force: true }); cy.contains("Unable to login with the provided credentials"); }); it("Login into talent500 when Password is incorrect", () => { cy.get('[name="email"]').type("applitoolsautomation@yopmail.com", { force: true, }); cy.get('[name="password"]').type("Test@123444", { force: true }); cy.get('[data-id="submit-login-btn"]').click({ force: true }); cy.contains("Unable to login with the provided credentials"); }); }); Output: In the below screenshot, you can see both test cases run independently and pass. 6. Multiple Assertions Per Test Writing a single assertion per test can slow down your suite and cause performance issues. The best practice is to chain multiple assertions onto a single command; this makes tests faster and improves test organization and clarity. 
JavaScript it("Login into talent500 when Email is incorrect", () => { cy.get('[name="email"]').type("applitoolsautomation@yopmail.com",{ force: true }).should("have.value", "applitoolsautomation@yopmail.com") .and("include.value", ".") .and("include.value", "@") .and("not.have.value", "test@qa.com") cy.get('[name="password"]').type("Test@1234", { force: true }); cy.get('[data-id="submit-login-btn"]').click({ force: true }); cy.contains("Unable to login with the provided credentials"); }); Output: In the below screenshot, you can see we have verified all the assertions in a single chain instead of writing different lines of code for different assertions. 7. Keeping Test Data Separate Keeping test data separate from the test code is a best practice in Cypress automation. Keeping data separate from the test code helps make your tests more maintainable and easier to update. Here is an example of how to keep test data separate in Cypress: Create a separate file to store the test data. For example, you could create a JSON file called "test-data.json" in your project's fixtures directory. In this file, define the test data as key-value pairs. For example: JavaScript { "username": "testuser", "password": "testpassword" } In your test code, import the test data from the JSON file. import testData from '../fixtures/test-data.json' Use the test data in your tests by referencing the keys defined in the JSON file. For example: JavaScript describe('Login', () => { it('should login successfully', () => { cy.visit('/login') cy.get('#username').type(testData.username) cy.get('#password').type(testData.password) cy.get('#login-btn').click() cy.url().should('include', '/dashboard') }) }) By storing the test data in a separate file, you can easily update the test data without modifying the test code itself. 8. Use Aliases Use aliases to chain commands together instead of repeating the same selectors in each command. 
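Since the fixture file is plain JSON, reading it boils down to ordinary JSON parsing. The sketch below parses an inline copy of the test-data.json content from the article to show how the keys are consumed (outside of Cypress, which normally does this for you via the import or cy.fixture()):

```javascript
// Inline copy of the article's test-data.json content, parsed the way
// any JSON fixture ultimately is.
const fixtureJson = `{
  "username": "testuser",
  "password": "testpassword"
}`;

const testData = JSON.parse(fixtureJson);

// These are the values a test would feed into its input fields,
// e.g. cy.get('#username').type(testData.username).
console.log(testData.username);
console.log(testData.password);
```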
This makes the test code more readable and maintainable. For example, you can alias a login button and then use it in subsequent tests without having to locate it again. In this example, we're using the 'as' command to assign aliases to the email input, password input, and submit button. We're then using those aliases in the test itself. This makes the test more readable and easier to maintain, especially if you have multiple tests that use the same selectors. JavaScript describe("Login into talent500.", () => { beforeEach(() => { cy.visit('/signin') cy.get('[name="email"]').as('emailInput') cy.get('[name="password"]').as('passwordInput') cy.get('[data-id="submit-login-btn"]').as('loginButton') }); it("Login into talent500 when Email is incorrect", () => { cy.get('@emailInput').type("applitoolsautomation@yopmail.com",{ force: true }) .should("have.value", "applitoolsautomation@yopmail.com") .and("include.value", ".") .and("include.value", "@") .and("not.have.value", "test@qa.com") cy.get('@passwordInput').type("Test@1234", { force: true }); cy.get('@loginButton').click({ force: true }); cy.contains("Unable to login with the provided credentials"); }); it("Login into talent500 when Password is incorrect", () => { cy.get('@emailInput').type("applitoolsautomation@yopmail.com", { force: true, }); cy.get('@passwordInput').type("Test@123444", { force: true }); cy.get('@loginButton').click({ force: true }); cy.contains("Unable to login with the provided credentials"); }); it("Login into talent500 with valid credentials", () => { cy.get('@emailInput').type("applitoolsautomation@yopmail.com", { force: true, }); cy.get('@passwordInput').type("Test@123", { force: true }); cy.get('@loginButton').click({ force: true }); cy.get('[data-id="nav-dropdown-logout"]').click({ force: true }); }); }); Output In the below screenshot, you can see three aliases in the log, and the test case is executed successfully. 
Wrapping Up
Cypress is a fantastic testing framework, but its effectiveness largely depends on how well it is used and on adherence to best practices. By investing time and effort in setting up your tests correctly and following best practices, you can save yourself a lot of time and effort down the line and ensure that your tests are reliable, efficient, and easy to maintain.

By Kailash Pathak
GraphQL, JavaScript, Preprocessor, SQL, and More in Manifold

We have reached the final installment of our Manifold series, but not the end of its remarkable capabilities. Throughout this series, we have delved into various aspects of Manifold, highlighting its unique features and showcasing how it enhances Java development. In this article, we will cover some of the remaining features of Manifold, including its support for GraphQL, integration with JavaScript, and the utilization of a preprocessor. By summarizing these features and reflecting on the knowledge gained throughout the series, I hope to demonstrate the power and versatility of Manifold.
Expanding Horizons With GraphQL Support
GraphQL, a relatively young technology, has emerged as an alternative to REST APIs. It introduced a specification for requesting and manipulating data between client and server, offering an arguably more efficient and streamlined approach. However, GraphQL can pose challenges for static languages like Java. Thankfully, Manifold comes to the rescue by mitigating these challenges and making GraphQL accessible and usable within Java projects. By removing the rigidness of Java and providing seamless integration with GraphQL, Manifold empowers developers to leverage this modern API style. For example, take this GraphQL file from the Manifold repository:

query MovieQuery($genre: Genre!, $title: String, $releaseDate: Date) {
    movies(genre: $genre, title: $title, releaseDate: $releaseDate) {
        id
        title
        genre
        releaseDate
    }
}

query ReviewQuery($genre: Genre) {
    reviews(genre: $genre) {
        id
        stars
        comment
        movie {
            id
            title
        }
    }
}

mutation ReviewMutation($movie: ID!, $review: ReviewInput!) {
    createReview(movie: $movie, review: $review) {
        id
        stars
        comment
    }
}

extend type Query {
    reviewsByStars(stars: Int) : [Review!]!
}

We can write this sort of fluent code:

var query = MovieQuery.builder(Action).build();
var result = query.request(ENDPOINT).post();
var actionMovies = result.getMovies();
for (var movie : actionMovies) {
    out.println(
        "Title: " + movie.getTitle() + "\n" +
        "Genre: " + movie.getGenre() + "\n" +
        "Year: " + movie.getReleaseDate().getYear() + "\n");
}

None of these objects need to be declared in advance. All we need are the GraphQL files.
Achieving Code Parity With JavaScript Integration
In some cases, regulatory requirements demand identical algorithms in both client and server code. This is common for cases like interest rate calculations, where in the past, we used Swing applications to calculate and display the rate. Since both the backend and the frontend were in Java, it was simple to have a single algorithm. However, this can be particularly challenging when the client-side implementation relies on JavaScript. Manifold provides a solution by enabling the integration of JavaScript within Java projects. By placing JavaScript files alongside the Java code, developers can invoke JavaScript functions and classes seamlessly using Manifold. Under the hood, Manifold uses Rhino to execute JavaScript, ensuring compatibility and code parity across different environments. For example, this JavaScript snippet:
JavaScript
function calculateValue(total, year, rate) {
    var interest = rate / 100 + 1;
    return parseFloat((total * Math.pow(interest, year)).toFixed(4));
}
can be invoked from Java as if it were a static method:
Java
var interest = Calc.calculateValue(4, 1999, 5);
Preprocessor for Java
While preprocessor-like functionality may seem unnecessary in Java due to its portable nature and JIT compilation, there are scenarios where conditional code becomes essential. For instance, when building applications that require different behavior in on-premises and cloud environments, configuration alone may not suffice.
It would technically work, but it might leave proprietary bytecode in on-site deployments, and that isn't something we would want. There are workarounds for this, but they are often very heavy-handed for something relatively simple. Manifold addresses this need by offering a preprocessor-like capability. By defining values in build.properties or through environment variables and compiler arguments, developers can conditionally execute specific code paths. This provides flexibility and control without resorting to complex build tricks or platform-specific code. With Manifold, we can write preprocessor code such as:
C#
#if SERVER_ON_PREM
    onPremCode();
#elif SERVER_CLOUD
    cloudCode();
#else
#error "Missing definition: SERVER_ON_PREM or SERVER_CLOUD"
#endif
Reflecting on Manifold's Power
Throughout this series, we have explored many of Manifold's features, including type-safe reflection, extension methods, operator overloading, property support, and more. These features demonstrate Manifold's commitment to enhancing Java development and bridging the gap between Java and modern programming paradigms. By leveraging Manifold, developers can achieve cleaner, more expressive code while maintaining the robustness and type safety of the Java language. Manifold is an evolving project with many niche features I didn't discuss in this series, including the latest one, SQL support. In a current Spring Boot project that I'm developing, I chose Manifold over Lombok. My main reasoning was that this is a startup project, so I'm more willing to take risks. Manifold lets me tailor it to my needs: I don't need many of its features and, indeed, didn't add all of them. I will probably need to interact with GraphQL, though, and this was a big deciding factor when picking Manifold over Lombok. So far, I am very pleased with the results, and features such as entity beans work splendidly with property annotations.
I do miss Lombok's constructor annotations, though; I hope something like that eventually makes its way into Manifold. Alternatively, if I find the time, I might implement it myself.
Final Word
As we conclude this journey through Manifold, it's clear that this library offers a rich set of features that elevate Java development to new heights. Whether it's simplifying GraphQL integration, ensuring code parity with JavaScript, or enabling conditional compilation through a preprocessor-like approach, Manifold empowers developers to tackle complex challenges with ease. We hope this series has provided valuable insights and inspired you to explore the possibilities that Manifold brings to your Java projects. Don't forget to check out the past installments in this series to get the full scope of Manifold's power.

By Shai Almog CORE
How To Call Cohere and Hugging Face AI From Within an Oracle Database Using JavaScript

In this article, I will show you how to quickly create an entirely free app using a JavaScript program that runs within the free Oracle database and calls Hugging Face AI, the results of which are stored in the database and can then be accessed using SQL, JSON, REST, or any other language. All of the source code is available here. A common flow of applications in general, and AI applications specifically, involves an app calling an AI service and then storing the information in the database, where it may be retrieved and processed further (e.g., using OML) and/or analyzed and queried later. By issuing the command from the database itself, the data is stored immediately and is thus also more reliable, as the call is in-process and does not require an extra network call to persist the result. Further, by having the logic (and even code) reside in the database, the high availability, management, observability, etc. of the Oracle database can be implicitly and inherently leveraged. This is a unique capability of the Oracle database, as it includes Java and, as we'll see in this blog, JavaScript runtime engines within the database itself. Hugging Face develops tools for building applications using machine learning. It is most notable for its transformers library built for natural language processing applications and its platform that allows users to share machine learning models and datasets. It has become extremely popular over the last couple of years.
In short, we will do the following:
Create an account and select a model.
Create the Oracle database and a database user.
Add a wallet with a certificate to allow HTTPS calls.
Create a table.
Run a short JavaScript program in the database to make a call to the Cohere or Hugging Face AI model and store the results in the table.
Optionally query these results using SQL, JSON, REST, etc.
Setup Cohere Account and AI Model
Go to the Cohere website and Sign Up.
Go to your Profile, and click the Settings button.
Click on Dashboard, then API Keys. Create a key (or use the trial key) and copy its value for use later.
Click on API Reference, and select the hamburger menu/icon on the left of any page to view the API Reference list. Select one; in this case, we will select the generate section and Co.Generate.
Select Language and then select the JavaScript option in order to see a sample code snippet to call the model.
Setup Hugging Face Account and AI Model
Go to the Hugging Face website and Sign Up.
Go to your Profile, and click the Settings button.
Click on Access Tokens, create a token, and copy its value for use later.
Click on Models and select a model. In this case, we will select a model under the Natural Language Processing section for Question Answering.
Select a model (perhaps one of the more popular ones), notice the information in the Model Card on the left, and then select the Deploy drop-down menu on the right.
Select Inference API and then select the JavaScript option in order to see a sample code snippet to call the model.
Setup Oracle Database
We can use any flavor of the Oracle Database 23c. On the cloud, we can use the Oracle Always Free Autonomous Database later this year when 23c should be available. We can use the Oracle Database 23c Free version, as it's available now, and simply install it or use a container image locally. Or, of course, we could use the local install for dev and the cloud for production, etc., and either of these options is very quick to set up.
Oracle Database Free Option
You can go here to set up Oracle Database Free 23c. Using the container image is very simple. You can simply execute the one-liner below, replacing -e ORACLE_PASSWORD=Welcome12345 with a password of your choice and replacing -v oracle-volume:/somedirectory with a directory location (or omitting it entirely if you only wish to have an in-memory database). Notice the --add-host docker.for.mac.host.internal:host-gateway param that allows calls out of the container.
This is the setting for Macs and will be different if running on a different OS.

docker pull gvenzl/oracle-free ; docker run -d -p 1521:1521 --add-host docker.for.mac.host.internal:host-gateway -e ORACLE_PASSWORD=Welcome12345 gvenzl/oracle-free

Setup SQLcl (or Database Actions) and Connect
You can install SQLcl to manage the database by using the following steps:
Download and install from this location. This will provide an [SQLcl_INSTALL_DIR]/bin/sql executable that we will use to administer the database. For convenience, you may add [SQLcl_INSTALL_DIR]/bin to your PATH.
Log in, replacing [SQLcl_INSTALL_DIR] with the location of your SQLcl install and replacing Welcome12345 with the ORACLE_PASSWORD you provided when creating the database. Here is an example when using a local install (e.g., the Oracle Database Free container image):
Shell
[SQLcl_INSTALL_DIR]/bin/sql /nolog
SQL> connect sys@//localhost:1521/freepdb1 as sysdba
Password? (**********?) *************
Connected.
And here is an example when using a cloud database (e.g., Oracle Always Free Autonomous):
SQL
[SQLcl_INSTALL_DIR]/bin/sql /nolog
SQL> set cloudconfig /Users/pparkins/Downloads/Wallet_xr.zip
SQL> connect admin@mydb_tp
Password? (**********?) *************
Connected.
Create the Users With Appropriate Privileges That Will Call Cohere And/Or Hugging Face via JavaScript
Run the following SQL files to create the aijs user and its necessary grants and ACLs (note that these can be further tightened for better security).
SQL> @sql/create_aijs_user.sql
Now connect as the user and create tables to store the results of the calls:
SQL> connect aijs/Welcome12345;
SQL> create table huggingfacejson (id json);
SQL> create table coherejson (id json);
Now everything is ready to run!
Run Cohere or Hugging Face Queries From the JavaScript Code in the Database
Cohere Example
SQL> @sql/coherequery.sql

PL/SQL procedure successfully completed.

Then, check the JSON results stored in the table by doing a SQL query.
SQL> select * from coherejson;

[{"id":"1b42f8f9-ea6d-4a65-8f02-bdcf89d9bd79","generations":[{"id":"d6467c0b-4a78-4dd4-b822-c1e2bfa4ecb0","text":"\n LLMs or Large Language Models are artificial intelligence tools that can read, summarize and translate texts and"}],"prompt":"Please explain to me how LLMs work","meta":{"api_version":{"version":"1"}}

Looking at the code we just executed, we can see the JavaScript module created and the function being called:

create or replace mle module cohere_module
language javascript as

import "mle-js-fetch";
import "mle-js-oracledb";

export async function cohereDemo(apiToken) {
    if (apiToken === undefined) {
        throw Error("must provide an API token");
    }
    const modelId = "generate";
    const restAPI = `https://api.cohere.ai/v1/${modelId}`;
    const headers = {
        accept: 'application/json',
        'content-type': 'application/json',
        "Authorization": `Bearer ${apiToken}`
    };
    const payload = {
        max_tokens: 20,
        return_likelihoods: 'NONE',
        truncate: 'END',
        prompt: 'Please explain to me how LLMs work'
    };
    const resp = await fetch(restAPI, {
        method: "POST",
        headers: headers,
        body: JSON.stringify(payload),
        credentials: "include"
    });
    const resp_json = await resp.json();
    session.execute(
        `INSERT INTO COHEREJSON (id) VALUES (:resp_json)`,
        [ JSON.stringify(resp_json) ]
    );
}
/

create or replace procedure coherequery(
    p_API_token varchar2
) as mle module cohere_module signature 'cohereDemo(string)';
/

-- this is how you can test the API call
begin
    utl_http.set_wallet('system:');
    coherequery('[yourcoheretokenhere]');
end;
/

Hugging Face Example
SQL> @sql/huggingfacequery.sql

PL/SQL procedure successfully completed.

Then, check the JSON results stored in the table by doing a SQL query.
SQL> select * from huggingfacejson;

[{"score":0.16964051127433777,"token":2053,"token_str":"no","sequence":"the answer to the universe is no."},{"score":0.07344779372215271,"token":2498,"token_str":"nothing","sequence":"the answer to the universe is nothing."},{"score":0.05803246051073074,"token":2748,"token_str":"yes","sequence":"the answer to the universe is yes."},{"score":0.043957971036434174,"token":4242,"token_str":"unknown","sequence":"the answer to the universe is unknown."},{"score":0.04015727713704109,"token":3722,"token_str":"simple","sequence":"the answer to the universe is simple."}]

Looking at the code we just executed, we can see the JavaScript module created and the function being called:

create or replace mle module huggingface_module
language javascript as

import "mle-js-fetch";
import "mle-js-oracledb";

export async function huggingfaceDemo(apiToken) {
    if (apiToken === undefined) {
        throw Error("must provide an API token");
    }
    const payload = { inputs: "The answer to the universe is [MASK]." };
    const modelId = "bert-base-uncased";
    const headers = { "Authorization": `Bearer ${apiToken}` };
    const restAPI = `https://api-inference.huggingface.co/models/${modelId}`;
    const resp = await fetch(restAPI, {
        method: "POST",
        headers: headers,
        body: JSON.stringify(payload),
        credentials: "include"
    });
    const resp_json = await resp.json();
    session.execute(
        `INSERT INTO HUGGINGFACEJSON (id) VALUES (:resp_json)`,
        [ JSON.stringify(resp_json) ]
    );
}
/

create or replace procedure huggingfacequery(
    p_API_token varchar2
) as mle module huggingface_module signature 'huggingfaceDemo(string)';
/

-- this is how you can test the API call
begin
    utl_http.set_wallet('system:');
    huggingfacequery('[hf_yourhuggingfacetokenhere]');
end;
/

Useful parameters and debug information for Hugging Face can be found here. The results can now be queried, analyzed, etc. using SQL or JSON (simultaneously, thanks to the new JSON duality feature), REST, or even the MongoDB API.
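Once a response document is in the table, any client can pick it apart. A minimal JavaScript sketch, assuming the Cohere response shape shown in the coherejson output above (the `extractGenerations` helper is hypothetical, not part of the article's source):

```javascript
// A trimmed response document shaped like the coherejson row above.
const cohereRow = {
  id: "1b42f8f9-ea6d-4a65-8f02-bdcf89d9bd79",
  generations: [
    {
      id: "d6467c0b-4a78-4dd4-b822-c1e2bfa4ecb0",
      text: "\n LLMs or Large Language Models are artificial intelligence tools"
    }
  ],
  prompt: "Please explain to me how LLMs work"
};

// Collect the generated texts from the document; the same navigation
// can be expressed as a JSON path (e.g., $.generations[0].text) when
// querying the stored column with SQL.
function extractGenerations(row) {
  return (row.generations || []).map((g) => g.text.trim());
}

console.log(extractGenerations(cohereRow));
```

The point of storing the raw response as JSON is exactly this flexibility: the same document can be navigated from JavaScript, SQL/JSON path expressions, or REST without a fixed relational schema up front.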
Oracle Database works well for AI for a number of reasons, particularly as it is a vector database with capabilities such as the following already available today:
Native data types for vector representation and storage: RAW, BLOB, JSON
In-memory column store to store and search vector embeddings with SIMD kernels for amazing performance
Extensible indexing framework to create data-model-specific indices (e.g., text, spatial)
Native Oracle machine learning APIs: modeling, classification, scoring, clustering, etc.
DMLs, parallel loading, partitioning, advanced compression, parallel query, RAC, sharding, etc.
It is also possible to call Oracle OCI AI and other services from within the Oracle database.
Conclusion
This article showed how to call Cohere and Hugging Face APIs from within the Oracle database using JavaScript, demonstrating a powerful combination of features well suited to a broad array of AI solutions and friendly to JavaScript developers. Some other blogs related to JavaScript in the database in general, the Multilingual Engine (MLE) that makes it possible, etc. can be found in Martin Bach's posts and this post about importing JavaScript ES modules in 23c. I look forward to any comments or questions you may have and really appreciate your time reading.

By Paul Parkinson
Adding a Gas Station Map to a React and Go/Gin/Gorm Application

The ReactAndGo project imports German gas prices and shows/notifies you of cheap prices in your region. To help you find the gas stations, a map view with pins and overlays has been added.
Provide the Data
The Gin framework is used to provide the REST interface in gscontroller.go:
Go
func searchGasStationLocation(c *gin.Context) {
    var searchLocationBody gsbody.SearchLocation
    if err := c.Bind(&searchLocationBody); err != nil {
        log.Printf("searchGasStationLocation: %v", err.Error())
    }
    gsEntity := gasstation.FindBySearchLocation(searchLocationBody)
    c.JSON(http.StatusOK, gsEntity)
}
The context 'c' binds the location of the request to the 'searchLocationBody' variable. The 'FindBySearchLocation(…)' function gets the gas stations from the repository. The result for the frontend is then passed to the 'JSON(…)' function to turn it into the HTTP JSON response.
The Gorm framework is used for database access with object mapping in gsrepo.go:
Go
func FindBySearchLocation(searchLocation gsbody.SearchLocation) []gsmodel.GasStation {
    var gasStations []gsmodel.GasStation
    minMax := minMaxSquare{MinLat: 1000.0, MinLng: 1000.0, MaxLat: 0.0, MaxLng: 0.0}
    //max supported radius 20km and add 0.1 for floating point side effects
    myRadius := searchLocation.Radius + 0.1
    if myRadius > 20.0 {
        myRadius = 20.1
    }
    minMax = calcMinMaxSquare(searchLocation.Longitude, searchLocation.Latitude, myRadius)
    database.DB.Where("lat >= ? and lat <= ? and lng >= ?
and lng <= ?", minMax.MinLat, minMax.MaxLat, minMax.MinLng,
        minMax.MaxLng).Preload("GasPrices", func(db *gorm.DB) *gorm.DB {
        return db.Order("date DESC").Limit(50)
    }).Find(&gasStations)
    //filter for stations in circle
    filteredGasStations := []gsmodel.GasStation{}
    for _, myGasStation := range gasStations {
        distance, bearing := myGasStation.CalcDistanceBearing(
            searchLocation.Latitude, searchLocation.Longitude)
        if distance < myRadius && bearing > -1.0 {
            filteredGasStations = append(filteredGasStations, myGasStation)
        }
    }
    return filteredGasStations
}
First, the 'minMaxSquare' struct is initialized. Then the radius is checked and capped, and the 'calcMinMaxSquare(…)' function is used to calculate the coordinates limiting the search square. Gorm is used to load the gas stations within the search square, with their prices preloaded, ordered by date, and limited in number. The gas stations are stored in the 'gasStations' slice, filtered for bearing and radius, and returned.
Fetch the Prices
The prices are fetched in Main.tsx:
TypeScript-JSX
const fetchSearchLocation = (jwtToken: string) => {
  const requestOptions2 = {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${jwtToken}`
    },
    body: JSON.stringify({
      Longitude: globalUserDataState.Longitude,
      Latitude: globalUserDataState.Latitude,
      Radius: globalUserDataState.SearchRadius
    }),
    signal: controller?.signal
  }
  fetch('/gasstation/search/location', requestOptions2)
    .then(myResult => myResult.json() as Promise<GasStation[]>)
    .then(myJson => {
      const myResult = myJson
        .filter(value => value?.GasPrices?.length > 0).map(value => ({
          location: value.Place + ' ' + value.Brand + ' ' +
            value.Street + ' ' + value.HouseNumber,
          e5: value.GasPrices[0].E5,
          e10: value.GasPrices[0].E10,
          diesel: value.GasPrices[0].Diesel,
          date: new Date(Date.parse(value.GasPrices[0].Date)),
          longitude: value.Longitude,
          latitude: value.Latitude
        } as TableDataRow));
      setRows(myResult);
      setGsValues(myResult);
    }
The function
'fetchSearchLocation(…)' gets the JWT token and adds it to the HTTP header. The body contains the JSON of the longitude, latitude, and search radius. The signal can be used to cancel requests. Fetch sends the request with this content to the URL. The resulting JSON is turned into a TypeScript object array that is filtered and mapped into 'TableDataRow' objects. The rows are then set in the Recoil states 'rows' and 'gsValues' to be used in the components.
Show Prices and Locations on the Map
The gas station locations are shown in the GsMap.tsx component with the OpenLayers library and the OpenStreetMap tiles. The component definition and the 'useEffect(…)' hook:
TypeScript-JSX
export default function GsMap(inputProps: InputProps) {
  let map: Map;
  let currentOverlay: Overlay | null = null;

  useEffect(() => {
    if (!map) {
      // eslint-disable-next-line react-hooks/exhaustive-deps
      map = new Map({
        layers: [
          new TileLayer({
            source: new OSM(),
          })
        ],
        target: 'map',
        view: new View({
          center: [0, 0],
          zoom: 1,
        }),
      });
    }
    const myOverlays = createOverlays();
    addClickListener(myOverlays);
    map.setView(new View({
      center: fromLonLat([inputProps.center.Longitude, inputProps.center.Latitude]),
      zoom: 12,
    }));
  }, []);
The 'GsMap' component is initialized with the InputProps that contain the center location and the gas station data. The component properties are defined, and the 'useEffect(…)' hook creates the map once with the 'OSM()' (OpenStreetMap) 'TileLayer(…)' and sets the initial center and zoom value. Then the gas station overlays and click listeners are created. The map gets a new 'View' injected with the coordinates of the properties and the zoom factor.
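The 'fromLonLat(…)' helper used above converts WGS84 longitude/latitude into the Web Mercator coordinates the map view expects. A minimal sketch of that projection, assuming the standard spherical Mercator formula with radius 6378137 m (this is an illustration of the math, not OpenLayers' actual implementation):

```javascript
const EARTH_RADIUS = 6378137; // spherical Web Mercator radius in meters

// Project [longitude, latitude] in degrees to EPSG:3857 meters,
// the default view projection OpenLayers' fromLonLat targets.
function fromLonLat(lon, lat) {
  const x = (EARTH_RADIUS * Math.PI * lon) / 180;
  const y = EARTH_RADIUS * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360));
  return [x, y];
}

console.log(fromLonLat(13.4, 52.5)); // roughly Berlin, projected to meters
```

This is why the map's 'center' values look like large meter offsets rather than degrees: the view operates in projected coordinates, and every pin and overlay position goes through the same conversion.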
The pins and overlays of the map are created in these functions:
TypeScript-JSX
function createOverlays(): Overlay[] {
  return inputProps.gsValues.map((gsValue, index) => {
    const element = document.createElement('div');
    element.id = nanoid();
    element.innerHTML = `${gsValue.location}<br/>E5: ${gsValue.e5}<br/>E10: ${gsValue.e10}<br/>Diesel: ${gsValue.diesel}`;
    const overlay = new Overlay({
      element: element,
      offset: [-5, 0],
      positioning: 'bottom-center',
      className: 'ol-tooltip-measure ol-tooltip .ol-tooltip-static'
    });
    overlay.setPosition(fromLonLat([gsValue.longitude, gsValue.latitude]));
    const myStyle = element?.style;
    if (!!myStyle) {
      myStyle.display = 'block';
    }
    //map.addOverlay(overlay);
    addPins(gsValue, element, index);
    return overlay;
  });
}

function addPins(gsValue: GsValue, element: HTMLDivElement, index: number) {
  const iconFeature = new Feature({
    geometry: new Point(fromLonLat([gsValue.longitude, gsValue.latitude])),
    ttId: element.id,
    ttIndex: index
  });
  const iconStyle = new Style({
    image: new Icon({
      anchor: [20, 20],
      anchorXUnits: 'pixels',
      anchorYUnits: 'pixels',
      src: '/public/assets/map-pin.png',
    }),
  });
  iconFeature.setStyle(iconStyle);
  const vectorSource = new VectorSource({
    features: [iconFeature],
  });
  const vectorLayer = new VectorLayer({
    source: vectorSource,
  });
  map.addLayer(vectorLayer);
}
The 'createOverlays()' function maps the 'inputProps' into an 'Overlay[]'. First, the div element for the overlay is created with a unique 'nanoid()'. The 'innerHTML' contains the text of the overlay. Then the 'Overlay(…)' is created with the element, offset, positioning, and style classes. The overlay position is set with the help of the 'fromLonLat(…)' function of OpenLayers. The display style is set, and the function 'addPins(…)' is called. The function 'addPins(…)' then creates a feature with a 'geometry' of 'Point(…)' for the location, a 'ttId' with the 'element.id' for the mapping, and the 'ttIndex' for identification.
The 'Style(…)' is created to show the pin of the 'src' property at the defined 'anchor' offset. The 'Feature' gets the 'Style' set. A new 'VectorLayer' with a 'VectorSource' that contains the 'Feature' is added to the map as a new layer to show the pin with the overlay (if clicked). The function 'addClickListener(…)' creates the listeners for the pins:
TypeScript-JSX
function addClickListener(myOverlays: Overlay[]) {
  map.on('click', (event: MapBrowserEvent<UIEvent>) => {
    const feature = map.forEachFeatureAtPixel(event.pixel, (feature) => {
      return feature;
    });
    if (!!currentOverlay) {
      map.removeOverlay(currentOverlay);
      // eslint-disable-next-line react-hooks/exhaustive-deps
      currentOverlay = null;
    }
    if (!!feature?.get('ttIndex')) {
      // eslint-disable-next-line react-hooks/exhaustive-deps
      currentOverlay = myOverlays[feature?.get('ttIndex')];
      map.addOverlay(currentOverlay as Overlay);
    }
  });
}

return (<div className={myStyle.MyStyle}>
  <div id="map" className={myStyle.gsMap}></div>
</div>);
The 'addClickListener(…)' function puts a 'click' listener on the map that handles the events. It checks whether a feature is registered for the clicked location on the map and returns it. Then the 'currentOverlay' property is checked, removed from the map, and set to 'null' to make sure only one overlay is shown at any time. Then it is checked whether the feature exists and has a 'ttIndex'. If so, the 'ttIndex' is used to read the overlay from the 'myOverlays' array and set it in the 'currentOverlay' property. Finally, the 'currentOverlay' is added to the map to show it. The map is shown in the returned '<div id="map">…' that is connected to the map via the id property.
Conclusion
The OpenLayers library makes showing the map easy, and adding pins with on-click overlays is easy too. The OpenStreetMap tiles have high quality in Germany and can be used freely. OpenLayers' high-quality TypeScript types have made the development a lot faster.
In well-supported areas, OpenLayers with OpenStreetMap can be an easy-to-use alternative to commercial map providers.

By Sven Loesekann

Top JavaScript Experts


Anthony Gore

Founder,
Vue.js Developers

I'm Anthony Gore and I'm here to teach you Vue.js! Through my books, online courses, and social media, my aim is to turn you into a Vue.js expert. I'm a Vue Community Partner, curator of the weekly Vue.js Developers Newsletter, and the founder of vuejsdevelopers.com, an online community for web professionals who love Vue.js. Curious about Vue? Take my free 30-minute "Vue.js Crash Course" to learn what Vue is, what kind of apps you can build with it, how it compares to React & Angular, and more. Enroll for free! https://courses.vuejsdevelopers.com/p/vue-js-crash-course?utm_source=dzone&utm_medium=bio

John Vester

Staff Engineer,
Marqeta @JohnJVester

Information Technology professional with 30+ years expertise in application design and architecture, feature development, project management, system administration and team supervision. Currently focusing on enterprise architecture/application design utilizing object-oriented programming languages and frameworks. Prior expertise building (Spring Boot) Java-based APIs against React and Angular client frameworks. CRM design, customization and integration with Salesforce. Additional experience using both C# (.NET Framework) and J2EE (including Spring MVC, JBoss Seam, Struts Tiles, JBoss Hibernate, Spring JDBC).

Justin Albano

Software Engineer,
IBM

I am devoted to continuously learning and improving as a software developer and sharing my experience with others in order to improve their expertise. I am also dedicated to personal and professional growth through diligent studying, discipline, and meaningful professional relationships. When not writing, I can be found playing hockey, practicing Brazilian Jiu-jitsu, watching the NJ Devils, reading, writing, or drawing. ~II Timothy 1:7~ Twitter: @justinmalbano

Swizec Teller

CEO,
preona

I'm a writer, programmer, web developer, and entrepreneur. Preona is my current startup that began its life as the team developing Twitulater. Our goal is to create a set of applications for the emerging Synaptic Web, which would rank real-time information streams in near real time, all along reading its user behaviour and understanding how to intelligently react to it. twitter: @Swizec

The Latest JavaScript Topics

Best Practices for Developing Complex Form-Based Apps With React Hook Form and TypeScript Support
Creating complex form-based apps is challenging, but React Hook Form and TypeScript make it manageable.
October 2, 2023
by Oren Farhi
· 1,194 Views · 2 Likes
Performance Optimization Strategies in Highly Scalable Systems
Optimizing digital applications involves Prefetching, Memoization, Concurrent Fetching, and Lazy Loading. These techniques enhance efficiency and user experience.
September 28, 2023
by Hemanth Murali
· 2,887 Views · 2 Likes
Spring Boot and React in Harmony
Develop clean and maintainable business apps on top of Spring Boot and React, and do it faster using the Hilla framework.
September 28, 2023
by Tarek Oraby
· 2,608 Views · 2 Likes
Exploring String Reversal Algorithms: Techniques for Reversing Text Efficiently
In this article, we will explore different string reversal algorithms, discuss their approaches, analyze their time and space complexities.
September 28, 2023
by Aditya Bhuyan
· 2,515 Views · 2 Likes
Multi-Tenancy With Keycloak, Angular, and SpringBoot
Multi-tenancy is a critical aspect of contemporary software architecture. It assists in overcoming significant difficulties, particularly for SaaS software.
September 27, 2023
by Pier-Jean MALANDRINO
· 2,257 Views · 2 Likes
How To Perform Cypress Accessibility Testing
This article on Cypress accessibility testing discusses the importance of accessibility testing and how to perform Cypress accessibility testing on a cloud grid.
September 27, 2023
by Enrique A Decoss
· 1,508 Views · 1 Like
CI/CD Docker: How To Create a CI/CD Pipeline With Jenkins, Containers, and Amazon ECS
Create a CI/CD Pipeline with Jenkins, Containers, and Amazon ECS that deploys your application and overcomes the limitations of the traditional software delivery model.
September 27, 2023
by Rahul Shivalkar
· 4,035 Views · 4 Likes
How To Create a Resource Chart in JavaScript
Master interactive resource charts (Gantt charts for resource allocation) with this step-by-step guide using the FIFA World Cup 2022 schedule as a real-world example.
September 27, 2023
by Awan Shrestha
· 2,412 Views · 2 Likes
Auto-Scaling DynamoDB Streams Applications on Kubernetes
Combine KEDA and DynamoDB Streams, two powerful techniques, to build scalable, event-driven systems that adapt to the needs of your application.
September 26, 2023
by Abhishek Gupta CORE
· 5,948 Views · 5 Likes
Common Problems in Redux With React Native
This article explores some common problems developers encounter when using Redux with React Native and how to address them.
September 26, 2023
by Lalu Prasad
· 2,220 Views · 2 Likes
Build Quicker With Zipper: Building a Ping Pong Ranking App Using TypeScript Functions
You don’t have to spend time agonizing over your app’s infrastructure when you have a simple idea. Just get out there, start building, and deploy!
September 25, 2023
by Tyler Hawkins CORE
· 2,410 Views · 2 Likes
TypeScript: Useful Features
Some advanced constructs require a learning curve but can significantly bolster your type safety. This article introduces you to some of these advanced features.
September 22, 2023
by Vasyl Mysiura
· 4,155 Views · 2 Likes
Why Angular and ASP.NET Core Make a Winning Team
This blog discusses the benefits of using Angular with ASP.NET Core, covering both the advantages of pairing these technologies and the drawbacks that come with them.
September 20, 2023
by Albert Smith
· 2,112 Views · 1 Like
A Better Web3 Experience: Account Abstraction From Flow (Part 2)
Walletless dApps from Flow use account abstraction to improve the web3 user experience. In part two, we walk through how to build the front end for this dApp.
September 20, 2023
by Alvin Lee CORE
· 2,701 Views · 2 Likes
HLS in Depth
HLS is a widely adopted, simple, robust streaming protocol. Learn how it works from a client's perspective, including its segments, features, and disadvantages.
September 20, 2023
by Fenil Jain
· 2,476 Views · 3 Likes
Build a Serverless App Fast With Zipper: Write TypeScript, Offload Everything Else
After reminiscing about the good-ole-days of Ruby on Rails, I discovered the Zipper platform and wanted to see just how quickly I could build something valuable.
September 20, 2023
by John Vester CORE
· 27,939 Views · 7 Likes
AI for Web Devs: Project Introduction and Setup
In this post, begin bootstrapping a web development project using Qwik and get things ready to incorporate AI tooling from OpenAI.
September 16, 2023
by Austin Gil CORE
· 3,836 Views · 4 Likes
Next.js vs. Gatsby: A Comprehensive Comparison
In this blog post, we take a deep dive into Next.js and Gatsby and offer a comprehensive comparison between them.
September 15, 2023
by Atul Naithani
· 4,550 Views · 3 Likes
Best 9 Angular Component Libraries in 2023
With so many Angular libraries on the market today, how do you choose the best one for your project? Read our list of the top nine Angular component libraries.
September 15, 2023
by Katie Mikova
· 3,199 Views · 1 Like
Tracking Bugs Made Easy With React.js Bug Tracking Tools
In this blog post, we'll explore the world of React.js bug tracking tools, how they can streamline your bug tracking process, and some popular options to consider.
September 13, 2023
by Atul Naithani
· 2,555 Views · 2 Likes