JavaScript (JS) is an object-oriented programming language that allows engineers to build and ship complex, interactive features that run in the browser. It is popular for its versatility and is typically the default choice for front-end work unless a task calls for something more specialized. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
I recently sat in on a discussion about programming based on user location. Folks that are way smarter than me covered technical limitations, legal concerns, and privacy rights. It was nuanced, to say the least. So, I thought I’d share some details. Location, Location, Location There are several common examples when you may want to add location-based logic to your app: You want to set the language or currency of your app based on the region. You’re offering discounts to people in a given country. You have a store locator that should show users their nearest location. Your weather app relies on a location before it can offer any sort of data. You want to geofence your app for legal reasons (e.g., cookie banners). These are just a few use cases. There are plenty more, but from these, we can identify some common themes: Presentation/User experience: Using location information to improve or streamline the user experience Function/Logic: The application’s business logic changes based on location Policy/Compliance: You have legal requirements to include or exclude functionality It’s not always this clear-cut. There is overlap in some cases, but it’s important to keep these distinctions in mind because getting it wrong has different levels of severity. Showing the wrong currency is not as bad as miscalculating tax rates, which is still not as bad as violating an embargo, for example. With that in mind, let’s look at the options we have. Getting User Location There are four ways I know of to access the user’s location, each with its pros and cons. User reporting Device heuristics IP Address Edge compute Getting User Location From the User This is when you have a form on your website that explicitly asks a user where they are. It may offer user experience improvements like auto-completing an address, but ultimately, you are taking the user at their word. This method has the benefits of being easy to get started (an HTML form will do), provides as reliable information as the user allows, and is flexible to support different locations. The most obvious downside is that it may not be accurate if the user mistypes or omits information. Furthermore, it’s very easy for a user to provide false information. This can be allowed in some cases, and a big mistake in others. Take this, for example. This is a legitimate place in New Jersey…it’s ok to laugh. (I actually went down a bit of a rabbit hole “researching” real places with funny names and spent way too much time, but I came across some real gems: Monkey’s Eyebrow – Kentucky, Big Butt Mountain – North Carolina, Unalaska – Alaska, Why – Arizona, Whynot – North Carolina.) Anyway, if you decide to take this approach, it’s a good idea to either use a form control with pre-selected options (select or radio) or integrate some sort of auto-complete (location API). This provides a better user experience and usually leads to more complete/reliable/accurate data. Getting User Location From the Device Modern devices like smartphones and laptops have access to their location information through GPS, Wi-Fi data, cell towers, and IP addresses. As web developers, we don’t get direct access to this information, for security reasons, but there are some things we can do. The first thing that comes to mind is the Geolocation API built into the browser. 
This provides a way for websites to request access to the user's location with the getCurrentPosition method: navigator.geolocation.getCurrentPosition(data => { console.log(data) })

The function provides you with a GeolocationPosition object containing latitude, longitude, and other information:

{ coords: { accuracy: 1153.4846436496573, altitude: null, altitudeAccuracy: null, heading: null, latitude: 28.4885376, longitude: 49.6407936, speed: null }, timestamp: 1710198149557 }

Great! Just one problem: the first time a website tries to use the Geolocation API, the user will be prompted with a request to share their information. Best case: the user understands the extra step and accepts. Mid case: the user gets annoyed and has a 50/50 chance of accepting or denying. Worst case: the user is paranoid about government surveillance, assumes the worst intentions, and never comes back to your app (this is me). When using an API that requires user permission, it's often a good idea to let the user know ahead of time to expect the popup, and to trigger it only right when you need it. In other words, don't request access as soon as your app loads. Wait until the user has focused on the location input field, for example.

Getting User Location From Their IP Address

In case you're not familiar, an IP address looks like this: 192.0.2.1. IP addresses are used to uniquely identify and locate devices in a network. This is how computers communicate over the internet, and each packet of data contains information about the IP address of the sender. Your home internet modem is a good example of a device in a network with an IP address. The relevant thing to note is that you can get location information from an IP address. Each chunk of numbers (separated by periods) represents a subnet, from broader to finer scope. You can think of it as going from country, to ISP, to region, to user. It doesn't get fine enough to know someone's specific address, but it's possible to get the city or ZIP code. Here are two great resources if you want to know more about how this works: Wikipedia's Internet geolocation page and How-To Geek's article, How to Get Location Information from an IP Address.

For JavaScript developers like myself, you can access the remote IP in Node.js with response.socket.remoteAddress. Note that you are not technically getting the user's IP. You're getting the IP address for the user's connection (and anyone else on their connection), by way of their modem and ISP: internet user -> ISP -> IP address. An IP address alone is not enough to know where a user is coming from. You'll need to look up the IP address subnets against a database of known subnet locations. It usually doesn't make sense to maintain your own list. Instead, you can download an existing one, or ping a third-party service to look it up. For basic needs, ip2location.com and KeyCDN offer free, limited options. For apps that rely heavily on determining geolocation from IP addresses or need a higher level of accuracy, you'll want something more robust. So now we have a solution that requires no work from the user and has a pretty high level of accuracy. But "pretty high" accuracy is no guarantee that the IP address reflects where the user actually is, as we will see.

Getting User Location From Edge Compute

I've written several articles about edge compute in the past, so I won't go too deep, but edge compute is a way to run dynamic, server-side code against a user's request from the nearest server.
It works by routing all requests through a network of globally distributed servers, or nodes, and allowing the network to choose the nearest node to the user. The great thing about edge compute is that the platforms provide you with user location information without the need to ask the user for permission or look up an IP address. It can provide this information because every node knows where it lives. Akamai’s edge compute platform, EdgeWorkers, gives you access to a request object with a userLocation property. This property is a User Location Object that looks something like this: { areaCodes: ["617"], bandwidth: "257", city: "CAMBRIDGE", continent: "NA", // North America country: "US", dma: "506", fips: ["25"], latitude: "42.364948", longitude: "-71.088783", networkType: "mobile", region: "MA", timezone: "GMT", zipCode: "02114+02134+02138-02142+02163+02238", } So now we have a reliable source of location information with little effort. The only issue is that it’s not technically the user’s location. The User Location Object actually represents the edge node that received the user’s request. This will be the closest node to the user, likely in the same area. This is a subtle distinction, but depending on your needs, it can make a big difference. This Is Why We Can’t Have Nice Things! So we’ve covered some options along with their benefits and caveats, but here’s the real kicker. None of the options we’ve looked at can be trusted. Can’t Trust the User As mentioned above, we can’t trust users to always be honest and put in their actual location. And even if we could, they could make mistakes. And even if they don’t some data can be mistaken. For example, if I ask someone for their city, and they put “Portland” how can I be certain they mean Portland, OR (the best Portland), and not one of the 18+ others (in the US, alone)? Can’t Trust the Device The first issue with things like the Geolocation API is that the user can just disallow using it - to which you may respond, “Fine, they can’t use my app then.” But this also fails to address another issue, which is the fact that the Geolocation API information can actually be overwritten by the user in their browser settings. And it’s not even that hard. Can’t Trust the IP Address I’m not sure if it’s possible to spoof an IP address for the computer that is connecting to your website, but it’s pretty easy for a user to route their request through a proxy client. Commonly, this is referred to as a Virtual Private Network or VPN. The user connects to a VPN, their request goes to the VPN first, then the VPN connects to your website. As a result, the IP address you see is the VPN’s, not the user’s. This means any location data you get will be for the VPN, and not the user. Can’t Trust Edge Compute Edge compute offers reliable information, but that information is the location of the edge node and not the actual user. Often, they can be close enough, but it’s possible that the user lives near the border of one region and their nearest edge node is on the other side of that border. What happens if you have distinct behavior based on those regional differences? Also, edge compute is not free from the same VPN issues as IP addresses. With Akamai’s Enhanced Proxy Detection, you can identify if someone is using a VPN, but you still can’t access their original IP address. What Can We Do About It? So, there are a lot of ways to get location information, but none of them are entirely reliable. 
In fact, browser extensions can make it trivial for users to circumvent our efforts. Does that mean we should give up? No! I want to leave you better informed and prepared. So let's look at some examples.

Content Translation

Say we have a website that is written in English but also supports other languages. We'd like to improve the user experience by loading the local language of the user. How should we treat users from Belgium, where they speak Dutch (Flemish), French, and German? Should we default to the most common language (Dutch)? Should we default to the website's default language (English)? For the first render of the page, I think it's safe to use either the default language or the best guess, but the key thing is to let the user decide which is best for them (maybe they only speak French) and honor their decision on subsequent visits. It could look like this: The user requests the website. The request passes through edge compute, which determines it's coming from Belgium. Edge compute looks for a language preference in an HTTP cookie. If the cookie is present, use the preferred language. If the cookie is not present, use the English or Dutch version. On the website, provide the user with a list of predefined, supported languages (maybe using a <select> field). When the user selects a language preference, store the value in a cookie for future sessions. In this scenario, we combine edge compute with user reporting to get location information and improve the experience. I don't think it makes sense to use the Geolocation API at all. There is a risk of showing the wrong language, but the cost is low. The website works even if the location information is wrong or missing.

Weather App

In this example, we have an application that shows weather information based on location. In this case, the app requires location information in order to work. How else can we show the weather? In this scenario, it's still safe to assume the user's location on the first load. We can pull that information either from edge compute or from the IP address, then show (what we think is) the user's local weather. In addition to that, because the website's main focus relies on location, we can use the Geolocation API to ask for more accurate data. We'll also want to offer a flexible user reporting option in case the user wants information for a different location. For that, a search input with auto-complete works well, filling in the location information with as much detail as possible. How you handle future visits may vary. You could always default to the "local" weather, or you could remember the location from the previous visit. The flow: The user requests the website. On the first request, start the app assuming location information from edge compute or the IP address. On the first client-side load, initiate the Geolocation API and update the information if necessary. You can store location information in a cookie for future loads. For other location searches, provide a flexible input that auto-completes location information and updates the app on submission. The important thing to note here is that the app doesn't actually care about where the user is located. We just care about having a location. User-reported location (search) takes precedence over a location found in a cookie, edge compute, or the IP address. Due to the daily change in weather, it's also worth considering caching strategy and whether the app should be primarily server-rendered or client-rendered. A rough client-side sketch of this fallback order follows below.
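To make the weather-app flow concrete, here is a minimal client-side sketch of the fallback order described above. It is an illustration only: the cookie name, the getWeatherFor() renderer, and the assumption that the server injects its edge/IP-based guess into window.__initialLocation are hypothetical, not tied to any particular framework or to the original article.

JavaScript
// Minimal sketch of the location fallback order (all names are hypothetical).
// Priority: saved cookie > browser geolocation > server-side guess (edge/IP).
function readLocationCookie() {
  // A JSON blob stored on a previous visit, e.g. {"lat":42.36,"lon":-71.09}
  const match = document.cookie.match(/(?:^|; )weather-location=([^;]*)/);
  return match ? JSON.parse(decodeURIComponent(match[1])) : null;
}

function saveLocationCookie(loc) {
  document.cookie = `weather-location=${encodeURIComponent(JSON.stringify(loc))}; max-age=2592000; path=/`;
}

async function resolveInitialLocation() {
  // 1. A location remembered from a previous visit
  const saved = readLocationCookie();
  if (saved) return saved;

  // 2. Ask the browser; acceptable here because the whole app is location-centric
  try {
    const pos = await new Promise((resolve, reject) =>
      navigator.geolocation.getCurrentPosition(resolve, reject, { timeout: 5000 })
    );
    return { lat: pos.coords.latitude, lon: pos.coords.longitude };
  } catch {
    // 3. Fall back to whatever the server guessed from edge compute or the IP address
    return window.__initialLocation; // assumed to be injected during server render
  }
}

resolveInitialLocation().then((loc) => {
  if (loc) saveLocationCookie(loc);
  getWeatherFor(loc); // hypothetical render function
});

In a real app, a user-submitted search would overwrite the cookie and take precedence over everything above, matching the priority order described in this example.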
Store Locator

Imagine you run a brick-and-mortar business with multiple locations. You might show your product catalog and inventory online, but a good practice is to offer up-to-date information about the in-store inventory. For that, you would need to know which store to show inventory for, and for the best user experience, it should be the store closest to the user. Once again, it makes sense to predict the user's location using edge compute or the IP address. Then, you also want to offer a flexible input that allows the user to put in their location information, but any auto-complete should be limited to the list of stores, sorted by proximity. It's also good to initiate the Geolocation API. The difference between this example and the last is that the main purpose of the site is not location-dependent. Therefore, you should wait until the user has interacted with the location-dependent feature. In other words, only ask the user for their location when they've focused on the store locator field.

Regional Pricing

This one's a little tricky, but how would you handle charging different prices based on the user's location? For example, some airlines and hotels have been reported to show higher prices to users booking from one region vs. another. Ethics aside, this is a question about profits, and the impact of getting it wrong is high. So, you probably don't want to allow users to easily change their prices through user-reported location information. In this case, you'd probably only use edge compute or the IP address. It's possible for users to get around it with a VPN, but it's probably the best you can do.

Cookie Banners

This last example focuses more on the legal compliance side, so I'll start with a small disclaimer: I AM NOT A LAWYER!!! This is a hypothetical example and should not be taken as legal advice. In 2016, the European Union passed the General Data Protection Regulation (GDPR). It's a law that protects the privacy of internet users in the EU, and it applies to companies that offer goods or services to individuals in the EU, even if the company is based elsewhere. It has a lot of requirements for website owners, but the one I'll focus on is the blight of cookie banners we now see everywhere online. I'll avoid discussing privacy issues, whether cookie banners are right or wrong, their effectiveness or ineffectiveness, or whether there is a better approach. Instead, I'll just say that you may want to show cookie banners only when you are legally required to, and avoid them otherwise. Once again, knowing the user's location is pretty important. This is very similar to the previous case, and the implementation is similar too. The main difference is the severity of getting it wrong, and therefore the level of effort to get it right. Cookie banners might be the most ubiquitous example of how legislation and user location can impact a website, but if you're looking for the most powerful, it's probably the Great Firewall of China.

Closing

Alright, hopefully, this long and winding road has brought us all to the same place: the magical land of nuance. We still didn't touch on a couple of other challenges: What happens when a user changes their location mid-session? What happens if time zones are involved? How do you report location information for disputed territories? Still, I hope you found it useful in learning how user location is determined, what challenges it faces, and some ways you might approach various scenarios. Unfortunately, there is no one right way to approach location data.
Some scenarios are better suited for user reporting, some for device heuristics, and some for edge compute or IP address. In most cases, it's some sort of combination. The important questions to ask yourself are: Do you need the user's location, or just any location? How accurate does the data need to be? Is it OK if the user's location is falsified? And when legal compliance and regulations are involved rather than just functionality, is 95% reliability acceptable? If any of your location logic exists for legal reasons, you'll want to take steps to protect yourself. Account for data privacy laws like the CCPA and GDPR, and include messaging in your terms of service to disallow bad behavior. These are some things to consider, but I'm no lawyer. Consult your legal team. Thank you so much for reading.
As a Node.js developer and security researcher, I recently stumbled upon an interesting security regression in the Node.js core project related to prototype pollution. I happened to find it while conducting independent security research for my Node.js Secure Coding books, and the discovery highlights the complex nature of security in open-source projects and the challenges of maintaining consistent security measures across a large codebase. Even at the scale of a project like Node.js, regressions can occur, potentially leaving parts of the codebase vulnerable to attack.

The Discovery: A Trip Down Prototype Lane

Back in 2018, I opened a Pull Request to address a potential prototype pollution vulnerability in the child_process module. The PR aimed to fix shallow object checks like if (options.shell), which could be susceptible to prototype pollution attacks. However, the Node.js core team and Technical Steering Committee (TSC) decided not to land the PR at the time due to concerns that such a change would merit bigger API changes in other core modules. As such, an agreement could not be reached to guard against prototype pollution in the child_process module. Fast forward to July 2023, when a similar change did get merged through a Pull Request to harden child_process against prototype pollution. This got me thinking: has the issue been fully resolved, or are there still lingering vulnerabilities?

Node.js Core Regression of Inconsistent Prototype Hardening

To investigate, I set up a simple proof-of-concept to test various child_process functions. Here's what I found:

const { execFile, spawn, spawnSync, execFileSync } = require("child_process");

// Simulate a successful prototype pollution attack:
const a = {};
a.__proto__.shell = true;

console.log("Object.shell value:", Object.shell, "\n");

// Test various child_process functions:
execFile("ls", ["-l && touch /tmp/from-ExecFile"], { stdio: "inherit" });

spawn("ls", ["-la && touch /tmp/from-Spawn"], { stdio: "inherit" });

execFileSync("ls", ["-l && touch /tmp/from-ExecFileSync"], { stdio: "inherit" });

spawnSync("ls", ["-la && touch /tmp/from-SpawnSync"], { stdio: "inherit" });

Running the above code snippet in a Node.js environment yields the following output:

$ node app.js
Object.shell value: true
[...]

$ ls -alh /tmp/from*
Permissions Size User Date Modified Name
.rw-r--r-- 0 lirantal 4 Jul 14:14 /tmp/from-ExecFileSync
.rw-r--r-- 0 lirantal 4 Jul 14:14 /tmp/from-Spawn
.rw-r--r-- 0 lirantal 4 Jul 14:14 /tmp/from-SpawnSync

The results are surprising: execFile() and spawn() were properly hardened against prototype pollution. However, execFileSync(), spawnSync(), and spawn() (when provided with an options object) were still vulnerable. This inconsistency means that while some parts of the child_process module are protected, others remain exposed to potential prototype pollution attacks. The detailed expected vs. actual results are as follows:

Expectation

Per the spawn() API documentation, spawn() should default to shell: false. Similarly, execFile() follows the same. Per the referenced prototype pollution hardening Pull Request from 2023, the following simulated attack shouldn't work:

Object.prototype.shell = true;
child_process.spawn('ls', ['-l && touch /tmp/new'])

Actual

Object.prototype.shell = true; child_process.execFile('ls', ['-l && touch /tmp/new']) - ✅ No side effects, hardening works well.
Object.prototype.shell = true; child_process.spawn('ls', ['-l && touch /tmp/new']) - ✅ No side effects, hardening works well. Object.prototype.shell = true; child_process.execFile('ls', ['-l && touch /tmp/new'], { stdio: 'inherit'}) - ✅ No side effects, hardening works well. Object.prototype.shell = true; child_process.spawn('ls', ['-l && touch /tmp/new'], { stdio: 'inherit'}) - ❌ Vulnerability manifests, hardening fails. The Security Implications Now, you might be wondering: “Is this a critical security vulnerability in Node.js?” The answer is not as straightforward as you might think. According to the Node.js Security Threat Model: Prototype Pollution Attacks (CWE-1321) Node.js trusts the inputs provided to it by application code. It is up to the application to sanitize appropriately. Therefore any scenario that requires control over user input is not considered a vulnerability. In other words, while this regression does introduce a security risk, it’s not officially classified as a vulnerability in the Node.js core project. The reasoning behind this is that Node.js expects developers to handle input sanitization in their applications. What This Means for Node.js Developers As a Node.js developer, this finding underscores a few important points: Always validate and sanitize user input: Don’t rely solely on Node.js core protections. Implement robust input validation in your applications. Stay updated: Keep an eye on Node.js releases and security advisories. Node.js security releases are a regular occurrence, and it’s essential to stay informed about potential vulnerabilities. Understand the security model: Familiarize yourself with the Node.js Security Threat Model to better understand what protections are (and aren’t) provided by the core project. Moving Forward: Addressing the Regression While this issue may not be classified as an official security vulnerability (and did not warrant a CVE), it’s still a bug that needs addressing. I’ve opened a Pull Request to the Node.js core project to address this inconsistency in the child_process module. The PR aims to ensure that the remaining vulnerable functions in the child_process module are consistently hardened against prototype pollution attacks. Conclusion This discovery serves as a reminder that security is an ongoing process, even in well-established projects like Node.js. Another interesting aspect here in terms of data analysis is how long this security regression has been present in the Node.js core project without anyone pointing it out. It’s a testament to the complexity of maintaining security across a large codebase and the challenges of ensuring consistent security measures.
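As a practical follow-up to the "always validate and sanitize" advice above, here is a small defensive sketch of my own — it is not code from the Node.js pull requests discussed in this article. The idea is simply to be explicit about sensitive options such as shell so a polluted Object.prototype has nothing to silently override; the safeOptions() helper name is hypothetical.

JavaScript
const { execFileSync } = require("child_process");

// Build options with no prototype at all, so a lookup like options.shell
// can never fall back to a polluted Object.prototype.
function safeOptions(overrides = {}) {
  return Object.assign(Object.create(null), { shell: false }, overrides);
}

// Even if some dependency has done: Object.prototype.shell = true;
// this call still runs without a shell because shell is set explicitly.
const output = execFileSync("ls", ["-l"], safeOptions({ encoding: "utf8" }));
console.log(output);

Explicitly setting shell: false (or whatever value you actually intend) is the main point; the null-prototype options object is an extra belt-and-braces measure.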
My journey in programming began over two decades ago, a time when JavaScript was a far cry from its current state, and developers were primarily focused on Microsoft Internet Explorer. One of my proudest achievements back then was writing a few lines of code that allowed users to add and remove table rows entirely on the client side. We called it DHTML. Many developers today have forgotten about it — or never knew it existed. A few years later, AJAX emerged, revolutionizing the way we approached web development. The emergence of AJAX marked a significant shift in web development, transferring more logic from the server to the client, and this shift was not without reason. Client-Side Rendering This shift gained momentum due to two key factors: advancements in the JavaScript language and improvements in browser capabilities. JavaScript modules, for example, have greatly enhanced the separation of concerns, leading to more maintainable code. Meanwhile, the introduction of the Local Storage API has been a game-changer. Client-side rendering brings a host of benefits, with improved responsiveness being the most significant one, greatly enhancing the user experience. Keeping interactions local and avoiding server roundtrips makes the user experience noticeably faster. I mentioned the Local Storage API earlier. Along with Web Workers, it enables offline functionality — something browsers previously couldn't provide. I recall developing with Java Web Start (now OpenWebStart), downloading an application with server connectivity, and marveling at how it worked offline. At the time, it was one of the few technologies offering this capability. Now, we have similar functionality directly in the browser! Another advantage of migrating logic to the user's machine is reducing server computational load. This not only improves performance but also reduces cloud costs. Some Organizational Insights In parallel with these technical advancements, mobile devices rose to prominence. Many companies split their applications into client web apps and server APIs to adapt. With the advent of native applications, this approach became the norm and is often taken for granted. This separation of concerns has profound implications. Given the rapid pace of client-side technology changes and their growing complexity, it's nearly impossible for a single developer to handle an entire use case from end to end. Consequently, most organizations have split their developers into front-end and back-end teams. Each team operates in its domain, moving quickly within that space. But when it comes time to integrate, glitches arise, leading to adjustments, a.k.a., bug fixes. This process can be cumbersome, as it requires identifying where the issue lies and assigning it to the appropriate team. And let's not even get into the blame game that can ensue. This artificial separation between client and server is an unnecessary hurdle for many small to mid-sized organizations. A simple web app with well-adjusted CSS is often more than sufficient. That is why I'm a big fan of the Vaadin framework. With Vaadin, your developers only need to learn a single technology stack using one programming language and a small set of APIs. Each developer can work on both the backend and the UI, making the process simpler and more cost-effective. Yet, like with microservices, the herd mentality has been strong. 
The Rise of Server-Side Rendering

As with any new technology, early adopters jump on the bandwagon, the technology gains traction, and issues arise over time. Client-side rendering is no different. As more code moved to the client side, some software began to hit the limits of what browsers could handle despite their improvements. Before it could start rendering the page, the browser had to download all necessary assets — primarily JavaScript libraries. To mitigate this, we minified libraries and increased the number of parallel downloads. At one point, we even resorted to creating artificial subdomains because browsers were heavily limited in the number of parallel downloads from a single domain. We developed complex techniques to trick users into thinking the page had loaded quickly, even if it wasn't fully rendered. This involved managing only the visible portion of the page initially and deferring the rest until necessary. Many of these techniques also depend on the browser's engine, which changes frequently. This led to a rise in "cargo cult" programming and black magic in what was supposed to be engineering. Another significant issue was SEO. Bots that crawl and index pages were designed for a more straightforward Web, where pages were rendered server-side. While Google and others have made strides in improving JavaScript-aware bots, nothing beats server-side rendering for SEO. Finally, server-side rendering improves initial load times and simplifies development organization. We must recognize the benefits that client-side rendering offers, but perhaps the pendulum has swung too far. Is it possible to have the best of both worlds? In some corners of the industry, cooler heads have prevailed, and the term SSR has been coined to describe a return to what we've been doing for ages — albeit with some modern enhancements. The idea is to leverage AJAX, JavaScript, and browser improvements without the unnecessary bloat. While many tools are available, I frequently hear about Vue.js and HTMX. A recent search also led me to Alpine.js. And I've long been a proponent of Vaadin. I plan to explore these technologies in this focused series by implementing a small to-do application with each. Here are my requirements: I'll approach this from the perspective of a backend developer. No front-end build steps: no TypeScript, no minification, etc. The backend app manages all dependencies, i.e., Maven.

To Go Further

DHTML
AJAX
SSR
Imagine coding with a safety net that catches errors before they happen. That's the power of TDD. In this article, we'll dive into how it can revolutionize your development workflow. In Test Driven Development (TDD), a developer writes test cases first before actually writing code to implement the functionality. There are several practical benefits to developing code with the TDD approach such as: Higher quality code: Thinking about tests upfront forces you to consider requirements and design more carefully. Rapid feedback: You get instant validation, reducing the time spent debugging. Comprehensive test coverage: TDD ensures that your entire codebase is thoroughly tested. Refactoring confidence: With a strong test suite, you can confidently improve your code without fear of breaking things. Living documentation: Your tests serve as examples of how the code is meant to be used. TDD has three main phases: Red, Green, and Refactor. The red phase means writing a test case and watching it fail. The green phase means writing minimum code to pass the test case. The refactor phase means improving the code with refactoring for better structure, readability, and maintainability without changing the functionality while ensuring test cases still pass. We will build a Login Page in React, and cover all these phases in detail. The full code for the project is available here, but I highly encourage you to follow along as TDD is as much about the process as it's about the end product. Prerequisites Here are some prerequisites to follow along in this article. Understanding of JavaScript and React NodeJS and NPM installed Code Editor of your choice Initiate a New React App Ensure NodeJS and npm are installed with node -v and npm -v Create a new react app with npx create-react-app tddreact Go to the app folder and start the app with cd tddreact and then npm start Once the app compiles fully, navigate to the localhost. You should see the app loaded. Adding Test Cases As mentioned earlier, in Test-Driven Development (TDD) you start by writing your initial test cases first. Create __tests__ folder under src folder and a filename Login.test.js Time to add your first test case, it is basic in nature ensuring the Login component is present. JavaScript // src/__tests__/Login.test.js import React from 'react'; import { render, fireEvent } from '@testing-library/react'; import '@testing-library/jest-dom/extend-expect'; import Login from '../components/Login'; test('renders Login component', () => { render(<Login />); }); Running the test case with npm test, you should encounter failure like the one below. This is the Red Phase we talked about earlier. Now it's time to add the Login component and initiate the Green Phase. Create a new file under src/components directory and name it Login.js, and add the below code to it. JavaScript // src/components/Login.js import React from 'react'; const Login = () => { return ( <> <p>Hello World!</p> </> ) } export default Login; The test case should pass now, and you have successfully implemented one cycle of the Red to Green phase. Adding Our Inputs On our login page, users should have the ability to enter a username and password and hit a button to log in. Add test cases in which username and password fields should be present on our page. 
JavaScript test('renders username input field', () => { const { getByLabelText } = render(<Login />); expect(getByLabelText(/username/i)).toBeInTheDocument(); }); test('renders password input field', () => { const { getByLabelText } = render(<Login />); expect(getByLabelText(/password/i)).toBeInTheDocument(); }); test('renders login button', () => { const { getByRole } = render(<Login />); expect(getByRole('button', { name: /login/i })).toBeInTheDocument(); }); You should start to see some test cases failing again. Update the return method of the Login component code as per below, which should make the failing test cases pass. JavaScript // src/components/Login.js return ( <> <div> <form> <div> <label htmlFor="username">Username</label> <input type="text" id="username" /> </div> <div> <label htmlFor="password">Password</label> <input type="password" id="password" /> </div> <button type="submit">Login</button> </form> </div> </> ) Adding Login Logic Now you can add actual login logic. For simplicity, when the user has not entered the username and password fields and hits the login button, an error message should be displayed. When the user has entered both the username and password fields and hits the login button, no error message should be displayed; instead, a welcome message, such as "Welcome John Doe." should appear. These requirements can be captured by adding the following tests to the test file: JavaScript test('shows validation message when inputs are empty and login button is clicked', async () => { const { getByRole, getByText } = render(<Login />) fireEvent.click(getByRole('button', { name: /login/i })); expect(getByText(/please fill in all fields/i)).toBeInTheDocument(); }); test('does not show validation message when inputs are filled and login button is clicked', () => { const handleLogin = jest.fn(); const { getByLabelText, getByRole, queryByText } = render(<Login onLogin={handleLogin} />); fireEvent.change(getByLabelText(/username/i), { target: { value: 'user' } }); fireEvent.change(getByLabelText(/password/i), { target: { value: 'password' } }); fireEvent.click(getByRole('button', { name: /login/i })); expect(queryByText(/welcome john doe/i)).toBeInTheDocument(); }) This should have caused test case failures, verify them using npm test if tests are not running already. Let's implement this feature in the component and pass the test case. Update the Login component code to add missing features as shown below. 
JavaScript // src/components/Login.js import React, { useState } from 'react'; const Login = () => { const [username, setUsername] = useState(''); const [password, setPassword] = useState(''); const [error, setError] = useState(''); const [isLoggedIn, setIsLoggedIn] = useState(false); const handleSubmit = (e) => { e.preventDefault(); if (!username || !password) { setError('Please fill in all fields'); setIsLoggedIn(false); } else { setError(''); setIsLoggedIn(true); } }; return ( <div> {!isLoggedIn && ( <div> <h1>Login</h1> <form onSubmit={handleSubmit}> <div> <label htmlFor="username">Username</label> <input type="text" id="username" value={username} onChange={(e) => setUsername(e.target.value)} /> </div> <div> <label htmlFor="password">Password</label> <input type="password" id="password" value={password} onChange={(e) => setPassword(e.target.value)} /> </div> <button type="submit">Login</button> </form> {error && <p>{error}</p>} </div> )} {isLoggedIn && <h1>Welcome John Doe</h1>} </div> ); }; export default Login; For most practical scenarios, the Login component should notify the parent component that the user has logged in. Let’s add a test case to cover the feature. After adding this test case, verify your terminal for the failing test case. JavaScript test('notifies parent component after successful login', () => { const handleLogin = jest.fn(); const { getByLabelText, getByText } = render(<Login onLogin={handleLogin} />); fireEvent.change(getByLabelText(/username/i), { target: { value: 'testuser' } }); fireEvent.change(getByLabelText(/password/i), { target: { value: 'password' } }); fireEvent.click(getByText(/login/i)); expect(handleLogin).toHaveBeenCalledWith('testuser'); expect(getByText(/welcome john doe/i)).toBeInTheDocument(); }); Let's implement this feature in the Login component. Update the Login component to receive onLogin function and update handleSubmit as per below. JavaScript const Login = ({ onLogin }) => { /* rest of the Login component code */ const handleSubmit = (e) => { e.preventDefault(); if (!username || !password) { setError('Please fill in all fields'); setIsLoggedIn(false); } else { setError(''); setIsLoggedIn(true); onLogin(username); } }; /* rest of the Login component code */ } Congratulations, the Login component is implemented and all the tests should pass as well. Integrating Login Components to the App create-react-app adds boilerplate code to the App.js file. Let's delete everything from App.js file before you start integrating our Login component. If you see App.test.js file, delete that as well. As again, let's add our test cases for the App component first. 
Add a new file under the __tests__ directory named App.test.js

JavaScript
// App.test.js
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import App from '../App';

// Mock the Login component
jest.mock('../components/Login', () => (props) => (
  <div>
    <button onClick={props.onLogin}>Mock Login</button>
  </div>
));

describe('App component', () => {
  test('renders the App component', () => {
    render(<App />);
    expect(screen.getByText('Mock Login')).toBeInTheDocument();
  });

  test('sets isLoggedIn to true when Login button is clicked', () => {
    render(<App />);
    const loginButton = screen.getByText('Mock Login');
    fireEvent.click(loginButton);
    expect(screen.getByText('You are logged in.')).toBeInTheDocument();
  });
});

Key insights you can derive from these test cases: The App component holds the Login component, and on successful login, a variable like isLoggedIn is needed to indicate the state of the login feature. Once the user is successfully logged in, you need to use this variable to conditionally display the text You are logged in. You are mocking the Login component — this is important, as you don't want the App component's unit test cases to be testing the Login component as well. You already covered the Login component's test cases earlier. Implement the App component with the features described. Add the below code to the App.js file.

JavaScript
import React, { useState } from 'react';
import logo from './logo.svg';
import './App.css';
import Login from './components/Login';

function App() {
  const [isLoggedIn, setIsLoggedIn] = useState(false);

  const onLogin = () => {
    setIsLoggedIn(true);
  }

  return (
    <div className="App">
      <Login onLogin={onLogin} />
      {isLoggedIn && <p>You are logged in.</p>}
    </div>
  );
}

export default App;

All the test cases should pass again now. Start the application with npm start, and you should see the page at localhost.

Enhancing Our App

Now you have reached a crucial juncture in the TDD process — the Refactor Phase. The Login page's look and feel is very bare-bones. Let's enhance it by adding styles and updating the render method of the Login component. Create a new file named Login.css alongside the Login.js file and add the below style to it.

CSS
/* src/components/Login.css */
.login-container { display: flex; justify-content: center; align-items: center; height: 100vh; background-color: #f0f4f8; }
.login-form { background: #ffffff; padding: 20px; border-radius: 10px; box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); width: 300px; text-align: center; }
.login-form h1 { margin-bottom: 20px; }
.login-form label { display: block; text-align: left; margin-bottom: 8px; font-weight: bold; }
.login-form input { width: 100%; padding: 10px; margin-bottom: 20px; border: 1px solid #ccc; border-radius: 5px; box-sizing: border-box; }
.login-form input:focus { border-color: #007bff; outline: none; box-shadow: 0 0 5px rgba(0, 123, 255, 0.5); }
.login-form button { width: 100%; padding: 10px; background-color: #007bff; border: none; color: #fff; font-size: 16px; cursor: pointer; border-radius: 5px; }
.login-form button:hover { background-color: #0056b3; }
.login-form .error { color: red; margin-bottom: 20px; }

Update the render method of the Login component to use the styles. Also, import the style file at the top of it. Below is the updated Login component.
JavaScript // src/components/Login.js import React, { useState } from 'react'; import './Login.css'; const Login = ({ onLogin }) => { const [username, setUsername] = useState(''); const [password, setPassword] = useState(''); const [error, setError] = useState(''); const [isLoggedIn, setIsLoggedIn] = useState(false); const handleSubmit = (e) => { e.preventDefault(); if (!username || !password) { setError('Please fill in all fields'); setIsLoggedIn(false); } else { setError(''); setIsLoggedIn(true); onLogin(username); } }; return ( <div className="login-container"> {!isLoggedIn && ( <div className="login-form"> <h1>Login</h1> <form onSubmit={handleSubmit}> <div> <label htmlFor="username">Username</label> <input type="text" id="username" value={username} onChange={(e) => setUsername(e.target.value)} /> </div> <div> <label htmlFor="password">Password</label> <input type="password" id="password" value={password} onChange={(e) => setPassword(e.target.value)} /> </div> <button type="submit">Login</button> </form> {error && <p className="error">{error}</p>} </div> )} {isLoggedIn && <h1>Welcome John Doe</h1>} </div> ); }; export default Login; Ensure all test cases still pass with the output of the npm test. Start the app again with npm start — now our app should look like the below: Future Enhancements We have reached the objective for this article but your journey doesn’t need to stop here. I suggest doing further enhancements to the project and continue practicing TDD. Below are a few sample enhancements you can pursue: Advanced validation: Implement more robust validation rules for username and password fields, such as password strength checks or email format validation. Code coverage analysis: Integrate a code coverage tool (like Istanbul) into the testing workflow. This will provide insights into the percentage of code covered by unit tests, and help identify untested code lines and features. Continuous Integration (CI): Set up a CI pipeline (using tools like Jenkins or GitHub Actions) to automatically run tests and generate code coverage reports whenever changes are pushed to the repository. Conclusion In this guide, we've walked through building a React Login page using Test-Driven Development (TDD) step by step. By starting with tests and following the red-green-refactor cycle, we created a solid, well-tested component. TDD might take some getting used to, but the benefits in terms of quality and maintainability are substantial. Embracing TDD will equip you to tackle complex projects with greater confidence.
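If you pick up the "advanced validation" enhancement suggested above, the TDD loop starts the same way: write a failing test first. Below is a hedged example of what such a red-phase test could look like. The minimum-length rule and the exact validation message are assumptions made for illustration; they are not part of the original project.

JavaScript
// src/__tests__/Login.test.js — an additional, hypothetical red-phase test
test('shows a message when the password is shorter than 8 characters', () => {
  const { getByLabelText, getByRole, getByText } = render(<Login onLogin={jest.fn()} />);
  fireEvent.change(getByLabelText(/username/i), { target: { value: 'user' } });
  fireEvent.change(getByLabelText(/password/i), { target: { value: 'short' } });
  fireEvent.click(getByRole('button', { name: /login/i }));
  // Fails (red phase) until handleSubmit adds a password-length check
  expect(getByText(/password must be at least 8 characters/i)).toBeInTheDocument();
});

The matching green-phase change would be another early return in handleSubmit that sets this error message before calling onLogin.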
Valkey is an open-source alternative to Redis. It's a community-driven, Linux Foundation project created to keep the project available for use and distribution under the open-source Berkeley Software Distribution (BSD) 3-clause license after the Redis license changes. I think the path to Valkey was well summarised in its inaugural blog post. I will walk through how to use Valkey for JavaScript applications using existing clients in the Redis ecosystem, as well as iovalkey (a friendly fork of ioredis).

Using Valkey With node-redis

node-redis is a popular and widely used client. Here is a simple program that uses the Subscriber component of the PubSub API to subscribe to a channel.

JavaScript
import redis from 'redis';

const client = redis.createClient();
const channelName = 'valkey-channel';

(async () => {
  try {
    await client.connect();
    console.log('Connected to Redis server');
    await client.subscribe(channelName, (message, channel) => {
      console.log(`message "${message}" received from channel "${channel}"`)
    });
    console.log('Waiting for messages...');
  } catch (err) {
    console.error('Error:', err);
  }
})();

To try this with Valkey, let's start an instance using the Valkey Docker image: docker run --rm -p 6379:6379 valkey/valkey

Also, head here to get an OS-specific distribution, or use Homebrew (on Mac) — brew install valkey. You should now be able to use the Valkey CLI (valkey-cli). Get the code from the GitHub repo:

Shell
git clone https://github.com/abhirockzz/valkey-javascript
cd valkey-javascript
npm install

Start the subscriber app: node subscriber.js

Publish a message and ensure that the subscriber is able to receive it: valkey-cli PUBLISH valkey-channel 'hello valkey'

Nice! We were able to write a simple application with an existing Redis client and run it using Valkey (instead of Redis). Sure, this is an oversimplified example, but there were no code changes required.

Use Valkey With ioredis Client

ioredis is another popular client. To be doubly sure, let's try ioredis with Valkey as well. Let's write a publisher application:

JavaScript
import Redis from 'ioredis';

const redisClient = new Redis();
const channelName = 'valkey-channel';

const message = process.argv[2];
if (!message) {
  console.error('Please provide a message to publish.');
  process.exit(1);
}

async function publishMessage() {
  try {
    const receivedCount = await redisClient.publish(channelName, message);
    console.log(`Message "${message}" published to channel "${channelName}". Received by ${receivedCount} subscriber(s).`);
  } catch (err) {
    console.error('Error publishing message:', err);
  } finally {
    // Close the client connection
    await redisClient.quit();
  }
}

publishMessage();

Run the publisher, and confirm that the subscriber app is able to receive it:

Shell
node publisher.js 'hello1'
node publisher.js 'hello2'

You should see these logs in the subscriber application:

Shell
message "hello1" received from channel "valkey-channel"
message "hello2" received from channel "valkey-channel"

Switch to iovalkey Client

As mentioned, iovalkey is a fork of ioredis.
I made the following changes to port the publisher code to use iovalkey: Commented out import Redis from 'ioredis'; added import Redis from 'iovalkey'; and installed iovalkey with npm install iovalkey. Here is the updated version — yes, this was all I needed to change (at least for this simple application):

JavaScript
// import Redis from 'ioredis';
import Redis from 'iovalkey';

Run the new iovalkey-based publisher, and confirm that the subscriber is able to receive it:

Shell
node publisher.js 'hello from iovalkey'

You should see these logs in the subscriber application:

Shell
message "hello from iovalkey" received from channel "valkey-channel"

Awesome, this is going well. We are ready to sprinkle some generative AI now!

Use Valkey With LangChainJS

Along with Python, JavaScript/TypeScript is also being used in the generative AI ecosystem. LangChain is a popular framework for developing applications powered by large language models (LLMs). LangChain has JS/TS support in the form of LangchainJS. Having worked a lot with the Go port (langchaingo), as well as Python, I wanted to try LangchainJS. One of the common use cases is to use Redis as a chat history component in generative AI apps. LangchainJS has this built in, so let's try it out with Valkey.

Using Valkey as Chat History in LangChain

To install LangchainJS: npm install langchain

For the LLM, I will be using Amazon Bedrock (it's supported natively with LangchainJS), but feel free to use others. For Amazon Bedrock, you will need to configure and set up Amazon Bedrock, including requesting access to the Foundation Model(s). Here is the chat application. As you can see, it uses the RedisChatMessageHistory component.

JavaScript
import { BedrockChat } from "@langchain/community/chat_models/bedrock";
import { RedisChatMessageHistory } from "@langchain/redis";
import { ConversationChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";
import prompt from "prompt";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const chatPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "The following is a friendly conversation between a human and an AI.",
  ],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
]);

const memory = new BufferMemory({
  chatHistory: new RedisChatMessageHistory({
    sessionId: new Date().toISOString(),
    sessionTTL: 300,
    host: "localhost",
    port: 6379,
  }),
  returnMessages: true,
  memoryKey: "chat_history",
});

const model = "anthropic.claude-3-sonnet-20240229-v1:0"
const region = "us-east-1"

const langchainBedrockChatModel = new BedrockChat({
  model: model,
  region: region,
  modelKwargs: {
    anthropic_version: "bedrock-2023-05-31",
  },
});

const chain = new ConversationChain({
  llm: langchainBedrockChatModel,
  memory: memory,
  prompt: chatPrompt,
});

while (true) {
  prompt.start({ noHandleSIGINT: true });
  const { message } = await prompt.get(['message']);
  const response = await chain.invoke({
    input: message,
  });
  console.log(response);
}

Run the application: node chat.js

Start a conversation. If you peek into Valkey, notice that the conversations are saved in a List:

valkey-cli keys *
valkey-cli LRANGE <enter list name> 0 -1

Don't run keys * in production — it's just for demo purposes.

Using iovalkey Implementation for Chat History

The current implementation uses the node-redis client, but I wanted to try out the iovalkey client. I am not a JS/TS expert, but it was simple enough to port the existing implementation.
You can refer to the code on GitHub. As far as the client (chat) app is concerned, I only had to make a few changes to switch the implementation: Comment out import { RedisChatMessageHistory } from "@langchain/redis"; add import { ValkeyChatMessageHistory } from "./valkey_chat_history.js"; and replace RedisChatMessageHistory with ValkeyChatMessageHistory (while creating the memory instance). It worked the same way as above. Feel free to give it a try!

Wrapping Up

It's still early days for Valkey (at the time of writing), and there is a long way to go. I'm interested to see how the project evolves, as well as the client ecosystem around Valkey. Happy Building!
Have you ever wondered how some of your favorite apps handle real-time updates? Live sports scores, stock market tickers, or even social media notifications — all rely on event-driven architecture (EDA) to process data instantly. EDA is like having a conversation where every new piece of information triggers an immediate response. It’s what makes an application more interactive and responsive. In this walkthrough, we'll guide you through building a simple event-driven application using Apache Kafka on Heroku. We'll cover: Setting up a Kafka cluster on Heroku Building a Node.js application that produces and consumes events Deploying your application to Heroku Apache Kafka is a powerful tool for building EDA systems. It's an open-source platform designed for handling real-time data feeds. Apache Kafka on Heroku is a Heroku add-on that provides Kafka as a service. Heroku makes it pretty easy to deploy and manage applications, and I’ve been using it more in my projects recently. Combining Kafka with Heroku simplifies the setup process when you want to run an event-driven application. By the end of this guide, you'll have a running application that demonstrates the power of EDA with Apache Kafka on Heroku. Let’s get started! Getting Started Before we dive into the code, let's quickly review some core concepts. Once you understand these, following along will be easier. Events are pieces of data that signify some occurrence in the system, like a temperature reading from a sensor. Topics are categories or channels where events are published. Think of them as the subjects you subscribe to in a newsletter. Producers are the entities that create and send events to topics. In our demo EDA application, our producers will be a set of weather sensors. Consumers are the entities that read and process events from topics. Our application will have a consumer that listens for weather data events and logs them. Introduction to Our Application We'll build a Node.js application using the KafkaJS library. Here's a quick overview of how our application will work: Our weather sensors (the producers) will periodically generate data — such as temperature, humidity, and barometric pressure — and send these events to Apache Kafka. For demo purposes, the data will be randomly generated. We'll have a consumer listening to the topics. When a new event is received, it will write the data to a log. We'll deploy the entire setup to Heroku and use Heroku logs to monitor the events as they occur. Prerequisites Before we start, make sure you have the following: A Heroku account: If you don't have one, sign up at Heroku. Heroku CLI: Download and install the Heroku CLI. Node.js installed on your local machine for development. On my machine, I’m using Node (v.20.9.0) and npm (10.4.0). The codebase for this entire project is available in this GitHub repository. Feel free to clone the code and follow along throughout this post. Now that we’ve covered the basics, let’s set up our Kafka cluster on Heroku and start building. Setting up a Kafka Cluster on Heroku Let’s get everything set up on Heroku. It’s a pretty quick and easy process. Step 1: Log in via the Heroku CLI Shell ~/project$ heroku login Step 2: Create a Heroku App Shell ~/project$ heroku create weather-eda (I’ve named my Heroku app weather-eda, but you can choose a unique name for your app.) Step 3: Add the Apache Kafka on the Heroku Add-On Shell ~/project$ heroku addons:create heroku-kafka:basic-0 Creating heroku-kafka:basic-0 on ⬢ weather-eda... 
~$0.139/hour (max $100/month) The cluster should be available in a few minutes. Run `heroku kafka:wait` to wait until the cluster is ready. You can read more about managing Kafka at https://devcenter.heroku.com/articles/kafka-on-heroku#managing-kafka kafka-adjacent-07560 is being created in the background. The app will restart when complete... Use heroku addons:info kafka-adjacent-07560 to check creation progress Use heroku addons:docs heroku-kafka to view documentation You can find more information about Apache Kafka on Heroku add-on here. For our demo, I’m adding the Basic 0 tier of the add-on. The cost of the add-on is $0.139/hour. As I went through building this demo application, I used the add-on for less than an hour, and then I spun it down. It takes a few minutes for Heroku to get Kafka spun up and ready for you. Pretty soon, this is what you’ll see: Shell ~/project$ heroku addons:info kafka-adjacent-07560 === kafka-adjacent-07560 Attachments: weather-eda::KAFKA Installed at: Mon May 27 2024 11:44:37 GMT-0700 (Mountain Standard Time) Max Price: $100/month Owning app: weather-eda Plan: heroku-kafka:basic-0 Price: ~$0.139/hour State: created Step 4: Get Kafka Credentials and Configurations With our Kafka cluster spun up, we will need to get credentials and other configurations. Heroku creates several config vars for our application, populating them with information from the Kafka cluster that was just created. We can see all of these config vars by running the following: Shell ~/project$ heroku config === weather-eda Config Vars KAFKA_CLIENT_CERT: -----BEGIN CERTIFICATE----- MIIDQzCCAiugAwIBAgIBADANBgkqhkiG9w0BAQsFADAyMTAwLgYDVQQDDCdjYS1h ... -----END CERTIFICATE----- KAFKA_CLIENT_CERT_KEY: -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAsgv1oBiF4Az/IQsepHSh5pceL0XLy0uEAokD7ety9J0PTjj3 ... -----END RSA PRIVATE KEY----- KAFKA_PREFIX: columbia-68051. KAFKA_TRUSTED_CERT: -----BEGIN CERTIFICATE----- MIIDfzCCAmegAwIBAgIBADANBgkqhkiG9w0BAQsFADAyMTAwLgYDVQQDDCdjYS1h ... F+f3juViDqm4eLCZBAdoK/DnI4fFrNH3YzhAPdhoHOa8wi4= -----END CERTIFICATE----- KAFKA_URL: kafka+ssl://ec2-18-233-140-74.compute-1.amazonaws.com:9096,kafka+ssl://ec2-18-208-61-56.compute-1.amazonaws.com:9096...kafka+ssl://ec2-34-203-24-91.compute-1.amazonaws.com:9096 As you can see, we have several config variables. We’ll want a file in our project root folder called .env with all of these config var values. To do this, we simply run the following command: Shell ~/project$ heroku config --shell > .env Our .env file looks like this: Shell KAFKA_CLIENT_CERT="-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----" KAFKA_CLIENT_CERT_KEY="-----BEGIN RSA PRIVATE KEY----- ... -----END RSA PRIVATE KEY-----" KAFKA_PREFIX="columbia-68051." KAFKA_TRUSTED_CERT="-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----" KAFKA_URL="kafka+ssl://ec2-18-233-140-74.compute-1.amazonaws.com:9096,kafka+ssl://ec2-18-208-61-56.compute-1.amazonaws.com:9096...kafka+ssl://ec2-34-203-24-91.compute-1.amazonaws.com:9096" Also, we make sure to add .env to our .gitignore file. We wouldn’t want to commit this sensitive data to our repository. Step 5: Install the Kafka Plugin Into the Heroku CLI The Heroku CLI doesn’t come with Kafka-related commands right out of the box. Since we’re using Kafka, we’ll need to install the CLI plugin. Shell ~/project$ heroku plugins:install heroku-kafka Installing plugin heroku-kafka... installed v2.12.0 Now, we can manage our Kafka cluster from the CLI. 
Shell ~/project$ heroku kafka:info === KAFKA_URL Plan: heroku-kafka:basic-0 Status: available Version: 2.8.2 Created: 2024-05-27T18:44:38.023+00:00 Topics: [··········] 0 / 40 topics, see heroku kafka:topics Prefix: columbia-68051. Partitions: [··········] 0 / 240 partition replicas (partitions × replication factor) Messages: 0 messages/s Traffic: 0 bytes/s in / 0 bytes/s out Data Size: [··········] 0 bytes / 4.00 GB (0.00%) Add-on: kafka-adjacent-07560 ~/project$ heroku kafka:topics === Kafka Topics on KAFKA_URL No topics found on this Kafka cluster. Use heroku kafka:topics:create to create a topic (limit 40) Step 6: Test Out Interacting With the Cluster Just as a sanity check, let’s play around with our Kafka cluster. We start by creating a topic. Shell ~/project$ heroku kafka:topics:create test-topic-01 Creating topic test-topic-01 with compaction disabled and retention time 1 day on kafka-adjacent-07560... done Use `heroku kafka:topics:info test-topic-01` to monitor your topic. Your topic is using the prefix columbia-68051.. ~/project$ heroku kafka:topics:info test-topic-01 ▸ topic test-topic-01 is not available yet Within a minute or so, our topic becomes available. Shell ~/project$ heroku kafka:topics:info test-topic-01 === kafka-adjacent-07560 :: test-topic-01 Topic Prefix: columbia-68051. Producers: 0 messages/second (0 bytes/second) total Consumers: 0 bytes/second total Partitions: 8 partitions Replication Factor: 3 Compaction: Compaction is disabled for test-topic-01 Retention: 24 hours Next, in this terminal window, we’ll act as a consumer, listening to this topic by tailing it. Shell ~/project$ heroku kafka:topics:tail test-topic-01 From here, the terminal simply waits for any events published on the topic. In a separate terminal window, we’ll act as a producer, and we’ll publish some messages on the topic. Shell ~/project$ heroku kafka:topics:write test-topic-01 "hello world!" Back in our consumer’s terminal window, this is what we see: Shell ~/project$ heroku kafka:topics:tail test-topic-01 test-topic-01 0 0 12 hello world! Excellent! We have successfully produced and consumed an event to a topic in our Kafka cluster. We’re ready to move on to our Node.js application. Let’s destroy this test topic to keep our playground tidy. Shell ~/project$ heroku kafka:topics:destroy test-topic-01 ▸ This command will affect the cluster: kafka-adjacent-07560, which is on weather-eda ▸ To proceed, type weather-eda or re-run this command with --confirm weather-eda > weather-eda Deleting topic test-topic-01... done Your topic has been marked for deletion, and will be removed from the cluster shortly ~/project$ heroku kafka:topics === Kafka Topics on KAFKA_URL No topics found on this Kafka cluster. Use heroku kafka:topics:create to create a topic (limit 40). Step 7: Prepare Kafka for Our Application To prepare for our application to use Kafka, we will need to create two things: a topic and a consumer group. Let’s create the topic that our application will use. Shell ~/project$ heroku kafka:topics:create weather-data Next, we’ll create the consumer group that our application’s consumer will be a part of: Shell ~/project$ heroku kafka:consumer-groups:create weather-consumers We’re ready to build our Node.js application! Build the Application Let’s initialize a new project and install our dependencies. 
Shell ~/project$ npm init -y ~/project$ npm install kafkajs dotenv @faker-js/faker pino pino-pretty Our project will have two processes running: consumer.js, which is subscribed to the topic and logs any events that are published. producer.js, which will publish some randomized weather data on the topic every few seconds. Both of these processes will need to use KafkaJS to connect to our Kafka cluster, so we will modularize our code to make it reusable. Working With the Kafka Client In the project src folder, we create a file called kafka.js. It looks like this: JavaScript const { Kafka } = require('kafkajs'); const BROKER_URLS = process.env.KAFKA_URL.split(',').map(uri => uri.replace('kafka+ssl://','' )) const TOPIC = `${process.env.KAFKA_PREFIX}weather-data` const CONSUMER_GROUP = `${process.env.KAFKA_PREFIX}weather-consumers` const kafka = new Kafka({ clientId: 'weather-eda-app-nodejs-client', brokers: BROKER_URLS, ssl: { rejectUnauthorized: false, ca: process.env.KAFKA_TRUSTED_CERT, key: process.env.KAFKA_CLIENT_CERT_KEY, cert: process.env.KAFKA_CLIENT_CERT, }, }) const producer = async () => { const p = kafka.producer() await p.connect() return p; } const consumer = async () => { const c = kafka.consumer({ groupId: CONSUMER_GROUP, sessionTimeout: 30000 }) await c.connect() await c.subscribe({ topics: [TOPIC] }); return c; } module.exports = { producer, consumer, topic: TOPIC, groupId: CONSUMER_GROUP }; In this file, we start by creating a new Kafka client. This requires URLs for the Kafka brokers, which we are able to parse from the KAFKA_URL variable in our .env file (which originally came from calling heroku config). To authenticate the connection attempt, we need to provide KAFKA_TRUSTED_CERT, KAFKA_CLIENT_CERT_KEY, and KAFKA_CLIENT_CERT. Then, from our Kafka client, we create a producer and a consumer, making sure to subscribe our consumer to the weather-data topic. Clarification on the Kafka Prefix Notice in kafka.js that we prepend KAFKA_PREFIX to our topic and consumer group name. We’re using the Basic 0 plan for Apache Kafka on Heroku, which is a multi-tenant Kafka plan. This means we work with a KAFKA_PREFIX. Even though we named our topic weather-data and our consumer group weather-consumers, their actual names in our multi-tenant Kafka cluster must have the KAFKA_PREFIX prepended to them (to ensure they are unique). So, technically, for our demo, the actual topic name is columbia-68051.weather-data, not weather-data. (Likewise for the consumer group name.) The Producer Process Now, let’s create our background process which will act as our weather sensor producers. In our project root folder, we have a file called producer.js. 
It looks like this: JavaScript require('dotenv').config(); const kafka = require('./src/kafka.js'); const { faker } = require('@faker-js/faker'); const SENSORS = ['sensor01','sensor02','sensor03','sensor04','sensor05']; const MAX_DELAY_MS = 20000; const READINGS = ['temperature','humidity','barometric_pressure']; const MAX_TEMP = 130; const MIN_PRESSURE = 2910; const PRESSURE_RANGE = 160; const getRandom = (arr) => arr[faker.number.int(arr.length - 1)]; const getRandomReading = { temperature: () => faker.number.int(MAX_TEMP) + (faker.number.int(100) / 100), humidity: () => faker.number.int(100) / 100, barometric_pressure: () => (MIN_PRESSURE + faker.number.int(PRESSURE_RANGE)) / 100 }; const sleep = (ms) => { return new Promise((resolve) => { setTimeout(resolve, ms); }); }; (async () => { const producer = await kafka.producer() while(true) { const sensor = getRandom(SENSORS) const reading = getRandom(READINGS) const value = getRandomReading[reading]() const data = { reading, value } await producer.send({ topic: kafka.topic, messages: [{ key: sensor, value: JSON.stringify(data) }] }) await sleep(faker.number.int(MAX_DELAY_MS)) } })() A lot of the code in the file has to do with generating random values. I’ll highlight the important parts: We’ll simulate having five different weather sensors. Their names are found in SENSORS. A sensor will emit (publish) a value for one of three possible readings: temperature, humidity, or barometric_pressure. The getRandomReading object has a function for each of these readings, to generate a reasonable corresponding value. The entire process runs as an async function with an infinite while loop. Within the while loop, we: Choose a sensor at random. Choose a reading at random. Generate a random value for that reading. Call producer.send to publish this data to the topic. The sensor serves as the key for the event, while the reading and value will form the event message. Then, we wait for up to 20 seconds before our next iteration of the loop. The Consumer Process The background process in consumer.js is considerably simpler. JavaScript require('dotenv').config(); const logger = require('./src/logger.js'); const kafka = require('./src/kafka.js'); (async () => { const consumer = await kafka.consumer() await consumer.run({ eachMessage: async ({ topic, partition, message }) => { const sensorId = message.key.toString() const messageObj = JSON.parse(message.value.toString()) const logMessage = { sensorId } logMessage[messageObj.reading] = messageObj.value logger.info(logMessage) } }) })() Our consumer is already subscribed to the weather-data topic. We call consumer.run, and then we set up a handler for eachMessage. Whenever Kafka notifies the consumer of a message, it logs the message. That’s all there is to it. Processes and the Procfile In the package.json file, we need to add a few scripts which start up our producer and consumer background processes. The file should now include the following: JSON ... "scripts": { "start": "echo 'do nothing'", "start:consumer": "node consumer.js", "start:producer": "node producer.js" }, ... The important ones are start:consumer and start:producer. But we keep start in our file (even though it doesn’t do anything meaningful) because the Heroku builder expects it to be there. Next, we create a Procfile which will tell Heroku how to start up the various workers we need for our Heroku app. 
In the root folder of our project, the Procfile should look like this: Shell consumer_worker: npm run start:consumer producer_worker: npm run start:producer Pretty simple, right? We’ll have a background process worker called consumer_worker, and another called producer_worker. You’ll notice that we don’t have a web worker, which is what you would typically see in Procfile for a web application. For our Heroku app, we just need the two background workers. We don’t need web. Deploy and Test the Application With that, all of our code is set. We’ve committed all of our code to the repo, and we’re ready to deploy. Shell ~/project$ git push heroku main … remote: -----> Build succeeded! … remote: -----> Compressing... remote: Done: 48.6M remote: -----> Launching... … remote: Verifying deploy... done After we’ve deployed, we want to make sure that we scale our dynos properly. We don’t need a dyno for a web process, but we’ll need one for both consumer_worker and producer_worker. We run the following command to set these processes based on our needs. Shell ~/project$ heroku ps:scale web=0 consumer_worker=1 producer_worker=1 Scaling dynos... done, now running producer_worker at 1:Eco, consumer_worker at 1:Eco, web at 0:Eco Now, everything should be up and running. Behind the scenes, our producer_worker should connect to the Kafka cluster and then begin publishing weather sensor data every few seconds. Then, our consumer_worker should connect to the Kafka cluster and log any messages that it receives from the topic that it is subscribed to. To see what our consumer_worker is doing, we can look in our Heroku logs. Shell ~/project$ heroku logs --tail … heroku[producer_worker.1]: Starting process with command `npm run start:producer` heroku[producer_worker.1]: State changed from starting to up app[producer_worker.1]: app[producer_worker.1]: > weather-eda-kafka-heroku-node@1.0.0 start:producer app[producer_worker.1]: > node producer.js app[producer_worker.1]: … heroku[consumer_worker.1]: Starting process with command `npm run start:consumer` heroku[consumer_worker.1]: State changed from starting to up app[consumer_worker.1]: app[consumer_worker.1]: > weather-eda-kafka-heroku-node@1.0.0 start:consumer app[consumer_worker.1]: > node consumer.js app[consumer_worker.1]: app[consumer_worker.1]: {"level":"INFO","timestamp":"2024-05-28T02:31:20.660Z","logger":"kafkajs","message":"[Consumer] Starting","groupId":"columbia-68051.weather-consumers"} app[consumer_worker.1]: {"level":"INFO","timestamp":"2024-05-28T02:31:23.702Z","logger":"kafkajs","message":"[ConsumerGroup] Consumer has joined the group","groupId":"columbia-68051.weather-consumers","memberId":"weather-eda-app-nodejs-client-3ee5d1fa-eba9-4b59-826c-d3b924a6e4e4","leaderId":"weather-eda-app-nodejs-client-3ee5d1fa-eba9-4b59-826c-d3b924a6e4e4","isLeader":true,"memberAssignment":{"columbia-68051.test-topic-1":[0,1,2,3,4,5,6,7]},"groupProtocol":"RoundRobinAssigner","duration":3041} app[consumer_worker.1]: [2024-05-28 02:31:23.755 +0000] INFO (21): {"sensorId":"sensor01","temperature":87.84} app[consumer_worker.1]: [2024-05-28 02:31:23.764 +0000] INFO (21): {"sensorId":"sensor01","humidity":0.3} app[consumer_worker.1]: [2024-05-28 02:31:23.777 +0000] INFO (21): {"sensorId":"sensor03","temperature":22.11} app[consumer_worker.1]: [2024-05-28 02:31:37.773 +0000] INFO (21): {"sensorId":"sensor01","barometric_pressure":29.71} app[consumer_worker.1]: [2024-05-28 02:31:54.495 +0000] INFO (21): {"sensorId":"sensor05","barometric_pressure":29.55} 
app[consumer_worker.1]: [2024-05-28 02:32:02.629 +0000] INFO (21): {"sensorId":"sensor04","temperature":90.58} app[consumer_worker.1]: [2024-05-28 02:32:03.995 +0000] INFO (21): {"sensorId":"sensor02","barometric_pressure":29.25} app[consumer_worker.1]: [2024-05-28 02:32:12.688 +0000] INFO (21): {"sensorId":"sensor04","humidity":0.1} app[consumer_worker.1]: [2024-05-28 02:32:32.127 +0000] INFO (21): {"sensorId":"sensor01","humidity":0.34} app[consumer_worker.1]: [2024-05-28 02:32:32.851 +0000] INFO (21): {"sensorId":"sensor02","humidity":0.61} app[consumer_worker.1]: [2024-05-28 02:32:37.200 +0000] INFO (21): {"sensorId":"sensor01","barometric_pressure":30.36} app[consumer_worker.1]: [2024-05-28 02:32:50.388 +0000] INFO (21): {"sensorId":"sensor03","temperature":104.55} It works! We know that our producer is periodically publishing messages to Kafka because our consumer is receiving them and then logging them. Of course, in a larger EDA app, every sensor is a producer. They might publish on multiple topics for various purposes, or they might all publish on the same topic. And your consumer can be subscribed to multiple topics. Also, in our demo app, our consumer simply logs each message it receives in its eachMessage handler (a minimal version of the logger module it uses appears at the end of this article); but in an EDA application, a consumer might respond by calling a third-party API, sending an SMS notification, or querying a database. Now that you have a basic understanding of events, topics, producers, and consumers, and you know how to work with Kafka, you can start to design and build your own EDA applications to satisfy more complex business use cases. Conclusion EDA is pretty powerful — you can decouple your systems while enjoying key features like easy scalability and real-time data processing. For EDA, Kafka is a key tool that helps you handle high-throughput data streams with ease. Using Apache Kafka on Heroku helps you get started quickly. Since it’s a managed service, you don’t need to worry about the complex parts of Kafka cluster management. You can just focus on building your apps. From here, it’s time for you to experiment and prototype. Identify which use cases fit well with EDA. Dive in, test it out on Heroku, and build something amazing. Happy coding!
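One small piece of the demo we never showed inline is src/logger.js, which consumer.js requires. Since we installed pino and pino-pretty earlier, a minimal implementation along those lines could look like the sketch below; treat it as one plausible version rather than an exact copy of the file in the repository.
JavaScript
// src/logger.js
// Minimal pino logger; pino-pretty (installed earlier) makes the Heroku log lines easier to read.
const pino = require('pino');

module.exports = pino({
  level: process.env.LOG_LEVEL || 'info', // assumption: default to 'info' unless overridden
  transport: { target: 'pino-pretty' }
});
With a module like this in place, the logger.info(...) call in consumer.js produces the kind of formatted lines shown in the heroku logs output above.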
In modern web development, fetching data from APIs is a common task. There are multiple ways to achieve this, including using libraries like Axios, the native Fetch API, and Angular's HttpClient. In this article, we will explore how to use these tools for data fetching, including examples of standard application code and error handling. We will also touch upon other methods and conclude with a comparison. 1. Introduction to Data Fetching Data fetching is a critical part of web applications, allowing us to retrieve data from servers and integrate it into our apps. While the Fetch API is built into JavaScript, libraries like Axios and frameworks like Angular offer additional features and more straightforward syntax. Understanding these approaches helps developers choose the best tool for their specific needs. 2. Fetch API The Fetch API provides a native way to make HTTP requests in JavaScript. It's built into the browser, so no additional libraries are needed. 2.1 Basic Fetch Usage Here is a basic example of using Fetch to get data from an API: JavaScript fetch('https://jsonplaceholder.typicode.com/posts') .then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); 2.2 Fetch With Async/Await Using async and await can make the code cleaner and more readable: JavaScript // Function to fetch data using async/await async function fetchData() { try { // Await the fetch response from the API endpoint const response = await fetch('https://jsonplaceholder.typicode.com/posts'); // Check if the response is ok (status in the range 200-299) if (!response.ok) { throw new Error('Network response was not ok'); } // Await the JSON data from the response const data = await response.json(); // Log the data to the console console.log(data); } catch (error) { // Handle any errors that occurred during the fetch console.error('Fetch error:', error); } } // Call the function to execute the fetch fetchData(); 2.3 Error Handling in Fetch Error handling in Fetch requires checking the ok property of the response object. The error messages are more specific, providing additional details like HTTP status codes for better debugging. JavaScript // Function to fetch data with explicit error handling async function fetchWithErrorHandling() { try { // Await the fetch response from the API endpoint const response = await fetch('https://jsonplaceholder.typicode.com/posts'); // Check if the response was not successful if (!response.ok) { throw new Error(`HTTP error! Status: ${response.status}`); } // Await the JSON data from the response const data = await response.json(); // Log the data to the console console.log(data); } catch (error) { // Handle errors, including HTTP errors and network issues console.error('Fetch error:', error.message); } } // Call the function to execute the fetch fetchWithErrorHandling(); 3. Axios Axios is a popular library for making HTTP requests. It simplifies the process and offers additional features over the Fetch API. 
3.1 Installing Axios To use Axios, you need to install it via npm or include it via a CDN: Shell npm install axios 3.2 Basic Axios Usage Here's a basic example of using Axios to fetch data: JavaScript const axios = require('axios'); axios.get('https://jsonplaceholder.typicode.com/posts') .then(response => console.log(response.data)) .catch(error => console.error('Error:', error)); 3.3 Axios With Async/Await Axios works well with async and await: JavaScript async function fetchData() { try { const response = await axios.get('https://jsonplaceholder.typicode.com/posts'); console.log(response.data); } catch (error) { console.error('Axios error:', error); } } fetchData(); 3.4 Error Handling in Axios Axios provides better error handling out of the box: JavaScript async function fetchWithErrorHandling() { try { const response = await axios.get('https://jsonplaceholder.typicode.com/posts'); console.log(response.data); } catch (error) { if (error.response) { // Server responded with a status other than 2xx console.error('Error response:', error.response.status, error.response.data); } else if (error.request) { // No response was received console.error('Error request:', error.request); } else { // Something else caused the error console.error('Error message:', error.message); } } } fetchWithErrorHandling(); 4. Angular HttpClient Angular provides a built-in HttpClient module that makes it easier to perform HTTP requests within Angular applications. 4.1 Setting up HttpClient in Angular First, ensure that the HttpClientModule is imported in your Angular module. You need to import HttpClientModule into your Angular module (usually AppModule). TypeScript import { HttpClientModule } from '@angular/common/http'; import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, HttpClientModule // Import HttpClientModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } 4.2 Basic HttpClient Usage Here's a basic example of using HttpClient to fetch data. Inject HttpClient into your component or service where you want to make HTTP requests. 
TypeScript import { HttpClient } from '@angular/common/http'; import { Component, OnInit } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html' }) export class AppComponent implements OnInit { constructor(private http: HttpClient) { } ngOnInit(): void { this.http.get('https://jsonplaceholder.typicode.com/posts').subscribe( (data) => { console.log(data); // Handle data }, (error) => { console.error('Angular HTTP error:', error); // Handle error } ); } } 4.3 Error Handling in HttpClient Angular's HttpClient provides a more structured approach to error handling: TypeScript import { Component, OnInit } from '@angular/core'; import { HttpClient } from '@angular/common/http'; import { catchError, throwError } from 'rxjs'; @Component({ selector: 'app-root', templateUrl: './app.component.html' }) export class AppComponent implements OnInit { posts: any[] = []; constructor(private http: HttpClient) { } ngOnInit(): void { this.http.get<any[]>('https://jsonplaceholder.typicode.com/posts') .pipe( catchError(error => { console.error('Error:', error); // Log the error to the console // Optionally, you can handle different error statuses here // For example, display user-friendly messages or redirect to an error page return throwError(() => new Error('Something went wrong; please try again later.')); }) ) .subscribe( data => { this.posts = data; // Handle successful data retrieval }, error => { // Handle error in subscription if needed (e.g., display a message to the user) console.error('Subscription error:', error); } ); } } 5. Other Data Fetching Methods Apart from Fetch, Axios, and Angular HttpClient, there are other libraries and methods to fetch data in JavaScript: 5.1 jQuery AJAX jQuery provides an ajax method for making HTTP requests, though it's less common in modern applications: JavaScript $.ajax({ url: 'https://jsonplaceholder.typicode.com/posts', method: 'GET', success: function(data) { console.log(data); }, error: function(error) { console.error('jQuery AJAX error:', error); } }); 5.2 XMLHttpRequest The older XMLHttpRequest can also be used, though it's more verbose: JavaScript const xhr = new XMLHttpRequest(); xhr.open('GET', 'https://jsonplaceholder.typicode.com/posts'); xhr.onload = function() { if (xhr.status >= 200 && xhr.status < 300) { console.log(JSON.parse(xhr.responseText)); } else { console.error('XMLHttpRequest error:', xhr.statusText); } }; xhr.onerror = function() { console.error('XMLHttpRequest error:', xhr.statusText); }; xhr.send(); 6. Conclusion Choosing between Fetch, Axios, and Angular HttpClient depends on your project requirements: Fetch API: Native to JavaScript, no additional dependencies, requires manual error handling. Axios: Simpler syntax, built-in error handling, and additional features like request cancellation and interceptors (a short interceptor sketch follows this conclusion). Angular HttpClient: Integrated with Angular, strong TypeScript support, structured error handling. All three tools are powerful and capable of fetching data efficiently. Your choice may come down to personal preference or specific project needs. For simpler projects or when minimal dependencies are crucial, the Fetch API is suitable. For larger projects requiring robust features and more intuitive syntax, Axios is an excellent choice. Angular applications benefit significantly from using HttpClient due to its integration and additional Angular-specific features. By understanding these methods, you can make an informed decision and use the best tool for your specific data-fetching tasks. Happy coding!
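As a quick illustration of the Axios interceptors mentioned above, here is a minimal sketch. The header name and token are placeholders for illustration only, not part of the examples earlier in this article.
JavaScript
const axios = require('axios');

// Request interceptor: attach a header to every outgoing request (e.g., an auth token).
axios.interceptors.request.use(config => {
  config.headers['X-Demo-Token'] = 'placeholder-token'; // hypothetical header, for illustration
  return config;
});

// Response interceptor: normalize error logging in one place for every response.
axios.interceptors.response.use(
  response => response,
  error => {
    console.error('Request failed:', error.response ? error.response.status : error.message);
    return Promise.reject(error);
  }
);

axios.get('https://jsonplaceholder.typicode.com/posts')
  .then(response => console.log(`Fetched ${response.data.length} posts`))
  .catch(() => { /* already logged by the response interceptor */ });
Because the interceptors are registered on the shared axios instance, every request made through it picks them up automatically.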
Choosing a framework for starting a new project can be quite challenging, considering the many frameworks and tools available today. Developers who want to build high-performance and scalable web applications often choose Next.js over others. No wonder: Next.js, a React framework created by Vercel, offers a comprehensive solution for building server-side rendered (SSR) and static web applications. Here are some of the key advantages: Server-Side Rendering (SSR) and Static Site Generation (SSG): Next.js supports both SSR and SSG, allowing developers to choose the best rendering method for their needs. SSR improves SEO and page load speed by rendering pages on the server, while SSG can pre-render pages at build time for faster performance. Built-in routing: Next.js simplifies routing with its file-based routing system. By organizing your files and folders in the pages directory, you can automatically create corresponding routes, eliminating the need for an external router library (see the tiny routing example at the end of this article). Optimized performance: Next.js comes with a host of performance optimizations out of the box, including code splitting, automatic static optimization, and image optimization, ensuring your application runs efficiently. Starting from scratch can be time-consuming, especially when configuring essential features like authorization and CRUD operations. A practical approach is to use a ready-made boilerplate that includes these settings, allowing you to focus on building features rather than setting up the basics. By applying a ready-to-use Next.js boilerplate, you would get: Time and effort savings: a boilerplate provides a foundation with pre-configured settings, saving you from the hassle of initial setup and configuration. Best practices: experienced developers follow industry best practices when building boilerplates, ensuring your project starts on the right foot. Included features: many boilerplates ship with built-in features such as authentication, routing, and state management, allowing you to hit the ground running. Getting Started With a Next.js Boilerplate Let’s go step-by-step on how to start your project using a boilerplate. Choose a Boilerplate Choose the boilerplate that suits your needs. In this review, we’ll use the extensive-react-boilerplate as an example, because we use it in our company. In our boilerplate overview article, we've provided the reasons behind its creation and implementation. Clone the Repository Clone the boilerplate repository to your local machine using Git. git clone --depth 1 https://github.com/brocoders/extensive-react-boilerplate.git my-app Install Dependencies Navigate to the project directory and install the necessary dependencies. cd my-app npm install Configure Environment Variables Set up your environment variables for authentication and other configurations. To do this, copy the example environment file: cp example.env.local .env.local Run the Development Server Start the development server to see your project in action. npm run dev Customize Your Project With the boilerplate set up, you can now start building your features. The boilerplate provides a structure and essential configurations, allowing you to focus on the core functionality of your application. Conclusion Starting a project with Next.js offers numerous advantages, from server-side rendering to built-in routing and performance optimizations. Using a ready-made boilerplate can further accelerate your development process by providing pre-configured settings and best practices.
By leveraging these tools, you can focus on what matters most: building a high-quality, scalable web application. In the next article, we will delve into mastering CRUD operations in Next.js, providing you with the tools and knowledge to manage data effectively in your applications.
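As promised in the routing note above, here is a tiny illustration of Next.js file-based routing. The page name and markup are placeholders; any file placed under pages/ works the same way.
JavaScript
// pages/about.js
// With file-based routing, this file is automatically served at /about;
// no separate router configuration is needed.
export default function AboutPage() {
  return <h1>About us</h1>;
}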
In today's digital landscape, web application security has become a paramount concern for developers and businesses. With the rise of sophisticated cyber-attacks, simply reacting to threats after they occur is no longer sufficient. Instead, predictive threat analysis offers a proactive method of identifying and eliminating security threats before they can cause real damage. In this blog, I'll guide you through strengthening your web application security using predictive threat analysis in Node.js. Understanding Predictive Threat Analysis Predictive threat analysis involves using advanced algorithms and AI/ML techniques to analyze patterns and predict potential security threats. By leveraging historical data and real-time inputs, we can identify abnormal behaviors and vulnerabilities that could lead to attacks. Key Tools and Technologies Before diving into the implementation, let's familiarize ourselves with some essential tools and technologies: Node.js: A powerful JavaScript runtime built on Chrome's V8 engine, ideal for server-side applications Express.js: A flexible Node.js web application framework that provides robust features for web and mobile applications TensorFlow.js: A library for developing and training machine learning models directly in JavaScript (read more at "AI Frameworks for Software Engineers: TensorFlow (Part 1)"). JWT (JSON Web Tokens): Used for securely transmitting information between parties as a JSON object (read more at "What Is a JWT Token?") MongoDB: A NoSQL database used to store user data and logs (read more at "MongoDB Essentials") Setting Up the Environment First, let's set up a basic Node.js environment. You'll need Node.js installed on your machine. If you haven't done so yet, download and install it from the official Node.js site. Next, create a new project directory and initialize a Node.js project: mkdir predictive-threat-analysis cd predictive-threat-analysis npm init -y Install the necessary dependencies: npm install express mongoose jsonwebtoken bcryptjs body-parser @tensorflow/tfjs-node Implementing User Authentication User authentication is the first step towards securing your web application. We'll use JWT for token-based authentication. Below is a simplified example: 1. Setting Up Express and MongoDB Create server.js to set up our Express server and MongoDB connection: const express = require('express'); const mongoose = require('mongoose'); const bodyParser = require('body-parser'); const app = express(); app.use(bodyParser.json()); mongoose.connect('mongodb://localhost:27017/securityDB', { useNewUrlParser: true, useUnifiedTopology: true, }); const userSchema = new mongoose.Schema({ username: String, password: String, }); const User = mongoose.model('User', userSchema); app.listen(3000, () => { console.log('Server running on port 3000'); }); 2. Handling User Registration Add user registration endpoint in server.js: const bcrypt = require('bcryptjs'); const jwt = require('jsonwebtoken'); app.post('/register', async (req, res) => { const { username, password } = req.body; const hashedPassword = await bcrypt.hash(password, 10); const newUser = new User({ username, password: hashedPassword }); await newUser.save(); res.status(201).send('User registered'); }); 3.
Authenticating Users Add login endpoint in server.js: app.post('/login', async (req, res) => { const { username, password } = req.body; const user = await User.findOne({ username }); if (!user || !await bcrypt.compare(password, user.password)) { return res.status(401).send('Invalid credentials'); } const token = jwt.sign({ id: user._id }, 'your_jwt_secret', { expiresIn: '1h' }); res.json({ token }); }); Implementing Predictive Threat Analysis Using TensorFlow.js Now, let's integrate predictive threat analysis using TensorFlow.js. We'll create a simple model that predicts potential threats based on user behavior. 1. Collecting Data First, we need to collect data on user interactions. For simplicity, let's assume we log login attempts with timestamps and outcomes (success or failure). Update server.js to log login attempts: const loginAttemptSchema = new mongoose.Schema({ username: String, timestamp: Date, success: Boolean, }); const LoginAttempt = mongoose.model('LoginAttempt', loginAttemptSchema); app.post('/login', async (req, res) => { const { username, password } = req.body; const user = await User.findOne({ username }); const success = user && await bcrypt.compare(password, user.password); const timestamp = new Date(); const attempt = new LoginAttempt({ username, timestamp, success }); await attempt.save(); if (!success) { return res.status(401).send('Invalid credentials'); } const token = jwt.sign({ id: user._id }, 'your_jwt_secret', { expiresIn: '1h' }); res.json({ token }); }); 2. Training the Model Use TensorFlow.js to build and train a simple model: Create trainModel.js: const tf = require('@tensorflow/tfjs-node'); const mongoose = require('mongoose'); const LoginAttempt = require('./models/LoginAttempt'); // Assuming you have the model in a separate file async function trainModel() { await mongoose.connect('mongodb://localhost:27017/securityDB', { useNewUrlParser: true, useUnifiedTopology: true, }); const attempts = await LoginAttempt.find(); const data = attempts.map(a => ({ timestamp: a.timestamp.getTime(), success: a.success ? 1 : 0, })); const xs = tf.tensor2d(data.map(a => [a.timestamp])); const ys = tf.tensor2d(data.map(a => [a.success])); const model = tf.sequential(); model.add(tf.layers.dense({ units: 1, inputShape: [1], activation: 'sigmoid' })); model.compile({ optimizer: 'sgd', loss: 'binaryCrossentropy', metrics: ['accuracy'] }); await model.fit(xs, ys, { epochs: 10 }); await model.save('file://./model'); mongoose.disconnect(); } trainModel().catch(console.error); Run the training script: node trainModel.js 3. Predicting Threats Integrate the trained model to predict potential threats during login attempts. 
Update server.js: const tf = require('@tensorflow/tfjs-node'); let model; async function loadModel() { model = await tf.loadLayersModel('file://./model/model.json'); } loadModel(); app.post('/login', async (req, res) => { const { username, password } = req.body; const user = await User.findOne({ username }); const timestamp = new Date(); const tsValue = timestamp.getTime(); const prediction = model.predict(tf.tensor2d([[tsValue]])).dataSync()[0]; if (prediction > 0.5) { return res.status(401).send('Potential threat detected'); } const success = user && await bcrypt.compare(password, user.password); const attempt = new LoginAttempt({ username, timestamp, success }); await attempt.save(); if (!success) { return res.status(401).send('Invalid credentials'); } const token = jwt.sign({ id: user._id }, 'your_jwt_secret', { expiresIn: '1h' }); res.json({ token }); }); Conclusion By leveraging predictive threat analysis, we can proactively identify and mitigate potential security threats in our Node.js web applications. Through the integration of machine learning models with TensorFlow.js, we can analyze user behavior and predict suspicious activities before they escalate into actual attacks. This approach enhances the security of our applications and helps us stay ahead of potential threats. Implementing such a strategy requires a thoughtful combination of authentication mechanisms, data collection, and machine learning, but the payoff in terms of security is well worth the effort.
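One loose end from the walkthrough: trainModel.js requires ./models/LoginAttempt, while the schema in this post was defined inline in server.js. If you extract it into its own module, a minimal version could look like the sketch below (an assumed refactoring, not code from the original snippets).
JavaScript
// models/LoginAttempt.js
// Moves the login-attempt schema into its own module so that both server.js and
// trainModel.js can require the same Mongoose model.
const mongoose = require('mongoose');

const loginAttemptSchema = new mongoose.Schema({
  username: String,
  timestamp: Date,
  success: Boolean,
});

module.exports = mongoose.model('LoginAttempt', loginAttemptSchema);
If you go this route, remember to remove the inline schema definition from server.js and require this module instead, so the model is only registered once.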
Have you ever chosen some technology without considering alternatives? How significant is conducting the research for selecting a reasonable tech stack? How would you approach the evaluation of suitable options? In this article, we’ll focus on Node.js alternatives and the core aspects to consider when comparing other solutions with one of the most widely used web technologies, Node.js. The question of what technology to select for the project confronts every team starting software development. It’s clear that the tech choice will play a critical role in implementing the outlined product. The development team has to put considerable effort into finding tech solutions capable of meeting the set requirements. Therefore, the choice between available options is a common step in the decision process. It’s a great practice to consider different solutions and make a detailed tech comparison. Looking at the range of Node.js alternatives, companies get the opportunity to select the ones best suited to their needs. First, let’s start by discussing Node.js development and its characteristics. What Is Node.js? Bringing up the topic of Node.js alternatives serves a few goals. The main point is that it helps to understand the technology better and learn more about its competitors. It won’t be that easy to make a choice without detailed research and a deep understanding of the project’s needs. Taking into consideration that Node.js has become a strong market representative and the most used web technology, it’s often discussed among businesses and developers. Whether you’re getting started with web development or belong to a professional team, Node.js is still high on the list to become a primary choice. So, what makes this technology so competitive among others? How does it enable developers to create scalable, efficient, and event-driven applications? And why is it important to consider Node.js alternatives in parallel? Everything starts with the key features that Node.js provides: Non-blocking I/O: Node.js uses an event-driven, non-blocking I/O model. This means that instead of waiting for one operation to complete before moving on to the next, Node.js can handle multiple tasks concurrently. That is particularly useful for applications involving network or file system operations (see the short sketch after this list). Single programming language: Node.js allows developers to use JavaScript both on the client side and on the server side. That means that teams can use the same programming language for both ends of their application, which can lead to more consistent and streamlined development. Vast ecosystem: Node.js has a rich ecosystem of libraries and packages available through npm, the Node package manager. This makes it easy for developers to incorporate pre-built modules into their applications, saving time and effort. Scalability: Due to its event-driven architecture, Node.js is well-suited for building highly scalable applications that need to handle a large number of simultaneous connections. This is particularly beneficial for real-time applications like chat applications, online gaming, and collaborative tools. Community support: Node.js has a strong and active community that continuously contributes to its development, updates, and improvement. This community support ensures that the platform remains up-to-date and responsive to emerging needs.
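To make the non-blocking I/O point above concrete, here is a small sketch. The file names are placeholders; the point is that both reads start immediately and the process stays responsive while they complete.
JavaScript
// Non-blocking I/O: both reads are started at once and complete independently,
// so neither blocks the event loop while the other is in flight.
const fs = require('fs/promises');

async function readBoth() {
  const [first, second] = await Promise.all([
    fs.readFile('first.txt', 'utf8'),   // placeholder file names
    fs.readFile('second.txt', 'utf8'),
  ]);
  console.log(first.length, second.length);
}

readBoth().catch(console.error);
console.log('Still responsive while the reads are in progress');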
Node.js is commonly used to build various types of applications, including web servers, APIs, microservices, real-time applications, IoT applications, and more. It has gained significant popularity in the web development community and has been used by numerous companies to create efficient and performant applications. What Are the Top Node Alternatives? Detailed research on a technology also includes checking on its competitors. This step highlights other opportunities and clarifies what functionality each technology offers. As a result, businesses and developers gain a clear understanding of the capabilities of the chosen solutions. Java as an Alternative to Node.js When talking about viable Node.js alternatives, many teams consider the multipurpose programming language Java. Since it’s built around the principles of object-oriented programming, it encourages modularity and reusability of code. Both technologies have distinct characteristics that make them suitable for various types of applications and scenarios. Let’s consider some of the features that differentiate Java from other Node.js alternatives. Type: Java is a multipurpose programming language, while Node.js is a runtime environment using JavaScript as its programming language. Concurrency: Node.js excels in handling a large number of simultaneous connections due to its event-driven, non-blocking nature. Java also supports concurrency but may require more careful management of threads. Performance: Java’s JVM-based execution can provide consistent performance across platforms, whereas Node.js’s non-blocking architecture can lead to high performance for certain types of applications that involve many concurrent connections. Ecosystem: Java has a mature and extensive ecosystem with a great range of frameworks and libraries for various purposes. Node.js has a vibrant and rapidly growing ecosystem thanks to its NPM repository. Learning curve: Java might have a steeper learning curve due to its static typing and broader language features. JavaScript used with Node.js is generally considered easier to learn, especially for developers with web development experience. Use cases: Java is commonly used for enterprise applications, Android app development, and larger-scale systems. Node.js is often chosen for real-time applications, APIs, and lightweight microservices. In summary, Java excels in versatility and enterprise applications, while Node.js shines in building scalable, real-time applications with its event-driven, non-blocking architecture. The choice between them often depends on the specific requirements and the developer’s familiarity with the language and ecosystem. ASP.NET as an Alternative to Node.js When discussing web technologies, leaving .NET out of the conversation is not an option. It is a strong market competitor that can also be counted among Node.js alternatives, as it’s often leveraged in web development. .NET is a developer platform with tools, programming languages, and libraries for building various applications. Its well-known web framework, ASP.NET, is widely used for creating web applications and services. Type: ASP.NET is a web framework that primarily supports C# and other .NET languages. Concurrency: The .NET platform follows a more traditional, server-centric approach, while Node.js introduces an event-driven, non-blocking paradigm. Performance: Node.js is known for its lightweight and efficient event-driven architecture, which can lead to impressive performance in certain use cases.
ASP.NET also offers good performance, and the choice between the two might depend on the specific workload and optimizations. Development tools: Both ecosystems have robust development tools. Visual Studio is a powerful IDE for .NET, while Node.js development often leverages lightweight editors along with tools like Visual Studio Code. Community: ASP.NET benefits from a strong .NET community and official support from Microsoft. Node.js has a large and active open-source community with support from various organizations and developers. Learning curve: ASP.NET may have a steeper learning curve, especially for those new to C# and the Microsoft ecosystem. Node.js is relatively easier to learn, especially for developers familiar with JavaScript. Use cases: ASP.NET allows developers to build dynamic web applications, APIs, and web services using a variety of programming languages, with C# being the most common choice. Node.js is particularly popular for building scalable and real-time applications, such as APIs, web servers, and networking tools. ASP.NET is part of the versatile .NET platform suitable for delivering various application types, while Node.js is specialized for building real-time, event-driven applications. The choice always depends on the specific requirements and goals of your project. Python as an Alternative to Node.js The next Node.js alternative is Python, a versatile, high-level programming language. It’s known for its simplicity and readability. It’s used for a wide range of applications, including web development, data analysis, scientific computing, machine learning, automation, and more. Here are some of the important features to focus on. Type: Python is a high-level, interpreted, and general-purpose programming language. Concurrency: Python’s Global Interpreter Lock (GIL) can limit its performance in multi-core scenarios, while Node.js is designed for asynchronous, non-blocking I/O, making it great for handling many simultaneous connections. Performance: Node.js is optimized for handling high concurrency and I/O-bound tasks. Python is versatile and well-suited for various tasks, from web development to scientific computing. Its performance depends on the specific use case and libraries being used. Ecosystem: Both languages have robust ecosystems, but Python’s ecosystem is more diverse due to its broader range of applications. Python provides a vast ecosystem of third-party libraries and frameworks that extend its capabilities. For example, Django, a popular web development framework, is often considered among Node.js alternatives. Community: Both communities embrace open source, but Python’s longer history has led to a more established culture of collaboration. Python’s community is broader in terms of application domains, while Node.js’s community is more specialized in web and real-time development. Learning curve: Python’s easy-to-read syntax can make it more approachable for beginners, while Node.js can be advantageous for front-end developers familiar with JavaScript. Use cases: Python is versatile and well-suited for a wide variety of tasks, while Node.js excels in building real-time, event-driven, and highly scalable applications. Both have rich ecosystems, but Python’s breadth extends across various domains, while Node.js is particularly strong for web and network-related applications. Python’s ease of learning and its widespread use in various industries have contributed to its position as one of the most popular programming languages.
In many cases, Node.js tends to excel in scenarios requiring rapid, asynchronous responses, while Python is often chosen for its ease of use, wide ecosystem, and diverse application domains. Django as an Alternative to Node.js Another technology to consider among Node.js alternatives is Django, a high-level web framework written in Python. It’s commonly used for web development but unveils different approaches, ecosystems, and use cases compared to Node.js. Let’s consider some of the core details. Type: Django is a web framework that follows the MVT architectural pattern. Besides, Django uses Python, while Node.js uses JavaScript. The final choice might often depend on familiarity with the language or the team’s expertise. Architecture: Django enforces a specific architecture, while Node.js provides more flexibility in choosing an architecture or combination of libraries. The difference in decision-making is influenced by the nature of the project’s requirements and developers’ preferences. Asynchronous handling: Node.js excels at handling a large number of concurrent connections due to its non-blocking nature. Django’s asynchronous capabilities have improved in recent versions, but Node.js is generally considered more suited for high-concurrency scenarios. Ecosystem: Django has a rich ecosystem of built-in features and a wide range of third-party packages available through Python’s package manager, Pip. Node.js presents a vast ecosystem of modules available through npm for various tasks. Learning curve: Django’s comprehensive documentation and “batteries-included” philosophy can lead to quicker development for those already familiar with Python. Node.js might have a steeper learning curve, especially if you’re new to JavaScript on the server side. Use cases: Django is often favored for content-heavy applications, e-commerce platforms, and applications requiring rapid development. Node.js is well-suited for real-time applications, APIs, microservices, and applications with a high degree of interactivity. The choice between Django and Node.js depends on your project’s requirements, your team’s expertise, and your personal preferences. Django is often chosen for its comprehensive features and security, while Node.js is preferred for real-time and asynchronous applications. Ruby on Rails as an Alternative to Node.js RoR is another alternative to Node.js with its convention over configuration approach. This web framework becomes an excellent choice for teams looking to rapidly prototype and develop applications while benefiting from a well-defined structure and a rich ecosystem of pre-built solutions. Type: Ruby on Rails is a full-stack web app framework written in the programming language Ruby. Flexibility: Ruby on Rails has a defined structure and set of conventions, which can speed up development but might limit flexibility in certain architectural decisions or customizations. Node.js offers more flexibility in terms of architecture and design choices, allowing developers to craft solutions fitting project-specific needs. Performance: Ruby on Rails might be less performant for certain scenarios due to its synchronous nature, although optimizations and caching can help mitigate this. Node.js can handle high levels of concurrency efficiently, making it perform well for certain applications. Ecosystem: Ruby on Rails has a well-established ecosystem of gems that provide ready-made solutions for common tasks, saving development time. 
At the same time, Node.js has a wider range of use cases and a massive library repository. Community: Both RoR and Node.js have active communities, but Ruby’s community is often associated with its focus on developer experience and creativity, while Node.js is known for its scalability and asynchronous capabilities. Learning curve: Ruby on Rails provides a set of conventions and guidelines that can make it easier for beginners to get started. Node.js might have a steeper learning curve for beginners due to its asynchronous programming concepts, event-driven architecture, and the need to manage dependencies and architectural choices more independently. Use cases: RoR is great for quickly building web applications, especially MVPs. Node.js is particularly useful for real-time applications, APIs, and applications with heavy I/O. It’s important to remember that the choice between Ruby on Rails and Node.js depends on various factors, including project requirements, the development team’s expertise, and the specific goals of the application you’re building. However, we should emphasize that the RoR market share has significantly decreased over the past few years while Node.js development keeps on growing. Node.js Alternatives: How To Make a Choice When considering alternatives to Node.js for development needs, it’s important to evaluate the options based on project-specific requirements, team expertise, and the features and characteristics the company values most. But at the same time, there’s no one-size-fits-all answer, and the best alternative for your project will depend on specific needs and constraints. It’s often a good idea to evolve a step-by-step guide on how to evaluate Node.js alternatives. Besides, you can consult with experienced developers or technical experts who have worked with the alternatives to Node.js you’re considering. Defining Project Requirements Well-defined requirements have always been crucial for the project’s success. It enables the team to reach a common understanding of product goals and efficient ways to implement outlined solutions. The development team covers scope control, resource allocation, risk identification, time management, cost estimation, etc. And it isn’t surprising that technology choice is worth special attention. Knowing your project requirement is the first step to efficiently evaluating Node.js and its alternatives for your project. Considering Constraints Project constraints are essential factors that can influence the planning, execution, and launch of a project. It’s crucial to consider these constraints from the beginning to ensure that your project stays on track and meets its objectives. The technology choice is something that is supposed to streamline the overall process. At the same time, it can negatively affect the product execution if not properly chosen and managed. The team has to find the suitable technology fit to build the outlined software. Researching Technology Options In the light of discussing Node.js alternatives, teams obviously check through Node.js and its viable options. It’s essential to conduct comprehensive research about the technology functionalities, platform compatibilities, community support, developers’ availability, development rates, etc. Besides, it’s important to stay updated as the tech market evolves really fast. It might even be necessary to iterate and adapt the technology stack. Therefore, Node.js has become a common choice due to both tech characteristics and market popularity. 
Evaluating the Pros and Cons of Main Competitors As your company narrows down the suitable options, it discovers viable alternatives to Node.js. You then need to assess the pros and cons of each technology and find out how it could benefit your project. That may involve reading documentation, articles, and reviews and consulting with experts if possible. Make sure to consider such aspects as: Security and scalability Performance Development speed and ease of use Learning curve Development cost Community and developer support Making a Decision Based on your research, evaluations, and analysis, make an informed decision on the technology stack for your project. Remember that there is never a one-size-fits-all solution. Indeed, it’s more about choosing the right one that covers your business-specific needs and brings all the necessary functionality to deliver your successful projects. The company needs to stay updated on the latest advancements within the chosen technology to ensure the project remains current and secure. As a result, any mentioned technology can become a good match for specific projects. The main thing is that the choice has to be supported by the necessary features and benefits your product gets from it. Of course, some become stronger market representatives and have wider adoption, like Node.js, ASP.NET, and others. However, the final choice only depends on the team’s needs and preferences for Node.js or its alternatives. Conclusion The technology choice is a vital part of the overall development process. The team takes responsibility for making informed decisions and selecting the best solutions. Later, that choice plays a crucial role in delivering a full-fledged product while meeting functional and non-functional requirements. By bringing up the topic of Node.js alternatives, we’ve discovered other viable options for software development. Besides, it helps us define the strengths of Node.js and why it’s so popular in the market. As with any other tech choice, teams have to put in the effort to find suitable options. Node.js has come a long way and doesn’t seem to be going anywhere anytime soon.