In today's digital landscape, web application security has become a paramount concern for developers and businesses. With the rise of sophisticated cyber-attacks, simply reacting to threats after they occur is no longer sufficient. Predictive threat analysis instead offers a proactive way to identify and eliminate security threats before they can do damage. In this blog, I'll guide you through strengthening your web application security using predictive threat analysis in Node.js.

## Understanding Predictive Threat Analysis

Predictive threat analysis involves using advanced algorithms and AI/ML techniques to analyze patterns and predict potential security threats. By leveraging historical data and real-time inputs, we can identify abnormal behaviors and vulnerabilities that could lead to attacks.

## Key Tools and Technologies

Before diving into the implementation, let's familiarize ourselves with some essential tools and technologies:

- Node.js: A powerful JavaScript runtime built on Chrome's V8 engine, ideal for server-side applications
- Express.js: A flexible Node.js web application framework that provides robust features for web and mobile applications
- TensorFlow.js: A library for developing and training machine learning models directly in JavaScript (read more at "AI Frameworks for Software Engineers: TensorFlow (Part 1)")
- JWT (JSON Web Tokens): Used for securely transmitting information between parties as a JSON object (read more at "What Is a JWT Token?")
- MongoDB: A NoSQL database used to store user data and logs (read more at "MongoDB Essentials")

## Setting Up the Environment

First, let's set up a basic Node.js environment. You'll need Node.js installed on your machine; if you haven't done so yet, download and install it from the official Node.js site. Next, create a new project directory and initialize a Node.js project:

```shell
mkdir predictive-threat-analysis
cd predictive-threat-analysis
npm init -y
```

Install the necessary dependencies:

```shell
npm install express mongoose jsonwebtoken bcryptjs body-parser @tensorflow/tfjs-node
```

## Implementing User Authentication

User authentication is the first step towards securing your web application. We'll use JWT for token-based authentication.

### 1. Setting Up Express and MongoDB

Create server.js to set up our Express server and MongoDB connection:

```javascript
const express = require('express');
const mongoose = require('mongoose');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

mongoose.connect('mongodb://localhost:27017/securityDB', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

const userSchema = new mongoose.Schema({
  username: String,
  password: String,
});

const User = mongoose.model('User', userSchema);

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
```

### 2. Handling User Registration

Add a user registration endpoint in server.js:

```javascript
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');

app.post('/register', async (req, res) => {
  const { username, password } = req.body;
  const hashedPassword = await bcrypt.hash(password, 10);
  const newUser = new User({ username, password: hashedPassword });
  await newUser.save();
  res.status(201).send('User registered');
});
```
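With the server running, a quick hypothetical smoke test (assuming the port 3000 used above and Node 18+, which ships a global fetch) confirms the endpoint responds:

```javascript
// Hypothetical smoke test for the /register endpoint defined above.
// Assumes server.js is running on localhost:3000 and Node 18+ (global fetch).
fetch('http://localhost:3000/register', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ username: 'alice', password: 's3cret' }),
})
  .then((res) => res.text())
  .then((body) => console.log(body)) // Expected output: "User registered"
  .catch(console.error);
```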
### 3. Authenticating Users

Add a login endpoint in server.js:

```javascript
app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  const user = await User.findOne({ username });
  if (!user || !(await bcrypt.compare(password, user.password))) {
    return res.status(401).send('Invalid credentials');
  }
  const token = jwt.sign({ id: user._id }, 'your_jwt_secret', { expiresIn: '1h' });
  res.json({ token });
});
```

## Implementing Predictive Threat Analysis Using TensorFlow.js

Now, let's integrate predictive threat analysis using TensorFlow.js. We'll create a simple model that predicts potential threats based on user behavior.

### 1. Collecting Data

First, we need to collect data on user interactions. For simplicity, let's assume we log login attempts with timestamps and outcomes (success or failure). Update server.js to log login attempts:

```javascript
const loginAttemptSchema = new mongoose.Schema({
  username: String,
  timestamp: Date,
  success: Boolean,
});

const LoginAttempt = mongoose.model('LoginAttempt', loginAttemptSchema);

app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  const user = await User.findOne({ username });
  const success = user && (await bcrypt.compare(password, user.password));
  const timestamp = new Date();

  const attempt = new LoginAttempt({ username, timestamp, success });
  await attempt.save();

  if (!success) {
    return res.status(401).send('Invalid credentials');
  }
  const token = jwt.sign({ id: user._id }, 'your_jwt_secret', { expiresIn: '1h' });
  res.json({ token });
});
```

### 2. Training the Model

Use TensorFlow.js to build and train a simple model. Create trainModel.js:

```javascript
const tf = require('@tensorflow/tfjs-node');
const mongoose = require('mongoose');
const LoginAttempt = require('./models/LoginAttempt'); // Assuming you have the model in a separate file

async function trainModel() {
  await mongoose.connect('mongodb://localhost:27017/securityDB', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  });

  const attempts = await LoginAttempt.find();
  const data = attempts.map((a) => ({
    timestamp: a.timestamp.getTime(),
    success: a.success ? 1 : 0,
  }));

  const xs = tf.tensor2d(data.map((a) => [a.timestamp]));
  const ys = tf.tensor2d(data.map((a) => [a.success]));

  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1], activation: 'sigmoid' }));
  model.compile({ optimizer: 'sgd', loss: 'binaryCrossentropy', metrics: ['accuracy'] });

  await model.fit(xs, ys, { epochs: 10 });
  await model.save('file://./model');
  mongoose.disconnect();
}

trainModel().catch(console.error);
```

Run the training script:

```shell
node trainModel.js
```
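One caveat worth flagging before moving on: a raw epoch timestamp is a weak, poorly scaled feature for a sigmoid model. As an optional extension (a sketch, not part of the original tutorial), you could derive better-scaled features, such as hour of day and a per-user failure count, before building the tensors:

```javascript
// Extension sketch (not part of the original tutorial): derive better-scaled
// features than a raw epoch timestamp from the same LoginAttempt documents.
function toFeatures(attempts) {
  const failuresByUser = new Map();
  return attempts.map((a) => {
    const priorFailures = failuresByUser.get(a.username) || 0;
    if (!a.success) failuresByUser.set(a.username, priorFailures + 1);
    return [
      a.timestamp.getHours() / 23,      // hour of day, scaled to [0, 1]
      Math.min(priorFailures, 10) / 10, // capped per-user failure count, scaled to [0, 1]
    ];
  });
}

// Usage with the training script above (inputShape would become [2]):
// const xs = tf.tensor2d(toFeatures(attempts));
```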
### 3. Predicting Threats

Integrate the trained model to predict potential threats during login attempts. Update server.js:

```javascript
const tf = require('@tensorflow/tfjs-node');

let model;
async function loadModel() {
  model = await tf.loadLayersModel('file://./model/model.json');
}
loadModel();

app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  const user = await User.findOne({ username });
  const timestamp = new Date();
  const tsValue = timestamp.getTime();

  const prediction = model.predict(tf.tensor2d([[tsValue]])).dataSync()[0];
  if (prediction > 0.5) {
    return res.status(401).send('Potential threat detected');
  }

  const success = user && (await bcrypt.compare(password, user.password));
  const attempt = new LoginAttempt({ username, timestamp, success });
  await attempt.save();

  if (!success) {
    return res.status(401).send('Invalid credentials');
  }
  const token = jwt.sign({ id: user._id }, 'your_jwt_secret', { expiresIn: '1h' });
  res.json({ token });
});
```

## Conclusion

By leveraging predictive threat analysis, we can proactively identify and mitigate potential security threats in our Node.js web applications. Through the integration of machine learning models with TensorFlow.js, we can analyze user behavior and predict suspicious activities before they escalate into actual attacks. This approach enhances the security of our applications and helps us stay ahead of potential threats. Implementing such a strategy requires a thoughtful combination of authentication mechanisms, data collection, and machine learning, but the payoff in terms of security is well worth the effort.
Have you ever chosen a technology without considering alternatives? How significant is the research behind selecting a reasonable tech stack? How would you approach evaluating the suitable options? In this article, we'll focus on Node.js alternatives and the core aspects to consider when comparing other solutions with one of the most used web technologies.

The question of what technology to select confronts every team starting software development. The tech choice plays a critical role in implementing the outlined product, so the development team has to put considerable effort into finding solutions capable of meeting the set requirements. Comparing the available options is therefore a common step in the decision process, and it's good practice to consider different solutions and make a detailed technical comparison. Looking at the range of Node.js alternatives, companies get the opportunity to select the one best suited to their needs. First, let's start by discussing Node.js development and its specifics.

## What Is Node.js?

Bringing up the topic of Node.js alternatives serves certain goals: it helps you understand the technology better and learn more about its competitors. Making a choice is hard without detailed research and a deep understanding of the project's needs. Since Node.js has become a strong market representative and the most used web technology, it's often discussed among businesses and developers. Whether you're getting started with web development or belong to a professional team, Node.js is high on the list of primary choices. So, what makes this technology so competitive? How does it enable developers to create scalable, efficient, and event-driven applications? And why is it important to consider Node.js alternatives in parallel? Everything starts with the key features that Node.js provides:

- Non-blocking I/O: Node.js uses an event-driven, non-blocking I/O model. Instead of waiting for one operation to complete before moving on to the next, Node.js can handle multiple tasks concurrently, which is particularly useful for applications involving network or file system operations (see the short sketch after this list).
- Single programming language: Node.js allows developers to use JavaScript both on the client side and on the server side. Teams can use the same programming language for both ends of their application, which can lead to more consistent and streamlined development.
- Vast ecosystem: Node.js has a rich ecosystem of libraries and packages available through npm, the Node Package Manager. This makes it easy for developers to incorporate pre-built modules into their applications, saving time and effort.
- Scalability: Due to its event-driven architecture, Node.js is well-suited for building highly scalable applications that need to handle a large number of simultaneous connections. This is particularly beneficial for real-time applications like chat applications, online gaming, and collaborative tools.
- Community support: Node.js has a strong and active community that continuously contributes to its development, updates, and improvement. This community support ensures that the platform remains up-to-date and responsive to emerging needs.
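To make the non-blocking model concrete, here's a minimal illustrative sketch (not from the original article): the file read is kicked off, the event loop keeps running, and the callback fires once the I/O completes.

```javascript
// Minimal illustration of non-blocking I/O in Node.js.
const fs = require('fs');

console.log('Start reading file...');
fs.readFile('./data.txt', 'utf8', (err, contents) => {
  // This callback runs later, once the I/O has finished.
  if (err) return console.error(err);
  console.log('File length:', contents.length);
});
// Executes immediately, without waiting for the read above to finish.
console.log('Doing other work while the read is in flight');
```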
Node.js is commonly used to build various types of applications, including web servers, APIs, microservices, real-time applications, IoT applications, and more. It has gained significant popularity in the web development community and has been used by numerous companies to create efficient and performant applications.

## What Are the Top Node.js Alternatives?

Detailed research on a technology also includes checking on its competitors. This step outlines better opportunities and unveils the required functionality of each technology. As a result, businesses and developers gain a clear understanding of the capabilities of the chosen solutions.

### Java as an Alternative to Node.js

Among viable Node.js alternatives, many teams consider Java, a multipurpose programming language. Since it's built around the principles of object-oriented programming, it encourages modularity and reusability of code. Both technologies have distinct characteristics that make them suitable for various types of applications and scenarios. Let's consider some of the features that differentiate Java from other Node.js alternatives.

- Type: Java is a multipurpose programming language, while Node.js is a runtime environment using JavaScript as its programming language.
- Concurrency: Node.js excels in handling a large number of simultaneous connections due to its event-driven, non-blocking nature. Java also supports concurrency but may require more careful management of threads.
- Performance: Java's JVM-based execution can provide consistent performance across platforms, whereas Node.js's non-blocking architecture can lead to high performance for applications that involve many concurrent connections.
- Ecosystem: Java has a mature and extensive ecosystem with a great range of frameworks and libraries for various purposes. Node.js has a vibrant and rapidly growing ecosystem thanks to its npm repository.
- Learning curve: Java might have a steeper learning curve due to its static typing and broader language features. JavaScript used with Node.js is generally considered easier to learn, especially for developers with web development experience.
- Use cases: Java is commonly used for enterprise applications, Android app development, and larger-scale systems. Node.js is often chosen for real-time applications, APIs, and lightweight microservices.

In summary, Java excels in versatility and enterprise applications, while Node.js shines in building scalable, real-time applications with its event-driven, non-blocking architecture. The choice between them often depends on the specific requirements and the developers' familiarity with the language and ecosystem.

### ASP.NET as an Alternative to Node.js

It's hard to discuss web technologies without bringing up .NET, a strong market competitor that also belongs among Node.js alternatives, as it's often leveraged in web development. .NET is a developer platform with tools, programming languages, and libraries for building various applications. Its well-known web framework, ASP.NET, is widely used for creating web applications and services.

- Type: ASP.NET is a web framework that primarily supports C# and other .NET languages.
- Concurrency: The .NET platform follows a more traditional, server-centric approach, while Node.js introduces an event-driven, non-blocking paradigm.
- Performance: Node.js is known for its lightweight and efficient event-driven architecture, which can lead to impressive performance in certain use cases. ASP.NET also offers good performance, and the choice between the two might depend on the specific workload and optimizations.
- Development tools: Both ecosystems have robust development tools. Visual Studio is a powerful IDE for .NET, while Node.js development often leverages lightweight editors along with tools like Visual Studio Code.
- Community: ASP.NET is favored for its strong .NET community and official support from Microsoft. Node.js has a large and active open-source community with support from various organizations and developers.
- Learning curve: ASP.NET may have a steeper learning curve, especially for those new to C# and the Microsoft ecosystem. Node.js is relatively easier to learn, especially for developers familiar with JavaScript.
- Use cases: ASP.NET allows developers to build dynamic web applications, APIs, and web services using a variety of programming languages, with C# being the most common choice. Node.js is particularly popular for building scalable and real-time applications, such as APIs, web servers, and networking tools.

ASP.NET belongs to the versatile .NET platform suitable for delivering various application types, while Node.js is specialized for building real-time, event-driven applications. The choice always depends on the specific requirements and goals of your project.

### Python as an Alternative to Node.js

The next Node.js alternative is Python, a versatile, high-level programming language known for its simplicity and readability. It's used for a wide range of applications, including web development, data analysis, scientific computing, machine learning, automation, and more. Here are some of the important features to focus on.

- Type: Python is a high-level, interpreted, general-purpose programming language.
- Concurrency: Python's global interpreter lock can limit its performance in multi-core scenarios, while Node.js is designed for asynchronous, non-blocking I/O, making it great for handling many simultaneous connections.
- Performance: Node.js is optimized for handling high concurrency and I/O-bound tasks. Python is versatile and well-suited for various tasks, from web development to scientific computing; its performance depends on the specific use case and libraries being used.
- Ecosystem: Both languages have robust ecosystems, but Python's is more diverse due to its broader range of applications. Python provides a vast ecosystem of third-party libraries and frameworks that extend its capabilities. For example, Django, a popular web development framework, is often considered among Node.js alternatives.
- Community: Both communities embrace open source, but Python's longer history has led to a more established culture of collaboration. Python's community is broader in terms of application domains, while Node.js's community is more specialized in web and real-time development.
- Learning curve: Python's easy-to-read syntax can make it more approachable for beginners, while Node.js can be advantageous for front-end developers already familiar with JavaScript.
- Use cases: Python is versatile and well-suited for a wide variety of tasks, while Node.js excels in building real-time, event-driven, and highly scalable applications. Both have rich ecosystems, but Python's breadth extends across various domains, while Node.js is particularly strong for web and network-related applications.

Python's ease of learning and its widespread use in various industries have contributed to its position as one of the most popular programming languages.
In many cases, Node.js tends to excel in scenarios requiring rapid, asynchronous responses, while Python is often chosen for its ease of use, wide ecosystem, and diverse application domains.

### Django as an Alternative to Node.js

Another technology to consider among Node.js alternatives is Django, a high-level web framework written in Python. It's commonly used for web development but brings a different approach, ecosystem, and set of use cases compared to Node.js. Let's consider some of the core details.

- Type: Django is a web framework that follows the MVT (model-view-template) architectural pattern. Besides, Django uses Python, while Node.js uses JavaScript; the final choice often depends on familiarity with the language or the team's expertise.
- Architecture: Django enforces a specific architecture, while Node.js provides more flexibility in choosing an architecture or combination of libraries. The decision is influenced by the nature of the project's requirements and developers' preferences.
- Asynchronous handling: Node.js excels at handling a large number of concurrent connections due to its non-blocking nature. Django's asynchronous capabilities have improved in recent versions, but Node.js is generally considered better suited for high-concurrency scenarios.
- Ecosystem: Django has a rich ecosystem of built-in features and a wide range of third-party packages available through pip, Python's package manager. Node.js presents a vast ecosystem of modules available through npm for various tasks.
- Learning curve: Django's comprehensive documentation and "batteries-included" philosophy can lead to quicker development for those already familiar with Python. Node.js might have a steeper learning curve, especially if you're new to JavaScript on the server side.
- Use cases: Django is often favored for content-heavy applications, e-commerce platforms, and applications requiring rapid development. Node.js is well-suited for real-time applications, APIs, microservices, and applications with a high degree of interactivity.

The choice between Django and Node.js depends on your project's requirements, your team's expertise, and your personal preferences. Django is often chosen for its comprehensive features and security, while Node.js is preferred for real-time and asynchronous applications.

### Ruby on Rails as an Alternative to Node.js

Ruby on Rails (RoR) is another alternative to Node.js, with its convention-over-configuration approach. This web framework is an excellent choice for teams looking to rapidly prototype and develop applications while benefiting from a well-defined structure and a rich ecosystem of pre-built solutions.

- Type: Ruby on Rails is a full-stack web application framework written in the Ruby programming language.
- Flexibility: Ruby on Rails has a defined structure and set of conventions, which can speed up development but might limit flexibility in certain architectural decisions or customizations. Node.js offers more flexibility in architecture and design choices, allowing developers to craft solutions fitting project-specific needs.
- Performance: Ruby on Rails might be less performant in certain scenarios due to its synchronous nature, although optimizations and caching can help mitigate this. Node.js can handle high levels of concurrency efficiently, making it perform well for certain applications.
- Ecosystem: Ruby on Rails has a well-established ecosystem of gems that provide ready-made solutions for common tasks, saving development time. At the same time, Node.js has a wider range of use cases and a massive library repository.
- Community: Both RoR and Node.js have active communities, but Ruby's community is often associated with its focus on developer experience and creativity, while Node.js is known for its scalability and asynchronous capabilities.
- Learning curve: Ruby on Rails provides a set of conventions and guidelines that can make it easier for beginners to get started. Node.js might have a steeper learning curve for beginners due to its asynchronous programming concepts, event-driven architecture, and the need to manage dependencies and architectural choices more independently.
- Use cases: RoR is great for quickly building web applications, especially MVPs. Node.js is particularly useful for real-time applications, APIs, and applications with heavy I/O.

It's important to remember that the choice between Ruby on Rails and Node.js depends on various factors, including project requirements, the development team's expertise, and the specific goals of the application you're building. We should note, however, that the RoR market share has significantly decreased over the past few years, while Node.js development keeps growing.

## Node.js Alternatives: How To Make a Choice

When considering alternatives to Node.js, it's important to evaluate the options based on project-specific requirements, team expertise, and the features and characteristics the company values most. There's no one-size-fits-all answer, and the best alternative for your project will depend on specific needs and constraints. It's often a good idea to follow a step-by-step process for evaluating Node.js alternatives, and to consult experienced developers or technical experts who have worked with the options you're considering.

### Defining Project Requirements

Well-defined requirements have always been crucial for a project's success. They enable the team to reach a common understanding of product goals and efficient ways to implement the outlined solutions. The development team covers scope control, resource allocation, risk identification, time management, cost estimation, etc. Unsurprisingly, the technology choice deserves special attention. Knowing your project requirements is the first step to efficiently evaluating Node.js and its alternatives.

### Considering Constraints

Project constraints are essential factors that can influence the planning, execution, and launch of a project. It's crucial to consider them from the beginning to ensure that your project stays on track and meets its objectives. The technology choice is supposed to streamline the overall process; at the same time, it can negatively affect product execution if not properly chosen and managed. The team has to find the technology that fits the outlined software.

### Researching Technology Options

When discussing Node.js alternatives, teams naturally check both Node.js and its viable options. It's essential to conduct comprehensive research on technology functionality, platform compatibility, community support, developer availability, development rates, etc. It's also important to stay updated, as the tech market evolves quickly; it might even be necessary to iterate on and adapt the technology stack. Node.js has become a common choice due to both its technical characteristics and its market popularity.
### Evaluating the Pros and Cons of Main Competitors

As your company narrows down the suitable options, it discovers viable alternatives to Node.js. You need to assess the pros and cons of each technology and find out how it could benefit your project. That may involve reading documentation, articles, and reviews, and consulting with experts if possible. Make sure to consider aspects such as:

- Security and scalability
- Performance
- Development speed and ease of use
- Learning curve
- Development cost
- Community and developer support

### Making a Decision

Based on your research, evaluations, and analysis, make an informed decision on the technology stack for your project. Remember that there is never a one-size-fits-all solution; it's about choosing the option that covers your business-specific needs and brings all the functionality necessary to deliver successful projects. The company also needs to stay updated on the latest advancements within the chosen technology to ensure the project remains current and secure.

As a result, any of the mentioned technologies can become a good match for specific projects. The main thing is that the choice has to be supported by the features and benefits your product gets from it. Some options, like Node.js and ASP.NET, are stronger market representatives with wider adoption, but the final choice depends only on the team's needs and preferences.

## Conclusion

The technology choice is a vital part of the overall development process. The team takes responsibility for making informed decisions and selecting the best solutions; later, those choices play a crucial role in delivering a full-fledged product that meets functional and non-functional requirements. By discussing Node.js alternatives, we've discovered other viable options for software development, and in the process defined the strengths of Node.js and why it's so popular in the market. As with any tech choice, teams have to put in the effort to find suitable options. Node.js has come a long way and doesn't seem to be going anywhere anytime soon.
JavaScript is a pivotal technology for web applications. With the emergence of Node.js, JavaScript became relevant for both client-side and server-side development, enabling a full-stack development approach with a single programming language. Both Node.js and Apache Kafka are built around event-driven architectures, making them naturally compatible for real-time data streaming. This blog post explores open-source JavaScript clients for Apache Kafka and discusses the trade-offs and limitations of JavaScript Kafka producers and consumers compared to stream processing technologies such as Kafka Streams or Apache Flink.

## JavaScript: A Pivotal Technology for Web Applications

JavaScript serves as the backbone of interactive and dynamic web experiences. Here are several reasons JavaScript is essential for web applications:

- Interactivity: JavaScript enables the creation of highly interactive web pages. It responds to user actions in real time, allowing for features such as interactive forms, animations, games, and dynamic content updates without the need to reload the page.
- Client-side scripting: Running in the user's browser, JavaScript reduces server load by handling many tasks on the client's side. This can lead to faster page loading times and a smoother user experience.
- Universal browser support: All modern web browsers support JavaScript, making it a universally accessible programming language for web development. This wide support ensures that JavaScript-based features work consistently across different browsers and devices.
- Versatile frameworks and libraries: The JavaScript ecosystem includes a vast array of frameworks and libraries (such as React, Angular, and Vue.js) that streamline the development of web applications, from single-page applications to complex web-based software. These tools offer reusable components, two-way data binding, and other features that enhance productivity and maintainability.
- Real-time applications: JavaScript is ideal for building real-time applications, such as chat apps and live streaming services, thanks to technologies like WebSockets and frameworks that support real-time communication.
- Rich web APIs: JavaScript can access a wide range of web APIs provided by browsers, allowing for complex features such as manipulating the Document Object Model (DOM), making HTTP requests (AJAX or the Fetch API), handling multimedia, and tracking user geolocation.
- SEO and performance optimization: Modern JavaScript frameworks and server-side rendering solutions help in building fast-loading web pages that are also search engine friendly, addressing one of the traditional criticisms of JavaScript-heavy applications.

In short, JavaScript's capabilities offer the tools and flexibility needed to build everything from simple websites to complex, high-performance web applications.

## Full-Stack Development: JavaScript for the Server Side With Node.js

With the advent of Node.js, JavaScript is no longer used only for the client side of web applications. It serves both client-side and server-side development, enabling a full-stack approach with a single programming language. This simplifies the development process and allows for seamless integration between the frontend and backend.
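As a tiny illustration of server-side JavaScript (a minimal sketch using only Node's core http module), the same language that runs in the browser can serve HTTP on the backend:

```javascript
// Minimal server-side JavaScript: an HTTP server using only Node's core modules.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello from the backend' }));
});

server.listen(3000, () => console.log('Listening on http://localhost:3000'));
```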
Using JavaScript for backend applications, especially with Node.js, offers several advantages:

- Unified language for frontend and backend: JavaScript on the backend allows developers to use the same language across the entire stack, simplifying development and reducing context switching. This can lead to more efficient development processes and easier maintenance.
- High performance: Node.js is a popular JavaScript runtime built on Chrome's V8 engine, known for its speed and efficiency. Node.js uses a non-blocking, event-driven architecture, which makes it particularly suitable for I/O-heavy operations and real-time applications like chat applications and online gaming.
- Vast ecosystem: JavaScript has one of the largest ecosystems, powered by npm (Node Package Manager). npm provides a vast library of modules and packages that can be easily integrated into your projects, significantly reducing development time.
- Community support: The JavaScript community is one of the largest and most active, offering a wealth of resources, frameworks, and tools. This support can be invaluable for solving problems, learning new skills, and staying up to date with the latest technologies and best practices.
- Versatility: JavaScript with Node.js can be used for developing a wide range of applications, from web and mobile applications to serverless functions and microservices. This versatility makes it a go-to choice for many developers and companies.
- Real-time data processing: JavaScript is well-suited for applications requiring real-time data processing and updates, such as live chats, online gaming, and collaboration tools, because of its non-blocking nature and efficient handling of concurrent connections.
- Cross-platform development: Tools like Electron and React Native allow JavaScript developers to build cross-platform desktop and mobile applications, respectively, further extending JavaScript's reach beyond the web.

Node.js's efficiency and scalability, combined with the ability to use JavaScript for both frontend and backend development, have made it a popular choice among developers and companies around the world. Its non-blocking, event-driven I/O characteristics are a perfect match for an event-driven architecture.

## JavaScript and Apache Kafka for Event-Driven Applications

Using Node.js with Apache Kafka offers several benefits for building scalable, high-performance applications that require real-time data processing and streaming capabilities. Here are several reasons integrating Node.js with Apache Kafka is helpful:

- Unified language for full-stack development: Node.js allows developers to use JavaScript across both the client and server sides, simplifying development workflows and enabling seamless integration between frontend and backend systems, including Kafka-based messaging or event streaming architectures.
- Event-driven architecture: Both Node.js and Apache Kafka are built around event-driven architectures, making them naturally compatible. Node.js can efficiently handle Kafka's real-time data streams, processing events asynchronously and without blocking.
- Scalability: Node.js is known for its ability to handle concurrent connections efficiently, which complements Kafka's scalability. This combination is ideal for applications that require handling high volumes of data or requests simultaneously, such as IoT platforms, real-time analytics, and online gaming.
- Large ecosystem and community support: Node.js's extensive npm ecosystem includes Kafka libraries and tools that facilitate the integration. This support speeds up development, offering pre-built modules for connecting to Kafka clusters, producing and consuming messages, and managing topics.
- Real-time data processing: Node.js is well-suited for building applications that require real-time data processing and streaming, a core strength of Apache Kafka. Developers can leverage Node.js to build responsive and dynamic applications that process and react to Kafka data streams in real time.
- Microservices and cloud-native applications: The combination of Node.js and Kafka is powerful for developing microservices and cloud-native applications. Kafka serves as the backbone for inter-service communication, while Node.js is used to build lightweight, scalable service components.
- Flexibility and speed: Node.js enables rapid development and prototyping, so Kafka environments can implement new streaming data pipelines and applications quickly.

In summary, using Node.js with Apache Kafka leverages the strengths of both technologies to build efficient, scalable, and real-time applications, making the combination an attractive choice for many developers.

## Open Source JavaScript Clients for Apache Kafka

Various open-source JavaScript clients exist for Apache Kafka. Developers use them for everything from simple message production and consumption to complex streaming applications. When choosing a JavaScript client for Apache Kafka, consider factors like performance requirements, ease of use, community support, commercial support, and compatibility with your Kafka version and features.

Several clients and libraries can help you integrate Kafka into your JavaScript or Node.js applications. Here are some of the notable JavaScript clients for Apache Kafka from the past years (a short producer/consumer sketch follows this list):

- kafka-node: One of the original Node.js clients for Apache Kafka, kafka-node provides a straightforward and comprehensive API for interacting with Kafka clusters, including producing and consuming messages.
- node-rdkafka: A high-performance library for Apache Kafka that wraps the native librdkafka library. It's known for its robustness and is suitable for heavy-duty operations, offering advanced features and high throughput for both producing and consuming messages.
- KafkaJS: An Apache Kafka client for Node.js written entirely in JavaScript, KafkaJS focuses on simplicity and ease of use and supports the latest Kafka features. It is designed to be lightweight and flexible, making it a good choice for applications that require a simple and efficient way to interact with a Kafka cluster.
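To give a feel for the programming model, here's a minimal KafkaJS producer/consumer sketch (an illustrative example of mine, assuming a local broker on localhost:9092 and a hypothetical orders topic):

```javascript
// Minimal KafkaJS sketch: produce one message, then consume from the same topic.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'demo-app', brokers: ['localhost:9092'] });

const producer = kafka.producer();
const consumer = kafka.consumer({ groupId: 'demo-group' });

async function run() {
  // Produce a single message to the (hypothetical) "orders" topic.
  await producer.connect();
  await producer.send({
    topic: 'orders',
    messages: [{ key: 'order-1', value: JSON.stringify({ total: 42 }) }],
  });

  // Consume messages from the same topic, starting at the beginning.
  await consumer.connect();
  await consumer.subscribe({ topic: 'orders', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log(`${topic}[${partition}] ${message.key}: ${message.value}`);
    },
  });
}

run().catch(console.error);
```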
## Challenges With Open Source Projects in General

Open source projects are only successful if an active community maintains them. Familiar issues with open source projects therefore include:

- Lack of documentation: Incomplete or outdated documentation can hinder new users and contributors.
- Complex contribution process: A complicated process for contributing can deter potential contributors. This is not purely a disadvantage, as it guarantees code reviews and quality checks of new commits.
- Limited support: Relying on community support can lead to slow issue resolution times. Critical projects often require commercial support from a vendor.
- Project abandonment: Projects can become inactive if maintainers lose interest or lack time.
- Code quality and security: Ensuring high code quality and addressing security vulnerabilities is challenging if nobody is responsible and no critical SLAs are in place.
- Governance issues: Disagreements on project direction or decisions can lead to forks or conflicts.

## Issues With Kafka's JavaScript Open Source Clients

Some of the above challenges apply to the available open-source JavaScript clients for Kafka. We have seen maintenance inactivity and quality issues as the biggest challenges in these projects. Be aware that it is difficult for maintainers to keep up not only with issues but also with new KIPs (Kafka Improvement Proposals); the Apache Kafka project is active and ships new features in new releases two to three times a year.

kafka-node, KafkaJS, and node-rdkafka are all on different parts of the "unmaintained" spectrum. For example, kafka-node has not had a commit in five years, and KafkaJS had an open call for maintainers around a year ago. Additionally, commercial support was not available for enterprises to get guaranteed response times and help in case of production issues. Unfortunately, production issues happened regularly in critical deployments. For this reason, Confluent open-sourced a new JavaScript client for Apache Kafka with guaranteed maintenance and commercial support.

## Confluent's Open Source JavaScript Client for Kafka, Powered by librdkafka

Confluent provides a Kafka client for JavaScript. This client works with Confluent Cloud (fully managed service) and Confluent Platform (self-managed deployments), but it is an open-source project and works with any Apache Kafka environment. The JavaScript client for Kafka comes with a long-term support and development strategy. The source code is available on GitHub, and the client is available via npm (Node Package Manager), the default package manager for Node.js.

This JavaScript client is a librdkafka-based library (derived from node-rdkafka) with API compatibility for the very popular KafkaJS library. Users of KafkaJS can easily migrate their code over (details are in the migration guide in the repo). At the time of writing in February 2024, the new Confluent JavaScript Kafka client is in early access and not for production usage; GA is planned for later in 2024. Please review the GitHub project, try it out, and share feedback and issues when you build new projects or migrate from other JavaScript clients.

## What About Stream Processing?

Keep in mind that Kafka clients only provide a produce and consume API. The real potential of event-driven architectures, however, comes with stream processing: a computing paradigm that allows for the continuous ingestion, processing, and analysis of data streams in real time. Event stream processing enables immediate responses to incoming data without the need to store and process it in batches. Stream processing frameworks like Kafka Streams or Apache Flink offer several key features that enable real-time data processing and analytics:

- State management: Stream processing systems can manage state across data streams, allowing for complex event processing and aggregation over time.
- Windowing: They support processing data in windows, which can be based on time, data size, or other criteria, enabling temporal data analysis.
- Exactly-once processing: Advanced systems provide guarantees for exactly-once processing semantics, ensuring data is processed once and only once, even in the event of failures.
- Integration with external systems: They offer connectors for integrating with various data sources and sinks, including databases, message queues, and file systems.
- Event time processing: They can handle out-of-order data based on the time events actually occurred, not just when they are processed.

Stream processing frameworks are NOT available for most programming languages, including JavaScript. Therefore, if you live in the JavaScript world, you have three options:

1. Build all the stream processing capabilities yourself. Trade-off: a lot of work!
2. Leverage a stream processing framework in SQL (or another programming language). Trade-off: this is not JavaScript!
3. Don't do stream processing and stay with APIs and databases. Trade-off: cannot solve many innovative use cases.

Apache Flink provides APIs for Java, Python, and ANSI SQL. SQL is an excellent option to complement JavaScript code. In a fully managed data streaming platform like Confluent Cloud, you can leverage serverless Flink SQL for stream processing and combine it with your JavaScript applications.

## One Programming Language Does NOT Solve All Problems

JavaScript has broad adoption and sweet spots for client and server development. The new Kafka client for JavaScript from Confluent is open source and has a long-term development strategy, including commercial support. Easy migration from KafkaJS makes adoption very simple. If you can live with the dependency on librdkafka (which is acceptable in most situations), then this is the way to go for JavaScript Node.js development with Kafka producers and consumers.

JavaScript is NOT an all-rounder. The data streaming ecosystem is broad, open, and flexible. Modern enterprise architectures leverage microservices or data mesh principles, so you can choose the right technology for each application. Learn how to build data streaming applications using your favorite programming language and open-source Kafka client by looking at Confluent's developer examples: JavaScript/Node.js, Java, HTTP/REST, C/C++/.NET, Kafka Connect DataGen, Go, Spring Boot, Python, Clojure, Groovy, Kotlin, Ruby, Rust, and Scala.

Which JavaScript Kafka client do you use? What are your experiences? Or do you already develop most applications with stream processing using Kafka Streams or Apache Flink? Let's connect on LinkedIn and discuss it!
As developers, we're constantly seeking ways to streamline our workflows and enhance the performance of our applications. One tool that has gained significant traction in the React ecosystem is Redux Toolkit Query (RTK Query). This library, built on top of Redux Toolkit, provides a solution for managing asynchronous data fetching and caching. In this article, we'll explore the key benefits of using RTK Query.

## The Benefits of Using RTK Query: A Scalable and Efficient Solution

### 1. Simplicity and Ease of Use

One of the most compelling advantages of RTK Query is its simplicity. Here is how one can easily define endpoints for various operations, such as querying data and creating, updating, and deleting resources. The injectEndpoints method allows you to define these endpoints in a concise and declarative manner, reducing boilerplate code and improving readability.

```typescript
booksApi.injectEndpoints({
  endpoints: builder => ({
    getBooks: builder.query<IBook[], void | string[]>({
      // ...
    }),
    createBundle: builder.mutation<any, void>({
      // ...
    }),
    addBook: builder.mutation<string, AddBookArgs>({
      // ...
    }),
    // ...
  }),
});
```

### 2. Automatic Caching and Invalidation

One of the standout features of RTK Query is its built-in caching mechanism. The library automatically caches the data fetched from your endpoints, ensuring that subsequent requests for the same data are served from the cache, reducing network overhead and improving performance. These examples demonstrate how RTK Query handles cache invalidation using the invalidatesTags option.

```typescript
createBundle: builder.mutation<any, void>({
  invalidatesTags: [BooksTag],
  // ...
}),
addBook: builder.mutation<string, AddBookArgs>({
  invalidatesTags: [BooksTag],
  // ...
}),
```

By specifying the BooksTag, RTK Query knows which cache entries to invalidate when a mutation (e.g., createBundle or addBook) is performed, ensuring that the cache stays up to date and consistent with the server data.

### 3. Scalability and Maintainability

As your application grows in complexity, managing asynchronous data fetching and caching can become increasingly challenging. RTK Query's modular approach and separation of concerns make it easier to scale and maintain your codebase. Each endpoint is defined independently, allowing you to easily add, modify, or remove endpoints as needed without affecting the rest of your application.

```typescript
endpoints: builder => ({
  getBooks: builder.query<IBook[], void | string[]>({
    // ...
  }),
  createBundle: builder.mutation<any, void>({
    // ...
  }),
  // ...
})
```

This modular structure promotes code reusability and makes it easier to reason about the different parts of your application, leading to better maintainability and collaboration within your team.

### 4. Efficient Data Fetching and Normalization

RTK Query provides built-in support for efficient data fetching and normalization. The queryFn below shows how you can fetch data from multiple sources and normalize the results using the toSimpleBooks function. The original implementation can be optimized to reduce code duplication and improve readability; here's an optimized version of the code:

```typescript
async queryFn(collections) {
  try {
    // Fetch both collections concurrently.
    const [snapshot, snapshot2] = await Promise.all(
      collections.map(fetchCachedCollection)
    );
    const success = await getBooksBundle();
    const books = success
      ? toSimpleBooks([...snapshot.docs, ...snapshot2.docs])
      : [];
    return { data: books };
  } catch (error) {
    return { error };
  }
}
```
In this optimized version, we're using Promise.all to fetch the two collections (latest-books-1-query and latest-books-2-query) concurrently. This approach ensures that we don't have to wait for one collection to finish fetching before starting the other, potentially reducing the overall fetching time. Additionally, we've moved the getBooksBundle call inside the try block, ensuring that it's executed only if the collections are fetched successfully. This change helps maintain a clear separation of concerns and makes the code easier to reason about.

By leveraging RTK Query's efficient data fetching capabilities and employing best practices like Promise.all, you can ensure that your application fetches and normalizes data in an optimized and efficient manner, leading to improved performance and a better user experience.

### 5. Ease of Use With Exposed Hooks

One of the standout features of RTK Query is the ease of use it provides through its exposed hooks. Finally, I like to export the generated typed hooks so you can use them (i.e., useGetBooksQuery, useCreateBundleMutation, etc.) to interact with the defined endpoints within your React components. These hooks abstract away the complexities of managing asynchronous data fetching and caching, allowing you to focus on building your application's logic.

```typescript
export const {
  useGetBooksQuery,
  useLazyGetBooksQuery,
  useCreateBundleMutation,
  useAddBookMutation,
  useDeleteBookMutation,
  useUpdateBookMutation,
} = booksApi;
```

By leveraging these hooks, you can fetch data, trigger mutations, and handle loading and error states, all while benefiting from the caching and invalidation mechanisms provided by RTK Query.
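To show these hooks in action, here's a small illustrative component sketch (not from the original article; the IBook field names and the AddBookArgs shape used here are hypothetical):

```tsx
import React from 'react';
import { useGetBooksQuery, useAddBookMutation } from './booksApi';

export function BooksList() {
  // The query hook handles fetching, caching, and loading/error state for us.
  const { data: books, isLoading, error } = useGetBooksQuery();
  const [addBook, { isLoading: isAdding }] = useAddBookMutation();

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <div>
      <ul>
        {books?.map((book) => (
          // "id" and "title" are hypothetical IBook fields.
          <li key={book.id}>{book.title}</li>
        ))}
      </ul>
      {/* Triggering the mutation invalidates BooksTag, so the list refetches. */}
      <button disabled={isAdding} onClick={() => addBook({ title: 'New Book' })}>
        Add book
      </button>
    </div>
  );
}
```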
## Conclusion

By adopting RTK Query, you gain access to a solution for managing asynchronous data fetching and caching, while experiencing the simplicity, scalability, and ease of use provided by its exposed hooks. Whether you're building a small application or a large-scale project, RTK Query can help you streamline your development process and deliver high-performance, responsive applications. The code within this post is taken from a live app in production, ReadM, a real-time AI platform for reading fluency assessments and insights.

It's increasingly common to see web applications incorporate custom file upload forms, and popular runtime environments like Node.js have played a noteworthy role in making this possible. This has, in turn, converted form upload entry points into a burgeoning attack vector, as threat actors are now incentivized to exploit insecure form uploads in targeted attacks using specially crafted malicious files. In this article, we'll briefly examine why the popularity of custom form upload handlers has increased in recent years, and we'll subsequently look at a deterministic threat detection API that can help protect a Node.js form upload application.

## Defining File Upload Forms

When we talk about "file upload forms" in this article, we're referring to HTML web forms that allow users to select and upload files from their computer (or device) to a web server. The form itself is composed of basic HTML elements, and it can simultaneously collect files and text-input data before sending that collection to a web server as multipart/form-data HTTP content. That collection of data is subsequently processed by a server-side application, which determines where each piece of data (text or file bytes) should go, among other things.

Here's a rudimentary example of an HTML form that captures a user's first name, last name, and email address in addition to capturing a file from their file system:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>File Upload Form</title>
</head>
<body>
  <h2>Upload File Form</h2>
  <form action="/upload" method="POST" enctype="multipart/form-data">
    <label for="firstName">First Name:</label><br>
    <input type="text" id="firstName" name="firstName" required><br><br>

    <label for="lastName">Last Name:</label><br>
    <input type="text" id="lastName" name="lastName" required><br><br>

    <label for="email">Email Address:</label><br>
    <input type="email" id="email" name="email" required><br><br>

    <label for="file">Select File:</label><br>
    <input type="file" id="file" name="file" accept=".pdf,.docx" required><br><br>

    <input type="submit" value="Upload File">
  </form>
</body>
</html>
```

## The Rise of Form Uploads

The implementation of an HTML form is extremely simple, and the benefits of incorporating one in just about any web application – whether that be an e-commerce app, social media app, resume upload portal, or even an insurance claims portal – are easy to understand. User data and user-generated content are twinned kings of the digital age, and form uploads capture both in one fell swoop.

Putting the business benefits aside, we can attribute the increased viability of custom form uploads to a few important technology-related factors. On the one hand, we can point to the steadily increasing availability of cloud computing. "Pay-as-you-go" cloud storage has never been more affordable, and cloud storage has quickly gone from an "exciting new concept" to an "industry standard model" in what seems like no time at all. It's easy for startups and growing businesses of all shapes and sizes to store user data and user-generated content on scalable web servers hosted by large public cloud providers. On the other hand, building server-side form upload handlers (let alone server-side applications of any kind) has never been easier – especially with the availability and popularity of an open-source, cross-platform JavaScript runtime environment like Node.js.
Node.js is a runtime environment that's more or less immediately accessible for JavaScript developers (the most common developer demographic in the world), reducing a significant barrier to entry in server-side development that existed just 15 years ago. Compared, for instance, to an equivalent undertaking in .NET or Java, building a form upload handler (i.e., a server-side application that handles multipart/form-data inputs from an HTML form) in Node.js is relatively straightforward. It largely hinges on the installation of exceedingly popular, open-source, and easy-to-use middleware frameworks like Express.js and Multer. Using the Node Package Manager (npm, the default package manager for Node.js), we can simply run commands like npm install express and npm install multer and then import those modules when we set up our server.

```javascript
const express = require('express');
const multer = require('multer');
```

While Express.js is a flexible framework that offers a robust set of features for all sorts of web (and mobile) applications, Multer exclusively handles multipart/form-data requests, and the two integrate seamlessly. Multer can handle multiple files uploaded through different fields in a single form, and it's easy to tie a Multer upload handler into Node.js code that sends files to a cloud storage instance (e.g., AWS or Azure). Multer's fantastic efficiency can be attributed to the fact that it's written on top of Busboy, a powerful module designed to parse incoming HTML form data.

A Node.js form handler sending data to a cloud storage bucket might look something like the following generic example. With some extra information involved, this would send file uploads from a client-side HTML form to an AWS S3 storage bucket:

```javascript
const express = require('express');
const multer = require('multer');
const AWS = require('aws-sdk');
const path = require('path');

const app = express();
const port = 'YOUR PORT HERE';

// This configures the AWS SDK
AWS.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION
});

const s3 = new AWS.S3();

// Here we configure Multer for file upload
const storage = multer.memoryStorage();
const upload = multer({ storage: storage });

// Here we serve the HTML form
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'index.html'));
});

// Here we provide the route to handle the file upload
app.post('/upload', upload.single('file'), (req, res) => {
  if (!req.file) {
    return res.status(400).send('No file uploaded.');
  }

  // Here we define some parameters for upload to our AWS S3 bucket
  const params = {
    Bucket: process.env.S3_BUCKET_NAME,
    Key: Date.now() + '-' + req.file.originalname,
    Body: req.file.buffer
  };

  // Here we actually upload our file to the S3 bucket
  s3.upload(params, (err, data) => {
    if (err) {
      console.error(err);
      return res.status(500).send('Error uploading file.');
    }
    res.send(`File uploaded successfully. ${data.Location}`);
  });
});

// Here we start the Express.js server and ask it to listen for incoming HTTP requests on a specific port
app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`);
});
```

Please note that this example is intentionally simplistic for demonstration purposes. Among a few other practical issues, it lacks robust error handling, and it assumes sensitive credentials and configuration for our AWS account have already been set as environment variables elsewhere.
## Understanding the File Upload Form Attack Vector

Once we build a Node/Express/Multer server application to capture file uploads, the more challenging problem becomes:

1. Scanning those files for traditional threats (e.g., viruses and malware)
2. Verifying the contents of those files for non-malware content threats (e.g., executables, scripts, macros, etc.)

At a high level, these are the two main groups of threats we can expect to face through a file upload entry point – though it's important to note that file upload threats, like any cybersecurity threat, can differ drastically in complexity and concealment. While files containing viruses and malware can often be detected by comparing file signatures with known malware "families," or by actively analyzing the intended behavior of a file's instructions in a threat sandbox, other types of threats are often more difficult to identify outright. File uploads can masquerade as one file type while containing the contents of another, and most traditional antivirus (AV) solutions are poorly equipped to identify threats in this scenario.

Let's say, for example, a fairly sophisticated threat actor decides to launch a targeted attack on our upstart business's file upload portal. Their goal is to upload a specially crafted fake PDF (composed of HTML/JavaScript) to our cloud storage instance. Eventually, when a user opens the PDF in their browser (e.g., to review the document's contents), the fake PDF will execute code that downloads malicious content onto the user's device from a remote server. This fake PDF carries a valid PDF extension, and it does not contain any traceable malware; the code is written from scratch. We can't rely on an AV solution to identify this threat, because no viruses or malware are involved. We also can't rely on our basic cloud storage subscription to deep-verify the PDF content for threats, as advanced threat detection likely lies outside the scope of an affordable cloud storage plan.

## Deterministic Threat Detection

One way we can mitigate this threat is by incorporating a deterministic content verification solution into our server-side file upload handler. Deterministic threat detection is characterized by predefined rules; in this case, that means we'll decide ahead of time which content can and cannot pass through our server application into cloud storage (or any storage location), and we'll ensure unsuspecting users can't gain access to the document as a result.

If we were to deterministically verify the contents of the JavaScript-injection PDF described in the earlier example, we could compare the fake PDF's contents with real PDF formatting standards and determine that the alleged PDF did not rigorously conform to those standards, despite presenting a valid PDF extension. After making this assessment, we could quarantine the file for analysis (to better understand the threats entering through our file upload portal), or we could simply delete the file outright and return a generic error message to the client-side attacker.

In the demonstration below, we'll look at one free-to-use threat detection API that integrates easily into our Node.js form upload handler. It combines deterministic content verification and signature-based virus scanning to provide a dynamic and flexible threat detection solution for our Node.js application.

## Demonstration

Using the ready-to-run Node.js code examples provided below, we can structure our threat detection API call in a few quick steps.
We can authorize our API calls with a free API key, and we can install the client SDK easily with a simple npm command. Let’s take care of that now. We can run the following command to install the client SDK: npm install cloudmersive-virus-api-client --save Alternatively, we could also add the Node client to our package.json: JSON "dependencies": { "cloudmersive-virus-api-client": "^1.1.9" } Next, we can use the below code examples as the basis to structure our API call within our Node.js application. We can replace the ‘YOUR API KEY’ placeholder string with our own API key. Note that the example sets each allow* option to false, matching the recommended defaults described in the inline comments: JavaScript var CloudmersiveVirusApiClient = require('cloudmersive-virus-api-client'); var fs = require('fs'); var defaultClient = CloudmersiveVirusApiClient.ApiClient.instance; // Configure API key authorization: Apikey var Apikey = defaultClient.authentications['Apikey']; Apikey.apiKey = 'YOUR API KEY'; var apiInstance = new CloudmersiveVirusApiClient.ScanApi(); var inputFile = Buffer.from(fs.readFileSync("C:\\temp\\inputfile").buffer); // File | Input file to perform the operation on. var opts = { 'allowExecutables': false, // Boolean | Set to false to block executable files (program code) from being allowed in the input file. Default is false (recommended). 'allowInvalidFiles': false, // Boolean | Set to false to block invalid files, such as a PDF file that is not really a valid PDF file, or a Word Document that is not a valid Word Document. Default is false (recommended). 'allowScripts': false, // Boolean | Set to false to block script files, such as PHP files, Python scripts, and other malicious content or security threats that can be embedded in the file. Set to true to allow these file types. Default is false (recommended). 'allowPasswordProtectedFiles': false, // Boolean | Set to false to block password protected and encrypted files, such as encrypted zip and rar files, and other files that seek to circumvent scanning through passwords. Set to true to allow these file types. Default is false (recommended). 'allowMacros': false, // Boolean | Set to false to block macros and other threats embedded in document files, such as Word, Excel and PowerPoint embedded macros, and other files that contain embedded content threats. Set to true to allow these file types. Default is false (recommended). 'allowXmlExternalEntities': false, // Boolean | Set to false to block XML External Entities and other threats embedded in XML files, and other files that contain embedded content threats. Set to true to allow these file types. Default is false (recommended). 'allowInsecureDeserialization': false, // Boolean | Set to false to block Insecure Deserialization and other threats embedded in JSON and other object serialization files, and other files that contain embedded content threats. Set to true to allow these file types. Default is false (recommended). 'allowHtml': false, // Boolean | Set to false to block HTML input in the top level file; HTML can contain XSS, scripts, local file accesses and other threats. Set to true to allow these file types. Default is false (recommended) [for API keys created prior to the release of this feature default is true for backward compatibility]. 'restrictFileTypes': "restrictFileTypes_example" // String | Specify a restricted set of file formats to allow as clean as a comma-separated list of file formats, such as .pdf,.docx,.png would allow only PDF, PNG and Word document files.
All files must pass content verification against this list of file formats; if they do not, the result will be returned as CleanResult=false. Set restrictFileTypes parameter to null or empty string to disable; default is disabled. }; var callback = function(error, data, response) { if (error) { console.error(error); } else { console.log('API called successfully. Returned data: ' + data); } }; apiInstance.scanFileAdvanced(inputFile, opts, callback); By default, the underlying deterministic content verification model will identify a range of content types as threats, including executables, invalid files, macros, scripts, password-protected files (commonly used to disguise threats via encryption), HTML, Object Linking and Embedding (OLE) objects, and other insecure content. We can customize threat rules in the request body to adjust threat detection as we see fit. We can also include a comma-separated list of acceptable file formats in the ‘restrictFileTypes’ parameter to limit our file upload threat surface. It’s worth noting we can enforce something similar at the middleware level using Multer’s ‘fileFilter’ option if we’d prefer. Let’s review an example response object. The following response was generated from processing an inert JavaScript injection PDF file (this file was designed for testing; it simply displays the message “you’ve been hacked!” when opened in a web browser): JSON { "CleanResult": false, "ContainsExecutable": false, "ContainsInvalidFile": true, "ContainsScript": false, "ContainsPasswordProtectedFile": false, "ContainsRestrictedFileFormat": false, "ContainsMacros": false, "ContainsXmlExternalEntities": false, "ContainsInsecureDeserialization": false, "ContainsHtml": false, "ContainsUnsafeArchive": false, "ContainsOleEmbeddedObject": false, "VerifiedFileFormat": ".pdf", "FoundViruses": null, "ContentInformation": { "ContainsJSON": false, "ContainsXML": false, "ContainsImage": false, "RelevantSubfileName": null } } The "CleanResult": false value indicates the file contains a threat, and the "ContainsInvalidFile": true response tells us why. We’ll notice that the "VerifiedFileFormat" string still identifies ".pdf" as the file type, despite the document being flagged as an invalid file; this reflects the fact that the file presents itself as a PDF and would likely have executed in a user's browser (or, potentially, in a vulnerable PDF rendering/processing application) had it reached one. Conclusion In this article, we’ve taken a high-level look at the growing popularity of file upload forms, the increasing viability of server-side form upload handlers (thanks to runtime environments like Node.js), and one example of a targeted file upload attack designed to exploit an insecure file upload form in a web application. In the end, we’ve walked through a quick demonstration of a deterministic threat detection API that can help protect our Node.js form from disguised malicious content.
Network graphs are a practical and effective tool in data visualization, particularly useful for illustrating the relationships and connections within complex systems. These charts are useful for understanding structures in various contexts, from social networks to corporate hierarchies. In this tutorial, we'll delve into a quick path to creating a compelling, interactive network graph using JavaScript. We'll use the Volkswagen Group as our example, mapping out its subsidiaries and product lines to showcase how network graphs can make complex organizational structures understandable and accessible. By the end of this step-by-step guide, you'll have a clear understanding of how to quickly construct and customize a JS-based network graph. Buckle up, as it's time to hit the road! Understanding Network Graphs Network graphs consist of nodes and edges — nodes represent entities such as individuals or organizations, while edges depict the relationships between them. These visuals are invaluable for dissecting and displaying the architecture of complex networks, revealing both overt and subtle connections. In practical terms, network graphs can help illustrate the hierarchy within a corporation, the interaction between different departments, or the flow of communication or resources. Visually, these graphs use various elements like node size, color, and edge thickness to convey information about the importance, type, and strength of relationships. Below is a preview of what we will create by the end of this tutorial — a fully interactive network graph that not only serves as a visual map of the Volkswagen Group but also utilizes the dynamic features of JavaScript for a deeper exploration of data. Step-By-Step Guide To Building a Network Graph Creating a network graph involves several key steps, each contributing to the final outcome. Here’s a brief overview of what we'll cover in this tutorial: Creating an HTML page: This is where we set up the structure for our visualization, providing a canvas on which our network graph will be displayed. Including the necessary JavaScript files: Essential for graph functionality, we'll incorporate scripts needed to build and manage our network graph. Preparing the data: Here, we'll organize the data into a format that can be smoothly visualized in a network graph, distinguishing between different types of nodes and their connections. Writing the JavaScript code for visualization: The final step involves scripting the logic that brings our graph to life, enabling interactivity to better understand the underlying data. Each of these steps will be detailed in the following sections, ensuring you have a clear roadmap to follow as you create your own network graph using JavaScript. Let’s dive in and start visualizing! Step 1: Setting Up Your HTML Start by creating the basic structure for your web page if you are building from scratch. This includes setting up an HTML document that will host your network graph. Here is how you can write your HTML: HTML <!DOCTYPE html> <html> <head> <title>Network Graph in JavaScript</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> </head> <body> <div id="container"></div> </body> </html> This simple HTML structure is foundational. The <div> tag identified by id="container" is where our network graph will be rendered. The accompanying CSS ensures the graph uses the entire screen, optimizing visual space and ensuring that the graph is both prominent and clear. 
Step 2: Summoning JavaScript Files To integrate our network graph into the web environment without much hassle, let’s incorporate a JavaScript charting library directly within the HTML framework. There are multiple libraries out there, although not all of them support network graphs. You can check out this comprehensive comparison of JavaScript charting libraries, which details some features of various libraries, including support for network graphs. Of those listed, libraries such as amCharts, AnyChart, D3.js, and Highcharts are popular options that support network graphs. For this tutorial, we'll utilize AnyChart. It's one of the libraries I've used extensively over the years, and I thought it would work well to illustrate the common logic of the process while being easy enough to get started with for those of you who are new to JS charting. Whichever library you opt for, here's how the necessary JS scripts are woven into the HTML, positioned within the <head> section. Additionally, we prepare the <body> section to include our forthcoming JavaScript code using those scripts, which will dynamically render the network graph: HTML <html> <head> <title>Network Graph in JavaScript</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> <script src="https://cdn.anychart.com/releases/8.12.1/js/anychart-core.min.js"></script> <script src="https://cdn.anychart.com/releases/8.12.1/js/anychart-graph.min.js"></script> </head> <body> <div id="container"></div> <script> // JS code for the network graph will be here </script> </body> </html> Step 3: Sculpting Data With our HTML ready and JS files at hand, it's time to define our nodes and edges — the fundamental components of our network graph. This involves structuring the Volkswagen Group's data, from the parent company to each product line. JavaScript const data = { "nodes": [ {"id": "Volkswagen Group", "group": "CoreCompany"}, {"id": "Audi", "group": "ChildCompany"}, {"id": "Audi Cars", "group": "Product"}, // More nodes here... ], "edges": [ {"from": "Volkswagen Group", "to": "Audi"}, {"from": "Audi", "to": "Audi Cars"}, // More edges here... ] }; Step 4: Choreographing JavaScript To Visualize Network This crucial step transforms the structured data into a vibrant, interactive network graph within the provided HTML canvas. To ensure clarity and ease of understanding, I’ve divided this process into four intuitive sub-steps, each demonstrated with its own code snippet. 1. Initializing First, we ensure that our JavaScript visualization code executes only once the HTML document is fully loaded. This is critical as it prevents any DOM manipulation attempts before the HTML is fully prepared. JavaScript anychart.onDocumentReady(function () { // Initialization of the network graph will happen here }); 2. Creating Graph Instance Inside the function, we initialize our network graph by creating a chart instance from our predefined data. This instance will serve as the basis for our visualization. JavaScript const chart = anychart.graph(data); 3. Setting Container for Graph The next step is to specify where on the webpage our network graph should be visually rendered. This is linked to the HTML container we defined earlier. JavaScript chart.container("container"); 4. Rendering Graph The final step is to instruct the graph to draw itself within the designated container. This action brings our data to life, displaying the complex relationships within the Volkswagen Group.
JavaScript chart.draw(); These sub-steps collectively ensure that our network graph is not only initialized with the correct data and configurations but also properly placed and rendered on the web page, providing a dynamic and informative visual exploration of corporate relationships. Network Graph Visualization Unfolded Now that our network graph is complete, you can see the resulting picture below, which showcases the complex structure of the Volkswagen Group. This interactive chart is not only informative but also a testament to the power of JavaScript when it comes to cross-platform interactive data visualization. For a hands-on experience, I invite you to explore this chart interactively on CodePen, where you can modify the code, experiment with different configurations, and better understand the intricacies of network graphs. The complete HTML/CSS/JavaScript code for this project is available below — use it as a reference or a starting point for your own visualization projects. HTML <html> <head> <title>Network Graph in JavaScript</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> <script src="https://cdn.anychart.com/releases/8.12.1/js/anychart-core.min.js"></script> <script src="https://cdn.anychart.com/releases/8.12.1/js/anychart-graph.min.js"></script> </head> <body> <div id="container"></div> <script> anychart.onDocumentReady(function () { // Create data const data = { "nodes": [ // parent company {"id": "Volkswagen Group", "group": "CoreCompany"}, // child companies {"id": "Audi", "group": "ChildCompany"}, {"id": "CUPRA", "group": "ChildCompany"}, {"id": "Ducati", "group": "ChildCompany"}, {"id": "Lamborghini", "group": "ChildCompany"}, {"id": "MAN", "group": "ChildCompany"}, {"id": "Porsche", "group": "ChildCompany"}, {"id": "Scania", "group": "ChildCompany"}, {"id": "SEAT", "group": "ChildCompany"}, {"id": "Škoda", "group": "ChildCompany"}, {"id": "Volkswagen", "group": "ChildCompany"}, // products {"id": "Audi Cars", "group": "Product"}, {"id": "Audi SUVs", "group": "Product"}, {"id": "Audi Electric Vehicles", "group": "Product"}, {"id": "CUPRA Performance Cars", "group": "Product"}, {"id": "CUPRA SUVs", "group": "Product"}, {"id": "Ducati Motorcycles", "group": "Product"}, {"id": "Lamborghini Sports Cars", "group": "Product"}, {"id": "Lamborghini SUVs", "group": "Product"}, {"id": "MAN Trucks", "group": "Product"}, {"id": "MAN Buses", "group": "Product"}, {"id": "Porsche Sports Cars", "group": "Product"}, {"id": "Porsche SUVs", "group": "Product"}, {"id": "Porsche Sedans", "group": "Product"}, {"id": "Scania Trucks", "group": "Product"}, {"id": "Scania Buses", "group": "Product"}, {"id": "SEAT Cars", "group": "Product"}, {"id": "SEAT SUVs", "group": "Product"}, {"id": "SEAT Electric Vehicles", "group": "Product"}, {"id": "Škoda Cars", "group": "Product"}, {"id": "Škoda SUVs", "group": "Product"}, {"id": "Škoda Electric Vehicles", "group": "Product"}, {"id": "Volkswagen Cars", "group": "Product"}, {"id": "Volkswagen SUVs", "group": "Product"}, {"id": "Volkswagen Vans", "group": "Product"}, {"id": "Volkswagen Trucks", "group": "Product"} ], "edges": [ // parent to child companies {"from": "Volkswagen Group", "to": "Audi"}, {"from": "Volkswagen Group", "to": "CUPRA"}, {"from": "Volkswagen Group", "to": "Ducati"}, {"from": "Volkswagen Group", "to": "Lamborghini"}, {"from": "Volkswagen Group", "to": "MAN"}, {"from": "Volkswagen Group", "to": "Porsche"}, {"from": "Volkswagen Group", "to": "Scania"}, {"from": 
"Volkswagen Group", "to": "SEAT"}, {"from": "Volkswagen Group", "to": "Škoda"}, {"from": "Volkswagen Group", "to": "Volkswagen"}, // child companies to products {"from": "Audi", "to": "Audi Cars"}, {"from": "Audi", "to": "Audi SUVs"}, {"from": "Audi", "to": "Audi Electric Vehicles"}, {"from": "CUPRA", "to": "CUPRA Performance Cars"}, {"from": "CUPRA", "to": "CUPRA SUVs"}, {"from": "Ducati", "to": "Ducati Motorcycles"}, {"from": "Lamborghini", "to": "Lamborghini Sports Cars"}, {"from": "Lamborghini", "to": "Lamborghini SUVs"}, {"from": "MAN", "to": "MAN Trucks"}, {"from": "MAN", "to": "MAN Buses"}, {"from": "Porsche", "to": "Porsche Sports Cars"}, {"from": "Porsche", "to": "Porsche SUVs"}, {"from": "Porsche", "to": "Porsche Sedans"}, {"from": "Scania", "to": "Scania Trucks"}, {"from": "Scania", "to": "Scania Buses"}, {"from": "SEAT", "to": "SEAT Cars"}, {"from": "SEAT", "to": "SEAT SUVs"}, {"from": "SEAT", "to": "SEAT Electric Vehicles"}, {"from": "Škoda", "to": "Škoda Cars"}, {"from": "Škoda", "to": "Škoda SUVs"}, {"from": "Škoda", "to": "Škoda Electric Vehicles"}, {"from": "Volkswagen", "to": "Volkswagen Cars"}, {"from": "Volkswagen", "to": "Volkswagen SUVs"}, {"from": "Volkswagen", "to": "Volkswagen Vans"}, {"from": "Volkswagen", "to": "Volkswagen Trucks"} ]}; // Initialize the network graph with the provided data structure const chart = anychart.graph(data); // Specify the HTML container ID where the chart will be rendered chart.container("container"); // Initiate the rendering of the chart chart.draw(); }); </script> </body> </html> Customizing JavaScript Network Graph After establishing a basic network graph of the Volkswagen Group, let's enhance its functionality and aesthetics. This part of our tutorial will guide you through some of the various customization options, showing you how to evolve your basic JavaScript network graph into a more informative and visually appealing visualization. Each customization step builds upon the previous code, introducing new features and modifications, and providing the viewer with a deeper understanding of the relationships within the Volkswagen corporate structure. Displaying Node Labels Understanding what each node represents is crucial in a network graph. By default, node labels might not be displayed, but we can easily enable them to make our graph more informative. JavaScript chart.nodes().labels().enabled(true); Enabling labels on nodes ensures that each node is clearly identified, making it easier for users to understand the data at a glance without needing to interact with each node individually. Configuring Edge Tooltips To enhance user interaction, tooltips can provide additional information when hovering over connections (edges) between nodes. This step involves configuring a tooltip format that shows the relationship between connected nodes. JavaScript chart.edges().tooltip().format("{%from} owns {%to}"); This tooltip configuration helps to clarify the connections within the graph, showing direct ownership or affiliation between the parent company and its subsidiaries, enhancing the user's comprehension and interaction with the graph. Customizing Node Appearance Visual differentiation helps to quickly identify types of nodes. We can customize the appearance of nodes based on their group classification, such as distinguishing between the core company, child companies, and products. 
JavaScript // 1) configure settings for nodes representing the core company chart.group('CoreCompany') .stroke('none') .height(45) .fill('red') .labels().fontSize(15); // 2) configure settings for nodes representing child companies chart.group('ChildCompany') .stroke('none') .height(25) .labels().fontSize(12); // 3) configure settings for nodes representing products chart.group('Product') .shape('square') .stroke('black', 1) .height(15) .labels().enabled(false); These settings enhance the visual hierarchy of the graph. The core company node is more prominent, child companies are easily distinguishable, and product nodes are less emphasized but clearly structured, aiding in the quick visual processing of the graph's structure. Setting Chart Title Adding a title to the chart provides context and introduces the visual content. It's a simple but effective way to inform viewers about the purpose of the network graph. JavaScript chart.title("Volkswagen Group Network"); The title "Volkswagen Group Network" immediately informs the viewer of the graph's focus, adding a professional touch and enhancing the overall clarity. Final Network Graph Visualization With these customizations, our network graph is now a detailed and interactive visualization, ready for in-depth exploration. Below is the complete code incorporating all the enhancements discussed. This version of the JS-based network graph is not only a tool for displaying static data but also a dynamic map of the Volkswagen Group's complex structure. I invite you to view and interact with this chart on CodePen to see it in action and to tweak the code further to suit your specific needs. For your convenience, the full network graph code is also provided below: HTML <html> <head> <title>Network Graph in JavaScript</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> <script src="https://cdn.anychart.com/releases/8.12.1/js/anychart-core.min.js"></script> <script src="https://cdn.anychart.com/releases/8.12.1/js/anychart-graph.min.js"></script> </head> <body> <div id="container"></div> <script> anychart.onDocumentReady(function () { // Create data const data = { "nodes": [ // parent company {"id": "Volkswagen Group", "group": "CoreCompany"}, // child companies {"id": "Audi", "group": "ChildCompany"}, {"id": "CUPRA", "group": "ChildCompany"}, {"id": "Ducati", "group": "ChildCompany"}, {"id": "Lamborghini", "group": "ChildCompany"}, {"id": "MAN", "group": "ChildCompany"}, {"id": "Porsche", "group": "ChildCompany"}, {"id": "Scania", "group": "ChildCompany"}, {"id": "SEAT", "group": "ChildCompany"}, {"id": "Škoda", "group": "ChildCompany"}, {"id": "Volkswagen", "group": "ChildCompany"}, // products {"id": "Audi Cars", "group": "Product"}, {"id": "Audi SUVs", "group": "Product"}, {"id": "Audi Electric Vehicles", "group": "Product"}, {"id": "CUPRA Performance Cars", "group": "Product"}, {"id": "CUPRA SUVs", "group": "Product"}, {"id": "Ducati Motorcycles", "group": "Product"}, {"id": "Lamborghini Sports Cars", "group": "Product"}, {"id": "Lamborghini SUVs", "group": "Product"}, {"id": "MAN Trucks", "group": "Product"}, {"id": "MAN Buses", "group": "Product"}, {"id": "Porsche Sports Cars", "group": "Product"}, {"id": "Porsche SUVs", "group": "Product"}, {"id": "Porsche Sedans", "group": "Product"}, {"id": "Scania Trucks", "group": "Product"}, {"id": "Scania Buses", "group": "Product"}, {"id": "SEAT Cars", "group": "Product"}, {"id": "SEAT SUVs", "group": "Product"}, {"id": "SEAT Electric 
Vehicles", "group": "Product"}, {"id": "Škoda Cars", "group": "Product"}, {"id": "Škoda SUVs", "group": "Product"}, {"id": "Škoda Electric Vehicles", "group": "Product"}, {"id": "Volkswagen Cars", "group": "Product"}, {"id": "Volkswagen SUVs", "group": "Product"}, {"id": "Volkswagen Vans", "group": "Product"}, {"id": "Volkswagen Trucks", "group": "Product"} ], "edges": [ // parent to child companies {"from": "Volkswagen Group", "to": "Audi"}, {"from": "Volkswagen Group", "to": "CUPRA"}, {"from": "Volkswagen Group", "to": "Ducati"}, {"from": "Volkswagen Group", "to": "Lamborghini"}, {"from": "Volkswagen Group", "to": "MAN"}, {"from": "Volkswagen Group", "to": "Porsche"}, {"from": "Volkswagen Group", "to": "Scania"}, {"from": "Volkswagen Group", "to": "SEAT"}, {"from": "Volkswagen Group", "to": "Škoda"}, {"from": "Volkswagen Group", "to": "Volkswagen"}, // child companies to products {"from": "Audi", "to": "Audi Cars"}, {"from": "Audi", "to": "Audi SUVs"}, {"from": "Audi", "to": "Audi Electric Vehicles"}, {"from": "CUPRA", "to": "CUPRA Performance Cars"}, {"from": "CUPRA", "to": "CUPRA SUVs"}, {"from": "Ducati", "to": "Ducati Motorcycles"}, {"from": "Lamborghini", "to": "Lamborghini Sports Cars"}, {"from": "Lamborghini", "to": "Lamborghini SUVs"}, {"from": "MAN", "to": "MAN Trucks"}, {"from": "MAN", "to": "MAN Buses"}, {"from": "Porsche", "to": "Porsche Sports Cars"}, {"from": "Porsche", "to": "Porsche SUVs"}, {"from": "Porsche", "to": "Porsche Sedans"}, {"from": "Scania", "to": "Scania Trucks"}, {"from": "Scania", "to": "Scania Buses"}, {"from": "SEAT", "to": "SEAT Cars"}, {"from": "SEAT", "to": "SEAT SUVs"}, {"from": "SEAT", "to": "SEAT Electric Vehicles"}, {"from": "Škoda", "to": "Škoda Cars"}, {"from": "Škoda", "to": "Škoda SUVs"}, {"from": "Škoda", "to": "Škoda Electric Vehicles"}, {"from": "Volkswagen", "to": "Volkswagen Cars"}, {"from": "Volkswagen", "to": "Volkswagen SUVs"}, {"from": "Volkswagen", "to": "Volkswagen Vans"}, {"from": "Volkswagen", "to": "Volkswagen Trucks"} ]}; // Initialize the network graph with the provided data structure const chart = anychart.graph(data); // Customization step #1: // display chart node labels chart.nodes().labels().enabled(true); // Customization step #2: // configure edge tooltips chart.edges().tooltip().format("{%from} owns {%to}"); // Customization step #3: // customizing node appearance: // 1) configure settings for nodes representing the core company chart.group('CoreCompany') .stroke('none') .height(45) .fill('red') .labels().fontSize(15); // 2) configure settings for nodes representing child companies chart.group('ChildCompany') .stroke('none') .height(25) .labels().fontSize(12); // 3) configure settings for nodes representing products chart.group('Product') .shape('square') .stroke('black', 1) .height(15) .labels().enabled(false); // Customization step #4: // set the title of the chart for context chart.title("Volkswagen Group Network"); // Specify the HTML container ID where the chart will be rendered chart.container("container"); // Initiate the rendering of the chart chart.draw(); }); </script> </body> </html> Conclusion Congratulations on completing this tutorial on crafting a dynamic JavaScript network graph! You've not only learned to visualize complex structures but also how to customize the graph to enhance its clarity and interactivity. 
As you continue to explore the possibilities within network graph visualizations, I encourage you to delve deeper, experiment with further customization, and look for some inspiring chart examples out there. The skills you've acquired today are just the beginning. Keep experimenting, tweaking, and learning to unlock the full potential of data visualization in your projects.
Data grids are powerful tools to manage structured data in a tabular form. That’s why they are the perfect choice for web apps with large data sets that need to be organized, displayed, and manipulated effectively. There are lots of ready-to-use data grid libraries written in JavaScript or popular frontend frameworks. But what about Svelte? It also deserves some good-quality grid components for displaying and manipulating tabular data. Unlike traditional frameworks such as React and Vue, Svelte uses a compilation approach that generates optimized code during build time and does not rely on a virtual DOM. This means Svelte-based apps can be faster, more efficient, and easier to debug. Here are some tips on how to choose the perfect data grid that can be seamlessly integrated into your next Svelte-based project: Functionality: Determine if you need only basic features like sorting, filtering, pagination, row selection, editing capabilities, or additional custom actions. Smooth navigation, responsive design, and accessibility features are also important aspects to consider. Performance: This is an important aspect, especially if you're working with large datasets. Choose data tables or data grids with dynamic rendering, virtualization, and optimized data handling. These provide smooth performance even with extensive data. Customization: Look for data grids that can be customized without compromising their usability. That will help you change the appearance, styling, and behavior of the table to match your application’s design and branding. Compatibility and integration: Ensure that the chosen data grid is compatible with the Svelte framework and the specific version you are using. Check for any dependencies or conflicts that may arise. Also, consider how well it integrates with other components or libraries you plan to use in your project. Data Grid Libraries for Svelte Here is a list of some popular and trustworthy data grid libraries for Svelte. You can choose the one that best fits your project requirements and design preferences. 1. Flowbite Svelte DataTable The table component of the Flowbite Svelte library allows you to present various types of content, such as text, images, links, and more, in a well-organized data table. This component has a wide variety of features, including sorting and searching options, multiple head rows, responsive layout, striped rows, hover state, and checkbox selection. 2. SVAR DataGrid for Svelte This provides developers with a highly customizable and interactive grid that can be easily integrated into Svelte web apps. SVAR DataGrid offers advanced features such as support for hierarchical data structures, inline editing, keyboard navigation, fixed columns, handling of large datasets with optimal performance, and easy export to CSV/XLSX format. It is designed to be responsive and adaptive, automatically adjusting to available space without requiring explicit coding. 3. Svelte Material UI DataTable It is designed to help developers create interactive and visually appealing data tables in their Svelte apps. The DataTable component offers various customization options, including the ability to add custom headers, footers, and additional columns with custom content. It also provides hooks and events for extending its functionality to suit your specific requirements. 4. PowerTable This is a Svelte component that allows you to turn JSON data into an interactive HTML table. The main purpose of PowerTable is to facilitate manual inspection, sorting, filtering, searching, and editing of data in a tabular format. JavaScript Grids With Svelte Wrappers Some popular JavaScript grids offer special prebuilt wrappers that allow you to easily integrate the grid component with the Svelte framework. 5. Grid.js Grid.js is an advanced JavaScript grid component that provides a powerful and flexible solution for displaying and manipulating tabular data in web apps. It offers global search on all rows and columns, sorting, resizing and hiding columns, and pagination, and it supports wide tables. There is a Svelte wrapper for Grid.js, in case you want to use it with Svelte. 6. ZingGrid ZingGrid is an interactive JavaScript Web Component library that offers more than 35 features, including aggregation, batch editing, filtering, and sorting. You can create interactive HTML tables with features like context menus, dialog modals, and virtual scrolling. Here is how you can integrate it with the Svelte framework. 7. Tabulator Tabulator offers a wide range of additional features, including a virtual DOM for fast rendering of large datasets, clipboard support for copying and pasting data, row selection, input validation, touch-friendly functionality, and more. This article explains how to use it with Svelte. Headless Tables Headless tables provide a more low-level and customizable approach. Instead of using a pre-built table component, headless tables provide a foundation or set of APIs for managing table data and handling table functionalities, but they do not include any predefined UI or styling. These libraries focus on the underlying logic and functionality, allowing you to build your own UI components and styles on top of them. 8. TanStack Table TanStack Table focuses on providing robust core functionality that allows developers to have full control over the table's markup and styling. It offers a wide range of features, including automatic state management, column and global filtering, multi-column and multi-directional sorting, grouping, and aggregation, and it comes with a Svelte adapter. 9. Svelte Headless Table This is an extensible data table solution specifically designed for Svelte apps. It offers full TypeScript support and an intuitive API that lets you quickly get started and define the desired behavior of your tables. Svelte Headless Table operates as a headless utility, implying that it doesn't come as a pre-built component but gives you complete control over the rendering of your tables. 10. Svelte Simple Datatables This headless library includes support for server-side pagination, sorting, and filtering. This allows you to integrate your data table components with server-side APIs or data sources. It is designed to have no external dependencies, keeping the project lightweight and ensuring flexibility in usage. Conclusion The right data table for your Svelte project can notably influence the performance, usability, and overall success of your application. All data grids from this guide have unique features and benefits that allow you to pick the best choice for your project requirements.
Vue.js is a popular JavaScript framework, and as such, it is crucial to ensure that its components work as they are supposed to: effectively and, more importantly, reliably. Mocking dependencies is one of the most efficient methods of testing, as we will discover in this article. The Need for Mocking Dependencies Mocking dependencies is a way of exerting control over tests by providing the capacity to isolate components under test from their dependencies. As all frameworks work with multiple components, which can range from APIs to services and even interactions such as clicks or hovers, it is important to be able to isolate these components to test for their durability, behavior, and reliability. Mocking dependencies allows users to create a controlled testing environment to verify a component's behavior in isolation. There are several reasons for mocking dependencies in Vue.js tests, and the strategies for isolating components highlighted here will enhance the performance of the tests run on this software. Isolation When developers are testing a particular component, they want to focus solely on the behavior of that specific component without any input or interaction from its dependencies. Mocking gives users the ability to isolate the specific component and test it while replacing dependencies with controlled substitutes. Controlled Testing Environment When mocking dependencies, users can control the environment by simulating different scenarios and conditions without falling back on external resources such as real-world scenarios, making testing much more cost-effective and reliable. Speed and Reduced Complexity Mocking strips away dependencies that may introduce latency or require additional setup steps, all of which increase the time it takes to receive results. By stripping away these dependencies, tests not only run faster but also become more efficient and reliable. Consistency By removing extraneous variables, mocking provides the most accurate test results, unhampered by factors such as network availability or data changes. Testing Edge Cases Some scenarios may be hard to replicate with real dependencies, and mocking can exercise edge cases and error conditions to enhance the debugging process. For example, mocking an API response with unexpected data may help verify how components handle such situations. AI Working Hand-In-Hand With Mocking AI (artificial intelligence) has been making waves in software testing, and its integration into testing Vue.js applications can streamline the entire mocking process. By predicting and automating the creation of mocks based on previous test data, it can further enhance testing by creating much more valuable insights. It is no secret that AI has the capacity to process large amounts of data, which is why it is being implemented in many different industries. Mocking often generates synthetic data covering a wide range of scenarios, and AI can break that data down and make it more digestible, so human testers will not need to go through it themselves, which is, in itself, a time-consuming process. AI can also be used to dynamically generate mock responses by automating the process. For instance, instead of manually defining mock responses for different API endpoints, AI algorithms can independently generate mock responses based on past patterns.
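As a point of reference, the kind of hand-written mock that such AI tooling aims to generate and maintain might look like the following minimal Jest sketch. The UserCard component, the userApi service module, and the '@' path alias are hypothetical stand-ins, not part of any real project:

JavaScript
import { mount, flushPromises } from '@vue/test-utils';
import UserCard from '@/components/UserCard.vue'; // hypothetical component
import * as api from '@/services/userApi'; // hypothetical service module

// Replace the real service module with an automatic Jest mock;
// every exported function becomes a jest.fn() we can program.
jest.mock('@/services/userApi');

test('renders the user name fetched from the service', async () => {
  // A manually defined mock response: exactly the step that
  // AI-assisted tooling aims to generate automatically.
  api.fetchUser.mockResolvedValue({ name: 'Ada Lovelace' });

  const wrapper = mount(UserCard, { props: { userId: 42 } });
  await flushPromises(); // let the mocked promise resolve

  expect(api.fetchUser).toHaveBeenCalledWith(42);
  expect(wrapper.text()).toContain('Ada Lovelace');
});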
AI will also be able to adapt based on feedback, optimizing the mocking strategy to better create scenarios and edge cases, which will ultimately improve results. Aside from data generation, AI algorithms can also be used to detect anomalies within the system or application. By monitoring the interactions between the mocked dependencies and the test environment, AI will be able to identify unexpected behavior and deviations, which can help uncover bugs that may have been missed in manual tests. AI's hand in guiding the mocking process can also take into account recent changes and optimize for mocks that target the areas with the most potential to be affected. Mocking Events and Methods When it comes to mocking events, Vue Test Utils allows developers to mock methods and events in order to make sure that a component's responses fall within what is considered accurate. Even when the application is put under different scenarios and edge cases, testing should provide relevant insight into the component's behavior. Take, for example, a component that relies on a certain method to fetch data or handle user input: a test that mocks that dependency must verify the results of the simulated calls, gauging whether the component reacts the way it should. It should be able to test for efficacy as well. Mocking events and methods is a commonplace practice in software development. Without invoking real-world implementations, users can procure simulation results that are both reliable and effective. It is particularly useful for isolating components during testing under specific conditions that are difficult to replicate in real time. Leveraging Jest for Snapshot Testing Another powerful strategy is snapshot testing, whereby users capture the rendered output of a component and compare it with a baseline snapshot. Think of it as generating a side-by-side comparison to indicate what is different. This approach helps identify unintended changes in the component's output and ensures that any modifications do not break existing functionality. To implement snapshot testing, users can render the component using Vue Test Utils and then use Jest to capture and compare snapshots, which provides a quick way to verify the visual and structural integrity of the component over time. By combining snapshot testing with other mocking strategies, developers can achieve a comprehensive testing suite that ensures their Vue.js components are robust, maintainable, and free from regressions. Going Forward Properly mocking dependencies in Vue.js tests is essential for isolating and testing components effectively, ensuring that tests are both robust and reliable. Vue Test Utils, with its rich features for stubbing child components, mocking global objects, and intercepting API calls, is highly capable in this regard. Furthermore, by leveraging AI in software testing, developers will be able to further refine the process, creating more accurate and faster testing cycles. As the complexity of web applications continues to grow, the ability to isolate components and test them thoroughly will become a benchmark for maintaining quality control over the applications that are being developed and released for use.
React 19 Beta is finally here, after a two-year hiatus. The React team has published an article about the latest version. Among the standout features is the introduction of a new compiler, aimed at performance optimization and simplifying developers’ workflows. Furthermore, the update brings significant improvements to handling state updates triggered by responses, with the introduction of actions and new form state hooks. Additionally, the introduction of the use() hook simplifies asynchronous operations even further, allowing developers to manage loading states and context seamlessly. React 19 Beta also marks a milestone in improving accessibility and compatibility, with full support for Web Components and custom elements. Moreover, developers will benefit from built-in support for document metadata, async scripts, stylesheets, and preloading resources, further enhancing the performance and user experience of React applications. In this article, the features mentioned above will be explained in depth. React 19 New Features New Compiler: Revolutionizing Performance Optimization and Replacing useMemo and useCallback React 19 introduces an experimental compiler that compiles React code into optimized JavaScript code. While other frontend frameworks such as Astro and Svelte have their own compilers, React now joins this group, enhancing its performance optimization. React applications frequently encountered performance challenges due to excessive re-rendering triggered by state changes. To mitigate this issue, developers often manually employed the useMemo, useCallback, or memo APIs. These mechanisms aimed to optimize performance by memoizing certain computations and callback functions. However, the introduction of the React compiler automates these optimizations, integrating them into the codebase. Consequently, this automation not only enhances the speed and efficiency of React applications but also simplifies the development process for engineers. Actions and New Form State Hooks The introduction of actions represents one of the most significant enhancements within React’s latest features. These changes simplify the process of handling state updates triggered by responses, particularly in scenarios involving data mutations. A common scenario arises when a user initiates a data mutation, such as submitting a form to modify their information. This action typically involves making an API request and handling its response. Previously, developers faced the task of managing various states manually, including pending states, errors, and optimistic updates. However, with new hooks like useActionState, developers can now handle this process efficiently. By simply passing an asynchronous function to this hook, developers can handle error states, submit actions, and pending states. This simplifies the codebase and enhances the overall development experience. React 19’s documentation highlights the evolution of these hooks, with React.useActionState formerly known as ReactDOM.useFormState in the Canary releases. Moreover, the introduction of useFormStatus addresses another common challenge in design systems. Design components often need access to information about the <form> they are embedded within, without resorting to prop drilling. While this could previously be achieved through Context, the new useFormStatus hook offers a simpler solution by exposing the pending status of the enclosing form, enabling components to disable or enable buttons and adjust their styles accordingly.
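To make these form hooks concrete, here is a minimal sketch of the pattern described above, based on the React 19 Beta documentation. The updateName function is a hypothetical API helper used purely for illustration:

JavaScript
import { useActionState } from 'react';
import { useFormStatus } from 'react-dom';
import { updateName } from './api'; // hypothetical API helper

// A child of the <form> can read the form's pending status directly,
// with no prop drilling required.
function SubmitButton() {
  const { pending } = useFormStatus();
  return <button type="submit" disabled={pending}>Update</button>;
}

function ChangeNameForm() {
  // The action receives the previous state and the submitted FormData;
  // whatever it returns becomes the new state. Pending state is managed for us.
  const [error, submitAction, isPending] = useActionState(
    async (previousState, formData) => {
      try {
        await updateName(formData.get('name'));
        return null; // success: no error to report
      } catch (err) {
        return err.message;
      }
    },
    null
  );

  return (
    <form action={submitAction}>
      <input name="name" />
      <SubmitButton />
      {isPending && <p>Saving...</p>}
      {error && <p>{error}</p>}
    </form>
  );
}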
The use() Hook The new use() hook in React 19 is designed for asynchronous resources such as promises. This innovative hook revolutionizes the way developers handle asynchronous operations within React applications. With the use() hook, developers can now pass a promise directly, eliminating the need for useEffect and setIsLoading boilerplate to manage loading states and additional dependencies. The use() hook not only handles loading states effortlessly, but it also provides flexibility in handling context. Developers can easily pass context inside the use() hook, allowing integration with the broader application context. Additionally, the hook enables reading context within its scope, further enhancing its utility and convenience. By abstracting away the complexities of asynchronous operations and context management, the use() hook in React 19 represents a significant leap forward in developer productivity and application performance. ForwardRef Developers no longer need to use forwardRef to access the ref prop. Instead, React provides direct access to the ref prop, eliminating the need for an additional wrapper and simplifying the component hierarchy. This enhances code readability and offers developers a more intuitive and efficient way to work with refs. Support for Document Metadata Another notable enhancement in React 19 is the built-in support for document metadata. This significant change marks a departure from reliance on external libraries like React Helmet to manage document metadata within React applications. Previously, developers often turned to React Helmet, especially when working outside of frameworks like Next.js, to manipulate document metadata such as titles and links. However, with this latest update, React offers native access directly within its components, eliminating the need for additional dependencies. Now, developers can seamlessly modify document metadata from anywhere within their React codebase, offering greater flexibility. Support for Async Scripts, Stylesheets, and Preloading Resources This significant update manages the asynchronous loading and rendering of stylesheets, fonts, and scripts, including those defined within <style>, <link>, and <script> tags. Notably, developers now have the flexibility to load stylesheets within the context of Suspense, enhancing the performance and user experience of applications by ensuring a smoother transition during asynchronous component rendering. Furthermore, when components are rendered asynchronously, developers can effortlessly incorporate the loading of styles and scripts directly within those components, streamlining the development process and improving overall code organization. Full Support for Web Components and Custom Elements Unlike previous iterations, where compatibility was only partial, React 19 now seamlessly integrates with Web Components and custom elements, offering comprehensive support for their use within React applications. Previously, developers encountered challenges as React’s handling of props sometimes clashed with the attributes of custom elements, leading to conflicts and inconsistencies. However, with this latest update, React provides an intuitive experience for incorporating Web Components and custom elements into React-based projects. This enhanced compatibility opens up a world of possibilities for developers, allowing them to leverage the power and flexibility of Web Components.
With full support for Web Components and custom elements, React solidifies its position as a versatile and adaptable framework. Conclusion React 19 Beta represents a significant step forward in the evolution of the React ecosystem, offering developers powerful tools and features to build faster, more efficient, and more accessible applications. From the introduction of a new compiler to improved state management and seamless integration with Web Components, React 19 Beta offers tools that elevate the developer experience and push the boundaries of what’s possible in modern web development.
AG Grid is a feature-rich JavaScript library primarily used to build robust data tables in web applications. It’s used by almost 90% of Fortune 500 companies, and it’s especially useful in Business Intelligence (BI) and FinTech applications. React is the market-leading JavaScript library for building enterprise web and mobile applications. It is widely adopted by major companies and boasts a large community. In this article, we will set up a React web application and use AG Grid to build a performant data table. All the code in this article is available at this GitHub link. Prerequisites Node.js and npm are installed on your system. Knowledge of JavaScript and React. Set up a New React Application Verify that Node.js and npm are installed. Commands to check: node -v and npm -v We will use Create React App to initiate a new React application; let's install it globally on the machine using npm install -g create-react-app Create a new React application using npx create-react-app ag-grid-demo (npm naming restrictions require lowercase project names) Wait for the app to be fully created and then go to the newly created app’s folder using cd ag-grid-demo Start the application using npm start. Soon you will be able to access this React app at http://localhost:3000 Now we are ready to make modifications to our React app. You can use the code editor of your choice; I have used Visual Studio Code. Integrating AG Grid Into Our React App AG Grid comes in two flavors: the community version and the enterprise version. We will use the community version so as not to incur any licensing fee. The enterprise version is preferred in large corporations due to the set of additional features it provides. Install the AG Grid community version with React support using npm install ag-grid-react Let’s create two folders under the src folder in our project: components and services. Let's create a service under the services folder. This service will have the job of communicating with the backend and fetching data. For simplicity, we will not make actual API calls; instead, we will have a JSON file with all the sample data. Let's create a movie-data.json file and add content to it from here. Add movie-service.js to the services folder. Our service will have two methods and one exported constant. Soon, all of these will make sense. Below is the reference code for this file. JavaScript import movies from './movie-data.json'; const DEFAULT_PAGE_SIZE = 5; const countOfMovies = async() => { return movies.movies.length; }; const fetchMovies = async() => { return movies.movies; }; export { DEFAULT_PAGE_SIZE, countOfMovies, fetchMovies }; At this point, let’s create the React component that will hold our AG Grid table. Add an AGGridTable.js file under the components folder in the src directory. Let's import React and AG Grid in our component and lay down the basic component export: JavaScript import React, { useState, useEffect } from 'react'; import { AgGridReact } from 'ag-grid-react'; import 'ag-grid-community/styles/ag-grid.css'; import 'ag-grid-community/styles/ag-theme-quartz.css'; export const AgGridTable = () => {} We are going to use the AgGridReact component to render our table. This component needs two main things: Columns we want to display in our table. Rows we want to display in our table. We have to pass a parameter named columnDefs to our AgGridReact to tell it how we want our columns to be set up. If you look at our movie data in the movie-data.json file, we have the columns movieId, movieName, and releaseYear. Let’s map these to our column definition parameters.
We can achieve this using the lines of code below. JavaScript const columnDefs = [ { field: 'movieId', headerName: "Movie ID", minWidth: 100 }, { field: 'movieName', headerName: "Movie Name", flex: 1 }, { field: 'releaseYear', headerName: "Release Year", flex: 1 } ]; We need to fetch the actual movie data, and we are going to leverage the fetchMovies function from our movie service. We also want to load it on page load. This can be achieved using React's useEffect hook with an empty dependency array. JavaScript useEffect(() => { const fetchCount = async () => { const totalCount = await countOfMovies(); setTotalRecords(totalCount); }; fetchCount(); }, []); useEffect(() => { fetchData(); }, []); const fetchData = async () => { setIsLoading(true); try { const response = await fetchMovies(); setMovieData(response); } catch (error) { console.error(error); } finally { setIsLoading(false); } }; Let’s add a loading indicator variable to signal to our users that something is being processed. JavaScript const [isLoading, setIsLoading] = useState(false); Putting everything together, we get our component as below. JavaScript import React, { useState, useEffect } from 'react'; import { AgGridReact } from 'ag-grid-react'; import 'ag-grid-community/styles/ag-grid.css'; import 'ag-grid-community/styles/ag-theme-quartz.css'; import { countOfMovies, fetchMovies } from '../services/movie-service'; export const AgGridTable = () => { const [movieData, setMovieData] = useState([]); const [totalRecords, setTotalRecords] = useState(0); const [isLoading, setIsLoading] = useState(false); const columnDefs = [ { field: 'movieId', headerName: "Movie ID", minWidth: 100 }, { field: 'movieName', headerName: "Movie Name", flex: 1 }, { field: 'releaseYear', headerName: "Release Year", flex: 1 } ]; useEffect(() => { const fetchCount = async () => { const totalCount = await countOfMovies(); setTotalRecords(totalCount); }; fetchCount(); }, []); useEffect(() => { fetchData(); }, []); const fetchData = async () => { setIsLoading(true); try { const response = await fetchMovies(); setMovieData(response); } catch (error) { console.error(error); } finally { setIsLoading(false); } }; return ( <> {isLoading && <div>Loading...</div>} <div className="ag-theme-quartz" style={{ height: 300, minHeight: 300 }}> { totalRecords > 0 && <AgGridReact rowData={movieData} columnDefs={columnDefs} /> } </div> </> ) } Let's update our App.js to include our newly built component and perform cleanup to remove the boilerplate generated by Create React App. Below is the updated code for App.js: JavaScript import './App.css'; import { AgGridTable } from './components/AgGridTable'; function App() { return ( <div className="App"> <header className="App-header"> <h1>Welcome logged in user.</h1> </header> <AgGridTable></AgGridTable> </div> ); } export default App; Our table should load on the UI now. Enhancing Performance With Pagination Until now, we have been rendering all the rows in the table in one go. This approach doesn’t scale in the real world. Imagine we had 10,000 rows instead of just 100; our page would be very slow, and UI performance would take a huge hit. We can easily improve this by paginating our data. In simpler terms, pagination means breaking our data into sets of x items and displaying one set at a time.
Some key benefits of adding pagination are: reduced DOM size resulting in optimized memory usage, improved rendering speed, enhanced scrolling performance, and faster updates. Let's add additional parameters to the AgGridReact setup to enable pagination. pagination={true} tells AG Grid we want to paginate. paginationPageSize tells AG Grid the default number of items to be displayed per page initially. We pass an array to the paginationPageSizeSelector parameter; it defines the different page sizes we allow our users to choose from. totalRows tells AG Grid how many records there are in total, which, in turn, helps it count the number of pages in our table. To have the right value for all of the above parameters, we need to update our code to fetch the total row count and define the page size selector array. JavaScript import React, { useState, useEffect, useMemo } from 'react'; import { AgGridReact } from 'ag-grid-react'; import 'ag-grid-community/styles/ag-grid.css'; import 'ag-grid-community/styles/ag-theme-quartz.css'; import { DEFAULT_PAGE_SIZE, countOfMovies, fetchMovies } from '../services/movie-service'; export const AgGridTable = () => { const [movieData, setMovieData] = useState([]); const [totalRecords, setTotalRecords] = useState(0); const [isLoading, setIsLoading] = useState(false); const columnDefs = [ { field: 'movieId', headerName: "Movie ID", minWidth: 100 }, { field: 'movieName', headerName: "Movie Name", flex: 1 }, { field: 'releaseYear', headerName: "Release Year", flex: 1 } ]; useEffect(() => { const fetchCount = async () => { const totalCount = await countOfMovies(); setTotalRecords(totalCount); }; fetchCount(); }, []); useEffect(() => { fetchData(); }, []); const fetchData = async () => { setIsLoading(true); try { const response = await fetchMovies(); setMovieData(response); } catch (error) { console.error(error); } finally { setIsLoading(false); } }; const paginationPageSizeSelector = useMemo(() => { return [5, 10, 20]; }, []); return ( <> {isLoading && <div>Loading...</div>} <div className="ag-theme-quartz" style={{ height: 300, minHeight: 300 }}> { totalRecords > 0 && <AgGridReact rowData={movieData} columnDefs={columnDefs} pagination={true} paginationPageSize={DEFAULT_PAGE_SIZE} paginationPageSizeSelector={paginationPageSizeSelector} totalRows={totalRecords} /> } </div> </> ) } With this code, we will have pagination nicely built in with a sensible default page size. Conclusion AG Grid integration with React is easy to set up, and we can boost performance with techniques such as pagination. There are other ways to lazy load rows in AG Grid beyond pagination; going through the AG Grid documentation should help you get familiar with them. Happy coding!