At Octomind, we are using Large Language Models (LLMs) to interact with web app UIs and extract the test case steps that we want to generate. We use the LangChain library to build interaction chains with LLMs. The LLM receives a task prompt, and we, as developers, provide tools the model can utilize to solve the task. The unpredictable and non-deterministic nature of LLM output makes ensuring type safety quite a challenge. LangChain's approach to parsing input and handling errors often leads to unexpected and inconsistent outcomes within the type system. I’d like to share what I learned about parsing and error handling in LangChain. I will explain:

Why we went for TypeScript in the first place
The issue with LLM output
How a type error can go unnoticed
What consequences this can have

All code examples use LangChain TS on the main branch as of September 22nd, 2023 (roughly version 0.0.153).

Why LangChain TS Instead of Python?
LangChain supports two languages — Python and JS/TypeScript. TypeScript came with some pros and cons. On the con side: we have to live with the fact that the TypeScript implementation lags somewhat behind the Python version — in code and even more so in documentation. This is a solvable issue if you are willing to trade the documentation for reading the source code. On the pro side: we don't have to write another service in a different language, since we are using TypeScript elsewhere, and we allegedly get guaranteed type safety, of which we are big fans here. We decided to go for the TypeScript version of LangChain to implement parts of our AI-based test discovery. Full disclosure: I didn’t look into how the Python version handles the issues described below. Have you found similar issues in the Python version? Feel free to share them directly in the GitHub issue I created. Find the link at the end of the article.

The Issue With Types in LLMs
In LangChain, you can provide a set of tools that may be called by the model if it deems it necessary. For our purposes, a tool is simply a class with a _call function that does something the model can not do on its own, like clicking a button on a web page. The arguments for that function are provided by the model. When your tool implementation depends on the developer knowing the input format (in contrast to just doing something with text generated by the model), LangChain provides a class called StructuredTool. The StructuredTool adds a zod schema to the tool, which is used to parse whatever the model decides to call the tool with, so that we can use this knowledge in our code. Let's build our "click" example under the assumption that we want the model to give us a query selector to click on: a class with a name, a description, a ClickSchema describing { selector: string }, and a _call implementation (a minimal sketch of such a tool is included at the end of this article). When you look at this class, it seems reasonably simple, without a lot of potential for things to go wrong. But how does the model actually know what schema to supply? It has no intrinsic functionality for this. It just generates a string response to a prompt. When LangChain informs the model about the tools at its disposal, it will generate format instructions for each tool. These instructions define what JSON is and what specific input schema the model should generate to use each tool. For this, LangChain will generate an addition to your own prompt that looks something like this: You have access to the following tools. You must format your inputs to these tools to match their "JSON schema" definitions below.
"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents. For example, the example "JSON Schema" instance {"properties": {"foo": {"description": "a list of test words," "type": "array," "items": {"type": "string"}}, "required": ["foo"]} would match an object with one required property, "foo." The "type" property specifies "foo" must be an "array," and the "description" property semantically describes it as "a list of test words." The items within "foo" must be strings. Thus, the object {"foo": ["bar," "baz"]} is a well-formatted instance of this example, "JSON Schema." The object {"properties": {"foo": ["bar," "baz"]} is not well-formatted. Here are the JSON Schema instances for the tools you have access to: click: left click on an element on a web page represented by a query selector, args: {"selector":{"type": "string," "description": "The query selector to click on."} Don't Trust the LLM Now, we have a best-effort way to make the model call our tool with inputs in the correct schema. Best effort unfortunately does not guarantee anything. It is entirely possible that the model generates input that does not adhere to the schema. So, let's take a look at the implementation of StructuredTool to see how it deals with that issue. StructuredTool.call is the function that eventually calls our _call method from above. It starts like this: The signature of arg is interpreted as follows: If, after parsing the tool’s schema, the output can be just a string, this can also be a string or whatever object the schema defines as input. This is the case if you define your schema as schema = z.string(). In our case, our schema can not be parsed to a string, so this simplifies to the type { selector: string }, or ClickSchema. But Is This Actually the Case? According to the implementation, we only check that the input actually adheres to the schema inside of the call. The signature reads like we have already made some assumptions about the input. So one might replace the signature with something like: But looking at it further, even this has issues. The only thing we know for certain is that the model will give us a string. This means there are two options: 1. call really should have the following signature: 2. There is another element to this Something must have already decided that the string returned by the model is valid JSON and have parsed it. In case that z.output<T> extends string, something somewhere must have already decided that string is an acceptable input format for the tool, and we do not need to parse JSON. (A string by itself is not valid JSON, JSON.parse("foo") will result in a SyntaxError). Introducing the OutputParser Class Of course, the second option is what is happening. For this use case, LangChain provides a concept called OutputParser. Let's take a look at the default one (StructuredChatOuputParser) and its parse method in particular. We don't need to understand every detail, but we can see that this is where the string that the model produces is parsed to JSON, and errors are thrown if it is not valid JSON. So, from this, we either get AgentAction or AgentFinish. We don't need to concern ourselves with AgentFinish, since it is just a special case to indicate that the interaction with the model is done. AgentAction is defined as: By now, you might have already seen — neither AgentAction nor the StructuredChatOutputParserWithRetries is generic, and there is no way to connect the type of toolInput with our ClickSchema. 
Since we don't know which tool the agent has actually selected, we can not (easily) use generics to represent the actual type, so this is expected. But worse, toolInput is typed as string, even though we just used JSON.parse to get it! Consider the positive case where the model produced output that matches our schema, let's say the string "{\"selector\": \"myCoolButton\"}" (wrapped in all the extra fluff LangChain requires to correctly parse). Using JSON.parse, this will deserialize to an object { selector: "myCoolButton" } and not a string. But because JSON.parse's return type is any, the TypeScript compiler has no chance of realizing this. Unfortunately for us, this also means that we, as developers, have a hard time realizing this.

The Impact on Our Production Code
To understand why this is troublesome, we need to look into the execution loop where the AgentActions are used to actually invoke the tool. This happens in AgentExecutor._call. We don't really need to understand everything that this class does. Think of it as the wrapper that handles the interaction of the model with the tool implementations to actually call them. The _call method is quite long, so here is a reduced version that only contains parts relevant to our problem (these methods are simplified parts of _call and not in the actual code base of LangChain). The first thing that happens in the loop is to look for the next action to execute. This is where the parsing using the OutputParser comes in and where its exceptions are handled. You can see that in the case of an error, the toolInput field will always be a string (if this.handleParsingErrors is a function, the return type is also string). But we have just seen above that in the non-error case toolInput will be parsed JSON! This is inconsistent behavior. We never parse the output of handleParsingErrors to JSON. Let's look at how the loop continues. The next step is to call the selected tool with the given input — we only pass the previously computed output on to the tool in tool.call(action.toolInput)! In case this causes another error, we re-use the same function to handle parsing errors, which will return a string that is supposed to be the tool output in the error case.

Let's summarize all the issues:

We parse the model's output to JSON and use that parsed result to call a tool.
If the parsing succeeds, we call the tool with any valid JSON.
If the parsing fails, we call the tool with a string.
The tool parses the input with zod, which will only work in the error case if the schema is just a const stringSchema = z.string().
We have not covered this, but using const stringSchema = z.string() as the tool schema will not type check at all, since the generic argument of StructuredTool is T extends z.ZodObject<any, any, any, any>, and typeof stringSchema does not fulfil that constraint.
The signature of tool.call allows this to type check, since we don't know specifically which tool we have at the moment, so string and any JSON are potentially valid. The actual type check for this happens at runtime inside this function.
The developer implementing the tool has no idea about this. Since only StructuredTool._call is abstract, you would expect to always get what the schema indicates, but StructuredTool.call will fail, even if you have supplied a handleParsingErrors function.
Whatever the tool gets called with is serialized into AgentAction.toolInput: string, which is not correctly typed.
The library user has access to the AgentSteps with wrongly typed AgentActions, since it is possible to request them as a return value of the overall loop using returnIntermediateSteps=true.

Whatever the developer does now is definitely not type-safe!

How Did We Run Into This Problem?
At Octomind, we are using the AgentSteps to extract the test case steps that we want to generate. We noticed that the model often makes the same errors with the tool input format. Recall our ClickSchema, which is just { selector: string }. In our clicking example, the model would either generate input according to the schema, or { element: string }, or just a string that is the value we want, like "myCoolButton". So, we built an auto-fixer for these common error cases. The fixer basically just checks whether it can fix the input using either of the options above. The earliest we can inject this code without overwriting a lot of the planning logic that LangChain provides is in StructuredTool.call. We can not handle it using handleParsingErrors, since that receives only the error as input, and not the original input. Once you are overwriting StructuredTool.call, you are relying on the signature of that function being correct, which we just saw is not the case. At this point, I was stuck having to figure out all of the above to see why I was getting wrongly typed inputs.

The Solution To Type Safety
While these hurdles can be frustrating, they also present opportunities to take a deep dive into the library and come up with possible solutions instead of complaining. I have opened two issues at LangChain JS/TS to discuss ideas on how to solve these problems: Issue 1 and Issue 2. Feel free to jump in!
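For reference, here is the minimal sketch of the click tool promised above — illustrative only, not the exact code from our codebase — assuming LangChain's StructuredTool base class and zod; clickOnSelector is a hypothetical stand-in for whatever actually drives the browser:

TypeScript
import { StructuredTool } from "langchain/tools";
import { z } from "zod";

// Hypothetical helper that performs the actual click in the browser under test.
declare function clickOnSelector(selector: string): Promise<void>;

// ClickSchema from the article: the model must supply a query selector.
const ClickSchema = z.object({
  selector: z.string().describe("The query selector to click on."),
});

class ClickTool extends StructuredTool<typeof ClickSchema> {
  name = "click";
  description =
    "left click on an element on a web page represented by a query selector";
  schema = ClickSchema;

  // LangChain promises that `input` matches the schema — the article above explains
  // why that promise does not always hold at the type level.
  protected async _call(input: z.infer<typeof ClickSchema>): Promise<string> {
    await clickOnSelector(input.selector);
    return `Clicked on ${input.selector}`;
  }
}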
As more developers adopt TypeScript, I’ve curated reasons why you should use TypeScript in your next project. Although it met some resistance early on, it has quickly become a widely used programming language in the last decade. Here is how to use TypeScript and some of its most popular benefits to programmers. But first, let's dive into what TypeScript is and the problems it can solve.

What Is TypeScript?
TypeScript is an open-source programming language developed by Microsoft in 2012 as a superset of JavaScript. This means it contains all of JavaScript but with more features. Building on JavaScript’s functionality and structures, it adds features such as static typing and object-oriented programming, and it compiles to plain JavaScript. So, any TypeScript code ultimately runs as plain JavaScript. Now, what does all this mean for your project?

What Can TypeScript Solve?
TypeScript’s primary purpose is to improve productivity when developing complex applications. One way this happens is by enabling IDEs to provide a richer environment for spotting common errors while you type the code. This adds type safety to your projects. Developers no longer have to manually check for errors whenever changes are made. And since TypeScript technically involves adding static typing to JavaScript, it can help you avoid runtime errors like the classic TypeError: undefined is not a function. As it catches errors for you, it makes refactoring code easier without breaking it significantly, with features like interfaces, abstract classes, type aliases, tuples, function overloading, and generics. Adopting this programming language in a large JavaScript project can produce more robust software that is still deployable anywhere a JavaScript application would run.

Why Is TypeScript Better Than JavaScript?
TypeScript’s motto is “JavaScript that scales.” That’s because it brings the future of development to JavaScript. But is it as good as people say? Here are a few areas where TypeScript is better than JavaScript:

Optional Static Typing
JavaScript is a dynamically typed language. Although this has its benefits, the freedom of dynamic typing often leads to bugs. Not only does this reduce the programmer’s efficiency, but it also slows down development by raising the cost of adding new lines of code. TypeScript’s static typing differs from JavaScript’s dynamically typed nature. For example, when you’re unsure of a type in JavaScript, you’ll generally rely on a TypeError at runtime to tell you why the variable type is wrong. TypeScript, on the other hand, adds syntax to JavaScript. Its compiler uses this syntax to identify possible code errors before they happen, and it subsequently produces vanilla JavaScript that browsers understand. A study showed that TypeScript could successfully detect 15% of JavaScript bugs.

IDE Support
During its early years, TypeScript was only supported in Microsoft’s Visual Studio code editor. However, as it gained traction, more code editors and IDEs started to support the programming language natively or through plugins. You can write TypeScript code in nearly every code editor. This extensive IDE support has made it more relevant and popular for software developers. Other IDEs that support it include Eclipse, Atom, WebStorm, and CATS.

Object Orientation
It supports object-oriented programming concepts like classes, encapsulation, inheritance, abstraction, and interfaces. The OOP paradigm makes creating well-organized, scalable code easier, and as your project grows in size and complexity, this benefit becomes more apparent.
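To make the optional static typing described above concrete, here is a small illustrative snippet (not from the original article) showing the kind of mistake the compiler catches before the code ever runs:

TypeScript
interface User {
  name: string;
  email?: string; // optional property
}

function greet(user: User): string {
  // TypeScript flags the commented-out line at compile time, because 'email' may be
  // undefined. In plain JavaScript this only surfaces at runtime as
  // "TypeError: Cannot read properties of undefined".
  // return "Hello " + user.email.toUpperCase();

  // The compiler is satisfied once we narrow the type first:
  return user.email ? "Hello " + user.email.toUpperCase() : "Hello " + user.name;
}

console.log(greet({ name: "Ada" })); // "Hello Ada"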
Readability
Due to the addition of strict types and elements that make the code more expressive, you’ll be able to see the design intent of the programmers who wrote the code. This works well for remote teams because a self-explanatory code can offset the lack of direct communication among teams.

Community Support
TypeScript is lucky to have a massive group of exceptionally talented people working tirelessly to improve the open-source language. This explains why it has gained traction among developers and software development teams in the last few years. Most JavaScript applications comprise hundreds of thousands of files. One change to an individual file could affect the behavior of other files. Validating the relationships between every element of your project can become time-consuming quickly. As a type-checked language, it does this automatically with immediate feedback during development. While you may not see how big of a deal this is when working with small projects, complex ones with a large codebase can become messy with bugs all over the place. Every dev would like to be more efficient and faster, which can help improve project scalability. In addition, TypeScript’s language features and reference validation make it better than JavaScript. Ultimately, TypeScript improves the developer experience and code maintainability because devs feel more confident in their code. It’ll also save lots of time that would have otherwise gone into validating they haven’t accidentally broken the project. This programming language also provides better collaboration between and within teams.

Advantages of TypeScript
It offers significant advantages for developers and software development teams. I’ve listed five advantages of TypeScript in your next project:

1. Compile-Time Errors
It’s quite clear as day already. I’ve mentioned this earlier because it is the obvious TypeScript benefit. Compile-time errors are why most developers have started using it. They can use the compiler to detect potential errors during compile time rather than runtime. JavaScript’s inability to support types and compile-time error checks means it’s not a good fit for server-side code in complex and large codebases. On the other hand, another reason to use TypeScript is that it detects compilation errors during development, making runtime errors unlikely. It incorporates static typing, helping a programmer check type correctness at compile time.

2. Runs Everywhere
I already mentioned that TypeScript compiles to pure JavaScript, meaning it can run everywhere. In fact, it compiles to any JavaScript version, including the latest version, ES2022, and others like ES6, ES5, and ES3. You can use it with frameworks like React and Angular on the front end or Node.js on the backend.

3. Tooling Over Documentation
If you want a successful project in the long run, documentation is essential. But this can be tricky because it’s easy to overlook documentation, difficult to enforce, and impossible to report if it’s no longer up to date. This makes it essential to prioritize tooling over documentation. TypeScript takes tooling seriously. And this goes beyond errors and completions while typing. It documents the arguments a function is expecting, the shape of objects, and the variables that may be undefined. It’ll also notify you when it needs to be updated and where exactly.
Without this programming language, each developer would have to waste a lot of time looking up the shapes of objects, combing through documentation, and hoping they’re up to date. Or you would have to debug the code and hope that your predictions about which fields are required and optional are accurate.

4. Object-Oriented Programming (OOP)
As an object-oriented programming language, it is great for large and complex projects that must be actively updated or maintained. Some of the benefits that object-oriented programming provides are:

Reuse of code through inheritance: The ability to assign relationships and subclasses between objects enables programmers to reuse common logic while retaining a unique hierarchy. This attribute of OOP speeds up development and provides more accuracy by enabling a more in-depth data analysis.
Increased flexibility due to polymorphism: Objects can take on multiple forms depending on the context. The program will identify which meaning or usage is required for each execution of that object, which reduces the need to duplicate code.
Reduced data corruption through encapsulation: Each object’s implementation and state are held privately within a defined class or boundary. Other objects can’t access that state directly, nor do they have the authority to make changes; they can only call a list of methods or public functions. Hence, encapsulation helps you perform data hiding, which increases program security and prevents unintentional data corruption.
Effective problem solving: Object-oriented programming takes a complex problem and breaks it into solvable chunks. For each small problem, a developer writes a class that does what they need.

Ultimately, using OOP provides improved data structures and reliability while saving time in the long run.

5. Static Typing
Besides helping you catch bugs, static typing gives the code more structure and ensures it is self-documenting. This is because the type information makes it easier to understand how classes, functions, and other structures work. It also becomes easier to refactor code or eliminate technical debt. In addition, static typing integrates seamlessly with autocomplete tools, making them more reliable and accurate. That way, devs can write code faster. In most cases, statically typed code is easier for humans and tools to read.

Step-By-Step: How To Install TypeScript
By now, you already have an idea of what TypeScript does and how it makes writing code easier. But how do you use it? You need to install it first, so here is a full guide.

Step 1: Download and Install Node.js
The first step is downloading and installing Node.js (which includes npm) on your computer. If you don’t already have it installed, you can do so by visiting the Node download page. It’s recommended that you use the LTS (long-term support) version because it’s the most stable.

Step 2: Navigate to the Start Menu and Open the Command Prompt
After installing Node and npm, run the command below in the Node.js command prompt:

npm install -g typescript

The command installs TypeScript globally on your local system.

Step 3: Verify the Installation
You can verify that TypeScript has been installed by running the command below:

tsc -v

tsc is the TypeScript compiler, while the -v flag displays the TS version. Once you’ve confirmed this, TypeScript has been successfully installed. You can also install a specific TS version by appending ‘@’ followed by the version you want.
For example:

npm install --global typescript@4.9.3

How To Install TypeScript Into a Current Project
You can also set it up on a per-project basis, installing TS into your current project. This lets you have multiple projects with different TypeScript versions and ensures each project works consistently without interfering with the others. To install the TypeScript compiler locally into your project, simply use the command below:

npm install --save-dev typescript

How To Uninstall TypeScript
To uninstall it, you can use the same command you used for installation. Simply replace install with uninstall, as seen below:

npm uninstall --global typescript

How To Use TypeScript
After installing it, it’s time to use it. You’ll need a code editor like Visual Studio Code. If you don’t have it, you need to download and install VS Code. When you’ve done this, here’s how to use TypeScript:

Step 1: Let’s create a simple Hello World project. This will give you an idea of how to use TypeScript.

Step 2: Run the following command to make a project directory:

mkdir HelloWorld

Then move into the new directory:

cd HelloWorld

Step 3: Launch Visual Studio Code (or your preferred code editor). We’ll use VS Code here.

Step 4: Navigate to File Explorer and create a new file named helloworld.ts. The file name isn’t essential; you can name it whatever you want. However, it’s important that it ends with a .ts extension.

Step 5: Next, add the following TypeScript code:

let message: string = 'Hello, World!';
console.log(message);

You’ll notice the let keyword and the string type declaration.

Step 6: To compile the TypeScript code, simply open the Integrated Terminal (Ctrl+`) and type:

tsc helloworld.ts

This compiles and creates a new helloworld.js JavaScript file. When you open helloworld.js, you’ll see it doesn’t look too different from helloworld.ts. You’ll see the type information has been removed and let has been replaced with var (the expected output is shown at the end of this article).

Conclusion
Ultimately, whether to use TypeScript will depend on your project and the time and effort required. Your team will need to weigh the advantages and disadvantages of adopting it. That said, the benefits of TypeScript become apparent right away, from better code completion to bug prevention, and it will make your team’s life easier when it comes to writing code.
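For reference, the helloworld.js emitted in Step 6 above should look roughly like this (with tsc's default settings, which target older JavaScript):

JavaScript
var message = 'Hello, World!';
console.log(message);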
In today's digital landscape, it's not just about building functional systems; it's about creating systems that scale smoothly and efficiently under demanding loads. But as many developers and architects can attest, scalability often comes with its own unique set of challenges. A seemingly minute inefficiency, when multiplied a million times over, can cause systems to grind to a halt. So, how can you ensure your applications stay fast and responsive, regardless of the demand? In this article, we'll delve deep into the world of performance optimization for scalable systems. We'll explore common strategies that you can weave into any codebase, be it front end or back end, regardless of the language you're working with. These aren't just theoretical musings; they've been tried and tested in some of the world's most demanding tech environments. Having been a part of the team at Facebook, I've personally integrated several of these optimization techniques into products I've helped bring to life, including the lightweight ad creation experience in Facebook and the Meta Business Suite. So whether you're building the next big social network, an enterprise-grade software suite, or just looking to optimize your personal projects, the strategies we'll discuss here will be invaluable assets in your toolkit. Let's dive in. Prefetching Prefetching is a performance optimization technique that revolves around the idea of anticipation. Imagine a user interacting with an application. While the user performs one action, the system can anticipate the user's next move and fetch the required data in advance. This results in a seamless experience where data is available almost instantly when needed, making the application feel much faster and responsive. Proactively fetching data before it's needed can significantly enhance the user experience, but if done excessively, it can lead to wasted resources like bandwidth, memory, and even processing power. Facebook employs pre-fetching a lot, especially for their ML-intensive operations such as "Friends suggestions." When Should I Prefetch? Prefetching involves the proactive retrieval of data by sending requests to the server even before the user explicitly demands it. While this sounds promising, a developer must ensure the balance is right to avoid inefficiencies. A. Optimizing Server Time (Backend Code Optimizations) Before jumping into prefetching, it's wise to ensure that the server response time is optimized. Optimal server time can be achieved through various backend code optimizations, including: Streamlining database queries to minimize retrieval times. Ensuring concurrent execution of complex operations. Reducing redundant API calls that fetch the same data repeatedly. Stripping away any unnecessary computations that might be slowing down the server response. B. Confirming User Intent The essence of prefetching is predicting the user's next move. However, predictions can sometimes be wrong. If the system fetches data for a page or feature the user never accesses, it results in resource wastage. Developers should employ mechanisms to gauge user intent, such as tracking user behavior patterns or checking active engagements, ensuring that data isn't fetched without a reasonably high probability of being used. How To Prefetch Prefetching can be implemented using any programming language or framework. For the purpose of demonstration, let's look at an example using React. Consider a simple React component. 
As soon as this component finishes rendering, an AJAX call is triggered to prefetch data. When a user clicks a button in this component, a second component uses the prefetched data: JavaScript import React, { useState, useEffect } from 'react'; import axios from 'axios'; function PrefetchComponent() { const [data, setData] = useState(null); const [showSecondComponent, setShowSecondComponent] = useState(false); // Prefetch data as soon as the component finishes rendering useEffect(() => { axios.get('https://api.example.com/data-to-prefetch') .then(response => { setData(response.data); }); }, []); return ( <div> <button onClick={() => setShowSecondComponent(true)}> Show Next Component </button> {showSecondComponent && <SecondComponent data={data} />} </div> ); } function SecondComponent({ data }) { // Use the prefetched data in this component return ( <div> {data ? <div>Here is the prefetched data: {data}</div> : <div>Loading...</div>} </div> ); } export default PrefetchComponent; In the code above, the PrefetchComponent fetches data as soon as it's rendered. When the user clicks the button, SecondComponent gets displayed, which uses the prefetched data. Memoization In the realm of computer science, "Don't repeat yourself" isn't just a good coding practice; it's also the foundation of one of the most effective performance optimization techniques: memoization. Memoization capitalizes on the idea that re-computing certain operations can be a drain on resources, especially if the results of those operations don't change frequently. So, why redo what's already been done? Memoization optimizes applications by caching computation results. When a particular computation is needed again, the system checks if the result exists in the cache. If it does, the result is directly retrieved from the cache, skipping the actual computation. In essence, memoization involves creating a memory (hence the name) of past results. This is especially useful for functions that are computationally expensive and are called multiple times with the same inputs. It's akin to a student solving a tough math problem and jotting down the answer in the margin of their book. If the same question appears on a future test, the student can simply reference the margin note rather than work through the problem all over again. When Should I Memoize? Memoization isn't a one-size-fits-all solution. In certain scenarios, memoizing might consume more memory than it's worth. So, it's crucial to recognize when to use this technique: When the data doesn’t change very often: Functions that return consistent results for the same inputs, especially if these functions are compute-intensive, are prime candidates for memoization. This ensures that the effort taken to compute the result isn't wasted on subsequent identical calls. When the data is not too sensitive: Security and privacy concerns are paramount. While it might be tempting to cache everything, it's not always safe. Data like payment information, passwords, and other personal details should never be cached. However, more benign data, like the number of likes and comments on a social media post, can safely be memoized to improve performance. How To Memoize Using React, we can harness the power of hooks like useCallback and useMemo to implement memoization. 
Let's explore a simple example: JavaScript import React, { useState, useCallback, useMemo } from 'react'; function ExpensiveOperationComponent() { const [input, setInput] = useState(0); const [count, setCount] = useState(0); // A hypothetical expensive operation const expensiveOperation = useCallback((num) => { console.log('Computing...'); // Simulating a long computation for(let i = 0; i < 1000000000; i++) {} return num * num; }, []); const memoizedResult = useMemo(() => expensiveOperation(input), [input, expensiveOperation]); return ( <div> <input value={input} onChange={e => setInput(e.target.value)} /> <p>Result of Expensive Operation: {memoizedResult}</p> <button onClick={() => setCount(count + 1)}>Re-render component</button> <p>Component re-render count: {count}</p> </div> ); } export default ExpensiveOperationComponent; In the above example, the expensiveOperation function simulates a computationally expensive task. We've used the useCallback hook to ensure that the function doesn't get redefined on each render. The useMemo hook then stores the result of the expensiveOperation so that if the input doesn't change, the computation doesn't run again, even if the component re-renders. Concurrent Fetching Concurrent fetching is the practice of fetching multiple sets of data simultaneously rather than one at a time. It's similar to having several clerks working at a grocery store checkout instead of just one: customers get served faster, queues clear more quickly, and overall efficiency improves. In the context of data, since many datasets don't rely on each other, fetching them concurrently can greatly accelerate page load times, especially when dealing with intricate data that requires more time to retrieve. When To Use Concurrent Fetching? When each data is independent, and the data is complex to fetch: If the datasets being fetched have no dependencies on one another and they take significant time to retrieve, concurrent fetching can help speed up the process. Use mostly in the back end and use carefully in the front end: While concurrent fetching can work wonders in the back end by improving server response times, it must be employed judiciously in the front end. Overloading the client with simultaneous requests might hamper the user experience. Prioritizing network calls: If data fetching involves several network calls, it's wise to prioritize one major call and handle it in the foreground, concurrently processing the others in the background. This ensures that the most crucial data is retrieved first while secondary datasets load simultaneously. How To Use Concurrent Fetching In PHP, with the advent of modern extensions and tools, concurrent processing has become simpler. Here's a basic example using the concurrent {} block: PHP <?php use Concurrent\TaskScheduler; require 'vendor/autoload.php'; // Assume these are some functions that fetch data from various sources function fetchDataA() { // Simulated delay sleep(2); return "Data A"; } function fetchDataB() { // Simulated delay sleep(3); return "Data B"; } $scheduler = new TaskScheduler(); $result = concurrent { "a" => fetchDataA(), "b" => fetchDataB(), }; echo $result["a"]; // Outputs: Data A echo $result["b"]; // Outputs: Data B ?> In the example, fetchDataA and fetchDataB represent two data retrieval functions. By using the concurrent {} block, both functions run concurrently, reducing the total time it takes to fetch both datasets. 
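The PHP example above depends on an extension that provides the concurrent block and TaskScheduler; for a dependency-free illustration of the same idea, here is a small TypeScript sketch (not from the original article) that fetches two independent datasets concurrently with Promise.all — the URLs are placeholders:

TypeScript
// Two independent fetches started together; the total time is roughly the slower
// of the two, not the sum of both.
async function loadDashboardData(): Promise<{ user: unknown; stats: unknown }> {
  const [userResponse, statsResponse] = await Promise.all([
    fetch("https://api.example.com/user"),   // placeholder URL
    fetch("https://api.example.com/stats"),  // placeholder URL
  ]);

  const [user, stats] = await Promise.all([
    userResponse.json(),
    statsResponse.json(),
  ]);

  return { user, stats };
}

loadDashboardData().then(({ user, stats }) => {
  console.log(user, stats);
});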
Lazy Loading Lazy loading is a design pattern wherein data or resources are deferred until they're explicitly needed. Instead of pre-loading everything up front, you load only what's essential for the initial view and then fetch additional resources as and when they're needed. Think of it as a buffet where you only serve dishes when guests specifically ask for them, rather than keeping everything out all the time. A practical example is a modal on a web page: the data inside the modal isn't necessary until a user decides to open it by clicking a button. By applying lazy loading, we can hold off on fetching that data until the very moment it's required. How To Implement Lazy Loading For an effective lazy loading experience, it's essential to give users feedback that data is being fetched. A common approach is to display a spinner or a loading animation during the data retrieval process. This ensures that the user knows their request is being processed, even if the data isn't instantly available. Lazy Loading Example in React Let's illustrate lazy loading using a React component. This component will fetch data for a modal only when the user clicks a button to view the modal's contents: JavaScript import React, { useState } from 'react'; function LazyLoadedModal() { const [data, setData] = useState(null); const [isLoading, setIsLoading] = useState(false); const [isModalOpen, setIsModalOpen] = useState(false); const fetchDataForModal = async () => { setIsLoading(true); // Simulating an AJAX call to fetch data const response = await fetch('https://api.example.com/data'); const result = await response.json(); setData(result); setIsLoading(false); setIsModalOpen(true); }; return ( <div> <button onClick={fetchDataForModal}> Open Modal </button> {isModalOpen && ( <div className="modal"> {isLoading ? ( <p>Loading...</p> // Spinner or loading animation can be used here ) : ( <p>{data}</p> )} </div> )} </div> ); } export default LazyLoadedModal; In the above example, the data for the modal is fetched only when the user clicks the "Open Modal" button. Until then, no unnecessary network request is made. Once the data is being fetched, a loading message (or spinner) is displayed to indicate to the user that their request is in progress. Conclusion In today's fast-paced digital world, every millisecond counts. Users demand rapid responses, and businesses can't afford to keep them waiting. Performance optimization is no longer just a 'nice-to-have' but an absolute necessity for anyone serious about delivering a top-tier digital experience. Through techniques such as Pre-fetching, Memoization, Concurrent Fetching, and Lazy Loading, developers have a robust arsenal at their disposal to fine-tune and enhance their applications. These strategies, while diverse in their applications and methodologies, share a common goal: to ensure applications run as efficiently and swiftly as possible. However, it's important to remember that no single strategy fits all scenarios. Each application is unique, and performance optimization requires a judicious blend of understanding the application's needs, recognizing the users' expectations, and applying the right techniques effectively. It's an ongoing journey of refinement and learning.
There are various methods of visualizing three-dimensional objects in two-dimensional space. For example, most 3D graphics engines use perspective projection as the main form of projection. This is because perspective projection is an excellent representation of the real world, in which objects become smaller with increasing distance. But when the relative position of objects is not important, and for a better understanding of the size of objects, you can use parallel projections. They are more common in engineering and architecture, where it is important to maintain parallel lines. Since the birth of computer graphics, these projections have been used to render 3D scenes when 3D rendering hardware acceleration was not possible. Recently, various forms of parallel projections have become a style choice for digital artists, and they are used to display objects in infographics and in digital art in general. The purpose of this article is to show how to create and manipulate isometric views in SVG and how to define these objects using, in particular, the JointJS library. To illustrate SVG’s capabilities in creating parallel projections, we will use isometric projection as an example. This projection is one of the dominant projection types because it allows you to maintain the relative scale of objects along all axes. Isometric Projection Let’s define what isometric projection is. First of all, it is a parallel type of projection in which all lines from a “camera” are parallel. It means that the scale of an object does not depend on the distance between the “camera” and the object. And specifically, in isometric (which means “equal measure” in Greek) projection, scaling along each axis is the same. This is achieved by defining equal angles between all axes. In the following image, you can see how axes are positioned in isometric projection. Keep in mind that in this article, we will be using a left-handed coordinate system. One of the features of the isometric projection is that it can be deconstructed into three different 2D projections: top, side, and front projections. For example, a cuboid can be represented by three rectangles on each 2D projection and then combined into one isometric view. The next image represents separate projections of an object using the left-handed coordinate system. Separate views of the orthographic projection Then, we can combine them into one isometric view: Isometric view of the example object The challenge with SVG is that it contains 2D objects which are located on one XY-plane. But we can overcome this by combining all projections in one plane and then separately applying a transformation to every object. SVG Isometric View Transformations In 3D, to create an isometric view, we can move the camera to a certain position, but SVG is purely a 2D format, so we have to create a workaround to build such a view. We recommend reading Cody Walker’s article that presents a method for creating isometric representations from 2D object views — top, side, and front projections. Based on the article, we need to create transformations for each 2D projection of the object separately. First, we need to rotate our plane by 30 degrees. And then, we will skew our 2D image by -30 degrees. This transformation will align our axes with the axes of the isometric projection. Then, we need to use a scale operator to scale our 2D projection down vertically by 0.8602. We need to do it due to the fact of isometric projection distortion. 
Let’s introduce some SVG features that will help us implement isometric projection. The SVG specification allows users to specify a particular transformation in the transform attribute of an SVG element. This attribute lets us apply a linear transformation to the SVG element. To transform a 2D projection into an isometric view, we need to apply scale, rotate, and skew operators. To express the transformation in code, we can use the DOMMatrixReadOnly object, a browser API for transformation matrices. Using this interface, we can create the matrix as follows:

JavaScript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);

This interface allows building a transformation matrix from our values, and then we can apply the resulting value to the transform attribute using the matrix function. In SVG, we can present only one 2D space at a time, so for our conversion, we will be using the top projection as the base projection. This is mostly because the axes in this projection correspond with the axes in a normal SVG viewport. To demonstrate SVG possibilities, we will be using the JointJS library. We defined a rectangular grid in the XY-plane with a cell width of 20. Let’s define SVG for the elements on the top projection from the example. To properly render this object, we need to specify two polygons for the two levels of our object. Also, we can apply a translate transformation to our element in 2D space using DOMMatrix:

JavaScript
// Translate transformation for Top1 Element
const matrix2D = new DOMMatrixReadOnly()
  .translate(200, 200);

HTML
<!--Top1 element-->
<polygon joint-selector="body" id="v-4" stroke-width="2" stroke="#333333" fill="#ff0000" fill-opacity="0.7"
  points="0,0 60,0 60,20 40,20 40,60 0,60"
  transform="matrix(1,0,0,1,200,200)">
</polygon>
<!--Top2 element-->
<polygon joint-selector="body" id="v-6" stroke-width="2" stroke="#333333" fill="#ff0000" fill-opacity="0.7"
  points="0,0 20,0 20,40 0,40"
  transform="matrix(1,0,0,1,240,220)">
</polygon>

Then, we can apply our isometric matrix to our elements. We will also add a translate transformation to position the elements in the right place:

JavaScript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);

const top1Matrix = isoMatrix.translate(200, 200);
const top2Matrix = isoMatrix.translate(240, 220);

Isometric view without height adjustment

For simplicity, let’s assume that our element’s base plane is located on the XY plane. Therefore, we need to translate the top view so that it appears to lie on top of the object. To do this, we can simply translate the projection by its Z coordinate in the scaled SVG space as follows. The Top1 element has an elevation of 80, so we should translate it by (-80, -80). Similarly, the Top2 element has an elevation of 40. We can just apply these translations to our existing matrices:

JavaScript
const top1MatrixWithHeight = top1Matrix.translate(-80, -80);
const top2MatrixWithHeight = top2Matrix.translate(-40, -40);

Final isometric view of top projection

In the end, we will have the following transform attributes for the Top1 and Top2 elements.
Note that they differ only in the two last values, which represent the translate transformation: JavaScript // Top1 element transform="matrix(0.8660254037844387,0.49999999999999994,-0.8165000081062317,0.47140649947346464,5.9,116.6)" // Top2 element transform="matrix(0.8660254037844387,0.49999999999999994,-0.8165000081062317,0.47140649947346464,26.2,184.9)" To create an isometric view of side and front projections, we need to make a net so we can place all projections on 2D SVG space. Let’s create a net by attaching side and front views similar to the classic cube net: Then, we need to skewX side and front projections by 45 degrees. It will allow us to align the Z-axis for all projections. After this transformation, we will get the following image: Prepared 2D projection Then, we can apply our isoMatrix to this object: Isometric projection without depth adjustments In every projection, there are parts that have a different 3rd coordinate value. Therefore, we need to adjust this depth coordinate for every projection as we did with the top projection and its Z coordinate. In the end, we will get the following isometric view: Final isometric view of the object Using JointJS for the Isometric Diagram JointJS allows us to create and manipulate such objects with ease due to its elements framework and wide set of tools. Using JointJS, we can define and control isometric objects to build powerful isometric diagrams. Remember the basic isometric transformation from the beginning of the article? JavaScript const isoMatrix = new DOMMatrixReadOnly() .rotate(30) .skewX(-30) .scale(1, 0.8602); In the JointJS library, we can apply this transformation to the whole object which stores all SVG elements, and then simply apply the object-specific transformations on top of this. Isometric Grid Rendering JointJS has great capabilities in the rendering of custom SVG markup. Utilizing JointJS, we can generate a path that is aligned to an untransformed grid and have it transformed automatically with the grid, thanks to the global paper transformation that we mentioned previously. You can see the grid and how we interpret the coordinate system in the demo below. Note that we can dynamically change the paper transformation, which allows us to change the view on the fly: Isometric grid Creating a Custom Isometric SVG Element Here, we show a custom SVG Isometric shape in JointJS. In our example, we use the isometricHeight property to store information about a third dimension and then use it to render our isometric object. The following snippet shows how you can call the custom createIsometricElement function to alter object properties: JavaScript const element = createIsometricElement({ isometricHeight: GRID_SIZE * 3, size: { width: GRID_SIZE * 3, height: GRID_SIZE * 6 }, position: { x: GRID_SIZE * 6, y: GRID_SIZE * 6 } }); In the following demo, you can see that our custom isometric element can be moved like an ordinary element on the isometric grid. You can change dimensions by altering the parameters of the createIsometricElement function in the source code (when you click “Edit on CodePen”): Custom isometric element on the isometric grid Z-Index Calculation in Isometric Diagrams One of the problems with an isometric view is placing elements respective to their relative position. Unlike in a 2D plane, in an isometric view, objects have perceived height and can be placed one behind the other. We can achieve this behavior in SVG by placing them into the DOM in the right order. 
To define the order in our case, we can use the JointJS z attribute, which allows sending the correct element to the background so that it can be overlapped/hidden by the other element as expected. You can find more information about this problem in a great article by Andreas Hager. We decided to sort the elements using the topological sorting algorithm. The algorithm consists of two steps. First, we need to create a special graph, and then we need to use a depth-first search on that graph to find the correct order of elements. As the first step, we need to populate the initial graph — for each object, we need to find all objects behind it. We can do that by comparing the positions of their bottom sides. Let’s illustrate this step with images — let’s, for example, take three elements which are positioned like this: We have marked the bottom side of each object in the second image. Using this data, we will create a graph structure that models the topological relations between elements. In the image, you can see how we define the points on the bottom side — we can find the relative position of all elements by comparing the aMax and bMin points. We define that if the x and y coordinates of point bMin are less than the coordinates of point aMax, then object b is located behind object a.

Algorithm data in a 2D space

Comparing the three elements from our previous example, we can produce the following graph:

Topological graph

After that, we need to use a variation of the depth-first search algorithm to find the correct rendering order. A depth-first search allows us to visit graph nodes according to the visibility order, starting from the most distant one. Here is a library-agnostic example of the algorithm:

JavaScript
const sortElements = (elements: Rect[]) => {
  const nodes = elements.map((el) => {
    return {
      el: el,
      behind: [],
      visited: false,
      depth: null,
    };
  });
  for (let i = 0; i < nodes.length; ++i) {
    const a = nodes[i].el;
    const aMax = a.bottomRight();
    for (let j = 0; j < nodes.length; ++j) {
      if (i != j) {
        const b = nodes[j].el;
        const bMin = b.topLeft();
        if (bMin.x < aMax.x && bMin.y < aMax.y) {
          nodes[i].behind.push(nodes[j]);
        }
      }
    }
  }
  const sortedElements = depthFirstSearch(nodes);
  return sortedElements;
};

const depthFirstSearch = (nodes) => {
  let depth = 0;
  let sortedElements = [];
  const visitNode = (node) => {
    if (!node.visited) {
      node.visited = true;
      for (let i = 0; i < node.behind.length; ++i) {
        if (node.behind[i] == null) {
          break;
        } else {
          visitNode(node.behind[i]);
          delete node.behind[i];
        }
      }
      node.depth = depth++;
      sortedElements.push(node.el);
    }
  };
  for (let i = 0; i < nodes.length; ++i) {
    visitNode(nodes[i]);
  }
  return sortedElements;
};

This method can be implemented easily using the JointJS library — in the following CodePen, we use a special JointJS event to recalculate the z-indexes of our elements whenever the position of an element changes. As outlined above, we use the special z property of the element model to specify the rendering order and assign it during the depth-first traversal. (Note that the algorithm’s behavior is undefined in the case of intersecting elements due to the nature of the implementation of isometric objects.)

Z-index calculations for isometric diagrams

The JointJS Demo
We have created a JointJS demo that combines all of these methods and techniques and also allows you to easily switch between 2D and isometric SVG markup.
Crucially, as you can see, the powerful features of JointJS (which allow us to move elements, connect them with links, and create tools to edit them, among others) work just as well in the isometric view as they do in 2D. You can see the demo here. Throughout this article, we used our open-source JointJS library for illustration. However, since you were so thorough with your exploration, we would like to extend to you an invitation to our no-commitment 30-day trial of JointJS+, an advanced commercial extension of JointJS. It will allow you to experience additional powerful tools for creating delightful diagrams.
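To tie the pieces together, here is a small illustrative TypeScript sketch (not part of the original demos) of a helper that builds a per-element matrix the way the article describes — apply the isometric matrix, translate to the element's 2D position, then shift by its elevation — and uses transformPoint() to see where a corner lands on the canvas:

TypeScript
// Builds the transform for a top-projection element, following the article's recipe.
function isometricElementMatrix(x: number, y: number, elevation: number): DOMMatrixReadOnly {
  const isoMatrix = new DOMMatrixReadOnly()
    .rotate(30)
    .skewX(-30)
    .scale(1, 0.8602);

  // Position in the 2D top projection, then shift by the elevation (Z) along both axes.
  return isoMatrix.translate(x, y).translate(-elevation, -elevation);
}

// Top1 from the article: positioned at (200, 200) with an elevation of 80.
const top1Matrix = isometricElementMatrix(200, 200, 80);

// Where the element's local origin ends up on the SVG canvas.
const origin = top1Matrix.transformPoint(new DOMPoint(0, 0));
console.log(origin.x.toFixed(1), origin.y.toFixed(1));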
In part one of this two-part series, we looked at how walletless dApps smooth the web3 user experience by abstracting away the complexities of blockchains and wallets. Thanks to account abstraction from Flow and the Flow Wallet API, we can easily build walletless dApps that enable users to sign up with credentials that they're accustomed to using (such as social logins or email accounts). We began our walkthrough by building the backend of our walletless dApp. Here in part two, we'll wrap up our walkthrough by building the front end. Here we go! Create a New Next.js Application Let's use the Next.js framework so we have the frontend and backend in one application. On our local machine, we will use create-next-app to bootstrap our application. This will create a new folder for our Next.js application. We run the following command: Shell $ npx create-next-app flow_walletless_app Some options will appear; you can mark them as follows (or as you prefer!). Make sure to choose No for using Tailwind CSS and the App Router. This way, your folder structure and style references will match what I demo in the rest of this tutorial. Shell ✔ Would you like to use TypeScript with this project? ... Yes ✔ Would you like to use ESLint with this project? ... No ✔ Would you like to use Tailwind CSS with this project? ... No <-- IMPORTANT ✔ Would you like to use `src/` directory with this project? ... No ✔ Use App Router (recommended)? ... No <-- IMPORTANT ✔ Would you like to customize the default import alias? ... No Start the development server. Shell $ npm run dev The application will run on port 3001 because the default port (3000) is occupied by our wallet API running through Docker. Set Up Prisma for Backend User Management We will use the Prisma library as an ORM to manage our database. When a user logs in, we store their information in a database at a user entity. This contains the user's email, token, Flow address, and other information. The first step is to install the Prisma dependencies in our Next.js project: Shell $ npm install prisma --save-dev To use Prisma, we need to initialize the Prisma Client. Run the following command: Shell $ npx prisma init The above command will create two files: prisma/schema.prisma: The main Prisma configuration file, which will host the database configuration .env: Will contain the database connection URL and other environment variables Configure the Database Used by Prisma We will use SQLite as the database for our Next.js application. Open the schema.prisma file and change the datasource db settings as follows: Shell datasource db { provider = "sqlite" url = env("DATABASE_URL") } Then, in our .env file for the Next.js application, we will change the DATABASE_URL field. Because we’re using SQLite, we need to define the location (which, for SQLite, is a file) where the database will be stored in our application: Shell DATABASE_URL="file:./dev.db" Create a User Model Models represent entities in our app. The model describes how the data should be stored in our database. Prisma takes care of creating tables and fields. Let’s add the following User model in out schema.prisma file: Shell model User { id Int @id @default(autoincrement()) email String @unique name String? flowWalletJobId String? flowWalletAddress String? createdAt DateTime @default(now()) updatedAt DateTime @updatedAt } With our model created, we need to synchronize with the database. 
For this, Prisma offers a command: Shell $ npx prisma db push Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma Datasource "db": SQLite database "dev.db" at "file:./dev.db" SQLite database dev.db created at file:./dev.db -> Your database is now in sync with your Prisma schema. Done in 15ms After successfully pushing our users table, we can use Prisma Studio to track our database data. Run the command: Shell $ npx prisma studio Set up the Prisma Client That's it! Our entity and database configuration are complete. Now let's go to the client side. We need to install the Prisma client dependencies in our Next.js app. To do this, run the following command: Shell $ npm install @prisma/client Generate the client from the Prisma schema file: Shell $ npx prisma generate Create a folder named lib in the root folder of your project. Within that folder, create a file entitled prisma.ts. This file will host the client connection. Paste the following code into that file: TypeScript // lib/prisma.ts import { PrismaClient } from '@prisma/client'; let prisma: PrismaClient; if (process.env.NODE_ENV === "production") { prisma = new PrismaClient(); } else { let globalWithPrisma = global as typeof globalThis & { prisma: PrismaClient; }; if (!globalWithPrisma.prisma) { globalWithPrisma.prisma = new PrismaClient(); } prisma = globalWithPrisma.prisma; } export default prisma; Build the Next.js Application Frontend Functionality With our connection on the client side finalized, we can move on to the visual part of our app! Open the pages/index.tsx file, delete all of its code, and paste in the following: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", }} > <button style={{ padding: "20px", width: 'auto' }}>Sign Up</button> <button style={{ padding: "20px" }}>Sign Out</button> </div> </div> </main> </> ); } This gives us the basic UI we need to illustrate the creation of wallets and accounts! The next step is to configure the Google client to use the Google API to authenticate users. Set up Use of Google OAuth for Authentication We will need Google credentials. For that, open your Google console. Click Create Credentials and select the OAuth Client ID option. Choose Web Application as the application type and define a name for it. We will use the same name: flow_walletless_app. Add http://localhost:3001/api/auth/callback/google as the authorized redirect URI. Click on the Create button. A modal should appear with the Google credentials. We will need the Client ID and Client secret to use in our .env file shortly. Next, we’ll add the next-auth package.
To do this, run the following command: Shell $ npm i next-auth Open the .env file and add the following new environment variables to it: Shell GOOGLE_CLIENT_ID=<GOOGLE CLIENT ID> GOOGLE_CLIENT_SECRET=<GOOGLE CLIENT SECRET> NEXTAUTH_URL=http://localhost:3001 NEXTAUTH_SECRET=<YOUR NEXTAUTH SECRET> Paste in your copied Google Client ID and Client Secret. The NextAuth secret can be generated via the terminal with the following command: Shell $ openssl rand -base64 32 Copy the result, which should be a random string of letters, numbers, and symbols. Use this as your value for NEXTAUTH_SECRET in the .env file. Configure NextAuth to Use Google Next.js allows you to create serverless API routes without creating a full backend server. Each file under api is treated like an endpoint. Inside the pages/api/ folder, create a new folder called auth. Then create a file in that folder, called [...nextauth].ts, and add the code below: TypeScript // pages/api/auth/[...nextauth].ts import NextAuth from "next-auth"; import GoogleProvider from "next-auth/providers/google"; export default NextAuth({ providers: [ GoogleProvider({ clientId: process.env.GOOGLE_CLIENT_ID as string, clientSecret: process.env.GOOGLE_CLIENT_SECRET as string, }) ], }); Update _app.tsx To Use the NextAuth SessionProvider Modify the _app.tsx file found inside the pages folder by adding the SessionProvider from the NextAuth library. Your file should look like this: TypeScript // pages/_app.tsx import "@/styles/globals.css"; import { SessionProvider } from "next-auth/react"; import type { AppProps } from "next/app"; export default function App({ Component, pageProps }: AppProps) { return ( <SessionProvider session={pageProps.session}> <Component {...pageProps} /> </SessionProvider> ); } Update the Main Page To Use NextAuth Functions Let us go back to our index.tsx file in the pages folder. We need to import the functions from the NextAuth library and use them to log users in and out. Our updated index.tsx file should look like this: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; import { useSession, signIn, signOut } from "next-auth/react"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { const { data: session } = useSession(); console.log("session data", session); const signInWithGoogle = () => { signIn(); }; const signOutWithGoogle = () => { signOut(); }; return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", }} > <button onClick={signInWithGoogle} style={{ padding: "20px", width: "auto" }}>Sign Up</button> <button onClick={signOutWithGoogle} style={{ padding: "20px" }}>Sign Out</button> </div> </div> </main> </> ); } Build the “Create User” Endpoint Let us now create a users folder underneath pages/api. Inside this new folder, create a file called index.ts.
This file is responsible for: Creating a user (first we check if this user already exists) Calling the Wallet API to create a wallet for this user Calling the Wallet API and retrieving the jobId data if the User entity does not yet have the address created These actions are performed within the handle function, which calls the checkWallet function. Paste the following snippet into your index.ts file: TypeScript // pages/api/users/index.ts import { User } from "@prisma/client"; import { BaseNextRequest, BaseNextResponse } from "next/dist/server/base-http"; import prisma from "../../../lib/prisma"; export default async function handle( req: BaseNextRequest, res: BaseNextResponse ) { const userEmail = JSON.parse(req.body).email; const userName = JSON.parse(req.body).name; try { const user = await prisma.user.findFirst({ where: { email: userEmail, }, }); if (user == null) { await prisma.user.create({ data: { email: userEmail, name: userName, flowWalletAddress: null, flowWalletJobId: null, }, }); } else { await checkWallet(user); } } catch (e) { console.log(e); } } const checkWallet = async (user: User) => { const jobId = user.flowWalletJobId; const address = user.flowWalletAddress; if (address != null) { return; } if (jobId != null) { const request: any = await fetch(`http://localhost:3000/v1/jobs/${jobId}`, { method: "GET", }); const jsonData = await request.json(); if (jsonData.state === "COMPLETE") { const address = await jsonData.result; await prisma.user.update({ where: { id: user.id, }, data: { flowWalletAddress: address, }, }); return; } if (request.data.state === "FAILED") { const request: any = await fetch("http://localhost:3000/v1/accounts", { method: "POST", }); const jsonData = await request.json(); await prisma.user.update({ where: { id: user.id, }, data: { flowWalletJobId: jsonData.jobId, }, }); return; } } if (jobId == null) { const request: any = await fetch("http://localhost:3000/v1/accounts", { method: "POST", }); const jsonData = await request.json(); await prisma.user.update({ where: { id: user.id, }, data: { flowWalletJobId: jsonData.jobId, }, }); return; } }; POST requests to the api/users path will result in calling the handle function. We’ll get to that shortly, but first, we need to create another endpoint for retrieving existing user information. Build the “Get User” Endpoint We’ll create another file in the pages/api/users folder, called getUser.ts. This file is responsible for finding a user in our database based on their email. Copy the following snippet and paste it into getUser.ts: TypeScript // pages/api/users/getUser.ts import prisma from "../../../lib/prisma"; export default async function handle( req: { query: { email: string; }; }, res: any ) { try { const { email } = req.query; const user = await prisma.user.findFirst({ where: { email: email, }, }); return res.json(user); } catch (e) { console.log(e); } } And that's it! With these two files in the pages/api/users folder, we are ready for our Next.js application frontend to make calls to our backend. Add “Create User” and “Get User” Functions to Main Page Now, let’s go back to the pages/index.tsx file to add the new functions that will make the requests to the backend. 
Replace the contents of the index.tsx file with the following snippet: TypeScript // pages/index.tsx import styles from "@/styles/Home.module.css"; import { Inter } from "next/font/google"; import Head from "next/head"; import { useSession, signIn, signOut } from "next-auth/react"; import { useEffect, useState } from "react"; import { User } from "@prisma/client"; const inter = Inter({ subsets: ["latin"] }); export default function Home() { const { data: session } = useSession(); const [user, setUser] = useState<User | null>(null); const signInWithGoogle = () => { signIn(); }; const signOutWithGoogle = () => { signOut(); }; const getUser = async () => { const response = await fetch( `/api/users/getUser?email=${session?.user?.email}`, { method: "GET", } ); const data = await response.json(); setUser(data); return data?.flowWalletAddress != null ? true : false; }; console.log(user); const createUser = async () => { await fetch("/api/users", { method: "POST", body: JSON.stringify({ email: session?.user?.email, name: session?.user?.name }), }); }; useEffect(() => { if (session) { getUser(); createUser(); } }, [session]); return ( <> <Head> <title>Create Next App</title> <meta name="description" content="Generated by create next app" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className={styles.main}> <div className={styles.card}> <h1 className={inter.className}>Welcome to Flow Walletless App!</h1> <div style={{ display: "flex", flexDirection: "column", gap: "20px", margin: "20px", }} > {user ? ( <div> <h5 className={inter.className}>User Name: {user.name}</h5> <h5 className={inter.className}>User Email: {user.email}</h5> <h5 className={inter.className}>Flow Wallet Address: {user.flowWalletAddress ? user.flowWalletAddress : 'Creating address...'}</h5> </div> ) : ( <button onClick={signInWithGoogle} style={{ padding: "20px", width: "auto" }} > Sign Up </button> )} <button onClick={signOutWithGoogle} style={{ padding: "20px" }}> Sign Out </button> </div> </div> </main> </> ); } We have added two functions: getUser searches the database for a user with the logged-in email. createUser creates a user, or updates it if it does not yet have an address. We also added a useEffect that runs once the user is logged in with their Google account: it calls getUser to load any existing record, and createUser, which performs the necessary checks and wallet API calls. Test Our Next.js Application Finally, we restart our Next.js application with the following command: Shell $ npm run dev You can now sign in with your Google account, and the app will make the necessary calls to our wallet API to create a Flow Testnet address! This is the first step in the walletless Flow process! By following these instructions, your app will create users and accounts in a way that is convenient for the end user. But the wallet API does not stop there. You can do much more with it, such as execute and sign transactions, run scripts to fetch data from the blockchain, and more. Conclusion Account abstraction and walletless onboarding in Flow offer developers a unique solution. By being able to delegate control over accounts, Flow allows developers to create applications that provide users with a seamless onboarding experience. This will hopefully lead to greater adoption of dApps and a new wave of web3 users.
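Before moving on from the walletless app, one small refinement worth considering for the index.tsx shown above: the useEffect fires getUser and createUser at the same time, so the first render can briefly show stale data while the POST is still in flight. Below is a minimal, hedged sketch of a sequential version; it uses only the getUser and createUser functions already defined in the component.

TypeScript
useEffect(() => {
  const syncUser = async () => {
    // getUser returns true only when the user already has a Flow wallet address
    const hasAddress = await getUser();
    if (!hasAddress) {
      // Create the user or advance the wallet-creation job, then refresh local state
      await createUser();
      await getUser();
    }
  };
  if (session) {
    syncUser();
  }
}, [session]);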
If you’re anything like me, you’ve noticed the massive boom in AI technology. It promises to disrupt not just software engineering but every industry. THEY’RE COMING FOR US!!! Just kidding ;P I’ve been bettering my understanding of what these tools are and how they work, and decided to create a tutorial series for web developers to learn how to incorporate AI technology into web apps. In this series, we’ll learn how to integrate OpenAI‘s AI services into an application built with Qwik, a JavaScript framework focused on the concept of resumability (this will be relevant to understand later). Here’s what the series outline looks like: Intro and Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security and Reliability Deploying We’ll get into the specifics of OpenAI and Qwik where it makes sense, but I will mostly focus on general-purpose knowledge, tooling, and implementations that should apply to whatever framework or toolchain you are using. We’ll be working as closely to fundamentals as we can, and I’ll point out which parts are unique to this app. Here’s a little sneak preview. I thought it would be cool to build an app that takes two opponents and uses AI to determine who would win in a hypothetical fight. It provides some explanation and the option to create an AI-generated image. Sometimes the results come out a little wonky, but that’s what makes it fun. I hope you’re excited to get started because in this first post, we are mostly going to work on… Boilerplate :/ Prerequisites Before we start building anything, we have to cover a couple of prerequisites. Qwik is a JavaScript framework, so we will have to have Node.js (and NPM) installed. You can download the most recent version, but anything above version v16.8 should work. I’ll be using version 20. Next, we’ll also need an OpenAI account to have access to their API. At the end of the series, we will deploy our applications to a VPS (Virtual Private Server). The steps we follow should be the same regardless of what provider you choose. I’ll be using Akamai’s cloud computing services (formerly Linode). Setting Up the Qwik App Assuming we have the prerequisites out of the way, we can open a command line terminal and run the command: npm create qwik@latest. This will run the Qwik CLI that will help us bootstrap our application. It will ask you a series of configuration questions, and then generate the project for you. Here’s what my answers looked like: If everything works, open up the project and start exploring. Inside the project folder, you’ll notice some important files and folders: /src: Contains all application business logic /src/components: Contains reusable components to build our app with /src/routes: Responsible for Qwik’s file-based routing; Each folder represents a route (can be a page or API endpoint). To make a page, drop a index.{jsx|tsx} file in the route’s folder. /src/root.tsx: This file exports the root component responsible for generating the HTML document root. Start Development Qwik uses Vite as a bundler, which is convenient because Vite has a built-in development server. It supports running our application locally, and updating the browser when files change. To start the development server, we can open our project in a terminal and execute the command npm run dev. With the dev server running, you can open the browser and head to http://localhost:5173 and you should see a very basic app. 
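To make the file-based routing described above concrete, here is a minimal sketch of a Qwik page component. The /src/routes/about/ folder is just a hypothetical example; any route folder with an index.tsx works the same way.

TypeScript
// src/routes/about/index.tsx (hypothetical route)
import { component$ } from "@builder.io/qwik";

export default component$(() => {
  // Rendered when the browser navigates to /about
  return <h1>About this app</h1>;
});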
Any time we make changes to our app, we should see those changes reflected almost immediately in the browser. Add Styling This project won’t focus too much on styling, so this section is totally optional if you want to do your own thing. To keep things simple, I’ll use Tailwind. The Qwik CLI makes it easy to add the necessary changes, by executing the terminal command, npm run qwik add. This will prompt you with several available Qwik plugins to choose from. You can use your arrow keys to move down to the Tailwind plugin and press Enter. Then it will show you the changes it will make to your codebase and ask for confirmation. As long as it looks good, you can hit Enter, once again. For my projects, I also like to have a consistent theme, so I keep a file in my GitHub to copy and paste styles from. Obviously, if you want your own theme, you can ignore this step, but if you want your project to look as amazing as mine, copy the styles from this file on GitHub into the /src/global.css file. You can replace the old styles, but leave the Tailwind directives in place. Prepare Homepage The last thing we’ll do today to get the project to a good starting point is make some changes to the homepage. This means making changes to /src/routes/index.tsx. By default, this file starts out with some very basic text and an example for modifying the HTML <head> by exporting a head variable. The changes I want to make include: Removing the head export Removing all text except the <h1>; Feel free to add your own page title text. Adding some Tailwind classes to center the content and make the <h1> larger Wrapping the content with a <main> tag to make it more semantic Adding Tailwind classes to the <main> tag to add some padding and center the contents These are all minor changes that aren’t strictly necessary, but I think they will provide a nice starting point for building out our app in the next post. Here’s what the file looks like after my changes. import { component$ } from "@builder.io/qwik"; export default component$(() => { return ( <main class="max-w-4xl mx-auto p-4"> <h1 class="text-6xl">Hi [wave emoji]</h1> </main> ); }); And in the browser, it looks like this: Conclusion That’s all we’ll cover today. Again, this post was mostly focused on getting the boilerplate stuff out of the way so that the next post can be dedicated to integrating OpenAI’s API into our project. With that in mind, I encourage you to take a moment to think about some AI app ideas that you might want to build. There will be a lot of flexibility for you to put your own spin on things. I’m excited to see what you come up with, and if you would like to explore the code in more detail, I’ll post it on my GitHub account.
Nowadays, it’s difficult to imagine a serious JavaScript-based application without the TypeScript superset. Interfaces, tuples, generics, and other features are well-known among TypeScript developers. While some advanced constructs may require a learning curve, they can significantly bolster your type safety. This article aims to introduce you to some of these advanced features. Type Guards Type guards help us get information about a type within a conditional block. There are a few simple ways to check a type: using the in, typeof, and instanceof operators, or using equality comparison (===). In this section, I’d like to pay more attention to user-defined type guards. Such a guard is a simple function whose return value is a type predicate. Let’s take a look at an example where we have base user info and a user with additional details: JavaScript type User = { name: string }; type DetailedUser = { name: string; profile: { birthday: string } } function isDetailedUser(user: User | DetailedUser) { return 'profile' in user; } function showDetails(user: User | DetailedUser) { if (isDetailedUser(user)) { console.log(user.profile); // Error: Property 'profile' does not exist on type 'User | DetailedUser'. } } The isDetailedUser function returns a boolean, but TypeScript does not yet treat that boolean as something that narrows the object's type. To achieve the desired result, we need to update the isDetailedUser function slightly, using the "user is DetailedUser" construction: JavaScript function isDetailedUser(user: User | DetailedUser): user is DetailedUser { return 'profile' in user; } Indexed Access Types There may be a case in your app where you have a large object type and want to create a new type that uses part of the original one. For example, part of our app requires only a user profile. User['profile'] extracts the desired type and assigns it to the UserProfile type. JavaScript type User = { id: string; name: string; surname: string; profile: { birthday: string; } } type UserProfile = User['profile']; What if we want to create a type based on a few properties? In this case, you can use a built-in type called Pick. JavaScript type FullName = Pick<User, 'name' | 'surname'>; // { name: string; surname: string } There are many other utility types, such as Omit, Exclude, and Extract, which may be helpful for your app. At first sight, all of them look like a kind of indexed type, but they are actually built on mapped types. Indexed Types With an Array You might have encountered a case where an app provides you with a union type, such as: JavaScript type UserRoleType = 'admin' | 'user' | 'newcomer'; Then, in another part of the app, we fetch user data and check its role. For this case, we need to create an array: JavaScript const ROLES: UserRoleType[] = ['admin', 'user', 'newcomer']; ROLES.includes(response.user_role); Looks tiring, doesn't it? We need to repeat the union-type values inside our array. It would be great to have a way to retrieve a type from an existing array and avoid the duplication. Fortunately, indexed types help here as well. First, we declare our array using a const assertion to remove the duplication and make a read-only tuple: JavaScript const ROLES = ['admin', 'user', 'newcomer'] as const; Then, using the typeof operator and the number type, we create a union type based on the array value.
JavaScript type RolesType = typeof ROLES[number]; // 'admin' | 'user' | 'newcomer' You may be confused by this solution, but as you may know, arrays are object-based constructions with numeric keys. That’s why, in this example, number is used as the index access type. Conditional Types and Infer Keyword Conditional types define a type that depends on a condition. Usually, they are used along with generics. Depending on the generic type (input type), the construction chooses the output type. For example, the built-in NonNullable TypeScript type is built on conditional types. JavaScript type NonNullable<T> = T extends null | undefined ? never : T type One = NonNullable<number>; // number type Two = NonNullable<undefined>; // never The infer keyword is used with conditional types and cannot be used outside of the 'extends' clause. It serves as a 'type variable creator.' I think it will be easier to understand by looking at a real example. Case: retrieve an async function's result type. JavaScript const fetchUser = (): Promise<{ name: string }> => { /* implementation */ } The easiest solution is to import the type declaration and assign it to a variable. Unfortunately, there are cases when the result type is declared inside the function, as in the example above. This problem can be resolved in two steps: 1. Extract the promised type. The Awaited utility type was introduced in TypeScript 4.5. For learning purposes, let's look at a simplified variant. JavaScript export type Awaited<T> = T extends Promise<infer U> ? U : T; Using conditional types and the infer keyword, we “pull out” the promised type and assign it to the name U. It's a kind of type variable declaration. If the passed type matches Promise<infer U>, the construction returns the type captured in U; otherwise, it returns the original type. 2. Get the value type from the async function. Using the built-in ReturnType, which extracts the return type of a function, together with our Awaited type, we achieve the desired result: JavaScript export type AwaitedReturnType<T extends (...args: any) => any> = Awaited<ReturnType<T>>; I hope you found this article useful. Have fun coding!
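As a quick postscript, here is how the AwaitedReturnType helper above might be applied to the earlier fetchUser example (a sketch; UserResult is just an illustrative name):

TypeScript
// Resolves to { name: string } — the value that the promise returned by fetchUser yields
type UserResult = AwaitedReturnType<typeof fetchUser>;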
Seasoned software engineers long for the good old days when web development was simple. You just needed a few files and a server to get up and running. No complicated infrastructure, no endless amount of frameworks and libraries, and no build tools. Just some ideas and some code hacked together to make an app come to life. Whether or not this romanticized past was actually as great as we think it was, developers today agree that software engineering has gotten complicated. There are too many choices with too much setup involved. In response to this sentiment, many products are providing off-the-shelf starter kits and zero-config toolchains to try to abstract away the complexity of software development. One such startup is Zipper, a company that offers an online IDE where you can create applets that run as serverless TypeScript functions in the cloud. With Zipper, you don’t have to spend time worrying about your toolchain — you can just start writing code and deploy your app within minutes. Today, we’ll be looking at a ping pong ranking app I built — once in 2018 with jQuery, MongoDB, Node.js, and Express; and once in 2023 with Zipper. We’ll examine the development process for each and see just how easy it is to build a powerful app using Zipper. Backstory First, a little context: I love to play ping pong. Every office in which I’ve worked has had a ping pong table, and for many years ping pong was an integral part of my afternoon routine. It’s a great game to relax, blow off some steam, strengthen friendships with coworkers, and reset your brain for a half hour. Those who played ping pong every day began to get a feel for who was good and who wasn’t. People would talk. A handful of people were known as the best in the office, and it was always a challenge to take them on. Being both highly competitive and a software engineer, I wanted to build an app to track who was the best ping pong player in the office. This wouldn’t be for bracket-style tournaments, but just for recording the games that were played every day by anybody. With that, we’d have a record of all the games played, and we’d be able to see who was truly the best. This was 2018, and I had a background in the MEAN/MERN stack (MongoDB, Express, Angular, React, and Node.js) and experience with jQuery before that. After dedicating a week’s worth of lunch breaks and nights to this project, I had a working ping-pong ranking app. I didn’t keep close track of my time spent working on the app, but I’d estimate it took about 10–20 hours to build. Here’s what that version of the app looked like. There was a login and signup page: Office Competition Ranking System — Home page The login page asked for your username and password to authenticate: Office Competition Ranking System — Login page Once authenticated, you could record your match by selecting your opponent and who won: Office Competition Ranking System — Record game results page You could view the leaderboard to see the current office rankings. I even included an Elo rating algorithm like they use in chess: Office Competition Ranking System — Leaderboard page Finally, you could click on any of the players to see their specific game history of wins and losses: Office Competition Ranking System — Player history page That was the app I created back in 2018 with jQuery, MongoDB, Node.js, and Express. And, I hosted it on an AWS EC2 server. Now let’s look at my experience recreating this app in 2023 using only Zipper. About Zipper Zipper is an online tool for creating applets. 
It uses TypeScript and Deno, so JavaScript and TypeScript users will feel right at home. You can use Zipper to build web services, web UIs, scheduled jobs, and even Slack or GitHub integrations. Zipper even includes auth. In short, what I find most appealing about Zipper is how quickly you can take an idea from conception to execution. It’s perfect for side projects or internal-facing apps to quickly improve a business process. Demo App Here’s the ping-pong ranking app I built with Zipper in just three hours. And that includes time reading through the docs and getting up to speed with an unfamiliar platform! First, the app requires authentication. In this case, I’m requiring users to sign in to their Zipper account: Ping pong ranking app — Authentication page Once authenticated, users can record a new ping-pong match: Ping pong ranking app — Record a new match page They can view the leaderboard: Ping pong ranking app — Leaderboard page And they can view the game history for any individual player: Ping pong ranking app — Player history page Not bad! The best part is that I didn’t have to create any of the UI components for this app. All the inputs and table outputs were handled automatically. And, the auth was created for me just by checking a box in the app settings! You can find the working app hosted publicly on Zipper. Ok, now let’s look at how I built this. Creating a New Zipper App First, I created my Zipper account by authenticating with GitHub. Then, on the main dashboard page, I clicked the Create Applet button to create my first applet. Create your first applet Next, I gave my applet a name, which became its URL. I also chose to make my code public and required users to sign in before they could run the applet. Applet configuration Then I chose to generate my app using AI, mostly because I was curious how it would turn out! This was the prompt I gave it: “I’d like to create a leaderboard ranking app for recording wins and losses in ping pong matches. Users should be able to log into the app. Then they should be able to record a match showing who the two players were and who won and who lost. Users should be able to see the leaderboard for all the players, sorted with the best players displayed at the top and the worst players displayed at the bottom. Users should also be able to view a single player to see all of their recorded matches and who they played and who won and who lost.” Applet initialization I might need to get better at prompt engineering because the output didn’t include all the features or pages I wanted. The AI-generated code included two files: a generic “hello world” main.ts file, and a view-player.ts file for viewing the match history of an individual player. main.ts file generated by AI view-player.ts file generated by AI So, the app wasn’t perfect from the get-go, but it was enough to get started. Writing the Ping Pong App Code I knew that Zipper would handle the authentication page for me, so that left three pages to write: A page to record a ping-pong match A page to view the leaderboard A page to view an individual player’s game history Record a New Ping Pong Match I started with the form to record a new ping-pong match. Below is the full main.ts file. We’ll break it down line by line right after this. 
TypeScript type Inputs = { playerOneID: string; playerTwoID: string; winnerID: string; }; export async function handler(inputs: Inputs) { const { playerOneID, playerTwoID, winnerID } = inputs; if (!playerOneID || !playerTwoID || !winnerID) { return "Error: Please fill out all input fields."; } if (playerOneID === playerTwoID) { return "Error: PlayerOne and PlayerTwo must have different IDs."; } if (winnerID !== playerOneID && winnerID !== playerTwoID) { return "Error: Winner ID must match either PlayerOne ID or PlayerTwo ID"; } const matchID = Date.now().toString(); const matchInfo = { matchID, winnerID, loserID: winnerID === playerOneID ? playerTwoID : playerOneID, }; try { await Zipper.storage.set(matchID, matchInfo); return `Thanks for recording your match. Player ${winnerID} is the winner!`; } catch (e) { return `Error: Information was not written to the database. Please try again later.`; } } export const config: Zipper.HandlerConfig = { description: { title: "Record New Ping Pong Match", subtitle: "Enter who played and who won", }, }; Each file in Zipper exports a handler function that accepts inputs as a parameter. Each of the inputs becomes a form in UI, with the input type being determined by the TypeScript type that you give it. After doing some input validation to ensure that the form was correctly filled out, I stored the match info in Zipper’s key-value storage. Each Zipper applet gets its own storage instance that any of the files in your applet can access. Because it’s a key-value storage, objects work nicely for values since they can be serialized and deserialized as JSON, all of which Zipper handles for you when reading from and writing to the database. At the bottom of the file, I’ve added a HandlerConfig to add some title and instruction text to the top of the page in the UI. With that, the first page is done. Ping pong ranking app — Record a new match page Leaderboard Next up is the leaderboard page. I’ve reproduced the leaderboard.ts file below in full: TypeScript type PlayerRecord = { playerID: string; losses: number; wins: number; }; type PlayerRecords = { [key: string]: PlayerRecord; }; type Match = { matchID: string; winnerID: string; loserID: string; }; type Matches = { [key: string]: Match; }; export async function handler() { const allMatches: Matches = await Zipper.storage.getAll(); const matchesArray: Match[] = Object.values(allMatches); const players: PlayerRecords = {}; matchesArray.forEach((match: Match) => { const { loserID, winnerID } = match; if (players[loserID]) { players[loserID].losses++; } else { players[loserID] = { playerID: loserID, losses: 0, wins: 0, }; } if (players[winnerID]) { players[winnerID].wins++; } else { players[winnerID] = { playerID: winnerID, losses: 0, wins: 0, }; } }); return Object.values(players); } export const config: Zipper.HandlerConfig = { run: true, description: { title: "Leaderboard", subtitle: "See player rankings for all recorded matches", }, }; This file contains a lot more TypeScript types than the first file did. I wanted to make sure my data structures were nice and explicit here. After that, you see our familiar handler function, but this time without any inputs. That’s because the leaderboard page doesn’t need any inputs; it just displays the leaderboard. We get all of our recorded matches from the database, and then we manipulate the data to get it into an array format of our liking. Then, simply by returning the array, Zipper creates the table UI for us, even including search functionality and column sorting. 
No UI work is needed! Finally, at the bottom of the file, you’ll see a description setup that’s similar to the one on our main page. You’ll also see the run: true property, which tells Zipper to run the handler function right away without waiting for the user to click the Run button in the UI. Ping pong ranking app — Leaderboard page Player History Alright, two down, one to go. Let’s look at the code for the view-player.ts file, which I ended up renaming to player-history.ts: TypeScript type Inputs = { playerID: string; }; type Match = { matchID: string; winnerID: string; loserID: string; }; type Matches = { [key: string]: Match; }; type FormattedMatch = { matchID: string; opponent: string; result: "Won" | "Lost"; }; export async function handler({ playerID }: Inputs) { const allMatches: Matches = await Zipper.storage.getAll(); const matchesArray: Match[] = Object.values(allMatches); const playerMatches = matchesArray.filter((match: Match) => { return playerID === match.winnerID || playerID === match.loserID; }); const formattedPlayerMatches = playerMatches.map((match: Match) => { const formattedMatch: FormattedMatch = { matchID: match.matchID, opponent: playerID === match.winnerID ? match.loserID : match.winnerID, result: playerID === match.winnerID ? "Won" : "Lost", }; return formattedMatch; }); return formattedPlayerMatches; } export const config: Zipper.HandlerConfig = { description: { title: "Player History", subtitle: "See match history for the selected player", }, }; The code for this page looks a lot like the code for the leaderboard page. We include some types for our data structures at the top. Next, we have our handler function which accepts an input for the player ID that we want to view. From there, we fetch all the recorded matches and filter them for only matches in which this player participated. After that, we manipulate the data to get it into an acceptable format to display, and we return that to the UI to get another nice auto-generated table. Ping pong ranking app — Player history page Conclusion That’s it! With just three handler functions, we’ve created a working app for tracking our ping-pong game history. This app does have some shortcomings that we could improve, but we’ll leave that as an exercise for the reader. For example, it would be nice to have a dropdown of users to choose from when recording a new match, rather than entering each player’s ID as text. Maybe we could store each player’s ID in the database and then display those in the UI as a dropdown input type. Or, maybe we’d like to turn this into a Slack integration to allow users to record their matches directly in Slack. That’s an option too! While my ping pong app isn’t perfect, I hope the takeaway here is how easy it is to get up and running with a product like Zipper. You don’t have to spend time agonizing over your app’s infrastructure when you have a simple idea that you just want to see working in production. Just get out there, start building, and deploy!
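Following on from the improvement ideas above, here is one hedged sketch of how the leaderboard handler's aggregation could be tightened up: counting the first result when a player record is created (the else branches in leaderboard.ts initialize both wins and losses to 0, so a player's first match is not counted) and sorting the returned array so the best players really do appear at the top. It reuses only the PlayerRecords and Match types already shown.

TypeScript
// Sketch of an alternative aggregation for leaderboard.ts
const addResult = (players: PlayerRecords, match: Match) => {
  const { winnerID, loserID } = match;
  // Create missing records first, then count the match for both participants
  players[winnerID] = players[winnerID] ?? { playerID: winnerID, wins: 0, losses: 0 };
  players[loserID] = players[loserID] ?? { playerID: loserID, wins: 0, losses: 0 };
  players[winnerID].wins++;
  players[loserID].losses++;
};

// After building `players`, sort by wins (ties broken by fewer losses)
const leaderboard = Object.values(players).sort(
  (a, b) => b.wins - a.wins || a.losses - b.losses
);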
For many full-stack developers, the combination of Spring Boot and React has become a staple in building dynamic business applications. Yet, while powerful, this pairing has its set of challenges. From type-related errors to collaboration hurdles, developers often find themselves navigating a maze of everyday issues. Enter Hilla, a framework that aims to simplify this landscape. If Hilla hasn't crossed your radar yet, this article will provide an overview of what it offers and how it can potentially streamline your development process when working with Spring Boot and React. Spring Boot, React, and Hilla For full-stack developers, the combination of Java on the backend and React (with TypeScript) on the frontend offers a compelling blend of reliability and dynamism. Java, renowned for its robust type system, ensures data behaves predictably, catching potential errors at compile-time. Meanwhile, TypeScript brings a similar layer of type safety to the JavaScript world, enhancing React's capabilities and ensuring components handle data as expected. However, while both Java and TypeScript offer individual type-safe havens, there's often a missing link: ensuring that this type-safety is consistent from the backend all the way to the frontend. This is where the benefits of Hilla shine, enabling End-to-End Type Safety Direct Communication Between React and Spring Services Consistent Data Validation and Type Safety End-To-End Type Safety Hilla takes type safety a step further by ensuring it spans the entire development spectrum. Developers spend less time perusing API documentation and more time coding. With automatically generated TypeScript services and data types, Hilla allows developers to explore APIs directly within their IDE. This seamless integration means that if any code is altered, whether on the frontend or backend, any inconsistencies will trigger a compile-time error, ensuring that issues are caught early and rectified. Direct Communication Between React and Spring Services With Hilla, the cumbersome process of managing endpoints or deciphering complex queries becomes a thing of the past. Developers can directly call Spring Boot services from their React client, receiving precisely what's needed. This is achieved by making a Spring @Service available to the browser using Hilla's @BrowserCallable annotation. This direct communication streamlines data exchange, ensuring that the frontend gets exactly what it expects without any unnecessary overhead. Here's how it works. First, you add @BrowserCallable annotation to your Spring Service: Java @BrowserCallable @Service public class CustomerService { public List<Customer> getCustomers() { // Fetch customers from DB or API } } Based on this annotation, Hilla auto-generates TypeScript types and clients that enable calling the Java backend in a type-checkable way from the frontend (no need to declare any REST endpoints): TypeScript function CustomerList() { // Customer type is automatically generated by Hilla const [customers, setCustomers] = useState<Customer[]>([]); useEffect(() => { CustomerService.getCustomers().then(setCustomers); }, []); return ( <ComboBox items={customers} ></ComboBox> ) } Consistent Data Validation and Type Safety One of the standout features of Hilla is its ability to maintain data validation consistency across the stack. By defining data validation rules once on the backend, Hilla auto-generates TypeScript validations for the frontend. 
This not only enhances developer productivity but also ensures that data remains consistent, regardless of where it's being processed. For instance, if a field is marked as @NotBlank in Java, Hilla ensures that the same validation is applied when this data is processed in the React frontend. Java public class Customer { @NotBlank(message = "Name is mandatory") private String name; @NotBlank(message = "Email is mandatory") @Email private String email; // Getters and setters } The Hilla useForm hook uses the generated TypeScript model to apply the validation rules to the form fields. TypeScript function CustomerForm() { const {model, field, read, submit} = useForm(CustomerModel, { onSubmit: CustomerService.saveCustomer }); return ( <div> <TextField label="Name" {...field(model.name)} /> <EmailField label="Email" {...field(model.email)} /> <Button onClick={submit}>Save</Button> </div> ) } Batteries and Guardrails Included Hilla streamlines full-stack development by offering pre-built tools, enhancing real-time capabilities, prioritizing security, and ensuring long-term adaptability. The framework provides a set of pre-built UI components designed specifically for data-intensive applications. These components range from data grids and forms to various select components and editors. Moreover, for those looking to implement real-time features, Hilla simplifies the process with its support for reactive endpoints, removing the need for manual WebSocket management. Another notable aspect of Hilla is its security integration. By default, it's connected with Spring Security, offering robust access control mechanisms to safeguard data exchanges. The framework's stateless design ensures that as user demands increase, the application remains efficient. Hilla's design approach not only streamlines the current development process but also future-proofs your app. It ensures that all components integrate seamlessly, making updates, especially transitioning from one version to another, straightforward and hassle-free. In Conclusion Navigating the complexities of full-stack development in Spring Boot and React can be complex. This article highlighted how Hilla can alleviate many of these challenges. From ensuring seamless type safety to simplifying real-time integrations and bolstering security, Hilla stands out as a comprehensive solution. Its forward-thinking design ensures that as the tech landscape evolves, your applications remain adaptable and updates remain straightforward. For those immersed in the world of Spring Boot and React, considering Hilla might be a step in the right direction. It's more than just a framework; it's a pathway to streamlined and future-ready development.
In today's data-driven world, data visualization simplifies complex information and empowers individuals to make informed decisions. One particularly valuable chart type is the Resource Chart, which facilitates efficient resource allocation. This tutorial will be your essential guide to creating dynamic resource charts using JavaScript. A resource chart is a type of Gantt chart that visualizes data about resource utilization, such as equipment, employees, and so on. It provides a comprehensive overview, making it easier to make informed decisions promptly. As an illustrative example, in this tutorial, I will represent the FIFA World Cup 2022 Qatar schedule by stadium, enabling you to track when and where each game took place. Get your coding boots ready, and by the end of this guide, you'll be well-equipped to create your own JS-based resource chart and have a valuable tool at your disposal for tracking your next favorite tournament, server status, employee project assignments, or anything else of that kind. Resource Chart To Be Crafted Are you excited to see what we're about to create? Keep reading, and you’ll learn how to craft a JavaScript resource chart like the one showcased below. Intrigued? Let's kick off this thrilling journey together! Building Resource Chart The resource chart might seem like a challenging structure at first glance, with horizontal bars representing time periods. However, I assure you that it's quite straightforward once you get the hang of it. You can construct this chart by following these four basic steps: Create a web page in HTML. Include the necessary JavaScript files. Load the data. Write some JS code to visualize the resource chart. Now, let’s delve into each step in detail. 1. Create a Web Page in HTML To begin, create a basic HTML page to host your JavaScript resource chart. Within the body of the HTML document, add an HTML block element such as <div> that will serve as the container for your upcoming chart. Give it an ID, which you'll reference later in your JavaScript code when creating the chart. To ensure the chart utilizes the correct position, define some CSS rules within the <style> block. Below is an example of a simple web page created this way. I’ve named the <div> element as “container” and adjusted its height and width to make the chart utilize the entire screen. HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>JavaScript Resource Gantt Chart</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> </head> <body> <div id="container"></div> </body> </html> 2. Include the Necessary JavaScript Files When it comes to data visualization, JavaScript charting libraries are invaluable tools. The key is to find one that not only suits your needs but also supports the specific chart type you're looking for. In this tutorial, I’ll use AnyChart, a long-living JS charting library that supports resource charts and provides comprehensive documentation, and it’s free (unless you integrate it into a commercial application). If you choose to use a different library, the overall process remains essentially the same. You have two primary options for including the necessary JavaScript files of your chosen library: downloading them and using them locally or linking to them directly via a CDN (Content Delivery Network). In this tutorial, I’ll opt for the CDN approach. Below is what it will look like when linking the required scripts in the <head> section of your HTML page. 
The chart's code will find its home within the <script> tag in the <body> section. Alternatively, you can also place it in the <head> section if that suits your project structure better. HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>JavaScript Resource Gantt Chart</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> <script src="https://cdn.anychart.com/releases/8.11.1/js/anychart-core.min.js"></script> <script src="https://cdn.anychart.com/releases/8.11.1/js/anychart-gantt.min.js"></script> <script src="https://cdn.anychart.com/releases/8.11.1/js/anychart-data-adapter.min.js"></script> </head> <body> <div id="container"></div> <script> // The place for the following chart’s code. </script> </body> </html> 3. Load the Data Now, let's load the data. In this tutorial, the schedule of the 2022 FIFA World Cup will be visualized. The data is available in JSON format on the provided GitHub gist. The data consists of a list of objects, with each object representing a stadium. You'll find details such as its name and city inside each stadium object. Additionally, there is a property called "periods," containing a list of matches that have been organized in that stadium. Each match is represented by the names of the two competing countries and the result of the match. To correctly feed this type of data into the resource chart, utilize the anychart.data.loadJsonFile() method. Below is the code snippet that loads the data: JavaScript anychart.data.loadJsonFile("https://gist.githubusercontent.com/awanshrestha/07b9144e8f2539cd192ef9a38f3ff8f5/raw/b4cfb7c27b48a0e92670a87b8f4b1607ca230a11/Fifa%2520World%2520Cup%25202022%2520Qatar%2520Stadium%2520Schedule.json"); 4. Write Some JS Code to Visualize the Resource Chart With the data loaded, you are now ready to move on and see how a few lines of JavaScript code can transform into a fully functional resource chart. Begin by adding the anychart.onDocumentReady() function encapsulates all the necessary code to ensure that your code executes only when the page is fully loaded. HTML <script> anychart.onDocumentReady(function () { // The resource chart data and code will be in this section. }); </script> Next, load the data and create a data tree. JavaScript anychart.onDocumentReady(function () { // load the data anychart.data.loadJsonFile( "https://gist.githubusercontent.com/awanshrestha/07b9144e8f2539cd192ef9a38f3ff8f5/raw/b4cfb7c27b48a0e92670a87b8f4b1607ca230a11/Fifa%2520World%2520Cup%25202022%2520Qatar%2520Stadium%2520Schedule.json", function (data) { // create a data tree var treeData = anychart.data.tree(data, 'as-table’); } ); }); Then, utilize the ganttResource() method to create the resource Gantt chart and set your data tree using the data() method. JavaScript // create a resource gantt chart var chart = anychart.ganttResource(); // set the data for the chart chart.data(treeData); Place the chart within the <div> container introduced in Step 1, and finally, draw the chart using the draw() method. JavaScript // set the container chart.container("container"); // draw the chart chart.draw(); Voila! You've successfully created a simple and fully functional resource chart using JavaScript. Take a look at how it appears in action; the interactive version of this chart is available here. For your convenience, the complete code for the basic resource chart is also provided. 
With this resource chart, you can easily visualize which matches took place in which stadiums, and you can scroll through the matches section on the right to view all the matches. But there's more to explore, so let's proceed to customize this interactive data visualization. HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>JavaScript Resource Gantt Chart</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> <script src="https://cdn.anychart.com/releases/8.11.1/js/anychart-core.min.js"></script> <script data-fr-src="https://cdn.anychart.com/releases/8.11.1/js/anychart-gantt.min.js"></script> <script src="https://cdn.anychart.com/releases/8.11.1/js/anychart-data-adapter.min.js"></script> </head> <body> <div id="container"></div> <script> anychart.onDocumentReady(function () { // load the data anychart.data.loadJsonFile( "https://gist.githubusercontent.com/awanshrestha/07b9144e8f2539cd192ef9a38f3ff8f5/raw/b4cfb7c27b48a0e92670a87b8f4b1607ca230a11/Fifa%2520World%2520Cup%25202022%2520Qatar%2520Stadium%2520Schedule.json", function (data) { // create a data tree var treeData = anychart.data.tree(data, "as-table"); // create a resource gantt chart var chart = anychart.ganttResource(); // set the data for the chart chart.data(treeData); // set the container chart.container("container"); // draw the chart chart.draw(); } ); }); </script> </body> </html> Customizing Resource Chart Now that the basic JavaScript-based resource chart is in place let's explore ways to enhance its visuals and functionality. Configure the Rows and Columns To improve the visual appeal of your resource chart, let's delve into some potential adjustments for the rows and columns. Firstly, you can set custom colors for the selected and hover states of rows and adjust the splitter position for better content visibility. Additionally, consider specifying a default row height for neat presentation and easy readability of row items. JavaScript // customize the rows chart .rowSelectedFill("#D4DFE8") .rowHoverFill("#EAEFF3") .splitterPosition(230); // set the row height chart.defaultRowHeight(50); Now, let's move on to configuring the columns. In the first column, you have the option to include a simple number hashtag "#" as the title and customize its width. For the second column, you can make the stadium name bold to give it prominence and place the city name right below the stadium name. Tailor the column width as needed to accommodate the content comfortably. JavaScript // customize the column settings: // get the data grid var dataGrid = chart.dataGrid(); // set the fixed columns mode dataGrid.fixedColumns(true); // first column dataGrid .column(0) .title("#") .width(30 .labels({ hAlign: "center" }); // second column dataGrid .column(1) .title("Stadium") .width(200) .labels() .useHtml(true) .format(function () { return ( "<strong>" + this.name.toString() + "</strong> <br>" + this.item.get("city") ); }); Add Final Scores to the Bars Now, let's enhance the resource chart by displaying match results directly on the timeline bars. This provides a quick overview without the need to refer elsewhere. To achieve this, enable labels on the periods of the timeline and apply custom styling using the useHtml() method. 
JavaScript // configure the period labels: // get the period labels var periodLabels = chart.getTimeline().periods().labels(); // set the period labels periodLabels .enabled(true) .useHtml(true) .format( "<span style='color:#fff; font-size: 12px;'>{%result}</span>" ); With this additional information on the resource bars themselves, the chart now delivers a richer set of information at a glance. Personalize the Visual Appearance For an aesthetically pleasing user experience, consider spicing up the visual aspects of the chart. Start by setting the background color of the chart to a light gray shade. JavaScript chart.background("#edeae8 0.8"); Next, access the bars as elements from the timeline and make adjustments to their fill and stroke colors. JavaScript var elements = chart.getTimeline().elements(); elements.normal().fill("#9a1032 0.8"); elements.normal().stroke("#212c68"); To take this one level further, you can use a function to dynamically fill the color of the bars based on a condition. For example, the match result can be such a condition. So, the function checks the match result, and if it's a tie, it paints the bar green; otherwise, it colors it red. This provides an interesting way to instantly discern the outcome of a match from the bar colors themselves. JavaScript // customize the color of the bars: // get the elements var elements = chart.getTimeline().elements(); // check if the current item is a tie, and if yes, color it differently elements.normal().fill(function() { var result = this.period.result; var scores = result.split("-").map(Number); if (scores[0] === scores[1]) { return "#11A055 0.8"; } else { return "#9a1032 0.8"; } }); elements.normal().stroke("#212c68"); Customize the Tooltip Now, it's time to fine-tune the tooltip for an enhanced user experience. To keep the tooltip straightforward, configure it to display the team names and match results whenever you hover over a particular bar. JavaScript // configure the tooltip var tooltip = chart.getTimeline().tooltip(); tooltip .useHtml(true) .format(function(e) { var tooltipText; if (typeof e.period === "undefined") { tooltipText = e.item.la.city; } else { var period = e.period; tooltipText = period.result; } return tooltipText; }); These subtle adjustments significantly improve the visual clarity of the presented data. And now, below is the final version of the resource chart. You can explore the interactive version of this final chart here. Feel free to explore and interact with it. For your convenience, the entire code for the final resource chart is provided below. 
HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>JavaScript Resource Gantt Chart</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> <script src="https://cdn.anychart.com/releases/8.11.1/js/anychart-core.min.js"></script> <script data-fr-src="https://cdn.anychart.com/releases/8.11.1/js/anychart-gantt.min.js"></script> <script data-fr-src="https://cdn.anychart.com/releases/8.11.1/js/anychart-data-adapter.min.js"></script> </head> <body> <div id="container"></div> <script> anychart.onDocumentReady(function () { // load the data anychart.data.loadJsonFile( "https://gist.githubusercontent.com/awanshrestha/07b9144e8f2539cd192ef9a38f3ff8f5/raw/b4cfb7c27b48a0e92670a87b8f4b1607ca230a11/Fifa%2520World%2520Cup%25202022%2520Qatar%2520Stadium%2520Schedule.json", function (data) { // create a data tree var treeData = anychart.data.tree(data, "as-table"); // create a resource gantt chart var chart = anychart.ganttResource(); // set the data for the chart chart.data(treeData); // customize the rows chart .rowSelectedFill("#D4DFE8") .rowHoverFill("#EAEFF3") .splitterPosition(230); // set the row height chart.defaultRowHeight(50); // customize the column settings: // get the data grid var dataGrid = chart.dataGrid(); // first column dataGrid .column(0) .title("#") .width(30) .labels({ hAlign: "center" }); // second column dataGrid .column(1) .title("Stadium") .width(200) .labels() .useHtml(true) .format(function () { return ( "<strong>" + this.name.toString() + "</strong> <br>" + this.item.get("city") ); }); // configure the period labels: // get the period labels var periodLabels = chart.getTimeline().periods().labels(); // set the period labels periodLabels .enabled(true) .useHtml(true) .format( "<span style='color:#fff; font-size: 12px;'>{%result}</span>" ); // configure the background of the chart chart.background("#edeae8 0.8"); // customize the color of the bars: // get the elements var elements = chart.getTimeline().elements(); // check if the current item is a tie, and if yes, color it differently elements.normal().fill(function() { var result = this.period.result; var scores = result.split("-").map(Number); if (scores[0] === scores[1]) { return "#11A055 0.8"; } else { return "#9a1032 0.8"; } }); elements.normal().stroke("#212c68"); // configure the tooltip var tooltip = chart.getTimeline().tooltip(); tooltip .useHtml(true) .format(function(e) { var tooltipText; if (typeof e.period === "undefined") { tooltipText = e.item.la.city; } else { var period = e.period; tooltipText = period.result; } return tooltipText; }); // set the container chart.container("container"); // draw the chart chart.draw(); } ); }); </script> </body> </html> Conclusion Hooray! You’ve come a long way in this journey of crafting a compelling resource chart. I hope this detailed tutorial has provided you with the understanding and skills needed to create your own JavaScript resource charts. Now it's your turn to explore more ways of how you can customize these charts to meet your unique requirements. Why not add some connectors or milestones, for example? Don't hesitate to reach out if you're stuck or have questions, and feel free to share the resource charts that you create following this guide. Let your creativity shine through your work! Happy charting!