JavaScript (JS) is an object-oriented programming language that allows engineers to build and implement complex features within web browsers. JavaScript is popular because of its versatility, and it is usually the default choice for front-end work, with other technologies brought in only when a specific capability demands them. In this Zone, we provide resources that cover popular JS frameworks, server-side applications, supported data types, and other useful topics for front-end engineers.
Implement a Geographic Distance Calculator Using TypeScript
Creating Scrolling Text With HTML, CSS, and JavaScript
As the logistics industry evolves, it requires advanced solutions to streamline operations and enhance efficiency. This case study explores the development of a truck tracker cum delivery services software built using React Native, RESTful APIs, and SQLite. The software caters to both drivers and management, providing features such as route mapping, delivery status updates, and real-time tracking. Objective The primary goal was to create a comprehensive logistics management tool that enables: Real-time truck tracking for management.Route optimization and navigation for drivers.Efficient data handling and offline support using SQLite.Seamless communication between drivers and management through APIs. Technology Stack Frontend: React Native for cross-platform mobile application development.Backend: RESTful APIs built using Node.js and Express.Database: SQLite for lightweight and offline-first data management.Third-party integrations: Google Maps API for route mapping and GPS tracking. Features Implemented Driver-Side Services Route Map The application provides an optimized route mapping feature, leveraging Google Maps API to ensure drivers follow the shortest and most efficient paths to their destinations. This reduces fuel consumption and enhances delivery times. Pickup and Drop Points Drivers can view precise pickup and drop locations directly within the app. This eliminates confusion, improves delivery accuracy, and ensures customer satisfaction. Nearby Branches For situations requiring assistance or coordination, the app displays a list of nearby company branches. Drivers can quickly locate the closest branch for support during deliveries or emergencies. Nearby Drivers and Trucks Drivers can access a map showing nearby colleagues and company trucks. This fosters better communication, enables resource sharing in emergencies, and enhances team collaboration. Management-Side Services Truck Tracking Management can track trucks in real time using GPS data integrated into the application. This feature provides visibility into vehicle locations, improving operational oversight and delivery planning. Route Maps Detailed route maps for each truck are available for management, allowing them to monitor adherence to planned routes and adjust plans dynamically if required. Pickup and Drop Statuses The app provides instant updates on pickup and drop progress. Management can view completed, pending, or delayed statuses, enabling proactive issue resolution. Delivery Statuses Comprehensive records of delivery statuses are maintained, including timestamps and proof of delivery. This helps streamline reporting, improve accountability, and enhance customer trust. Development Process 1. Requirement Analysis Collaborated with stakeholders to identify pain points in the current logistics workflow and prioritize features for the software. 2. Design and Prototyping Created wireframes and user journey maps for both driver and management interfaces.Designed a user-friendly interface leveraging React Native’s components and Material Design principles. 3. Implementation Frontend: Developed reusable React Native components for consistent UI and faster development.Backend: Created scalable REST APIs for data exchange between the application and the server.Database: Utilized SQLite for storing data locally, ensuring offline functionality and faster access times. 4. 
Testing and Quality Assurance Conducted rigorous testing to ensure: Smooth performance on both iOS and Android platforms.Accurate data synchronization between SQLite and the backend database.Proper handling of edge cases, such as network interruptions. 5. Deployment Deployed the application on both the Google Play Store and Apple App Store, following best practices for app submission. Challenges and Solutions 1. Challenge: Synchronizing Offline Data With the Central Server Scenario Drivers frequently traveled through areas with poor network coverage, resulting in unsynchronized delivery updates. This caused discrepancies in the central database and delayed status visibility for management. Tactical Solution The team implemented a conflict resolution strategy that tagged each update with a timestamp. During synchronization, the server compared timestamps to resolve conflicts, ensuring that the most recent data was retained. A background sync mechanism was also introduced, which queued updates and synchronized them automatically once the network was restored. 2. Challenge: Ensuring Accurate GPS Tracking Scenario In urban areas with tall buildings or rural areas with sparse infrastructure, GPS signals were inconsistent, leading to inaccurate truck locations and delays in delivery reporting. Tactical Solution Advanced location APIs were integrated with a fallback mechanism that switched to cell tower triangulation when GPS signals were weak. Additionally, the team implemented data smoothing algorithms to filter out erroneous location spikes, ensuring more reliable tracking data. 3. Challenge: Managing Large Datasets on iOS Devices Scenario Drivers frequently needed to access historical delivery records, causing performance issues as the local SQLite database on iOS grew in size. Tactical Solution The team utilized iOS application development best practices to optimize SQLite queries, ensuring only the necessary data was retrieved. Pagination was implemented for long lists to enhance user experience. Additionally, periodic archiving was introduced, where older records were compressed and securely stored on the server. The app provided seamless on-demand access to these archived records, ensuring optimal performance and usability on iOS devices. Outcomes Improved efficiency: Reduced manual tracking efforts by 60%.Enhanced driver experience: Simplified navigation and communication.Better decision-making: Provided real-time insights to management for strategic planning.Scalability: The modular architecture allows easy addition of new features. The truck tracker and delivery services software successfully transformed logistics operations by harnessing React Native's cross-platform capabilities, SQLite's robust offline handling, and RESTful APIs' flexibility. The application stands as a comprehensive mobile application development solution for managing Android and iOS apps, significantly impacting operational efficiency in the logistics industry. FAQs 1. Why were JavaScript, React Native, APIs, and SQLite chosen for this project? These technologies provide scalability, performance, and cross-platform compatibility, making them ideal for a logistics tracking system that needs to handle a large number of users and frequent updates. 2. How does SQLite benefit the Truck Drivers Tracker system? SQLite offers a lightweight and efficient database that can easily store local data on mobile devices, providing fast access and secure storage for crucial route information and logs. 3. 
What makes React Native ideal for the driver and management applications? React Native enables developers to build high-quality, cross-platform mobile apps with a native experience, streamlining the development process and reducing costs. 4. How do APIs ensure seamless communication in this project? APIs enable real-time data synchronization between the drivers’ mobile apps and the management dashboards, ensuring that both parties stay informed and can act on up-to-date information. 5. What scalability measures are implemented in this project? The system uses modular design, cloud services, and serverless computing to accommodate growing user bases, additional vehicles, and future upgrades.
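Returning to the offline-sync challenge described in this case study, below is a minimal sketch of the timestamp-tagged sync queue pattern. It is a hedged illustration, not the project's actual code: the `sync_queue` table (assumed to have an autoincrement `id`), the `db.run`/`db.all` methods (stand-ins for whatever SQLite wrapper the app uses, such as react-native-sqlite-storage), and the `/api/deliveries/sync` endpoint are all hypothetical names.
JavaScript
// 1. Every local update is written with a timestamp and queued for sync.
async function recordDeliveryUpdate(db, deliveryId, status) {
  const updatedAt = new Date().toISOString();
  await db.run(
    'INSERT INTO sync_queue (delivery_id, status, updated_at) VALUES (?, ?, ?)',
    [deliveryId, status, updatedAt]
  );
}

// 2. A background task drains the queue once the network is restored.
async function flushSyncQueue(db, baseUrl) {
  const pending = await db.all('SELECT * FROM sync_queue ORDER BY updated_at');
  for (const update of pending) {
    try {
      // The server compares `updated_at` with what it already has and keeps
      // the most recent value (last-write-wins conflict resolution).
      const res = await fetch(`${baseUrl}/api/deliveries/sync`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(update),
      });
      if (res.ok) {
        await db.run('DELETE FROM sync_queue WHERE id = ?', [update.id]);
      } else {
        break; // server rejected the update; stop and retry later, keeping queue order
      }
    } catch (err) {
      break; // network still unavailable; try again on the next run
    }
  }
}
The key design choice here is last-write-wins based on the update timestamp, which matches the conflict resolution approach described in the Challenges section.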
See previous Part 1 and Part 2. The relationship between your event definitions and the event streams themselves is a major design decision. One of the most common questions I get is, “Is it okay to put multiple event types in one stream? Or should we publish each event type to its own stream?” This article explores the factors that contribute to answering these questions and offers a set of recommendations that should help you find the best answer for your own use cases. Example consumer use cases include general alerting of state changes (deltas), processing sequences of events (deltas), transferring state (facts), and mixing facts and deltas. The consumer’s use case should be a top consideration when deciding how to structure your event streams. Event streams are replayable sources of data that are written only once, but that can be read many times by many different consumers. We want to make it as easy as possible for them to use the data according to their own needs. Use Case: General Alerting of Changes. Deltas work very well for general change alerting. Applications can respond to the delta events exposed from inside an application. Splitting up events so that there is only one type per stream provides high granularity and permits consumer applications to subscribe to only the deltas they care about. Use Case: Processing Sequences of Delta Events. But what if a single application needs to read several deltas, and the ordering between events is very important? Following a one-event-type-per-stream strategy introduces the risk that events may be read and processed out of order, giving inconsistent sequencing results. Stream processors, like Kafka Streams and Flink, typically contain logic to process events in both ascending timestamp and offset order, a process that I call “event scheduling.” For example, Kafka Streams uses a best-effort algorithm to select the next record to process, while Flink uses a watermarking strategy to process records based on timestamps. Be warned that not all stream processing frameworks support event scheduling, leading to wildly diverging processing orders based on race condition outcomes. At the end of the day, even event scheduling is a best-effort attempt. Out-of-order processing may still occur due to intermittent failures of the application or hardware, as well as unresolved corner cases and race conditions in the frameworks themselves. Note that this is not nearly as dire as it seems. Many (if not the vast majority) of streaming use cases aren’t that sensitive to processing order between topics. For those that are sensitive, watermarking and event scheduling tend to work pretty well in the majority of cases. And for those event sequences that need perfectly strict ordering? Well, read on. What About Strict Ordering? But what do you do if you need something with stronger guarantees? A precise and strict ordering of events may be a significant factor for your business use case. In this case, you may be better off putting all of your events into a single event stream so that your consumer receives them in the same order as they are written. You also need a consistent partitioning strategy to ensure that all events of the same key go to the same partition, as Kafka only guarantees order on a per-partition basis. Note that this technique is not about reducing the number of topics you’re using — topics are relatively cheap, and you should choose to build your topics based on the data they’re carrying and the purposes they’re meant to serve — not to simply cut down on topic count.
Apache Kafka is perfectly capable of handling thousands of topics without any problem. Single Stream, Multiple Delta Types Putting related event types into a single topic partition provides a strict incremental order for consumer processing, but it requires that all events be written by a single producer, as it needs strict control over the ordering of events. In this example, we have merged all of the adds, removes, and discount codes for the shopping cart into a single partition of a single event stream. Use Case: Processing Sequences of Delta Events Zooming back out, you can see a single consumer coupled with this stream of events. They must be able to understand and interpret each of the types in the stream. It’s important not to turn your topic into a dumping ground for multiple event types and expect your consumers to simply figure it out. Rather, the consumer must know how to process each delta type, and any new types or changes to existing types would need to be negotiated between the application owners. Use Flink SQL to Split Stream Up You can also use a stream processor like Flink to split the single cart events stream up into an event stream per delta, writing each event to a new topic. Consumers can choose to subscribe to these purpose-built delta streams, or they can subscribe to the original stream and simply filter out events they do not care about. Word of Caution A word of caution, however. This pattern can result in a very strong coupling between the producer and the consumer service. Usually, it is only suitable for applications that are intended to be strongly coupled, such as a pair of systems using event sourcing, and not for general-purpose usage. You should also ask yourself if these two applications merit separation or if they should be redesigned into a single application. Use Case: Transferring State with Facts Facts provide a capability known as Event-Carried State Transfer. Each event provides a complete view of the state of a particular entity at a particular point in time. Fact events present a much better option for transferring state, do not require the consumer to interpret a sequence of events, and offer a much looser coupling option. In this case, only a single event type is used per event stream — there is no mixing of facts from various streams. Keeping only one fact type per stream makes it much easier to transfer read-only state to any application that needs access to it. Streams of Facts effectively act as data building blocks for you to compose purpose-built applications and services for solving your business problems. Single Fact Type Per Stream The convention of one type of fact per stream shows up again when you look into the tools you can build your applications with — like Kafka Streams or Flink. In this example, a Flink SQL application materializes the item facts into a table. The query specifies the table schema, the Kafka topic source, the key column, and the key and value schema format. Flink SQL enforces a strict schema definition and will throw away incoming events that do not adhere to it. This is identical to how a relational database will throw an exception if you try to insert data that doesn’t meet the table schema requirements. Joining Disparate Fact Streams You can leverage Flink SQL’s join functionality when consuming multiple types of facts from different streams, selecting only the fields you need for your own business logic and discarding the rest. 
In this example, the Flink SQL application consumes from both inventory and item facts and selects just the ID, price, name, and stock, but only keeps records where there is at least one item in stock. The data is filtered and joined together, then emitted to the in-stock items facts stream, which can be used by any application that needs it. Best Practice: Record the Entire State in One Fact When recording an event, it’s important to keep everything that happened in a single detailed event. Consider an order (above) that consists of both a cart entity and a user entity. When creating the order event, we insert all of the cart information as well as all of the user information for the user at that point in time. We record the event as a single atomic message to maintain an accurate representation of what happened. We do not split it up into multiple events in several other topics! Consumers are free to select only the data they really want from the event, plus you can always split up the compound event. However, it is far more difficult to reconstruct the original event if you split it up from the beginning. A best practice is to give the initial event a unique ID, and then propagate it down to any derivative events. This provides event tracing. We will cover event IDs in more detail in a future post. Use Case: Mixing Facts and Deltas Consumers can also compose applications by selecting the fact streams that they need and combining them with selected deltas. This approach is best served by single types per event stream, as it allows for easy mixing of data according to each consumer's needs. Summary Single streams of single delta types make it easy for applications to respond to specific edge conditions, but they remain responsible for building up their own state and applying their own business logic. A single delta per stream can lead to difficulties when trying to create a perfect aggregation of states. It can also be challenging when trying to figure out which events need to be considered to build the aggregate. Put multiple event types in the same stream if you are concerned about strict ordering, such as building up an aggregate from a series of deltas. You’ll have all of the data necessary to compose the final state, and in precisely the same order that it was published by the producer. The downside is that you must consume and process each event, even if it’s one you don’t really care about. And finally, use a single event type for fact streams. This is identical to how you would store this information in a relational database, with a single well-defined schema per table. Your stream consumers can mix, match, and blend the fact events they need for their own use cases using tools like Kafka Streams or Flink. There’s one more part to go in this event design series. Stay tuned for the next one, where we’ll wrap it up, covering elements such as workflows, assorted best practices, and basics of schema evolution.
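Before moving on, here is a hedged illustration of the single-stream, multiple-delta-type pattern discussed above. The article does not prescribe a client library; this sketch uses kafkajs as one possible Node.js Kafka client, and the topic name, keys, and event type names are hypothetical.
JavaScript
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'cart-service', brokers: ['localhost:9092'] });

async function publishCartDeltas() {
  const producer = kafka.producer();
  await producer.connect();
  // All deltas for the same cart share the same key, so they land on the same
  // partition and are read back in exactly the order they were written.
  await producer.send({
    topic: 'cart-events',
    messages: [
      { key: 'cart-42', value: JSON.stringify({ type: 'ItemAdded', itemId: 'sku-1', qty: 2 }) },
      { key: 'cart-42', value: JSON.stringify({ type: 'DiscountApplied', code: 'SAVE10' }) },
      { key: 'cart-42', value: JSON.stringify({ type: 'ItemRemoved', itemId: 'sku-1' }) },
    ],
  });
  await producer.disconnect();
}

async function consumeCartDeltas() {
  const consumer = kafka.consumer({ groupId: 'cart-aggregator' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'cart-events', fromBeginning: true });
  await consumer.run({
    // eachMessage is invoked per record, per partition, in offset order.
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      switch (event.type) {
        case 'ItemAdded':       /* apply the add to the cart aggregate */ break;
        case 'ItemRemoved':     /* apply the remove */ break;
        case 'DiscountApplied': /* apply the discount */ break;
        default:                /* unknown type: log and skip, or fail fast */ break;
      }
    },
  });
}
Note how the consumer must understand every event type in the topic; that is exactly the producer/consumer coupling trade-off described above.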
A Gantt chart is an advanced visualization solution for project management that considerably facilitates planning, scheduling, and controlling the progress of short-, mid-, and long-term projects. Gantt charts were invented more than a hundred years ago by Henry Gantt, who made a major contribution to the development of scientific management. Decades ago, the entire procedure of implementing Gantt charts in infrastructure projects was really time-consuming. Today, we are lucky to have modern tools that greatly speed up the process. How Does a Gantt Chart Make Project Planning Easier? To make the entire process of project management easier to deal with, a Gantt chart takes care of all the complex logic and provides users with a convenient interface to handle process-related data. Thus, a Gantt chart basically has two sections: the left one with a list of tasks and subtasks and the right one with the visualized project timeline. This helps represent the whole set of interdependent tasks in a more digestible way. In this article, we are going to take a closer look at Gantt chart libraries for React that provide rich functionality, allowing us to efficiently manage really complex business processes. We do not set the goal of considering comprehensive lists of features for each of the tools, but want to focus on some interesting points characterizing the libraries we have chosen. SVAR React Gantt Chart SVAR React Gantt Chart is a free, open-source solution available under the GPLv3 license. This Gantt chart library offers an appealing UI and supplies users with advanced features for supervising tasks within a project. Key features: create, edit, and delete tasks with a sidebar form; modify tasks and dependencies directly on the chart with drag-and-drop; reorder tasks in the grid with drag-and-drop; task dependencies (end-to-start, start-to-start, end-to-end, start-to-end); hierarchical view of sub-tasks; sorting by a single or multiple columns; fully customizable task bars, tooltips, and time scale; toolbar and context menu; high performance with large data sets; light and dark themes. The list represented above is not exhaustive, as the SVAR Gantt Chart equips users with many other convenient functionalities, like zooming, flexible or fixed grid columns, touch support, etc. This open-source library offers a wide range of features and is able to cope with complex business tasks. Check the demos to see what the SVAR Gantt Chart is capable of. DHTMLX Gantt for React DHTMLX Gantt for React is a versatile solution that offers an easy way to add a feature-rich Gantt chart to a React-based application. It is distributed as a stand-alone component with flexible licensing options, prices starting from $699, and a free 30-day trial. Key features: smooth performance with high working loads (30,000+ tasks); dynamic loading and smart rendering; predefined and custom types of tasks; flexible time formatting; additional timeline elements (milestones, baselines, deadlines, constraints); project summaries with rollup tasks; advanced features (resource management, auto-scheduling, critical path, task grouping, etc.); accessibility and localization; export to PDF/PNG/MS Project; 7 built-in skins; simplified styling with CSS variables. The extensive and developer-friendly API of this UI component allows dev teams to create React Gantt charts to manage workflows of any scale and complexity. There are plenty of configuration and customization options to meet any specific project requirements.
Syncfusion React Gantt Chart Syncfusion React Gantt Chart is a task scheduling component for monitoring tasks and resources. It is part of Syncfusion Essential Studio that comes under a commercial Team License (starting from $395 per month) or a free community license, which is available under strict conditions. Key features: Configurable timeline,Full support of CRUD operation,Drag-and-drop UI,Built-in themes,Critical path support (well-suited for projects with fixed deadlines),The possibility to split and merge tasks,Resource view,Context menu and Excel-like filters,Undo/redo capabilities for reverting/reapplying actions,Virtual scrolling for large data sets,The possibility to highlight events and days. This React Gantt chart component is really feature-rich and well-suited for managing complex processes and resource allocation, although its pricing policy can be considered aggressive, and some users have noted challenges when attempting advanced customizations to fit specific needs. Kendo React Gantt Chart Kendo React Gantt Chart is a performant and customizable tool for handling large projects, which is a part of the KendoUI library. The UI component is available under a commercial license of $749 per developer with a free trial version. Key features: Tasks sorting (by task type or task start date);Filtering, including conditional filtering that can be configured;Easy data binding (a helper method converts flat data into a more complex data structure required by the Gantt chart);Task dependencies: end-to-start, start-to-start, end-to-end, start-to-end,Task editing via popup form;Customizable time slots;Time zones support,Day, week, month, year views. To sum up, we can say that along with basic features for project management, this UI component has a lot to offer for building sophisticated business apps. However, it lacks the interactivity of the drag-and-drop interface found in the tools mentioned above. DevExtreme React Gantt DevExtreme React Gantt is a configurable UI Gantt component for the fast development of React-based task management applications. This solution is distributed within the DevExtreme Complete package under the commercial license (starting from $900 per developer). A free trial is available. Key features: Move and modify tasks on the chart with drag-and-drop,Data sorting by a single or multiple columns,Column filtering and header filters with a pop-up menu,Validation of task dependencies,Export of data to PDF,Task templates that allow customizing task elements,Toolbars and a context menu for tasks,Tooltips support,Strip lines for highlighting specific time or a time interval. As you can see, the component contains a list of features that can be of interest in case you are looking for a multifunctional project management tool, just test them to check whether they are well-suited for your particular purposes. Smart React UI Gantt Chart Smart React UI Gantt Chart is one more React component that helps you add a project planning and management solution to your apps. This tool is distributed as a part of the “Smart UI” package under commercial licenses. The pricing starts from $399 per developer. 
Key features: Task editing via popup edit form,Move and modify tasks on the chart with drag-and-drop,Assign resources to tasks (timeline and diagram/histogram);Task dependencies;Filtering and sorting of tasks and resources;Tasks auto rescheduling;Built-in themes (7 in total);Export of data in different formats (PDF, Excel, TSV, CSV);Task tooltips and indicators;Localization, RTL. Smart React UI Gantt Chart contains all necessary capabilities for carrying out the management of complex projects. It offers powerful features like task auto-rescheduling and built-in themes, making it a flexible option for various project management needs. Conclusion In this article, we've explored several Gantt chart libraries for React, each offering unique capabilities for project management visualization. These solutions range from commercial offerings with extensive enterprise features to open-source alternatives. While commercial solutions like Syncfusion, DHTMLX, Kendo, DevExtreme, and Smart React UI offer comprehensive feature sets with professional support, the open-source SVAR React Gantt stands out with its free license, making it a compelling option for developers seeking a robust solution without licensing costs. When considering these libraries, check whether they fully meet your requirements in terms of the feature set, documentation and support, performance, seamless integration, data binding, and customization options. Take time to evaluate each solution against your specific project requirements to find the best fit for your development needs.
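Whichever library you pick, the data you bind to it tends to follow a similar shape: a flat list of tasks with parent links, plus a separate list of dependency links. The sketch below is a generic, hedged example of that shape; the exact field names vary by library, so treat these as illustrative placeholders and check each vendor's data-binding documentation.
JavaScript
// Illustrative only: a flat task list with parent references, the general
// shape most React Gantt components bind to (field names vary by library).
const tasks = [
  { id: 1, text: 'Website redesign', start: '2024-04-01', end: '2024-05-15', parent: 0 },
  { id: 2, text: 'Wireframes',       start: '2024-04-01', end: '2024-04-10', parent: 1, progress: 0.6 },
  { id: 3, text: 'Visual design',    start: '2024-04-11', end: '2024-04-25', parent: 1, progress: 0.2 },
  { id: 4, text: 'Implementation',   start: '2024-04-26', end: '2024-05-15', parent: 1, progress: 0 },
];

// Dependencies are usually modeled as links between task IDs, for example an
// end-to-start link meaning "Visual design starts after Wireframes end."
const links = [
  { id: 1, source: 2, target: 3, type: 'end-to-start' },
  { id: 2, source: 3, target: 4, type: 'end-to-start' },
];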
Background What Is PL/SQL? PL/SQL is a procedural language designed specifically to embrace SQL statements within its syntax. It includes procedural language elements such as conditions and loops and can handle exceptions (run-time errors). PL/SQL is native to Oracle databases, and databases like IBM DB2, PostgreSQL, and MySQL support PL/SQL constructs through compatibility features. What Is a JavaScript UDF? JavaScript UDF is Couchbase’s alternative to PL/SQL. JavaScript UDF brings JavaScript's general-purpose scripting flexibility to databases, allowing for dynamic and powerful operations across modern database systems and enhancing flexibility in data querying, processing, and transformation. Most modern databases like Couchbase, MongoDB, Snowflake, and Google BigQuery support Javascript UDF. The Problem A common problem seen by users migrating from Oracle to Couchbase is porting their PL/SQL scripts. Instead of supporting PL/SQL, Couchbase lets users construct user-defined functions in JavaScript (supported since 2021). JavaScript UDFs allow easy, intuitive manipulation of variant and JSON data. Variant objects passed to a UDF are transformed into native JavaScript types and values. The unintended consequence of this is that the majority of RDBMS that have been in existence for the last ten years have strongly encouraged developers to access the database using their procedural extensions to SQL (PL/pgSQL, PL/SQL), which support procedural constructs, integration with SQL, error handling, functions and procedures, triggers, and cursors, or at the very least, functions and procedures (like Sakila). For any attempt to move away from them, all of their scripts would need to be rewritten. Rewriting code is often a tedious task, especially when dealing with PL/SQL scripts that have been written in the 2000s and maintained since then. These scripts can be complex, often extending to thousands of lines, which can be overwhelming for the average enterprise user. Solution The ideal approach would be to develop a whole new PL/SQL evaluator, but that would require an excessive amount of engineering hours, and for the same use case, we already have a modern, stable, and fast JsEvaluator — so why support another evaluator? This makes the problem a perfect use case to leverage the ongoing advances in AI and LLMs — and that's what we have done here. We have used Generative AI models to automate the conversion of PL/SQL to JSUDF. As of June 2024, models have a limited context window, which means longer PL/SQLs get hit with the error: "This model's maximum context length is 8192 tokens. However, your messages resulted in <More-than-8192> tokens. Please reduce the length of the messages.” Note that this is for GPT4. So do we wait for AI to become more powerful and allow more tokens (like Moore’s Law, but for the AI’s context-length-vs-precision)? No: that’s where the ANTLR parser generator tool comes in. ANTLR is well-known to be used for Compiler and Interpreter Development. That way we can break the big script into smaller units that can be translated independently. So now are we building a transpiler? Well, yes and no. The stages in a transpiler are as follows: Lexical analysis (tokenization)Syntactic analysis (parsing)Semantic analysisIntermediate Representation (IR) generationOptimization (optional)Target code generation How the AI Translator Works Steps 1 and 2 are done using ANTLR. 
We use ANTLR’s Listener interface to grab individual Procedure/Function/Anonymous blocks, as they are independent blocks of code. In a case where the Procedure/Function/Anonymous blocks themselves exceed the context window, we translate at a statement level (where the LLM assumes the existence of use of variables/function calls that aren’t defined here but somewhere before). Subsequently, steps 3, 4, 5, and 6 are left to the LLM (GPT), i.e., translating each PL/SQL block into a JavaScript function to the best of its ability that also preserves the operational semantics of the block and is syntactically accurate. The results are surprisingly quite positive: the translation is 80-85% accurate. Another benefit of the solution is that we reduce hallucination by focusing on one task at a time, resulting in more accurate translations. To visualize: How to Use the Tool Link to the executable herePL/SQL To JsUDF tool Readme The executable expects the following command-line arguments: -u : Capella sign-in email-p : Capella sign-in password-cpaddr: Capella-url for chat-completions API-orgid: Organization ID in the chat-completions API path-cbhost: node-ip: cbcluster node-cbuser: cluster-user-name: cbcluster user, added through database access-cbpassword: cluster-password: cbcluster password, added through database access-cbport: query-service TLS port (usually 18093)filepath, i.e., path to the PL/SQL script that has to be translatedoutput->: In the output directory, a file with the same name as the plsql file is generated with translated JavaScript Library code. For example, cat example1.sql: PLSQL DECLARE x NUMBER := 0; counter NUMBER := 0; BEGIN FOR i IN 1..4 LOOP x := x + 1000; counter := counter + 1; INSERT INTO temp VALUES (x, counter, 'in OUTER loop'); --start an inner block DECLARE x NUMBER := 0; -- this is a local version of x BEGIN FOR i IN 1..4 LOOP x := x + 1; -- this increments the local x counter := counter + 1; INSERT INTO temp VALUES (x, counter, 'inner loop'); END LOOP; END; END LOOP; COMMIT; END; To briefly explain the above script, an outer loop runs for 4 iterations, incrementing x by 1000, counter by 1. The inner loop runs for 4 iterations, incrementing x by 1, counter by 1. Running the translator on the test PL/SQL: Shell ./plsql-to-jsudf -u «capella-signin-mailid» -p «capella-signin-password» -cpaddr https://api.cloud.couchbase.com -orgid «capella-organisation-id» -cbhost «hostname of data node» -cbuser «cbcluster username» -cbpassword «cbcluster password» -cbport 18093 ./translator/test/plsql/example1.sql Output JsUDF: cat output/example1.js: JavaScript function nestedloop(){ var x = 0; var counter = 0; var querybegin = BEGIN WORK; querybegin.close(); for (var i = 1; i <= 4; i++){ x = x + 1000; counter = counter + 1; var params = [x, counter]; var query = N1QL('INSERT INTO test.testscope.temp VALUES (uuid(),{"val1":$1,"val2":$2,"val3":"in OUTER loop"})',params); query.close(); var x_inner = 0; for (var j = 1; j <= 4; j++){ x_inner = x_inner + 1; counter = counter + 1; var params_inner = [x_inner, counter]; var query_inner = N1QL('INSERT INTO test.testscope.temp VALUES (uuid(),{"val1":$1,"val2":$2,"val3":"inner loop"})',params_inner); query_inner.close(); } } var querycommit = COMMIT WORK; querycommit.close(); } The translated script has a function nestedloop (name generated by LLM) that does exactly what the original Anonymous PL/SQL block specifies. Side note: For named functions/procedures, translated JS functions will have the same name. 
For anonymous blocks, the LLM uses a name it comes up with. Known Issues PL/SQL and JS are two different languages, and the way they are supported in Oracle and Couchbase doesn’t allow for a clean direct mapping between the two. Below are some limitations we discovered and the workarounds we have implemented for the same: 1. console.log Is Not Supported DBMS_OUTPUT.PUT built-in procedure and two other similar built-ins, DBMS_OUTPUT.PUT_LINE and DBMS_OUTPUT.NEW_LINE are translated to console.log(), but console.log is a browser API and is not supported by Couchbase's JavaScript evaluation implementation. This has been a frequent ask, considering the Couchbase eventing function does support print() statements but not in JavaScript UDFs. Workaround Users are expected to create a logging bucket. Logs are inserted as part of a document INSERT into the `default`.`default` collection. The document would look something like this: { "udf": «func-name», "log": «argument to console.log», // the actual log line "time": «current ISO time string» } The user can look at his logs by selecting logging: SELECT * FROM logging WHERE udf= "«func-name»"; SELECT * FROM logging WHERE time BETWEEN "«date1»" AND "«date2»"; Example: Original BEGIN DBMS.OUTPUT.PUT("Hello world!"); END; / Translation JavaScript function helloWorld() { // workaround for console.log("Hello world!"); var currentDate = new Date(); var utcISOString = currentDate.toISOString(); var params = [utcISOString,'anonymousblock1',"Hello world!"]; var logquery = N1QL('INSERT INTO logging VALUES(UUID(),{"udf":$2, "log":$3, "time":$1}, {"expiration": 5*24*60*60 })', params); logquery.close(); } This is already implemented in the tool. To view the log: EXECUTE FUNCTION helloWorld(); "results": [ null ] CREATE PRIMARY INDEX ON logging; "results": [ ] SELECT * FROM logging; "results": [ {"logging":{"log":"Hello world!","time":"2024-06-26T09:20:56.000Z","udf":"anonymousblock1"} ] 2. Cross-Package Function Calls Procedures/Functions listed in the package specification are global and can be used from other packages via «package_name».«public_procedure/function». However, the same is not true for a JavaScript Library in Couchbase, as import-export constructs are not supported by Couchbase's JavaScript evaluation implementation. Workaround In case of an interlibrary function call «lib_name».«function»(), the user is expected to have the referenced library «lib_name» already created; you can verify this via GET /evaluator/v1/libraries.The referenced function «function» also is expected to be created as a global UDF; this can be verified via GET /admin/functions_cache or select system:functions keyspace. This way we can access the function via n1ql. 
Example: math_utils Package CREATE OR REPLACE PACKAGE math_utils AS -- Public function to add two numbers FUNCTION add_numbers(p_num1 NUMBER, p_num2 NUMBER) RETURN NUMBER; END math_utils; / CREATE OR REPLACE PACKAGE BODY math_utils AS FUNCTION add_numbers(p_num1 NUMBER, p_num2 NUMBER) RETURN NUMBER IS BEGIN RETURN p_num1 + p_num2; END add_numbers; END math_utils; / show_sum package CREATE OR REPLACE PACKAGE show_sum AS -- Public procedure to display the sum of two numbers PROCEDURE display_sum(p_num1 NUMBER, p_num2 NUMBER); END show_sum; / CREATE OR REPLACE PACKAGE BODY show_sum AS PROCEDURE display_sum(p_num1 NUMBER, p_num2 NUMBER) IS v_sum NUMBER; BEGIN -- Calling the add_numbers function from math_utils package v_sum := math_utils.add_numbers(p_num1, p_num2); -- Displaying the sum using DBMS_OUTPUT.PUT_LINE DBMS_OUTPUT.PUT_LINE('The sum of ' || p_num1 || ' and ' || p_num2 || ' is ' || v_sum); END display_sum; END show_sum; / Translated code: function show_sum(a, b) { var sum_result; // Workaround for cross library function call math_utils.add_numbers(a, b) var crossfunc = N1QL("EXECUTE FUNCTION add_numbers($1,$2)",[a, b]) var crossfuncres = [] for(const doc of crossfunc) { crossfuncres.push(doc); } // actual replacement for math_utils.add_numbers(a, b) sum_result = crossfuncres[0]; // workaround for console.log('The sum of ' + a + ' and ' + b + ' is: ' + sum_result); var currentDate = new Date(); var utcISOString = currentDate.toISOString(); var params = [utcISOString,'SHOW_SUM','The sum of ' + a + ' and ' + b + ' is: ' + sum_result]; var logquery = N1QL('INSERT INTO logging VALUES(UUID(),{"udf":$2, "log":$3, "time":$1}, {"expiration": 5*24*60*60 })', params); logquery.close(); } It is auto-handled by the program — with a warning that it should be verified by a human set of eyes! 3. Global Variables PL/SQL supports package level and session level global variables, but global variables are not supported in JsUDF deliberately by design, as this causes concern for memory leaks. Workaround The suggested workaround requires manual tweaking of the generated translation. For example: CREATE OR REPLACE PACKAGE global_vars_pkg AS -- Global variable declarations g_counter NUMBER := 0; g_message VARCHAR2(100) := 'Initial Message'; -- Public procedure declarations PROCEDURE increment_counter; PROCEDURE set_message(p_message VARCHAR2); PROCEDURE show_globals; END global_vars_pkg; / CREATE OR REPLACE PACKAGE BODY global_vars_pkg AS -- Procedure to increment the counter PROCEDURE increment_counter IS BEGIN g_counter := g_counter + 1; END increment_counter; -- Procedure to set the global message PROCEDURE set_message(p_message VARCHAR2) IS BEGIN g_message := p_message; END set_message; -- Procedure to display the current values of global variables PROCEDURE show_globals IS BEGIN DBMS_OUTPUT.PUT_LINE('g_counter = ' || g_counter); DBMS_OUTPUT.PUT_LINE('g_message = ' || g_message); END show_globals; END global_vars_pkg; / Any function that modifies a global variable must accept it as an argument and return it to the caller. increment_counter: function increment_counter(counter){ counter = counter + 1; return counter } Any function that only reads a global can accept it as an argument. 
show_globals: function show_globals(counter, message){ // workaround for console.log(counter); var currentDate = new Date(); var utcISOString = currentDate.toISOString(); var params = [utcISOString,'SHOW_GLOBALS',counter]; var logquery = N1QL('INSERT INTO logging VALUES(UUID(),{"udf":$2, "log":$3, "time":$1}, {"expiration": 5*24*60*60 })', params); logquery.close(); // workaround for console.log(message); var currentDate = new Date(); var utcISOString = currentDate.toISOString(); var params = [utcISOString,'SHOW_GLOBALS',message]; var logquery = N1QL('INSERT INTO logging VALUES(UUID(),{"udf":$2, "log":$3, "time":$1}, {"expiration": 5*24*60*60 })', params); logquery.close(); } Package to Library This section shows an end-to-end package-to-library conversion using the tool. Sample PL/SQL package: CREATE OR REPLACE PACKAGE emp_pkg IS PROCEDURE insert_employee( p_emp_id IN employees.emp_id%TYPE, p_first_name IN employees.first_name%TYPE, p_last_name IN employees.last_name%TYPE, p_salary IN employees.salary%TYPE ); PROCEDURE update_employee( p_emp_id IN employees.emp_id%TYPE, p_first_name IN employees.first_name%TYPE, p_last_name IN employees.last_name%TYPE, p_salary IN employees.salary%TYPE ); PROCEDURE delete_employee( p_emp_id IN employees.emp_id%TYPE ); PROCEDURE get_employee( p_emp_id IN employees.emp_id%TYPE, p_first_name OUT employees.first_name%TYPE, p_last_name OUT employees.last_name%TYPE, p_salary OUT employees.salary%TYPE ); END emp_pkg; / CREATE OR REPLACE PACKAGE BODY emp_pkg IS PROCEDURE insert_employee( p_emp_id IN employees.emp_id%TYPE, p_first_name IN employees.first_name%TYPE, p_last_name IN employees.last_name%TYPE, p_salary IN employees.salary%TYPE ) IS BEGIN INSERT INTO employees (emp_id, first_name, last_name, salary) VALUES (p_emp_id, p_first_name, p_last_name, p_salary); END insert_employee; PROCEDURE update_employee( p_emp_id IN employees.emp_id%TYPE, p_first_name IN employees.first_name%TYPE, p_last_name IN employees.last_name%TYPE, p_salary IN employees.salary%TYPE ) IS BEGIN UPDATE employees SET first_name = p_first_name, last_name = p_last_name, salary = p_salary WHERE emp_id = p_emp_id; END update_employee; PROCEDURE delete_employee( p_emp_id IN employees.emp_id%TYPE ) IS BEGIN DELETE FROM employees WHERE emp_id = p_emp_id; END delete_employee; PROCEDURE get_employee( p_emp_id IN employees.emp_id%TYPE, p_first_name OUT employees.first_name%TYPE, p_last_name OUT employees.last_name%TYPE, p_salary OUT employees.salary%TYPE ) IS BEGIN SELECT first_name, last_name, salary INTO p_first_name, p_last_name, p_salary FROM employees WHERE emp_id = p_emp_id; END get_employee; END emp_pkg; / Translation: Shell ./plsql-to-jsudf -u «capella-signin-mailid» -p «capella-signin-password» -cpaddr https://api.cloud.couchbase.com -orgid «capella-organisation-id» -cbhost «hostname of data node» -cbuser «cbcluster username» -cbpassword «cbcluster password» -cbport 18093 translator/test/plsql/blog_test.sql Code: function insert_employee(p_emp_id, p_first_name, p_last_name, p_salary){ var params = [p_emp_id, p_first_name, p_last_name, p_salary]; var query = N1QL('INSERT INTO test.testscope.employees VALUES ($1, {"emp_id":$1, "first_name":$2, "last_name":$3, "salary":$4})', params); query.close(); } function update_employee(p_emp_id, p_first_name, p_last_name, p_salary){ var params = [p_first_name, p_last_name, p_salary, p_emp_id]; var query = N1QL('UPDATE test.testscope.employees SET first_name = $1, last_name = $2, salary = $3 WHERE emp_id = $4', params); query.close(); } function
delete_employee(p_emp_id){ var querybegin=BEGIN WORK; var params = [p_emp_id]; var query= N1QL('DELETE FROM test.testscope.employees WHERE emp_id = $1',params); query.close(); var querycommit=COMMIT WORK; querycommit.close(); } function get_employee(p_emp_id){ var query = N1QL('SELECT first_name, last_name, salary FROM test.testscope.employees WHERE emp_id = $1', [p_emp_id]); var rs = []; for (const row of query) { rs.push(row); } query.close(); var p_first_name = rs[0]['first_name']; var p_last_name = rs[0]['last_name']; var p_salary = rs[0]['salary']; return {first_name: p_first_name, last_name: p_last_name, salary: p_salary}; } Let’s insert a new employee document. Create employee collection: curl -u Administrator:password http://127.0.0.1:8091/pools/default/buckets/test/scopes/testscope/collections -d name=employees Insert an Employee curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=EXECUTE FUNCTION insert_employee(1, "joe", "briggs", 10000)' -k { "requestID": "2c0854c1-d221-42e9-af47-b6aa0801a46c", "signature": null, "results": [ ], "errors": [{"code":10109,"msg":"Error executing function 'insert_employee' (blog_test:insert_employee)","reason":{"details":{"Code":" var query = N1QL('INSERT INTO test.testscope.employees VALUES ($1, {\"emp_id\":$1, \"first_name\":$2, \"last_name\":$3, \"salary\":$4})', params);","Exception":{"_level":"exception","caller":"insert_send:207","code":5070,"key":"execution.insert_key_type_error","message":"Cannot INSERT non-string key 1 of type value.intValue."},"Location":"functions/blog_test.js:5","Stack":" at insert_employee (functions/blog_test.js:5:17)"},"type":"Exceptions from JS code"}], "status": "fatal", "metrics": {"elapsedTime": "104.172666ms","executionTime": "104.040291ms","resultCount": 0,"resultSize": 0,"serviceLoad": 2,"errorCount": 1} } This errors out, and that’s ok — we can fix it manually. Reading the reason and exception: Cannot INSERT non-string key 1 of type value.intValue, ah! The key is always expected to be a string: passing insert_employee("1", "joe", "briggs", 10000) would do the trick, but it is unintuitive to expect employee_id to be a string. Let’s alter the generated code: function insert_employee(p_emp_id, p_first_name, p_last_name, p_salary){ var params = [p_emp_id.toString(), p_emp_id, p_first_name, p_last_name, p_salary]; var query = N1QL('INSERT INTO test.testscope.employees VALUES ($1, {"emp_id":$2, "first_name":$3, "last_name":$4, "salary":$5})', params); query.close(); } And recreate the UDF: curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=CREATE OR REPLACE FUNCTION insert_employee(p_emp_id, p_first_name, p_last_name, p_salary) LANGUAGE JAVASCRIPT AS "insert_employee" AT "blog_test"' -k { "requestID": "89df65ac-2026-4f42-8839-b1ce7f0ea2be", "signature": null, "results": [ ], "status": "success", "metrics": {"elapsedTime": "27.730875ms","executionTime": "27.620083ms","resultCount": 0,"resultSize": 0,"serviceLoad": 2} } Trying to insert it again: curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=EXECUTE FUNCTION insert_employee(1, "joe", "briggs", 10000)' -k { "requestID": "41fb76bf-a87f-4472-b8ba-1949789ae74b", "signature": null, "results": [ null ], "status": "success", "metrics": {"elapsedTime": "62.431667ms","executionTime": "62.311583ms","resultCount": 1,"resultSize": 4,"serviceLoad": 2} } Update an Employee Shoot! There’s a goof-up: employee 1 isn’t Joe, it’s Emily. 
Let’s update employee 1: curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=EXECUTE FUNCTION update_employee(1, "Emily", "Alvarez", 10000)' -k { "requestID": "92a0ca70-6d0d-4eb1-bf8d-0b4294ae987d", "signature": null, "results": [ null ], "status": "success", "metrics": {"elapsedTime": "100.967708ms","executionTime": "100.225333ms","resultCount": 1,"resultSize": 4,"serviceLoad": 2} } View the Employee curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=EXECUTE FUNCTION get_employee(1)' -k { "requestID": "8f180e27-0028-4653-92e0-606c80d5dabb", "signature": null, "results": [ {"first_name":"Emily","last_name":"Alvarez","salary":10000} ], "status": "success", "metrics": {"elapsedTime": "101.995584ms","executionTime": "101.879ms","resultCount": 1,"resultSize": 59,"serviceLoad": 2} } Delete the Employee Emily left. curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=EXECUTE FUNCTION delete_employee(1)' -k { "requestID": "18539991-3d97-40e2-bde3-6959200791b1", "signature": null, "results": [ ], "errors": [{"code":10109,"msg":"Error executing function 'delete_employee' (blog_test:delete_employee)","reason":{"details":{"Code":" var querycommit=N1QL('COMMIT WORK;', {}, false); ","Exception":{"_level":"exception","caller":"txcouchbase:240","cause":{"cause":{"bucket":"test","collection":"_default","document_key":"_txn:atr-988-#1b0","error_description":"Durability requirements are impossible to achieve","error_name":"DurabilityImpossible","last_connection_id":"eda95f8c35df6746/d275e8398a49e515","last_dispatched_from":"127.0.0.1:50069","last_dispatched_to":"127.0.0.1:11210","msg":"durability impossible","opaque":7,"scope":"_default","status_code":161},"raise":"failed","retry":false,"rollback":false},"code":17007,"key":"transaction.statement.commit","message":"Commit Transaction statement error"},"Location":"functions/blog_test.js:29","Stack":" at delete_employee (functions/blog_test.js:29:21)"},"type":"Exceptions from JS code"}], "status": "fatal", "metrics": {"elapsedTime": "129.02975ms","executionTime": "128.724ms","resultCount": 0,"resultSize": 0,"serviceLoad": 2,"errorCount": 1} } Again, we have an error with the generated code. Looking at the reason and exception, we can confirm that the translated code encloses delete in a transaction, which wasn’t the case in the original. For transactions, buckets need to have durability set, but this requires more than one data server; hence, the error. The fix here is to alter the code to remove the enclosing translation. 
function delete_employee(p_emp_id){ var params = [p_emp_id]; var query= N1QL('DELETE FROM test.testscope.employees WHERE emp_id = $1',params); query.close(); } curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=CREATE OR REPLACE FUNCTION delete_employee(p_emp_id) LANGUAGE JAVASCRIPT AS "delete_employee" AT "blog_test"' -k { "requestID": "e7432b82-1af8-4dc4-ad94-c34acea59334", "signature": null, "results": [ ], "status": "success", "metrics": {"elapsedTime": "31.129459ms","executionTime": "31.022ms","resultCount": 0,"resultSize": 0,"serviceLoad": 2} } curl -u Administrator:password https://127.0.0.1:18093/query/service -d 'statement=EXECUTE FUNCTION delete_employee(1)' -k { "requestID": "d440913f-58ff-4815-b671-1a72b75bb7eb", "signature": null, "results": [ null ], "status": "success", "metrics": {"elapsedTime": "33.8885ms","executionTime": "33.819042ms","resultCount": 1,"resultSize": 4,"serviceLoad": 2} } Now, all functions in the original PL/SQL work in Couchbase via JS UDFs. Yes, the example is pretty trivial, but you get the gist of how to go about using the tool to migrate your PL/SQL scripts with little manual supervision. Remember, the tool is meant to take you about 80% of the way: the remaining 20% still needs to be done by you, which is still far better than writing all of that code yourself! The Future This project is open-source, so feel free to contribute. Some ideas that were thrown at us included: a critic AI that reviews the generated code so that manual intervention is not needed at all, and improving the generated code itself (currently it is code that just works, with no attention yet given to parallelism or code reuse), as well as addressing the limitations discussed earlier. Finally, I’d like to thank Kamini Jagtiani for guiding me and Pierre Regazzoni for helping me test the conversion tool.
A data fabric is a system that links and arranges data from many sources so that it is simple to locate, utilize, and distribute. It connects everything like a network, guaranteeing that our data is constantly available, safe, and prepared for use. Assume that our data is spread across several "containers" (such as databases, cloud storage, or applications). A data fabric acts like a network of roads and pathways that connects all these containers so we can get what we need quickly, no matter where it is. On the other hand, stream processing is a method of managing data as it comes in, such as monitoring sensor updates or evaluating a live video feed. It processes data instantaneously rather than waiting to gather all of it, which enables prompt decision-making and insights. In this article, we explore how leveraging data fabric can supercharge stream processing by offering a unified, intelligent solution to manage, process, and analyze real-time data streams effectively. Access to Streaming Data in One Place Streaming data comes from many sources like IoT devices, social media, logs, or transactions, which can be a major challenge to manage. Data fabric plays an important role by connecting these sources and providing a single platform to access data, regardless of its origin. An open-source distributed event-streaming platform like Apache Kafka supports data fabric by handling real-time data streaming across various systems. It also acts as a backbone for data pipelines, enabling smooth data movement between different components of the data fabric. Several commercial platforms, such as Cloudera Data Platform (CDP), Microsoft Azure Data Factory, and Google Cloud Dataplex, are designed for end-to-end data integration and management. These platforms also offer additional features, such as data governance and machine learning capabilities. Real-Time Data Integration Streaming data often needs to be combined with historical data or data from other streams to gain meaningful insights. Data fabric integrates real-time streams with existing data in a seamless and scalable way, providing a complete picture instantly. Commercial platforms like Informatica Intelligent Data Management Cloud (IDMC) simplify complex data environments with scalable and automated data integration. They also enable the integration and management of data across diverse environments. Intelligent Processing When working with streamed data, it often arrives unstructured and raw, which reduces its initial usefulness. To make it actionable, it must undergo specific processing steps such as filtering, aggregating, or enriching. Streaming data often contains noise or irrelevant details that don’t serve the intended purpose. Filtering involves selecting only the relevant data from the stream and discarding unnecessary information. Similarly, aggregating combines multiple data points into a single summary value, which helps reduce the volume of data while retaining essential insights. Additionally, enriching adds extra information to the streamed data, making it more meaningful and useful. Data fabric plays an important role here by applying built-in intelligence (like AI/ML algorithms) to process streams on the fly, identifying patterns, anomalies, or trends in real time. Consistent Governance It is difficult to manage security, privacy, and data quality for streaming data because of the constant flow of data from various sources, frequently at fast speeds and in enormous volumes. 
Sensitive data, such as financial or personal information, may be included in streaming data; these must be safeguarded instantly without affecting functionality. Because streaming data is unstructured or semi-structured, it might be difficult to validate and clean, which could result in quality problems. By offering a common framework for managing data regulations, access restrictions, and quality standards across various and dispersed contexts, data fabric contributes to consistent governance in stream processing. As streaming data moves through the system, it ensures compliance with security and privacy laws like the CCPA and GDPR by enforcing governance rules in real time. Data fabric uses cognitive techniques, such as AI/ML, to monitor compliance, identify anomalies, and automate data classification. Additionally, it incorporates metadata management to give streaming data a clear context and lineage, assisting companies in tracking its usage, changes, and source. Data fabric guarantees that data is safe, consistent, and dependable even in intricate and dynamic processing settings by centralizing governance controls and implementing them uniformly across all data streams. The commercial Google Cloud Dataplex can be used as a data fabric tool for organizing and governing data across a distributed environment. Scalable Analytics By offering a uniform and adaptable architecture that smoothly integrates and processes data from many sources in real time, data fabric allows scalable analytics in stream processing. Through the use of distributed computing and elastic scaling, which dynamically modifies resources in response to demand, it enables enterprises to effectively manage massive volumes of streaming data. By adding historical and contextual information to streaming data, data fabric also improves analytics by allowing for deeper insights without requiring data duplication or movement. In order to ensure fast and actionable insights, data fabric's advanced AI and machine learning capabilities assist in instantly identifying patterns, trends, and irregularities. Conclusion In conclusion, a data fabric facilitates the smooth and effective management of real-time data streams, enabling organizations to make quick and informed decisions. For example, in a smart city, data streams from traffic sensors, weather stations, and public transport can be integrated in real time using a data fabric. It can process and analyze traffic patterns alongside weather conditions, providing actionable insights to traffic management systems or commuters, such as suggesting alternative routes to avoid congestion.
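To make the filtering, enrichment, and aggregation steps described above more tangible, here is a small, hedged JavaScript sketch applied to the smart-city example. The data shapes and the weather lookup are purely hypothetical; in a real data fabric these readings would arrive continuously through a streaming platform rather than as an in-memory array.
JavaScript
// Hypothetical traffic-sensor readings; a data fabric would deliver these from
// many sources through a single access layer (for example, a Kafka topic).
const readings = [
  { sensorId: 'A1', vehiclesPerMin: 42, ts: '2024-06-01T08:00:00Z' },
  { sensorId: 'A1', vehiclesPerMin: 55, ts: '2024-06-01T08:01:00Z' },
  { sensorId: 'B7', vehiclesPerMin: -1, ts: '2024-06-01T08:01:00Z' }, // invalid reading
];

// Filtering: drop noise and invalid readings.
const valid = readings.filter((r) => r.vehiclesPerMin >= 0);

// Enriching: attach contextual data (here, a hypothetical weather lookup).
const weatherBySensor = { A1: 'rain', B7: 'clear' };
const enriched = valid.map((r) => ({ ...r, weather: weatherBySensor[r.sensorId] }));

// Aggregating: reduce many data points to a summary value per sensor.
const avgBySensor = {};
for (const r of enriched) {
  if (!avgBySensor[r.sensorId]) avgBySensor[r.sensorId] = { sum: 0, count: 0 };
  avgBySensor[r.sensorId].sum += r.vehiclesPerMin;
  avgBySensor[r.sensorId].count += 1;
}
for (const id of Object.keys(avgBySensor)) {
  const { sum, count } = avgBySensor[id];
  console.log(`${id}: avg ${Math.round(sum / count)} vehicles/min`);
}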
Asynchronous programming is an essential pillar of modern web development. Since the earliest days of Ajax, developers have grappled with different techniques for handling asynchronous tasks. JavaScript’s single-threaded nature means that long-running operations — like network requests, reading files, or performing complex calculations — must be done in a manner that does not block the main thread. Early solutions relied heavily on callbacks, leading to issues like “callback hell,” poor error handling, and tangled code logic. Promises offer a cleaner, more structured approach to managing async operations. They address the shortcomings of raw callbacks by providing a uniform interface for asynchronous work, enabling easier composition, more readable code, and more reliable error handling. For intermediate web engineers who already know the basics of JavaScript, understanding promises in depth is critical to building robust, efficient, and maintainable applications. In this article, we will: Explain what a promise is and how it fits into the JavaScript ecosystem.Discuss why promises were introduced and what problems they solve.Explore the lifecycle of a promise, including its three states.Provide a step-by-step example of implementing your own simplified promise class to deepen your understanding. By the end of this article, you will have a solid grasp of how promises work and how to use them effectively in your projects. What Is a Promise? A promise is an object representing the eventual completion or failure of an asynchronous operation. Unlike callbacks — where functions are passed around and executed after a task completes — promises provide a clear separation between the asynchronous operation and the logic that depends on its result. In other words, a promise acts as a placeholder for a future value. While the asynchronous operation (such as fetching data from an API) is in progress, you can attach handlers to the promise. Once the operation completes, the promise either: Fulfilled (Resolved): The promise successfully returns a value.Rejected: The promise fails and returns a reason (usually an error).Pending: Before completion, the promise remains in a pending state, not yet fulfilled or rejected. The key advantage is that you write your logic as if the value will eventually be available. Promises enforce a consistent pattern: an asynchronous function returns a promise that can be chained and processed in a linear, top-down manner, dramatically improving code readability and maintainability. Why Do We Need Promises? Before the introduction of promises, asynchronous programming in JavaScript often relied on nesting callbacks: JavaScript getDataFromServer((response) => { parseData(response, (parsedData) => { saveData(parsedData, (saveResult) => { console.log("Data saved:", saveResult); }, (err) => { console.error("Error saving data:", err); }); }, (err) => { console.error("Error parsing data:", err); }); }, (err) => { console.error("Error fetching data:", err); }); This pattern easily devolves into what is commonly known as “callback hell” or the “pyramid of doom.” As the complexity grows, so does the difficulty of error handling, code readability, and maintainability. 
Promises solve this by flattening the structure: JavaScript getDataFromServer() .then(parseData) .then(saveData) .then((result) => { console.log("Data saved:", result); }) .catch((err) => { console.error("Error:", err); }); Notice how the .then() and .catch() methods line up vertically, making it clear what happens sequentially and where errors will be caught. This pattern reduces complexity and helps write code that is closer in appearance to synchronous logic, especially when combined with async/await syntax (which builds on promises). The Three States of a Promise A promise can be in one of three states: Pending: The initial state. The async operation is still in progress, and the final value is not available yet.Fulfilled (resolved): The async operation completed successfully, and the promise now holds a value.Rejected: The async operation failed for some reason, and the promise holds an error or rejection reason. A promise’s state changes only once: from pending to fulfilled or pending to rejected. Once settled (fulfilled or rejected), it cannot change state again. Consider the lifecycle visually: ┌──────────────────┐ | Pending | └───────┬──────────┘ | v ┌──────────────────┐ | Fulfilled | └──────────────────┘ or ┌──────────────────┐ | Rejected | └──────────────────┘ Building Your Own Promise Implementation To fully grasp how promises work, let’s walk through a simplified custom promise implementation. While you would rarely need to implement your own promise system in production (since the native Promise API is robust and well-optimized), building one for learning purposes is instructive. Below is a simplified version of a promise-like implementation. It’s not production-ready, but it shows the concepts: JavaScript const PROMISE_STATUS = { pending: "PENDING", fulfilled: "FULFILLED", rejected: "REJECTED", }; class MyPromise { constructor(executor) { this._state = PROMISE_STATUS.pending; this._value = undefined; this._handlers = []; try { executor(this._resolve.bind(this), this._reject.bind(this)); } catch (err) { this._reject(err); } } _resolve(value) { if (this._state === PROMISE_STATUS.pending) { this._state = PROMISE_STATUS.fulfilled; this._value = value; this._runHandlers(); } } _reject(reason) { if (this._state === PROMISE_STATUS.pending) { this._state = PROMISE_STATUS.rejected; this._value = reason; this._runHandlers(); } } _runHandlers() { if (this._state === PROMISE_STATUS.pending) return; this._handlers.forEach((handler) => { if (this._state === PROMISE_STATUS.fulfilled) { if (handler.onFulfilled) { try { const result = handler.onFulfilled(this._value); handler.promise._resolve(result); } catch (err) { handler.promise._reject(err); } } else { handler.promise._resolve(this._value); } } if (this._state === PROMISE_STATUS.rejected) { if (handler.onRejected) { try { const result = handler.onRejected(this._value); handler.promise._resolve(result); } catch (err) { handler.promise._reject(err); } } else { handler.promise._reject(this._value); } } }); this._handlers = []; } then(onFulfilled, onRejected) { const newPromise = new MyPromise(() => {}); this._handlers.push({ onFulfilled, onRejected, promise: newPromise }); if (this._state !== PROMISE_STATUS.pending) { this._runHandlers(); } return newPromise; } catch(onRejected) { return this.then(null, onRejected); } } // Example usage: const p = new MyPromise((resolve, reject) => { setTimeout(() => resolve("Hello from MyPromise!"), 500); }); p.then((value) => { console.log(value); // "Hello from MyPromise!" 
return "Chaining values"; }) .then((chainedValue) => { console.log(chainedValue); // "Chaining values" throw new Error("Oops!"); }) .catch((err) => { console.error("Caught error:", err); }); What’s happening here? Construction: When you create a new MyPromise(), you pass in an executor function that receives _resolve and _reject methods as arguments.State and Value: The promise starts in the PENDING state. Once resolve() is called, it transitions to FULFILLED. Once reject() is called, it transitions to REJECTED.Handlers Array: We keep a queue of handlers (the functions passed to .then() and .catch()). Before the promise settles, these handlers are stored in an array. Once the promise settles, the stored handlers run, and the results or errors propagate to chained promises.Chaining: When you call .then(), it creates a new MyPromise and returns it. Whatever value you return inside the .then() callback becomes the result of that new promise, allowing chaining. If you throw an error, it’s caught and passed down the chain to .catch().Error Handling: Similar to native promises, errors in .then() handlers immediately reject the next promise in the chain. By having a .catch() at the end, you ensure all errors are handled. While this code is simplified, it reflects the essential mechanics of promises: state management, handler queues, and chainable operations. Best Practices for Using Promises Always return promises: When writing functions that involve async work, return a promise. This makes the function’s behavior predictable and composable.Use .catch() at the end of chains: To ensure no errors go unhandled, terminate long promise chains with a .catch().Don’t mix callbacks and promises needlessly: Promises are designed to replace messy callback structures, not supplement them. If you have a callback-based API, consider wrapping it in a promise or use built-in promisification functions.Leverage utility methods: If you’re waiting on multiple asynchronous operations, use Promise.all(), Promise.race(), Promise.allSettled(), or Promise.any() depending on your use case.Migrate to async/await where possible: Async/await syntax provides a cleaner, more synchronous look. It’s generally easier to read and less prone to logical errors, but it still relies on promises under the hood. Conclusion Promises revolutionized how JavaScript developers handle asynchronous tasks. By offering a structured, composable, and more intuitive approach than callbacks, promises laid the groundwork for even more improvements, like async/await. For intermediate-level engineers, mastering promises is essential. It ensures you can write cleaner, more maintainable code and gives you the flexibility to handle complex asynchronous workflows with confidence. We covered what promises are, why they are needed, how they work, and how to use them effectively. We also explored advanced techniques like Promise.all() and wrote a simple promise implementation from scratch to illustrate the internal workings. With this knowledge, you’re well-equipped to tackle asynchronous challenges in your projects, building web applications that are more robust, maintainable, and ready for the real world.
In web development, optimizing and scaling applications has always been a challenge. React.js has had extraordinary success as a front-end tool, providing a robust way to create user interfaces. But things get complicated as applications grow, especially when they depend on multiple REST API endpoints. Concerns such as overfetching, where more data is returned than the client actually needs, can cause performance bottlenecks and a poor user experience. One solution to these challenges is adopting GraphQL with React applications. If your backend exposes multiple REST endpoints, introducing a GraphQL layer that internally calls those REST endpoints can protect your application from overfetching and streamline your frontend. In this article, you will learn how to implement this approach, its advantages and disadvantages, the challenges it introduces, and how to address them. We will also dive into practical examples of how GraphQL can improve the way you work with your data. Overfetching in REST APIs In REST APIs, overfetching occurs when the API delivers more data to the client than the client requires. This is a common problem because REST endpoints often return a fixed object or response schema. To better understand this problem, let us consider an example. Consider a user profile page that only needs to show the user’s name and email. With a typical REST API, fetching the user data might look like this: JavaScript fetch('/api/users/1') .then(response => response.json()) .then(user => { // Use the user's name and email in the UI }); The API response will include unnecessary data: JSON { "id": 1, "name": "John Doe", "profilePicture": "/images/john.jpg", "email": "john@example.com", "address": "123 Denver St", "phone": "111-555-1234", "preferences": { "newsletter": true, "notifications": true }, // ...more details } Although the application only requires the name and email fields, the API returns the whole user object. This additional data increases the payload size, consumes more bandwidth, and can slow down the application on a device with limited resources or a slow network connection. GraphQL as a Solution GraphQL addresses the overfetching problem by allowing clients to request exactly the data they need. By integrating a GraphQL server into your application, you can create a flexible and efficient data-fetching layer that communicates with your existing REST APIs. How It Works 1. GraphQL Server Setup You introduce a GraphQL server that serves as an intermediary between your React frontend and the REST APIs. 2. Schema Definition You define a GraphQL schema that specifies the data types and queries your frontend requires. 3. Resolvers Implementation You implement resolvers in the GraphQL server that fetch data from the REST APIs and return only the necessary fields. 4. Front-End Integration You update your React application to use GraphQL queries instead of direct REST API calls. This approach allows you to optimize data fetching without overhauling your existing backend infrastructure. Implementing GraphQL in a React Application Let’s look at how to set up a GraphQL server and integrate it into a React application. Install Dependencies PowerShell npm install apollo-server graphql axios Define the Schema Create a file called schema.js: JavaScript const { gql } = require('apollo-server'); const typeDefs = gql` type User { id: ID!
name: String email: String # Must match the fields requested by the frontend query } type Query { user(id: ID!): User } `; module.exports = typeDefs; This schema defines a User type and a user query that fetches a user by ID. Implement Resolvers Create a file called resolvers.js: JavaScript const resolvers = { Query: { user: async (_, { id }) => { try { const response = await fetch(`https://jsonplaceholder.typicode.com/users/${id}`); const user = await response.json(); return { id: user.id, name: user.name, email: user.email, // return only the fields the frontend needs }; } catch (error) { throw new Error(`Failed to fetch user: ${error.message}`); } }, }, }; module.exports = resolvers; The resolver for the user query fetches data from the REST API and returns only the required fields. We will use https://jsonplaceholder.typicode.com/ as our fake REST API. (Node 18+ provides a global fetch; on older Node versions, install node-fetch and require it at the top of the file.) Set Up the Server Create a server.js file: JavaScript const { ApolloServer } = require('apollo-server'); const typeDefs = require('./schema'); const resolvers = require('./resolvers'); const server = new ApolloServer({ typeDefs, resolvers, }); server.listen({ port: 4000 }).then(({ url }) => { console.log(`GraphQL Server ready at ${url}`); }); Start the server: PowerShell node server.js Your GraphQL server is now live at the URL printed in the console (http://localhost:4000/ by default), and visiting it in a browser gives you an interactive environment for exploring and testing queries. Integrating With the React Application We will now change the React application to use the GraphQL API. Install Apollo Client PowerShell npm install @apollo/client graphql Configure Apollo Client JavaScript import { ApolloClient, InMemoryCache } from '@apollo/client'; const client = new ApolloClient({ uri: 'http://localhost:4000', cache: new InMemoryCache(), }); Write the GraphQL Query JavaScript const GET_USER = gql` query GetUser($id: ID!) { user(id: $id) { id name email } } `; Now, integrate these pieces with your React app. Here is a simple React app that lets the user select a user ID and displays the result: JavaScript import { useState } from 'react'; import { ApolloClient, InMemoryCache, ApolloProvider, gql, useQuery } from '@apollo/client'; import './App.css'; // Link to the updated CSS const client = new ApolloClient({ uri: 'http://localhost:4000', // Ensure this is the correct URL for your GraphQL server cache: new InMemoryCache(), }); const GET_USER = gql` query GetUser($id: ID!) { user(id: $id) { id name email } } `; const User = ({ userId }) => { const { loading, error, data } = useQuery(GET_USER, { variables: { id: userId }, }); if (loading) return <p>Loading...</p>; if (error) return <p>Error: {error.message}</p>; return ( <div className="user-container"> <h2>{data.user.name}</h2> <p>Email: {data.user.email}</p> </div> ); }; const App = () => { const [selectedUserId, setSelectedUserId] = useState("1"); return ( <ApolloProvider client={client}> <div className="app-container"> <h1 className="title">GraphQL User Lookup</h1> <div className="dropdown-container"> <label htmlFor="userSelect">Select User ID:</label> <select id="userSelect" value={selectedUserId} onChange={(e) => setSelectedUserId(e.target.value)} > {Array.from({ length: 10 }, (_, index) => ( <option key={index + 1} value={index + 1}> {index + 1} </option> ))} </select> </div> <User userId={selectedUserId} /> </div> </ApolloProvider> ); }; export default App; Result You will see simple user details like this: [Github Link].
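Before moving on, it can help to sanity-check the GraphQL layer without the React app. The sketch below is one way to do that: it POSTs the same query to the server started above (using the same URI the Apollo Client configuration points at) and assumes Node 18+ so that fetch is available globally.

JavaScript

// query-user.js: send a raw GraphQL query to the local server
const query = `
  query GetUser($id: ID!) {
    user(id: $id) {
      name
      email
    }
  }
`;

async function main() {
  const response = await fetch('http://localhost:4000/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { id: '1' } }),
  });
  const { data } = await response.json();
  // Only the fields named in the query come back (no address, phone, or preferences)
  console.log(data.user); // e.g. { name: 'Leanne Graham', email: 'Sincere@april.biz' }
}

main().catch(console.error);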
Working With Multiple Endpoints Imagine a scenario where you need to retrieve a specific user’s posts, along with the individual comments on each post. Instead of making three separate API calls from your frontend React app and dealing with unnecessary data, you can streamline the process with GraphQL. By defining a schema and crafting a GraphQL query, you can request only the exact data your UI requires, which is an efficient request all in one. We need to fetch user data, their posts, and comments for each post from the different endpoints. We’ll use fetch to gather data from the multiple endpoints and return it via GraphQL. Update Resolvers JavaScript const fetch = require('node-fetch'); const resolvers = { Query: { user: async (_, { id }) => { try { // fetch user const userResponse = await fetch(`https://jsonplaceholder.typicode.com/users/${id}`); const user = await userResponse.json(); // fetch posts for a user const postsResponse = await fetch(`https://jsonplaceholder.typicode.com/posts?userId=${id}`); const posts = await postsResponse.json(); // fetch comments for a post const postsWithComments = await Promise.all( posts.map(async (post) => { const commentsResponse = await fetch(`https://jsonplaceholder.typicode.com/comments?postId=${post.id}`); const comments = await commentsResponse.json(); return { ...post, comments }; }) ); return { id: user.id, name: user.name, email: user.email, posts: postsWithComments, }; } catch (error) { throw new Error(`Failed to fetch user data: ${error.message}`); } }, }, }; module.exports = resolvers; Update GraphQL Schema JavaScript const { gql } = require('apollo-server'); const typeDefs = gql` type Comment { id: ID! name: String email: String body: String } type Post { id: ID! title: String body: String comments: [Comment] } type User { id: ID! name: String email: String posts: [Post] } type Query { user(id: ID!): User } `; module.exports = typeDefs; Server setup in server.js remains same. Once we update the React.js code, we get the below output: Result You will see a detailed user like this: [Github Link]. Benefits of This Approach Integrating GraphQL into your React application provides several advantages: Eliminating Overfetching A key feature of GraphQL is that it only fetches exactly what you request. The server only returns the requested fields and ensures that the amount of data transferred over the network is reduced by serving only what the query demands, thus improving performance. Simplifying Front-End Code GraphQL enables you to get the needed information in a single query regardless of its origin. Internally, it could be making 3 API calls to get the information. This helps to simplify your frontend code because now you don’t need to orchestrate different async requests and combine their results. Improving Developer’s Experience A strong typing and schema introspection offer better tooling and error checking than in the traditional API implementation. Furthermore, there are interactive environments where developers can build and test queries, including GraphiQL or Apollo Explorer. Addressing Complexities and Challenges This approach has some advantages but also introduces some challenges that must be managed. Additional Backend Layer The introduction of the GraphQL server creates an extra layer in your backend architecture, and if it is not managed properly, it becomes a single point of failure. Solution Pay attention to error handling and monitoring. 
Containerization and orchestration tools like Docker and Kubernetes can help manage scalability and reliability. Potential Performance Overhead The GraphQL server may make multiple REST API calls to resolve a single query, which can introduce latency and overhead to the system. Solution Cache the results to avoid making several calls to the API. Some tools, such as DataLoader, can handle the process of batching and caching requests. Conclusion "Simplicity is the ultimate sophistication" — Leonardo da Vinci Integrating GraphQL into your React application is more than just a performance optimization — it’s a strategic move toward building more maintainable, scalable, and efficient applications. By addressing overfetching and simplifying data management, you not only enhance the user experience but also empower your development team with better tools and practices. While the introduction of a GraphQL layer comes with its own set of challenges, the benefits often outweigh the complexities. By carefully planning your implementation, optimizing your resolvers, and securing your endpoints, you can mitigate potential drawbacks. Moreover, the flexibility that GraphQL offers can future-proof your application as it grows and evolves. Embracing GraphQL doesn’t mean abandoning your existing REST APIs. Instead, it allows you to leverage their strengths while providing a more efficient and flexible data access layer for your front-end applications. This hybrid approach combines the reliability of REST with the agility of GraphQL, giving you the best of both worlds. If you’re ready to take your React application to the next level, consider integrating GraphQL into your data fetching strategy. The journey might present challenges, but the rewards — a smoother development process, happier developers, and satisfied users — make it a worthwhile endeavor. Full Code Available You can find the full code for this implementation on my GitHub repository.
Metaprogramming is a powerful programming paradigm that allows code to dynamically manipulate its behavior at runtime. JavaScript, with the introduction of Proxies and the Reflect API in ES6, has taken metaprogramming capabilities to a new level, enabling developers to intercept and redefine core object operations like property access, assignment, and function invocation. This blog post dives deep into these advanced JavaScript features, explaining their syntax, use cases, and how they work together to empower dynamic programming. What Are Proxies? A Proxy in JavaScript is a wrapper that allows developers to intercept and customize fundamental operations performed on an object. These operations include getting and setting properties, function calls, property deletions, and more. Proxy Syntax JavaScript const proxy = new Proxy(target, handler); target: The object being proxied.handler: An object containing methods, known as traps, that define custom behaviors for intercepted operations. Example: Logging Property Access JavaScript const user = { name: 'Alice', age: 30 }; const proxy = new Proxy(user, { get(target, property) { console.log(`Accessing property: ${property}`); return target[property]; } }); console.log(proxy.name); // Logs: Accessing property: name → Output: Alice Key Proxy Traps Trap NameOperation InterceptedgetAccessing a property (obj.prop or obj['prop'])setAssigning a value to a property (obj.prop = value)deletePropertyDeleting a property (delete obj.prop)hasChecking property existence (prop in obj)applyFunction invocation (obj())constructCreating new instances with new (new obj()) Advanced Use Cases With Proxies 1. Input Validation JavaScript const user = { age: 25 }; const proxy = new Proxy(user, { set(target, property, value) { if (property === 'age' && typeof value !== 'number') { throw new Error('Age must be a number!'); } target[property] = value; return true; } }); proxy.age = 30; // Works fine proxy.age = '30'; // Throws Error: Age must be a number! In this example, the set trap ensures type validation before allowing assignments. 2. Reactive Systems (Similar to Vue.js Reactivity) JavaScript const data = { price: 5, quantity: 2 }; let total = 0; const proxy = new Proxy(data, { set(target, property, value) { target[property] = value; total = target.price * target.quantity; console.log(`Total updated: ${total}`); return true; } }); proxy.price = 10; // Logs: Total updated: 20 proxy.quantity = 3; // Logs: Total updated: 30 This code dynamically recalculates values whenever dependent properties are updated, mimicking the behavior of modern reactive frameworks. What Is Reflect? The Reflect API complements Proxies by providing methods that perform default behaviors for object operations, making it easier to integrate them into Proxy traps. Key Reflect Methods MethodDescriptionReflect.get(target, prop)Retrieves the value of a property.Reflect.set(target, prop, val)Sets a property value.Reflect.has(target, prop)Checks property existence (prop in obj).Reflect.deleteProperty(target, prop)Deletes a property.Reflect.apply(func, thisArg, args)Calls a function with a specified this context.Reflect.construct(target, args)Creates a new instance of a constructor. 
Example: Using Reflect for Default Behavior JavaScript const user = { age: 25 }; const proxy = new Proxy(user, { set(target, property, value) { if (property === 'age' && typeof value !== 'number') { throw new Error('Age must be a number!'); } return Reflect.set(target, property, value); // Default behavior } }); proxy.age = 28; // Sets successfully console.log(user.age); // Output: 28 Using Reflect simplifies the code by maintaining default operations while adding custom logic. Real-World Use Cases Security wrappers: Restrict access to sensitive properties.Logging and debugging: Track object changes.API data validation: Ensure strict rules for API data. Conclusion Metaprogramming with Proxies and Reflect enables developers to dynamically control and modify application behavior. Master these tools to elevate your JavaScript expertise. Happy coding!
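To round out the real-world use cases listed above, here is a minimal sketch of a security wrapper. It assumes a simple convention, used purely for illustration, that underscore-prefixed properties are private, and it combines Proxy traps with Reflect calls for the default behavior.

JavaScript

// Hide "private" (underscore-prefixed) properties behind a Proxy
function protect(target) {
  const isPrivate = (prop) => typeof prop === 'string' && prop.startsWith('_');
  return new Proxy(target, {
    get(obj, prop, receiver) {
      if (isPrivate(prop)) throw new Error(`Access denied: ${prop}`);
      return Reflect.get(obj, prop, receiver);
    },
    set(obj, prop, value, receiver) {
      if (isPrivate(prop)) throw new Error(`Cannot modify: ${prop}`);
      return Reflect.set(obj, prop, value, receiver);
    },
    has(obj, prop) {
      // Hide private properties from the `in` operator
      return isPrivate(prop) ? false : Reflect.has(obj, prop);
    },
    deleteProperty(obj, prop) {
      if (isPrivate(prop)) throw new Error(`Cannot delete: ${prop}`);
      return Reflect.deleteProperty(obj, prop);
    },
  });
}

const account = protect({ owner: 'Alice', _apiKey: 'secret-token' });
console.log(account.owner);        // "Alice"
console.log('_apiKey' in account); // false
// account._apiKey;                // would throw: Access denied: _apiKey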
Managing expenses and keeping track of receipts can be cumbersome. Digitalizing receipts and extracting product information automatically can greatly enhance efficiency. In this blog, we’ll build a Receipt Scanner App where users can scan receipts using their phone, extract data from them using OCR (Optical Character Recognition), process the extracted data with OpenAI to identify products and prices, store the data in PostgreSQL, and analyze product prices across different stores. What Does the Receipt Scanner App do? This app allows users to: Scan receipts: Users can take pictures of their receipts with their phone.Extract text: The app will use OCR to recognize the text from the receipt images.Analyze product information: With OpenAI’s natural language processing capabilities, we can intelligently extract the product names and prices from the receipt text.Store data: The extracted data is stored in a PostgreSQL database.Track prices: Users can later retrieve price ranges for products across different stores, providing insights into spending patterns and price comparisons. Tech Stack Overview We'll be using the following technologies: Frontend (Mobile) Expo - React Native: For the mobile app that captures receipt images and uploads them to the backend. Backend Node.js with Express: For handling API requests and managing interactions between the frontend, Google Cloud Vision API, OpenAI, and PostgreSQL.Google Cloud Vision API: For Optical Character Recognition (OCR) to extract text from receipt images.OpenAI GPT-4: For processing and extracting meaningful information (product names, prices, etc.) from the raw receipt text.PostgreSQL: For storing receipt and product information in a structured way. Step 1: Setting Up the Backend with Node.js and PostgreSQL 1. Install the Required Dependencies Let’s start by setting up a Node.js project that will serve as the backend for processing and storing receipt data. Navigate to your project folder and run: Shell mkdir receipt-scanner-backend cd receipt-scanner-backend npm init -y npm install express multer @google-cloud/vision openai pg body-parser cors dotenv 2. Set Up PostgreSQL We need to create a PostgreSQL database that will store information about receipts and products. Create two tables: receipts: Stores metadata about each receipt.products: Stores individual product data, including names, prices, and receipt reference. SQL CREATE TABLE receipts ( id SERIAL PRIMARY KEY, store_name VARCHAR(255), receipt_date DATE ); CREATE TABLE products ( id SERIAL PRIMARY KEY, product_name VARCHAR(255), price DECIMAL(10, 2), receipt_id INTEGER REFERENCES receipts(id) ); 3. Set Up Google Cloud Vision API Go to the Google Cloud Console, create a project, and enable the Cloud Vision API.Download your API credentials as a JSON file and save it in your backend project directory. 4. Set Up OpenAI API Create an account at Open AI and obtain your API key.Store your OpenAI API key in a .envfile like this: Shell OPENAI_API_KEY=your-openai-api-key-here 5. Write the Backend Logic Google Vision API (vision.js) This script will use the Google Cloud Vision API to extract text from the receipt image. 
Google Vision for Text Extraction (vision.js) JavaScript const vision = require('@google-cloud/vision'); const client = new vision.ImageAnnotatorClient({ keyFilename: 'path-to-your-google-vision-api-key.json', }); async function extractTextFromImage(imagePath) { const [result] = await client.textDetection(imagePath); const detections = result.textAnnotations; return detections[0]?.description || ''; } module.exports = { extractTextFromImage }; OpenAI Text Processing (openaiService.js) This service will use OpenAI GPT-4 to analyze the extracted text and identify products and their prices. JavaScript const { Configuration, OpenAIApi } = require('openai'); const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY, }); const openai = new OpenAIApi(configuration); async function processReceiptText(text) { const prompt = ` You are an AI that extracts product names and prices from receipt text. Here’s the receipt data: "${text}" Return the data as a JSON array of products with their prices, like this: [{"name": "Product1", "price": 9.99}, {"name": "Product2", "price": 4.50}] `; // GPT-4 is a chat model, so it is called through the chat completions endpoint const response = await openai.createChatCompletion({ model: 'gpt-4', messages: [{ role: 'user', content: prompt }], max_tokens: 500, }); return response.data.choices[0].message.content.trim(); } module.exports = { processReceiptText }; Setting Up Express (app.js) Now, we’ll integrate the OCR and AI processing in our Express server. This server will handle image uploads, extract text using Google Vision API, process the text with OpenAI, and store the results in PostgreSQL. JavaScript require('dotenv').config(); const express = require('express'); const multer = require('multer'); const { Pool } = require('pg'); const { extractTextFromImage } = require('./vision'); const { processReceiptText } = require('./openaiService'); const app = express(); app.use(express.json()); const pool = new Pool({ user: 'your-db-user', host: 'localhost', database: 'your-db-name', password: 'your-db-password', port: 5432, }); const upload = multer({ dest: 'uploads/' }); app.get('/product-price-range/:productName', async (req, res) => { const { productName } = req.params; try { // Query to get product details, prices, and store names const productDetails = await pool.query( `SELECT p.product_name, p.price, r.store_name, r.receipt_date FROM products p JOIN receipts r ON p.receipt_id = r.id WHERE p.product_name ILIKE $1 ORDER BY p.price ASC`, [`%${productName}%`] ); if (productDetails.rows.length === 0) { return res.status(404).json({ message: 'Product not found' }); } res.json(productDetails.rows); } catch (error) { console.error(error); res.status(500).json({ error: 'Failed to retrieve product details.' }); } }); app.post('/upload-receipt', upload.single('receipt'), async (req, res) => { try { const imagePath = req.file.path; const extractedText = await extractTextFromImage(imagePath); const processedData = await processReceiptText(extractedText); const products = JSON.parse(processedData); const receiptResult = await pool.query( 'INSERT INTO receipts (store_name, receipt_date) VALUES ($1, $2) RETURNING id', ['StoreName', new Date()] ); const receiptId = receiptResult.rows[0].id; for (const product of products) { await pool.query( 'INSERT INTO products (product_name, price, receipt_id) VALUES ($1, $2, $3)', [product.name, product.price, receiptId] ); } res.json({ message: 'Receipt processed and stored successfully.' }); } catch (error) { console.error(error); res.status(500).json({ error: 'Failed to process receipt.'
}); } }); app.listen(5000, () => { console.log('Server running on port 5000'); }); Step 2: Building the React Native Frontend Now that our backend is ready, we’ll build the React Native app for capturing and uploading receipts. 1. Install React Native and Required Libraries Plain Text npx expo init receipt-scanner-app cd receipt-scanner-app npm install axios expo-image-picker 2. Create the Receipt Scanner Component This component will allow users to capture an image of a receipt and upload it to the backend for processing. App.js JavaScript import React from 'react'; import { NavigationContainer } from '@react-navigation/native'; import { createStackNavigator } from '@react-navigation/stack'; import ProductPriceSearch from './ProductPriceSearch'; // Import the product price search screen import ReceiptUpload from './ReceiptUpload'; // Import the receipt upload screen const Stack = createStackNavigator(); export default function App() { return ( <NavigationContainer> <Stack.Navigator initialRouteName="ReceiptUpload"> <Stack.Screen name="ReceiptUpload" component={ReceiptUpload} /> <Stack.Screen name="ProductPriceSearch" component={ProductPriceSearch} /> </Stack.Navigator> </NavigationContainer> ); } ProductPriceSearch.js JavaScript import React, { useState } from 'react'; import { View, Text, TextInput, Button, FlatList, StyleSheet } from 'react-native'; import axios from 'axios'; const ProductPriceSearch = () => { const [productName, setProductName] = useState(''); const [productDetails, setProductDetails] = useState([]); const [message, setMessage] = useState(''); // Function to search for a product and retrieve its details const handleSearch = async () => { try { const response = await axios.get(`http://localhost:5000/product-price-range/${productName}`); setProductDetails(response.data); setMessage(''); } catch (error) { console.error(error); setMessage('Product not found or error retrieving data.'); setProductDetails([]); // Clear previous search results if there was an error } }; const renderProductItem = ({ item }) => ( <View style={styles.item}> <Text style={styles.productName}>Product: {item.product_name}</Text> <Text style={styles.storeName}>Store: {item.store_name}</Text> <Text style={styles.price}>Price: ${item.price}</Text> </View> ); return ( <View style={styles.container}> <Text style={styles.title}>Search Product Price by Store</Text> <TextInput style={styles.input} placeholder="Enter product name" value={productName} onChangeText={setProductName} /> <Button title="Search" onPress={handleSearch} /> {message ? 
<Text style={styles.error}>{message}</Text> : null} <FlatList data={productDetails} keyExtractor={(item, index) => index.toString()} renderItem={renderProductItem} style={styles.list} /> </View> ); }; const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', padding: 20, }, title: { fontSize: 24, textAlign: 'center', marginBottom: 20, }, input: { height: 40, borderColor: '#ccc', borderWidth: 1, padding: 10, marginBottom: 20, }, list: { marginTop: 20, }, item: { padding: 10, backgroundColor: '#f9f9f9', borderBottomWidth: 1, borderBottomColor: '#eee', marginBottom: 10, }, productName: { fontSize: 18, fontWeight: 'bold', }, storeName: { fontSize: 16, marginTop: 5, }, price: { fontSize: 16, color: 'green', marginTop: 5, }, error: { color: 'red', marginTop: 10, textAlign: 'center', }, }); export default ProductPriceSearch; ReceiptUpload.js JavaScript import React, { useState } from 'react'; import { View, Button, Image, Text, StyleSheet } from 'react-native'; import * as ImagePicker from 'expo-image-picker'; import axios from 'axios'; const ReceiptUpload = () => { const [receiptImage, setReceiptImage] = useState(null); const [message, setMessage] = useState(''); // Function to open the camera and capture a receipt image const captureReceipt = async () => { const permissionResult = await ImagePicker.requestCameraPermissionsAsync(); if (permissionResult.granted === false) { alert('Permission to access camera is required!'); return; } const result = await ImagePicker.launchCameraAsync(); if (!result.cancelled) { setReceiptImage(result.uri); } }; // Function to upload the receipt image to the backend const handleUpload = async () => { if (!receiptImage) { alert('Please capture a receipt image first!'); return; } const formData = new FormData(); formData.append('receipt', { uri: receiptImage, type: 'image/jpeg', name: 'receipt.jpg', }); try { const response = await axios.post('http://localhost:5000/upload-receipt', formData, { headers: { 'Content-Type': 'multipart/form-data' }, }); setMessage(response.data.message); } catch (error) { console.error(error); setMessage('Failed to upload receipt.'); } }; return ( <View style={styles.container}> <Text style={styles.title}>Upload Receipt</Text> <Button title="Capture Receipt" onPress={captureReceipt} /> {receiptImage && ( <Image source={{ uri: receiptImage }} style={styles.receiptImage} /> )} <Button title="Upload Receipt" onPress={handleUpload} /> {message ? <Text style={styles.message}>{message}</Text> : null} </View> ); }; const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', padding: 20, }, title: { fontSize: 24, textAlign: 'center', marginBottom: 20, }, receiptImage: { width: 300, height: 300, marginTop: 20, marginBottom: 20, }, message: { marginTop: 20, textAlign: 'center', color: 'green', }, }); export default ReceiptUpload; Explanation expo-image-picker is used to request permission to access the device's camera and to capture an image of the receipt.The captured image is displayed on the screen and then uploaded to the backend using axios. 3. Running the App To run the app: Start the Expo development server: Plain Text npx expo start Scan the QR code using the Expo Go app on your phone. The app will load, allowing you to capture and upload receipts.
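Before running the mobile app end to end, it can also help to exercise the backend routes directly. The following is a rough smoke-test sketch rather than part of the app itself; it assumes Node 18+ (for the global fetch, FormData, and Blob), a sample image saved as sample-receipt.jpg next to the script, and an example product name in the second request.

JavaScript

// test-backend.js: quick smoke test for the two API routes
const fs = require('fs');

async function main() {
  // 1. Upload a sample receipt image as multipart/form-data
  const form = new FormData();
  const image = new Blob([fs.readFileSync('./sample-receipt.jpg')], { type: 'image/jpeg' });
  form.append('receipt', image, 'receipt.jpg'); // field name must match upload.single('receipt')

  const uploadRes = await fetch('http://localhost:5000/upload-receipt', {
    method: 'POST',
    body: form,
  });
  console.log('Upload:', await uploadRes.json());

  // 2. Look up stored price information for a product name
  const priceRes = await fetch('http://localhost:5000/product-price-range/milk');
  console.log('Prices:', await priceRes.json());
}

main().catch(console.error);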
Step 3: Running the Application Start the Backend Run the backend on port 5000: Plain Text node app.js Run the React Native App From the app directory, start the Expo development server and open the app in an iOS or Android emulator (or in Expo Go on your phone): Plain Text npx expo start Once the app is running: Capture a receipt image.Upload the receipt to the backend.The backend will extract the text, process it with OpenAI, and store the data in PostgreSQL. Step 4: Next Steps Enhancements Authentication: Implement user authentication so that users can manage their personal receipts and data.Price comparison: Provide analytics and price comparison across different stores for the same product.Improve parsing: Enhance the receipt parsing logic to handle more complex receipt formats with OpenAI. Conclusion We built a Receipt Scanner App from scratch using: Expo - React Native for the frontend.Node.js, Google Cloud Vision API, and OpenAI for text extraction and data processing.PostgreSQL for storing and querying receipt data. The Receipt Scanner App we built provides users with a powerful tool to manage their receipts and gain valuable insights into their spending habits. By leveraging AI-powered text extraction and analysis, the app automates the process of capturing, extracting, and storing receipt data, saving users from the hassle of manual entry. This app allows users to: Easily scan receipts: Using their mobile phone, users can capture receipts quickly and effortlessly without needing to manually input data.Track spending automatically: Extracting product names, prices, and other details from receipts helps users keep a detailed log of their purchases, making expense tracking seamless.Compare product prices: The app can provide price ranges for products across different stores, empowering users to make smarter shopping decisions and find the best deals.Organize receipts efficiently: By storing receipts in a structured database, users can easily access and manage their purchase history. This is particularly useful for budgeting, tax purposes, or warranty claims. Overall, the Receipt Scanner App is a valuable tool for anyone looking to streamline their receipt management, track their spending patterns, and make data-driven decisions when shopping. With features like AI-powered text processing, automatic product identification, and price comparison, users benefit from a more organized, efficient, and intelligent way of managing their personal finances and shopping habits. By automating these tasks, the app frees up time and reduces errors, allowing users to focus on more important things. Whether you're tracking business expenses, managing household finances, or simply looking for the best deals on products, this app simplifies the process and adds value to everyday tasks.
Okay, so picture this: it’s 11 p.m., I’ve got a cup of coffee that’s somehow both cold and scalding (a skill I’ve mastered), and I’m spiraling down the rabbit hole of JavaScript runtimes. Yeah, I know, wild Friday night, right? But hey, when you're a software engineer, your idea of "fun" sometimes involves comparing Deno and Node.js while your cat judges you from across the room. For a little backstory on this notion, I have been juggling with Node.js for years now. It's like those worn-out clothes in your wardrobe that you just can’t seem to get rid of because they are still in working (quality) condition. It's comfortable, yet at times, you think of getting similar ones that are trendy on the market — the revised and new variants, you know. Back to the main subject, enter Deno, the modern rival that everyone’s been buzzing about. Accustomed to Node.js for years, it is only a natural instinct for me to explore the element deeply and check for myself if it is worthy of all the hype around it or if it has equal or even better runtime. So, shall we break it down to get a better hang of it? First Impressions: Who Even Names These Things? Back in the late 2000s, when technology was somewhat an infant, Node.js was present in the industry since 2009. Built on Chrome’s V8 engine, Node.js has been steadily helping us build scalable apps. You can understand it as that version of Javascript, which is highly dependable and preferred by everyone in the crowd. On the latest note, Deno was launched back in 2018. And, yes, it was also developed by the same guy, Ryan Dahl, the original creator of popular Node.js. Plot twist, right? He came back, pointed out everything he thought he messed up with Node, and then went, “Hold my coffee. I’ll fix it.” Deno was born with security, simplicity, and modern features at its core. And if you’re wondering about the name… I honestly don’t know. But Deno is an anagram of Node, so there’s that. Round 1: Security Let’s talk security because if you’re anything like me, you’ve had at least one “Oh no, I accidentally exposed an API key” moment. (We don’t talk about that project anymore.) Node.js leaves security up to the developer, which means you better know your way around .env files and permissions — or else. Deno, though? It is like one of those paranoid friends that we all have who persist in double-checking the locks. Anyhow, Deno, by default, works in a protected sandbox that does not permit your code access to the network, file system, or even the environment variables unless explicit permission is given. Here’s an example: Node.js JavaScript const fs = require('fs'); fs.writeFileSync('./hello.txt', 'Hello, World!'); console.log('File written successfully!'); Deno JavaScript const encoder = new TextEncoder(); await Deno.writeFile('hello.txt', encoder.encode('Hello, World!')); console.log('File written successfully!'); But if you try running that Deno code without permissions, you’ll get a big ol’ error message: JavaScript PermissionDenied: Requires write access to "hello.txt". Yep, Deno doesn’t mess around. You’ll need to explicitly pass flags like --allow-write when you run the script. Is it slightly annoying? Sure. But does it save you from accidentally unleashing chaos? Definitely. Round 2: Performance Now, I’m no speed freak, but when it comes to runtimes, performance matters. You want your app to respond faster than your friends when you ask, “Who’s up for pizza?” Both Node.js and Deno use the V8 engine, so they’re fast. 
But Deno is written in Rust, which gives it a slight edge in terms of performance and reliability. Rust’s memory safety features and concurrency model make it a beast under the hood. That said, Node.js has been around longer, and its performance optimizations are battle-tested. I ran some benchmarks because, well, nerd: Basic HTTP Server in Node.js: JavaScript const http = require('http'); const server = http.createServer((req, res) => { res.writeHead(200, { 'Content-Type': 'text/plain' }); res.end('Hello from Node.js!'); }); server.listen(3000, () => console.log('Node server running on port 3000')); Basic HTTP Server in Deno: JavaScript import { serve } from "https://deno.land/std/http/server.ts"; const server = serve({ port: 3000 }); console.log("Deno server running on port 3000"); for await (const req of server) { req.respond({ body: "Hello from Deno!" }); } Results? Deno was slightly faster in handling requests, but we’re talking milliseconds here. For most real-world applications, the difference won’t be game-changing—unless you’re trying to build the next Twitter (or X? Is that what we’re calling it now?). Round 3: Developer Experience Okay, this part hit me hard. If you’ve been using Node.js, you know npm is the lifeblood of your project. It’s how you install packages, manage dependencies, and occasionally yell at your screen when node_modules grows to 2 GB. Deno said, “Nah, we don’t do npm here.” Instead, it uses a decentralized module system. You import modules directly via URLs, like this: JavaScript import * as _ from "https://deno.land/x/lodash/mod.ts"; console.log(_.chunk([1, 2, 3, 4], 2)); At first, I was like, “Wait, what?” But then I realized how cool it is. No more bloated node_modules folders! No more worrying about package version mismatches! Just clean, straightforward imports. Still, I’ll admit it: I missed the convenience of npm and the sheer variety of packages it offers. Old habits die hard. A Quick Comparison Here’s a quick side-by-side to show how Deno and Node.js differ in syntax and style: Reading a File Node.js: JavaScript const fs = require('fs'); const data = fs.readFileSync('./file.txt', 'utf8'); console.log(data); Deno: JavaScript const data = await Deno.readTextFile('./file.txt'); console.log(data); Making an HTTP Request Node.js (Using axios): JavaScript const axios = require('axios'); const response = await axios.get('https://api.example.com/data'); console.log(response.data); Deno (Built-In Fetch): JavaScript const response = await fetch('https://api.example.com/data'); const data = await response.json(); console.log(data); So, What Should Be Your Pick? Let’s take time to analyze more. So, assuming that you are neck-deep working on Node.js projects, consider your priority; there is no need to switch ships if all is running fine. Node.js is now mature and has a vast ecosystem, and it can get all the jobs done. However, if you want to start afresh or build something emphasizing security aspects, Deno is worthy of consideration. It’s like Node’s cooler, more modern cousin who listens to indie bands before they get famous. For me? I will probably keep playing around with both. Node.js feels like home to me at this point, but Deno has that shiny, new-toy appeal to it. What’s more, I am actually drawn to the concept of writing code that guarantees more future-proof. With all that out of my mind, I now need to move and clean my monitor as it is currently occupied by about 90% screenshots of error pop-ups and random code snippets. Classic case, right? 
Your Turn! Have you tried Deno yet, or are you sticking with Node.js? Drop your thoughts below — I’m always up for a good tech debate (bonus points if it involves memes).