Developer Experience
With tech stacks becoming increasingly diverse and AI and automation continuing to take over everyday tasks and manual workflows, the tech industry at large is experiencing a heightened demand to support engineering teams. As a result, the developer experience is changing faster than organizations can consciously maintain. We can no longer rely on DevOps practices or tooling alone — there is even greater power recognized in improving workflows, investing in infrastructure, and advocating for developers' needs. This nuanced approach brings developer experience to the forefront, where devs can begin to regain control over their software systems, teams, and processes.

We are happy to introduce DZone's first-ever Developer Experience Trend Report, which assesses where the developer experience stands today, including team productivity, process satisfaction, infrastructure, and platform engineering. Taking all perspectives, technologies, and methodologies into account, we share our research and industry experts' perspectives on what it means to effectively advocate for developers while simultaneously balancing quality and efficiency. Come along with us as we explore this exciting chapter in developer culture.
Garbage collection in Java is something that just happens: you don't have to worry about memory management. Or do you? The garbage collector (GC) runs in the background, quietly doing its work. But this process can have a huge impact on performance. Understanding the concepts of advanced Java GC is invaluable in tuning and troubleshooting applications. There are seven types of Java garbage collectors available in the JVM, some of which are obsolete. This article will look at the details and compare the strengths and weaknesses of each. It will also look briefly at how you would go about evaluating garbage collection performance.

GC Evaluation Criteria
GC performance is evaluated on two criteria:

Throughput – Calculated as the percentage of time an application spends on actual work as opposed to time spent on GC. For example, if an application runs for 100 minutes and spends two of them in GC pauses, its throughput is 98%.
Latency – The time during which the GC pauses the application. You should look at average latency times and maximum latency times when you're evaluating performance.

Comparison of Types of Java Garbage Collectors
The table below gives details of the seven different algorithms, including the Java version where they were introduced, and the versions, if any, that use the algorithm as the default.

Algorithm | Comments | Introduced | Used as default in
Serial GC | Original GC algorithm: single-threaded; tends to have long GC pauses; now obsolete | Java 1.2 | Java 1.2 to Java 4; also in Java 5 to 8 single-core versions
Parallel GC | Multi-threaded; distributed amongst cores; high throughput but long pauses | Java 5 | Java 5 to 8 in multi-core versions
CMS GC (Concurrent Mark & Sweep) | Most work done concurrently; minimizes pauses; no compaction, therefore occasional long pauses for full GC | Java 4; deprecated in Java 9 and removed in Java 14 | None
G1 GC (Garbage First) | Heap is divided into equal-sized regions; mostly concurrent; balances between latency and throughput; best for heap size < 32GB | Java 7 | Java 9 and above
Shenandoah GC | Recommended for heap sizes > 32GB; has high CPU consumption | OpenJDK 8 and above; Oracle JDK in JDK 11 | None
ZGC | Recommended for heap size > 32GB; prone to stalls on versions < Java 21 | Java 11 | None
Epsilon GC | A do-nothing GC, used only for benchmarking applications with and without GC | Java 11 | None

G1 GC is probably the best algorithm in most cases, unless you have a very large heap (32GB or more). If that is the case, you can use Shenandoah if it is available, or ZGC if you're using Java 21 or later. ZGC can be unstable in earlier versions, and Shenandoah may not be stable in the first few Oracle releases. CMS was deprecated from Java 9 onwards because it didn't deal well with compacting the heap: fragmentation over time degraded performance and resulted in long GC pauses for compaction.

Setting and Tuning the Garbage Collector
The table below shows the types of Java garbage collectors, along with the JVM switches you'd use to set each of the GC algorithms for an application. It also contains links to tuning guides for each algorithm.

Algorithm | JVM switch to set it | How to tune it
Serial GC | -XX:+UseSerialGC | Tuning Serial GC
Parallel GC | -XX:+UseParallelGC | Tuning Parallel GC
CMS GC (Concurrent Mark & Sweep) | -XX:+UseConcMarkSweepGC | Tuning CMS GC
G1 GC (Garbage First) | -XX:+UseG1GC | Tuning G1 GC
Shenandoah GC | -XX:+UseShenandoahGC | Tuning Shenandoah GC
ZGC | -XX:+UseZGC | Tuning ZGC
Epsilon GC | -XX:+UseEpsilonGC | N/A

Evaluating GC Performance
Before you commit to an algorithm for your application, it's best to evaluate its performance.
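For example, to trial G1 with a pause-time goal and a fixed heap, you might launch your application with flags like these (a minimal sketch; the jar name and flag values are placeholders, not recommendations):

Plain Text
java -XX:+UseG1GC -Xms4g -Xmx4g -XX:MaxGCPauseMillis=200 -jar my-app.jar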
To see how a candidate collector behaves, you'll need to request a GC log and then analyze it.

1. Requesting a GC Log
Use JVM command line switches.

For Java 9 or later:

Java
java -Xlog:gc*:file=<gc-log-file-path>

For Java 8 or earlier:

Java
java -XX:+PrintGCDetails -Xloggc:<gc-log-file-path>

2. Evaluating the Log
You can open the log with any text editor, but for long-running programs, it could take hours to evaluate it. The log information looks something like this:

Java
[0.082s][info][gc,heap] Heap region size: 1M
[0.110s][info][gc ] Using G1
[0.110s][info][gc,heap,coops] Heap address: 0x00000000c8c00000, size: 884 MB, Compressed Oops mode: 32-bit
[0.204s][info][gc,heap,exit ] Heap
[0.204s][info][gc,heap,exit ] garbage-first heap total 57344K, used 1024K [0x00000000c8c00000, 0x0000000100000000)
[0.204s][info][gc,heap,exit ] region size 1024K, 2 young (2048K), 0 survivors (0K)
[0.204s][info][gc,heap,exit ] Metaspace used 3575K, capacity 4486K, committed 4864K, reserved 1056768K
[0.204s][info][gc,heap,exit ] class space used 319K, capacity 386K, committed 512K, reserved 1048576K

A good choice for quickly obtaining meaningful stats from the GC log is the GCeasy tool. It detects memory leaks and highlights long GC pauses and inefficient GC cycles, as well as making performance tuning recommendations. Below is a sample of part of the GCeasy report.

Fig: Sample of GCeasy output

Conclusion
In this article, we've looked at the different types of Java garbage collectors and learned how to invoke each algorithm on the JVM command line. We've also looked briefly at how to evaluate, monitor, and tune the garbage collector. G1 GC is a good all-around choice if you're using Java 8 and above. It was still experimental in Java 7, so it may not be stable there. If you have a very large heap size, consider Shenandoah or ZGC; again, these may not be stable in earlier versions of Java. CMS was found to be problematic, as in some cases it caused long GC pauses (as much as 5 minutes), and was therefore deprecated and then finally removed from newer versions.

For more information, you may like to read these articles: Comparing Java GC Algorithms: Which One is Best? What is Java's default GC algorithm?
This blog post covers how to build a chat history implementation using Azure Cosmos DB for NoSQL Go SDK and LangChainGo. If you are new to the Go SDK, the sample chatbot application presented in the blog serves as a practical introduction, covering basic operations like read, upsert, etc. It also demonstrates using the Azure Cosmos DB Linux-based emulator (in preview at the time of writing) for integration tests with Testcontainers for Go. Go developers looking to build AI applications can use LangChainGo, which is a framework for LLM-powered applications. It provides pluggable APIs for components like vector store, embedding, loading documents, chains (for composing multiple operations), chat history, and more. Before diving in, let's take a step back to understand the basics.

What Is Chat History, and Why Is It Important for Modern AI Applications?
A common requirement for conversational AI applications is to be able to store and retrieve messages exchanged as part of conversations. This is often referred to as "chat history." If you have used applications like ChatGPT (which also uses Azure Cosmos DB, by the way!), you may be familiar with this concept. When a user logs in, they can start chatting, and the messages exchanged as part of the conversation are saved. When they log in again, they can see their previous conversations and can continue from where they left off.

Chat history is obviously important for application end users, but let's not forget about LLMs! As smart as LLMs might seem, they cannot recall past interactions due to a lack of built-in memory (at least for now). Using chat history bridges this gap by providing previous conversations as additional context, enabling LLMs to generate more relevant and high-quality responses. This enhances the natural flow of conversations and significantly improves the user experience.

A simple example illustrates this: Suppose you ask an LLM via an API, "Tell me about Azure Cosmos DB," and it responds with a lengthy paragraph. If you then make another API call saying, "Break this down into bullet points for easier reading," the LLM might get confused because it lacks context from the previous interaction. However, if you include the earlier message as part of the context in the second API call, the LLM is more likely to provide an accurate response (though not guaranteed, as LLM outputs are inherently non-deterministic).

How to Run the Chatbot
As I mentioned earlier, the sample application is a useful way for you to explore langchaingo, the Azure Cosmos DB chat history implementation, as well as the Go SDK. Before exploring the implementation details, it's a good idea to see the application in action. Refer to the README section of the GitHub repository, which provides instructions on how to configure, run, and start conversing with the chatbot.

Application Overview
The chat application follows a straightforward domain model: users can initiate multiple conversations, and each conversation can contain multiple messages. Built in Go, the application includes both backend and frontend components.

Backend
It has multiple sub-parts:

The Azure Cosmos DB chat history implementation.
Core operations like starting a chat, sending/receiving messages, and retrieving conversation history are exposed via a REST API.
The REST API leverages a langchaingo chain to handle user messages. The chain automatically incorporates chat history to ensure past conversations are sent to the LLM.
langchaingo handles all orchestration – LLM invocation, chat history inclusion, and more without requiring manual implementation.

Frontend
It is built using JavaScript, HTML, and CSS. It is packaged as part of the Go web server (using the embed package) and invokes the backend REST APIs in response to user interactions.

Chat History Implementation Using Azure Cosmos DB
LangChainGo is a pluggable framework, including its chat history (or memory) component. To integrate Azure Cosmos DB, you need to implement the schema.ChatMessageHistory interface, which provides methods to manage the chat history:

AddMessage to add messages to a conversation (or start a new one).
Messages to retrieve all messages for a conversation.
Clear to delete all messages in a conversation.

While you can directly instantiate a CosmosDBChatMessageHistory instance and use these methods, the recommended approach is to integrate it into the langchaingo application. Below is an example of using Azure Cosmos DB chat history with an LLMChain:

Go
// Create a chat history instance
cosmosChatHistory, err := cosmosdb.NewCosmosDBChatMessageHistory(cosmosClient, databaseName, containerName, req.SessionID, req.UserID)
if err != nil {
	log.Printf("Error creating chat history: %v", err)
	sendErrorResponse(w, "Failed to create chat session", http.StatusInternalServerError)
	return
}

// Create a memory with the chat history
chatMemory := memory.NewConversationBuffer(
	memory.WithMemoryKey("chat_history"),
	memory.WithChatHistory(cosmosChatHistory),
)

// Create an LLM chain
chain := chains.LLMChain{
	Prompt:       promptsTemplate,
	LLM:          llm,
	Memory:       chatMemory,
	OutputParser: outputparser.NewSimple(),
	OutputKey:    "text",
}

From an Azure Cosmos DB point of view, note that the implementation in this example is just one of many possible options. The one shown here is based on a combination of the user ID as the partition key and the conversation ID (also sometimes referred to as the session ID) being the unique key (the id of an Azure Cosmos DB item). This allows an application to:

Get all the messages for a conversation. This is a point read using the unique ID (conversation ID) and partition key (user ID).
Add a new message to a conversation. It uses an upsert operation (instead of create) to avoid the need for a read before write.
Delete a specific conversation. It uses the delete operation to remove a conversation (and all its messages).

Although the langchaingo interface does not expose it, when integrating this as part of an application, you can also issue a separate query to get all the conversations for a user. This is also efficient since it's scoped to a single partition.

Simplify Testing With Azure Cosmos DB Emulator and Testcontainers
The sample application includes basic test cases for both the Azure Cosmos DB chat history and the main application. It is worth highlighting the use of testcontainers-go to integrate the Azure Cosmos DB Linux-based emulator Docker container. This is great for integration tests since the database is available locally and the tests run much faster (let's not forget the cost savings as well!). The icing on top is that you do not need to manage the Docker container lifecycle manually. This is taken care of as part of the test suite, thanks to the testcontainers-go API, which makes it convenient to start the container before the tests run and terminate it once they are complete. You can refer to the test cases in the sample application for more details.
Here is a snippet of how testcontainers-go is used:

Go
func setupCosmosEmulator(ctx context.Context) (testcontainers.Container, error) {
	req := testcontainers.ContainerRequest{
		Image:        emulatorImage,
		ExposedPorts: []string{emulatorPort + ":8081", "1234:1234"},
		WaitingFor:   wait.ForListeningPort(nat.Port(emulatorPort)),
	}

	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		return nil, fmt.Errorf("failed to start container: %w", err)
	}

	// Give the emulator a bit more time to fully initialize
	time.Sleep(5 * time.Second)

	return container, nil
}

If you're interested in using the Azure Cosmos DB emulator in CI pipelines, check out the blog post.

Wrapping Up
Being able to store chat history is an important part of conversational AI apps. It can serve as a great add-on to existing techniques such as RAG (retrieval-augmented generation). Do try out the chatbot application and let us know what you think!

While the implementation in the sample application is relatively simple, how you model the chat history data depends on your requirements. One such scenario is presented in this excellent blog post on how Microsoft Copilot scales to millions of users with Azure Cosmos DB. Some of your requirements might include:

Storing metadata, such as reactions (in addition to messages)
Showing the top N recent messages
Considering a chat history data retention period (using TTL)
Incorporating additional analytics (on user interactions) based on the chat history data, and more

Irrespective of the implementation, always make sure to incorporate best practices for data modeling. Refer here for guidelines.

Are you already using or planning to leverage Azure Cosmos DB for your Go applications? We would love to hear from you! Send us your questions and feedback.
Low-code was supposed to be the future. It promised faster development, simpler integrations, and the ability to build complex applications without drowning in code. And for a while, it seemed like it would deliver. But then reality hit. Developers and IT teams who embraced low-code quickly found its limitations. Instead of accelerating innovation, it created bottlenecks. Instead of freeing developers, it forced them into rigid, vendor-controlled ecosystems. So, is low-code dead? The old version of it, yes. But low-code without limits? That's where the future lies.

Where Traditional Low-Code Went Wrong
The first wave of low-code tools catered to business users. They simplified app development but introduced hard limits that made them impractical for serious enterprise work. The problems became clear:

Rigid data models. Once your needs went beyond basic CRUD operations, things fell apart.
Vendor lock-in. Customizations meant deeper dependence on the platform, making migration nearly impossible.
Limited extensibility. If the platform didn't support your use case, you were out of luck or stuck writing fragile workarounds.

Developers don't abandon low-code platforms because they hate them. They abandon them because they outgrow them.

Breaking the False Choice Between Simplicity and Power
For too long, developers have been forced to choose:

Low-code for speed, but with strict limitations.
Traditional development for control, but at the cost of slower delivery.

We reject that tradeoff. Low-code should not mean low power. It should be flexible enough to support simple applications and complex enterprise solutions without forcing developers into unnecessary constraints.

No Limits: The New Standard for Low-Code
Traditional low-code platforms treat developers as an afterthought. That's why we built a low-code platform without limits — one that gives developers full control while maintaining the speed and simplicity that makes low-code attractive.

1. Extensibility Without Barriers
If the platform doesn't support a feature, you can build it. No black-box constraints, no artificial restrictions on custom logic.

2. API-First, Not API-Restricted
Most low-code platforms force you into their way of doing things. We took the opposite approach: seamless API integrations that work with your architecture, your existing systems, and your data sources.

3. Scalability Built for Enterprises
Low-code platforms struggle with scale because they were never designed for it. Our architecture ensures multi-layered workflows, high-performance APIs, and cloud-native deployments without compromising flexibility.

4. Developer-First, Not Developer-Limited
Low-code shouldn't mean no-code. We give developers the freedom to write custom scripts, fine-tune APIs, and integrate deeply with backend systems without forcing them into a proprietary ecosystem.

Beyond Citizen Development: A Platform for Professionals
Most low-code platforms were designed for citizen developers — people with no coding experience. That's fine for simple applications, but real enterprise development requires more. We built our platform for software engineers, enterprise architects, and IT teams who need to move fast without hitting a ceiling. That means:

Advanced data handling without the limitations of predefined models.
Secure, scalable applications that meet enterprise compliance needs.
The ability to build freely — not be boxed in by rigid templates.

The Future: Low Code Without Limits
Low-code isn't dead.
But the old way of doing it — closed ecosystems, locked-down workflows, and one-size-fits-all platforms — is obsolete. The next evolution of low-code is developer-first, API-driven, and enterprise-ready. It’s a platform that doesn’t just accelerate development — it empowers developers to build without constraints. If you’ve ever felt like low-code was holding you back, it’s time for something better. It’s time for Low Code, No Limits.
Hey, DZone Community! We have an exciting year ahead of research for our beloved Trend Reports. And once again, we are asking for your insights and expertise (anonymously if you choose) — readers just like you drive the content we cover in our Trend Reports. Check out the details for our research survey below.

Comic by Daniel Stori

Generative AI Research
Generative AI is revolutionizing industries, and software development is no exception. At DZone, we're diving deep into how GenAI models, algorithms, and implementation strategies are reshaping the way we write code and build software. Take our short research survey (~10 minutes) to contribute to our latest findings. We're exploring key topics, including:

Embracing generative AI (or not)
Multimodal AI
The influence of LLMs
Intelligent search
Emerging tech

And don't forget to enter the raffle for a chance to win an e-gift card of your choice!

Join the GenAI Research

Over the coming month, we will compile and analyze data from hundreds of respondents; results and observations will be featured in the "Key Research Findings" of our Trend Reports. Your responses help inform the narrative of our Trend Reports, so we truly cannot do this without you. Stay tuned for each report's launch and see how your insights align with the larger DZone Community. We thank you in advance for your help!

—The DZone Content and Community team
The tech sector is based on human rather than physical assets: not on machinery or warehouses but on talented specialists and innovations. Competition is fierce, and success often hinges on adaptability in an ever-changing environment. Speed takes precedence, where the ability to launch a product swiftly can outweigh even its quality. If you have any experience in software development, all these issues definitely sound like a daily routine. But what if you are about to launch your own startup? What if your project’s ambitions go all the way to Unicorn heights? In this case, you are going to face additional and (most probably) unfamiliar obstacles. These include workforce challenges, management issues, insufficient investment in innovation, and neglect of technical branding. Having been a developer myself, I’ve encountered all these issues firsthand. With insight into what motivates tech teams and drives project success, I’m here to share actionable strategies to address these problems. Let’s get started! Trap #1: Developers Cannot (Should Not) Be Overcontrolled Does this sound familiar? Your development team assures you that everything’s on track, but as deadlines approach, reality proves otherwise. The system is far from completion, key features don’t work, and unforeseen problems keep cropping up. If you’ve experienced this despite your best efforts at control, you may wonder: Is such control even possible? Also, keep in mind that managing a dev team as a tech lead is quite different from managing a startup as a founder. While settling on your new track, you could easily fall into the trap of hypercontrol and micromanagement. And you are not alone in this. According to a survey by Forbes, 43% of employees say their online activity is monitored, yet only 32% were officially informed about that. Fair enough, employees are not happy with such practices: 27% of respondents said they would likely quit if their employer began tracking online activity. Startups may be particularly vulnerable to this problem, as it is very common in smaller companies with immature processes. I once worked in a firm where leadership was obsessed with micromanagement. Instead of setting up proper workflows from the beginning, they opted for a “let’s just get started” approach. Plans were discussed, work commenced, and progress seemed fine until the boss started making panic-driven calls, demanding updates. Explaining everything to someone with no technical understanding took an entire day, repeatedly disrupting progress. The root of this problem is a lack of understanding and an inability to establish systematic processes. Uncertainty breeds anxiety, which leads to a desire for control, draining time, energy, and motivation. Are there any viable replacements for micromanagement and the ‘Big Brother’ approach that could help you control your team’s performance? I am quite positive about that. Here are just a few simple recipes: 1. Foster Open Communication You are a leader of the pack now, and you better be a good one. Good leaders maintain an approachable, transparent dialogue with their team. Discuss your plans openly, ensuring alignment between developers' goals (e.g., writing elegant code) and management’s priorities (e.g., deadlines, budgets). 2. Trust Your Specialists Remember, you hired them because they’re experts. You may have been a brilliant tech expert yourself once, but now your role is different. Let them do their job and do yours, focusing on the bigger picture. 3. 
Remote Work Is No Enemy
Some managers believe in-office supervision works better, but forcing everyone back into the office these days risks losing top talent. Then again, 100% remote work is not suitable for everyone, either. So, try to offer flexibility, allowing employees to choose their preferred work setup. Being an experienced developer, you know exactly what the most common problem is. And it is poor management, not the physical location where people write code.

4. Avoid Unnecessary Daily Stand-Ups
Daily meetings that drag on past 15 minutes are counterproductive. Weekly calls for updates and Q&A are often far more effective. Management newbies often get fooled by theorists who encourage sticking to dogmas. But in fact, every team management practice you employ should be reasonable, useful, and well thought out. Bringing in a shiny new ritual just because it's trendy this season is a bad, bad idea. Do you remember how annoying corporate rituals can be for a developer? You preferred to be controlled by task management and tracking, didn't you? So stick to that practice in your new position.

5. Set Clear Metrics
Use indicators like time-to-market, system stability, and user feedback to monitor progress and identify bottlenecks. With metrics, you will have a bigger picture of your project's wellbeing without delving too deep into technical details.

Trap #2: Lack of Investment in Your Startup's Future
Let's fast forward to a time when your company receives its first money. Be it a seed investment round or sales revenue, the temptation is always the same: you will want to spend it on something fancy and unnecessary. This is a major trap that many startups and even bigger companies fall into. For instance, in one web studio, instead of investing in development or improving workflows, the founder regularly showed up in yet another brand-new car. Little wonder, the team was dissatisfied, employee turnover remained consistently high, and the studio stagnated. Remarkably, this company is still around today, with the same website it had 15 years ago. They're still using technologies and designs from the early 2010s, while the mid-2020s are already here. Unsurprisingly, they've fallen far behind their competitors, though their boss seems to spend his time partying without concern.

From smaller studios to international majors, this problem is, indeed, a global one. According to the Global Innovation Index 2024 (the latest available at the time of writing) by the World Intellectual Property Organization (WIPO), investment in science and innovation made a significant downturn in 2023, following a boom between 2020 and 2022. Venture capital and scientific publications declined sharply back to pre-pandemic levels, and corporate R&D spending also slowed, resembling the post-2009 crisis deceleration. According to WIPO, the outlook for 2024 and 2025 is 'unusually uncertain.' In tech, competition is fierce, and success belongs to those who can adapt quickly. However, during tough economic times, many companies hesitate to take bold steps, fearing the costs and risks involved. So, what's the solution?

1. R&D Is Essential
No matter how small your startup is, you can (and you should) set aside a budget for research that will support you in the future. Introduce dedicated R&D days, or special budgets for experimentation. Google has successfully implemented a policy known as '20% time' (Innovation Time Off), allowing employees to work on personal projects.
This initiative has led to the creation of major products like Gmail and Google Maps. When people work on things they’re passionate about, they tend to be more motivated and productive. I’ve experienced this firsthand. Moreover, such opportunities allow employees to gain new skills and broaden their expertise, knowledge that can later translate into tangible business gains for the company. Keep in mind that not all projects will yield immediate results, but such investments shape the future of your company. Companies like Apple and Tesla dedicate substantial resources to experimentation and cutting-edge technologies, enabling them to remain leaders in their industries. 2. Grant Developers a Degree of Freedom It’s crucial to agree upfront that while this time allows them to choose their tasks, it’s still considered work time, meaning the company retains ownership rights to any resulting product. In return, employees get the chance to bring their ideas to life and potentially join the team behind a new product. What developer wouldn’t want to make history by creating something truly groundbreaking? 3. Don’t Limit Yourself to a Single Niche or Product Take Amazon and Microsoft as examples: these companies are known for their globally popular products but have also invested heavily in diverse areas, from cloud technologies to artificial intelligence. This approach helps them maintain their market leadership. Even smaller companies can allocate part of their budget to exploring new directions that complement their current business model. 4. Don’t Be Afraid to Experiment Bold decision-making can give your projects a competitive edge. Look at Atlassian, for example: the company has internal programs that encourage employees to experiment with technologies without fear of failure. This approach has dramatically increased the pace of innovation within their teams. Remember: in the ever-changing tech landscape, taking risks and fostering a culture of innovation isn’t just an option. It is a necessity for long-term success. Trap #3: Poor Management Promoting people who were with you from the start (and those were probably developers like you) is the most natural and intuitive move. But beware and think twice. Chances are that this promotion will not make anyone happier. Professional managers are often a much wiser choice, and your old buddies might be better off continuing their work in development. A joint research by scholars from Boston University and the University of Kansas found that software developers transition to management roles more frequently than specialists in other fields. However, without skills in motivation, planning, or communication, they may struggle with delegating or balancing technical and business needs. Here is a real-life example. In a company where a friend of mine used to work, there was a former developer who transitioned to a managerial role. He was enticed by the higher pay and didn’t see any other path for career growth. However, he lacked management experience and had no communication skills. Deep down, he still wanted to write code rather than manage a team. As a result, he ended up causing more harm than good. His poor leadership drove experienced developers to leave, staff turnover increased, and the quality of the project declined. Some of these unlucky managers even take on parallel jobs and cause problems there as well. Many eventually realize (though not all will admit it) that management isn’t for them and return to development. 
By then, however, they've lost their technical skills and fallen behind on new technologies. That guy could have grown in a technical direction, becoming, for example, a tech lead. Managers are more like psychologists and career mentors: their job is to resolve conflicts and unite the team. A tech lead, on the other hand, is a technical mentor who ensures that solutions are effective and don't compromise the system. Here is what you can do to tackle these issues:

Hire professional managers. Invest in managers skilled in planning, team motivation, and process building, even if they lack technical expertise.
Define clear roles. Make sure managers manage, not code. Mixing responsibilities leads to inefficiencies.
Support managerial growth. If growing developers into managers is your conscious choice, do it wisely and carefully. Provide training programs on Agile, Scrum, or Kanban methodologies, as well as on soft skills like conflict resolution.
Introduce mentorship programs. Pair new managers with seasoned leaders. This will help newbies avoid common pitfalls.

Trap #4: Neglecting Your Technical Brand
It is crucial to build your technical brand from the very beginning. By default, any tech startup is based on new ideas, so you already have something to begin with. Your company should be in the limelight as an innovator, a promoter of new solutions, and an active member of the tech community. This will be a huge bonus, as you will be more attractive to your employees, potential investors, and partners alike. If you don't have enough money, you can invest your time, and it will pay off sooner than you can imagine. Keep this in mind as your company grows: a strong tech brand is a constant process rather than a result that you can achieve and then move on to something else.

I once worked at a company that initially invested in building its brand and gained recognition. But over time, new leadership decided it was a waste of time and money. We asked for a humble budget to participate in conferences, hire a copywriter, and publish articles in a blog about our ideas and achievements, but it was all in vain. As a result, employees lost motivation, and the company became far less appealing to potential hires. The importance of these investments is mostly understood by large, well-established companies, but even there, some fail to grasp it.

Every developer wants to be part of a vibrant, almost cult-like community of top-tier professionals. Building this community or 'cult,' in the best sense of the word, is a joint effort between the company and its employees. This desire is so strong that companies with a well-developed technical brand have the luxury of attracting top-quality talent at lower costs. Developers, in turn, satisfy their ambitions and add an impressive credential to their résumés. It's a win-win for everyone. So, how do you develop a strong technical brand?

1. Foster an Internal Culture Supporting Professional Growth
Your team members should feel encouraged to give conference talks, publish articles, or participate in hackathons. Help them prepare materials, fund their travel, and allow time for personal or open-source projects. While some companies hire dedicated DevRel (Developer Relations) specialists for this, it's possible to achieve results without one. What matters most is committing to this direction.

2. Don't Be Shy: Share Your Success and Innovation
Blogs, videos, and presentations are excellent ways to showcase the company's expertise.
Publishing case studies, internal solutions, or unique approaches to work demonstrates your technical prowess and inspires the wider community. 3. Leverage Open Source as a Reputation-Building Tool Companies that contribute to open-source projects demonstrate technological leadership. Technologies like Docker, Kubernetes, and Spring gained global recognition because large companies weren’t afraid to share their tools with the world. 4. Technical Brands Don’t Grow Overnight It is rather a journey than a destination, requiring consistent and ongoing efforts, from participating in local events to creating your own initiatives, such as internal meetups or online courses. While these investments demand resources, they pay off in the long run. Companies that invest in their technical brand today will reap significant benefits in the future. Conclusion Building a tech startup is a much bigger challenge than working for someone else from 9 AM to 5 PM. Your transition from pure tech expertise to managing your own business requires a total revision of your entire mindset. But your previous experience is very valuable: remember what you wanted your employer to be and try to build a company that will be a joy to work for. Don’t micromanage your employees or try to control every aspect of their work. Instead, focus on building strong connections with them, investing in their growth, and fostering the innovations they can bring to life. These principles form the foundation of success for any tech company. If you not only read these suggestions but also put them into practice, you’ll find it easier to navigate the challenges of leadership and achieve real improvements in your company’s standing. Have fun — and a lot of success!
Automated testing is essential to modern software development, ensuring stability and reducing manual effort. However, test scripts frequently break due to UI changes, such as modifications in element attributes, structure, or identifiers. Traditional test automation frameworks rely on static locators, making them vulnerable to these changes. AI-powered self-healing automation addresses this challenge by dynamically selecting and adapting locators based on real-time evaluation.

Self-healing is crucial for automation testing because it significantly reduces the maintenance overhead associated with test scripts by automatically adapting to changes in the application's user interface. This allows tests to remain reliable and functional even when the underlying code or design is updated, thus saving time and effort for testers while improving overall test stability and efficiency.

Key Reasons Why Self-Healing Is Needed in Automation Testing

Reduces Test Maintenance
When UI elements change (like button IDs or class names), self-healing mechanisms can automatically update the test script to locate the new element, eliminating the need for manual updates and preventing test failures due to outdated locators.

Improves Test Reliability
By dynamically adjusting to changes, self-healing tests are less prone to "flaky" failures caused by minor UI modifications, leading to more reliable test results.

Faster Development Cycles
With less time spent on test maintenance, developers can focus on building new features and delivering software updates faster.

Handles Dynamic Applications
Modern applications often have dynamic interfaces where elements change frequently, making self-healing capabilities vital for maintaining test accuracy.

How Self-Healing Works

Heuristic algorithms. These algorithms analyze the application's structure and behavior to identify the most likely candidate element to interact with when a previous locator fails.
Intelligent element identification. Using techniques like machine learning, the test framework can identify similar elements even if their attributes change slightly, allowing it to adapt to updates.
Multiple locator strategies. Test scripts can use a variety of locators (like ID, XPath, CSS selector) to find elements, increasing the chances of successfully identifying them even if one locator becomes invalid.
Heuristic-based fallback mechanism. Let's understand self-healing using a heuristic-based fallback mechanism by implementing it with an example.

Step 1
Initialize a Playwright project and install the Cucumber dependency by executing the commands below.

Plain Text
npm init playwright

Adding Cucumber for BDD Testing
Cucumber allows for writing tests in Gherkin syntax, making them readable and easier to maintain for non-technical stakeholders.

Plain Text
npm install --save-dev @cucumber/cucumber

Step 2
Create the folder structure below and add the required files (add_to_cart.feature, add_to_cart.steps.js, and cucumber.js).

Step 3
Add code to browserSetup.js.

JavaScript
const { chromium } = require('playwright');

async function launchBrowser(headless = false) {
  const browser = await chromium.launch({ headless });
  const context = await browser.newContext();
  const page = await context.newPage();
  return { browser, context, page };
}

module.exports = { launchBrowser };

Step 4
Add the self-healing helper function to the helper.js file. This function is designed to "self-heal" by trying multiple alternative selectors when attempting to click an element.
If one selector fails (for example, due to a change in the page's structure), it automatically tries the next one until one succeeds or all have been tried.

JavaScript
// Self-healing helper with a shorter wait timeout per selector
async function clickWithHealing(page, selectors) {
  for (const selector of selectors) {
    try {
      console.log(`Trying selector: ${selector}`);
      await page.waitForSelector(selector, { timeout: 2000 }); // reduced to 2000ms per selector
      await page.click(selector);
      console.log(`Clicked using selector: ${selector}`);
      return;
    } catch (err) {
      console.log(`Selector "${selector}" not found. Trying next alternative...`);
    }
  }
  throw new Error(`None of the selectors matched: ${selectors.join(", ")}`);
}

module.exports = { clickWithHealing };

Step 5
Write a test scenario in the add_to_cart.feature file.

Gherkin
Feature: Add Item to Cart

  Scenario Outline: User adds an item to the cart successfully
    Given I navigate to the homepage
    When I add the "<itemtype>" item to the cart
    Then I should see the item in the cart

    Examples:
      | itemtype |
      | Pliers   |

Step 6
Implement the corresponding step definition.

JavaScript
const { Given, When, Then, Before, After, setDefaultTimeout } = require('@cucumber/cucumber');
const { launchBrowser } = require('../utils/browserSetup');
const { clickWithHealing } = require('../utils/helpers');

// Increase default timeout for all steps to 60 seconds
setDefaultTimeout(60000);

let browser;
let page;

// Launch the browser before each scenario
Before(async function () {
  const launch = await launchBrowser(false); // set headless true/false as needed
  browser = launch.browser;
  page = launch.page;
});

// Close the browser after each scenario
After(async function () {
  await browser.close();
});

Given('I navigate to the homepage', async function () {
  await page.goto('https://practicesoftwaretesting.com/');
});

When('I add the {string} item to the cart', async function (itemName) {
  this.itemName = itemName;

  // Self-healing selectors for the product item
  const productSelectors = [
    `//img[@alt='${itemName}']`,
    `text=${itemName}`,
    `.product-card:has-text("${itemName}")`
  ];
  await clickWithHealing(page, productSelectors);

  await page.waitForTimeout(10000);

  // Self-healing selectors for the "Add to Cart" button
  const addToCartSelectors = [
    'button:has-text("Add to Cart")',
    '#add-to-cart',
    '.btn-add-cart'
  ];
  await clickWithHealing(page, addToCartSelectors);
});

Then('I should see the item in the cart', async function () {
  const cartIconSelectors = [
    'a[href="/cart"]',
    '//a[@data-test="nav-cart"]',
    'button[aria-label="cart"]',
    '.cart-icon'
  ];
  await clickWithHealing(page, cartIconSelectors);

  const itemInCartSelector = `text=${this.itemName}`;
  await page.waitForSelector(itemInCartSelector, { timeout: 10000 });
});

Step 7
Add the cucumber.js file. The cucumber.js file is the configuration file for Cucumber.js, which allows you to customize how your tests are executed. We will use the file to define:

Feature file paths
Step definition locations

JavaScript
module.exports = {
  default: `--require tests/steps/**/*.js tests/features/**/*.feature --format summary `
};

Step 8
Update package.json to add scripts.

JSON
"scripts": {
  "test": "cucumber-js"
},

Step 9
Execute the test script.
Plain Text
npm run test

Test execution result: In the execution output, the code first tried to find the selector a[href="/cart"]; when it couldn't find it, it moved on to the next alternative selector, //a[@data-test="nav-cart"], which was successful, and the element was clicked using that selector.

Intelligent Element Identification + Multiple Locator Strategies
Let's explore an example of how to incorporate multiple locator strategies into AI-powered self-healing tests with a fallback method. The idea is to try each known locator in a predefined order before resorting to the ML-based fallback when all known locators fail.

High-Level Overview

Multiple locator strategies. Maintain a list of potential locators (e.g., CSS, XPath, text-based, etc.). Your test tries each in turn.
AI/ML fallback. If all known locators fail, capture a screenshot and invoke your ML model to detect the element visually.

Below is an example of the AI-powered self-healing approach, showing how to integrate TensorFlow.js (specifically @tensorflow/tfjs-node) to perform a real machine-learning-based fallback. We'll extend the findElementUsingML function to load an ML model, run inference on a screenshot, and parse the results to find the target UI element.

Note: In a real-world scenario, you'd have a trained object detection or image classification model that knows how to detect specific UI elements (e.g., an "Add to Cart" button). For illustration, we'll show pseudo-code for loading a model and parsing bounding box predictions. The actual model and label mapping will depend on your training data and approach.

Step 1
Let's begin by setting up the Playwright project and installing the dependencies (Cucumber and TensorFlow).

Plain Text
npm init playwright

Plain Text
npm install --save-dev @cucumber/cucumber

Plain Text
npm install @tensorflow/tfjs-node

Step 2
Create the folder structure below and add the required files:

The model folder contains the trained TF.js model files (e.g., model.json and associated weight files).
aiLocator.js loads the model and runs inference when needed.
locatorHelper.js tries multiple standard locators, then calls the AI fallback if all fail.

Step 3
Let's implement changes in the locatorHelper.js file. This file contains a helper function to find an element using multiple locator strategies. If all fail, it delegates to the AI fallback.

Multiple Locators

The function takes an array of locators (locators) and attempts each one in turn.
If a locator succeeds, we return immediately.

AI Fallback

If all standard locators fail, we capture a screenshot and call the findElementUsingML function to get bounding box coordinates for the target element.
Return the coordinates if found, or null if the AI also fails.

Step 4
In util/aiLocator.js, we simulate an ML-based locator. In a production implementation, you'd load your trained ML model (for example, using TensorFlow.js) to process the screenshot and return the location (bounding box) of the "Add to Cart" button.
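For reference, the folder layout described in Step 2 could look roughly like this. The names and locations are assumptions inferred from the require and path references in the code that follows, not the project's definitive structure:

Plain Text
project-root/
├── features/
│   └── add_to_cart.feature
├── step_definitions/
│   └── steps.js
└── util/
    ├── locatorHelper.js
    ├── aiLocator.js
    └── model/
        ├── model.json
        └── (weight files)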
JavaScript
const { findElementUsingML } = require('./aiLocator');

async function findElement(page, screenshotPath, locators, elementLabel) {
  for (const locator of locators) {
    try {
      const element = await page.$(locator);
      if (element) {
        console.log(`Element found using locator: "${locator}"`);
        return { element, usedAI: false };
      }
    } catch (error) {
      console.log(`Locator failed: "${locator}" -> ${error}`);
    }
  }

  // If all locators fail, attempt AI-based fallback
  console.log(`All standard locators failed for "${elementLabel}". Attempting AI-based locator...`);
  await page.screenshot({ path: screenshotPath });

  const coords = await findElementUsingML(screenshotPath, elementLabel);
  if (coords) {
    console.log(`ML located element at x=${coords.x}, y=${coords.y}`);
    return { element: coords, usedAI: true };
  }

  return null;
}

module.exports = { findElement };

Step 5
Let's implement changes in the aiLocator.js file. Below is a mock example of how you might load and run inference with TensorFlow.js (using @tensorflow/tfjs-node), parse bounding boxes, and pick the coordinates for the "Add to Cart" button.

Disclaimer: The code below shows the overall structure. You'll need a trained model that can detect or classify UI elements (e.g., a custom object detection model). The actual code for parsing predictions will depend on how your model outputs bounding boxes, classes, and scores.

JavaScript
// util/aiLocator.js
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');
const path = require('path');

// For demonstration, we store a global reference to the loaded model
let model = null;

/**
 * Loads the TF.js model from the file system, if not already loaded
 */
async function loadModel() {
  if (!model) {
    const modelPath = path.join(__dirname, 'model', 'model.json');
    console.log(`Loading TF model from: ${modelPath}`);
    model = await tf.loadGraphModel(`file://${modelPath}`);
  }
  return model;
}

/**
 * findElementUsingML
 * @param {string} screenshotPath - Path to the screenshot image.
 * @param {string} elementLabel - The label or text of the element to find.
 * @returns {Promise<{x: number, y: number}>} - Coordinates of the element center.
 */
async function findElementUsingML(screenshotPath, elementLabel) {
  console.log(`Running ML inference to find element: "${elementLabel}"`);
  try {
    // 1. Read the screenshot file into a buffer
    const imageBuffer = fs.readFileSync(screenshotPath);

    // 2. Decode the image into a tensor [height, width, channels]
    const imageTensor = tf.node.decodeImage(imageBuffer, 3);

    // 3. Expand dims to match model's input shape: [batch, height, width, channels]
    const inputTensor = imageTensor.expandDims(0).toFloat().div(tf.scalar(255));

    // 4. Load (or retrieve cached) model
    const loadedModel = await loadModel();

    // 5. Run inference
    // The output structure depends on your model (e.g., bounding boxes, scores, classes)
    // For instance, an object detection model might return:
    // {
    //   boxes: [ [y1, x1, y2, x2], ... ],
    //   scores: [ ... ],
    //   classes: [ ... ]
    // }
    const prediction = await loadedModel.executeAsync(inputTensor);

    // Example: Suppose your model returns an array of Tensors: [boxes, scores, classes]
    // boxes:   shape [batch, maxDetections, 4]
    // scores:  shape [batch, maxDetections]
    // classes: shape [batch, maxDetections]
    //
    // NOTE: The exact shape/names of the outputs differ by model architecture.
    const [boxesTensor, scoresTensor, classesTensor] = prediction;

    const boxes = await boxesTensor.array();     // shape: [ [ [y1, x1, y2, x2], ... ] ]
    const scores = await scoresTensor.array();   // shape: [ [score1, score2, ... ] ]
    const classes = await classesTensor.array(); // shape: [ [class1, class2, ... ] ]

    // We'll assume only 1 batch => use boxes[0], scores[0], classes[0]
    const b = boxes[0];
    const sc = scores[0];
    const cl = classes[0];

    // 6. Find the bounding box for "Add to Cart" or the best match for the given label
    // In a real scenario, you might have a class index for "Add to Cart"
    // or a text detection pipeline. We'll do a pseudo-search for a known class ID.
    let bestIndex = -1;
    let bestScore = 0;
    for (let i = 0; i < sc.length; i++) {
      const classId = cl[i];
      // Suppose "Add to Cart" is class ID 5 in your model (completely hypothetical).
      // Or if you have a text-based detection approach, you'd match on the text.
      if (classId === 5 && sc[i] > bestScore) {
        bestScore = sc[i];
        bestIndex = i;
      }
    }

    // If we found a bounding box with decent confidence
    if (bestIndex >= 0 && bestScore > 0.5) {
      const [y1, x1, y2, x2] = b[bestIndex];
      console.log(`Detected bounding box for "${elementLabel}" -> [${y1}, ${x1}, ${y2}, ${x2}] with score ${bestScore}`);

      // Convert normalized coords to actual pixel coords
      const [height, width] = imageTensor.shape; // shape is [height, width, 3]
      const top = y1 * height;
      const left = x1 * width;
      const bottom = y2 * height;
      const right = x2 * width;

      // Calculate the center of the bounding box
      const centerX = left + (right - left) / 2;
      const centerY = top + (bottom - top) / 2;

      // Clean up tensors to free memory
      tf.dispose([imageTensor, inputTensor, boxesTensor, scoresTensor, classesTensor, prediction]);

      return { x: Math.round(centerX), y: Math.round(centerY) };
    }

    // If no bounding box matched the criteria, return null
    console.warn(`No bounding box found for label "${elementLabel}" with sufficient confidence.`);
    tf.dispose([imageTensor, inputTensor, boxesTensor, scoresTensor, classesTensor, prediction]);
    return null;
  } catch (error) {
    console.error('Error running AI locator:', error);
    return null;
  }
}

module.exports = { findElementUsingML };

Let's understand the machine learning flow.

1. Loading the Model
We start by loading a pre-trained TensorFlow.js model from a file. To improve performance, we store the model in memory, so it doesn't reload every time we use it.

2. Preparing the Image

Decode the image. Convert it into a format the model understands.
Add a batch dimension. Reshape it to match the model's input format.
Normalize pixel values. Scale pixel values between 0 and 1 to improve accuracy.

3. Running Inference (Making a Prediction)
We pass the processed image into the model for analysis. For object detection, the model outputs:

Bounding box coordinates (where the object is in the image).
Confidence scores (how certain the model is about its prediction).
Object labels (e.g., "cat," "car," "dog").

4. Processing the Predictions
Identify the most confident prediction. Convert the model's output coordinates into actual pixel positions on the image.

5. Returning the Result
If an object is detected, return its center coordinates (x, y). If no object is found or confidence is too low, return null.

6. Memory Cleanup
Since TensorFlow.js runs on the GPU, we must free up memory by disposing of temporary data after use.

Step 6
Feature file.

Gherkin
Feature: Add Item to Cart

  Scenario Outline: User adds an item to the cart successfully
    Given I navigate to the homepage
    When I add the "<itemtype>" item to the cart
    Then I should see the item in the cart

    Examples:
      | itemtype |
      | Pliers   |

Step 7
Step definition.
1. addToCartLocators

We store multiple locators (CSS, text, XPath) in an array.
The test tries them in the order listed.

2. findElement

If none of the locators work, it uses the ML-based fallback to find coordinates.
The return value tells us whether we used the AI fallback (usedAI: true) or a standard DOM element (usedAI: false).

3. Clicking the Element

If we get a real DOM handle, we call element.click().
If we get coordinates from the AI fallback, we call page.mouse.click(x, y).

JavaScript
// step_definitions/steps.js
const { Given, When, Then } = require('@cucumber/cucumber');
const { chromium } = require('playwright');
const path = require('path');
const { findElement } = require('../util/locatorHelper');

let browser, page;

Given('I navigate to the homepage', async function () {
  browser = await chromium.launch({ headless: true });
  page = await browser.newPage();
  await page.goto('https://practicesoftwaretesting.com/');
});

When('I add the {string} item to the cart', async function (itemName) {
  // Define multiple possible locators for the product item
  const productSelectors = [
    `//img[@alt='${itemName}']`,
    `text=${itemName}`,
    `.product-card:has-text("${itemName}")`
  ];

  await page.waitForTimeout(10000);

  // Attempt to find the element using multiple locators, then AI fallback
  const screenshotPath = path.join(__dirname, 'page.png');
  const found = await findElement(page, screenshotPath, productSelectors, 'Select Product');
  if (!found) {
    throw new Error('Failed to locate the product item using all strategies and AI fallback.');
  }
  if (!found.usedAI) {
    // We have a DOM element handle
    await found.element.click();
  } else {
    // We have x/y coordinates from AI
    await page.mouse.click(found.element.x, found.element.y);
  }

  // Define multiple possible locators for the Add to Cart button
  const addToCartLocators = [
    'button.add-to-cart',               // CSS locator
    'text="Add to Cart"',               // Playwright text-based locator
    '//button[contains(text(),"Add")]', // XPath
  ];

  // Attempt to find the element using multiple locators, then AI fallback
  const screenshotPath1 = path.join(__dirname, 'page1.png');
  const found1 = await findElement(page, screenshotPath1, addToCartLocators, 'Add to Cart');
  if (!found1) {
    throw new Error('Failed to locate the Add to Cart button using all strategies and AI fallback.');
  }
  if (!found1.usedAI) {
    // We have a DOM element handle
    await found1.element.click();
  } else {
    // We have x/y coordinates from AI
    await page.mouse.click(found1.element.x, found1.element.y);
  }
});

Then('I should see the item in the cart', async function () {
  // Wait for cart item count to appear or update
  await page.waitForSelector('.cart-items-count', { timeout: 5000 });
  const countText = await page.$eval('.cart-items-count', el => el.textContent.trim());
  if (parseInt(countText, 10) <= 0) {
    throw new Error('Item was not added to the cart.');
  }
  console.log('Item successfully added to the cart.');
  await browser.close();
});

Using TensorFlow.js for self-healing tests involves:

Multiple locators. Attempt standard locators (CSS, XPath, text-based).
Screenshot + ML inference. If standard locators fail, take a screenshot, load it into your TF.js model, and run object detection (or a custom approach) to find the desired UI element.
Click by coordinates. Convert the predicted bounding box into pixel coordinates and instruct Playwright to click at that location.

Conclusion
This approach provides a robust fallback that can adapt to UI changes if your ML model is trained to recognize the visual cues of your target elements.
As your UI evolves, you can retrain the model or add new examples to improve detection accuracy, thereby continuously “healing” your tests without needing to hardcode new selectors.
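For reference, the findElement helper used in the step definitions above follows this general pattern. The sketch below is illustrative only: the ./mlLocator module name, the exact findElementUsingML signature, and the 3-second per-selector timeout are assumptions made for the example, not the project's exact implementation.

JavaScript

// util/locatorHelper.js (simplified sketch; the real helper may differ)
const { findElementUsingML } = require('./mlLocator'); // assumed module name

// Tries each selector in order; if none resolve, falls back to the ML locator.
// Returns { usedAI: false, element: <ElementHandle> }, { usedAI: true, element: { x, y } }, or null.
async function findElement(page, screenshotPath, selectors, elementLabel) {
  for (const selector of selectors) {
    try {
      // Short timeout so a stale selector fails fast and the next strategy is tried
      const element = await page.waitForSelector(selector, { timeout: 3000 });
      if (element) {
        return { usedAI: false, element };
      }
    } catch (err) {
      // Selector did not match; move on to the next one
    }
  }

  // All standard locators failed: capture the page and ask the ML model for coordinates
  await page.screenshot({ path: screenshotPath, fullPage: true });
  const coords = await findElementUsingML(screenshotPath, elementLabel);
  return coords ? { usedAI: true, element: coords } : null;
}

module.exports = { findElement };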
DZone events bring together industry leaders, innovators, and peers to explore the latest trends, share insights, and tackle industry challenges. From Virtual Roundtables to Fireside Chats, our events cover a wide range of topics, each tailored to provide you, our DZone audience, with practical knowledge, meaningful discussions, and support for your professional growth. DZone Events Happening Soon Below, you'll find upcoming events that you won't want to miss. DevOps for Oracle Applications: Automation and Compliance Made Easy Date: March 11, 2025 Time: 1:00 PM ET Register for Free! Join Flexagon and DZone as Flexagon's CEO unveils how FlexDeploy is helping organizations future-proof their DevOps strategy for Oracle Applications and Infrastructure. Explore innovations for automation through compliance, along with real-world success stories from companies that have adopted FlexDeploy. Make AI Your App Development Advantage: Learn Why and How Date: March 12, 2025 Time: 10:00 AM ET Register for Free! The future of app development is here, and AI is leading the charge. Join OutSystems and DZone, on March 12th at 10am ET, for an exclusive Webinar with Luis Blando, CPTO of OutSystems, and John Rymer, industry analyst at Analysis.Tech, as they discuss how AI and low-code are revolutionizing development. You will also hear from David Gilkey, Leader of Solution Architecture, Americas East at OutSystems, and Roy van de Kerkhof, Director at NovioQ. This session will give you the tools and knowledge you need to accelerate your development and stay ahead of the curve in the ever-evolving tech landscape. Developer Experience: The Coalescence of Developer Productivity, Process Satisfaction, and Platform Engineering Date: March 12, 2025 Time: 1:00 PM ET Register for Free! Explore the future of developer experience at DZone's Virtual Roundtable, where a panel will dive into key insights from the 2025 Developer Experience Trend Report. Discover how AI, automation, and developer-centric strategies are shaping workflows, productivity, and satisfaction. Don't miss this opportunity to connect with industry experts and peers shaping the next chapter of software development. Unpacking the 2025 Developer Experience Trends Report: Insights, Gaps, and Putting it into Action Date: March 19, 2025 Time: 1:00 PM ET Register for Free! We've just seen the 2025 Developer Experience Trends Report from DZone, and while it shines a light on important themes like platform engineering, developer advocacy, and productivity metrics, there are some key gaps that deserve attention. Join Cortex Co-founders Anish Dhar and Ganesh Datta for a special webinar, hosted in partnership with DZone, where they'll dive into what the report gets right—and challenge the assumptions shaping the DevEx conversation. Their take? Developer experience is grounded in clear ownership. Without ownership clarity, teams face accountability challenges, cognitive overload, and inconsistent standards, ultimately hampering productivity. Don't miss this deep dive into the trends shaping your team's future. Accelerating Software Delivery: Unifying Application and Database Changes in Modern CI/CD Date: March 25, 2025 Time: 1:00 PM ET Register for Free! Want to speed up your software delivery? It's time to unify your application and database changes. Join us for Accelerating Software Delivery: Unifying Application and Database Changes in Modern CI/CD, where we'll teach you how to seamlessly integrate database updates into your CI/CD pipeline.
Petabyte Scale, Gigabyte Costs: Mezmo’s ElasticSearch to Quickwit Evolution Date: March 27, 2025Time: 1:00 PM ET Register for Free! For Mezmo, scaling their infrastructure meant facing significant challenges with ElasticSearch. That's when they made the decision to transition to Quickwit, an open-source, cloud-native search engine designed to handle large-scale data efficiently. This is a must-attend session for anyone looking for insights on improving search platform scalability and managing data growth. What's Next? DZone has more in store! Stay tuned for announcements about upcoming Webinars, Virtual Roundtables, Fireside Chats, and other developer-focused events. Whether you’re looking to sharpen your skills, explore new tools, or connect with industry leaders, there’s always something exciting on the horizon. Don’t miss out — save this article and check back often for updates!
Have you ever spent hours debugging a seemingly simple React application, only to realize the culprit was a misplaced import? Incorrect import order can lead to a host of issues, from unexpected behavior to significant performance degradation. In this article, we'll delve into the intricacies of import order in React, exploring best practices and powerful tools to optimize your code. By the end, you'll be equipped to write cleaner, more efficient, and maintainable React applications. Let's start a journey to master the art of import order and unlock the full potential of your React projects. What Is an Import Order? At first glance, the concept of "import order" might seem trivial — just a list of files and libraries your code depends on, right? But in reality, it’s much more than that. The order in which you import files in React can directly affect how your app behaves, looks, and performs. How Import Order Works in React When you write: JavaScript import React from "react"; import axios from "axios"; import Button from "./components/Button"; import "./styles/global.css"; Each line tells the JavaScript engine to fetch and execute the specified file or library. This order determines: When dependencies are loaded. JavaScript modules are executed in the order they’re imported. If a later import depends on an earlier one, things work smoothly. But if the order is wrong, you might end up with errors or unexpected behavior.How styles are applied. CSS imports are applied in the sequence they appear. Importing global styles after component-specific styles can override the latter, leading to layout issues.Avoiding conflicts. Libraries or components that rely on other dependencies need to be loaded first to ensure they work properly. Breaking Down Import Types In React, imports generally fall into these categories: 1. Core or Framework Imports These are React itself (react, react-dom) and other core libraries. They should always appear at the top of your file. JavaScript import React from "react"; import ReactDOM from "react-dom"; 2. Third-Party Library Imports These are external dependencies like axios, lodash, or moment. They come next, providing the building blocks for your application. JavaScript import axios from "axios"; import lodash from "lodash"; 3. Custom Module Imports Your components, hooks, utilities, or services belong here. These imports are specific to your project and should follow third-party libraries. JavaScript import Header from "./components/Header"; import useAuth from "./hooks/useAuth"; 4. CSS or Styling Imports CSS files, whether global styles, CSS modules, or third-party styles (like Bootstrap), should typically be placed at the end to ensure proper cascading and prevent accidental overrides. JavaScript import "./styles/global.css"; import "bootstrap/dist/css/bootstrap.min.css"; 5. Asset Imports Finally, assets like images or fonts are imported. These are less common and are often used within specific components rather than at the top level. JavaScript import logo from "./assets/logo.png"; Why Categorizing Matters Grouping imports by type not only makes your code easier to read but also helps prevent subtle bugs, such as circular dependencies or mismatched styles. It creates a predictable structure for you and your team, reducing confusion and improving collaboration. By understanding the types of imports and how they work, you’re already taking the first step toward mastering import order in React. 
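To make the last category concrete, here is a small, hypothetical component that imports an image asset where it is used rather than at the application's top level (the file paths and component name are made up for this example):

JavaScript

// components/Logo.jsx (hypothetical example)
import React from "react";
import logo from "../assets/logo.png"; // asset imported by the component that uses it

function Logo() {
  // The bundler resolves the import to a URL (or inlined data) at build time
  return <img src={logo} alt="Company logo" width={120} />;
}

export default Logo;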
Why Import Order Matters At first, it might seem like how you order your imports shouldn't affect the functionality of your application. However, the sequence in which you import files has far-reaching consequences — everything from performance to bug prevention and even security can be impacted by the seemingly simple task of ordering your imports correctly. 1. Dependencies and Execution Order JavaScript is a synchronous language, meaning that imports are executed in the exact order they are written. This matters when one module depends on another. For example, if you import a component that relies on a function from a utility file, but the utility file is imported after the component, you might run into runtime errors or undefined behavior. Example: JavaScript // Incorrect import order import Button from "./components/Button"; // Depends on utility function import { formatDate } from "./utils/formatDate"; // Imported too late In the above code, Button relies on formatDate, but since formatDate is imported after Button, it leads to errors or undefined functions when Button tries to access formatDate. React and JavaScript generally won't warn you about this kind of issue outright — only when your code breaks will you realize that import order matters. 2. Styles and Layout Consistency Another critical factor that import order affects is CSS, which is applied in the order it's imported. If you import a global CSS file after a specific component's styles, global styles will override the component-specific styles, causing your layout to break unexpectedly. Example: JavaScript // Incorrect import order import "./components/Button.css"; // Component styles loaded first import "./styles/global.css"; // Global styles loaded after, overriding the button styles Here, because the global styles are imported after the component-specific ones, they might override your button's styles. You'll end up with buttons that look completely different from what you intended, creating a frustrating bug that's hard to trace. 3. Performance Optimization Beyond just preventing bugs, proper import order can significantly impact the performance of your React application. Large third-party libraries (such as moment.js or lodash) can slow down your initial bundle size if imported incorrectly. In particular, if a large library is imported globally (before optimizations like tree-shaking can happen), the entire library may be bundled into your final JavaScript file, even if only a small portion of it is used. This unnecessarily increases your app's initial load time, negatively impacting the user experience. Example: JavaScript // Improper import order affecting performance import "moment"; // Large, global import import { formatDate } from "./utils/formatDate"; // Only uses part of moment.js Instead, by importing only the specific functions you need, you can take advantage of tree-shaking, which removes unused code and reduces the final bundle size. Correct approach: JavaScript import { format } from "moment"; // Tree-shaking-friendly (Note that moment itself ships as a CommonJS bundle and does not tree-shake well in practice; ES-module-first libraries such as date-fns or dayjs benefit the most from this import style.) By carefully organizing imports, you can ensure that only the necessary parts of large libraries are included in your build, making your app more performant and faster to load. 4. Avoiding Circular Dependencies Circular dependencies can happen when two or more files depend on each other. When this happens, JavaScript gets stuck in a loop, attempting to load the files, which can lead to incomplete imports or even runtime errors.
These errors are often hard to trace, as they don’t throw an immediate warning but result in inconsistent behavior later on. A proper import order can help mitigate circular dependencies. If you’re aware of how your files interconnect, you can organize your imports to break any potential circular references. Example: JavaScript // Circular dependency scenario import { fetchData } from "./api"; import { processData } from "./dataProcessing"; // processData depends on fetchData // But api.js imports dataProcessing.js too import { processData } from "./dataProcessing"; // Circular dependency In this case, the two files depend on each other, creating a circular reference. React (or JavaScript in general) doesn’t handle this situation well, and the result can be unpredictable. Keeping a strict import order and ensuring that files don’t directly depend on each other will help prevent this. 5. Code Readability and Maintenance Lastly, an organized import order helps with the long-term maintainability of your code. React projects grow fast, and when you revisit a file after some time, having a clear import order makes it easy to see which libraries and components are being used. Establishing and following an import order convention makes it easier for other developers to collaborate on the project. If imports are grouped logically (core libraries at the top, followed by custom modules, and then styles), the code is more predictable, and you can focus on adding new features rather than hunting down import-related issues. By now, it's clear that import order isn't just a cosmetic choice — it plays a crucial role in preventing bugs, improving performance, and maintaining readability and collaboration within your codebase. Next, we’ll dive into the technical aspects of what happens behind the scenes when JavaScript files are imported and how understanding this process can further help you optimize your code. The Technical Underpinnings: What Happens When You Import Files in React Now that we’ve covered why import order matters, let’s dive deeper into how the JavaScript engine processes imports under the hood. Understanding the technical side of imports can help you avoid common pitfalls and gain a deeper appreciation for why order truly matters. 1. Modules and the Import Mechanism In modern JavaScript (ES6+), we use the import statement to bring in dependencies or modules. Unlike older methods, such as require(), ES6 imports are statically analyzed, meaning the JavaScript engine knows about all the imports at compile time rather than runtime. This allows for better optimization (like tree-shaking), but also means that the order in which imports are processed becomes important. Example: JavaScript import React from "react"; import axios from "axios"; import { useState } from "react"; Here, when the file is compiled, the JavaScript engine will process each import in sequence. It knows that React needs to be loaded before useState (since useState is a React hook), and that axios can be loaded after React because it’s a completely independent module. However, if the order were flipped, useState might throw errors because it relies on React being already available in the scope. 2. Execution Context: Global vs. Local Scope When you import a file in JavaScript, you’re essentially pulling it into the current execution context. This has significant implications for things like variable scope and initialization. 
JavaScript runs top to bottom, so when you import a module, all of its code is executed in the global context first, before moving on to the rest of the file. This includes both the side effects (like logging, initialization, or modification of global state) and exports (such as functions, objects, or components). If the order of imports is incorrect, these side effects or exports might not be available when expected, causing errors or undefined behavior. Example: JavaScript import "./utils/initGlobalState"; // Initializes global state import { fetchData } from "./api"; // Uses the global state initialized above In this case, the initGlobalState file needs to be imported first to ensure that the global state is initialized before fetchData attempts to use it. If the order is reversed, fetchData will try to use undefined or uninitialized state, causing issues. 3. The Role of Tree-Shaking and Bundle Optimization Tree-shaking is the process of removing unused code from the final bundle. It’s a powerful feature of modern bundlers like Webpack, which eliminates dead code and helps reduce the size of your app, making it faster to load. However, tree-shaking only works properly if your imports are static (i.e., no dynamic require() calls or conditional imports). When the order of imports isn’t maintained in a way that the bundler can optimize, tree-shaking might not be able to effectively eliminate unused code, resulting in larger bundles and slower load times. Example: JavaScript // Incorrect import import * as moment from "moment"; // Tree-shaking can't remove unused code In this example, importing the entire moment library prevents tree-shaking from working efficiently. By importing only the needed functions (as seen in earlier examples), we can reduce the bundle size and optimize performance. 4. Understanding the Single Execution Pass When a file is imported in JavaScript, it’s executed only once per module during the runtime of your app. After that, the imported module is cached and reused whenever it’s imported again. This single execution pass ensures that any side effects (like variable initialization or configuration) only happen once, regardless of how many times the module is imported. If modules are imported out of order, it can cause initialization problems. For example, an import that modifies global state should always be loaded first, before any component or utility that depends on that state. Example: JavaScript // Proper execution order import { initializeApp } from "./config/init"; // Initializes app state import { getUserData } from "./api"; // Depends on the app state initialized above Here, the initializeApp file should always load first to ensure the app state is set up correctly before getUserData tries to fetch data. If the order is reversed, the app might fail to load with missing or incorrect state values. 5. How Bundlers Like Webpack Handle Imports When using bundlers like Webpack, all the imported files are analyzed, bundled, and optimized into a single (or multiple) JavaScript files. Webpack performs this analysis from top to bottom, and the order in which imports appear directly impacts how dependencies are bundled and served to the browser. If a file is imported before it’s needed, Webpack will include it in the bundle, even if it isn’t used. If a file is imported later but needed earlier, Webpack will throw errors because the dependency will be undefined or incomplete. 
By understanding how bundlers like Webpack handle imports, you can be more strategic about which files are loaded first, reducing unnecessary imports and optimizing the final bundle. In the next section, we'll look at real-world examples and consequences of incorrect import order, as well as ways to ensure that your import order is optimized for both performance and stability. Consequences of Incorrect Import Order Now that we've explored the "how" and "why" of import order, let's examine the real-world consequences of getting it wrong. While some mistakes can be easy to spot and fix, others might cause subtle bugs that are difficult to trace. These mistakes can manifest as unexpected behavior, performance issues, or even outright crashes in your app. Let's take a look at a few common scenarios where an incorrect import order can break your application and how to avoid them. 1. Undefined Variables and Functions One of the most straightforward consequences of an incorrect import order is encountering undefined variables or functions when you try to use them. Since JavaScript imports are executed top to bottom, failing to load a module before you use it will result in an error. Example: JavaScript // Incorrect import order import { fetchData } from "./api"; // Function depends on an imported state import { globalState } from "./state"; // globalState needs to be initialized first fetchData(); // Error: globalState is undefined In the example above, fetchData depends on the globalState being initialized first. However, since globalState is imported after fetchData, the function call results in an error because globalState is undefined at the time of execution. The application may crash or return unexpected results because the order of imports was wrong. 2. Styling Issues and Layout Breakage Another common issue is when CSS or styling is applied in the wrong order, which can cause the layout to break or styles to be overridden unintentionally. This is especially problematic when you import global styles after component-level styles or when third-party style sheets conflict with your own custom styles. Example: JavaScript // Incorrect import order import "./styles/customStyles.css"; // Your styles loaded first import "bootstrap/dist/css/bootstrap.min.css"; // Loaded second, overrides your styles Here, the global styles from Bootstrap are loaded after the component-specific styles in customStyles.css. As a result, any custom styling defined in customStyles.css could be overridden by the Bootstrap styles, causing layout inconsistencies and unexpected results in your UI. It's crucial to load your own styles last, ensuring they take precedence over any third-party styles. 3. Circular Dependencies and Infinite Loops Circular dependencies occur when two or more modules depend on each other. When these dependencies are incorrectly imported, it can lead to infinite loops or incomplete imports, which can break your app in subtle ways. This often happens when two files import each other in a way that the JavaScript engine can't resolve. Example: JavaScript // Circular dependency import { fetchData } from "./api"; import { processData } from "./dataProcessing"; // Depends on fetchData // But api.js imports dataProcessing.js too import { processData } from "./dataProcessing"; // Circular import In this example, api.js and dataProcessing.js depend on each other, creating a circular reference.
When you try to import these modules in an incorrect order, JavaScript ends up in a loop trying to load them, which leads to an incomplete or undefined state. This issue can result in runtime errors or unpredictable app behavior. To avoid circular dependencies, ensure that your modules are logically organized and avoid creating circular references. 4. Performance Degradation Incorrect import order can also negatively affect your app’s performance. For example, importing large libraries like lodash or moment globally when you only need a small portion of their functionality will lead to unnecessary bloat in your final bundle. This increases the time it takes for your app to load, especially on slower networks or devices. Example: JavaScript // Incorrect import order import * as moment from "moment"; // Imports the entire library import { fetchData } from "./api"; // Only needs one function Here, importing the entire moment library instead of specific functions like import { format } from "moment"; wastes bandwidth and increases the size of your app's JavaScript bundle. The result is slower loading times, especially in production environments. By ensuring that only the necessary parts of large libraries are imported, you can avoid this kind of performance hit. 5. Debugging Nightmares Incorrect import order might not always break your application outright, but it can create bugs that are incredibly difficult to debug. Sometimes, an issue will appear intermittently, especially in larger codebases, when the app executes at a different speed depending on how quickly or slowly modules are loaded. This kind of bug can cause random errors, especially if you’re dealing with asynchronous code or complex interactions between imported modules. These errors can be particularly frustrating because they don’t always manifest during initial development or testing. Example: JavaScript // Incorrect import order import { initializeApp } from "./config/init"; import { fetchData } from "./api"; // fetchData is calling an uninitialized app state In this case, initializeApp is supposed to set up the app state before any data is fetched, but because fetchData is imported before initializeApp, the app state is undefined when fetchData is called. This might not cause an error during initial testing, but can lead to random failures or unpredictable behavior later on. Best Practices to Prevent Import Order Mistakes Now that we’ve looked at the potential consequences, let’s quickly cover some best practices to ensure you avoid these common pitfalls: Follow a consistent import order. Always group imports logically — core libraries first, followed by third-party modules, then custom components and services, and finally styles and assets.Check for circular dependencies. Be mindful of the order in which files depend on each other. Circular imports can create difficult-to-debug errors.Use descriptive names for imports. Avoid ambiguity by using clear, descriptive names for your imports. This makes it easier to track where things might go wrong.Optimize library imports. Use tree-shaking to import only the parts of libraries you need. This reduces bundle size and improves performance.Test across environments. Test your app in different environments (local development, staging, and production) to catch any order-related issues that might appear only under certain conditions. 
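To make the circular-dependency advice above concrete, a common fix is to move the shared logic into its own module so the dependency only flows in one direction. The file names below (httpClient.js, api.js, dataProcessing.js) are hypothetical, echoing the earlier example:

JavaScript

// httpClient.js: owns the low-level request logic and imports nothing from the files below
export async function request(url) {
  const response = await fetch(url);
  return response.json();
}

// api.js: depends only on httpClient.js
import { request } from "./httpClient";

export function fetchData() {
  return request("/api/data");
}

// dataProcessing.js: depends on api.js, but api.js no longer imports it back
import { fetchData } from "./api";

export async function processData() {
  const data = await fetchData();
  return data.filter(Boolean);
}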
By being aware of these consequences and following best practices, you’ll not only avoid headaches down the road but also create more reliable, maintainable, and performant React applications. In the next section, we’ll explore how you can organize your imports for maximum efficiency, using both manual strategies and automated tools. Best Practices for Organizing Your Imports At this point, you’re well aware of the consequences of incorrect import order, and you’ve seen how the import order can affect your React application’s functionality and performance. Now, let's turn our attention to practical ways to organize your imports, ensuring that your code is maintainable, efficient, and free of bugs. Whether you're working on a small project or a large-scale React application, adhering to a solid import structure is crucial for productivity and code quality. Here are some best practices to guide you in organizing your imports the right way: 1. Use a Logical and Consistent Order The first step to maintaining clean and readable code is using a consistent order for your imports. A logical order not only makes it easier to navigate your code but also helps avoid subtle errors that may occur due to import order. Here’s a commonly recommended import order, based on industry standards: 1. Core Libraries Start with essential libraries like React and ReactDOM. These are the building blocks of any React application and should always appear first. JavaScript import React from "react"; import ReactDOM from "react-dom"; 2. Third-Party Libraries Next, import third-party dependencies (like axios, lodash, or styled-components). These libraries are typically installed via npm/yarn and are used throughout your application. JavaScript import axios from "axios"; import { useState } from "react"; 3. Custom Components and Modules After that, import your own components and modules, organized by feature or functionality. This section helps separate your project’s core functionality from external dependencies. JavaScript import Header from "./components/Header"; import Footer from "./components/Footer"; 4. CSS and Other Assets Finally, import CSS, styles, images, or other assets. These should be last, as styles often override previous CSS, and assets are usually used globally. JavaScript import "./styles/main.css"; import logo from "./assets/logo.png"; Here’s how the entire import block might look in practice: JavaScript // Core Libraries import React from "react"; import ReactDOM from "react-dom"; // Third-Party Libraries import axios from "axios"; import { useState } from "react"; // Custom Components import Header from "./components/Header"; import Footer from "./components/Footer"; // Styles import "./styles/main.css"; import logo from "./assets/logo.png"; This structure ensures that your imports are organized and easy to follow. It's not only visually appealing but also avoids issues with variable and function availability due to improper ordering. 2. Group Imports by Type Another effective strategy is to group your imports based on their type. This helps ensure that your file remains modular, and you can easily spot and manage dependencies. Typically, you’d separate your imports into groups like: React-related importsThird-party librariesCustom components, hooks, and utilitiesCSS, images, and assets Grouping like this allows you to focus on one category of imports at a time and reduces the chances of mixing things up. 
For example, you wouldn’t want to import a component from ./components before the necessary third-party libraries like React or Redux. JavaScript // React-related imports import React, { useState } from "react"; // Third-party libraries import axios from "axios"; import { useDispatch } from "react-redux"; // Custom components and hooks import Navbar from "./components/Navbar"; import useAuth from "./hooks/useAuth"; // CSS and assets import "./styles/main.css"; import logo from "./assets/logo.png"; By separating imports into logical groups, you improve the readability of your code, making it easier for you and your team to maintain and extend your project. 3. Use Aliases to Avoid Clutter As your project grows, you may find that the number of imports in each file can become overwhelming. This is especially true for larger projects with deeply nested directories. To combat this, consider using import aliases to simplify the import paths and reduce clutter in your code. Before using aliases: JavaScript import Header from "../../../components/Header"; import Footer from "../../../components/Footer"; After using aliases: JavaScript import Header from "components/Header"; import Footer from "components/Footer"; By setting up aliases (like components), you can create cleaner, more readable imports that don’t require traversing long file paths. You can configure aliases using your bundler (Webpack, for example) or module bundling tools like Babel or Create React App’s built-in configurations. 4. Avoid Importing Unused Code One of the key advantages of ES6 imports is that you only import what you need. This is where tree-shaking comes into play, allowing bundlers to remove unused code and optimize your app’s performance. However, this only works when you follow best practices for modular imports. Example of unnecessary imports: JavaScript import * as _ from "lodash"; // Imports the entire lodash library In the above example, you’re importing the entire lodash library when you only need a specific function, such as debounce. This unnecessarily bloats your bundle size. Better approach: JavaScript import { debounce } from "lodash"; // Only import what you need This approach ensures that only the necessary code is imported, which in turn keeps your bundle smaller and your app more performant. 5. Use Linters and Formatters to Enforce Consistency To maintain consistency across your codebase and prevent errors due to incorrect import order, you can use linters (like ESLint) and formatters (like Prettier). These tools can help enforce a standardized import structure and even automatically fix issues related to import order. Here are some popular ESLint rules you can use for organizing imports: import/order: This rule helps enforce a specific order for imports, ensuring that core libraries are loaded first, followed by third-party libraries and custom modules.no-unused-vars: This rule prevents importing unused modules, helping to keep your codebase clean and optimized. By integrating these tools into your workflow, you can automate the process of checking and correcting your import structure. Putting It All Together: An Import Order Example Let’s take a look at an example of an import structure that follows all of these best practices. This example will not only ensure that your code is clean, modular, and organized but will also prevent bugs and improve performance. 
JavaScript // React-related imports import React, { useState } from "react"; import ReactDOM from "react-dom"; // Third-party libraries import axios from "axios"; import { useDispatch } from "react-redux"; // Custom components and hooks import Navbar from "components/Navbar"; import Sidebar from "components/Sidebar"; import useAuth from "hooks/useAuth"; // Utility functions import { fetchData } from "utils/api"; // CSS and assets import "./styles/main.css"; import logo from "assets/logo.png"; This structure maintains clarity, keeps imports logically grouped, and helps you avoid common pitfalls like circular dependencies, unused imports, and performance degradation. In the next section, we'll explore how you can automate and enforce the best practices we’ve discussed here with the help of tools and configurations. Stay tuned to learn how to make this process even easier! Tools and Automation for Enforcing Import Order Now that you understand the importance of import order and have explored best practices for organizing your imports, it’s time to focus on how to automate and enforce these practices. Manually ensuring your imports are well-organized can be time-consuming and prone to human error, especially in large-scale projects. This is where powerful tools come in. In this section, we’ll discuss the tools that can help you automate the process of organizing and enforcing import order, so you don’t have to worry about it every time you add a new module or component. Let’s dive into the world of linters, formatters, and custom configurations that can streamline your import management process. 1. ESLint: The Linter That Can Enforce Import Order One of the most effective ways to automate the enforcement of import order is through ESLint, a tool that analyzes your code for potential errors and enforces coding standards. ESLint has a specific plugin called eslint-plugin-import that helps you manage and enforce a consistent import order across your entire project. How to Set Up ESLint for Import Order 1. Install ESLint and the import plugin. First, you’ll need to install ESLint along with the eslint-plugin-import package: Plain Text npm install eslint eslint-plugin-import --save-dev 2. Configure ESLint. After installing the plugin, you can configure ESLint by adding rules for import order. Below is an example of how you might set up your ESLint configuration (.eslintrc.json): JSON { "extends": ["eslint:recommended", "plugin:import/errors", "plugin:import/warnings"], "plugins": ["import"], "rules": { "import/order": [ "error", { "groups": [ ["builtin", "external"], ["internal", "sibling", "parent"], "index" ], "alphabetize": { "order": "asc", "caseInsensitive": true } } ] } } In this configuration: "builtin" and "external" imports come first (i.e., core and third-party libraries)."internal", "sibling", "parent" imports come next (i.e., your own modules and components)."index" imports come last (i.e., imports from index.js files).The alphabetize option ensures that imports are listed alphabetically within each group. 3. Run ESLint. Now, whenever you run ESLint (via npm run lint or your preferred command), it will automatically check the import order in your files and report any issues. If any imports are out of order, ESLint will throw an error or warning, depending on how you configure the rules. Benefits of Using ESLint for Import Order Consistency across the codebase. 
ESLint ensures that the import order is the same across all files in your project, helping your team follow consistent practices. Prevent errors early. ESLint can catch issues related to incorrect import order before they make it to production, preventing subtle bugs and performance issues. Customizable rules. You can fine-tune ESLint's behavior to match your team's specific import order preferences, making it highly adaptable. 2. Prettier: The Code Formatter That Complements Import Sorting While ESLint is great for enforcing code quality and rules, Prettier is a tool designed to format your code automatically to keep it clean and readable. Prettier doesn't focus on linting (and, on its own, it doesn't reorder imports) but rather on maintaining consistent styling across your codebase. When combined with ESLint, it can ensure that your imports are both consistently formatted and properly organized. How to Set Up Prettier for Import Order 1. Install Prettier and ESLint plugin. To set up Prettier, you'll need to install both Prettier and the Prettier plugin for ESLint: Plain Text npm install prettier eslint-config-prettier eslint-plugin-prettier --save-dev 2. Configure Prettier with ESLint. Add Prettier's configuration to your ESLint setup by extending the Prettier configuration in your .eslintrc.json file: JSON { "extends": [ "eslint:recommended", "plugin:import/errors", "plugin:import/warnings", "plugin:prettier/recommended" ], "plugins": ["import", "prettier"], "rules": { "import/order": [ "error", { "groups": [ ["builtin", "external"], ["internal", "sibling", "parent"], "index" ], "alphabetize": { "order": "asc", "caseInsensitive": true } } ], "prettier/prettier": ["error"] } } This setup ensures that Prettier's formatting is applied alongside your ESLint rules for import order. Now, Prettier will keep your files consistently formatted whenever you run npm run format, while ESLint (run with --fix) corrects the import order itself. Benefits of Using Prettier for Import Order Automatic formatting. Prettier keeps formatting consistent, and paired with ESLint's import/order rule it saves you time and effort on cleanup. Consistent formatting. Prettier ensures that all files in your codebase adhere to a single, consistent formatting style. Code readability. Prettier maintains consistent indentation and spacing, ensuring that your imports are not just in the correct order (thanks to ESLint) but also easy to read. 3. Import Sorter Extensions for IDEs For a smoother developer experience, you can install import sorter extensions in your IDE or code editor (like VSCode). These extensions can automatically sort your imports as you type, helping you keep your code organized without even thinking about it. Recommended Extensions VSCode: auto-import. This extension automatically organizes and cleans up imports as you type. VSCode: sort-imports. This extension sorts imports by predefined rules, such as alphabetizing or grouping. By integrating these extensions into your workflow, you can avoid manually managing the order of imports and let the tool take care of the tedious tasks for you. 4. Custom Scripts for Import Management If you prefer a more customized approach or are working in a larger team, you can write your own scripts to automatically enforce import order and other code quality checks. For instance, you can create a pre-commit hook using Husky and lint-staged to ensure that files are automatically linted and formatted before every commit. How to Set Up Husky and lint-staged 1. Install Husky and lint-staged. Install these tools to manage pre-commit hooks and format your code before committing: Plain Text npm install husky lint-staged --save-dev 2.
Configure lint-staged. Set up lint-staged in your package.json to automatically run ESLint and Prettier on staged files: JSON "lint-staged": { "*.js": ["eslint --fix", "prettier --write"] } 3. Set up Husky Hooks. Enable Git hooks and add a pre-commit hook that runs lint-staged: Plain Text npx husky install npx husky add .husky/pre-commit "npx lint-staged" (On newer Husky versions, where husky add has been removed, create the .husky/pre-commit file and add npx lint-staged to it directly.) This will automatically check for import order and formatting issues before any changes are committed. Automation Is Key to Maintaining Consistency By utilizing tools like ESLint, Prettier, import sorter extensions, and custom scripts, you can automate the process of enforcing import order and formatting across your entire project. This not only saves you time but also ensures consistency, reduces human error, and helps prevent bugs and performance issues. With these tools in place, you can focus more on writing quality code and less on worrying about the small details of import management. Conclusion: The Power of Organized Imports In React development, the order in which you import files is far more significant than it may seem at first glance. By adhering to a well-structured import order, you ensure that your code remains predictable, error-free, and maintainable. Remember, good coding habits aren't just about syntax; they're about creating a foundation that enables long-term success and scalability for your projects. So, take the time to implement these strategies, and watch your code become cleaner, more efficient, and less prone to errors. Thank you for reading, and happy coding!
SingleStore is a powerful multi-model database system and platform designed to support a wide variety of business use cases. Its distinctive features allow businesses to unify multiple database systems into a single platform, reducing the Total Cost of Ownership (TCO) and simplifying developer workflows by eliminating the need for complex integration tools. In this article, we'll explore how SingleStore can transform email campaigns for a web analytics company, enabling the creation of personalized and highly targeted email content. The notebook file used in the article is available on GitHub. Introduction A web analytics company relies on email campaigns to engage with customers. However, a generic approach to targeting customers often misses opportunities to maximize business potential. A more effective solution would involve using a large language model (LLM) to craft personalized email messages. Consider a scenario where user behavior data are stored in a NoSQL database like MongoDB, while valuable documentation resides in a vector database, such as Pinecone. Managing these multiple systems can become complex and resource-intensive, highlighting the need for a unified solution. SingleStore, a versatile multi-model database, supports various data formats, including JSON, and offers built-in vector functions. It seamlessly integrates with LLMs, making it a powerful alternative to managing multiple database systems. In this article, we'll demonstrate how easily SingleStore can replace both MongoDB and Pinecone, simplifying operations without compromising functionality. In our example application, we'll use an LLM to generate unique emails for our customers. To help the LLM learn how to target our customers, we'll use a number of well-known analytics companies as learning material for the LLM. We'll further customize the content based on user behavior. Customer data are stored in MongoDB. Different stages of user behavior are stored in Pinecone. The user behavior will allow the LLM to generate personalized emails. Finally, we'll consolidate the data stored in MongoDB and Pinecone by using SingleStore. Create a SingleStore Cloud Account A previous article showed the steps to create a free SingleStore Cloud account. We'll use the Standard Tier and take the default names for the Workspace Group and Workspace. We'll also enable SingleStore Kai. We'll store our OpenAI API Key and Pinecone API Key in the secrets vault using OPENAI_API_KEY and PINECONE_API_KEY, respectively. Import the Notebook We'll download the notebook from GitHub. From the left navigation pane in the SingleStore cloud portal, we'll select "DEVELOP" > "Data Studio." In the top right of the web page, we'll select "New Notebook" > "Import From File." We'll use the wizard to locate and import the notebook we downloaded from GitHub. Run the Notebook Generic Email Template We'll start by generating generic email templates and then use an LLM to transform them into personalized messages for each customer. This way, we can address each recipient by name and introduce them to the benefits of our web analytics platform. We can generate a generic email as follows: Python people = ["Alice", "Bob", "Charlie", "David", "Emma"] for person in people: message = ( f"Hey {person},\n" "Check out our web analytics platform, it's Awesome!\n" "It's perfect for your needs. 
Buy it now!\n" "- Marketer John" ) print(message) print("_" * 100) For example, Alice would see the following message: Plain Text Hey Alice, Check out our web analytics platform, it's Awesome! It's perfect for your needs. Buy it now! - Marketer John Other users would receive the same message, but with their name, respectively. 2. Adding a Large Language Model (LLM) We can easily bring an LLM into our application by providing it with a role and giving it some information, as follows: Python system_message = """ You are a helpful assistant. My name is Marketer John. You help write the body of an email for a fictitious company called 'Awesome Web Analytics'. This is a web analytics company that is similar to the top 5 web analytics companies (perform a web search to determine the current top 5 web analytics companies). The goal is to write a custom email to users to get them interested in our services. The email should be less than 150 words. Address the user by name. End with my signature. """ We'll create a function to call the LLM: Python def chatgpt_generate_email(prompt, person): conversation = [ {"role": "system", "content": prompt}, {"role": "user", "content": person}, {"role": "assistant", "content": ""} ] response = openai_client.chat.completions.create( model = "gpt-4o-mini", messages = conversation, temperature = 1.0, max_tokens = 800, top_p = 1, frequency_penalty = 0, presence_penalty = 0 ) assistant_reply = response.choices[0].message.content return assistant_reply Looping through the list of users and calling the LLM produces unique emails: Python openai_client = OpenAI() # Define a list to store the responses emails = [] # Loop through each person and generate the conversation for person in people: email = chatgpt_generate_email(system_message, person) emails.append( { "person": person, "assistant_reply": email } ) For example, this is what Alice might see: Plain Text Person: Alice Subject: Unlock Your Website's Potential with Awesome Web Analytics! Hi Alice, Are you ready to take your website to new heights? At Awesome Web Analytics, we provide cutting-edge insights that empower you to make informed decisions and drive growth. With our powerful analytics tools, you can understand user behavior, optimize performance, and boost conversions—all in real-time! Unlike other analytics platforms, we offer personalized support to guide you every step of the way. Join countless satisfied customers who have transformed their online presence. Discover how we stack up against competitors like Google Analytics, Adobe Analytics, and Matomo, but with a focus on simplicity and usability. Let us help you turn data into your greatest asset! Best, Marketer John Awesome Web Analytics Equally unique emails will be generated for the other users. 3. Customizing Email Content With User Behavior By categorising users based on their behavior stages, we can further customize email content to align with their specific needs. An LLM will assist in crafting emails that encourage users to progress through different stages, ultimately improving their understanding and usage of various services. 
At present, user data are held in a MongoDB database with a record structure similar to the following: JSON { '_id': ObjectId('64afb3fda9295d8421e7a19f'), 'first_name': 'James', 'last_name': 'Villanueva', 'company_name': 'Foley-Turner', 'stage': 'generating a tracking code', 'created_date': 1987-11-09T12:43:26.000+00:00 } We'll connect to MongoDB to get the data as follows: Python try: mongo_client = MongoClient("mongodb+srv://admin:<password>@<host>/?retryWrites=true&w=majority") mongo_db = mongo_client["mktg_email_demo"] collection = mongo_db["customers"] print("Connected successfully") except Exception as e: print(e) We'll replace <password> and <host> with the values from MongoDB Atlas. We have a number of user behavior stages: Python stages = [ "getting started", "generating a tracking code", "adding tracking to your website", "real-time analytics", "conversion tracking", "funnels", "user segmentation", "custom event tracking", "data export", "dashboard customization" ] def find_next_stage(current_stage): current_index = stages.index(current_stage) if current_index < len(stages) - 1: return stages[current_index + 1] else: return stages[current_index] Using the data about behavior stages, we'll ask the LLM to further customize the email as follows: Python limit = 5 emails = [] for record in collection.find(limit = limit): fname, stage = record.get("first_name"), record.get("stage") next_stage = find_next_stage(stage) system_message = f""" You are a helpful assistant, who works for me, Marketer John at Awesome Web Analytics. You help write the body of an email for a fictitious company called 'Awesome Web Analytics'. We are a web analytics company similar to the top 5 web analytics companies. We have users at various stages in our product's pipeline, and we want to send them helpful emails to encourage further usage of our product. Please write an email for {fname} who is on stage {stage} of the onboarding process. The next stage is {next_stage}. Ensure the email describes the benefits of moving to the next stage. Limit the email to 1 paragraph. End the email with my signature. """ email = chatgpt_generate_email(system_message, fname) emails.append( { "fname": fname, "stage": stage, "next_stage": next_stage, "email": email } ) For example, here is an email generated for Michael: Plain Text First Name: Michael Stage: funnels Next Stage: user segmentation Subject: Unlock Deeper Insights with User Segmentation! Hi Michael, Congratulations on successfully navigating the funnel stage of our onboarding process! As you move forward to user segmentation, you'll discover how this powerful tool will enable you to categorize your users based on their behaviors and demographics. By understanding your audience segments better, you can create tailored experiences that increase engagement and optimize conversions. This targeted approach not only enhances your marketing strategies but also drives meaningful results and growth for your business. We're excited to see how segmentation will elevate your analytics efforts! Best, Marketer John Awesome Web Analytics 4. Further Customizing Email Content To support user progress, we'll use Pinecone's vector embeddings, allowing us to direct users to relevant documentation for each stage. These embeddings make it effortless to guide users toward essential resources and further enhance their interactions with our product. 
Python pc = Pinecone( api_key = pc_api_key ) index_name = "mktg-email-demo" if any(index["name"] == index_name for index in pc.list_indexes()): pc.delete_index(index_name) pc.create_index( name = index_name, dimension = dimensions, metric = "euclidean", spec = ServerlessSpec( cloud = "aws", region = "us-east-1" ) ) pc_index = pc.Index(index_name) pc.list_indexes() We'll create the embeddings as follows: Python def get_embeddings(text): text = text.replace("\n", " ") try: response = openai_client.embeddings.create( input = text, model = "text-embedding-3-small" ) return response.data[0].embedding, response.usage.total_tokens, "success" except Exception as e: print(e) return "", 0, "failed" id_counter = 1 ids_list = [] for stage in stages: embedding, tokens, status = get_embeddings(stage) parent = id_counter - 1 pc_index.upsert([ { "id": str(id_counter), "values": embedding, "metadata": {"content": stage, "parent": str(parent)} } ]) ids_list.append(str(id_counter)) id_counter += 1 We'll search Pinecone for matches as follows: Python def search_pinecone(embedding): match = pc_index.query( vector = [embedding], top_k = 1, include_metadata = True )["matches"][0]["metadata"] return match["content"], match["parent"] Using the data, we can ask the LLM to further customize the email, as follows: Python limit = 5 emails = [] for record in collection.find(limit = limit): fname, stage = record.get("first_name"), record.get("stage") # Get the current and next stages with their embedding this_stage = next((item for item in stages_w_embed if item["stage"] == stage), None) next_stage = next((item for item in stages_w_embed if item["stage"] == find_next_stage(stage)), None) if not this_stage or not next_stage: continue # Get content cur_content, cur_permalink = search_pinecone(this_stage["embedding"]) next_content, next_permalink = search_pinecone(next_stage["embedding"]) system_message = f""" You are a helpful assistant. I am Marketer John at Awesome Web Analytics. We are similar to the current top web analytics companies. We have users at various stages of using our product, and we want to send them helpful emails to encourage them to use our product more. Write an email for {fname}, who is on stage {stage} of the onboarding process. The next stage is {next_stage['stage']}. Ensure the email describes the benefits of moving to the next stage, and include this link: https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/{next_content.replace(' ', '-')}.md. Limit the email to 1 paragraph. End the email with my signature: 'Best Regards, Marketer John.' """ email = chatgpt_generate_email(system_message, fname) emails.append( { "fname": fname, "stage": stage, "next_stage": next_stage["stage"], "email": email } ) For example, here is an email generated for Melissa: Plain Text First Name: Melissa Stage: getting started Next Stage: generating a tracking code Subject: Take the Next Step with Awesome Web Analytics! Hi Melissa, We're thrilled to see you getting started on your journey with Awesome Web Analytics! The next step is generating your tracking code, which will allow you to start collecting valuable data about your website visitors. With this data, you can gain insights into user behavior, optimize your marketing strategies, and ultimately drive more conversions. To guide you through this process, check out our detailed instructions here: [Generating a Tracking Code](https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/generating-a-tracking-code.md). 
We're here to support you every step of the way! Best Regards, Marketer John. We can see that we have refined the generic template and developed quite targeted emails. Using SingleStore Instead of managing separate database systems, we'll streamline our operations by using SingleStore. With its support for JSON, text, and vector embeddings, we can efficiently store all necessary data in one place, reducing TCO and simplifying our development processes. We'll ingest the data from MongoDB using a pipeline similar to the following: SQL USE mktg_email_demo; CREATE LINK mktg_email_demo.link AS MONGODB CONFIG '{"mongodb.hosts": "<primary>:27017, <secondary>:27017, <secondary>:27017", "collection.include.list": "mktg_email_demo.*", "mongodb.ssl.enabled": "true", "mongodb.authsource": "admin", "mongodb.members.auto.discover": "false"}' CREDENTIALS '{"mongodb.user": "admin", "mongodb.password": "<password>"}'; CREATE TABLES AS INFER PIPELINE AS LOAD DATA LINK mktg_email_demo.link '*' FORMAT AVRO; START ALL PIPELINES; We'll replace <primary>, <secondary>, <secondary> and <password> with the values from MongoDB Atlas. The customer table will be created by the pipeline. The vector embeddings for the behavior stages can be created as follows: Python df_list = [] id_counter = 1 for stage in stages: embedding, tokens, status = get_embeddings(stage) parent = id_counter - 1 stage_df = pd.DataFrame( { "id": [id_counter], "content": [stage], "embedding": [embedding], "parent": [parent] } ) df_list.append(stage_df) id_counter += 1 df = pd.concat(df_list, ignore_index = True) We'll need a table to store the data: SQL USE mktg_email_demo; DROP TABLE IF EXISTS docs_splits; CREATE TABLE IF NOT EXISTS docs_splits ( id INT, content TEXT, embedding VECTOR(:dimensions), parent INT ); Then, we can save the data in the table: Python df.to_sql( "docs_splits", con = db_connection, if_exists = "append", index = False, chunksize = 1000 ) We'll search SingleStore for matches as follows: Python def search_s2(vector): query = """ SELECT content, parent FROM docs_splits ORDER BY (embedding <-> :vector) ASC LIMIT 1 """ with db_connection.connect() as con: result = con.execute(text(query), {"vector": str(vector)}) return result.fetchone() Using the data, we can ask the LLM to customize the email as follows: Python limit = 5 emails = [] # Create a connection with db_connection.connect() as con: query = "SELECT _more :> JSON FROM customers LIMIT :limit" result = con.execute(text(query), {"limit": limit}) for customer in result: customer_data = customer[0] fname, stage = customer_data["first_name"], customer_data["stage"] # Retrieve current and next stage embeddings this_stage = next((item for item in stages_w_embed if item["stage"] == stage), None) next_stage = next((item for item in stages_w_embed if item["stage"] == find_next_stage(stage)), None) if not this_stage or not next_stage: continue # Get content cur_content, cur_permalink = search_s2(this_stage["embedding"]) next_content, next_permalink = search_s2(next_stage["embedding"]) # Create the system message system_message = f""" You are a helpful assistant. I am Marketer John at Awesome Web Analytics. We are similar to the current top web analytics companies. We have users that are at various stages in using our product, and we want to send them helpful emails to get them to use our product more. Write an email for {fname} who is on stage {stage} of the onboarding process. The next stage is {next_stage['stage']}. 
        Ensure the email describes the benefits of moving to the next stage, then always share this link: https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/{next_content.replace(' ', '-')}.md.
        Limit the email to 1 paragraph.
        End the email with my signature: 'Best Regards, Marketer John.'
        """

        email = chatgpt_generate_email(system_message, fname)

        emails.append(
            {
                "fname": fname,
                "stage": stage,
                "next_stage": next_stage["stage"],
                "email": email,
            }
        )

For example, here is an email generated for Joseph:

Plain Text

First Name: Joseph
Stage: generating a tracking code
Next Stage: adding tracking to your website

Subject: Take the Next Step in Your Analytics Journey!

Hi Joseph,

Congratulations on generating your tracking code! The next step is to add tracking to your website, which is crucial for unlocking the full power of our analytics tools. By integrating the tracking code, you will start collecting valuable data about your visitors, enabling you to understand user behavior, optimize your website, and drive better results for your business. Ready to get started? Check out our detailed guide here: [Adding Tracking to Your Website](https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/adding-tracking-to-your-website.md).

Best Regards,
Marketer John.

Summary

Through this practical demonstration, we've seen how SingleStore improves our email campaigns with its multi-model capabilities and AI-driven personalization. Using SingleStore as our single source of truth, we've simplified our workflows and ensured that our email campaigns deliver maximum impact and value to our customers.

Acknowledgements

I thank Wes Kennedy for the original demo code, which was adapted for this article.
Testing is a critical aspect of software development. When developers write code to meet specified requirements, it's equally important to write unit tests that validate the code against those requirements to ensure its quality. Additionally, Salesforce mandates a minimum of 75% code coverage for all Apex code. However, a skilled developer goes beyond meeting this Salesforce requirement by writing comprehensive unit tests that also validate the business-defined acceptance criteria of the application. In this article, we'll explore common Apex test use cases and highlight best practices to follow when writing test classes.

Test Data Setup

An Apex test class has no visibility into your org's real data, so you must create test data. While a test class can access certain existing setup data, such as User and Profile records, most test data must be created manually. There are several ways to set up test data in Apex.

1. Loading Test Data

You can load test data from CSV files into Salesforce, which can then be used in your Apex test classes. To load the test data:

Create a CSV file with column names and values.
Create a static resource for this file.
Call Test.loadData() in your test method, passing two parameters — the sObject type you want to load data for and the name of the static resource.

Java

List<SObject> los = Test.loadData(Lead.SObjectType, 'staticresourcename');

2. Using TestSetup Methods

You can use a method annotated with @testSetup in your test class to create test records once. These records are accessible to all test methods in the test class. This reduces the need to create test records for each test method and speeds up test execution, especially when tests depend on a large number of records. Please note that even if test methods modify the data, each test method receives the original, unmodified test data.

Java

@testSetup
static void makeData() {
    Lead leadObj1 = TestDataFactory.createLead();
    leadObj1.LastName = 'TestLeadL1';
    insert leadObj1;
}

@isTest
public static void webLeadtest() {
    Lead leadObj = [SELECT Id FROM Lead WHERE LastName = 'TestLeadL1' LIMIT 1];
    // write your code for test
}

3. Using Test Utility Classes

You can define common test utility classes that create frequently used data and share them across multiple test classes. These classes are public, use the @isTest annotation, and can only be used in a test context.

Java

@isTest
public class TestDataFactory {
    public static Lead createLead() {
        // lead data creation (field values here are illustrative)
        return new Lead(LastName = 'TestLead', Company = 'Test Company');
    }
}

System Mode vs. User Mode

Apex tests run in system mode, meaning that user permissions and record sharing rules are not considered. However, it's essential to test your application under specific user contexts to ensure that user-specific requirements are properly covered. In such cases, you can use System.runAs(). You can either create a user record or query an existing user from the environment, then execute your test code within that user's context. It is important to note that the runAs() method doesn't enforce user permissions or field-level permissions; only record sharing is enforced. Once the runAs() block completes, the code goes back to system mode. Additionally, you can nest runAs() blocks to simulate multiple user contexts within the same test.

Java

System.runAs(testUser) {
    // test code
}
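For example, here is a minimal sketch of creating a test user and running test code in that user's context; the profile name and user field values are placeholders you would adjust for your org:

Java

@isTest
static void runAsStandardUserTest() {
    // Assumes a profile named 'Standard User' exists in the org
    Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];

    // The test user does not need to be inserted before being passed to runAs()
    User testUser = new User(
        Alias = 'tuser',
        Email = 'testuser@example.com',
        EmailEncodingKey = 'UTF-8',
        LastName = 'TestUser',
        LanguageLocaleKey = 'en_US',
        LocaleSidKey = 'en_US',
        ProfileId = p.Id,
        TimeZoneSidKey = 'America/New_York',
        UserName = 'testuser' + DateTime.now().getTime() + '@example.com'
    );

    System.runAs(testUser) {
        // Record sharing is enforced for testUser inside this block
        Lead leadObj = TestDataFactory.createLead();
        insert leadObj;
    }
    // Execution returns to system mode after the runAs() block
}

A second System.runAs() block can be nested inside the first to switch to another user partway through the test.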
Test.startTest() and Test.stopTest()

When your test involves a large number of DML statements and queries to set up the data, you risk hitting governor limits within the same context. Salesforce provides two methods — Test.startTest() and Test.stopTest() — to reset the governor limits during test execution. These methods mark the beginning and end of the block of code under test. The code before Test.startTest() should be reserved for set-up purposes, such as initializing variables and populating data structures. The code between these two methods runs with a fresh set of governor limits, giving you more flexibility in your test.

Another common use case for these two methods is testing asynchronous code, such as future methods. To capture the result of a future method, place the code that calls it between Test.startTest() and Test.stopTest(). Asynchronous operations complete as soon as Test.stopTest() is called, allowing you to assert on their results. Please note that Test.startTest() and Test.stopTest() can be called only once in a test method.

Java

Lead leadObj = [SELECT Id FROM Lead WHERE LastName = 'TestLeadL1' LIMIT 1];
leadObj.Status = 'Contacted';

Test.startTest(); // start a fresh set of governor limits

// Test the lead trigger, which has a future callout that creates API logs
update leadObj;

Test.stopTest();

List<Logs__c> logs = [SELECT Id FROM Logs__c LIMIT 1];
System.assert(logs.size() != 0);

Test.isRunningTest()

Sometimes, you may need code to execute only in a test context. In these cases, you can use Test.isRunningTest() to check whether the code is currently running as part of a test execution. A common scenario for Test.isRunningTest() is testing HTTP callouts: it enables you to return a mock response, giving you the necessary coverage without making actual callouts. Another use case is bypassing certain DML operations (such as writing error logs) to improve test performance.

Java

public static HttpResponse calloutMethod(String endPoint) {
    Http httpReq = new Http();
    HttpRequest req = new HttpRequest();
    HttpResponse response = new HttpResponse();
    req.setEndpoint(endPoint);
    req.setMethod('POST');
    req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
    req.setTimeout(20000);
    if (Test.isRunningTest() && (mock != null)) {
        // 'mock' is assumed to be a static HttpCalloutMock variable on this class, set by the test
        response = mock.respond(req);
    } else {
        response = httpReq.send(req);
    }
    return response;
}
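For illustration, here is a minimal sketch of a mock class that a test could assign to that mock variable before calling calloutMethod(); the class name and response body are placeholders:

Java

@isTest
public class SimpleCalloutMock implements HttpCalloutMock {
    public HttpResponse respond(HttpRequest req) {
        // Return a canned response instead of hitting the real endpoint
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"status": "ok"}');
        res.setStatusCode(200);
        return res;
    }
}

In the test method, assign an instance of this class to the mock variable (or register it with Test.setMock() if you prefer the platform's standard mocking mechanism) so that calloutMethod() never reaches an external endpoint.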
Testing With Timestamps

It's common in test scenarios to need control over timestamps, such as manipulating the CreatedDate of a record or setting the current time, in order to test application behavior that depends on specific timestamps. For example, you might need to test a UI component that loads only yesterday's open lead records, or validate logic that depends on non-working hours or holidays. In such cases, you'll need to create test records with a CreatedDate of yesterday or adjust holiday/non-working-hour logic within your test context. Here's how to do that.

1. Setting CreatedDate

Salesforce provides the Test.setCreatedDate method, which allows you to set the CreatedDate of a record. This method takes the record Id and the desired DateTime value.

Java

@isTest
static void testDisplayYesterdayLead() {
    Lead leadObj = TestDataFactory.createLead();
    insert leadObj; // the record needs an Id before its CreatedDate can be set

    Datetime yesterday = Datetime.now().addDays(-1);
    Test.setCreatedDate(leadObj.Id, yesterday);

    Test.startTest();
    // test your code
    Test.stopTest();
}

2. Manipulate System Time

You may need to manipulate the system time in your tests to ensure your code behaves as expected, regardless of when the test is executed. This is particularly useful for testing time-dependent logic. To achieve this:

Create a getter and setter for a now variable in a utility class and use it in your test methods to control the current time.

Java

public static DateTime now {
    get {
        return now == null ? DateTime.now() : now;
    }
    set;
}

In production, when now is not set, it returns the expected DateTime.now(); otherwise, it returns the value assigned to now.

Ensure that your code references Utility.now instead of System.now() so that the time manipulation takes effect during test execution.

Set the Utility.now variable in your test method.

Java

@isTest
public static void getLeadsTest() {
    Date myDate = Date.newInstance(2025, 11, 18);
    Time myTime = Time.newInstance(3, 3, 3, 0);
    DateTime dt = DateTime.newInstance(myDate, myTime);
    Utility.now = dt;

    Test.startTest();
    // run your code for testing
    Test.stopTest();
}

Conclusion

Effective testing is crucial to ensuring that your Salesforce applications perform as expected under various conditions. By using the strategies outlined in this article — such as loading test data, leveraging Test.startTest() and Test.stopTest(), manipulating timestamps, and utilizing Test.isRunningTest() — you can write robust and efficient Apex test classes that cover a wide range of use cases.