Developer Experience
With tech stacks becoming increasingly diverse and AI and automation continuing to take over everyday tasks and manual workflows, the tech industry at large is experiencing a heightened demand to support engineering teams. As a result, the developer experience is changing faster than organizations can consciously maintain.

We can no longer rely on DevOps practices or tooling alone — there is even greater power recognized in improving workflows, investing in infrastructure, and advocating for developers' needs. This nuanced approach brings developer experience to the forefront, where devs can begin to regain control over their software systems, teams, and processes.

We are happy to introduce DZone's first-ever Developer Experience Trend Report, which assesses where the developer experience stands today, including team productivity, process satisfaction, infrastructure, and platform engineering. Taking all perspectives, technologies, and methodologies into account, we share our research and industry experts' perspectives on what it means to effectively advocate for developers while simultaneously balancing quality and efficiency. Come along with us as we explore this exciting chapter in developer culture.
Testing is a critical aspect of software development. When developers write code to meet specified requirements, it's equally important to write unit tests that validate the code against those requirements to ensure its quality. Additionally, Salesforce mandates a minimum of 75% code coverage for all Apex code. However, a skilled developer goes beyond meeting this Salesforce requirement by writing comprehensive unit tests that also validate the business-defined acceptance criteria of the application. In this article, we'll explore common Apex test use cases and highlight best practices to follow when writing test classes.

Test Data Setup

An Apex test class has no visibility into real org data, so you must create test data. While a test class can access certain setup objects, such as User and Profile records, the majority of test data must be created manually. There are several ways to set up test data in Apex.

1. Loading Test Data

You can load test data from CSV files into Salesforce, which can then be used in your Apex test classes. To load the test data:

- Create a CSV file with column names and values.
- Create a static resource for this file.
- Call Test.loadData() in your test method, passing two parameters — the sObject type you want to load data for and the name of the static resource.

```apex
List<sObject> los = Test.loadData(Lead.sObjectType, 'staticresourcename');
```

2. Using TestSetup Methods

You can use a method annotated with @testSetup in your test class to create test records once. These records are accessible to all test methods in the test class. This reduces the need to create test records for each test method and speeds up test execution, especially when there are dependencies on a large number of records. Please note that even if test methods modify the data, each test method receives the original, unmodified test data.

```apex
@testSetup
static void makeData() {
    Lead leadObj1 = TestDataFactory.createLead();
    leadObj1.LastName = 'TestLeadL1';
    insert leadObj1;
}

@isTest
public static void webLeadTest() {
    Lead leadObj = [SELECT Id FROM Lead WHERE LastName = 'TestLeadL1' LIMIT 1];
    // write your code for test
}
```

3. Using Test Utility Classes

You can define common test utility classes that create frequently used data and share them across multiple test classes. These classes are public and use the @isTest annotation. Utility classes annotated with @isTest can only be used in a test context.

```apex
@isTest
public class TestDataFactory {
    public static Lead createLead() {
        // lead data creation
    }
}
```

System Mode vs. User Mode

Apex tests run in system mode, meaning that user permissions and record sharing rules are not considered. However, it's essential to test your application under specific user contexts to ensure that user-specific requirements are properly covered. In such cases, you can use System.runAs(). You can either create a user record or find a user in the environment, then execute your test code within that user's context. It is important to note that the runAs() method doesn't enforce user permissions or field-level permissions; only record sharing is enforced. Once the runAs() block completes, the code returns to system mode. Additionally, you can nest runAs() calls in your test to simulate multiple user contexts within the same test.

```apex
System.runAs(testUser) { // testUser is a User record you created or queried
    // test code
}
```
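As a fuller illustration, here is a minimal, self-contained sketch of running test code as a newly created user. The profile name, user field values, and assertion are illustrative placeholders, and TestDataFactory.createLead() is assumed (per the earlier snippet) to return an uninserted Lead:

```apex
@isTest
static void runAsStandardUserTest() {
    // Build a throwaway user; all field values here are illustrative.
    Profile p = [SELECT Id FROM Profile WHERE Name = 'Standard User' LIMIT 1];
    User u = new User(
        Alias = 'tuser',
        Email = 'test.user@example.com',
        EmailEncodingKey = 'UTF-8',
        LastName = 'Tester',
        LanguageLocaleKey = 'en_US',
        LocaleSidKey = 'en_US',
        ProfileId = p.Id,
        TimeZoneSidKey = 'America/Los_Angeles',
        UserName = 'test.user' + DateTime.now().getTime() + '@example.com'
    );
    insert u;

    System.runAs(u) {
        // Record sharing (but not CRUD/FLS) is enforced inside this block.
        Lead leadObj = TestDataFactory.createLead();
        insert leadObj;
        System.assertNotEquals(null, leadObj.Id);
    }
}
```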
Test.startTest() and Test.stopTest()

When your test involves a large number of DML statements and queries to set up the data, you risk hitting governor limits in the same context. Salesforce provides two methods — Test.startTest() and Test.stopTest() — to reset the governor limits during test execution. These methods mark the beginning and end of the test execution proper. The code before Test.startTest() should be reserved for setup purposes, such as initializing variables and populating data structures. The code between these two methods runs with fresh governor limits, giving you more flexibility in your test.

Another common use case for these two methods is testing asynchronous code such as future methods. To capture the result of a future method, place the code that calls it between Test.startTest() and Test.stopTest(). Asynchronous operations complete as soon as Test.stopTest() is called, allowing you to test their results. Please note that Test.startTest() and Test.stopTest() can each be called only once in a test method.

```apex
Lead leadObj = [SELECT Id FROM Lead WHERE LastName = 'TestLeadL1' LIMIT 1];
leadObj.Status = 'Contacted';

Test.startTest(); // start with a fresh set of governor limits
// test the lead trigger, which has a future callout that creates API logs
update leadObj;
Test.stopTest();

List<Logs__c> logs = [SELECT Id FROM Logs__c LIMIT 1];
System.assert(logs.size() != 0);
```

Test.isRunningTest()

Sometimes, you may need code that executes only in the test context. In these cases, you can use Test.isRunningTest() to check whether the code is currently running as part of a test execution. A common scenario for Test.isRunningTest() is testing HTTP callouts: it enables you to return a mock response, ensuring the necessary coverage without making actual callouts. Another use case is bypassing DML that isn't relevant to the test (such as error logs) to improve test class performance.

```apex
// 'mock' is assumed to be a static, test-visible variable on this class
// that tests can populate with a stub before invoking the method.
public static HttpResponse calloutMethod(String endPoint) {
    Http httpReq = new Http();
    HttpRequest req = new HttpRequest();
    HttpResponse response = new HttpResponse();
    req.setEndpoint(endPoint);
    req.setMethod('POST');
    req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
    req.setTimeout(20000);
    if (Test.isRunningTest() && (mock != null)) {
        response = mock.respond(req);
    } else {
        response = httpReq.send(req);
    }
    return response;
}
```
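The snippet above leaves the mock itself to the reader. The platform-standard way to stub callouts is to implement the HttpCalloutMock interface; this sketch is illustrative (the class name and response body are placeholders):

```apex
@isTest
global class CalloutMockExample implements HttpCalloutMock {
    // Return a canned response instead of performing a real callout.
    global HttpResponse respond(HttpRequest req) {
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"status": "ok"}'); // illustrative payload
        res.setStatusCode(200);
        return res;
    }
}
```

A test method would then call Test.setMock(HttpCalloutMock.class, new CalloutMockExample()); before invoking calloutMethod(). With Test.setMock registered, Http.send() returns the canned response during the test, so a static mock variable is not strictly required.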
Testing With Timestamps

It's common in test scenarios to need control over timestamps, such as manipulating the CreatedDate of a record or setting the current time, in order to test application behavior based on specific timestamps. For example, you might need to test a UI component that loads only yesterday's open lead records, or validate logic that depends on non-working hours or holidays. In such cases, you'll need to create test records with a CreatedDate of yesterday or adjust holiday/non-working-hour logic within your test context. Here's how to do that.

1. Setting CreatedDate

Salesforce provides the Test.setCreatedDate method, which allows you to set the CreatedDate of a record. This method takes the record Id and the desired DateTime timestamp.

```apex
@isTest
static void testDisplayYesterdayLead() {
    Lead leadObj = TestDataFactory.createLead();
    insert leadObj; // the record needs an Id before its CreatedDate can be set
    Datetime yesterday = Datetime.now().addDays(-1);
    Test.setCreatedDate(leadObj.Id, yesterday);
    Test.startTest();
    // test your code
    Test.stopTest();
}
```

2. Manipulating System Time

You may need to manipulate the system time in your tests to ensure your code behaves as expected regardless of when the test is executed. This is particularly useful for testing time-dependent logic. To achieve this, create a getter and setter for a now variable in a utility class (here called Utility) and use it in your test methods to control the current time:

```apex
public static DateTime now {
    get {
        return now == null ? DateTime.now() : now;
    }
    set;
}
```

What this means is that in production, when now is not set, it returns the expected DateTime.now(); otherwise, it returns the value that was set. Ensure that your code references Utility.now instead of System.now() so that the time manipulation takes effect during test execution. Then set the Utility.now variable in your test method:

```apex
@isTest
public static void getLeadsTest() {
    Date myDate = Date.newInstance(2025, 11, 18);
    Time myTime = Time.newInstance(3, 3, 3, 0);
    DateTime dt = DateTime.newInstance(myDate, myTime);
    Utility.now = dt;
    Test.startTest();
    // run your code for testing
    Test.stopTest();
}
```

Conclusion

Effective testing is crucial to ensuring that your Salesforce applications perform as expected under various conditions. By using the strategies outlined in this article — such as loading test data, leveraging Test.startTest() and Test.stopTest(), manipulating timestamps, and utilizing Test.isRunningTest() — you can write robust and efficient Apex test classes that cover a wide range of use cases.
SingleStore is a powerful multi-model database system and platform designed to support a wide variety of business use cases. Its distinctive features allow businesses to unify multiple database systems into a single platform, reducing the total cost of ownership (TCO) and simplifying developer workflows by eliminating the need for complex integration tools. In this article, we'll explore how SingleStore can transform email campaigns for a web analytics company, enabling the creation of personalized and highly targeted email content. The notebook file used in the article is available on GitHub.

Introduction

A web analytics company relies on email campaigns to engage with customers. However, a generic approach to targeting customers often misses opportunities to maximize business potential. A more effective solution would involve using a large language model (LLM) to craft personalized email messages.

Consider a scenario where user behavior data are stored in a NoSQL database like MongoDB, while valuable documentation resides in a vector database, such as Pinecone. Managing these multiple systems can become complex and resource-intensive, highlighting the need for a unified solution.

SingleStore, a versatile multi-model database, supports various data formats, including JSON, and offers built-in vector functions. It seamlessly integrates with LLMs, making it a powerful alternative to managing multiple database systems. In this article, we'll demonstrate how easily SingleStore can replace both MongoDB and Pinecone, simplifying operations without compromising functionality.

In our example application, we'll use an LLM to generate unique emails for our customers. To help the LLM learn how to target our customers, we'll use a number of well-known analytics companies as learning material for the LLM. We'll further customize the content based on user behavior. Customer data are stored in MongoDB. Different stages of user behavior are stored in Pinecone. The user behavior will allow the LLM to generate personalized emails. Finally, we'll consolidate the data stored in MongoDB and Pinecone by using SingleStore.

Create a SingleStore Cloud Account

A previous article showed the steps to create a free SingleStore Cloud account. We'll use the Standard Tier and take the default names for the Workspace Group and Workspace. We'll also enable SingleStore Kai. We'll store our OpenAI API key and Pinecone API key in the secrets vault using OPENAI_API_KEY and PINECONE_API_KEY, respectively.

Import the Notebook

We'll download the notebook from GitHub. From the left navigation pane in the SingleStore cloud portal, we'll select "DEVELOP" > "Data Studio." In the top right of the web page, we'll select "New Notebook" > "Import From File." We'll use the wizard to locate and import the notebook we downloaded from GitHub.

Run the Notebook

1. Generic Email Template

We'll start by generating generic email templates and then use an LLM to transform them into personalized messages for each customer. This way, we can address each recipient by name and introduce them to the benefits of our web analytics platform. We can generate a generic email as follows:

```python
people = ["Alice", "Bob", "Charlie", "David", "Emma"]

for person in people:
    message = (
        f"Hey {person},\n"
        "Check out our web analytics platform, it's Awesome!\n"
        "It's perfect for your needs. Buy it now!\n"
        "- Marketer John"
    )
    print(message)
    print("_" * 100)
```
For example, Alice would see the following message:

```text
Hey Alice,
Check out our web analytics platform, it's Awesome!
It's perfect for your needs. Buy it now!
- Marketer John
```

Other users would receive the same message, but addressed to their own name.

2. Adding a Large Language Model (LLM)

We can easily bring an LLM into our application by providing it with a role and giving it some information, as follows:

```python
system_message = """
You are a helpful assistant. My name is Marketer John.
You help write the body of an email for a fictitious company called 'Awesome Web Analytics'.
This is a web analytics company that is similar to the top 5 web analytics companies
(perform a web search to determine the current top 5 web analytics companies).
The goal is to write a custom email to users to get them interested in our services.
The email should be less than 150 words. Address the user by name. End with my signature.
"""
```

We'll create a function to call the LLM:

```python
def chatgpt_generate_email(prompt, person):
    conversation = [
        {"role": "system", "content": prompt},
        {"role": "user", "content": person},
        {"role": "assistant", "content": ""}
    ]

    response = openai_client.chat.completions.create(
        model = "gpt-4o-mini",
        messages = conversation,
        temperature = 1.0,
        max_tokens = 800,
        top_p = 1,
        frequency_penalty = 0,
        presence_penalty = 0
    )

    assistant_reply = response.choices[0].message.content
    return assistant_reply
```

Looping through the list of users and calling the LLM produces unique emails:

```python
openai_client = OpenAI()

# Define a list to store the responses
emails = []

# Loop through each person and generate the conversation
for person in people:
    email = chatgpt_generate_email(system_message, person)
    emails.append(
        {
            "person": person,
            "assistant_reply": email
        }
    )
```

For example, this is what Alice might see:

```text
Person: Alice
Subject: Unlock Your Website's Potential with Awesome Web Analytics!

Hi Alice,

Are you ready to take your website to new heights? At Awesome Web Analytics, we provide cutting-edge insights that empower you to make informed decisions and drive growth. With our powerful analytics tools, you can understand user behavior, optimize performance, and boost conversions—all in real-time!

Unlike other analytics platforms, we offer personalized support to guide you every step of the way. Join countless satisfied customers who have transformed their online presence. Discover how we stack up against competitors like Google Analytics, Adobe Analytics, and Matomo, but with a focus on simplicity and usability. Let us help you turn data into your greatest asset!

Best,
Marketer John
Awesome Web Analytics
```

Equally unique emails will be generated for the other users.

3. Customizing Email Content With User Behavior

By categorizing users based on their behavior stages, we can further customize email content to align with their specific needs. An LLM will assist in crafting emails that encourage users to progress through different stages, ultimately improving their understanding and usage of various services.
At present, user data are held in a MongoDB database with a record structure similar to the following:

```json
{
    '_id': ObjectId('64afb3fda9295d8421e7a19f'),
    'first_name': 'James',
    'last_name': 'Villanueva',
    'company_name': 'Foley-Turner',
    'stage': 'generating a tracking code',
    'created_date': '1987-11-09T12:43:26.000+00:00'
}
```

We'll connect to MongoDB to get the data as follows:

```python
try:
    mongo_client = MongoClient("mongodb+srv://admin:<password>@<host>/?retryWrites=true&w=majority")
    mongo_db = mongo_client["mktg_email_demo"]
    collection = mongo_db["customers"]
    print("Connected successfully")
except Exception as e:
    print(e)
```

We'll replace <password> and <host> with the values from MongoDB Atlas. We have a number of user behavior stages:

```python
stages = [
    "getting started",
    "generating a tracking code",
    "adding tracking to your website",
    "real-time analytics",
    "conversion tracking",
    "funnels",
    "user segmentation",
    "custom event tracking",
    "data export",
    "dashboard customization"
]

def find_next_stage(current_stage):
    current_index = stages.index(current_stage)
    if current_index < len(stages) - 1:
        return stages[current_index + 1]
    else:
        return stages[current_index]
```

Using the data about behavior stages, we'll ask the LLM to further customize the email as follows:

```python
limit = 5

emails = []

for record in collection.find(limit = limit):
    fname, stage = record.get("first_name"), record.get("stage")
    next_stage = find_next_stage(stage)

    system_message = f"""
    You are a helpful assistant, who works for me, Marketer John at Awesome Web Analytics.
    You help write the body of an email for a fictitious company called 'Awesome Web Analytics'.
    We are a web analytics company similar to the top 5 web analytics companies.
    We have users at various stages in our product's pipeline, and we want to send them
    helpful emails to encourage further usage of our product.
    Please write an email for {fname} who is on stage {stage} of the onboarding process.
    The next stage is {next_stage}.
    Ensure the email describes the benefits of moving to the next stage.
    Limit the email to 1 paragraph.
    End the email with my signature.
    """

    email = chatgpt_generate_email(system_message, fname)
    emails.append(
        {
            "fname": fname,
            "stage": stage,
            "next_stage": next_stage,
            "email": email
        }
    )
```

For example, here is an email generated for Michael:

```text
First Name: Michael
Stage: funnels
Next Stage: user segmentation

Subject: Unlock Deeper Insights with User Segmentation!

Hi Michael,

Congratulations on successfully navigating the funnel stage of our onboarding process! As you move forward to user segmentation, you'll discover how this powerful tool will enable you to categorize your users based on their behaviors and demographics. By understanding your audience segments better, you can create tailored experiences that increase engagement and optimize conversions. This targeted approach not only enhances your marketing strategies but also drives meaningful results and growth for your business. We're excited to see how segmentation will elevate your analytics efforts!

Best,
Marketer John
Awesome Web Analytics
```

4. Further Customizing Email Content

To support user progress, we'll use Pinecone's vector embeddings, allowing us to direct users to relevant documentation for each stage. These embeddings make it effortless to guide users toward essential resources and further enhance their interactions with our product.
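Before the following snippets run, the notebook is assumed to have initialized the Pinecone API key and the embedding dimensions. A rough sketch of that setup (reading the key from an environment variable, and 1536 dimensions for text-embedding-3-small, are assumptions on my part):

```python
import os

from pinecone import Pinecone, ServerlessSpec

# Assumed setup: the key comes from the secrets vault via an environment variable.
pc_api_key = os.environ["PINECONE_API_KEY"]

# text-embedding-3-small returns 1536-dimensional vectors by default.
dimensions = 1536
```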
We'll create the Pinecone serverless index as follows:

```python
pc = Pinecone(
    api_key = pc_api_key
)

index_name = "mktg-email-demo"

if any(index["name"] == index_name for index in pc.list_indexes()):
    pc.delete_index(index_name)

pc.create_index(
    name = index_name,
    dimension = dimensions,
    metric = "euclidean",
    spec = ServerlessSpec(
        cloud = "aws",
        region = "us-east-1"
    )
)

pc_index = pc.Index(index_name)

pc.list_indexes()
```

We'll create the embeddings as follows:

```python
def get_embeddings(text):
    text = text.replace("\n", " ")
    try:
        response = openai_client.embeddings.create(
            input = text,
            model = "text-embedding-3-small"
        )
        return response.data[0].embedding, response.usage.total_tokens, "success"
    except Exception as e:
        print(e)
        return "", 0, "failed"

id_counter = 1
ids_list = []

for stage in stages:
    embedding, tokens, status = get_embeddings(stage)
    parent = id_counter - 1
    pc_index.upsert([
        {
            "id": str(id_counter),
            "values": embedding,
            "metadata": {"content": stage, "parent": str(parent)}
        }
    ])
    ids_list.append(str(id_counter))
    id_counter += 1
```
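The loop further below iterates over a stages_w_embed collection that pairs each stage with its embedding. Its construction is not shown in the excerpts above, so here is a minimal sketch inferred from how it is used (the name and shape come from the later snippets):

```python
# Pair each behavior stage with its embedding for reuse in later lookups.
stages_w_embed = []

for stage in stages:
    embedding, tokens, status = get_embeddings(stage)
    if status == "success":
        stages_w_embed.append(
            {
                "stage": stage,
                "embedding": embedding
            }
        )
```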
We'll search Pinecone for matches as follows:

```python
def search_pinecone(embedding):
    match = pc_index.query(
        vector = [embedding],
        top_k = 1,
        include_metadata = True
    )["matches"][0]["metadata"]
    return match["content"], match["parent"]
```

Using the data, we can ask the LLM to further customize the email, as follows:

```python
limit = 5

emails = []

for record in collection.find(limit = limit):
    fname, stage = record.get("first_name"), record.get("stage")

    # Get the current and next stages with their embedding
    this_stage = next((item for item in stages_w_embed if item["stage"] == stage), None)
    next_stage = next((item for item in stages_w_embed if item["stage"] == find_next_stage(stage)), None)

    if not this_stage or not next_stage:
        continue

    # Get content
    cur_content, cur_permalink = search_pinecone(this_stage["embedding"])
    next_content, next_permalink = search_pinecone(next_stage["embedding"])

    system_message = f"""
    You are a helpful assistant.
    I am Marketer John at Awesome Web Analytics.
    We are similar to the current top web analytics companies.
    We have users at various stages of using our product, and we want to send them
    helpful emails to encourage them to use our product more.
    Write an email for {fname}, who is on stage {stage} of the onboarding process.
    The next stage is {next_stage['stage']}.
    Ensure the email describes the benefits of moving to the next stage, and include this link:
    https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/{next_content.replace(' ', '-')}.md.
    Limit the email to 1 paragraph.
    End the email with my signature: 'Best Regards, Marketer John.'
    """

    email = chatgpt_generate_email(system_message, fname)
    emails.append(
        {
            "fname": fname,
            "stage": stage,
            "next_stage": next_stage["stage"],
            "email": email
        }
    )
```

For example, here is an email generated for Melissa:

```text
First Name: Melissa
Stage: getting started
Next Stage: generating a tracking code

Subject: Take the Next Step with Awesome Web Analytics!

Hi Melissa,

We're thrilled to see you getting started on your journey with Awesome Web Analytics! The next step is generating your tracking code, which will allow you to start collecting valuable data about your website visitors. With this data, you can gain insights into user behavior, optimize your marketing strategies, and ultimately drive more conversions. To guide you through this process, check out our detailed instructions here: [Generating a Tracking Code](https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/generating-a-tracking-code.md). We're here to support you every step of the way!

Best Regards, Marketer John.
```

We can see that we have refined the generic template and developed quite targeted emails.

Using SingleStore

Instead of managing separate database systems, we'll streamline our operations by using SingleStore. With its support for JSON, text, and vector embeddings, we can efficiently store all necessary data in one place, reducing TCO and simplifying our development processes. We'll ingest the data from MongoDB using a pipeline similar to the following:

```sql
USE mktg_email_demo;

CREATE LINK mktg_email_demo.link AS MONGODB
CONFIG '{"mongodb.hosts": "<primary>:27017, <secondary>:27017, <secondary>:27017",
    "collection.include.list": "mktg_email_demo.*",
    "mongodb.ssl.enabled": "true",
    "mongodb.authsource": "admin",
    "mongodb.members.auto.discover": "false"}'
CREDENTIALS '{"mongodb.user": "admin",
    "mongodb.password": "<password>"}';

CREATE TABLES AS INFER PIPELINE AS LOAD DATA LINK mktg_email_demo.link '*' FORMAT AVRO;

START ALL PIPELINES;
```

We'll replace <primary>, <secondary>, <secondary> and <password> with the values from MongoDB Atlas. The customers table will be created by the pipeline. The vector embeddings for the behavior stages can be created as follows:

```python
df_list = []
id_counter = 1

for stage in stages:
    embedding, tokens, status = get_embeddings(stage)
    parent = id_counter - 1
    stage_df = pd.DataFrame(
        {
            "id": [id_counter],
            "content": [stage],
            "embedding": [embedding],
            "parent": [parent]
        }
    )
    df_list.append(stage_df)
    id_counter += 1

df = pd.concat(df_list, ignore_index = True)
```

We'll need a table to store the data:

```sql
USE mktg_email_demo;

DROP TABLE IF EXISTS docs_splits;

CREATE TABLE IF NOT EXISTS docs_splits (
    id INT,
    content TEXT,
    embedding VECTOR(:dimensions),
    parent INT
);
```

Then, we can save the data in the table:

```python
df.to_sql(
    "docs_splits",
    con = db_connection,
    if_exists = "append",
    index = False,
    chunksize = 1000
)
```

We'll search SingleStore for matches as follows:

```python
def search_s2(vector):
    query = """
    SELECT content, parent
    FROM docs_splits
    ORDER BY (embedding <-> :vector) ASC
    LIMIT 1
    """
    with db_connection.connect() as con:
        result = con.execute(text(query), {"vector": str(vector)})
        return result.fetchone()
```
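One assumption worth making explicit: the db_connection object used by to_sql() and search_s2() is created earlier in the notebook. A minimal SQLAlchemy sketch might look like the following; the URL, driver, and credentials are placeholders, relying on SingleStore being MySQL wire-protocol compatible:

```python
from sqlalchemy import create_engine, text  # text() is also used by search_s2()

# Placeholder URL; in the notebook, the workspace connection details are provided.
db_connection = create_engine(
    "mysql+pymysql://<user>:<password>@<host>:3306/mktg_email_demo"
)
```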
Using the data, we can ask the LLM to customize the email as follows:

```python
limit = 5

emails = []

# Create a connection
with db_connection.connect() as con:
    query = "SELECT _more :> JSON FROM customers LIMIT :limit"
    result = con.execute(text(query), {"limit": limit})

    for customer in result:
        customer_data = customer[0]
        fname, stage = customer_data["first_name"], customer_data["stage"]

        # Retrieve current and next stage embeddings
        this_stage = next((item for item in stages_w_embed if item["stage"] == stage), None)
        next_stage = next((item for item in stages_w_embed if item["stage"] == find_next_stage(stage)), None)

        if not this_stage or not next_stage:
            continue

        # Get content
        cur_content, cur_permalink = search_s2(this_stage["embedding"])
        next_content, next_permalink = search_s2(next_stage["embedding"])

        # Create the system message
        system_message = f"""
        You are a helpful assistant.
        I am Marketer John at Awesome Web Analytics.
        We are similar to the current top web analytics companies.
        We have users that are at various stages in using our product, and we want to send them
        helpful emails to get them to use our product more.
        Write an email for {fname} who is on stage {stage} of the onboarding process.
        The next stage is {next_stage['stage']}.
        Ensure the email describes the benefits of moving to the next stage, then always share this link:
        https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/{next_content.replace(' ', '-')}.md.
        Limit the email to 1 paragraph.
        End the email with my signature: 'Best Regards, Marketer John.'
        """

        email = chatgpt_generate_email(system_message, fname)
        emails.append(
            {
                "fname": fname,
                "stage": stage,
                "next_stage": next_stage["stage"],
                "email": email,
            }
        )
```

For example, here is an email generated for Joseph:

```text
First Name: Joseph
Stage: generating a tracking code
Next Stage: adding tracking to your website

Subject: Take the Next Step in Your Analytics Journey!

Hi Joseph,

Congratulations on generating your tracking code! The next step is to add tracking to your website, which is crucial for unlocking the full power of our analytics tools. By integrating the tracking code, you will start collecting valuable data about your visitors, enabling you to understand user behavior, optimize your website, and drive better results for your business. Ready to get started? Check out our detailed guide here: [Adding Tracking to Your Website](https://github.com/VeryFatBoy/mktg-email-flow/tree/main/docs/adding-tracking-to-your-website.md).

Best Regards, Marketer John.
```

Summary

Through this practical demonstration, we've seen how SingleStore improves our email campaigns with its multi-model capabilities and AI-driven personalization. Using SingleStore as our single source of truth, we've simplified our workflows and ensured that our email campaigns deliver maximum impact and value to our customers.

Acknowledgements

I thank Wes Kennedy for the original demo code, which was adapted for this article.
The big question is: Can we handle AI responsibly? Or will we let it run wild?

Artificial intelligence (AI) is changing the world. It is used in self-driving cars, healthcare, finance, and education. AI is making life easier, but it also comes with risks. What happens when AI makes a mistake? Should AI take the blame, or should humans be responsible?

I know that AI is just a tool — it doesn't think, feel, or make choices on its own. It only follows the data and rules we give it. If AI makes a mistake, it's because we made errors in building, training, or supervising it. Blaming AI is like blaming a calculator for a wrong answer when the person entered the wrong numbers. AI can be powerful and helpful, but it's not perfect. It's our duty to build it carefully, test it properly, and use it responsibly. The real responsibility is on us, not AI.

What Is Responsible AI?

Responsible AI means creating, using, and controlling AI in a safe and fair way. It helps reduce problems like bias, unfairness, and privacy risks. AI is not a person. It does not have feelings or morals. It follows the rules we give it. So, if AI does something wrong, it is our fault, not AI's.

Key Parts of Responsible AI

- Transparency. AI decisions should be clear, not a "black box."
- Fairness. AI must not treat people unfairly.
- Accountability. Humans must take responsibility when AI makes mistakes.
- Privacy and security. AI must protect people's data, not misuse it.

So it is always we who use the AI; if anyone is irresponsible, it is us, not AI.

Key Principles of Responsible AI

Several organisations and researchers have proposed principles for responsible AI. These principles often overlap and share common themes. Some key principles include:

1. Fairness and Non-Discrimination

AI should not treat people unfairly based on race, gender, or other factors. Biased AI can lead to unfair hiring, loans, and law enforcement.

"Algorithms are opinions embedded in code." – Cathy O'Neil, Weapons of Math Destruction

2. Transparency and Explainability

AI must show how it makes decisions. This builds trust. If AI makes a mistake, we should understand why. The need for explainable AI (XAI) is growing as AI systems become more complex and are used in critical applications.

"Black boxes conceal agendas." – Frank Pasquale, The Black Box Society

3. Accountability

There must be clear rules about who is responsible when AI causes harm.

"With great power comes great responsibility." – Attributed to Voltaire, popularized by Spider-Man comics

4. Privacy and Security

AI systems should protect user privacy and data security. This is particularly important as AI systems often rely on large amounts of data.

"Privacy is not an option, and it shouldn't be the price we accept for just getting on the Internet." – Gary Kovacs, former CEO of Mozilla

5. Robustness and Safety

AI must work correctly in all situations, especially in self-driving cars and healthcare.

"Safety isn't expensive, it's priceless." – Unknown

6. Human Control

Humans should always have control over AI. We should be able to stop AI or change its decisions if needed.

"Is artificial intelligence less than our intelligence?" – Spike Jonze

7. Beneficence

AI should help people and solve problems, not create new ones.
"The coming era of Artificial Intelligence will not be the era of war, but be the era of deep compassion, non-violence, and love." ― Amit Ray, Compassionate Artificial Intelligence

Examples of Responsible AI in Practice

Companies and developers must ensure that AI is transparent, fair, and safe for all. By following responsible AI practices, we can build a future where AI benefits everyone without unintended harm.

Healthcare: AI for Better Patient Care

- Fair diagnosis. AI must not favor one group over another.
- Data safety. Patients' information must be protected.
- Transparency. Doctors must understand how AI makes medical decisions.

Finance: Fair and Secure Banking With AI

- Equal access to loans. AI should not discriminate against applicants based on race or gender.
- Reliable fraud detection. AI must accurately detect fraud while avoiding false alarms on genuine transactions.
- Clear decision-making. Banks should explain why AI approves or denies a loan (explainable AI).

Transportation: Smarter and Safer Mobility

- Safe self-driving vehicles. AI must prioritize human safety over efficiency.
- Better traffic flow. AI should help reduce congestion without creating unfair access to transportation.
- Privacy protection. AI in ride-sharing apps or public transport must safeguard user data.

Global Efforts Toward Responsible AI

Several organisations and regulatory bodies are shaping AI governance and ethical guidelines:

- Microsoft's AI principles – Focus on fairness, transparency, and accountability
- EU ethics guidelines for trustworthy AI – Emphasize human oversight, safety, and non-discrimination
- Google's AI principles – Prioritize fairness, safety, and accountability
- McKinsey's responsible AI framework – Ethical AI practices for business transformation

Open-source initiatives also play a crucial role in ensuring AI fairness and transparency:

- AI Fairness 360 – IBM's toolkit for detecting and mitigating AI bias
- TensorFlow Privacy – Privacy-preserving AI model training
- Fairlearn – Python package for improving AI fairness
- XAI by Ethical AI and ML – AI library with built-in explainability features

Worldwide Authorities

Several worldwide authorities and organisations are working on responsible AI:

- The European Union. The EU is at the forefront of regulating AI and promoting responsible AI through initiatives like the Ethics Guidelines for Trustworthy AI and the proposed AI Act.
- OECD. The Organisation for Economic Co-operation and Development (OECD) has developed principles on AI and is working to promote international cooperation on responsible AI.
- UNESCO. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has developed a Recommendation on the Ethics of AI, providing a global framework for responsible AI development and deployment.
- IEEE. The Institute of Electrical and Electronics Engineers (IEEE) has initiatives and standards related to ethically aligned design for autonomous and intelligent systems.

How to Prevent AI Risks and Ensure Responsibility

1. Regulate AI Development

- Governments should enforce strict AI policies.
- AI ethics committees should oversee high-risk applications.

2. Promote AI Transparency and Explainability

- AI models should be interpretable.
- Black-box AI should be restricted in critical fields like law enforcement.

3. Develop Ethical AI Practices

- AI should be built with fairness and inclusivity in mind.
- Developers must ensure diverse, unbiased datasets.
4. Support AI and Human Collaboration

- AI should enhance, not replace, human intelligence.
- AI should augment jobs, not eliminate them.

5. Strengthen AI Cybersecurity

- AI must be protected from hacking and manipulation.
- Governments should fund AI security research.

6. Enforce Privacy Laws

- Users should control how AI uses their data.
- Mass AI surveillance without consent should be banned.

7. Ban AI in Autonomous Weapons

- Global treaties must prevent AI warfare and prohibit lethal autonomous systems.

Conclusion

Well, AI is only as responsible as the people who build, train, and use it. Think of AI like a really smart intern — it can process tons of data, follow instructions, and even come up with creative solutions, but it doesn't have morality or accountability. If it messes up, it's not AI's fault — it's ours.

So, is AI the villain or the hero? Neither — it's just a powerful tool. Whether it helps or harms depends on how we use it. The real question isn't "Can AI be responsible?" but "Are we responsible enough to handle AI?"

What do you think?

Resources

- Microsoft's AI principles
- EU ethics guidelines for trustworthy AI
- Google's AI principles
- McKinsey's responsible AI framework
- TensorFlow Privacy
- Fairlearn
- XAI by Ethical AI and ML
- Why open-source is crucial for responsible AI development
Hey, DZone Community! We have an exciting year ahead of research for our beloved Trend Reports. And once again, we are asking for your insights and expertise (anonymously if you choose) — readers just like you drive the content we cover in our Trend Reports. Check out the details for our research survey below.

Comic by Daniel Stori

Generative AI Research

Generative AI is revolutionizing industries, and software development is no exception. At DZone, we're diving deep into how GenAI models, algorithms, and implementation strategies are reshaping the way we write code and build software. Take our short research survey (~10 minutes) to contribute to our latest findings. We're exploring key topics, including:

- Embracing generative AI (or not)
- Multimodal AI
- The influence of LLMs
- Intelligent search
- Emerging tech

And don't forget to enter the raffle for a chance to win an e-gift card of your choice!

Join the GenAI Research

Over the coming month, we will compile and analyze data from hundreds of respondents; results and observations will be featured in the "Key Research Findings" of our Trend Reports. Your responses help inform the narrative of our Trend Reports, so we truly cannot do this without you. Stay tuned for each report's launch and see how your insights align with the larger DZone Community. We thank you in advance for your help!

—The DZone Content and Community team
Thousands of new software engineers enter the industry every year with aspirations to make a mark, but many struggle to grow efficiently. Transitioning from an entry-level engineer to a senior software engineer is challenging and rewarding, requiring strategic effort, persistence, and the ability to learn from every experience. This article outlines a simple, effective strategy to accelerate your journey. This is not a shortcut; it is quite the opposite. It is a way to develop a solid base of earned knowledge for long-term growth, pursued with urgency and focus.

Fast growth from intern to senior engineer requires a clear understanding of the expectations at each level, a focus on developing key skills, and a strategy that emphasizes both technical and interpersonal growth. This guide will help you navigate this journey effectively, laying out actionable advice and insights to fast-track your progress.

Who Is This For?

Before I go further, let me be more specific about the applicability of the strategy in this article. This article concerns growing as a software engineer from an intern/new grad (L3 at Google or E3 at Meta) to a senior software engineer (L5 at Google or E5 at Meta). While the personal growth part of this article is generally applicable, the career growth part applies only to companies whose engineering career ladder is heavily influenced by prominent Silicon Valley companies such as Google and Meta. Executing this strategy requires a healthy ambition with a matching work ethic.

Key Differences Between Junior and Senior Engineers

With that preamble, let's dive in. The first step is to understand what sets the two career levels apart. Once the difference is laid out, I will boil it down to a few dimensions of growth and demonstrate how the strategy covers all of them. While every organization defines these roles slightly differently, the expectations are roughly as follows:

Entry-Level Software Engineer

- Works on small, well-defined tasks under close supervision.
- Relies heavily on existing frameworks, tools, and guidance from senior colleagues.
- Contributes individual pieces to a larger project, often with a limited understanding of the bigger picture.

Senior Software Engineer

- Independently solves open-ended problems that impact a business domain; they are the domain experts.
- Owns large-scale projects or critical subsystems, taking responsibility for their design, development, and delivery.
- Designs complex systems, makes high-level architectural decisions, and anticipates long-term technical implications.
- Leads cross-functional efforts — partners across teams and functions to drive strategic initiatives that align with organizational goals.
- Maintains systems, leads incidents, and reduces toil. Participates in long-term planning. Mentors junior engineers.

What Are the Areas of Growth?

You need to grow in the following areas to close the gap from an L3 to an L5.

1. Independence

L5s are independent agents. They are responsible for figuring out what to do, how to do it, who to talk to, and so on. Ultimately, they must be able to deliver results on medium-sized projects without handholding. An L5 must be agentic, i.e., they should be able to provide value for at least a quarter in their manager's absence.

2. Functional Expertise

L5s can be independent only when they have the required expertise. This has three dimensions:

- L5s must have technical and functional competence in all things related to their team.
- L5s must understand the business context in which their team exists and why it matters to users.
- They must have the organizational know-how and social capital that enable them to work on cross-team projects.

3. Working With Others

Given that L5s manage projects with significant complexity and scope, they must develop this meta-skill. It has many dimensions, such as writing, communication, planning, and project management. It only comes with deliberate practice.

4. Leadership

An L5 leverages their expertise to make many small and big decisions. For more significant projects, you will need critical long-term thinking. You must uncover and present trade-offs and have strong opinions on what path the team should take. All of this is collectively covered under designing systems. This muscle, too, only comes with practice.

Strategy: Spiral of Success

"Take a simple idea, and take it seriously." – Charlie Munger

As you can see, there is much growing up to do. The overall strategy is simple: you need to maximize your learning rate. Learning in any complex domain happens only by doing, shipping, and learning from feedback. You need to do a lot of projects that are increasingly more complex. You need to optimize to be invited to do a lot of projects. For that, you need to increase your opportunity surface area. The best way to increase your opportunity surface area is to do the projects you get quickly and satisfactorily. Make doing things fast and well your brand.

It is essential to dig deeper into the significance of these two dimensions:

- Fast: This is the key to turbocharging your learning process. The quicker you complete an assignment, the faster you get the next one. This enables you to do newer, different things sooner. And you keep accumulating a ton of delivered impact.
- Well: Doing a project well is key to earning your team's trust. This is the best signal that you are ready for more significant responsibilities. It tells the team that you can carry your weight and more. This leads to increased scope and complexity in the next project. A project not done well is worse than a project not done. It erodes your brand; it delays critical and complex projects coming your way, robbing you of opportunities.

Doing the first 2-3 assignments really fast and well leads to new and slightly bigger projects coming your way. You repeat the same process, forming a virtuous growth cycle, and before you know it, you will become a crucial team member. With every project done:

- You will learn new core skills, code something new, use new technology, see new interactions between technologies, and so on. You will become more independent and gain functional expertise.
- You will gain more business context, which you will be able to connect to previous learning. Your holistic mental map of the whole area will become richer. This will make you more mature in the domain and improve your intuition, thus making you more independent.
- You will earn the team's trust and make cross-team contacts. You keep accumulating social credentials.

With time, you learn more about your adjacent teams, their systems, and how they join with your team. With more context, you become more creative, and you are able to generate more possible solutions to choose from. Once you have sufficient context, more open-ended assignments will come your way. You want to reach this phase as soon as you can.
These projects give you the opportunity to hone skills like writing, designing, project management, cross-team work, and overall leadership.

Notice that this strategy does not discuss what to do on the projects. It's all about how to do them. The "what" will keep changing as you take on bigger responsibilities. Initially, it will primarily involve coding and testing. But increasingly, there will be more investigation, communication, coordination, design, and so on. The key point is that your strategy should stay the same: do every assignment very fast and very well, even as the scope changes. Just like writing good code has a learning curve, doing good planning or writing well has a learning curve. The only way to learn fast is to lean in, go through the discomfort, and do a lot of it. You have to really earn it.

Execution

You cannot apply this strategy blindly. You are providing a service. As you focus on learning and personal growth, ensure you deliver value to the organization and users. Ultimately, the delivered impact is the only thing that matters. While you're at it, treat everyone you interact with better than you wish to be treated. Be unreasonable in your hospitality. In the long run, this is in your self-interest because it leads to more opportunities coming your way.

How to Do Things Fast?

First, consider upfront how long each granular task should take you, and then see how much time it actually takes. This is not about being right; it is about being deliberate. You will be mostly wrong in both directions, and that's okay. This is about figuring out how and why you are wrong and incrementally correcting for it. If you finish too fast, you adjust your intuition. If you finish too slowly, you need to debug why. From that debugging, you will learn to do things faster. It's a win-win situation.

Projects can be done in a much shorter time than the time allocated to them, especially in cases where they don't need cross-team coordination. This is simply because people naturally procrastinate. To counter this, you should always set very aggressive timelines for yourself. You will meet such timelines more often than you think. An aggressive timeline helps attain focus, which is a force multiplier. You can do more in one 4-hour chunk than in four 1-hour chunks. Find that sweet spot for yourself.

Second, do not shy away from seeking help. Do not work in isolation. Sometimes you spend a day on a thing that a teammate could have helped you through in 15 minutes. Seek that help. Do your homework before seeking help. A well-framed request for help looks like this: "I am trying to do X, I have attempted A, B, and C, and I am stuck. Can you help me figure out what I am missing?" But seek the help. Help means you will finish your assignment faster, and thus you will get the next assignment faster. Remember, your goal is to increase your opportunity surface area.

Finally, you must put in the hours — your growth compounds with time. The hours you put in during the first few weeks, months, and years will keep paying off for many years to come. Intensity really matters here. With focus, a lot can be done in a day or a week. You could wrap your head around a new domain in a week. You could take a month, too, but then you lose out on some opportunities. Be relentless.

How to Do Things Well?

This boils down to two traits you can cultivate: curiosity and care. A healthy curiosity is essential to doing things well.
It leads to more questions and a better understanding of the subject. Chasing every odd observation leads to early identification of bugs and, thus, better-quality output. With curiosity, you will not be confined to your own project: you will learn from projects around you, too, which helps you spin up faster on the business domain and increases your opportunity surface area.

Care is about the polish in your output. Early in your career, you do not yet have a taste for what counts as "well done." To counter that, you need a feedback and improvement loop. For everything you work on, you will have teammates, users, or managers to work with. Show your work to them, seek feedback, identify improvement areas, and then make the improvements. Repeat this cycle many times, and fast. Naturally, you will develop a taste for what's well done, which is a requirement for a senior software engineer.

What if Fast and Well Conflict?

Suppose you have to make a trade-off between well and fast. Prioritize doing things well. It's okay to be a bit slow initially while finding your feet, but it's never okay to half-do things. Here, seeking feedback is crucial, as your own sense of what is well done is not yet fully developed. Seek feedback on what you are doing well and what is overkill. Seek clarity on requirements. Get your plans reviewed by teammates just to make sure.

A Few More Things About Day-to-Day Practices

Take all assignments, even if you don't like them. Growing as a senior means caring about the business. If there is something you don't like to do but that needs to be done for a business outcome, take it. Volunteer for the things no one wants to do. Because you do things so fast, unpleasant assignments will be short-lived and will earn you a lot of goodwill.

Some projects move slower than others despite your best efforts, and there will be downtimes. You will wait on other teams, data jobs will run long, builds will take time, code reviews will be delayed, and so on. To fully occupy yourself, try to have two or more assignments going in parallel, but ensure your primary assignment is always on track. If you still have spare time, read the incoming tickets, bug reports, and design documents. Keep up with the Slack discourse. Be a sponge, absorbing all the context around you. Curiosity will help here. Make a point of being excellent on on-call rotations, as they are an excellent learning opportunity to get involved in the whole team's context.

Tracking

As you execute this strategy, it is essential to ensure that you are on the right track and hitting the milestones along the way.

Phase 1: Foundations

- Complete small tasks. Build confidence by reliably delivering well-defined tasks.
- Understand team systems and tools. Gain familiarity with the team's codebase, tooling, and workflows.
- Build trust. As you deliver consistently, your team begins to rely on you for essential contributions.
- Grasp team dynamics. Develop an understanding of what everyone on the team is working on and how their work connects.
- Acquire operational knowledge. Achieve a working knowledge of the technologies and systems used by your team.

Phase 2: Gaining Independence

- Write small design documents. Begin drafting plans for features that take several weeks to implement.
- Handle feedback. Your code and designs require minimal feedback as you consistently land on effective solutions.
- Communicate more. Transition from primarily coding to more discussions, planning, and presenting your ideas.
- Tackle cross-team investigations. Collaborate with other teams to solve problems that extend beyond your immediate scope.
- Lead incident responses. Take charge of resolving incidents and develop a reputation for reliability under pressure.

Phase 3: Expanding Scope

- Own medium-sized projects. Successfully deliver projects lasting 2-3 months, taking responsibility for their end-to-end execution.
- Increase visibility. Participate in multiple discussions and contribute to projects spanning different teams.
- Propose solutions. Take on open-ended, high-level challenges and provide thoughtful, actionable solutions.
- Mentor peers. Offer guidance and support to less experienced colleagues, building your leadership skills.
- Contribute to design discussions. Meaningfully engage in conversations about system architecture and project strategies.

Phase 4: Strategic Impact

- Lead large-scale projects. Identify and own initiatives that have cross-team or company-wide implications.
- Develop frameworks and tools. Create solutions that improve productivity or simplify workflows for your team.
- Advocate for best practices. Promote coding standards, testing strategies, and effective development processes.
- Represent your team. Act as a spokesperson in cross-functional meetings, advocating for your team's goals.
- Drive innovation. Surface ideas for what the team should tackle next and align them with organizational priorities.

Tracking With Your Manager

You need to make sure that your management team sees that you are progressing on the career ladder defined by the organization. Keep a tight loop with your manager and explicitly sync with them regularly. You could do this monthly, at the completion of each project, or both. The best way to keep yourself and your manager accountable is to document every such check-in in a structured way rooted in the ladder. Here is a template that could be useful for such tracking:

| Axis of development | Scope and Impact | Functional Expertise | Independence | Working with others | Leadership |
|---|---|---|---|---|---|
| Jan 2025: Where do you think you stand | Your self-review goes here | Your self-review goes here | Your self-review goes here | Your self-review goes here | Your self-review goes here |
| Jan 2025: Where does your manager think you stand | Your manager's review goes here | Your manager's review goes here | Your manager's review goes here | Your manager's review goes here | Your manager's review goes here |
| Jan 2025: Action items | Action items you both agree on | Action items you both agree on | Action items you both agree on | Action items you both agree on | Action items you both agree on |
| Feb 2025: Where do you think you stand | | | | | |
| Feb 2025: Where does your manager think you stand | | | | | |
| Feb 2025: Action items | | | | | |
| Project X: Where do you think you stand | | | | | |
| Project X: Where does your manager think you stand | | | | | |
| Project X: Action items | | | | | |

Parting Thoughts

Do not compare yourself with others. Focus on your rate of improvement. Your goal is to achieve the fastest possible growth for yourself, not to outpace others. Everyone is different, with different starting points, different teams, and different projects, so everyone's growth will be different. Don't waste time agonizing over others.
I have been looking at open source development in Japan, which is showing increasing signs of playing a bigger part in corporate strategy and becoming more global in reach.

Koichi Shikata, Head of Solutions Architect Division, SUSE Software Solutions, Japan, is a seasoned professional with extensive experience in the software industry, having held significant roles at SUSE, Wind River, and Intel, based both in Japan and the US. Throughout his career, he has been instrumental in promoting open-source solutions and fostering innovation within the industry. I spoke with Shikata-san to get his insights into the Japanese market and emerging trends around open source use in Japan. The insights shared here are Shikata-san's personal opinions and do not necessarily reflect the views of his employer, SUSE.

Trends in Open-Source Adoption in Japan

1. You've had extensive experience with SUSE, Wind River, Intel, and more. What trends do you see around Japanese open source in 2025?

Open source adoption in Japan is expected to expand significantly in 2025, driven primarily by advancements in AI, cloud-native adoption, and expanded use in the manufacturing industry. In recent years, manufacturers, traditionally reliant on proprietary operating systems with lengthy development cycles, have begun embracing open-source technologies. This shift has notably reduced development times, enabling more efficient deployment of services. Even companies supplying machinery for integrated circuit (IC) manufacturing are recognizing the benefits of this transition, indicating a broader trend across the industry.

2. What initiatives, policies, or industry movements are currently driving open-source adoption in Japan, and how do they impact developers and businesses?

A significant factor influencing open-source adoption in Japan is the growing awareness of the risks associated with vendor lock-in, especially highlighted by recent events involving proprietary technologies like VMware.

Editor's note: In late 2024, VMware in Japan, now owned by Broadcom, came under investigation by the Japan Fair Trade Commission for suspected antitrust violations related to "bundling practices," in which it allegedly forces customers to buy packages of VMware software together, potentially raising prices and limiting customer choice compared to when products were sold separately; this is seen as a form of unfair market dominance. The investigation is ongoing.

Customers are increasingly discussing their vendors' proprietary technologies and expressing concerns about dependency on single vendors. This is new. This heightened awareness is prompting businesses to consider open-source alternatives as a safety net, leading to a more diversified and resilient technological landscape. Traditionally conservative Japanese companies are now acknowledging the potential downsides of relying solely on a single technology and are proactively seeking open-source solutions to mitigate these risks.

3. Recently, Japanese companies like Toyota and Hitachi have established open-source program offices (OSPOs) to strategically manage open-source software. Is this a sign that Japanese companies are integrating open source more than in the past?

Yes, this development signifies a positive shift towards greater open-source integration among Japanese enterprises. Notably, major banks in Japan have been utilizing Linux for their core banking systems, reflecting a level of trust in open-source technologies.
The establishment of OSPOs by prominent companies indicates a strategic move to harness the innovation and collaborative potential of the global open-source community. This trend suggests that even traditionally cautious organizations are recognizing the value of open-source methodologies and are actively incorporating them into their operations.

4. SUSE has developer communities around the world. What are the main strengths and characteristics of the Linux developer community in Japan?

In Japan, while the Linux developer community may not have widespread visibility, it plays a crucial role in supporting mission-critical systems, particularly in sectors like banking. Many Japanese companies are conservative and prefer not to publicly disclose their use of open-source and other technologies. This cultural tendency towards discretion presents an opportunity for growth. Enhancing the visibility and awareness of SUSE and its contributions within the Japanese market is a priority. Efforts are underway to expand community engagement and demonstrate the value of open-source solutions to a broader audience, aiming to build trust and encourage more open collaboration within the industry.

Conclusion

Competition serves as a catalyst for technological advancement. The drive to outperform rivals fosters innovation and continuous improvement, ultimately benefiting the industry as a whole.
DZone events bring together industry leaders, innovators, and peers to explore the latest trends, share insights, and tackle industry challenges. From Virtual Roundtables to Fireside Chats, our events cover a wide range of topics, each tailored to provide you, our DZone audience, with practical knowledge, meaningful discussions, and support for your professional growth.

DZone Events Happening Soon

Below, you'll find upcoming events that you won't want to miss.

DevOps for Oracle Applications: Automation and Compliance Made Easy

Date: March 11, 2025
Time: 1:00 PM ET
Register for Free!

Join Flexagon and DZone as Flexagon's CEO unveils how FlexDeploy is helping organizations future-proof their DevOps strategy for Oracle Applications and Infrastructure. Explore innovations for automation through compliance, along with real-world success stories from companies that have adopted FlexDeploy.

Make AI Your App Development Advantage: Learn Why and How

Date: March 12, 2025
Time: 10:00 AM ET
Register for Free!

The future of app development is here, and AI is leading the charge. Join OutSystems and DZone, on March 12th at 10 AM ET, for an exclusive webinar with Luis Blando, CPTO of OutSystems, and John Rymer, industry analyst at Analysis.Tech, as they discuss how AI and low-code are revolutionizing development. You will also hear from David Gilkey, Leader of Solution Architecture, Americas East at OutSystems, and Roy van de Kerkhof, Director at NovioQ. This session will give you the tools and knowledge you need to accelerate your development and stay ahead of the curve in the ever-evolving tech landscape.

Developer Experience: The Coalescence of Developer Productivity, Process Satisfaction, and Platform Engineering

Date: March 12, 2025
Time: 1:00 PM ET
Register for Free!

Explore the future of developer experience at DZone's Virtual Roundtable, where a panel will dive into key insights from the 2025 Developer Experience Trend Report. Discover how AI, automation, and developer-centric strategies are shaping workflows, productivity, and satisfaction. Don't miss this opportunity to connect with industry experts and peers shaping the next chapter of software development.

Unpacking the 2025 Developer Experience Trends Report: Insights, Gaps, and Putting it into Action

Date: March 19, 2025
Time: 1:00 PM ET
Register for Free!

We've just seen the 2025 Developer Experience Trends Report from DZone, and while it shines a light on important themes like platform engineering, developer advocacy, and productivity metrics, there are some key gaps that deserve attention. Join Cortex co-founders Anish Dhar and Ganesh Datta for a special webinar, hosted in partnership with DZone, where they'll dive into what the report gets right and challenge the assumptions shaping the DevEx conversation. Their take? Developer experience is grounded in clear ownership. Without ownership clarity, teams face accountability challenges, cognitive overload, and inconsistent standards, ultimately hampering productivity. Don't miss this deep dive into the trends shaping your team's future.

Accelerating Software Delivery: Unifying Application and Database Changes in Modern CI/CD

Date: March 25, 2025
Time: 1:00 PM ET
Register for Free!

Want to speed up your software delivery? It's time to unify your application and database changes. Join us for Accelerating Software Delivery: Unifying Application and Database Changes in Modern CI/CD, where we'll teach you how to seamlessly integrate database updates into your CI/CD pipeline.
Petabyte Scale, Gigabyte Costs: Mezmo's ElasticSearch to Quickwit Evolution

Date: March 27, 2025
Time: 1:00 PM ET
Register for Free!

For Mezmo, scaling their infrastructure meant facing significant challenges with ElasticSearch. That's when they made the decision to transition to Quickwit, an open-source, cloud-native search engine designed to handle large-scale data efficiently. This is a must-attend session for anyone looking for insights on improving search platform scalability and managing data growth.

What's Next?

DZone has more in store! Stay tuned for announcements about upcoming Webinars, Virtual Roundtables, Fireside Chats, and other developer-focused events. Whether you're looking to sharpen your skills, explore new tools, or connect with industry leaders, there's always something exciting on the horizon. Don't miss out — save this article and check back often for updates!
Selenium is an open-source suite of tools and libraries that allows you to interact with browsers to perform various operations like sending text, clicking on a button, selecting drop-downs, etc. However, there are scenarios where the actual Selenium WebDriver commands do not work as expected, as Selenium can't interact with the WebElements directly. This is where JavaScriptExecutor comes into the picture. In this blog, we discuss JavaScriptExecutor in Selenium and how to get started with it, along with practical use cases and examples.

What Is JavaScriptExecutor in Selenium?

JavaScriptExecutor is an interface provided by Selenium that helps in executing JavaScript commands. This interface provides methods to run JavaScript on the selected window or the current web page. It is available for all the language bindings supported by Selenium. The JavaScriptExecutor in Selenium can be used directly by importing the following package in the automation test scripts:

Java org.openqa.selenium.JavascriptExecutor

JavaScriptExecutor in Selenium provides two methods to interact with the WebElements:

executeScript() – This method executes JavaScript in the context of the currently selected window or frame in Selenium. The script will be executed as the body of an anonymous function.
executeAsyncScript() – This method executes an asynchronous snippet of JavaScript in the context of the currently selected window or frame in Selenium. The script will be executed as the body of an anonymous function.

Note: The major difference between the executeScript() and executeAsyncScript() methods is that a script invoked using executeAsyncScript() has to signal its completion by invoking the provided callback function. The executeAsyncScript() method is mainly used when a wait has to be performed in the browser under test or when tests have to be synchronized within an AJAX application.

Why Use JavaScriptExecutor in Selenium?

There are scenarios where some WebDriver commands do not work as expected, for multiple reasons:

Selenium not interacting with the WebElements directly
Performing actions like scrolling into view, clicking on WebElements that are hidden behind an overlay, or setting values in read-only fields
Performing browser-specific behaviors like modifying the DOM dynamically

In these cases, we take the help of JavaScriptExecutor in Selenium. Traditionally, we use Selenium locators such as ID, Name, CSS Selector, XPath, etc., to locate a WebElement. If these locators do not work, or you are handling a tricky XPath, JavaScriptExecutor can help locate the desired WebElement. There are also cases where the click() method may not work on all web browsers, or web controls might behave differently on different browsers. To overcome such situations, JavaScriptExecutor can be used to perform the click action. Since browsers have a JavaScript engine built in and can understand JavaScript commands, understanding JavaScriptExecutor in Selenium enables us to perform a range of operations more efficiently.

Basics of JavaScriptExecutor in Selenium

The purpose of this section is to provide a high-level idea of the implementation steps of the JavaScriptExecutor in Selenium. For the demonstration, we will be using Java as the preferred programming language. Let's take a look at the key steps.

1. Import the package associated with JavaScriptExecutor:

Java import org.openqa.selenium.JavascriptExecutor;
2. To use JavaScriptExecutor, create a reference for the interface and assign it to the WebDriver instance by type-casting it:

Java JavascriptExecutor js = (JavascriptExecutor) driver;

3. Call the executeScript() or executeAsyncScript() method. For example, the syntax for executeScript() is given below:

Java js.executeScript(java.lang.String script, java.lang.Object... args)

Demo: Using JavaScriptExecutor in Selenium

Before we look at how to use JavaScriptExecutor in Selenium, complete these prerequisites:

Create a new Maven project using IntelliJ IDEA
Add the latest Selenium WebDriver dependency in the pom.xml
Add the latest TestNG dependency in the pom.xml

We will be using the LambdaTest eCommerce Playground website to demo the workings of the JavaScriptExecutor in Selenium by running the tests on the local Chrome browser.

Test Scenario 1

Our objective is to write simple code illustrating the executeScript() method using the following test scenario:

Navigate to the Account Login page of the LambdaTest eCommerce Playground website.
Enter valid login credentials and click on the Login button, highlighting each field with a red border.
Print the page title and domain name.
Assert that the page header "My Account" is displayed on a successful login.

Implementation

Create a new TestJavaScriptExecutor class for implementing the test scenario. We will first create two methods in this test class that allow us to set up and gracefully quit the Selenium WebDriver session. Let's declare the WebDriver at the class level, as we need it in both methods: the setup() method, to start the driver session, and the tearDown() method, to gracefully quit the session.

Java public class TestJavaScriptExecutor { private WebDriver driver; //... }

Let's create a new setup() method that will instantiate an instance of the WebDriver class and accordingly set the configuration for running the tests on the local Chrome browser.

Java @BeforeTest public void setup() { driver = new ChromeDriver(); driver.manage().window().maximize(); driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(30)); }

This method will open the Chrome browser, maximize its window, and apply an implicit wait of 30 seconds. This implicit wait allows all the website contents to load successfully before the test execution starts.

Java @AfterTest public void tearDown() { driver.quit(); }

Finally, when the test is executed, the tearDown() method will be called, which will close the WebDriver session gracefully.

Let's now add a testJavaScriptExecutorCommand() method in the same test class to implement the test scenario we discussed.

Java @Test public void testJavaScriptExecutorCommand() { driver.get("https://ecommerce-playground.lambdatest.io/index.php?route=account/login"); JavascriptExecutor js = (JavascriptExecutor) driver; //... }

The code navigates to the Login page of the LambdaTest eCommerce Playground website. The next line casts the WebDriver instance to JavascriptExecutor so that JavaScript commands can be executed in the browser.

Java WebElement emailAddressField = driver.findElement(By.id("input-email")); js.executeScript("arguments[0].style.border='3px solid red'", emailAddressField); emailAddressField.sendKeys("davidjacob@demo.com"); js.executeScript("arguments[0].style.border='2px solid #ced4da'", emailAddressField);

Next, it locates the emailAddressField using the id locator strategy.
Then, it uses the JavaScriptExecutor command to highlight the border of the e-mail address field in red.

Java WebElement passwordField = driver.findElement(By.id("input-password")); js.executeScript("arguments[0].style.border='3px solid red'", passwordField); passwordField.sendKeys("Password123"); js.executeScript("arguments[0].style.border='2px solid #ced4da'", passwordField);

Next, the password field is located and highlighted with a red border. This highlighting helps show which steps are being executed during the automated test run.

Java WebElement loginBtn = driver.findElement(By.cssSelector("input.btn")); js.executeScript("arguments[0].style.border='3px solid red'", loginBtn); js.executeScript("arguments[0].click();", loginBtn);

Likewise, the Login button is located using the CSS Selector strategy and is highlighted as well.

Java String titleText = js.executeScript("return document.title;").toString(); System.out.println("Page Title is: " + titleText); String domainName = js.executeScript("return document.domain;").toString(); System.out.println("Domain is: " + domainName);

The page title and domain name are fetched next using the JavaScriptExecutor and printed to the console.

Java String myAccountHeader = driver.findElement(By.cssSelector("#content h2")).getText(); assertEquals(myAccountHeader, "My Account");

Finally, the page header of the My Account page, which is displayed after a successful login, is located, and an assertion is performed to check that it displays the text "My Account."

Here is the full code from the TestJavaScriptExecutor class:

Java public class TestJavaScriptExecutor { private WebDriver driver; @BeforeTest public void setup() { driver = new ChromeDriver(); driver.manage().window().maximize(); driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(30)); } @AfterTest public void tearDown() { driver.quit(); } @Test public void testJavaScriptExecutorCommand() { driver.get("https://ecommerce-playground.lambdatest.io/index.php?route=account/login"); JavascriptExecutor js = (JavascriptExecutor) driver; WebElement emailAddressField = driver.findElement(By.id("input-email")); js.executeScript("arguments[0].style.border='3px solid red'", emailAddressField); emailAddressField.sendKeys("davidjacob@demo.com"); js.executeScript("arguments[0].style.border='2px solid #ced4da'", emailAddressField); WebElement passwordField = driver.findElement(By.id("input-password")); js.executeScript("arguments[0].style.border='3px solid red'", passwordField); passwordField.sendKeys("Password123"); js.executeScript("arguments[0].style.border='2px solid #ced4da'", passwordField); WebElement loginBtn = driver.findElement(By.cssSelector("input.btn")); js.executeScript("arguments[0].style.border='3px solid red'", loginBtn); js.executeScript("arguments[0].click();", loginBtn); String titleText = js.executeScript("return document.title;").toString(); System.out.println("Page Title is: " + titleText); String domainName = js.executeScript("return document.domain;").toString(); System.out.println("Domain is: " + domainName); String myAccountHeader = driver.findElement(By.cssSelector("#content h2")).getText(); assertEquals(myAccountHeader, "My Account"); } }

Test Execution

The following screenshot from the IntelliJ IDE shows that the test was executed successfully.
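As a side note before the next scenario: when standard locators prove brittle (as discussed in the "Why Use JavaScriptExecutor" section), JavaScriptExecutor can also return elements directly. Here is a minimal, hypothetical sketch that re-fetches the page header via JavaScript; the selector simply reuses the one from the demo above, and the cast relies on Selenium's Java bindings mapping a returned DOM node back to a WebElement:

Java
// Hypothetical example: locate an element via JavaScript instead of a Selenium locator.
// document.querySelector() returns a DOM node, which the Java bindings expose as a WebElement.
WebElement headerViaJs = (WebElement) js.executeScript("return document.querySelector('#content h2');");
System.out.println("Header text: " + headerViaJs.getText());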
Test Scenario 2

Our objective is to write simple code illustrating the executeAsyncScript() method using the following test scenario:

Navigate to the LambdaTest eCommerce Playground website.
Scroll down to the bottom of the home page.
Assert that the text "FROM THE BLOG" is displayed in the lower section of the page.

Implementation

Create a new testExecuteAsyncScript() method in the existing test class TestJavaScriptExecutor.

Java @Test public void testExecuteAsyncScript() { driver.get("https://ecommerce-playground.lambdatest.io"); JavascriptExecutor js = (JavascriptExecutor) driver; js.executeAsyncScript("var callback = arguments[arguments.length - 1];" + "window.scrollBy(0, document.body.scrollHeight);" + "callback();"); String fromTheBlogText = driver.findElement(By.cssSelector("#entry_217991 > h3")).getText(); assertEquals(fromTheBlogText, "FROM THE BLOG"); }

The code will navigate to the homepage of the LambdaTest eCommerce Playground website. The executeAsyncScript() method of the JavaScriptExecutor will be called next, where it will perform the action to scroll the window. In the executeAsyncScript() method, the scripts executed need to explicitly signal that they are finished by invoking the provided callback.

Java JavascriptExecutor js = (JavascriptExecutor) driver; js.executeAsyncScript("var callback = arguments[arguments.length - 1];" + "window.scrollBy(0, document.body.scrollHeight);" + "callback();");

After scrolling to the bottom of the window, the text "FROM THE BLOG" will be located, and an assertion will be performed on it.

Java String fromTheBlogText = driver.findElement(By.cssSelector("#entry_217991 > h3")).getText(); assertEquals(fromTheBlogText, "FROM THE BLOG");

Test Execution

The following screenshot shows that the test was executed successfully.

Commands for Using JavaScriptExecutor in Selenium

Let's examine some scenarios we could handle using the JavaScriptExecutor interface for Selenium test automation.
To click on a button:

Java js.executeScript("document.getElementById('enter element id').click();"); //or js.executeScript("arguments[0].click();", okButton);

To type text in a text box without using the sendKeys() method:

Java js.executeScript("document.getElementById('id').value='someValue';"); js.executeScript("document.getElementById('Email').value='SeleniumTesting.com';");

To handle a checkbox by passing the value as true or false:

Java js.executeScript("document.getElementById('enter element id').checked=false;");

To generate an alert pop-up window in Selenium WebDriver:

Java js.executeScript("alert('Welcome To Selenium Testing');");

To refresh the browser window using JavaScript:

Java js.executeScript("history.go(0)");

To get the inner text of the entire web page in Selenium:

Java String innerText = js.executeScript("return document.documentElement.innerText;").toString(); System.out.println(innerText);

To get the title of the web page:

Java String titleText = js.executeScript("return document.title;").toString(); System.out.println(titleText);

To get the domain name:

Java String domainName = js.executeScript("return document.domain;").toString(); System.out.println(domainName);

To get the URL of a web page:

Java String url = js.executeScript("return document.URL;").toString(); System.out.println(url);

To get the height and width of a web page:

Java js.executeScript("return window.innerHeight;").toString(); js.executeScript("return window.innerWidth;").toString();

To navigate to a different page using JavaScript:

Java js.executeScript("window.location = 'https://www.google.com';");

To perform a scroll on an application using Selenium:

To scroll the page vertically by 500px:

Java js.executeScript("window.scrollBy(0,500)");

To scroll the page vertically to the end:

Java js.executeScript("window.scrollBy(0,document.body.scrollHeight)");

To add an element to the Document Object Model (DOM):

Java js.executeScript("var btn=document.createElement('button');" + "document.body.appendChild(btn);");

To get the shadow root in the DOM:

Java WebElement element = driver.findElement(By.id("shadowroot")); js.executeScript("return arguments[0].shadowRoot", element);

Conclusion

Selenium has an interface called JavaScriptExecutor that is utilized when WebDriver commands don't behave as intended. With the help of JavaScriptExecutor, we can use WebDriver to execute JavaScript code on the website, allowing us to handle a variety of tasks in an elegant and effective manner that would otherwise be impossible using only Java. In this blog, we explored how to use JavaScriptExecutor in Selenium and its different methods. Further, we covered various scenarios to attain an effective solution using different methods along with practical examples.
What Are Fixtures In Playwright?

In Playwright, fixtures are objects that help you set up your tests efficiently. Think of them as "ready-made helpers" that provide things you commonly need, like a browser page, so you don't have to create them from scratch every time.

Explanation

Let's start with an example of a page fixture. Here's the code to explain the page fixture:

JavaScript import { test, expect } from '@playwright/test'; test('basic test', async ({ page }) => { await page.goto('https://playwright.dev/'); await expect(page).toHaveTitle(/Playwright/); });

In the above code:

The page object is a fixture provided by Playwright. It's automatically created for your test, and it represents a browser page (like a tab in Chrome or Firefox) that you can use to interact with a website.
{ page }: Playwright gives this fixture to you inside the curly braces {}. It's like saying, "Hey, give me a browser page to work with!"
You didn't have to write code to open a browser or create a page — Playwright does that for you automatically using the page fixture.

A Very Basic Example

Imagine you're testing a simple website, like a login page. Here's how the page fixture helps.

JavaScript import { test, expect } from '@playwright/test'; test('login page test', async ({ page }) => { await page.goto('https://example.com/login'); await expect(page).toHaveURL(/login/); });

Without fixtures: You'd have to manually write code to launch a browser, open a new tab, etc.
With fixtures: Playwright says, "Don't worry, I'll give you a page ready to go!"

Why Are Fixtures Useful?

Saves time. You don't need to set up a browser or page yourself.
Ensures consistency. Every test gets the same fresh page to work with.
Automatic cleanup. Playwright automatically closes the browser after the test, so you don't have to.

Other Fixtures in Playwright

Besides page, Playwright offers other fixtures like:

browser – Gives you a whole browser
context – Gives you a browser context (like a fresh session with cookies)
request – Helps you make API calls
browserName – Tells you which browser your test is running in

browser Fixture

The browser fixture gives you access to the entire browser instance (e.g., Chrome, Firefox). You can use it to control the browser or launch multiple pages.

JavaScript
import { test, expect } from '@playwright/test';

test('check browser type', async ({ browser }) => {
  // Open a new page manually using the browser fixture
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await expect(page).toHaveTitle(/Example/);
});

Why use it? If you need to control the browser directly or create multiple pages in one test.

context Fixture

The context fixture provides a browser context, which is like a fresh browsing session (e.g., with its own cookies, storage, etc.). It's useful for testing things like logins or isolated sessions.

JavaScript
import { test, expect } from '@playwright/test';

test('check cookies in context', async ({ context }) => {
  // "context" fixture gives you a fresh browser session
  const page = await context.newPage();
  await page.goto('https://example.com');
  // Add a cookie to the context
  await context.addCookies([{ name: 'myCookie', value: 'hello', domain: '.example.com', path: '/' }]);
  console.log('Cookies:', await context.cookies()); // Prints the cookies
});

Why use it? To manage cookies, local storage, or test multiple user sessions without interference.
request Fixture

The request fixture lets you make HTTP requests (like GET or POST) directly, which is great for testing APIs alongside your webpage tests.

JavaScript
import { test, expect } from '@playwright/test';

test('test an API', async ({ request }) => {
  // "request" fixture lets you send HTTP requests
  const response = await request.get('https://api.example.com/data');
  // Check if the API returns a successful status
  expect(response.ok()).toBe(true);
  // Check the response body
  const data = await response.json();
  console.log('API Response:', data);
});

Why use it? To test backend APIs or mock responses without needing a browser page.

browserName Fixture

The browserName fixture tells you which browser your test is running in (e.g., "chromium", "firefox", or "webkit"). It's handy for writing browser-specific tests.

JavaScript
import { test, expect } from '@playwright/test';

test('check browser name', async ({ browserName }) => {
  // "browserName" fixture tells you the browser being used
  console.log('Running in:', browserName);
  if (browserName === 'chromium') {
    console.log('This is Chrome or Edge!');
  } else if (browserName === 'firefox') {
    console.log('This is Firefox!');
  }
});

Best Practices for Using Fixtures in Playwright

Using fixtures in Playwright effectively can make your tests cleaner, more maintainable, and easier to scale. Below are some best practices for using fixtures in Playwright, explained with simple examples and reasoning. These practices will help you avoid common pitfalls and get the most out of Playwright's powerful fixture system.

Leverage Built-In Fixtures Instead of Manual Setup

Playwright's built-in fixtures (like page, context, browser) are optimized and handle setup/teardown for you. Avoid manually creating resources unless absolutely necessary.

JavaScript
// Bad: Manually launching a browser
import { test, chromium } from '@playwright/test';

test('manual setup', async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
});

// Good: Use the built-in fixtures
import { test, expect } from '@playwright/test';

test('using fixture', async ({ page }) => {
  await page.goto('https://example.com');
});

Reason? The page fixture automatically manages browser launch and cleanup, saving you code and ensuring consistency.

Use Fixtures Only When Needed

Don't include unused fixtures in your test signature — it keeps your code cleaner and avoids unnecessary overhead.

JavaScript
// Bad: Including unused fixtures
test('simple test', async ({ page, browser, context, request }) => {
  await page.goto('https://example.com'); // Only "page" is used
});

// Good: Only include what you need
test('simple test', async ({ page }) => {
  await page.goto('https://example.com');
});

Reason? Unused fixtures still get initialized, which can slightly slow down your tests and clutter your code.

Use context for Isolation

The context fixture provides a fresh browser context (e.g., separate cookies, storage). Use it when you need isolated sessions, like testing multiple users.

JavaScript
test('test two users', async ({ context }) => {
  const page1 = await context.newPage();
  await page1.goto('https://example.com/login');
  await page1.fill('#user', 'user1');

  // New context for a second user
  const newContext = await context.browser().newContext();
  const page2 = await newContext.newPage();
  await page2.goto('https://example.com/login');
  await page2.fill('#user', 'user2');
});

Reason?
context ensures each test or user session is independent, avoiding interference (e.g., shared cookies).

Create Custom Fixtures for Reusable Setup

If you have repeated setup logic (e.g., logging in), create a custom fixture to keep your tests DRY (Don't Repeat Yourself).

JavaScript
// Define a custom fixture
import { test as base, expect } from '@playwright/test';

const test = base.extend({
  loggedInPage: async ({ page }, use) => {
    await page.goto('https://example.com/login');
    await page.fill('#username', 'testuser');
    await page.fill('#password', 'password123');
    await page.click('button[type="submit"]');
    await use(page); // Pass the logged-in page to the test
  },
});

// Use the custom fixture
test('use logged-in page', async ({ loggedInPage }) => {
  await loggedInPage.goto('https://example.com/dashboard');
  await expect(loggedInPage).toHaveURL(/dashboard/);
});

Reason? Custom fixtures reduce duplication and make tests more readable and maintainable.

Summary

In Playwright, fixtures are handy objects that are provided to make testing easier. It's like borrowing a pre-opened notebook to write your test instead of making a new one from scratch. You just use it, and Playwright handles the rest!
In the development of mobile applications, a well-defined API is crucial for enabling seamless communication between the mobile front end and the backend services. Running this API locally can significantly enhance the development workflow, allowing developers to test and debug their applications without deploying them to a remote server. In this article, we will explore how to run a mobile app API locally using Docker and how to test it effectively with Postman.

Why Use Docker?

Docker provides a lightweight environment for running applications in containers, ensuring consistency across development, testing, and production environments. Using Docker, developers can isolate dependencies, manage versions, and streamline the deployment process.

Setting Up the Project

Before we begin, ensure that you have Docker and Postman installed on your machine.

1. Create a Simple API With Node.js and Express

Let's create a simple RESTful API using Node.js and Express. We will implement endpoints for a mobile app that manages a list of tasks.

Step 1: Project Structure

Create a new directory for your project:

Shell mkdir mobile-app-api cd mobile-app-api mkdir src

Inside the src directory, create the following files: server.js and package.json

Step 2: Define the API

package.json:

JSON { "name": "mobile-app-api", "version": "1.0.0", "description": "A simple API for managing tasks", "main": "server.js", "scripts": { "start": "node server.js" }, "dependencies": { "express": "^4.17.3" } }

server.js:

JavaScript
const express = require('express');
const app = express();
const port = 3000;

app.use(express.json());

let tasks = [
  { id: 1, title: 'Task One', completed: false },
  { id: 2, title: 'Task Two', completed: false },
];

// Get all tasks
app.get('/api/tasks', (req, res) => {
  res.json(tasks);
});

// Create a new task
app.post('/api/tasks', (req, res) => {
  const newTask = {
    id: tasks.length + 1,
    title: req.body.title,
    completed: false,
  };
  tasks.push(newTask);
  res.status(201).json(newTask);
});

// Start the server
app.listen(port, () => {
  console.log(`API running at http://localhost:${port}`);
});

Step 3: Create a Dockerfile

In the root directory, create a file named Dockerfile:

Dockerfile
# Use the official Node.js image
FROM node:18

# Set the working directory
WORKDIR /usr/src/app

# Copy package.json and install dependencies
COPY src/package*.json ./
RUN npm install

# Copy the rest of the application files
COPY src/ .

# Expose the API port
EXPOSE 3000

# Start the application
CMD ["npm", "start"]

Step 4: Create a Docker Compose File

To simplify running the API, create a docker-compose.yml file in the root directory:

YAML
version: '3.8'
services:
  api:
    build: .
    ports:
      - "3000:3000"

2. Building and Running the API With Docker

To build and run your API, execute the following commands in your terminal:

Shell
# Build the Docker image
docker-compose build

# Run the Docker container
docker-compose up

You should see output indicating that the API is running:

Plain Text API running at http://localhost:3000

3. Testing the API With Postman

Now that your API is running locally, you can test it using Postman.

Step 1: Open Postman

Launch Postman and create a new request.

Step 2: Test the GET Endpoint

Set the request type to "GET."
Enter the URL: http://localhost:3000/api/tasks.
Click "Send."

You should see the list of tasks returned in JSON format.
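If you prefer to verify the endpoint from code rather than the Postman UI, here is a minimal, hypothetical sketch using the JDK's built-in HttpClient (assuming Java 11+ and the API running locally on port 3000; the class name is illustrative):

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TasksApiCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical smoke test: fetch the task list from the locally running API
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3000/api/tasks"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode()); // Expect 200
        System.out.println("Body: " + response.body());         // Expect the JSON task list
    }
}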
Step 3: Test the POST Endpoint

Set the request type to "POST."
Enter the URL: http://localhost:3000/api/tasks.
In the Body tab, select "raw" and set the format to JSON.
Enter the following JSON to create a new task:

JSON { "title": "Task Three" }

Click "Send." You should see the newly created task in the response.

4. Visualizing the API Architecture

Here's a simple diagram representing the architecture of our API:

Plain Text
┌─────────────────────┐
│     Mobile App      │
└──────────▲──────────┘
           │
┌──────────┴──────────┐
│   Postman Client    │
└──────────▲──────────┘
           │
┌──────────┴──────────┐
│  Docker Container   │
│ (Node.js + Express) │
└──────────▲──────────┘
           │
   ┌───────┴───────┐
   │      API      │
   └───────────────┘

Conclusion

Running a mobile app API locally using Docker allows developers to create a consistent and isolated development environment. Using Docker for containerization and Postman for testing, you can efficiently build, run, and debug your API, leading to a smoother development experience. This setup accelerates development and ensures that the application behaves consistently across various environments.

Next Steps

Explore Docker networking to connect multiple services.
Implement a database (e.g., MongoDB or PostgreSQL) for persistent storage.
Integrate API documentation tools like Swagger for better API management.
Consider deploying the API to a cloud service once it's production-ready.

You should now be able to streamline your mobile application development workflow, ensuring your API is robust and reliable.