Security

The topic of security covers many different facets within the SDLC. From focusing on secure application design to designing systems to protect computers, data, and networks against potential attacks, it is clear that security should be top of mind for all developers. This Zone provides the latest information on application vulnerabilities, how to incorporate security earlier in your SDLC practices, data governance, and more.

Latest Premium Content
Trend Report
Software Supply Chain Security
Refcard #387
Getting Started With CI/CD Pipeline Security
Refcard #402
SBOM Essentials

DZone's Featured Security Resources

Understanding Proxies and the Importance of Japanese Proxies in Modern Networking

By Adamo Tonete
In the current digital age, where so much of daily life and business runs over the internet, privacy, security, and worldwide access are among the main concerns for individuals and businesses. One of the most effective ways to address these concerns is the use of proxy servers, which mask your IP address and reroute your online traffic. Many types of proxies are available, but Japanese proxies stand out because they are stable, offer very low latency, and provide excellent regional access.

What Are Proxies and How Do They Work?

A proxy server is a middleman between a user's device and the internet. When you request access to a website, your connection first goes to the proxy server. The proxy then makes the request to the destination site using its own IP address, not yours. As a result, the destination web server never sees your real IP address, only the proxy's, which gives you an extra safety shield, a degree of anonymity, and more control over your online actions.

Key Functions of a Proxy Server

• IP masking: Hides the user's real IP address, enhancing online privacy.
• Access control: Helps organizations manage which users or devices can access specific web content.
• Traffic optimization: Caches web pages and compresses data to improve load times.
• Geolocation bypassing: Allows users to access region-restricted content or services.
• Enhanced security: Filters harmful content and prevents direct connections between devices and external servers.

Types of Proxies Commonly Used

Different proxy configurations serve different needs. Here are the most commonly used types:

Type of Proxy | Description | Ideal Use Case
HTTP/HTTPS proxies | Designed for web browsing; support both secure and standard sites. | Web scraping, SEO monitoring, and general browsing.
SOCKS5 proxies | A more flexible protocol that handles any traffic, not just HTTP. | Torrenting, gaming, and handling complex network requests.
Residential proxies | Use real IPs assigned by ISPs, making them harder to detect. | Market research, sneaker bots, and ad verification.
Datacenter proxies | Hosted in data centers, offering high speed but lower anonymity. | Bulk data gathering, automation, and performance testing.
Rotating proxies | Automatically switch IPs after each request or session. | Web scraping at scale and bypassing rate limits.

Why Businesses and Developers Prefer Japanese Proxies

Demand for Japanese proxies has grown sharply in the past few years, chiefly because of Japan's cutting-edge internet infrastructure, minimal latency, and reliable connections. These proxies are a significant advantage for businesses that need accurate geographical targeting or want to work with Japanese online services.

1. Access to Japan-Specific Content

Streaming services, gaming networks, and e-commerce websites very commonly restrict access to users located in Japan. Japanese proxies let companies and developers verify the local versions of their websites or quietly exercise a region-blocked API.

2. High Performance and Low Latency

Japan boasts outstanding internet speeds and a very powerful connectivity infrastructure. Japanese proxy servers, which are usually hosted in Japan, let you automate work, collect data from the internet, or run performance tests without worrying about high latency or low throughput.
3. Enhanced Security for Enterprise Operations

Businesses handling sensitive information can use Japanese proxies as an additional safety barrier. Such proxies limit the risk of attack by masking server locations and avoiding direct links between company networks and the public internet.

4. SEO and Marketing Intelligence

Using Japanese proxies for SEO monitoring is common practice among SEO agencies. They employ these proxies to track keyword rankings, positions on the Search Engine Results Page (SERP), and ad placements in the Japanese market. With a real Japanese IP address, marketers can collect localized insights, improve ad targeting, and study competitor performance.

Technical Advantages of Using Proxies in Modern IT Infrastructure

From a technical standpoint, proxies are not just about privacy; they are powerful tools for network optimization, control, and resilience.

A. Load Balancing and Traffic Distribution

Proxies can distribute traffic evenly across several servers so that no single server is overloaded, keeping performance stable during periods of high demand. This is notably beneficial for websites with massive numbers of users and for developers running many simultaneous test sessions.

B. Bandwidth Management

Caching reduces the amount of bandwidth used. Proxies keep local copies of frequently requested web resources, so access times are shorter for requests that have already been made.

C. Security Layer Integration

Proxies can also be combined with other security measures such as firewalls, IDS/IPS systems, and VPNs to build more robust protection against threats like DDoS attacks, malware injection, and unauthorized access.

D. Controlled Access and Compliance

Corporate IT administrators employ proxies to monitor employee internet usage, keep records of employee activity, and verify that this activity complies with local data protection laws. This matters especially for multinationals operating in countries such as Japan, where privacy laws are stringent.

When to Choose Japanese Proxies Over Global Options

While global proxy networks offer coverage across multiple regions, Japanese proxies stand out in several niche scenarios:

• E-commerce testing: Simulate Japanese customer experiences and payment gateways.
• Streaming access: Unlock region-exclusive media content.
• Ad verification: Validate ad placements on Japanese sites.
• Game development: Test multiplayer latency and regional server performance.
• Research and analytics: Gather localized search engine and consumer data.

Example Workflow Using Japanese Proxies

1. Select a proxy provider offering stable Japanese IP addresses.
2. Configure your proxy settings in the browser, API client, or automation script (see the sketch after this list).
3. Authenticate securely using user credentials or whitelisted IPs.
4. Test latency and speed before deploying tasks at scale.
5. Monitor performance logs to ensure consistent uptime and minimal request errors.
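To make step 2 concrete, here is a minimal, hedged sketch of routing an HTTP request through a proxy with Python's requests library. The proxy host, port, and credentials are placeholders, and the exact values and authentication scheme depend entirely on your proxy provider.

Python

import requests

# Hypothetical proxy endpoint and credentials; replace with the values
# supplied by your proxy provider.
PROXY_HOST = "jp.proxy.example.com"
PROXY_PORT = 8080
PROXY_USER = "username"
PROXY_PASS = "password"

proxies = {
    "http": f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}",
    "https": f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}",
}

# Route the request through the proxy and check latency before scaling up.
response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code, response.elapsed.total_seconds())

Running a request like this a few times against a known endpoint gives a quick latency baseline before committing automation or scraping jobs to the proxy.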
Final Thoughts

Proxies have become must-have tools for IT professionals in an era where data privacy, network efficiency, and geo-specific intelligence are essential. The right proxy type, understood and applied well for security, testing, and data collection, can be a significant driver of operational efficiency.

If you are involved in the Japanese market or digital testing, or simply need access to resources restricted to Japan, the prudent move is to use Japanese proxies. They deliver speed, trust, and safety, exactly the qualities that today's enterprise networking demands.
Detecting Supply Chain Attacks in NPM, PyPI, and Docker: Real-World Techniques That Work

By David Iyanu Jonathan
The digital ecosystem breathes through trust. Every npm install, every pip install, every docker pull represents a leap of faith — a developer placing confidence in code written by strangers, maintained by volunteers, distributed through systems they've never seen. This trust, however, has become the Achilles' heel of modern software development. Supply chain attacks don't knock on your front door. They slip through the dependencies you invited in yourself.

The Rising Threat: When Trust Becomes Vulnerability

Software supply chain attacks represent a paradigm shift in cybersecurity threats. Rather than directly targeting fortified systems, attackers have discovered something far more insidious: contaminating the very building blocks developers use to construct their applications. The math is simple, yet terrifying — compromise one widely used package, and suddenly you have access to thousands of downstream applications.

Consider the numbers. A typical Node.js application might depend on 400+ packages. Each package brings its own dependencies. The dependency tree explodes exponentially, creating what security researchers call "dependency hell" — not just for version conflicts, but for attack surface expansion.

The SolarWinds breach of 2020 demonstrated this with devastating clarity. Attackers didn't need to penetrate 18,000 organizations individually. They contaminated a single software update, then watched as victims essentially installed the malware themselves. Trust became the delivery mechanism.

Understanding the Modern Attack Surface

Today's software supply chain resembles a complex ecosystem where multiple attack vectors converge. Dependency confusion attacks exploit naming similarities between public and private repositories. Typosquatting campaigns target developers' muscle memory — urlib instead of urllib, beuatifulsoup instead of beautifulsoup. These aren't accidental typos; they're calculated traps.

Malicious maintainers represent perhaps the most concerning vector. Package maintainers often work for free, maintaining critical infrastructure used by millions. Burnout is common. Account takeovers happen. Sometimes legitimate maintainers sell their packages to bad actors, who then inject malicious code into trusted libraries.

The attack surface extends beyond individual packages. CI/CD pipelines themselves become targets, with attackers compromising build systems to inject malicious code during the compilation process. Container registries face similar threats — malicious Docker images masquerading as legitimate base images, complete with backdoors baked into the filesystem.

NPM: Securing the JavaScript Ecosystem

JavaScript's package ecosystem moves fast. Really fast. The NPM registry hosts over two million packages, with thousands added daily. This velocity creates opportunities for both innovation and exploitation.

Native tooling provides your first line of defense. The npm audit command, built into NPM itself, scans your dependency tree against known vulnerability databases. Simple to use: npm audit reveals vulnerabilities, while npm audit fix attempts automatic remediation. But automation isn't always wise — major version jumps can break your application.

Shell
npm audit --audit-level high
npm audit --production   # Focus on production dependencies only

Socket.dev has emerged as a game-changer for NPM security. Unlike traditional vulnerability scanners that look for known CVEs, Socket analyzes package behavior. Does this utility package really need network access?
Why is a string manipulation library spawning child processes? Socket's behavioral analysis catches malicious packages before they're widely known as threats.

The tool integrates seamlessly with GitHub pull requests, automatically flagging suspicious dependency changes. Install their GitHub app, and Socket will comment on PRs when new dependencies exhibit concerning behaviors — filesystem access, network calls, shell execution. It's like having a security expert review every dependency addition.

Snyk operates differently — comprehensive, enterprise-focused, battle-tested. Their database combines public vulnerability information with proprietary research. Snyk doesn't just find vulnerabilities; it provides context: risk scores, exploit maturity, and fix guidance. The CLI tool integrates into any workflow:

Shell
snyk test      # Test current project
snyk monitor   # Continuous monitoring
snyk wizard    # Interactive fixing

GitHub's Dependabot represents automation at scale. Enabled by default for public repositories, Dependabot monitors your dependencies and automatically creates pull requests when updates fix security issues. The key insight: automate the mundane, but review the critical.

The event-stream incident serves as a cautionary tale. In 2018, the maintainer of event-stream — a popular Node.js package with millions of weekly downloads — transferred ownership to a seemingly legitimate user. The new maintainer added a malicious dependency that specifically targeted the Copay cryptocurrency wallet. The attack was surgical: the malicious code only activated when it detected that it was running within the Copay application. This incident highlighted how trust chains can be exploited through social engineering and legitimate-seeming account transfers.

PyPI: Python's Package Security Landscape

Python's Package Index faces unique challenges. The language's popularity in data science, machine learning, and automation means PyPI packages often handle sensitive data. Scientific computing libraries deal with massive datasets, financial modeling packages process trading algorithms, and DevOps tools manage infrastructure credentials.

pip-audit, developed by the PyPA (Python Packaging Authority), brings vulnerability scanning directly to Python developers. Unlike pip's basic functionality, pip-audit focuses specifically on security. It cross-references your installed packages against the OSV (Open Source Vulnerabilities) database and PyUp.io's safety database.

Shell
pip-audit                                  # Audit current environment
pip-audit --requirement requirements.txt
pip-audit --format json                    # Machine-readable output

The tool's strength lies in its integration with Python's ecosystem. It understands virtual environments, requirements files, and poetry.lock files. Critical for CI/CD integration, where you need consistent, reproducible security scanning.

Bandit complements pip-audit by focusing on code analysis rather than dependency scanning. While pip-audit finds vulnerable packages, Bandit identifies vulnerable code patterns within your own codebase. Hard-coded passwords, SQL injection patterns, unsafe deserialization — Bandit catches what automated dependency scanners miss.

OSV Scanner represents Google's contribution to open-source security. The tool doesn't just scan Python packages; it's language-agnostic, supporting NPM, PyPI, Go modules, and more. What makes OSV Scanner special is its data source: the OSV database aggregates vulnerability information from multiple sources, providing comprehensive coverage often missing from single-vendor solutions.
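Tying the Python-side tools together, here is a minimal, hedged sketch of a CI gate that runs pip-audit and Bandit and fails the build if either reports findings. It assumes both tools are installed in the build environment (pip install pip-audit bandit); the flags shown are common ones but may vary between versions.

Python

import subprocess
import sys


def run(cmd: list[str]) -> int:
    """Run a scanner and return its exit code."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode


def main() -> None:
    failures = 0
    # Dependency vulnerabilities: pip-audit exits non-zero when issues are found.
    failures += run(["pip-audit", "--requirement", "requirements.txt"]) != 0
    # Code-level issues: bandit -r scans the source tree; -ll keeps medium+ severity.
    failures += run(["bandit", "-r", "src", "-ll"]) != 0
    if failures:
        sys.exit(1)


if __name__ == "__main__":
    main()

The same wrapper pattern extends naturally to any scanner whose exit code distinguishes clean runs from findings.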
Typosquatting campaigns in PyPI demonstrate the creativity of attackers. Research has identified thousands of malicious packages with names deliberately similar to popular libraries: urllib becomes urlib, requests becomes request, tensorflow becomes tensorfow. These packages often contain information-stealing malware designed to harvest environment variables, SSH keys, and authentication tokens.

The Python Package Index has responded by implementing typosquatting protections and requiring two-factor authentication for critical package maintainers. However, the fundamental challenge remains: how do you balance accessibility with security in an ecosystem built on trust?

Docker Images: Container Security in Practice

Container security extends far beyond scanning individual images. The entire container lifecycle — from base images to runtime — presents attack opportunities: malicious base images, vulnerable dependencies baked into containers, secrets accidentally included in layers, and runtime privilege escalation.

Trivy has become the gold standard for container vulnerability scanning. Developed by Aqua Security and open-sourced, Trivy scans not just the final container image but individual layers, understanding how vulnerabilities propagate through the Docker build process.

Shell
trivy image nginx:latest
trivy image --severity HIGH,CRITICAL ubuntu:20.04
trivy filesystem --security-checks vuln,config .

Trivy's comprehensive approach examines OS packages, language-specific dependencies (NPM, PyPI, Go modules), and configuration issues. It understands Dockerfiles, Kubernetes manifests, and Terraform configurations. The tool provides actionable remediation advice — not just "vulnerability exists" but "upgrade to version X" or "use this alternative base image."

Grype, developed by Anchore, focuses specifically on vulnerability detection with impressive speed. Where some scanners take minutes to analyze large images, Grype typically completes scans in seconds. The performance advantage becomes critical in CI/CD pipelines where scan time directly impacts deployment velocity.

Docker Scout, Docker's native security solution, integrates directly into Docker Desktop and Docker Hub. The integration advantage is significant — Scout automatically scans images as you build them, providing immediate feedback without requiring separate tool installation or configuration.

Consider this vulnerable Dockerfile example:

Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    npm
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY package.json .
RUN npm install
COPY . .
EXPOSE 8080
CMD ["python3", "app.py"]

Scanning this with Trivy reveals multiple issues: Ubuntu 18.04 contains numerous CVEs, pip packages might have vulnerabilities, npm dependencies could be compromised, and the image runs as root by default. Each issue represents a potential attack vector.

Base image trust becomes crucial. Official images from Docker Hub generally receive regular security updates. Third-party images vary wildly in maintenance quality. Alpine Linux has gained popularity partly due to its minimal attack surface — fewer packages mean fewer vulnerabilities. However, Alpine's use of musl libc instead of glibc can cause compatibility issues with some applications.
CI/CD Pipeline Integration: Automation Without Compromise

Security scanning integrated into CI/CD pipelines transforms reactive security into proactive defense. Rather than discovering vulnerabilities in production, you catch them during development. The key principle: fail fast, fail early, fail safely.

GitHub Actions provides an ideal platform for security automation. The ecosystem includes pre-built actions for most security tools, reducing configuration complexity. Here's a comprehensive security scanning workflow:

YAML
name: Security Scan
on: [push, pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: NPM Audit
        run: npm audit --audit-level high
      - name: Snyk Security Scan
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
      - name: Docker Security Scan
        run: |
          docker build -t myapp .
          trivy image --exit-code 1 --severity HIGH,CRITICAL myapp

Security gates require careful calibration. Failing the build on every medium-severity vulnerability might sound secure, but it can paralyze development. Teams often start with critical and high-severity issues, gradually tightening thresholds as their security posture improves.

The challenge lies in balancing security with velocity. Automated fixes work well for straightforward updates but can break functionality for major version changes. Many teams implement a hybrid approach: automatic updates for patch releases, manual review for minor updates, and extensive testing for major version changes.

GitLab CI provides similar capabilities with its built-in security scanning. GitLab's advantage lies in integration — dependency scanning, container scanning, and static analysis are built into the platform, requiring minimal configuration for basic security coverage.

Monitoring and Incident Response: Beyond Prevention

Prevention isn't enough. Even with comprehensive scanning, zero-day vulnerabilities appear regularly. Newly disclosed security issues affect packages you've already vetted and deployed. Effective security requires ongoing monitoring and rapid response capabilities.

OSV.dev serves as a centralized vulnerability database aggregating information from multiple sources. Unlike vendor-specific databases, OSV provides a unified API for querying vulnerabilities across ecosystems. This aggregation enables comprehensive monitoring — you can track all vulnerabilities affecting your technology stack from a single source.

Have I Been Pwned extends beyond personal email monitoring. The service now includes domain monitoring for organizations, alerting when employee credentials appear in data breaches. Since many supply chain attacks begin with compromised developer accounts, monitoring for credential exposure provides early warning of potential threats.

Malicious package repositories track known bad packages across ecosystems. The Python Advisory Database, npm's security advisories, and similar resources provide structured information about malicious packages. Automating checks against these databases helps identify whether your dependencies have been flagged as malicious.
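The unified OSV API mentioned above can be queried directly. Below is a small, hedged sketch against the public OSV.dev v1 query endpoint; the package name and version are placeholders, and response field names are worth re-checking against the current OSV documentation.

Python

import requests


def query_osv(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known vulnerabilities for a single package version from OSV.dev."""
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    resp = requests.post("https://api.osv.dev/v1/query", json=payload, timeout=10)
    resp.raise_for_status()
    # OSV returns an empty object when no vulnerabilities are known.
    return resp.json().get("vulns", [])


if __name__ == "__main__":
    # Illustrative lookup of an intentionally old version.
    for vuln in query_osv("requests", "2.19.0"):
        print(vuln.get("id"), vuln.get("summary", ""))

Wrapping this lookup in a scheduled job over your lockfiles turns the one-time scan into the kind of continuous monitoring described above.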
When you discover a compromised dependency in your stack, response speed matters. Document your incident response process before you need it:

1. Immediate containment: Stop deployments, isolate affected systems.
2. Impact assessment: Identify what data the compromised package could access.
3. Remediation: Update to safe versions, scan for indicators of compromise.
4. Recovery verification: Ensure complete removal of malicious code.
5. Post-incident review: Update processes to prevent similar issues.

The xz-utils backdoor of 2024 demonstrated the sophistication of modern supply chain attacks. Attackers spent years building trust within the project community, gradually gaining maintainer access. The backdoor was nearly undetectable, hidden within binary test files and activated only under specific conditions. This attack highlighted how traditional scanning tools might miss sophisticated attacks that don't manifest as obvious vulnerabilities.

Practical Implementation: Starting Your Security Journey

Beginning comprehensive supply chain security can feel overwhelming. Start small, build incrementally, and focus on high-impact changes first.

Week 1: Visibility
• Run npm audit, pip-audit, and trivy on your main applications.
• Document current vulnerability exposure.
• Identify critical issues requiring immediate attention.

Week 2: Basic Automation
• Enable GitHub Dependabot or equivalent automated dependency updates.
• Add basic security scanning to your CI/CD pipeline.
• Configure notifications for critical vulnerabilities.

Week 3: Enhanced Scanning
• Integrate behavioral analysis tools like Socket.dev for NPM packages.
• Add container scanning to your Docker build process.
• Implement security gates for critical vulnerabilities.

Week 4: Monitoring and Response
• Set up OSV.dev monitoring for your technology stack.
• Create incident response procedures for compromised dependencies.
• Schedule regular security reviews and tool updates.

The goal isn't perfect security — that's impossible. The goal is managed risk, informed decisions, and rapid response capabilities.

Conclusion: Trust, But Verify Everything

Modern software development operates on trust at an unprecedented scale. Every dependency represents a trust relationship. Every container base image. Every CI/CD tool. Every package registry. The interconnectedness that makes modern development so powerful also makes it vulnerable.

Supply chain security isn't just about tools — though tools are essential. It's about changing how we think about dependencies. Instead of blind trust, we need informed trust. Instead of hoping for the best, we need monitoring for the worst. Instead of reactive patching, we need proactive defense.

The techniques outlined here — dependency scanning, behavioral analysis, container security, CI/CD integration, continuous monitoring — provide a foundation for managing supply chain risk. But tools evolve. Threats evolve. Your security practices must evolve, too.

Most developers unknowingly trust dozens or hundreds of third parties every time they build an application. That trust enables incredible innovation, but it also creates incredible risk. The key is making that trust explicit, measured, and continuously validated.

Start where you are. Use what you have. Do what you can. Perfect security doesn't exist, but better security always does. Your supply chain security journey begins with a single scan, a single update, a single question: "Do I really know what this code does?" The answer might surprise you. More importantly, it might protect you.
HSTS Beyond the Basics: Securing AI Infrastructure and Modern Attack Vectors
By Vidyasagar (Sarath Chandra) Machupalli FBCS
Building Secure Software: Integrating Risk, Compliance, and Trust
By Akash Gupta
Workload Identities: Bridging Infrastructure and Application Security
By Maria Pelagia
Bridging the Divide: Tactical Security Approaches for Vendor Integration in Hybrid Architectures

Security architecture in hybrid environments has traditionally focused on well-known concepts such as OWASP vulnerabilities, identity and access management, role-based access control, network security, and the principle of least privilege. Best practices like secure coding and incorporating SAST/DAST testing into CI/CD pipelines are also widely discussed. However, when organizations operate in a hybrid model — running workloads both on-premises and in the cloud — while also integrating with vendor-managed cloud solutions, a different set of security design considerations comes into play. These scenarios are not uncommon, yet they are rarely highlighted in the context of secure solution implementation involving vendor software in hybrid environments. This article highlights three real-world use cases and outlines practical architectural strategies organizations can adopt to ensure secure integration in hybrid settings.

Acronyms

• OWASP – Open Web Application Security Project
• SAST – Static Application Security Testing
• DAST – Dynamic Application Security Testing
• CI/CD – Continuous Integration / Continuous Delivery
• SaaS – Software as a Service
• UX – User Experience
• ETL – Extract, Transform, and Load

Use Cases

This article covers three use cases:

1. Automated software update by the vendor in the organization's managed data center
2. Webhook – mismatch in verification methodology
3. JavaScript embedding – monitoring mandate

Tactical Solutions

Automated Software Update by Vendor in Organization-Managed Data Center

Problem Statement

In some vendor software integrations, organizations are required to install an agent within their own data center. This agent typically acts as a bridge between the vendor's cloud-hosted application and the organization's on-premises systems. For example, it may facilitate data transfer between the vendor software and the organization's on-premises database. In many cases, the vendor's operational architecture requires that this agent be automatically updated. While convenient, this approach introduces a significant security risk. If the vendor's software is compromised or contains malware, the update process could infect the virtual machine or container hosting the agent. From there, the threat could propagate into other parts of the organization's infrastructure, potentially leading to a major security incident. Figure 1 showcases the scenario.

Figure 1: Vendor software agent running in the organization's data center

Solution

A tactical way to solve this problem is to install the future version of the agent software in a separate virtual machine or container and scan both the software and the machine for vulnerabilities. If the software and the deployment platform it runs on pass all security checks, the vendor can be approved to install the new version of the agent automatically. This ensures that an unverified version of the vendor software is never pushed automatically into the organization's data center. Figure 2 demonstrates the solution.

Figure 2: Pre-release version of vendor software and scan process

Webhook: Mismatch in Verification Methodology

Problem Statement

This is an interesting security scenario that teams often stumble over. For a webhook implementation, the organization has to open inbound connectivity from the vendor software over the internet.
Because this is inbound traffic to the organization's data center (on-prem or cloud), it needs to be verified from every aspect of software security, such as DDoS attacks and malicious payloads. Organizations generally have a well-defined common security policy for verifying all incoming traffic from external vendors. On the other hand, the vendor may have its own common policy that serves as a guideline for customers on how to verify all aspects of security when they receive inbound traffic from the vendor webhook. It is highly unlikely that the security policies of the organization and the vendor will match, especially when both are major players in the industry. Because the policies rarely match, implementing such webhook integrations becomes a challenge.

Solution

A tactical way to solve the issue is to let the incoming traffic hit a reverse proxy layer of the organization. The reverse proxy layer, which receives traffic from the internet, is generally protected by a DDoS protection layer. The reverse proxy can forward the incoming traffic to the backend service layer, which holds the business logic to process the webhook request. The backend service layer can then implement payload and other verification of the vendor webhook traffic based on the policy set up for that vendor's specification. Figure 3 demonstrates the tactical solution.

Figure 3: Webhook traffic verification
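As an illustration of the payload verification step in the backend service layer, here is a hedged sketch in Python. It assumes the vendor signs each webhook payload with a shared secret using HMAC-SHA256 and sends the signature in a header; the header name, signature format, and algorithm are assumptions and must be taken from the actual vendor specification.

Python

import hashlib
import hmac

# Hypothetical header name and shared secret; real values come from the
# vendor's webhook specification and the organization's secret store.
SIGNATURE_HEADER = "X-Vendor-Signature"
SHARED_SECRET = b"replace-with-secret-from-vault"


def is_valid_signature(raw_body: bytes, received_signature: str) -> bool:
    """Recompute the HMAC over the raw payload and compare in constant time."""
    expected = hmac.new(SHARED_SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)


# In the backend service layer (framework-agnostic outline):
# signature = request.headers.get(SIGNATURE_HEADER, "")
# if not is_valid_signature(request.body, signature):
#     return 401  # reject unverified webhook traffic before any processing

Signature checks of this kind complement, rather than replace, the DDoS protection and payload inspection applied at the reverse proxy layer.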
JavaScript Embedding: Monitoring Mandate

Problem Statement

Some vendor solutions these days are JavaScript toolkits. They are typically Digital Adoption Platform (DAP) software used to guide users through the UX of a web platform and familiarize them with newly released features. The integration process often requires embedding the vendor's JavaScript toolkit within the organization's codebase. This is deemed risky due to script injection and other types of JavaScript vulnerabilities. In addition, vendor software generally also has a feature to send information from the web browser to the vendor's system to capture data for analytical purposes. This analytical data capture adds further risk, since the vendor software could capture unauthorized data elements about customers and applications. The organization therefore prefers analytics traffic to flow from the browser to the vendor platform through its own infrastructure. If the data flows through the organization's infrastructure, the data sent to the vendor platform can be monitored and acted upon as necessary.

Solution

There are two problems to solve in this use case:

1. Safely integrate the vendor's JavaScript package into the organization's codebase.
2. Implement a solution to send analytics traffic from the browser to the vendor through the organization's infrastructure.

To integrate the vendor JavaScript tool securely, the script needs to be packaged as part of the CI/CD pipeline so it is scanned and put through SAST/DAST testing before deployment. To route the analytics traffic to the vendor platform through the organization's infrastructure, create a proxy to the target vendor endpoint and customize the vendor JavaScript to point to the proxy. This arrangement routes analytics traffic from the browser to the vendor through the organization's infrastructure.

Figure 4: JavaScript embedding and analytics traffic flow

Conclusion

This article explored three real-world scenarios that highlight the security challenges organizations face when integrating vendor software into hybrid environments. Each use case demonstrates how seemingly routine technical decisions — such as software updates, webhook validation, or JavaScript embedding — can introduce vulnerabilities if not carefully addressed. The solutions presented are not just theoretical best practices but tactical architectural choices that organizations can adopt to implement these less talked about but common integrations securely.

By Dipankar Saha
Navigating the Cyber Frontier: AI and ML's Role in Shaping Tomorrow's Threat Defense

Abstract

This article explores the transformative role of artificial intelligence (AI) and machine learning (ML) in cybersecurity. It delves into innovative strategies such as adaptive cyber deception and predictive behavioral analysis, which are reshaping defense mechanisms against cyber threats. The integration of AI in zero-trust architectures, quantum cryptography, and automation within cybersecurity frameworks highlights a shift towards more dynamic and proactive security measures. Furthermore, the challenges of the "black box" problem in AI decision-making and the potential for AI to automate routine cybersecurity tasks are discussed. The narrative underscores the importance of complementing technology with human insight for effective digital defenses.

Introduction: A Personal Encounter With Cyber Evolution

Let me rewind a few years back — a time when I was knee-deep in implementing a creditworthiness model in my previous role at Sar Tech LLC/Capital One. It was around the same time I encountered the formidable intersection of artificial intelligence (AI) and cybersecurity. While tuning machine learning (ML) algorithms to reduce loan approval risks, I witnessed firsthand how AI could pivot an organization's security posture in ways I hadn't quite imagined before. This realization didn't stem from an academic paper or industry panel — it came from the challenge of protecting sensitive data while simultaneously fine-tuning predictive models. It was an "aha" moment, one which highlighted the potential of AI and ML in a broader, more dynamic context of cybersecurity.

1. Adaptive Cyber Deception: A Strategic Shift

Deception as Defense: More Than Just Smoke and Mirrors

I vividly recall a project where we employed AI-driven deception techniques, a strategy that initially seemed straight out of a spy thriller rather than a data security meeting. The idea of deploying decoys and traps to mislead would-be attackers wasn't just innovative — it was transformative. We used platforms that could autonomously deploy traps tailored to the intelligence we gathered, constantly evolving as threats matured. This wasn't about fooling some hypothetical hacker; it was a real-world application, dynamically adjusting to threats in real time.

The early challenges were not insignificant. The AI needed fine-tuning — much like a brewing pot of coffee that you keep tasting until that perfect balance is struck. Yet, when we saw reduced breach attempts and elongated threat response times, the payoff was clear. This strategy shifted our mindset from being purely defensive to engaging in active deterrence.

2. Predictive Behavioral Analysis: Reading Between the Lines

Breaking the Mold: Predicting the Unpredictable

Incorporating AI into predictive behavioral analysis feels a bit like playing chess blindfolded — challenging but rewarding. Most cybersecurity efforts focus on known threats — the easily identifiable pawns and bishops. But there's immense value in predicting the moves of hidden pieces. For instance, during a period when identifying insider threats was critical, we leveraged AI to analyze massive datasets, revealing subtle user patterns that could indicate future security risks. It was akin to predictive maintenance in manufacturing. It required a mindset shift — a move from passive analysis to active prediction, not only guarding against known threats but also casting a safety net over potential surprises.
The parallels were striking: just as in maintaining a manufacturing line, we had to anticipate system 'failures' before they happened.

3. Zero Trust and AI: A Necessary Symbiosis

Continuous Trust: The Ever-Evolving Security Blanket

When the conversation turns to zero-trust architectures, my mind immediately goes back to implementing real-time fraud detection systems while working with financial data. Here, AI played a critical role in ensuring persistent verification of user identities and devices. Our experience was that traditional models that granted one-time trust were antiquated. We needed a system that validated continuously — not just once, but every step of the way. Implementing this was no easy feat, as it often required blending AI with agile security systems — akin to updating software in a live server environment. The automation brought by AI allowed us to evaluate risk in real time, ensuring that our trust was as fluid as the threats being faced.

4. Quantum Cryptography: The Next Frontier

AI and Quantum: The New Dynamic Duo

Exploring AI's role in enhancing quantum cryptography was perhaps the most cutting-edge venture. The convergence of AI with quantum methods wasn't just an exploration in theoretical cryptography but a practical endeavor to secure communication channels. We employed machine learning (ML) algorithms to optimize quantum key distribution, dynamically adjusting to new vulnerabilities.

The challenge here was twofold: technical and conceptual. The quantum realm doesn't always adhere to classical physics — or logic, for that matter. Combining it with AI required navigating unfamiliar waters in quantum algorithms and applying ML models in an entirely new context. It was a learning curve, but the potential was too significant to ignore — a robust defense against not only current threats but the looming quantum computing advancements that could render traditional cryptography obsolete.

5. Addressing the "Black Box" Problem

Transparency in AI: Demystifying the Algorithms

A recurring pain point with AI-driven cybersecurity solutions is their opaque nature — the dreaded "black box." In my experience, transparency in decision-making processes is crucial. Security teams need to trust that AI's decisions are based on sound logic. It's not unlike cooking without a recipe; you need to know the ingredients to trust the outcome. Yet, explainable AI models can bridge this gap by offering insights into the decision-making pathways of algorithms. Initiatives during my tenure at Capital One included developing clear protocols for auditing AI-driven decisions, providing transparency, and fostering trust within our security teams. This endeavor ensured that our 'AI chefs' revealed enough of their recipe to build confidence in the solutions presented.

6. The Increasing Role of AI in Automating Cybersecurity

From Manual to Machine: Redefining Roles

The future is unmistakably veering towards automation — allowing AI to shoulder more of the operational load. This shift is redefining roles within cybersecurity teams, requiring a new focus on strategic oversight rather than routine tasks. My journey through machine learning projects taught me the value of shifting mundane tasks to AI, freeing up human resources to tackle complex, strategic challenges. However, this evolution comes with its own set of challenges, such as ensuring AI's ethical use and accountability. It's like introducing a new player into an established team; roles need to be reassessed, and new playbooks developed.
The human element will pivot to overseeing, strategizing, and innovating the broader defense strategies rather than routine operations.

Conclusion: A Future of Autonomous Defenses

Navigating this cyber frontier, one thing remains clear: AI and ML are integral to evolving threat defenses. The journey, punctuated by challenges and groundbreaking strides, is one of continuous learning — much like my career path, which has been anything but linear. The lessons learned along the way emphasize that while technology propels us forward, it remains essential to blend human insight with artificial intelligence. Just as no single technology was ever a panacea, AI and ML are tools — powerful ones — that, when wielded wisely, can redefine how we secure our digital landscapes.

In essence, the future of cybersecurity is not just about the tools but the synergy they create with the people behind them. It's an exciting time to be in this field, and I, for one, am eager to see how AI and ML continue to transform the way we defend against threats. So, here's to embracing these innovations and blazing a trail into a more secure digital future.

By Geethamanikanta Jakka
A Framework for Securing Open-Source Observability at the Edge

The Edge Observability Security Challenge

Deploying an open-source observability solution to distributed retail edge locations creates a fundamental security challenge. With thousands of locations processing sensitive data like payments and customers' personally identifiable information (PII), every telemetry component running on the edge becomes a potential entry point for attackers. Edge environments operate with limited physical security, bandwidth constraints shared with business-critical application traffic, and no technical staff on-site for incident response. Traditional centralized monitoring security models do not fit these conditions because they assume abundant resources, dedicated security teams, and controlled physical environments, none of which exist at the edge.

This article explores how to secure an OpenTelemetry (OTel) based observability framework from the Cloud Native Computing Foundation (CNCF). It covers metrics, distributed tracing, and logging through Fluent Bit and Fluentd.

Securing OTel Metrics

Mutual Transport Layer Security (TLS)

Metrics are secured through mutual TLS (mTLS) authentication, where both the client and the server must prove their identity with certificates before communication can be established. This ensures trusted communication between the systems. Unlike traditional Prometheus deployments, which expose unauthenticated HTTP (Hypertext Transfer Protocol) endpoints for every service, OTel's push model allows us to require mTLS for all connections to the collector (see Figure 1).

Figure 1: Multi-stage security through PII removal, mTLS communication, and 95% volume reduction

Security configuration, otel-config.yaml:

YAML
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: mysite.local:55690
        tls:
          cert_file: server.crt
          key_file: server.key
  otlp/mtls:
    protocols:
      grpc:
        endpoint: mysite.local:55690
        tls:
          client_ca_file: client.pem
          cert_file: server.crt
          key_file: server.key
exporters:
  otlp:
    endpoint: myserver.local:55690
    tls:
      ca_file: ca.crt
      cert_file: client.crt
      key_file: client-tss2.key

Multi-Stage PII Removal for Metrics

Metrics often end up capturing sensitive data by accident through labels and attributes. A customer identity (ID) in a label, or a credit card number in a database query attribute, can turn compliant metrics into a regulatory violation. Multi-stage PII removal addresses this problem in depth at the data level.

Stage 1: Application-level filtering. The first stage happens at the application level, where developers use OTel Software Development Kit (SDK) instrumentation that hashes user identifiers with the SHA-256 algorithm before creating metrics. Uniform Resource Locators (URLs) are scanned to remove query parameters like tokens and session IDs before they become span attributes.

Stage 2: Collector-level processing. The second stage occurs in the OTel Collector's attribute processor. It implements three patterns: complete deletion for high-risk PII, one-way hashing of identifiers using SHA-256 with a cryptographic salt, and regex rules to clean up more complex data.

Stage 3: Backend-level scanning. The third stage provides backend-level scanning, where centralized systems perform data loss prevention (DLP) scanning to detect any PII that reached storage, triggering alerts for immediate remediation. When the backend scanner detects PII, it generates an alert indicating that the edge filters need updating, creating a feedback loop that continuously improves protection.
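To illustrate the Stage 1 idea, here is a minimal sketch of salted SHA-256 hashing of a user identifier before it is attached to telemetry. The helper names are illustrative; the actual instrumentation calls depend on the OTel SDK and language in use.

Python

import hashlib
import os

# The salt would normally come from secure configuration; an environment
# variable is used here only to keep the sketch self-contained.
HASH_SALT = os.environ.get("TELEMETRY_HASH_SALT", "change-me")


def pseudonymize(identifier: str) -> str:
    """One-way hash a user identifier so raw PII never enters telemetry."""
    return hashlib.sha256((HASH_SALT + identifier).encode("utf-8")).hexdigest()


# Illustrative usage before recording a metric or span attribute:
# attributes = {"user.id": pseudonymize(customer_id), "http.route": "/checkout"}

Because the hash is salted and one-way, the backend can still correlate activity per user without ever storing the original identifier.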
Aggressive Metric Filtering

Security is not only about encryption and authentication, but also about removing unnecessary data. Transmitting less data reduces the attack surface, minimizes exposure windows, and makes anomaly detection easier. Hundreds of metrics may be available out of the box, but filtering and forwarding only the metrics you actually need cuts metric volume by up to 95%. That saves resources, network bandwidth, and management overhead.

Resource Limits as Security Controls

The OTel Collector sets strict resource limits that prevent denial-of-service attacks.

Resource | Limit | Protection Against
Memory | 500 MB hard cap | Out-of-memory attacks
Rate limiting | 1,000 spans/sec/service | Telemetry flooding attacks
Connections | 100 concurrent streams | Connection exhaustion

These limits ensure that even when an attack happens, the collector maintains stable operation and continues to collect required telemetry from applications.

Distributed Tracing Security

Trace Context Propagation Without PII

Distributed traces are secured through the W3C Trace Context standard, which provides secure propagation without exposing sensitive data. The traceparent header contains only a trace ID and span ID. No business data, user identifiers, or secrets are allowed (see Figure 1).

Critical Rule Often Violated

Never put PII in baggage. Baggage is transmitted in HTTP headers across every service hop, creating multiple exposure opportunities through network monitoring, log files, and services that accidentally log baggage.

Span Attribute Cleaning at Source

Span attributes must be cleaned before span creation because they are immutable once created. Common mistakes that expose PII include capturing full URLs with authentication tokens in query parameters, adding database queries containing customer names or account numbers, capturing HTTP headers with cookies or authorization tokens, and logging error messages containing sensitive data that users submitted. Implementing filter logic at the application level removes or hashes sensitive data before spans are created.

Security-Aware Sampling Strategy

Reducing normal-operation traces by 90% supports the General Data Protection Regulation (GDPR) principle of data minimization while maintaining 100% visibility for security-relevant events. The following sampling approach serves both performance and security by intelligently deciding which traces to keep based on their value.

Trace Type | Sample Rate | Rationale
Error spans | 100% | Potential security incidents require full investigation
High-value transactions | 100% | Fraud detection and compliance requirements
Authentication/authorization | 100% | Security-critical paths need complete visibility
Normal operations | 10-20% | Maintains statistical validity while minimizing data collection

Logging Security With Fluent Bit and Fluentd

Real-Time PII Masking

Application logs are the highest-risk data: unstructured text that may include anything developers print. Real-time masking of PII before logs leave the pod represents the most critical security control in the entire observability stack. The scanning and masking happen in microseconds, adding minimal overhead to log processing. If developers accidentally log sensitive data, it is caught before network transmission (see Figure 2).

Figure 2: Logging security enabled through two-stage DLP, real-time masking in microseconds, TLS 1.2+ end-to-end, rate limiting, and zero log-based PII leaks
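The masking rules themselves are typically implemented as regex or Lua filters inside Fluent Bit; the Python sketch below only illustrates the kind of patterns involved. The regexes are simplified examples, not production-grade PII detectors.

Python

import re

# Simplified, illustrative patterns; real deployments need broader, tested rules.
PATTERNS = [
    (re.compile(r"\b\d{13,16}\b"), "[MASKED_CARD]"),               # long digit runs (card-like)
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[MASKED_EMAIL]"),    # email addresses
    (re.compile(r"(?i)(authorization:\s*bearer\s+)\S+"), r"\1[MASKED_TOKEN]"),
]


def mask_line(line: str) -> str:
    """Apply each masking rule to a log line before it leaves the node."""
    for pattern, replacement in PATTERNS:
        line = pattern.sub(replacement, line)
    return line


print(mask_line("payment ok card=4111111111111111 user=jane@example.com"))

Keeping the rules as data (pattern, replacement pairs) mirrors how the edge filters can be updated from the backend feedback loop described earlier without changing the masking code itself.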
Security configuration, fluent-bit.conf:

YAML
pipeline:
  inputs:
    - name: http
      port: 9999
      tls: on
      tls.verify: off
      tls.cert_file: self_signed.crt
      tls.key_file: self_signed.key
  outputs:
    - name: forward
      match: '*'
      host: x.x.x.x
      port: 24224
      tls: on
      tls.verify: off
      tls.ca_file: '/etc/certs/fluent.crt'
      tls.vhost: 'fluent.example.com'

fluentd.conf:

<transport tls>
  cert_path /root/cert.crt
  private_key_path /root/cert.key
  client_cert_auth true
  ca_cert_path /root/ca.crt
</transport>

Secondary DLP Layer

Fluentd provides secondary DLP scanning with a different set of regex patterns designed to catch what Fluent Bit missed. This includes private keys, new PII patterns, other sensitive data, and context-based detection.

Encryption and Authentication for Log Transit

Log transmission is secured with TLS 1.2 or higher and mutual authentication. In this setup, Fluent Bit authenticates to Fluentd using certificates, and Fluentd authenticates to Splunk using tokens. This approach prevents network attacks that could capture logs in transit, man-in-the-middle attacks that could modify logs, and unauthorized log injection.

Rate Limiting as Attack Prevention

Preventing log flooding avoids both performance and security issues. An attacker generating massive log volume can hide malicious activity in noise, consume all disk space and cause a denial of service, overwhelm centralized log systems, or drive cloud costs up until logging is disabled to save money. Rate limiting at 10,000 logs per minute per namespace prevents these attacks.

Security Comparison: Three Telemetry Types

Aspect | Metrics (OTel) | Traces (OTel) | Logs (Fluent Bit/Fluentd)
Primary risk | PII in labels/attributes | PII in span attributes/baggage | Unstructured text with any PII
Authentication | mTLS with 30-day cert rotation | mTLS for trace export | TLS 1.2+ with mutual auth
PII removal | 3-stage: App --> Collector --> Backend | 2-stage: App --> Backend DLP | 3-stage: Fluent Bit --> Fluentd --> Backend
Data minimization | 95% volume reduction via filtering | 80-90% via smart sampling | Rate limiting + filtering
Attack prevention | Resource limits (memory, rate, connections) | Immutable spans + sampling | Rate limiting + buffer encryption
Compliance feature | Allowlist-based metric forwarding | 100% sampling for security events | Real-time regex-based masking
Key control | Attribute processor in collector | Cleaning before span creation | Lua scripts in sidecar

Key Outcomes

• Secured open-source observability across distributed retail edge locations
• Achieved full Payment Card Industry (PCI) Data Security Standard (DSS) and GDPR compliance
• Reduced bandwidth consumption by 96%
• Minimized attack surface while maintaining complete visibility

Conclusion

Securing a Cloud Native Computing Foundation-based observability framework at the retail edge is both achievable and essential. By implementing comprehensive security across OTel metrics, distributed tracing, and Fluent Bit/Fluentd logging, organizations can achieve zero security incidents while maintaining complete visibility across distributed locations.

By Prakash Velusamy
Top Takeaways From Devoxx Belgium 2025

In October 2025, I visited Devoxx Belgium, and again it was an awesome event! I learned a lot and received quite a lot of information, which I do not want to withhold from you. In this blog, you can find my takeaways from Devoxx Belgium 2025!

Introduction

Devoxx Belgium is the largest Java conference in Europe. This year was already the 22nd edition. As always, Devoxx was held in the fantastic theatres of Kinepolis Antwerp. Each year, there is a rush on the tickets. Tickets are released in several batches, so if you could not get a ticket during the first batch, you get another chance. The first two days of Devoxx are Deep Dive days, where you can enjoy more in-depth talks (about 2-3 hours) and hands-on workshops. Days three through five are the Conference Days, where talks are held in time slots of about 30-50 minutes. You receive a lot of information!

This edition was a special one for me, because I got the opportunity to speak at Devoxx myself, which was an awesome experience! I gave a Deep Dive session on Monday, but more on that later. Enough for the introduction; the next paragraphs contain my takeaways from Devoxx. They only scratch the surface of each topic, but they should be enough to make you curious to dive a bit deeper yourself. Do check out the Devoxx YouTube channel. All the sessions are recorded and can be viewed there. If you intend to view them all, there are 250 of them.

Artificial Intelligence

Let's start with AI. More and more AI-related talks are being given, which also makes Devoxx Belgium the largest AI conference in the world. There are plenty of other topics to choose from, but I cannot neglect the importance of AI at this conference.

AI Agents

Agents are on the rise, and the major libraries for using AI with Java either support them or are working on it. In general, they all support three flows (the explanations are mainly taken from the LangChain4j documentation):

• Sequential workflow: The simplest possible pattern, where multiple agents are invoked one after the other, with each agent's output passed as input to the next agent. This pattern is useful when you have a series of tasks that need to be performed in a specific order.
• Loop workflow: Here, you want to improve the output of an LLM in a loop until a certain condition has been met. The agent is invoked multiple times. The end condition can also be a maximum number of iterations, to prevent the agent from getting stuck in the loop.
• Parallel workflow: With the parallel workflow, you start multiple agents in parallel and combine their output once they are done with their tasks.

Next to these flows, it is also possible to create agent-to-agent workflows. A2A is an open standard that enables AI agents to communicate and collaborate across different platforms and frameworks, regardless of their underlying technologies. With this approach, you can combine several agents. It is good to know about these capabilities and which support is available in the libraries: LangChain4j, Spring AI, and the Agent Development Kit. And do check out the Embabel Framework created by Rod Johnson, which makes use of Goal-Oriented Action Planning (GOAP).
Related talks:

• From LLM orchestration to autonomous agents: Agentic AI patterns with LangChain4j
• Discover the Agent Development Kit for Java for building AI agents
• Gen AI Grows Up: Enterprise JVM Agents With Embabel

Model Context Protocol

If you want to add agents to your AI workflow, you should know about the Model Context Protocol (MCP). MCP is a standardized way of interacting with agents. Creating an MCP server is quite easy to do with the above-mentioned libraries. If you want to test your agents, use the MCP Inspector. Something that is not yet addressed sufficiently in the MCP specification is how to secure MCP servers. There is a temporary solution at the moment, but this will probably change in the near future.

• Beyond local tools: Deep dive into the Model Context Protocol (MCP)
• Securing MCP Servers

AI Coding Assistants

Of course, I have to mention my own Deep Dive. If you want to know more about how to improve model responses during coding, or which tasks can (or cannot) be executed by AI, do definitely watch the first part of my Deep Dive. If you are interested in adding MCP servers to your coding workflow so that a model can make use of your terminal, retrieve up-to-date documentation for your libraries, or write end-to-end tests for you, do watch the second part (starting at 1:17).

• Unlocking AI Coding Assistants: Real-World Use Cases

Software Architecture

I have read about Architecture Decision Records (ADRs) before, and they were mentioned in some talks. But I never had a decent explanation like in the talk I attended. So if you want to get started with ADRs, you should definitely take a look at it.

• Creating effective and objective architectural decision records (ADRs)

And to continue the architecture paragraph, also watch Making significant Software Architecture decisions. If someone wants to make an architectural decision, you should use the 5 Whys: if someone tells you to use technology A, you ask 'but why?', the person explains, and then you ask 'but why?' again, and so on. If you still get a decent answer after the fifth why, you are good to go. This and other tips are given in this talk.

Security

Spring Security

I always try to visit a talk about Spring Security, just to freshen up my knowledge and to learn new things, of course. This year, I went to a Spring Security authorization deep dive. You learn about request, method, and object authorization, and how to design your security authorization.

• Authorization in Spring Security: permissions, roles and beyond

Vulnerabilities

Ah, vulnerabilities... often a nightmare for developers, because we need to update our dependencies often. This talk explains CVEs, SBOMs, how to expose your SBOM by means of Spring Boot Actuator, how to use Dependency-Track to manage your SBOMs, and more. It also explains why you should use distroless base images for your own container images in order to reduce the number of dependencies in your container.

• From Vulnerability to Victory: Mastering the CVE Lifecycle for Java Developers

Others

Java 25

Between all the AI content, we would almost forget that Java 25 was released on the 16th of September. To get a complete overview, take a look at Java 21 to 25 - Better Language, Better APIs, Better Runtime. I was unfortunately not able to attend this Deep Dive because it was scheduled at the same time as mine. But that is the beauty of Devoxx Belgium: all talks are recorded and available the next day. This is definitely one of the first talks I will watch.
If you are interested in what is coming next, you should take a look at Weather the Storm: How Value Classes Will Enhance Java Performance. Value classes are immutable and will also be available for records. You get roughly the same performance as with primitive types, meaning that creating value classes comes at almost no performance cost.

Spring Boot 4

Another major release coming up is Spring Boot 4, together with Spring Framework 7, scheduled for November 2025. Discover the new HTTP client, the use of JSpecify annotations, Jackson 3, API versioning, and so on.

Bootiful Spring Boot

IntelliJ IDEA

If you are a Java developer, you are probably using IntelliJ IDEA. IntelliJ offers quite a lot of features, and many of them you probably do not know about. Learn more about them and watch the session on being more productive with IntelliJ IDEA. You will definitely learn something new. If you are using Spring Boot, you should install the Spring Debugger plugin. The ability to see which properties files and which beans are loaded is alone already so valuable that it will help you during debugging.

Spring Debugger: Behind The Scenes of Spring Boot

Conclusion

Devoxx 2025 was great, and I am glad I was able to attend the event. As you can read in this blog, I learned a lot, and I need to take a closer look at many topics. At least I do not need to search for inspiration for future blogs!

By Gunter Rotsaert, DZone Core
Evolving Golden Paths: Upgrades Without Disruption

The platform team had done it again — a new version of the golden path was ready. Cleaner templates, better guardrails, smoother CI/CD. But as soon as it rolled out, messages started flooding in: “My pipeline broke!”, “The new module isn’t compatible with our setup!” Sound familiar? Every platform engineer knows that delicate balance — driving innovation while ensuring developer stability. Golden paths promise simplicity and speed, but without careful version management, they can easily turn from enablers into disruptors. This blog explores how evolving golden paths can be managed like a well-planned journey — where upgrades happen seamlessly, teams stay productive, and developer flow never skips a beat.

Why Golden Paths Need to Evolve

To learn more about golden paths, refer to our blog. A golden path is never static. It evolves alongside your platform ecosystem, cloud services, and compliance standards. Over time, what was once “golden” may become obsolete, insecure, or inefficient. Common triggers for evolution include:

Security and compliance updates: New guardrails or encryption defaults.
Technology upgrades: Moving from Terraform v0.13 to v1.x or updating Helm/Kubernetes versions.
Performance improvements: Introducing observability, caching, or new CI templates.
Organizational scaling: Supporting multi-region or multi-tenant environments.

In essence, versioning gives you engineering governance without friction: a scalable model for both innovation and stability.

The Upgrade Dilemma: Balancing Consistency and Autonomy

A version upgrade may sound simple: “just use the new template.” But in reality, it touches multiple layers:

Layer | Example impact
Infrastructure | Change in Terraform module versions
CI/CD pipelines | Updated security scanners, lint rules
Runtime | New base images, sidecars, or service meshes
Compliance | Revised IAM or audit configurations

For developers, this can feel like a moving target. Too frequent changes create fatigue; too few lead to drift. The goal is to evolve the golden path without breaking existing developer flow — keeping the developer experience seamless and predictable.

Strategies for Managing Golden Path Evolution

Adopt semantic versioning: Use a clear scheme like v1.x and v2.x to indicate breaking vs. non-breaking changes. Semantic clarity prevents confusion and enables automation.
Minor versions (1.0 → 1.1): Incremental, non-breaking improvements, safe to auto-upgrade.
Major versions (1.x → 2.0): Introduce new standards or breaking changes; require developer opt-in.
Provide parallel onboarding paths: Maintain multiple active versions temporarily. Use developer portals (like Backstage) or templates-as-code repositories to help teams select the right version. This reduces upgrade anxiety and encourages self-paced adoption. Example versions:
goldenpath-v1 → legacy workloads
goldenpath-v2 → new workloads
Automate upgrade discovery: Integrate automated notifications or dashboards that show which teams are running outdated templates. Use metadata tagging (goldenpath.version: v1.2) in repositories or manifests. Implement drift detection pipelines that scan for older configurations. Trigger pull requests suggesting version upgrades — keeping teams aware but not blocked.
Version-aware CI/CD pipelines: Your golden path upgrade process should be validated through pipelines.
For example:

Before rollout, CI tests each version with synthetic workloads.
Integration tests validate compatibility with existing infra policies.
Feature flags allow gradual rollout — enabling canary testing for developers.

Backward compatibility and deprecation policies: Every golden path version should include a sunset policy:
Define clear timelines (e.g., “v1 supported until Q2 2026”).
Communicate impact early via developer channels or portals.
Provide upgrade guides and migration playbooks.
Measure developer experience impact: Metrics should guide your version evolution — not gut feel. Track:
Adoption rate per version
Average upgrade completion time
Deployment success rate post-upgrade
Developer satisfaction scores (DX surveys)
Governance through evolution: Governance is often misunderstood as control — but in platform engineering, it is enablement through standards. Version-controlled golden paths become the living documentation of your platform's maturity.
Change advisory reviews (for architectural consistency)
Security sign-offs (for compliance assurance)
Platform observability (for impact tracking)
Communication cadence:
Roadmap + dates: what’s coming next quarter; planned deprecations.
Upgrade office hours: a tight feedback loop during beta and week 1 of stable.
Changelog discipline: human-readable summaries first, details later.

Common Anti-Patterns

Treat your golden path upgrades like product upgrades. Follow regular product upgrade best practices, and watch out for these common anti-patterns:

Silent breaking changes hidden in “minor” bumps
Big-bang migrations that mix security, runtime, and CI overhauls at once
Docs lagging code, forcing Slack archaeology to upgrade
Blocking enforcement on day one; create warning windows first
No rollback or pinned artifact strategy

Golden Path Version Upgrade Across Multiple Teams

Central Platform Team Prepares the Upgrade

Define the scope: The platform (enablement) team decides what’s changing:
New Terraform module versions, CI/CD pipeline improvements, or base image updates
Security hardening, policy updates, or compliance alignment
New observability or networking patterns
Tag and version it: They cut a branch like release/v2.0 and test it thoroughly in internal sandboxes. Semantic versioning (major/minor/patch) clarifies the upgrade effort for developers.
Validate it through automated pipelines:
End-to-end CI pipelines test the new version using reference applications.
Static policy checks (e.g., OPA/Conftest) ensure compliance.
Performance and regression tests verify no degradation.

Internal Developer Platform (IDP) Publishes the New Version

Once verified, the new golden path version is made visible through the developer portal (e.g., Backstage, KubriX, IBM Cloud Projects, or an internal template catalog):

Existing golden paths remain available (e.g., goldenpath-v1 and goldenpath-v2 coexist).
Each entry includes metadata: version, release_date, support_until, and migration_guide.

Developers can preview diffs or launch new projects using the new version, while older ones continue to run on the previous one.

Multi-Team Upgrade Rollout Begins

Communication phase: The platform team announces the new version via Slack/Teams, newsletters, or the portal banner. Each update includes:
What changed, and why it matters
Effort level (e.g., “low-risk, no breaking changes”)
Sunset timeline for old versions
Automated visibility: Dashboards show which teams are still using v1.x.
Dashboards or scripts (e.g., via GitHub Actions or Tekton) collect and report this data across all repos. Upgrade Execution per Team Automated upgrade pull requests - Automation (PR bots or CLI tools) generates PRs in each repo: Shows file diffs between v1 and v2Runs pre-upgrade checks (e.g., IAM policy differences)Includes migration notesValidation by CI/CD - Each team’s CI pipeline validates that the upgrade passes: Build and deploy succeedIntegration tests and smoke tests passpolicy scans remain compliantPilot to Gradual rollout Start with a pilot team (early adopters).Gather DX feedback, fix edge cases, release v2.0.1.Expand to remaining teams in waves. Governance and Enforcement Once adoption stabilizes, platform governance tools (OPA, Policy Controller, or GitHub Actions) can warn or block deployments on deprecated versions. This ensures consistency and compliance without manual policing. Feedback and Continuous Improvement A retrospective is conducted after major version rollouts.Teams share friction points and feature requests.The platform team refines upgrade tooling and documentation for the next cycle. In short, a golden path upgrade across multiple teams succeeds when it’s: Visible (clearly announced and discoverable) Automated (PR bots, CI pipelines, dashboards)Gradual (pilot rollout, self-service adoption)Governed (sunset policy and compliance guardrails)Developer-centric (empathy-driven communication and docs) Real-World Example: Evolving Golden Paths in IBM Cloud Imagine a platform team using IBM Cloud is rolling out a new version of a Deployable Architecture that provisions a secured Kubernetes cluster with integrated monitoring and logging. In the older version (v1.4), certain compliance controls were manual. The new version (v1.5) introduces automated CBR (Context-Based Restrictions), enhanced SCC (Security and Compliance Center) scans, and pre-approved Terraform modules. Before releasing it, the platform team validates it in a sandbox IBM Cloud Project using Schematics and Conftest policies. Once verified, the version is published to the private catalog, where individual development teams can choose when to upgrade by updating their project configuration. As teams progressively adopt v1.5, outdated templates are automatically flagged through OPA policies, and deprecated versions are eventually sunset — ensuring secure evolution without breaking developer workflows. Lessons Learned From Real-World Platform Teams Ship small, test early: Don’t wait for a massive “v2.” Release incremental improvements that developers can digest.Automate friction away: The easier the upgrade, the faster the adoption.Communicate like a product team: Treat internal teams as customers — market your Golden Path upgrades.Balance stability and innovation: Give teams time to migrate, but don’t let them drift indefinitely.Leverage telemetry: Use logs, version labels, and Schematics activity trackers to detect outdated paths. Conclusion Golden paths are not static highways — they are evolving ecosystems. Versioning is what turns them from one-off templates into strategic assets for developer productivity and organizational resilience. Evolving from v1 to vNext is less about technology and more about trust, transparency, and timing. Platform teams that embrace controlled evolution cultivate something powerful. Developers trust the platform’s pace of change, upgrades feel like improvements, not pain, and standards evolve without constraining creativity. 
In the end, a truly golden path isn’t the one that never changes — it’s the one that changes gracefully.

By Josephine Eskaline Joyce, DZone Core
Scaling Boldly, Securing Relentlessly: A Tailored Approach to a Startup’s Cloud Security

Launching a SaaS startup is like riding a rocket. At first, you’re just trying not to burn up in the atmosphere — delivering features, delighting users, hustling for feedback. But as you start to scale, you realize: security isn’t just a cost center — it’s an accelerant for growth, trust, and resilience. For SaaS startups racing from MVP to unicorn, robust security isn’t just about compliance; it fuels innovation, safeguards reputation, and unlocks enterprise sales. But faced with fierce market demands and thin resources, how can founders, engineers, and security leads scale infrastructure and build trust — all without slowing the agile hustle? This phased and tailored approach distills security research, startup battle scars, and practical frameworks to take your cloud security journey from survival mode to true maturity. It is not a hand-holding guide, but a set of critical directional steps.

Why Startup Cloud Security Is Different (And Why Agility Is Your Superpower)

Small teams, huge accountability: Startups need to deliver enterprise-grade security on a bootstrap budget.
Unpredictable scaling: A post goes viral, a new client lands, regulations shift. Security needs to evolve faster than your product.
Winning trust: Enterprise prospects, regulators, and savvy users expect real answers about risk, compliance, and resilience.

Phase | Team/Customer Size | Security Priority
Inception | Small/early | Essentials, hygiene
Maturing | Growing | Formalization, automation
Growth | Large, enterprise | Advanced controls

Phase I: Build Strong Foundations — Don’t Wait for Disaster

You can’t outsource everything. The Shared Responsibility Model is not just a legal shield: as a customer, you own data security, identity, app config, and compliance — even if AWS/Azure/GCP handle the “steel doors” and hypervisors. Quick, impactful wins:

Enable built-in tools like IAM, encryption, and activity logs.
Apply the principle of least privilege — give “just enough access” and rotate credentials frequently.
Patch, update, automate. Legacy debt grows exponentially.

Source: Microsoft

Phase II: Architect for Resilience — Blast Radius Reduction and Segmentation

What happens if dev gets breached? Can an attacker exfiltrate production data, pivot to finance, or shut down your core APIs? Network segmentation is your safety mechanism:

Separate environments (Dev, Test, Prod) using VPCs, subnets, and resource groups, with firewalls and strict access rules.
Infrastructure as Code (IaC) and repeatable templates allow you to rebuild compromised systems, keep configs airtight, and automate DR (disaster recovery).
Multi-zone, multi-region deployments mitigate downtime and regulatory risk, and improve scalability.

Real story: Imagine an OAuth misconfiguration in Dev exposes test tokens publicly. If Dev is “coupled” to Prod with open rules, you’re looking at a business-ending breach. But if it’s segmented, attackers hit a wall.

Phase III: App Security — Code, Supply Chain, and User-Facing Interfaces

Security isn’t just about “don’t get breached” — it’s about how you build code and manage change through every phase.
Embed secure software development lifecycle (SSDLC) practices: threat modeling, code reviews, automated tests (SAST, DAST).Supply chain security: Audit open-source components with SBOMs, use mature packages with regular updates, and review third-party API contracts.Secure CI/CD: Isolate pipelines, scan for secrets, ensure role-based access, automate vulnerability checks.Protect APIs and web apps: Use WAFs, enforce authentication, validate inputs, and throttle excessive use. Practical tip: Start simple with the OWASP Top 10 — layer more advanced testing as you grow (interactive testing, bug bounties). Make “security” a user story in your backlog. Phase IV: Governance, Risk, and Compliance — Turning Security into a Market Advantage Enterprise clients don’t just want cool features; they want to know you’re SOC 2-ready or GDPR-compliant. Set up a risk register: Map assets, estimate threats, record mitigations.Create policies for data classification, backup, encryption, and resilience (immutable storage = forensic gold after an incident).Implement third-party risk management: vet vendors for compliance, security controls, and breach history. Framework choices: Startups often begin aligning with ISO 27001 or NIST. Embed controls early, document everything — auditors and customers will ask. Phase V: IT Security — From Endpoints to Remote Work Cloud security isn’t just about VMs and S3 buckets. Every laptop is a potential front door. Use MDM/UEM for device management and patching.Enforce disk encryption, endpoint antivirus/EDR, strong passwords, and MFA policies.For remote teams, use Zero Trust Network Access (ZTNA) or SASE — VPNs alone aren’t enough. Pro tip: The risk of credential theft, device compromise, and shadow IT is highest when teams scale quickly — centralized management and routine training pay dividends. Phase VI: Security Monitoring, Incident Response, and Automation Detection is everything. Can you spot suspicious logins, exfiltration attempts, or privilege escalation in real time? Start with basic log collection and cloud-native security dashboards (AWS Security Hub, Azure Security Center).Scale up: Add SIEM data lakes, automate alerting (SOAR), and enable periodic threat hunting.Practice incident response: establish a runbook, rehearse the process, have escalation contacts and external experts on call. If a breach happens, speed is crucial — customers forgive honest communication and fast remediation over stealth and denial. The Zero Trust Mindset—Never Trust, Always Verify Zero Trust isn’t vendor hype. Assume every device, user, and API call could be compromised. Implement device posture checks, adaptive authentication, micro-segmentation, and least-privileged access.Use behavior analytics to flag anomalies and automate dynamic controls. As more startups embrace remote work, every login is an untrusted action — gate everything with context, not geography. Appendix 1: Practical Cloud Security Milestones Below are phased milestones for each security area. 
Startups progress from “basic hygiene” to “automated resilience.”

Phase | Architecture | AppSec Testing | Monitoring/IR | IT Security | Governance & Compliance
Inception | Single region, IaC templates | Basic code reviews, OWASP | Alerts for failed logins | Manual device updates, MFA | Risk register, simple policies
Maturing | Multi-zone/region, blast radius | SAST/DAST, SBOM, CI/CD sec | SIEM, daily reviews | Automated patching, MDM | Map frameworks, TPRM
Growth | Multi-cloud, advanced DR | Threat modeling, bug bounties | SOAR, full automation | UEM, centralized dashboards | Audits, compliance sustained

Appendix 2: Cyber Attacks and Results

Below are some recent, notable cyberattack incidents and the critical lessons learned from them. They can shed some light on how startups can build their defense and response strategies.

Pandora Jewelers: Salesforce Data Breach

Pandora, the jewelry retailer, suffered a cyberattack in August 2025 when threat actors gained access to its Salesforce environment through successful social engineering and vishing calls targeting a third-party provider. The attackers tricked staff into authorizing a fraudulent app, then used OAuth tokens to pull customer names, emails, and birthdates.

Lessons learned:

Vendor and integration risks: Monitor and assess third-party SaaS access continually.
Staff training: Regularly educate against phishing and social engineering threats.
Restrict access: Use least-privilege permissions and enforce MFA on all platforms.
Act fast: Quick incident response and transparent communication are critical after a breach.

The Pandora attack underscores how attackers abuse trusted integrations, making proactive vendor oversight and employee vigilance essential.

United Natural Foods: Cyberattack Forcing System Shutdowns

United Natural Foods, Inc. (UNFI) — the primary food distributor for Whole Foods and all US military retail exchanges — was hit by a suspected ransomware attack. UNFI detected unauthorized activity, took some systems offline, and publicly disclosed that the breach disrupted its ability to fulfill customer orders, impacting over 30,000 retail locations across the US and Canada. The attack triggered supply chain delays and required workarounds to continue limited operations while forensic and law enforcement investigations began.

Lessons learned:

Critical infrastructure is vulnerable: Even essential supply chains are high-value and susceptible targets.
Incident response is key: Quickly taking systems offline and communicating with stakeholders helps contain damage and maintain trust.
Resilience matters: Food distributors and other critical sectors should focus on both prevention and operational resilience, not just response.
Vendor and software supply chain risks: With large, complex distribution networks, security gaps in technology or third-party software can have outsized operational impacts.

The UNFI incident underscores the urgent need for modern, resilient cybersecurity practices in every tier of the supply chain, especially within essential infrastructure.

Marks & Spencer (M&S): Theft of Customer Information

UK retailer Marks & Spencer (M&S) was hit by a cyberattack that resulted in the theft of customer information — including phone numbers, addresses, and dates of birth. No payment card details or account passwords were exposed, but as a precaution, M&S forced password resets for customers and paused online orders temporarily. The attack was later attributed to the DragonForce ransomware group, also responsible for recent attacks on other UK retailers.
Lessons learned: Limit sensitive data storage: Avoid retaining payment info on systems wherever possible.Customer communication: Prompt notification and offering guidance build trust.Preventative security: Prepare for ransomware threats and syndicate attacks with regular incident response exercises.Password hygiene: Enforce password changes and multi-factor authentication in the wake of any suspected breach. This incident highlights the value of quickly securing accounts, maintaining transparency, and ensuring data minimization practices for retailers handling customer data. 23andMe: Went Bankrupt 23andMe suffered a major data breach when attackers used "credential stuffing" — trying to reuse passwords from other breaches — to access about 14,000 accounts. Because of the company's "DNA Relatives" feature, the attackers then scraped data linked to nearly 7 million users. Exposed data included names, birth years, locations, family trees, profile pictures, ancestry details, and sometimes health reports, impacting especially customers of Ashkenazi Jewish and Chinese heritage. No raw genetic files were leaked, but the personal and genealogical data could not be changed, making the impact severe. The breach led to lawsuits, regulatory fines, and significant damage to the company's reputation and business. Lessons learned: Weak password practices (password reuse) expose even the most sensitive accounts — enforce strong, unique passwords and enable multi-factor authentication for all users.Features that connect users (like DNA Relatives) may increase risk if a single account compromise gives wide data access; strict access controls and data segmentation are essential.Data minimization and regular security audits help reduce risk and regulatory exposure in highly sensitive sectors.Transparency, prompt breach response, and proactive communication are necessary to maintain customer trust and regulatory compliance Final Thoughts: Security as Growth Multiplier Cloud security for startups is not a checklist — it’s a journey. Foundational hygiene, architecture that segments and automates, smart app security, and incident response muscle: these drive innovation, win deals, and scale trust. Scale as boldly as you dare — just do it securely.

By Srihari Pakalapati
From Platform Cowboys to Governance Marshals: Taming the AI Wild West

The rapid ascent of artificial intelligence has ushered in an unprecedented era, often likened to a modern-day gold rush. This "AI gold rush," while brimming with potential, also bears a striking resemblance to the chaotic and lawless frontier of the American Wild West. We are witnessing an explosion of AI initiatives — from unmonitored chatbots running rampant to independent teams deploying large language models (LLMs) without oversight — all contributing to skyrocketing budgets and an increasingly unpredictable technological landscape. This unbridled enthusiasm, though undeniably promising for innovation, concurrently harbors significant and often underestimated dangers. The current trajectory of AI development has indeed forged a new kind of "lawless land." Pervasive "shadow deployments" of AI systems, unsecured AI endpoints, and unchecked API calls are running wild, creating a critical lack of visibility into who is developing what, and how. Much like the historical gold rush, this is a full-throttle race to exploit a new resource, with alarmingly little consideration given to inherent risks, essential security protocols, or spiraling costs. The industry is already rife with cautionary tales: the rogue AI agent that inadvertently leaked highly sensitive corporate data, or the autonomous agent that, in a mere five minutes, initiated a thousand unauthorized API calls. These "oops moments" are not isolated incidents; they are becoming distressingly common occurrences in this new, unregulated frontier. This is precisely where the critical role of the platform engineer emerges. In this burgeoning chaos, the platform engineer is uniquely positioned to bring much-needed order, stepping into the role of the new "sheriff." More accurately, given the complexities of AI, they are evolving into the governance marshal. This transformation isn't a mere rebranding; it reflects a profound evolution of the role itself. Historically, during the nascent stages of DevOps, platform engineers operated more as "cowboys" — driven by speed, experimentation, and a minimal set of rules. With the maturation of Kubernetes and the advent of widespread cloud adoption, they transitioned into "settlers," diligently building stable, reliable platforms that empowered developers. Now, in the dynamic age of AI, the platform engineer must embrace the mantle of the marshal — a decisive leader singularly focused on instilling governance, ensuring safety, and establishing comprehensive observability across this volatile new frontier. The Evolution of the Platform Engineer: From Builder to Guardian This shift in identity signifies far more than just a new job title; it represents a fundamental redefinition of core responsibilities. The essence of the platform engineer's role is no longer solely about deploying and managing infrastructure. It has expanded to encompass the crucial mandate of ensuring that this infrastructure remains safe, stable, and inherently trusted. This new form of leadership transcends traditional hierarchical structures; it is fundamentally about influence — the ability to define and enforce the critical standards upon which all other development will be built. While it may occasionally necessitate saying "no" to risky endeavors, more often, it involves saying "yes" with a clearly defined and robust set of guardrails, enabling innovation within secure parameters. 
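To make the "yes, with guardrails" idea a bit more tangible, here is a small, hypothetical sketch of the kind of check a platform team might place in front of LLM traffic: a naive rate limit plus basic PII redaction. All class names, patterns, and thresholds are invented for illustration, and a production gateway would be far more sophisticated.

Java
import java.time.Duration;
import java.time.Instant;
import java.util.regex.Pattern;

// Hypothetical guardrail: rate-limit LLM calls and redact obvious PII
// before a prompt ever leaves the platform.
class LlmGuardrail {

    private static final Pattern EMAIL = Pattern.compile("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    private final int maxRequestsPerMinute;
    private int requestsInWindow = 0;
    private Instant windowStart = Instant.now();

    LlmGuardrail(int maxRequestsPerMinute) {
        this.maxRequestsPerMinute = maxRequestsPerMinute;
    }

    synchronized String check(String prompt) {
        // Simple fixed-window rate limit to stop runaway agents.
        if (Duration.between(windowStart, Instant.now()).toMinutes() >= 1) {
            windowStart = Instant.now();
            requestsInWindow = 0;
        }
        if (++requestsInWindow > maxRequestsPerMinute) {
            throw new IllegalStateException("Rate limit exceeded; request blocked by the gateway");
        }
        // Naive PII redaction: replace anything that looks like an email address.
        return EMAIL.matcher(prompt).replaceAll("[redacted-email]");
    }
}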
As a governance marshal, the platform engineer is tasked with three paramount responsibilities: Gatekeeper of infrastructure: The platform engineer stands as the primary guardian at the very entry point of modern AI infrastructure. Their duty is to meticulously vet and ensure that everything entering the system is unequivocally safe, secure, and compliant with established policies and regulations. This involves rigorous checks and controls to prevent unauthorized or malicious elements from compromising the ecosystem.Governance builder: Beyond merely enforcing rules, the platform engineer is responsible for actively designing and integrating governance mechanisms directly into the fabric of the platform itself. This means embedding policies, compliance frameworks, and security protocols as foundational components, rather than afterthoughts. By building governance into the core, they create a self-regulating environment that naturally steers development towards best practices.Enabler of innovation: Crucially, the ultimate objective of the platform engineer is not to impede progress or stifle creativity. Instead, their mission is to empower teams to build and experiment fearlessly, without the constant dread of catastrophic failures. This role transforms into that of a strategic enabler, turning seemingly impossible technical feats into repeatable, manageable processes through the provision of standardized templates, robust self-service tools, and clearly defined operational pathways. They construct the scaffolding that allows innovation to flourish securely. Consider the platform engineer not as an obstructionist, but rather as a highly skilled and visionary highway engineer. They are meticulously designing the safe on-ramps, erecting unambiguous signage, and setting appropriate speed limits that enable complex AI workflows to operate at peak efficiency and speed, all while meticulously preventing collisions and catastrophic system failures. The Governance Arsenal: The AI Marshall Stack Platform engineers do not enter this challenging new domain unprepared. They possess a sophisticated toolkit — their "governance arsenal" — collectively known as the AI Marshall Stack. This arsenal is composed of several critical components: AI gateway: Functioning as a "fortified outpost," the AI Gateway establishes a single, secure point of entry for all applications connecting to various LLMs and external AI vendors. This strategic choke point is where fundamental controls are implemented, including intelligent rate limiting to prevent overload, robust authentication to verify user identities, and critical PII (Personally Identifiable Information) redaction to protect sensitive data before it reaches the AI models.Access control: This element represents "the law" within the AI ecosystem. By leveraging granular role-based access control (RBAC), the platform engineer can precisely define and enforce who has permission to utilize specific AI tools, services, and data. This ensures that only authorized individuals and applications can interact with sensitive AI resources, minimizing unauthorized access and potential misuse.Rate limiting: This is the essential "crowd control" mechanism. 
It acts as a preventative measure against financial stampedes and operational overloads, effectively preventing scenarios like a misconfigured or rogue AI agent making thousands of costly API calls within a matter of minutes, thereby safeguarding budgets and system stability.Observability: These components serve as the "eyes on the street," providing critical real-time insights into the AI landscape. A significant proportion of AI-related problems stem not from technical failures but from a profound lack of visibility. With comprehensive observability, the platform engineer gains precise knowledge of who is doing what, when, and how, enabling them to swiftly identify and address misbehaving agents or unexpected API spikes before they escalate into significant damage or costly incidents.Cost controls: These are the "bankers" of the AI Marshall Stack. They are designed to prevent financial overruns by setting explicit limits on AI resource consumption and preventing the shock of unexpectedly large cloud bills. By implementing proactive cost monitoring and control mechanisms, they ensure that AI initiatives remain within budgetary constraints, fostering responsible resource allocation. By meticulously constructing and deploying these interconnected systems, platform engineers are not merely averting chaos; they are actively fostering an environment where teams can build and innovate with unwavering confidence. The greater the trust users have in the underlying AI infrastructure and its governance, the more rapidly and boldly innovation can proceed. Governance, in essence, is the mechanism through which trust is scaled across an organization. Just as robust rules and well-defined structures allowed rudimentary frontier towns to evolve into flourishing, complex cities, comprehensive AI governance is the indispensable framework that will enable AI to transition from a series of disparate, one-off experiments into a cohesive, strategically integrated product strategy. Why the Platform Engineer Is the Right Person for the Job: The AI Marshal's Unique Advantage Platform engineers are uniquely and exceptionally well-suited to assume this critical role of the governance marshal. They possess the nuanced context of development cycles, the inherent influence within engineering organizations, and the technical toolkit necessary to implement and enforce AI governance effectively. They have lived through and shaped the eras of the "cowboy" and the "settler"; now, it is unequivocally their time to become the "marshal." The AI landscape, while transformative, is not inherently lawless. However, it desperately requires systematic enforcement and a foundational structure. It needs a leader to build the stable scaffolding that allows developers to move with agility and speed without the constant threat of crashing and burning. This vital undertaking is not about imposing control for the sake of control; rather, it is fundamentally about safeguarding everyone from the inevitable "oops moments" that can derail projects, compromise data, and exhaust budgets. It is about actively constructing a superior, inherently safer, and demonstrably smarter AI future for every stakeholder. Therefore, the call to action for platform engineers is clear and urgent: do not passively await others to define the rules of this new frontier. Seize the initiative. Embrace the role of the hero. 
Build a thriving, resilient AI town where innovation can flourish unencumbered, and where everyone can contribute and grow without the paralyzing fear of stepping on a hidden landmine. Final Thoughts AI doesn’t need to be feared. It just needs to be governed. And governance doesn’t mean slowing down—it means creating the structures that let innovation thrive. Platform engineers are in the perfect position to lead this shift. We’ve been cowboys. We’ve been settlers. Now it’s time to become marshals. So, to all the platform engineers out there: pick up your badge, gather your toolkit, and help tame the AI frontier. The future of safe, scalable, and trusted AI depends on it. Because the Wild West was never meant to last forever. Towns become cities. And with the right governance in place, AI can move from chaos to confidence — and unlock its full potential. Want to dive deeper into the AI Marshal Stack and see how platform engineers can tame the AI Wild West in practice? Watch my full PlatformCon 2025 session here: Discover how to move from cowboy experiments to marshal-led governance — and build the trusted AI foundations your organization needs.

By Hugo Guerrero, DZone Core
Adobe Service Runtime: Keep Calm and Shift Down!

Microservices at Adobe

Adobe’s transformation from desktop applications to cloud offerings triggered an explosion of microservices. Be it Acrobat, Photoshop, or Adobe Experience Cloud, they are all powered by suites of microservices, mainly written in Java. With so many microservices being created, every developer had to go through the same painful processes, i.e., security, compliance, scalability, resiliency, etc., to create a production-grade microservice. That was the genesis of Adobe Service Runtime.

What Is ASR?

ASR, or Adobe Service Runtime, is an implementation of the Microservice Chassis pattern. More than 80% of Adobe’s microservices use ASR foundational libraries. It offers the cross-cutting concerns that every production-grade microservice is expected to have. Highlights of the cross-cutting concerns included in the ASR libraries:

Foundational libraries for Java and Python: These libraries offer log masking, request stitching, breadcrumb trails, exception handling, async invocation, resiliency, etc.
À la carte libs: ASR connector libs/SDKs to talk to internal Adobe services.
Blessed base containers: Security-blessed containers that accelerate container adoption for applications in any language.
Bootstrapping code and infrastructure templates for fast-tracking getting started.
Opinionated build system — i.e., how to build a Java application, run tests, launch debug setups, and package into containers.
Secure defaults for configurables, to ensure teams get started with baselines that have been tested to work.

Having cross-cutting concerns in a single chassis helps organizations produce production-grade microservices at scale, just like an automobile manufacturer’s production line.

Why ASR?

Large organizations often have heavy compliance, security, resiliency, and scalability requirements. ASR provides a collection of foundational libraries, components, tools, and best practices (12-factor). This enables rapid development of four-9s-capable, innovative, and secure services. It also enables a container-first deployment system.

Value Proposition

We did a study on internal teams to derive the value proposition of ASR.

Category | Task | With ASR | Without ASR
Velocity | Picking frameworks and libraries, getting them to work, setting up the project structure and build system, and resolving dependency and build issues so you can start focusing on business logic. | Less than 1 hour | 1-2 weeks
Velocity | Implementing capabilities like log masking, request stitching, etc. | All capabilities are available 'out of the box'. | 4-6 weeks
Security | Legal and security reviews of core code and libraries (not including business logic). | 2-3 days | 3-6 weeks
Community | A strong community that empowers decentralized decision-making on feature priorities for service teams. | A common framework makes it easy to share code and developers between projects. | Diverse frameworks make it hard to share code across projects.

Using ASR saved developers time and improved security posture by not reinventing the wheel.

Benchmarks

RPS

We did some benchmarking to see if ASR has any overhead over vanilla applications. For example, we ran a ten-minute Gatling script to simulate 500 users.

App | Requests/second (RPS) | ASR % overhead | Response times (p95) | ASR % overhead
Non-ASR | 21678.506 | n/a | 46 | n/a
ASR | 23969.383 | 7% | 48 | 4%

ASR Filters

Some cross-cutting concerns are offered as filters, which can add some overhead. Our baseline comparison is a mean of 20474.225 requests/sec. The sections below show the performance change with individual filters disabled.
ASR logging filter The cost of disabling this is that the ASR service won't log incoming requests and outgoing responses Performance: mean requests/sec 21260.143, a 3.8% improvementASR exception filter The cost of disabling this is that stack traces can escape in exceptions, an ASSET violationPerformance: Mean requests/sec 20988.308, a 2.5% improvementASR request ID filter The cost of disabling this is that the ASR service won't have a unique request ID per request for tracking.Performance: mean requests/sec 21354.042, a 4.3% improvementASR request response filter The cost of disabling this is that the ASR service won't automatically validate the Authorization header in the incoming request (if com.adobe.asr.enable-authorization-header-validation is set to true)Performance: mean requests/sec 20896.923, a 2% improvement The benchmarks reveal that using ASR adds minimal overhead when compared to the functionalities it offers. Security CVE scans often uncover millions of vulnerabilities across codebases in large organizations. If Adobe developers had to manually fix each one, they would spend most of their time on patching rather than building features. By providing secure defaults and hardened components, ASR serves as a foundational library that reduces vulnerability exposure and saves developers valuable time. CVEs The Log4J incident is a testament to the success of ASR. When the CVE was published, users of ASR had to upgrade to just one version of ASR. Non-ASR repos had to scramble to migrate their libs off of Log4j. This clearly demonstrated the significant multiplier impact ASR has created within the company. Sensitive Data in Logs Log masking is another popular feature that is often recreated across the orgs. ASR comes with a modular log masking library that masks sensitive information. Logs that contain credit card, SSN, or any Adobe-defined sensitive info by default are automatically masked. Developers can also extend it to customize masking for additional use cases. This ensures consistent protection of PII across all applications. ASR Connectors and Resiliency ASR has connectors, which can be used to consume APIs exposed by other services inside Adobe. ASR connectors are application environment aware, i.e, a connector will automatically pick the right root URL of the service based on the app environment. For example, if the Application is running in the stage environment, the identity connector will use the identity stage URL; when the application is running in the prod environment, the identity connector will use the prod URL. This is possible due to the AutoConfiguration that ASR provides for all the connectors. One of the challenges with microservices is that different SLAs are honored by services. Your service might have a higher standard, and you must often tolerate other services. By using ASR connectors, microservices get fault-tolerant communication out of the box. ASR connectors leverage Resilience4j to achieve this. Every connector comes with Resiliency features like bulkhead threadpool, circuit breakers, retries, exponential backoff, etc. By using ASR connectors, the posture of a microservice is greatly enhanced. There are guardrails in the thread pool that ensure there is no avalanche of threads. By using retries by default, the stress on Adobe's network is greatly reduced when the availability of the dependent service is degraded. This is a classic example of how pushing the cross-cutting concerns down to a common layer unlocks a lot of value and reduces redundancies. 
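ASR's connector internals are Adobe-specific, but the Resilience4j pattern they build on looks roughly like the sketch below. The identity-service call and the class names are invented stand-ins; only the Resilience4j API calls are real.

Java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.decorators.Decorators;
import io.github.resilience4j.retry.Retry;

import java.util.function.Supplier;

// Rough illustration of the fault-tolerant call pattern ASR-style connectors build on.
// The "identity service" call is a placeholder for any downstream dependency.
class ResilientConnectorExample {

    public static void main(String[] args) {
        CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("identity");
        Retry retry = Retry.ofDefaults("identity");

        // The raw call to the downstream service (normally an HTTP client call).
        Supplier<String> rawCall = () -> callIdentityService("/profile/123");

        // Decorate it with a circuit breaker and retries, as a connector library would.
        Supplier<String> resilientCall = Decorators.ofSupplier(rawCall)
                .withCircuitBreaker(circuitBreaker)
                .withRetry(retry)
                .decorate();

        System.out.println(resilientCall.get());
    }

    private static String callIdentityService(String path) {
        // Placeholder; environment-aware URL selection would happen here in a real connector.
        return "response for " + path;
    }
}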
ASR Adoption at Adobe Almost every Java service at Adobe uses at least one of ASR’s libraries. The full suite of ASR is used by 80% or roughly 7000+ services at Adobe and continues to grow. With the growing need to make products more agentic, we see a strong need for libraries that support such use cases. ASR can be a powerful multiplier in enabling harm and bias guardrails, which are highly relevant to both the company and the industry today. Keep Calm and Shift Down! Inspired by shift left, shift down is a paradigm in platform engineering. A lot of cross-cutting concerns must be managed and provided by the platform out of the box. The users of the platform can focus on their functionalities without having to worry about the baseline standards set by Adobe. ASR enables shift down philosophy at scale. Security teams and executives keep calm due to the centralization of best practices and the at-scale adoption of ASR. Developers are at ease due to overhead being handled at the foundational layer. Every company interested in developer productivity and operational excellence should adopt a shift-down strategy like ASR. Over the years, the ROI keeps compounding and helps companies move fast on paved roads that power the developer journeys.

By Anirudh Mathad
The Rise of Passkeys

What Are Passkeys?

You know how annoying it is to remember all those different passwords for every single website? And how terrifying it is when you hear about a company getting hacked and, suddenly, your password for that site might be out there? Well, imagine logging into PayPal without a password, and even if PayPal's systems got totally breached, your login wouldn't be compromised. That's pretty much what passkeys are all about, so let's dive deep into the secret sauce behind them. If you're like billions of others with a smartphone, you might have already used passkeys on your phone. Companies like Apple, Microsoft, and Google have been rolling out passkeys widely recently. So what are they? Passkeys are a new, more secure way to sign in to websites and apps. They are a replacement for traditional passwords and are designed to be resistant to phishing, easier to use, and more secure. Passkeys are a standard introduced by the FIDO Alliance. They rely on asymmetric cryptography, where there is a pair of keys: one is the public key and the other is the private key. Let’s go over this using your interaction with PayPal as an example. When you set up a passkey for PayPal, your device generates a key pair: a "private key" that never leaves your device and a "public key" that your device sends over to PayPal. Now, when you want to log in, PayPal sends your phone a random piece of data, also referred to as a challenge. Your device signs this data with the private key stored on the device and sends the signature back to PayPal without revealing the key. PayPal, using the public key stored for your account, can instantly verify that the data was signed by your private key, without ever needing to see that private key itself. That validates your digital identity, allowing you to use your PayPal account. The private key is stored on your device, so it is important to secure the device, and that is where your device unlocking mechanism, such as Face ID, Touch ID, or a device password, comes into play. It prevents unauthorized persons from using the private key stored on the device.

Pseudo Code

Let's walk through the technical flow of setting up and using a passkey. The following pseudo-code snippets represent the essential client- and server-side actions.

// Pseudo-code for Passkey Setup (User registers with PayPal)
function setupPasskey(user_id, device_info):
    // 1. Device generates a new asymmetric key pair
    private_key = generatePrivateKey()
    public_key = generatePublicKey(private_key)
    // 2. Device securely stores the private key
    storeSecurelyOnDevice(private_key)
    // 3. Device sends the public key to PayPal
    paypal_server.receivePublicKey(user_id, public_key, device_info)
    // 4. PayPal associates the public key with the user's account
    paypal_server.database.store(user_id, public_key)
    return "Passkey setup successful!"

// Pseudo-code for Passkey Login (User logs into PayPal)
function loginWithPasskey(user_id):
    // 1. PayPal sends a challenge to the user's device
    challenge = generateRandomChallenge()
    paypal_server.sendChallenge(user_id, challenge)
    // 2. User's device receives the challenge
    device_response = user_device.receiveChallenge(challenge)
    // 3. User authenticates on the device (Face ID, Touch ID, PIN)
    if (device_authentication_successful()):
        // 4. Device signs the challenge with the private key stored during setup
        private_key = retrieveSecurelyFromDevice()
        signature = signWithPrivateKey(private_key, challenge)
        // 5. Device sends the signature back to PayPal
        paypal_server.receiveSignature(user_id, signature, challenge)
        // 6. PayPal retrieves the user's stored public key
        public_key = paypal_server.database.getPublicKey(user_id)
        // 7. PayPal verifies the signature using the public key
        if (verifySignature(public_key, challenge, signature)):
            return "Login successful!"
        else:
            return "Signature verification failed. Access denied."
    else:
        return "Device authentication failed. Access denied."
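The server-side check in step 7 is plain public-key cryptography. As a rough illustration (not PayPal's actual implementation), this is what that single verification step could look like with the JDK's java.security API, assuming an ECDSA-based passkey and leaving key loading and challenge bookkeeping aside.

Java
import java.security.PublicKey;
import java.security.Signature;

// Minimal sketch of step 7: verify that the challenge was signed by the
// private key matching the stored public key. Passkeys commonly use ECDSA (ES256).
class PasskeyVerifier {

    static boolean verify(PublicKey storedPublicKey, byte[] challenge, byte[] signatureBytes) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(storedPublicKey);
        verifier.update(challenge);              // the exact bytes the server sent as the challenge
        return verifier.verify(signatureBytes);  // true only if the matching private key produced this signature
    }
}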
Why the Surge in Passkey Adoption?

Traditional passwords have several vulnerabilities: they are often easy to guess and frequently reused across services. Consider phishing as an example: phishing websites cannot steal your password, because there is no traditional password. Even if you're tricked into visiting a fake website, your passkey won't work there, since each website has its own key pair. The private key stays on your device, and services such as PayPal only have your public key, so data breaches won't compromise your private key. These advantages, combined with improved user experience, such as using Face ID to log in to websites, pushed major platforms such as Apple, Google, and Microsoft to roll out support for passkeys. The built-in support means it is easy for developers to integrate them and for users to adopt them. This also created a ripple effect, where major services have started adopting passkeys for the same reasons as the major platforms did — improved security and user experience.

Are There Any Drawbacks?

While passkeys are a significant leap forward in security compared to traditional passwords, they aren't without potential downsides or risks.

Passkeys do not change the account recovery process, and bad actors could use this route to gain access. Services have to harden account recovery.
This is a relatively new authentication mechanism, and services have to ensure they follow best practices, such as correctly verifying signatures, when they implement support for passkeys.
Passkeys rely heavily on devices and their security, so they are only as good as the local authentication performed on the device. Using the strongest available device authentication mitigates this.
Passkeys are tied to an ecosystem to some extent. Passkeys created on Apple devices can be used seamlessly across your Apple devices, but it is difficult to use the same passkeys on the Android platform. They can be shared across devices using Bluetooth and QR codes, but it is not as seamless as within a single ecosystem.

Conclusion

Overall, passkeys are far more secure than traditional passwords, and that largely outweighs the drawbacks. As adoption continues to grow and the technology evolves, we will likely see many of these challenges addressed. With more services rolling out support for passkeys, user awareness will build, and best practices will become clear for everyone. It is safe to say that passkeys are here to stay and will likely become the standard for online authentication for the foreseeable future.

By Shiva kumar Pati
The Ethics of AI Exploits: Are We Creating Our Own Cyber Doomsday?

As artificial intelligence advances at rates never previously encountered, its impact upon society is taking hold ever more profoundly and extensively. From autonomous vehicles and personalized medicine to generative media and intelligent infrastructure, AI is changing every area it touches. But lurking in the background of these revolutionary promises is a chilly, black fear: Are we also building the tools of our own digital demise? The ethics of AI exploits, although intentional or emergent, raise profoundly disturbing questions about the cybersecurity, anonymity, and even global security of the future. A Double-Edged Sword AI is widely hailed as something good. Its ability to detect threats, analyze great volumes of data, and make complex decisions on its own has established it as a cornerstone of modern innovation. But with every great technology comes the risk of dual-purpose utilization. The same machine learning that can detect cancer can be repurposed as a weapon to detect zero-day vulnerabilities in computer systems. The same algorithms that make traffic predictions can be used to bypass intrusion prevention systems. The large language models that power conversational AI can be exploited to generate phishing emails indistinguishable from legitimate communication or worse, to socially engineer their way through enterprise defenses. We’re no longer speculating. These capabilities are already here. AI as a Weapon The idea of cyberwarfare isn’t new. Nation-states and criminal organizations have been waging silent wars in cyberspace for decades. But AI drastically shifts the scale and speed of these operations. Take offensive AI, for example. Machine learning tools can now look on the internet for vulnerabilities by themselves, rank them by impact, and even personalize attacks against specific targets from information pilfered on social media, via leaked data, or past breaches. What once took human effort for months is now possible in hours or even minutes. More disturbing is the emergence of autonomous cyber weapons: AI-driven systems with the ability to make choices, learn from experience, and launch cyberattacks with minimal or no human involvement. These kinds of systems raise an existential ethical question: where does delegation end and abdication begin? If an AI system launches a cyberattack on critical infrastructure autonomously, who should be held responsible? The developer? The deployer? The AI? The Rise of AI Exploits The term "AI exploits" both implies exploitation by AI systems and AI as an exploitative agent. At one end, hackers are finding ways to mislead or circumvent AI systems through adversarial examples, data poisoning, model inversion, and prompt injection attacks. Through such methods, AI models can be fooled into making dangerous mistakes or revealing sensitive information. Conveyingly, though, AI is being used to exploit vulnerabilities in traditional systems, with ghastly efficiency. Security researchers have demonstrated that generative models can be trained to produce polymorphic malware that will change its signature to attempt to evade detection. Others have trained models to detect misconfigurations in cloud deployments or crack passwords more effectively than brute-force tools. There is a darkening ecosystem that is emerging around AI-based hacking tools, including some that are open-sourced or available through underground forums. 
The democratization of AI-driven exploits means that even low-skill attackers can now gain access to advanced tools, escalating the threat surface exponentially.

Ethics in the Age of Digital Leviathans

The ethical dilemma is not merely one of avoiding abuse. It is one of redefining what responsibility looks like in a world where code can think, learn, and act. For one, the pace of development is far outstripping the establishment of safeguards. AI developers, often racing for market share or academic prestige, may overlook or under-prioritize security. We’ve already seen examples where popular AI APIs were exploited to produce hate speech, violate privacy, or bypass content moderation. Also, there is no universal code of conduct for how AI systems may operate or be used in offensive cyber environments. A system that one nation considers defensive AI can be seen as a first-strike weapon by another. No international agreements or norms on weaponizing AI exist, reminiscent of the early days of nuclear proliferation. Only now, the barrier to entry is much lower.

Are We Sleepwalking into a Cyber Doomsday?

To say this is nothing but science fiction would be a disservice to the reality unfolding before our very eyes. AI now has the ability to detect and exploit security vulnerabilities on its own. Deepfakes have nullified the very foundation of belief in visual and audio evidence. Artificial social media bots can influence public perceptions and upend elections. Code written by AI can be rife with backdoors or merely written in a way that exploits esoteric logic bugs in compilers and runtimes. Now imagine those skills combined into a self-replicating cyberorganism: an AI-driven worm that learns, adapts, and replicates via networks, adapting its payload to match the target. It's not impossible. Indeed, researchers have already built proofs of concept based on this very threat model. The idea of a "cyber doomsday" is not necessarily a singular, monolithic cataclysm. It might occur more insidiously: as the gradual erosion of online trust, large-scale disruption of services, and desensitization to AI-enabled sabotage. Adversaries manipulating financial markets. Critical infrastructure, such as water and electrical grids, taken down by autonomous exploits. Corporate and state secrets siphoned off by hyper-personalized social engineering. No Skynet required. Just apathy.

Responsibility and Foresight

We do have choices. Ethics ought never to be an afterthought; it needs to be embedded in the very fabric of AI design. This means:

Secure by design: AI models and platforms must be constructed with security as a core principle, not an additional feature.
Red teaming and adversarial testing: AI systems must undergo rigorous red teaming to understand how they might be manipulated or exploited.
Transparency and explainability: There are too many black boxes among AI systems. We must make explainable AI a priority so we understand how conclusions are reached and how they can be wrong.
Accountability mechanisms: Governments and institutions must design regulatory mechanisms that hold creators and operators of AI accountable for its misuse, whether intentional or emergent.
Global cooperation: Similar to nuclear arms control and chemical weapons conventions, there must be global cooperation to define norms and red lines for AI use in cyberspace.

What’s Next?

AI is neither ethical nor unethical. It's a reflection of our own intentions, blind spots, and decisions.
What’s Next?

AI is neither ethical nor unethical. It is a reflection of our own intentions, blind spots, and decisions. As we push the boundaries of what machines can do, we must also extend our capacity to anticipate the consequences. The ethics of AI exploitation is not a purely technical debate; it is a societal imperative. In the absence of visionary governance, the technologies we build to empower humanity could become the harbingers of its digital collapse.

The issue now is not whether AI can be employed for evil; it already is. The issue is whether we will act with enough foresight and integrity to steer it away from the edge of the abyss.

By Omkar Bhalekar

Top Security Experts


Apostolos Giannakidis

Product Security,
Microsoft


Kellyn Gorman

Advocate and Engineer,
Redgate

With over two decades of dedicated experience in relational database technology and proficiency across diverse public clouds, Kellyn recently joined Redgate as their multi-platform advocate to share her technical expertise with the industry. Delving deep into the intricacies of databases early in her career, she has developed unmatched expertise, particularly in Oracle on Azure. This combination of traditional database knowledge and insight into modern cloud infrastructure enables her to bridge the gap between past and present technologies and to anticipate the innovations of tomorrow. She maintains a popular technical blog called DBAKevlar (http://dbakevlar.com). Kellyn has authored both technical and non-technical books and has contributed to numerous publications on database optimization, DevOps, and command-line scripting. This commitment to sharing knowledge underlines her belief in the power of community-driven growth.

Josephine Eskaline Joyce

Chief Architect,
IBM


Siri Varma Vegiraju

Senior Software Engineer,
Microsoft

Siri Varma Vegiraju is a seasoned expert in healthcare, cloud computing, and security. He currently focuses on securing Azure Cloud workloads, drawing on extensive experience in distributed systems and real-time streaming solutions. Prior to his current role, Siri contributed significantly to cloud observability platforms and multi-cloud environments. He has demonstrated his expertise through notable achievements in competitive events and as a judge and technical reviewer for leading publications. Siri frequently speaks at industry conferences on cloud and security topics and holds a master's degree in computer science from the University of Texas at Arlington.

The Latest Security Topics

Evaluating AI Vulnerability Detection: How Reliable Are LLMs for Secure Coding?
Can AI spot real-world security bugs? Semgrep’s research compares LLMs on their ability to catch SQLi, XSS, and IDOR vulnerabilities.
November 14, 2025
by Jayson DeLancey
· 901 Views · 1 Like
Spectre and Meltdown: How Modern CPUs Traded Security for Speed
Spectre and Meltdown exploit CPU optimizations like out-of-order and speculative execution, leaking sensitive data despite software-level protections.
November 14, 2025
by Yash Gupta
· 652 Views
The DSPM Paradox: Perceived Controls for an Uncontrollable Data Landscape
You can't stop every burglar, but you can ensure your valuables are in a safe they can't crack. Why DSPM tools create an illusion of control in modern data security.
November 12, 2025
by Hamid Akhtar
· 766 Views
A Growing Security Concern: Prompt Injection Vulnerabilities in Model Context Protocol Systems
Prompt injection can make AI assistants a privilege‑escalation risk. Learn attack patterns and layered defenses: isolation, sanitization, validation.
November 11, 2025
by Janani Annur Thiruvengadam
· 1,149 Views
Decentralized Identity Management: The Future of Privacy and Security
As more of our information goes digital and cybersecurity awareness increases, DIM feels like the natural next step of identity management.
November 11, 2025
by Ben Hartwig
· 1,044 Views · 2 Likes
Docker Security: 6 Practical Labs From Audit to AI Protection
Master Docker security with six practical labs that take you from basic configuration audits to advanced AI workload protection.
November 10, 2025
by Shamsher Khan
· 2,193 Views · 4 Likes
Understanding Proxies and the Importance of Japanese Proxies in Modern Networking
Understand how proxies work and why Japanese proxies offer superior speed, security, and regional access for businesses, developers, and online tasks.
November 7, 2025
by Adamo Tonete
· 1,070 Views · 1 Like
Workload Identities: Bridging Infrastructure and Application Security
Replace static secrets with verifiable workload identities to close security gaps and build a stronger zero-trust foundation.
November 7, 2025
by Maria Pelagia
· 901 Views
Bridging the Divide: Tactical Security Approaches for Vendor Integration in Hybrid Architectures
Explore real-world hybrid security use cases and tactical strategies for securely integrating vendor software in cloud and on-prem environments.
November 5, 2025
by Dipankar Saha
· 1,231 Views · 1 Like
Top Takeaways From Devoxx Belgium 2025
In October 2025, Devoxx Belgium hosted its 22nd edition, emphasizing Java and AI advancements. Sessions covered AI workflows, architecture decisions, and more.
November 4, 2025
by Gunter Rotsaert DZone Core
· 899 Views · 1 Like
Detecting Supply Chain Attacks in NPM, PyPI, and Docker: Real-World Techniques That Work
Supply chain attacks represent the modern cybersecurity nightmare — attackers compromise the dependencies you trust instead of attacking you directly.
November 3, 2025
by David Iyanu Jonathan
· 2,472 Views · 2 Likes
Navigating the Cyber Frontier: AI and ML's Role in Shaping Tomorrow's Threat Defense
AI and ML are transforming cybersecurity with adaptive defenses, predictive analysis, and automation, shaping a smarter, more resilient digital future.
November 3, 2025
by Geethamanikanta Jakka
· 790 Views · 2 Likes
A Framework for Securing Open-Source Observability at the Edge
Build secure observability solutions for distributed edge environments using open-source telemetry. Achieve zero security incidents and full compliance.
October 31, 2025
by Prakash Velusamy
· 1,685 Views · 1 Like
HSTS Beyond the Basics: Securing AI Infrastructure and Modern Attack Vectors
HTTP Strict Transport Security (HSTS) is a web security policy mechanism that helps protect websites against protocol downgrade attacks and cookie hijacking.
October 29, 2025
by Vidyasagar (Sarath Chandra) Machupalli FBCS DZone Core
· 1,282 Views · 4 Likes
Building Secure Software: Integrating Risk, Compliance, and Trust
Software is the backbone of digital business, but as systems grow more connected, risks multiply. Enterprises need security built into the software itself.
October 28, 2025
by Akash Gupta
· 2,086 Views · 2 Likes
Evolving Golden Paths: Upgrades Without Disruption
Keep your golden paths evolving — with semantic versioning, automation, and empathy — to drive innovation without breaking developer flow.
October 23, 2025
by Josephine Eskaline Joyce DZone Core
· 3,902 Views · 3 Likes
From Platform Cowboys to Governance Marshals: Taming the AI Wild West
AI feels like the Wild West — platform engineers must become governance marshals to scale trust and turn chaos into safe, sustainable innovation.
October 22, 2025
by Hugo Guerrero DZone Core
· 1,797 Views · 1 Like
Scaling Boldly, Securing Relentlessly: A Tailored Approach to a Startup’s Cloud Security
Learn how SaaS startups can scale securely from MVP to enterprise in this phased, developer-first guide to cloud, app, and zero-trust security maturity.
October 21, 2025
by Srihari Pakalapati
· 2,145 Views · 1 Like
Is My Application's Authentication and Authorization Secure and Scalable?
The most common mistakes developers and architects make when building authentication and authorization for a new application, and how to avoid them.
October 21, 2025
by Navin Kaushik
· 2,583 Views · 2 Likes
The Rise of Passkeys
Passkeys offer a secure, passwordless future by replacing vulnerable passwords with device-specific cryptographic keys, driving a surge in passkey adoption.
October 21, 2025
by Shiva kumar Pati
· 1,681 Views · 1 Like
