Deploying LLMs Across Hybrid Cloud-Fog Topologies Using Progressive Model Pruning

Deploying LLMs at the edge is hard due to size and resource limits. This guide explores how progressive model pruning enables scalable hybrid cloud–fog inference.

By Sam Prakash Bheri · Jul. 02, 25 · Analysis

Large Language Models (LLMs) have become the backbone of conversational AI, code generation, summarization, and many other scenarios. However, deploying them poses significant challenges in environments where compute resources are limited, particularly in hybrid cloud-fog architectures where real-time inference may need to run closer to the edge.

In these settings, progressive model pruning plays a pivotal role, offering a way to reduce model size and computation cost without significantly impacting accuracy. In this article, we discuss how to efficiently deploy LLMs across cloud-fog topologies using layer-aware, resource-adaptive pruning techniques.

What Is a Hybrid Cloud-Fog Topology?

Before we dive into the topic, let's define the architecture:

  • Cloud layer: Centralized data centers containing thousands of high-performance computing (HPC) servers (GPU/TPU) with the capacity for training large language models (LLMs), full-scale inference, and orchestration.
  • Fog layer: Unlike the traditional cloud layer, this layer consists of decentralized micro data centers where intelligence and computing power sit within the local area network or at the edge (e.g., smart cities, vehicles, industrial sites). The fog layer delivers low latency but is resource constrained.

A hybrid cloud-fog topology orchestrates inference across both layers. It combines the scalability and flexibility of cloud computing with the proximity and low-latency benefits of fog computing. The cloud handles large datasets, performs training, and provides fallback logic. The fog layer performs lighter tasks such as data filtering, pre-processing, and analysis before sending data to the cloud. Processing data locally at the fog layer reduces latency and enables real-time applications. By offloading some tasks to the fog, the cloud layer can optimize resource utilization and run more efficiently.

The key idea is to dynamically adapt the deployment of LLM components across fog and cloud resources to optimize performance. For example, some parts of the LLM could run on local fog devices, while other parts run in the cloud. This allows the system to take advantage of the strengths of both fog and cloud computing.
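As a concrete illustration of this division of labor, the sketch below keeps lightweight pre-processing on a fog node and forwards requests it cannot serve locally to the cloud. The endpoint URLs, context limit, and helper names are hypothetical assumptions, not part of any specific framework.

```python
# Hypothetical sketch of the cloud-fog split: the fog node pre-processes
# requests locally and forwards work it cannot serve to the cloud.
# Endpoints, the context limit, and helper names are illustrative assumptions.
import requests

FOG_ENDPOINT = "http://fog-node.local:8080/v1/generate"    # pruned model, low latency
CLOUD_ENDPOINT = "https://cloud.example.com/v1/generate"   # full model, higher latency

MAX_FOG_CONTEXT = 1024  # tokens the pruned fog model can handle

def preprocess(prompt: str) -> str:
    # Cheap local work stays on the fog node: trimming and normalization.
    return " ".join(prompt.split())

def generate(prompt: str, token_count: int) -> str:
    prompt = preprocess(prompt)
    if token_count <= MAX_FOG_CONTEXT:
        resp = requests.post(FOG_ENDPOINT, json={"prompt": prompt}, timeout=5)
    else:
        # Long contexts exceed the fog node's limits, so the cloud handles them.
        resp = requests.post(CLOUD_ENDPOINT, json={"prompt": prompt}, timeout=30)
    return resp.json()["text"]
```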

The Challenge: LLMs Don’t Fit at the Edge

Deploying LLMs at the edge/fog comes with challenges, however. Current LLMs such as GPT-3 and LLaMA are multi-billion-parameter models that require large amounts of memory, high bandwidth, and multi-GPU clusters for inference.

The fog layer simply cannot host full LLMs because of its limited resources, so compression techniques are needed to deploy LLMs on fog nodes. Extensive research has been done on LLM model compression, of which weight pruning is a representative technique.

Progressive Weight Model Pruning

Model pruning is a technique that removes unimportant weights or neurons from a neural network, reducing its size and compute requirements. Progressive pruning does this incrementally, allowing more pruning near the input layers and less near the output. It also generates multiple model variants at various parameter counts to balance performance and resource efficiency.
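As a rough illustration, the sketch below applies a progressive, layer-aware schedule using PyTorch's built-in pruning utilities. The checkpoint name and the linear sparsity schedule are assumptions made for the example, not prescriptions from this article.

```python
# Minimal sketch of progressive, layer-aware pruning with torch.nn.utils.prune.
# The checkpoint name and the sparsity schedule (more pruning near the input,
# less near the output) are illustrative assumptions.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

blocks = model.model.layers            # transformer blocks, ordered input -> output
num_blocks = len(blocks)

for idx, block in enumerate(blocks):
    # Progressive schedule: ~50% sparsity for the first block, tapering toward 0%.
    sparsity = 0.5 * (1.0 - idx / num_blocks)
    for _, module in block.named_modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=sparsity)
            prune.remove(module, "weight")  # bake the zeroed weights into the tensor
```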

Types of Pruning

Structured pruning removes whole components of a model, such as neurons, attention heads, convolutional filters, or entire feedforward blocks. This results in a smaller, more efficient architecture that retains a regular structure and stays compatible with existing hardware like GPUs and TPUs. Because entire blocks are removed, structured pruning reduces compute and memory requirements while maintaining compatibility with standard deep learning frameworks.

Unstructured pruning removes individual weights or connections from the network, resulting in sparse weight matrices. Because it does not preserve a regular structure, it is difficult to realize computational speedups without specialized sparse matrix libraries or custom hardware. However, unstructured pruning can achieve very high compression ratios and is effective at reducing overall model size for constrained environments.

Layer-wise Pruning focuses on selectively pruning specific layers or submodules of the model based on their relative importance or contribution to the overall performance. This method allows for fine-grained control over model complexity, ensuring that critical components of the network are preserved while less influential parts are pruned.
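To make the structured/unstructured distinction concrete, the toy sketch below applies both styles to single linear layers with PyTorch's pruning utilities; the layer sizes and sparsity amounts are arbitrary examples.

```python
# Toy contrast between unstructured and structured pruning on one linear layer,
# using torch.nn.utils.prune. Sizes and amounts are arbitrary examples.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer_a = nn.Linear(4096, 4096)
layer_b = nn.Linear(4096, 4096)

# Unstructured: zero out 50% of individual weights by L1 magnitude.
# The matrix becomes sparse but keeps its original shape.
prune.l1_unstructured(layer_a, name="weight", amount=0.5)

# Structured: remove 25% of entire output rows (dim=0) by L2 norm,
# i.e., whole neurons, which keeps a regular, hardware-friendly structure.
prune.ln_structured(layer_b, name="weight", amount=0.25, n=2, dim=0)

print((layer_a.weight == 0).float().mean())                     # ~0.5 of weights zeroed
print((layer_b.weight.abs().sum(dim=1) == 0).float().mean())    # ~0.25 of rows zeroed
```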

Deployment Strategy: Pruning + Placement

With progressive pruning in place, we can deploy LLMs across the cloud and fog layers. The steps below describe the process (a simplified sketch of the placement and fallback logic follows the list):

  • The first step is to train and profile the model in the cloud. A base LLM such as LLaMA 2-7B is fine-tuned on domain-specific data to adapt the model to targeted use cases. After training is complete, techniques like saliency analysis are applied to identify layers or components that can be pruned without degrading performance. Then, pruned variants of the base model at different sparsity levels, such as 50%, 30%, and 10%, are generated. This creates a portfolio of smaller models optimized for different deployment scenarios.
  • In the next step, the pruned models are matched to the capacity of fog nodes based on the available edge hardware (CPU/GPU) specs, memory, and thermal constraints. Each device is assigned a pruned variant that fits within its performance envelope. Full-sized models (e.g., the original 7B model) remain in the cloud for high-throughput, latency-tolerant use cases such as prompt batching. Intelligent routing policies are implemented to dynamically direct user queries to the most appropriate node based on model size and hardware availability.
  • Finally, a hierarchical fallback mechanism is employed to ensure accuracy and responsiveness. If a fog node’s response confidence is low or the input context length exceeds its processing limits, the query is escalated to the cloud, where the full model provides a more accurate answer. Additionally, a hybrid inference pattern is supported where fog nodes deliver a fast initial response, and the cloud performs a secondary evaluation asynchronously for enhanced output quality. This architecture not only optimizes latency and resource usage but also ensures a robust and scalable deployment of LLMs across diverse infrastructure layers.
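The sketch below condenses the placement and fallback logic from these steps. The node specs, variant memory footprints, and confidence threshold are hypothetical assumptions used only for illustration.

```python
# Simplified sketch of variant placement and hierarchical fallback.
# Variant footprints, node specs, and the confidence threshold are
# hypothetical assumptions for illustration.
from dataclasses import dataclass

# Pruned variants keyed by sparsity level, with rough memory footprints in GB.
# 0.00 represents the full, unpruned model that stays in the cloud.
VARIANTS = {0.50: 4.0, 0.30: 7.0, 0.10: 11.0, 0.00: 14.0}

@dataclass
class FogNode:
    name: str
    ram_gb: float

def assign_variant(node: FogNode) -> float:
    """Pick the least-pruned variant that fits within ~80% of the node's RAM."""
    budget = 0.8 * node.ram_gb
    fitting = [s for s, mem in VARIANTS.items() if s > 0.0 and mem <= budget]
    # e.g., assign_variant(FogNode("gateway-01", ram_gb=16.0)) == 0.10
    return min(fitting) if fitting else max(VARIANTS)

def answer(query: str, fog_infer, cloud_infer, confidence_threshold: float = 0.7) -> str:
    """Hierarchical fallback: answer on the fog node first, escalate to the cloud."""
    text, confidence = fog_infer(query)    # fog_infer/cloud_infer are placeholder callables
    if confidence < confidence_threshold:
        text, _ = cloud_infer(query)       # the full cloud model gives the final answer
    return text
```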

Evaluation Metrics

When using progressive pruning, it is important to track the following metrics for practical validation (a minimal check of these thresholds is sketched after the list):

  • Accuracy: The accuracy of the pruned fog models should not drop by more than 2% relative to the base model.
  • Latency: Ensure the model serves requests efficiently at each layer, within roughly 100 ms on fog nodes and 300 ms in the cloud.
  • Throughput: Track tokens/sec per node to ensure the models sustain high throughput on both cloud and fog nodes.
  • Memory: Ensure the model fits within 80% of the total device RAM.
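The sketch below checks those targets on a deployed node; the thresholds mirror the list above, while the measurement helpers and function names are assumptions for illustration.

```python
# Minimal sketch of validating the targets above on a deployed node.
# Thresholds come from the list above; the helpers are hypothetical placeholders.
import time

LATENCY_BUDGET_MS = {"fog": 100, "cloud": 300}
MAX_ACCURACY_DROP = 0.02      # fog variants should stay within 2% of the base model
MAX_MEMORY_FRACTION = 0.80    # model should fit in 80% of device RAM

def check_request(layer: str, infer, prompt: str, tokens_generated: int) -> dict:
    start = time.perf_counter()
    infer(prompt)                                  # infer is a placeholder callable
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "latency_ok": elapsed_ms <= LATENCY_BUDGET_MS[layer],
        "tokens_per_sec": tokens_generated / (elapsed_ms / 1000),
    }

def check_deployment(base_accuracy: float, pruned_accuracy: float,
                     model_bytes: int, device_ram_bytes: int) -> dict:
    return {
        "accuracy_ok": (base_accuracy - pruned_accuracy) <= MAX_ACCURACY_DROP,
        "memory_ok": model_bytes <= MAX_MEMORY_FRACTION * device_ram_bytes,
    }
```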

Conclusion

Deploying LLMs in hybrid cloud-fog computing environments is no longer just theoretical. Extensive research is underway, and it is feasible with the right optimizations. Progressive model pruning offers a powerful way to adapt large language models and deep neural networks to resource-constrained environments, making hybrid AI truly intelligent and responsive.

Whether you are designing a smart assistant or an IoT device at the edge or building low-latency NLP pipelines in distributed environments, this approach can bridge the performance-accessibility gap, bringing LLMs closer to where data is generated and decisions are made.


