The Unreasonable Effectiveness of the Actor Model for Creating Agentic LLM Applications

Large language models are, in effect, actors. We exploit this fact to develop a system to easily create composable agentic applications.

By Alan Littleford · Jun. 25, 25 · Analysis


Given the title we need to define what we mean by agentic applications and actors, and then we can move ahead.

Agentic Applications (AAs)

This term seems to have as many definitions as it has appearances in articles, so I'll add the one I am using here. I hope you'll agree it captures most of the important stuff:

  1. An Agentic Application is given a (possibly implicit) goal and develops a plan to autonomously satisfy that goal.
  2. Agentic Applications have access to tools they can use to help further their partial solutions to achieving that goal.
  3. Agentic Applications generally exist in a broiling stew of data they need access to (or data they need to provide) at each stage of the plan. This data often has high latency, is prone to failure, or is in general very unreliable.

Note that the source of the goal in step #1 may be a human user, or it may be another agentic application or even the same agentic application. In this latter case you can think of this as recursively decomposing a plan into smaller "chunks."

Characteristics of AAs

Typical AAs are:

  • Asynchronous. AAs often need to acquire information from the outside world by visiting other sources of data on internal or external networks. These introduce variable latency the AA needs to handle.
  • Built from reusable, composable components in two ways: Internally, an AA usually has access to a bag of tools, and the planning aspect of an AA is deciding how to compose subsets of these tools to reach various (possibly intermediate) goals. Externally, if correctly architected, it should be possible to create AAs by composing other AAs. Put another way, "It's AAs all the way up."
  • Robust. Non-trivial AAs live in a hostile environment: networks go down, and web sites become temporarily unavailable.
  • Have large language models (LLMs) at their core:
  1. The LLMs are responsible for generating the plans to satisfy the goal.
  2. The LLMs are often involved in extracting executable knowledge from data their tools may have acquired.
  • Possibly persistent. AAs sometimes run for extended periods of time. For instance, a clinical trial matching AA might periodically visit data sources for new trials that might be applicable to a given patient.

In current AAs, the tools are typically executed on the server hosting the LLM, or they can be executed on the AA's server by having the LLM call API endpoints - the latter requiring laborious setup and schema definitions.

To get maximum composability, it would be nice if there were one common mechanism that could span all aspects of AAs. This article claims there is a natural one:

Actors

Actors were first discussed 50 years ago as a computation model. Many frameworks exist: Erlang, Elixir, Akka — but for this article, I’m using Dust, an actor system for Java. 

Previously discussed here:

  • Dust: Open-Source Actors for Java
  • Dust Actors and Large Language Models: An Application

Dust is a lightweight Java framework with a library of useful design patterns. I'll be using specific examples from this library, but the core ideas should be applicable to any actor-like system. I'll also use Groovy for the examples; if you don't know Groovy, think of it as semicolon-less Java. Dust plays nicely with all JVM-based languages.

What Is an Actor?

[This is a very brief overview. See the linked articles above for more detail.] An Actor is a Java object consisting of:

  1. A mailbox to which messages are sent and from which they are handled in FIFO order
  2. Optional state which is only accessible within the object and which may be updated by:
  3. One or more 'behaviors' - handlers for messages retrieved from the mailbox. An actor can internally swap behaviors, but only one behavior is active at any time. The actor waits until a message is available in the mailbox, then removes and processes it with the active behavior. Messages are handled sequentially - i.e., the actor is single-threaded.

Every actor has an address which allows it to communicate with other actors by passing messages back and forth and which can transparently span networks. There is a guarantee that if actor A sends a series of messages to actor B then those messages will be in actor B's mailbox in the order they left actor A (but there may be intervening messages from other actors).
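
The mailbox-plus-single-threaded-behavior contract above can be sketched in a few lines of plain Java (a minimal illustration of the concept, not the Dust API; TinyActor and its methods are invented names):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Minimal actor sketch: a FIFO mailbox drained by a single thread,
// so messages are handled one at a time, in arrival order.
class TinyActor {
    private final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();
    private volatile Consumer<Object> behavior;   // the single active behavior
    private final Thread loop;

    TinyActor(Consumer<Object> initialBehavior) {
        this.behavior = initialBehavior;
        this.loop = new Thread(() -> {
            try {
                while (true) behavior.accept(mailbox.take()); // one message at a time
            } catch (InterruptedException ignored) { }        // stop() interrupts us
        });
        loop.start();
    }

    void tell(Object msg) { mailbox.add(msg); }             // asynchronous send
    void become(Consumer<Object> next) { behavior = next; } // swap the active behavior
    void stop() { loop.interrupt(); }
}
```

Because one thread drains one queue, messages from a single sender are processed in the order they were sent, which mirrors the ordering guarantee described above.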

Every actor, local or remote, has a handle - an ActorRef. A message is sent to an actor by calling its ActorRef's tell() method:

Groovy
 
ref.tell(msg, self)


msg is a Serializable object and self is 'me' - an ActorRef to the sender of the message. At the receiving actor there is a variable, sender, which is the ActorRef of the sender of the message currently being processed.

Actors can create other actors, which become their children, so an actor application is a tree of actors. The actor method actorOf() performs this named child creation:

Groovy
 
actorOf(SomeActorClass.props(), 'my-child')


Every actor class (i.e., a subclass of Actor) has a static props() method (which may take parameters). Calling this method produces a Props object containing all the information needed to produce an instance of the actor, which is exactly what actorOf() does. Note that this is the only way to create an actor; we do not use the regular 'new Constructor()' syntax.
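
The Props idea can be sketched as a deferred factory (an illustrative plain-Java analogy; PropsSketch is an invented name, not Dust's Props class): the Props object captures everything needed to construct the actor later, so the framework, not user code, controls when creation happens.

```java
import java.util.function.Supplier;

// Sketch of the Props pattern: a value object wrapping a factory.
// The framework's actorOf() equivalent calls create() at the moment it
// wants the instance; user code never calls 'new' on the actor directly.
class PropsSketch<T> {
    private final Supplier<T> factory;

    private PropsSketch(Supplier<T> factory) { this.factory = factory; }

    static <T> PropsSketch<T> of(Supplier<T> factory) { return new PropsSketch<>(factory); }

    T create() { return factory.get(); } // called by the framework, not the user
}
```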

The actor method actorSelection takes a path to an actor and returns the target's ActorRef so a message can be sent to it:

Groovy
 
actorSelection('/a/b/c/d').tell(msg, someRef)


Here the actor 'd' (which was created by 'c', which was created by 'b'...) is sent msg, but rather than using self we are using someRef, the ActorRef of a different actor. When 'd' receives msg, it will be as though it came from someRef.

Actors also have a life-cycle—they are born and they can die. They can also recover from exceptions and store state so they can restart.

Finally (and importantly), actors are cheap. A typical actor implementation will support millions of actors on a very modest server. Creating, communicating with, and destroying actors is to be treated as de rigueur.

There is also another kind of actor lurking in the background:

Large Language Models as Actors

LLMs have attributes very similar to those of actors:

  1. The mailbox is the prompt handler. A message (prompt) is sent from a user to the mailbox. The LLM handles one prompt at a time.
  2. The LLM's context is its state.
  3. The behavior is what the LLM does given the last prompt/message and its context/state. In response, it sends a message to the user.
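
To make the analogy concrete, here is a small sketch (the 'model call' below is a canned stand-in, not a real LLM client; all names are invented): the accumulated context plays the role of actor state, and each prompt is a message handled sequentially against it.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the LLM-as-actor analogy: the conversation context is the actor's
// state, each prompt is a mailbox message, and the "behavior" produces a reply
// from the message plus the state.
class LlmActorAnalogy {
    private final List<String> context = new ArrayList<>(); // state

    // "behavior": handle one prompt at a time, updating the state as we go
    String handle(String prompt) {
        context.add("user: " + prompt);
        int turn = context.size() / 2 + 1;                  // 1-based conversation turn
        String reply = "turn " + turn + ": ack '" + prompt + "'"; // stand-in for a model call
        context.add("assistant: " + reply);
        return reply;
    }
}
```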

Notice that in the discussion of actors we did not deal with problems of latency or reliability—the framework takes care of this. LLMs present the same issues of latency and reliability, so if the 'actor-like' LLM is wrapped in an actor, the bigger framework will take care of many of these issues.

We are limiting the LLM's tool use to tools which do not break this model. For instance, web search is fine since it is just used to better respond to the prompt, but executing code somewhere definitely violates our model.

Dust has actors which wrap LLMs in the Actor framework:

Groovy
 
ActorRef chatGPTRef = actorOf(
    ServiceManagerActor.props(
        ChatGptAPIServiceActor.props(key),
        4
    ),
    'chat-gpt'
)


This creates a ServiceManagerActor with two initial parameters—the Props of the ServiceActors to create and the maximum number of simultaneous instances of these actors to allow. The type of actor being managed here is a ChatGptAPIServiceActor, created with a parameter which is our OpenAI key. ServiceManagers do one thing: they wait for a message, create an instance of the specified ServiceActor (or, if the pool is full, wait for a slot to free up), and send that message on to the newly created actor (as though it came from the original client).
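
The pooling behavior can be sketched with a counting semaphore (an illustration of the idea, not Dust's implementation; all names are invented): at most maxWorkers requests are serviced at once, and each slot is reclaimed when its one-shot worker finishes.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Function;

// Sketch of the ServiceManager idea: cap the number of simultaneous one-shot
// workers; each worker handles a single request and "dies", freeing its slot.
class ServiceManagerSketch {
    private final Semaphore slots;
    private final Function<String, String> service; // stand-in for a ServiceActor

    ServiceManagerSketch(int maxWorkers, Function<String, String> service) {
        this.slots = new Semaphore(maxWorkers);
        this.service = service;
    }

    String handle(String request) {
        try {
            slots.acquire();                    // wait for a free slot in the pool
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for a slot", e);
        }
        try {
            return service.apply(request);      // the worker fulfills its goal...
        } finally {
            slots.release();                    // ...and its slot frees up (it "dies")
        }
    }
}
```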

A ServiceActor follows a convention: fulfill a clearly defined goal and die. A ChatGptAPIServiceActor takes a message containing a prompt and returns that message to the requestor with the LLM's response—and then dies.

Groovy
 
class ChatGPTRequestResponseMsg implements Serializable {
    String prompt
    String response = null

    ChatGPTRequestResponseMsg(String prompt) {
        this.prompt = prompt
    }
}


Message handling in a Dust actor is implemented by defining a createBehavior() method:

Groovy
 
void preStart() {
    chatGPTRef.tell(
        new ChatGPTRequestResponseMsg('why is the sky blue?'),
        self
    )
}

ActorBehavior createBehavior() {
    (Serializable msg) -> {
        switch (msg) {
            case ChatGPTRequestResponseMsg:
                println msg.response
                stopSelf()
                break

            default:
                println "Got unexpected message $msg"
        }
    }
}


preStart() is an actor method which is executed when the actor is created (when its mailbox and message loop are up and running, but before any messages have been processed). So the actor extract above will send a message to the actor at chatGPTRef, which will in turn create a ChatGptAPIServiceActor to handle that message. This actor talks to ChatGPT (creating an HttpServiceActor behind the scenes, of course) by sending it the prompt and processing the response. It puts the utterance from ChatGPT into the message's response field, sends it back to the original requestor (the actor above), and dies.

The requestor's mailbox picks up the message, extracts and prints the response from the LLM, and then, in this particular case, kills itself as well, since now we understand why the sky is blue.

Agentic?

This is all well and good, but we appear to have thrown away any agentic capabilities when we declined the use of most of the LLM's tools. In terms of our original three aspects of agentic systems, the use of actors clearly addresses #3, giving a powerful and uniform way of incorporating LLMs in very robust, scalable, and distributable systems, but we appear to have thrown the baby out with the bath water... not so fast.

Pipelines

Dust has a PipelineActor class:

Groovy
 
static PipelineActor.props(List<Props> stageProps, List<String> stageNames)


When created, the pipeline actor creates children from the stageProps list, giving each the corresponding name in stageNames. As the word 'stage' implies, there is a strict order associated with the pipe: the first actor in the list is stage-1, the second stage-2, and so on. When a message is sent to a pipeline, the pipe sends the message to stage-1. Any messages stage-1 sends back to the pipe are sent to stage-2, and so on. The final stage can optionally send messages back to the original sender.
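
The stage-to-stage forwarding can be sketched as function composition (illustrative only; real pipeline stages are asynchronous actors exchanging messages):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of pipeline semantics: a message enters stage-1; whatever a stage
// emits is forwarded to the next stage, and the final stage's output goes
// back to the original sender.
class PipelineSketch {
    private final List<UnaryOperator<String>> stages;

    PipelineSketch(List<UnaryOperator<String>> stages) { this.stages = stages; }

    String tell(String msg) {
        for (UnaryOperator<String> stage : stages) msg = stage.apply(msg);
        return msg; // reply to the original sender
    }
}
```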

There are special stages called Hubs—these can broadcast messages to their children, so while a pipeline conjures up the notion of a single pipe, one can construct pipelines which are a tree of pipes (laid on its side).

A pipeline is just an actor. It can be created by other actors, process messages and then die. If we regard the stageProps actors as simple tools, then exactly like the Unix pipe, '|', we can compose small tools into bigger tools to accomplish goals. Recall:

#2 Agentic Applications have access to tools they can use to help further their partial solutions to achieving that goal.

Two down, one to go.

The Magic Sauce

The path is clear. We use the LLMs to formulate plans to solve a goal. The plans are implemented by creating Pipeline actors which combine tools to (possibly partially) solve the goal. These custom tools are disposable and self-managing—the Pipelines can destroy themselves once their job is done.

One advantage Groovy has over Java is that while Groovy is compiled it also contains a shell accessible from within the language which can execute Groovy source code.

Thus if we can get the LLM to generate the source code of the pipeline we can create and execute it completely within our framework.

In practice it is easier to have the LLM generate a list of the required stages which we then use to build the pipeline using a simple constructor. 
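
One way to sketch this 'list of stage names' approach (all names invented for illustration): a registry maps the names the model is allowed to emit onto known stage implementations, so the model picks tools rather than emitting arbitrary code.

```java
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Sketch: the LLM emits stage names; we look each one up in a registry of
// known stages and compose them into a single pipeline function.
class StageRegistrySketch {
    static final Map<String, UnaryOperator<String>> REGISTRY = Map.of(
        "upper", s -> s.toUpperCase(),
        "trim",  String::trim
    );

    static UnaryOperator<String> build(List<String> stageNames) {
        UnaryOperator<String> pipe = s -> s;
        for (String name : stageNames) {
            UnaryOperator<String> stage = REGISTRY.get(name); // unknown names could be rejected here
            UnaryOperator<String> prev = pipe;
            pipe = s -> stage.apply(prev.apply(s));
        }
        return pipe;
    }
}
```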

Consider the news reader we discussed in the previous article (Dust Actors and Large Language Models: An Application). We would like to set up agents that watch the news and let us know when they see certain things, or ask them to find new sources of information. We need a prompt:

Assume you have a set of Groovy functions: 
1. add_to_lens('entity name')
2. watch_entity('entity name')    
3. watch_event('description of an event to match against')  
4. watch_event_and_notify('description of an event to match against')    
5. unknown_command('command you could not match')  
6. start()
7. stop()
8. recent_news_about('entity name')
9. add_to_rss_feeds(['feed1url', 'feed2url' ...])
Below is a list of commands. Convert each command into a function call using the functions above.
Assume the methods are defined on object 'actor'.
Return a list of quoted closures, where each closure contains functions related to the
same entity. Always include a call to start() function as the first function
in the closure and the stop() function as the last in the closure. 
If the commands include both watch_event and watch_event_and_notify function calls for the *same event*,
only include the watch_event_and_notify.
If you need to search the web for *accurate* information do so. 
If asked for RSS feeds recall most such feed urls end in /feeds or /rss so focus
your search on those.
Return just a list of Groovy closures containing appropriate function calls. For example:
[ 
  "{ actor.start(); actor.add_to_lens('IBM'); actor.stop() }", 
  "{ actor.start(); actor.add_to_lens('HPE'); actor.watch_entity('HPE'); actor.stop() }",
  "{ actor.start(); actor.watch_event_and_notify('IBM buys startup company'); actor.stop() }",
  "{ actor.start(); actor.add_to_rss_feeds(['https://ex1.com/feeds', 'https://ex2.com/rss']); actor.stop() }"  
]
Add no commentary, just return the list of quoted closures. 
Ensure arguments are accurate. For example, a function accepting a list of urls
must be given a valid Groovy list of strings, nothing else.
Do not use any Markdown syntax in your response.
Commands: $commands 


Test out the prompt:

Tell me when OpenAI makes any announcements about Stargate
Groovy
 
[
  "{ actor.start(); actor.watch_event_and_notify('OpenAI announces Stargate'); actor.stop() }"
]


The prompt to the LLM and its response are wrapped into a GetFunctionCallsMsg and handled within a CommandServiceActor:

Groovy
 
case GetFunctionCallsMsg:
    Binding binding = new Binding()
    binding.setVariable('actor', owner)
    GroovyShell shell = new GroovyShell(binding)

    List<String> closures = shell.evaluate(msg.utterance)
    closures.each { String pipe ->
        actorOf(ExecutionServiceActor.props()).tell(
            new ExecuteCommandsMsg(shell, pipe),
            self
        )
    }
    break


Here 'owner' is the CommandServiceActor itself (since the message handler is in a Groovy closure, we cannot use 'this'), which provides the implementations of start, stop, add_to_lens, etc.

start() begins creation of the pipe; everything else except stop() adds stage Props to the pipeline's props list (actors which implement add_to_lens, watch_entity, etc.). stop() then takes the list of stage Props so created, adds a stop stage, builds a pipeline actor from them, and sends that pipeline a StartMsg.

The pipeline runs and then dies. During its run, it creates long-lived actors which are given summaries of news articles, in turn use the LLM to answer questions ('Does this article describe an event that I'm watching for?'), and send updates to the user.

This example, while very simple, is useful and truly agentic: your request for specific information is converted into autonomous agents which continually watch the news for specific events of interest to you and notify you when these turn up. If you lose interest in a specific event, then simply kill the actor which is looking for it—all the infrastructure it built up to fulfill your request will be destroyed with it, since the death of an actor automatically causes the death of all its children.

Summary

The Actor paradigm seems to be unreasonably effective for creating agentic applications. Unlike current LLM tool models, which rely on a string-and-baling-wire approach, the Actor approach reduces everything down to two things: actors and messages.

Pipelines provide the glue that ties problem mapping performed by the LLM to Agentic execution all within a single distributable framework.

Since all agents created in this fashion are actors the ability to compose agents into more complex agents comes for free—it is indeed actors all the way up.


Opinions expressed by DZone contributors are their own.
