
A Practical Guide to Augmenting LLM Models With Function Calling

Learn how to build a more dynamic AI application using a no-code tool, enabling integration of external functions with OpenAI LLM.

By Javier Teso · Updated by Pranav K · Nov. 26, 24 · Tutorial

What Is Function Calling?

LLMs are powerful, but they are limited to the data they were trained on. Enter function calling: a feature that lets us extend an LLM's capabilities by "calling" functions to gather external information. This means we can teach the AI how to fetch specific, real-world data, such as the current weather, and use it to provide more relevant answers.
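In the OpenAI Chat Completions API, a callable function is described to the model as a JSON Schema "tool." The sketch below shows a minimal weather tool; the function name and parameters are illustrative assumptions, not the actual definitions used in this demo:

```javascript
// Illustrative tool definition in the OpenAI Chat Completions "tools" format.
// The name and parameter schema are assumptions for this example.
const tools = [
  {
    type: "function",
    function: {
      name: "get_current_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name, e.g. Sydney" }
        },
        required: ["city"]
      }
    }
  }
];

// This array is passed alongside the messages, e.g.:
// openai.chat.completions.create({ model, messages, tools })
console.log(tools[0].function.name); // get_current_weather
```

When the model decides the user's question needs weather data, it responds with a request to call this function instead of a plain text answer.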

Use Case

“What should I wear today?” The answer varies based on the weather, but now it can be automated. Imagine an API that recommends clothing based on the forecast. In this demo, we used Kumologica, a low-code tool, to integrate OpenAI with a weather service. The goal: to enable OpenAI to use real-time weather data and offer on-point clothing advice.

Design

Our focus was on keeping things simple and effective. Using Kumologica, we connected OpenAI's function-calling capability to a custom weather-based wardrobe API. When OpenAI encounters a query about what to wear, function calling activates and fetches data from an external weather API. The flow gathers weather details (temperature, precipitation, wind) and the model interprets them to provide clothing recommendations.

Design diagram
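Conceptually, the flow is a round trip: the model asks for a function call, the flow executes it and returns the result as a tool message, and the model then produces the final answer. The sketch below stubs out both the model and the weather lookup (no real OpenAI or WeatherStack calls), so the message shapes follow the Chat Completions format but all values are assumptions for illustration:

```javascript
// Stub: pretend the model asked us to call get_current_weather for Sydney,
// then answers once it sees a tool result.
function fakeModelTurn(messages) {
  const hasToolResult = messages.some((m) => m.role === "tool");
  if (!hasToolResult) {
    return {
      role: "assistant",
      tool_calls: [
        { id: "call_1", type: "function",
          function: { name: "get_current_weather", arguments: '{"city":"Sydney"}' } }
      ]
    };
  }
  return { role: "assistant", content: "It's mild and windy: a light jacket should do." };
}

// Stub: stands in for the real WeatherStack lookup.
function getWeather(city) {
  return { city, temperature: 18, wind_speed: 24, precip: 0 };
}

const messages = [{ role: "user", content: "What should I wear today in Sydney?" }];

let reply = fakeModelTurn(messages);
while (reply.tool_calls) {
  messages.push(reply);
  for (const call of reply.tool_calls) {
    const args = JSON.parse(call.function.arguments);
    const result = getWeather(args.city);
    // Tool results go back to the model as role:"tool" messages.
    messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
  }
  reply = fakeModelTurn(messages);
}

console.log(reply.content);
```

Kumologica's OpenAI and OpenAIToolEnd nodes handle this loop for you; the sketch only shows the protocol underneath.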

Implementation

Kumologica’s low-code setup made the integration straightforward. We designed a flow that takes a city name as input, triggers a weather API call, and relays the data to OpenAI through function calling. The result? A tailored clothing suggestion: layers for chilly weather, light attire for warmth, and rain gear when it’s wet. The entire process is quick and simple.

The following diagram illustrates the complete implementation of the API, detailing the flow of data and the steps involved in generating personalized clothing recommendations based on weather conditions.

Complete implementation of the API

Here’s an explanation of the components: 

1. GET /wear (EventListener)

This represents an HTTP GET request endpoint. It takes a query parameter (“city”) to initiate the flow.

2. Store City (Set-Property)

This stores the city name from the incoming request.

3. OpenAI (OpenAI)

This calls the OpenAI API to generate personalized clothing recommendations based on the weather for the given city.

4. Set Weather API URL (Function)

This is a JavaScript function that dynamically constructs the URL for querying a weather API based on the stored city name.
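The actual function body is not shown in the article; a plausible sketch, assuming the flow carries the stored city on a `msg` object, is below. WeatherStack's current-weather endpoint takes `access_key` and `query` parameters:

```javascript
// Build the WeatherStack current-weather URL from the stored city name.
// The msg property names are assumptions about how the flow passes data.
function setWeatherApiUrl(msg, apiKey) {
  const city = encodeURIComponent(msg.city);
  msg.url = `http://api.weatherstack.com/current?access_key=${apiKey}&query=${city}`;
  return msg;
}

const msg = setWeatherApiUrl({ city: "Sydney" }, "YOUR_API_KEY");
console.log(msg.url);
```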

5. Weather API (HTTP Request)

This calls a third-party weather service API (WeatherStack) to retrieve current weather data.

6. Return Weather Data (OpenAIToolEnd)

This passes the weather data back to the OpenAI node (3) to complete the request.

7. Return Results (EventListenerEnd)

This sends the final clothing recommendations back to the user as a response to their initial request.

Try It

1. Install Kumologica

```shell
npm install -g @kumologica/sdk
```


2. Clone the Project Repository

```shell
git clone https://github.com/KumologicaHQ/demo-weather-wear
cd demo-weather-wear
```


3. Install the Dependencies

```shell
npm install
```

4. Set Environment Variables 

Create a .env file in the root directory and configure the following variables:

```shell
OPENAI_API_KEY=<Your OpenAI API Key>
WEATHERSTACK_API=<Your Weatherstack API Key>
```


5. Run Locally

Open Kumologica Designer:

```shell
kl open .
```


To test the API locally, you can use the Designer's TestCase or an external tool like Postman or curl to make a request: GET http://127.0.0.1:1880/wear?city=sydney.

Wrap Up

Function calling opens up exciting possibilities for enhancing the capabilities of LLMs by allowing them to interact with real-world data in real time.

The use case presented can be expanded further with parameter definition in function calling, enabling the LLM to extract parameters from the prompt, as well as the ability to call multiple functions. This paves the way for even more powerful and customized AI-driven solutions.


Opinions expressed by DZone contributors are their own.
