
Building LangChain Applications With Amazon Bedrock and Go: An Introduction

Follow this tutorial to learn how to extend the LangChain Go package to include support for Amazon Bedrock.

By Abhishek Gupta · Nov. 14, 2023 · Tutorial

One of our earlier blog posts discussed the initial steps for diving into Amazon Bedrock by leveraging the AWS Go SDK. Subsequently, our second blog post expanded upon this foundation, showcasing a Serverless Go application designed for image generation with Amazon Bedrock and AWS Lambda ("Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers").

Amazon Bedrock is a fully managed service that makes foundation models from Amazon and third-party providers (such as Anthropic, Cohere, and more) accessible through an API. The applications demonstrated in those blog posts called the Amazon Bedrock APIs directly, without any additional layer of abstraction or framework/library. This approach is particularly effective for learning and for crafting straightforward solutions.
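
For reference, that direct approach boils down to calling the Bedrock runtime client from the AWS Go SDK. Here is a minimal, standalone sketch (the region, model ID, and prompt are illustrative):

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion("us-east-1"))
    if err != nil {
        log.Fatal(err)
    }
    client := bedrockruntime.NewFromConfig(cfg)

    // Claude v2 request body (abridged); prompt text is illustrative
    body, _ := json.Marshal(map[string]any{
        "prompt":               "\n\nHuman: Hello!\n\nAssistant:",
        "max_tokens_to_sample": 256,
    })

    out, err := client.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
        Body:        body,
        ModelId:     aws.String("anthropic.claude-v2"),
        ContentType: aws.String("application/json"),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(out.Body))
}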

However, developing generative AI applications goes beyond simply calling large language models (LLMs) via an API. You also need to think about other parts of the solution: intelligent search (also known as semantic search, which often requires a specialized data store), orchestrating sequential workflows (e.g., invoking another LLM based on the previous LLM's response), loading data sources (text, PDFs, links, etc.) to provide additional context for LLMs, maintaining conversational history (for chatbot/QA solutions), and much more. Implementing these features from scratch is difficult and time-consuming.

Enter LangChain, a framework that provides off-the-shelf components to make it easier to build applications with language models. It is available in multiple programming languages, including Python, JavaScript, Java, and Go.

langchaingo is the LangChain implementation for the Go programming language. This blog post covers how to extend langchaingo to use foundation models from Amazon Bedrock.

The code is available in the langchaingo-amazon-bedrock-llm GitHub repository.

LangChain Modules

One of LangChain's strengths is its extensible architecture, and the same applies to the langchaingo library. It supports components/modules, each with interface(s) and multiple implementations. Some of these include:

  • Models: These are the building blocks that allow LangChain apps to work with multiple language models (such as ones from Amazon Bedrock, OpenAI, etc.).
  • Chains: These can be used to create a sequence of calls that combine multiple models and prompts.
  • Vector databases: These store unstructured data in the form of vector embeddings. At query time, the unstructured query is embedded, and a semantic/vector search retrieves the embedding vectors that are "most similar" to the embedded query.
  • Memory: This module allows you to persist the state between chain or agent calls. By default, chains are stateless, meaning they process each incoming request independently (the same goes for LLMs).

This provides ease of use, choice, and flexibility while building LangChain-powered Go applications. For example, you can change the underlying vector database by swapping the implementation with minimal code changes. Since langchaingo provides many large language model implementations, the same applies here as well.
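
To make this concrete, here is a minimal sketch of application code written against the llms.LLM interface. Because the concrete implementation hides behind that interface, the Bedrock-backed claude plugin built later in this post could be swapped for any other langchaingo model with a one-line change:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/build-on-aws/langchaingo-amazon-bedrock-llm/claude"
    "github.com/tmc/langchaingo/llms"
)

func main() {
    // Program against the interface; swap in another implementation
    // (e.g., langchaingo's OpenAI model) without touching the rest
    // of the application.
    var model llms.LLM
    var err error

    model, err = claude.New("us-east-1")
    if err != nil {
        log.Fatal(err)
    }

    answer, err := model.Call(context.Background(), "What is LangChain?", llms.WithMaxTokens(512))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(answer)
}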

langchaingo Implementation for Amazon Bedrock

As mentioned before, Amazon Bedrock provides access to multiple models, including ones from Cohere, Anthropic, etc. We will cover how to extend langchaingo to build a plugin for the Anthropic Claude (v2) model, but the guidelines apply to other models as well.

Let's walk through the implementation at a high level.

Any custom model (LLM) implementation has to satisfy the langchaingo LLM and LanguageModel interfaces, so it implements the Call, Generate, GeneratePrompt, and GetNumTokens functions.
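
For reference, those interfaces in github.com/tmc/langchaingo/llms looked roughly like this at the time of writing (abridged excerpt; signatures may have changed since, so check the package source for the exact definitions):

// abridged from github.com/tmc/langchaingo/llms
type LLM interface {
    Call(ctx context.Context, prompt string, options ...CallOption) (string, error)
    Generate(ctx context.Context, prompts []string, options ...CallOption) ([]*Generation, error)
}

type LanguageModel interface {
    GeneratePrompt(ctx context.Context, promptValues []schema.PromptValue, options ...CallOption) (*LLMResult, error)
    GetNumTokens(text string) int
}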

The key part of the implementation is in the Generate function. Here is a breakdown of how it works.

  1. The first step is to prepare the JSON payload to be sent to Amazon Bedrock. This contains the prompt/input along with other configuration parameters.
//...
    payload := Request{
        MaxTokensToSample: opts.MaxTokens,
        Temperature:       opts.Temperature,
        TopK:              opts.TopK,
        TopP:              opts.TopP,
        StopSequences:     opts.StopWords,
    }

    if o.useHumanAssistantPrompt {
        // wrap the input in the Human/Assistant format expected by Claude
        payload.Prompt = fmt.Sprintf(claudePromptFormat, prompts[0])
    } else {
        payload.Prompt = prompts[0] // assumed: pass the raw prompt through as-is
    }

    payloadBytes, err := json.Marshal(payload)
    if err != nil {
        return nil, err
    }


The payload is represented by the Request struct, which is marshalled into JSON before being sent to Amazon Bedrock.

type Request struct {
    Prompt            string   `json:"prompt"`
    MaxTokensToSample int      `json:"max_tokens_to_sample"`
    Temperature       float64  `json:"temperature,omitempty"`
    TopP              float64  `json:"top_p,omitempty"`
    TopK              int      `json:"top_k,omitempty"`
    StopSequences     []string `json:"stop_sequences,omitempty"`
}
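
To make the wire format concrete, here is a small standalone program that marshals this struct and prints the JSON sent to Bedrock (the prompt and parameter values are illustrative):

package main

import (
    "encoding/json"
    "fmt"
)

// Request mirrors the struct shown above.
type Request struct {
    Prompt            string   `json:"prompt"`
    MaxTokensToSample int      `json:"max_tokens_to_sample"`
    Temperature       float64  `json:"temperature,omitempty"`
    TopP              float64  `json:"top_p,omitempty"`
    TopK              int      `json:"top_k,omitempty"`
    StopSequences     []string `json:"stop_sequences,omitempty"`
}

func main() {
    payload := Request{
        // Claude v2 expects the "\n\nHuman: ...\n\nAssistant:" prompt format
        Prompt:            "\n\nHuman: Write a haiku about Go\n\nAssistant:",
        MaxTokensToSample: 2048,
        Temperature:       0.5,
        TopK:              250,
    }

    b, err := json.Marshal(payload)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
    // Output:
    // {"prompt":"\n\nHuman: Write a haiku about Go\n\nAssistant:","max_tokens_to_sample":2048,"temperature":0.5,"top_k":250}
}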


2. Next, Amazon Bedrock is invoked with the payload and configuration parameters. Both synchronous and streaming invocation modes are supported.

The streaming mode is demonstrated in a dedicated example later in this post. Here is how the two invocation paths are selected:

//...
    if opts.StreamingFunc != nil {

        resp, err = o.invokeAsyncAndGetResponse(payloadBytes, opts.StreamingFunc)
        if err != nil {
            return nil, err
        }

    } else {
        resp, err = o.invokeAndGetResponse(payloadBytes)
        if err != nil {
            return nil, err
        }
    }


This is how the asynchronous invocation path is handled: the first part invokes the InvokeModelWithResponseStream function, and the ProcessStreamingOutput function then handles the InvokeModelWithResponseStreamOutput response.

You can refer to the details in the "Using the Streaming API" section of "Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers," linked in the introduction of this article.

//...
func (o *LLM) invokeAsyncAndGetResponse(payloadBytes []byte, handler func(ctx context.Context, chunk []byte) error) (Response, error) {

    output, err := o.brc.InvokeModelWithResponseStream(context.Background(), &bedrockruntime.InvokeModelWithResponseStreamInput{
        Body:        payloadBytes,
        ModelId:     aws.String(o.modelID),
        ContentType: aws.String("application/json"),
    })

    if err != nil {
        return Response{}, err
    }

    var resp Response

    resp, err = ProcessStreamingOutput(output, handler)

    if err != nil {
        return Response{}, err
    }

    return resp, nil
}

func ProcessStreamingOutput(output *bedrockruntime.InvokeModelWithResponseStreamOutput, handler func(ctx context.Context, chunk []byte) error) (Response, error) {

    var combinedResult string
    resp := Response{}

    for event := range output.GetStream().Events() {
        switch v := event.(type) {
        case *types.ResponseStreamMemberChunk:

            // decode the streamed chunk into its own Response value
            // (avoids shadowing the outer resp)
            var chunk Response
            err := json.NewDecoder(bytes.NewReader(v.Value.Bytes)).Decode(&chunk)
            if err != nil {
                return resp, err
            }

            // forward the partial completion to the caller-supplied handler
            if err := handler(context.Background(), []byte(chunk.Completion)); err != nil {
                return resp, err
            }
            combinedResult += chunk.Completion

        case *types.UnknownUnionMember:
            fmt.Println("unknown tag:", v.Tag)

        default:
            fmt.Println("union is nil or unknown type")
        }
    }

    resp.Completion = combinedResult

    return resp, nil
}


3. Once the request is processed successfully, the JSON response from Amazon Bedrock is unmarshaled into a Response struct, which is then used to build the slice of Generation instances required by the Generate function signature.

//...
generations := []*llms.Generation{
    {Text: resp.Completion},
}

return generations, nil


Code Samples: Use the Amazon Bedrock Plugin in LangChain Apps

Once the Amazon Bedrock LLM plugin for langchaingo has been implemented, using it is as easy as creating a new instance with claude.New(<supported AWS region>) and calling the Call (or Generate) function.

Here is an example:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/build-on-aws/langchaingo-amazon-bedrock-llm/claude"
    "github.com/tmc/langchaingo/llms"
)

func main() {

    llm, err := claude.New("us-east-1")
    if err != nil {
        log.Fatal(err)
    }

    input := "Write a program to compute factorial in Go:"
    opt := llms.WithMaxTokens(2048)

    output, err := llm.Call(context.Background(), input, opt)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(output)
}


Prerequisites

Before executing the sample code, clone the GitHub repository and change to the right directory:

git clone https://github.com/build-on-aws/langchaingo-amazon-bedrock-llm
cd langchaingo-amazon-bedrock-llm/examples


Refer to the Before You Begin section in "Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers" to complete the prerequisites for running the examples. This includes installing Go, configuring Amazon Bedrock access, and providing necessary IAM permissions.

Run Basic Examples

This example demonstrates tasks such as code generation, information extraction, and question-answering. You can refer to the code in the GitHub repository.

go run main.go


Run Streaming Output Example

In this example, we pass in the WithStreamingFunc option to the LLM invocation. This will switch to the streaming invocation mode for Amazon Bedrock.

You can refer to the code in the GitHub repository.

//...
_, err = llm.Call(context.Background(), input,
    llms.WithMaxTokens(2048),
    llms.WithTemperature(0.5),
    llms.WithTopK(250),
    llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
        fmt.Print(string(chunk))
        return nil
    }))


To run the program:

go run streaming/main.go


Conclusion

LangChain is a powerful and extensible framework that allows us to plug in external components as required. This blog post demonstrated how to extend langchaingo to work with the Anthropic Claude model available in Amazon Bedrock. You can use the same approach to implement support for other Amazon Bedrock models, such as Amazon Titan.
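
For instance, the Amazon Titan text models accept a different JSON request shape. A hedged sketch of what the corresponding request types might look like (the type names are hypothetical, the field names follow Titan's documented schema, and you should verify them against the current Bedrock model reference):

//...
// hypothetical request types for an Amazon Titan text model plugin
type titanRequest struct {
    InputText            string                `json:"inputText"`
    TextGenerationConfig titanGenerationConfig `json:"textGenerationConfig"`
}

type titanGenerationConfig struct {
    MaxTokenCount int      `json:"maxTokenCount"`
    Temperature   float64  `json:"temperature,omitempty"`
    TopP          float64  `json:"topP,omitempty"`
    StopSequences []string `json:"stopSequences,omitempty"`
}

The invocation and streaming plumbing would follow the same structure as the Claude implementation shown earlier.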

The examples showed how to build simple LangChain apps using the Call function. In future blog posts, I will cover how to use these models as part of chains to implement functionality like a chatbot or QA assistant.

Until then, happy building!


Published at DZone with permission of Abhishek Gupta, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
