How to Build GenAI Applications with Amazon Bedrock by @ramsjha
12,616 reads


by ramsjha, February 16th, 2024

Too Long; Didn't Read

Amazon Bedrock empowers Gen-AI development by providing simplified access to foundation models from leading providers like Meta and Anthropic. Explore how to build advanced AI applications using the AWS Console or API integration, enabling rapid innovation while ensuring security and privacy.

“Gen-AI is unleashing the true potential of what an AI application can bring, and Amazon Bedrock complements this by simplifying it. Amazon Bedrock makes foundation models from Amazon and other providers (Meta, Anthropic, AI21 Labs, etc.) accessible in a simplified fashion, ensuring that security, privacy, and responsible-AI perspectives are well catered for. Amazon Bedrock is an enabler that speeds up innovation by providing the capability to choose and experiment with FMs, train them for specific needs, customize them with your own data with added security, and use agents to automate tasks. Amazon Bedrock, just like other AWS capabilities, is serverless, and it gives wings to building Gen-AI capability” - https://aws.amazon.com/bedrock/


In this blog, we will go through how to build a Gen-AI application using Amazon Bedrock, while learning Amazon Bedrock concepts and accelerating Gen-AI development. Let’s take the example of a chat assistant deployed in the cloud (AWS) using Gen-AI and customer-specific data; the architecture is illustrated below.



The expected outcome is a conversation in natural English that is contextually relevant to the provided data and accurate.


In very simplistic flow:

  • End users query the chat assistant via an interface that understands natural language, preferably English.
  • The query is processed and sent to the relevant AWS services for computation, based on business logic and query context.
  • Organization data is stored in enterprise storage in the cloud, and the query is parsed against that data for context.
  • The contextual data, together with the end-user query, is sent to the Amazon Bedrock Gen-AI service using any foundation model.
  • Amazon Bedrock leverages a foundation model to generate a response and sends it to the experience layer.
  • The experience layer returns the result to the end user via the chatbot, providing the answer in natural language.
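The flow above can be sketched as a simple pipeline. This is a hedged illustration only: `retrieve_context` and `generate_answer` are hypothetical placeholders for the enterprise-storage lookup and the Bedrock FM call, not real AWS APIs.

```python
def retrieve_context(query: str, documents: list[str]) -> str:
    """Hypothetical stand-in for the enterprise-storage lookup:
    keep documents that share at least one word with the query."""
    words = set(query.lower().split())
    hits = [d for d in documents if words & set(d.lower().split())]
    return " ".join(hits)

def generate_answer(query: str, context: str) -> str:
    """Hypothetical stand-in for the Amazon Bedrock FM call."""
    return f"Based on our records ({context}), here is an answer to: {query}"

def chat_assistant(query: str, documents: list[str]) -> str:
    # Steps 1-3: process the query and parse it against organization data.
    context = retrieve_context(query, documents)
    # Steps 4-5: send query + context to the FM and collect the response.
    answer = generate_answer(query, context)
    # Step 6: the experience layer returns this to the end user.
    return answer
```

In the real system, `generate_answer` would be replaced by the `invoke_model()` call shown later in this post.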


With this example, it seems intuitive to build/leverage a Gen-AI application using the API, in a more secure and customer-data-centric way for the use case. There are many more use cases for Amazon Bedrock: (a) Text Generation (b) Search (c) Text Summarization (d) Image Generation (e) Personalization, etc.


How to get around Amazon Bedrock: Amazon Bedrock can be accessed using the AWS Console or the API.


“AWS Console: Amazon Bedrock provides a mechanism (the playground) to explore foundation models using multi-modal capabilities (text, image, contextual chat). The way to interact is to use a prompt, or input, to converse and get the desired output; under the hood it can be any of a variety of foundation models. Bedrock takes care of abstracting the complexity; however, certain parameters are provided to tune the model based on need.”


E.g., in the chat playground, you can interact with the FM of your choice using a conversational interface. In the example below, the Stable Diffusion FM is used for a text-to-image prompt and output. The following architecture diagram is used for the demonstration: consumers interact with Amazon Bedrock in the console for chat, text, and image playground activity with the FMs.


“API: The AWS Console is the best way to explore without getting into application-development nitty-gritty, mostly in the proof-of-concept phase. Once you pass that phase, the API (AWS SDK) is the way to integrate any foundation model into an application, using Amazon Bedrock’s unified API, which is backed by nearly every FM available (like a marketplace).”


You can optionally set inference parameters to influence the response generated by the model. FMs support the following types of inference parameters (a) Temperature (b) Top P (c) Response Length (d) Stop Sequence.
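As a hedged illustration, these parameters appear as fields in the request body sent to the model. The field names below follow the Claude-style `invoke_model` example later in this post; `max_tokens_to_sample` plays the role of response length.

```python
import json

# Request body for a Claude-style text-completion model on Bedrock.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize Amazon Bedrock in one line.\n\nAssistant:",
    "temperature": 0.5,                # randomness: 0 = deterministic, 1 = creative
    "top_p": 1,                        # nucleus-sampling cutoff
    "max_tokens_to_sample": 300,       # response-length cap
    "stop_sequences": ["\n\nHuman:"],  # stop before the next user turn
})
```

This `body` string is what gets passed to `invoke_model()` in the steps below.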


So far this has been more of a theoretical perspective. Let’s pick the example of a prompt-based chatbot using Bedrock, the LangChain framework, the Claude FM, Boto3, and the Python SDK. To implement it, follow the steps below:


  • Leverage the AWS SDK for Python (Boto3) for API-based calls (rather than the console)


  • Pre-requisites:

    1. pip install “boto3”, “awscli”, and “botocore” for API invocation

    2. pip install the packages “langchain”, “transformers”, “sqlalchemy”, “matplotlib”, and “anthropic”

    3. Create a boto3 client (which can vary based on environment) by invoking a get_bedrock_client() helper with runtime parameters

    4. Validate the connection and call list_foundation_models() to see what is available to use.


  • Get the prompt data from the user input and put it in a variable such as prompt_data (a plain string; it is serialized to JSON in the next step)


  • data = json.dumps({"prompt": prompt_data, "max_tokens_to_sample": 500})
    


  • Invoke Model: the invoke_model() API is used to invoke the model via the Amazon Bedrock runtime client

    Input

    {
        "prompt": "\n\nHuman:<prompt>\n\nAnswer:",
        "max_tokens_to_sample": 300,
        "temperature": 0.5,
        "top_k": 250,
        "top_p": 1,
        "stop_sequences": ["\n\nHuman:"]
    }
    
    


    Output

    {
        "completion": "<output>",
        "stop_reason": "stop_sequence"
    }
    

    modelId = “anthropic.claude-v2” (for example), with accept and contentType both set to “application/json”


try:
    response = bedrock_runtime.invoke_model(
        body=body, modelId=modelId, accept=accept, contentType=contentType
    )
    response_body = json.loads(response.get("body").read())
    print(response_body.get("completion"))
except Exception as err:
    print(f"Model invocation failed: {err}")


  • Iterate model invocation for conversation: using the prompt and overriding the input data on each turn, converse via the LangChain framework to make the model hold a conversation.
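LangChain's conversation chains automate this loop; the underlying mechanics, accumulating each turn into Claude's Human/Assistant prompt format, can be sketched in plain Python. This is a hedged illustration: `build_claude_prompt` is this post's own helper, not a LangChain or Bedrock API.

```python
def build_claude_prompt(history: list[tuple[str, str]], new_input: str) -> str:
    """Rebuild the full Claude text-completion prompt from prior turns.

    history: list of (human_message, assistant_reply) pairs.
    """
    prompt = ""
    for human, assistant in history:
        prompt += f"\n\nHuman: {human}\n\nAssistant: {assistant}"
    # End with an open Assistant turn so the model continues from here.
    prompt += f"\n\nHuman: {new_input}\n\nAssistant:"
    return prompt

# Each turn: send build_claude_prompt(...) as the "prompt" field of
# invoke_model, then append (new_input, completion) to history.
```

The `"\n\nHuman:"` stop sequence set earlier keeps the model from generating the next user turn itself.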


  • Close connection and exit()


Now you have seen how easy it is to build a basic conversational chatbot in a few steps. However, further complexity will creep in when it needs to be context-aware; Lex limitations, security, cost, and other constraints need to be balanced.


To summarize, Amazon Bedrock seamlessly handles the complexity of natural-language problems and lets users focus on the business problem in the most secure and easiest way (and serverless, no less).