
How Separation of Concerns Elevates Your AI Strategy

by Sumant Shringari, October 6th, 2023

Too Long; Didn't Read

Whether to invoke OpenAI's services directly from your business logic depends on your context. Direct integration with AI services can create long-term technical debt and reduce future agility; by some estimates, technical debt accounts for up to 40% of the overall value of a company's technology.


Disclaimer: I’m the co-founder of Pontus.so


Treat AI as Software

In the era of integrating AI into software applications of every kind, it's essential to remember that AI is, at its core, still software. The same fundamental principles apply, and one worth emphasizing is Separation of Concerns (SoC), whose primary goal is to enhance the modularity, readability, maintainability, and reusability of software systems.


As defined by Wikipedia:


In computer science, the separation of concerns is a design principle for dividing a computer program into distinct sections, each addressing a specific concern.


Why Avoid Directly Calling OpenAI in Your Business Logic?

Whether to invoke OpenAI's services directly from your business logic depends on your context. In the world of personal projects and small startups, agility is your most trusted companion: creativity knows no bounds, and innovation often thrives on the enthusiasm of a small, dedicated team. So if you are working on a personal side project or running a small startup, do whatever it takes to ship and deliver value to your users and customers.


However, for more established companies with teams of more than 20 developers, direct integration with AI services can create long-term technical debt and reduce future agility. By some estimates, technical debt can account for up to 40% of the overall value of a company's technology estate. As it accumulates, it increases coupling, making it hard to switch large language model (LLM) providers or to add observability, caching, and other essential components.


These costs mount the longer you postpone building an abstraction layer within your enterprise, and unwinding the coupling later is time wasted. It is therefore advisable never to call OpenAI directly without going through an interface.
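

As a concrete illustration, here is a minimal sketch in Python of what such an interface might look like. The names (LLMClient, OpenAIClient, summarize_ticket) are hypothetical, invented for this example, and the adapter assumes the pre-1.0 openai SDK that was current when this was written.

```python
from typing import Protocol

import openai  # assumes the pre-1.0 openai SDK; reads OPENAI_API_KEY from the environment


class LLMClient(Protocol):
    """The abstraction business logic depends on, rather than a vendor SDK."""

    def complete(self, prompt: str) -> str: ...


class OpenAIClient:
    """One concrete adapter. Switching providers means writing another adapter."""

    def __init__(self, model: str = "gpt-3.5-turbo") -> None:
        self.model = model

    def complete(self, prompt: str) -> str:
        # The vendor-specific call is isolated to this single class.
        response = openai.ChatCompletion.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]


def summarize_ticket(llm: LLMClient, ticket_text: str) -> str:
    # Business logic sees only the interface, never the vendor.
    return llm.complete(f"Summarize this support ticket:\n\n{ticket_text}")
```

With this in place, swapping OpenAI for another provider, a cached client, or a test double touches one adapter rather than every call site.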


The Role of an AI Orchestration Layer in Separation of Concerns

An AI Orchestration Layer (AOL) is the key to efficiently managing the components required to get from a prompt to a response. It serves as the architectural backbone of an AI application: a central hub that coordinates and manages the flow of information and actions within the AI system, and that provides reusable components that can be swapped out as needed.


A survey of the AI landscape reveals various reusable components, including the following; a sketch of how they compose appears after the list:


  • Monitoring: Ensuring the health of your AI service.
  • Caching: Optimizing GPT calls to save both time and money.
  • PII Sanitization: Protecting user privacy.
  • Toxicity Check: Preventing the dissemination of NSFW content to users.
  • Retrieval-Augmented Generation (RAG): Ensuring GPT has access to relevant information.
  • Logging: Maintaining sanitized records of GPT inputs and outputs.
  • Validation: Ensuring the structure of your responses is well-defined and enforced.
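

To make "reusable and replaceable" concrete, here is a minimal sketch of how an AOL might compose such steps as middleware wrapped around a single LLM call. Everything here (Handler, Middleware, the step functions) is hypothetical, invented for this illustration rather than taken from any particular framework.

```python
from typing import Callable

# A handler turns a prompt into a response; a middleware wraps a handler.
Handler = Callable[[str], str]
Middleware = Callable[[Handler], Handler]


def caching(next_handler: Handler) -> Handler:
    cache: dict[str, str] = {}

    def handler(prompt: str) -> str:
        # Identical prompts are served from the cache, saving time and money.
        if prompt not in cache:
            cache[prompt] = next_handler(prompt)
        return cache[prompt]

    return handler


def logging_step(next_handler: Handler) -> Handler:
    def handler(prompt: str) -> str:
        print(f"[AOL] prompt: {prompt[:60]!r}")  # sanitize PII before real logging
        response = next_handler(prompt)
        print(f"[AOL] response: {response[:60]!r}")
        return response

    return handler


def build_pipeline(base: Handler, steps: list[Middleware]) -> Handler:
    # Wrap from the inside out so the first step in the list runs first.
    for step in reversed(steps):
        base = step(base)
    return base


# The base handler would be a real model call; a stub keeps the sketch runnable.
pipeline = build_pipeline(lambda prompt: "stubbed model output", [logging_step, caching])
print(pipeline("What is separation of concerns?"))
```

PII sanitization, toxicity checks, retrieval, and response validation slot in as further steps in exactly the same way, so each concern stays independently replaceable without touching business logic.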


How to Get Started?

If you prefer a library-based approach and work with Python or JavaScript, consider LangChain, which can help you implement the necessary AI orchestration layer. If you need broader language support or a more opinionated approach to AI integration, you may need to build your own solution or explore Pontus.so.
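

For a taste of the library route, here is a minimal LangChain sketch of the same kind of call as the earlier ticket-summarization example. It assumes a 2023-era LangChain release (the library's API has shifted considerably across versions) and an OPENAI_API_KEY in the environment; the prompt and model choice are illustrative only.

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Summarize this support ticket in one sentence:\n\n{ticket}"
)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Business logic calls the chain, not the vendor SDK: the orchestration
# layer owns the prompt template, the model, and how they are wired together.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(ticket="The app crashes when I tap the export button."))
```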


Also published here.