Brian Sathianathan, the Chief Technology Officer at Iterate.ai, discusses what teams need to pay attention to as they plan and execute generative AI strategies across their organizations.
In my opinion, the three biggest advantages generative AIoffers enterprises are speed to market, cost efficiency, and rapid experimentation. Enterprise workforces can assemble applications, analyze data, create content faster at lower cost, and experiment like never before. The risks stem from complexity. If hallucinating generative AI gives company leaders incorrect business insights, generates the wrong content, or gives customers poor digital experiences, generative AI applications can backfire—fast. Solid policies and checks-and-balances are essential to making generative AI work for enterprises.
My advice: act now. If you wait, you’re too late. But make sure to implement the right processes and guardrails. Keep it simple, test often, and keep going. Also, leaders should commit to private large language models (LLMs) rather than public options like OpenAI. Looking forward, control over unique data will emerge as the essential differentiator that makes one business’s generative AI application better than its competitors. Private LLMs mean building a unique IP with generative AI, rather than giving data away for free.
Strong generative AI compliance standards will arrive, just as today, we have really good standards and tools for detecting bias in AI. For now, the best approach for CIOs and compliance officers is to look at your existing company policies. For example, say you’re implementing a generative AI version of a bank helpline. What safeguards do your current policies and compliance procedures require? Available generative AI tools can help with rapid experimentation to test and validate application prompts, responses, and behaviors.
I think generative AI is a revolution, far different from what we had in the past. You have to treat this differently: it’s not a fad that will disappear. The application of generative AI within companies is going to be deep-rooted. Leaders need to think about generative AI as an accelerant they should apply to supercharge as many use cases as possible. Look at your departments, what you provide customers, and critical use cases in your revenue path.
Traditional protections like firewalls and intrusion prevention systems aren’t designed to prevent attacks on AI, especially attacks that subvert or divert the AI learning process and basically poison the system. Have controls in place wherever training is involved. Don't make your chatbot learn in real-time and don't update it immediately: you'll end up with the Microsoft problem, where the Tay chatbot became racist within minutes. Keep learning offline and introduce AI interference protections, especially where AI dynamically learns from inputs.