I Built a Free Bloomberg Alternative and Made It Open Source Because $24K/yr Is Insane

Written by raviteja-nekkalapu | Published 2026/03/09
Tech Story Tags: big-tech-alternatives | open-source | finance | stock-market | artificial-intelligence | bloomberg-alternative | financial-modeling | dcf

TL;DR: Bloomberg does all of this. But not for $2,000 a month. One developer built the 25-section stock report he actually wanted and open-sourced it.

I invest in stocks on the side. My workflow looked like this:

  1. Open Screener.in for fundamentals
  2. Open TradingView for technicals
  3. Ask Perplexity for a quick summary
  4. Manually check insider trading on SEC EDGAR
  5. Look up institutional holders somewhere else


Five tools. None of them talked to each other.


None of these tools gave me what I actually wanted:

  • A proper DCF valuation
  • Insider trading data (who's buying, who's selling, and how much)
  • Institutional ownership
  • A Graham Number
  • Moat analysis
  • Sentiment data from Reddit
  • All in one place


Bloomberg does all of that. 

But I am not paying $2,000 a month. 


I am a developer. I can build things.

So I did.


What it actually does

You type a stock ticker. Wait about 10 seconds. And you get a 25-section report.


I am not exaggerating about the 25 sections.

Below are some of the sections:

  • A letter grade from A+ to F (I call it the Nipun Score)
  • Three different valuation estimates using DCF
  • Graham Number
  • Peter Lynch Fair Value
  • A scenario breakdown with bull, base, and bear price targets
  • A full SWOT analysis
  • Competitive moat rating
  • Insider trading data showing who bought or sold recently, and how much
  • Top 10 institutional holders
  • Technical indicators (RSI, MACD, Bollinger Bands, Fibonacci levels)
  • Dividend analysis
  • Earnings history
  • Peer comparison
  • Social sentiment analysis from Reddit
  • SEC filing links
  • And a few more sections
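To make the valuation sections concrete, here is a minimal sketch of two of the metrics above, using the textbook formulas. These are illustrative only; the repo's actual implementation and parameter choices may differ.

```javascript
// Graham Number: sqrt(22.5 * EPS * book value per share).
// Undefined for companies with negative earnings or book value.
function grahamNumber(eps, bookValuePerShare) {
  if (eps <= 0 || bookValuePerShare <= 0) return null;
  return Math.sqrt(22.5 * eps * bookValuePerShare);
}

// Simple two-stage DCF per share: project free cash flow for N years,
// discount each year back, add a Gordon-growth terminal value,
// then divide by shares outstanding.
function dcfPerShare({ fcf, growthRate, terminalGrowth, discountRate, years, shares }) {
  let value = 0;
  let cashFlow = fcf;
  for (let t = 1; t <= years; t++) {
    cashFlow *= 1 + growthRate;
    value += cashFlow / (1 + discountRate) ** t;
  }
  const terminal =
    (cashFlow * (1 + terminalGrowth)) / (discountRate - terminalGrowth);
  value += terminal / (1 + discountRate) ** years;
  return value / shares;
}
```

Running the DCF three times with conservative, base, and aggressive growth assumptions is one straightforward way to get "three different valuation estimates" from the same model.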

The whole thing exports to PDF too.


The problem: AI agrees with itself too much

When I first built Nipun AI, I used a single Gemini model to generate the analysis section. It worked. But something always felt off about the output.


If a stock had decent financials, Gemini would write a bullish narrative.

If the numbers were bad, the narrative was bearish. 

It looked reasonable, except that it was always confident.


Always telling a clean story. Never saying "I don't know" or "this could go either way."

Real financial analysis doesn't work like that. Every investment thesis has holes. 

Every bull case has a bear case hiding behind it. A good analyst presents both sides and lets you decide.

Then it hit me: one AI model won't do that. It picks a side and stays there.


The fix? Give it a sparring partner

So I added a second model.

Not just a different prompt on the same model.

A completely different AI provider.

After Gemini writes the main analysis, I take that analysis and send it to Cerebras with a prompt that basically says: "Find the weaknesses in this argument. What did it miss? What's the bear case if the analysis is bullish? What's the bull case if it's bearish?"

Cerebras doesn't have access to the raw data. It only sees what Gemini wrote. That's on purpose. I wanted a fresh perspective, not a recalculation.
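The flow can be sketched like this. `callGemini` and `callCerebras` are hypothetical stand-ins for the real provider SDK calls, and the prompt wording is paraphrased, not the project's actual prompt.

```javascript
// Build the critique prompt. The critic only ever sees the written
// analysis, never the raw financial data.
function buildCritiquePrompt(analysis) {
  return [
    "You are reviewing another analyst's work.",
    'Find the weaknesses in this argument. What did it miss?',
    'If the analysis is bullish, present the bear case; if bearish, the bull case.',
    'Do not recalculate anything. Critique only what is written below.',
    '--- ANALYSIS ---',
    analysis,
  ].join('\n');
}

// Two-model "sparring partner" flow.
async function analyzeWithCritique(ticker, financialData, callGemini, callCerebras) {
  // Phase 1: the primary model sees the raw data.
  const analysis = await callGemini({ ticker, financialData });
  // Phase 2: the critic sees only the analysis text, so it attacks
  // the argument instead of redoing the math.
  const critique = await callCerebras(buildCritiquePrompt(analysis));
  return { analysis, critique };
}
```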


The result was immediately better. The reports went from "GOOGL looks strong, buy it" to "GOOGL looks strong because of X, Y, and Z. But here's what could go wrong - the valuation is stretched at these multiples, China revenue is a question mark, and services growth needs to stay above 15% for the DCF to work."


That's the kind of report I actually want to read before putting money into something.

But how do you know if any of it is true?

This bugged me for weeks. Both models were generating text that sounded smart and authoritative. 


But was it accurate? Was the DCF calculation they referenced actually in the data I fed them? Or were they just making numbers up?

If you have used ChatGPT or Gemini for anything factual, you know they hallucinate.


Not all the time. But enough that you can't blindly trust them.


So I added a third model. 

Cohere.

Its job is one thing: take every statement from the analysis and check it against the actual financial data.


For each claim, Cohere returns one of three labels:

  1. Grounded - This statement matches the financial data from Finnhub or the metrics calculated in phase two.
  2. Speculative - This statement is a reasonable interpretation, but it goes beyond what the data directly shows. It's an opinion, not a fact.
  3. Unverifiable - This statement references something that isn't in the data at all. Could be true, could be false, but there's no way to check from the data I have.
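The real pipeline delegates this judgment to Cohere, but the three-label contract itself can be illustrated with a crude heuristic: look for a known metric name in the claim and check whether any number in the claim matches the data. This is a toy sketch, not the project's actual logic.

```javascript
// Toy grounding check: label a claim against a metrics object,
// e.g. { 'P/E': 24.1, 'revenue growth': 15 }.
function labelClaim(claim, metrics) {
  const lower = claim.toLowerCase();
  const named = Object.keys(metrics).filter((k) =>
    lower.includes(k.toLowerCase())
  );
  // Claim mentions nothing we have data for: can't check it either way.
  if (named.length === 0) return 'unverifiable';
  const numbers = (claim.match(/-?\d+(\.\d+)?/g) || []).map(Number);
  // A number in the claim matches the named metric: backed by the data.
  const matches = named.some((k) =>
    numbers.some((n) => Math.abs(n - metrics[k]) < 1e-9)
  );
  // Known metric, but no matching figure: interpretation, not fact.
  return matches ? 'grounded' : 'speculative';
}
```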


These labels show up right in the report. So when you read "GOOGL's competitive moat is wide and durable," you can see whether that's backed by the numbers or it's just the AI's opinion.


I don't know of any other free tool that does this. Most AI tools just generate text and expect you to trust it. I didn't want that.


The security thing

I obsessed over security because the tool handles API keys.

Users bring their own keys (BYOK model).


You sign up for free tiers on Finnhub, Google Gemini, Groq, Cohere, and optionally Cerebras.

Then you enter those keys into Nipun AI.


Here's what happens to them:

They get encrypted in your browser with AES-256-GCM.

Key derivation uses PBKDF2 with 100,000 iterations and a random salt.

The encrypted keys sit in localStorage.


When you run an analysis, the keys are decrypted in memory, packed into a custom HTTP header, sent to the worker, used for the API calls, and then discarded.


Nothing gets written to disk on the server side.

Nothing gets logged.

The worker processes them in its memory and then they're gone.

I used the Web Crypto API for all of this. Zero npm dependencies for the crypto stuff. No third-party encryption library. Just the browser's built-in crypto module.


Who is this for?

Honestly?

Retail investors who want to do real research without paying for it.


If you are a day trader who needs millisecond latency real-time data, this isn't it.

If you are an institutional investor with a Bloomberg terminal already on your desk, you don't need this.


But if you are someone who picks 5 to 10 stocks a year and wants to do proper homework before buying, this will save you hours.

It covers both US and Indian listed companies. NSE tickers work. BSE tickers work. NASDAQ, NYSE, all of it.


How to try it

Open your terminal:

npx nipun-ai


That's it. One command. A browser opens. You enter a ticker. You get a report.

If you want to see what a report looks like before setting up API keys, there's a live demo at nipun-ai.pages.dev. It runs on mock data, but the format is identical to a real report.


The whole codebase is on GitHub: github.com/myProjectsRavi/Nipun-AI


MIT licensed.

Fork it if you want.

Change anything you want.

I would appreciate a GitHub star if you find it useful but honestly just hearing that someone used it to make a real investment decision would make my month.



Written by raviteja-nekkalapu | Passionate about building high-impact tools for the developer community & making enterprise-grade protection accessible to every developer
Published by HackerNoon on 2026/03/09