Get-Star Earns a 27 Proof of Usefulness Score by Building Client-Side Parallel Search

Written by wilgilr | Published 2026/02/06
Tech Story Tags: proof-of-usefulness-hackathon | hackernoon-hackathon | javascript-web-applications | parallel-search | client-side-parallel-search | software-development | filter-bubble-alternatives | meta-search-engine

TL;DR: Get-Star is a fully client-side, browser-based meta-search tool that runs parallel queries across 42 sites, prioritizes user privacy, and earned a 27 Proof of Usefulness score for real-world utility.

Welcome to the Proof of Usefulness Hackathon spotlight, curated by HackerNoon’s editors to showcase noteworthy tech solutions to real-world problems. Whether you’re a solopreneur, part of an early-stage startup, or a developer building something that truly matters, the Proof of Usefulness Hackathon is your chance to test your product’s utility, get featured on HackerNoon, and compete for $150k+ in prizes. Submit your project to get started!


In this interview, we talk with William Fletcher Gilreath, the creator of (get)* "Get-Star". This browser-based tool is designed to streamline information retrieval by executing parallel searches across 42 different engines and sites simultaneously.

What does (get)* "Get-Star" do? And why is now the time for it to exist?

The (get)* tool does parallel search in the local web browser, searching across 42 different sites and search engines. The idea is that it's easier to close a browser window than to open one, enter a URL, type search terms, and then click Search. More is in the PDF (design rationale, manifesto, whitepaper): https://get-star.org/get_star_whitepaper.pdf. Now is a good time for (get)* "Get-Star" to exist because users are increasingly seeking efficiency and broader data retrieval methods without relying on a single algorithm's filter bubble.
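Conceptually, a client-side parallel search like this can be sketched in plain JavaScript: hold one URL template per engine, substitute the encoded query terms into each, and open each result in its own window. The engine list and `%s` template convention below are illustrative assumptions, not Get-Star's actual source.

```javascript
// Illustrative sketch of client-side parallel search.
// Each engine is a URL template; "%s" marks where the
// URL-encoded query terms go. (Hypothetical engine list.)
const engines = [
  "https://duckduckgo.com/?q=%s",
  "https://www.bing.com/search?q=%s",
  "https://en.wikipedia.org/w/index.php?search=%s",
];

// Build one concrete URL per engine for the given query.
function buildSearchUrls(query, templates) {
  const encoded = encodeURIComponent(query);
  return templates.map((t) => t.replace("%s", encoded));
}

// In a browser, each URL would then get its own window/tab:
//   buildSearchUrls("parallel search", engines)
//     .forEach((url) => window.open(url, "_blank"));
```

Because the browser itself fetches each results page directly from the engine, no intermediary server ever sees the query.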

What is your traction to date? How many people does (get)* "Get-Star" reach?

According to Cloudflare, over the last 30 days get-star.org has had 1.98k unique visitors and served 62.8k total requests.

Who does your (get)* "Get-Star" serve? What’s exciting about your users and customers?

No notable customers, and I do not track individual users. However, it's for anyone wanting to do a large, heterogeneous web search, save time, and search more efficiently. The PDF manifesto/whitepaper explains this in much more detail.

What technologies were used in the making of (get)* "Get-Star"? And why did you choose the ones most essential to your tech stack?

The project relies on the fundamental building blocks of the web: HTML, CSS, and JavaScript, hosted via GitHub and Cloudflare. This "vanilla" tech stack was chosen to ensure the tool runs entirely client-side in the local browser, maximizing privacy and speed without complex backend dependencies.

What is traction to date for (get)* "Get-Star"? Around the web, who’s been noticing?

Traction has been primarily organic and observational. AI search tools have begun indexing the project as a utility for enhancing search efficiency, recognizing it as a dedicated web application for simultaneous multi-site querying.


(get)* "Get-Star" scored 27 on the Proof of Usefulness scale (proofofusefulness.com/get-get-star-report).

What excites you about (get)* "Get-Star"'s potential usefulness?

What excites me about (get)* Get-Star's potential usefulness is creating a tool that allows super-search across different search engines and sites, so that with massive parallel, in-tandem search, information can be found more effectively and efficiently.


By running the search locally, it removes the issues and challenges of a dedicated server, and creates a tool that is like the "ideal" Internet application from the dot-com era.


A tool that is centralized to access and find, easy to use, but that runs locally on the user's computer and so remains under user control. Customizing searches and prioritizing search results are additional features and functionality to add to get-star.org later.

Walk us through your most concrete evidence of usefulness.

The most concrete evidence of usefulness is the increase over time since November 2024, when I deployed get-star.org. The Cloudflare analytics for the last 24 hours, 7 days, and 30 days are:


80 visitors, 676 requests, and 5 MB of data in the last 24 hours; 595 visitors, 11.43k requests, and 79 MB in the last 7 days; and 1.88k visitors, 33.24k requests, and 221 MB in the last 30 days.


These metrics are 2x to 3x the accesses and data transfer from when get-star.org was first deployed. So people are returning, and new users are slowly discovering get-star.org.


Unfortunately, I do not monitor, retain data on, or query users, so explicit data is unavailable; any other information resides on the user's computer and in the web browser.

How do you measure genuine user adoption versus "tourists" who sign up but never return? What's your retention story?

I do not explicitly measure user adoption at this point; I do not log or record who visits or uses the get-star.org website. The Cloudflare tools track the number of accesses and data transfer, but nothing specific about users; it is more a metric of count or cardinality. Nor do I require users to sign up to use the tool, so I am not tracking user adoption or retention. The tool is free to use, so there is no economic motive to track that metric.


But implicitly, if the number of users within a time period remains fixed or increases, then users are coming back and new ones are finding get-star.org over time. So the measure is how many visitors per time period Cloudflare records, looking for a stable number per day, week, and month, with some growth. Perhaps later I might add some tracking information, such as cookies.

If we re-score your project in 12 months, which criterion will show the biggest improvement, and what are you doing right now to make that happen?

The criterion that will show the biggest improvement will be the number of accesses, and the amount of data transferred as reported by Cloudflare.


One improvement to get-star.org I am striving for is to prioritize search engines and sites based upon either user preference or configuration. Another possibility is to detect which search engine/site results are viewed most often by the user.


Then the results could be prioritized for the user. This would require some mechanism to track the user and the accumulated results, which goes against user-information neutrality. So the challenge is finding an implementation that avoids retaining any specific user information.

How did you hear about HackerNoon? Share with us your experience with HackerNoon.

I know about HackerNoon, as I've read articles, postings, and comments for years. I was on one tech board that referenced a post or article on HackerNoon, and I have been reading ever since.


So when I wanted to write about a programming language I was, and still am, developing, ZeptoN, I posted some articles about the language on HackerNoon under the title "What the Do-While is ZeptoN?" I am now focused on a rewrite and redesign of the language from the original version 1.0.


Another article I wrote and posted was about why I prefer the GNU GPL version 3 open-source license; it was entitled "Why the GNU General Public License v3 for Your Open-Source Project?" Hence I'm familiar with HackerNoon, have read it for some time, and have posted articles I write.

With nearly 2,000 unique visitors in the last month, what channels are driving the most curiosity towards your whitepaper and tool?

The primary channel has been LinkedIn, where I regularly post non-technical essays and comment in various technical discussion groups. I also suspect word of mouth plays a role. When I was interviewing for my next role after a layoff from Broadcom, I included the get-star.org super-search tool in the resume I uploaded and submitted when applying.


Also, during interviews in my job search, I was often asked where I found the material, information, and code for interview projects, "homework" problems, and so on; of course I shared get-star.org as the tool/source. Unfortunately, beyond posting and sharing, PR and marketing are not my forte.

Your approach involves opening multiple connections simultaneously; how do you plan to handle browser resource constraints or popup blockers as you scale the feature set?

The tool detects pop-up blockers; if one is detected, the user is informed that the tool will not work unless they allow pop-ups, and the tool is disabled.


Each search creates its own window, so faster searches complete more readily. In my own tests with various browsers, between 30 and 50 simultaneous searches gave a reasonable response time.
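Pop-up blocker detection like the above can be sketched with a probe: blockers typically make `window.open` return `null`, so attempting a tiny test window reveals whether pop-ups are allowed. The helper below is a hypothetical illustration, not Get-Star's actual code; it takes the open function as a parameter so the logic can be exercised outside a browser.

```javascript
// Probe whether pop-ups are allowed. Pass window.open (or a
// stand-in) as openFn; a blocker makes it return null/undefined.
function popupsAllowed(openFn) {
  const win = openFn("about:blank", "_blank", "width=1,height=1");
  if (!win) return false; // blocked: no window handle returned
  win.close();            // allowed: close the probe window
  return true;
}

// In a browser, roughly:
//   if (!popupsAllowed(window.open.bind(window))) {
//     alert("Please allow pop-ups; each search opens its own window.");
//   }
```

Passing `openFn` in rather than calling `window.open` directly also keeps the check testable with a mock.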

You mention the "local" aspect being crucial—how does keeping the search logic client-side specifically benefit the user compared to a server-side meta-search engine?

The local aspect avoids exposing information that a server request could retain, store, and process, such as the browser configuration, IP address, and so on.


Thus no information is retained or needs to be processed, so the search is faster; local search avoids sending a request to a server and then waiting for its response back to the web browser.


The locality feature means all of this happens in the web browser. There is no send-receive-search-process-respond cycle with a server, just a send-response within the local web browser.


Meet our sponsors

Bright Data: Bright Data is the leading web data infrastructure company, empowering over 20,000 organizations with ethical, scalable access to real-time public web information. From startups to industry leaders, we deliver the datasets that fuel AI innovation and real-world impact. Ready to unlock the web? Learn more at brightdata.com.

Neo4j: GraphRAG combines retrieval-augmented generation with graph-native context, allowing LLMs to reason over structured relationships instead of just documents. With Neo4j, you can build GraphRAG pipelines that connect your data and surface clearer insights. Learn more.

Storyblok: Storyblok is a headless CMS built for developers who want clean architecture and full control. Structure your content once, connect it anywhere, and keep your front end truly independent. API-first. AI-ready. Framework-agnostic. Future-proof. Start for free.

Algolia: Algolia provides a managed retrieval layer that lets developers quickly build web search and intelligent AI agents. Learn more.



Written by wilgilr | I am a senior software engineer, computer scientist, data scientist, mathematician, poet, and writer.
Published by HackerNoon on 2026/02/06