I Hacked HQ Trivia But Here’s How They Can Stop Me

by Stephen Cognetta, November 2nd, 2017
HQ Trivia, one of the hottest apps of 2017, is reinventing what game shows look like in the 21st century. On HQ, you can tune in live every day to answer trivia questions for the chance to win thousands of dollars.

However, as HQ Trivia increases in popularity, it exposes itself to a significant risk. Hackers, hoping to make some extra cash, can programmatically Google the answers to the questions. If these hackers and their scripts are successful, it could ruin the fun of HQ for honest players.

To explore the feasibility of hacking HQ, I wrote a script that wins the majority of HQ games. In this post, I explain how the script works, and how HQ can defend itself against these attacks. The script generally follows these three steps:

  1. Mirror phone screen to computer screen.
  2. Translate the image of the phone screen to computer-readable text.
  3. Using Google, run three separate approaches for finding the answer.

Please note: aside from some general testing, all of the methods described in this post were run against YouTube videos of live HQ games, not against the live game itself. Using scripts on the live game is against HQ’s terms and conditions.

Extracting The Text

First, the script needs to extract text from the app in order to process it.

I mirrored my phone screen to my laptop using QuickTime’s built-in screen mirroring feature and an Apple Lightning Cable, which ends up looking like this:

After mirroring the screen, I wrote a Python script that automatically captures the mirrored window at specified coordinates using the “screencapture” shell command. The script then runs the image through the Tesseract OCR (Optical Character Recognition) engine, which converts it to computer-readable text.
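
Here is a condensed sketch of that capture-and-OCR loop. The capture coordinates are placeholders for wherever the mirrored phone lands on your screen, and it assumes the pytesseract and Pillow packages are installed; the real script’s structure may differ.

```python
import subprocess

import pytesseract
from PIL import Image

# Placeholder capture region: x, y, width, height of the mirrored phone
# screen inside the QuickTime window. Real values depend on your setup.
CAPTURE_REGION = "60,120,400,700"


def grab_screen(path="frame.png"):
    # macOS "screencapture": -x silences the shutter sound, -R limits the
    # capture to the given rectangle.
    subprocess.run(["screencapture", "-x", f"-R{CAPTURE_REGION}", path], check=True)
    return Image.open(path)


def extract_question(image):
    # Tesseract converts the screenshot to text. HQ shows one question and
    # three answer choices, so treat the last three non-empty lines as the
    # choices and everything above them as the question.
    lines = [l.strip() for l in pytesseract.image_to_string(image).splitlines() if l.strip()]
    return " ".join(lines[:-3]), lines[-3:]


if __name__ == "__main__":
    question, choices = extract_question(grab_screen())
    print(question, choices)
```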

With the question and answer choices parsed into text, I developed three different approaches, each of which works with roughly 50–80% accuracy. My final script runs all three, so I can review their combined output before making a final decision.

Approach 1: Googling the Question

The most intuitive approach is just to Google the question. This was an easy one-liner in Python:
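
The original snippet isn’t reproduced here, but a minimal equivalent simply URL-encodes the question and opens a Google results page:

```python
import webbrowser
from urllib.parse import quote_plus


def google_the_question(question):
    # Open a Google results page for the raw question text so the answer
    # (hopefully) appears somewhere on the first page.
    webbrowser.open("https://www.google.com/search?q=" + quote_plus(question))
```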

About 50% of the time, this approach produces a Google results page with the answer somewhere on it. However, it’s often hard for me to scan all that text quickly, and the approach reliably fails on “Which of these…” questions, since those depend on the answer choices as well as the question text. For example, Approach 1 completely fails on this question:

Approach 2: Counting Answers on Results Page

Now, what if we could automatically scan the page for the answer choices? This would be much faster than what a human could do. Additionally, Google exposes a Custom Search API, which allows the script to access Google’s search functionality without even opening a browser.

For this approach, the script uses the Google Custom Search API to Google the question, and then counts the number of each answer choice that appears in the response (this includes not only the link titles, but also the little snippet that appears below the title).
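
A sketch of that counting step is below. It assumes the requests library and placeholder API credentials; the helper names are my own and may not match the real script.

```python
import requests

API_KEY = "YOUR_API_KEY"          # placeholder Custom Search API key
SEARCH_ENGINE_ID = "YOUR_CSE_ID"  # placeholder Custom Search engine ID


def count_answer_mentions(question, choices):
    # Query the Custom Search JSON API with the question text.
    response = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": SEARCH_ENGINE_ID, "q": question},
    )
    items = response.json().get("items", [])

    # Count occurrences of each answer choice across the result titles and
    # snippets (case-insensitive substring match).
    haystack = " ".join(
        item.get("title", "") + " " + item.get("snippet", "") for item in items
    ).lower()
    return {choice: haystack.count(choice.lower()) for choice in choices}
```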

While Approach 2 is correct more often than Approach 1, it can sometimes underperform. Approach 1 is superior to Approach 2 especially when it comes to “misconception” questions. Take a look at the following:

The answer from looking at the Google results is clearly Mauna Kea. However, there are several instances of Mount Everest on the page. Furthermore, the Google results page spells it as “Mt. Everest” and “Mount Everest” — so just looking for the exact phrasing of the answer choice is insufficient for 100% accuracy.

Approach 3: Counting Number of Search Results

The last approach uses the Google Custom Search API to query the question three times, appending a different answer choice each time. This essentially restricts each results page to results that contain that answer choice. The script can then compare the number of search results returned by each query, the count you normally see right below the search bar, like this:
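
In the script, that number comes from the Custom Search API’s searchInformation.totalResults field rather than from the web page. A minimal sketch of Approach 3, again with placeholder credentials:

```python
import requests

API_KEY = "YOUR_API_KEY"          # placeholder Custom Search API key
SEARCH_ENGINE_ID = "YOUR_CSE_ID"  # placeholder Custom Search engine ID


def compare_result_counts(question, choices):
    counts = {}
    for choice in choices:
        # One query per answer choice; quoting the choice restricts the
        # results to pages that actually contain it.
        response = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={
                "key": API_KEY,
                "cx": SEARCH_ENGINE_ID,
                "q": f'{question} "{choice}"',
            },
        )
        info = response.json().get("searchInformation", {})
        counts[choice] = int(info.get("totalResults", 0))
    return counts
```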

Approach 3 is far more effective for “Which of these” questions. However, it can still fail. For instance, take a look at this question:

The correct answer is “Steel,” but Approach 3 shows “Tech” with the biggest number of search results, since it’s a hot topic and mentioned frequently with “best-performing” and “stocks.”

As you can see, none of these approaches is foolproof; however, when I run all three simultaneously, I can get through all 12 questions correctly for about 70–80% of the HQ videos on YouTube. This is quite remarkable for a game that gives out thousands of dollars to winners every day, seven days a week.
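
For illustration, here is roughly how the three approaches could be glued together for that review step. This reuses the hypothetical helpers sketched above and is not the exact code I ran.

```python
def suggest_answer(question, choices):
    # Approach 1: open the plain Google search for a quick human scan.
    google_the_question(question)

    # Approaches 2 and 3: run automatically and print both tallies.
    mentions = count_answer_mentions(question, choices)
    totals = compare_result_counts(question, choices)
    print("Approach 2 (mentions in results):", mentions)
    print("Approach 3 (total result counts):", totals)

    # Simple heuristic: trust a choice only when both automated approaches
    # agree on it; otherwise leave the final call to the human.
    best_mentions = max(mentions, key=mentions.get)
    best_totals = max(totals, key=totals.get)
    return best_mentions if best_mentions == best_totals else None
```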

Advice to HQ

Dear HQ — here’s how you can stop people like me and other hackers from ruining the game for everyone.

1) Altering the UI to Impede OCR

Right now, the HQ game appears to already have two UI formats for the question page, pre- and post-elimination:

The left-side UI appears before the user is eliminated; the right-side UI appears after elimination.

Notice the difference? Although seemingly insignificant, the right-side UI is actually harder for the Tesseract OCR engine to parse, since its colors are less distinct. My script pre-processes the image and increases its contrast to circumvent this issue. A player only ever sees the right-side UI after being eliminated from the game, but this mattered for me because I used recorded YouTube clips for testing.
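
The pre-processing step is simple; here is a sketch using Pillow, where the contrast factor is an arbitrary starting value to tune by hand rather than the exact one I used.

```python
from PIL import ImageEnhance, ImageOps


def preprocess_for_ocr(image, factor=2.0):
    # Grayscale the frame and boost its contrast so Tesseract can separate
    # the answer text from the low-contrast background.
    gray = ImageOps.grayscale(image)
    return ImageEnhance.Contrast(gray).enhance(factor)
```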

However, what if HQ had 7–10 different UI formats that appeared pre-elimination, so that scripts couldn’t apply a single contrast tweak to improve recognition? With UI formats like the ones below, it’s nearly impossible to reliably parse the text (of course, HQ would still need to prioritize user readability).

While most of these mockups are impractical as-is, they could be refined to preserve user readability while still thwarting OCR attempts.

2) “Hard-To-Google Questions”

It would be an easy but poor product decision for HQ to increase question difficulty across the board so that hackers can’t solve them, since overly difficult questions reduce the “fun” factor of the game.

Instead, HQ should focus on including 3–4 “hard-to-google” questions in every game.

What constitutes a “hard-to-google” question? Well, here are a few types of questions that HQ could target:

  • “Which of these …”
  • “<number> of the top <number> objects…”
  • “How close is <entity> to <entity>”
  • “What is this? <displays image>”
  • “How many times …”
  • “Why did…”

In general, these questions are hard because it takes multiple Google searches and several seconds of human scanning to find the right answer.

3) Run a script to classify questions

When choosing questions to run on the app, I’m sure HQ already has some calculated measure of “difficulty.” HQ’s engineers should build and run a script similar to mine, then classify questions by how easily they can be answered by a bot or a quick Google search. As long as HQ includes 3–4 hard-to-google questions per game, it will be protected against hackers.
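
As a rough illustration, such a classifier could reuse the automated approaches sketched earlier and flag any candidate question they answer correctly (the helper names here are hypothetical):

```python
def is_easily_googled(question, choices, correct_answer):
    # Flag a candidate question if either automated approach lands on the
    # known correct answer; these are the questions a bot would win.
    mentions = count_answer_mentions(question, choices)
    totals = compare_result_counts(question, choices)
    guesses = {max(mentions, key=mentions.get), max(totals, key=totals.get)}
    return correct_answer in guesses
```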

HQ Trivia is a promising new mobile game that could really shift the entire game show paradigm. Trivia bots and clever hackers will be no issue for the platform’s rapid growth, as long as HQ makes a few critical adjustments.

Enjoyed this article? Check out some of my other articles below!