Not so long ago I got into poker. Since I also enjoy working with computer vision, I decided to combine business with pleasure: I built detection for the objects on the poker table and added some analytics to help me decide on my moves.
I should point out right away that I chose PokerStars as the poker room and the most popular variant of the game, Texas Hold'em. The program runs an infinite loop that reads the region of the screen containing the poker table. When our (the hero's) turn comes, a window with the following information pops up or is updated:
Visually it looks as follows:
Just below the hero’s cards, there is a small area that can be either black or gray:
If this area is gray, it is our move; otherwise, it is an opponent's move. Since the layout is static, we crop this area by its coordinates and pass it to the inRange()
function, which marks the pixels that fall within a given color range. From the number of white pixels in the returned binary mask we determine whether it is our move:
# Crop the turn-indicator area by the coordinates stored in the config
res_img = self.img[self.cfg['hero_step_define']['y_0']:self.cfg['hero_step_define']['y_1'],
                   self.cfg['hero_step_define']['x_0']:self.cfg['hero_step_define']['x_1']]
# Convert to HSV and keep only the pixels that fall within the gray color range
hsv_img = cv2.cvtColor(res_img, cv2.COLOR_BGR2HSV_FULL)
mask = cv2.inRange(hsv_img, np.array(self.cfg['hero_step_define']['lower_gray_color']),
                   np.array(self.cfg['hero_step_define']['upper_gray_color']))
count_of_white_pixels = cv2.countNonZero(mask)
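The decision itself can then be a simple threshold on the white-pixel count. A minimal sketch with a synthetic mask; the threshold value here is an assumption and would need to be tuned to the size of the cropped area:

```python
import numpy as np

# Hypothetical binary mask as returned by cv2.inRange(): 255 = pixel in the gray range
mask = np.zeros((10, 40), dtype=np.uint8)
mask[2:8, 5:35] = 255  # most of the indicator area is gray

count_of_white_pixels = int(np.count_nonzero(mask))

# Assumed threshold; tune it to the crop size of your table layout
WHITE_PIXELS_THRESHOLD = 100
is_hero_turn = count_of_white_pixels > WHITE_PIXELS_THRESHOLD
print(is_hero_turn)  # → True for this synthetic mask
```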
Now that we've determined it's our turn, we should recognize the hero's cards and those on the table. To do this, I suggest we again take advantage of the fact that the image is static: crop the card areas by coordinates and then binarize them. As a result, for images with cards like these:
We get the following binary image:
After that, we find the outer contours of the values and suits with the findContours()
function and pass each contour to the boundingRect()
function, which returns its bounding box. All right, now we have the boxes of all the cards, but how do we know whether we have, for example, the ace of hearts? To do this, I manually cropped each value and each suit and placed these images in a special folder as references. Next, we calculate the MSE between each reference image and the cropped card image:
# Mean squared error between the cropped image and a reference image
err = np.sum((img.astype("float") - benchmark_img.astype("float")) ** 2)
err /= float(img.shape[0] * img.shape[1])
The box is assigned the name of the reference image with the smallest error. Quite easy :)
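The matching step above can be sketched as follows; the 8x8 arrays and the reference names are synthetic stand-ins for the real cropped card images and the reference folder:

```python
import numpy as np

def mse(img, benchmark_img):
    # Same metric as above: mean squared per-pixel difference
    err = np.sum((img.astype("float") - benchmark_img.astype("float")) ** 2)
    return err / float(img.shape[0] * img.shape[1])

# Synthetic "reference" images instead of the real cropped values/suits
references = {
    "ace": np.full((8, 8), 200, dtype=np.uint8),
    "king": np.full((8, 8), 50, dtype=np.uint8),
}
cropped = np.full((8, 8), 190, dtype=np.uint8)  # closest to the "ace" reference

# Assign the name of the reference with the smallest error
best_name = min(references, key=lambda name: mse(cropped, references[name]))
print(best_name)  # → ace
```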
To determine the pot, we will work with a template image like this:
We pass the template image and the image of the whole table to the matchTemplate()
function, which I wrote about in one of my previous articles. Among other things, it lets us obtain the coordinates of the top-left corner of the template within the table image.
Knowing these coordinates, we can step a constant offset to the right and find the digits of the pot. Then, following the familiar scheme, we find the contours and boxes of each digit and compare them with reference images, this time of digits, using the MSE. We do the same with each player's bet, except that instead of searching for a template image, the bet coordinates are prescribed in the config file.
The dealer button in poker is a mandatory attribute that determines the order of action and betting for all participants in the game. If you have to act among the first, you are in an early position; if you act among the last, you are in a late position. For the 6-max table, which is the table we're looking at, the positions are as follows:
To determine who the dealer is, we also take a template image, only this one:
We find the coordinates of the upper-left corner of the button image on the table and, using the formula for the distance between two points on a plane (the second point being each player's center, whose coordinates are prescribed in the config file), determine which player is closest to the button; that player is its owner :)
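This step is a straightforward argmin over Euclidean distances. A sketch with made-up coordinates standing in for the config values:

```python
import numpy as np

# Top-left corner of the found dealer-button template (hypothetical)
button_xy = np.array([410.0, 290.0])

# Player-center coordinates as they might be prescribed in the config
player_centers = np.array([
    [100.0, 300.0],  # seat 0
    [400.0, 300.0],  # seat 1
    [700.0, 300.0],  # seat 2
    [400.0, 100.0],  # seat 3
])

# Distance between two points on the plane, computed for every seat at once
distances = np.linalg.norm(player_centers - button_xy, axis=1)
dealer_seat = int(np.argmin(distances))
print(dealer_seat)  # → 1, the seat closest to the button
```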
It often happens that there are 5 players at the table instead of 6, so the empty seat is marked in this way:
Under the nickname of a player who is currently absent, the following caption appears:
To detect such seats, we take these images as templates and, together with the table image, again feed them to the matchTemplate()
function. But this time we use not the coordinates it returns, rather the similarity score between the two images. If, for instance, the score between the first template and the table is high, the table is missing a player.
Equity is the probability of winning a particular hand against two specific cards or the opponent's range. Mathematically, equity is calculated as the ratio of the number of possible winning combinations to the total number of possible combinations.
In Python, this can be implemented as a Monte Carlo simulation using the eval7 library (which here helps estimate how strong a hand is) as follows:
deck = [eval7.Card(card) for card in deck]
table_cards = [eval7.Card(card) for card in table_cards]
hero_cards = [eval7.Card(card) for card in hero_cards]

max_table_cards = 5
win_count = 0
for _ in range(iters):
    np.random.shuffle(deck)
    # Deal the opponent's two hole cards plus the missing community cards
    num_remaining = max_table_cards - len(table_cards)
    draw = deck[:num_remaining + 2]
    opp_hole, remaining_comm = draw[:2], draw[2:]
    player_hand = hero_cards + table_cards + remaining_comm
    opp_hand = opp_hole + table_cards + remaining_comm
    # eval7.evaluate() returns a comparable hand-strength score
    player_strength = eval7.evaluate(player_hand)
    opp_strength = eval7.evaluate(opp_hand)
    if player_strength > opp_strength:
        win_count += 1
win_prob = (win_count / iters) * 100
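The snippet above assumes that deck already contains only the unseen cards. A sketch of building it with eval7's two-character card notation (rank then suit); the hero and board cards here are hypothetical recognizer output:

```python
# Ranks and suits in eval7's string notation, e.g. "As" = ace of spades
ranks = "23456789TJQKA"
suits = "cdhs"

hero_cards = ["As", "Kd"]         # hypothetical recognized hero cards
table_cards = ["2c", "7h", "Td"]  # hypothetical recognized board

# The deck is all 52 cards minus the ones we can already see on screen
known = set(hero_cards + table_cards)
deck = [r + s for r in ranks for s in suits if r + s not in known]

print(len(deck))  # → 47: 52 cards minus the 5 visible ones
```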
In this article, I wanted to show what can be achieved using only classic computer vision methods. I understand that the current solution is unlikely to be used in real poker games, but in the future I plan to add analytics, which could already be useful. If anyone wants to participate in the project or has ideas for its development, feel free to write! The source code is, as always, available on GitHub. Have a nice day everyone!