Auto-tinder was created to train an AI using Tensorflow and Python3 that learns your interests in the other sex and automatically plays the tinder swiping-game for you.

In this document, I am going to explain the following steps that were needed to create auto-tinder:

- Analyze the tinder webpage to find out what internal API calls tinder makes, reconstruct the API calls in Postman and analyze their content
- Build an API wrapper class in python that uses the tinder API to like/dislike/match etc.
- Download a bunch of images of people nearby
- Write a simple mouse-click classifier to label our images
- Develop a preprocessor that uses the tensorflow object detection API to cut out only the person in our image
- Retrain inceptionv3, a deep convolutional neural network, to learn on our classified data
- Use the classifier in combination with the tinder API wrapper to play tinder for us

Step 0: Motivation and disclaimer

Auto-tinder is a concept project purely created for fun and educational purposes. It shall never be abused to harm anybody or spam the platform. The auto-tinder scripts should not be used with your tinder profile since they surely violate tinder's terms of service.

I've written this piece of software mainly out of two reasons:

1. Because I can, and it was fun to create :)
2. I wanted to find out whether an AI would actually be able to learn my preferences in the other sex and be a reliable left-right-swipe partner for me.
3. (Purely fictional reason: I am a lazy person, so why not invest 15 hours to code auto-tinder plus 5 hours to label all images, to save me a few hours of actually swiping tinder myself? Sounds like a good deal to me!)

Step 1: Analyze the tinder API

The first step is to find out how the tinder app communicates with tinder's backend server. Since tinder offers a web version of its portal, this is as easy as going to tinder.com, opening up chrome devtools and having a quick look at the network protocol.

The content shown in the picture above was from a request to https://api.gotinder.com/v2/recs/core that is made when the tinder.com landing page is loading. Clearly, tinder has some sort of internal API that they use to communicate between the front- and backend.

By analyzing the content of /recs/core, it becomes clear that this API endpoint returns a list of user profiles of people nearby. The response includes (among many other fields) the following data:

```json
{
    "meta": {
        "status": 200
    },
    "data": {
        "results": [
            {
                "type": "user",
                "user": {
                    "_id": "4adfwe547s8df64df",
                    "bio": "19y.",
                    "birth_date": "1997-17-06T18:21:44.654Z",
                    "name": "Anna",
                    "photos": [
                        {
                            "id": "879sdfert-lskdföj-8asdf879-987sdflkj",
                            "crop_info": {
                                "user": {
                                    "width_pct": 1,
                                    "x_offset_pct": 0,
                                    "height_pct": 0.8,
                                    "y_offset_pct": 0.08975463
                                },
                                "algo": {
                                    "width_pct": 0.45674357,
                                    "x_offset_pct": 0.984341657,
                                    "height_pct": 0.234165403,
                                    "y_offset_pct": 0.78902343
                                },
                                "processed_by_bullseye": true,
                                "user_customized": false
                            },
                            "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/original_879sdfert-lskdföj-8asdf879-987sdflkj.jpeg",
                            "processedFiles": [
                                {
                                    "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/640x800_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                                    "height": 800,
                                    "width": 640
                                },
                                {
                                    "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/320x400_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                                    "height": 400,
                                    "width": 320
                                },
                                {
                                    "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/172x216_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                                    "height": 216,
                                    "width": 172
                                },
                                {
                                    "url": "https://images-ssl.gotinder.com/4adfwe547s8df64df/84x106_879sdfert-lskdföj-8asdf879-987sdflkj.jpg",
                                    "height": 106,
                                    "width": 84
                                }
                            ],
                            "last_update_time": "2019-10-03T16:18:30.532Z",
                            "fileName": "879sdfert-lskdföj-8asdf879-987sdflkj.webp",
                            "extension": "jpg,webp",
                            "webp_qf": [75]
                        }
                    ],
                    "gender": 1,
                    "jobs": [],
                    "schools": [],
                    "show_gender_on_profile": false
                },
                "facebook": {
                    "common_connections": [],
                    "connection_count": 0,
                    "common_interests": []
                },
                "spotify": {
                    "spotify_connected": false
                },
                "distance_mi": 1,
                "content_hash": "slkadjfiuwejsdfuzkejhrsdbfskdzufiuerwer",
                "s_number": 9876540657341,
                "teaser": {
                    "string": ""
                },
                "teasers": [],
                "snap": {
                    "snaps": []
                }
            }
        ]
    }
}
```

A few things are very interesting here (note that I changed all the data so as not to violate this person's privacy):

- All images are publicly accessible. If you copy the image URL and open it in a private window, it still loads instantly — meaning that tinder uploads all user images publicly to the internet, free to be seen by anybody.
- The original photos accessible via the API are extremely high resolution. If you upload a photo to tinder, they will scale it down for in-app usage, but they store the original version publicly on their servers, accessible by anybody.
- Even if you choose not to "show_gender_on_profile", everybody can still see your gender via the API ("gender": 1, where 1=Woman, 0=Man).
- If you send multiple consecutive requests to the tinder API, you always get different results (e.g. different profiles). We can therefore just call this endpoint repeatedly to "farm" a bunch of pictures that we can later use to train our neural network.

By analyzing the content headers, we quickly find our private API key: X-Auth-Token. By copying this token over to Postman, we can validate that we can indeed communicate freely with the tinder API using just the right URL and our auth token. By clicking a bit through tinder's webapp, I quickly discovered all relevant API endpoints.
Step 2: Building an API Wrapper in Python

So let's get into the code. We will use the python requests library to communicate with the API and write an API wrapper class around it for convenience.

Similarly, we write a small Person class that takes the API response from Tinder representing a person and offers a few basic interfaces to the tinder API. Let's start with the Person class. It shall receive API data and a tinder-api object and save all relevant data into instance variables. It shall further offer some basic features like "like" or "dislike" that make a request to the tinder-api, which allows us to conveniently use "some_person.like()" in order to like a profile we find interesting.

```python
import datetime
import requests
from geopy.geocoders import Nominatim

TINDER_URL = "https://api.gotinder.com"
geolocator = Nominatim(user_agent="auto-tinder")
PROF_FILE = "./images/unclassified/profiles.txt"


class Person(object):

    def __init__(self, data, api):
        self._api = api

        self.id = data["_id"]
        self.name = data.get("name", "Unknown")
        self.bio = data.get("bio", "")
        self.distance = data.get("distance_mi", 0) / 1.60934
        self.birth_date = datetime.datetime.strptime(data["birth_date"], '%Y-%m-%dT%H:%M:%S.%fZ') \
            if data.get("birth_date", False) else None
        self.gender = ["Male", "Female", "Unknown"][data.get("gender", 2)]

        self.images = list(map(lambda photo: photo["url"], data.get("photos", [])))

        self.jobs = list(map(
            lambda job: {"title": job.get("title", {}).get("name"),
                         "company": job.get("company", {}).get("name")},
            data.get("jobs", [])))
        self.schools = list(map(lambda school: school["name"], data.get("schools", [])))

        if data.get("pos", False):
            self.location = geolocator.reverse(f'{data["pos"]["lat"]}, {data["pos"]["lon"]}')

    def __repr__(self):
        return f"{self.id} - {self.name} ({self.birth_date.strftime('%d.%m.%Y')})"

    def like(self):
        return self._api.like(self.id)

    def dislike(self):
        return self._api.dislike(self.id)
```

Our API wrapper is not much more than a fancy way of calling the tinder API using a class:

```python
import requests

TINDER_URL = "https://api.gotinder.com"


class tinderAPI():

    def __init__(self, token):
        self._token = token

    def profile(self):
        data = requests.get(TINDER_URL + "/v2/profile?include=account%2Cuser",
                            headers={"X-Auth-Token": self._token}).json()
        return Profile(data["data"], self)  # Profile class not shown in this article

    def matches(self, limit=10):
        data = requests.get(TINDER_URL + f"/v2/matches?count={limit}",
                            headers={"X-Auth-Token": self._token}).json()
        return list(map(lambda match: Person(match["person"], self), data["data"]["matches"]))

    def like(self, user_id):
        data = requests.get(TINDER_URL + f"/like/{user_id}",
                            headers={"X-Auth-Token": self._token}).json()
        return {
            "is_match": data["match"],
            "liked_remaining": data["likes_remaining"]
        }

    def dislike(self, user_id):
        requests.get(TINDER_URL + f"/pass/{user_id}",
                     headers={"X-Auth-Token": self._token}).json()
        return True

    def nearby_persons(self):
        data = requests.get(TINDER_URL + "/v2/recs/core",
                            headers={"X-Auth-Token": self._token}).json()
        return list(map(lambda user: Person(user["user"], self), data["data"]["results"]))
```

We can now use the API to find people nearby and have a look at their profiles, or even like all of them. Replace YOUR-API-TOKEN with the X-Auth-Token you found in the chrome dev console earlier.

```python
if __name__ == "__main__":
    token = "YOUR-API-TOKEN"
    api = tinderAPI(token)

    while True:
        persons = api.nearby_persons()
        for person in persons:
            print(person)
            # person.like()
```

Step 3: Download images of people nearby

Next, we want to automatically download some images of people nearby that we can use for training our AI. With "some", I mean like 1500-2500 images.
First, let's extend our Person class with a function that allows us to download images.

```python
from time import sleep
from random import random

# At the top of auto_tinder.py
PROF_FILE = "./images/unclassified/profiles.txt"

# inside the Person class
def download_images(self, folder=".", sleep_max_for=0):
    with open(PROF_FILE, "r") as f:
        lines = f.readlines()
        # strip newlines so the membership test actually matches the stored IDs
        if self.id in [line.strip() for line in lines]:
            return
    with open(PROF_FILE, "a") as f:
        f.write(self.id + "\r\n")
    index = -1
    for image_url in self.images:
        index += 1
        req = requests.get(image_url, stream=True)
        if req.status_code == 200:
            with open(f"{folder}/{self.id}_{self.name}_{index}.jpeg", "wb") as f:
                f.write(req.content)
        sleep(random() * sleep_max_for)
```

Note that I added some random sleeps here and there, just because we will likely be blocked if we spam the tinder CDN and download many pictures in just a few seconds.

We write all the people's profile IDs into a file called "profiles.txt". By first scanning the document to check whether a particular person is already in there, we can skip people we have already encountered, and we ensure that we don't classify people several times (you will see later why this is a risk).

We can now just loop over nearby persons and download their images into an "unclassified" folder:

```python
if __name__ == "__main__":
    token = "YOUR-API-TOKEN"
    api = tinderAPI(token)

    while True:
        persons = api.nearby_persons()
        for person in persons:
            person.download_images(folder="./images/unclassified", sleep_max_for=random() * 3)
            sleep(random() * 10)
        sleep(random() * 10)
```

We can now simply start this script and let it run for a few hours to get a few hundred profile images of people nearby. If you are a tinder PRO user, update your location now and then to get new people.

Step 4: Classify the images manually

Now that we have a bunch of images to work with, let's build a really simple and ugly classifier. It shall just loop over all the images in our "unclassified" folder and open each image in a GUI window. By right-clicking a person, we can mark the person as "dislike", while a left-click marks the person as "like". This will be represented in the filename later on: 4tz3kjldfj3482.jpg will be renamed to 1_4tz3kjldfj3482.jpg if we mark the image as "like", or to 0_4tz3kjldfj3482.jpg otherwise. The label like/dislike is encoded as 1/0 at the beginning of the filename.
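Expressed as a tiny helper function (hypothetical — not part of the original scripts, just to make the naming convention explicit), recovering a label from a filename looks like this:

```python
def label_from_filename(filename):
    """Recover the manual label from a classified image filename."""
    if filename.startswith("1_"):
        return "like"
    if filename.startswith("0_"):
        return "dislike"
    return None  # image has not been classified yet
```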
Let's use tkinter to write this GUI quickly:

```python
from os import listdir, rename
from os.path import isfile, join
import tkinter as tk
from PIL import ImageTk, Image

IMAGE_FOLDER = "./images/unclassified"

images = [f for f in listdir(IMAGE_FOLDER) if isfile(join(IMAGE_FOLDER, f))]
unclassified_images = filter(lambda image: not (image.startswith("0_") or image.startswith("1_")), images)
current = None


def next_img():
    global current, unclassified_images
    try:
        current = next(unclassified_images)
    except StopIteration:
        root.quit()
    print(current)
    pil_img = Image.open(IMAGE_FOLDER + "/" + current)
    width, height = pil_img.size
    max_height = 1000
    if height > max_height:
        resize_factor = max_height / height
        pil_img = pil_img.resize((int(width * resize_factor), int(height * resize_factor)),
                                 resample=Image.LANCZOS)
    img_tk = ImageTk.PhotoImage(pil_img)
    img_label.img = img_tk
    img_label.config(image=img_label.img)


def positive(arg):
    global current
    rename(IMAGE_FOLDER + "/" + current, IMAGE_FOLDER + "/1_" + current)
    next_img()


def negative(arg):
    global current
    rename(IMAGE_FOLDER + "/" + current, IMAGE_FOLDER + "/0_" + current)
    next_img()


if __name__ == "__main__":
    root = tk.Tk()

    img_label = tk.Label(root)
    img_label.pack()
    img_label.bind("<Button-1>", positive)
    img_label.bind("<Button-3>", negative)

    btn = tk.Button(root, text='Next image', command=next_img)

    next_img()  # load first image
    root.mainloop()
```

We load all unclassified images into the "unclassified_images" list, open up a tkinter window, pack the first image into it by calling next_img(), and resize the image to fit onto the screen. Then we register two mouse-click handlers, for the left and right mouse buttons, which call the functions positive/negative that rename the image according to its label and show the next image.

Step 5: Develop a preprocessor to cut out only the person in our images

For the next step, we need to bring our image data into a format that allows us to do a classification. There are a few difficulties we have to consider given our dataset:

1. Dataset size: Our dataset is relatively small. We deal with roughly 2000 images, which is considered a very low amount of data given their complexity (high-resolution RGB images).
2. Data variance: The pictures sometimes show people from behind, sometimes only faces, and sometimes no people at all.
3. Data noise: Most pictures contain not only the person itself, but often also the surroundings, which can be distracting for our AI.

We combat these challenges by:

1. Converting our images to greyscale, to reduce the amount of information that our AI has to learn by a factor of 3 (RGB to G)
2. Cutting out only the part of the image that actually contains the person, nothing else

The first part is as easy as using Pillow to open up our image and convert it to greyscale — see the short sketch below. For the second part, we use the Tensorflow Object Detection API with the mobilenet network architecture, pretrained on the COCO dataset, which also contains a label for "Person".
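As a minimal illustration of the greyscale step (the file paths here are just examples):

```python
from PIL import Image

img = Image.open("./images/unclassified/example.jpg")  # example path
grey = img.convert('L')  # 'L' = 8-bit single-channel greyscale
grey.save("./images/unclassified/example_grey.jpg", "jpeg")
```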
Our script for person detection has four parts:

Part 1: Opening the pre-trained mobilenet coco model as a Tensorflow graph

You find the .pb file for the tensorflow mobilenet coco graph in my Github repository. Let's open it as a Tensorflow graph:

```python
import tensorflow as tf


def open_graph():
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb', 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return detection_graph
```

Part 2: Load in images as numpy arrays

We use Pillow for image manipulation. Since tensorflow needs raw numpy arrays to work with the data, let's write a small function that converts Pillow images to numpy arrays:

```python
import numpy as np


def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
```

Part 3: Call the object detection API

The next function takes an image and a tensorflow session, runs inference on it and returns all information about the detected classes (object types), bounding boxes and scores (the certainty that the object was detected correctly).

```python
import numpy as np
from object_detection.utils import ops as utils_ops
import tensorflow as tf


def run_inference_for_single_image(image, sess):
    ops = tf.get_default_graph().get_operations()
    all_tensor_names = {output.name for op in ops for output in op.outputs}
    tensor_dict = {}
    for key in ['num_detections', 'detection_boxes', 'detection_scores',
                'detection_classes', 'detection_masks']:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
            tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
    if 'detection_masks' in tensor_dict:
        # The following processing is only for a single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframing is required to translate the mask from box coordinates to
        # image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[1], image.shape[2])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
    image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

    # Run inference
    output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image})

    # All outputs are float32 numpy arrays, so convert types as appropriate
    output_dict['num_detections'] = int(output_dict['num_detections'][0])
    output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.int64)
    output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
    output_dict['detection_scores'] = output_dict['detection_scores'][0]
    if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
    return output_dict
```
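One detail worth making explicit before the cropping code in the next part: the object detection API returns each entry of "detection_boxes" as normalized coordinates in the order [ymin, xmin, ymax, xmax]. That is why the crop below multiplies indices 1 and 3 by the image width and indices 0 and 2 by the image height:

```python
# Normalized box coordinates, each in [0, 1]:
ymin, xmin, ymax, xmax = output_dict["detection_boxes"][0]
# Pixel coordinates are obtained by scaling with the image size:
# left = xmin * w, top = ymin * h, right = xmax * w, bottom = ymax * h
```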
Part 4: Bringing it all together to find the person

The last step is to write a function that takes an image path, opens it using Pillow, calls the object detection interface and crops the image according to the detected person's bounding box.

```python
import numpy as np
from PIL import Image

PERSON_CLASS = 1
SCORE_THRESHOLD = 0.5


def get_person(image_path, sess):
    img = Image.open(image_path)
    image_np = load_image_into_numpy_array(img)
    image_np_expanded = np.expand_dims(image_np, axis=0)
    output_dict = run_inference_for_single_image(image_np_expanded, sess)
    persons_coordinates = []
    for i in range(len(output_dict["detection_boxes"])):
        score = output_dict["detection_scores"][i]
        classtype = output_dict["detection_classes"][i]
        if score > SCORE_THRESHOLD and classtype == PERSON_CLASS:
            persons_coordinates.append(output_dict["detection_boxes"][i])

    w, h = img.size
    for person_coordinate in persons_coordinates:
        cropped_img = img.crop((
            int(w * person_coordinate[1]),
            int(h * person_coordinate[0]),
            int(w * person_coordinate[3]),
            int(h * person_coordinate[2]),
        ))
        return cropped_img
    return None
```

Part 5: Move all images into the according classified folders

As a last step, we write a script that loops over all images in the "unclassified" folder, checks whether they have an encoded label in their name, and copies each image into the according "classified" folder, applying the previously developed preprocessing steps:

```python
import os
import person_detector
import tensorflow as tf

IMAGE_FOLDER = "./images/unclassified"
POS_FOLDER = "./images/classified/positive"
NEG_FOLDER = "./images/classified/negative"

if __name__ == "__main__":
    detection_graph = person_detector.open_graph()

    images = [f for f in os.listdir(IMAGE_FOLDER) if os.path.isfile(os.path.join(IMAGE_FOLDER, f))]
    positive_images = filter(lambda image: (image.startswith("1_")), images)
    negative_images = filter(lambda image: (image.startswith("0_")), images)

    with detection_graph.as_default():
        with tf.Session() as sess:
            for pos in positive_images:
                old_filename = IMAGE_FOLDER + "/" + pos
                new_filename = POS_FOLDER + "/" + pos[:-5] + ".jpg"
                if not os.path.isfile(new_filename):
                    img = person_detector.get_person(old_filename, sess)
                    if not img:
                        continue
                    img = img.convert('L')
                    img.save(new_filename, "jpeg")

            for neg in negative_images:
                old_filename = IMAGE_FOLDER + "/" + neg
                new_filename = NEG_FOLDER + "/" + neg[:-5] + ".jpg"
                if not os.path.isfile(new_filename):
                    img = person_detector.get_person(old_filename, sess)
                    if not img:
                        continue
                    img = img.convert('L')
                    img.save(new_filename, "jpeg")
```

Whenever we run this script, all labeled images are processed and moved into the corresponding subfolders in the "classified" directory.

Step 6: Retrain inceptionv3 and write a classifier

For the retraining part, we'll just use tensorflow's retrain.py script with the inceptionv3 model. Call the script in your project root directory with the following parameters:

```
python retrain.py --bottleneck_dir=tf/training_data/bottlenecks --model_dir=tf/training_data/inception --summaries_dir=tf/training_data/summaries/basic --output_graph=tf/training_output/retrained_graph.pb --output_labels=tf/training_output/retrained_labels.txt --image_dir=./images/classified --how_many_training_steps=50000 --testing_percentage=20 --learning_rate=0.001
```

The learning takes roughly 15 minutes on a GTX 1080 Ti, with a final accuracy of about 80% for my labeled dataset, but this heavily depends on the quality of your input data and your labeling.

The result of the training process is a retrained inceptionV3 model in the "tf/training_output/retrained_graph.pb" file.
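As a side note: since the command above writes summaries to tf/training_data/summaries, you can optionally watch the training progress with the standard TensorBoard CLI while it runs:

```
tensorboard --logdir=tf/training_data/summaries
```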
We must now write a Classifier class that efficiently uses the new weights in the tensorflow graph to make classification predictions.

Let's write a Classifier class that opens the graph as a session and offers a "classify" method that takes an image file and returns a dict with certainty values matching our labels "positive" and "negative". The class takes as input both the path to the graph and the path to the label file, both sitting in our "tf/training_output/" folder. We develop a helper function for converting an image file to a tensor that we can feed into our graph, helper functions for loading the graph and labels, and an important little function to close our graph after we are done using it.

```python
import numpy as np
import tensorflow as tf


class Classifier():

    def __init__(self, graph, labels):
        self._graph = self.load_graph(graph)
        self._labels = self.load_labels(labels)

        self._input_operation = self._graph.get_operation_by_name("import/Placeholder")
        self._output_operation = self._graph.get_operation_by_name("import/final_result")

        self._session = tf.Session(graph=self._graph)

    def classify(self, file_name):
        t = self.read_tensor_from_image_file(file_name)

        # Open up a new tensorflow session and run it on the input
        results = self._session.run(self._output_operation.outputs[0],
                                    {self._input_operation.outputs[0]: t})
        results = np.squeeze(results)

        # Sort the output predictions by prediction accuracy
        top_k = results.argsort()[-5:][::-1]
        result = {}
        for i in top_k:
            result[self._labels[i]] = results[i]

        # Return sorted result tuples
        return result

    def close(self):
        self._session.close()

    @staticmethod
    def load_graph(model_file):
        graph = tf.Graph()
        graph_def = tf.GraphDef()

        with open(model_file, "rb") as f:
            graph_def.ParseFromString(f.read())
        with graph.as_default():
            tf.import_graph_def(graph_def)

        return graph

    @staticmethod
    def load_labels(label_file):
        label = []
        proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
        for l in proto_as_ascii_lines:
            label.append(l.rstrip())
        return label

    @staticmethod
    def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                                    input_mean=0, input_std=255):
        input_name = "file_reader"
        file_reader = tf.read_file(file_name, input_name)
        image_reader = tf.image.decode_jpeg(
            file_reader, channels=3, name="jpeg_reader")
        float_caster = tf.cast(image_reader, tf.float32)
        dims_expander = tf.expand_dims(float_caster, 0)
        resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
        normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])

        sess = tf.Session()
        result = sess.run(normalized)

        return result
```

Step 7: Use all this to actually auto-play tinder

Now that we have our classifier in place, let's extend the "Person" class from earlier with a "predict_likeliness" function that uses a classifier instance to estimate whether a given person should be liked or not.

```python
# In the Person class
def predict_likeliness(self, classifier, sess):
    ratings = []
    for image in self.images:
        req = requests.get(image, stream=True)
        tmp_filename = f"./images/tmp/run.jpg"
        if req.status_code == 200:
            with open(tmp_filename, "wb") as f:
                f.write(req.content)
        img = person_detector.get_person(tmp_filename, sess)
        if img:
            img = img.convert('L')
            img.save(tmp_filename, "jpeg")
            certainty = classifier.classify(tmp_filename)
            pos = certainty["positive"]
            ratings.append(pos)
    ratings.sort(reverse=True)
    ratings = ratings[:5]
    if len(ratings) == 0:
        return 0.001
    if len(ratings) == 1:  # avoid division by zero in the weighted sum below
        return ratings[0]
    return ratings[0] * 0.6 + sum(ratings[1:]) / len(ratings[1:]) * 0.4
```
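To make the weighting at the end concrete, here is a small worked example (the numbers are made up): the best image dominates the score at weight 0.6, while the average of the remaining (up to four) images contributes the other 0.4.

```python
ratings = [0.9, 0.7, 0.5]  # hypothetical classifier outputs, already sorted
score = ratings[0] * 0.6 + sum(ratings[1:]) / len(ratings[1:]) * 0.4
# = 0.9 * 0.6 + 0.6 * 0.4 = 0.54 + 0.24 = 0.78
```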
Now we have to bring all the puzzle pieces together. First, we initialize the tinder API with our API token. Then, we open up our classification tensorflow graph as a tensorflow session, using our retrained graph and labels. Then, we fetch persons nearby and make a likeliness prediction for each of them.

As a little bonus, I added a likeliness multiplier of 1.2 if the person on Tinder goes to the same university as I do, so that I am more likely to match with local students. For all people with a predicted likeliness score above 0.8, I call a like, for all others a dislike. The script auto-plays for two hours after it is started.

```python
from likeliness_classifier import Classifier
import person_detector
import tensorflow as tf
from time import time

if __name__ == "__main__":
    token = "YOUR-API-TOKEN"
    api = tinderAPI(token)

    detection_graph = person_detector.open_graph()
    with detection_graph.as_default():
        with tf.Session() as sess:
            classifier = Classifier(graph="./tf/training_output/retrained_graph.pb",
                                    labels="./tf/training_output/retrained_labels.txt")

            end_time = time() + 60 * 60 * 2
            while time() < end_time:
                try:
                    persons = api.nearby_persons()
                    pos_schools = ["Universität Zürich", "University of Zurich", "UZH"]

                    for person in persons:
                        score = person.predict_likeliness(classifier, sess)

                        for school in pos_schools:
                            if school in person.schools:
                                score *= 1.2

                        print("-------------------------")
                        print("ID: ", person.id)
                        print("Name: ", person.name)
                        print("Schools: ", person.schools)
                        print("Images: ", person.images)
                        print(score)

                        if score > 0.8:
                            res = person.like()
                            print("LIKE")
                        else:
                            res = person.dislike()
                            print("DISLIKE")
                except Exception:
                    pass

    classifier.close()
```

That's it! We can now let our script run for as long as we like and play tinder without abusing our thumbs!

If you have questions or found bugs, feel free to contribute to my Github Repository.