Create an iOS app (like Prisma) with CoreML, Fast Style Transfer, and TensorFlow.

Table of Contents:
Intro & Setup
Preliminary Steps
CoreML Conversion
iOS App

Intro

The basis of this tutorial comes from Prisma Lab's blog and their PyTorch approach. However, we will use TensorFlow for the models, specifically Fast Style Transfer by Logan Engstrom, which is a MyBridge Top 30 (#7) project. The result of this tutorial will be an iOS app that can run TensorFlow models with CoreML. Here is the GitHub repo, which also contains all adjustments and additions to fst & tf-coreml.

What made this possible?
Stanford research: Fast Style Transfer
Apple releases CoreML (*does not support TensorFlow)
Google releases TensorFlow Lite (*does not support CoreML)
Google releases CoreML support, tf-coreml (*does not offer full support)
We make some adjustments and hack together a solution

Setup

We're going to use fst's pre-trained models; custom models will work as well (you'll need to make minor adjustments that I'll note).

Models: Download the pre-trained models.
Fast Style Transfer: https://github.com/lengstrom/fast-style-transfer (*clone and run setup if needed)
TensorFlow: With fst, I've had the most success using TensorFlow 1.0.0; with tf-coreml you'll need 1.1.0 or greater (*does not need to use GPU for this tutorial)
TensorFlow CoreML: https://github.com/tf-coreml/tf-coreml (*install instructions)
iOS: 11
Xcode: 9
Python: 2.7

Preliminary Steps

We need to do some preliminary steps because Fast-Style-Transfer is more of a research implementation than something made for reuse & production (no naming convention or output graph).

Step 1: Figure out the name of the output node for our graph; TensorFlow auto-generates this name when it is not explicitly set. We can get it by printing the net in the evaluate.py script. After that we can run the script to see the printed output. I'm using the pre-trained wave model here.
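TensorFlow names every tensor in the form `<op_name>:<output_index>`, so once the net is printed, the graph-node name we need later for conversion is simply the part before the colon. A minimal sketch (the example names mirror ones used in this tutorial):

```python
# TensorFlow tensor names have the form "<op_name>:<output_index>".
# When the net is printed, the graph node name needed later for the
# CoreML conversion is everything before the colon.
def node_name(tensor_name):
    """Strip the output index from a tensor name ("add_37:0" -> "add_37")."""
    return tensor_name.split(":")[0]

print(node_name("add_37:0"))  # -> add_37
```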
wave *If you’re using custom models, the checkpoint parameter just needs to be the directory where your meta and input files exist. $ python evaluate.py --checkpoint wave.ckpt --in-path inputs/ --out-path outputs/ > Tensor(“add_37:0”, shape=(20, 720, 884, 3), dtype=float32, device=/device:GPU:0) The only data that matters here is the output node name, which is . This makes sense as the last unnamed operator in the net is addition, . add_37 see here We need to make a few more additions to evaluate.py so that we can save the graph to disk. Note that if you’re using your own models that you’ll need to add the code to satisfy the checkpoints directory condition vs. a single checkpoint file. Step 2: Now we run evaluate.py on a model, and we will end up with our graph file saved. * Step 3: When training models we probably used a batch size greater than 1, as well as GPU, however CoreML only accepts graphs with an input-size of 1, and CPU optimizations — note the evaluate command to adjust. $ python evaluate.py --checkpoint wave/wave.ckpt --in-path inputs/ --out-path outputs/ --device “/cpu:0” --batch-size 1 Awesome, this creates output_graph.pb & we can move on to the CoreML conversion. CoreML Conversion Thanks to , there is now a TensorFlow to CoreML convertor: . This is awesome, but the implementation is new and lacking some core tf operations, like google https://github.com/tf-coreml/tf-coreml power. Our model will not convert without adding support for but luckily Apple’s which supports power. We need to add this code into the TensorFlow’s implementation. Below is a gist for 3 files you will need to additions to. Step 1: power, coremltools provides a unary conversion create and run the conversion script Step 2: $ python convert.py The actual CoreML converter does not provide the capability to output images from a model. 
Images are represented by NumPy arrays (multi-dimensional arrays), which are the model's actual output, and these compile to a non-standard MultiArray type in Swift. I searched for help online and found some code that evaluates a graph outputted by CoreML and then traverses it to transform the output types to images.

Step 3: Create and run the output transform script on the model (my_model.mlmodel) which was outputted above.

$ python output.py

and ….. 🎉💥🤙 ….. we have a working CoreML model.

*I changed the name of it to "wave" in the output script; the image size also corresponds to the input used for evaluation.

iOS App

This isn't so much an iOS tutorial, so you'll want to work with my repo; I'll only cover the important details here.

Step 1: Import the models into your Xcode project. Make sure you add them to the target.

Step 2: After importing, you'll be able to instantiate the models like so:

Step 3: Create a class for the model input parameter, which is an MLFeatureProvider. *img_placeholder is the input that is defined in the evaluate script.

Step 4: Now we can call the model in our code.

The rest of the app is just setup and image processing; nothing new or directly related to CoreML, so we won't cover it here.

Final: At this point, you'll have a good understanding of how everything came together and should be able to innovate further. I think there can be improvement on fst's graph: we should be able to gut out over-engineered operations to make the style transfer even faster on iOS. But for now everything works pretty well. ✌️🤙