Ten years ago I worked as a software engineer and later quit to start my own projects. To save money, I moved back to my small hometown, where I worked on a website for students, accounting software, and mobile games all at the same time. Having no business experience, I struggled to generate income, and all the projects had to be closed. I went back to the capital to get a job, again. The story repeated itself a few times.
When I went broke again, I faced a full-blown financial crisis. I couldn't find a job, and it felt awful. It was a good reason to see the world through sober eyes. I had to honestly admit that I didn't know what niche to choose for my business. Doing only projects you like seemed like a road to nowhere.
The only thing I was capable of was creating mobile applications. Several years in tech companies had given me useful experience, so I decided to build fundamentally different apps (games, music, art, health, lifestyle, languages) and test market demand. Prepared sets of assets and code libraries made it possible to quickly create applications on various topics: 2D games, GPS trackers, simple utilities, and so on. Most of them had a few pictures, two buttons, and only one function, but that was enough to test an idea and a monetization model. For example, a running app tracked the user's speed, distance, and burned calories. Nothing more. Buying stock graphics and reusing source code helped me create hundreds of simple applications over two years.
At first, the applications were free. Then I added ads and in-app purchases, and picked better keywords and brighter icons. Users started to download my apps. Some categories stood out in profit: translators, navigation for trucks, music simulators (piano, drums, guitar chords, players), and simple casual games.
Then I noticed that in just a month, the translators had been downloaded more than 1M times, reaching the 100th position in their category ranking. There are hundreds of languages in the world, and people search for each of them. The niche turned out to be promising.
About 40 simple translators were later created using the Google Translate API, which cost me $20 per 1M characters. Then came improved versions of the apps, with ads, in-app purchases, and voice translation.
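At that rate, the economics are easy to sketch. Here is a hedged back-of-the-envelope model: the $20 per 1M characters figure comes from the text above, while the helper function and the usage numbers are purely illustrative.

```python
# Rough cost model for a paid translation API, assuming Google's
# pricing of $20 per 1M translated characters (as quoted above).
GOOGLE_PRICE_PER_MILLION_CHARS = 20.0

def monthly_api_cost(chars_per_user: int, active_users: int) -> float:
    """Estimate a monthly API bill for a given usage pattern."""
    total_chars = chars_per_user * active_users
    return total_chars / 1_000_000 * GOOGLE_PRICE_PER_MILLION_CHARS

# Illustrative numbers: 100K active users translating ~50K
# characters a month each already means a $100K monthly bill.
print(monthly_api_cost(50_000, 100_000))  # -> 100000.0
```

This is why, as the apps grew, the API bill scaled linearly with usage while ad revenue did not.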
I earned enough money to move to a big city and buy a house. By that time, I had 50-70 translation applications with 5M downloads in total. User growth drove up the cost of the paid Google Translate API, so profitability dropped sharply. Paying users translated blocks of 1K characters at a time, which forced us to limit their requests. When they hit this translation limit, they left bad reviews and demanded refunds. 70% of our income went to covering expenses. At large translation volumes, this business wasn't promising. To recoup the costs, we had to add advertising to the applications, and that always scares users away. That's why we needed our own translation API.
Besides Google, several companies offered cloud translation APIs. I was ready to pay $30K for licenses to their technology in 40 languages, deployed on-premise. That would let me translate an unlimited number of times for a fixed price and serve any number of users on my own servers. But the quote I got back was several times higher than expected. It was too expensive, so I decided to recreate their translation technology myself.
I turned to a friend who owns an outsourcing company. At the end of 2016, he allocated a team for me. I expected the outsourced team to solve the problem within six months and free me from depending on Google's API.
The work began. In 2016, we found several open-source projects: Apertium, Joshua, and Moses. These were statistical machine translation systems suitable for simple texts, each maintained by 3 to 40 people. Later it became clear that we needed powerful servers and high-quality datasets, both of which are expensive. Even after we spent money on hardware and a quality dataset for one of the translation pairs, the quality left much to be desired.
Technically, creating a translator didn't boil down to a "download a dataset and train" scheme. It turned out there were a million nuances we weren't even aware of. We tried a few more resources but didn't achieve good results. Nevertheless, the work continued, and freelancers joined the company.
In March 2017, we found an open-source project called OpenNMT. It had just launched and offered translation based on a new technology: neural networks.
The OpenNMT team made a bold move: they shared their work as open source so that enthusiasts like me could get involved. They created a forum where their experts began helping newcomers for free. And it paid off: startups and scientific papers on translation began to appear, since anyone could take the basics and run their own experiments on top of them.
Even for those with the computing power to handle large datasets, finding NLP (Natural Language Processing) specialists is a serious problem. In 2017, the field was far less developed than image and video processing: fewer datasets, scientific papers, specialists, and frameworks. Fewer still are the people able to turn NLP research papers into a business and capture a local niche. Both top-tier companies like Google and smaller players need a competitive edge over others in their category.
It may seem strange, but to compete, the big companies decided to bring new players into the market. And for new players to appear, the market has to be attractive. The entry threshold is still high, while demand for language processing technologies is growing fast (voice assistants, chatbots, translation, speech recognition, analysis, etc.). Large companies want startups like ours to develop, capture new niches, and show maximum growth; they are happy to buy NLP startups to strengthen their positions.
After all, even if you have all the datasets and algorithms in hand, that doesn't mean you will build a high-quality translator or any other NLP product. And even if you do, it's far from certain you'll get a large piece of the market pie. So the big players help, and when someone succeeds, they buy or merge.
To speed up translation experiments and stop running tests from the console, we created a Dashboard that handled everything from preparing and filtering data to deploying translation tests. In the picture below: on the right is a list of tasks and the GPU servers on which models are being trained; in the center are the neural network parameters; below are the datasets that will be used for training.
In 2018, I spent my time solving the problem of high-quality translation for the main European languages. I thought I needed another six months for everything to work out. I was limited in resources; only a few people were involved in data science tasks, and it was necessary to move fast. The solution seemed like it should be something simple, but I wasn't satisfied with the translation quality.
I noticed that our community had started talking about a new neural network architecture: the Transformer. Everyone rushed to train models based on it and began switching to Python (TensorFlow) from the old Lua (Torch). I decided to try it too.
We also took a new tokenizer, pre-processed the text, started filtering and marking up the data differently, and post-processed the translated output to correct errors. The 10,000-hour rule worked: there were many steps to the goal. Each change added only 2-4% quality, far short of the critical mass at which people keep using a product instead of switching to competitors' solutions, but the improvements accumulated, and at some point I realized the translation quality was already good enough to use in the API for my own applications.
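Switching tokenizers was one of those steps. To show the core idea, here is a toy sketch, not our actual tokenizer: subword tokenizers in the BPE/WordPiece family split words into pieces from a fixed vocabulary, so the model can handle rare and unseen words. The vocabulary and function below are illustrative.

```python
def subword_tokenize(word: str, vocab: set) -> list:
    """Greedy longest-match subword split, the core idea behind
    BPE/WordPiece-style tokenizers used in neural MT systems."""
    pieces, start = [], 0
    while start < len(word):
        # Find the longest vocabulary entry matching at `start`.
        for end in range(len(word), start, -1):
            piece = word[start:end]
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
        else:
            # No match: emit the single character as its own piece.
            pieces.append(word[start])
            start += 1
    return pieces

vocab = {"trans", "lat", "or", "ion", "un"}
print(subword_tokenize("translator", vocab))   # -> ['trans', 'lat', 'or']
print(subword_tokenize("translation", vocab))  # -> ['trans', 'lat', 'ion']
```

With a few tens of thousands of such pieces, both words above are covered by one shared vocabulary, which keeps the model's vocabulary (and size) bounded.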
Then we started connecting tools that improved translation quality further: named entity recognition, transliteration, specialized dictionaries, and a system for correcting errors in words. After 5 months of hard work, the quality in some languages became much better and people began to complain less. It was a turning point. Once you can sell the software and you have your own translation API, you can cut costs dramatically and grow sales or users, because your only real expense is computing power.
Training a neural network needs a powerful computer, but we were saving money. We rented 20 regular computers (each with a GTX 1080 video card) and simultaneously launched 20 simple tests on them through the Lingvanex Control Panel. Each test took a week, which was a long time, and achieving better quality required runs with other parameters that needed more resources. We needed cloud computing and more video cards in one machine, so we rented Amazon instances with 8 × V100 GPUs (4 of them). It was fast but very expensive: we started a test at night, and in the morning got a bill for $1,200. At that time, there were very few other rental options for powerful GPU servers. I had to abandon that idea and look for cheaper options. Maybe build my own?
We consulted with the team and decided that we could build a computer with several powerful GPUs for up to $10K that would solve our problems and pay for itself within a month. Two weeks later, everything was ready.
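The payback math follows directly from the numbers above: one cloud training night cost about $1,200, against roughly $10K for our own machine.

```python
# Break-even estimate for building our own GPU machine instead of
# renting cloud GPUs, using the figures from the text above.
CLOUD_COST_PER_NIGHT = 1200   # one multi-V100 training night on AWS, $
OWN_MACHINE_COST = 10_000     # budget for our own multi-GPU machine, $

nights_to_break_even = OWN_MACHINE_COST / CLOUD_COST_PER_NIGHT
print(round(nights_to_break_even, 1))  # -> 8.3
```

Fewer than nine training nights, so with tests running nearly every night, the machine pays for itself well within a month.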
At the beginning of 2019, I finally assembled this computer at home and began running many tests, no longer worrying about cloud bills. I soon noticed that our English-Spanish translation was close to Google's by the BLEU metric. The computer buzzed all night; it was impossible to sleep, because I had to make sure there were no errors in the console. In the morning, I ran a test translating 100 sentences of 1 to 100 words and saw that the translation was good, including for long sentences. That night changed everything. I saw the light at the end of the tunnel and realized I could achieve good translation quality.
With the money from the mobile translator apps, I decided to improve their quality and make versions for Android, Mac OS, and Windows desktop. I was hoping that once I had my own translation API, I would finish the app development and enter other markets. But competitors had gone much further; some core functions and features were still missing.
The first thing I decided to build was offline voice translation for mobile applications, with no Internet access required. This was a personal issue: for example, you go to Germany, download only the German package to your phone (100 MB), and get translation from English into German and back. Internet access abroad can be a problem; Wi-Fi is often unavailable, slow, or otherwise unusable. In 2017, there were thousands of high-quality translation apps, but they required an internet connection to use the Google API. Our challenge was to make the neural models compact enough to run fast on mobile phones while still translating with good quality.
I found a team in Spain with good experience in machine translation projects. For about 3 months, we jointly researched how to reduce the size of a neural translation model to 100 MB per language so it could run on mobile phones.
The model had to be shrunk so that a dictionary of a fixed size (for example, 30 thousand words) could pack in as many translation options as possible for words of different lengths and topics.
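To see why the dictionary size dominates the model size, consider the embedding tables alone. A rough sketch: the 30K vocabulary figure comes from the text, while the embedding dimension and precision below are illustrative, not our actual settings.

```python
# Rough size of one embedding table in a neural MT model: each
# vocabulary entry stores one dense vector of weights.
vocab_size = 30_000      # dictionary size mentioned above
embed_dim = 512          # illustrative embedding dimension
bytes_per_weight = 4     # float32

one_table_mb = vocab_size * embed_dim * bytes_per_weight / 1024**2
print(round(one_table_mb, 1))  # -> 58.6 (MB)
```

Two such tables (source and target side) already approach the 100 MB budget before any of the actual translation layers, which is why vocabulary size, embedding dimension, and weight precision all had to be squeezed.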
Later, the results of our research were made publicly available and presented at the European Association for Machine Translation conference in Alicante, Spain, in May 2018, and one of the team members earned a Ph.D. based on it.
At the conference, many people wanted to buy the product, but only one language pair was ready (English-Spanish). Offline neural translation for mobile phones was ready in March 2018, and we could have covered all the other languages by the summer, but I didn't have enough time and money. I had to pause the feature; a year later, I returned to it and finished it.
Later, in addition to translating text, voice, and pictures, we decided to add phone call translation with transcripts, something competitors didn't have. We knew that people in different countries often use mobile or landline phones to call customer support, and the person you're calling doesn't need to install the app. This function required a lot of time and money, so we put it in a separate application. That is how we launched the Phone Call Translator.
We also added voice chats with translation, useful for tourist groups: the guide speaks their own language, and every visitor listens in translation. And finally, we added translation of large files on a phone or computer.
The project grew. Applications appeared not only for mobile platforms but also for computers, wearable devices, instant messengers, browsers, and voice assistants. Besides text, we added translation of voice, pictures, files, websites, and phone calls. Initially, I planned to build a translation API only for my own applications, but then I decided to offer it to everyone.
Until then, I had managed everything alone as an individual, outsourcing the work. But the complexity of the product and the number of tasks began to grow rapidly, and it became obvious that I needed to delegate and quickly build my own in-house team. I called a friend; he quit his job, and we established the Lingvanex company in March 2017.
Until 2020, our focus was on mobile translation applications. Then App Store Search Optimization (ASO) changed its algorithm: keywords in the Apple App Store became ineffective without buying paid installs, and user acquisition through paid traffic became very expensive. Nevertheless, mobile apps got me 40 million downloads and my first million dollars.
At the end of 2020, we decided to move to the B2B market. We believe that any international business needs translation: the more languages you support, the more revenue you get.
Over five years, I've been asked thousands of times, "Why is Lingvanex better than Google?" I've tried different answers, but now I answer briefly: data privacy, functionality, price, and support. Use the Lingvanex Translator if you need to translate big volumes of data or if you need privacy.
Today we have three options for translation: Cloud API, SDK, and our flagship product - Translation Server.
Cloud API - Translation of text and websites through our API, 4x cheaper than Google ($5 per million characters). The price can be critical for large volumes of data. We support the same REST API format as Google, so it is easy to migrate.
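Same-format compatibility means a migration can be as small as swapping the endpoint URL and the key. A sketch of the request shape: the body fields below follow Google Cloud Translation v2 (`q`, `source`, `target`, `format`), while the Lingvanex URL shown is a placeholder, not the documented endpoint.

```python
import json

# Google Cloud Translation v2 endpoint; a Google-compatible API
# accepts the same request body, so only the URL and key change.
GOOGLE_URL = "https://translation.googleapis.com/language/translate/v2"
LINGVANEX_URL = "https://example.invalid/translate/v2"  # placeholder only

def build_request_body(text: str, source: str, target: str) -> str:
    """Build the JSON body shared by both APIs."""
    payload = {"q": text, "source": source, "target": target,
               "format": "text"}
    return json.dumps(payload)

print(build_request_body("Hello, world", "en", "de"))
```

In practice you would POST this body to either URL with your API key; the response parsing code can stay untouched.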
Translation SDK - If you need to add an offline translation feature to your app, this is the best choice. We support the iOS, Android, Mac OS, and Windows platforms and 110 languages. Each language pack is only 70 MB and uses 200 MB of RAM.
On-premise Translation Server - Unlimited, secure, and ultra-fast translation of text, files, audio, and HTML. It works offline and can translate billions of characters per day. The server can also transcribe audio in 19 languages. It ships as a Docker image for Ubuntu. Pricing starts at $200/month and depends on the number of languages.
Over the years, I earned about $1M in revenue from mobile apps and spent most of the profit building my own translation system. You can visit our
To get a free product demo or ask questions, feel free to contact me via email: [email protected]