What do all technology-based innovations have in common? Their developers' drive to make them work like human beings. This desire spurs the evolution of AI and deep learning, technologies that are bound to affect a number of related jobs.
As Gartner notes, two new innovations now arrive before the previous one is even implemented. But the outcomes are controversial. For instance, AI is predicted to eliminate more jobs than it creates through 2019, while the same technology is also expected to create enough new jobs and to make people's current careers more effective and productive. Among the specialists whose jobs may eventually be eliminated are template designers and GUI programmers, and the reason will be the accelerated growth of “human-like” programs, namely artificial neural networks.
Automated processes are gradually being introduced to boost the effectiveness of intellectual tasks, e.g., generating source code. There is no miracle here: the code is generated automatically by a program based on certain predefined functions. For instance, the open-source MIT App Inventor (originally developed at Google) lets users drag the functions they need into the development area, connect them to each other, define how the application should work, and get source code generated from the prepared template. With this tool, even newcomers to programming can create Android apps with little effort.
But what about bringing coding to a whole new level? Everybody would agree that creating a GUI is a time- and effort-consuming process. On small-scale projects, developers are often in charge of creating the GUI, although it is not among their priority tasks. What if they could get rid of this burden? What if a program could capture and recognize screenshots of a GUI and generate the source code from the image data?
There is something magical about Recurrent Neural Networks (RNNs). In fact, training a multi-layer RNN is not as difficult as it seems. This type of neural network is organized into layers: an input layer, one or more recurrent hidden layers, and an output layer.
Recurrent layers feed information from previous time steps back into the network and combine it with the input at the current time step, which means the order of the input matters. To train such a network, coders often use the gradient descent algorithm: it continuously reduces the error between the actual and the desired output, making the network “more intelligent” with every new update.
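As a minimal illustration of the recurrence described above, here is a sketch of a single vanilla recurrent layer in NumPy. The layer sizes and the toy sequence are made up for this example; in practice the weight matrices would be tuned by gradient descent on the output error rather than left random.

```python
import numpy as np

# A minimal sketch of one recurrent layer (sizes are illustrative assumptions).
# At each time step, the hidden state mixes the current input with the
# previous hidden state, so the order of the inputs matters.

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_size)

def rnn_forward(inputs):
    """Run the recurrent layer over a sequence and return all hidden states."""
    h = np.zeros(hidden_size)          # initial hidden state
    states = []
    for x in inputs:                   # one step per element of the sequence
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.array(states)

sequence = rng.normal(size=(5, input_size))       # a toy 5-step sequence
states = rnn_forward(sequence)
reversed_states = rnn_forward(sequence[::-1])

print(states.shape)  # (5, 4): one hidden state per time step
# Reordering the same inputs changes the final hidden state: order matters.
print(np.allclose(states[-1], reversed_states[-1]))
```

During training, gradient descent would repeatedly nudge `W_xh`, `W_hh`, and `b_h` in the direction that reduces the output error, which is what makes the network improve with every update.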
RNNs can be successfully applied in many fields where data can be represented as a sequence, e.g., pattern recognition and automatic code generation.
Here we get to the main issue: how to leverage AI and deep learning to facilitate coding based on designers' mock-ups? To date, there is a host of programming languages specific to the different systems on which custom software is supposed to run, which makes implementing GUI code tedious and time-consuming.
UIzard Technologies has presented Pix2Code, a model based on several neural network architectures, including an RNN, that generates source code from a screenshot. The model was trained on a small dataset, so Pix2Code is just the beginning of automating code generation.
The developers came quite close to solving the following three issues:
Computer vision. Machines cannot inherently recognize and process objects or characters in input images. The problem was solved by means of a Convolutional Neural Network (CNN) specifically trained on the image data.
Language problem. RNNs, which are responsible for text recognition and code generation, suffer from the vanishing and exploding gradient problems. To mitigate this, the authors used another neural architecture, Long Short-Term Memory (LSTM).
Network training issues. The training process involved experimentation, since there was no ready-made image-to-code dataset for the network. Even so, the network proved capable of linking text, images, and code, and generated code for various platforms with 77 percent accuracy.
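The three components above can be sketched, very roughly, as a small Keras model. This is not the authors' implementation; the layer sizes, vocabulary size, and sequence length are illustrative assumptions. A CNN encodes the screenshot, an LSTM encodes the DSL tokens generated so far, and a softmax head predicts the next token:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN = 20, 48   # assumed token vocabulary and context length

# Vision encoder: a CNN turns the GUI screenshot into a feature vector.
image = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(16, 3, activation="relu")(image)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Flatten()(x)
img_features = layers.Dense(64, activation="relu")(x)

# Language encoder: an LSTM reads the tokens generated so far.
tokens = layers.Input(shape=(SEQ_LEN,))
t = layers.Embedding(VOCAB, 32)(tokens)
t = layers.LSTM(64)(t)

# Decoder: combine both representations and predict the next token.
merged = layers.concatenate([img_features, t])
next_token = layers.Dense(VOCAB, activation="softmax")(merged)

model = Model([image, tokens], next_token)

# One untrained forward pass, just to check the shapes line up.
probs = model.predict(
    [np.zeros((1, 64, 64, 3)), np.zeros((1, SEQ_LEN))], verbose=0
)
print(probs.shape)  # (1, 20): a probability distribution over the vocabulary
```

At generation time, a loop like this would sample one token per step, append it to the context, and repeat until an end token appears, yielding the code for the screenshot.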
The generated code doesn’t require significant changes, and it is compatible with different platforms, such as iOS, Android, and the Web. As mentioned, its current accuracy is fairly high. But before turning into a great tool, Pix2Code needs to be significantly improved through training on a much larger dataset.
The unity of AI and deep learning is in line with the trend of combining design and coding in a single process. Nonetheless, the prospects of moving in this direction are controversial.
On the one hand, the development process will be drastically accelerated, allowing developers to give more priority to other project tasks. However, developers won’t adopt fully automated GUI code generation as long as the range of recognizable user interfaces is limited to the screenshots a neural network was trained on. In the future, this issue may be solved by training networks on large-scale datasets.
What seems more challenging is training a network to work out the algorithms of GUI code generation on its own. Such a network shouldn’t just make a linear choice among familiar patterns; it is supposed to “think” and to process data the way a natural neural network does.
On the other hand, the automation of code generation will undoubtedly affect the number of related jobs. It would be reckless to say that there will be no need for such specialists, but the AI leap forward will surely bring significant changes to the job market.
Enjoyed this article? Find more stories here: https://indatalabs.com/blog