Why AI is an Evolutionary, Not Revolutionary, Technology

Written by maxdos | Published 2023/04/17
Tech Story Tags: artificial-intelligence | ai | investing | ai-trends | future-of-ai | investments | ai-top-story | technology

TL;DR: Artificial intelligence is not a revolution but an evolution toward something useful for the end user. It offers great potential, from relieving annoying and monotonous routine work to promoting creativity. But it also carries substantial risks and challenges, such as high costs, legal issues, and the quality of the database and algorithms.

Stock market hype pays for the future and its promising mega-topics. It was the same with the internet, social media, hydrogen investments, and cryptocurrencies. But in the end came the usual disillusionment and disappointment when vision and reality met.

The same is currently happening with artificial intelligence. The big players are trying to sell their ETFs and touting them like there's no tomorrow. Private investors (or those who want to become one) are once again acting for the wrong reasons, namely in the hope of getting rich quickly.

The question is whether AI will suffer the same fate as hydrogen, cryptocurrencies, and the other “promising” trends. Or does it have a chance to evolve into something useful, and could investors actually earn money with it?

Artificial Intelligence – An Evolutionary Step

In my opinion, artificial intelligence is not a revolution but an evolution toward something useful for the end user. It offers great potential, from relieving annoying and monotonous routine work to promoting creativity. However, it also carries substantial risks and challenges: high costs, legal issues, the quality of the database, and algorithms that sometimes tend to present subjective statements as objective facts.

The pursuit of artificial intelligence, in the sense that computers can solve complex problems rationally or at least help prepare decisions, has existed since the 1950s. However, in the past ten years or so, we have seen significantly accelerated development. On the one hand, this was because new and more complex mathematical models and algorithms emerged. On the other, there was the exponentially increasing computing power of machines working in parallel and the networking of data centers in the cloud.

But the most important innovation driver is still the human: the creative, imaginative, sometimes offbeat, and emotional intelligence of developers and users. These are precisely the properties that current AIs, trained for specific fields of application, lack, and I think that will remain the case with the next generation of AI as well. Based on these facts alone, it doesn’t make sense to speak of a revolution.

However, what is likely to change is the increased use of AI by end users directly. Until now, AI was present only in the background of the business sector and in some useless IoT gimcracks (like virtual voice assistants). From now on, end users will experience AI directly.

Direct AI Experience For The End Users

With the much-discussed ChatGPT, AI can now be experienced immediately and directly, since this Large Language Model (LLM) communicates with the user in natural language.

The answers generated by ChatGPT are, at least at first blush, precise, logical, and clearly structured in their argumentation. In addition, the application takes the user's wishes into account to a large extent (e.g., "Rewrite the poem from humorous to sad"). Furthermore, over the course of a chat, the user's previous requests and feedback are taken into account when generating new answers. This means the results appear ever more "real" and tailored to the user the longer they use ChatGPT.
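Technically, that conversational "memory" is nothing mystical: the client simply resends the accumulated message history with every new request. Here is a minimal sketch, assuming the OpenAI Python package in its early-2023 form (openai<1.0) and an API key; the model name and prompts are purely illustrative:

```python
# Minimal sketch: the "memory" of a chat is just the message history
# that the client sends along with every new request.
import openai  # assumes openai<1.0 (early-2023 API style)

openai.api_key = "YOUR_API_KEY"  # placeholder

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(prompt: str) -> str:
    """Append the user prompt, call the model, and store its reply."""
    history.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # illustrative model name
        messages=history,        # full history = conversational context
    )
    answer = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Write a short humorous poem about investing."))
print(ask("Rewrite the poem from humorous to sad."))  # refers back to turn 1
```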

How Do AI Applications Become Intelligent?

The currently most common AI applications are optimized for specific use cases. For example, they are embedded in voice assistants or chatbots. The next stage of development is aimed at general AI applications that can be used in almost all situations.

The generic term AI includes various sub-areas, some of which overlap and are not entirely clear-cut:

Machine learning (ML) means that the AI learns independently and improves itself automatically. Deep Learning (DL), the next level of machine learning, uses artificial neural networks for processing. These are modeled on the neural connections of the human brain.
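To make the idea of an "artificial neural network" a bit more tangible, here is a deliberately tiny sketch of a single forward pass in plain NumPy; the layer sizes and random weights are invented for illustration and have nothing to do with any real model:

```python
# Tiny illustration of an artificial neural network: two layers of
# weighted connections ("neurons") with a non-linear activation.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)            # 4 input features (made up)
W1 = rng.normal(size=(8, 4))      # weights of the hidden layer (8 neurons)
W2 = rng.normal(size=(2, 8))      # weights of the output layer (2 classes)

hidden = np.maximum(0, W1 @ x)    # ReLU activation, loosely analogous to
                                  # a neuron "firing" above a threshold
logits = W2 @ hidden
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: class probabilities

print(probs)                      # two "class" probabilities that sum to 1
```

Training such a network means adjusting W1 and W2 until the output probabilities match the desired answers; deep learning simply stacks many more of these layers.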

Language models belong to natural language processing (NLP) and use ML and DL techniques to understand human language, decode context, and interpret and generate results in human language. Like several other applications, ChatGPT belongs to the class of language models.
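For readers who want to see what "generating results in human language" looks like in code, here is a minimal sketch assuming the Hugging Face transformers library and the small, freely available GPT-2 model as a stand-in (not ChatGPT itself):

```python
# Minimal sketch of text generation with a small open language model.
# Assumes: pip install transformers torch  (GPT-2 is used only as a
# freely available stand-in; it is far smaller than ChatGPT's model.)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence is not a revolution but",
    max_new_tokens=30,     # length of the continuation
    do_sample=True,        # sample instead of always taking the top token
    temperature=0.8,       # higher = more varied output
)
print(result[0]["generated_text"])
```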

But AI systems have to be trained before they can be used. Depending on the desired result and the type of application, both structured and unstructured data are used. This data consists of text, images, videos, or audio, and it has to be processed and classified before algorithms can analyze connections, such as recurring relationships between words or parts of sentences, and store them as parameters. The result generated by the algorithm is evaluated and feeds into the next iteration of training. This achieves continuous improvement of the database, the algorithms, and ultimately the entire AI model.
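The iterative cycle described above (generate a result, evaluate it, adjust the parameters, repeat) is easiest to see in code. A minimal sketch, assuming PyTorch and a made-up toy problem; real models differ mainly in scale, not in the basic loop:

```python
# Minimal sketch of the training cycle: predict, measure the error,
# adjust the parameters, and repeat. Assumes PyTorch; the data is a toy
# problem (learn y = 2x + 1) invented purely for illustration.
import torch

x = torch.linspace(-1, 1, 100).unsqueeze(1)   # "processed" input data
y = 2 * x + 1                                  # target labels

model = torch.nn.Linear(1, 1)                  # parameters to be learned
loss_fn = torch.nn.MSELoss()                   # evaluates the result
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):                        # training iterations
    prediction = model(x)                      # the algorithm generates a result
    loss = loss_fn(prediction, y)              # the result is evaluated...
    optimizer.zero_grad()
    loss.backward()                            # ...and the error signal
    optimizer.step()                           # updates the parameters

print(model.weight.item(), model.bias.item())  # approaches 2 and 1
```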

Three Current Problems Of AI

AI currently faces several problems and challenges that still have to be solved. There are costs, for example for training the models and for computing capacity during operation. Its makers also have to deal with the legal aspects of AI training, such as copyright, data protection, and the exploitation rights of the results.

  1. Static database and objectification: The quality of the database, in terms of correctness and up-to-date information, is a central aspect. This data forms the basis for generating results, which are often presented without naming probabilities or sources. Because of that, algorithms sometimes tend to cement subjective statements, which can lead to alternative results being disregarded. And since AIs cannot evaluate the context of their results, they sometimes create plausible-sounding and consistent but wrong or misleading answers. This becomes especially obvious with trick questions or questions about possible future events.

  2. Lack of creativity and poor transfer performance: AIs are trained to recognize patterns and statistical relationships and to choose the solution with the highest probability (see the sketch after this list). But they deliver pretty poor results when faced with entirely new (and untrained) situations. The reason is that AIs are optimized for specific use cases and cannot access general knowledge.

  3. Missing ethical and moral compass: AI has no emotional or empathic intelligence. It is therefore well suited to solving rational and emotionless problems. But as soon as ethical or moral challenges have to be solved, its possibilities suddenly become limited. It simply cannot suggest “correct” solutions in such situations, because today's AI is still nothing more than a sophisticated calculator fed with complex algorithms. For this reason, strictly speaking, one cannot really speak of AI at all.
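To illustrate the "highest probability" point from item 2: a model always produces a confident-looking ranking, even for an input it was never trained on. The scores in this plain-Python sketch are invented purely for demonstration:

```python
# Illustration of "choose the most probable option": even for an input
# the model was never trained on, softmax always yields a ranking, and
# the top option gets picked. The scores below are invented.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["answer A", "answer B", "answer C"]
scores_for_unseen_input = [0.4, 0.1, -0.2]     # nearly meaningless scores

probs = softmax(scores_for_unseen_input)
best = max(zip(labels, probs), key=lambda p: p[1])
print(best)  # 'answer A' wins, even though no option is clearly supported
```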

I suspect the term AI was created in some marketing department as a way to sell and promote products under a new flag. Thus, in today's sense, AI is actually a pseudo-artificial intelligence. And therefore, it has no understanding of ethics or morality (Blake Lemoine is wrong). The fact is, it has no understanding at all, and it can't understand anything. If artificial intelligence had moral concepts and an understanding of humans, then it would be a real AI, and you could compare it to the robots from the movies “Bicentennial Man” or “Ex Machina”. But it will surely take another 100 years until then. Or maybe less, if quantum computing makes tremendous progress over the next 50 years?

General AI Is The Next Milestone

The current limitations of AI systems in terms of technology, creativity, and ethics will require significant advancements for the next generation of AI - general artificial intelligence. Unlike specialized AI, which is designed to solve specific tasks, general AI should possess cognitive abilities similar to those of humans, enabling it to overcome these limitations and approach human intelligence. This should form the foundation for the next generation of AI - super AI - which will ultimately surpass human intelligence.

However, achieving this goal requires further development of models, algorithms, hardware, and infrastructure. Technological advances, such as the development of optical or quantum computers, are necessary. There are also ongoing efforts to reduce response and latency times, shift computing toward decentralized edge devices, and reduce the dependency on central data centers. According to Sam Altman, CEO of OpenAI, fully mature general AI is not expected to be available until the next decade.

Dynamic And Competitive Landscape

AI applications and competition are characterized by a large number of initiatives, projects, and start-ups. Many of them come from the university environment but also from large multinational companies. These companies have the means to bring ideas to market and commercialize them.

A variety of language models already exist, of which BigScience's Bloom and Meta's OPT come closest to ChatGPT. Besides their data basis and algorithms, these models differ in the way they are trained. Bloom, for example, is trained automatically and, unlike ChatGPT, is an open-source language model.

In addition, there is already a growing number of AI-based applications for private end users: for example, image generators (Starryai, Point E, DALL-E 2), text search and editing (Socratic, Talk to Books), or personality development (Replika, Character.ai). Some applications are still in the test phase but are already attracting increasing attention, such as "GPT detector" or "AI Text Classifier", which aim to recognize computer-generated texts.


Which AI-related Stocks Should Investors Pick?

Developments in the AI markets are moving at a very fast pace. Therefore, it’s difficult to say which part of the value chain is the most attractive from an investor's point of view. The value chain is multi-layered: it consists of semiconductors, hardware infrastructure, models and algorithms, the provision of the technical platform, and end applications. Hence, I will limit myself to the large companies that have made substantial in-house developments and are directly or indirectly affected by the changes caused by AI.

At this point, you should first read the disclaimer:

I am NOT a financial advisor. I’m using information sources believed to be reliable, but their accuracy cannot be guaranteed. The information I publish is not intended to constitute individual investment advice and is not designed to meet your personal financial situation. The opinions expressed in such publications are those of the publisher and are subject to change without notice. You are advised to discuss your investment options with your financial advisers, and whether any investment is suitable for your specific needs. I may, from time to time, have positions in the securities covered in the articles on this website.

Microsoft - Integration Of OpenAI Features Improves The Competitive Position

Microsoft recognized the potential of AI early, and after three rounds of financing (in 2019 around $1 billion, in 2021 around $2 billion, and in 2023 $10 billion), it currently holds around 49 percent of OpenAI.

The company supports OpenAI with computing capacity and cloud services and receives, among other things, exclusive rights to use several OpenAI applications. The AI features will be integrated into various Microsoft applications, for example GitHub Copilot, Bing, and the Office applications.

Microsoft also makes several OpenAI functions available to customers through its Azure cloud, including DALL-E 2 for generating realistic images from text input. But ChatGPT, in particular, is currently attracting a lot of attention.

Nevertheless, the directly measurable impact on Microsoft's sales is not yet predictable, and the costs or negative effects on the operating result from the necessary investments in AI cannot yet be calculated. Above all, though, there are the positive effects that should result from improved competitiveness of individual products and applications.

In the recent past, Microsoft has repeatedly proven that new technologies can be quickly incorporated into its product portfolio and commercialized. The company has already announced the integration of AI functionality into the next versions of Bing and Edge. Bing, with its market share of approx. 9 percent, will in the future be able to present results as a summary that goes beyond the classic website search, and the user can then evaluate, refine, and narrow down the result. In the first tests, the quality of the results was still mixed, but the AI learns very quickly and errors are being corrected. Also new is that this AI, in contrast to ChatGPT, can process up-to-date information.

Alphabet – Markets Expect An AI Offensive

The risk that Google could lose market share (approx. 88 percent) to an AI-enhanced Bing has increased. However, user behavior has been fairly stable in the past: in most cases, users search for websites and information, not for an analysis or summary of the results. In addition, Google has been active in AI for years, for example with the Language Model for Dialogue Applications (LaMDA). The current Google Switch C language model comprises 1.6 trillion parameters, and its sister company DeepMind has already developed AI models that are significantly larger than the one behind ChatGPT (175 billion parameters).

But DeepMind's models were designed for other areas of application. At an AI event, Google has already shown many new, smaller AI features for its search services and for Google Maps. In the future, Google Lens will refine photo search, for example to automatically recognize products in images. Significant improvements in machine-translated texts are also to be expected. Google has also previously announced its own chatbot, Bard. This communication assistant based on LaMDA is meant to help users; when buying a car, for example, it could recommend car models with a list of pros and cons. However, the event was not considered a big hit, and there were initially no real surprises. Hence, in a direct comparison, Microsoft is a bit ahead.

Meta Platforms - AI Is The Central Block Of The Metaverse

Like Microsoft, Meta has already had to accept some setbacks with chatbots (BlenderBot, Galactica). However, Meta's OPT (Open Pre-trained Transformer) language model already delivers decent results, even if they are not as good as ChatGPT's. Nevertheless, the use and commercialization of OPT could be the first technological wave to reach the future Metaverse (which is a pretty damp squib at the moment). Meta already relies on its AI discovery engine: Facebook & Co. are no longer organized around people and the topics or groups they follow. Instead, users are increasingly shown suitable content that has been recommended and selected by the AI.

At the same time, the feeds improve the relevance of advertisements and the possibilities for monetization. This works particularly well with Instagram and the increasingly popular Reels.

Nvidia - AI Dynamics Already Support Growth Expectations

The company will continue to benefit from the fact that graphics chips, which accelerate parallel matrix calculations, are essential for training and running AI models. This development already supports the growth expectations for Nvidia's data center business (approx. 40% of sales in 2022). Added to this is the accelerating adoption of the new H100 product launched in 2022. Cloud providers and so-called hyperscalers such as Amazon, Google, Alibaba, and Microsoft are already investing heavily in GPUs from Nvidia, among other things to train their own AI systems.

Intel - Corporate Transformation Overshadows Positive Impact Of AI

Intel's AI sales sit in the Accelerated Computing Systems and Graphics segment as well as in the growing data center (DC) and AI area. However, in the pure server CPU segment, Intel has been losing market share for some time to AMD, but also to Nvidia. Intel also continues to face a capital-intensive transformation of its business model, accompanied by weak economic demand. In the long term, though, Intel still has enormous potential to position itself at the forefront of the competition in the field of AI and to maintain that position.

Samsung - Always Good For A Surprise

In addition to computing capacity, AI applications also require large storage capacity. Samsung, as the market leader in memory chips, could benefit from AI applications in the medium term. But in my opinion, even a significant AI-induced surge in demand for memory chips would have only a minimal positive impact on the currently disrupted market balance. In the foundry segment, for example, Nvidia's preferred logic chip manufacturer is currently not Samsung but TSMC. However, Samsung has been very active in AI development for years, so it is realistic that this innovation leader could surprise the market with its own AI solutions.

Conclusion

In my opinion, ChatGPT is neither hype nor a revolution. It’s a major evolutionary step towards an AI that can also be experienced by the end user. Relieving employees through the use of AI is only one aspect; the further goal is to promote people's creativity by showing possibilities, especially in complex contexts. The current, highly specialized AIs can meet this requirement only insufficiently. Another weak point of specialized AI is the quality of the database and algorithms, which sometimes tend to present subjective statements as objective.

For this reason, it is currently not worth investing wildly in AI ETFs, because AI, just like graphene, is still not ready for the markets, even if some people don't want to admit it. Anyone who wants to invest in AI would do better to examine the big software companies more closely.

Nevertheless, more and more (self-proclaimed) AI experts are now taking the stage, telling us why AI is the investment opportunity par excellence. And, of course, social media will help drive this hype, completely unselfishly, with countless clicks. On Wall Street, the IPOs of the next adventurers of fortune are stepping in. In the never-ending game of greed and fear, more and more investors will jump on the AI train. Because the benefits are obvious, right?

Of course, for a reasonable and considered investor, it pays to watch this big game and jump in when the time is right. But when is the right time? You can never pick the right time, because nobody can see into the future. What investors can do is not let themselves be influenced by the hype but analyze the situation rationally. In addition, they can invest small sums at the beginning to see how the securities develop.

With this in mind, I wish you much success with your investments!


Written by maxdos | First and foremost, I'm a computer nerd with a focus on workflow processing. But I'm also a passionate trader and investor.
Published by HackerNoon on 2023/04/17