The number of software developers globally is projected to nearly double by 2030, yet InterSystems research has found that more than 8 out of 10 developers currently feel they work in a pressured environment. Creating a better experience for developers is key to driving innovation, but the current data environment continues to evolve in ways that challenge that experience at every turn.
Data supply is exploding across every industry, to the tune of 2.5 quintillion bytes daily. From financial services and healthcare to supply chain and logistics, every activity generates new and useful information in ever larger volumes. With larger volumes also come new types of data and analytics, from traditional BI and online analytical processing to machine learning, deep learning, and statistical analysis. Much of that data is siloed, making it increasingly difficult to consume, yet developers are expected to grapple with it regardless. For those looking to succeed in this rapidly changing developer environment, here are three budding trends from 2021 that will become major themes in 2022.
In 2021, we saw technology vendors expand their platforms to more programming languages to provide greater usability across different developer bases. Python, for instance, reigns as the most popular programming language globally, and at InterSystems we added full server-side support for it to our InterSystems IRIS data platform.
With Python embedded into the platform, developers can now run Python directly in the database. Because the Python code executes in the same process as the database engine, there are no performance penalties and no communication overhead between the code and the data, which gives developers extremely fast access. This is a trend we’ll see much more of in 2022 as developer productivity becomes harder to attain.
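Embedded Python in InterSystems IRIS only runs inside an IRIS process, but the in-process idea it relies on can be illustrated with Python’s built-in sqlite3 module, which likewise executes SQL in the same process as the application, with no driver round trip or network hop. A minimal sketch of the concept (the table and data here are invented):

```python
import sqlite3

# sqlite3 runs inside the application process, so queries execute
# without client/server communication overhead -- the same in-process
# model that embedding Python in a database platform provides.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("jeans", 2), ("sweater", 1), ("jeans", 3)])

# Query and aggregate directly in-process: no serialization boundary
# between the application code and the data.
total_jeans = conn.execute(
    "SELECT SUM(qty) FROM orders WHERE item = ?", ("jeans",)
).fetchone()[0]
print(total_jeans)  # 5
```

The benefit scales with chattiness: when code and data share a process, a loop issuing thousands of small queries pays no per-call network cost.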
Most of us in the IT space are well aware of this stat: Gartner predicts that worldwide public cloud revenue will grow by 23% in 2021 to nearly $332.2 billion, up from $270 billion last year. Gartner also predicts that the SaaS cloud application services market will consistently constitute at least one-third of total public cloud revenue for the next four years. In 2021 alone, the SaaS market will likely surpass $123 billion in revenue, and it is expected to grow even faster in 2022, reaching $145 billion by year's end.
Accelerated digital transformation has in turn accelerated cloud migration, and many are realizing that the potential of the cloud extends far beyond increased capacity and performance. Cloud-based services are bringing new levels of productivity and greatly reducing complexity for developers across industries. In healthcare, for instance, many vendors are releasing services to increase interoperability for the massive amounts of data and applications in systems across the country. At InterSystems, we introduced our InterSystems IRIS FHIR Accelerator Service, a fully managed, enterprise-grade server for HL7® FHIR (Fast Healthcare Interoperability Resources) that provides developers an easy, secure, and scalable repository for storing and sharing healthcare data for their applications. This allows developers to use InterSystems IRIS on AWS and pick a specialized service for whatever they need to get done.
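FHIR represents healthcare data as typed JSON resources that a FHIR server exposes over a REST API, which is what makes a managed repository like this straightforward for application developers to consume. A minimal sketch of reading a Patient resource, using a simplified example resource rather than a live server response:

```python
import json

# A simplified HL7 FHIR Patient resource, shaped the way a FHIR
# server would return it from GET /Patient/{id}. The content here
# is illustrative, not taken from any real system.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"

# FHIR resources have a regular structure, so application code can
# navigate them with plain dictionary access.
name = patient["name"][0]
display_name = f'{name["given"][0]} {name["family"]}'
print(display_name)  # Peter Chalmers
```

Because every FHIR server speaks this same resource model, the application code stays the same whether the backend is self-hosted or a managed service.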
As the cloud becomes pervasive, storing and processing information becomes less expensive and more streamlined. This trend is creating a microservices environment in which any organization can gather data, build an app, and start deriving insights.
Despite growing in popularity over the last year, data lakes and cloud data warehouses aren’t always the solution to traditional data management challenges. Today’s data warehouses are collecting immense amounts of data, more than may have been anticipated when these technologies were originally implemented. And while data lakes have helped organize this raw data into central repositories, they typically remain outside operational and transactional data flows.
This is where modern data architectures, such as data fabrics, come into play. Not only do data fabrics effectively organize data sets into fields that help identify the most actionable, high-quality resources, but each deployment can also be tailored to a specific, IT-driven purpose. Without a well-orchestrated architecture, data remains inaccessible, or at best inefficient to address, and effectively wasted, regardless of where it sits within the data lake or warehouse.
Data fabrics will continue to grow in use and popularity in the coming year as a way to get a holistic view of the entire life cycle of your data: how it’s shared, how it’s used, and how it’s analyzed, modeled with machine learning, and visualized. Every organization has a strategy that touches all of these components, but data fabrics reduce the friction and knit the components together in one platform, making the full life cycle easy to see.
Developers have struggled to operationalize machine learning in the past, but this holistic view makes it much easier. Say you want to create a recommendation service in retail: someone puts jeans in their shopping cart, and your service matches them with a sweater that goes with those jeans. That takes a lot of deep learning. Traditionally, after the machine learning team creates the model, it has to go through the developer team to be operationalized, extra work that most developers don’t want to take on and that stalls production. A data fabric helps the machine learning team operationalize immediately, because they can see what data is being fed into the algorithm and can put the service out on the web automatically.
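A production version of the retail example would use a trained deep learning model, but the shape of the service being operationalized can be sketched with a toy co-occurrence recommender. Everything here, including the cart data, is invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for a trained recommendation model: count how often
# items were bought together in past carts, then recommend the item
# most often seen alongside what's in the current cart.
past_carts = [
    {"jeans", "sweater"},
    {"jeans", "sweater"},
    {"jeans", "sweater", "belt"},
    {"jeans", "belt"},
    {"sweater", "scarf"},
]

co_counts = Counter()
for cart in past_carts:
    for a, b in combinations(sorted(cart), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(cart):
    """Return the item most frequently co-purchased with the cart's contents."""
    scores = Counter()
    for item in cart:
        for (a, b), n in co_counts.items():
            if a == item and b not in cart:
                scores[b] += n
    return scores.most_common(1)[0][0] if scores else None

print(recommend({"jeans"}))  # sweater
```

The data fabric’s contribution in this scenario isn’t the model logic; it’s that the team building `recommend` can see and reuse the same purchase data the rest of the organization uses, instead of waiting for a hand-off.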
That’s just one example of how we’ve seen the use of data fabric evolve in 2021, and how it will continue to evolve in 2022.
In 2022, data and analytics professionals will need to understand their data consumption and regulatory compliance needs in order to help developers properly use new architectural paradigms that integrate a composable stack. A lack of understanding often creates complexity, or worse, points of failure. Appointing a Chief Data Officer (CDO) is one strategy to foster top-down data governance and provide the organizational support a cohesive data strategy requires.