Building a Custom Solution for Financial Analytics

by Intellectsoft, November 22nd, 2021

Too Long; Didn't Read

One of the world's leading investment funds turned to Intellectsoft to create a custom solution for financial analytics. The central purpose of this product is to provide accessible data analytics by pulling information from different sources together and providing a visual representation of the results. The application also makes it possible to use historical records for truly in-depth market research, data integrity management, and a consistent workflow for multiple users and roles. To meet rigorous enterprise-grade performance, security, and compliance requirements, Intellectsoft's software engineers decided to rely on fully managed cloud services.


As they search for better data availability, usability, and connectivity across the globe, companies need reliable systems to minimize risks and turn their daily activities into a more controllable workflow. When it comes to banking and financial services, the ability to turn data into an all-encompassing efficiency analysis becomes even more critical for business. To meet this challenge, a world-leading investment fund turned to Intellectsoft to create a purpose-built software solution based on the concepts of infrastructure as code, blue-green deployment, and continuous delivery.


Not all applications and services are equal; each has its own set of specific features and functionality. Paying particular attention to data analytics and security, our team at Intellectsoft built the system with a set of capabilities that includes disaster recovery, data replication, managed service identity, a web application firewall, and more. Because such capabilities vary widely from platform to platform, we chose Microsoft Azure Cloud, which offers a dedicated service in each of these categories.


Of course, that does not mean the same objectives cannot be achieved when a provider lacks a dedicated service such as disaster recovery (Google Cloud, for example, is certainly capable of supporting that functionality). But to meet rigorous enterprise-grade performance, security, and compliance requirements, our software engineers decided to rely on fully managed cloud services. Yevhen Kulinichenko, Software Architect at Intellectsoft, shares how this approach helped the team quickly build, deploy, host, and scale the system, its functionality, and its APIs.


Application Overview

The central purpose of this application is to provide accessible data analytics by pulling information from different accounts together and providing a visual representation of the results (graphs, diagrams, tables, and more). The final data output was delivered to stakeholders in Excel file format.


Working with files manually, creating and updating spreadsheets, consolidating results into a single source of truth, and (even more importantly) getting actionable insights on the go, all by hand, is hard even to imagine for analysts and other specialists.

Upon completion, this custom software development project allowed the owner to automate the majority of its data processing.


Instead of wasting precious time and mental energy, the responsible roles can now easily access, upload, and distribute data in Excel format within a single system interface. The application also makes it possible to use historical records for truly in-depth market research, data integrity management, and a consistent workflow for multiple users and roles.

Challenges

Today’s enterprise-grade companies commonly run a set of different systems in their IT infrastructure. With this in mind, Intellectsoft conducted thorough research to define the scope of the project. Considering that most of these systems are based on Microsoft products, we decided to go with Microsoft Azure Cloud hosting to simplify future integrations, especially over the long haul.


List of Services

  • Azure App Service for the API hosting
  • Azure SQL Databases for the structured data
  • Azure Storage for storing assets
  • Azure Storage with the static website hosting for hosting web apps based on the SPA approach and React.js technology
  • Other services for security, routing, and the rest of the functionalities are listed below.


Azure Pipelines

As part of the Azure DevOps suite, Azure Pipelines is one of the fastest-evolving platforms for implementing Continuous Integration and Continuous Delivery (CI/CD), offering a vast range of valuable features and utilities in terms of usability, efficiency, and reliability. A set of different pipeline triggers and manual approvals for pipeline runs are good examples. These help eliminate human error by automating every process and operation where automation is applicable.


Hosting & Data Storage

Network bandwidth and application performance are among the most frequent issues observed in software systems today. Reliable file delivery in this system relies on direct access to the files stored in Azure Blob Storage through shared access signatures (SAS). For complete information about this approach, check out the official documentation. The core idea is the following (a sketch is shown after the list):

  • The client application needs a file.
  • The client application sends the API a request with an intent to download the file.
  • The API validates the request in terms of permissions and any other constraints, if needed.
  • The API generates the SAS and responds with a direct URL to the file, with the SAS included.
  • The client app downloads the file directly from Blob Storage using the provided URL.
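
A minimal sketch of this flow in TypeScript, assuming a shared-key credential on the API side; the account, key source, container, and function names are illustrative and not the project's actual code:

// Hypothetical sketch of the SAS-based download flow.
import {
  BlobSASPermissions,
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
} from "@azure/storage-blob";

const account = "analyticsassets";             // placeholder storage account name
const accountKey = process.env.STORAGE_KEY!;   // placeholder key source
const credential = new StorageSharedKeyCredential(account, accountKey);

// API side: after the request has been validated, build a short-lived,
// read-only URL that points directly at the blob.
function buildDownloadUrl(containerName: string, blobName: string): string {
  const sas = generateBlobSASQueryParameters(
    {
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse("r"),       // read-only
      expiresOn: new Date(Date.now() + 10 * 60 * 1000), // valid for 10 minutes
    },
    credential,
  ).toString();
  return `https://${account}.blob.core.windows.net/${containerName}/${blobName}?${sas}`;
}

// Client side: download the file directly from Blob Storage, bypassing the API.
async function downloadFile(url: string): Promise<Blob> {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Download failed: ${response.status}`);
  return response.blob();
}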

(Software) Engineering

Continuous Integration and Delivery

The CI/CD processes were created with the “build once, deploy everywhere” concept in mind, meaning revisions from every VCS branch can be built and deployed across different environments. This makes development faster and much more predictable, as the build process (CI) produces an immutable artifact. Later on, it can be deployed to dev/test/UAT/production environments with no need to perform additional operations on this artifact. All necessary parts are simply packaged into it: built code, dependencies, assets, etc.


On the other hand, the deployment process (CD) takes the built artifact, and the only things that change are the environment-specific configuration settings. After the configuration values are specified, the artifact goes to the cloud, and there is no need to perform additional verification of the system on the higher environment, as everything works in exactly the same way as it did on the previous one. In addition, this approach eliminates the “works on my machine” issue: any developer can simply download the artifact and run it on a local PC, and there will be no difference between the application version running locally and in any environment.
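
A minimal sketch of this idea, assuming the environment-specific values arrive as environment variables set by the deployment pipeline; the setting names are illustrative:

// The same immutable artifact runs everywhere; only these values differ
// between dev/test/UAT/production. The variable names are placeholders.
interface AppConfig {
  environment: string;
  apiBaseUrl: string;
  keyVaultUrl: string;
}

export function loadConfig(): AppConfig {
  const required = (name: string): string => {
    const value = process.env[name];
    if (!value) throw new Error(`Missing required setting: ${name}`);
    return value;
  };

  return {
    environment: required("APP_ENVIRONMENT"),
    apiBaseUrl: required("API_BASE_URL"),
    keyVaultUrl: required("KEY_VAULT_URL"),
  };
}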


Infrastructure as Code

One of the most vital steps in development is infrastructure deployment. As the system evolves, the codebase is not the only thing that changes: the set of infrastructure elements and their configurations is updated as well. The chances are that something may eventually be missed or wrongly configured right before deploying the system to the live environment. To avoid this, we decided to take the Infrastructure as Code (IaC) approach to development and deployment.


All cloud resources are described as code and stored in the VCS in the same way as the system code. Before deploying the application, the deployment pipeline deploys infrastructure changes. If there is any additional cloud resource introduced during the development phase, then it is created via code during this stage.


The next stage then moves the application to this new resource. A similar scenario applies to any removed resources or updated configurations: redundant and unnecessary leftovers are simply removed to save cost, and anything new is reconfigured as needed. Since the infrastructure lives in the VCS, practices like version history and code reviews apply to it in the same way they have applied to the application code so far.


There is a wide range of technologies that can be used for IaC: Azure CLI, Azure Resource Manager (ARM) templates, Pulumi, and Terraform. For this system, Terraform was selected as the most widely used technology on the market, notable for its multi-cloud support.
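
The project itself uses Terraform. Purely as an illustration of the same idea in TypeScript, here is roughly what a fragment of such infrastructure code could look like with Pulumi, one of the alternatives listed above; the resource names, SKU, and region are assumptions, not the project's actual configuration:

// Hypothetical IaC fragment using Pulumi with the azure-native provider.
import * as resources from "@pulumi/azure-native/resources";
import * as storage from "@pulumi/azure-native/storage";
import * as web from "@pulumi/azure-native/web";

// Resource group for one environment (e.g. the blue instance in West Europe).
const rg = new resources.ResourceGroup("analytics-rg", { location: "westeurope" });

// Geo-redundant storage account for assets, so the data is replicated to a second region.
const assets = new storage.StorageAccount("analyticsassets", {
  resourceGroupName: rg.name,
  sku: { name: storage.SkuName.Standard_GRS },
  kind: storage.Kind.StorageV2,
});

// App Service plan and web app hosting the API.
const plan = new web.AppServicePlan("api-plan", {
  resourceGroupName: rg.name,
  sku: { name: "P1v2", tier: "PremiumV2" },
});
const api = new web.WebApp("analytics-api", {
  resourceGroupName: rg.name,
  serverFarmId: plan.id,
});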

Having the Terraform code in the VCS and running it in the deployment pipeline makes it possible for developers to work even without access to the production infrastructure, with no issues or drawbacks expected in the development and troubleshooting processes.


Blue-Green Deployment

Even when nearly everything runs on autopilot, there is still a potential gap, this time in the application code itself, which may contain mistakes. Should that happen in one of the main components of the system, the business process can be significantly disrupted or even completely broken. With these precautions in mind, the release process is built on top of the Blue-Green deployment approach, which means keeping two instances of the application, running different versions, up and running at the same time (for example, production itself and a production candidate).


With this approach, the deployment steps are as follows (a simplified sketch follows the list):

  • deploying the new version to the production candidate environment
  • ensuring that the production candidate has no issues
  • routing traffic from real users to the production candidate
  • making the production candidate the production and vice versa; after the environments are swapped, the next deployment follows the same steps.
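
A simplified sketch of this sequence; every helper function here is a hypothetical placeholder for the real pipeline tasks (deployment, smoke tests, Front Door rule updates), not the project's actual implementation:

// Hypothetical blue-green release orchestrator with placeholder helpers.
type Slot = "blue" | "green";

interface ReleaseState {
  production: Slot; // slot currently serving real users
  candidate: Slot;  // slot receiving the next version
}

// Placeholder implementations; in the real pipeline these would call the
// deployment tooling, a test suite, and the Azure Front Door configuration.
async function deployRelease(version: string, slot: Slot): Promise<void> { /* ... */ }
async function runSmokeTests(slot: Slot): Promise<boolean> { return true; }
async function pointTrafficTo(slot: Slot): Promise<void> { /* ... */ }

async function release(version: string, state: ReleaseState): Promise<ReleaseState> {
  // 1. Deploy the new version to the production candidate environment.
  await deployRelease(version, state.candidate);

  // 2. Ensure the production candidate has no issues.
  if (!(await runSmokeTests(state.candidate))) {
    throw new Error(`Smoke tests failed on ${state.candidate}, aborting the release`);
  }

  // 3. Route traffic from real users to the production candidate.
  await pointTrafficTo(state.candidate);

  // 4. Swap the roles: the candidate becomes production and vice versa.
  return { production: state.candidate, candidate: state.production };
}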


As the terms “production” and “production candidate” can be misleading or overly complex, the industry refers to them as blue and green. At any given moment, for example, blue is the production and green is the production candidate; after the next deployment, green becomes the production and blue the production candidate. Another reason for keeping two environments up and running is recovery and failover from Azure outages, which we cover below.

Disaster Recovery

Cloud Provider Infrastructure

The internal structure of a cloud provider (in this case Microsoft Azure) includes the following concepts:

  • Availability Zone (AZ): a data center and its physical location. Deploying the application to different availability zones is the first step toward high availability.
  • Region: a set of data centers located close to one another and connected by high-speed links. Deploying the application to different regions is the best practice for achieving high availability.


The maximum distance between AZs is 100 km, and the minimum distance between regions is 300 km, which minimizes the chance of simultaneous outages in different regions. Regions have their own identifiers: for example, westeurope is the location in the west of Europe, and eastus2 is the location in the east of the United States. For complete information about locations and regions, check out the official documentation. The system’s blue and green instances are located in two different regions, which keeps them isolated from each other.


Azure Front Door

Routing the traffic coming from the users is among the most important tasks in the process of failover or blue-green deployment. Changing the DNS records when we need to serve users from a new instance of the system looks reasonable, but there is still a problem with DNS caching, which can keep routing users to the broken instance even after the records change. Such behavior eventually leads to increased downtime.


There is a wide variety of well-known cloud offerings for this task. They are all based on the concept of an abstraction layer between front-facing URLs and backend services: the client application always addresses the front-facing URLs regardless of where the traffic is actually routed. Microsoft Azure has suitable options for different scenarios. For details, check out the official documentation.


The best fit for the system is Azure Front Door (FD):

  • we use HTTPS traffic and application-level routing for traffic coming to the API and the assets storage
  • we have a failover scenario, and Azure FD fully fits this need as it is a global service.


The set of FD rules is the following:

  • direct access to the blue instance
  • direct access to the green instance
  • generic access to the system, which points to blue or green instances based on the deployment.


Changing the configuration of the last rule allows us to route users to the required instance during a deployment or disaster recovery. Both scenarios are covered below.


Data Replication

In terms of deployments and failovers, there is a critical difference between applications and their data. An application is usually just a set of files, and upgrading it most commonly means replacing those files with newer ones. With data, the process is more complex: data is generated by the users and is not part of the application itself. So when the application gains new functionality (for example, support for a new file format), the deployment process also includes data migration.


In terms of failover, redirecting traffic to an application instance in another region is not enough on its own. The data needs to be reliably stored in both regions at the same time, so that after switching the application to another region, it also serves the data from that region. That way, the system stays up and running even if the first region goes completely down.


Two types of data are used in the system:

  • structured data in Azure SQL Database
  • assets in Azure Blob Storage.


The easiest way to keep data residing in two different regions in parallel is replication: all changes made in the primary source are automatically reflected in the secondary one. Both services support data replication across regions. For details about databases, check out the official documentation here. You can learn more about Azure Blob Storage here.
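
As an illustration of how the replicated copy can be read, here is a hedged sketch that falls back to the read-only secondary endpoint of a geo-redundant (RA-GRS) storage account; the account and container names are placeholders:

// Reading assets with a fallback to the secondary (replicated) region.
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

const account = "analyticsassets"; // placeholder account name
const credential = new DefaultAzureCredential();

// Primary endpoint (read/write) and the read-only secondary endpoint.
const primary = new BlobServiceClient(`https://${account}.blob.core.windows.net`, credential);
const secondary = new BlobServiceClient(`https://${account}-secondary.blob.core.windows.net`, credential);

async function downloadAsset(container: string, blobName: string): Promise<Buffer> {
  try {
    return await primary.getContainerClient(container).getBlobClient(blobName).downloadToBuffer();
  } catch {
    // If the primary region is unavailable, serve the replicated copy instead.
    return await secondary.getContainerClient(container).getBlobClient(blobName).downloadToBuffer();
  }
}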


Failover Summary

  • switch primary and secondary instances of Azure Blob Storage
  • switch primary and secondary instances of Azure SQL Database
  • switch the system’s connection strings to the database and assets storage to write the data to the new primary instances
  • switch the Azure Front Door rule to point to another instance of the system.


To make this happen in case of an outage, we implemented a PowerShell script that performs all these actions one by one. During the system's lifetime of about a year, no outage events have been recorded. Still, having a fully implemented and well-tested failover procedure remains one of the most mission-critical elements of the system.
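
The script itself is PowerShell; purely as an illustration of the ordering, here is a hypothetical TypeScript sketch in which each step is a placeholder for the corresponding Azure operation rather than the project's real code:

// Hypothetical failover runner. Each helper stands in for a real Azure
// operation (storage failover, SQL failover, app settings update,
// Front Door rule change).
async function failoverBlobStorage(): Promise<void> { /* switch primary/secondary storage */ }
async function failoverSqlDatabase(): Promise<void> { /* switch primary/secondary database */ }
async function updateConnectionStrings(region: string): Promise<void> { /* point the app at the new primaries */ }
async function switchFrontDoorRule(region: string): Promise<void> { /* route users to the other instance */ }

export async function runFailover(targetRegion: string): Promise<void> {
  // The order matters: data stores first, then application configuration,
  // and only then the user-facing routing.
  await failoverBlobStorage();
  await failoverSqlDatabase();
  await updateConnectionStrings(targetRegion);
  await switchFrontDoorRule(targetRegion);
  console.log(`Failover to ${targetRegion} completed`);
}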


Security

Azure Active Directory

Storing user credentials is among the key challenges faced by modern systems, and implementing an original solution would be a truly complex and expensive option. Third-party identity systems provide features that would otherwise come at a considerably high cost (passwordless login, multi-factor authentication, etc.), and they are also great for enabling Single Sign-On scenarios, where the user logs in once and receives access to a whole set of systems.


One of the systems leveraged here is Azure Active Directory. Access to this system and to other systems in the client's IT infrastructure is managed through a single Azure Portal. The system does not store any user credentials, and everything can be managed via admin access on the same portal.
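
On the SPA side, sign-in against Azure Active Directory typically goes through the MSAL library. A minimal sketch, assuming a hypothetical app registration (the client ID, tenant ID, and scope are placeholders):

// Browser-side sign-in with MSAL; configuration values are placeholders.
import { PublicClientApplication } from "@azure/msal-browser";

const pca = new PublicClientApplication({
  auth: {
    clientId: "<app-registration-client-id>",                   // placeholder
    authority: "https://login.microsoftonline.com/<tenant-id>", // placeholder
  },
});

export async function signIn() {
  await pca.initialize();
  // The user authenticates once against Azure AD; no credentials are
  // stored by the application itself.
  const result = await pca.loginPopup({ scopes: ["<api-scope>"] }); // placeholder scope
  return result.account;
}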


Secrets Management

Apart from user passwords, a secret is any piece of sensitive data used by the application to access a third-party API, a database, or any other resource that requires credentials. The best practice on the market is to use vaults. As the system is hosted in Microsoft Azure, it uses the Azure Key Vault service to store such data safely and securely.


Upon start, the system reads sensitive data directly from the Key Vault instead of keeping it in the repository accessed by developers. Permissions to read and write secrets in the Key Vault are granted only to specific roles and to specialists trusted by the client who are responsible for requesting these keys from third-party systems. To learn more about Azure Key Vault and its benefits, check out the official documentation.


Managed Service Identity

Once everything is in place with the Key Vault, the system still needs a way to access its secrets. If that access key were stored in the repository, everyone with repository permissions would effectively get the same level of access to Secrets Management by default. To give the system its own “personalized” identity instead, Microsoft Azure offers Managed Service Identity. For details, check out the official documentation.


After creating the Managed Identity resource, granting it access to read the Azure Key Vault instance, and assigning this identity to the Azure App Service, the system gains access to the secrets without any additional credentials stored in the code repository.
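
A minimal sketch of reading a secret at startup with the managed identity, using the Azure SDK for JavaScript; the vault URL and secret name are placeholders:

// On Azure App Service, DefaultAzureCredential resolves to the assigned
// managed identity; locally it falls back to the developer's Azure login.
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

const credential = new DefaultAzureCredential();
const client = new SecretClient("https://<vault-name>.vault.azure.net", credential);

export async function loadDbConnectionString(): Promise<string> {
  const secret = await client.getSecret("sql-connection-string"); // placeholder secret name
  if (!secret.value) throw new Error("Secret has no value");
  return secret.value;
}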


Web Application Firewall

Azure WAF is the last key element of security we are going to cover here. Just as with third-party solutions for authentication, using an already-built system is arguably the best way to stay protected against the most common cyber threats, both in terms of price and simplicity.


The WAF instance is configured to analyze the traffic coming to the system through Azure Front Door. As a result, all suspicious requests are blocked automatically, without involving the system itself or its code. There is a set of built-in managed rules covering standards such as the OWASP Top 10 vulnerabilities, and custom rules can be created on top of them based on specific patterns or logic.


When WAF verification fails, the request is blocked and the user receives a 403 error, which helps avoid system overload and, in case of an emergency, keeps the potential negative impact to a minimum. To learn more about Azure WAF, check out the official documentation.


Top Findings / Key Takeaways (for Business)

By bringing new entries and data processing operations up to speed, the system makes it possible for companies to become truly flexible and quickly adapt to the ever-changing market. The process of data collection and analysis remains fully controllable and disruption-free, for example in the case of sharp market fluctuations. Upon recording new conditions, an analyst can instantly upload all the necessary data to the system and update the information automatically, keeping responsible roles and stakeholders informed on short notice. After all, the ability to act proactively and make better decisions is what helps our clients stand out, saving many hours of labor and directly impacting their bottom line, especially over the long term.