In software development, building a robust, well-integrated solution is a central goal. This article examines a proposed software solution in depth, looking closely at its architecture, integration strategies, and deployment options. Along the way, it offers architects, developers, and IT professionals a practical roadmap for designing such a forward-thinking solution.
A context diagram (view) defines the solution's boundaries and its connections with external entities, such as users, external systems, and data sources.
As shown in the context diagram above, users interact with the proposed solution through a web-based user interface.
For obtaining input, the solution interacts with Online and Offline UML tools through their APIs, both to retrieve the initial Software Architecture and to save the extracted Software Architecture.
Interactions with Version Control Systems and the database provide access to code, configuration, and data. Finally, Report Storage is used to save the outputs extracted from the Architecture under analysis.
The sole distinction between the previous view and the Context View for a Separate Step in CI/CD is the integration of the proposed solution with the existing CI/CD pipeline.
The functional diagram below illustrates the high-level functional decomposition of the proposed solution. A legend explaining the color coding used in the functional decomposition diagram is provided below it.
The integration layer abstracts and isolates the system from external components, enhancing system extensibility and modifiability.
The following components are involved:
The user interface function interacts with users in two modes: architect mode and administrator mode. It is implemented as a web interface compatible with desktop browsers.
The data layer function maintains the following types of data:
The processing layer is a core function of the system, tasked with processing input data to identify architecture problems. It is configured through the architect panel, where checks against the input are initiated. It also collaborates with the report plugin engine for report storage, and the extracted architecture is preserved in Architecture baselines so that architects can review it later. Machine Learning (ML) may be employed during processing to improve the quality of the reports. A sketch of this flow is shown below.
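To make the flow concrete, here is a minimal sketch of the processing layer in Python. All names (Architecture, Finding, the cyclic-dependency check, the plugin protocol) are illustrative assumptions, not the solution's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Protocol

@dataclass
class Architecture:
    # Dependencies between components, extracted from the input documents.
    dependencies: list[tuple[str, str]] = field(default_factory=list)

@dataclass
class Finding:
    check: str
    message: str

# A check is any callable that inspects an architecture and reports findings.
Check = Callable[[Architecture], list[Finding]]

class ReportPlugin(Protocol):
    def store(self, findings: list[Finding]) -> None: ...

def cyclic_dependency_check(arch: Architecture) -> list[Finding]:
    """Example check: flag pairs of components that depend on each other."""
    pairs = set(arch.dependencies)
    return [Finding("cyclic-dependency", f"{a} <-> {b}")
            for a, b in pairs if (b, a) in pairs and a < b]

BASELINES: list[Architecture] = []  # stand-in for the Architecture-baselines store

def process(arch: Architecture, checks: list[Check],
            plugins: list[ReportPlugin]) -> list[Finding]:
    findings = [f for check in checks for f in check(arch)]
    BASELINES.append(arch)        # preserve the extracted architecture for review
    for plugin in plugins:
        plugin.store(findings)    # hand results to the report plugin engine
    return findings
```

Representing checks as plain callables keeps the layer open for extension: new checks and report plugins can be registered without modifying the processing code, which matches the integration layer's goal of extensibility.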
The deployment view illustrates how a solution is intended to be deployed, encompassing its flows and the supporting components.
On-premise deployment assumes the use of a single computing node for installation. The Database component should be deployed first, as other components depend on it. Subsequently, the Processor and ML components are deployed once the Database component is in place. The User Interface component, report plugins, and integration points are deployed in the final stage.
The Database component is deployed as a single DB instance with multiple schemas inside. The User Interface component is presented as a web service, while the Processor and ML components are native processes built for the target platform. Report plugins and integration points are designed as dynamic libraries that can be easily added, removed, and configured at runtime.
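As a rough illustration of that ordering, the sketch below encodes the on-premise rollout as a sequence of steps. The installer commands are placeholders, since the article does not specify how each component is packaged:

```python
import subprocess

# Deployment steps in dependency order: the database first, then the
# Processor and ML components, then the UI, report plugins, and
# integration points. Commands are placeholders for the real installers.
DEPLOY_ORDER = [
    ("database",           ["./install_db.sh"]),
    ("processor",          ["./install_processor.sh"]),
    ("ml",                 ["./install_ml.sh"]),
    ("user-interface",     ["./install_ui.sh"]),
    ("report-plugins",     ["./install_report_plugins.sh"]),
    ("integration-points", ["./install_integrations.sh"]),
]

def deploy() -> None:
    for name, command in DEPLOY_ORDER:
        print(f"Deploying {name}...")
        # check=True aborts the rollout if a dependency fails to install.
        subprocess.run(command, check=True)

if __name__ == "__main__":
    deploy()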
Cloud deployment involves the utilization of two computing nodes for installation. Due to the potentially high CPU usage of the Machine Learning process, it is recommended that a dedicated machine be allocated for this purpose within the cloud pipeline.
As previously mentioned, the proposed solution should seamlessly integrate with offline UML design tools, including:
Furthermore, the proposed solution should also integrate effectively with online UML design tools, which encompass:
Due to the absence of APIs in offline tools, collaboration with them is established through import mechanisms. Consequently, architectural documents saved in Microsoft Visio format will be imported by a plugin (integration point) and subsequently analyzed by the Processor. The path for scanning input documents will be stored in the System Configuration DB. To support this form of integration, the ability to parse various input formats must be implemented; a sketch of such an import plugin follows.
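Here is a minimal sketch of such an import plugin, assuming modern .vsdx files (which are ZIP packages of XML parts). The scan-path lookup is stubbed out, and older binary .vsd files would need a separate converter:

```python
import zipfile
from pathlib import Path

def scan_path_from_config() -> Path:
    """Stand-in for reading the scan path from the System Configuration DB."""
    return Path("/var/input/architecture-docs")  # illustrative location

def import_visio_documents(scan_path: Path) -> dict[str, list[str]]:
    """Collect the page XML of each .vsdx file for the Processor to parse.

    .vsdx files are ZIP packages containing XML parts; the pages live
    under visio/pages/ inside the package.
    """
    documents: dict[str, list[str]] = {}
    for vsdx in scan_path.glob("*.vsdx"):
        with zipfile.ZipFile(vsdx) as package:
            pages = [name for name in package.namelist()
                     if name.startswith("visio/pages/") and name.endswith(".xml")]
            documents[vsdx.name] = [package.read(p).decode("utf-8") for p in pages]
    return documents

if __name__ == "__main__":
    docs = import_visio_documents(scan_path_from_config())
    print(f"Imported {len(docs)} document(s)")
```

In production, the plugin would hand the extracted XML to the Processor's parser rather than returning raw strings.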
Online UML design tools typically offer APIs for integration, and the collaboration with such tools will make use of those APIs. However, some UML design tools might lack APIs for integration; in such scenarios, architecture documents will be imported into a designated folder and then parsed into the Processor, following the same approach outlined above for offline tools.
For online UML design tools, HTTP or REST APIs are commonly available. Accordingly, the integration point will access data within the UML design tool using the provided HTTP or REST API. Each integration with a specific tool will require a distinct integration point. For instance, integration with LucidChart will be realized using the HTTP protocol with GET and POST verbs. On the other hand, integration with Enterprise Architect will involve the use of a REST API with JSON as the data format.
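The sketch below shows the generic shape of such an integration point using Python's requests library. The endpoint paths and bearer-token authentication are assumptions for illustration; each real integration must follow the vendor's published API:

```python
import requests

def fetch_document(base_url: str, document_id: str, token: str) -> dict:
    """Generic GET against a tool's HTTP/REST API.

    The endpoint path and bearer-token auth are assumptions for this
    sketch; real integrations must follow each vendor's documentation.
    """
    response = requests.get(
        f"{base_url}/documents/{document_id}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def push_analysis_status(base_url: str, document_id: str, token: str,
                         status: dict) -> None:
    """Generic POST, e.g. to report analysis results back to the tool."""
    requests.post(
        f"{base_url}/documents/{document_id}/comments",
        headers={"Authorization": f"Bearer {token}"},
        json=status,
        timeout=30,
    ).raise_for_status()
```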
This section outlines how the proposed solution can seamlessly integrate with AWS, Azure, or GCP. Given the potential enhancements Machine Learning can offer in terms of result quality, it is advisable to allocate a dedicated compute node for ML tasks. However, in scenarios where the on-premise environment lacks the capability to support ML calculations, a viable solution is to leverage a Cloud Environment for both ML computation and overall computing needs.
Drawing from the deployment flows previously described, each component should be matched with the appropriate cloud provider service. For instance, in the context of AWS:
Similar mapping strategies can be applied for Azure and GCP, ensuring that each component seamlessly integrates with the suitable services provided by the respective cloud platforms.
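As one hypothetical illustration of such a mapping (the specific service choices below are assumptions for the sketch, not the article's prescription):

```python
# Illustrative component-to-service mapping for AWS; the choices here
# are assumptions, and direct equivalents exist on Azure and GCP
# (e.g., Azure SQL Database or Cloud SQL in place of RDS).
AWS_MAPPING = {
    "database":        "Amazon RDS (single instance, multiple schemas)",
    "user-interface":  "Amazon EC2 behind an Application Load Balancer",
    "processor":       "Amazon EC2 (general-purpose instance)",
    "ml":              "Amazon EC2 (dedicated compute-optimized instance)",
    "report-storage":  "Amazon S3",
}
```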
Dependency identification plays a crucial role in architecture analysis: it dissects the codebase into components and elucidates their interactions. Given that similar functions are already available in tools like JDeps, leveraging their output would logically bolster the implementation of the proposed solution. JDeps is the JDK's command-line Java class dependency analyzer. For systems built on the .NET framework, Net Dependency Walker is a suitable choice for ascertaining dependencies.
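For example, JDeps output could be consumed as shown below. The sketch assumes a JDK on the PATH and parses the summary (-s) format, whose exact layout can vary between JDK versions:

```python
import subprocess

def java_dependencies(artifact: str) -> list[tuple[str, str]]:
    """Run JDeps in summary mode and parse its "source -> target" lines."""
    result = subprocess.run(
        ["jdeps", "-s", artifact],
        capture_output=True, text=True, check=True,
    )
    edges = []
    for line in result.stdout.splitlines():
        # Summary output looks like "app.jar -> java.base".
        if "->" in line:
            source, target = (part.strip() for part in line.split("->", 1))
            edges.append((source, target))
    return edges

if __name__ == "__main__":
    for source, target in java_dependencies("app.jar"):
        print(f"{source} depends on {target}")
```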
As mentioned previously, one facet of the application is integration into a CI/CD pipeline. CI/CD is a composite process encompassing numerous stages, and a single pipeline brings many functions together into one comprehensive workflow.
The diagram provided illustrates a standard CI/CD pipeline, depicted as the first pipeline. This pipeline encompasses multiple sequential stages, which include:
At present, two prominent products, Jenkins and Bamboo, implement the CI/CD pipeline concept. In real-world projects, both are used to create pipelines tailored to specific requirements. Typically, each code commit triggers at least one pipeline, which orchestrates various processes.
In the context of integrating the proposed solution, the existing pipeline structure can be extended by invoking the proposed solution for architecture analysis. The augmented pipeline would then resemble the second pipeline depicted in the provided diagram; a sketch of such an added stage follows.
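For instance, the analysis could be added as a stage that runs a script like the one below and fails the build when problems are found. The arch-analyzer command and its flags are hypothetical placeholders for however the proposed solution is eventually exposed:

```python
import subprocess
import sys

def run_architecture_analysis(workspace: str) -> int:
    """Invoke the architecture analysis as an extra pipeline stage.

    The "arch-analyzer" CLI and its flags are hypothetical stand-ins
    for the proposed solution's actual entry point.
    """
    result = subprocess.run(
        ["arch-analyzer", "--input", workspace, "--report", "report.html"],
    )
    return result.returncode

if __name__ == "__main__":
    # A nonzero exit fails the Jenkins or Bamboo stage, blocking the
    # build on detected architecture problems.
    sys.exit(run_architecture_analysis(sys.argv[1] if len(sys.argv) > 1 else "."))
```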