
It’s Time for Developers to Embrace Security as a Core Job Function

by Paul Garden, May 2nd, 2024

Developers never asked for software security to be a part of their jobs, but it’s become abundantly clear that they really don’t have a choice.


This isn’t to say that developers are the only party responsible for securing the software supply chain, of course. But as artificial intelligence (AI) and machine learning (ML) play an increasingly vital role in software development, considering security at the beginning of the software lifecycle has never been more important.


As the scale of AI continues to expand and large language models (LLMs) become more common, developers face the responsibility of integrating AI and ML models into both software updates and new software solutions. While the potential for innovation in AI/ML is immense, it also brings heightened risk, especially because many developers lack the training to manage this development securely.

Understanding the Impact of AI and ML

Every organization wants to leverage the power of AI and ML to benefit their businesses, but they need to do so with a full understanding of the risks they may face. For example, security lapses can inadvertently introduce malicious code into AI/ML models, creating vulnerabilities that attackers may exploit. Attackers may also entice developers into adopting compromised open-source software (OSS) model variants, which can expose corporate networks and cause significant harm to an organization.
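One practical defense is to pin every externally sourced model to a checksum recorded when it was first vetted, and to refuse to load anything that doesn’t match. The sketch below shows the idea; the model name, path, and digest are hypothetical placeholders, not a prescribed workflow.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests recorded when each model was vetted.
TRUSTED_CHECKSUMS = {
    "sentiment-classifier-v2.onnx": "<sha256 hex digest pinned at vetting time>",
}

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model artifacts don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path) -> None:
    """Refuse to load any model that isn't pinned or doesn't match its pin."""
    expected = TRUSTED_CHECKSUMS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not on the vetted-model list")
    if sha256_of(path) != expected:
        raise RuntimeError(f"{path.name} does not match its pinned checksum")

# Usage (path is illustrative):
# verify_model(Path("models/sentiment-classifier-v2.onnx"))
```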


Further, developers are increasingly turning to generative AI to produce code, often without full assurance of its security. This adds another layer of risk and creates a need for thorough code vetting from the outset to proactively mitigate potential threats to the software supply chain.
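One lightweight way to start that vetting, assuming a Python codebase, is to gate generated code behind a static security scanner before it can merge. The sketch below wraps Bandit, an open-source Python security linter; the `generated/` directory is an illustrative stand-in for wherever AI-generated code lands. The same gate can run as a CI step so nothing generated reaches the main branch unreviewed.

```python
import subprocess
import sys

def scan_generated_code(path: str) -> bool:
    """Run Bandit over newly generated code.

    Bandit exits non-zero when it reports findings, so a failing scan
    can be used to block a merge.
    """
    result = subprocess.run(
        ["bandit", "-r", path, "-q"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stdout, file=sys.stderr)
        return False
    return True

if not scan_generated_code("generated/"):  # illustrative directory name
    sys.exit("AI-generated code failed the security scan; refusing to merge.")
```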


The prevalence of these threats underscores the ongoing challenges startups face as malicious attackers look for new avenues to exploit AI/ML models. The deployment of AI and ML models isn’t likely to slow down anytime soon, which is a major reason developers need to integrate security into their roles and implement the defenses necessary to withstand evolving threats.


It’s clear that further education is needed — McKinsey found that even many high-performing organizations have yet to master AI adoption best practices, including machine learning operations (MLOps).


For a deeper understanding of the impact and implications of AI, startups might first consider building AI- or ML-driven software for internal teams rather than a customer-facing application. This teaches development teams the best practices of developing for AI/ML and instills confidence in the organization’s ability to deliver. Building for internal use cases first also helps teams (and the business) become more efficient and productive.

The Ever-Evolving Duties of Developers

Modern software development is a dynamic landscape, with potential threats around every corner. This makes considering security from the outset of the development lifecycle a crucial (albeit relatively new) practice for startups. Unfortunately, many organizations treat security at the binary level as a perk rather than a must-have. Attackers know this and hunt for vulnerabilities they can exploit to weaponize ML models and other software components.


Many developers, particularly in startup environments, lack the training needed to incorporate security measures during the initial stages of development. To compensate, teams may fast-track development by using AI-generated code, but this can lead to other serious issues.


For example, the AI might be trained on open-source repositories that were never thoroughly vetted for vulnerabilities or complete end-to-end security controls. While this tactic can save time and resources, it can also unwittingly expose the organization to risks that prove costly down the road. Once embedded in AI/ML models, these vulnerabilities may go undetected as their impact grows.
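AI-suggested code also tends to pull in third-party packages the team never consciously chose, so auditing dependencies against known-vulnerability databases is a natural checkpoint. Here is a minimal sketch, assuming a Python project with a pinned requirements.txt, built around the open-source pip-audit tool:

```python
import subprocess
import sys

def audit_dependencies(requirements: str = "requirements.txt") -> bool:
    """Check pinned dependencies against known-vulnerability databases.

    pip-audit exits non-zero when it finds vulnerable packages, so a
    failing audit can stop the pipeline before the code ships.
    """
    result = subprocess.run(
        ["pip-audit", "-r", requirements],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stdout, file=sys.stderr)
        return False
    return True

if not audit_dependencies():
    sys.exit("Dependency audit failed; review the findings before merging.")
```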


The era of widespread AI adoption has transformed the traditional developer role; junior software engineers often employed by startups may not have sufficient training or experience to navigate the evolving security landscape. As startups build out their teams, they must help their developers evolve into security professionals to support an organization-wide DevSecOps mentality.


By integrating secure solutions from the start, developers can achieve peak workflow efficiency while instilling confidence in the organization's overall security posture.


Attracting talent is always a challenge for startups, but the current state of the software development landscape could actually serve as a recruiting tool. By helping prospects understand the importance of baking security into software from the beginning of the development process — along with the fact that this is an industry trend that’s not going away soon — businesses create an attractive culture where developers expand their skill sets and further their careers.

Shifting Security Left as a Cultural Mindset

The security landscape for binaries and ML models demands continuous evolution to stay ahead of emerging threats, and it's a mindset that should filter down throughout a company's culture. As AI implementation becomes more widespread, startups can't afford to defer necessary security protocols to later stages of the software development lifecycle. By then, the risks may become insurmountable.


It has become clear in recent years that startup leaders should embrace the "shift left" mentality for software development, with a proactive emphasis on security from a project's inception. This ensures that every facet of the software development process prioritizes security, enhancing the overarching security posture of the organization.


When applied specifically to AI/ML, shifting left means verifying the security of code generated by external AI/ML systems and ensuring that models are free of malicious code and comply with licensing requirements.
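The licensing half of that check can be automated too. Below is a minimal sketch that assumes each vetted model ships with a JSON model card carrying an SPDX `license` field; the allowlist and file layout are hypothetical and would come from the organization’s legal and security teams.

```python
import json
from pathlib import Path

# Hypothetical allowlist of SPDX license identifiers approved by legal.
APPROVED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def check_model_license(model_card: Path) -> None:
    """Reject models whose declared license isn't on the allowlist."""
    metadata = json.loads(model_card.read_text())
    license_id = metadata.get("license")
    if license_id not in APPROVED_LICENSES:
        raise RuntimeError(
            f"{model_card}: license {license_id!r} is not approved for use"
        )

# Usage (path is illustrative):
# check_model_license(Path("models/sentiment-classifier-v2.json"))
```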


For startups (or any business, really), it's best not to think of shifting security left as a set of operational guidelines and procedures. It’s much more effective when it serves as a philosophy embedded into the organization’s culture.


It can't be overstated: implementing security measures from the beginning of the software development lifecycle is imperative if startups are to consistently thwart attacks. Doing so saves businesses the time and resources required to address costly and potentially harmful attacks, and it lets startups focus on what they do best: innovating.