The Customer Screen That Said, "Booga! Booga!"
When security professionals come around, I almost get the sense that software developers start to avoid us as if we were vacuum cleaner salesmen. In software development, time is a luxury. Time is money, and the difference between successful software deliveries and missed deadlines with unhappy customers. Developers insist that their environments be unstructured so they can create freely and troubleshoot and integrate code without constraints. Security is seen as costing time, money, and productivity. But there doesn't have to be a conflict between security and usability. In fact, following a few key secure development environment principles can save developers from disaster.
Years ago, I learned first-hand how security and development can come crashing down without a framework to ensure a safe environment. I started in information technology as a software tester in a small research and development group at an innovative telecommunications company. Though the Agile model of iterative development wasn't on the horizon yet, our team did a lot of "on the fly" requirements gathering, coding/testing, and implementation with customers to create the first software programs that integrated telephone functionality and customer account access in a single application on a call center agent's desktop. Multiple releases in a day, test-code-test-redesign-recode, with customer feedback along the way.
But one day, serious data processing errors started on the customer end, and code fixes only created more, followed by functionality issues. And then, the customer called the lead software engineer to ask why the message "Booga! Booga!" was displaying on agent screens under certain conditions. The engineer realized that in his haste to fix an issue, he had promoted his "small change" directly from coding to production without removing all of the debug messages! The issue was the last of many that, not long after, ended the company's contract with the largest telecom company in Britain and dashed its efforts to penetrate the European market.
How was this a security issue and not just a release management problem? Product development and test areas are closely related, if not connected, to production environments. The development oversight that day was a security issue because code promoted through a loosely controlled environment introduced software problems that degraded customer confidence in the product. It was just as damaging as if a hacker had inserted malware into the development environment and let it filter unnoticed through testing and the various preproduction environments, only to pop up in production. "Booga! Booga!"
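An incident like this can be caught by a simple automated gate between coding and production. As a minimal sketch (the debug patterns, file layout, and function names here are illustrative assumptions, not a complete or definitive check), a promotion script could refuse to ship code containing leftover debug output:

```python
import re
from pathlib import Path

# Patterns that suggest leftover debug output (illustrative, not exhaustive).
DEBUG_PATTERNS = [
    re.compile(r'print\s*\(\s*["\']DEBUG'),   # e.g. print("DEBUG: ...")
    re.compile(r'console\.log\s*\('),         # stray JavaScript logging
    re.compile(r'Booga!'),                    # the infamous placeholder message
]

def find_debug_leftovers(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, line) for every suspicious line under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in DEBUG_PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Wired into a promotion pipeline, a non-empty result would fail the build, forcing the developer to clean up debug messages before the change can reach production.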
A Framework of Key Secure Development Environment Principles
By following a framework of key secure development environment principles, software teams are a step closer to code that is less likely to be compromised by a malicious insider, an external hacker, or the team itself by accident. The following list is an overview of key principles for a secure development environment:
Prioritize Cyber Security in the Development Environment
Businesses often weigh security restrictions against creativity. They resist security measures on cost-versus-return grounds and can find it cheaper to pay regulatory fines or reimburse a customer when security issues arise.
But development environments are often the least understood areas in a company, and it is not uncommon to find connections between these environments, the internal corporate networks, and production. Open or mixed environments, and similarly lax procedures, expose the entire company, multiplying the effects of a mistake or malicious attack. It doesn't make sense for a software company to put security controls throughout its infrastructure but place minimal controls in development areas that hold critical corporate value. The old expression, "An ounce of prevention is worth a pound of cure," applies.
Make People Security Aware
Information security is not just about protecting data, but preventing the theft or manipulation of corporate resources. Educating the development teams on what resources are at risk and why practices are important demonstrates that security controls and mindset have a purpose. The awareness continues with drafting policies, standards, and operational guides for development environments. View these documents as design and build requirements, which developers are familiar with following, and provide security awareness training for added understanding.
Anybody who accesses a development environment needs to be aware that they play a role in addressing the risks to resources, corporate reputation, customer operations, and business costs through their actions and by following secure practices. Promoting awareness through policies and standards helps, but ultimately each person needs to apply them individually. Security awareness, combined with defined security practices and regular refresher training, keeps safe practices fresh in their minds.
Identify the Weakest Link
As the saying goes, "When being chased by a bear, I don't need to be faster than the bear, only faster than the person next to me!" No environment can be bulletproof against hackers, but defending its weakest links will encourage them to go somewhere else. Performing regular risk assessments is a recommended way to identify your weakest links and mitigate the risks. Risk assessments provide a consistent approach and openly rationalize how resulting decisions are made. Risk assessments and their mitigations increase an organization's trust in its infrastructure and give it a mechanism to manage the risks.
One weak link in development environments, and an easy one to address, is the use of Free and Open-Source Software (FOSS). This software is often unsupported, or its creator places limitations on its use, which exposes development efforts to malware if packages are not scanned, or to copyright issues if licenses are ignored.
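Part of addressing this can be as simple as checking each dependency's declared license against a company allowlist before it enters the build. The sketch below is a minimal illustration under stated assumptions: the allowlist and the dependency-to-license mapping are hypothetical, and a real project would pull license metadata from its package manager or a software composition analysis tool rather than a hand-built dictionary.

```python
# Licenses the company has approved for use (an illustrative allowlist).
ALLOWED_LICENSES = {"MIT", "BSD-3-Clause", "Apache-2.0"}

def check_licenses(dependencies: dict[str, str]) -> list[str]:
    """Given a mapping of dependency name -> license identifier,
    return the names whose license is not on the allowlist."""
    return [name for name, license_id in dependencies.items()
            if license_id not in ALLOWED_LICENSES]
```

For example, `check_licenses({"requests": "Apache-2.0", "mystery-lib": "GPL-3.0-only"})` returns `["mystery-lib"]`, flagging that package for legal and security review before it is used.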
Keep Production and Development Environments Separate
Within software companies and their outsource providers, it is not uncommon for development teams to have connections between production and preproduction (i.e., testing and development) environments or even internal corporate networks, in some cases sharing servers. While this practice has never been considered wise, when it happens it is often viewed as unfortunate but not worth the cost and disruption of separating them. These "mixed" environments can expose sensitive data, in some cases with the potential for significant government fines that could easily put a company out of business, along with the resulting lawsuits. For compliance and regulatory reasons, let alone security considerations, separating these environments is essential.
If your operational system handles credit card transactions, any environment that interacts with the PCI system must be PCI compliant as well, which might mean the entire development environment, or even a corporate network that is linked in some way to production. In the case of processing medical records where HIPAA applies, the operational system you thought was HIPAA compliant may actually be in violation of the law because of links to other environments where sensitive data could be exposed or accessed by people not authorized under the HIPAA rule.
Manage Your Data
While technical controls and separating environments help to secure the development environment and its data, they are not enough when handling sensitive data. Data should be classified for its sensitivity and handled accordingly. Depending on the type of data, such as live data pulled from production environments, steps might need to be taken to anonymize it or remove key values to meet compliance and regulatory requirements. One ramification to consider is whether special software is needed not only to obscure sensitive data but also to maintain the ability to trace that data across multiple sources.
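One common way to keep masked data traceable across sources is deterministic pseudonymization: a keyed hash maps the same real identifier to the same opaque token everywhere it appears, so records can still be joined without exposing the underlying value. The sketch below illustrates the idea under stated assumptions: the key, field names, and token length are hypothetical, and in practice the secret key would come from a secrets manager, never source control or the masked data itself.

```python
import hashlib
import hmac

# Illustrative only: a real key would be loaded from a secrets manager,
# rotated periodically, and never stored alongside the masked data.
SECRET_KEY = b"rotate-me-outside-source-control"

def pseudonymize(value: str) -> str:
    """Deterministically replace a sensitive value with an opaque token.
    The same input always yields the same token, preserving joinability."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def mask_record(record: dict, sensitive_fields: set[str]) -> dict:
    """Return a copy of the record with the named sensitive fields masked."""
    return {k: pseudonymize(v) if k in sensitive_fields else v
            for k, v in record.items()}
```

Because the hash is keyed, someone who obtains the masked data cannot reverse the tokens or regenerate them from guessed inputs without the key, yet testers can still trace one customer's records across multiple extracted data sets.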
Software developers more than ever need to work with "live data" that is as close as possible to what production sees. Increasingly, that data falls under privacy protections like HIPAA, PCI, and GDPR, or state regulations like the CCPA (California Consumer Privacy Act). In the case of data protected by the European GDPR, you must obtain permission from each end user whose data you intend to use. And even if the data does not require management for compliance and regulatory reasons, it still has value to competitors waiting to reverse engineer your software.
Use Layered Security Controls
In addition to the other principles, following a layered, defense-in-depth approach reduces the attack surface available to attackers, lowering overall risk. If one defense fails, attackers must overcome the next layer to succeed. A layered approach includes the following technical security controls:
- Firewalls and gateways
- Secure configurations
- Access controls and limiting user permissions
- Malware protection
- Configuration management
The technical controls identified are part of a larger set of 20 recommended control areas in the CIS (Center for Internet Security) framework, along with 149 recommended behaviors for securing systems. The more dynamic the divisions and practices between environments, the greater the risk, and as a result the greater the need for additional layers of security. However, monitoring the alerts and logs these systems generate is key to identifying malicious behavior, rather than relying solely on the tools themselves.
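As one concrete instance of the "secure configurations" layer, a periodic audit can verify that files in the development environment are not writable by everyone, a common misconfiguration that undermines access controls. This is a minimal sketch under stated assumptions: it checks POSIX permission bits only, and the list of paths to audit would come from your own configuration baseline.

```python
import os
import stat

def world_writable(path: str) -> bool:
    """True if the 'other users' write bit is set on the file."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

def audit_permissions(paths: list[str]) -> list[str]:
    """Return the subset of paths that are writable by everyone."""
    return [p for p in paths if world_writable(p)]
```

Running such a check on a schedule, and alerting on its findings, turns a static configuration standard into a monitored control, which is the point of the layered approach.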
Foster a Strong Partnership Between Security and Development
And the final principle, which really is the basis for the rest, is an open and working partnership between Information Security and Development. Ensuring that developers have an optimum environment in which to create products and services must be the shared goal of both. Development environments are often an overlooked area for security, but in most cases offer high value to hackers. Leaving them unprotected is not an option.
After Mastering the Key Principles
The principles listed are based on a variety of sources, including the framework detailed by NIST (National Institute of Standards and Technology) and a broader outline of practices from the CIS (Center for Internet Security). Federal and state governments, as well as private industry, are turning to NIST, CIS, and ISO 27001 for standardized guidance and as references for reviewing the security measures of outsource providers. Auditors point to these sources in their reviews. In addition, customer contracts for software services often require providers to adhere to NIST or ISO 27001 security principles.
Additional software development principles increase security within the software itself, but they cannot be fully effective if an underlying basis of secure development environment principles does not exist. The practices that should be built into the software itself will be the topic of a follow-on article.