Secure by Design - don’t forget about security

Abstract

  • You should use code security reviews as a recurring part of your secure software development process.
  • As your technology stack grows, it becomes important to invest in tooling that provides quick access to information about security vulnerabilities across the entire stack.
  • It can be beneficial to proactively set up a strategy for dealing with security vulnerabilities as part of your regular development cycle.
  • Pen tests can be used to challenge your design and detect microlesions caused by evolving domain models.
  • Feedback from a pen test should be used as an opportunity to learn from your mistakes.
  • Bug bounty programs can be used to simulate a continuous, never-ending pen test, but they’re complex and require a great deal from an organization.
  • It’s important to study the field of security.
  • Knowledge from different domains can be used to solve security problems.
  • Incident handling and problem resolution have different focuses.
  • Incident handling needs to involve the whole team.
  • The security incident mechanism should focus on learning to become more resistant to attack.

Conduct code security reviews

Code reviews are an effective way to get feedback on solutions, find possible design flaws, and spread knowledge about the codebase.

Use recurrent code security reviews to enhance the security of your code and to share knowledge. Make them a natural part of your development process.

There’s more than one way of doing a code security review:

  • focus primarily on the overall design of the code, while paying extra attention to things like the presence or absence of secure code constructs
  • focus on more explicit security aspects, such as the choice of hash algorithms and encoding or how HTTP headers are used (see the sketch after this list)
  • a combination of different approaches
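
To make the second bullet concrete, here’s a minimal sketch of what “how HTTP headers are used” can look like in code: a servlet filter that sets a few common security-related response headers. It assumes the javax.servlet API (Servlet 4.0 or later; newer containers use jakarta.servlet instead), and the header values are illustrative examples that need to be tuned per application, not a definitive policy.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SecurityHeadersFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletResponse httpResponse = (HttpServletResponse) response;

        // Prevent the browser from guessing (sniffing) content types
        httpResponse.setHeader("X-Content-Type-Options", "nosniff");
        // Disallow rendering the application inside a frame (clickjacking protection)
        httpResponse.setHeader("X-Frame-Options", "DENY");
        // Only allow resources loaded from the application's own origin
        httpResponse.setHeader("Content-Security-Policy", "default-src 'self'");
        // Instruct browsers to use HTTPS only for future requests
        httpResponse.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");

        chain.doFilter(request, response);
    }
}
```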

You can use a checklist as a guide if you’re unsure what to include in a security review. Some examples:

  • Is proper encoding used when sending/receiving data in the web application?
  • Are security-related HTTP headers used properly?
  • What measures have been taken to mitigate cross-site scripting attacks?
  • Are the invariants checked in domain primitives strict enough?
  • Are automated security tests executed as part of the delivery pipeline?
  • How often are passwords rotated in the system?
  • How often are certificates rotated in the system?
  • How is sensitive data prevented from accidentally being written to logs?
  • How are passwords protected and stored?
  • Are the encryption schemes used suitable for the data being protected?
  • Are all queries to the database parameterized? (see the example after this checklist)
  • Is security monitoring performed and is there a process for dealing with detected incidents?
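
As an illustration of the parameterized-query item, here’s a minimal sketch of what a reviewer would want to see, using plain JDBC. The AccountRepository class, table, and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class AccountRepository {

    private final Connection connection;

    public AccountRepository(Connection connection) {
        this.connection = connection;
    }

    // Vulnerable variant (don't do this): user input is concatenated into the SQL string,
    // so a crafted owner value can change the meaning of the query.
    //   String sql = "SELECT id FROM accounts WHERE owner = '" + owner + "'";

    // Parameterized variant: the owner value is bound as a parameter and is never
    // interpreted as SQL, regardless of what it contains.
    public List<String> findAccountIdsByOwner(String owner) throws SQLException {
        String sql = "SELECT id FROM accounts WHERE owner = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, owner);
            try (ResultSet resultSet = statement.executeQuery()) {
                List<String> ids = new ArrayList<>();
                while (resultSet.next()) {
                    ids.add(resultSet.getString("id"));
                }
                return ids;
            }
        }
    }
}
```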

Whom to include in a code security review

It’s good to include both people within the team and people from outside the team as they’ll bring slightly different perspectives when performing the review.

Keep track of your stack

Being able to work with aggregated views of information is necessary when dealing with issues that need to be addressed at a company level.

Overarching views of security vulnerabilities are essential when operating at large scale. Invest in tooling for aggregating and working with large amounts of information across the company.

Prioritizing work

Set up a process for how to prioritize and distribute work when vulnerabilities are discovered. Doing this beforehand will save you headaches later on.

As early as possible, figure out a process for dealing with vulnerabilities. Decide how to prioritize vulnerabilities against each other and against other development activities, and who should perform the work of fixing them.

Run security penetration tests

The main objective of secure by design is to help developers design and build the best possible software without security flaws. Regardless of what design principles you follow, running pen tests from time to time is a good practice to challenge your design and prevent security bugs from slipping through.

Challenging your design

When designing software, you always end up making trade-offs.

Effective pen testing should therefore cover the technical aspects of a system (such as authentication mechanisms and certificates) as well as the business rules of the domain, because security weaknesses can arise from a combination of design flaws and valid business operations.

Run pen tests on a regular basis to detect exploitable microlesions in your design caused by evolving domain models and new business features.

Learning from your mistakes

If you don’t run pen tests on a regular basis, there’s often lots of ceremony associated with a test, similar to what you get when you only release to production a few times a year. By discussing the results within your team and seeing them as a chance to learn, you reduce the overall anxiety about serious flaws being found or about someone having made a mistake.

How often should you run a pen test?

There’s no best practice to follow regarding how often you should run a test. It all depends on the current situation and context.

A good interval tends to be as often as you think it brings value to your design, like in context-driven testing.

Info

Title: Context-driven testing (CDT)

The essence of CDT is that testing practices completely depend on the current situation and context in which an application resides.

For example, if you have two applications, one of which has strict regulatory requirements and one where only time-to-market matters, then the testing practices will differ completely between them. A bug slipping through in the first application can lead to serious consequences, whereas in the other it only means you need to release a patch.

The seven basic principles of context-driven testing follow:

  • The value of any practice depends on its context.
  • There are good practices in context, but there are no best practices.
  • People working together are the most important part of any project’s context.
  • Projects unfold over time in ways that are often not predictable.
  • The product is a solution. If the problem isn’t solved, the product doesn’t work.
  • Good software testing is a challenging intellectual process.
  • Only through judgment and skill, exercised cooperatively throughout the entire project, are you able to do the right things at the right times to effectively test your products.

For more information on CDT, see http://context-driven-testing.com.

Using bug bounty programs as continuous pen testing

How do you know if you’ve tested enough? There’s no way to know, but the use of bug bounty programs or vulnerability reward programs allows you to increase your confidence by simulating a continuous, never-ending pen test.

A pen test is normally conducted by a highly trained pen test team, whereas a bug bounty program can be seen as a challenge to the community to find weaknesses in a system.

You also need a mechanism to assess the value of a finding:

  • How serious is it?
  • How much is it worth?
  • How soon do you need to address it?

All these questions need answers before you can start a bug bounty program. Because of this, we recommend that you don’t fire up a challenge without properly analyzing what it requires of your company.

Study the field of security

Addressing security issues with proper design is an efficient way to achieve implicit security benefits, but this doesn’t mean you can forget about security as a field.

Important

In fact, learning about the latest security breaches and attack vectors is as important as studying new web frameworks or programming languages.

Develop a security incident mechanism

Whether we like it or not, security incidents happen. When they do, someone will have to clean up the mess—that’s what a security incident mechanism is all about.

Distinguishing between incident handling and problem resolution

Security processes often distinguish between incident handling and problem resolution. This distinction is in no way unique to security processes; it’s also found in more general frameworks for software management.

  • Incident handling is what you do when there is a security incident; for example, when data is leaked or someone has hacked their way into your system. What can you do to stop the attack and limit the damage?
  • Problem resolution is what you do to address the underlying problem that made the incident possible. What were the weaknesses that were exploited? What can you do about them?

Keeping these two apart helps keep an organization from panicking when under attack and helps it focus on what’s most important to do at the time.

The rest of the organization must be prepared too. A security incident is by its nature unplanned and gets the highest priority. No one can assume that the team will continue working as planned while they’re busy saving the company’s business assets. No stakeholder should complain that their favorite feature was delayed because the team was preoccupied with a security incident.

A good product owner should balance features and quality work, ensuring capabilities such as response time, capacity, and security. We all know that security has a hard time getting to the top of the priorities list, but at least a backlog item that’s about fixing a security problem that has already caused an incident has a somewhat better chance of getting to the top.

Resilience, Wolff’s law, and antifragility

If each attack is met with incident handling, a post-mortem analysis, learning, and structured problem resolution of both the product and the processes, it’s possible for a system to follow Wolff’s law and grow stronger when attacked.

Making systems grow stronger when attacked is hard, but not impossible.

When a security penetration test provides valuable information about the vulnerabilities of your system, the development team can react to this kind of information on three (or four) different levels:

  • Level 0: Ignore the report completely. Do nothing. Obviously, this leads to no benefits, neither short-term nor long-term. The product is released with all the flaws.
  • Level 1: Fix what is explicitly mentioned. The report is interpreted as a list of bugs that are to be fixed. This provides some short-term effect on the product, at least fewer of the obvious flaws. But there might be more vulnerabilities of the same kind. Also, the same kind of mistakes will probably be repeated in the next release.
  • Level 2: Find similar bugs. The report is treated as a set of examples. The development team searches for similar weaknesses, perhaps by devising some grep commands and running them (a sketch of such a check appears after this list). There’s a good short-term benefit for the product. If the commands are included in the build pipeline, there’ll be some small long-term benefit as well.
  • Level 3: Systemic learning. The report is treated as a pointer for learning. Apart from fixing and searching for similar bugs, the team also engages in understanding how vulnerabilities could happen. One way is running a themed retrospective with the security penetration test report as underlying data. The insights are included in code review checklists and build pipeline steps and become part of the everyday work of the team.
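
Level 2 usually amounts to simple pattern searching, and the grep commands mentioned above can just as well be a small program run as a build pipeline step. Here’s a minimal sketch of such a check in Java, assuming the weakness to hunt for is string-concatenated SQL; the regex heuristic, class name, and source path are hypothetical examples.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ForbiddenPatternCheck {

    // Heuristic: a string literal starting with an SQL keyword, immediately followed by '+',
    // which usually means the query is being built through string concatenation.
    private static final Pattern CONCATENATED_SQL = Pattern.compile(
        "\"\\s*(SELECT|INSERT|UPDATE|DELETE)\\b[^\"]*\"\\s*\\+",
        Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) throws IOException {
        try (Stream<Path> paths = Files.walk(Paths.get("src/main/java"))) {
            List<Path> offenders = paths
                .filter(path -> path.toString().endsWith(".java"))
                .filter(ForbiddenPatternCheck::containsForbiddenPattern)
                .collect(Collectors.toList());

            if (!offenders.isEmpty()) {
                offenders.forEach(path ->
                    System.err.println("Possible concatenated SQL in: " + path));
                System.exit(1); // A non-zero exit code fails the pipeline step
            }
        }
    }

    private static boolean containsForbiddenPattern(Path path) {
        try {
            return CONCATENATED_SQL.matcher(Files.readString(path)).find();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```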