All companies that hold customer data should have a threat-monitoring system in place.
Pentesting takes this to the next level: a third party is commissioned to execute a ‘mock attack’ and highlight weaknesses that can then be fixed.
“Different things can be pentested – apps, infrastructure, network,” explains Holly Williams, technical director at Secarma, which conducts both general vulnerability scans and pentesting. “They can be assessed in different ways, so we ask our clients: what are we testing? To what degree are we testing it? And what is the expected outcome?
“I summarise it as a scope-restricted, time-limited, human-led security test of a system.”
Zain Javed, CTO at Xyone Cyber Security, agrees. “When you are going through a proposal with a client, you should have a clear definition of the service they’re signing up for. And on that basis, the consultant you assign to that project needs to be clear on what the method is and what the budget constraints are – because the scope is not unlimited.”
Jay Hariss, founder at Digital Interruption, adds: “A pentest should be defined as an attack simulation. At the moment it is known as plain security testing, in the same way that you have internal teams to test performance or usability. But if third parties are invited in, pentesting becomes its own thing.”
Who should consider a pentest?
The first criterion to consider is the value of the data your company holds, argues James Pearson, a solicitor at Brabners. “If you’re a construction company you probably don’t need pentesting. However, if you are a recruitment business and you’ve got five million records of people with their employment history, CV and salary data, you need to be looking at this in a really serious way.
“That bank of data could be extremely valuable, and can also put you at enormous risk. It’s about credibility: the commercial driver and risk to reputation. Can you imagine the impact of a breach on that recruitment company if it featured on the front of the business pages?”
Any company which holds particularly sensitive data should be looking to incorporate pentesting into its processes, says Javed. Xyone works with the NHS to test the security of its health apps. “They have a requirement that you have to get a source code level assessment done, which is really expensive, before they allow that app to go on the digital health library. Companies holding health data and financial organisations seem like a natural fit for pentesting.”
Bernadette Kelly, media director at agency ActiveWin, says the huge number of fast-growing FinTech businesses in the UK makes them a target for cybercriminals. “They’re generally small teams but may have a lot of turnover and probably don’t think that they’re susceptible. Any financial-related industry will draw cybercrime like moths to a flame. A lot of the time, developers are almost nonchalant: it is human nature to think that you know best. But you’d be forgiven if you fess up and say ‘you know what, maybe I should get a third party to check just to be on the safe side’. Because more and more cyberattacks are happening to smaller businesses.”
Williams says that the number of employees in a business is irrelevant to the decision of whether to pentest: “When Instagram sold to Facebook for a billion dollars, they had 13 employees. So the value of a company’s data isn’t tied in any way to the number of employees.”
Larger organisations are more aware of the options than smaller firms, argues Helen Pyne, cyber security and risk consultant at KPMG. “We work with small companies with a few employees up to multi-million-pound companies. You typically find that the larger organisations know what they’re asking for while smaller ones will generally have more of an open discussion about what they’re trying to achieve with it and what they’re most worried about.”
What are the legal considerations?
At Brabners, Pearson advises businesses on legal compliance for privacy and cyber security. He says pentesting forms part of the broader requirements for this. “Companies must appreciate what they’ve got and why it needs to be protected. They must put in place appropriate organisational and technical measures to ensure that that data is sufficiently protected.”
Williams adds: “The question of what data you hold and why you are trying to protect it is very often raised by a third party which – either through compliance requirements or the demands of another company it is looking to work with – insists upon a pentest.”
Redteaming v pentesting
“You can ask 10 different pentesters what the difference is and get 10 different answers!” says Hariss. “To my mind, a redteam is more open scope, so it’s up to the tester to decide how to approach the test as they learn more about the company’s systems. It’s also driven by a specific goal rather than a brief to find all vulnerabilities: the goal might be to access a certain piece of data or send an email from the CEO.”
Williams explains the concept further: “I may decide I can best target a database by using an exploit of a known vulnerability. Or I can target the guy who administers the database and get his password.
“It is a test of the response by the organisation. So you’re essentially testing the ‘blue team’ – or the defensive company – on its ability to react to a threat. But you can’t necessarily have two primary ends: you can either try to hack in really deep or you can try to not be detected, for example.”
Javed adds: “Generally people believe that pentesting doesn’t involve the physical side [of an infiltration] whereas redteaming does. It could include technical, physical and social attacks.”
What are the key techniques used?
Social engineering targets people rather than systems – for instance, pentesters seeking to access a physical location such as a company’s headquarters. “At the reconnaissance stage, you might find that the company’s database is on the internet and is easy to access – or there could be a firewall in place so we need to have someone on the inside,” says Hariss. “The next step would be to try and get to the building and deploy some kind of machine on the network which we can then connect to and access remotely.
“It is led by the third-party consultants: it’s their job to figure out the best approaches to reach their goal without getting caught.”
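The first reconnaissance step Hariss describes – checking whether a database is exposed to the internet – can be as simple as seeing which database ports on a host accept a connection. A minimal sketch (the port list and hostname are illustrative placeholders, not a substitute for a real scanner):

```python
import socket

# Default ports for common database servers (illustrative, not exhaustive)
DB_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 1433: "SQL Server", 27017: "MongoDB"}

def exposed_db_ports(host, ports=DB_PORTS, timeout=2.0):
    """Return (port, name) pairs for database ports on `host` that accept a TCP connection."""
    open_ports = []
    for port, name in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, name))
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Only ever run checks like this against systems you have written permission to test – scoping, as the interviewees stress, is what separates a pentest from an attack.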
Where does the responsibility lie?
Kelly says one problem is that responsibility for security can sit in various departments such as technical, compliance and development. “I don’t think anybody jumps up and down to say ‘that’s us’ because you’re almost putting the onus on your department to be the one that captures any potential flaws. Once a business decides that they want to take it seriously, they really do have to dictate who’s going to be responsible for which different area if they’re going to do it properly.”
Pearson adds: “Pentests typically sit with the IT team because that’s the natural home for them. But there are wider security and privacy implications. I know a business which isn’t data-led but holds an awful lot of data. The board isn’t worried about the granular detail. But they do need to know what happens if there’s a failure or risk to the business – and their view is that it’s taken care of because it’s outsourced. Actually, that’s not good enough.
“Does penetration testing extend naturally to the knowledge of what happens if a vulnerability is exposed? And then the next steps from that? Does it go through to the people who need to know or is it just a case of a report going through to the IT team?”
What happens after a pentest?
It is important to produce a report in clear language which can be understood at board level, says Javed. “There are probably two or three key audiences that you will prepare the report for. One should be really non-technical and just sum up what you did. You can then produce a separate report revealing the full technical breakdown of exactly what you found, how you found it, the proof of concept and clear recommendations for action.
“That action plan could be for the board, a third-party managed service provider or an application development team. In terms of the action plan, we try to say to them ‘this is how you can fix it, given your environment. If this doesn’t work, for whatever reasons, then this is another approach that is also common, but has these disadvantages’. We do try to give several ways of fixing things.”
Williams adds: “Remediation isn’t always obvious. We might be able to help them address a problem without fixing the root cause. A really simple example would be this server is missing this patch so you should install the patch – but perhaps the patching policy itself is broken.”
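Williams’s example – a server missing a patch – is the kind of check a vulnerability scan automates: compare each installed version against the version in which the fix shipped. A minimal sketch with made-up package names and version data (real inventories and advisory feeds would supply these):

```python
def missing_patches(installed, patched_in):
    """Return packages whose installed version predates the version containing the fix.

    `installed` and `patched_in` map package names to version tuples,
    which Python compares element by element.
    """
    return {
        pkg: (ver, patched_in[pkg])
        for pkg, ver in installed.items()
        if pkg in patched_in and ver < patched_in[pkg]
    }

# Hypothetical data: one package is up to date, one is behind
installed = {"openssl": (1, 1, 1), "nginx": (1, 18, 0)}
patched_in = {"openssl": (1, 1, 1), "nginx": (1, 20, 1)}
```

As Williams notes, a report like this identifies the missing patch, but fixing the broken patching policy that let it lapse is the real remediation.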
Pyne leads KPMG’s pentesting capability outside of London and says it is pointless to commission a pentest if you don’t follow it up. “People assume that having a test is reducing risk, but it’s just a process of identification and not reducing risk in itself – unless you act upon it.”
Javed warns against using the same provider for testing and fixing the issues: “You can’t mark your own work.”
How often should you pentest?
“I would usually recommend twice a year or when there are significant changes,” says Hariss. “The best thing to do is to try and develop everything securely and then introduce a pentest after a set period or significant change takes place.”
Williams: “Or when lots of little changes become significant – you see this with the agile way of working. A start-up I worked with was making more than five small changes to their public website a day, but at some point many small changes become significant. So you do a baseline test, say annually, and then every six weeks find out what they have changed and focus the testing there. So you can kind of push pentesting towards a more agile approach.”
Javed: “We work on an ad hoc basis. So if a client wants, say, 40-50 days of pentesting every year, the testing team are there at their disposal to use as required. I think that down the line automation is going to play a big part, due to the affordability of testing time.”