Security Leader: Jason Duerden, Blackberry Cylance

Regional Director for ANZ, Blackberry Cylance


How did you end up in your current role, and what attracted you to the industry?

I began my career in creative web design and always had a passion for technology. After trying a few different things within the IT industry, as well as the events space, I found success in operational roles where I could demonstrate my leadership strengths. From there I pivoted into business roles across delivery, sales, marketing and, ultimately, leadership in IT and cybersecurity. I have a distinct interest in problem solving, so I was naturally drawn to the intriguing world of cyber, particularly the global scale of the challenge and opportunity that our digital landscape presents. Being able to provide a technical solution that could impact society in a positive way has always attracted me to the industry.

What do you see as the biggest threat we currently face?

People. It may sound like a generic statement, but people will always remain the weakest link in the cybersecurity chain. Effectively communicating cyber risk and educating the public en masse is difficult, especially when it comes to helping non-technical people understand.

According to the World Economic Forum, large-scale cyber-attacks rank among the top five risks to society, just behind massive data fraud and theft. These risks have the potential to significantly shift the landscape of how we live and work, especially as our cyber and physical worlds merge in a connected 'Internet of Things' society.

People also pose a threat when it comes to ethics in an increasingly automated society. Technology should be underpinned by an ethical framework that asks questions like: Are we developing technology and tools for good? Are they being used only for their intended purpose?  

Rather than asking, “Could we?”, the question development teams should be asking is, “Should we?”

What are we doing wrong that means we’re unable to stop it?

Complacency, Control and Compliance are the three C's that inhibit progress. For too long, we have viewed cybersecurity as a control and compliance requirement, which inevitably drives complacency: 'We ticked the box this year, so it'll be right'. Checking off a list of controls and creating a training video no longer stands up to today's threat landscape. The perimeter is no longer at the firewall.

Unfortunately, this complacent approach to cybersecurity is something our industry continues to perpetuate. So far, we've been unable to stop it. We must do better.

We can do that by embracing risk management frameworks as the way forward. The emergence of effective new predictive technologies leveraging Artificial Intelligence (AI) and data science won't address every single compliance mandate, but it will offer a force multiplier to teams struggling with a lack of skills and time.

A great example of this is the Financial Times' report that the Australian Securities and Investments Commission (ASIC) is cautiously embracing AI to boost compliance. According to the report, ASIC is funding studies to investigate how natural language processing technology can detect misconduct and improve regulation in the finance and insurance industries.

How has the increasing climate of governance and compliance changed your approach to security, and changed your engagement with board members and executives?

I don’t think it has necessarily changed our approach to engagement with executives. It simply continues to highlight the need for further education and innovation in the ways we approach the problem. In 2019, we saw the Australian Government’s Information Security Manual pivot towards a risk management approach, at the same time as NIST (National Institute of Standards and Technology), MITRE and other risk-based frameworks are becoming more widely adopted. In response to this, we have modified our approach to help executives understand the gaps that compliance leaves in addressing organisational and cyber risk, and how those gaps can be measured and mitigated.

Is the security industry getting better at using tools like threat intelligence and collaboration policies to work together against a common threat?

Yes and no. The Cloud has provided much easier access to threat intelligence and threat sharing feeds, which organisations are increasingly taking advantage of. The problem with leveraging threat intelligence is that it is a fundamentally reactive process – meaning organisations are always one step behind the threat. It isn’t a cyber risk mitigation solution in itself. However, you can leverage it effectively to build better solutions that are predictive and proactive.

Collaboration forums and policies are certainly improving. As I mentioned previously, NIST and MITRE are two that come to mind that promote better awareness of cyber risk. The emergence of Joint Cyber Security Centres (JCSCs) around Australia is also a very important initiative that has facilitated collaboration and sharing in our community.

What do you see as the biggest gaps in the functionality of current cybersecurity technologies?

For the past 10 years, the cybersecurity industry has been primarily focused on detection-based technology. We have now found ourselves with common problems: alert fatigue, a lack of funding, and limited resources and skills to deal with the overwhelming number of logs, data feeds and alerts coming out of our technology stacks. 

Traditional, detection-based security technology requires human interaction. Many detection-based technologies are also signature or rule-based, which means that you must know exactly what you are looking for and constantly tune the systems based on the latest information to hand. But with this approach, you can’t effectively defend against something you’ve never seen before.  
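As a toy illustration of that limitation (the pattern names here are invented for the sketch), a naive signature matcher only catches what it has already seen; a trivially modified variant slips straight through:

```python
# Minimal sketch, not a real detection engine: a signature-based scanner
# flags payloads containing known-bad byte patterns. The signature list
# is hypothetical, purely for illustration.

KNOWN_SIGNATURES = [b"evil_dropper_v1", b"mimikatz"]

def matches_signature(payload: bytes, signatures=KNOWN_SIGNATURES) -> bool:
    """Return True if any known-bad pattern appears in the payload."""
    return any(sig in payload for sig in signatures)

# A previously seen threat is caught...
assert matches_signature(b"header...evil_dropper_v1...payload")

# ...but a renamed variant of the same malware is missed entirely,
# until an analyst writes and distributes a new signature for it.
assert not matches_signature(b"header...evil_dropper_v2...payload")
```

The two assertions capture the point in the text: the system must be constantly re-tuned with the latest signatures, and anything genuinely novel is invisible to it.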

The biggest gap we see in the functionality of technologies today is a lack of balance. The use of automated prevention and automated detection capabilities should be stepped up to streamline operations and reduce the workload for security teams.  Again, it comes back to the scalability point I made earlier.  Many organisations want to move past the ‘whack-a-mole’ approach and apply more intelligence to their security postures but haven’t found a way forward.

A second big gap we see is the lack of open eco-system integration. Many leading security vendors are attempting to lock their customers into a single platform that does not integrate with other best-of-breed solutions. The motivation behind this is primarily financial gain, which loses sight of the fact that the whole purpose is to effectively protect our customers.

This leaves both single failure points and disjointed eco-systems. Rather than trying to tick all the boxes, vendors should be focusing on delivering their core capabilities in a best-of-breed manner that plugs into other market-leading tools.

What technologies do you think will most transform security in coming years?

Artificial Intelligence and Machine Learning are already at the forefront and we are seeing rapid adoption in security platforms. The challenge now is: How do you define maturity? The US Defense Advanced Research Projects Agency (DARPA) provides a great framework for this. As AI and ML are adopted into more and more solutions, the maturity of the models will be transformative.

Then, it’s hard to ignore 5G and the transformative promise of constant connectivity. This technology will provide huge benefit and risk at the same time. How do you manage and enable millions more connections compared to now? Lastly, Quantum computing has the ability to completely redefine how compute works at the core. The jury is out on what that might look like, however we will need to rethink everything once the quantum world arrives.

What impact do you think nation-state attacks will have on cybersecurity strategy in the future?

It’s difficult to answer this question, unless you work in national security. What we can predict is the impact of nation state offensive capabilities on the private sector. Government-to-government cyber espionage and offensive capability preparation is already occurring, and we are starting to see the flow-on effect of these activities in the private sector as well as in critical national infrastructure, such as healthcare and energy. WannaCry shutting down the UK’s National Health Service (NHS) was the first major example of a nation state cyber-attack that directly impacted the well-being and safety of the general public. This is one of many examples of where large-scale cyber-attacks have a very human impact.

Since then, the FUD (fear, uncertainty, and doubt) has been perpetuated in society by nation state activity that has been clearly aligned with political agendas, taking the 2016 US presidential election controversy as an example. Such controversies drive public relations strategies related to cyber security, meaning their true impact will never be known.

How are cyber threats evolving and do you see the threat prevention landscape competently future proofing organisations?

The simple answer is you can never eliminate risk, you can never claim to be 100 per cent secure or 100 per cent perfect in preventing attacks. What you can do is drive a high standard of preventative risk reduction. All organisations ultimately want to avoid a breach. The challenge is that, for many years, prevention has not been effective, and we have lost faith. 

The common attitude is that it’s not if but when an organisation will experience a breach – so you must be prepared for that ‘eventuality’. The positive news is that we can regain that lost faith with machine learning.

We are certainly seeing a pivot, at least in marketing, of security companies offering prevention-based controls and outcomes for customers. With threats evolving exponentially every day, either morphing or via automated manipulation, manpower simply cannot scale in response to the problem, unless we are driving a prevention-first approach. AI and ML provide means to achieve this scalability.

How are you engaging with end-users to educate them about AI-driven predictive threat detection?

We try to bring reality to the situation and understand the pros, cons and use cases. How could this help prevent a breach? What could I do with my time if I reduce alerts by 50%? How much money could we re-invest into the security program if we were able to re-purpose staff resources away from mundane tasks?

The problem is that ‘AI-driven’, ‘Machine Learning’ and ‘predictive security’ have become buzzwords, with every vendor promising the world. To combat this, we try to educate customers on questions that can be asked to validate these claims, to validate development processes, to understand gaps. We often point our customers to DARPA and NIST, which provide two great frameworks about machine learning and AI applications in cyber security. These frameworks help organisations better define and understand the maturity of AI-driven solutions.
