Wednesday, September 23, 2009
Illustrated guide to AES
Jeff Moser posted the most awesome and entertaining illustrated guide to AES.
Tuesday, July 7, 2009
ISSA 09
I spoke this morning about security non-innovation at the ISSA 2009 conference. The purpose of the presentation was to position security architecture as a support for business innovation. You know that SOA and BPM initiatives didn't ship with security included in the box, right? For those interested in the introduction video, you can grab it from here.
Monday, June 1, 2009
DiscussIT Podcast
Stephan Buys and I were recently invited to chat about Splunk on the DiscussIT Security Pubcast. Stephan has some really solid delivery experience and works for Exponant, the local Splunk partner in South Africa. I got to share a little about our forthcoming SAP application. The podcast can be found here.
I noticed that they posted the wrong link for the mp3; you can grab the mp3 from here.
See you at Splunk Live in Johannesburg.
Monday, April 20, 2009
Virtualization CIO round table
I see the CIO round table for April is now online. The usual fun was had; it's just a pity there's not enough space to capture all the good talking points.
Friday, March 20, 2009
Risk CIO round table
I participated in a CIO risk round table last year; I see the article is now online. Hat tip to Maverick for hosting yet another fun event.
Monday, March 16, 2009
Where's the Intelligence?
One of the most crucial and overlooked areas in information security is intelligence gathering. Let's face it: the guys who do this well are physical security practitioners. They know that if they get this wrong, someone could end up dead.
Intelligence informs better risk management decisions. It buys us time and gives us an opportunity to react. Without it we end up making bad and potentially harmful decisions, not to mention mistakes.
We've gotten better at identifying our assets and modeling threats, but we've made little headway in dealing with threat agents and the threats they pose. They remain vague, generalized assumptions.
In some sense, rather than dealing with them, we've tried to outsource the problem to technology: IDS, IPS, DLP, SIEM, etc. At best we now know when something bad happened, but only if we were looking and knew what to look for. Problem is we can't see much anyway, because we've been encrypting anything that moves. Oops! At least we'll be getting fewer alerts to action, and the management reports are all green now.
But can we collect intelligence about threat agents and threats within the organization? Yes, and we've been doing a good job in the application space for ages with behavioral systems such as fraud detection. But what about the rest of the stack? This has proven very problematic, as anyone who's ever tried to build an all-encompassing log management system will tell you.
Indexing log events across the stack is a promising new paradigm. Although not a security vendor, Splunk has developed a technology that does just that. If you could index your landscape, you could ask it very interesting questions, for example: “show me all the users who accessed the network but didn’t log on via RAS or the access network”. We're literally sitting on volumes of useful intelligence that end up on tape.
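As an illustration of the kind of question above, here's a minimal sketch in Python of the underlying set logic: users who show up in application access events but never authenticated via RAS. The CSV files and field names are assumptions made up for the example, not real log formats; in Splunk itself this would simply be a search over the indexed events.

```python
# Sketch only: cross-reference two (hypothetical) exported log sources to find
# users who accessed applications but never logged on via RAS.
import csv

def users_in(path, user_field):
    """Collect the distinct user names seen in a CSV-exported log."""
    with open(path, newline="") as f:
        return {row[user_field].strip().lower()
                for row in csv.DictReader(f) if row.get(user_field)}

app_users = users_in("app_access.csv", "user")  # e.g. application/network access events
ras_users = users_in("ras_auth.csv", "user")    # e.g. RAS / remote access logons

# The interesting population: inside activity with no matching remote logon.
for user in sorted(app_users - ras_users):
    print(f"{user}: accessed the network but never logged on via RAS")
```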
We've been having a lot of fun with Splunk recently; we've developed a connector for SAP NetWeaver. It's going to be interesting to see how this new paradigm is exploited by security vendors and practitioners.
Drop me an email if you are interested in playing with the SAP NetWeaver application.
Friday, March 13, 2009
You don't win a war by defending yourself
Chris Hoff recently made this observation in a post about offensive computing, and he's right. It's akin to carrying a gun in the real world to defend yourself, but that doesn't translate well to the wild west we call the internet. Fighting back can have significant unintended, not to mention legal, consequences.
The Metasploit site recently became the victim of a petty DDoS attack. Now, the last people you want to DDoS are HD Moore and co. An amusing side effect, though, was that the victim could redirect the attack and basically flood anyone they wanted to by changing their own DNS entries. In theory they could have redirected the attack at individual attackers in the botnet, systematically knocking them off the net, but they wouldn't know who was on the receiving end. Fighting back would be risky.
You may also find yourself on the wrong side, conscripted into a fight you never knew or cared about. With aging and overloaded plumbing (DNS, BGP, etc.) it's hard enough to play fair. Guess it's time to lay out the tar pits.
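A minimal sketch of what a tar pit could look like (just an illustration of the idea, not a hardened tool, and the port number is an arbitrary assumption): accept connections on a decoy port and drip data back one byte at a time, wasting the scanner's sockets and patience.

```python
# Sketch of a connection tar pit: hold unwanted connections open for as long
# as possible instead of dropping them.
import socket
import threading
import time

LISTEN_PORT = 2222  # hypothetical decoy port

def slow_drip(conn):
    """Keep the client's connection open, feeding it one byte every 10 seconds."""
    try:
        conn.settimeout(60)
        while True:
            conn.sendall(b"\x00")  # trickle data so the client keeps waiting
            time.sleep(10)
    except OSError:
        pass  # client gave up or the socket broke; either way we're done
    finally:
        conn.close()

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen(128)
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=slow_drip, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```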
Wednesday, February 25, 2009
Software security is an engineering problem
Designing software that is secure is a difficult prospect at the best of times. Defenders need to know what they need to protect (assets) and from whom (threats). Defenders need to ensure that all the holes (vulnerabilities) are patched, all the time, within their limited budget, people and legal constraints. Attackers, on the other hand, need only find one hole, and often have ample time and opportunity. There's an asymmetry in resources.
Well, it's not always easy to find your assets or know how to identify them. We only need to look at the GFC (Global Financial Crisis) to see the impact of not being able to identify and locate actual hard assets.
It's not easy to know whom you need to defend against. You want to make sure you have the right defenses in the right places, where they will have the maximum bang for your buck. What about the people you trust, like Heartland? More than 500 financial institutions impacted at last count.
How do you find all the holes? Do you know where to look? If the experts creating the next generation of crypto routines can't get it right, what hope do your developers have?
Not to mention all the interesting ways your code and applications can be abused in ways you never thought possible.
Throwing technology (firewalls, SSL, VPN, DLP, anti-virus, etc.) at the software problem isn't going to solve it either. It's an engineering problem: you need to build security in!
No wonder some of the best security guys I know have an engineering background.
Wednesday, February 18, 2009
Is the CIA model still relevant?
When one thinks of securing information, practitioners will recite the standard CIA mantra: we need to protect information from confidentiality, integrity and availability threats. Some call these security primitives, attributes, goals, objectives, aspects, qualities, a threat taxonomy, etc.
Since there is no clear consensus on this matter (as far as I can tell), I’ll just refer to them as threats to information assets that the owner needs to protect against. Is the CIA model still relevant, though, given today’s evolving threats?
Consider the following scenario: Bob the developer is in financial trouble and needs a quick “loan” to settle some debts. Lucky for him, he has a transaction account with his employer, a bank. Using a service account, he logs on to the transaction database and makes a small deposit into his account by altering his bank balance (the asset).
Looking at the CIA model, we find that there was no breach of confidentiality; the balance has not been disclosed. The integrity of the database is intact; redundancy checks pass. The database with his balance is still available, so he runs down to the street and makes an ATM withdrawal.
Did the CIA model consider these information threats?
Donn Parker, in his seminal work Fighting Computer Crime, proposed a new model that extended the CIA triad by introducing three additional non-overlapping (atomic) attributes or threats:
- Confidentiality was extended to include Possession/Control. An adversary may steal a memory stick with your private key on it, but they may not have your pass phrase to use it. Confidentiality has not been breached, but your adversary now has possession and control of your information asset.
- Integrity was extended to include Authenticity. An adversary may gain unauthorized access to a database and update a table. Internal and external consistency checks (integrity) will pass, but the table now contains tampered data that’s not authentic or trustworthy (see the sketch after this list).
- Availability was extended to include Utility. A user may encrypt their private key with a pass phrase. If they forget the pass phrase, the usefulness (utility) of the information asset is lost. The information is still available but not usable.
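Bob's scenario above is really the integrity/authenticity distinction in miniature. Here's a small sketch of it; the table layout and checksum scheme are assumptions for illustration only, not how any real banking system works.

```python
# Integrity vs authenticity, sketched: the database's own consistency check
# passes even though the data was tampered with through a legitimate code path.
import hashlib

def row_checksum(account, balance):
    """The kind of internal consistency check a database might keep per row."""
    return hashlib.sha256(f"{account}:{balance:.2f}".encode()).hexdigest()

# Legitimate state: balance and checksum agree.
account, balance = "ACC-1001", 120.00
checksum = row_checksum(account, balance)

# Bob writes via a service account, so the update goes through the normal code
# path and the checksum is dutifully recomputed.
balance = 120000.00
checksum = row_checksum(account, balance)

# Integrity holds: the stored data is internally consistent...
assert checksum == row_checksum(account, balance)

# ...but nothing ties the new balance to an authorised business transaction,
# so the data is no longer authentic. That gap is what Parker's authenticity
# attribute captures.
print("integrity check: OK; authenticity: not established")
```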
M.E. Kabay of Norwich University, an advocate of this model, calls it the Parkerian Hexad. I really like this model since it clearly delineates information threats, and I’ve found it particularly helpful in my work.
I’m amazed, though, at how little traction it has gotten within the security community.
Microsoft, as part of the SDL, has opted for the STRIDE model. A quick comparison between STRIDE and the CIA/Parkerian Hexad shows how they overlap:
- Spoofing primarily deals with authentication; no overlap.
- Tampering overlaps with Integrity.
- Repudiation partially overlaps with Authenticity.
- Information Disclosure overlaps with Confidentiality.
- Denial of Service overlaps with Availability.
- Elevation of privilege primarily deals with authorization; no overlap.
Although useful, I feel that STRIDE conflates information threats with services like authentication and authorization. And we miss possession/control, authenticity and utility entirely.
To further complicate things, Dave Piscitello proposed another model that Bruce Schneier really liked:
- Authentication (who are you)
- Authorization (what are you allowed to do)
- Availability (is the data accessible)
- Authenticity (is the data intact)
- Admissibility (trustworthiness)
Should we be surprised that our customers are confused when we struggle to find consensus on the basics?
Dave's extended model goes beyond information threats and deals with identity, trust and access control. I like to think of these elements as services that control access to information assets.
My approach is to start with assets. First you have to figure out what information assets you have, and what threats you want to protect them from. You should also have an idea of how important they are to you and what the impact would be if certain threats were realized. Use Parker's model to evaluate threats and set your objectives.
Information is useless if you can't handle and process it. Consider how users and systems will interact with information. Threat modeling can really help you here. I'd suggest you go beyond STRIDE and include threats from Parker's model (it really needs a better name; it sounds weird to say Parkerian Hexad all the time). Dave's extended model can act as a heuristic here to ensure completeness.
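To make the "start with assets" approach a little more concrete, here's a tiny sketch that walks an asset register through Parker's six attributes so nothing gets skipped during threat modeling. The assets and impact ratings are made up for the example.

```python
# Enumerate asset x attribute pairs as a threat-modeling checklist; each pair
# becomes the question "who could threaten this attribute of this asset, and how?"
PARKER_ATTRIBUTES = [
    "confidentiality", "possession/control",
    "integrity", "authenticity",
    "availability", "utility",
]

# Hypothetical asset register: asset -> business impact if a threat is realised.
assets = {
    "customer transaction balances": "high",
    "private signing keys": "high",
    "marketing brochure content": "low",
}

for asset, impact in assets.items():
    for attribute in PARKER_ATTRIBUTES:
        print(f"[impact={impact}] {asset}: evaluate threats to {attribute}")
```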
Monday, February 16, 2009
Threats, vulnerabilities and risk
Recently a customer asked me where they should focus their security efforts: should it be vulnerabilities or threats? I responded by asking what the difference is between threats and risks. I don't think security practitioners do themselves any favours when it comes to communicating concepts like risk, and it shows. If we truly believe that risk management is at the heart of information security, we must be able to clearly delineate these concepts and show how they relate to each other. The following diagram illustrates how I like to think about risk.
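One common way to relate the terms, standing in here for the diagram (an assumption, not necessarily the exact model in the figure): a threat agent exploits a vulnerability to act against an asset, and risk combines the likelihood of that happening with the impact if it does. A crude sketch:

```python
# Sketch only: a multiplicative risk score. The 0..1 scales and the product
# form are assumptions; the result is useful for ranking, not as an absolute.
def risk_score(threat_likelihood, vulnerability_exposure, impact):
    """Risk of a threat exploiting a vulnerability against an asset."""
    return threat_likelihood * vulnerability_exposure * impact

# The same threat against a well-patched target vs a soft one.
print(round(risk_score(0.8, 0.2, 0.9), 3))  # 0.144
print(round(risk_score(0.8, 0.9, 0.9), 3))  # 0.648
```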