Can you mathematically model infosec?


  • Can you mathematically model infosec?

    Here's something I've been mulling over for a little while after a discussion on whether you could tell in advance if you were likely to have a major security incident... Is it possible to mathematically model infosec? Would it turn out to be a chaotic system - is there extreme sensitivity to changes in initial conditions? Or is it intrinsically stable so that large perturbations are cancelled out over time?

    If you could model it would it show a phase change from secure to insecure? And if so, could you tell when the amount of "energy" in the system was about to push it through a phase change?

    Or is this a contender for /dev/null?

    TheCotMan started this very same conversation here, but it didn't seem to go anywhere.
    "Don't call me Mr Average," he said, "I'm at the very top of the bell curve."

  • #2
    Originally posted by 7d5
    Is it possible to mathematically model infosec? ...
    I'll try to offer a short answer, which means it will be far from complete; there are several exceptions that won't be mentioned in this post.

    Yes, you can model certain kinds of security issues with finite state machines (FSM). You can then track transitions between states to try to find paths where privs are escalated, or where a transition violates some part of the model being tested.

    There are problems with this:
    1) The model is only as effective as its creator is at modeling all of the parts. If the creator forgets possible transitions between states and omits them, or omits entire states, then the model is not an accurate description of the problem, and any decisions based on a flawed model run the risk of being faulty.
    2) Models like this are often theoretical and generally do not include issues of implementation -- the space where, IMO, most security problems are found.
    3) Models like these can quickly become complicated. The next biggest issue, IMO, is increasing complexity: as multiple models start to have "parts" that interact, a "complete" model can quickly become too complicated for any single person to see and understand in full -- and with complexity comes increased risk of mistakes (especially in implementation). [Bleed-over problem, border-case problem, etc.]

    There are other problems with modeling CompSec issues beyond those above, but the above are sufficient to show that modeling has weaknesses.

    This does not mean modeling is useless. Modeling is useful. It helps to find problems, and it is great for working on protocols/procedures that are to be used as blueprints for implementations. However, it is most useful when the user knows and understands its limitations.
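
    To make the FSM idea concrete, here is a minimal sketch in Python (the states and transitions are invented for illustration, not taken from any real system): model the system as a transition graph and search for any path that reaches a privileged state.

    # Sketch: model a system as a finite state machine and search for
    # any path from an initial state to a privileged state. The states
    # and transitions below are hypothetical examples, not a real system.
    from collections import deque

    # transition graph: state -> states reachable in one step
    transitions = {
        "anonymous":       {"local_user"},       # e.g. weak account provisioning
        "local_user":      {"service_account"},  # e.g. writable service config
        "service_account": {"root"},             # e.g. unpatched setuid helper
        "root":            set(),
    }

    def escalation_paths(start, goal):
        """Yield every cycle-free path from start to goal (a policy violation)."""
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                yield path
                continue
            for nxt in transitions.get(path[-1], ()):
                if nxt not in path:  # skip cycles
                    queue.append(path + [nxt])

    for path in escalation_paths("anonymous", "root"):
        print(" -> ".join(path))
    # prints: anonymous -> local_user -> service_account -> root
    # Note problem 1 above: any transition the modeler forgets is simply
    # never searched, so the model reports the system "secure."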

    • #3
      Originally posted by 7d5
      Is it possible to mathematically model infosec? ...
      I've been reading a bit on this topic lately. I made a similar thread about the actual applications of this a few weeks back that you can probably find lying around somewhere. If you are looking for a good book to read, I'd recommend "Computer Security: Art and Science" by Matt Bishop:
      http://tinyurl.com/9qkk3
      I'm currently reading the less mathematically intense version (Introduction to Computer Security) and it's interesting stuff.

      -zac
      %54%68%69%73%20%69%73%20%6E%6F%74%20%68%65%78

      • #4
        Beyond Fear by Schneier covers some of this. The algorithm is basically

        Threat * Likelihood * (Impact / Mitigating Controls) = Risk

        If you were to establish some sort of numeric rating and baseline off that, you probably could, but your numerical answers would only really be relevant to you. At my place of employment we just ended up going with a matrix based on Impact and Probability with a Risk value being extrapolated out of a matrix of values; the highest being Critical and the lowest being Low.


        Also, have a good understanding of the differences between a Threat, a Risk, and a Threat Vector.
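
        As a toy illustration of both approaches in Python (every rating, threshold, and label below is invented for the example):

        # Toy illustration of both approaches. Every rating, threshold,
        # and label below is invented for the example.

        def quantitative_risk(threat, likelihood, impact, mitigating_controls):
            """Risk = Threat * Likelihood * (Impact / Mitigating Controls)."""
            return threat * likelihood * (impact / mitigating_controls)

        # Qualitative matrix: rows = Impact (1-3), columns = Probability (1-3);
        # the Risk value is just read out of the matrix.
        RISK_MATRIX = [
            ["Low",    "Low",    "Medium"],    # low impact
            ["Low",    "Medium", "High"],      # medium impact
            ["Medium", "High",   "Critical"],  # high impact
        ]

        def qualitative_risk(impact, probability):
            return RISK_MATRIX[impact - 1][probability - 1]

        print(quantitative_risk(threat=3, likelihood=0.4, impact=8, mitigating_controls=2))  # 4.8
        print(qualitative_risk(impact=3, probability=3))  # Critical

        As noted above, the quantitative number (4.8) only means something relative to your own baseline; the matrix lookup trades that precision for ratings everyone in the room can agree on.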

        I return whatever i wish . Its called FREEDOWM OF RANDOMNESS IN A HECK . CLUSTERED DEFEATED CORn FORUM . Welcome to me

        • #5
          Originally posted by noid
          At my place of employment we just ended up going with a matrix based on Impact and Probability with a Risk value being extrapolated out of a matrix of values... 

          Sounds like you all use a hybrid qualitative/quantitative risk assessment model. Quantitative maths are great if you in fact have a baseline or history from which to derive your algorithm. Many organizations don't, thus the move toward more qualitative algorithms.

          FWIW, I prefer quantitative analyses over qualitative. The reason? It often makes the BoD buttholes pucker when you can actually attach numbers to the Threat/Risk/Occurrence values. "The ALE on that is WHAT? But it would only cost me WHAT to mitigate???" Gets them almost every time... However, it does require a baseline and some heavy-duty research into the costs of doing business in whatever LOB you happen to be in. These types of analyses also take half again as long to perform as qualitative ones.
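
          For reference, the standard arithmetic behind that ALE pitch, as a sketch (the figures below are made up):

          # Standard quantitative risk formulas; the figures are made up.
          #   SLE (Single Loss Expectancy)        = Asset Value * Exposure Factor
          #   ALE (Annualized Loss Expectancy)    = SLE * ARO
          #   ARO (Annualized Rate of Occurrence) = expected incidents per year
          asset_value = 500_000    # value of the asset, $
          exposure_factor = 0.30   # fraction of the asset lost per incident
          aro = 0.5                # one incident expected every two years

          sle = asset_value * exposure_factor   # $150,000 per incident
          ale = sle * aro                       # $75,000 per year
          control_cost = 20_000                 # annual cost of the mitigating control

          print(f"ALE ${ale:,.0f}/yr vs control cost ${control_cost:,}/yr")
          # "The ALE on that is $75,000? But it would only cost me $20,000 to mitigate?"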

          my tu'pence

          valkyrie

          • #6
            Originally posted by valkyrie
            Quantitative maths are great if you in fact have a baseline or history from which to derive your algorithm. ...
            If you are put into a situation where the client really does need those hard numbers, a quick way to pull down their potential losses would be to check if they have a decent, recent Business Impact Analysis. A well done BIA will have downtime losses for all their critical systems. It's a ballpark, but better than coming back with a contract mod to do the same research that was touched upon during the Disaster Recovery Planning.
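
            As a trivial sketch, pulling those ballpark figures out of a BIA might look like this (the systems and dollar values are hypothetical):

            # Hypothetical systems and dollar figures. A decent BIA already
            # records a downtime cost per hour for each critical system.
            bia = {
                "order entry": {"cost_per_hour": 12_000, "max_tolerable_outage_hours": 8},
                "email":       {"cost_per_hour": 1_500,  "max_tolerable_outage_hours": 24},
                "payroll":     {"cost_per_hour": 4_000,  "max_tolerable_outage_hours": 4},
            }

            total = 0
            for system, entry in bia.items():
                loss = entry["cost_per_hour"] * entry["max_tolerable_outage_hours"]
                total += loss
                print(f"{system}: ${loss:,} ballpark loss per incident")
            print(f"total exposure across critical systems: ${total:,}")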

            FWIW.
            Aut disce aut discede

            • #7
              Originally posted by AlxRogan
              If you are put into a situation where the client really does need those hard numbers, a quick way to pull down their potential losses would be to check if they have a decent, recent Business Impact Analysis. ...
              AlxRogan, yes, that is true. What is unfortunate is that most clients haven't done their homework up front, so there is no existing or current BIA, nor any Business Unit Descriptions, so determining what a particular company's critical business processes are can sometimes be like taking a shot in the dark. My experience has been that BIAs are done up front (security management pre-planning) as opposed to on the back end (BCP), though I have seen them done both ways.

              Any suggestions on how to convince a client that it is less expensive and subsequently returns a higher ROI to do it on the front end (information reuse)?

              Regards,

              valkyrie

              • #8
                Originally posted by valkyrie
                Any suggestions on how to convince a client that it is less expensive and subsequently returns a higher ROI to do it on the front end (information reuse)?
                It may be morbid, but I find it helps to use the latest disaster as an example. Being in Houston it's pretty easy to reference Katrina and now Rita. Having a hot site is great; having it 2 miles away, not so great. :)
                Usually I'll get the context clues from the interviews with the personnel to find out what their key concerns are, then gen up some information about that particular asset and their acceptable downtime.
                I guess I don't have any suggestions per se, but during an assessment, I always try to get an idea of their Incident Response and DR planning, especially if I'm doing any kind of functional testing. In my write-ups I usually talk about the worst-case effect of failures, which tracks directly to, "So is information like this captured in your DRP?"
                Aut disce aut discede

                • #9
                  Originally posted by AlxRogan
                  In my write-ups I usually talk about the worst-case effect of failures, which tracks directly to, "So is information like this captured in your DRP?"
                  It's funny, as security folks we spend so much time focused on the C and I portions of the triangle that the A* gets left by the wayside. Most folks don't even realize that things like DRP and BCP are a part of InfoSec.

                  For the noobs, CIA = Confidentiality, Integrity, Availability

                  I return whatever i wish . Its called FREEDOWM OF RANDOMNESS IN A HECK . CLUSTERED DEFEATED CORn FORUM . Welcome to me

                  • #10
                    Originally posted by noid
                    It's funny, as security folks we spend so much time focused on the C and I portions of the triangle that the A* gets left by the wayside. ...
                    omgomgomg. Thank you, Noid. Yeah, this is a big deal. The triad should be the first thing that security folks talk about with their clients. A is a big deal, but not as big a deal for, say, my financial clients as it is for my ISP clients. Every client falls somewhere in the triangle; however, they all fall differently within it. And you forgot non-repudiation, which is more or less important, depending on the client.

                    Hey, folks, can we have a BoF about this at one of the cons? I figure DefCon is too chichi for this, but perhaps Toor or Shmoo? I for one would like to pick other people's brains regarding this issue. And I wanted to talk with those interested about security management methodologies. I know the topic isn't sexy but it is necessary...let me know, eh?

                    • #11
                      Originally posted by valkyrie
                      Hey, folks, can we have a BoF about this at one of the cons? ...
                      One of the things mentioned in the location announcement for DC14 was the possibility of breakout session space and classes. Putting in a question to DT about Bo(a)F sessions might be good. It would be nice even if there is no teacher/director, just a bunch of CompSec professionals hanging out talking about it.

                      • #12
                        Originally posted by valkyrie
                        And I wanted to talk with those interested about security management methodologies. I know the topic isn't sexy but it is necessary...let me know, eh?
                        I have a thing or three to add to the pile of SecInfo. Don't know where or when though, will stay in touch.

                        • #13
                          Originally posted by valkyrie
                          And you forgot non-repudiation, which is more or less important, depending on the client.
                          Actually, non-repudiation would fall within the 'I' section.

                          I return whatever i wish . Its called FREEDOWM OF RANDOMNESS IN A HECK . CLUSTERED DEFEATED CORn FORUM . Welcome to me

                          • #14
                            Originally posted by TheCotMan
                            Putting in a question to DT about Bo(a)F sessions might be good. ...
                            How delicious! I have no intention of being at DefCon this coming year; however, if a group could be established to carry on these discussions, that would be awesome. I am ashamed of how much I don't know, which is why I call on you all to give me a wake-up call. :-) By the way, thank you all for everything you have given me. I don't know how to thank you appropriately.

                            • #15
                              Thanks all for some great answers.

                              I guess it comes down to a qualified yes, provided you take into account all the variables, populate the model with reliable data, and map all the inter-dependencies... more than enough complexity in the method for most, but as an intellectual exercise maybe it's worth pursuing.

                              My ultimate goal would be to graphically represent the state of the security of a system (specific or global). Using the "Threat * Likelihood * (Impact / Mitigating Controls) = Risk" algorithm (thanks Noid), or some permutation of it, could you graphically demonstrate the current state and how to gain the most benefit from changing one or more variables (i.e., what is the effect on risk of changing mitigating control X rather than Y, and does this still hold true if threat Z increases)? To get back to the original question and phrase it in simple terms: if an arbitrary increase in X and/or Y stops the Risk from increasing no matter how high a value for Z, then I think you could class that as a stable system.
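
                              As a throwaway numeric check of that stability idea (all values invented): sweep the threat Z upward, once with the mitigating controls held fixed and once with them scaled up alongside Z, and see whether Risk stays bounded.

                              # Throwaway check of the stability question, using
                              # Risk = Threat * Likelihood * (Impact / Mitigating Controls).
                              # All values are invented.
                              likelihood, impact = 0.5, 10

                              for z in (1, 10, 100, 1000):       # ever-increasing threat Z
                                  m_fixed = 2.0                  # controls left alone
                                  m_scaled = 2.0 * z             # controls scaled up with the threat
                                  risk_fixed = z * likelihood * (impact / m_fixed)
                                  risk_scaled = z * likelihood * (impact / m_scaled)
                                  print(f"Z={z:5d}  fixed-M risk={risk_fixed:8.1f}  scaled-M risk={risk_scaled:.1f}")
                              # scaled-M risk stays at 2.5 no matter how large Z gets -- "stable"
                              # in the sense above -- while fixed-M risk grows linearly with Z.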

                              So what I have done is come up with a list of around 30 InfoSec/CompSec-related variables, grouped them logically, and mapped the results to a radar chart. Apart from making a great Rorschach inkblot, this gives a nice graphical representation of what I call the size and shape of the security hole. The next stage is to establish the relationships between the variables (it keeps me looking busy, anyway).
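
                              For what it's worth, a minimal version of that radar chart might look like this in Python with matplotlib (the six groups and scores are invented placeholders, not my actual 30 variables):

                              # Invented placeholder groups and scores (0 = mitigated, 10 = wide
                              # open), standing in for the ~30 grouped variables described above.
                              import numpy as np
                              import matplotlib.pyplot as plt

                              groups = ["Network", "Host", "AppSec", "Physical", "Policy", "People"]
                              scores = [4, 6, 7, 2, 5, 8]

                              angles = np.linspace(0, 2 * np.pi, len(groups), endpoint=False).tolist()
                              angles += angles[:1]      # repeat the first point to close the polygon
                              values = scores + scores[:1]

                              ax = plt.subplot(polar=True)
                              ax.plot(angles, values)
                              ax.fill(angles, values, alpha=0.25)
                              ax.set_xticks(angles[:-1])
                              ax.set_xticklabels(groups)
                              ax.set_ylim(0, 10)
                              ax.set_title("Size and shape of the security hole")
                              plt.show()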
                              "Don't call me Mr Average," he said, "I'm at the very top of the bell curve."
