Testing is practically a fetish among the Ruby community. I just went to RubyConf (which was, well, very, very different from DEFCON), and one of the things covered was fuzz testing, i.e. generating all sorts of random inputs to hammer your code with and seeing what breaks.
The guy who gave the presentation (who played a music video through aalib; I so have to do that at DC15) pointed out that this parallels the black/gray hat approach of discovering security vulnerabilities by hammering a program with randomly generated input until it breaks (e.g. eEye discovering countless IIS flaws by hammering it with randomly constructed HTTP requests, or discovering HTML parsing vulnerabilities in IE by permuting HTML tags in weird ways until it broke).
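The basic idea is simple enough to sketch in a few lines of Ruby. This is a toy fuzzer, not anything from the talk: `parse_query` is a hypothetical target (a naive key=value parser I made up for illustration), and the harness just throws random bytes at it and records any inputs that blow up.

```ruby
# Hypothetical target: a naive "a=1&b=2" query-string parser.
# It has a real bug: a pair with no "=" produces a 1-element
# array, which Array#to_h rejects with an ArgumentError.
def parse_query(str)
  str.split("&").map { |pair| pair.split("=", 2) }.to_h
end

# Generate a random binary string up to max_len bytes long.
def random_input(max_len = 64)
  Array.new(rand(max_len)) { rand(256).chr }.join
end

# The fuzz loop: hammer the target, collect crashing inputs.
failures = []
1000.times do
  input = random_input
  begin
    parse_query(input)
  rescue => e
    failures << [input, e]
  end
end

puts "#{failures.length} of 1000 random inputs raised exceptions"
```

Even this crude version finds the `to_h` bug within a handful of iterations, which is the whole pitch: the machine generates the weird inputs you'd never think to write as test cases by hand.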
Rubyists are completely nuts about testing. The real weaknesses seem to be in standard libraries which have been bundled with Ruby for years and have gone unmaintained (one was just disclosed in Ruby's standard CGI library).
It really feels like the academic crowd is starting to give a shit about building bulletproof applications. I walk the line between the tribes, being the hippie college dropout that I am (and I fit in kinda weird at both cons), but: in languages like Ruby, are the fundamental security problems which have plagued software since its inception finally being solved in very generalized ways?
That's not to say that I don't expect elaborate ways to attack extensively tested systems to be devised... the obvious places to hit such a system are the leaky abstractions (e.g. a DoS through resource exhaustion), high-level conceptual attacks, or spots where the generalities afforded by the language let you sneak something craftily through.
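A concrete example of the kind of leaky abstraction I mean (my own illustration, not from the talk): catastrophic regex backtracking. The pattern below is contrived, but on a backtracking engine the failing match takes time exponential in the input length, so an attacker who controls the input can burn CPU with a short string. (Recent Rubies blunt this particular trick with backtracking memoization and `Regexp.timeout`, but the general shape of the attack survives.)

```ruby
require "benchmark"

# Nested quantifier: a classic backtracking trap. On a failing
# match, the engine tries every way of splitting the "a"s among
# the two quantifiers before giving up.
EVIL = /\A(a+)+z\z/

[18, 20, 22].each do |n|
  input = "a" * n  # no trailing "z", so the match must fail
  t = Benchmark.realtime { EVIL.match?(input) }
  puts format("n=%2d  %.4fs", n, t)
end
```

No amount of unit testing on well-formed inputs catches this, because the code is functionally correct; the flaw lives entirely in a resource cost the abstraction hides.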
I guess what I'm really asking is: if the stakes are being raised on the developer side by solving many of the fundamental problems which have plagued software since its inception, will the black/gray hats have to get smarter to compensate?