
Re: [Full-disclosure] Arbitrary DDoS PoC



How do I subscribe to only the short list? I have to keep answering in this awkward way, so I apologize. If someone knows an alternative way, please tell me.

I do not know what you expect from public repos on GitHub; I really do not understand. Did you think I would deliver the gold as well? Well, I think you are too uninformed if you believe the maximum is 200 threads with pthread. Have you tried ulimit -a? I even described this in the README.
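
For what it is worth, the thread count a process can reach is governed by configurable resource limits, the same values ulimit -a reports, not by a fixed pthread ceiling. A minimal sketch (my own illustration, not code from the repository) that reads the relevant limits from Python on Linux:

# Illustration only: the limits that actually bound thread creation are the
# per-user process/thread count and the per-thread stack size, both tunable
# with ulimit / setrlimit, not a hard 200-thread pthread maximum.
import resource

limits = {
    "max user processes/threads (ulimit -u)": resource.RLIMIT_NPROC,
    "stack size per thread (ulimit -s)": resource.RLIMIT_STACK,
}
for label, rlimit in limits.items():
    soft, hard = resource.getrlimit(rlimit)
    print("%s: soft=%s hard=%s" % (label, soft, hard))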

As for the reCAPTCHA algorithm, did you really think all the code would be in the main file? Why would I do that? I distributed it across classes.

And why do you think IntensiveDoS accepts arguments and opens and closes a socket? Because it is a snippet of code for more than just HTTP DoS.

As for the trojan, do you really think I would make something better and leave it public?

What planet do you live on?

And curl is a great project for parallel HTTP connections; Python is not so much, which is why only the fork stays on the Python side.
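
To illustrate what libcurl's multi interface gives you, here is a minimal sketch that drives several HTTP transfers from a single thread with pycurl (a generic illustration of the technique, not the PoC itself; the URLs are placeholders):

# Minimal parallel-transfer sketch using pycurl's multi interface.
from io import BytesIO
import pycurl

urls = ["http://example.com/", "http://example.org/"]

multi = pycurl.CurlMulti()
transfers = []
for url in urls:
    buf = BytesIO()
    easy = pycurl.Curl()
    easy.setopt(pycurl.URL, url)
    easy.setopt(pycurl.WRITEFUNCTION, buf.write)
    easy.setopt(pycurl.FOLLOWLOCATION, True)
    multi.add_handle(easy)
    transfers.append((url, easy, buf))

# One thread drives all transfers; libcurl multiplexes the sockets.
active = len(transfers)
while active:
    while True:
        status, active = multi.perform()
        if status != pycurl.E_CALL_MULTI_PERFORM:
            break
    multi.select(1.0)

for url, easy, buf in transfers:
    print(url, easy.getinfo(pycurl.RESPONSE_CODE), len(buf.getvalue()), "bytes")
    multi.remove_handle(easy)
    easy.close()
multi.close()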

On 14-02-2012 02:48, Lucas Fernando Amorim wrote:
On Feb 13, 2012 4:37 AM, "Lucas Fernando Amorim" <lf.amorim@xxxxxxxxxxxx> wrote:

    With the recent wave of DDoS attacks, one scenario that has not been
    considered is the model where the zombies were never compromised by a
    trojan. In the standard model of a DDoS attack, the machines are
    purchased, usually as VPSes, or are obtained through trojans, thus
    forming a botnet. But the arbitrary form does not require acquiring a
    collection of computers. Programs, servers and protocols are used to
    make arbitrary requests to the target. P2P programs are especially
    vulnerable; DNS, Internet proxies, and many sites that make requests
    on behalf of a user, such as Facebook or the W3C, are as well.

    To demonstrate this, I made a 60-line proof-of-concept script that
    hits most HTTP servers on the Internet, even those with protections
    like mod_security or mod_evasive. It can be found at this link [1] on
    GitHub. The solution to the problem depends only on reworking the
    protocols and limiting the number of concurrent and total requests
    that proxies and programs may make to a given site, returning a
    cached copy of the last request once the limit is exceeded.

    [1] https://github.com/lfamorim/barrelroll

    Cheers,
    Lucas Fernando Amorim
    http://twitter.com/lfamorim
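
Below is a minimal sketch of the mitigation proposed above: cap concurrent and per-minute requests for each client and, once a cap is exceeded, serve a cached copy of the last response instead of repeating the work. The names (RequestLimiter, handle) and the limit values are hypothetical, not part of the PoC repository.

import threading
import time

class RequestLimiter:
    # Per-client limiter that falls back to the last cached response when
    # either the concurrency cap or the per-minute cap is exceeded.
    def __init__(self, max_concurrent=4, max_per_minute=60):
        self.max_concurrent = max_concurrent
        self.max_per_minute = max_per_minute
        self.lock = threading.Lock()
        self.clients = {}  # client id -> {"active", "times", "cached"}

    def handle(self, client, produce_response):
        now = time.time()
        with self.lock:
            state = self.clients.setdefault(
                client, {"active": 0, "times": [], "cached": None})
            # Keep only request timestamps from the last minute.
            state["times"] = [t for t in state["times"] if now - t < 60.0]
            over_limit = (state["active"] >= self.max_concurrent
                          or len(state["times"]) >= self.max_per_minute)
            if over_limit and state["cached"] is not None:
                return state["cached"]      # limit exceeded: serve cached copy
            state["active"] += 1
            state["times"].append(now)
        try:
            response = produce_response()   # the real (expensive) work
        finally:
            with self.lock:
                state["active"] -= 1
        with self.lock:
            state["cached"] = response      # remember the last full response
        return response

limiter = RequestLimiter(max_concurrent=2, max_per_minute=10)
print(limiter.handle("203.0.113.7", lambda: "expensive page"))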




_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/