[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: [Full-disclosure] Does this exist ?
- To: full-disclosure@xxxxxxxxxxxxxxxxx
- Subject: Re: [Full-disclosure] Does this exist ?
- From: "Rob McCauley" <robm.fd@xxxxxxxxx>
- Date: Fri, 6 Jul 2007 12:20:36 -0400
Ya know, I don't think he does get that part yet.
This scheme is essentially how data compression already works. Not in
gigantic swaths of bits, as being proposed here, but in smallish numbers: a
few bits represent a bigger set of bits. Huffman coding is a basic
example.
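To make that concrete, here is a minimal Huffman coding sketch in Python (my own illustration, not part of the original proposal): frequent symbols get short codes, rare symbols get longer ones, and the mapping is prefix-free so it can be decoded unambiguously. This is the legitimate, small-scale version of "a few bits stand for more bits."

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build a prefix-free code: frequent symbols get shorter bit strings."""
    freq = Counter(data)
    # Heap entries are (weight, tiebreak, tree); a tree is a symbol
    # or a (left, right) pair of subtrees.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                      # degenerate single-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:                # repeatedly merge the two lightest trees
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):             # left edge = "0", right edge = "1"
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

text = "abracadabra"
codes = huffman_codes(text)
encoded = "".join(codes[c] for c in text)
```

Here "abracadabra" compresses from 88 bits (11 bytes) to 23 bits, because the code adapts to the actual symbol frequencies. That adaptation is exactly what a fixed giant lookup table of "all possible packets" cannot do.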
The infeasibility of this idea is all about the data size. As someone
already pointed out, 2^4000 is not 16,000,000 (that's 4000^2). 2^4000 is
large enough to just call it infinite and be done with it.
For comparison, there are something like 10^80 atoms (roughly 2^266) in the
observable universe. The hardware you'd need to implement a database of
2^4000 entries would require more matter than exists. Period.
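The back-of-envelope arithmetic behind that claim can be checked directly (a sketch of my own, using the common ~10^80 atoms estimate): even granting one database entry per atom, the table runs out of universe at around 266-bit payloads, nowhere near 4000 bits.

```python
# How many bits of payload can a table with one entry per atom index?
ATOMS = 10 ** 80        # common order-of-magnitude estimate for the observable universe

bits = 0
while 2 ** bits <= ATOMS:   # find the first payload size that overflows the table
    bits += 1

# bits is now 266: a table of all 266-bit values already needs more
# entries than there are atoms. 2^4000 is unimaginably beyond that.
```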
This idea is only interesting if it works at the scale proposed. It
doesn't. On a smaller scale, this is how data compression is already done.
Rob
On Fri, Jul 06, 2007 at 01:52:55 -0500, Dan Becker wrote:
> So we generate a packet using the idpacket field of a database to
> describe which packets should be assembled in which order then send
> it. 1 packet to send 500.
Do you realize the id of the packet(s) would have to carry as much
information as the contents of the packet(s) themselves?
See also
http://en.wikipedia.org/wiki/Information_entropy#Entropy_as_information_content
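The pigeonhole argument behind that objection can be shown with a toy example (my own illustration, shrunk to 4 bits in place of the thousands of bits in 500 real packets): if the id space is even one bit smaller than the message space, two different payloads must share an id, and the receiver can no longer reconstruct reliably.

```python
from itertools import product
from math import ceil, log2

# Toy version of the counting argument: to name any one of the 2^n
# possible n-bit payloads, the "id" must itself be at least n bits.
n = 4  # stand-in for the thousands of bits in 500 real packets
messages = ["".join(bits) for bits in product("01", repeat=n)]
shorter_ids = ["".join(bits) for bits in product("01", repeat=n - 1)]

# Fewer ids than messages: some id would have to stand for two
# different payloads, so lossless reconstruction is impossible.
assert len(shorter_ids) < len(messages)

min_id_bits = ceil(log2(len(messages)))  # == n: no saving at all
```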
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/