Hi and hello there,
I was wondering which crunchers give the best results in terms of compression ratio. I don't care if the crunching is slow, and the decrunching doesn't have to be blazing fast either, but the compression ratio should be excellent. I'm also interested in articles on how to improve the compression ratio by reorganizing the input file, if that's feasible.
Thanks in advance!
Hardcore crunching & sloooow decrunching: Shrinkler for 4k intros (7 seconds of decrunching for The Third Kind), or Exomizer for bigger files (decrunching about 2 times slower than Aplib).
Shrinkler yields impressive results. But I will still try Exomizer, too. Any other suggestions?
My personal choice would be PuCrunch. I especially like that one.
Wow, never heard of that! Thanks, will give it a shot.
Is there a "definitive" cruncher for short data (from, let's say, 50 bytes to 2kb)? Or is it "try them all and test"? Thanks.
Try them all!
If the data is 2k, I think Shrinkler will be the best in all cases.
If the data is 50 bytes, don't crunch it, since every decruncher will be bigger than the data itself.
If the data is around 1k, then try a cruncher with a very short decruncher like ZX7 (the smallest one?); it could do better than Shrinkler...
Thanks. But something I didn't mention is that there would be many pieces of data to decrunch, at various points, so there would be one decrunch routine and many tiny-to-small chunks of data. As for Shrinkler: no way, it's too slow (this is for a game).
I have some data that contains a lot of inherent/repetitive patterns and thus probably compresses quite well, regardless of the cruncher used (at least that's my guess). I was wondering, mainly with Shrinkler in mind, whether it would help to pre-compress that data somehow (more generally, rearrange it to reduce redundancy) so that my code expands/recreates it at runtime. The uncompressed binary would become smaller, even though I'd have to add the code that blows the data back up. I figure the entropy in that version would be higher compared to the untreated one. Do you think that's worth trying? I'm sure you guys have already dealt with similar problems.
According to my tests, the lower the entropy is, the better the Shrinkler results are. So I already spent time pre-crunching some data, and the results were always worse than before. My conclusion is that we have to test the opposite: unroll code, repeat data, and then the entropy will be lower even if the file is bigger.
The lesson is that Shrinkler is much more sensitive to entropy than to raw file size (that's true for all crunchers, but Shrinkler is very, very sensitive to it).
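Order-0 byte entropy is only a crude proxy for what a real cruncher sees (it ignores match structure entirely), but it is easy to measure before feeding a file to Shrinkler. A minimal Python sketch (the helper name is my own, not from any cruncher's tooling) illustrating why repeated data scores lower than "pre-packed" data:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Order-0 Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Repetitive data: 4 distinct bytes, each equally likely -> 2 bits/byte.
repetitive = b"ABCD" * 256

# "Pre-packed" data with all 256 byte values equally likely -> 8 bits/byte,
# the worst case: nothing left for an entropy coder to exploit.
packed = bytes(range(256)) * 4

print(shannon_entropy(repetitive))  # 2.0
print(shannon_entropy(packed))      # 8.0
```

The bigger repetitive file is the better cruncher input despite its size, which matches the advice above: unrolling code and repeating data lowers entropy even though the raw file grows.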
Some really interesting and helpful insights, thanks Hicks!