Some more testing with pigz and 7z compressing to gzip format. Bump this. It does use pigz when decompressing multiple files.
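For context, a minimal sketch of the commands such a comparison involves (file names are placeholders; both commands produce standard gzip-format output):

    pigz -9 -k bigfile.tar                    # pigz: parallel compression to .gz, keep the input
    7z a -tgzip bigfile.tar.gz bigfile.tar    # 7-Zip writing the same gzip format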
Thanks scirius! Does anyone know what the equivalent is on OSX? Or some multi-platform library that knows more about this stuff, Windows included?

SnowLprd on May 30: I'll do some additional testing to see if the results are affected by caching.
Not sure why you're surprised at what I used. As I mentioned in the article, most of the time I'm creating bzipped tarballs from directories of files, so it made sense to use what is, for me, a common real-world use case. I don't entirely follow your mention of tar, though. But perhaps it's I who am misunderstanding your suggestion.
Please feel free to enlighten me.

Got it! In effect, you split the file up into fixed-size chunks, compress them in parallel, and recombine them into one file at the end.
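That is roughly what a parallel compressor like pbzip2 does internally. A minimal shell sketch of the idea using gzip, whose format allows concatenated members to be decompressed as one stream (file names and the 900 kB chunk size are illustrative, and a real tool would cap the number of parallel workers):

    split -b 900k bigfile chunk.                  # cut into pieces: chunk.aa, chunk.ab, ...
    for c in chunk.??; do gzip "$c" & done; wait  # compress every piece in parallel
    cat chunk.??.gz > bigfile.gz                  # concatenated gzip members form a valid stream
    rm chunk.??.gz                                # clean up the intermediate pieces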
Inufu on May 30: Is there a reason this is not the default? I'm a GNU tar user (I believe that's the version in most Linux distributions, but I may be wrong), so I tend to use -z for gzip, -j for bzip2, and -J for xz. That said, I guess using the "alternatives" framework in Linux it would be reasonably easy and transparent to support the parallel version of each tool as a replacement for the regular one.
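A hedged sketch of both approaches (the paths and priority value are assumptions, not a tested recipe):

    # Per invocation: GNU tar accepts an arbitrary compressor via -I.
    tar -I pigz -cf archive.tar.gz directory/

    # System-wide, via the Debian-style alternatives framework:
    sudo update-alternatives --install /usr/local/bin/gzip gzip /usr/bin/pigz 10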
BrainInAJar on May 30: A GPU implementation would be cool.

SnowLprd on May 30: Quite right.

LogicX on May 30: Looks worth trying out -- can I suggest adding installation packages for brew and Ubuntu (at least through Launchpad)?

SnowLprd on May 31: That looks very promising.

SnowLprd on May 31: I disagree with your assessment.

SnowLprd on May 31: I agree that the default level 6 for xz probably errs too much toward favoring file size over speed.
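If the default feels too slow, lowering the preset is a one-flag change (timings vary by machine, so treat this as a sketch):

    xz -6 file    # the default preset: good ratio, slow
    xz -2 file    # much faster, and usually still smaller output than gzip -9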
CJefferson on May 30: bz2, out of all current compression methods, is particularly parallelisable, as it has already split the file up into 900k-or-smaller blocks and compressed each individually (well, run BWT on each separately, at least).
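That block structure is exactly what pbzip2 exploits. Minimal usage, with the thread count as an example value:

    pbzip2 -p8 bigfile.tar         # compress with 8 threads; output is an ordinary .bz2
    pbzip2 -d -p8 bigfile.tar.bz2  # decompression parallelises best on pbzip2's own output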
In practice, the bigger test files should be more reliable in terms of speed comparison. When reading the tables, it is important to keep in mind which settings are the default in each program: gzip defaults to -6, bzip2 to -9, and lzmash to -7.
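So, assuming those defaults, the bare commands behind the tables are shorthand for:

    gzip file      # equivalent to gzip -6
    bzip2 file     # equivalent to bzip2 -9
    lzmash file    # equivalent to lzmash -7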
Note: the first column of numbers indicates the compression level. In this test, bzip2 is a tough adversary to lzmash in fast modes. The second test file was the XMMS 1.x source tarball; it was first gunzipped before compressing. For some reason, "bzip2 -6" took more time than even "bzip2 -9". The result didn't change when the test was repeated. The extreme mode of lzmash creates files a few bytes bigger; it seems that using "lzmash -e" makes compression both slower and less efficient with smaller files. Speed tables are omitted because the smaller test file makes measuring the elapsed time with the 'time' command too inaccurate.
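For anyone repeating the measurements, a sketch of the kind of timing loop behind numbers like these (the file name is a placeholder; as noted above, small files need repeated runs because 'time' is too coarse for them):

    for level in 1 6 9; do
        /usr/bin/time -p gzip -c -$level testfile > testfile.$level.gz
        ls -l testfile.$level.gz
    done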
For some reason, in compression "bzip2 -6" was a little faster than "bzip2 -5", yet "bzip2 -6" still created a smaller file. The memory requirements depend only on the compression level used. (The small-memory mode, bzip2's -s flag, hasn't been tested here; a sketch follows below.) When there's a need for very fast compression, gzip is the clear winner. It also has a very small memory footprint, making it ideal for systems with limited memory.
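For completeness, a hedged sketch of that small-memory mode (untested here, as noted above; the memory figure is from the bzip2 documentation):

    bzip2 -s bigfile          # compress with reduced memory use (smaller blocks)
    bzip2 -s -d bigfile.bz2   # decompress using roughly 2.3 MB of memory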