Template talk:Compression methods
This template does not require a rating on Wikipedia's content assessment scale.
Notes
Where is wavelet-based compression? (I think this is used in JPEG 2000.) Where is pyramid or scale-space compression?
Done. The "Image" section of this template now mentions "wavelet" and "LP". The "LP" links to "Pyramid (image processing)", which in turn links to "scale-space" a few times. "scale space compression" is not mentioned in this template because, as far as I can tell, there are not yet any Wikipedia articles that discuss that topic. (I would be surprised and delighted if someone were to tell me about such articles). --DavidCary (talk) 16:44, 10 December 2013 (UTC)
Separating algorithms from the actual compressors?
I see there are Brotli and LZ4 in "Dictionary type"; these are specific compressors, while the rest are more abstract algorithms. Maybe a separate category, "Compressors", should be created under "Lossless", listing the most notable ones. If so, compressors should also be added to the other categories, for example:
Lossless: gzip, Brotli, Zopfli, LZ4, PAQ, ZPAQ, Zstandard, bzip2, WinRAR, LZFSE ... FreeArc, 7-Zip
Image: JPEG, LZ-JPEG, GIF, PNG, WebP, JPEG 2000 ...
Video: H.264, H.265, VP9, Dirac, Daala, AV1 ...
Also, I believe transformations (LZ, BWT, MTF, RLE, DCT) should be clearly distinguished from statistical prediction methods (PPM, context mixing) ... and there is a missing article, e.g. "Statistical modeling for data compression", which would describe and compare:
- order, for Markov modeling,
- static: count frequencies and write the used probabilities in the header,
- adaptive: update the probability distribution as symbols are processed (see the sketch after this comment), with an example of a realization, like here: https://fgiesen.wordpress.com/2015/05/26/models-for-adaptive-arithmetic-coding/
- PPM,
- context mixing. Jarek Duda (talk) 10:28, 30 October 2016 (UTC)
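As a concrete illustration of the static vs. adaptive distinction in the list above, here is a minimal Python sketch of an order-0 model; the class name and tiny interface are made up for this example and are not taken from any article or library. The probability the model reports for a symbol would be handed to an arithmetic (or range/ANS) coder before the model is updated with that symbol, which is what keeps encoder and decoder in sync without shipping a frequency table.

```python
from collections import Counter

# Static (semi-adaptive) modelling: scan the data once, count symbol
# frequencies, and write the resulting probabilities in the header so the
# decoder can rebuild the same table.
def static_model(data):
    counts = Counter(data)
    total = len(data)
    return {sym: n / total for sym, n in counts.items()}

# Adaptive modelling: start from a flat prior and update the counts after
# every symbol; encoder and decoder apply identical updates, so no header
# is needed.
class AdaptiveOrder0:
    def __init__(self, alphabet):
        self.counts = {sym: 1 for sym in alphabet}  # Laplace smoothing: never assign probability 0
        self.total = len(self.counts)

    def probability(self, sym):
        return self.counts[sym] / self.total

    def update(self, sym):
        self.counts[sym] += 1
        self.total += 1

# Usage: query the model first, code the symbol with that probability,
# then update the model with the symbol just coded.
data = b"abracadabra"
model = AdaptiveOrder0(alphabet=set(data))
for sym in data:
    p = model.probability(sym)  # probability passed to the entropy coder
    model.update(sym)
```

Higher-order methods such as PPM and context mixing refine the same loop by conditioning the counts on preceding symbols or by blending several such models, which is why a single overview article comparing them would be useful.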