Jump to content

Talk:Fractal compression/Archive 1

Page contents not supported in other languages.
From Wikipedia, the free encyclopedia
Archive 1

Bound to fail?

This article seems biased to imply that fractaal compression was bound to fail. Could an expert in the topic check the wording of the article, and try to present a more balanced view.--172.201.36.161 21:10, 24 March 2007 (UTC)

Certain individuals do seem to be on a crusade against fractal compression for whatever reason.--Jakespalding (talk) 01:59, 28 February 2008 (UTC)
Actually the problem is that some people are in a crusade to make it sound better than it is. I'm just trying to maintain a semblance of reality and balance here. I am not making any claims about "success" or "failure", but I am trying to keep the page limited to what is verifiable and relevant.
Spot, your article was outright trash implying fractal compression does not work without human intervention. The whole point about listing fractal compression successes is to prove your moronic point of view invalid. Its obvious you have some problem accepting the truth.--Editor5435 (talk) 22:18, 29 February 2008 (UTC)
I have yet to find any verification of yours claims. In fact one of the external references has this to say: 'Fractal compression methods were actually quite competitive until the appearance of powerful wavelet-based methods and, subsequently, context-based coders. Nevertheless, it has always been the philosophy of the Waterloo research programme that fractal-based methods are interesting mathematically and that they may also be able to provide useful information about images. We are indeed finding that there is much "life after compression".' And the research group removed the word "compression" from their name. That's because it doesn't work. Spot (talk) 22:34, 29 February 2008 (UTC)
"That's because it doesn't work." That's what you believe in the little fantasy world you live in, besides the article you referenced does confirm it works. Have you ever heard of Microsoft's Encarta by any chance? Proof you are delusional!--Editor5435 (talk) 23:11, 29 February 2008 (UTC)
I quoted part of the article that says it does not and I can't find any part that says it does. Please quote it here to support your argument. I know Encarta and it uses Jpeg, and has for a long time. An early version of Encarta used fractal compression because they were willing to live with lossy reproduction, which is the only niche where fractal compression got any traction. These days it has none, hence the name-change by the Waterloo group. Btw you are in repeated violation of WP:NPA. Spot (talk) 00:08, 1 March 2008 (UTC)
"Fractal compression methods were actually quite competitive until the appearance of powerful wavelet-based methods and, subsequently, context-based coders." You are being irrational, how could fractal compression have possibly been "quite competitive" if it didn't work? Waterloo can name their group anything they want, its irrelevant. The fact wavelet based compression became more popular is no reason to exclude the subject of fractal compression in this Wiki article. We are discussing its very real history, reasons for its stagnation and renewed interest. You are bothered by it for some unknown reason.--Editor5435 (talk) 00:22, 1 March 2008 (UTC)
Whether something "works" or not is relative to your objective and the competition. Maybe back then Fractal Compression did work in the niche of highly lossy compression, and I don't have any problem with you saying so in the article. But there is no evidence that Fractal Compression worked at all as a general purpose image compression. Please see the compression FAQ. Note in particular the "Reader Beware" section which outlines the deception that you are trying to foist on us in this page. When I say it doesn't work, this is what I mean. I am not aware of any renewed interest, the case has been closed for a decade. You are welcome to demonstrate otherwise. Spot (talk) 00:59, 1 March 2008 (UTC)
Here's some more quotes from another part of the FAQ: 'It is time, once and for all, to put to death the Barnsley myth that IFSs are good for image compression. They are not. ... Even Barnsley himself admits, in his latest book, that he doesn't use IFS image compression. Instead, he uses the so-called "fractal transform," which is really just a variant of vector quantization' Spot (talk) 01:09, 1 March 2008 (UTC)
Fractal transform compression "The fractal transform is a technique invented by Michael Barnsley et al. to perform lossy image compression. This first practical fractal compression system for digital images resembles a vector quantization system using the image itself as the codebook."--Editor5435 (talk) 01:36, 1 March 2008 (UTC)
Iterated function systems and compression "Q11b: What is the state of fractal compression? A11b: Fractal compression is quite controversial, with some people claiming it doesn't work well, and others claiming it works wonderfully. The basic idea behind fractal image compression is to express the image as an iterated function system (IFS). The image can then be displayed quickly and zooming will generate infinite levels of (synthetic) fractal detail. The problem is how to efficiently generate the IFS from the image.
Barnsley, who invented fractal image compression, has a patent on fractal compression techniques (4,941,193). Barnsley's company, Iterated Systems Inc, has a line of products including a Windows viewer, compressor, magnifier program, and hardware assist board."--Editor5435 (talk) 02:01, 1 March 2008 (UTC)
Spot, there is no deception going on here, you know very well of the renewed interest in fractal compression, you just can't bring yourself to admit it. I have downloaded demo video of 250:1 fractal compression and viewed it against raw source, its near lossless. I will not post the link because I know its preferable when third party references are available, but never the less your temper tantrums won't change the fact fractal compression is making a comeback. I will post more information as it becomes available.--Editor5435 (talk) 04:31, 1 March 2008 (UTC)
Claiming that uprezzing is compression is deception (see the FAQ). I am not aware of any renewed interest, and the literature survey that Stevenj did below indicates no scientific progress and declining interest (very few papers published, and a switch from compression to coding). Your demo video came from the company that's selling the product, so it's not trustworthy. Perhaps when this information becomes available we can change the wikipedia to reflect it, but until then, we'll have to remove your claims. Spot (talk) 17:56, 1 March 2008 (UTC)
Spot, if you do a Google search on fractal compression approximately 100,000 results are returned. There are numerous research papers published in the last few years. As for your opinion about "fractal zooming" that has already been discussed. "and a switch from compression to coding", are you referring to the Waterloo group's realizing they better find something else to do because Iterated Systems Inc.'s patents still restrict what other companies can do with fractal compression? --Editor5435 (talk) 18:46, 1 March 2008 (UTC)
1) The number of results returned by Google doesn't tell us anything about how interest or research on the topic has changed over time. 2) My statements about "fractal zooming" are not "my opinion" they are simple facts, backed up by the FAQ. So far nobody here agrees with your position. 3) Regardless of the reason for their change of name, it still indicates a decrease in "fractal compression" research. Spot (talk) 02:53, 4 March 2008 (UTC)
Analysis of a hybrid fractal-predictive-coding compression scheme "There has been tremendous progress in fractal compression since the pioneer work of Barnsley and Jacquin in the late 1980s. As the encoding time complexity issues are gradually being solved, there is a steady growth of applications of fractals, especially in hybrid systems. However, such fractal hybrid systems tend to be rather difficult to analyze, and part of that difficulty lies in the quantization of the scaling and luminance offset parameters adopted in most fractal compression schemes. In this paper, we present theoretical and empirical justification for a well-known but underused alternative parametrization for the fractal affine transform. In particular, we shall present a detailed analysis of a hybrid fractal-LPC (linear predictive coding) compression scheme using the aforementioned alternative affine transform parameters."--Editor5435 (talk) 19:39, 1 March 2008 (UTC)
Sorry I don't understand the relevance of this abstract. It mentions progress "since the late 80s" but that certainly isn't isn't the last few years, and the abstract was originally written in early 2002 which isn't in the "last few years" either. Spot (talk) 02:53, 4 March 2008 (UTC)
Spot, since you are having a difficult time with this, here is a more current reference from 2005 about using faster hardware to achieve realtime fractal compression: Towards Real Time Fractal Image Compression Using Graphics Hardware on page 723.--Editor5435 (talk) 07:52, 4 March 2008 (UTC)
We already debunked your use of that reference below. The fact that computers keep getting faster and any algorithm will eventually run in realtime does not indicate renewed interest or competition with established techniques. Neither does simply introducing another reference contradict my rejection of the above. If you have the goods then show them. If you don't then your content will be removed. Spot (talk) 04:10, 5 March 2008 (UTC)
There are a lot of violations of WP:NPA in your comments, Editor5435. It's not really helping your case to make derogatory statements about Spot in each of your posts, and I suggest that you take a more factual tack to your statements. Now, while I was unable to read the information on page 723, the rest of the chapter deals generally with the potential use of GPUs in solving the fractal compression problem. Note that this is NOT saying that they've made the algorithm work faster or better - simply that there's faster hardware that could be used to speed up the process, which isn't really 'progress'. ErikReckase (talk) 04:50, 5 March 2008 (UTC)
What part of this do you not understand? Fractal compression has always been computationally intensive. I have never claimed otherwise. I claim this processing requirement is the main reason why fractal compression was surpassed by wavelet compression. What I am claiming is the hardware is now fast enough to make fractal compression practical and the benefit is higher compression rates that wavelet based compression. I also claim that due to patent issues there is only one single company pursuing fractal video compression. There is only a brief mention of this in the main article because because of Wiki's requirement for verification. Never the less, it is being mentioned in this discussion as an indication of future changes that are most likely to come to the main article. It also points out that many assumptions made about fractal compression are wrong, for example, Spot's claim that human intervention is required for fractal compression to work is completely false.--Editor5435 (talk) 06:52, 5 March 2008 (UTC)

Audio area

The audio area is a large possible application as the generation of iterable MP3s might require a bunch of computer time; that is studio effort —Preceding unsigned comment added by 129.65.190.106 (talkcontribs) 21:43, 20 June 2007

Companies providing software products

There was a company, Iterated Systems, Inc., which around 1997 or so provided a software product for PCs, Fractal Imager 1.6. This product compressed, decompressed, and converted images to/from jpg, bitmap, and certain other formats. It was slow. But it did achieve nice compression ratios, though, as the article states, primarily for natural scenes at less-than-perfect precision. Iterated was acquired, I forget by whom. The acquiring entity seemingly had no interest in a PC product and, it seems to my faulty memory, was working on integrating FI into its other products. I would be nice to have something added to the main article about Interated Systems, Inc., and Fractal Imager 1.6, though by someone who could cite the information added (I couldn't do this latter bit). Zajacd01 03:01, 23 April 2007 (UTC)

There is a state-of-the art product currently, called Genuine Fractals, which is used to resize images "up to 1000%" while retaining the appearence of sharp edges, and without aliasing artifacts. The company claims to have a patent on the algorithm, which they say is fractal-based. [1] So presumably this may be a technique related to fractal compression. I also used the demo of Fractal Imager, and did not perceive any great slowness .. everything ran slow on those old PCs. ;) When scaling an image above 100%, it would degrade in a much more "natural" way than typical raster/bicubic scaling, and it is this feature which seemed to be the selling point. Presumably this Genuine Fractals company has picked up on that, whether they inherited the Fractal Imager code or patent I do not know ... Haven't managed to find any useable open-souce code for any fractal compression or fractal-scaling .. yet. Given that there are a number of articles on the internet discussing theory about this technique of image manipulation [2] [3] [4], then I suppose it should be practical to come up with source which does not tread on any existing patent. There is a Windows binary of some published implementation. [5] -- Commandslvt 15:19, 21 May 2007 (UTC)

Worth noting if you come across a 2000 slashdot thread discussing this, that it's mostly very poor attempts at humour, and statements by people who don't know wtf. -- Commandslvt 15:25, 21 May 2007 (UTC)

In fact, the Genuine Fractals product does seem to be derived from the Fractal Imager product -- or at least to be compatible with its FIF file format.
This might be what happened: Iterated Systems Fractal Imager 1.1 => Altamira Genuine Pixels 1.0 => LizardTech Genuine Fractals 2.0 [6] => ??? => onOne software Genuine Fractals 5.0 -- Commandslvt 15:38, 21 May 2007 (UTC)

Updated with accurate information

Why does "Spot" insist upon contributing negative misinformation about fractal technology? He is about 15 years behind the current state of fractal compression and its commercial use. He should not be editing subjects he clearly has no understanding of. The article has been updated with accurate historical and current information. —Preceding unsigned comment added by Technodo (talkcontribs) 03:12, 16 February 2008

I reverted your edits because no references or documentation were provided. My version was backed up by the compression FAQ. Can you cite any peer-reviewed publications that demonstrate your claims about TruDef? Note that "Genuine Fractal" is for uprezzing, not for compression, so it shouldn't be on this page at all. Spot (talk) 08:35, 16 February 2008 (UTC)
Uprezzing can not be ignored when discussing fractal compression. Resolution independence allows both still images and video to viewed at resolutions much larger than native size while still working with the original file size. —Preceding unsigned comment added by 64.46.3.61 (talkcontribs) 01:50, 18 February 2008
I reverted the article again since you still have not provided references. My web research indicates that the only claims that match yours come from TMMI itself, and the company is a penny stock with a history of problems. Uprezzing is important and has its own page (Image scaling) where fractal uprezzing algorithms (rather than a particular product) should be described. It would be appropriate to link there from here. Spot (talk) 19:13, 18 February 2008 (UTC)
The previous article is correct, uprezzing is a valid method for achieving compression since saved file size has no relation to resolution. Further, since the original Iterated patents are still valid and licensed to specific products, those products deserve description. I restored the more informative article and will continue to do so in the event any further tampering takes place.--Editor5435 (talk) 20:36, 18 February 2008 (UTC)
The 'Reader Beware' section of the Fractal Compression section of the Compression FAQ clearly discusses the relationship between uprezzing and compression, and how it's not a means to achieve fantastic compression ratios - but it's fooled people before. ErikReckase (talk) 05:15, 5 March 2008 (UTC)
The Fractal Compression section of the Compression FAQ is wrong, it requires updating. Fractal scaling achieves the exact same results as compression, how else to fill high resolution screens from the smallest amount of data without introducing so many artifacts that loss of detail becomes unacceptable? After video has been fractally encoded it no longer has a fixed resolution, its up to the viewer to decide on what resolution to display the video.
For discussion purposes the following represents options for 10kb of fractally encoded image data:
If 10kb of image data is decoded into a 1MB image it represents a compression ratio of 100:1
If 10kb of image data is decoded into a 2MB image it represents a compression ratio of 200:1
If 10kb of image data is decoded into a 4MB image it represents a compression ratio of 400:1
If 10kb of image data is decoded into a 6MB image it represents a compression ratio of 600:1
The fact is in the above scenario a various images sizes up to 6MB were generated from 10kb of data. How would this be possible with wavelet based compression without being so horribly pixelated the image would be unviewable?
Resolution independence is an inherent characteristic of fractal compression. You can argue all you like, the fact is fractal compression allows much larger screen resolutions for the same amount of image data than wavelet based compression.--Editor5435 (talk) 08:30, 5 March 2008 (UTC)
The compression FAQ is wrong? Convince the owners of the FAQ to change it, and I'll entertain your arguments - but I'm certain that their words have had a significant amount of thought put into them. Fractal scaling is a great interpolation method, but it shouldn't be confused with compression. It also seems to me that if you use any image scaling method, fractal or otherwise - say, bicubic interpolation, or even nearest neighbor - you would have to make the same claims as to compression ratios. It doesn't matter whether there is pixelation or not - it's still the same compression ratio. I agree that fractal scaling provides more appealing results than nearest neighbor - but that doesn't make it compression, it makes it a better scaling algorithm. ErikReckase (talk) 15:50, 6 March 2008 (UTC)
My experience on Wikipedia is no longer enjoyable because of certain individuals, my effort in trying to provide factual information on this subject is no longer worth my time. If you want to butcher the article feel free to remove the following:
Once an image has been converted into fractal code its relationship to a specific resolution has been lost, it becomes resolution independent since the image can be recreated to fill any resolution.
Readers can always search other sources such as Google for facts about fractal compression. I realize Wikipedia is not the best place for such information.--Editor5435 (talk) 00:35, 7 March 2008 (UTC)

Uprezzing and patents

Uprezzing is not compression, it is not mentioned by the Image Compression page. A list of 25 patents held by a particular company is certainly not appropriate for this page. Patents do not count as documentation. What you need to be taken seriously is a reference implementation of the codec and sample video files both uncompressed and compressed with this technique. Reverting. Spot (talk) 19:46, 23 February 2008 (UTC)

Michael Barnsley's company just happens to be the company that invented fractal compression. The patent list is of fractal compression related patents. The patents describe exactly what this codec technology is. Stop deleting efforts made to update this article with current and past relevant information. As for your opinion about uprezzing its clearly wrong as resolution independence has always been the basis for fractal compression. The Image Compression page will require updating since it ignores the unique capability of fractals to expand a saved images into higher resolution images for viewing. This enables saved images to have smaller file sizes than otherwise is possible for their expanded viewing size. onOne Software describes Genuine Fractals 5 useful as a means of saving disk storage space because of this capability. The previous shabby article made it sound as if fractal compression was a failure and a dead end technology, it was full of inaccuracies that only serve to mislead readers about this subject. The fact of renewed interest and plans to port this technology to modern OS environments is certainly worthy of mention, especially after so many years inactivity in fractal video compression. As for previous use of fractal video there are references to RealVideo and Mitsubishi's license with Iterated. I will continue to gather more useful and relevant information for this article and keep it updated accordingly.)--Editor5435 (talk) 08:36, 24 February 2008 (UTC)
The patents do not describe the codec exactly. Your sources are 10 to 15 years old, and they are only press releases and patents by the interested parties. They are not NPOV. If anything this is Original Research since there is no independent verification. Let me try to address just one topic first, to see if we can agree: uprezzing. The question is, should uprezzing algorithms be covered in the pages on Image Compression and Fractal compression? Compression is the process of encoding information with fewer bits. Uprezzing is the process of expanding to higher resolutions, ie into more bits. These are different (though related) things. Can yo u explain how increasing resolution accomplishes "encoding information with fewer bits"? Spot (talk) 12:23, 24 February 2008 (UTC)
The patents describe exactly the technology which fractal video codecs are based on if you bother to read them and have the ability to comprehend their meaning. Again, Michael Barnsley's company Iterated Systems Inc. just isn't some interested party, fractal compression is its invention. It owns the patents for this technology. Many of the older sources provided are relevant history on this subject. The original article was biased and made it appear fractal compression was a complete failure. The history proves otherwise and should not be ignored. Why does the original article ignore these facts? Uprezzing allows files to be stored in fewer bits and viewed in more bits. Resolution independence is what fractal technology is all about and allows fewer bits to be stored than otherwise is possible. Besides, Genuine Fractal 5 supports FIF (Fractal Image Format) which results in smaller file sizes than raw, so obviously its compression. Why does the original article mislead about "human intervention" to achieve compression results comparable to JPEG? There is a history of ClearVideo offered as a commercial product used by numerous companies and individuals which did not require any human intervention, the encoding process is fully automated, contrary to what the original article claims. Why is such misinformation spread? There is certainly a biased agenda against the merits of fractal compression being pushed here as described at the top of this discussion page. The original ClearVideo decoder driver is still supported by Microsoft in Windows Media Player and the decoder is currently available here.--Editor5435 (talk) 17:20, 24 February 2008 (UTC)
Uprezzing does not allow files to be stored in fewer bits and viewed in more bits - I have strong feelings that uprezzing algorithms should not be covered as part of Image Compression and Fractal Compression, except potentially as a link to somewhat related information, for that reason alone. The intent of compression is to be able to represent the same data with fewer bits - this doesn't mean larger images based on the same data, it means the original data. If the uprezzing algorithm were to be used to reduce the number of bits for the same sized image, it would be an image compression approach, but I still wouldn't want uprezzing on the Image Compression or Fractal Compression pages, simply because of the name. The algorithm would have to be named something else. I'm also not fond of the argument that uprezzing is fractal compression because Iterated Systems was involved - seems similar to saying that canning jars are spacecraft components because Ball makes both. ErikReckase (talk) 17:06, 25 February 2008 (UTC)
When an image is converted into a fractal encoded file all pixel data is lost and is replaced with geometric transforms, the image becomes resolution independent because new pixels are mapped by the fractal algorithm according to the display resolution. In the case of Genuine Fractals 5 it still uses the same Iterated fractal encoding algorithm that compresses files compared to raw size. The uprezzing capability of fractal encoding is inherent in the process, it is not some separate algorithm unrelated to fractal compression. I have replaced the term "uprezzing" with "fractal zooming". With fractal zooming more bits are created from the fractal code to fill higher resolutions, for example zooming in on (scaling) an image from 640x480 to 2560x1600 as compared to compressing a native 2560x1600 image using i.e JPEG. The end result is two images that appear similar but are stored in significantly different file sizes. Here is an excellent link to Fractal Basics at FileFormat.Info.--Editor5435 (talk) 03:07, 26 February 2008 (UTC)
Spot, the updated article is far more informative than the old one, stop deleting the updates.--Editor5435 (talk) 02:01, 27 February 2008 (UTC)
If a cubic spline approach were used to scale an image larger, and then jpeg compression was used to make the image smaller in size than the original, the cubic spline is NOT part of image compression. Additionally, your comments above are akin to 'creating detail' in images where the original may not have any - 'appearing similar' is not the same as image compression. I know of no person who would prefer a 640x480 image 'fractally uprezzed' to 2560x1600 to a jpeg-compressed 2560x1600 original - generally the larger size is not available as well (which is why you use uprezzing in the first place). ErikReckase (talk) 12:01, 27 February 2008 (UTC)
"I know of no person who would prefer a 640x480 image 'fractally uprezzed' to 2560x1600 to a jpeg-compressed 2560x1600 original". Sometimes you don't have a choice, for example in the case of video, the extreme bandwidth restrictions of YouTube would greatly benefit from 640x480 data size fractally uprezzed to whatever a viewer's resolution is set to.
Creating detail is what fractal zooming does, similar to how postscript scales fonts. It is a difficult concept for most people to understand, but never the less is the basis for remapping fractal code back into blocks of pixels. Here is what the Encyclopedia of Graphics File Formats from O'Reilly has to say about fractal scaling:
"Two tremendous benefits are immediately realized by converting conventional bitmap images to fractal data. The first is the ability to scale any fractal image up or down in size without the introduction of image artifacts or a loss in detail that occurs in bitmap images. This process of "fractal zooming" is independent of the resolution of the original bitmap image, and the zooming is limited only by the amount of available memory in the computer. The second benefit is the fact that the size of the physical data used to store fractal codes is much smaller than the size of the original bitmap data. If fact, it is not uncommon for fractal images to be more than 100 times smaller than their bitmap sources. It is this aspect of fractal technology, called fractal compression, that has promoted the greatest interest within the computer imaging industry."
Resolution independence must be included in descriptions of fractal compression since images and video take on this characteristic after being converted and compressed into fractal code. Further, the fact storage requirements are reduced for whatever resolution an image or video is viewed in is the goal for all forms of imaging compression. The arguments stands, fractal encoded images and video can be viewed in large resolutions from the same compressed data which is not possible with pixel based compression schemes without the introduction of image artifacts and loss in detail. It would be wrong not to include this information in the article. I have separated this subject into a single short paragraph "A characteristic of fractal compression".--Editor5435 (talk) 17:00, 27 February 2008 (UTC)
This concept is not difficult for me to understand - I'm a signal/image processing engineer with 12+ years of experience - but it's hard for me to swallow your statements. The relevant line from the Fractal Compression FAQ with respect to uprezzing as fractal compression: "Zooming in on a person's face will not reveal the pores." Uprezzing may be a good alternative means to interpolation when making images larger, but it is not an argument for inflated compression ratios. Yes, I may not have a choice when resizing a 640x480 image larger to fit on my screen - but I certainly wouldn't describe the process involved in making the image more aesthetically pleasing as compression. Fractal uprezzing yes, but compression shouldn't even be in the same sentence. ErikReckase (talk) 04:56, 5 March 2008 (UTC)
When an image is converted into fractal code it no longer is tied to any specific resolution, its up to the user to decide which resolution to view. The source file size remains the same no matter what resolution its viewed in.--Editor5435 (talk) 09:28, 5 March 2008 (UTC)
Again - I understand the concept of fractal compression. What I'm trying to stress here is that you are combining two operations - fractal compression and image scaling. Restrict your comments, and your compression ratios, to the former. Fractal zooming is an interpolation method, and I do not believe it's inherent in the fractal compression process. ErikReckase (talk) 15:33, 6 March 2008 (UTC)
Fractal zooming is not an interpolation method. If this were so, then fractals themselves are an interpolation method. Any display of a fractal is a "zooming", so to speak. - it is a specification of a "scale". And fractals are scale-invariant (i.e. invariant under scale transformations, because they are made out of affine transforms). That is the whole point. Fractal-encoded images are not made out of pixels. A loose analogy would be to vector-graphics. When you scale a vector graphic you're not "interpolating", any more than you would be if you displayed it at 1:1. Same goes for fractals. Scale invariance is "the first thing about fractals", so to speak. If you can't understand how fractal zooming is not interpolation you need to learn a lot more about fractals (as well as interpolation) before you start debating it. Kevin Baastalk 18:17, 10 March 2008 (UTC)
I removed the term "fractal zooming" from the article and replaced it with "resolution independent", a term used by just about every paper that describes fractal compression.--Editor5435 (talk) 15:40, 6 March 2008 (UTC)
Back to the original question, which was 'should uprezzing algorithms be covered in the pages on Image Compression and Fractal compression?' Let's avoid the 'uprezzing' word and just call it image scaling for the time being. We've been talking about this issue in multiple places in this document, as a consequence of the page organization. Editor5435, you believe that fractal image scaling is an integral part of the compression process, calling it resolution independence, and therefore you want it to be covered on those pages. I, on the other hand, follow the compression FAQ and my own thoughts on the subject: that image scaling using any algorithm, fractal or not, should not be covered in discussions on image compression. You say the FAQ is wrong. To be honest, I don't think there's any way you're going to be able to convince the writers of the FAQ that they're wrong, or me for that matter. Are we stalemated here?
Scaling is a feature that comes with any system that takes "knowledge of the application" and uses it "to choose information to discard" or preserve, and therefore includes any form of "transform coding". For instance, one could take, say, a megabyte of plain english text and compress it using a simple LZW-type scheme that operated on the word and grammar level. You could then use the compressed data to generate character strings that read very much like english. (see: Claude Shannon's Mathematical Theory of Communication) (This is not interpolation. Interpolating the word "hello" would produce something like "hfeilllno".) If the original text were in French, the generated character strings would be in French. This is why transform-coded images like mpeg can scale-up images from their original resolution with better accuracy than can be achieved by doing linear (or bi-linear, or bspline, or what-have-you) interpolation on an image reproduced (from the compressed data) at the original size. This is because the encoding/decoding mechanism contains information about the physical world that allows it to make a more accurate estimate of what color a given region is likely to be (that's what enables it to compress the image in the first place - because it doesn't need to store information that can be accurately inferred from other information combined with knowledge about the physical world - just like the human eye does). This is an innate consequence of "transform coding", which fractal transform coding is a special case of. Point is that it's an innate property of image compression, for all but the most trivial methods, at the core of modern image compression theory. Being at the core of the theory of image compression, I'd venture to say that it deserves "mention". Kevin Baastalk 18:49, 10 March 2008 (UTC)

Many of the links in the article, not just the external links section, appear to violate WP:EL and/or WP:SPAM. Some might be included as references, but it's unclear which. Please take the time to format the reference appropriately, or at least indicate which links are supposed to be references. --Ronz (talk) 18:44, 27 February 2008 (UTC)

Purpose of references

I have referenced all relevant external links from within the article. The purpose of the references is to disprove the previous article that was biased towards fractal compression as being a commercial failure when in fact there is a history of commercial success with this technology.--Editor5435 (talk) 19:04, 27 February 2008 (UTC)

I'm asking for your help identifying references. Can you help? --Ronz (talk) 19:08, 27 February 2008 (UTC)
Yes, many of the reference are self explanatory such as government grant reference link, Microsoft's ClearVideo support and the link to the ClearVideo codec download, the Genuine Fractal 5 product which are based on fractal compression. I will continue to search for more information supporting this article.--Editor5435 (talk) 19:16, 27 February 2008 (UTC)
Thanks for the help! --Ronz (talk) 19:30, 27 February 2008 (UTC)

Please help keep this talk page formatted properly

Please indent appropriately when replying to a comment. Please create a new section heading when you start discussing a new topic. See Wikipedia:Talk#Technical_and_format_standards --Ronz (talk) 19:11, 27 February 2008 (UTC)

Thanks for cleaning this once ragged page up!--Editor5435 (talk) 19:44, 27 February 2008 (UTC)

Patents are not reliable sources

Patents are not reliable sources for the information contained in them, other than as a primary source. They should not be used to assert anything in the patent works or is important enough to discuss in an encyclopedia. They can be used to show what was claimed in a patent. Remember though, that these are just claims that are not checked for accuracy. --Ronz (talk) 19:22, 27 February 2008 (UTC)

The contents of the patents are what this technology is entirely based upon.--Editor5435 (talk) 19:33, 27 February 2008 (UTC)
Please read WP:V and WP:RS if you have not already done so. If we cannot find other sources to verify the information, then the information will probably have to be removed. --Ronz (talk) 19:36, 27 February 2008 (UTC)
I feel the patents are an important inclusion in this article since they are the sole reason why only one company has developed commercial fractal compression software which was sub-licensed to some third parties. Even the Encyclopedia of Graphics File Formats from O'Reilly acknowledges the patent issue has limited the implementation of fractal compression.--Editor5435 (talk) 19:43, 27 February 2008 (UTC)
"Even the Encyclopedia of Graphics File Formats" That's a useful reference. --Ronz (talk) 19:46, 27 February 2008 (UTC)
Actually, the O'Reilly article on fractal compression is excellent and very informative. All I'm saying is anyone remotely familiar with fractal compression understands Michael Barnsley patented the technology and his company was the sole developer of commercial fractal image compression software.--Editor5435 (talk) 22:14, 27 February 2008 (UTC)
That article was also written by someone with a conflict of interest, and though it is better than this one, it still contains glaring distortions and omissions such as: "The first is the ability to scale any fractal image up or down in size without the introduction of image artifacts or a loss in detail that occurs in bitmap images." This is false. "it is not uncommon for fractal images to be more than 100 times smaller than their bitmap sources" only with extreme degradation of quality. "But the encoding process can be controlled to the point where the image is visually lossless." Only by giving up decent compression ratios entirely. Spot (talk) 22:24, 29 February 2008 (UTC)
Sure Spot, sure, everyone who provides any positive information about fractal compression is in conflict of interest according to you. It just so happens the O'Reilly article is very good and accurate. If you bother to do a little research you we see examples of 250:1 and greater fractal compression with near lossless results. I dare not name any source, it will only send you off on another hissyfit.--Editor5435 (talk) 00:43, 1 March 2008 (UTC)
I have done extensive research and I am not aware of any software that can compress images at 250:1 with near lossless quality. Please post a pointer. Spot (talk) 01:16, 1 March 2008 (UTC)
So you agree any third party verified software capable of 250:1 compression with near lossless quality is something very special?--Editor5435 (talk) 02:33, 1 March 2008 (UTC)
I would agree that if you were able to take an arbitrary 640x480 RGB image (921600 bytes) and compress that down to 3700 bytes, with near lossless quality, it would be special. I require a definition of near-lossless, though, since that's not exactly a scientific term. This also does NOT mean using fractal zooming on a smaller image. ErikReckase (talk) 16:01, 6 March 2008 (UTC)
"near lossless" is sometimes referred to as "visually lossless", again not exactly a scientific term. Of course fractal compression is a lossy form of image compression.--Editor5435 (talk) 16:22, 6 March 2008 (UTC)
Of course. So show me 250:1 on a 640x480 image, visually lossless...or maybe you can tell me what compression ratio you can achieve? Would you like a 640x480 image to start with? ErikReckase (talk) 17:22, 6 March 2008 (UTC)
Here are some 1280x960 examples I have located with compression ratios going up to 700:1. Please note, this is for discussion purposes only, none of this information is included in the main article.
Raw file use as reference
130:1
300:1
400:1
600:1
700:1
As soon as I can obtain verification of these developments in fractal compression then of course the article will be updated.--Editor5435 (talk) 17:55, 6 March 2008 (UTC)
I saw these examples when you tried to 'educate' Spot, and they don't prove a thing. You have images that have increasing distortion levels, but there's no way to verify that these images actually have the compression ratios that you're referring to. On top of the fact that it's more from a commercial website, there's also no indication of how long it took to achieve this compression, if it indeed is compressed that much. Perhaps you don't understand what I'm looking for: provide me the compressed image file for this frame, and a decoder. ErikReckase (talk) 18:27, 6 March 2008 (UTC)
When the compressor and decoder are available I will let you know. Until then there is no point in continuing any further discussion as this is leading nowhere. The article has been cleaned up and referenced properly. Hopefully it will remain that way until new information is available, and of course, if Spot has finally ceased with his vandalism.--Editor5435 (talk) 18:39, 6 March 2008 (UTC)
For what it's worth, a couple of years ago I downloaded a program (in another language - i think it french) and a few fif pictures. The compression ratio was 1:50 (which is MUCH better than jpeg), and I couldn't tell the difference between the reproduction and the original. I verified the file size and ratio myself, and played around with scaling the image beyond it's original resolution (and it was MUCH better than interpolation). That was several years ago. 1:400 is only about 10 times that (which pales in comparison to the original almost 1:100 compression), and given the time and technological advancements since then, and the distortions apparent in the higher compression-ratio images, I'm not all that surprised. Pleased, certainly. Surprised, no. Given the algorithm, it makes sense. Kevin Baastalk 19:12, 10 March 2008 (UTC)

Patents have hindered adoption of fractal compression

Michael Barnsley and associates own numerous patents in the United States and elsewhere on what this article describes. These patents have hindered widespread adoption of fractal compression.

If you live in United States or other country with software patents, you can do fractal compression for research purposes (patent law allows this) but for anything more than that, you would need to get permission from the patent owners. That usually means paying them a license fee to use the technology. —Preceding unsigned comment added by Editor5435 (talkcontribs) 02:15, 1 March 2008 (UTC)

POV/advert

The article needs independent sources to meet WP:NPOV. Much of the article is poorly sourced and comes across as an ad. --Ronz (talk) 19:24, 27 February 2008 (UTC)

I disagree, the sources point to facts. The previous article was full of inaccuracies and biased against fractal compression in general. It made it seem as if fractal compression was a complete failure in practical usage.--Editor5435 (talk) 19:33, 27 February 2008 (UTC)
WP:V: "The threshold for inclusion in Wikipedia is verifiability, not truth." --Ronz (talk) 19:49, 27 February 2008 (UTC)
Readers can verify the links by clicking on them. If a link points to Microsoft's list of supported codecs and another link to a specific codec download what possible further verification is required? Perhaps this is why many Wiki articles are hopelessly lacking in information. Maybe some changes in policy are required so articles can contain important information that otherwise exists elsewhere on the Internet?--Editor5435 (talk) 19:58, 27 February 2008 (UTC)
The link to Microsoft is broken, the page is not available. And the link to download the "codec" leads only to a decoder, not to an encoder. Just one is useless, and its functionality cannot be verified. Spot (talk) 22:10, 29 February 2008 (UTC)
Microsoft moved the page, I repaired the link. If you can source a ClearVideo encoded file you can play it with the decoder. The encoder is a commercial product I believe.--Editor5435 (talk) 22:29, 29 February 2008 (UTC)
Thanks for fixing the link. I don't see anywhere that the encoder is available, and the only information i can find say that it was discontinued a long time ago. Spot (talk) 22:47, 29 February 2008 (UTC)
ClearVideo was purchased from Iterated by a company called Enxnet Inc.--Editor5435 (talk) 22:56, 29 February 2008 (UTC)
I just spoke with Enxnet by phone, and they said ClearVideo is no longer sold or supported. This product is defunct. Spot (talk) 21:26, 3 March 2008 (UTC)
That's interesting, it leaves one single company pursuing fractal video compression. I guess everyone will have to wait for verification of their progress as far as commercial apps go. Of course, researchers are free to carry on as long as they don't attempt to offer any products into the market without a proper license agreement from the patent holder.--Editor5435 (talk) 01:01, 4 March 2008 (UTC)
It's more evidence to counter your claimed "renewed interest". Spot (talk) 03:31, 4 March 2008 (UTC)
There is plenty of renewed interest here, what do you have to say about that, Spot?--Editor5435 (talk) 07:55, 4 March 2008 (UTC)
That reference violates WP:NPOV it is worthless. Spot (talk) 04:13, 5 March 2008 (UTC)
It violates nothing, this is the discussion page. As soon as third party verifications comes this reference will change the entire scope of the article. You should be aware that the case has not "been closed" 10 years ago on fractal compression as you claim which is entirely incorrect.--Editor5435 (talk) 06:37, 5 March 2008 (UTC)
It doesn't matter that this is the discussion page, TMMI has a financial interest in the contents of this page and so their web page can't be used as a source to draw conclusions here. When the claims can be verified we can adjust the page to reflect these claims. Until then they will be rolled back. Spot (talk) 06:46, 5 March 2008 (UTC)
Feel free to list any other companies developing fractal video or still image compression and provide any references to what they are claiming. If they too can be verified then the main article will include them as well.--Editor5435 (talk) 06:59, 5 March 2008 (UTC)
TMMI's claims cannot be verified, and nobody but them and you is making any such claims. Spot (talk) 07:14, 5 March 2008 (UTC)
Spot, why don't you listen to what I'm saying? They issued a news release about their activities involving fractal compression. I am discussing it here. As soon as I can obtain verified information I will add it to the main article. Until then feel free to post references in this discussion for any other companies undertaking similar efforts.--Editor5435 (talk) 08:43, 5 March 2008 (UTC)

Edit-warring

Can everyone take a break from the reverts and edit-warring? How about instead of reverting, discuss the specifici issues on this talk page, marking portions of the article as necessary with tags such as those in Wikipedia:Template_messages/Disputes#For_inline_article_placement? --Ronz (talk) 19:28, 27 February 2008 (UTC)

I have been trying to discuss, but Spot keeps wiping out all of the new information added.--Editor5435 (talk) 19:33, 27 February 2008 (UTC)
Almost all of the new information is not appropriate for this entry, and the sourcing was not NPOV. It was better for the article to be slightly out of date than to include a lot of misinformation. It takes two to tango... Spot (talk) 22:02, 29 February 2008 (UTC)
There is no misinformation, unlike your article that made it appear fractal compression is completely hopeless, not exactly the kind of information that is beneficial to Wikipedia readers. Your agenda here is obvious and will not be tolerated.--Editor5435 (talk) 22:40, 29 February 2008 (UTC)

Some references, hopefully helpful

Reading through the Talk page, it seems there have been some problems finding reputable sources for this article. Right now, it appears to be largely sourced to company press releases, and (not surprisingly) as a result the article reads a bit like an industry newsletter rather than an encyclopedia article.

I don't claim to have any expertise in fractal compression, but I do have access to a major university library, so I did a search for recent articles on this subject. There were several from the past 5 years or so (although surprisingly few compared to the number of papers on other areas of image processing), and the introductions of the articles were all fairly similar. They all basically say that fractal image compression is a interesting technique with high compression ratios and fast decoding, but has been hampered by extremely slow encoding speeds compared to other compression techniques, and that its use has mainly been limited to offline applications as a result, although a lot of research has been (and continues to be) devoted to finding faster compression schemes.

Here is a representative sample of the papers I found, with quotes from their introductions. Keep in mind that they are papers on fractal image compression, so there is a natural bias towards presenting this as a useful technique.

The idea of the fractal image compression for image compression was originally introduced by M. F. Barnsley and S. Demko [1]. The first practical fractal image compression scheme was proposed by A. E. Jacquin [2], E. W. Jacobs, Y. Fisher and R. D. Boss [3], which utilized block-based transformations and an exhaustive search strategy. Their approach was an improved version of the system patented by Barnsley [4,5]. The main disadvantage of fractal image compression is the computation complexity, i.e., the encoding speed. However, it provides high compression ratio and fair image quality and thus attracts many off-line applications.
Even though fractal image compression would result in high compression ratio and good quality of the reconstructed image, it rarely used due to its time consuming coding procedure.
Fractal compression is appealing due to a number of reasons: 1- As the image increases in size the size of the compressed file stays constant. 2- This method does not require a code-book unlike the JPEG method. 3- Compression ratio is high. 4- Decoding stage of the algorithm is independent of the to-be-reconstructed image. 5- In the decoding stage the image is reconstructed quickly through a number of recursive operations. 6- The reconstructed image has an adequate quality. The main drawback of this method is its slow coding process due to the large number of compares between range blocks and domains. To reduce this time consuming part of the algorithm, a number methods have been proposed which either use classification [5] or try to confine the search process in a small area [6].
Fractal image compression is one of the most notable techniques because it opens up a refreshing new view of image coding. Since the practical fractal encoding algorithm was first proposed by Jacquin [1], fractal image compression has achieved obvious advances in the past decade [2-10].
Finding a contractive transformation with a given point as its fixed point is called the inverse problem in fractal image compression, and is considerably difficult and challenging.
The simplicity of the decoder is one of the most notable features for fractal image compression.
While it is a very promising technique capable of achieving good image quality at a high compression ratio, its main drawback is the expensive computational cost in the encoding process.
In spite of the manifold advantages offered by fractal compression, such as high decompression speed, high bit rate, and resolution independence, the greatest disadvantage is the high computational cost of the coding phase. At present, fractal coding cannot compete with other techniques (wavelets, etc.) if compression per se is involved. However, most of the interest in fractal coding is due to its side applications in fields such as image database indexing [11] and face recognition [13]. These applications both utilize some sort of coding, and they can reach a good discriminating power even in the absence of high PSNR from the coding module. The latter is often the computational bottleneck, so it is desirable to make it as efficient as possible. In the case of fractal coding, we should try to reduce the search time, because full exhaustive search is prohibitively costly.

Note that several of the papers I found credit Jacquin for the first practical algorithm. Note also that the papers proposing "fast" encoding schemes mean "fast" compared to previous fractal compression schemes, not fast compared to competing non-fractal compression.

I hope this is helpful. —Steven G. Johnson (talk) 03:40, 1 March 2008 (UTC)

Excellent references, the one common problem they all point to is the computational intensive encoding process which is why fractal compression had limited market penetration during the 1990's. With today's multi core 64 bit hardware there is renewed hope for this technology.--Editor5435 (talk) 03:56, 1 March 2008 (UTC)
That may be, but the above references are not from the 1990s. I can't find any reference (including papers from the last couple of years) even suggesting that the encoding time is no longer an obstacle thanks to faster computers—they all emphasize the need for dramatically better algorithms, or alternatively for offline applications where encoding time is not such an issue. Wikipedia, by policy, needs to stick to what information can be found in published, reputable sources. —Steven G. Johnson (talk) 05:06, 1 March 2008 (UTC)
Here is a reference from 2005 about using faster hardware to achieve realtime fractal compression: Towards Real Time Fractal Image Compression Using Graphics Hardware on page 723.--Editor5435 (talk) 16:33, 1 March 2008 (UTC)
Here's the full paper. He reports compressing a single 256x256 gray-scale image in 1 second. That's a long, long way from real-time. He doesn't report any error/quality numbers. Spot (talk) 17:32, 1 March 2008 (UTC)

Here's a key phrase from the above abstract: "At present, fractal coding cannot compete with other techniques (wavelets, etc.) if compression per se is involved. However, most of the interest in fractal coding is due to its side applications in fields such as image database indexing [11] and face recognition [13]". This is why the Waterloo group changed their name (see above). Another "side application" is uprezzing, which we have also discussed above and is not compression. Spot (talk) 17:44, 1 March 2008 (UTC)

Spot, many of the above references claim fractal compression yields good results, however, the computational bottleneck has always been a problem. My reference "Towards Real Time Fractal Image Compression Using Graphics Hardware" points to a possible solution. As for your highly regarded Waterloo group they do see to be biased against Michael Barnsley and Iterated Systems Inc., there is a possibility this is because none of Waterloo's fractal compression research could ever result in commercial applications, at least until the Iterated's patents expire.--Editor5435 (talk) 18:30, 1 March 2008 (UTC)
I agree with Spot: compressing 256x256 grayscale in 1 second (with unspecified quality) may well represent progress, but it is not a solution in the sense of being close to competitive. Every one of the papers that I cited reports something that they claim is an improvement, but improvement is a very different thing from getting to the point of being competitive (for compression). Note that the paper you are relying on is titled "towards real-time..."; the authors know that they cannot claim "real-time". (Aside: Spot, the quote is from the body of the paper, not the abstract.) —Steven G. Johnson (talk) 17:46, 2 March 2008 (UTC)
Yes, "towards real-time..." is what I said, the paper claimed the experimental results showed the GPU version completed the test in 1 second compared to 280 seconds for the CPU version. The reason I posted the reference is to show that research into fractal compression is continuing and GPU is a "possible solution" for achieving real time fractal compression, I never made any claims that its possible right now. I will try to locate references to recent non real-time encoding results to see how much improvement has been made since the 1990's.--Editor5435 (talk) 18:46, 2 March 2008 (UTC)
For discussion purposes I have located a reference to The TILE64™ processor family (iMesh™ architecture) being evaluated for embedded realtime fractal encoding solutions which supports the paper suggesting a non standard CPU solution. 64 tile processor cores on a single board compared to the single GPU core used in the 256x256 grayscale test looks very interesting. I will keep searching through the 100,000 Google results on fractal compression.--Editor5435 (talk) 19:18, 2 March 2008 (UTC)
Fractal compression not competitive if massive hardware is required to match what standard techniques have been capable of for a long time. Merely riding the curve of hardware improvement does not constitute progress. Spot (talk) 04:07, 4 March 2008 (UTC)
Spot, what are the hardware requirement to encode Windows Media in realtime for example? What compression ratios are you talking about? I am not aware of any other codecs capable of 250:1 compression of 60fps in realtime, especially on non "massive hardware" hardware. Also, you are ignoring the fact that ever increasing resolutions require an ever increasing performance in computer hardware to keep pace with the exponential growth in data streams.
i.e.
640x480=0.31 megapixels
1280x960=1.23 megapixels
1920x1080=2.07 megapixels
3840x2160=8.28 megapixels
Your arguments are just plain ridiculous.--Editor5435 (talk) 23:12, 4 March 2008 (UTC)
Compressing video in MPEG or other popular format still takes a lot of processing power to do in real-time. In the commercial arena, expensive special-purpose hardware is used. FIF compression, algorithmically, is computational much more intensive than current compression algorithms, and thus would take more expensive hardware, and conceivably more of it. However, contrary to what Spot has suggested, it does provide better quality and higher compression ratios than what standard techniques are capable of. This could pay off in applications where bandwidth and quality are at a premium, and there are a high number of views per compression. Such applications are not few in number, and, as Editor5435 pointed out, are becoming more prevalent.
In any case, the topic is certainly debatable, but we can't reliably predict when and what new technologies commercial industries will mass-market and how well they'll be taken up. Furthermore, info about the state-of-the-art should be in a section about the state-of-the-art, and by its very nature, is likely to be plagued with items that border on WP:OR. I think it's more important right now to focus on expanding the article on more rudimentary questions like "what is fractal compression?" and "how does it work?" before moving on to more esoteric matters. Kevin Baastalk 20:36, 10 March 2008 (UTC)
Regarding the competitiveness debate, it looks like fractal compression has already found a niche market: Genuine Fractals From the article: "Genuine Fractals is still an industry standard for professional image resizing and is used by graphics professionals worldwide." Kevin Baastalk 16:49, 15 March 2008 (UTC)

Here's a good reference Spot (talk) 22:29, 22 March 2008 (UTC). full paper. Brendt Wohlberg and Gerhard de Jager, "A review of the fractal image coding literature", IEEE Transactions on Image Processing, vol. 8, no. 12, pp. 1716--1729, Dec 1999. abstract:

Fractal image compression is a relatively recent technique based on the representation of an image by a contractive transform, on the space of images, for which the fixed point is close to the original image. This broad principle encompasses a very wide variety of coding schemes, many of which have been explored in the rapidly growing body of published research. While certain theoretical aspects of this representation are well established, relatively little attention has been given to the construction of a coherent underlying image model which would justify its use. Most purely fractal-based schemes are not competitive with the current state of the art, but hybrid schemes incorporating fractal compression and alternative techniques have achieved considerably greater success. This review represents a survey of the most significant advances, both practical and theoretical, since the publication in 1990 of Jacquin's original fractal coding scheme.
Great, your cherry-picking found you a reference that's almost ten years old! When pit up against all of the recent references above, and the fact that there is an actual competitive produce, it's hard to imagine someone actually believing what the reference you provided said. It's like someone looking up at the sky, seeing blue, then reading the words "the sky is green", and deducing that the sky is green, not blue. Kevin Baastalk 14:41, 23 March 2008 (UTC)

RfC: TMMI and Editor5435

A couple of editors have engaged in a long and so far fruitless attempt to dissuade Editor5435 from his claims about Fractal Compression, TMMI, and uprezzing. Soon I plan on rewriting this article to account for what we have done here, and would appreciate your input now, so that we get the best result. Thank you. Spot (talk) 04:33, 4 March 2008 (UTC)

removed the RFC template so the bot will find the one below.

I plan on including all relevant historical and current information since Spot has a history of deleting such facts and giving the article an overly negative bias.--Editor5435 (talk) 09:37, 4 March 2008 (UTC)

Spot's integrity is highly questionable, he has been editing his own Wikipedia pages in blatant violation of NPOV policy. The pages in question are Scott Draves and Fractal Flames. I am by no means an expert on dealing with such obvious violations of Wikipedia's policy so I ask for assistance in putting a stop to his blatant self promotion and hawking of wares.--Editor5435 (talk) 03:27, 7 March 2008 (UTC)

After having read much discussion on this page, I have come to the conclusion that Editor5435's arguments are among the most rational and logically-sound of those made here. It's a pity to see so much of it fall upon deaf ears. I applaud and encourage his efforts, and ask other editors to consider his arguments more carefully. Kevin Baastalk 18:07, 10 March 2008 (UTC)
I have interspersed a few responses to some of the topics of discussion, which can be found by using the edit history. Kevin Baastalk
RE: TMMI, I don't even see why this was brought up in an RFC, as it's not currently mentioned in the article and nobody is claiming that it should be. It seems rather spurious. Kevin Baastalk 19:17, 10 March 2008 (UTC)
It was brought up because at the time the RFC was made the article was riddled with references and links to TMMI. Since then we have made some progress in cleaning it up, but we still have a long way to go. Spot (talk) 17:45, 18 March 2008 (UTC)
What do you mean by "but we still have a long way to go"? The article does not mention TMMI. What are you babbling about now? At least your false and misleading information has been removed, it was disgraceful that you included such nonsense in the first place. By the way, when third party verification is available of TMMI's latest development in fractal compression that information will be added to the article and there will be nothing you can do to prevent its inclusion, nothing!--Editor5435 (talk) 18:12, 18 March 2008 (UTC)

Misinformation?

A couple of editors have repaired Spot's frequent blatant vandalism to the article. Anything is better than his false and misleading information that fractal compression does not work without human intervention. Spot's own reference contradicts his claims that fractal compression "doesn't work". Reference: "Fractal compression methods were actually quite competitive until the appearance of powerful wavelet-based methods and, subsequently, context-based coders." Again, how could this technology possibly have been "quite competitive" in it didn't work? Spot's lies have caught up with him. Now with much improved hardware the previous incumberence of this method of compression is becoming less of an issue. Spot's efforts to hide the truth will ultimately fail.--Editor5435 (talk) 08:41, 4 March 2008 (UTC)

Spot, what is your agenda against TMMI being the only company actively developing fractal video compression and the technology in general? For your information fractal image compression is heavily patented. It just so happens TMMI/MCI and Iterated Systems Inc. co-developed fractal video compression in the 1990's and TMMI is licensed to use Iterated patents in its source code. While research is permitted by unlicensed third parties under patent law commercial software is not. This has most certainly created a negative attitude within the software industry towards this specific form of compression. Are you a disgruntled ex fractal compression researcher? You can not accept the fact progress is being made with this technology. I only hope third party verifications comes quickly and then yes, the entire article on fractal compression will have to be rewritten will current factual information and you will be exposed for the fraud you are. For discussion purposes you can start educating yourself about the latest examples found here--Editor5435 (talk) 09:58, 4 March 2008 (UTC)

Your enthusiasm to call Spot a fraud is a blatant violation of WP:NPA, and obscures any weight your arguments may have. Spot does not have an agenda to vandalize or spread misinformation - he is protecting this page as a reasonable science-minded individual should. ErikReckase (talk) 05:02, 5 March 2008 (UTC)
I have made no such violation, Spot has received several warnings of vandalism so therefore my statements are valid. Please ensure your information is correct before accusing anyone of something.--Editor5435 (talk) 21:03, 5 March 2008 (UTC)
The fact remains Spot repeatably deleted relevant information and posted false and misleading information about fractal compression. Why does he keep insisting human intervention is required for fractal compression to work and this technology had no commercial success? Such statements are false and destroy the integrity of this Wiki article --Editor5435 (talk) 06:33, 5 March 2008 (UTC)
My agenda is to protect the Wikipedia from unverified claims by a company about their products. I am not a disgruntled compression researcher. My user page links to my real home page that includes my real name and biography, which includes a PhD in Computer Science and research in computer graphics. I am an expert in fractals (for art not for compression) and have no conflict of interest with this content. You are welcome to similarly disclose your background. Spot (talk) 07:20, 5 March 2008 (UTC)
I would like to note for the record that Editor5435 is a WP:SPA as is the previous advocate of identical positions, Technodo, and in fact one vanished at the same time as the other appeared. I am going to see if I can get their IP#s and further investigate this. Spot (talk) 08:00, 5 March 2008 (UTC)
Spot, what's to investigate? I lost my password so I created a new account. I have never submitted anything as Technodo since I created my new account. What's wrong with you? Your attempts to tarnish fractal compression will ultimately fail because the facts expose your claims as invalid. So you admit you're "expert in fractals" but your claims about fractal compression have been proven wrong and your editing and vandalism warnings clearly point to an agenda that is biased against fractal compression. In my opinion your behavior is motivated by jealousy of those who have succeeded where you have failed, but in any case I hope you have learned from the edit and vandalism warnings you have received and come to your senses to discontinue such activity in the future.--Editor5435 (talk) 09:39, 5 March 2008 (UTC)
Editor5435, you seem to be the one who is blinded by enthusiasm here, and whose claims don't appear to hold up to the statements in reputable sources. As an outside observer, it's very hard to be sympathetic to your position at this point, nor with the vast amount of time you've taken up by your endless argument on this page against (apparently) all other editors. Wikipedia is not the place for advocacy, for promotion of ideas that you feel are deserving and will be proven in time; wait until fractal compression is proven to be competitive (for compression) as reflected in published sources (not just company press releases), and the article will be updated. It seems clear that you are unable to view this subject objectively, nor do you appear to have any real expertise in the subject, so it would probably be better if you could leave this article to others. —Steven G. Johnson (talk) 16:08, 5 March 2008 (UTC)
Stevenj, I have added many references to this article that was previously lacking. Also, since when is "proven to be competitive" a requirement for content of a Wiki article? It appears the previous editors knew absolutely nothing about fractal compression claiming it didn't work without human intervention, that is had no commercial success, that no research or progress has happened during the last 10 year. Are those the "experts" that should be contributing to this article? As for the company's claims I have not included any of them in the main article other than a brief mention of renewed activity. When verification of their claims is available the article can be rewritten. I am viewing this subject objectively, however, I will not stand by while a certain individual spreads false and misleading information. I see he has already received multiple edit and vandalism warnings regarding this article. Since nobody seems interested in "improving" this article I might as well do it until others have additional facts to contribute. There do seem to be very few people who understand fractal compression which most likely explains why the previous shabby article existed for as long as it did with no further contributions.--Editor5435 (talk) 17:28, 5 March 2008 (UTC)

Corporate Vanity

The below is from Brad Patrick of the Wikimedia Foundation. I believe it applies to this situation.

Dear Community:

The volume of corporate vanity/vandalism which is showing up on Wikipedia is overwhelming. .... However, I am issuing a call to arms to the community to act in a much more draconian fashion in response to corporate self-editing and vanity page creation. This is simply out of hand, and we need your help.

.... Ban users who promulgate such garbage for a significant period of time. They need to be encouraged to avoid the temptation to recreate their article, thereby raising the level of damage and wasted time they incur.

.... We are losing the battle for encyclopedic content in favor of people intent on hijacking Wikipedia for their own memes. This scourge is a serious waste of time and energy. We must put a stop to this now.

Thank you for your help.

-Brad Patrick User:BradPatrick Wikimedia Foundation, Inc.

quoted here by Spot (talk) 18:52, 5 March 2008 (UTC)

You are the one who has received vandalism warnings. What "vanity" are you referring to? Please explain.--Editor5435 (talk) 19:09, 5 March 2008 (UTC)
Please be aware of TMMI's history of problems: [7] (ditto here: [8]). Finally consider the stock activity: [9] that corresponds to the beginning of Editor5435/Technodo's editing of the page. Spot (talk) 16:37, 7 March 2008 (UTC)
Spot, stop spreading your filthy lies. My participation on Wikipedia started after the company issued news about TruDef, based on its old SoftVideo fractal compression codec from the 1990's. What in hell do corporate matters and the trading of stock have to do with fractal compression? Besides, don't you have the slightest clue that the company's news releases just might have corresponded to a blip on its stock chart? You have already been proven a despicable hypocrite devoid of all integrity by writing your own Wikipedia articles Scott Draves and Fractal Flames in blatant violation of NPOV policy. You have received several warnings of Wikipedia vandalism over you tampering with the fractal compression article. Stop with your despicable behavior.--Editor5435 (talk) 17:03, 7 March 2008 (UTC)
It's not lying to point out TMMI's history of problems, or the coincidence of your editing and the stock motion. As for the pages about me and my work, I didn't create or write either of them (though I have made some minor edits in accordance with WP policy i can fix simple factual errors). Those pages have been stable for years, and have been edited by lots of different people. If you want to demonstrate your neutrality, why don't you reveal your true identity? Spot (talk) 18:30, 7 March 2008 (UTC)
The article no longer mentions TruDef or Total Multimedia Inc., Spot why do you persist in this harassment? You should be banned from Wikipedia, the community doesn't need unscrupulous individuals such as yourself.--Editor5435 (talk) 17:11, 7 March 2008 (UTC)


New Organization

I hate to weigh in here for fear of attracting personal attacks, but I have to say the article currently reads like an advert to me.Though ironically for a company that no longer exists (Interated, sounds like TMMI was removed), so it's just odd sounding. Iterated systems is mentioned currently eight times in the body of the article, which is unnecessary. The historical detail would be great for an Iterated Systems article but they may not be notable enough to warrant a separate article: not sure. The company detail sounds like it's included here just to make a point about the amount and timing of efforts to develop the technology. But that framing wasn't obvious to me at first read, I'd really like some structure added to this article so the lead paragraph provides a summary and then the subsections help me follow what's going on. How about for subsections "method", "history", "patents", and "comparison with alternatives"? The method section would go into detail on how images are compressed then rendered (and have further subsections on each aspect); history on the invention and development timeline; the patents if discussion of IP and its impact is important; and comparison to alternatives would then be for neutral commentary on why or why not it is used or others are used instead - assuming we can find neutral, third party comparisons. I just found this one but it's only at a high level. - Owlmonkey (talk) 03:42, 13 March 2008 (UTC)

I believe Iterated is frequently mentioned since it is the patent holder and the only company that actually developed the technology, besides one if its partners. If you remove the history there wouldn't be much left to the article besides what research papers discuss. I agree breaking the article into sections as you describe lays the foundation for further expansion, which in my opinion there will be a significant amount of this year.--Editor5435 (talk) 04:40, 13 March 2008 (UTC)
Does Iterated still exist as a separate entity? I'm assuming Interwoven owns all the patents since they purchased MediaBin which was Iterated before that. Or were the patents separated out, do you know? - Owlmonkey (talk) 08:13, 13 March 2008 (UTC)
Correct, Interwoven now owns all of the patents. The historical events in the article refer to Iterated which was the correct name at the time.--Editor5435 (talk) 10:02, 13 March 2008 (UTC)
I agree with owlmonkey, about how it should be organized by sections instead of chronologically. The main section should be the method one. This is how pages like Jpeg and MPEG-2 Discrete_wavelet_transform Discrete_cosine_transform and H.261. We probably won't be able to get this much detail, but I think to have a page like this for Fractal Compression should be our goal. Spot (talk) 10:33, 13 March 2008 (UTC)
A section on fractal scaling can also be included since it is an inherent function of fractal compression despite what some other less informed editors claim. Hopefully the blatant vandalism that previously occurred won't happen again.--Editor5435 (talk) 17:25, 13 March 2008 (UTC)

Fractal scaling / interpolations / resolution independence debate (again)

That's right, and this section should be clear that fractal scaling is an interpolation method, not compression, although it is sometimes erroneously used to claim large compression ratios. The faq has a warning like this, and since clearly there's a lot of confusion on this issue, we should make it clear as well. Spot (talk) 20:05, 13 March 2008 (UTC)
Fractal zooming is not an interpolation method. If this were so, then fractals themselves are an interpolation method. Any display of a fractal is a "zooming", so to speak. - it is a specification of a "scale". And fractals are scale-invariant (i.e. invariant under scale transformations, because they are made out of affine transforms). That is the whole point. Fractal-encoded images are not made out of pixels. A loose analogy would be to vector-graphics. When you scale a vector graphic you're not "interpolating", any more than you would be if you displayed it at 1:1. Same goes for fractals. Scale invariance is "the first thing about fractals", so to speak. If you can't understand how fractal zooming is not interpolation you need to learn a lot more about fractals (as well as interpolation) before you start debating it. Kevin Baas original quote.--Editor5435 (talk) 20:16, 13 March 2008 (UTC)
You are right fractals are scale-free and rendering them at high resolution would not be properly called interpolation. But fractal compression is an operation applied to a bitmapped input image. After decompressing at a higher resolution than the input, you end up with more pixels than the input, and these new pixels are placed between and interpolate the input pixels. Just like nearest-neighbor, bilinear, and bicubic are all kinds of interpolation, so is the "fractal scaling" as implemented by products like Genuine Fractal. As you can see on that page, they directly compare themselves with Bicubic interpolation. You appear to be claiming that not only is the FAQ wrong, but the product documentation for Genuine Fractal is wrong. Spot (talk) 08:27, 14 March 2008 (UTC)
(continued at #LostPixels)
To put it in another way - to start with the incorrect assumption that the concept of "inbetween" two or more input "pixels" is valid in fractal decompression, let us suppose for a moment that we are "interpolating" a pixel inbetween two pixels in the original source. The function we use to find the color value for that pixel, rather than using the "pixels" that it is "in-between", uses the entire image to calculate the color value of that pixel, and thus is not, by any stretch of the definition, "interpolation". This is mixing two different paradigms and like i said, may consequently have some technical flaws, but the point is valid. Kevin Baastalk 16:19, 14 March 2008 (UTC)
This is totally incorrect. The term "interpolation," as used in mathematics for over 200 years, includes interpolation schemes that use the entire dataset rather than the nearest-neighbor points. A classic example is trigonometric interpolation (which uses the discrete Fourier transform of the entire data set for every interpolated point), and was pioneered by Gauss in 1805 (in a tract entitled Theoria interpolationis methodo novo tractata, emphasis added). Many more recent examples can be found (e.g. google "Chebyshev interpolation" or "wavelet interpolation"). In short, "interpolation" just means generating new data points in between old data points, regardless of how much of the old data you use. —Steven G. Johnson (talk) 04:15, 15 March 2008 (UTC) [— an even older example is Lagrange interpolation. —Steven G. Johnson (talk) 04:31, 15 March 2008 (UTC)]
Alright, I'll concede that. I'm beginning to think that we're talking about two different things, which is why we're using two different names. On the one hand "fractal scaling" is the innate property of an ifs that it is based on recursive affine transformations, and thus has essentially infinite resolution - similarly to how a line does. Whereas "fractal interpolation" or "fractal based interpolation" is using this property of fractals to blow-up a digital image with minimal signal-to-noise-ratio loss. Kevin Baastalk 18:01, 15 March 2008 (UTC)
Yes we are talking about different things. Unfortunately what you're talking about isn't what this article is about. This article isn't about infinitely scalable fractals, the pages about that are Fractal and Iterated_Function_System. This article is about Fractal Compression which is an operation on raster images. It's easy for people to be confused by this, which is why I've proposed putting a warning into this section. Finally no, fractal interpolation isn't about minimizing SNR. Bicubic interpolation introduces no noise whatsoever. The concept of noise does not apply since the data between the samples we have in the source image is unknown. The point of fractal interpolation is to synthesize samples in a way that's plausible for photographs of the real world, and in a way that doesn't distract the human eye. Spot (talk) 20:04, 15 March 2008 (UTC)
re: "Yes we are talking about different things. Unfortunately what you're talking about isn't what this article is about. This article isn't about infinitely scalable fractals,": When an image is encoded into .fif, it is encoded into a set of infinitely scalable fractals. These are the primitives of the compresson format. Not talking about them in the article would be like not talking about vectors in the article on vector graphics. Kevin Baastalk 13:33, 16 March 2008 (UTC)
I'm not saying we shouldn't talk about how fif files can be displayed at any resolution, we should. I'm saying we should make clear that this ability does not justify inflating compression ratios. Spot (talk) 23:46, 16 March 2008 (UTC)
That's odd because nobody has suggested that it justifies inflating compression ratios. And I think it kinda goes without saying that it doesn't. I don't think a reader of the article would be confused about that. Thus, I don't really think that the fact that it does not justify inflating compression ratios is notable - since it's addressing a problem that doesn't exist, it would be begging the question. Kevin Baastalk 16:17, 17 March 2008 (UTC)
Actually that's exactly what Editor5435 has claimed numerous times on this page, including here and here. The FAQ deemed it worth to include a section titled "Reader Beware". Have you read it? We have an authoritative reference that seems to think it's notable as well as confusion on this page about it, so clearly it should be included. Spot (talk) 18:03, 18 March 2008 (UTC)
No Spot, I am claiming for MPEG or JPEG to achieve similar viewing quality of high resolution images compared to compressed file size of fractal compression it would require 1000:1. I am comparing fractal upscaling vs large resolution wavelet compression. You are obviously having great difficulty with this simple concept.--Editor5435 (talk) 18:57, 18 March 2008 (UTC)
That is not what he is saying in the first. There he is just saying that a fractally-encoded image up-samples a lot better than any other existing encoding format. Which is true. Up-sampling a fractal image does not result in pixelation, loss of sharpness, or unseemly artifacts, as in other formats. And this comes "free"; you do not have to do anything special (like post-processing) to get it, just decompress it to the resolution you want and viola! This property is unique to fractals, and is a direct consequence of the fact that in a .fif, the image is stored as geometric primitives that are resolution-independent, namely, "fractals", hence the name "fractal image format".
The second diff you cited, I see what you're saying. I don't know what "Fractal scaling achieves the exact same results as compression." means. And If 10kb of image data is decoded into a 1MB image, it represents a de-compression ratio of 100:1. A compression ratio of 100:1 is represented by a 1MB image being encoded into 10kb of image data. You can't interchange the two. Besides that, everything else he said in that diff is pretty accurate. Kevin Baastalk 18:52, 18 March 2008 (UTC)
re: "This article is about Fractal Compression which is an operation on raster images. It's easy for people to be confused by this,..." I don't see how people could possibly be confused by this. It's inherently obvious that when you're talking about compressing a digital image, you're talking about an operation on raster images. Not knowing this amounts to not knowing that a computer screen is composed of pixels. Kevin Baastalk 13:33, 16 March 2008 (UTC)
The confusion isn't about that pixels are involved, the confusion is about compression ratios. There is apparently confusion about whether or not Genuine Fractal performs interpolation when it scales images up (the answer is yes).Spot (talk) 23:46, 16 March 2008 (UTC)
The debate here is about what that is most properly called (and, as i mentioned earlier re "two different things", possibly what the other party is talking about, too.). Whether it meets the technical definition of "interpolation" or not. I have suggested "re-sampling" as a more general and less controversial substitute. I don't believe there's any confusion that when a low-res image is converted to a higher-res image (or vice-versa), that is "re-sampling". And I have made the argument that whether you up-sample or down-sample via .fif compression & decompression, you're in either running through the same alogorithm, so if an donly if up-sampling via this route is "interpolation", then down-sampling is "interpolation". down-sampling via this route is not interpolation. Thus, up-sampling via this route is not interpolation. Any case, I suggest "re-sampling" (or up-sampling, where one prefers to be more specific) as an alternative that is incontrovertibly technically correct. Kevin Baastalk 16:17, 17 March 2008 (UTC)
The debate was about whether or not fractal scaling of input images is interpolation, and the conclusion was that it is. The fact that you initially didn't understand that makes it all the more imperative that the word be used in the article, since people like you will clearly benefit from it. Spot (talk) 18:15, 18 March 2008 (UTC)
As I said, the disagreement we are having here seems to stem from differences in the words we use to describe things, not from differences in how we conceive them those things. I think that shifting our paradigm respectively will help us understand each other better, and thereby sooner come to an agreement on article content that everyone will understand clearly and unambiguously. Kevin Baastalk 21:00, 18 March 2008 (UTC)
To compare SNRs image up-sampling methods, you first down-sample the image, then you up-sample it back to the original resolution with each different method, and compare the resulting image against the original. There have been papers written comparing SNRs of different methods and they show that fractal compression and decompression produces a much lower SNR than bicubic interpolation. Kevin Baastalk 13:33, 16 March 2008 (UTC)
If you down-sample first then yes you can use SNR. It would be great if you could link to such a paper. Spot (talk) 23:46, 16 March 2008 (UTC)
I had found them earlier via some googling. I might look for them again later. But in any case the term "signal-to-noise ratio" is meaningless when you don't have a control to compare against, so to say that the snr of bicubic interpolation is 100% is to misuse the term "snr". and one can logically deduce from this that whenever one speaks of the snr of an upsampling method, they first downsample from a control, because it's the only sense that "snr" is meaningful in this context. Kevin Baastalk 16:36, 17 March 2008 (UTC)
I can't seem to find the results I found earlier. But I have found multiple sources that say fractal compression (without re-sampling, i.e. no resolution changes) gives better snr than jpeg at high compression ratios. a good example with images: [10] from the search i realized that there are a lot of different fractal compression methods that produce differing quality results, and it's difficult to know what's the best, so it's hard to compare. this page seems to be fairly prominent: [11] . it call the property(ies) of fractal compression that we're discussing "resolution independance" and says that it's unique to fractal compression. and i found some good sources for comparing image up-sampling methods: [12] [13] [14] . You can just visually tell that fractal re-sampling gives a higher SNR than the other methods and the fact that it's the up-sampling method of choice certainly shows that the experts think so. Unfortunately, I can't seem to find the paper anymore. Kevin Baastalk 18:58, 17 March 2008 (UTC)
Here's something: [15] . it's actually de-noising which is similiar to up-sampling - main difference is that the missing information is randomly distributed throughout the image, rather than uniformly distributed, which turns out to be equivalent in the limit. the paper compares de-noising via fractal compression & decompression with the "lee filter" for denoising, using four different images. The results show that after a certain noise threshold is exceeded, fractal compression & decompression outperforms a "lee filter" in image denoising. Kevin Baastalk 19:55, 17 March 2008 (UTC)
Thanks for conceding. So let's return to the statement I made that started this argument this section should be clear that fractal scaling is an interpolation method, not compression, although it is sometimes erroneously used to claim large compression ratios.. We now agree that this is a kind of interpolation. Do you agree with the rest of the statement, which is that computing "compression ratios" by fractal upscaling are erroneous? Would you object if I included a section like the one titled Reader Beware? Spot (talk) 20:11, 15 March 2008 (UTC)
What are you talking about? I conceded that some interpolation methods use the whole image as input for each pixel they are interpolating, not the whole argument. I think that was pretty obvious and I'm not sure if you're really just trying to be an a$$ or if you actually think that I conceded the whole argument. In any case, it's become apparent to me that I need to be very careful what I say to you lest you blow it way out of proportion and/or twist the meaning out of it until it bears no resemblance to what I said. Kevin Baastalk 13:33, 16 March 2008 (UTC)
You claimed that fractal scaling of images didn't involve interpolation because it used all the input pixels instead of just the two neighbors. Steven showed you that using all the input pixels does not disqualify a method from being called "interpolation", and you agreed. So I assumed you would conclude that it was correct to refer to fractal scaling of images as interpolation. If not, do you have a new reason? Spot (talk) 23:55, 16 March 2008 (UTC)
That was just one of the multiple arguments re "interpolation" I have made in this section. I don't feel I should have to re-state them when they're already available for you to read (and you should already have read them.) I don't know of anybody who's suggested the technique of computing "compression ratios" by fractal upscaling. That seems absurd and I don't know where you got that idea from. Kevin Baastalk 16:52, 17 March 2008 (UTC)
We have read and debunked all of your arguments. The FAQ says this is a kind of interpolation. You have yet to provide a reference that says it is not. Please respond to my question about including the section from the FAQ. Editor5435 has suggested exactly this, for example here and here. Spot (talk) 18:30, 18 March 2008 (UTC)
"We have read and debunked all of your arguments." without even knowing what those arguments are, apparently. Amazing. Kevin Baastalk 21:02, 18 March 2008 (UTC)
Yes, its amazing the levels certain editors will lower themselves to while making it appear their point of view is correct. It can be most frustrating!--Editor5435 (talk) 23:45, 18 March 2008 (UTC)
up-sampling is done via de-compression, not compression. it is a consequence of the resolution independence of the format. But when one's talking compression ratios, one's talking about getting it into the format, not out of. And at high compression ratios, .fif outperforms other formats in quality (psnr). .fif does especially well w/high-res, high bpp (color depth) images with lots of textures, (such as .tiffs from a digital camera). This makes sense, given that it preserves image quality better (relatively) at higher compression ratios. Kevin Baastalk 16:52, 17 March 2008 (UTC)
Spot is obviously having difficulty is grasping the concept that the whole point of fractal compression is to transform images into resolution independent assets. He, as well as others attempt to compare one aspect of fractal encoding to the limitations of other image compression methods. Here is quote from the forbidden website: "Fractal scaling allows for viewing of video up to 400% larger than its native resolution while maintaining high quality. To achieve the comparative results with pixel based compression schemes compression ratios would need to be 1000:1 or greater, which is not possible without the introduction of an unacceptable level of image artifacts." The end result is video displayed on a large resolution screen with better results compared to compressing MPEG video at the display resolution down to a source file of equal size. This aspect of fractal compression is essential to any discussion on the subject.--Editor5435 (talk) 17:56, 16 March 2008 (UTC)
This is a perfect example of why we need a warning. Spot (talk) 23:58, 16 March 2008 (UTC)
The fact remains to achieve similar results from MPEG you would need to have compression ratios of 1000:1 or greater. What part of this don't you understand?--Editor5435 (talk) 04:15, 17 March 2008 (UTC)
Sorry what fact are you talking about? MPEG does not provide for up-scaling. Which MPEG format do you mean? Similar to what? If you can say it clearly than we can assess it. Spot (talk) 05:04, 17 March 2008 (UTC)
I think the concept is beyond your rather limited comprehension of the subject!--Editor5435 (talk) 05:46, 17 March 2008 (UTC)
Put another way, "fractal scaling" is the fact that once you've transformed a bitmap into a fif, it loses it's association (excepting meta-data) with a given resolution ("resolution independence"), and what resolution you display it in is quite irrelevant as far as the data is concerned - theoretically it can be "scaled" infinitely and will still show the same level of "detail". (whether or not this "detail" was in the original)
Fractal interpolation then exploits fractal scaling/resolution independence by using fif compression/decompression as an interpolation kernel, just as one might use trigonometric functions as an interpolation kernel. It "interpolates" by transforming the image space to .fif, then back to .tiff/whatever at a higher resolution. (kinda like full-scene anti-aliasing) Kevin Baastalk 15:18, 15 March 2008 (UTC)
However, the same thing happens even if you don't increase the resolution; fractal compression is lossy, and in the process of transforming back and forth, the "original" pixels are just as "interpolated" as pixels in a scaled-up image. Same for scaled-down images. And you certainly can't call not resizing an image or even decreasing the resolution "interpolation". Perhaps "re-sampling" would be a more accurate term. Kevin Baastalk 18:01, 15 March 2008 (UTC)


The number of input pixels used is not an issue. Different interpolation methods use different numbers of input pixels at different distances from the target pixel, but they are still all forms of interpolation. What interpolation methods share is determining new samples based on a set of existing samples, that is creating images with more pixels, while preserving the content of the image. There is one paradigm here, that of information theory and computer science, and it is adequate to describe fractal scaling. Spot (talk) 02:37, 15 March 2008 (UTC)
See my response to steven j., above. Kevin Baastalk 15:32, 15 March 2008 (UTC)
You're using an iterated function system to generate the color values for the pixels, and iterated functions systems are not interpolation. Take, for example, the fractal fern:

generated from an IFS. To say that a feature of a leaflet (say, the one highlighted in red) has been interpolated amounts to saying that an equivalent part of the whole image (highlighted in cyan) has been interpolated. And the whole image is a scaled and translated part of the leaflet, so you're dealing with two different resolutions of the same feature. (actually infinitely many resolutions, by recursion.) It logically follows that either all of the image is interpolated, regardless of the pixel resolution of the resulting bitmap, or none of it is, regardless of the pixel resolution of the resulting bitmap. And as is elucidated in the above two paragraphs, neither does the process of decompression fit into the definition of interpolation, nor can it be reinterpreted to after-the-fact, by any stretch (or abuse) of logic. Kevin Baastalk 16:19, 14 March 2008 (UTC)
The fact that a company directly compares their method with Bicubic interpolation does not mean that the company does bicubic interpolation, or any kind of interpolation for that matter. They are comparing, presumably, because they want to show the consumer that the quality of their product is better than what they are comparing against. In this case, they are comparing fractal scaling to one of the best interpolation methods, and their results, I presume, show that fractal scaling produces higher quality images than interpolation does. Part of the reason for this is that fractal scaling is not interpolation. Kevin Baastalk 16:19, 14 March 2008 (UTC)
Well call it what you want interpolation or fractal scaling but the output of this has more pixels than the input, and these pixels are generated from the input pixels by some algorithm. Both bicubic and fractal are both algorithms in this category right? onOne's web page only makes sense as an apples to apples to comparison if this is so.Spot (talk) 02:37, 15 March 2008 (UTC)
I looked at the site. They're both methods used for digital image enlargement. That's the "apple". Conventionally, interpolation is used for digital image enlargement, so people in the graphics field are used to calling digital image enlargement "interpolation". As I understand it, mpeg can scale like this too, but that isn't called "interpolation". Same goes for vector graphics. That's why I think the naming here is based on the application, not the process (algorithm/math).
This isn't a debate about "naming". Now I understand why you introduced those google results. I just said fractal scaling is interpolation. I'm not proposing renaming the article or anything, i've just chosen a word to describe the subject of this article, as part of its explanation. The word is accurate and obviously illuminating and since you have agreed that it applies, let's use it! Spot (talk) 20:26, 15 March 2008 (UTC)
Sometimes I'm not so sure if you actually understand what I write. Let me make one thing clear to you: The property of fractals (such as i.f.s.) in which they scale indefinitely is not related to interpolation. It is an intrinsic property of the geometric primitive: just as a circle is axial-symmetric, so is a fractal scale-symmetric. Look at the fern. The leaflet on the fern is a copy of the whole fern. This leaflet is not made by interpolating any part of the image. It's made by taking the whole leaf, copying it, scaling it down, rotating it, and translating it. Kevin Baastalk 13:07, 16 March 2008 (UTC)
I'm talking about "fractal scaling" of images, not of fractals. It's an operation that takes a raster image as input and produces one as output with more pixels. It's a kind of interpolation. Spot (talk) 23:15, 16 March 2008 (UTC)
I think you mean fractal compression and decompression, or more generally re-sampling. That is how i would describe it. otherwise we're using the same tern to describe two different things. The google test below shows that the term "fractal scaling", at least on the internet, is much more commonly used the way i've defined it. but the point is there's two different processes under discussion here, both of which i think deserve space in the article. Kevin Baastalk 16:36, 17 March 2008 (UTC)
So what do you propose as the title of the section about how fractal compression can be used to scale up images? Spot (talk) 18:37, 18 March 2008 (UTC)
I think that's a good topic to discuss at length. I've given it some thought and what I've come up with so far is to make a section called "resolution independence" that describes this property of fractal-encoded images, and what it is not (e.g. that you can't see pores or atoms). I think the first thing such a section should do is make clear what this is and how it works. Then it should go on to discuss how it is used; it's applications. thus, there would be a sub-section on "image enlargement" or "re-sampling", or "scaling up images", or whatever. Not sure of the best name for the section. Kevin Baastalk 20:51, 18 March 2008 (UTC)
In any case, what gives fractal-encoded images this nice feature has nothing to do with interpolation proper (On might say it's a consequence of transform-coding compression, resulting in "realistic artifacts"), and it's an interesting feature so it probably deserves space in the article. Perhaps the specific application in genuine fractals could get a space, too. (Though I didn't find anywhere where it said if they actually use fractal compression, just that their algorithm was "fractal based".) Kevin Baastalk 13:53, 15 March 2008 (UTC)
When I encountered this quote I thought it particularly apt: "Photoshop’s bicubic interpolation method produces good results, but a side-by-side comparison with fractal technology is a bit like comparing apples and oranges, if you’ll excuse my use of a tired cliche." [16] It seems we're not the only ones to apply this cliche. :) Kevin Baastalk 16:21, 15 March 2008 (UTC)
Apparently they do use fractal compression: "After you have optimized your image file in Photoshop, establishing how the final image will appear, you save the image using the preferred Genuine Fractals file extension (FIF or STN). GF transforms the image into "resolution independent-assets" eliminating the relationship between pixels and resolution. The image becomes mathematically encoded as an algorithm and the pixels of the original raster image are replaced with a new file structure that stores the entire image and none of the pixels. When you open the image again, you can re-scale it to the desired size and the algorithm will generate new pixels while maintaining sharpness regardless of image size." Kevin Baastalk 16:21, 15 March 2008 (UTC)
The reason I included Genuine Fractals in the article is that its a commercial application capable of saving images in compressed FIF (Fractal Image Format), regardless of what the application does with images after fractal encoding. Some others editors ignored this fact and attempted to remove Genuine Fractals from the article because of the fractal scaling/compression argument.--Editor5435 (talk) 16:34, 15 March 2008 (UTC)


google test: "fractal scaling", "fractal interpolation", & "resolution independence"

  • "fractal interpolation"  : 9,270 [17]
  • "fractal scaling"  : 16,400 [18]
  • "resolution independence" : 74,800 [19]

Kevin Baastalk 17:01, 14 March 2008 (UTC)

What is this supposed to demonstrate? A lot of the hits for "resolution independence" have nothing to do with fractal compression or interpolation of bitmap images, e.g. vector graphics are resolution-independent. And a lot of the hits for "fractal scaling" appear to be about the scaling-law definition of fractals and fractal dimension, rather than fractal representations of bitmaps. —Steven G. Johnson (talk) 04:37, 15 March 2008 (UTC)
Ya, I just wanted to get an unbiased overview of how and how often the terms are used. Turns out rather inconclusive for this debate. Kevin Baastalk 13:53, 15 March 2008 (UTC)
How often the terms are used is totally irrelevant. Even if the term "fractal scaling" had been 10x more common than "fractal interpolation" that would not tell us whether or not fractal scaling was a kind of interpolation. Just like the lack of usage of the term "fractal image operation" does not tell us if fractal scaling is an image operation. Spot (talk) 19:44, 15 March 2008 (UTC)
Aha I think discovered our miscommunication (see above). Our use of the word "interpolation" isn't a decision about naming, it's a decision about description for clarification. Spot (talk) 22:16, 15 March 2008 (UTC)
What part of "I just wanted to get an unbiased overview of how and how often the terms are used. Turns out rather inconclusive for this debate" didn't you understand? Kevin Baastalk 12:59, 16 March 2008 (UTC)
Well then what were you hoping to demonstrate or conclude? Spot (talk) 23:03, 16 March 2008 (UTC)
I wasn't hoping to demonstrate or conclude anything. My aim was to discover. Kevin Baastalk 16:36, 17 March 2008 (UTC)


Eleven observations on the nature of fractal zooming

Normally I would not be inclined to wade into this controversy, but since I've been directly quoted by Eric Reckase as authoritative, let me add a few words. Not for the sake of getting the two sides to converge in agreement (this does not appear possible), but in response to the "request for comment on science".

As prelude, I wrote the comp.compression FAQ entry Introduction to Fractal Image Compression. During the mid 90s I published several research articles in the area of fractal compression, particularly on algorithmic and implementation aspects. I invented the "Fast Fractal image compression algorithm." When active in the field I had the privilege of collaborating with prominent mathematicians who developed much of the theoretical underpinnings of iterated function systems. I maintained the Waterloo Fractal Compression web site, posting the most comprehensive cross-comparison of results. Plus, I collated and updated the most complete bibliography available on the subject (reading every paper I could possibly get my hands on).

I haven't touched the Wikipedia entry, nor do I intend to.

I'll confine myself to the following questions, for they are the essence of this dispute. What is the nature of fractal zooming? Is it a form of interpolation?

In particular, I want to take up the following statements.

"Fractal zooming is traversing a fractal in the depth-dimension via

changing the scale parameter." - Kevin Baas

"Fractal zooming is not an interpolation method ... [it] is a property of fractals, such as ifs's and by extension, .fif's. It is not an

operation on an input image." - Kevin Baas

"Fractal scaling is a great interpolation method, but it shouldn't be

confused with compression." -- Erik Reckase

"Ifs are resolution independent but the input image is not." -- Spot

My opinion is this. Kevin Baas begins with a correct characterization (first quote) but draws the wrong conclusion (second quote). Erik Reckase and Spot are raising important relevant facts, and demonstrate proper understanding.

1) Baas is completely dropping context. During the compression phase the iterated function is fitted to the input image through least squares minimization. That's the whole idea, of course. The input image provides a reference scale, a unit of length. This point is critical. As a matter of fact, the compressed file contains the dimension of the original image in its header. If it is to be used for image compression, it has to. It did in my file format.

2) Scale-invariance means that the statistical properties of the object are identical at all scales. Self-similarity is a consequence of this. (Or the other way around, depending on how you prove your theorems.)

3) Interpolation (and its more challenging twin, extrapolation), is the process of hypothesizing spatial-temporal information beyond the immediately available evidence. It is based on the assumption that the generation process applies to both realms (the seen and the unseen), and that one may use patterns in the seen to reasonably guess the unseen. Put another way: what unseen data is most consistent with the known data? The answer depends on what model is assumed to operate for the underlying process. Fractals and splines - iterated and non-iterated polynomial functions - provide two different ways to go about it.

4) Fractals are one means of performing interpolation as I've defined it in (3), and it is absurd to disqualify them on the grounds that they are a particular family of functions. In practical application, they are attractive for clouds and other geometries that are self-similar across a wide range of scales. Fractals are not so appropriate for, say, a highly polished ceramic ball bearing. In such cases you are better served by scale-variant traditional geometry.

5) Fractal zooming is nothing more than evaluating the function at a higher resolution than the reference image. The process of re-sampling, including "uprezzing", is possible for any everywhere-defined function operating on a compact space, not just by fractals. (In the theory papers one doesn't travel long before encountering "assume the metric space X is Hausdorff.")

6) Digital interpolation amounts to sampling a function at a higher resolution than the original measurement. There are many ways of attempting this. It is not to be considered limited to performing arithmetic on nearest-neighbor pixels. Bass' argument boils down to "It's a function! Where are the pixels?! You can't interpolate if there are no pixels to interpolate from!". Wrong. Functions interpolate. There is a frame of reference. Re-sampling brings the pixels back.

7) The same false objection can be made for any analytic function. I wrote my Masters thesis on spline-based image compression. Once you've performed the fitting procedure, conceptually you've transformed the pixelated image into functional form - in this case 1st to 3rd degree piecewise polynomials. Not at all unlike bilinear interpolation, this system also permits zoom operations, i.e. evaluation at arbitrary points. So where is the inherent scale of y=a*x2 + b*x + c? There is none, except for a reference-scale provided by the original. One could just as easily complain that "Spline zooming is a property of polynomials. It is not an operation on an input image." Well, not directly it's not.

8) If resolution enhancement is applied, what you can do is compute the distortion at this scale, provided you have a higher resolution original. Say, by scanning at a higher dpi setting. But because the compression process has no information about this higher resolution data, the distortion will inevitably be higher. A scientifically accurate statement would read, "with fractal zooming we create high resolution images at the expense of substantially increased distortion. We hope you don't notice." DCT-based JPEG and wavelet-based JPEG 2000 can do exactly the same thing. Which one does a better job is a matter of rigorous perceptual experiments.

9) If one is careful, it is possible to speak of very high compression ratios and resolution enhancement in the same breath. But there is an elemental gotcha that pours water on the parade. Suppose our regular sized image is 640x480. It is encoded into a 50:1 FIF file and decompressed to a size of 2560x1920. Now grab the original image scanned at 2560x1920. We have an apparent compression ratio of 800:1. Which, as I've explained in point (8), will present serious degradation. But what is really going on? What's happening is that the compression algorithm has been modified into a two-stage process. Our fractal compression module is now prefaced by a pre-filter whose sole job is to downsample the original before passing it along. The preprocessor tosses away 15 out of every 16 pixels and asks the encoding engine to make do without. But why? Wouldn't be better to hunt for an iterated function system with all of the information at its disposal? Yes, it would. An equivalent sized FIF trained from the full-sized image will encode much better. It has been trained on the high resolution image, and is thus better equipped to represent it in a compact form.

10) Strictly speaking, compression concerns approximation of the original, not the process of re-sampling or interpolation. Both are valuable, but are distinctly different concepts. Each is sufficiently stand-alone to deserve separate treatment.

11) As I originally wrote in the FAQ: "That said, what is resolution enhancement? It is the process of compressing an image, expanding it to a higher resolution, saving it, then discarding the iterated function system. In other words, the compressed fractal image is the means to an end, not the end itself." The one thing I would add is that when the goal is resolution enhancement (such as would be run as a photoshop plug-in), one is hardly at all concerned about compression ratios. That issue becomes moot.

Finally,

"The Fractal Compression section of the Compression FAQ is wrong." - Editor5435.

Ah... no, I'm afraid not.

--Johnmkominek (talk) 05:22, 19 March 2008 (UTC)

I am afraid so..."its wrong and 14 years out of date"! It should by no means be used as a FAQ on the current state of fractal compression, perhaps it serves better as a historical reference to academic opinions of this subject 14 years ago.
"5. Resolution enhancement is a powerful feature but is not some magical way of achieving 1000:1 compression" You are ignoring the fact the end result (file size) is comparable to wavelet compression with 1000:1, however the image quality of wavelet compression at such ratios is unacceptable. Which is the best method for filling a 1920x1080 screen from the smallest possible file? That is what matters in the real world, that is what could benefit services such a YouTube. It seems to me you have very little practical experience with fractal compression software beyond academic study 14 years ago. Do you have any real examples to show? How about some video clips? Were you able to even produce actual working code? Are you aware of Iterated's original developments and progress made since the late 1990's by one of its licensees? I suggest you take a look through http://www.tmmi.us/products.html, sorry to burst your bubble, but fractal compression has come a long way since you gave up your research on the subject. You should however, pay close attention to future updates on this Wikipedia article, you may then wish to revise your FAQ to avoid embarrassment for having your name attached to such a document.--Editor5435 (talk) 06:26, 19 March 2008 (UTC)
"1994 -- Put your name here." Yes, indeed, a name has been put there along with a great deal of progress. It just shows how hopelessly out of date your FAQ is. There is certainly no place on Wikipedia to quote such nonsese. I suggest you start rewriting it now!--Editor5435 (talk) 07:17, 19 March 2008 (UTC)
"Even on a fast workstation an exhaustive search is prohibitively slow. You can start the program before departing work Friday afternoon; Monday morning, it will still be churning away." What relevance does 1994 hardware performance have today? I suggest you remove this completely obsolete comment, its only confusing to the less informed.--Editor5435 (talk) 07:23, 19 March 2008 (UTC)
"Exaggerated claims not withstanding, compression ratios typically range from 4:1 to 100:1" Fractal compression has so far yielded acceptable results all the way up to 700:1 with no scaling, direct 1:1. Again, your FAQ is out of date.--Editor5435 (talk) 07:29, 19 March 2008 (UTC)
"The crossover point varies but is often around 40:1. This figure bodes well for JPEG since beyond the crossover point images are so severely distorted that they are seldom worth using." Did you just make this up as you went along?--Editor5435 (talk) 07:32, 19 March 2008 (UTC)
I submit that your so called "FAQ" (really a 1994 FAQ) is hopelessly out of date and therefore of little use 14 years after it was written.--Editor5435 (talk) 07:40, 19 March 2008 (UTC)
I haven't read the long first post in this section yet, but I've read the shorter response, and here's my two cents so far:
  • 5.) If a file compressed at, say, 50:1, produced visually good image quality when decompresses to 1000:1, one can, undoubtedly, produce an even better quality image if they instead start with a 20x resolution version of the original image and compress it a 1000:1 ratio (producing the same file size), and subsequently decompress it at 1000:1. By this logic, the first example (50:1 compress->1000:1 decompress) shows that a 1000:1 compression ratio can be achieved with that image quality or better. (The trick then, it seems to me, is to start with a high resolution image. As is often said, fractal compression works better with high-res images. probably because there's a higher ratio of texture data to shape information.)
  • 1994?!? That's so last decade! Seriously, when it comes to computer technology, that's antiquated. A supercomputer from 1994 is like a notebook computer from today. So any sentence preceded by "It would take a supercomputer..." translates to "It would take an average computer...". Besides which, the algorithm has seen much improvement over the years; it's not a brute-force search anymore.
  • Besides that, I guess I should read the FAQ, eh?.. Kevin Baastalk 18:20, 19 March 2008 (UTC)
Okay, my responses now:
  • 1) "Baas is completely dropping context..." Correct. This is intentional. There is nothing inherent in the phrase "fractal zooming" that limits it to ifs's generated via compressing a raster image. I don't dispute this, it's on purpose. Kevin Baastalk 18:55, 19 March 2008 (UTC)
  • 3) "Interpolation..." Yes, and I said this earlier when I stated that it would be more correct to say that fractals are being used as an interpolation kernel, as one would say that splines are used as an interpolation kernel. Though splines may be used for interpolation, one does not say that splines, therefore, are interpolation, or that they operate by way of interpolation. Likewise, one does not say that fractal zooming/scaling is interpolation, but rather that interpolation can be done by way of fractal zooming/scaling, not the other way around. That is my point. Let's not do injustices to fractals that we would not do to splines. Kevin Baastalk 18:55, 19 March 2008 (UTC)
  • 7) "The same false objection can be made for any analytic function..." Not true. ifs/fractals are unique in that they are affine transformations. If you calculate the spline function at each point, then scale all the resulting spline functions the same, then used the scaled splines to interpolate, your results won't fit the original data and you'll get something that doesn't represent the original (or is greatly distorted). If you do the same thing with the ifs primitives, you don't have that problem. This is because the values that you solve for when constructing the spline function are inextricably tied to the periodicity of the data points; i.e. to the resolution, whereas in .fif, they are not. Inversely, when you change the input resolution, the spline functions change a lot, whereas with a .fif, the ifs primitives don't change all that much. Again, because the fitted functions in a .fif are orthogonal to the periodicity of the data points. This is not the case for splines, thus there is no such thing as "spline zooming". Kevin Baastalk 18:55, 19 March 2008 (UTC)
All-in-all it's nice to hear from an expert who's done a lot of research and has a lot to show for it. Perhaps "resolution enhancement" would be a good title for a section on how fractals can be used as an interpolation kernel? Kevin Baastalk 18:55, 19 March 2008 (UTC)
And while I'm giving out props, I'd just like to add that I thought the explanations where clear and thorough, and it would be nice to have comparable explanations in the article. Kevin Baastalk 16:22, 20 March 2008 (UTC)

Lost Pixels

Correct, "rendering them at high resolution would not be properly called interpolation". This is because the statement "these new pixels are placed between and interpolate the input pixels" is false. Nearest-neighbor, bilinear, and bicubic are all kinds of interpolation. These functions take neighboring pixels as data points, and use them to infer basis values, theta, for a function: color = f(x,y,theta), such that, given the data point coordinates x,y, the function produces the corresponding color for that point, then selects a point in-between data points (hence inter-pol-ate), and run it through the function to produce a color. This is what is called "interpolation". Fractal scaling does not do this. To say that "these new pixels are placed between and interpolate the input pixels" is to imply that the one has some idea of where the input pixels are. And in fif, that data is not stored, it is lost, so the whole concept of "in-between" is without meaning. Kevin Baastalk 16:19, 14 March 2008 (UTC)

Your reply is so long i'm going to try to break up my reply so we can have separate threads focusing on each of the problems here. So the first issue is that you didn't address the most important thing I said, which is why I said it up front: fractal compression (or scaling) is an operation applied to an input raster image. This image is not scale free or resolution independent. You made an argument above If fractal zooming is an interpolation method then fractals themselves are an interpolation method and then you showed that fractals are not interpolation. Your premise is flawed though, since fractal zooming is about raster images. Spot (talk) 02:37, 15 March 2008 (UTC)
Fractal-encoded images take raster(bitmap) images as input, and mathematically transform them into iterated function systems, which are not raster(bitmap). ifs are resolution independent; they do not have a "native" resolution. when the fif is displayed on-screen, it is transformed back into raster-space(a bitmap). I proved my statement "f fractal zooming is an interpolation method then fractals themselves are an interpolation method" with the example of the fractal fern, below. Kevin Baastalk 14:23, 15 March 2008 (UTC)
Ifs are resolution independent but the input image is not. Fractal zooming is an operation on an input image. You have proved nothing. Spot (talk) 01:39, 17 March 2008 (UTC)
Fractal zooming is a property of fractals, such as ifs's, and by extension, .fif's. It is not an operation on an input image. Technically, it's not even an "operation". if it was an operation on a bitmap/raster image, it would be called "bitmap zooming" or "raster zooming", or just "zooming". The word "fractal" prefixing the word "zooming", according to standard english syntax rules (i.e. how to read a sentence), means that the "zooming" is done on/through/via "fractals". Fractal compression is an operation performed on an input bitmap image that transforms it into a fractally-encoded image. fractally-encoded images are resolution independent. "Fractal zooming" is an "operation" done on a fractally-encoded image. It is essentially displaying (decompressing) a fractally-encoded (and thus resolution-independent) image at a specified resolution. Kevin Baastalk 16:17, 17 March 2008 (UTC)
This article is about operations on input images, and in this context "fractal zooming" refers to encoding an image with fractal compression, and then decompressing it at a larger size than the original. That's what this article is about and you are welcome to join us in discussing it. Spot (talk) 17:49, 18 March 2008 (UTC)
That's not what fractal zooming is. Fractal zooming is traversing a fractal in the depth-dimension via changing the scale parameter of the viewing window. It's like "panning", except instead of left-right, up-down, it's in-out. And it's zooming in/out of a fractal, not a raster-image. If it was zooming in-out of a raster image, it would be called "raster zooming" or "bitmap zooming" or just "zooming". What you are referring to is "up-sampling" via fractals. Using wavelets to do the same thing is not called "wavelet zooming" wavelet zooming is zooming in/out on a wavelet. Just like fractal zooming is zooming in/out on a fractal. Just like "wave riding" is riding on waves and "body surfing" is surfing on bodies. A noun-gerund combination is always read like this. It's a basic rule of sentence construction dictated by the english language. Kevin Baastalk 19:05, 18 March 2008 (UTC)
You say "in fif, that data [the input pixels] is not stored, it is lost, so the whole concept of "in-between" is without meaning." But this is not right. Of course the input pixels are not lost. If you expanded the fif back to the original resolution, the input pixels would be recovered. the pixels are still in the fif file, they have just been encoded differently. Spot (talk) 02:37, 15 March 2008 (UTC)
That's a stretch. The reason that there are pixels when you display the fif image on the screen is because the screen is made out of pixels. transforming a fif to a 2d-plane at best produces different sized regions or infinitesimal points, not fixed-size, tesselated regions (pixels). But since the screen is made out of pixels, you ultimately have to fit these regions/points into them. And ofcourse the reproduction resembles the original - that's the whole point. But that doesn't mean that the original "pixels" are stored. I come back to the analogy with vector graphics, in which data is stored as geometric primitives. One may have originally drawn the vector graphic at a particular resolution, and if you display it at the same resolution you'll likely get back the same pixels. But those pixels are not stored in the .svg, in ANY format. (One might say that a .fif is just an .svg in which the geometric primitives have non-integer dimension.) It's the difference between a PostScript font and a pixel font. in a postscript font, a 400pt rendering is just as "correct" and "primitive" as a 10pt rendering. Same goes for a .fif. Kevin Baastalk 14:23, 15 March 2008 (UTC)
Actually, the simple fact that fractal compression is lossy proves that the original pixels aren't stored: if they were, it would be lossless. Kevin Baastalk 17:54, 15 March 2008 (UTC)
No -- by that reasoning since JPEG is lossy proves that it doesn't store the original pixels either. You said the pixels are lost and they are not. This is just a fact. Spot (talk) 19:51, 15 March 2008 (UTC)
Ah, yes. the proverbial "this is just a fact." argument. tell me, where are the pixels stored in an .svg? Kevin Baastalk 13:07, 16 March 2008 (UTC)
It is a fact the pixels are not lost. If you started with a raster image and converted it to SVG that looked identical would you say the pixels had been lost? Spot (talk) 23:28, 16 March 2008 (UTC)
Yes. The data is no longer stored primitively as a grid of color values. It is now stored primitively as geometric operations. If a vector graphic is converted to a raster image, i would say that the geometric primitives have been lost. Same thing goes for the reverse. Kevin Baastalk 16:17, 17 March 2008 (UTC)
The data is no longer stored primitively that's right -- it's stored in another way. The pixels can be recovered. The pixels are not lost. Spot (talk) 17:52, 18 March 2008 (UTC)
Firstly, the abiliity to recover pixels does not neccesarily mean that the pixels ares stored. Secondly, the pixels cannot be recovered. That's why it's called a "lossy" compression method.
The original pixel values are only approximated. I think you mean the original image is recovered, approximately, at least. And that information is stored. That's why the decompressed image looks similiar to the original image, because they share a lot of mutual information. But that's the point of lossy compression, to preserve, on average, as much mutual information as possible, while minimizing file size.
There are many ways to store an image. And we're talking about how the image is stored. The image is not stored as pixels, it is stored in a space that is related to pixel-space via a non-linear transformation. Since this transformation is non-linear, it has singularities - places where information present in the original space is lost, and that original space is pixel space, therefore that lost information is pixel information. That information is non-recoverable. Just as lines are lost as such when a vector graphic is rendered to a given resolution, so are pixels lost when a raster image is vectorized. What you are really trying to say is that the "image" is not lost, but that is trivial and obvious. Nobody in their right mind would dispute that. If you think that I'm disputing that, then you clearly don't understand what I am saying. When an image is stored as an .fif, it does not know anything about "pixels". Pixels are dependant on (fixed to) a given resolution. .fif, on the other hand, is resolution-independent. It stores all possible resolutions of the image with the same data. That data does not store "pixels". Does the statement "A line from the upper left corner of the canvass to the lower right corner of the canvass." store any pixels? If you say "yes", then where are they stored? If you say no, then you agree with me. Vector graphics are a collection of statements like that. So are .fif's.
Let me put it yet another way, using a physics analogy: when heat is converted into electricity, does one say that the heat is "stored"? No. one says that it is "converted" to a different type of "energy". The "energy" is "conserved", yes, but that's quite different from saying that the "heat" is "stored". By the same token, when a raster image is converted into a set of ifs's, we do not say the "pixels" are "stored", we say that "information", is "conserved". Now that information can be converted back into a bitmap-representation, or another format, just like electricity can be converted back into heat, or converted into another type of energy.
I hope my meaning is finally clear to you. Kevin Baastalk 18:35, 18 March 2008 (UTC)

More References

Some of the potential references listed above looked interesting, but I wanted to offer some that might perhaps be less sophisticated. This would be to determine less what experts feel but more what the general take is. Here are some:

  • Heath, Steve (1999) Multimedia and Communications Technology ISBN 0240515293 pp. 120 - 123. Summarizes how it works and comments how they compare to wavelets. Though now a little dated.
  • Symes, Peter (2003) Digital Video Compression ISBN 0071424873 pp. 348 - 349. Short summary but mostly notes how it's not really in use or generally accepted. With some notes about advantages.
I can't find this online, can you quote the relevant paragraphs? Spot (talk) 08:17, 17 March 2008 (UTC)
  • Sayood, Khalid (2006) Introduction to Data Compression, Third Edition ISBN 012620862X pp. 560 - 569. This text concludes that it is no better than the DCT approach in JPEG, but that future research and improvement is possible. Summary and a paragraph before that of interest is on page 568.
  • Russ, John (2006) The Image Processing Handbook ISBN 0849372542 pp. 190 - 192. Summary of approach and issues, but doesn't add much.

Harder to note, though, is that they're just not talked about as much as wavelet and other approaches. When I search for people talking about fractal compression there just isn't the same volume of citations. - Owlmonkey (talk) 23:41, 13 March 2008 (UTC)

The lack of information is due to limited real world experience people have with this type of compression, especially for video. Most citations available are from research related material rather than from commercial software use.--Editor5435 (talk) 03:38, 14 March 2008 (UTC)
Well let's go with the information we have. The Sayood book is recent and authoritative and it concludes that fractal compression is no better than DCT. Our article should say so too. Of course now wavelets beat them both... Spot (talk) 08:03, 17 March 2008 (UTC)
The Sayood book is obviously wrong. The examples from the forbidden website proves it as an utterly hopeless publication. Show me any other image compression scheme that compares to the 700:1 example. I'm hoping some sort of third party verification is available soon that will end all of these ridiculous arguments. From the information I have seen fractal compression is far superior to wavelets.--Editor5435 (talk) 15:47, 17 March 2008 (UTC)

images for the article

i think the article could benefit from some samples of fractal compression in action, and some samples showing the properties of fractal compression. For example, something like what is shown here: [20], though labels like "resampled using fractal compression" and "resampled using bicubic interpolation" would be more appropriate and informative. Kevin Baastalk 17:05, 15 March 2008 (UTC)

and i think "stair interpolation" should be included in a fair comparison, as methods such as bi-cubic interpolation work best at (one might say "are tied to") certain scaling ratios. Perhaps a "comparison of image resizing methods" article is called for. Kevin Baastalk 17:23, 15 March 2008 (UTC)

If you want to create an article comparing the resampling messages go ahead. I think an illustration would be good for this article, but it should be of its primary subject which is compression. The standard illustration is of how the image quality degrades with higher and higher compression ratios, ie a still image is presented uncompressed and then at compressed at 10:1, 30:1, and 100:1 ratios, with closeups. I think the section on scaling (uprezzing) just a paragraph or two that outlines the relationship to compression and the pitfall (inflated ratios). Spot (talk) 20:33, 15 March 2008 (UTC)
Above examples show the difference between raw and various levels of compression all the way up to 700:1, but of course I know you believe such images are forbidden!--Editor5435 (talk) 18:05, 16 March 2008 (UTC)
An article about image compression without a single demonstration image. Tsk. 82.139.85.220 (talk) 14:35, 5 April 2008 (UTC)
I think perhaps the fractal fern could be used to show what an ifs is, and put in the history section by the first mention of ifs's. A pair of images showing fractal interpolation would also be nice (and maybe more to compare it against bicubic). As for straight compression decompression, too bad it works best at high res - we can't very well put a 2046x1640 image in the article. Maybe a fragment of one could work, though. And then maybe a close up to show the detail generated by the algorithm. (the "compression artifacts" so to speak.) Kevin Baastalk 15:15, 5 April 2008 (UTC)

New History Section

Thank you owlmonkey for separating out the history section. I would like to note that when I attempted to do the same thing it was reverted, but you seem to have the golden touch :) Anyway, what I want to point out is that the history section is now 4x longer than the main part of the article itself. Shouldn't the ratio be the other way around? For example the page on JPEG has a similar section on patents which comprises less than 1/10th of the size of the article. Spot (talk) 06:27, 17 March 2008 (UTC)

I think that fractal compression has a higher proportion of interesting and important material on patents than jpeg image does, hence it gets more space. And besides, this article is pretty small right now - I think we should focus on making it bigger by adding content in other sections before we worry too much about trimming sections that may be to big, in proportion (which right now, is in proportion to not very much). Kevin Baastalk 16:43, 17 March 2008 (UTC)
Its unfortunate Ronz has little understanding of the importance patents are to fractal compression and its history. He attempted to remove the entire section rather than build it up. I don't understand why some editors choose to meddle with articles they have no clue about.--Editor5435 (talk) 06:16, 24 March 2008 (UTC)
Every article on Wikipedia is editable by any editor. Trying to block out editors from a particular article is treading the WP:OWN line. --clpo13(talk) 06:32, 24 March 2008 (UTC)
Editors should have the common sense not to tamper with articles they know nothing about. No wonder Wikipedia is no longer regarded as a source of reliable information.--Editor5435 (talk) 06:37, 24 March 2008 (UTC)
Wikipedia never was regarded as reliable. It's been editable by anyone since its inception, making it inherently unreliable for those who mistakenly consider it a primary (or even secondary) source. Besides, an editor doesn't need to know the subject matter to improve an article. Ronz's edits deal with the non-content problems this article has, such as improper sources and advertisement overtones. Anyone can see this article needs work, and that work doesn't necessarily need to be done by an expert in the field. Remember to assume good faith, as it seems that you are alone in considering Ronz a problematic editor. --clpo13(talk) 06:42, 24 March 2008 (UTC)
Are you saying its some kind of blog now for the uninformed masses? ...seems about right judging by some of the contributors loitering here!--Editor5435 (talk) 22:23, 24 March 2008 (UTC)

Warning Section

I have received permission by email from the comp.compression FAQ authors (Jean-Loup Gailly and John Kominek) to include text from the FAQ into the wikipedia. We don't need their permission to rewrite their conclusions, but it's more convenient to simply reuse their fine words, and then edit them to fit. Thanks guys! Below is the section that I propose including Spot (talk) 18:52, 18 March 2008 (UTC)


Iterated Systems is fond of the following argument. Take a portrait that is, let us say, a grayscale image 250x250 pixels in size, 1 byte per pixel. You run it through their software and get a 2500 byte file (compression ratio = 25:1). Now zoom in on the person's hair at 4x magnification. What do you see? A texture that still looks like hair. Well then, it's as if you had an image 1000x1000 pixels in size. So your _effective_ compression ratio is 25x16=400.
But there is a catch. Detail has not been retained, but generated. With a little luck it will look as it should, but don't count on it. Zooming in on a person's face will not reveal the pores.
Objectively, what fractal image compression offers is an advanced form of interpolation. This is a useful and attractive property. Useful to graphic artists, for example, or for printing on a high resolution device. But it does not bestow fantastically high compression ratios.
Take a 1000x1000 pixel image and compress it 400:1 using wavelets and compare the results to the fractally scaled 250x250 pixel image. If you are still having difficulty with this concept I suggest you watch a few YouTube videos to understand the effects of high compression on images using wavelets. YouTube could certainly benefit from clearer fractally scaled video. Your arguments deny the end results which are most important to the computer industry. Will you ever stop with your silly arguments? By the way, the so called FAQ (really a hopelessly out of date 14 year old FAQ) you relish over is completely bogus since it was written by a group of disgruntled fractal compression researchers who have failed miserably in producing any fractal compression software worthy of commercial licensing. These people are jealous of Michael Barnsley and Iterated Systems Inc.'s accomplishments and as a result of their failures have a vendetta against fractal compression entirely.--Editor5435 (talk) 20:04, 18 March 2008 (UTC)
To name just a few problems with this suggestion:
  1. Encyclopedia's generally don't have "warning sections". That's not encyclopedic.
  2. Wikipedia is not a FAQ.
  3. The wording is inappropriate for an encyclopedia. For example: "What do you see?" is a question. encyclopedias don't have questions. "Take a portrait that is, let us say..." "You run it through their software..." are hypotheticals. encyclopedias don't use hypotheticals. Kevin Baastalk 18:32, 25 March 2008 (UTC)

Fractal Scaling vs Wavelet Compression, small files for giant screens

I propose a section that directly compares the results of fractally upscaled smaller resolutions to wavelet compressed images at the same higher display resolution. The comparisons should be based on identical compressed file sizes. In other words, what method provides the best image quality at 1920x1080 for example, from the smallest possible file size? A highly compressed 1920x1080 image using wavelets or an upscaled fractal image from a smaller resolution?--Editor5435 (talk) 19:13, 18 March 2008 (UTC)

How to fill up a giant screen from a tiny file? Can't be done with wavelet compression without looking like its from YouTube, but wait, it can be done with decent viewing quality using fractal scaling. Call it what you want, the end result is what counts in the real world, this unique characteristic of fractal compression puts it on an entire level above wavelet based compression. Small files, giant screens, surely even Spot can grasp this concept?--Editor5435 (talk) 19:18, 18 March 2008 (UTC)

Working Draft

I am going to edit this here for a while, especially adding references, with the intention of replacing the first section with it. Comment welcome but since this is a comment by me here, please don't edit it. Spot (talk) 23:34, 20 March 2008 (UTC)

Fractal compression is a lossy image compression method based fractals to achieve high levels of compression. The method is best suited for photographs of landscapes (with trees and clouds), or other images with substantial self-similarity. Although it was inspired by Iterated Function Systems, the Fractal transform generally used to perform fractal compression makes it much more like vector quantization.

At common compression ratios, up to about 50:1, Fractal compression provides similar results to DCT-based algorithms such as JPEG [1] But at high ratios such as 100:1 and beyond, fractal coding may have an advantage. In most situations, DWT-based algorithms such as JPEG 2000 perform better than JPEG and fractal coding[2]. Comparisons are usually based on PSNR or RMSE.

Because during fractal compression the source image is searched to find the similarities, the encoding process is extremely computationally intensive, much slower than DCT or Wavelet methods. The decoding process, however, is quite fast. While this asymmetry makes it impractical for most applications[3], when the video is distributed over a low-bandwidth medium to a large audience (such as CD-ROM in the 90s), then fractal compression is more competitive.

Fractal codes have the useful property that they may be iterated to decode to a higher resolution than the original. Because of this, fractally compressed images are sometimes called "resolution independent", since they can be decompressed to any number of pixels. This provides a whole new application of fractal "compression" which is scaling images up to higher resolutions, and in this capacity it competes with simpler interpolation methods such as nearest neighbor and bicubic.

Note that this property of fractal compression is sometimes used to inflate the compression ratios associated with it[4]. This is done, for example, by fractally compressing an image or video at resolution 640x480 down 50:1 from 921K (640x480x3) bytes to 18K (921K/50), and then decompressing to a higher resolution, say 1280x960, which is 4x larger than the input, and then citing a compression ratio of 200:1 (4x50).

The catch is that detail has not been retained, but generated. Any compression method can be paired with an interpolation method and have its ratio similarly multiplied.[5] Compression ratios are properly computed between the input and the compressed data size, not between the compressed size and the output.

  1. ^ Sayood, Khalid (2005). Introduction to Data Compression, Third Edition. Morgan Kaufmann. pp. pp 560 - 569. ISBN 012620862X. {{cite book}}: |pages= has extra text (help); Cite has empty unknown parameter: |coauthors= (help)
  2. ^ Saha, Subhasis (2000). "Image Compression - from DCT to Wavelets : A Review". ACM Crossroads. 6 (3). {{cite journal}}: Cite has empty unknown parameter: |coauthors= (help)
  3. ^ Heath, Steve (1999). Multimedia and Communications Technology. Focal Press. pp. pp 120 - 123. ISBN 0240515293. {{cite book}}: |pages= has extra text (help); Cite has empty unknown parameter: |coauthors= (help)
  4. ^ Kominek, John (Sept 1999). "comp.compression FAQ". {{cite web}}: Check date values in: |date= (help); Cite has empty unknown parameter: |coauthors= (help)
  5. ^ Wohlberg, Brendt (Dec 1999). "A review of the fractal image coding literature" (PDF). IEEE Transactions on Image Processing. 8 (12): 1716--1729. {{cite journal}}: Unknown parameter |coauthors= ignored (|author= suggested) (help)

holding pen: Kominek, John (July 1997). "Advances in fractal compression for multimedia applications". Multimedia Systems. 5 (4). Springer-Verlag: pp 255 - 270. {{cite journal}}: |pages= has extra text (help); Cite has empty unknown parameter: |coauthors= (help)

"In most situations, DWT based algorithms such as JPEG 2000 perform better than JPEG and fractal coding."
"While this asymmetry makes it impractical for most applications"
Are the above comments based upon current information? This reference states: "Fractal image compression is a promising new technology but is not without problems. Most critically, fast encoding is required for it to find wide use in multimedia applications. This is now within reach: recent methods are five orders of magnitude faster than early attempts. Beginning with the basic ideas and problems, this paper explains how to accelerate fractal image compression." I realize this quote if from 1997, so the question remains if there was a 5X increase in performance back then what is the current state?
Yes they are based on current information. this 2000 paper concludes "While the DCT-based image coders perform very well at moderate bit rates, at higher compression ratios, image quality degrades because of the artifacts resulting from the block-based DCT scheme. Wavelet-based coding on the other hand provides substantial improvement in picture quality at low bit rates because of overlapping basis functions and better energy compaction property of wavelet transforms." and the current JPEG 2000 page says "The compression gains over JPEG are attributed to the use of DWT and a more sophisticated entropy encoding scheme." For the second statement, the reference you cite backs me up. As for progress since then, the best we could find was summarized by another editor above as "I agree with Spot: compressing 256x256 grayscale in 1 second (with unspecified quality) may well represent progress, but it is not a solution in the sense of being close to competitive". Spot (talk) 13:28, 21 March 2008 (UTC)
You call 2000 current? 256x256 grayscale? What on Earth are you talking about? Images with a greater bit depth (such as 24-bit truecolor images) will compress more efficiently than images with fewer bits per pixel (such as 8-bit gray-scale images). Spot, you are in complete denial about what is shown on http://www.ononesoftware.com/detail.php?prodLine_id=2 and http://www.tmmi.us/products.html --Editor5435 (talk) 15:50, 21 March 2008 (UTC)
"Because of this, fractally compressed images are sometimes called "resolution independent"
Aren't they always described as "resolution independent"?
No, for example I don't call them that. Spot (talk) 13:28, 21 March 2008 (UTC)
"Note that this property of fractal compression is sometimes used to inflate the compression ratios associated with it".
There has been discussion about comparing two methods (fractal vs wavelets) as a "means to the end". Should this be the topic of discussion instead of over inflated compression ratios? The YouTube reference is a good example of excessive artifacts spoiling the "end result". If fractal scaling offers a solution perhaps additional focus is warranted.--64.46.3.61 (talk) 00:09, 21 March 2008 (UTC)
I don't mind including a statement that it might be useful for YouTube if you can find a good reference that says so. I think speculating so would count as Original Research. By contrast, my paragraph is right from the FAQ. Spot (talk) 13:28, 21 March 2008 (UTC)
Spot, enough of your nonsense, the so called 1994 FAQ has already been debunked, its meaningless now, 14 years later, get a grip!--Editor5435 (talk) 15:45, 21 March 2008 (UTC)

Spot, your draft means nothing. Its shaping up to be another hatchet job just like your previous article. I will not permit you to spread false and misleading information. Give it up!--Editor5435 (talk) 15:57, 21 March 2008 (UTC)

This whole draft is one big editorial. Kevin Baastalk 18:26, 21 March 2008 (UTC)

Spot, do not post your draft, nobody has agreed upon the new structure yet.--Editor5435 (talk) 17:05, 24 March 2008 (UTC)

Current Information 2007, 2008, not 1994

This section should be reserved only for current information. So far there are five references to recent activity involving fractal compression/encoding.

1.) Genuine Fractals5

2.) TruDef

3.) Fractal Image Compression Applied to Remote Sensing

4.) Toward Real Time Fractal Image Compression Using Graphics Hardware

5.) Proposed Texture Compression Algorithm Based on Fractal Wavelet Theory

Most of the previous sources of information are simply too old and out of date. Its like trying to do an article about the PC based on 1994 references.--Editor5435 (talk) 16:14, 21 March 2008 (UTC)

compression ratios and decompression ratios

this topic seems to be scattered throughout the recent discourse. let me just bring it here and summarize the opinions and premises, so far as i understand:

Opinions:

  • opinion 1: when a .fif is decompressed at a 1000:1 ratio, resulting in a high-quality image, that implies that .fif can compress at a 1000:1 ratio while preserving that image quality. - held by Editor5435
  • opinion 2: when a .fif is decompressed at a 1000:1 ratio, that does not imply that .fif can compress at a 1000:1 ratio while preserving that image quality. - held by spot

Premises and logic:

  • premise 1: decompression ratio is not the same as compression ratio - all editors agree
    • logic: therefore, opinion 1 is false, and 2 is true - this logic is not valid, instead premise 1 implies that opinion 1 may be false, but does not prove it. likewise, opinion 2 may be true, but it doesn't follow necessarily. I believe this is error is a result of Affirming_the_consequent.
    • logic: therefore, an image decompressed at 1000:1 was not neccessarily compressed at 1000:1 - this logic is valid.
  • premise 2: if a fif compresses an 1x resolution image at 50:1 and then is decompressed at 1000:1, the resulting image quality or better can be achieved if a 20x resolution image is compressed at 1000:1 and subsequently decompressed at 1000:1. - asserted by me (Kevin Baas), Editor5435, and the guy who wrote the FAQ.
    • logic: it follows from premise 2 that opinion 1 is true - this logic is valid, thus the conclusion follows if premise 2 is true.

Kevin Baastalk 16:24, 22 March 2008 (UTC)

"The guy who wrote the FAQ" is John Kominek and he said "My opinion is this. Kevin Baas begins with a correct characterization (first quote) but draws the wrong conclusion (second quote). Erik Reckase and Spot are raising important relevant facts, and demonstrate proper understanding." Quoting Wohlberg: While “resolution independence” has been cited in the popular technical press as one of the main advantages of fractal compression [117], there is little evidence for the efficacy of this technique. Subsampling an image to a reduced size, fractal encoding it, and decoding at a larger size has been reported to produce results comparable to fractal coding of the original image [21], although there is no indication that replacing the fractal interpolation stage by another form of interpolation would not produce comparable results. Comparisons with classical interpolation techniques indicate that, while fractal techniques result in more visually acceptable straight edges than linear interpolators, they are inferior in terms of the MSE measure [118]. An alternative study [119] found slightly better results for the fractal technique in isolated cases, but a general superiority for the classical techniques. Spot (talk) 22:28, 22 March 2008 (UTC)
Once again the concept we are discussing escapes your comprehension. Will you ever understand? "comparable to fractal coding of the original image". NO, we are comparing the quality of upscaled fractal images to 1000:1 wavelet compression of the original.
"although there is no indication that replacing the fractal interpolation stage by another form of interpolation would not produce comparable results." I don't think so, fractal scaling by its very nature is superior to various forms of interpolation. There has already been plenty of discussion about this. You continue to twist this discussion to meet your own agenda which is why your contributions to the article have little value.--Editor5435 (talk) 01:57, 23 March 2008 (UTC)
If you look at his response to the second quote, and my response to that, you'll see that what he was refering to was me taking "fractal zooming" out of the context of fractal compression, and that I was responded "I know." My guess from you misunderstanding is that you did not read it. (In the future please get your self up to speed on the discussion before adding to it.) I imagine then that you also missed "9) If one is careful, it is possible to speak of very high compression ratios and resolution enhancement in the same breath..." which is the part that I was refering to when I said he asserted premise 2.
Regarding inferiority in terms of RMSE, I'd like to see his examples to show this. I've already provided links to papers which show a comparision of fractal compression/decompression against the best noise filter, which is closely related to interpolation. And genuine fractals 5.0 happens to be the industry standard in interpolation, so essentially the best graphic designers in the world disagree with that guy. Now you seem to be arguing against having a fractal interpolation section in the article, when before you were all for it.
Anyways, what does this have to do with what I wrote? Are you disputing that the opinions are properly attributed, that the premises are? Or are you disputing my analysis of the logic? Kevin Baastalk 14:34, 23 March 2008 (UTC)
Kevin Baas and Editor5435, you are in continued and repeated violation of WP:NPA. Please desist. Spot (talk) 18:03, 23 March 2008 (UTC)
WTF are you talking about?!?! What have I ever said about you that offended you? I have never attacked your person, not once. Kevin Baastalk 00:27, 24 March 2008 (UTC)
"Genuine fractals 5.0 happens to be the industry standard in interpolation" this statement is from the company that sells GF5, so it violates WP:RS. In my opinion, whatever Photoshop does by default is the industry standard. And regardless of how good or popular GF5 is, the question is really "can fractal interpolation substitute for compression", and the answer is "no". Wohlberg is copiously referenced; his examples are in the 100+ papers in the bibliography, and the many graphs and tables in the paper. This kind of independent literature review is a perfect source for this article. So no, I am not arguing against a section on fractal interpolation (see my draft above for my proposed text). I am arguing that this section should agree with Wohlberg. I may well paraphrase some of it into the draft. Spot (talk) 18:03, 23 March 2008 (UTC)
"Genuine fractals 5.0 happens to be the industry standard in interpolation" is not from the company, it is from me, and I had no knowledge of the company before commenting on this talk page. It was rather presumptious of me to say, at it is, in fact, wrong. Bicubic interpolation is the industry standard. What I meant is that it's the market leader. A mistake in my wording. I apologize for the confusion. Kevin Baastalk 00:27, 24 March 2008
"this industry standard Photoshop Plug-In even better than before." is a direct quote from the current sales page. I don't believe it's the industry leader either, can you provide a reference demonstrating this? I would say Lightroom or Aperture_(photography_software) is the industry leader, but neither of these programs provide fractal scaling. Spot (talk) 01:59, 24 March 2008 (UTC)
see below. Kevin Baastalk 02:57, 24 March 2008 (UTC)
I don't think that the question is "can fractal interpolation substitute for compression", if you ask me, that's a pretty ridiculous question. fractal interpolation can't even be done w/out compression, so it clearly can't substitute for it. nobody here is arguing that it could. Kevin Baastalk 00:27, 24 March 2008
That's exactly what Editor5435 and Iterated Systems have done, and you seemed to agree above when you said "I see what you're saying". The problem is taking a compression ratio and an expansion ratio and multiplying them together and reporting the result as a compression ratio. This is the error outlined in my draft material above ("4x50"), and repudiated by Kominek and Wohlberg. Spot (talk) 01:59, 24 March 2008 (UTC)
see below. Kevin Baastalk 02:57, 24 March 2008 (UTC)
"I am arguing that this section should agree with Wohlberg." -- if there are different points of view on a topic that is ambiguous, than WP:NPOV states that both sides should be presented in the article. However, when a reference is demonstrably false on technical points, it fails the accuracy test; we do not put "2+2=5" in the article. Just about every sentence from the quotation you cited is demonstrably false, and the falsity can be demonstrated with little effort. That alone proves that the source is not reliable.Kevin Baastalk 00:27, 24 March 2008
Well then please let's have that demonstration. Spot (talk) 01:59, 24 March 2008 (UTC)
see below. Kevin Baastalk 02:57, 24 March 2008 (UTC)
Anyways, you have not answered my question: what does this have to do with my original post to this section? Kevin Baastalk 00:27, 24 March 2008 (UTC)
You appear to be making this error as well. I hoped you might be convinced by a peer-reviewed publication in a major journal (since the comp.compression FAQ wasn't good enough for you). Spot (talk) 01:59, 24 March 2008 (UTC)
Look, I've had enough of this. Obviously you have absolutely nothing to say about [[21]. Thus there's no point in continuing this discussion (or having it in the first place). Kevin Baastalk 02:57, 24 March 2008 (UTC)

structure of the article

This section is for discussing the structure of the article. Please relegate technical debates and other comments to other sections.

I thought to start this out I might take a look at article on other compression formats to see how they're organized. Three section s of the JPEG 2000 article seemed applicable to this article:

  • Features
  • Technical discussion
  • Applications

For .png, there were these:

  • History and development
  • Technical details

and .gif:

  • History
  • Usage

So in sum, we have:

  • history (& development)
  • features
  • technical discussion/details
  • application/usage

It would seem to me that information about the features would fit int to the application/usage section. In any case, I want to hear other peoples thoughts on how this article could be structured. Kevin Baastalk 15:08, 23 March 2008 (UTC)

Its a shame some of the other editors refuse to participate in this discussion and instead go right ahead and attempt butcher the article without knowing what they're doing.--Editor5435 (talk) 23:23, 23 March 2008 (UTC)
We should start with the Features section and describe fractal scaling which is an inherent result of fractal compression.--Editor5435 (talk) 15:51, 24 March 2008 (UTC)

We should all agree of the structure first before any major revisions to the article.

* Introduction

* History (& development)

* Patents

* Features

* Technical discussion/details

* Application/usage

Laying a proper foundation is more important at this time.--Editor5435 (talk) 17:20, 24 March 2008 (UTC)

Spot, stop trying to revise the article under the old format, we are discussing improvements here, stop acting independently of these discussion among editors.--Editor5435 (talk) 18:37, 24 March 2008 (UTC)

I think patents would rightfully by subsumed under history, possibly as a subsection. The features section, I'm not too sure about - that content could go in the tech and/or usage section:
  • Introduction
  • History (& development)
    • Patents
  • Technical discussion/details
    • Features
  • Application/usage
    • Interpolation

This simplifies things to just 4 (actually 3, when you discount the intro) major sections:

  • Introduction
  • History (& development)
  • Technical discussion/details
  • Application/usage

Kevin Baastalk 23:42, 24 March 2008 (UTC)

I am in agreement.--Editor5435 (talk) 00:02, 25 March 2008 (UTC)

I created the Features section in the article with a brief mention of fractal scaling and Kevin has since added a Fractal Interpolation sub section. Finally the article is being structured as it should be rather than one large continuous page.--Editor5435 (talk) 20:01, 25 March 2008 (UTC)

The Technical discussion/details section should be added next, any suggestions of what to start off with?--Editor5435 (talk) 19:06, 26 March 2008 (UTC)

Patents section

This appears to be nothing more than an advertisement for Iterated Systems Inc.'s patents. I'm unable to find how the supplied reference verifies any of the information provided, much less is relevant. --Ronz (talk) 17:53, 23 March 2008 (UTC)

Ronz, you trashed the entire patent section which was added as a result of discussions here. What's wrong with you? It is a widely known fact that fractal compression is heavily patented which restricted its widespread adoption in commercial software. Why do you attempt to edit articles you clearly have little understanding of?--Editor5435 (talk) 18:02, 23 March 2008 (UTC)
Please follow WP:TALK or risk being ignored. Thanks! --Ronz (talk) 18:06, 23 March 2008 (UTC)
Please leave this article to those who are more knowledgeable.--Editor5435 (talk) 18:08, 23 March 2008 (UTC)
No. --Ronz (talk) 18:12, 23 March 2008 (UTC)
There is no doubt other editors will support my position the patent issues are an important topic for this article.--Editor5435 (talk) 18:21, 23 March 2008 (UTC)
It doesn't matter. If it cannot be properly sourced, it will be removed. See WP:V and WP:RS. --Ronz (talk) 18:48, 23 March 2008 (UTC)
There are proper references, your constant tampering is extremely annoying, just STOP! You are only harming the integrity of Wikipedia.--Editor5435 (talk) 20:13, 23 March 2008 (UTC)
The only way to prove Iterated received 25 patents is to list them all. If I remove the list then certain editors insert a tag asking for proof of patents. Yet when I add the list these same editors claim it is irrelevant and advocate its removal. This is most confusing.--Editor5435 (talk) 22:10, 23 March 2008 (UTC)
That is an examplefarm, and the improper use of primary sources. If no third-party sources are available, the entire section should be removed. This article is not a forum to promote patents nor the companies that hold them. --Ronz (talk) 22:12, 23 March 2008 (UTC)
It proves 100% Iterated was indeed granted the patents. When I removed the list a merely stated Iterated was granted the patents up popped a citation tag. I re-added the list to prove that it is a fact. Further, if you advocate the removal of the patent section entirely it only proves your total lack of understanding of the patent issues and how they are important to fractal compression's history. This subject has already been discussed at length. I will agree to removing the list once again if you stop putting up a citation tag since it has already been proven Iterated has been granted the patents.--Editor5435 (talk) 23:29, 23 March 2008 (UTC)
Why don't we try adding to the article right now? Kevin Baastalk 00:36, 24 March 2008 (UTC)
"The only way to prove Iterated received 25 patents is to list them all" If there are no other sources for this, then it's not important enough to include in this article. Either provide sources or stop interfering with those who are trying to improve the article to the point that it no longer violates Wikipedia policies and guidelines. --Ronz (talk) 02:06, 24 March 2008 (UTC)
You are mistaken, I am the one trying to improve the article, you are the one trying to destroy it by deleting important facts. If you don't understand the significance of the fractal patent issues then you should not be trying to edit the article. Needless damage is the only result!--Editor5435 (talk) 04:37, 24 March 2008 (UTC)
The consensus is that you are mistaken, that you are harassing other editors, and that you need to quickly learn to follow WP:CIVIL or risk being blocked yet again. --Ronz (talk) 16:20, 24 March 2008 (UTC)
You risk being blocked due to excessive vandalism.--Editor5435 (talk) 16:37, 24 March 2008 (UTC)
You've already been told that I'm not vandalizing anything and that your preoccupation with making such accusations may get you blocked. Please stop. --Ronz (talk) 18:28, 24 March 2008 (UTC)
Perhaps "butchering" is a more appropriate term.--Editor5435 (talk) 18:47, 24 March 2008 (UTC)

Information Theory & Data Compression ref

This doesn't appear to be a reliable source [22]. I cannot make out where it is from. Are they just lecture notes? It does have some interesting information though that we might want to add if properly sourced (from page 18):

Current Situation
  • In the 1980s and 90s, fractal compression was a hype
  • It delivered better results than JPEG only for low-quality images
  • Michael Barnsley is the principal researcher
  • He wrote a book and holds several patents
  • Barnsley‘s Collage Theorem: images can be segmented using IFS (iterated function sets)
  • In 1987 he founded Iterated Systems Corp in the U.S. (www.iterated.com) but it was hard to survive the competitor JPEG
  • The patent issues and the complexity prevented a large scale deployment
  • Today fractal compression seems to be even less relevant, with Wavelet compression outperforming it in most applications

--Ronz (talk) 18:13, 23 March 2008 (UTC)

International School of New Media affiliated institute at the University of Lübeck in Germany. --Editor5435 (talk) 18:23, 23 March 2008 (UTC)
I'll be removing the reference and associated information if no one can demonstrate this is a reliable source. Please see WP:V and WP:RS for the relevant criteria. --Ronz (talk) 02:03, 24 March 2008 (UTC)
I recommend you ask the author for the references his statements are based on, and about the status of the document (was it published?). It is far more reliable than some of the sources being used by Editor5435 and Baas. And I don't mean that we should stoop to their level, but that if you are going to delete unreliable stuff, this shouldn't be first on your list. Spot (talk) 02:38, 24 March 2008 (UTC)
I'm trying to discuss this one because it is questionable. The ones that I removed and Editor5435 restored will be removed of course. We could try to go through them all, but this is a learning experience for Editor5435, so it's good to give clear examples of what is not allowed and what might be allowed. --Ronz (talk) 02:43, 24 March 2008 (UTC)
I think I misunderstood --- I am just talking about leaving it here on the talk page until we can get more evidence for or against its reliability. By all means remove it from the article. Spot (talk) 02:54, 24 March 2008 (UTC)
There are many references to Iterated's fractal compression patents, its common knowledge the adoption of this technology has been restricted by the patent issues. What exactly are you trying to accomplish here?--Editor5435 (talk) 04:35, 24 March 2008 (UTC)
I'm questioning the use of this reference. Your appeal to it being "common knowledge" demonstrates that you're simply refusing to learn Wikipedia policies and guidelines. "I'll be removing the reference and associated information if no one can demonstrate this is a reliable source. Please see WP:V and WP:RS for the relevant criteria." --Ronz (talk) 16:23, 24 March 2008 (UTC)
I will restore any vandalism you continue to do to this article. Enough of your games!--Editor5435 (talk) 16:35, 24 March 2008 (UTC)
You've already been told that you risk being blocked for such behavior. Please stop. --Ronz (talk) 18:29, 24 March 2008 (UTC)
You are not being constructive here.--Editor5435 (talk) 18:35, 24 March 2008 (UTC)

Ongoing Vandalism

The problem of ongoing vandalism is destroying the integrity of the article. Attempts have been made to remove the section on patents and information about Genuine Fractals 5 and other software based on fractal compression which are essential for this topic.--Editor5435 (talk) 20:11, 23 March 2008 (UTC)

Yes. Please stop vandalising the article. Please learn to follow Wikipedia policies and guidelines. Please learn to follow WP:TALK and WP:CON. --Ronz (talk) 20:37, 23 March 2008 (UTC)
"YOU" are the one butchering the article, what's wrong with you?--Editor5435 (talk) 20:46, 23 March 2008 (UTC)

Ronz, why are you aggressively trying to destroy this article?--Editor5435 (talk) 20:44, 23 March 2008 (UTC)

Go find someone that agrees with you. See WP:DR for possible ways of doing so. --Ronz (talk) 20:46, 23 March 2008 (UTC)
There are other editors participating in this discussion, just not this very minute. Leave it alone for awhile, your persistence is most unusual.--Editor5435 (talk) 20:50, 23 March 2008 (UTC)

The term vandalism has a specific usage on wikipedia. Calling edits one disagrees with as vandalism doesn't, I believe, improve one's case but instead makes one's point less trustworthy and less civil. Just wanted to point that out if it was not already obvious. If you want people to hear your case in a dispute, Editor5435, avoid the term. - Owlmonkey (talk) 23:31, 23 March 2008 (UTC)

Whew, this discussion section is tough to read. My hats off to everyone who has kept with this in spite of the ongoing emotion and hot temper. - Owlmonkey (talk) 23:41, 23 March 2008 (UTC)
I believe the term is appropriate in this case.--Editor5435 (talk) 23:34, 23 March 2008 (UTC)
Nonetheless, it makes you sound emotional and irrational to have so much vehemence and acidic tone. Doesn't bolster your position to me and probably others. But don't you want to be heard properly and your opinion and expertise respected? It makes you sound less an expert and more a mere zealot to me; which prevents me from hearing your actual point no matter how good it is. But perhaps you don't care what anyone else hears or thinks as long as you're "right"? - Owlmonkey (talk) 23:49, 23 March 2008 (UTC)
Perhaps the emotional charge on this topic is the result of a build of emotion over the last month. It's hard for me to not get emotional too just reading the posts here; it's bitter to read. - Owlmonkey (talk) 03:13, 24 March 2008 (UTC)

Editor5435 Incident #2

Another Administrator's noticeboard Incident has been opened on Editor5435: http://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard/Incidents#User:Editor5435 Spot (talk) 22:20, 23 March 2008 (UTC)

The incident was archived. Spot (talk) 16:56, 26 March 2008 (UTC)

This only shows how much effort I am going to in order to maintain the integrity of this article. There does seem to be an abundance of forces at work spreading false and misleading information about fractal compression.--Editor5435 (talk) 22:57, 23 March 2008 (UTC)
Re: "abundance of forces" Please stop harassing others. You've been identified by multiple editors as the cause of problems here. Please stop causing these problems. --Ronz (talk) 16:16, 24 March 2008 (UTC)
You are the trouble maker, not me, it is you who has been butchering the article with no idea what you're doing. I ask, what is your interest in fractal compression? You don't seem to be very knowledgeable of the subject? If it wasn't for my efforts the article would be in shambles and absolutely useless as a source of information.--Editor5435 (talk) 16:44, 24 March 2008 (UTC)
I would say Ronz is being helpful and you are the disruptive one. Instead of questioning his credentials, perhaps you could provide us with your own? Spot (talk) 18:23, 24 March 2008 (UTC)
Agreed. Editor5435, please stop disrupting Wikipedia. --Ronz (talk) 18:31, 24 March 2008 (UTC)
Stop turning this situation around. You are the one disrupting efforts to establish a new structure for the article and discussions about it. --Editor5435 (talk) 18:33, 24 March 2008 (UTC)
Editor5435 is correct in that when there is an editing dispute, people should refrain from making controversial edits, and instead build consensus on the talk page. Kevin Baastalk 18:52, 24 March 2008 (UTC)
He's saying no such thing. He's simply disrupting the article and the talk page. --Ronz (talk) 19:07, 24 March 2008 (UTC)
Ronz, you are not being constructive here, we are trying to establish a new structure for the article. What part of this don't you understand?--Editor5435 (talk) 19:09, 24 March 2008 (UTC)
He said "Stop turning this situation around. You are the one disrupting efforts to establish a new structure for the article and discussions about it.". By this he means that there are constructive efforts underway on this talk page to work out a new structure for the article, and these efforts are being bypassed. And bypassing these efforts disrupt them, and they disrupt the cooperativeness and good faith of the community in general. Kevin Baastalk 19:18, 24 March 2008 (UTC)
And he's the one that has been disrupting the constructive efforts. I suggest you read the ANI and contribute there rather than support his improper behavior here. --Ronz (talk) 19:33, 24 March 2008 (UTC)
My efforts have only been to improve the article in good faith.--Editor5435 (talk) 00:23, 25 March 2008 (UTC)
@Kevin that's what I did: I put the material here in the draft section on the talk page. Neither you nor Editor5435 provided any references to challenge it, so eventually I put it into the article. It was Editor5435's reversion that was done without consensus or cause. Spot (talk) 00:11, 25 March 2008 (UTC)
You should not have made controversial edits while a consensus is being made about restructuring the article. Its easier to make additions after a new structure is in place.--Editor5435 (talk) 00:21, 25 March 2008 (UTC)
I think it's better to structure existing content than it is to make up a structure and then fill in content. There is no WP guideline or policy dictating one way or the other. My edit should be addressed on its merits. Please do so. Spot (talk) 00:45, 25 March 2008 (UTC)
We are trying to prevent the problem of breaking up an unstructured article, the majority of editors agree structure is needed before continuing with additional information.--Editor5435 (talk) 01:02, 25 March 2008 (UTC)
Both of us objected to the content change, and nobody supported it. (see: Talk:Fractal_compression#Working_Draft) That's what counts. That's what makes making the change uncooperative, disrespectful, and disruptive. Why did you even put it on the talk page in the first place if you were just going to ignore what anybody said about it? It would have actually been more polite and less a waste of everybody's time if you just rewrote the entire article however you liked it without any input. (That's essentially what you did. The only difference is that you went through the extra step of wasting people's time.) But that kind of defeats the point of a wiki, doesn't it? I'm starting to get the feeling that you're a newbie at this. Kevin Baastalk 15:59, 25 March 2008 (UTC)
You are again violating WP:NPA, please desist. Spot (talk) 18:43, 25 March 2008 (UTC)
I suggest you read WP:NPA. The only thing i said about you in that comment is "I'm starting to get the feeling that you're a newbie at this." , and suggesting that someone is relatively new to editing wikipedia isn't exactly an attack on their person. If anything, it flags them for better treatment on account of the "don't bite the newcomers" policy. Anyways, I hope your reply wasn't a blanket dismissal of everything that I just said; I hope that it sunk in. Kevin Baastalk 19:08, 25 March 2008 (UTC)
It says "comments should not be personalized and should be directed at content and actions rather than people." Your comments were directed at me and were intended to discredit me, making it a personal attack. Even your suggestion that I read NPA is a personal attack. Spot (talk) 21:16, 25 March 2008 (UTC)
"Even your suggestion that I read NPA is a personal attack." LOL --Editor5435 (talk) 21:34, 25 March 2008 (UTC)
By that logic, the wikipedia welcoming committee should have been banned long ago.
Also the statement you quoted is a bit over-general for it's intended purpose. I might say "My, you look nice today.", and by according to the statement you quoted, that would be a personal attack. "Hey ___, what do you think about ___ ?" would also be considered a personal attack. In any case, I'm sorry if what I said offended you. Some of your recent actions and assertions have given me the impression that you do not have a lot of experience dealing with content disputes on wikipedia. And I imagine by the tone Editor5435 takes sometimes that the same hold true for him. There's nothing wrong with this. Forgive me if I offended you with that statement. In fact, please disregard it. It's besides the point. The point is that what you did was basically a big "f*&k you" to people who tried to work with you and happened to have some criticisms, and that's not fair to them. Kevin Baastalk 21:53, 25 March 2008 (UTC)

New structure

All that I'm asking is a new article structure be established before any significant edits. What is your problem with this? You are the antagonizing influence in these discussions. Why?--Editor5435 (talk) 19:41, 24 March 2008 (UTC)

That is all that he is asking. And it's a reasonable suggestion. people should not be making edits to the article that they know will be controversial. That is not "constructive". minor corrections like spelling and grammar, and non-controversial edits are constructive because they are lasting contributions and they help repair damaged faith (which also has a lasting effect). making edits to the article that you know will be controversial, on the other hand, obviously will not be lasting, is disrespectful, and damages other people's good faith. as should be abundantly clear by now. it is for these reasons that such edits are "disruptive". Kevin Baastalk 20:04, 24 March 2008 (UTC)
Its far easier establishing a new structure for the article now since the content is currently limited. Spot's recent edit attempts only make matters worse as its does nothing to address the original structure problem which we all agree exists.--Editor5435 (talk) 19:08, 24 March 2008 (UTC)

I don't agree that a new structure is necessary, nor useful. Wikipedia articles are based upon references and the information in those references. Trying to determine a structure for the article is futile until there is some consensus on what references can and should be used. --Ronz (talk) 21:02, 24 March 2008 (UTC)

Your comments are simply ridiculous, haven't you been reading what this entire discussion is about? The article requires structure making it easier for different sections to be expanded. We have already discussed numerous topics for new sections such as fractal scaling and technical details, similar to the structure of other Wiki articles on different forms of image compression. All editors are in agreement about this, except you. I suggest you read this discussion in full before making any further comments that ultimately require clarification.--Editor5435 (talk) 22:05, 24 March 2008 (UTC)
Starting with the references and then going from there to the content seems backwards to me. One usually writes the book before the bibliography. And Editor5435, I suggest you tone down your comments - they've been getting pretty harsh lately. Kevin Baastalk 23:12, 24 March 2008 (UTC)
I honestly don't understand why Ronz makes such comments when obviously there is a consensus about adding structure to the article. Its not very constructive on his behalf.--Editor5435 (talk) 23:30, 24 March 2008 (UTC)

The biggest problem with the article isn't the structure, it's the content and references. My edit was a big step towards addressing this. It didn't change the structure at all, you are welcome to continue to discuss structure, but there's no guideline that says structural edits must precede content. Spot (talk) 00:07, 25 March 2008 (UTC)

You fail to recognize what the majority of editors agree upon.--Editor5435 (talk) 00:59, 25 March 2008 (UTC)
Spot, you yourself were engaged in the long discussion where, after great effort, we found some agreement on what we could work together productively on.[23] [24] [25] [26] And that was the structure, particularly in regard to fractal interpolation and fractal scaling. It would be very frustrating indeed if after such a long discussion, we find that the one thing we agree on, you suddenly discard out of hand for no practical reason. Can you see how this can be perceived as being uncooperative? Can you see how this can make it very difficult to work with you? Kevin Baastalk 15:49, 25 March 2008 (UTC)

Starting with references and going from there is how Wikipedia works. See WP:V. --Ronz (talk) 01:34, 25 March 2008 (UTC)

I've been contributing to wikipedia for a long time and i can tell you definitively that i've never seen any article work that way. Kevin Baastalk 15:49, 25 March 2008 (UTC)
Maybe you haven't worked on many article-salvaging situations like this one? Doesn't matter. If we can't agree on the references to be used, we won't agree upon much more. --Ronz (talk) 02:43, 26 March 2008 (UTC)
I'm not sure "article-salvaging situations" exist - I would say rather that some articles are in their infancy, and they need to be expanded, improved, and made more accurate and verifiable. If any article needs "salvaging", one can just revert it to a version that doesn't, therefore it's not really a salvage situation. if you can't, then it's in its infancy. We've never had any discussions on what references should or should not be used, so i don't see any reason why we should be expected to disagree. Regardless, i'm more interesting in the technical details, so that's what i'm going to be focusing on for now. Maybe you can find some reliable sources for those to verify the statements in the tech sections. I'm sure we can find ways to work together productively. Kevin Baastalk 16:06, 26 March 2008 (UTC)
How do we use the same reference tags in different areas of the article? Some sources provide a wide range of information that is suitable as references in different areas of the article.--Editor5435 (talk) 16:12, 26 March 2008 (UTC)
By using what's called "named references":
first thing with ref.<ref name="name_for_ref">{{cite ... }}</ref>  ...   next thing with same ref.<ref name="name_for_ref"/>
Kevin Baastalk 17:39, 26 March 2008 (UTC)

Use of press releases as sources

Press releases usually are not considered to be reliable sources per WP:SELFPUB. The article currently uses a few such references. If other sources are not available, they should probably be removed with the corresponding information as well. --Ronz (talk) 22:10, 23 March 2008 (UTC)

Indeed, especially considering this is an article about fractal compression, not Iterated Systems Inc. or Michael Barnsley. --clpo13(talk) 06:36, 24 March 2008 (UTC)
There wouldn't much left to any article about fractal compression if Iterated Systems Inc. and Michael Barnsley were not included.--Editor5435 (talk) 06:40, 24 March 2008 (UTC)
I'm not saying they shouldn't be included. What I'm saying is that this article is about the concept of fractal compression. Considering that Barnsley and his company pioneered it, they deserve mention. But the specifics should be left to articles about them, per WP:UNDUE. --clpo13(talk) 06:46, 24 March 2008 (UTC)
You are wrong, the specifics of what Iterated Systems accomplished are directly related to this article. What are you talking about? As for WP:UNDUE I strongly suggest you read this entire discussion before making such comments. We are discussion the inclusion of additional information, particularly on the technical side to give a more balanced and comprehensive article. This is preferable to eliminating information we already have, even if its mostly history weighted for the time being.--Editor5435 (talk) 06:49, 24 March 2008 (UTC)
(ec x2) I also didn't say that what Iterated Systems did isn't related to this article. Iterated Systems and fractal compression go hand in hand like Xerox PARC and graphical user interfaces. However, this isn't the Iterated Systems article. It isn't the Michael Barnsley article. It's the fractal compression article, which means that information that could go into the other related articles (patent information, company information, etc.) doesn't belong here. Just so, the GUI article isn't filled with information about Xerox PARC, even though that company was responsible for developing GUIs.
At any rate, this is getting off topic. The main point is that press releases are not considered a reliable source because they're self-promoting. I stated that was even more relevant because this article isn't about Iterated Systems; it's about Iterated Systems' achievement. Different topic, different articles. Therefore, information about the technical side of how fractal compression works isn't a problem. Information on the patents held by Iterated Systems is. While it may be related to fractal compression, it's not the focus of this article. --clpo13(talk) 07:02, 24 March 2008 (UTC)
You don't understand, just like some other editors. The significance of the patents is patent restrictions are the major reason for lack of commercial fractal compression software. Many references support this view. Only a few third party companies ever received development licenses. "it's not the focus of this article" Its one single sentence, what on Earth are you talking about?--Editor5435 (talk) 07:12, 24 March 2008 (UTC)
Correction, that's two sentences the Patent section takes up, forgive me, I never intended for such bloat. I will try to condense it.--Editor5435 (talk) 07:25, 24 March 2008 (UTC)
Hmm, well I was originally referring to the mass of PDF links I saw earlier, but I'm glad that's been cleaned up. --clpo13(talk) 07:36, 24 March 2008 (UTC)

Linkspam

I removed the linkspam again that were embedded as notes/references. See WP:EL, WP:SPAM, WP:NOT#LINK, WP:V, WP:RS, WP:COI, WP:SELFPUB, and WP:NPOV. These don't belong as external links, much less notes or references. --Ronz (talk) 14:07, 25 March 2008 (UTC)

The end result is a less informative article, but I guess this it to be expected on Wikipedia. At least this information is still on Google if you look hard enough.--Editor5435 (talk) 15:18, 25 March 2008 (UTC)
Thanks for the internal links and not reverting. The end result is less promotional and more balanced. These are very simple NPOV issues. Hopefully, things will go as well when we start tackling the more complicated NPOV issues. --Ronz (talk) 15:19, 25 March 2008 (UTC)

Editor5435 breaks 3RR again?

Five reverts between 19:21 and 22:47 today:

And on a different page in the same interval:

I don't have time to look back further, but isn't Editor5435 breaking WP:3RR again? Please note the quality of the references that have been deleted. Spot (talk) 03:21, 26 March 2008 (UTC)

The edit to Genuine Fractals is fine. He added a number of references, then removed the notability tag.
I suggest you follow the format that was used in the 3rr report (that includes a discription of the edit and what version it is a revert to), because it's not clear what's going on with the other edits. --Ronz (talk) 04:20, 26 March 2008 (UTC)
Spot lighten up, the article is improving with the desired structure. As for Genuine Fractals 5 how can you possibly object to me adding product review references? The product is notable and a well recognized Photoshop plugin, its deserves a Wikipedia article.--Editor5435 (talk) 04:57, 26 March 2008 (UTC)

Reported. Spot (talk) 18:17, 26 March 2008 (UTC)

Spot, grow up! The article is being improved with the new structure. Please stop your nonsense.--Editor5435 (talk) 18:34, 26 March 2008 (UTC)
Edit warring--by either party--does not improve the article. I'm beginning to wonder if dispute resolution is in order here, since it seems that the discussion almost always stalls out with someone violating 3RR. --clpo13(talk) 20:24, 26 March 2008 (UTC)
I would welcome dispute resolution. I'm editing the article now, while Editor5435 is blocked. We'll see what happens on their return. Spot (talk) 21:34, 26 March 2008 (UTC)

Result: Editor5435 blocked for 30 hours. Spot (talk) 21:39, 26 March 2008 (UTC)

Another 3RR Accusation

Jakespalding accused Spot of breaking 3RR, but the ruling is No Violation Spot (talk) 04:26, 27 March 2008 (UTC)

Clarify Application of WP:V and WP:RS

Hi Kevin, I would like to focus your attention on this recent edit of yours. You removed a reference to peer-reviewed literature survey, and the comp.compression FAQ, and replaced them with a link to a trade website. How do you square this with the advice of WP:V which says "Academic and peer-reviewed publications are highly valued and usually the most reliable sources in areas where they are available, such as history, medicine and science." Where does "dpreview.com" fall in WP:RS? Thanks for clarifying. Spot (talk) 22:20, 27 March 2008 (UTC)

If I removed a reference (while preserving the accompanying text), it was on accident. Kevin Baastalk 23:20, 27 March 2008 (UTC)
You removed some text supported by two stellar references and replaced it with new text and a marginal reference (please see the edit). How do you justify this? Spot (talk) 16:26, 28 March 2008 (UTC)
"The catch is that detail has not been retained, but generated" is begging the questionand un-encyclopedia. I don't think anyone's stupid enough to think that fractal decompression magically increases the resolution at which you took the photo. And even if there are people that dumb, we're not supposed to speculate their misconceptions and pre-emptively correct them. The text does not introduce the ridiculous notion that fractal decompression magically increases the resolution at which you took the photo, so the text should not correct this notion that doesn't exist. That's called begging the question, and it's not appropriate for an encyclopedia.
There is no indication that replacing fractal interpolation with another form of interpolation would not produce similar results. is demonstrably false. all you have to do to prove it false is provide one indication that "replacing fractal interpolation with another form of interpolation would not produce similar results." And there have been numerous comparisions with fractal interpolation showing better results than bicubic interpolation, linear interpolation, and nearest-neighbor. In fact, if the sentence referred to bi-cubic interpolation instead of fractal interpolation, it would still be demonstrably false, because if you replace bi-cubic interpolation w/nearest neighbor interpolation, the difference is clear. And a clear difference in the results is clearly an "indication" of "different results".
So in conclusion, the material had to be removed because the first part was begging the question, and the second part was wrong. The reference had to go along with it because it was no longer attached to any content. Kevin Baastalk 17:13, 28 March 2008 (UTC)
Let me quote Wohlberg again: While “resolution independence” has been cited in the popular technical press as one of the main advantages of fractal compression [117], there is little evidence for the efficacy of this technique. Subsampling an image to a reduced size, fractal encoding it, and decoding at a larger size has been reported to produce results comparable to fractal coding of the original image [21], although there is no indication that replacing the fractal interpolation stage by another form of interpolation would not produce comparable results. Comparisons with classical interpolation techniques indicate that, while fractal techniques result in more visually acceptable straight edges than linear interpolators, they are inferior in terms of the MSE measure [118]. An alternative study [119] found slightly better results for the fractal technique in isolated cases, but a general superiority for the classical techniques.
Let me offer you a different interpretation besides "wrong". When Wohlberg says "comparable results" he doesn't mean that fractal scaling and bicubic scaling produces results that look the same. The "results" are how good these images are, where "good" is a quantitative measure from perfection, MSE. As evidence he cites these two papers: Zooming Using Iterated Function Systems and Resolution enhancement of images using fractal coding. In order to avoid confusion perhaps we should include the whole paragraph from Wohlberg and cite these primary references. What do you think? Spot (talk) 19:01, 29 March 2008 (UTC)
Hi Spot, will you kindly tell everyone how to fill a giant screen using a small file with good viewing quality? Maybe your answer could be used by YouTube as so far they have found no other methods that work for them? I'm waiting in anticipation! Oh, by the way, the answer might be found elsewhere than a paper from 1995.--Editor5435 (talk) 20:11, 29 March 2008 (UTC)
I know. Bicubic interpolation gets better RMSE than linear, which gets better than nearest neighbor. This is because color tends to change continuously over space rather than discontinuously. The two papers he cites as evidence use the common reference image, "Lena". Unfortunately, that image is low-resolution and black-and-white, which automatically means that it's not going to compress well with fractal compression. Fractal compression works better on high-res, high-color depth images. I'm sure if such images were used instead, the conclusions would be quite different. Kevin Baastalk 14:25, 30 March 2008 (UTC)
Very good point, most of these so called "papers" from last century are basically worthless because they are so out of touch with today's computing and multimedia environment. I find the examples from the "forbidden website" most interesting because the resolution and color depth are much closer to what is actually be used these days. Imagine trying to contribute to an article about the Internet based on Mosaic 1.0, but that's essentially what some editors are doing with fractal compression.--Editor5435 (talk) 20:10, 30 March 2008 (UTC)
You may be sure but the Wikipedia can only be convinced by reliable and verifiable sources. If you come up with any recent references that make a more extensive or detailed quantitative comparison, then we can revisit this, but until then, shouldn't we use Wohlberg and the papers they cite? Spot (talk) 06:53, 31 March 2008 (UTC)
Gee Spot, read this, why do you want to keep talking about small black-and-white images from last century?--Editor5435 (talk) 14:33, 31 March 2008 (UTC)
Hey Spot look what I found, here is the first digital image ever created on a computer in 1957. It doesn't look much different than the images those obsolete "papers" from the 1990's used to analyze fractal compression. Maybe you can spend a few weeks trying all sorts of interpolation methods to see which provides the best results? In the meantime I'm going to keep looking for current examples like this. :) --Editor5435 (talk) 18:01, 31 March 2008 (UTC)
Editor5435, could you please try to be more WP:CIVIL. Thank you. Kevin Baastalk 18:37, 31 March 2008 (UTC)


@Spot: "If you come up with any recent references that make a more extensive or detailed quantitative comparison, then we can revisit this, but until then, shouldn't we use Wohlberg and the papers they cite? Not necessarily. Wholberg's conclusions have been shown to be based on selection bias and fallacious logic. His conclusions, therefore, are not reliable. And we really don't need a section on this so much as to sacrifice the accuracy of the article.
That being said, I have come across numerous examples that contradict wholberg's conclusions. And yes, they are more recent. I really don't think it's nearly as difficult to find as you're making it out to be. In contrast, I wouldn't have found wholberg had you not first dug it up.
In any case, the article now notes that compression ratios w/good image quality go up as the source image's resolution and color depth do, which is more precise, more accurate, and more informative than simply stating that fractal compression sucks for all images (while failing to mention that this "all" excludes any image that's not a 256x256 grayscale bitmap). Kevin Baastalk 19:00, 31 March 2008 (UTC)

How good your arguments are, or how convincing the promotional material on a company web site may be to you, is irrelevant to Wikipedia. The standard for Wikipedia is verifiability, not truth, and verifiability is measured by published references in reputable sources. You are quite right that small grayscales may be qualitatively different in compression than large full-color images. However, if there are quantitative published data on the former and no quantitative published data on the latter (in reputable independent publications), then the article should simply make no claims about the latter: it should say "quantitative analysis of fractal compression on small grayscale images showed X [ref], while claims of better compression ratios for larger, color images [ref] have thus far not been independently verified in published data." —Steven G. Johnson (talk) 03:36, 22 April 2009 (UTC)

RFC#2 application of WP:RS WP:V

My edits have stellar references but been reverted by editors who do not present counter evidence. In particular I refer to you discussions above: one, and two. Especially "Just about every sentence from the quotation you cited is demonstrably false, and the falsity can be demonstrated with little effort." and "I'm sure if such images were used instead, the conclusions would be quite different" both without citations of support. My edits are backed up by major peer-reviewed publications, standard textbooks, and the FAQ. Spot (talk) 23:47, 31 March 2008 (UTC)

Spot, I'm terribly sorry to inform you that your references are grossly out of date and obsolete. Nobody uses 256x256 grayscale bitmap images anymore. No matter how "stellar" your references are in your mind they are detached from today's reality. According to them Genuine Fractals 5 shouldn't even work, contrary to the abundance of proof it does, and is superior to other methods of interpolation. There are reviews that have been referenced of it using resolutions and color depths that are common today. Visual comparisons have been made available with other interpolation methods in the references listed. There is also an abundance of recent papers covering various aspects of fractal compression. No "papers" from the 1990's will change the fact what is currently possible with fractal compression/interpolation. The Features section has 8 references which support the content.--Editor5435 (talk) 00:44, 1 April 2008 (UTC)
Note where the comp.compression FAQ author weighs into our discussion, and here where Editor5435 describes a standard textbook as "obviously wrong". As proof he offers images on the web page of the company that promises it will soon be selling a product based on this technology. To be clear here my references are 1) Sayood, Khalid (2005). Introduction to Data Compression, Third Edition. Morgan Kaufmann. pp. pp 560 - 569. ISBN 012620862X. {{cite book}}: |pages= has extra text (help); Cite has empty unknown parameter: |coauthors= (help) 2) Kominek, John (Sept 1999). "comp.compression FAQ". {{cite web}}: Check date values in: |date= (help); Cite has empty unknown parameter: |coauthors= (help) 3) Wohlberg, Brendt (Dec 1999). "A review of the fractal image coding literature" (PDF). IEEE Transactions on Image Processing. 8 (12): 1716--1729. {{cite journal}}: Unknown parameter |coauthors= ignored (|author= suggested) (help)
Note: See with your own eyes a visual comparison between various interpolation methods from April, 2007. Words printed last century have little relevance today other than historical reference and describing test results on 256x256 grayscale bitmap images in relation to application today is just plain nonsense. As for the comp.compression FAQ author, he is clearly living in the past and should really update his research if he wishes to weigh in on current discussions.--Editor5435 (talk) 03:34, 1 April 2008 (UTC)
That web site does not contradict Wohlberg because it contains no quantitative comparison of the methods, and it's self published. Spot (talk) 04:09, 1 April 2008 (UTC)
Can't you see "straight"? Open your eyes, its all there, words from the past won't change what is clearly visible. Self published? No different than any of those "steller" papers. "No quantitative comparison of the methods"? Writing about 256x256 grayscale bitmap images just shows the quality of in depth analysis they used. Why don't they ever write about anything that is actually applicable to the real world?--Editor5435 (talk) 04:46, 1 April 2008 (UTC)
For the record, modern graphic designers use images that are at least 4.1 megapixels, 32-bit color depth. That's quite literally hundreds of times the data density of the images used in the two papers you cite. Easily enough to make quite a substantial difference in compression efficiency. (as well as interpolation quality, given a method that uses the entire image instead of a local neighborhood). Kevin Baastalk 18:37, 1 April 2008 (UTC)
In discussion "one" you continued to sidetrack the discussion from the topic of the section, even after being asked multiple times to at least say something logically related to the section topic. Then you questioned me for humoring your sidetracks, which I admit, in retrospect, was a bad decision. That was also the third time you avoided the subject (after some pretty direct inquires!). After that, I gave up. You can be very frustrating to talk to sometimes.
In regards discussion "two", I've made my case, and I think Editor5435 is doing a good job at pointing out the ridiculousness of your argument. If you can spend the time digging up something like the wholberg which is ancient and full of holes, but is the only paper found so far to support your point of view, I'm sure you can spend a fraction of the time it took for you to find that to find more recent, and accurate papers which provide a counter-balance to wholberg's specious arguments. If you're not willing to take make that effort, it's hard to maintain the belief that you are willing to work against your own personal biases towards making this article more neutral-point-of-view, accurate, up-to-date, and informative. Kevin Baastalk 18:04, 1 April 2008 (UTC)

Please stop removing referenced information based upon your own personal preferences and original research. Thanks! --Ronz (talk) 06:19, 8 April 2008 (UTC)

Ronz, you are wrong, the information was based on proper references, please don't let "your" personal reasons influence this otherwise accurate article. If you take the time to read the discussion you will see Spot's references are from the 1990's and now obsolete and his argument of using references based upon 256x256 grayscale images is flawed. Later references to fractal interpolation of higher resolution color images overrides old references to 256x256 grayscale images. Your recent edit only resulted in damage to the article. Please don't do it again, especially if you really don't have a good understanding of what you're doing.--Editor5435 (talk) 13:58, 8 April 2008 (UTC)
Learn to follow WP:TALK or you may find your comments and edits are ignored. --Ronz (talk) 05:55, 9 April 2008 (UTC)
Please learn the difference in relevance between grayscale and color images before you make edits on related subject matter and accuse others of making edits based upon personal preferences and original research, it will save embarrassment having such mistakes corrected. You have to admit, after taking time to read through these discussions there is little point in supporting references to ancient material from last century based on black and white images, its simply ridiculous and makes Wikipedia appear as utterly hopeless in having informative and accurate up to date information.--Editor5435 (talk) 07:14, 9 April 2008 (UTC)

We already have wording in the article saying that fractal compression works worse at lower resolutions and better at higher resolutions. That covers both sides of the debate and shows how they're related to each other. Kevin Baastalk 14:41, 9 April 2008 (UTC)

The statement is unsourced. References that disprove or contradict Wohlberg have still not been provided. As was explained above, example images showing that GF5 and bicubic interpolation are visibly different do NOT disprove Wohlberg, which is based on quantitative comparisons of these methods. Spot (talk) 17:53, 9 April 2008 (UTC)
Spot, what is your obsession with tiny black and white images? Do you realize how ridiculous your above statement looks? Wohlberg's last century analysis of fractal interpolation on 256x256 grayscale images is now obsolete, high resolution color images are standard these days. What part of this don't you understand? References to third party testing of GF5 provides enough evidence how fractal interpolation compares to other methods when applied to high resolution high color depth images. Your arguments are like comparing gas milage of cars at 5mph, mind boggling to say the least!--Editor5435 (talk) 20:43, 9 April 2008 (UTC)
If you have any references with a quantitative (eg RMSE) comparison of high resolution color images, please provide them. Spot (talk) 14:24, 10 April 2008 (UTC)
I don't dispute wholberg's numbers. I dispute the conclusions he draws from them. His logic is full of holes. (I've already pointed out some of them in that small little paragraph you love to cite.) He makes claims larger than the evidence he uses provide for. I.e. he does not provide the evidence to back them. Thus it would be wrong to put his reckless extrapolations on the page as factual (or verifiable) - at best they constitute a POV. Kevin Baastalk 16:22, 10 April 2008 (UTC)

Fractal Compression Performance on Modern Hardware

I see there are some interesting updates about fractal compression speed of a newly compiled TruDef version running in Windows Vista:

Total time to compress 552 1280x960 frame sequence: 127635 ms (4.324833 fps)
Note: Encoding time includes hard drive I/O bottleneck.

That's 0.23 seconds per frame, also, mutliple threads utilize each CPU core. Just as earllier discussions predicted fractal compression speed is no longer a problem on current hardware.--Editor5435 (talk) 05:21, 22 June 2008 (UTC)

Here is some more information, 1600x1216 TruDef frames are 39KB in size. See raw captures Frame 1, Frame 2, Frame 3. Test clips were made from cropping 1600x1216 out of the original 4096x1714 uncompressed RGB 4:4:4 24fps StEM footage (Standard Evaluation Material) commissioned by DCI (Digital Cinema Initiative). "This footage contains a number of elements such as complex motion, film graining and color variations that are a better test of a video codec's capabilities".--Editor5435 (talk) 18:25, 12 July 2008 (UTC)

An observation: The image quality of larger resolutions is significantly better due to the fact there are more self similarities within each 16x16 or 8x8 pixel domain blocks and their corresponding range blocks. Compression efficiency increases with resolution. I would like to see the results of encoding the complete 4096x1714 frames with fractal compression. This also explains why fractal researchers completely missed the true benefits of fractal compression last century. If they had simply worked with larger images their conclusions would have been very different.--Editor5435 (talk) 16:04, 13 July 2008 (UTC)

4K Video Fractal Compression

See capture 96mbps Compressed Frame compared to raw 4096x2048 Frame. This high bitrate fractal encoding is a viable method for visually lossless archiving of production video. 10 seconds of 4k raw mildly compressed to 117MB.--Editor5435 (talk) 02:34, 21 September 2008 (UTC)