
Talk:Additive synthesis/Archive 3

From Wikipedia, the free encyclopedia

About instantaneous phase and instantaneous frequency, in the context of the additive synthesis equations: we cannot equate our definitions of time-dependent phase and time-dependent frequency with those definitions, because those are non-local and ours are local. The instantaneous phase of a real-valued signal is obtained by introducing an imaginary component that is the Hilbert transform of the real signal, and by taking the argument of the resulting complex number. The Hilbert transform is obtained by convolution with the Hilbert kernel, which has no compact support, hence the non-locality. In other words, the instantaneous phase of a non-static sinusoid, at the current time, will depend on the unknown future, indefinitely, and that's not what's happening in our synthesis equations. So it might be better to stick to the different names, or to describe the difference, or both. Olli Niemitalo (talk) 11:00, 16 January 2012 (UTC) Edit: This applies to the discrete equations. For the continuous equations, I'm not sure! Olli Niemitalo (talk) 14:16, 16 January 2012 (UTC)

I think I have roughly grasped the intention of your clear proposal. Moreover, the direction of the recent improvements to that section seems almost correct, to my eyes. What I still can't grasp is why we should use this specific implementation, which seems somewhat hard to rationalize, as the example on Wikipedia. Is it possibly based on a de facto standard code in that field? (If so, we may be able to find several sources other than Smith III...) --Clusternote (talk) 12:46, 16 January 2012 (UTC)
P.S. I understand that you are talking about signal processing using the complex form. It is not my specialty, but your kind comment above is very helpful for improving my understanding of these equations. I'm sorry to trouble you. Sincerely, --Clusternote (talk) 17:44, 16 January 2012 (UTC)
P.S.2 I would be happy if I could clearly identify the definitions of time-dependent phase and time-dependent frequency, and also the local definitions used in the equations described in that section. If possible... --Clusternote (talk) 18:24, 16 January 2012 (UTC)
Wikipedia is used as a source of information by all kinds of people. While the general reader (say, a musician) might not be interested, the discrete equations may be useful for those who wish to create their own additive synthesizers (say, a computer programmer), or interesting to someone who wishes to know more about how their inner workings can be described mathematically (say, a "nerdy" person). Clusternote, perhaps it would be more helpful, as compared to you writing a footnote, if you would point out here on the talk page what exactly it is that you find disagreeable about the discrete equations. We can then respond by 1) clarifying the article and 2) correcting any true defects. Olli Niemitalo (talk) 14:16, 16 January 2012 (UTC)
P.S. Anyway, I know that you recognized several issues in the earlier version of the theory section and corrected them, as did the IP user. I'm glad for your intellectual honesty.
By the way: possibly most other users here are merely enthusiasts, students, or general users? I can hardly believe that professionals in this field couldn't prove their own mathematical equations. It's a miracle! --Clusternote (talk) 22:14, 22 January 2012 (UTC)
I've split my previous post into sub-section "#Yet not clarified points" for ease of editing. --Clusternote (talk) 06:31, 21 January 2012 (UTC) [Added Link]--Clusternote (talk) 09:08, 23 January 2012 (UTC)
- - - - - - - - - - - - - - - - - - - - -
Actually, Olli, I'm not sure I agree with you. First, I do not think that there is much difference in outcome between the continuous-time case and the discrete-time case. Second, as long as the envelopes r_k(t) are bandlimited sufficiently, the Hilbert transform of

    x_k(t) = r_k(t) \cos(2 \pi f_k t + \varphi_k) ,

is

    \hat{x}_k(t) = r_k(t) \sin(2 \pi f_k t + \varphi_k)

and the analytic signal is

    x_k(t) + j \hat{x}_k(t) = r_k(t) e^{j (2 \pi f_k t + \varphi_k)} .

At this point there is agreement with how we understand instantaneous frequency from the POV of the analytic signal. The instantaneous frequency of a continuous-time sinusoid is simply and always the derivative of the argument of the sin or cosine function w.r.t. time, and it needs no definition or relationship with the analytic signal when only real sinusoids are involved. 71.169.180.195 (talk) 02:01, 17 January 2012 (UTC)
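As a numerical sanity check of the bandlimited-envelope claim above, here is a small pure-Python sketch. This is entirely my own construction (the variable names and the plain O(N²) DFT are not from any source in this discussion): for an envelope whose sidebands stay strictly below the carrier frequency, the discrete analytic signal obtained by zeroing the negative-frequency bins has imaginary part r[n]·sin(...), i.e. the Hilbert transform of r·cos is r·sin, as claimed.

```python
import math, cmath

# Numerical sketch of the Bedrosian/analytic-signal claim (illustrative):
# envelope bandlimited well below the carrier, so H{r cos} = r sin.
N = 64
carrier = 8                       # carrier frequency, cycles per N samples
r = [1 + 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]  # slow envelope
x = [r[n] * math.cos(2 * math.pi * carrier * n / N) for n in range(N)]

# plain DFT, O(N^2), to keep the sketch self-contained
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# analytic signal: double the positive-frequency bins, zero the negative ones
for k in range(N):
    if 0 < k < N // 2:
        X[k] *= 2
    elif k > N // 2:
        X[k] = 0
z = [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

# imaginary part should match r[n] * sin(...), per the claim above
err = max(abs(z[n].imag - r[n] * math.sin(2 * math.pi * carrier * n / N))
          for n in range(N))
assert err < 1e-9
```

With a faster envelope, whose sidebands reach past the carrier, the error would no longer vanish, which is exactly the bandlimiting caveat stated above.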
And, as a result, #Supplemental note for section "Inharmonic partials", which uses the continuous form, is almost correct. --Clusternote (talk) 02:59, 17 January 2012 (UTC)
I know you wish for that to be true, Cluster. But it isn't. For example, the expression (d/dt)φk[n] has no meaning. (The reason is that you're mixing discrete-time notions with continuous-time notions ad hoc in the same equation. The only way to relate continuous-time notions to discrete-time is via the sampling theorem.) Your supplemental note is dead in the water, right from the beginning. 71.169.180.195 (talk) 04:02, 17 January 2012 (UTC)
Are you sure? The following two expressions are probably correct:

    \omega(t) = \frac{d \varphi(t)}{dt}       (as written in article instantaneous frequency)

and

    \varphi_k(t) = \varphi_k(0) + 2 \pi \int_0^t f_k(\tau) \, d\tau       (in continuous form)

The latter expression is also mentioned on
best regards, --Clusternote (talk) 04:18, 17 January 2012 (UTC)
Those two equations are correct.*** And they are equations that live solely in the continuous-time domain. So, in your note, can you tell me what domain φk[n] is in? And what meaning is there to (d/dt)φk[n]? (d/dt)φk(t) does have meaning, but (d/dt)φk[n] does not, and if you want to convert φk(t) to φk[n], or back again, you need to consider the sampling theorem. BTW, the \textstyle TeX command does not appear to do anything; I don't know why you like putting it in. Use HTML or LaTeX, whichever you like, but try to keep the equations clean and consistent in style and use. That makes it easier for others to see what you're saying and, if they respond, to copy and edit your equations in response. Glad you're playing nice now. 71.169.180.195 (talk) 04:35, 17 January 2012 (UTC)
*** Actually, that article (which I have never contributed to) reverses the common convention of θ being the overall angle going into a cos() or sin() function and φ being the phase offset from a frequency reference term. It really should be (removing your superfluous formatting):

    \omega(t) = \frac{d \theta(t)}{dt}

and

    \theta(t) = \theta(0) + \int_0^t \omega(\tau) \, d\tau

71.169.180.195 (talk) 04:42, 17 January 2012 (UTC)
I'm sorry for troubling you with my mistype and \textstyle. I have since corrected it, as you pointed out. As for \textstyle, I used it because I wrote these as a "footnote". Of course, I should not use it in other situations, including the talk page. Anyway, thanks. --Clusternote (talk) 04:51, 17 January 2012 (UTC)
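Since the thread keeps returning to what (d/dt)φk[n] could mean, a small illustration may help: in discrete time there is no derivative, only a running sum. Here is a minimal phase-accumulation oscillator in the spirit of the article's discrete equations; the function and variable names are my own, not the article's.

```python
import math

# Minimal discrete-time additive oscillator sketch (hypothetical naming):
# theta[n] = theta[n-1] + 2*pi*f[n]/fs, the discrete counterpart of
# integrating instantaneous frequency; no d/dt is ever applied to a
# discrete sequence.
fs = 8000.0

def partial(freqs_hz, amps, phase0=0.0):
    """Render one partial from per-sample frequency and amplitude envelopes."""
    theta = phase0
    out = []
    for f, a in zip(freqs_hz, amps):
        out.append(a * math.cos(theta))
        theta += 2 * math.pi * f / fs   # phase accumulation
    return out

n_samples = 100
# a partial gliding from 440 Hz to 660 Hz with a linear fade-out
freqs = [440 + 220 * n / n_samples for n in range(n_samples)]
amps = [1.0 - n / n_samples for n in range(n_samples)]
# the full additive signal is just the sample-wise sum of such partials
mix = [a + b for a, b in zip(partial(freqs, amps),
                             partial([2 * f for f in freqs], amps))]
```

The point of the sketch is only that the discrete phase is defined by accumulation, so converting to or from a continuous-time φk(t) indeed has to go through sampling considerations, as discussed above.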

(split topic on "band-limiting requirements" into subsection for ease of discussing --Clusternote (talk) 09:08, 23 January 2012 (UTC))

More history / citation research notes.

I'm digging around for more citations that might be early sources for the history and theory... maybe someone with library access can see if they are useful.

These are from Adrian Freed's "Bring your own control to additive synthesis" ICMC 1995 paper (pdf is here: http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1995.091):

"Experimental Fourier Series Universal Tone Generator" Chamberlain, Howard A. JAES Volume 24 Issue 4 pp. 271-276; May 1976. Someone with AES library access might like to check it out: http://www.aes.org/e-lib/browse.cfm?elib=2622&rndx=871722
"A 256 Digital Oscillator Bank" G. DiGiugno 1976 Computer Music Conference, Cambridge, Massachusetts. MIT 1976. (ICMC archives don't go back that far http://quod.lib.umich.edu/cgi/t/text/text-idx?c=icmc;idno=bbp2372.*)
Adrian Freed also cites this early paper about implementing digital sine oscillators using the complex rotation method: J Tierney, C Rader, B Gold, "A digital frequency synthesizer". IEEE Transactions on Audio and Electroacoustics. May 1971. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1162151 — Preceding unsigned comment added by Ross bencina (talkcontribs) 15:58, 17 January 2012 (UTC)
Try here: http://www.maths.bris.ac.uk/~macgj/as_tierney.pdf for this one. I haven't got access to the others. Chrisjohnson (talk) 11:59, 19 January 2012 (UTC)
Thanks for the very interesting list, Ross. Although I'm not sure these will be understandable to me, I want to see these papers (possibly several may be found in paper format). My guess from the above: did the deep studies in this field emerge in the 1970s rather than the 1960s? --Clusternote (talk) 23:43, 17 January 2012 (UTC)
I would like to see these too. The AES and IEEE papers should be available to anyone with University library access, or you can buy them. I'm not sure about 1960s vs 1970s. The papers cited above are concerned mainly with hardware implementation I think. There was research in the 1960s at Bell labs (J-C Risset, Max Mathews). Ross bencina (talk) 11:29, 19 January 2012 (UTC)
My university only had the Tierney paper. Olli Niemitalo (talk) 14:32, 19 January 2012 (UTC)
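The "complex rotation method" for digital sine oscillators cited above (the Tierney/Rader/Gold paper) can be sketched roughly as follows. This is my own minimal illustration of the idea of rotating a phasor by a fixed complex factor each sample, not the paper's actual hardware design.

```python
import cmath
import math

# Complex-rotation sine oscillator sketch (illustrative only): multiply a
# unit phasor by exp(j*w) once per sample; the imaginary part is the sine.
fs = 48000.0
freq = 1000.0
w = 2 * math.pi * freq / fs
rot = cmath.exp(1j * w)   # fixed per-sample rotation
z = 1 + 0j                # phasor state, starting at phase 0

samples = []
for _ in range(1000):
    samples.append(z.imag)   # sin of the accumulated phase
    z *= rot                 # one complex multiply per sample

# In finite-precision hardware the magnitude slowly drifts; a real design
# would renormalize occasionally, e.g. z /= abs(z).
```

The attraction for hardware is that each output sample costs one complex multiply, with no table lookup.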

There is a good list of references to early Bell Labs research here, along with some references to additive analysis/synthesis: http://www.codemist.co.uk/AmsterdamCatalog/02/index.html Ross bencina (talk) 11:29, 19 January 2012 (UTC)

There is a JASA abstract available online showing J.C. Risset had done additive analysis/synthesis of trumpet tones at Bell Labs in 1965 or earlier: J. C. Risset, "Computer Study of Trumpet Tones", J. Acoust. Soc. Am. Volume 38, Issue 5, pp. 912-912 (1965) (1 page). A pdf of the abstract is available free here: http://asadl.org/jasa/resource/1/jasman/v38/i5/p912_s5?bypassSSO=1 Ross bencina (talk) 22:11, 21 January 2012 (UTC)

Spectral Interpolation research by Dannenberg et al explored the wavetable interpolation angle from around 1987: http://www.cs.cmu.edu/~rbd/bib-spectral.html — Preceding unsigned comment added by Ross bencina (talkcontribs) 23:55, 21 January 2012 (UTC)

A lot of terminology (additive, steady-state, off-line, interactive, real-time, quasi-periodic, harmonic(s), aperiodic, inharmonic, analysis-based additive synthesis, Fourier series) explained in this review article: Moorer J. (1977) Signal processing aspects of computer music: A survey. Proceedings of IEEE, 65:1108-1137. Olli Niemitalo (talk) 14:28, 23 January 2012 (UTC)

Thanks for the link, Olli. Three decades ago I had actually lifted Fig. 11 for my dissertation proposal. Back then, the technology used was the photocopy machine. But I'm not in the IEEE, I no longer have any university access to old IEEE docs, and I am not willing to pay for them. Nice early history. Notice that, in this early paper, Andy Moorer makes the same mistake that Clusternote made regarding the frequency of the sinusoid in Eq. (2). 70.109.178.133 (talk) 17:09, 23 January 2012 (UTC)
Hi, guys! Nice to meet you. On these equations: if any clear definitions or explanations are newly supplied in the form of a reliable source, they are welcome! As a physicist, I'm not afraid of any equations in the engineering field, as far as they are rational and reasonable. --Clusternote (talk) 12:12, 24 January 2012 (UTC)

[Xavier Serra's PhD] contains a survey of early additive analysis/synthesis research (page 12, first two paragraphs of Sec 1.5). He cites David Luce's PhD "Physical correlates of non percussive musical instrument tones" MIT (1963) (good luck finding that one!), Morris David Freedman [[1]] (1965), James Beauchamp in "Music by Computers" (1969), and J.C. Risset and Max Mathews (1969), although I think there are earlier Risset publications mentioned above, and apparently there is a chapter on the matter in Beauchamp's 1965 doctoral thesis, since he references it. Clearly there was work going on at University of Illinois around the same time as the Bell Labs research. I have the Beauchamp book article here in print; if anyone believes it contains critical info, I could scan it. It has a nice development of the theory. Beauchamp cites Luce, Freedman and the team at Bell Labs.

Without earlier references to non-analysis theoretical writings, this info does reinforce Clusternote's idea that some of this *theory* came about in the context of analysis/synthesis research. Obviously practice pre-dated the theory in some cases, and it's all based on 19th-century acoustic theory (e.g. Helmholtz).

Perhaps it could be useful to cite Serra in the analysis/synthesis section with something like: "Xavier Serra places the research origins of additive analysis/synthesis in the 1960s. He cites three strands of research concerning additive analysis and synthesis of musical tones: by Morris David Freedman at MIT (1963), by J.C. Risset and Max Mathews at Bell Labs (1969), and by James Beauchamp at University of Illinois (1969)." How does that sound?

Serra discusses research on the time-varying character of musical instrument harmonic spectra done using computer-based analysis-synthesis systems. Luce was probably the first (Serra is being careful on that) and deserves some credit as well. Olli Niemitalo (talk) 16:42, 4 February 2012 (UTC)

Another thing I'm wondering is how to work this into the timeline. Maybe something like:

1963 - Early work on the theory of additive analysis/synthesis by David Luce at MIT.
1965 - Additive analysis and synthesis of violin tones published by J.C. Risset et al. from Bell Labs.
1969 - Additive analysis/synthesis system by James Beauchamp at University of Illinois.

Note this needs to be fact-checked. I would like to see Luce's PhD. Also, these are publication dates, so the actual work began earlier, probably prior to 1965 in all cases. Ross bencina (talk) 22:09, 25 January 2012 (UTC)

I think on the timeline we should only give credit for research publications, working machines, and functional software. Start dates would be OK if they improve the prose. Olli Niemitalo (talk) 22:48, 25 January 2012 (UTC)

Vail, Mark; Vintage Synthesizers: Pioneering Designers, Groundbreaking Instruments, Collecting Tips, Mutants of Technology; 2nd ed.; April 1, 2000; Miller Freeman Books; ISBN 978-0879306038. I hope this will answer some of the open questions on early commercial digital synthesizers. A library close to where I live has a copy, will have a look sometime. Olli Niemitalo (talk) 22:32, 25 January 2012 (UTC)

Browsed through. Got some dates, and a little info mainly on EMS, Crumar GDS & Synergy, Fairlight CMI. Olli Niemitalo (talk) 16:08, 29 January 2012 (UTC)

The Development and Practice of Electronic Music, ed. Jon Appleton and Ronald Perera (Prentice-Hall, 1975).

The Evolution of Electronic Music, David Ernst (Schirmer, 1977). Olli Niemitalo (talk) 16:08, 29 January 2012 (UTC)

The Science Of Musical Sounds (first published 1916, reissued 1926, The Macmillan Company, New York) by Dayton Clarence Miller -- a wonderful collection of information on the development of audio analysis and reproduction theory and technology (the 1894 rolling-sphere harmonic analyzer by Henrici), and, particularly interestingly, on early analysis-resynthesis technology. Miller describes as harmonic synthesizers machines that mechanically add multiple cosine and sine harmonics of given amplitudes to produce a drawing of the resultant waveform. He cites Lord Kelvin's tide-predicting machine, made in 1876, as one of the first synthesizers. There is no mention of any playback method for these devices. Rather, the produced drawings are visually compared to the original analyzed waveform as a way to validate analysis-resynthesis. Acoustic devices based on pneumatics and tonewheels (Koenig's wave siren), organ pipes, and tuning forks (Helmholtz's tuning fork apparatus) are also described in the context of synthetic reproduction of vowels from harmonics. On organ pipes, it is noted that tibia pipes (stopped pipes made of wood) are the most suitable, and that many analyses show that their tones consist 99% of the fundamental. Olli Niemitalo (talk) 15:26, 31 January 2012 (UTC)

Phonautograph (patented in 1857)
Tide predicting machine (1876)
That's great. Probably these were also inspired by the Helmholtz resonator and one of Graham Bell's analyzers, possibly other than the phonautograph. Also, I think your mention of Lord Kelvin is very impressive for physics people (because he provided several foundations of Maxwell's classical electromagnetic theory, which was later referred to as the earliest page in the history of quantum physics). I believe that the above results later evolved into several bases of the broader meaning of research on additive synthesis, including several forms of speech synthesis. --Clusternote (talk) 02:32, 1 February 2012 (UTC)     [P.S.] Sorry, as for my first point, I had overlooked the first paragraph of the Timeline section, which already mentions Helmholtz and Koenig. --Clusternote (talk) 03:50, 1 February 2012 (UTC)

Timeline section

How much information is really necessary for the timeline section? As written, it borders on trivia. There are some errors that I can fix, but it's difficult in some cases to find good citations. I think it would help to thin this section out.

Here are some things that need to be changed: RMI Harmonic Synthesizer: this doesn't use stored waveforms. The faders on the front panel mix weighted sums of Walsh functions. I can cite the schematic. It's nice, but I don't think it belongs in the article. And the following text about other non-time-variant models probably doesn't either. There were lots of things that could do that (including the Casio SK-1). Static, harmonic additive isn't uncommon or especially remarkable by itself.

The Synclavier has 24 harmonics, not 16 (from the user manual). And its description really needs to be cleaned up.

The Fairlight CMI used wavetable synthesis. A harmonic spectrum is defined for a number of waveform segments, the waveforms are calculated and then scanned. There's no crossfading, but it only steps through waveforms at end points (which are zero because it's summing sines). I can cite the manual. I haven't seen any mention of "resynthesis" abilities outside of the un-cited claim in this article.

Kurzweil K150 is fine, but it falls into an important category: inharmonic (oscillator bank) but without time-varying frequencies.

The Seiko DS series I think does group additive/wavetable. But as far as I know there's nothing good to cite here; no patent or anything. I think it could be removed. It's not historically significant.

The Kawai K5, I think, internally calculates the amplitude of each harmonic in real time (the filters and everything), but only offers the user 4 amplitude envelopes. This is an important distinction. It's not just mixing 4 wavetables. But it's difficult to cite; I can only infer this from manuals and patents.

Maybe the Technos Axcel could be added. It's notable because it implements time-variant (amplitude and frequency) oscillator-bank additive synthesis AND resynthesis.

I can think of more, but I think it's a step in the wrong direction. IMO extensive trivia, correct or not, doesn't improve the article. 68.58.235.70 (talk) 23:37, 17 January 2012 (UTC)
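One small technical point in the Fairlight description above is easy to verify: a waveform segment built purely from sine harmonics of the segment period is guaranteed to be zero at both end points, which is why stepping between such segments at those points avoids discontinuities. A quick sketch, with amplitudes chosen arbitrarily for illustration:

```python
import math

# Each harmonic sin(2*pi*k*n/N) vanishes at n = 0 and n = N, so any
# weighted sum of them does too (the amplitudes below are arbitrary).
N = 128
amps = [1.0, 0.5, 0.25, 0.125]

def segment(amps, N):
    """One waveform segment: a weighted sum of sine harmonics 1..len(amps)."""
    return [sum(a * math.sin(2 * math.pi * (k + 1) * n / N)
                for k, a in enumerate(amps)) for n in range(N + 1)]

wave = segment(amps, N)
assert abs(wave[0]) < 1e-12 and abs(wave[N]) < 1e-12  # zero end points
```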

Thanks for your review.
  • RMI Harmonic Synthesizer: Your information seems very interesting, and I would like to see the schematics. I remember someone possibly mentioned Walsh functions in connection with it. It also seems to have a "punch card reader" to load/store pre-programmed "waveforms". How are these two features compatible?
68.58.235.70 here (I know the IP is now different). I have the service manual for the RMI, but I don't know if it's acceptable to post it here. The punch card reader and stored waveforms were in the Keyboard Computer models. These are repackaged versions of the Allen MOS-1 Digital Computer Organ. I've seen documentation for these too: there are no additive features, and the hardware isn't related to the Harmonic Synthesizer. Anyway, I don't think either needs to be mentioned. The Harmonic Synthesizer's "sines" are extremely coarse (32 steps). To me that puts it closer to combo organs that mix square waves using drawbars (i.e. Rademacher functions). The rest of the article goes to some effort to avoid overly general gray areas, and I think this strays a little too far from the given definition. (68.58.251.67 (talk) 06:40, 21 January 2012 (UTC))
  • Synclavier series: as you said, the manual for later models says "24 harmonics" [2]. It probably varied between models. If someone has enough references for each model, it can be sufficiently verified.
  • Fairlight CMI series: "Re-synthesis" probably means the "FFT function" introduced on the IIx on "PAGE5 (Waveform Generation)". It is also mentioned in this "CMI series II" page guide (not IIx); however, it was possibly not much used in practice.
Note also that Fairlight_CMI links here for "additive resynthesis" without citation, so this needs to be clarified.
  • Casio SK-1: I know of it too; however, I didn't add it because it is not so significant (like the Seiko DS series). If there were a list of similar products, it might be notable, IMO.
  • Technos Axcel: Thanks, I had almost forgotten it, because I don't know enough about it. I found a resynthesis demo on YouTube.
You're welcome! Although this article seems to have oscillated between chaotic and organized states over the last few weeks, you can probably edit the Timeline of additive synthesizers section in peace, if reliable sources accompany the edits.
best regards, --Clusternote (talk) 01:39, 18 January 2012 (UTC)
P.S. Sorry for my misunderstanding. If your intention was not to provide "correct trivia" but to show the unproductiveness of uncertain, fragmentary trivia, that intention came across effectively. However, writers of that type seem not to have come here for several months or years. For myself, I take "inclusionism" as my basic stance, and try to keep original descriptions as long as they are ignorable or insignificant from my viewpoint, or probably verifiable. I'm sorry for having almost ignored these trivial descriptions; RMI's circuit diagram probably exists somewhere. --Clusternote (talk) 16:31, 20 January 2012 (UTC)
I propose as a guideline for inclusion that it must be possible to give the synthesizer or its development a place in the historical context of additive synthesis. About the style of the entries: Each entry could be written like a small story, possibly detailing on the development process, that gives both the historical perspective and the technical trivia (to facilitate comparison). Olli Niemitalo (talk) 05:33, 19 January 2012 (UTC)
That seems a slightly too strict guideline for the natural improvement process of the article. For the fair evolution of the article, not biased toward the specific viewpoints of a few people (including me), we should intentionally keep a slightly chaotic section as a seedbed of motivation for future evolution, in my opinion (I think an article doesn't always need to be kept in a complete state). Possibly it could be realized as a sub-section "Others" or "Trivia". (For example, several historically important but not yet categorized models and notions need such a placeholder: subharmonic synthesis, UPIC as possibly the earliest FFT application, and probably the microtones behind the ANS and a kind of chromatic resynthesis (?)) --Clusternote (talk) 02:13, 20 January 2012 (UTC)
I tend to agree with 68.58.235.70 that there is a lot of trivia here. I'm not sure "small stories" are the correct approach either. There could be a historical overview covering the early years and summarising later developments (in at most 3 paragraphs). Then there could be tables ordered chronologically: maybe one for commercial implementations and one for research implementations (or just one chronology table). There could be columns for year, name and creator/inventor/manufacturer, and a column for notes indicating why the implementation was notable. Ross bencina (talk) 11:49, 19 January 2012 (UTC)
Well I guess that would be OK. I would think one table would be the best so that it is easier to see how research and commercial stuff interleave. We can have both "the first something" and "the first commercial something" listed. Oh and I'd like to put audio samples in the table, too. YouTube users seem happy to donate audio excerpts of their work if you ask nicely; I already mediated the rip, ogg conversion, Creative Commons release and upload of an ANS synthesizer sample to Wikimedia Commons. Guess there are other sources besides YouTube. Olli Niemitalo (talk) 19:35, 19 January 2012 (UTC)
That's great! I hadn't even imagined how to negotiate directly with authors to obtain appropriate consent. (It often requires "Commons:OTRS", a kind of pre-formatted formal licensing-agreement mail. I have also previously tried to get Commons:OTRS consent for several media originally uploaded to Wikimedia by other users.) --Clusternote (talk) 06:12, 21 January 2012 (UTC)
Thanks for the OTRS info; I'm going to try to play through the process to see what I learn. Here are templates that editors have used to contact copyright owners: Wikipedia:Example requests for permission. Olli Niemitalo (talk) 11:09, 21 January 2012 (UTC)
68.58.235.70, I'm sorry for my misunderstandings. You probably wanted to point out several negative aspects of trivia, and the implicit purpose and scope of the article.
On your first point: "trivia collecting" is probably an inevitable habit of humans and of Wikipedia articles. I think it is not always such a bad thing, because it's an important starting point for study, from which trivia may gradually be converted into part of a more systematic body of knowledge.
On your second point: "What should an encyclopedia be?" is probably an important theme. Wikipedia also has its own limitations as an open collaboration project. For example, while the policy of an article (purpose, scope, style, etc.) tends to be insufficiently shared among writers, arbitrary writers may randomly rewrite the article; as a result, edits with different intentions become intermingled, and insignificant details tend to expand. To prevent this, one way is to improve the description into a style which discourages undesirable expansion, as you said. However, even with that strategy, the article still needs periodic clean-ups. Over the long life cycle of a Wikipedia article, evolution phases (chaotic states) and cleanup phases (organized states) inevitably occur periodically. If we want to avoid wasting time, some compromise may be required. There is also another way, possibly based on inclusionism: avoiding excessive clean-up and intentionally keeping a partially chaotic state as a seedbed of future evolution. The latter is perhaps a strategy implicitly used in many Wikipedia articles.
By the way: I myself am not so interested in fragmentary trivia or historically finished things in themselves. My main concern in this field is the supply of appropriate, educational information needed to drive future evolution in the relevant fields. For this purpose, appropriately selected, summarized, and presented "history and trivia" may be useful for foreseeing the future of the relevant fields and preparing the resources needed for that evolution.
--Clusternote (talk) 15:20, 20 January 2012 (UTC)
My thought was mainly that if there aren't some limits on what's worthy of being mentioned, plenty of other marginal things are also eligible (Alpha Syntauri, ASI Synthia, Casiotone 1000P, Wersi...). These verifiably fit the given definitions of additive synthesis, but they're not really notable, and including them doesn't help the article. I think Wikipedia tends to discourage this sort of thing (see Wikipedia:Listcruft). So it should probably stick to ONLY the technologically or historically significant models (i.e. not the Seiko or other stuff that no one cares about). (68.58.251.67 (talk) 06:40, 21 January 2012 (UTC))

Table format

(Moved the table and reflist from here to the article) Olli Niemitalo (talk) 21:21, 31 January 2012 (UTC)

Testing the table format. Feel free to edit directly. Olli Niemitalo (talk) 18:41, 20 January 2012 (UTC)

A couple of comments on the table: Personally, I had in mind to exclude all of the historical precursors from the table (i.e. almost everything but Alice and DOB that are in there now) and keep these in a prose discussion of historical precursors. Otherwise we have a demarcation problem (and, as an IP user points out above, even the RMI is dubious under the page's definition of additive, irrespective of what synthmuseum says, which is not exactly what WP calls a reliable source anyway). Another issue is that in its current form (focus on prototype vs. commercial) it is not neutral. It focuses on hardware, and it assumes everything that wasn't commercial was a prototype for later commercialisation -- I don't see how the Bell Labs software research (and also research at other places) in the 60s can fit into this column format, yet it should be there (imho). Maybe "Research Implementation" and "Commercially Available" are ok column names? What do you think?
Ross bencina (talk) 09:00, 22 January 2012 (UTC)
Changed to "Research implementation" and "Commercially available". Does that help the software research to fit in and are there other changes that can be made to that effect? Olli Niemitalo (talk) 13:41, 22 January 2012 (UTC)
About RMI Harmonic Synthesizer: Any machine-generated sinusoid is an approximation. I haven't seen the RMI schematics, but if the RMI generates an acceptable approximation of a sinusoid for each of the user-controlled harmonic amplitude sliders, it doesn't matter by what means the approximation is generated. The boundaries of what is an acceptable approximation for it to count as an additive synthesizer must depend to some extent on the alternatives available at the time (within the price range). How close to true sinusoidal harmonics combining 16 Walsh functions gets is limited only by the quality of the implementation and the following band-limiting filter. If the Synth Museum does not count as a reliable source, then that would be a much better reason for exclusion of the RMI, in my opinion. I'm moving my older comment about historical precursors below. Olli Niemitalo (talk) 13:41, 22 January 2012 (UTC)
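For the curious, the Walsh-function point can be made concrete. The sketch below is my own construction, not the RMI circuit: it builds a 32-point Walsh basis from a Sylvester Hadamard matrix and projects one cycle of a sine onto it. With the full orthogonal basis the reconstruction is exact; a hardware mixer like the one described would keep only a handful of the largest terms, followed by a band-limiting filter.

```python
import math

def hadamard(n):
    """Sylvester construction; n must be a power of two.
    Rows are (Hadamard-ordered) Walsh functions taking values +/-1."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

N = 32
H = hadamard(N)
target = [math.sin(2 * math.pi * n / N) for n in range(N)]

# rows are orthogonal with squared norm N, so coefficients are inner
# products divided by N
coeffs = [sum(t * h for t, h in zip(target, row)) / N for row in H]
approx = [sum(c * row[n] for c, row in zip(coeffs, H)) for n in range(N)]

err = max(abs(a - t) for a, t in zip(approx, target))
assert err < 1e-9   # full-basis reconstruction is exact
```

Truncating to the largest few coefficients gives the coarse, stepped "sines" discussed above, which is consistent with the 32-step description.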
I'm having second thoughts on the categorization of Telharmonium and the Hammond organ. I've been doing some reading and it is becoming apparent that the makers of these machines went to great pains to get sinusoidal oscillator waveforms. The guy who invented Telharmonium, Thaddeus Cahill, iteratively shaped the alternator gears until the waveform was right (Weidenaar, Magic Music on the Telharmonium). Quoting: "The composer of the past has been like the chemist or alchemist of ancient times, who could use in his combinations some few compound bodies only. The composer of the future will have in the sinusoidal vibrations of electrical music those pure elements out of which all tone-compounds can be built; not merely the known and approved tones of the orchestra, but many shades and nuances heretofore unattainable." (Thaddeus Cahill, 1907). Wouldn't it be unfair to list these as anything else than true additive synthesizers? Where do we draw the line? As compared to analog and digital machines, real organs function by principles of traditional musical instruments, so perhaps they only should be named historical precursors (of those that we know of). Olli Niemitalo (talk) 00:35, 18 January 2012 (UTC)

Olli, why don't you put this into the main-namespace article? You can fine-tune the timeline as it lives in the article. Since I made no contribution to it, I don't think I should do it. 70.109.180.116 (talk) 19:50, 31 January 2012 (UTC)

Done, finally! Actually found your comment here at a last-minute check before submitting the changes. :-) There might still be something in the old version of the timeline section, if you want to check. Olli Niemitalo (talk) 21:21, 31 January 2012 (UTC)
Congratulations on the improvement of the article!
Although I have withheld my opinion for the above-mentioned reason, at the least I'll add the reference to Subharmonic synthesis later.
Doesn't Trautonium have saw wave oscillators? We seem to have a consensus that sine wave partials are required for inclusion. Olli Niemitalo (talk) 10:19, 1 February 2012 (UTC)
Oh, sorry, I had almost forgotten this slightly irrational local consensus during several efforts to communicate with other researchers in the field (I gained several new viewpoints; I'll try to describe them later). Although I still need to verify several points about the Trautonium, the waveform of its oscillator probably varied between models. Certainly the earliest version and the Telefunken product seem to have used sawtooth oscillators (influenced by the Heinrich Hertz Institute for Oscillation Research). The gong-sound synthesis in 1942 probably used another method (additive synthesis based on nearly sinusoidal waves, individual formant filters, or possibly a ring modulator, which also needs sine waves to avoid a muddy sound). And the Moogtonium in 1967 clearly used Moog's oscillators. I think this needs more research and verification. --Clusternote (talk) 19:00, 1 February 2012 (UTC)
P.S. The Subharchord, a GDR variant of the Trautonium from the 1960s, is also a suitable example. I don't know whether it meets the very particular criteria on this page, but it is generally recognized as an application of subharmonic synthesis, which is almost equivalent to additive synthesis. The essential difference of subharmonic synthesis from additive synthesis is that, on this instrument, the kth overtone in the normal sense is taken as the fundamental. If this instrument does not meet the criteria on this page, then the criteria are probably impractical in the normal sense. (In truth, I checked, verified, arranged photographs for, and extended almost all the articles about the instruments referenced on this page a long time ago. I would be very happy if you checked the related articles before claiming anything...) --Clusternote (talk) 21:44, 1 February 2012 (UTC)
At Musikkteknologidagene 2011 at NTNU in Trondheim, Norway, Gerhard Steinke, the leader at the time of the laboratory in which Subharchord was developed, gave a presentation titled The Subharchord Story. A quote from the abstract explains the method of tone generation: "subtractive sound formation from oscillations with harmonics of high order and sounds by variable formant filters feeded with saw tooth and meander oscillations formats — most at all up to four subharmonic sounds which can be summarized to a multi voice mixture.[sic]". So it is not sinusoidal additive synthesis. Olli Niemitalo (talk) 23:31, 1 February 2012 (UTC)
Thanks, I'll check its details (described in Norwegian) later. It also seems to be a kind of additive synthesizer combined with subtractive synthesis.
By the way, after eliminating the Trautonium, this historically significant family of some of the earliest "electronic synthesizers", how do you plan to categorize it, especially on the English Wikipedia? Last month we succeeded in giving a rational and stable position to the ANS synthesizer, which outside Russia had been an almost orphan article until last year, even though in Russia it has often been mentioned as "(one of) the first synthesizers", developed from the late 1930s. Similarly, the Trautonium family needs a rational and stable position in the history of the synthesizer. --Clusternote (talk) 05:23, 2 February 2012 (UTC)
They may merit categorization as non-sinusoidal variants of additive synthesis (like some organs) or as non-sinusoidal variants of subharmonic synthesis (possibly like some organs). I'm only saying that I don't think any of those should be in the timeline. Olli Niemitalo (talk) 11:38, 2 February 2012 (UTC)
I've found your source and checked it.
With the „Subharchord“ could be generated beside a melody voice – derived by means of subtractive sound formation from oscillations with harmonics of high order and sounds by variable formant filters feeded with saw tooth and meander oscillations formats – most at all up to four subharmonic sounds which can be summarized to a multi voice mixture. The subharmonic frequencies can be combined in various manners in the dividing ratios ½ up to 1/29 and allow multifold sound structures.
They inspired to the protected trade mark „Subharchord“.
–Gerhard Steinke, The Subharchord Story
A typical configuration of a frequency-divider organ using a transformer-divider
I think the Subharchord's sawtooth-wave source may be a "transformer-divider", often used in the 1930s–1950s to reduce the amount of vacuum-tube circuitry. It was utilized in several frequency-divider organs, including the "Baldwin Electronic Organ" developed by Winston E. Kock in 1941.
According to the source, the Baldwin divider seems to produce not only a sawtooth wave but also a square wave by means of an "outphaser" method.
And possibly its German precedent, the "Vierling and Kock Organ" developed by Vierling and Kock at the Heinrich Hertz Institute ca. 1936, may also have used a transformer-divider. It was said to also produce sinusoidal waves (probably using an external filter).
Another frequency divider, the digital divider implemented with flip-flops and typically used in electronic organs and paraphonic synthesizers, also seems to have been used in subharmonic synthesizers, along with filters. (Recent products possibly use more flexible digital pitch-shifter technology based on the phase vocoder and formant filters.)
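The flip-flop style of frequency division mentioned above is easy to sketch. This is a hypothetical illustration (not based on any documented Subharchord circuit): square-wave subharmonics at f0/2 and f0/3 are derived from a master oscillator's phase by counting completed master cycles.

```python
import numpy as np

def subharmonic_square(master_phase, k):
    """Divide the master oscillator's frequency by k, flip-flop style:
    count completed master cycles and emit a square wave whose period
    is k master cycles."""
    cycles = np.floor(master_phase / (2 * np.pi)).astype(int)
    return np.where((cycles % k) < k / 2, 1.0, -1.0)

fs = 48000.0                          # sample rate (Hz), assumed
n = np.arange(4800)                   # 0.1 s of signal
f0 = 440.0                            # master oscillator frequency (Hz)
phase = 2 * np.pi * f0 * n / fs       # running phase of the master oscillator

sub2 = subharmonic_square(phase, 2)   # one octave below (f0/2)
sub3 = subharmonic_square(phase, 3)   # a twelfth below (f0/3)

# Summing subharmonics gives the raw "multi voice mixture"; a real
# instrument would then shape it with formant filters.
mix = 0.5 * sub2 + 0.5 * sub3
```

Mixing such subharmonics and shaping them with formant filters is, roughly, the signal path Steinke describes.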
 
If we exclude these technologies (i.e., pseudo-sine-wave generation using filters, as an alternative to direct sinusoidal oscillation, which was almost impractical in analog circuits), then almost all efforts in "electronic analog additive synthesis" (including electronic organs as earlier additive synthesizers, and the analog synthesizers that followed) might be unnaturally ignored. As a result, an irrationally long gap between the 1930s and 1950s would persist in the timeline.
In conclusion, sticking to "direct sinusoidal oscillation" in the timeline results in almost incorrect information. I clearly disagree with it. --Clusternote (talk) 12:29, 3 February 2012 (UTC)
I think we have established a clear basis for a focus on sinusoidal-oscillator-based additive synthesis, based on the literature. This is not a catch-all page for all electronic instruments that might vaguely resemble additive synthesis. This is an article about *additive synthesis* the *technique*. As described on this page, the technique uses sinusoids. On this basis, I think there is not a valid argument to expand the list to non-sinusoidal additive synthesis. Personally I do not even think that the ANS fully conforms to this requirement -- because the control mechanism is not directed at directly controlling the oscillators. Certainly I do not think that an electronic organ is an additive synthesizer in any useful sense. At best these are *precursors*. Ross bencina (talk) 13:17, 3 February 2012 (UTC)
  • As a result, the consensus you assume has not yet been formed; your words are merely your own opinions. (Note: I know your word "seem" shows your honesty.)
  • Also, your last post (shown above) shows exactly what I feared about your plan (as already pointed out at your first proposal). You are trying to eliminate historically important models (one of the earliest families of electronic synthesizers) without a well-formed consensus or enough consideration of the overall balance of related fields on the English Wikipedia. In general, excessive clean-up efforts tend to end in unproductive deletion discussions. That's why I always adopt inclusionism, and why I proposed keeping an intentionally chaotic section, to avoid wasting time.
--Clusternote (talk) 05:46, 3 February 2012 (UTC)
In addition, I think several additional columns are essentially indispensable, as follows. I'd be glad to read opinions on this.
--Clusternote (talk) 02:55, 1 February 2012 (UTC)
Because there can be a lot of text in the Description column, we should not have too many other columns with textual contents. If we do, for lower horizontal screen resolutions, the description column will be spread out on a larger number of lines, and that scatters the contents of the table vertically. That is visually displeasing and makes it harder to get an overview of the table if the vertical screen resolution is low. I wouldn't want to make the font smaller either, to ensure readability. An alternative would be to expand somewhat some of the descriptions to accurately explain the significant additive capabilities and the working method, control method, and technology of the synthesizer. I don't think we can easily get rid of the textual description, because it may become hard to quote statements indicating notability with simply a list of properties. And a statement indicating notability may refer to the properties as well. Adding additional columns would unnecessarily repeat that information. Also, moving the textual descriptions to prose outside of the table would to some extent defeat the purpose of the table.
I've known about several issues with, and countermeasures for, the <Table/> tag for over a decade (since I developed one of the earliest rich-client web applications, similar to today's Google suite). Technically, space-consuming columns, including (Research implementation or publication) and (Commercially available), can be shrunk with some trade-offs (or compromises). And if the table is widened by the addition of columns, it can be presented more rationally by sorting the rows by importance and adding a "scroll" attribute (see the sample at Fairlight CMI). As you said, inverting columns and rows could also be another solution, if we develop a new template to avoid sacrificing ease of editing. I believe we shouldn't sacrifice ease of understanding of the contents because of limits (or defects) of the W3C-standard <Table/>, several of whose defects have persisted since its birth. --Clusternote (talk) 19:00, 1 February 2012 (UTC)
Just a note that I do not suggest swapping of rows and columns. Olli Niemitalo (talk) 21:16, 1 February 2012 (UTC)
Sorry; since that is a very common solution to your points, I misread your comment as if you had already considered it. Anyway, several additional columns can be handled rationally with a few trade-offs. The most important thing is not tweaks to the table format, but ease of understanding for readers, and then ease of editing for writers. --Clusternote (talk) 22:07, 1 February 2012 (UTC)
As an alternative to text, I have been thinking of small icons such as , signifying different properties. These would not take much space. Could be a bad idea stylistically and it would similarly repeat information already found in the descriptions.
That's a nice idea. --Clusternote (talk) 19:00, 1 February 2012 (UTC)
From the point of view of history of additive synthesis, which is the focus of the section, do the non-additive capabilities of the synthesizers matter? I think not. They have no value for comparison purposes, because we are not going to list multiple synthesizers that share the same additive capabilities but differ in their other capabilities. If we do explain other features of the synthesizers, valid reasons might be to avoid misrepresenting the synthesizers as just additive synthesizers, and to enable the reader to better imagine the synthesizer. However, for most of the synthesizers we do have links to pages that will describe them in full detail. Perhaps changing the wording of some of the descriptions to something like "A synthesizer that, among other features, was capable of additive synthesis..." would be enough. Olli Niemitalo (talk) 10:19, 1 February 2012 (UTC)
Currently I'm trying to obtain advice from researchers in this field, to defuse the situation more rationally, and I expect them to post on this page someday. Although I have not yet received any comments on this specific article, I'll try to introduce their general opinions and their advice to me, as far as I understand them, in the following:
(MOVED: see #Interview to other researchers for details)
Note: I recognize that their opinions are often biased by their own research and do not always represent the average opinion of all researchers across the whole period. I introduce their opinions merely because they represent several trends of research after 1990 and will probably lead future evolution of this field. Such future-oriented information is inherently important for an educational article aimed at future researchers/engineers/musicians/amateurs.
BTW: I've almost lost intellectual interest in the Wikipedia project, due to its lack of quality, after fixing several essential articles in the field. I'm sorry, but I have no interest in debates of self-assertion without rationality (like a certain IP user's), essentially lacking future-oriented viewpoints. --Clusternote (talk) 19:00, 1 February 2012 (UTC)
By additive synthesis, most authors refer to classical sinusoidal additive synthesis of sounds. The main focus of the article as it currently is reflects this. However, I think there is room for an additional section that discusses non-sinusoidal additive synthesis, a concept introduced in section definitions. For example, granular synthesis could be mentioned there. In section speech synthesis a subsection might be added for that kind of stuff, or it could all go into the additional-to-be section. Olli Niemitalo (talk) 21:16, 1 February 2012 (UTC)
I would favour avoiding defining too many peer terms. This article does not have to discuss the world, just Additive synthesis. I question the relevance of mentioning granular synthesis in this article. Comparison is probably better located in Synthesizer. I am tempted to just delete the speech synthesis section if no one can defend it. I'll create a new section on talk to discuss. Ross bencina (talk) 04:06, 4 February 2012 (UTC)
Who are the "most authors" you refer to? As seen in the previous discussion, Ross clearly is not among them, Charles is clearly neutral, and only the dear IP user claims it; their effort to form a consensus has been temporarily suspended for several reasons (probably to avoid an endless unproductive discussion similar to my experience).
"Ross is clearly not" -- please don't state my position for me. Actually I agree with Olli. I think most authors do only refer to adding sinusoids. The counter-examples I have found are not concerned with defining the technique, but with creating "umbrella categories" as I discuss in #Criticism of the introductory paragraph. In my view this article is about the synthesis technique (that's what the lede says). To put it simply: there are *two* meanings (or usages). It is pointless trying to argue that one meaning includes the other; they are separate -- note that dictionaries have multiple definitions for words too. In my view the general "umbrella" meaning is not relevant to this page, because this page is about the *technique*, not the abstract concept. High-level groupings like "subtractive synthesis" vs "additive synthesis", which were popular in pedagogy at one time, are now not so useful, given the expansion of possibilities since the terms were coined. Now we have many different techniques and hybrids. But "additive synthesis" has come to be understood as the technique of adding sine waves together to create spectrally fused timbres. Note that there are other methods that involve adding non-sinusoids to create fused timbres (e.g. there is an example of combining FOF with FM synthesis in the CSound manual, or Spectral music, which uses a whole orchestra to compose fused timbres) -- neither of these is additive synthesis, although you could draw *analogies* to additive synthesis. Ross bencina (talk) 03:52, 4 February 2012 (UTC)
As a result, there is no basis for your previous words. In the near future, we should resume discussion to form a more general consensus, acceptable not only to the writers on this page but also to researchers/implementers/musicians/amateurs and even general readers, while carefully avoiding the unproductive discussion loops that tend to be caused by anonymous users.
Before resuming discussion, I think we need advice or second opinions from external researchers/developers/musicians to avoid intolerant local rules, and we possibly need an administrator's help to normalize the discussion. I don't want to waste time on the impolite big mouth of the math-challenged old student. --Clusternote (talk) 06:44, 2 February 2012 (UTC)
See Talk:Additive_synthesis#Definition_and_scope. Please stop making insulting personal remarks on other editors. Olli Niemitalo (talk) 11:38, 2 February 2012 (UTC)
If you take that as "insulting personal remarks", I'm very sad. In truth, I mentioned it because these have been serious threats to the sound development of this article for a month.
We should again clearly declare that we never allow childish personal attacks, disturbance of discussion, or tendencies toward personal ownership of articles. And we should consider appropriate protocols to avoid the above issues. --Clusternote (talk) 13:35, 2 February 2012 (UTC)

Olli, Chris, Ross? Cluster? Care to mosey on over to instantaneous frequency and clean that up?

Cluster, I cringe at all the effort you're putting into your supplementary notes. It's not a hard concept, but I do see that what ended up at instantaneous phase has confused the issue even more. If you look at the article history far enough, it had a more sensible beginning.

Even though complex numbers and complex analysis are very useful for us electrical engineers and signal-processing practitioners, I wanted to keep the math in additive synthesis away from any complex math expressions. Cluster, you're making it way too hard, too complicated to understand. All you need to remember is that frequency is how rapidly the angle (or "phase") argument of the sin() or cos() function is changing. If the argument is increasing rapidly, it's a high frequency. If the argument is increasing slowly, it's a low frequency. If the argument is decreasing, it's a negative frequency, but for a real-valued sinusoid, that can be rewritten to be equivalent to a positive frequency.

There is a relationship to the analytic signal and the Hilbert transform, but that is an extension. Such a mathematical treatment exists when your signal is just raw data and you want to somehow force it into a sinusoidal form so that you can estimate the instantaneous amplitude and frequency. But if your signal is in the form of r(t) cos(θ(t)), to get the instantaneous frequency, you take the inside of the cos() function and differentiate it w.r.t. t. That's all. Analytic signals and Hilbert transforms are a non sequitur (well, maybe not that, but an unnecessary complication).

Oh, and Olli, you're right, of course. But the backward difference applies to the angle θ(t), not the frequency. More precisely, we would be saying that the discrete-time instantaneous frequency is the backward difference θ[n] - θ[n-1] which is just another way of saying it's the angle increment. Whether it's continuous-time or discrete-time, the frequency of a sinusoid simply is the rate that the angle or argument of the sin() or cos() function increases.
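The angle-increment view in the two comments above can be checked numerically. A minimal sketch, assuming an arbitrary sample rate and a linear chirp purely for illustration:

```python
import numpy as np

fs = 8000.0                        # sample rate (Hz), chosen arbitrarily
n = np.arange(1000)
f_inst = 200.0 + 0.5 * n           # intended instantaneous frequency (Hz)

# Oscillator: accumulate the angle increment 2*pi*f/fs each sample,
# so theta[n] is the running argument of the cos() function.
theta = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.cos(theta)                  # the sinusoid r(t)*cos(theta(t)), with r = 1

# Backward difference of the angle recovers the frequency:
# f[n] = (theta[n] - theta[n-1]) * fs / (2*pi)
f_recovered = np.diff(theta) * fs / (2 * np.pi)
assert np.allclose(f_recovered, f_inst[1:])
```

In continuous time, the backward difference becomes the derivative dθ/dt, matching the description above.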

I don't really want to explain it more here. Don't want to waste my time. But I am considering attacking the instantaneous phase article. Anyone care to help? 70.109.178.133 (talk) 04:20, 28 January 2012 (UTC)

My view on why and what is needed is not much different. I think, if we call these things instantaneous frequency and instantaneous phase, then someone will put those words in the search box and will end up at Instantaneous phase. Either we make the relationship between our definitions and the definitions on that page crystal clear or we make sure that the reader will find our definitions at Instantaneous phase. The latter seems a viable goal even if we stick to analysis-related definitions, and I've been making all the small changes here (positive, slowly changing, backward difference) to facilitate that.
I agree that it is now Instantaneous phase that needs work. For instance, it can well hold the backward difference definition of instantaneous frequency, which we could then directly link to. I've found some nice papers about it. I would have started already, but I'm not very familiar with this stuff and am still reading on it. See you on the other side! Olli Niemitalo (talk) 07:09, 28 January 2012 (UTC)


The mathematics required for additive synthesis is very clear if enough prerequisites are presented beforehand. I've already presented most of the prerequisites in "Supplemental note on inharmonic discrete equations (rev.3.2)"; thus, most people who have studied undergraduate mathematics for physics or informatics can understand them very easily.

Dear 70.109.178.133, this article is not only for you, but for all writers and readers. If you still feel that the essential equations in the supplemental note are too complicated then, very sad to say, you probably have not studied enough of the undergraduate mathematics underlying them. You should not bother the excellent people here with your cryptic comments while disregarding your own insufficient study of mathematics. Also, stop the provocation based simply on your very strange wishful thinking (probably you will always be told so).

Almost forty years ago I studied electronics, at six years old, and I understood that most of the basics underlying electronics were not explained by electronics itself; thus I majored in physics and informatics to understand them more deeply. --Clusternote (talk) 13:11, 28 January 2012 (UTC)
Clusternote, with all due respect, I think you are completely out of line with your comments. I can understand exactly what 70.109.178.133 is saying and I have no expert qualification in physics nor informatics. I agree with 70.109.178.133 that you are overcomplicating things without any benefit. For people with your level of expertise and training I think the connections to the complex domain would be completely obvious without you obfuscating what is currently a crystal clear explanation in the domain of the Reals. Ross bencina (talk) 13:25, 3 February 2012 (UTC)
Ross, please remember that the original idea for these equations was not mine, but came from the several series of IP users who presented them as "Theory" without any reliable sources. However, their "Theory" had clear defects: they confused the static phase offset with the time-varying, instantaneous phase, and as a result the verification of the equations clearly failed. (Their "equation modifications" also had several other issues, including a "phase absorption" that is theoretically redundant and almost meaningless.) Yet these IP users could not recognize these simple defects. At first I expected the mathematician to act on it; however, he overlooked the issues. Therefore I finally rewrote their incorrect, unsourced equations into more accurate, rational forms with a possibly reliable source (Smith 2011). The result is merely a supplemental note for general readers who have studied undergraduate mathematics. By the nature of a "supplemental note", it contains all the electrical-engineering-specific notions needed for the equations. It is not the overcomplication you imagine, in any sense.   That's all.
Personally, I have had no interest in their overly simple model since the late 1970s, because to me it is an almost impractical first approximation. sincerely --Clusternote (talk) 03:39, 4 February 2012 (UTC)

Interview to other researchers

Currently I'm trying to obtain advice from researchers in this field, to defuse the situation more rationally, and I expect them to post on this page someday. Although I have not yet received any comments on this specific article, I'll try to introduce their general opinions and their advice to me, as far as I understand them, in the following:

  • They seem not to distinguish so strictly between speech-synthesis-oriented research (by scientists of informatics/physics/vocology) and music-oriented research (by contemporary musicians/researchers of computer music). They probably share the results of each field almost equivalently. (Note: in this sense, an article restricted to additive synthesis using only sinusoidal waves and only for music should be titled "Sinusoidal additive sound synthesis" or "Sinusoidal modeling (sound synthesis)".)
  1. From the scientific musician's viewpoint, whether synthesis is purely sine-wave additive or not is not so important for creating music, especially when arbitrary filters are used at the same time.
  2. Instead, the nonlinear characteristics of human ears and of transient sounds are (possibly) dominant in advanced sound design. (Note: this implies that the sinusoid is possibly not the ideal basis function for sound analysis/additive synthesis, from the viewpoints of psychoacoustics and transient-sound modeling.)
  3. For "sinusoidal modeling" (an advanced form of sinusoidal additive synthesis, consisting of analysis/modification/re-synthesis), Serra–Smith and McAulay–Quatieri are recommended. [ADDED] After Serra–Smith's work, Scott N. Levine's work at CCRMA is important (possibly as speech synthesis).
  4. For speech-synthesis researchers, sinusoidal modeling has become an almost out-of-fashion theme, or at least a lower-priority one. For practical sound synthesis, several extensions beyond pure sinusoidal modeling are inevitable, such as the "harmonics+noise+transients" model of Serra & Smith.
  5. On the other hand, from the viewpoint of one scientific musician, "sinusoidal modeling" extended with "nonlinear analysis" and "quasi-linear parameter conversion" is significant (at least for his research group). In my view, he seems to base this on psychoacoustics and on Pierre Boulez's viewpoint. He says he currently plans to publish his research.
  • [Others] I also tried to obtain advice on the following themes; however, I have not yet fully understood the replies. Once I have settled on an interpretation, I'll try to share it here, and also try to verify it against sources.
    6.  Pythagorean harmony, or more sophisticated just intonation, as the theoretical background of the pre-Fourier precursors of additive synthesis
    7.  The relevance of categorizing Iannis Xenakis's UPIC as IFFT-based additive synthesis
    (Note: the UPIC of the 1970s and the later MetaSynth have a kind of spectrogram-based additive synthesis similar to the ANS synthesizer. And its legendary precursor, developed in the 1960s and exhibited at Expo '70, possibly had a similar feature.)

Note: I recognize that their opinions are often biased by their own research and do not always represent the average opinion of all researchers across the whole period. I introduce their opinions merely because they represent several trends of research after 1990 and will probably lead future evolution of this field. Such future-oriented information is inherently important for an educational article for future people (researchers/engineers/musicians/amateurs, etc., and general readers). --Clusternote (talk) 14:28, 2 February 2012 (UTC)
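The analysis/modification/re-synthesis pipeline behind item 3 can be sketched for a single frame. This is a toy illustration only: all parameters (sample rate, partial frequencies, peak threshold) are chosen arbitrarily, and none of the frame-to-frame partial tracking that McAulay–Quatieri or Serra–Smith actually perform is included.

```python
import numpy as np

fs = 8000           # sample rate (Hz), arbitrary
N = 1024            # one analysis frame
t = np.arange(N) / fs

# Test tone: three harmonic partials (500, 1000, 1500 Hz).
x = (1.00 * np.sin(2 * np.pi * 500 * t)
     + 0.50 * np.sin(2 * np.pi * 1000 * t)
     + 0.25 * np.sin(2 * np.pi * 1500 * t))

# Analysis: pick spectral peaks in one windowed FFT frame.
win = np.hanning(N)
spec = np.fft.rfft(x * win)
mag = np.abs(spec)
peaks = [k for k in range(1, len(mag) - 1)
         if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]
         and mag[k] > 0.1 * mag.max()]

# (Modification would happen here, e.g. scaling the amplitude or
# shifting the frequency of individual partials.)

# Re-synthesis: one sinusoidal oscillator per detected peak, using the
# amplitude, frequency, and phase estimated from the spectrum.
y = np.zeros(N)
for k in peaks:
    amp = 2 * mag[k] / win.sum()       # undo the window's gain
    freq = k * fs / N                  # bin index -> Hz
    phase = np.angle(spec[k])
    y += amp * np.cos(2 * np.pi * freq * t + phase)
```

Here the partials fall exactly on FFT bins, so the frame is reconstructed almost perfectly; real signals need interpolation of the peak positions and tracking of partials from frame to frame.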

It is important to note that Wikipedia pages are not supposed to be "opinions of researchers"; they should be based on *reliable* *secondary sources*. This means edited books by reputable publishers, and peer-reviewed published research. I have posted a number of quotations supporting the direction that the article currently takes. If you believe it should take a different interpretation, please post quotes and citations from other references that suggest a different approach. This page is not primary research. It should not include information just because you, or anyone else, thinks it should; it should include only information that is based on *existing* *reliable sources*. Ross bencina (talk) 13:36, 3 February 2012 (UTC)
Thanks for your comment, Ross. As already explained, this is mainly a progress report on my efforts to obtain posts from external experts in this field other than you. (Note: I personally recognize you as an expert in this field, judging from your posts.)
Most of their points above are reasonable from their respective viewpoints, in which they have attempted to improve additive synthesis as a technique by incorporating:
items *, 2, 3, 4:  more recent speech-synthesis-oriented results already shared in their computer-music community (published by CCRMA, etc., as mentioned in the earliest discussion),
items 2, 5:  the nonlinear response of human ears, well known in psychoacoustics but not yet sufficiently reflected in additive synthesis (publication in preparation).
And if we can regard their opinions as examples of the most recent trends in this field after 1990, my earliest proposal may take a rational place in this article. I proposed it to supply advanced information on the more practical, advanced synthesis used in related fields (including speech synthesis), for readers who need more than the old toy programs often explained in beginners' books.   If you think this direction is inappropriate as an explanation of recent trends, please show your interpretation of it.   Or possibly I might have missed your posts about it...
On your phrase:   I have posted a number of quotations supporting the direction that the article currently takes.
Sorry, I couldn't grasp the exact meaning of the phrases following the above, due to my limitations. I'd be glad if you explained plainly, using examples or links to your previous posts (i.e., section links using [[#section]] notation, or revision links available in the history). best regards, --Clusternote (talk) 02:20, 4 February 2012 (UTC)
I am referring to the discussion in #Criticism of the introductory paragraph where I cite two classic Computer Music textbooks -- there I explain my view on the textbook's approach. I recommend that we continue the discussion there.
As to your other points about current research: additive synthesis is a stable concept that can be traced to the 60's (e.g. Beauchamp). I think any modern variants need to be qualified for *notability* before being included in this article. Especially, I don't think that "soon to be published research" is likely to be notable.
Even earlier, to 1957 "The electrical production of music" by Alan Douglas. He categorically lists three methods of forming musical tone-colours: additive synthesis, subtractive synthesis, and other forms of combinations. Olli Niemitalo (talk) 12:37, 4 February 2012 (UTC)
In my opinion, Sinusoidal Modelling and other related techniques (e.g. bandwidth enhanced sinusoids like Lemur, Loris) are extensions *based on* or *inspired by* additive synthesis and do not strictly belong in this article. Spectral_modeling_synthesis (SMS) already has its own article so we don't need to dwell on it here -- although in my view it might be better to have an article on "Spectral Modelling (audio)" that doesn't focus solely on SMS (a very specific approach to spectral modelling). My main point is: some of this material is important and notable but I think we need to avoid trying to cram too much into Additive synthesis.
As for Additive_synthesis#Relation_to_speech_synthesis this is not my area. But to me that section does not read well. I know what LPC is and it doesn't have a strong relationship to additive synthesis -- it is fundamentally a subtractive synthesis technique. Likewise, the "Sinewave synthesis" mentioned in that section appears to use sinewaves to synthesize formant regions, not partials, which makes it only very vaguely related. None the less, I am told that additive synthesis is used in some modern singing synthesizers (e.g. Vocaloid) so there is possibly still some relation to speech that could be explored. Ross bencina (talk) 03:30, 4 February 2012 (UTC)

Proposal: delete Relation_to_speech_synthesis section

I propose that we delete Additive_synthesis#Relation_to_speech_synthesis.

Its only justifiable reason for existence appears to be to host a reference to Sinewave_synthesis, which actually has nothing to do with additive synthesis. LPC also has nothing to do with additive synthesis.

It may be that additive synthesis was used in the 70's for speech synthesis. However, the Remez citation does not provide a link. Can someone verify it? I would question whether the fact that additive synthesis was used for speech synthesis is notable enough to reference here. If it is, we may be better off with an "Applications" section rather than focusing on speech synthesis.

Ross bencina (talk) 04:12, 4 February 2012 (UTC)

Support. The section is very short and it seems like undue weight. The content of that section can be integrated into the rest of the history or applications text. --Ds13 (talk) 04:17, 4 February 2012 (UTC)
Oppose deletion, because a large amount of recent improvement in additive synthesis possibly relies on speech synthesis (this is the issue discussed in the section above).
  • Regarding the citation issue, I will try to clarify it and report back later, even if I fail.
  • LPC provides a good example of yet another analysis approach for additive synthesis, besides the "harmonic analysis" known as the Fourier transform; in this analysis, characteristic frequency peaks (formants) are traced as trajectories, and a comparatively small number of oscillators (or filters) is used efficiently in re-synthesis. I think it is an interesting approach.
LPC has *nothing* to do with additive synthesis. It is a subtractive technique. It involves deriving an all-pole filter from the source. It is not "another approach of additive synthesis" -- here I am afraid you are completely in error, Clusternote. Ross bencina (talk) 01:36, 5 February 2012 (UTC)
    You have completely misunderstood the story. LPC mainly corresponds to the "analysis phase" of analysis/re-synthesis-type additive synthesis. As the re-synthesis part, sinewave synthesis is used, in the narrower sense of additive synthesis.
    I have already come to recognize, by discussing this page over the last month, why people often say that Wikipedia is unreliable due to a lack of appropriate human resources on each article. Several people without sufficient undergraduate-level mathematics repeatedly claimed that the early Theory section was "absolutely exact!" (in truth, it apparently confused "static phase offset" with "instantaneous phase", as one example among its numerous errors), or claimed "this statistic is reliable!" (in truth, it lacks two pieces of information essential for any reliable statistic -- details of the statistical population and the rationale for the statistical sampling; in this case the latter especially should be easy to show, yet, irrationally, it has not been provided :o). In a situation where uncertain discussion is often carelessly approved by a number of uncertain approvers, it is difficult to improve the essential quality of an article.
    Probably, most people who need practical knowledge beyond the classical toy program will have to wait until this article is someday radically improved from the viewpoints of proper music historians, advanced academic computer musicians, musical instrument developers, DSP specialists, etc. --Clusternote (talk) 23:47, 5 February 2012 (UTC)
Cluster, you should google Olli's or Ross's name. See what you get. Dunno Chris Johnson but he's this guy. I'm not gonna tell you my name but I will tell you that, since you have arrived and tried changing it, I have twice corresponded with Julius Smith about this article and cites of his online pubs, and we have known each other for nearly 2 decades. I have also designed synthesis algorithms (the DSP code) for what I believe is the 4th largest hardware synthesizer manufacturer and music processing algorithms for a variety of different companies that all put out real products that work in practical contexts. The experts are now here.
And it was only you who has been confused regarding phase versus instantaneous phase or frequency. And for the most part, it was you who has been tossing around personal insults, truly underestimating the expertise of the people you've insulted when you don't know diddly about their background, and overestimating your own knowledge, understanding, and expertise in this subject. 70.109.179.87 (talk) 04:03, 6 February 2012 (UTC)
Thanks for your advice.
  • On the mathematics: if you still do not consent to the very simple and rational equations in the Supplemental note (rev.3.3), probably the main issue is an underlying, not yet revealed lack of reliable sources supporting your idea. That is merely your own business, not related to me :)
  • As for the personal attributes of participants, I am never interested in anything except the skills shown in discussion. As a physicist, I always try to discuss the subject sincerely, and if insufficient equations or inadequate statistics are presented, I always point out those defects. It is nothing more than confirmation of the facts revealed in the discussion.
BTW: Recently, very radical research and development, slightly overlapping the analysis aspect of analysis/resynthesis additive synthesis, is often seen around me in the context of expressive "vocal synthesis" and "source separation". These have been very exciting movements during the last two decades, and someday I want to share them with you in a more appropriate form. It is merely my sincere wish.
--Clusternote (talk) 13:32, 6 February 2012 (UTC)
I have removed references to LPC from the article. Remez and Rubin do write that they used LPC as the source of the formant frequency and amplitude data in their additive synthesis work. But that is not saying that they created an LPC decoder. Olli Niemitalo (talk) 05:39, 5 February 2012 (UTC)
  • In general: if we continue to narrow the definitions and topics like this, the remaining readers/writers may eventually want to read/edit another, more generic article on the "generic additive synthesis family and related synthesis". Only if you plan to create such a generic article, containing the topics excluded from this one, could your deletion proposal possibly be justified. --Clusternote (talk) 04:56, 4 February 2012 (UTC)
Support. Historically, starting with the work of Helmholtz in the latter half of the 19th century, the relationship between additive synthesis and speech research was very strong. The history section (the prose above the timeline table) can be expanded, so a yet undetermined portion of this stuff can go there. I intend to work on the history. An Applications section is probably the best place to discuss modern-day applications (in musical instruments and in speech synthesis). Olli Niemitalo (talk) 10:29, 4 February 2012 (UTC)
Support. I was dubious about that from the beginning. The speech synthesis I was mostly familiar with was either something with diphones (more like granular synthesis) or plain-vanilla subtractive synthesis with added filtered noise for fricatives. But I don't know firsthand that no speech synthesizer is additive. As more controls get added to the model, it might be that additive is the best way to do it. 70.109.179.87 (talk) 22:22, 4 February 2012 (UTC)

Deleted the section. Olli Niemitalo (talk) 20:56, 5 February 2012 (UTC)

Section on additive analysis/resynthesis.

copied from above:

BTW: Recently, very radical research and development, slightly overlapping the analysis aspect of analysis/resynthesis additive synthesis, is often seen around me in the context of expressive "vocal synthesis" and "source separation". These have been very exciting movements during the last two decades, and someday I want to share them with you in a more appropriate form. It is merely my sincere wish.
--Clusternote (talk) 13:32, 6 February 2012 (UTC)

Ya know, Cluster, you have a point there with the utility of a section on analysis and resynthesis of sampled sounds and existing musical instruments. And that Horner/Beauchamp graphic would be a nice thing in that section. Just FYI, this analysis takes as an input, x[n] and converts it to rk[n] and fk[n], perhaps at a reduced sample rate. It goes into the same synthesis depicted in the article where rk[n] and fk[n] go in and y[n] comes out. And you may have some error metric between x[n] and y[n] that the analysis stage is trying to minimize.
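To make the above concrete, here is a minimal sketch of the synthesis half of that pipeline -- a hypothetical illustration only, not code from the article or from any of the cited methods. The function name oscillator_bank and the array shapes are assumptions; rk[n] and fk[n] are the per-partial amplitude and frequency envelopes named in the paragraph above:

```python
import numpy as np

def oscillator_bank(r, f, sample_rate):
    """Additive resynthesis from analysis data.

    r : array of shape (K, N) -- amplitude envelopes rk[n] for K partials
    f : array of shape (K, N) -- frequency envelopes fk[n] in Hz
    Returns y[n], the sum of K sinusoids whose amplitude and
    frequency vary over time. Phase is accumulated per partial
    (a running sum of frequency), so the result is phase-continuous
    even when fk[n] changes.
    """
    r = np.asarray(r, dtype=float)
    f = np.asarray(f, dtype=float)
    # Instantaneous phase: discrete-time integral of frequency.
    phase = 2.0 * np.pi * np.cumsum(f, axis=1) / sample_rate
    return np.sum(r * np.sin(phase), axis=0)  # y[n]
```

An analysis stage producing rk[n] and fk[n] from an input x[n] could then be judged by an error metric such as np.mean((x - y)**2) between the original x[n] and the resynthesized y[n], which is the quantity the analysis would try to minimize.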

Now the funny thing is, that this is still mostly an academic thing (like the Portnoff or McAulay-Quatieri method or the "heterodyne oscillator" that Horner and Beauchamp use) but there might be products that disassemble a sound (speech, bell, fiddle, whatever) into partials that are worth noting in the article. All this should go into a new section. Would you like to do that? 70.109.179.87 (talk) 22:43, 6 February 2012 (UTC)