The original description page was here. All following user names refer to en.wikipedia.
Information entropy of a Bernoulli trial X. If X can assume the values 0 and 1, the entropy of X is defined as H(X) = -Pr(X=0) log2 Pr(X=0) - Pr(X=1) log2 Pr(X=1). The entropy is 0 if Pr(X=0)=1 or Pr(X=1)=1. The entropy reaches its maximum value of 1 when Pr(X=0)=Pr(X=1)=1/2.
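A minimal Python sketch of this binary entropy function, for illustration only (the name binary_entropy and the spot checks below are illustrative, not part of the original page; the 0 * log2(0) = 0 convention is assumed at the endpoints):

import math

def binary_entropy(p):
    """Entropy H(X) in bits of a Bernoulli trial with Pr(X=1) = p."""
    if p == 0.0 or p == 1.0:
        return 0.0  # convention: 0 * log2(0) = 0, so a certain outcome has zero entropy
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

print(binary_entropy(0.5))  # 1.0: the maximum, at Pr(X=0) = Pr(X=1) = 1/2
print(binary_entropy(1.0))  # 0.0: entropy vanishes when the outcome is certain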
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License (http://www.gnu.org/copyleft/fdl.html).
You are free:
to share – to copy, distribute and transmit the work
to remix – to adapt the work
Under the following conditions:
attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
This licensing tag was added to this file as part of the GFDL licensing update. (Creative Commons Attribution-Share Alike 3.0, CC BY-SA 3.0: http://creativecommons.org/licenses/by-sa/3.0/)
The image was copied from wikipedia:en. The original description is the one given above: information entropy of a Bernoulli trial X.