In electrical engineering and mathematics, the parallel sum is a commutative, associative binary operation which derives from the formula for the resistance of resistors in parallel. The parallel sum is represented by an infix operator, which in elementary electrical texts is written as a pair of parallel lines "||"[1][2][3][4] but in the academic literature more usually as a colon ":".[5] David Ellerman claims that the parallel sum is "just as good" as the series sum.
Two-terminal resistor network
$$R = R_1 \parallel (R_{21} + R_{22}) \parallel (R_{31} + (R_{32} \parallel R_{33}))$$

$$R = \frac{1}{\dfrac{1}{R_1} + \dfrac{1}{R_{21} + R_{22}} + \dfrac{1}{R_{31} + \dfrac{1}{\dfrac{1}{R_{32}} + \dfrac{1}{R_{33}}}}}$$

$$R = \frac{R_1 (R_{21} + R_{22})(R_{31}R_{33} + R_{31}R_{32} + R_{32}R_{33})}{(R_{21} + R_{22} + R_1)(R_{31}R_{33} + R_{31}R_{32} + R_{32}R_{33}) + R_1 (R_{21} + R_{22})(R_{33} + R_{32})}$$
The parallel sum of two terms may be defined either as
$$a \parallel b = \frac{1}{\dfrac{1}{a} + \dfrac{1}{b}}$$
or as
$$a \parallel b = \frac{ab}{a + b}$$
The former definition is more readily generalized to more than two terms
$$a \parallel b \parallel c \parallel \cdots \parallel z = \frac{1}{\dfrac{1}{a} + \dfrac{1}{b} + \dfrac{1}{c} + \cdots + \dfrac{1}{z}}$$
but the latter definition has the advantage of remaining valid when one of the two arguments is zero (in which case $a \parallel 0 = 0$).
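As a concrete illustration, here is a minimal Python sketch of the operation; the helper name `parallel` and the resistor values are invented for this example. It uses the $ab/(a+b)$ form, so a zero argument is handled, and applies the operator to the two-terminal network shown above.

```python
# Minimal sketch (helper name and values invented for illustration).
from functools import reduce

def parallel(*terms):
    """Parallel sum of any number of terms, folded pairwise via ab/(a+b)."""
    def par2(a, b):
        return a * b / (a + b)   # valid even if one argument is zero
    return reduce(par2, terms)

# The two-terminal network from the introduction:
#   R = R1 || (R21 + R22) || (R31 + (R32 || R33))
R1, R21, R22, R31, R32, R33 = 10.0, 4.0, 6.0, 2.0, 5.0, 20.0
R = parallel(R1, R21 + R22, R31 + parallel(R32, R33))
print(R)  # ~2.727, the overall resistance of the example network
```

The pairwise fold is justified by the associativity of the operation, noted in the table of identities below.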
For matrix arguments
The parallel sum of two non-singular square matrices may be defined as
$$\mathbf{A} \parallel \mathbf{B} = (\mathbf{A}^{-1} + \mathbf{B}^{-1})^{-1}$$ [5]
This definition may be extended to singular matrices by rewriting it as
$$\mathbf{A} \parallel \mathbf{B} = \mathbf{B}(\mathbf{A} + \mathbf{B})^{-1}\mathbf{A}$$
provided only that the sum of the two matrices is non-singular. If the sum of the matrices is singular, then the parallel sum may still be defined by adopting a particular generalized inverse, such as the Moore–Penrose generalized inverse (Duffin). Alternatively, the two matrices may be considered parallel summable only when the result is the same irrespective of the choice of generalized inverse:[6]
$$\mathbf{A} \parallel \mathbf{B} = \mathbf{A}(\mathbf{A} + \mathbf{B})^{-1}\mathbf{B}$$
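A short sketch of the matrix definition, assuming NumPy is available; `np.linalg.pinv` is the Moore–Penrose generalized inverse mentioned above, and the example matrices are invented. For a non-singular sum it coincides with the ordinary inverse.

```python
# Sketch (assumes NumPy): the matrix parallel sum B (A + B)^{-1} A,
# using the Moore-Penrose inverse so singular sums are also covered.
import numpy as np

def parallel_sum(A, B):
    return B @ np.linalg.pinv(A + B) @ A

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [0.0, 6.0]])
# For diagonal matrices the result is diagonal with entries a||b:
print(parallel_sum(A, B))  # diag(2*4/6, 3*6/9) = diag(4/3, 2)
# Same result via the non-singular definition (A^-1 + B^-1)^-1:
print(np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B)))
```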
For vector arguments
Following Anderson and Trapp,[7] for two vectors $\mathbf{U}$ and $\mathbf{V}$ with non-zero $\mathbf{U} + \mathbf{V}$,

$$\mathbf{U} \parallel \mathbf{V} = \frac{\mathbf{V}^{2}\mathbf{U} + \mathbf{U}^{2}\mathbf{V}}{(\mathbf{U} + \mathbf{V})^{2}}$$
If additionally $\mathbf{U}$ and $\mathbf{V}$ are each non-zero, this can be written as

$$\mathbf{U} \parallel \mathbf{V} = \frac{\dfrac{\mathbf{U}}{\mathbf{U}^{2}} + \dfrac{\mathbf{V}}{\mathbf{V}^{2}}}{\left(\dfrac{\mathbf{U}}{\mathbf{U}^{2}} + \dfrac{\mathbf{V}}{\mathbf{V}^{2}}\right)^{2}}$$
That is, the (generalized) inverse of the vector $\mathbf{U}$ is taken to be $\mathrm{inv}(\mathbf{U}) = \frac{\mathbf{U}}{\mathbf{U}^{2}}$, the smallest vector satisfying $\mathrm{inv}(\mathbf{U}) \cdot \mathbf{U} = 1$.
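A sketch of the vector formula above, again assuming NumPy; the function name and test vectors are illustrative only.

```python
# Sketch: U || V = (V.V U + U.U V) / (U+V).(U+V) for 1-D NumPy arrays.
import numpy as np

def vec_parallel(U, V):
    s = U + V
    return (V @ V * U + U @ U * V) / (s @ s)

U = np.array([3.0, 0.0])
V = np.array([6.0, 0.0])
print(vec_parallel(U, V))  # [2, 0]: collinear vectors reduce to the scalar case 3||6 = 2
```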
Many identities for the ordinary (series) sum have parallel-sum counterparts:

| Serial sum | Parallel sum | Remarks |
|---|---|---|
| $\overbrace{a+a+a+\cdots+a}^{n}=na$ | $\overbrace{a\parallel a\parallel a\parallel\cdots\parallel a}^{n}=\dfrac{a}{n}$ | Repetition |
| $a+b=\dfrac{1}{\frac{1}{a}\parallel\frac{1}{b}}=\dfrac{ab}{a\parallel b}$ | $a\parallel b=\dfrac{1}{\frac{1}{a}+\frac{1}{b}}=\dfrac{ab}{a+b}$ | Definition in terms of the other (terms assumed non-zero where necessary) |
| $a+b=b+a$ | $a\parallel b=b\parallel a$ | Commutative property |
| $a+(b+c)=(a+b)+c=a+b+c$ | $a\parallel(b\parallel c)=(a\parallel b)\parallel c=a\parallel b\parallel c$ | Associative property (if each parallel sum is defined) |
| $a(b+c)=ab+ac$ | $a(b\parallel c)=ab\parallel ac$ | Distribution |
| $\dfrac{a+b}{c}=\dfrac{a}{c}+\dfrac{b}{c}$ | $\dfrac{a\parallel b}{c}=\dfrac{a}{c}\parallel\dfrac{b}{c}$ | Distribution |
| $\dfrac{a}{b}+\dfrac{a}{c}=\dfrac{a}{b\parallel c}$ | $\dfrac{a}{b}\parallel\dfrac{a}{c}=\dfrac{a}{b+c}$ | Common numerator |
| $(1+a)\parallel\left(1+\dfrac{1}{a}\right)=1$ | $\left(1\parallel\dfrac{1}{a}\right)+(1\parallel a)=1$ | Mutual cancellation |
| $a^{b+c}=a^{b}a^{c}$ | $\sqrt[b\parallel c]{a}=\sqrt[b]{a}\,\sqrt[c]{a}$ | Addition of exponents |
| $\dfrac{\partial}{\partial t}(a+b+c)=\dfrac{\partial}{\partial t}a+\dfrac{\partial}{\partial t}b+\dfrac{\partial}{\partial t}c$ | $\dfrac{\partial}{\partial t}(a\parallel b\parallel c)=(a\parallel b\parallel c)^{2}\left(\dfrac{1}{a^{2}}\dfrac{\partial}{\partial t}a+\dfrac{1}{b^{2}}\dfrac{\partial}{\partial t}b+\dfrac{1}{c^{2}}\dfrac{\partial}{\partial t}c\right)$ | Differentiation |
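As a quick sanity check, the identities in the table can be verified numerically; the following sketch (with made-up values) checks the distribution and common-numerator rows.

```python
# Numerical spot-check of two table identities (illustration only).
def par(a, b):
    return a * b / (a + b)

a, b, c = 3.0, 5.0, 7.0
# Distribution: a(b || c) = ab || ac
print(abs(a * par(b, c) - par(a * b, a * c)) < 1e-12)  # True
# Common numerator: a/b || a/c = a/(b + c)
print(abs(par(a / b, a / c) - a / (b + c)) < 1e-12)    # True
```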
Electrical network demonstrating Lehman's series-parallel inequality. The inequality follows from the observation that, by Rayleigh's monotonicity principle, closing the switch cannot increase the resistance between the terminals. The network is readily generalized to any number of columns and rows of resistors.
Whereas the ordinary sum of any two non-negative real numbers is greater than or equal to each of them, the parallel sum of two such numbers is less than or equal to each.
Analogous inequalities hold for the trace, determinant, and norm of positive semi-definite matrices:

$$\operatorname{tr}(\mathbf{A} \parallel \mathbf{B}) \leq \operatorname{tr}(\mathbf{A}) \parallel \operatorname{tr}(\mathbf{B})$$

$$\det(\mathbf{A} \parallel \mathbf{B}) \leq \det(\mathbf{A}) \parallel \det(\mathbf{B})$$

$$\operatorname{norm}(\mathbf{A} \parallel \mathbf{B}) \leq \operatorname{norm}(\mathbf{A}) \parallel \operatorname{norm}(\mathbf{B})$$
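These matrix inequalities can be spot-checked numerically; the following sketch, assuming NumPy and using randomly generated positive definite matrices, illustrates (but of course does not prove) the trace and determinant cases.

```python
# Illustration only: check the trace and determinant inequalities on
# random symmetric positive definite matrices.
import numpy as np

rng = np.random.default_rng(0)

def rand_pd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)   # symmetric positive definite

def par_mat(A, B):
    return B @ np.linalg.inv(A + B) @ A

def par(a, b):
    return a * b / (a + b)

A, B = rand_pd(3), rand_pd(3)
P = par_mat(A, B)
print(np.trace(P) <= par(np.trace(A), np.trace(B)))                  # trace inequality
print(np.linalg.det(P) <= par(np.linalg.det(A), np.linalg.det(B)))   # determinant inequality
```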
Lehman's series-parallel inequality
$$(A + B) \parallel (C + D) \geq (A \parallel C) + (B \parallel D)$$
More generally, for an array of resistors with m rows in series and n columns in parallel, the overall resistance is higher if the series connections are made first:
$$\mathop{\Big\Vert}_{j=1}^{n}\,\sum_{i=1}^{m} R_{i,j} \;\geq\; \sum_{i=1}^{m}\,\mathop{\Big\Vert}_{j=1}^{n} R_{i,j}$$
and this is likewise true when the $R$ are positive definite square matrices.[5]
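A numerical illustration of the grid form of the inequality, with invented resistor values:

```python
# Illustration only: Lehman's inequality for an m-by-n grid of positive
# resistors — parallel-of-series is at least series-of-parallel.
from functools import reduce
import random

def par(a, b):
    return a * b / (a + b)

m, n = 3, 4
R = [[random.uniform(0.5, 10.0) for _ in range(n)] for _ in range(m)]

# Left side: sum each column in series, then connect the columns in parallel.
series_first = reduce(par, (sum(R[i][j] for i in range(m)) for j in range(n)))
# Right side: connect each row in parallel, then sum the rows in series.
parallel_first = sum(reduce(par, (R[i][j] for j in range(n))) for i in range(m))
print(series_first >= parallel_first)  # True for positive resistances
```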
In electrical engineering, the propensity of a material body to pass an electrical current may be quantified by either the body's resistance (the voltage drop across the body divided by the current through it) or by its conductance (the current divided by the voltage); the two quantities are reciprocals of each other. When bodies are connected in series their resistances add, and when they are connected in parallel their conductances add; the parallel sum of resistances is thus the reciprocal of the ordinary sum of the corresponding conductances.
David Ellerman argues that the parallel sum is "just as good" as the series sum[8] and offers a number of illustrations of the duality between the two operations, including the harmonic mean, the Gaussian equation for thin lenses, and a geometric construction for the parallel sum of two line segments.
David Ellerman's geometric construction for the parallel sum of two line segments (right), compared with the construction of their ordinary serial sum (left)
The matrix results above are usually stated for positive semi-definite matrices, though it is just as simple to consider Hermitian semi-definite matrices.

See also

- Harmonic mean
- Gaussian equation for thin lenses
- Geometric construction
- Richard Duffin
- Moore–Penrose generalized inverse
^ Patrick, Dale R.; Fardo, Stephen W. (2008). Electrical Distribution Systems (2nd ed.). The Fairmont Press Inc. p. 21. ISBN 9780881735994.
^ Glisson, Tildon H. (2011). Introduction to Circuit Analysis and Design. Springer. p. 143. ISBN 9789048194421.
^ Walton, Alan Keith (1987). Network Analysis and Practice. Cambridge University Press. p. 35. ISBN 9780521319034.
^ O'Malley, John (1992). Schaum's Outline of Basic Circuit Analysis. Schaum's Outline (2nd ed.). McGraw-Hill. p. 33. ISBN 9780070478244.
^ a b c Bapat, R. B.; Raghavan, T. E. S. (1997). Nonnegative Matrices and Applications. Encyclopedia of Mathematics and its Applications. Cambridge University Press. pp. 153–156. ISBN 9780521571678.
^ Mitra, Sujit Kumar; Puri, Madan Lal (1973). "On parallel sum and difference of matrices". Journal of Mathematical Analysis and Applications. 44 (1): 92–97. doi:10.1016/0022-247X(73)90027-9. ISSN 0022-247X.
^ Anderson, W. N.; Trapp, G. E. (1988). "Network Matrix Operations for Vectors and Quaternions". In Datta, Biswa Nath (ed.). Linear Algebra in Signals, Systems, and Control. Proceedings in Applied Mathematics Series. Vol. 32. SIAM. pp. 3–10. ISBN 9780898712230.
^ Ellerman, David P. (1995). Intellectual Trespassing As a Way of Life: Essays in Philosophy, Economics, and Mathematics (PDF). Worldly Philosophy. Rowman & Littlefield. p. 237. ISBN 9780847679324.