
Talk:Maxwell's equations: Difference between revisions

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
Comparing revisions: 16:17, 25 February 2012 by F=q(E+v^B) (talk | contribs), edit summary "clarification (I hope): forgot to sign name already...", and 04:00, 26 February 2012 by Sbyrnes321 (talk | contribs), edit summary "clarification (I hope)".


Another (minor) point on clarification: is it the right approach to use the "boundary notation" <math>\partial V = S, \, \partial S = C</math> right from the beginning? I guess it's no problem with explanation, but maybe readers can grasp the concept better by just using ''S'' for surface and ''C'' for curve - more intuitive? less confusing? It doesn't matter, just pointing out the obvious. -- F=q(E+v^B) (talk) 16:17, 25 February 2012 (UTC)
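For reference, the notation under discussion is the kind used in the integral equations; for example, with <math>\partial V</math> the closed surface bounding a volume ''V'' and <math>\partial S</math> the closed curve bounding a surface ''S'', Gauss's law for magnetism and Faraday's law read
<math>\oint_{\partial V} \mathbf{B}\cdot\mathrm{d}\mathbf{A} = 0, \qquad \oint_{\partial S} \mathbf{E}\cdot\mathrm{d}\boldsymbol{\ell} = -\frac{\mathrm{d}}{\mathrm{d}t}\int_S \mathbf{B}\cdot\mathrm{d}\mathbf{A}.</math>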

:<math>\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P}</math> and <math>\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M})</math> are not called constitutive relations. Only <math>\mathbf{D} = \varepsilon \mathbf{E}</math> and <math>\mathbf{B} = \mu \mathbf{H}</math> are called constitutive relations. Big difference... The former two equations are always true by definition; the second two equations are empirical assumptions about materials, assumptions that may or may not be accurate.
:The part you were complaining about--I think about whether S and V are changing in time--is altogether unnecessary. I deleted it. We can just say that S and V are not changing in time. The equations with that restriction are still correct and complete. "What happens if S and V might change?" is an interesting and worthwhile homework problem, but not at all essential for understanding Maxwell's equations.
:I don't think many readers will guess that ''C'' means curve and ''S'' means surface, and it is also risky to have the symbol ''S'' referring to a closed surface in one equation and open in another. And even if they do understand that C means curve, what curve is it?? Anyway, it's a dangerous game for readers to be guessing what the symbols mean. They are liable to guess wrong. A few strange symbols are kinda nice insofar as they encourage readers to actually scroll down and take a look at the table. I think that when they see that <math>\partial</math> can mean "boundary of" they will say "Oh, that's a nice and useful notation, maybe I'll start using it myself." Just my opinion :-)
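A minimal worked example of the distinction made above: in a linear, isotropic medium, the empirical assumptions <math>\mathbf{P} = \varepsilon_0\chi_e\mathbf{E}</math> and <math>\mathbf{M} = \chi_m\mathbf{H}</math> turn the two definitions into the constitutive relations
<math>\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P} = \varepsilon_0(1+\chi_e)\mathbf{E} = \varepsilon\mathbf{E}, \qquad \mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M}) = \mu_0(1+\chi_m)\mathbf{H} = \mu\mathbf{H},</math>
and it is the assumed forms of <math>\mathbf{P}</math> and <math>\mathbf{M}</math> (here simple proportionality) that may or may not describe a given material accurately.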

Revision as of 04:00, 26 February 2012

Maxwell's equations received a peer review by Wikipedia editors, which is now archived. It may contain ideas you can use to improve this article.

This article is within the scope of WikiProject Mathematics (unassessed, High-priority) and WikiProject Physics (rated B-class, Top-importance), collaborative efforts to improve the coverage of these subjects on Wikipedia.

Archives
Archive 1 ( ? - 2003)
Archive 2 ( 2004 - 2005)
Archive 3 ( 2006 - 2008)
Archive 4 ( late 2008)

"In classical electromagnetism"

Is it really necessary to say "classical" in the first sentence? The last equation in Quantum_electrodynamics#Euler-Lagrange_equations looks very like Maxwell's equations if you define J as <math>e\bar{\psi}\gamma^\mu\psi</math> and use the "right" measurement units. -- Army1987 (t — c) 11:30, 25 October 2008 (UTC)
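The equation presumably being referred to is the Euler-Lagrange equation for the photon field of the QED Lagrangian, which in Heaviside-Lorentz natural units (sign conventions vary) reads
<math>\partial_\mu F^{\mu\nu} = e\bar{\psi}\gamma^\nu\psi,</math>
i.e. the inhomogeneous Maxwell equations with the current identified as <math>J^\nu = e\bar{\psi}\gamma^\nu\psi</math>.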

You can take any classical law I know of and call it a quantum law with the correct definitions of the variables. For example, the Ehrenfest theorem is Newton's second law if you write F=dp/dt, and define "F" as the expectation value of -grad V and define "p" as the expectation value of momentum. But it's nevertheless correct to say that Newton's second law is a law of classical mechanics, not quantum mechanics.
To answer your question though, it doesn't need to be in the first sentence, but I think it should be in the first two or three. You can also phrase it non-exclusively, e.g. "The laws form the basis for classical electromagnetism". I think Newton's laws and Newton's law of universal gravitation are good models. :-) --Steve (talk) 16:52, 25 October 2008 (UTC)

Displacement Current

It is often said that Maxwell's amendment to Ampère's circuital law states that a changing electric field produces a magnetic field. Careful scrutiny of Maxwell's papers would suggest that Maxwell never said anything of the sort. This seems to be a later interpretation. The electric field alone, whether changing or not, will produce a current which in turn will produce a magnetic field. Maxwell's amendment of Ampère's circuital law states the relationship between the curl of the magnetic field and the rate of change of the electric field in situations involving time-varying electromagnetic induction, but it doesn't state that a changing electric field actually causes a magnetic field. A changing magnetic field induces a changing electric field, but not vice versa. David Tombe (talk) 17:11, 9 November 2008 (UTC)

What Maxwell said or thought doesn't matter. As has been repeatedly pointed out, this is an article on the set of four modern equations that everyone happens to call "Maxwell's equations". Carefully scrutinizing Maxwell's papers is a waste of your time, even for the history section, and certainly for the rest of the article. "All interpretive claims, analyses, or synthetic claims about primary sources must be referenced to a secondary source, rather than original analysis of the primary-source material by Misplaced Pages editors." Maxwell's original papers are a primary source. --Steve (talk) 18:04, 9 November 2008 (UTC)

Steve, the point which I was making is self evident from the Lorentz force and the Biot-Savart law. Nobody disputes the fact that a changing magnetic field causes an electric field. We can see the relevant term in the Lorentz force. It's the -(partial)dA/dt term. But no such equivalent term is to be found in the Biot-Savart law. Electric current alone causes a magnetic field. To say that a magnetic field is caused by a changing electric field is a wrong assumption based on a superficial symmetry between Faraday's law and the amended Ampère's circuital law in the modern Maxwell's equations. On that other issue about not being allowed to use primary sources, I was already aware of it because I have encountered it on a number of occasions. It seems a strange rule bearing in mind that some of the primary sources in question contain sentences in plain English which have a totally unambiguous meaning. Nevertheless, I wouldn't say that reading Maxwell's original papers is a total waste of time. There is a lot to be learned from the historical evolution of a topic. What's the time span after death in which a scientist's papers become 'primary sources' and hence inadmissible? Obviously Feynman's writings haven't reached the 'primary source' stage yet because I've seen many a Feynman citation on Wikipedia.

PS. I've just read the link which you provided about primary sources. I see what you mean. Maxwell's papers are too close to the evolution of Maxwell's equations. But even if they are primary sources, it is still permitted to draw attention to quotes where the meaning is obvious to all reasonable people. The issue about primary sources is not black and white. David Tombe (talk) 20:20, 9 November 2008 (UTC)

It's odd that you're staking this claim on your understanding of the Biot-Savart law...given that you've repeatedly insisted (under your other account) that you believe the Biot-Savart law to be false! No matter. Here is a reliable source that states "a changing electric field produces a changing magnetic field even when no charges are present and no physical current flows". Let me know when you find a reliable source that contradicts this. To be explicit: Until you produce a modern secondary reliable source that clearly and explicitly contradicts the quote above, I am done with this conversation and I have no interest in anything else you have to say. --Steve (talk) 00:05, 10 November 2008 (UTC)

Steve, In my very first sentence in this section, I fully acknowledged the fact that many modern sources say that a magnetic field is caused by a changing electric field. I was merely pointing out that I believe this idea to be false. This is clear from the Biot-Savart law. And yes, I also have serious reservations about the Biot-Savart law, but that's more to do with the issue of summation such as to allow it to apply on the large scale. I can accept the Biot-Savart law when it is written in the microscopic <math>\mathbf{B} = \mathbf{v} \times \frac{1}{c^{2}}\mathbf{E}</math> format. Either way, there is no term corresponding to the -(partial)dA/dt term of the Lorentz force that would ever justify the statement that a changing electric field causes a magnetic field. And I was merely pointing out the fact that a scrutiny of Maxwell's papers indicates that Maxwell never made any such claim, even though he is the one who obtained displacement current and amended Ampère's circuital law accordingly. It is permissible under Wikipedia's rules to use original papers (primary sources) for the purposes of making such an observation because it doesn't involve any controversial interpretation. There are clearly no statements to this effect in Maxwell's original papers and that is an indisputable fact. Anything at all that improves comprehension of a topic can only be of help when it comes to writing an article in a coherent manner. David Tombe (talk) 11:14, 10 November 2008 (UTC)

The Displacement current article has an example showing why the vacuum displacement current term (changing electric field creating a magnetic field) is necessary for Ampere's law to work consistently. Also it is necessary in order to derive electromagnetic waves from Maxwell's equations. Hope it helps. --Chetvorno 16:40, 20 November 2008 (UTC)
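A sketch of that wave-equation derivation in vacuum (no charges or currents), which goes through only because the displacement-current term is present: taking the curl of the Maxwell-Faraday equation and substituting Ampère's law with Maxwell's correction,
<math>\nabla\times(\nabla\times\mathbf{E}) = -\frac{\partial}{\partial t}(\nabla\times\mathbf{B}) = -\mu_0\varepsilon_0\frac{\partial^2\mathbf{E}}{\partial t^2},</math>
while the identity <math>\nabla\times(\nabla\times\mathbf{E}) = \nabla(\nabla\cdot\mathbf{E}) - \nabla^2\mathbf{E} = -\nabla^2\mathbf{E}</math> (since <math>\nabla\cdot\mathbf{E} = 0</math> in vacuum) then gives
<math>\nabla^2\mathbf{E} = \mu_0\varepsilon_0\frac{\partial^2\mathbf{E}}{\partial t^2},</math>
a wave equation with propagation speed <math>1/\sqrt{\mu_0\varepsilon_0}</math>.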

Chetvorno, As you can see from the latest edits to the Displacement current article, we need to clearly distinguish between Maxwell's original displacement current which is about dielectric polarization (or magnetization), and the modern virtual concept which is about maintaining the solenoidal nature of Ampère's law in a vacuum capacitor circuit. Maxwell certainly doesn't use the latter concept to derive the EM wave equation in his 1864 paper. David Tombe (talk) 08:48, 23 November 2008 (UTC)

Faraday's Law and the Lorentz Force

Further information: Lorentz force § Lorentz force and Faraday's law of induction

Woodstone, You recently asked this question,

Would it not be better to write: <math>\nabla \times \mathbf{E} = -\frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}</math>? (Woodstone (talk) 17:17, 8 September 2008 (UTC))

There is a textbook, J.A. Stratton, ''Electromagnetic Theory'' (McGraw-Hill, New York, 1941). In Section 23, Chapter 5, is to be found a total time derivative version of Faraday's law. The justification is that the convective component is the curl of v×B. Stratton's words are: "If by E we understand the total force per unit charge in a moving body, then curl E = −∂B/∂t + curl (v × B). Moreover, dB/dt = ∂B/∂t + (v·grad)B, so that curl E = −dB/dt."

This would suggest that Faraday's law is simply the curl of the Lorentz force. David Tombe (talk) 06:44, 15 November 2008 (UTC)
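A sketch of the algebra behind Stratton's statement, assuming a uniform velocity field <math>\mathbf{v}</math> (so its spatial derivatives vanish) and using <math>\nabla\cdot\mathbf{B} = 0</math>:
<math>\nabla\times(\mathbf{v}\times\mathbf{B}) = \mathbf{v}(\nabla\cdot\mathbf{B}) - (\mathbf{v}\cdot\nabla)\mathbf{B} = -(\mathbf{v}\cdot\nabla)\mathbf{B},</math>
so that
<math>\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t} + \nabla\times(\mathbf{v}\times\mathbf{B}) = -\left(\frac{\partial\mathbf{B}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{B}\right) = -\frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}.</math>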

this needs clarification. What is the status of the Lorentz force wrt the Maxwell equations?

  • the article body has "Nowadays, the v × B term appears in the force law F = q(E + v × B) which sits adjacent to Maxwell's equations and bears the name Lorentz force"
  • the lead has "These four equations, together with the Lorentz force law, are the complete set of laws of classical electromagnetism"

this implies that the Lorentz force is a necessary addition to the Maxwell equations. I think I am missing something here. Should the Lorentz force be considered a separate law, or is Faraday's law of induction merely a corollary of the Lorentz force? Or should the Lorentz force be considered the definition of E and B? --dab (𒁳) 11:49, 27 January 2009 (UTC)

Yes, as stated in the article, the Lorentz force is a separate law which exists in addition to the four modern Maxwell's equations. This is stated explicitly in textbooks, including Jackson's Classical Electrodynamics, Griffiths' Electrodynamics and Feynman's Lectures on Physics. The contention that Faraday's law of induction is a corollary of the Lorentz force is contradicted by all of these textbooks. However, David Tombe enthusiastically disagrees with these authorities, by using what seems to me to be poor math and poor logic, and also an occasional misunderstanding of an obscure old textbook, as above. But I have no interest in arguing about this point any further; Wikipedia rules are clear. --Steve (talk) 16:56, 27 January 2009 (UTC)

What's brewing?

Brews, you've done 29 edits in quick succession, but I see nothing on the talk page that looks like a warning or proposal for what you're trying to do. You need to stop, slow down, and let other editors be involved. Trying to remake the whole article in your unique POV is just going to start an edit war, don't you think? Dicklyon (talk) 19:30, 26 April 2009 (UTC)

Hi Dick: I hope nothing controversial is involved here. Mainly an attempt to elaborate upon the free/bound charge separation to present a context: no new ideas, just a figure and some added preamble. I've reverted your last edit: my statement is less sweeping than yours - it says incontrovertibly that the conclusion applies to the figure, which it does, but doesn't restrict the result to the figure. The other phrasing is too sweeping, as polarization does not always prove equivalent to two charge sheets, obviously. Brews ohare (talk) 20:07, 26 April 2009 (UTC)
If your caution is about the footnote on free space, that note simply states a short summary of that article, introducing no new material or opinion. Of course, Martin might have a fit, but maybe he is used to these ideas now. Brews ohare (talk) 20:14, 26 April 2009 (UTC)
The footnote is what I was talking about; if it was talking about a source, that wasn't clear. I haven't read your other edits, as it's way too much, but I'd be surprised if you don't get some pushback, as you often do. Dicklyon (talk) 21:51, 26 April 2009 (UTC)

Lost the plot

Graduate students and post docs do not come here to learn, they come to critique if they come at all. Your audience is junior high school students and high school students, even science education media types who keep an eye on Wikipedia to see if it works. It does not. The contributors to this article have managed to make the simple and essential overly complex and obscure. Write for an encyclopaedia, not to try and prove your erudition. What a waste of space. Malangthon (talk) 05:05, 20 May 2009 (UTC)

I don't think any junior high school students come here and I'd be surprised if high school students did, however I'm an undergraduate taking electromagnetism and I don't understand these equations .. like at all.

I agree with you, Malangthon. I think this article is lost somewhere in between advanced enough mathematically for graduate / post docs, and simple enough linguistically for high schoolers / curious people. That's a seemingly ubiquitous barrier across math in general. Wikipedia would be a fantastic place to breach that barrier. But at the same time, Wikipedia is technically designed to be encyclopedic. See down under Please note? It says "Please post only encyclopedic information that can be verified by external sources. Please maintain a neutral, unbiased point of view." IMHO, I'd say let's shoot for understandability. Educate the masses! ThLemming (talk) 00:14, 10 August 2010 (UTC)theLemming

Four-vector version

I've tried to put in the four-current and four-potential and four-gradient version, but the definitions of all these things vary just enough with where to put the minus signs and the 1/c and where to square the Box, etc., that I'm not sure I've got it exactly right yet. Can anyone help? Dicklyon (talk) 06:53, 18 June 2009 (UTC)

It would be more in keeping with the E, B equations normally referred to as Maxwell's to give a 4-space account based on the field, F, rather than the potentials. Thus <math>F^{\mu\nu}{}_{;\nu} = \mu_0 j^\mu</math>
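For concreteness, the field-based and potential-based covariant forms usually quoted (SI units; metric-signature and index conventions vary between texts) are
<math>\partial_\mu F^{\mu\nu} = \mu_0 J^\nu, \qquad \partial_{[\alpha}F_{\beta\gamma]} = 0,</math>
and, in the Lorenz gauge <math>\partial_\mu A^\mu = 0</math>,
<math>\Box A^\nu = \mu_0 J^\nu, \qquad \Box \equiv \partial_\mu\partial^\mu.</math>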
I think both are in the article right now.
This all overlaps with covariant formulation of classical electromagnetism, so we can (and should) keep everything here concise. It's OK right now. --Steve (talk) 00:41, 11 August 2009 (UTC)
Yes, they're both there. The potentials were an important part of Maxwell's original equations. It was Heaviside who decided that fields were more "real" than potentials, and he wrote the potentials out. The four-potential section serves to remind that that's not the only way to formulate, or to simplify, Maxwell's equations. Dicklyon (talk) 02:56, 11 August 2009 (UTC)

History Section

Should the history section have a separate page ? It's long and rambling and rather distracts from the main theme. (Eddy, 21:05ish (UTC), 2009/Aug/10.)

It doesn't help that it consistently uses the vector notation, where Maxwell's "Electrodynamics" uses a separate letter for each component of each vector. He has some footnotes (in the third edition) about some interesting work some of his peers are doing, using the Quaternions, that provide for neat ways of expressing the equations (and these ultimately lead to the 3-vector notation), but he never uses it himself (that I've seen). Yet the history account is replete with assertions about formulae like "∇·B = 0" appearing in his work, where he actually states the same truth far less succinctly. While, of course, the important fact is that he established the relation (the magnetic field has no divergence) and it would be tedious to actually recite it in his terms, a more careful account of the history should make it clearer that what his relevant publication contained was the essence of "∇·B = 0", albeit otherwise stated. (Eddy, 21:15ish (UTC), 2009/Aug/10.)

The interesting history to tell would indeed be to relate that the truths behind the equations initially appear, in component-by-component form, in various authors, ending with Maxwell in the 1860s (ending the discovery phase), by which time work was under way to develop a better way of expressing such systems of equations. That in turn led to Heaviside's form, with vectors and the three-dimensional vector product, yielding the equations as commonly stated. Then the ramifications of the equations being true for any observer, regardless of inertial frame of reference, led to the 4-vector formulation, in which E and B are combined into a second rank antisymmetric tensor, charge and current into a vector and so on. (Eddy, 21:20ish (UTC), 2009/Aug/10.) —Preceding unsigned comment added by 84.215.6.188 (talk)

84.215.6.188, You need to distinguish between the issue of (1) the modern vector notation, and (2) Heaviside's versions. Heaviside used modern vector notation in his versions. However, we can still write Maxwell's original eight equations in modern vector notation without affecting their physical meaning in any way, and without making them become the Heaviside versions. And at any rate, the two sets overlap physically in most important respects. The only important exception is where the Heaviside version of Faraday's law doesn't cater for the motionally induced EMF. That is catered for at equation (D) in Maxwell's original eight.
I'm happy with writing the original eight equations in either notation. But the modern vector notation makes easier reading for everybody. Besides that, it would be a headache to try and write out the original eight equations, using the notation that appears in Maxwell's original papers.
As regards the topic of the Heaviside versions in their own right, they are of course the versions that we find in the modern textbooks. Hence I can't see what the big problem is here. Maxwell's original eight equations are dealt with (in modern vector format) in the history section. Later stuff to do with four-vectors is also covered in the article, so why bother changing anything? David Tombe (talk) 16:04, 11 August 2009 (UTC)

Having now had a look at the main article, I can see that somebody has been making edits that mention Maxwell's 1873 treatise. What many people seem to overlook is the fact that the eight Maxwell's equations of the 1873 paper were already in both the 1861 paper and in the 1865 paper. They were grouped together as a distinct set for the first time in the 1865 paper, and that set did not include a 'Faraday's law' equation. This is an example of the confusion that gets sown into history as a result of somebody having become knowledgeable about Maxwell's most famous paper, without having any knowledge of what went beforehand. It is also an example of the prevailing lack of knowledge regarding the fact that Heaviside used a clipped version of Faraday's law rather than using what was in effect Maxwell's precursor of the Lorentz force. David Tombe (talk) 16:19, 11 August 2009 (UTC)
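For reference, Maxwell's equation (D), the 'equation of electromotive force', is commonly transcribed into modern vector notation as
<math>\mathbf{E} = \mu\,\mathbf{v}\times\mathbf{H} - \frac{\partial\mathbf{A}}{\partial t} - \nabla\phi,</math>
whose curl, in the motionless case <math>\mathbf{v} = 0</math>, reduces to the Maxwell-Faraday equation <math>\nabla\times\mathbf{E} = -\partial\mathbf{B}/\partial t</math>.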

The 1873 Paper

Somebody has stated in the main article that Maxwell's equations in the 1873 paper are divided into two groups of two. I'm not as familiar with this paper as I am with the 1861 and the 1865 papers. However, my recollections are that in the 1873 paper, the eight Maxwell's equations are laid out in a similar fashion to that which we see in the 1865 paper. The two groups of two that are described in the main article may well relate to another part of that paper. I notice that the v×B term has been dropped. Maxwell dropped the v×B term when he came to derive the EM wave equation. That makes me suspect that the person who made this edit has copied it from a part of the 1873 paper other than the part which gives the formal list of the eight equations. It's probably an excerpt from the section on the derivation of the electromagnetic wave equation. Does anybody have the means to check this out?

The bit in question is,

The first set is

<math>\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}</math>
<math>\mathbf{B} = \nabla \times \mathbf{A}.</math>

The second set is

<math>\nabla \cdot \mathbf{D} = \rho</math>
<math>\nabla \times \mathbf{H} - \frac{\partial \mathbf{D}}{\partial t} = \mathbf{J}.</math>

I would have deleted this only for the fact that it comes with a package which gives a good link to the 1873 paper. These equations are all correct. One of them leaves out the convective v×B term, but that is fine so long as we are considering a stationary situation. However, I have never seen this particular grouping stated as 'Maxwell's equations' as such. David Tombe (talk) 03:58, 19 October 2009 (UTC)

transcription projects

Over on the Wikisource project, we have transcription projects for A Treatise on Electricity and Magnetism and On Physical Lines of Force. We especially need MediaWiki math markup gurus to help us.

See s:Author:James Clerk Maxwell for details.

John Vandenberg 07:54, 26 December 2009 (UTC)

magnetic monopoles

Would be best to cover in Gauss' law for magnetism rather than here. I've removed the entire statement otherwise, as I don't believe it adds anything to this article. That said, it shouldn't be added to the other article before the results of the experiments can be independently verified, and one news article a verification does not make. In fact, there is a good paragraph on this very article in magnetic monopole. --Izno (talk) 19:53, 2 January 2010 (UTC)


Maxwell's equations vs. Coulomb's and Biot-Savart law -> PARADOX!!

Steve, restrain yourself from censorship and discrimination, this message is to confirm what other people have already tried to tell you above. If you are refusing to think about it just because "for 100 years people have accepted it", I applaud your faith, but it is not your place to interfere the information just because you do not understand it. Whom did you consult about removing my message? You are not alone in this world, kiddo.

1. Gauss's law: divE= p/e0

- Divergence of E field according to Coulomb's law is zero, it has uniform magnitude gradient dropping off with the inverse square law, which is supposed to mean divE= 0.

2. Gauss's law for magnetism: divB= 0

- According to Biot-Savart law which actually describes this magnetic field potential for point charges, *not wires*, this field is toroidal, its magnitude falls off with inverse square law in perpendicular plane to velocity vector (current direction), but it also falls with the angle according to vector cross product, so at the end it looks like doughnut and not like a "ball" of electric field. This actually means that this particular magnetic field 'due to moving charge' (this is not intrinsic magnetic dipole moment), has non zero divergence and non zero rotation (curl), i.e. for point charges divB != 0. What do I get now for discovering magnetic monopoles?

Yes, if you take an infinite wire then divB=0, but that does not reveal anything about how individual magnetic fields look in front and behind that 90 degree plane, it is very crude approximation geometrically and quantitatively considering 'amperes' vs q*v, and hence it lacks some serious information. -- Let's say divB=0, then what is just B equal to? I do not see any information about B field here, so where and when do we ever use this equation?!?

3. Maxwell–Faraday equation: rotE= - dB/dt

- According to Coulomb's law E field has no rotation (curl), it is more of a "radial" kind of thing, so what in the world can this mean if we get rid of the curl operator and solve for just E? How can 'curl of E' tell us anything if 'curl of E' is always supposed to be constant and zero? Also, it is not clear from the article what does "dB" refer to: 2nd equation, 4th equation, Biot-Savart? Table of symbols says 'magnetic field' and link leads to Lorentz force and Biot-Savart law, funny.

4. Ampère's circuital law: rotB= J + dE/dt

- 3rd and 4th equation appear to be in kind of 'circular definition' and self-referencing, or at least very ambiguous as it is not clear what does "dE" refer to: 1st equation, 3rd equation, Coulomb's law? Table of symbols says 'electric field' and link leads to Coulomb's law, funny... as if these four equations can not really define anything by themselves at all. —Preceding unsigned comment added by 203.211.108.184 (talk) 09:00, 20 March 2010 (UTC)

Anonymous 203.211.108.184, I wasn't able to answer you when you first made this post. But as regards your question (3), you are overlooking the fact that the electric field is not yielded exclusively by Coulomb's law. In fact, when we are dealing in EM induction, Coulomb's law doesn't really enter into it at all. In time varying EM induction, we are working with an electric field which is the partial time derivative of the magnetic vector potential. You need to look at this section here which I have copied from the electric field article,
An electric field can be produced, not only by a static charge, but also by a changing magnetic field. The combined electric field is expressed as,
<math>\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}</math>
where,
<math>\mathbf{B} = \nabla \times \mathbf{A}</math>
The vector <math>\mathbf{B}</math> is the magnetic flux density and the vector <math>\mathbf{A}</math> is the magnetic vector potential. Taking the curl of the electric field equation we obtain,
<math>\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}</math>
which is one of Maxwell's equations, referred to as Faraday's law of induction. David Tombe (talk) 00:26, 14 November 2010 (UTC)
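Spelling out that curl step (using <math>\nabla\times\nabla\phi = 0</math> and interchanging the curl with the time derivative):
<math>\nabla\times\mathbf{E} = -\nabla\times\nabla\phi - \frac{\partial}{\partial t}(\nabla\times\mathbf{A}) = -\frac{\partial\mathbf{B}}{\partial t}.</math>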

Geometric algebra = differential geometry?

Right now we have this equation "in geometric algebra":

<math>\nabla \mathbf{F} = 4\pi \mathbf{J}</math>


And this one in "the language of differential geometry and differential forms":

<math>\mathrm{d}{*\mathbf{F}} = \mathbf{J}</math>

Am I correct in assuming these are the same equation in the same mathematical formalism but with slightly-different notation? The 4pi is not an important difference, it's just cgs vs SI.

I don't think these are entirely equivalent. Doesn't the formalism of differential forms require a second equation (i.e. the duality relation equivalent to the tensor relationship <math>G^{\alpha\beta}{}_{,\alpha} = 0</math>)? Peeter.joot (talk) 16:49, 7 November 2010 (UTC)

Also, the "2-form F" in the differential forms section isn't explicitly defined...is it correct that it somehow corresponds to the F α β {\displaystyle F_{\alpha \beta }} defined earlier as the electromagnetic tensor? Even so, we need to say something in the text, at least defining F using the notation of differential geometry.

I found this source -- -- but haven't had time to read it... :-) --Steve (talk) 00:07, 5 June 2010 (UTC)

The source I've used here, by Lounesto gives the geometric algebra form like this:
<math>\partial \mathbf{F} = \mathbf{J}.</math>
so I would say they are the same thing, with different units and symbols, though I'm not familiar with differential geometry/differential forms so can't say for sure.--JohnBlackburnedeeds 00:44, 5 June 2010 (UTC)

The geometric algebra equation is wrong in the usual interpretation of taking the total derivative (which is a tensor of rank 3 but _not_ antisymmetric), so at best it is very confusing and I removed it. As it stood it can be given a correct sense by interpreting <math>\nabla</math> as the total derivative followed by Clifford multiplication on the exterior algebra (which I guess is what the geometric algebra people mean), but that involves interpreting J as a 1-form through Hodge duality rather than as a 3-form.

RogierBrussee (talk) 15:50, 8 August 2010 (UTC)

Yes, in the GA version, the gradient <math>\nabla</math> is a multiplicative operator using Clifford multiplication. Both the gradient and the current are four-vectors: <math>\nabla = \gamma^{\mu}\partial_{\mu}</math>, and <math>J = c\rho\gamma_{0} + \gamma_{k}J^{k}</math>. Peeter.joot (talk) 16:49, 7 November 2010 (UTC)
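One way to see the relation to the differential-forms pair (a sketch; sign and duality conventions vary by author): the Clifford product splits by grade,
<math>\nabla F = \nabla\cdot F + \nabla\wedge F = J,</math>
and since <math>J</math> is a vector, the single equation is equivalent to the graded pair <math>\nabla\cdot F = J</math> and <math>\nabla\wedge F = 0</math>, which correspond respectively to the inhomogeneous equations (<math>\mathrm{d}{*F} = {*J}</math> in forms language) and the homogeneous ones (<math>\mathrm{d}F = 0</math>).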

The GA version(s) depend on whether you wish to consider a spacetime metric or not.

Without: In natural units, <math>\nabla F + \partial_{t}F = J</math> where <math>F = \mathbf{E} + I\mathbf{B}</math> and <math>J = \rho - \mathbf{J}</math> are multivectors and <math>I = \gamma_{0}\gamma_{1}\gamma_{2}\gamma_{3}</math>

With: Premultiplying the above by <math>\gamma_{0}</math> gives <math>\nabla F = J</math>.

This has nothing to do with interpretation as forms, tensors or anything else; it is standard GA formalism. If you do decide to factor out the various mathematical formulations onto a separate page (good idea), I will happily expand on this with appropriate references so that there is no confusion.

Selfstudier (talk) 15:47, 5 November 2010 (UTC)

The formalisms are already separated out on a separate page, mathematical descriptions of the electromagnetic field. TStein (talk) 19:32, 5 November 2010 (UTC)
I see now that the formalisms have a summary on the main page and then these summaries are referenced to other more detailed pages; I was confused as there is no mention of the GA formalism on the main page and the only details I could find were in the page bivector and briefly in the page geometric algebra.

Also I see that not all formalisms have been factored out

Selfstudier (talk) 20:40, 5 November 2010 (UTC)

Big changes coming (hopefully)

Some of you may have noticed that I have made a lot of changes lately to this article. Some (hopefully not all) of the changes may not be obvious. I want to let you know what my basic plan is. First, my goal is to move a bunch of stuff out of this article to more appropriate locations. As a first step I am moving stuff around trying to get a handle on what is here and why. As part of this step I am trying to locate main articles for the stuff to be moved into. The article may get more bloated in the near-term as I try to find a good home for the stuff. I hope I haven't acted too boldly in the moves. Stuff I want to do yet:

  1. Move a good portion of the constitutive relations out to various appropriate articles like magnetic field?, electric displacement field?, polarization density, magnetization, constitutive equation? other appropriate links.
  2. Refactor all the other representations of Maxwell's equations using the Mathematical descriptions of the electromagnetic field article.
  3. Determine what to do with the long table summarizing all the quantities

Any input into this would be greatly appreciated, especially if I mess something up ;) . TStein (talk) 21:08, 1 November 2010 (UTC)

Having lived with the article as it stands for many internet years, I confess to apprehension. Especially the potential loss of the long table, which has been highly praised as an exemplar for understanding. If it gets moved around, something may be lost in the moves. As one who has made my living from these equations, they (the Heaviside version) are well known to generations of users in their current form, not just to me alone. One of the items I have "always" known (joke because I definitely didn't know this in grade school) is that a conservation law is a consequence of these laws and that the laws are somewhat redundant. But since generations of us have learned them in this form, we know them as they stand.
Another thing I have always known (see joke above) is that there are three formulations that coexist simultaneously, the differential form, the integral form and a pictorial solution of the integral form, so that the standard equations are a shorthand for that which we know in other formulations (again, simultaneously). In other words, the article is a large reminder to me of what I have always known and I have never agitated for the pictorial solutions on this topic. However, if the article is split into subpages, then there is the opportunity to have the triple formulation. --Ancheta Wis (talk) 01:39, 2 November 2010 (UTC)
Strong vote against modifying the tables with the equations and definitions. They're the most important thing in the article IMO. Also, the reason the conceptual description section is (was?) short is so that it doesn't take long before the reader gets to the equations, which again are the most important thing. So I hope you don't expand that section too much. The (former) constitutive relations and boundary conditions sections have a lot of content that is not relevant enough to the article and should be shortened if not eliminated IMO. All of classical electromagnetism is related to Maxwell's equations, but obviously not all of classical electromagnetism should be described in this article. History should be made shorter and split off into a separate article IMO. It's very worthwhile IMO to have the equations in all the most important forms in this article--Maxwell's equations in terms of CGS, in terms of potentials, in terms of four-tensors, etc. This is the article on Maxwell's equations, it is very appropriate to have Maxwell's equations in any form in this article and not another. That's not to say that those sections couldn't or shouldn't be made more concise, even shrunk to just the equations, definition links, and a link to more details in another article. In fact I stated in a previous section of this page that two of the formulations may even be redundant, but seemingly no one here knows enough to check.
Again, what is more relevant to "Maxwell's equations": Maxwell's equations all together in tensor form, or a qualitative description of Ampere's law? To me the answer is obvious: Maxwell's equations all together in tensor form is more relevant. Therefore I would be opposed to greatly expanding the qualitative description of Ampere's law while moving Maxwell's equations in tensor form to a different article. :-) --Steve (talk) 02:41, 2 November 2010 (UTC)
I don't want to greatly expand the qualitative section at all. That is one of the few sections I like. Eventually I would like to have a picture for each of the sections. Maybe a sentence or two will be appropriate. I agree about the constitutive equations. That content belongs someplace, though. Any ideas about where to move it? TStein (talk) 16:37, 5 November 2010 (UTC)
I like how the article used to have "4-equation-table then 4-equation-table then definition-table". That way the most important part of the article was right there, easy to find, near the top, all together. I notice you've split those tables up, but I hope it's temporary. This is an encyclopedia article not a pedagogical physics course, sometimes you can't quite present things in the logical order. :-) --Steve (talk) 03:01, 2 November 2010 (UTC)
While I believe that pedagogy has a strong place in WP (it is used in practice that way), that wasn't my main motivation. I am simply trying my best to organize things. It seems to me that this article is similar to a room with way too much junk in it (along with things that belong in other rooms). If you want to clean up the room, first you have to sort everything into related topics. From there it is easier to judge what type of stuff really belongs in the room. More importantly it allows me to move chunks at a time out to more appropriate articles. TStein (talk) 16:37, 5 November 2010 (UTC)

Please check the article to see if I addressed your concerns

I put all the equations back together but labeled that section Summary of Maxwell's equations. My major problem with this is that I needed to duplicate the tables. I wish I could come up with a better way.

What needs to stay from constitutive relations section

I am about ready to move about 50% of the constitutive relations section to the constitutive equations article. (It might be more appropriate to create another article but I need to do one thing at a time.) To be more precise there will be some overlap of material between the article and the section.

What shall I keep in this article? As a start, I am thinking about keeping most of the introduction (with the exception of parts of the last paragraph) and the subsections on 'in vacuum' and 'linear materials'. I would like to have a quick discussion of what can go 'wrong' from the general section and ditch the calculation section almost entirely. Personally, I think the calculation section needs a major rewrite as well but I am going to leave that in the constitutive equation article and let someone else fix it. TStein (talk) 19:50, 8 November 2010 (UTC)

Is it possible to combine the two history sections? It seems to me that the section in special relativity is not in the right place.--LaoChen (talk) —Preceding undated comment added 18:48, 9 November 2010 (UTC).
That is certainly on my list. I don't really see much in the relativity section that pertains to Maxwell's equations. Right now I am focused on fixing the constitutive relations section, though. (I don't want to make too many big changes at a time at least without feedback.) If I have a second (third if you include me) I will get to that section sooner, though ;) .TStein (talk) 19:29, 9 November 2010 (UTC)

The History Section

Tstein, In the history section, at the end, there is a paragraph which in my opinion could be removed on the grounds that it is not correct. It's a while since I looked at Maxwell's 1873 treatise, but I seem to remember that the original eight equations appear in the 1873 treatise just as they appeared in the 1864/5 paper. I may be wrong, but that's how I remember it. Whoever wrote that section seems to have got confused with some other part of the paper where four equations have been taken aside for closer scrutiny for some purpose. I would be inclined to simply remove that paragraph altogether as part of the clean up job. But save the source because it seems to be a good on-line source for the 1873 treatise. David Tombe (talk) 19:48, 10 November 2010 (UTC)

Actually, I've just found the 1873 version of Maxwell's equations. This German 'maths pages' does an excellent translation of the original quaternion versions into a format that we can understand. And it appears that the list is not identical to the 1865 list. However, it clearly shows that the existing paragraph in the main article is wrong, and so that paragraph needs to be removed until somebody can diligently write out the original equations correctly as per the 1873 paper. Meanwhile, here is another direct web link to the 1873 paper . David Tombe (talk) 20:47, 10 November 2010 (UTC)
I don't have time to get too much into the history section. It is quite far out of my area of expertise. I will probably get to the history section sometime but only for minor copy edits and minor trimming of fat. I wish I had time to properly help you with fixing the history, but you will have to take care of it, when you can. In the meantime, I agree with cutting it until it can be properly vetted.TStein (talk) 21:48, 10 November 2010 (UTC)

OK. I would have cut that section on the 1873 paper myself, only for the fact that I don't know how to save the wikisource reference, which is good. Would you know how to get the reference down into the bibliography section? If so, you could do that and then just wipe the rest out. I would like to replace it sometime with a proper list of the 1873 equations, but it would be an enormous task because it gets into the issue of the quaternion format. It would be a work of art to copy out those equations as in that German web link which I supplied.

In general, I agree with you that the entire article needs to be substantially reduced in size. Up to a point I agree with Steve's point of view. Steve seems to think that an article about Maxwell's equations should be emphasizing the groups of equations known as 'Maxwell's equations', and not dealing with the individual equations as such. Steve believes that detailed explanations for the individual equations should be re-directed to the special articles for the individual equations. I do however think that we need to give at least some explanation of the individual equations in this article, but not too much. I would have a lead, then list the Heaviside four. Then give a brief explanation of each of these four. I would have a history section near the bottom. And I agree with you that relativity has got nothing to do with the history. The derivation of Maxwell's equations from relativity is a modern thing, and that could also be mentioned somewhere in the article too. David Tombe (talk) 22:20, 10 November 2010 (UTC)

Constitutive relations again (my changes and what shall I cut).

I would appreciate it if someone can review my changes to the constitutive relations section. In particular I am not an expert on all nuances of non-linear optics, birefringence, bi-anisotropy, etc. The frustrating thing is that I made it longer.

I would like to cut almost everything else in that section, though, starting around the section where it talks about hysteresis. Personally, I think that the non-local equations for P as a function of E are way too much for this article. (The previous section covering non-linear optics, birefringence, etc., is on the borderline of being too detailed for this article as well.) Input about what (not) to cut and where to move stuff would be greatly appreciated. TStein (talk) 23:27, 12 November 2010 (UTC)

Just as a wave equation can be derived from Maxwell's equations, with applications in remote sensing, so too can tunable metamaterials be described by constitutive relations. This is a very practical subject in use everyday, for example in phased arrays. It may be possible to push that material into the electro-optics article. The applications include computer memories, optical interconnects and nonlinear optics. From my POV, it illustrates how far-reaching Maxwell's equations are. ---Ancheta Wis (talk) 05:13, 13 November 2010 (UTC)

Equation names

When we were first putting in the two sets of equations (), the ones with E and B were called "microscopic Maxwell's equations" and the ones with D and H and E and B were called "macroscopic Maxwell's equations", following Jackson (p248 in the third edition). Somewhere they've been changed to "in vacuum" and "in material". Although both are a bit misleading I think the current names are worse than the old names: (1) The "in material" equations are also valid in vacuum, and (2) The in vacuum equations explicitly assume not a vacuum, because they include charges and currents, and a true vacuum has no charges or currents. Plus, the equations are valid in materials anyway. I just checked Griffiths, he doesn't use any terminology, he just says "in terms of free charges and currents, then, Maxwell's equations read...". Born and Wolf (1980 version) uses the term "in vacuum" to mean P=M=0. (Also, Born and Wolf surprisingly uses the term "free current" to refer to what in this article is called "total current"!) So there's not much consensus on terminology at least in the physics literature, it seems to me. Thoughts? --Steve (talk) 18:55, 17 November 2010 (UTC)

While I understand your disagreement with the term vacuum, I disagree. If I have a wire in an evacuated vacuum chamber with a current flowing through it then I have a current in a vacuum. I have found more than a few textbooks out of a handful that I checked which used that term. That being said the term 'vacuum' does seem to bring out the anal-retentive side of people and maybe it should be avoided.
I just spent a fair amount of time going through some of my random set of E&M books. The only thing consistent is the inconsistency. In Griffiths the name of the section is 'Maxwell's equations in matter'. One book called it Maxwell's equations in material media but then called them 'macroscopic' in the back. A couple books only had the 'macroscopic' versions but didn't call them that. Corson and Lorrain had three sets that were all just called Maxwell's equations. (The third were in terms of E, B, P, M.) Those who used Maxwell's equations in a vacuum would use different terms for the other equations such as 'in media' or 'in material' or 'in matter'.
Being forced to choose between these is like being forced to choose between Brussels sprouts and liverwurst. I agree that the macroscopic/microscopic is the slightly less unpalatable version. (I might even be guilty of some of that change, though, since I use Griffiths regularly.) Microscopic/macroscopic has the advantage of describing how they are most often used, but both can be used in both microscopic and macroscopic conditions as well. For example the 'microscopic' equations are most useful for 'macroscopic' situations when there are no dielectrics or magnetic materials around.
What I will probably end up doing, unless there is an objection, is renaming the tables and section heads and the like to 'microscopic' and 'macroscopic', since they are the closest thing to being an official name. Then I will need to add some sentences in various places explaining that there is no agreed upon name. My main concern with this is that it will not stick. Every time someone changes something like this there are invariably problems left behind. And if someone, based on the Griffiths section name for instance, changed it back, then what is the point? TStein (talk) 23:11, 17 November 2010 (UTC)

Tstein, I will support your idea to use the terms 'microscopic' and 'macroscopic'. The main difference between the two sets is of course that the microscopic set is a post-Maxwell concept. Maxwell never formulated his equations in a vacuum. And so, as the article already states, the macroscopic equations are closer to what Maxwell had in mind. David Tombe (talk) 00:10, 19 November 2010 (UTC)

Geomagnetic storms

In a geomagnetic storm, a surge in the flux of charged particles temporarily alters Earth's magnetic field, which induces electric fields in Earth's atmosphere, thus causing surges in our electrical power grids. Image not to scale.

While casting about for an example to illustrate Maxwell's equations, I found the illustration in geomagnetic storms. This might be used to illustrate Faraday's law, although it requires an understanding of Ampere's law as well. Historical events include the 1859 and 1989 geomagnetic storms. --Ancheta Wis (talk) 08:36, 19 November 2010 (UTC)


Ancheta Wis, It's OK with me if you use that diagram in the main article. David Tombe (talk) 12:41, 19 November 2010 (UTC)

The Introduction

One should consider substantially shortening the introduction to the first paragraph alone. The second paragraph is too technical to have in the introduction. The material could be reproduced further down if it is not already there. The division discussed in the second paragraph arises for the reason that the medium within which Maxwell derived his equations is no longer part of modern physics. Hence we have a new set of Maxwell's equations (the microscopic equations) to cater for the vacuum. This can be explained further down in the main body of the article. Also, the third paragraph needs a bit of qualification which would be too much for the introduction. Once again it is to do with the fact that Maxwell's elastic luminiferous medium is not part of modern physics. Maxwell's equations as they originally appeared with reference to the luminiferous medium, predict a speed of light which is fixed relative to that medium. It is only when we disregard that medium, as in the case of the modern microscopic equations, that we can say that they imply a 'speed of light' which is independent of the observer, and hence use Maxwell's equations as a starting point for relativity. And since the introduction is hardly the place to discuss that, the material really ought to be brought down into the relativity section, hence leaving the introduction more concise and to the point. Every time that I open this page, the first thing that strikes me, as in the case of the magnetic field article, is that the introduction is far too long. David Tombe (talk) 17:57, 19 November 2010 (UTC)

Although there are general rules, the length and structure of the lead is somewhat a matter of taste. I view the lead primarily as a way to indicate to the reader (and maybe just as important to future editors) what the structure of the article is, in a general way that can't necessarily be picked up from the ToC alone. For instance, I view this article to be structured as following:
A. General discussion followed by quick tables
B. Discussion about the two main variants in differential form.
C. History
D. Other advanced variations
In my opinion as it stands: A) is perfect (other than the unrelated last sentences about Maxwell creating the equations, which need to be moved to C). B) was an unsuccessful first attempt to explain the difference between the two major variants. Two to three sentences should be sufficient for B). I have already dealt with my opinion about C). D) is very tangential to what is needed to be stated here. So as you see, I mostly agree with you. The main problem is that everybody wants to add their pet thing to the lead and it is very hard to keep the bloat down. (Part of the problem is that I am not a very good natural writer, I have to edit my own stuff several times in order to get it reasonable. For example my thought in adding the information about microscopic using B and E with macroscopic using B, E, H, D was to make it easier for those who know the material to understand the terminology. I have agreed with you for a while, though, that that is not optimal for the average user, but I just haven't had time to fix it.)
I will see what I can do to help with the bloat. TStein (talk) 19:42, 19 November 2010 (UTC)

Tstein, Yes it's much better now. I removed the explicit mention of the two sources (charge and electric current) because there are actually three sources, and I figured that explaining the three sources would be too weighty in the introduction. The third source is of course the changing magnetic field which is the source of the electric field in the Maxwell-Faraday law. David Tombe (talk) 00:20, 20 November 2010 (UTC)

Better focusing history section

I am not an expert on history at all, but it seems to me that the history section has some stuff not directly related to Maxwell's equations. In particular, the first paragraph, on the Leyden jar experiment, while interesting and definitely deserving of a place in some article, doesn't seem to belong here.

Also I tried to focus the relativity section, but an extra pair of eyes on that would help as well.

Any thoughts on how to focus the history section more without getting rid of anything important? TStein (talk) 20:41, 3 December 2010 (UTC)

Perhaps showing the evolution of how we picture the solutions; from Faraday's lines of force, to fields, to a criss-cross of connections (e.g., as in Jacob Bronowski's terminology for a field). Thus we redraw the picture by shifting our approach. --Ancheta Wis (talk) 04:29, 4 December 2010 (UTC)

Tstein, It seems to be a common misconception that Weber and Kohlrausch's Leyden jar experiment in 1856 is not directly related to Maxwell's equations. But the fact is that it's because of this experiment that the speed of light gets involved in Maxwell's equations. Hence it is crucial as regards the history of Maxwell's equations. This was in fact recently discussed on the talk page at On Physical Lines of Force. As regards the relativity section, I have just corrected it in relation to the fact that Maxwell's equations can't be a starting point for relativity until the aether is first of all abandoned. Einstein abandoned the aether and then concluded that Maxwell's equations imply the constancy of the speed of light. In their original form, Maxwell's equations predict a speed of light which is fixed relative to the aether. David Tombe (talk) 22:20, 5 December 2010 (UTC)

Tstein, here (last paragraph of the section) is the Leyden jar description when I last rewrote it, and here is when David Tombe rewrote it. Maybe that will help you understand what's going on with that. --Steve (talk) 03:44, 6 December 2010 (UTC)
(I think David's version is not too fringe-y -- a rare treat! -- merely incomprehensible.) I also forgot to mention this one written by me --Steve (talk) 17:40, 6 December 2010 (UTC)
David and Steve: thanks for your comments, I think I understand this much better. The problem then appears to be that this paragraph does not explain its importance enough in the history section. There are four solutions then: 1. remove, 2. move it back to the position where Steve originally put it (where it does fit), 3. use it as a footnote in either of those sections, 4. better integrate it with the history section, explaining precisely how this had to do with the development of Maxwell's equations. The first three are easiest and would improve the article in the short run at least. But depending on the length and how well it was integrated, the last may be better for the article in the long run. I don't have the knowledge nor the time to do the fourth option. If either of you do that would be great; otherwise moving it to a footnote will keep it there until someone has time to do it correctly. TStein (talk) 18:06, 6 December 2010 (UTC)

Tstein, I have just done 4. It's simply a case of emphasizing the fact that it's because of this experiment that the speed of light appears in Maxwell's equations. As regards putting this material into the microscopic equations section where I brought them from, what is the direct significance of the Weber/Kohlrausch experiment to the issue of the microscopic equations other than the issue of the speed of light being involved? Actually, I see now that I brought the material from a section on the EM wave equation. But once again, the material is purely historical since its about how Maxwell came to involve the speed of light in his equations. I considered that historical material to be out of context in that section since we were only dealing with the modern derivation of the EM wave equation using the Heaviside method. David Tombe (talk) 19:52, 6 December 2010 (UTC)

Steve, What exactly do you regard as being incomprehensible? We really do need to see the original Weber/Kohlrausch paper in order to establish exactly what figure they arrived at. I have heard conflicting reports on that. I have heard that they didn't notice the speed of light connection because it was masked by a factor of the square root of two. The Weber constant c was actually the speed of light multiplied by root two. Have a look at the bottom of the first page in this pdf link which contains Kirchhoff's original 1857 paper. I would agree that this complication needs to be made more clear in the article. David Tombe (talk) 19:52, 6 December 2010 (UTC)

In my latest edit I removed the sqrt(2) statement. We have a reliable source that Weber/Kohlrausch did not make the connection to the speed of light, but no reliable source says why they did not make that connection. Certainly the PDF David posted doesn't say. Maybe they did not make the connection because of the sqrt(2), or maybe something else, or maybe a combination of factors...I don't know and I don't think it's too important, and maybe no one will ever know for sure. Again, it's not important to describe all these details in the article. --Steve (talk) 23:15, 6 December 2010 (UTC)

Steve, that's OK. I'm a bit vague myself about the finer details of that episode of history, and it's something which I would like to find out more about. The Dictionary of Scientific Biography would give the impression that the connection to the speed of light wasn't made until Maxwell looked up Weber and Kohlrausch's results in late 1861. But I find that hard to believe. I suspect that there must have already been a rumour in circulation since 1856 to the extent that the linkage had been established between the speed of light and the electromagnetic quantities.

But what is not in doubt is the fact that Maxwell used the results of that 1856 experiment in order to link the speed of light to his electromagnetic theory. And that seems to be a fact which is widely overlooked nowadays. Even those derivations of Maxwell's equations (actually the Lorentz force and the Biot-Savart law, which can then be differentiated curl-wise to get the Maxwell-Faraday law and Ampère's circuital law) which begin with relativity and Coulomb's law still rely on the equation c^2 = 1/(mu)(epsilon), which is based exclusively on the Weber/Kohlrausch experiment of 1856. And until relatively recently, a similar experiment for measuring electric permittivity using a discharging capacitor, was in the textbooks. David Tombe (talk) 00:01, 7 December 2010 (UTC)
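(For reference, a quick numerical check of that relation with modern SI values, assuming <math>\mu_0 = 4\pi\times10^{-7}\ \mathrm{H/m}</math> and <math>\varepsilon_0 \approx 8.854\times10^{-12}\ \mathrm{F/m}</math>:)
<math>c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx \frac{1}{\sqrt{(1.2566\times10^{-6})(8.854\times10^{-12})}} \approx 2.998\times10^{8}\ \mathrm{m/s}</math>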

Who writes two squares for the d'Alembertian?

Some books define generalized del as square, and dot it with itself. Like this one. Dicklyon (talk) 06:57, 28 February 2011 (UTC)

Maxwell's equations and acoustic waves

Everyone knows that waves result from Maxwell's equations. Not only this, but they are the same as 3 acoustic waves! Applying the time derivative to Ampère's law with the Maxwell term, then using the law of induction, then the curl of the curl, then Gauss's law for zero charge density, and finally setting the electric current to zero, results in the wave equation:

<math>\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}</math>

Here, <math>\nabla^2</math> is the vector Laplacian operating on the vector field <math>\mathbf{E}</math>.
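(For reference, a minimal sketch of the derivation described above, assuming vacuum, <math>\rho = 0</math> and <math>\mathbf{J} = 0</math>, and the identity <math>\nabla\times(\nabla\times\mathbf{E}) = \nabla(\nabla\cdot\mathbf{E}) - \nabla^2\mathbf{E}</math>:)
<math>\nabla\times(\nabla\times\mathbf{E}) = -\frac{\partial}{\partial t}(\nabla\times\mathbf{B}) = -\mu_0\varepsilon_0\frac{\partial^2\mathbf{E}}{\partial t^2}, \qquad \nabla(\nabla\cdot\mathbf{E}) - \nabla^2\mathbf{E} = -\nabla^2\mathbf{E} \quad (\text{since } \nabla\cdot\mathbf{E} = 0),</math>
which together give the wave equation above.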

The acoustic wave equation is a scalar equation for the pressure <math>p</math>:

<math>\nabla^2 p = \frac{1}{v^2} \frac{\partial^2 p}{\partial t^2}</math>

The waves are the same: the electromagnetic wave equation represents 3 independent acoustic-type waves, one for each component of the vector electric field under the vector Laplacian. Fantastic! — Preceding unsigned comment added by Paclopes (talkcontribs) 20:32, 12 March 2011 (UTC)

The wave equation that propagates functions with unchanging shape is, in general, second spatial derivative proportional to second time derivative. Not so fantastic, really, just simple math on non-dispersive propagation. Dicklyon (talk) 00:18, 13 March 2011 (UTC)
Actually, wave equations are even more general than that; the Schrodinger equation is a wave equation, for example, and only has one time derivative. And they are certainly much more general than the scalar wave equation that you get from e.g. acoustic or electromagnetic waves in homogeneous isotropic nondispersive linear media. (Acoustic waves in solid media, especially anisotropic inhomogeneous solids, are far more complicated.) See for example my course notes on the algebraic structure of wave equations. — Steven G. Johnson (talk) 01:26, 13 March 2011 (UTC)
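(For reference, the non-relativistic single-particle Schrödinger equation mentioned above, which indeed involves only a first-order time derivative:)
<math>i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi</math>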

Vortex model figure

I believe that there is an error in the figure illustrating Maxwell's vortex model, which shows all of the vortices rotating in the same direction. As I understand the model, when the small spheres between rows of vortices represent current flow in a conductor, they cause the vortices above and below the conductor to rotate in opposite directions (clockwise versus counter clockwise). In an insulator, the carriers can only move a small distance, causing a transient induced current, and then rotate in place, like an idler gear, and cause the adjacent rows of vortices to rotate in the same direction. These rotations lead to the magnetic field perpendicular to the plane of the page. See the drawing in this text book:

http://books.google.com/books?id=Gp2tzUhbqjMC&pg=PA1017&lpg=PA1017&dq=maxwell+vortex+model&source=bl&ots=lccvH3X0BC&sig=OPUZAFKibuiDGH-5QS87Eliwu4s&hl=en&ei=d8mUTc2uJMv2gAeL9dnOCA&sa=X&oi=book_result&ct=result&resnum=3&ved=0CCMQ6AEwAg#v=onepage&q=maxwell%20vortex%20model&f=false

I have drawn a revised figure that I would be happy to contribute.

Dpgoldenberg (talk) —Preceding undated comment added 18:39, 31 March 2011 (UTC).


This figure is a modification of Maxwell's drawing; it is not an exact redrawing of it. The case illustrated is a uniform magnetic field. Since you have drawn another one, why don't you upload your figure and find out which one is preferred by voting? -LaoChen (talk) 22:35, 4 April 2011 (UTC)

Induced Current

Why are the point conductivity of the medium and the resulting induced current, <math>\sigma \mathbf{E}</math>, not written separately from the source current, <math>\mathbf{J}_s</math>, in Ampère's equation, as in some textbooks? Then, in the absence of a source current-density term, as in a conductive medium insulated from that of the source current, <math>\mathbf{J} = \sigma \mathbf{E}</math> would be the unmentioned constitutive relation, and Maxwell's equations would reduce to four equations (including the divergence equations) in only two vector functions, which can be solved for given initial and boundary conditions; the effective permittivity would become complex in the frequency domain, indicating a lossy medium:

<math>\nabla \times \mathbf{H} = \sigma \mathbf{E} + \epsilon \frac{\partial \mathbf{E}}{\partial t} \quad \Rightarrow \quad \nabla \times \mathbf{H} = i\omega\left(\epsilon - i\sigma/\omega\right)\mathbf{E}</math>
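(For context, a brief sketch of the frequency-domain step, assuming harmonic time dependence <math>\mathbf{E} \propto e^{i\omega t}</math> so that <math>\partial/\partial t \to i\omega</math>:)
<math>\sigma\mathbf{E} + \epsilon\frac{\partial\mathbf{E}}{\partial t} \;\to\; \sigma\mathbf{E} + i\omega\epsilon\mathbf{E} = i\omega\left(\epsilon - \frac{i\sigma}{\omega}\right)\mathbf{E}</math>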

Toolnut (talk) 04:22, 26 June 2011 (UTC)

Sure, this is a valid approach and often useful. (Although, less useful in DC, and not directly applicable to arbitrary (non-harmonic) time-dependence, except via Fourier analysis with corrections needed if the medium is not linear.) I only fear it's getting a bit off-topic for this article (always remember...everything in electromagnetism, a gigantic field, is related to Maxwell's equations...to keep this article a reasonable length most things need to be declared "off-topic".) Perhaps it could be fit in, I'm not sure.
It is, however, discussed in other wikipedia articles like permittivity and mathematical descriptions of opacity. :-) --Steve (talk) 13:14, 26 June 2011 (UTC)

Units and summary of equations

This section should explain the motivation for introducing both a differential and an integral form of the equations. Even granting that they are equivalent, it's not obvious what benefit there is to having both forms. (In the Walter Lewin Lectures on Physics, the first-year EM lectures use the integral form exclusively; the second-year course, 8.03, introduces the differential form halfway through, uses it in one lecture, and then goes back to the integral form, although Lewin notes in passing that the differential form is "preferred". I assume this is a consequence of where physics undergraduates are in their parallel track of mathematics courses.) In any event, someone who comes here out of Freshman Physics who hasn't yet done multivariable calculus deserves some explanation of where the differential forms come from and why they are useful, and this section seems the only part of the article written at a sufficiently elementary level to contain that explanation. (Many of the later sections giving alternative formulations give no motivation comprehensible to someone who is not a physicist.) I would also note that freshman-level EM instruction, including both my own some decades ago and the Walter Lewin 8.02 lectures, uses the "microscopic" form exclusively. In general, having more of this article written at a Freshman Physics level would make it more useful to Wikipedia; perhaps some of the more esoteric formulations should be left to an advanced-level textbook. 121a0012 (talk) 07:08, 5 July 2011 (UTC)

One reason for both forms lies in the search for invariants in physics, dating back thousands of years, at least to the time of the Stoics: "You cannot step into the same river twice" etc. I personally think it unfair to sic the topic on this section, which is meant to be a reference, and not a history or philosophy of physics. The ability to transform physical situations into multiple forms is basic to physical thinking, so it's not as if this were esoteric knowledge. Hence the table. Perhaps a note at the top of the section, on the lines of For more on transforming these equations, see also ... might suffice. --Ancheta Wis (talk) 10:52, 5 July 2011 (UTC)
Right now the article doesn't even say that the differential form and integral form are related by the divergence theorem / Stokes theorem! I swear it used to say that, or maybe I'm thinking of a different article. Anyway I think it could be a separate section before or after the history section, "Differential and integral forms", saying that they're related by the divergence theorem / Stokes theorem and explaining why both forms are useful, and not just both forms are true. After all, you can take any equation and apply some mathematical transformations to write it in infinitely many equivalent forms, but normally those other forms are true but not useful or notable. For example, the article Euler's identity doesn't mention that another form is 7*e^(i pi) + 9 = 2.
I don't think this discussion needs to be in the section with the equations, as long as there are enough section-links and such so that people will find it. --13:45, 5 July 2011 (UTC)
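For concreteness, a minimal sketch of that connection for Gauss's law, assuming a fixed volume V with closed boundary surface ∂V and applying the divergence theorem:
<math>\oiint_{\partial V}\mathbf{E}\cdot\mathrm{d}\mathbf{S} = \int_V (\nabla\cdot\mathbf{E})\,\mathrm{d}V = \int_V \frac{\rho}{\varepsilon_0}\,\mathrm{d}V = \frac{Q}{\varepsilon_0}</math>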
I added a (very short) section on the relation between differential and integral forms.
I notice you also suggest using the microscopic form exclusively in the earliest sections, because that's what your freshman physics textbook did. The trouble is, other textbooks, written at the same introductory level, use the macroscopic form exclusively. (Textbooks for more engineering-oriented classes are especially likely to use the macroscopic form exclusively.) So I really don't think it's the case that the microscopic form is "elementary" and the macroscopic form is "more complex". On the other hand, it is true (in my experience) that the differential form is taught after the integral form in most freshman physics courses. I think this a case where the order of presentation which is most sensible for a semester-long course is not the same as the order of presentation which is most sensible for an encyclopedia article. :-) --Steve (talk) 13:49, 29 September 2011 (UTC)

Anisotropic error?

Are the anisotropic consitutive equations

<math>D_j = \epsilon_{ij} E_i, \quad B_j = \mu_{ij} H_i \,</math>

wrong? They always seem to be given as:

<math>D_j = \epsilon_{ji} E_i, \quad B_j = \mu_{ji} H_i \,</math>

Two example references are:

http://www.intechopen.com/source/pdfs/16401/InTech-The_eigen_theory_of_electromagnetic_waves_in_complex_media.pdf,
Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010.

Maschen (talk) 16:27, 16 September 2011 (UTC).

Maschen, Landau & Lifshitz (1960) Course of Theoretical Physics Vol 8: Electrodynamics of Continuous Media p314 (ch. XI "Electromagnetic Waves in Anisotropic Media") state <math>\epsilon_{ij} = \epsilon_{ji}</math> in the absence of a magnetic field. In the presence of a magnetic field (p.331) <math>\epsilon_{ij}(H) = \epsilon_{ji}(-H)</math> and in a non-absorptive medium <math>\epsilon_{ij} = \epsilon_{ji}^*</math>, i.e., <math>\epsilon</math> is Hermitian. As you can see, the situation is material-dependent, but Landau & Lifshitz argue that Maxwell's equations still apply. Landau & Lifshitz derive the expected behavior for anisotropic media, for the Kerr cell, at the interface to a metal, etc., so Maxwell's equations still serve. --Ancheta Wis (talk) 17:10, 16 September 2011 (UTC)
It is true that, except in magneto-optic materials that break reciprocity, the constitutive tensors are symmetric and so in principle you could freely swap the indices. However, it is more conventional to write the indices in the order of the standard matrix-vector product, since ε is normally expressed as a matrix in which <math>\epsilon_{ij}</math> is the entry of the i-th row and the j-th column. Therefore, I think we should use the standard <math>D_i = \sum_j \epsilon_{ij} E_j</math> notation rather than swapping tensor indices. — Steven G. Johnson (talk) 19:55, 17 September 2011 (UTC)
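(For illustration, a sketch of that convention written out as an explicit matrix-vector product:)
<math>\begin{pmatrix}D_1\\ D_2\\ D_3\end{pmatrix} = \begin{pmatrix}\epsilon_{11}&\epsilon_{12}&\epsilon_{13}\\ \epsilon_{21}&\epsilon_{22}&\epsilon_{23}\\ \epsilon_{31}&\epsilon_{32}&\epsilon_{33}\end{pmatrix}\begin{pmatrix}E_1\\ E_2\\ E_3\end{pmatrix}, \qquad D_i = \sum_j \epsilon_{ij}E_j</math>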
I agree with everything Stevenj said. :-) --Steve (talk) 20:42, 17 September 2011 (UTC)

Ok, thanks for the explanation. Maschen (talk) 08:52, 18 September 2011 (UTC)

The Maxwell equations given as four electromagnetic equations in the article are the ones for the Lorenz gauge. A general one including should replace it. — Preceding unsigned comment added by 113.149.81.173 (talk) 12:14, 5 October 2011 (UTC)

Recent edit to Maxwell's equations#Traditional formulation

The symbol D was replaced by . The way I understand it, the former is usually used to denote covariant derivative (which is what is intended), unlike the latter which usually denotes a simple "point derivative" (and is not in general covariant). As I am not familiar enough with the "normal notation" – can others comment? Quondumcontr 06:21, 7 November 2011 (UTC)

Advanced wave solution

IMO Advanced wave solutions of the Maxwell equations implying negative energy and negative frequency leading to backwards time travel should have a mention in this article. Any comments? Suraj T 04:11, 11 November 2011 (UTC)

I do not think that any specific mention is appropriate in relation to classical electromagnetics. As I interpret it, this is simply a mathematical reframing of the positive-energy forward-time absorber and positive-energy forward-time emitter solutions. To mention "negative energy" and "backwards time travel" is then simply using terminology that confuses rather than clarifies, and in fact invites fringe science through the implication that there is something significant or unexpected implied by the reframing. A glance at the reference suggests that what it says is consistent with my perspective. Quondumcontr 05:32, 11 November 2011 (UTC)
I watched a tv program where Michio Kaku mentioned that Advanced wave solutions have baffled scientists for decades. I came here to look for more info but found none. A single line mention perhaps stating that these solutions exist? Suraj T 05:48, 11 November 2011 (UTC)
TV programs are inclined to present things out of context, and it is difficult to piece together what was really meant by scientists from such programs. I understand that the interpretation in the case of quantum mechanics, for example in Quantum electrodynamics, is more baffling. There one apparently has not only advanced and retarded propagators, but a third propagator as well. Since Michio Kaku is more involved with modern physics, I would not be surprised if a comment by him related to this rather than to classical electromagnetism. And until an editor familiar with the detail adequately defines "advanced" and "retarded" solutions, I do not think they should be added, or at least not with the terms "negative time" or "negative energy" attached. Quondumcontr 10:11, 11 November 2011 (UTC)
It looks like this is discussed in the wikipedia article Transactional interpretation. --Steve (talk) 17:13, 11 November 2011 (UTC)


epsilon0 vs epsilon in "Table of microscopic equations"

Gauss' law reads del(E) = rho/epsilon0

Isn't this the special case of vacuum, the more general case using the actual dielectric constant (epsilon), rather than that of vacuum (epsilon0)?

Someone who's sure of this, please fix! Thanks! Michi zh (talk) 15:47, 7 January 2012 (UTC)

The vacuum case refers to where there is no matter (and hence charge or current) present, and hence when ρ=0 and J=0. Do not confuse this with the different treatments, being the "microscopic" one where all bound current is treated as part of the current J and ε0 is used, and the "macroscopic" one where the bound current is handled as the time-derivative of the polarization P and a "dielectric constant" ε applies. — Quondumc 16:05, 7 January 2012 (UTC)
To be more specific, imagine that an extra electron gets shoved into the center of a block of glass. The electron polarizes the glass atoms nearby, drawing positive "bound charge" from the glass atoms towards itself (more specifically, the glass's electrons are pushed away but its nuclei stay in place, so there's positive charge from glass atoms in the vicinity of the extra electron). So you might think that there's a charge of -e at the spot where you had placed the electron, but actually the total charge there is much less: -0.1*e (assuming the dielectric constant of glass is 10), which is the sum of -e from the original electron and +0.9e from the glass atoms that were polarized. The terminology is that the "free charge" is -e, and the "total charge" is -0.1e. The equations are: del(E) = total charge density/epsilon0, and del(E) = free charge density/epsilon. (Both are correct.) :-)
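(A minimal worked version of those numbers, assuming the simple screening relation <math>q_\text{total} = q_\text{free}/\epsilon_r</math> for a point charge embedded in an infinite linear dielectric with relative permittivity <math>\epsilon_r = \epsilon/\epsilon_0 = 10</math>:)
<math>q_\text{total} = \frac{q_\text{free}}{\epsilon_r} = \frac{-e}{10} = -0.1e, \qquad q_\text{bound} = q_\text{total} - q_\text{free} = -0.1e - (-e) = +0.9e</math>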
The section is written to try to make this very clear. It does seem to me that most people see the links and definition...otherwise someone would have complained much sooner! The only other thing I can think of is to maybe use <math>\rho_t</math> for total charge instead of <math>\rho</math>. Or put in an image... --Steve (talk) 17:34, 7 January 2012 (UTC)
Nice, clear description. And I see there is already a distinction made between J and Jf, and ρ and ρf. But I see there is no such distinction between the symbols for microscopic and macroscopic versions of the fields D and B, and this is perhaps an omission. I would have subscripted the macroscopic versions used in the article; the microscopic versions should be D=ε0E and B=μ0H. — Quondumc 18:31, 7 January 2012 (UTC)
The result div E = ρ / ε0 is correct. See this front cover and this textbook, Eq. 1.2.14. Brews ohare (talk) 00:42, 8 January 2012 (UTC)

Closed Double Integral

I saw that throughout the article the loops in the double integral symbol were not consistent. In the section Units and Summary of Equations, the loops were smaller in the microscopic equations and bigger in the macroscopic equations. I couldn't find any standard way to write this symbol, as there is no such symbol in LaTeX without packages, and I don't think you can add packages in wikipedia. Can someone fix this or give advice on how to write this symbol consistently? Pratyush Sarkar (talk) 03:50, 22 January 2012 (UTC)

I have made it consistent within the apparent constraints of the rendering system, and have elected to keep the spacing as for \iint (<math>\iint</math>). A smaller spacing may be more aesthetic, easily achieved by using \int twice and suitable spacing adjustment, but perhaps then the general \iint spacing should also be reduced for consistency (e.g. \iint\int\!\int or similar). There is also another system that uses a pre-rendered PNG file: see {{oiint}}. — Quondum 09:37, 22 January 2012 (UTC)
I just left it the way it is since the spacing is different in the {{oiint}} package than \iint. Pratyush Sarkar (talk) 22:22, 22 January 2012 (UTC)

Late as it may be to reply, but maybe this could be the first article that actually uses this template? It's fine - so what if the spacing is slightly different, as long as it is constant for any repeated use (hence its form as a template)?? I'm not saying this as the initial author of those templates for the sake of vanity; it's because other editors have worked extremely hard to render them to the quality they are now. Shall their work go to waste?


Here are Maxwell's integral equations for total charges and currents:

<math>\oiint_S \mathbf{E}\cdot\mathrm{d}\mathbf{S} = Q/\epsilon_0</math>
<math>\oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S} = 0</math>
<math>\oint_C \mathbf{E}\cdot\mathrm{d}\boldsymbol{\ell} = \frac{\partial}{\partial t} \oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S}</math>
<math>\oint_C \mathbf{B}\cdot\mathrm{d}\boldsymbol{\ell} = \mu_0 \oiint_S \left(\mathbf{J} + \epsilon_0\frac{\partial\mathbf{E}}{\partial t}\right)\cdot\mathrm{d}\mathbf{S}</math>

and for free charges and currents:

<math>\oiint_S \mathbf{D}\cdot\mathrm{d}\mathbf{S} = Q</math>
<math>\oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S} = 0</math>
<math>\oint_C \mathbf{E}\cdot\mathrm{d}\boldsymbol{\ell} = \frac{\partial}{\partial t} \oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S}</math>
<math>\oint_C \mathbf{H}\cdot\mathrm{d}\boldsymbol{\ell} = \oiint_S \left(\mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}\right)\cdot\mathrm{d}\mathbf{S}</math>


Well, by no means am I forcing them into the article, it's just my recommendation. =) -- F = q(E + v × B) 00:13, 30 January 2012 (UTC)

I should add: this article, continuity equation, and other vector calculus articles were the main motivation for creating this template. That's why it exists. =)-- F = q(E + v × B) 01:19, 30 January 2012 (UTC)
I, for one, do not object to such a change: the new template is aesthetically more pleasing. The ideal solution would of course be to have suitable TeX rendering, but we do not have that. I did not introduce that change, since I felt that having a consistent non-template basis may be better as a reference point, but did anticipate that this suggestion would be made. I would simply ask that we simultaneously think about (but do not necessarily change) the spacing in <math>\iint</math>. We will also have to be aware of the potential for display issues with some browsers when we make changes. — Quondum 04:31, 30 January 2012 (UTC)
It's nice to read this - by all means don't launch into changes if in doubt or you think it's best not to! =) If so, the \iint may change to <math>\int\!\!\!\int_S \,\!</math> (that is, <math>\int\!\!\!\int_S\, \,\!</math> written with two \int commands and negative spacing). I can't think of any issues with the display of templates; as long as they are as above (which most formulae in the article are) there should be no problem. Let's see by adding them to a table (again, some formulae in the article are in tables):
total charges and currents | free charges and currents
<math>\oiint_S \mathbf{D}\cdot\mathrm{d}\mathbf{S} = Q</math> | <math>\oiint_S \mathbf{D}\cdot\mathrm{d}\mathbf{S} = Q</math>
<math>\oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S} = 0</math> | <math>\oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S} = 0</math>
<math>\oint_C \mathbf{E}\cdot\mathrm{d}\boldsymbol{\ell} = \frac{\partial}{\partial t} \oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S}</math> | <math>\oint_C \mathbf{E}\cdot\mathrm{d}\boldsymbol{\ell} = \frac{\partial}{\partial t} \oiint_S \mathbf{B}\cdot\mathrm{d}\mathbf{S}</math>
<math>\oint_C \mathbf{H}\cdot\mathrm{d}\boldsymbol{\ell} = \oiint_S \left(\mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}\right)\cdot\mathrm{d}\mathbf{S}</math> | <math>\oint_C \mathbf{H}\cdot\mathrm{d}\boldsymbol{\ell} = \oiint_S \left(\mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}\right)\cdot\mathrm{d}\mathbf{S}</math>
If we do come across any severe problems (unlikely), the transition to the new template can be decommissioned. -- F = q(E + v × B) 08:28, 30 January 2012 (UTC)
I will not do so for this article, but definitely agree the template should be added. I added it to the continuity equation article recently, and it seems fine. --Maschen (talk) 12:29, 7 February 2012 (UTC)

quaternions

The article should mention that the quaternion form of the equations is much simplified. The subject of quaternions is frequently neglected in mathematics. It should not be. — Preceding unsigned comment added by Skysong263 (talkcontribs) 02:49, 25 January 2012 (UTC)

As I understand it, the quaternion approach is absorbed by and superseded by the geometric algebra approach, which is mentioned in the article. As such, a mention of quaternions would only have historical relevance, and would have to be sourced. — Quondum 07:23, 25 January 2012 (UTC)

clarification (I hope)

I do not intend to extensively edit this article (honest). However, the section Table of 'microscopic' equations was utterly confusing as to what was supposed to be said about the integrals - so I tried to clarify the prosy wording. (Surely I am not the only one???) I also added the oiint template \oiint there for good measure (they are closed surfaces, right? of course - as stated in the table just above them). -- F = q(E + v × B) 15:57, 25 February 2012 (UTC)

Another (minor) point on clarification, is it the right approach to use the "boundary notation" <math>\partial V = S, \, \partial S = C</math> right from the beginning? I guess its no problem with explanation, but maybe readers can grab the concept better by just using S for surface and C for curve - more intuitive? less confusing? It doesn't matter, just pointing the obvious out.-- F = q(E + v × B) 16:17, 25 February 2012 (UTC)
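(For a concrete instance of the boundary notation in use, assuming S is a fixed open surface whose boundary curve is <math>\partial S = C</math>, Stokes' theorem reads:)
<math>\oint_{\partial S}\mathbf{E}\cdot\mathrm{d}\boldsymbol{\ell} = \iint_S (\nabla\times\mathbf{E})\cdot\mathrm{d}\mathbf{S}</math>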

<math>\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P}</math> and <math>\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M})</math> are not called constitutive relations. Only <math>\mathbf{D} = \varepsilon \mathbf{E}</math> and <math>\mathbf{B} = \mu \mathbf{H}</math> are called constitutive relations. Big difference...The former two equations are always true by definition, the second two equations are empirical assumptions about materials, assumptions that may or may not be accurate.
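(For a concrete illustration of that distinction: in a linear isotropic dielectric one additionally assumes <math>\mathbf{P} = \varepsilon_0\chi_e\mathbf{E}</math>, and only then does the definition reduce to a constitutive relation:)
<math>\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P} = \varepsilon_0(1+\chi_e)\mathbf{E} = \varepsilon\mathbf{E}, \qquad \varepsilon \equiv \varepsilon_0(1+\chi_e)</math>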
The part you were complaining about--I think about whether S and V are changing in time--is altogether unnecessary. I deleted it. We can just say that S and V are not changing in time. The equations with that restriction are still correct and complete. "What happens if S and V might change?" is an interesting and worthwhile homework problem, but not at all essential for understanding Maxwell's equations.
I don't think many readers will guess that C means curve and S means surface, and it is also risky to have the symbol S referring to a closed surface in one equation and open in another. And even if they do understand that C means curve, what curve is it?? Anyway, it's a dangerous game for readers to be guessing what the symbols mean. They are liable to guess wrong. A few strange symbols are kinda nice insofar as they encourage readers to actually scroll down and take a look at the table. I think that when they see that <math>\partial</math> can mean "boundary of" they will say "Oh, that's a nice and useful notation, maybe I'll start using it myself." Just my opinion :-)