Bosonic and Fermionic Calculus

Traditional calculus often mixes up different spaces, mostly for pedagogical reasons. It is a bit like function overloading in programming, but there is a price to be paid, and it includes confusion when doing things in the discrete.

Here are some examples: while in linear algebra we consider row and column vectors, in multivariable calculus we only look at one type of vector. We even throw affine and linear vectors into one pot. This is perfectly fine, as we would otherwise produce unnecessary complications, like having to consider the cross product of two vectors as a covector and not a vector. We also treat differential forms as vector fields and define the divergence as a type of exterior derivative rather than an adjoint. This is all fine. But there are places where pedagogy does the wrong thing. An example is the definition of line integrals or flux integrals. Some textbooks produce an intermediate integral, the scalar line or scalar surface integral. This not only complicates things because more integrals are built in; these integrals are also completely different beasts. One only becomes fully aware of this when working in a discrete calculus, for example when looking at calculus on graphs.

Here is a first riddle, which many students ponder when learning calculus: how come that if we compute an arc length or surface area, the result is independent of how we integrate? You see, if we take the function |r'(t)|, the speed, then this is a direction-independent notion, a non-negative scalar. If we travel from A to B and parametrize this with a curve r(t), then ∫ab |r'(t)| dt is a non-negative number, also if we integrate backwards: arc length is independent of the parametrization. If we look at a line integral ∫ab F(r(t)) . r'(t) dt however, then the orientation matters. Similarly, when we compute a surface area ∫ ∫ |ru x rv| du dv, this does not depend on the surface orientation, while the flux integral ∫ ∫ F(r(u,v)) . ru x rv du dv does depend on the orientation. You might bend your mind, somehow blame the dt or du dv, or just think you don't understand well enough. What actually happens is that the objects are of a completely different nature. In reality, there is an integration which is orientation dependent, and there is an integration which does not depend on orientation. Putting these two types of integrals so close together, as many calculus books do, not only confuses, it also complicates things.
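A minimal numerical sketch of this riddle, in pure Python with a midpoint rule (the concrete parabola curve and the field F(x, y) = (y, x) are my choices for illustration, not from the text): the arc length is unchanged when the parametrization is reversed, while the line integral flips its sign.

```python
def integrate(f, a, b, n=100000):
    # midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# curve r(t) = (t, t^2) from A = (0,0) to B = (1,1); r'(t) = (1, 2t)
speed = lambda t: (1 + 4 * t * t) ** 0.5   # |r'(t)|, a non-negative scalar
work = lambda t: 3 * t * t                 # F(r(t)) . r'(t) for F(x, y) = (y, x)

# reversed parametrization s(t) = r(1 - t) travels from B back to A
speed_rev = lambda t: speed(1 - t)         # |s'(t)| = |r'(1 - t)|, still non-negative
work_rev = lambda t: -work(1 - t)          # s'(t) = -r'(1 - t) flips the sign

arc = integrate(speed, 0, 1)               # arc length, direction independent
arc_rev = integrate(speed_rev, 0, 1)       # same number
line = integrate(work, 0, 1)               # line integral = 1 (here F = grad(xy))
line_rev = integrate(work_rev, 0, 1)       # = -1, orientation dependent
```

The two integrals look syntactically alike, but only the second one reacts to the orientation of the curve.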

When looking at integrals ∫ f(t) |r'(t)| dt, one should see them as modifications of the arc length. They can model an average along a curve or the mass of an inhomogeneous wire. Mathematically they are one-dimensional valuations with respect to an inhomogeneous background measure. The line integral of a vector field however integrates a 1-form, which is intrinsically a completely different object, as differential forms are anti-symmetric tensors. It is like confusing Fermions with Bosons or symmetric tensors with anti-symmetric tensors. Mixing such things up is extremely poor taste, even if it is done with the best pedagogical intentions.
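As an illustration of such a valuation, here is a short sketch (the parabola wire and the density f(x, y) = x are hypothetical choices): the mass of an inhomogeneous wire is a non-negative number that does not care about the direction of travel.

```python
def integrate(f, a, b, n=100000):
    # midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# wire along r(t) = (t, t^2) with density f(x, y) = x:
# mass = integral of f(r(t)) |r'(t)| dt = integral of t sqrt(1 + 4t^2) dt
mass = integrate(lambda t: t * (1 + 4 * t * t) ** 0.5, 0, 1)

# reversed parametrization s(t) = r(1 - t): the mass is unchanged
mass_rev = integrate(lambda t: (1 - t) * (1 + 4 * (1 - t) ** 2) ** 0.5, 0, 1)
```

The exact value is (5^(3/2) - 1)/12, and both parametrizations reproduce it.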
Now there are notions in the continuum which are close to the discrete and are intuitive: this is done with differentials, which are quite handy. Still, as with non-standard calculus, the overhead appears too big. In order to define differentials properly and appreciate them, one needs quite a bit of sophistication. Similarly, understanding non-standard analysis requires some logic and real analysis background in order not to get lost. I personally use differentials and non-standard analysis only on an intuitive level, where they can be quite powerful, but do not teach them, as they can become a source of serious errors.

In classical mathematics, these distinctions only start to matter when looking at geometric measure theory or integral geometry, where, unlike in measure theory, the objects under consideration have more structure. They can be duals of differential forms or have internal simplicial structure. What is going on is that integrals like length, area or volume are valuations which integrate over non-oriented simplicial complexes, while line integrals, flux integrals, or any integral entering a fundamental theorem of calculus integrate over oriented simplicial complexes.

It starts already in one dimension. When looking at the fundamental theorem of calculus, the derivative of a function is a 1-form. Integration then depends on the orientation and ∫ab f'(x) dx = – ∫ba f'(x) dx. When computing an area, we add up positive quantities, and whether we go from left to right or from right to left should not matter. It is like adding up numbers on a spreadsheet, where it does not matter whether we add from the left to the right or from the right to the left. That we are dealing with two types of integrals becomes visible only in the discrete: in the first case, the function f'(x) is actually a function on the edges of the graph, while in the second case, |r'(x)| is a scalar, given as the square root of r'(x) . r'(x), which is a function on vertices. Not that I believe these things should be pointed out in a calculus course; it is the teachers and especially the textbook writers who have to be aware of it. When explaining calculus, we take great care to hide these difficulties: in the following 15-minute review for example, the integral is defined as Archimedes did it, and the fact that for the fundamental theorem of calculus the integrand is a 1-form stays hidden. It would be a big no-no to explain the difficulty.
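The discrete picture can be sketched in a few lines of Python (the concrete function f(k) = (k-2)^2 on a path graph with six vertices is a hypothetical choice): the derivative lives on the edges, the oriented sum telescopes and flips sign under reversal, while the non-oriented sum of absolute values does not.

```python
# path graph with vertices 0..5; f lives on the vertices,
# its derivative df lives on the edges (k, k+1)
f = [(k - 2) ** 2 for k in range(6)]            # 4, 1, 0, 1, 4, 9
df = [f[k + 1] - f[k] for k in range(5)]        # 1-form: one value per oriented edge

fermionic = sum(df)                             # telescopes to f[5] - f[0] = 5
fermionic_rev = sum(-d for d in reversed(df))   # reversed orientation: -5
bosonic = sum(abs(d) for d in df)               # valuation (total variation): 13
bosonic_rev = sum(abs(d) for d in reversed(df)) # same 13, orientation irrelevant
```

The Fermionic sum is the discrete fundamental theorem of calculus; the Bosonic sum is a length-like valuation with no such shortcut.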

We can even hide the difficulty when dealing with calculus on a one-dimensional graph, as in this Pecha-Kucha (20 x 20 seconds) talk. It is just when we work on general graphs, and especially when looking at PDEs on graphs, that things start to matter.

Let us look at another example: the case of surface integrals. In the discrete, we then deal with valuations counting triangles, possibly with weights. In the flux integral case, however, we have differential forms, functions on oriented triangles, and the answer is now orientation dependent. Multivariable calculus is already tough enough, why complicate things? Knowing the background actually gives more arguments to consider the simplified version, where we only have one type of integral in each dimension: the line integral of a vector field along one-dimensional objects, the flux integral of a vector field along a two-dimensional surface, and the three-dimensional integral over a three-dimensional solid. Identifying 1-forms with 2-forms and 0-forms with 3-forms in three dimensions is of course OK, as is not mentioning differential forms at all and just looking at scalar functions as well as vector fields.
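A small sketch of the two discrete objects (the graph K4 is my choice of example): the valuation just counts triangles, while a 2-form assigns a value to each oriented triangle and flips sign under an odd permutation of its vertices.

```python
from itertools import combinations

# valuation side: count the triangles of K4 (no orientation involved)
vertices = range(4)
edges = set(combinations(vertices, 2))
triangles = [t for t in combinations(vertices, 3)
             if all(e in edges for e in combinations(t, 2))]

# form side: a 2-form assigns a value to each ORIENTED triangle
g = {t: 1.0 for t in triangles}   # values on the chosen reference orientations

def g_oriented(a, b, c):
    # evaluate g on the triangle (a, b, c) with its given orientation:
    # odd permutations of the vertices flip the sign
    perm = (a, b, c)
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if perm[i] > perm[j])
    return (-1) ** inv * g[tuple(sorted(perm))]
```

Counting triangles gives 4 regardless of any labeling; the 2-form changes sign the moment two vertices are swapped.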

To summarize, it is good to be aware that there is a symmetric calculus, which features an integration without a fundamental theorem but belongs to the valuations of integral geometry. Then there is an anti-symmetric calculus which enters the fundamental theorem of calculus, which is the theorem of Stokes in higher dimensions. The first is a Bosonic calculus, the second a Fermionic calculus. As in complexity theory (permanents and determinants, NP or P) or physics (stability of matter), the Fermionic calculus is often more pleasant. In calculus, it is the fundamental theorem of calculus which allows us to take shortcuts. In Bosonic calculus, there is often no shortcut: similarly as we cannot compute permanents easily, we cannot count the number of triangles in a large graph easily without just counting them.
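The permanent and determinant analogy can be made concrete (a brute-force sketch over permutations, my own illustration): both are sums over permutations, and they differ only in the alternating sign. Yet that sign is exactly what makes the determinant computable quickly by Gaussian elimination, while no comparably fast general algorithm for the permanent is known.

```python
from itertools import permutations

def det_and_perm(A):
    # both as sums over permutations; only the determinant carries
    # the alternating sign of the permutation
    n = len(A)
    det = perm = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        det += (-1) ** inv * prod   # Fermionic: signed sum
        perm += prod                # Bosonic: unsigned sum
    return det, perm
```

For A = [[1, 1], [1, 1]] this gives determinant 0 but permanent 2: the signs cancel in the first, nothing cancels in the second.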

What does this have to do with quantum calculus? It is there that we can no longer hide our ignorance and have to distinguish between summing up function values attached to oriented simplices or to non-oriented simplices. This distinction is also important when building interaction calculus, a new type of calculus which does not seem to exist in the continuum. There is a Bosonic version which is used for example when defining the Wu characteristic (the analogue of the Euler characteristic in interaction calculus), and then there is a Fermionic version which, as in traditional calculus, leads to a simplicial cohomology. Now, cohomology is always a very Fermionic construct, as we need d^2 = 0. Interaction cohomology is exciting, as it allows us to attach numbers to topologies in an entirely algebraic way. It allows us to distinguish the Moebius strip from the cylinder for example: see the computation.
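A sketch of the two characteristics for the Whitney complex of a small graph, assuming the definition of the Wu characteristic as the sum of (-1)^(dim x + dim y) over ordered pairs of intersecting simplices (the graphs C4 and K3 are my choice of test cases):

```python
from itertools import combinations

def simplices_of_graph(vertices, edges):
    # the complete subgraphs of the graph are the simplices
    # of its Whitney complex
    E = {frozenset(e) for e in edges}
    return [frozenset(s)
            for k in range(1, len(list(vertices)) + 1)
            for s in combinations(vertices, k)
            if all(frozenset(p) in E for p in combinations(s, 2))]

def euler(simplices):
    # Euler characteristic: sum of (-1)^dim(x) over all simplices
    return sum((-1) ** (len(x) - 1) for x in simplices)

def wu(simplices):
    # Wu characteristic: sum of (-1)^(dim x + dim y) over ordered
    # pairs of simplices with non-empty intersection
    return sum((-1) ** (len(x) + len(y))
               for x in simplices for y in simplices if x & y)

C4 = simplices_of_graph(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])  # a circle
K3 = simplices_of_graph(range(3), [(0, 1), (1, 2), (0, 2)])          # a triangle
```

For the circle C4 both characteristics vanish; for the triangle K3 both are 1. The Wu characteristic is the Bosonic count; its Fermionic refinement, interaction cohomology, is what separates spaces like the Moebius strip and the cylinder that such numbers alone cannot.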