Here is another “poem” that has been added to the “math poetry page”. Given a simplicial complex G, it computes the automorphism group Aut(G) (its group of symmetries, meaning invertible simplicial maps) and for each T in Aut(G) computes both the Lefschetz number and the sum of the indices over the fixed point set Fix(T) of T. The discrete Lefschetz fixed point theorem assures that $L(T) = \sum_{x \in \mathrm{Fix}(T)} i_T(x)$, where $i_T(x) = (-1)^{\dim(x)} \mathrm{sign}(T|_x)$ is the index of a fixed simplex $x$. For T=1 (the identity) it is the Euler-Poincaré formula $\sum_k (-1)^k v_k = \sum_k (-1)^k b_k$.
The Lefschetz number is the super trace of the induced map on cohomology, which becomes the Euler characteristic for the identity. So, this is another contribution to algorithmic poetry. It should be self-evident what each part does. The matrix Q for example has as its columns a basis of the kernel of the Hodge Laplacian $H = D^2$, spanning the harmonic forms; the Koopman operator U is defined by $Uf = f(T)$. We use the formula $P = Q (Q^T Q)^{-1} Q^T$ for the projection onto the column space, taught in every linear algebra course. I needed here only 4 lines to get to the Dirac matrix D=d+d*. Computing the Betti numbers $b_k = \dim \ker(L_k)$, where $L_k$ is the block of $H$ acting on $k$-forms, would be another line (see “5 lines for cohomology” on this blog from 2023, and the sketch after the code below). In the following code, we take as an example the complete tripartite graph $K_{3,4,2}$ and compute the Lefschetz numbers for all the automorphisms. (That part of course becomes too heavy if the complex is too large; for larger complexes we cannot form all the permutations.) In my 2012 paper I had also interpreted the average over all Lefschetz numbers as an Euler characteristic, computed the dynamical zeta function of an automorphism (a rational function), and then defined the product over all these zeta functions as the zeta function of the complex; a sketch of that too follows the code.
(* Lefschetz, 8/26/25, O. Knill https://fixedpointtheoryandalgorithms.springeropen.com/articles/10.1186/1687-1812-2013-85 *)
CleanC[G_]:=Union[Sort[Map[Sort,G]]]; Generate[A_]:=If[A=={},A,CleanC[Delete[Union[Sort[Flatten[Map[Subsets,A],1]]],1]]];   (* close a set of sets under taking non-empty subsets *)
Whitney[s_]:=Generate[FindClique[s,Infinity,All]]; L=Length; s[x_]:=Signature[x];   (* Whitney complex of a graph; sign of a simplex *)
s[x_,y_]:=If[SubsetQ[x,y]&&(L[x]==L[y]+1),s[Prepend[y,Complement[x,y][[1]]]]*s[x],0]; GG[k_]:=Map[Sort,Select[G,L[#]==k &]];   (* incidence sign; sets of cardinality k in G *)
Dirac[G_]:=Table[s[G[[i]],G[[j]]]+s[G[[j]],G[[i]]],{i,n},{j,n}]; Str[X_]:=Sum[X[[k,k]]*(-1)^(L[G[[k]]]-1),{k,n}];   (* Dirac matrix D=d+d*; super trace *)
g=CompleteGraph[{3,4,2}]; G=Whitney[g]; n=L[G]; V=Flatten[Select[G,L[#]==1 &]]; DD=Dirac[G]; HH=DD.DD;   (* example complex; Hodge Laplacian H=D^2 *)
AutQ[T_]:=Module[{G1=GG[2]},G1==Sort[Map[Sort,(G1 /. Table[V[[k]]->T[[k]],{k,L[V]}])]]];S[T_]:=Table[V[[j]]->T[[j]],{j,L[V]}];   (* does the vertex permutation T preserve the edge set? *)
Q=Transpose[NullSpace[HH]]; P=Q.Inverse[Transpose[Q].Q].Transpose[Q]; Aut=Select[Permutations[V],AutQ[#] &];   (* projection onto harmonic forms; automorphism group *)
U[T_]:=Table[x=G[[k]];y=G[[l]];z=y /.Table[V[[j]]->T[[j]],{j,L[V]}];If[x==Sort[z],s[z],0],{k,n},{l,n}];   (* signed Koopman matrix of T *)
f[T_]:=Module[{J=S[T]},Fix=Select[G,Sort[# /.J]==#&];ind[x_]:=(-1)^(L[x]-1)*s[x/.J];{Total[Map[ind,Fix]],Str[P.U[T]]}];   (* fixed point index sum and Lefschetz number str(P U(T)) *)
R=Map[f,Aut]; {Union[R],Total[R]/L[R]}   (* the two numbers agree for every T; average Lefschetz number *)
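For completeness, here is the extra line alluded to above that computes the Betti numbers, together with a sketch of the dynamical zeta function $\zeta_T(z) = \exp(\sum_{m \geq 1} L(T^m) z^m/m)$, evaluated only as a truncated Taylor series. The names Betti, Lef and zeta are mine and not part of the poem, and the normalization of the zeta function should be checked against the 2012 paper:

Betti[k_]:=Module[{ind=Flatten[Position[Map[L,G],k+1]]},L[NullSpace[HH[[ind,ind]]]]];   (* b_k = dim ker(L_k), the k-th block of HH *)
Table[Betti[k],{k,0,2}]   (* Betti vector of the example, which should give {1,0,6} *)
Lef[T_,m_]:=Str[P.MatrixPower[U[T],m]];   (* Lefschetz number of T^m, using U[T^m]=U[T]^m *)
zeta[T_,z_,m_]:=Normal[Series[Exp[Sum[Lef[T,j] z^j/j,{j,m}]],{z,0,m}]];   (* Taylor polynomial of zeta_T *)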
I find that exercises on brevity are always helpful. Condensation is an important part of understanding. It also helps to clarify. I don’t think, for example, that one could explain faster what a super trace is than by just writing down its code Str[X_]:=Sum[X[[k,k]]*(-1)^(L[G[[k]]]-1),{k,n}], where L=Length. Sacrificing a bit of brevity, one could first define dim[x_]:=Length[x]-1 and then define the super trace of a matrix X over a complex as Str[G_,X_]:=Sum[X[[k,k]]*(-1)^dim[G[[k]]],{k,Length[G]}], with the understanding of course that if the set of sets G has n elements, then X is an n×n matrix.
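As a small illustration (my addition, using G and n from the poem above): the super trace of the identity matrix is the Euler characteristic of the complex, because each diagonal entry contributes $(-1)^{\dim(x)}$:

Str[IdentityMatrix[n]]   (* = 9 - 26 + 24 = 7 for the Whitney complex of K_{3,4,2} *)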
To the right is a 1 minute derivation and explanation from 2023 of the data fitting formula. There is hardly a better “mathematical rabbit” I could pull out of my hat than this data fitting formula. I do not think this is an overstatement. Data fitting is simply everywhere in a world of data. It came in handy for example also for this 360 panorama computer vision project. Data fitting is one of the main pillars of statistics. And it is a sparkling piece of math which shows how helpful linear algebra is in that area. In many introductory stats courses (especially in high school), data fitting like regression must be introduced without linear algebra and is not even justified (a calculus justification needs multivariable calculus, although with a trick one can do linear regression even with single variable calculus). The linear algebra approach is not only much more general, as we can fit with anything; it is also very simple: if we want the best solution on im(Q) to the equation Qx=b, then b-Qx is in the orthogonal complement of im(Q), which means $Q^T (b - Qx) = 0$, allowing us to solve for $x = (Q^T Q)^{-1} Q^T b$ and so get the projection formula $P b = Q (Q^T Q)^{-1} Q^T b$, which is the super cute formula.
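Here is a minimal sketch of the formula in action (my own illustration, not from the post), fitting a parabola to four made-up data points; A and b are fresh symbols so they do not clash with the Q of the poem above:

data={{0,1},{1,2},{2,2},{3,5}};   (* hypothetical data points (t,y) *)
A=Map[{1,#,#^2} &,data[[All,1]]]; b=data[[All,2]];   (* columns 1, t, t^2 *)
Inverse[Transpose[A].A].Transpose[A].b   (* best coefficients {a,b,c} of y = a + b t + c t^2 *)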
To the right is a 1 minute short explaining the data fitting formula, which I also called in my classes the “super cute” formula, the reason being that if Q has orthonormal columns, then the formula becomes the “cute formula” $P = Q Q^T$, which you remember as cute when spelling it out. In the case when Q is a single column, it becomes the important projection formula $P b = \frac{w \cdot b}{w \cdot w}\, w$ for a column vector Q=w. And the super cute formula becomes the projection formula $\mathrm{proj}_w(b) = \frac{w \cdot b}{w \cdot w}\, w$ in multivariable calculus because $Q (Q^T Q)^{-1} Q^T b$ is then $w (w^T w)^{-1} w^T b = \frac{w \cdot b}{w \cdot w}\, w$.
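Both special cases fit in two lines of code (again my own sketch):

proj[w_,b_]:=(w.b/w.w) w; proj[{1,1,0},{2,3,4}]   (* projection onto a single vector w: {5/2,5/2,0} *)
Qo={{1,0},{0,1},{0,0}}; Qo.Transpose[Qo].{2,3,4}   (* orthonormal columns: P=Q Q^T projects onto the xy-plane, {2,3,0} *)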
The upshot is that the projection formula is one of the most important things in mathematics overall. It is one of the central pillars also in the mathematics of AI, besides gradient-based machine learning techniques. By the way, these 1 minute short videos I made (see the collection of videos in linear algebra) were also exercises in brevity. Each 1 minute video belongs to an hour class, but the core points are usually not complicated. For me, doing one of the 1 minute videos took about a day, where most of the time is used to produce the graphics and slides.
To get back to last week, one should note that the mathematical proof of the Lefschetz theorem is even shorter than its implementation: first realize that the Dirac matrix D=d+d^* produces a duality between the non-zero even eigenspaces and non-zero odd eigenspaces of $H = D^2$, implying $\mathrm{str}(H^n U_T) = 0$ for all positive n. This implies that $\mathrm{str}(e^{-tH} U_T)$ is time independent, where $U_T$ is the Koopman matrix of the automorphism T. For $t = 0$ it is $\mathrm{str}(U_T)$, the sum of the indices of fixed points. For $t \to \infty$, where $P = \lim_{t \to \infty} e^{-tH}$ is the projection onto the harmonic forms, it is $\mathrm{str}(P U_T)$, the Lefschetz number, the super trace of the induced map on cohomology.
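One can check this symmetry numerically with the code of the poem (my addition, not part of the original post): $\mathrm{str}(H^n U_T)$ vanishes for positive n, which is exactly what makes $\mathrm{str}(e^{-tH} U_T)$ time independent:

T=First[Aut]; Table[Str[MatrixPower[HH,m].U[T]],{m,1,3}]   (* expect {0,0,0} *)
Chop[Str[MatrixExp[-1.0 HH].U[T]]]   (* equals the Lefschetz number Str[P.U[T]] *)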