## Introduction

The idea to base physics on the evolution of a **finite set of sets** is intriguing. It has been tried as an approach to quantum gravity; examples are causal dynamical triangulation models or spin networks. It is necessary to bring in some **time evolution**, as otherwise a model has little chance of modeling physics. Our world is dynamic: all successful physics describes developments, the evolution of stars, the propagation of waves, the interaction of particles, bifurcation processes under parameter changes etc. Sometimes simplifications are made, like in statistical mechanics, where one looks at stochastic processes or the evolution of macroscopic quantities; sometimes one can only get the scattering matrix, where the time evolution is only given as a transition from minus infinity to infinity. In any case, if physics should be modeled on finite sets, one needs a time. This is the case even if space-time is a finite set, as we then have to look at the evolution of hyper-surfaces in it. But space-time as a finite set of sets has not yet produced anything really interesting, except philosophically, à la *Paul Kustaanheimo* (1924-1997), who looked at space, time and physical values as being quantized and modeled over a finite field. Having a finite set as space-time has so far hardly been able to do basic computations in physics, like model the motion of a planet in the solar system, compute how an electro-magnetic wave radiates from an antenna, or how much energy is gained from a chemical or atomic process. A good model of physics should be able to do such computations. In order to be adopted, it also has to do at least one computation better and explain a not-yet-understood phenomenon. The ultimate test is to predict new phenomena which can then be measured. A lot has been written about discrete approaches to **quantum gravity**, also popularized like in this pbs article or this quanta article.
For books, there are Smolin's *Three Roads to Quantum Gravity*, or Rovelli's *Quantum Gravity*. But all approaches there are still pretty disappointing. For example, none of these approaches gives even the slightest idea of what happens when a black hole passes the stage where it evaporates to nothing. This problem was once suggested by Gerard 't Hooft. It is a great problem because it involves both relativity and quantum mechanics. The Hawking radiation effect makes black holes evaporate. They lose mass, and there will be a time when the mass is too small to be a black hole, so that it might just disappear. But what happens at such a time? It is a singularity in the models: as with singularities in space-time, the time line of the singularity disappears.

## Finite sets of sets

There is already a blog entry in this quantum calculus blog on the joy of sets. Let me update this a bit, as there is other terminology used when talking about **finite sets of sets**. A finite set of non-empty sets is also called a **hypergraph** or a **simple game**. The underlying point set is the vertex set. The reason for the **hypergraph** name is to see the 0-dimensional parts as vertices and the 1-dimensional parts, the sets of cardinality 2, as edges. The reason for the name "simple game" is that in a **cooperative game**, the point set is the set of players and the **profit function** takes values 0 or 1, meaning losing or winning. The winning subsets of a game then form a finite set of non-empty sets. I myself vastly prefer to talk about **finite sets of sets**, the main reason being that both the word **hypergraph** and the word **simple game** are not universally adopted and suggest more structure or attitude (like a graph, or a profit function of a game). Here are some structures which can be finite sets of sets:

- A **finite topology** is closed under intersection and union and contains the void.
- A **finite sigma algebra** is closed under intersection, union and complement, and contains the void.
- A **finite simplicial complex** is downwards closed and does not contain the void.
- A **one-dimensional simplicial complex** is a graph (equipped with the 1-skeleton complex); no void.
- A **clutter** is a finite set of non-empty sets in which no set is a strict subset of another.
- A **binary relation** on a finite set V is a subset of the Cartesian product V x V. It is a digraph with possible loops.
- A **finite filter** is a non-empty set of sets that is closed under intersection and upward-closed.
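To make a couple of these definitions concrete, here is a minimal Python sketch (the function names are my own) which tests the simplicial-complex and clutter properties for a finite set of sets encoded as a set of frozensets:

```python
from itertools import combinations

def is_simplicial_complex(G):
    """Downwards closed, no void: every non-empty subset of a member is a member."""
    if frozenset() in G:
        return False
    return all(frozenset(s) in G
               for A in G
               for r in range(1, len(A))
               for s in combinations(A, r))

def is_clutter(G):
    """No member is a strict subset of another member."""
    return all(not (A < B) for A in G for B in G)

# The Whitney complex of the triangle K3 is a simplicial complex:
K3 = {frozenset(s) for s in [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]}
print(is_simplicial_complex(K3))                          # True
# The edge set alone is a clutter but not a complex (vertices are missing):
edges = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
print(is_clutter(edges), is_simplicial_complex(edges))    # True False
```

Encoding the sets as frozensets is what makes membership tests like `frozenset(s) in G` work, since ordinary Python sets are not hashable.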

An example of a filter is the star in a simplicial complex, the set of sets which contain a given set. Filters are pretty cool; they are not used so much in topology any more (the topology course from which I myself learned the subject used filters and not nets).
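As a quick sanity check (a sketch of my own, again encoding sets as frozensets), one can verify that the star of a vertex in the triangle complex satisfies the filter axioms relative to the complex: it is non-empty, closed under intersection, and upward-closed.

```python
def star(G, x):
    """The star of x in a set of sets G: all members of G which contain x."""
    x = frozenset(x)
    return {A for A in G if x <= A}

# Triangle complex and the star of the vertex {1}:
K3 = {frozenset(s) for s in [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]}
S = star(K3, {1})

assert S                                                  # non-empty
assert all(A & B in S for A in S for B in S)              # closed under intersection
assert all(B in S for A in S for B in K3 if A <= B)       # upward-closed within K3
print(sorted(sorted(A) for A in S))
```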

## Maverick ideas

Wolfram’s suggestion, which recently appeared in the news, is based on a finite set of sets. Wolfram is well known for bold claims like those already made in **A New Kind of Science**. I personally like **maverick scientists** who do not follow mainstream ideas and dare to look at new roads and other paradigms. No harm is done if a suggestion does not pan out. The good ideas will prevail, the bad ideas will die, which usually happens by just being ignored. I did not read the entire paper but understand that the newly proposed model is **to iterate rewrite rules** on multi-graphs (I prefer just to call this **finite sets of non-empty sets**) and to hope to get some physics out of it. The framework is quite general, as rewrite rules appear as **substitution dynamical systems** in combinatorics and solid state physics. The concept also appears in the theory of **languages**, where one has a grammar telling how to build up sentences. In fractal analysis, one knows **substitution systems**, where one applies various rewrite rules iteratively and looks at the corresponding attractor. One can also see it as a generalization of **cellular automata** (the situation when one has a fixed underlying graph and the rewrite rules only involve changing values at the nodes). As an undergraduate student, we had to take a class from the philosophy department every semester, and one semester there was a lecture series by Paul Feyerabend, who was a bit of a maverick himself, proposing not to worry too much about **method** but to let things work out freely and possibly even allow anarchism. [Add May 22: Of course, attending lectures (which were actually often more like heated disputes, as most did not agree with this) does not make you a follower, and indeed there are some ideas of Feyerabend which are pretty hard to swallow, like possibly abandoning the gold standard of falsification.
Yes, one has to keep an open mind, but letting go of falsification has historically never worked for science.]

So, if some outsider idea should become successful, then so be it. Good ideas will prevail and survive, bad ideas will disappear. **Time is the daughter of truth**, as Kepler already pointed out. We will have to see, of course, how much can be pulled out of the Wolfram approach. The idea to base physics on finite sets is quite promising if one looks at **pro-finite limits** of such sets. What always happens, because of general principles, is that a set of rewrite rules will produce an **attractor** X, a set of accumulation points of all possible system evolutions one considers. This is an old idea for **basic substitution systems.** What happens is that the rewrite rules produce symmetry transformations on X. Furthermore, in many cases, the attractor X is a **compact topological group**, implying that there are also translation symmetries on it. So even if the system is discrete, the attractor can have continuous symmetries. If you look at my own early papers in dynamical systems and quantum mechanics, this idea was always the underlying **bread and butter** (like for almost periodic Schrödinger operators, or almost periodic cellular automata). For the Fibonacci substitution dynamical system, for example, which is a → ab, b → a, one gets a compact topological space X consisting of infinite sequences in the alphabet a,b on which the shift is uniquely ergodic. Another interesting case, which I myself only looked at later, is when the rewrite system is the **Barycentric refinement**. In that case, there is also an attractor, but in higher dimensions it is not yet well understood. For one-dimensional geometry everything is understood, and the attractor is the dyadic group of integers, which contains the self-similarity renormalisation map as well as the translation group as symmetries. This first appeared for me when looking at Jacobi matrices which have Julia sets of the quadratic map as spectra.
The dyadic group of integers is maybe a bit strange at first, but it is the dyadic analog of the circle, where one also has a renormalization map, and where the translation group is the circle itself, a nice Lie group. The dyadic group is of course not a Lie group, and its dual is the group of dyadic numbers. You find some early notes here where I point out that the Laplacian converges in the sense that we have a universal integrated density of states in the limit, which only depends on the dimension. Now, with **universality ideas**, as used in physics when doing **renormalization** or in dynamical systems when understanding period-doubling transformations, the nice thing is that it does not matter so much where one starts. If there is a **renormalization fixed point** in the space of all models and it is an attractor, then we will get there. In the case of a hyperbolic fixed point, we can learn about universal properties from the spectrum of the linearization at the fixed point. In any case, one of the major weaknesses of the Wolfram set-up is its richness. This had also been a problem for string theory: which model produces interesting physics? The idea of renormalization could reduce the problem, as in the limiting case one could have fewer models left. The dream of course would be that under some kind of renormalization, only one model will remain, and that this model produces quantities we can measure in our own world.
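For the Fibonacci substitution a → ab, b → a, a few lines of Python (a sketch of my own) show what iterating a rewrite rule looks like in the simplest case; the word lengths grow like Fibonacci numbers, and each word is a prefix of the next, so the iteration converges to the infinite Fibonacci word whose shift orbit closure is the uniquely ergodic system mentioned above:

```python
def substitute(word, rules):
    """Apply a substitution rewrite rule letter by letter."""
    return "".join(rules[c] for c in word)

rules = {"a": "ab", "b": "a"}
w = "a"
lengths = []
for _ in range(8):
    lengths.append(len(w))
    w = substitute(w, rules)

print(lengths)   # [1, 2, 3, 5, 8, 13, 21, 34] -- the Fibonacci numbers
print(w[:13])    # abaababaabaab, a prefix of the infinite Fibonacci word
```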

## Critics of Wolfram’s suggestion

The critiques came surprisingly fast:

- In Forbes, Ethan Siegel points out that a new idea has to succeed in places where old theories worked, has to explain an existing observation that current theory struggles with, and has to make a prediction that one can go out and measure. This is correct, but it does not disqualify a new approach. I myself would not be so harsh. A new theory which explains one thing better than any other existing theory is already valuable. The prediction point has been pointed out by Sabine Hossenfelder to be overrated. Indeed, I could build a machine which makes thousands and thousands of theories and predictions. The chance is then substantial that one of them is right. I would modify that by adding that any wrong prediction should count against the theory. So one could propose that every scientist making a wrong prediction should suffer reputation damage. There is still a problem with this: take 10000 scientists and let each make a good guess; then by chance one of them will be lucky and get the right prediction. But it is not necessarily the case that the one who made the prediction really had more insight. It was just luck. Better is if some theory makes multiple predictions which all turn out to be right. Quantum mechanics, general relativity and classical mechanics are all such theories: they produced countless predictions which can be measured. By the way, there is an analogy here with success in business. There are countless start-ups with great ideas initiated all the time. Some of them succeed, some of them do not. It is often a matter of luck, timing and circumstances. Now, the ones who succeed are hailed as geniuses, while in fact it was mostly luck. Of course, one has to have a good idea, but new ideas are cheap. It's the process of making them work that is hard. Also in physics. Look at how many new ideas are proposed daily on preprint servers. The mass of such ideas shows that new ideas are cheap. Everything which is abundant is cheap.
In physics, the problem is to generate ideas which match with experiments.
- Also, in Gizmodo, the science writer Ryan Mandelbaum points out trouble in the new theory. But this is mostly a personal attack, which I personally dislike. I don't think that Wolfram as an entrepreneur has it easier to promote ideas than your average university scientist. On the contrary, being outside the establishment will most likely make it harder (a typical university scientist has PhD students, collaborators or postdocs who work for them and help, for example by refereeing their papers). Avoiding the usual scientific publishing path is usually harder, even if a lot of money is used to promote it. Also, many good results in the history of science have only been communicated through
**pre-prints** (Perelman, Grothendieck), or **letters** to other scientists (Euler, Grothendieck), or by **write-ups** by third parties later on (Pythagoras, Galois). The Siegel critique mentioned before is a general critique which one has to take more seriously. Also, pointing out that somebody has more cash to promote their ideas, or belongs to a certain ethnic group or gender (white male), is not that fair. It would be equally cheap (and a hit below the belt) to point out that Mandelbaum is not a scientist himself. [Titles are often used to disqualify people. Actually, science writers often do much more for science than the scientists. In fact, there are many **great science writers** who are or were not actually scientists. Some made great contributions also in promoting science (i.e. Martin Gardner). Having no title does not disqualify them, especially in a time when being able to become a professional scientist is **essentially a lottery**. Only one of maybe 100 PhDs can expect to continue in their original profession and do research later on; the others are weeded out by a merciless process, as the job markets are terrible, and **global** means that in order to get a position one has to compete internationally against the top of the top in the world.]

I personally have not yet studied the Wolfram models enough, but experimenting with such models is for me a process one knows from physics itself. [P.S. added May 22: I myself probably do more experiments than an average theoretical physicist in the physics department. Almost all results I discovered myself were found after much experimentation, sometimes hundreds or even thousands of hours of work one usually does not see.] It might not be that there is any relation with what we call fundamental physics, but the objects **exist in the computer**, and so can be built and studied. The computations are the **laboratory**. And we can look at worlds created in a computer as a physical world too. We can build worlds in a computer with physical laws which have nothing in common with the physical laws we know.

Just one thing: I cannot wait until the rewrite rules are **part of the Wolfram language**. I personally dislike external libraries. Even in Mathematica itself, virtually all example notebooks provided on the Wolfram website which use external libraries do not work any more; most of them use deprecated code or (even more likely) contain commands which were later implemented, with a slightly different syntax, in the core language, so that redefining them gives errors. I once had to solve a problem involving a procedure which called an external combinatorics library (and which, just by its presence, was messing up everything else in the code): the fix was to write Mathematica code which could be run using Run["math<test.m>out.txt"], then to read in the result out.txt to continue. This made sure that there was absolutely no interference of the library with the rest of the code. **In general, in programming, libraries are a menace in the long term**. This is one reason why all this **container** stuff has started for developers: it is the library mess. Libraries are like medications. Even if each works by itself, combining two of them can produce disasters. In a complex project where many external libraries enter, there can be problems which would have been impossible to avoid by any of the original developers of the individual libraries. A trivial example is if two different libraries use the same procedure names.

## Computational explorer

While writing this, I got my hands on the new book **"Adventures of a Computational Explorer"**. It is some sort of autobiography of Wolfram. The title is good. There is also the story about the movie Arrival (here a scene). Some **space-time networks** seem to have already been hatching there, while he was consulting for that movie. The biography also repeats a few amazing things, like that Wolfram got his PhD in 1979, when 20 years old. By the way, the site sometimes has more details than the e-book version, which seems to have been cobbled together quite quickly (probably in an almost automated way, which would not surprise me given the computational background of the author). It has 1200 pages with lots of pictures, and I have still only glanced over it.