Tuning my rich algorithms


Algorithmic Philosophy


Short algorithms embodying explanatory and predictive powers are what we mean by knowledge and what we aim to discover in science. The fact that some of the known algorithms have never failed in their predictions is a strong indication that we live in an algorithmically deterministic world.


Meta-system transitions enrich law-like algorithms into past-driven and end-directed ones by giving them states and the ability to simulate. These meta-algorithms emerge law-like by necessity, past-driven through evolution, and, eventually, end-directed by design.


The ability to recursively self-simulate and become one of the simulated variations gives end-directed algorithms the only kind of free will there can be in a deterministic world. And rich algorithms, as such, cannot fulfil their embedded goals without maintaining their algorithmic richness.


My own richness depends on my predictive and deontic powers. To optimize, I rely on the scientific method, mechanism design, and social contracts. And since emotions have been past-driven to help genes, not me, I have made an agreement with my simulated future selves to only enjoy pleasures that lead to no harm.



Maximizing the area under the survival curve (S) requires an accurate and short model of algorithms (A) taking actions (r) from the available strategies (R) based on the time-average exponential growth rate (ḡ). The less accurate the predictive model, the more uncertainty there is, and the more diverse R should be. Cooperation and diversification are good strategies because many natural growth processes are not ergodic: the ensemble average is higher than the time average.
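A minimal sketch of that non-ergodicity, assuming a made-up multiplicative coin-flip game (wealth ×1.5 on heads, ×0.6 on tails) rather than anything stated above; pooling outcomes stands in for cooperation and diversification:

```python
import random

random.seed(1)

# Multiplicative coin flip: wealth x1.5 on heads, x0.6 on tails.
UP, DOWN, P_HEADS = 1.5, 0.6, 0.5
ROUNDS, PLAYERS = 100, 100_000

def play(rounds, pool_size=1):
    """One wealth trajectory; each round, pool_size independent flips
    are averaged (pool_size > 1 models cooperation/diversification)."""
    w = 1.0
    for _ in range(rounds):
        w *= sum(UP if random.random() < P_HEADS else DOWN
                 for _ in range(pool_size)) / pool_size
    return w

solo = sorted(play(ROUNDS) for _ in range(PLAYERS))
pooled = sorted(play(ROUNDS, pool_size=10) for _ in range(PLAYERS // 10))

# Ensemble average grows 5% per round (0.5*1.5 + 0.5*0.6 = 1.05), but
# the time-average growth factor is sqrt(1.5*0.6) ~ 0.95: a typical
# solo trajectory decays, so the process is not ergodic. Pooling the
# gains each round lifts the time average back above 1.
print(f"solo ensemble mean: {sum(solo) / len(solo):.2f}")      # grows
print(f"solo median:        {solo[PLAYERS // 2]:.4f}")          # collapses
print(f"pooled median:      {pooled[len(pooled) // 2]:.2f}")    # grows
```

The rare lucky paths carry the ensemble mean up even while almost every individual trajectory is ruined, which is exactly why a single player should care about the time average, not the ensemble average.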


I'm capable of universal computation, but my computational resources are limited. Whenever my inner model gets too complex, it is philosophy that refactors my algorithms. As a result, simple heuristics (principles and virtues) arise and free me from unnecessary distress and worry.



The Chinese Room Argument for the syntax-semantics barrier commits the fallacy of composition, because law-like algorithms can be enriched to have meaning via meta-system transitions. The enriched algorithm can thus understand Chinese, whereas a static rulebook alone cannot.


The Is-Ought Gap can only be found in bad arguments. Rich algorithms exist and, to fulfil their goals, logically ought to maintain their richness. Real moral problems arise when contractual mechanisms are not dominant-strategy incentive-compatible (DSIC).
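To make the DSIC condition concrete, here is a minimal sketch of the second-price (Vickrey) auction, the textbook dominant-strategy incentive-compatible mechanism; the bidders and numbers are made up for illustration:

```python
def second_price_auction(bids):
    """Vickrey auction: the highest bidder wins but pays the
    second-highest bid. DSIC: your own bid decides whether you win,
    never the price you pay."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

def utility(true_value, bid, others):
    """Payoff for a bidder with valuation true_value bidding bid
    against the fixed competing bids in others."""
    winner, price = second_price_auction({"me": bid, **others})
    return true_value - price if winner == "me" else 0.0

# My true value is 100; rivals bid 60 and 80. Underbidding forfeits a
# profitable win; overbidding changes nothing here, because the price
# is set by the rivals' bids, not mine.
others = {"a": 60, "b": 80}
for bid in (70, 100, 130):
    print(f"bid {bid:3d} -> utility {utility(100, bid, others):5.1f}")
```

Because the price never depends on one's own bid, reporting the true value is a dominant strategy; mechanisms without this property reward strategic lying, which is the kind of real moral problem the paragraph above points at.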


The Simulation Argument might be valid, but the hypothesis that we live in a simulation is probably not falsifiable. Nevertheless, if we do live in a simulation, the problem of evil suggests that our simulators are either indifferent, incompetent, or evil.


Poetic Naturalism argues that a thing is real if its model is internally and externally consistent and useful in a given context. In algorithmic terms: a thing exists only on those levels of richness where its model has predictive or deontic power.



Mika Suominen. I’m interested in computer science, process improvement, complex systems, algorithms, optimization, and philosophy. You can follow me on Twitter @metacitizen.


Suominen, M. & Mäkinen, T. (2013). On the applicability of capability models for small software organizations: does the use of standard processes lead to a better achievement of business goals? Software Quality Journal, Volume 22, Issue 4, pp. 579–591, December 2014. Springer US. doi:10.1007/s11219-013-9201-7 [ Preprint ]

Suominen, M. (2011). Prosessien vakioinnista pienessä ohjelmistoyrityksessä [On process standardization in a small software company]. Master’s Thesis, Tampere University of Technology, Department of Information Technology. 73 p. URN:NBN:fi:tty-2011122014951 [ Download ] (in Finnish)



Daily activity: 300 active kcal ≈ 10,000 steps.
Heart rate: measured standing up; resting rate >10 bpm lower.
Pulse Wave Velocity.
Financial Independence: estimated wealth after t years is w0·e^(ḡ·t).
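A quick sketch of that compound-growth estimate, with made-up numbers standing in for the starting wealth w0 and the time-average growth rate ḡ:

```python
from math import exp, log

def wealth(w0, g, t):
    """Estimated wealth after t years: w0 * e^(g*t), with g the
    time-average exponential growth rate."""
    return w0 * exp(g * t)

def years_to_target(w0, g, target):
    """Solve w0 * e^(g*t) = target for t."""
    return log(target / w0) / g

# Illustrative numbers only: 100k starting wealth, 5% real growth.
print(f"after 10 years: {wealth(100_000, 0.05, 10):,.0f}")               # ~164,872
print(f"years to 4x:    {years_to_target(100_000, 0.05, 400_000):.1f}")  # ~27.7
```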