research/

thesis

My thesis is in philosophy and centres on the research question: how is time mediated in algorithmic reason?

Algorithmic reason refers to the way in which algorithms, big data and AI come to provide solutions to heterogeneous sets of problems, to the belief that they can provide such solutions, and to the way in which heterogeneous problems are formulated so that algorithms, big data and AI may solve them. Or, put bluntly at the cost of rigorous conceptual clarity: algorithmic reason refers to how machines think, and how we think so that they can think.

I define algorithmic reason as a style of reason made possible by the invention of computational machines, one that has evolved and crystallised with the integration of algorithmic technologies into ever more spheres of life. I draw on Ian Hacking's styles project to argue that a style of reason involves ways of thinking and doing in the world, and on Sheila Jasanoff’s work on co-production to argue that algorithmic reason is co-produced by humans and machines, or more specifically machines built on ‘the learning types of algorithms’.

‘Time’, in my thesis project, relates to Henri Bergson’s philosophy of time, which I also discuss on two of the podcasts and in the videos on this website. Bergson’s notion of Time with a capital T is a heterogeneous multiplicity, ontologically distinct from space. Time – which Bergson most often refers to as la durée – is intimately connected to life and can be accessed through intuition. Quantified time – or clock time – is the time we most often relate to in our daily lives, but it is a spatialised time of the intellect, directed towards action, unlike la durée, which is directed towards consciousness. The overarching argument in my thesis is that algorithmic time is characterised by a tension between duration and clock time, which I trace by looking at 1) big data as a materialised past that is at once static and dynamic, 2) the conflation of immediacy and instantaneity in the fetishisation of speed in algorithmic reason, and 3) the primacy of the immediate future in prediction, and the dislocation of the present.

The reason I find it helpful – and, from my biased perspective, also important – to approach algorithmic reason from the perspective of Bergsonian time is that time relates to life, where life refers not merely to the living, or being alive, but to the continuous and spontaneous creation of life. In an age where doom looms over an ongoing climate collapse, wars, genocide, and fascism, Bergson’s philosophy is a way to put life at the centre of an approach to a world that is increasingly characterised by death. In my own little way I hope to contribute to the ongoing research that attempts to untangle exactly how digital technologies in general, and AI technologies in particular, relate to (often in the form of endangering) life itself.

[Bergson] [AI] [critical algorithm studies] [critical data studies]

publications

Henriksen EE (2024) ‘Algorithmically generated memories: automated remembrance through appropriated perception’ Memory, Mind & Media 3(e11):1-15, doi:10.1017/mem.2024.8

This article is on algorithmically generated memories: data on past events that are stored and automatically ranked and classified by digital platforms before being presented to the user as memories. Mobilising Henri Bergson's philosophy, I centre my analysis on three aspects of these memories: the spatialisation and calculation of time in algorithmic systems, algorithmic remembrance, and algorithmic perception. I argue that algorithmically generated memories are a form of automated remembrance best understood as perception, not recollection. Perception never captures the totality of our surroundings; it is partial, and the parts of the world we perceive are those that are of interest to us. When conscious beings perceive, our perception is always coupled with memory, which allows us to transcend the immediate needs of our body. I argue that algorithmic systems based on machine learning can perceive, but that they cannot remember. As such, their perception operates only in the present. The present they perceive in is characterised by immense amounts of data that are beyond human perceptive capabilities. I argue that perception relates to a capacity to act, as an extended field of perception involves a greater power to act within what one perceives. As such, our memories are increasingly governed by a perception that operates in a present beyond human perceptual capacities, motivated by interests and needs that lie somewhat beyond the interests and needs formulated by humans. Algorithmically generated memories are not only trying to remember for us; they are also perceiving for us.

[digital memories] [Bergson] [perception] [social media] [memory]