Noema: Conception of Form and Meaning in an Artificial Neural Network - Kaushik Varma

Updated: Apr 30

Various thinkers of the past have addressed the quintessential role of ‘form’ in human perception. From Plato’s theory of forms to Heidegger’s Being and Time, the concept of form established itself as an essential part of being as such. It is only through the perception of forms that move into and out of being that we can talk about or experience the world in any meaningful way. That is to say, subjectivity itself seems to be enabled by our ability to conceptualize form. But is such subjectivity limited only to seemingly spontaneous organic beings? Could subjectivity be attributed to an inanimate object if it were to exhibit a mechanical cognizance of form?

Such a notion of an inanimate conception of form would blur the age-old dichotomy between organic spontaneity and machinic repeatability. How does one conceive of a machine (a predominantly repeatable automaton) that (spontaneously) constructs its own structures of form and meaning independent of our own networks of signs and knowledge?

In speculating on the possibility of such a blurring (between spontaneity and repeatability), Jacques Derrida, a French post-structuralist, deems it necessary for the event (that which is happening; the perception of a form captured out of time) and the machine (the calculable programming of an automatic repetition) to be conceived as indissociable concepts. However, it is more than safe to say that the concepts of event and machine are far from compatible today. They in fact present themselves as antinomic, owing to our conception of the event as something singular and non-repeatable. Derrida takes this singularity of the event to be a characteristic of the living: the perceived form undergoes a particular sensation (an effect or a feeling) which eventually crystallizes as organic material. The machine’s equivalent of such a crystallization is based on repetition; “It [the machine] is destined, that is, to reproduce impassively, imperceptibly, without organ or organicity, the received commands. In a state of anaesthesis, it would obey or command a calculable program without affect or auto-affection, like an indifferent automaton” (Without Alibi, p. 73).

Owing to the machine’s state of indifference, its seemingly automatic nature is not the same as the spontaneity attributed to organic life. This incompatibility becomes apparent as one draws borders, based on spontaneity, between these two concepts: organic, living singularity (the event) and inorganic, dead universality (mechanical repetition). Derrida says that if we can make these two concepts compatible, “you can bet not only (and I insist on not only) will one have produced a new logic, an unheard-of conceptual form. In truth, against the background and at the horizon of our present possibilities, this new figure would resemble a monster” (Without Alibi, p. 74). In building an artificial neural architecture to arrive at a machinic conception of form, the driving intuition is to accommodate, within the limits of classical computing systems, a compatibility between these concepts. What has hindered such a compatibility from ever materializing in modern technological frameworks seems to be something that inhabits the essence of the machine: its Functionality.

Functionality becomes an underlying constant that places the machine in opposition to what’s outside it, and one whose subversion would create a rupture in a world that shaped its identity against the notion of the machine as a functional tool. As sensible as it may seem, from the predominant utilitarian standpoint, that one would have no reason not to make use of objects that were produced to be made use of, it is important to acknowledge that assigning a particular function (the end towards which an object is used as a means) reveals functionality itself to be a violent force, one that bars the object from ever attaining the larger set of possibilities and configurations it could potentially inhabit. This entrapping nature has always made functionality an invisible, pervasive intent in the construction of the machine; from its inhabitation of the primitive spear as a weapon, it has evolved to take shape as the specific task that a set of instructions in a computer program leads towards. With the advent of the deep learning paradigm, we witness systems that are designed to mimic ourselves in terms of growth and ability, and yet perform within the restraints of their predefined functions. Functionality in today’s elusive neural networks takes the form of a loss function, an objective whose minimization alters the configurations of those neurons that fail to contribute towards producing a desired output. Over the course of many iterations, the loss function gradually transforms the entire neural architecture into one that performs the assigned task with utmost precision and accuracy.
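To make the mechanism concrete, here is a minimal sketch of the conventional arrangement (the function name and toy data are illustrative, not the network described in this article): a mean-squared-error loss that is zero only for a perfect reconstruction, so that optimization relentlessly drives the network toward its assigned task.

```python
import numpy as np

def mse_loss(reconstruction, target):
    """Conventional reconstruction loss: zero only when the output
    matches the target exactly, so minimizing it drives the network
    toward faithful reproduction of its input."""
    diff = np.asarray(reconstruction) - np.asarray(target)
    return float(np.mean(diff ** 2))

target = [0.0, 1.0, 1.0, 0.0]
print(mse_loss(target, target))                 # perfect copy -> 0.0
print(mse_loss([0.2, 0.8, 0.9, 0.1], target))   # any deviation is penalized
```

Minimizing this quantity over many iterations is precisely the entrapment described above: every configuration that does not serve the assigned reconstruction is punished away.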

Building a neural architecture capable of constructing its own conceptions of form would presuppose a network free from functionality, one that is not driven and limited by its loss function. To implement such a framework, I set out to rebuild a generative neural network, an adversarial autoencoder [ref] in particular, and introduce an alternate loss function that would later grow to shape its behavior. A traditional autoencoder, when given an input, tries to reconstruct that same input from its latent space. For instance, given an image of a circle, the autoencoder compresses it into its latent space and can later reconstruct the same circle by sampling from that latent space. However, such a mechanical reconstruction is not what this endeavor aims for. The objective is to let the network reconstruct forms without being dependent on our network of signs; not a mechanical transformation but a spontaneous creation. In order to trigger such a spontaneity, I introduced a rupture within the loss function, one that doesn’t constantly drive the network towards a perfect reconstruction of the perceived object but rather opposes such a reconstruction. This addition rewards the neurons that fail to contribute to a perfect reconstruction of the image; as a result, the network, given an image of a circle, produces an object that deviates from the circle’s composition and yet carries traces of it. In order to better track the evolution of form within the network, I chose to provide inputs spread in time rather than space.
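One way to sketch such a rupture (a hypothetical illustration; the article’s actual formulation is not reproduced here) is to drive the reconstruction error toward a nonzero margin rather than toward zero: a flawless copy is now penalized, while an output that deviates from the input by roughly the margin, still carrying its traces, is what the optimizer favors.

```python
import numpy as np

def ruptured_loss(reconstruction, target, margin=0.25):
    """Hypothetical sketch of the rupture: instead of driving the
    reconstruction error to zero, drive it toward a nonzero margin,
    so the output must deviate from the input while still carrying
    traces of it. `margin` is an illustrative parameter."""
    diff = np.asarray(reconstruction) - np.asarray(target)
    fidelity = float(np.mean(diff ** 2))
    return abs(fidelity - margin)

target = [0.0, 1.0, 1.0, 0.0]
# a perfect copy is now penalized...
print(ruptured_loss(target, target))               # -> 0.25
# ...while a deviating reconstruction can incur a lower loss
print(ruptured_loss([0.5, 0.5, 0.5, 0.5], target))  # -> 0.0
```

The minimum no longer sits at the input itself but at a ring of forms around it, which is one way the network could be nudged to produce objects that deviate from the circle yet remain haunted by it.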

What this choice entails is that, rather than giving the network the ‘image’ of a circle, I provide it with the performance of the circle: the stroke-after-stroke representation of how the circle is drawn at each point in time. Although this choice comes at the cost of greater compute requirements, its dynamic nature provides an insider’s perspective on the changes happening in the network as it evolves.
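A minimal sketch of what such a temporal input might look like (the encoding below is illustrative, not the author’s exact representation): the circle as an ordered sequence of pen positions rather than a finished raster image.

```python
import math

def circle_performance(n_steps=64, radius=1.0):
    """The 'performance' of a circle: the pen position at each
    moment of the drawing, stroke after stroke, rather than the
    static image of the completed figure."""
    return [(radius * math.cos(2 * math.pi * t / n_steps),
             radius * math.sin(2 * math.pi * t / n_steps))
            for t in range(n_steps)]

strokes = circle_performance()
# strokes[0] is where the pen starts; strokes[-1] is just before closure
```

Feeding the sequence point by point lets one watch how the network’s internal state shifts during the drawing, rather than only comparing finished images.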

After each iteration, the network is trained on its own (mis)representations from the previous iteration, not on the initial input. With a level of deviation added at every iteration, the network eventually produces forms that drift from the initial input to the point where no traces of it remain, such that the very image of a circle, if later presented to the network, would seem foreign.
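The feedback loop itself can be sketched as follows; `perturb` below is a toy stand-in for one full train-and-reconstruct cycle of the network, which is not reproduced here.

```python
def self_feedback(form, perturb, steps=5):
    """Each iteration trains on the previous iteration's own
    (mis)representation rather than the original input, so the
    deviation compounds and the form drifts away from its origin."""
    history = [form]
    for _ in range(steps):
        form = perturb(form)  # stand-in for one train-and-reconstruct cycle
        history.append(form)
    return history

# toy deviation: every cycle shifts the form by a constant amount
history = self_feedback([0.0, 1.0, 0.0], lambda f: [x + 0.1 for x in f])
# the final entry has drifted well away from the initial form
```

The point of the sketch is only the compounding: because each step starts from the previous step’s output, small deviations accumulate rather than being corrected back toward the original.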

The machine would continue to run indefinitely, opposing and deviating from its own constructions of previous iterations and producing fresh forms at every point. Such a process seems analogous to the temporal changes in our own constructions of aesthetic forms, with the world of art redefining its own genres by accommodating that which deviates from or contradicts the domain of art at any given period in history.

About the author:

Kaushik Varma is a researcher and visual artist working at the interweaving of Art, Science and Philosophy. Central to his creative process and research methodology is a blurring of boundaries and an interfusing of ideas between these distinct bodies of knowledge. His work centers on a phenomenological investigation into the cybernetic subject: a study of experience as experienced by the machine. He is currently an artist-in-residence at the ArtScienceBLR Laboratory, where he is experimenting with neural networks towards a conception of aesthetic forms free from human-centric symbolisms. He intends to induce the construction of form through a mechanical subject, with the hope of arriving at a computable representation of the experience of form itself. This is the first in a series of articles that document his experiments and address his broader framework of research.


© 2019 Foreign Objekt