Updated: Oct 26, 2021
Future of Intelligence in the Age of Intellectual Scarcity
Transcribed and Edited by: Mafe Izaguirre & Sepideh Majidi @ Foreign Objekt's Posthuman Lab
This video was produced by The New Centre for Research & Practice, and aired live on Feb. 24, 2019
David Roden, A Disconnected Future: What is Disconnection and How it Ultimately Reconnects Back to What it Set Out to Diverge From
I really like David's work. In general I find it really easy to follow. I read this paper a year ago, and it strikes me as a kind of formalization of an attempt to understand how artificial general intelligence may come about as a sort of wild disconnection from the human substrate…and I find it quite self-defeating in a way, because it basically says we cannot really make particular claims about such and such a posthumanism, because it would have to happen first. Roden goes on to say that we may be able to make some ethical claims about both, or about human life, or whatever it is, but he weighs heavily on the option of maybe not saying much about it. Maybe this is getting ahead of the course, but it is a kind of quietist response to Bostrom. Where the work on the future of humanity heavily relies on panic, or alternatively on some sort of starry-eyed optimism, Roden's choice is to suspend the ability to make any kind of non-superficial claim about artificial intelligence, human intelligence, or both. I find it quite interesting as a position, but only momentarily; moving forward, I don't know that it sets out a particular agenda for the big picture.
In other words: how much can the speculative thesis, that a future artificial intelligence, or the posthuman, is scientifically realizable as disconnected from its existing human substrate, namely Homo sapiens, or something else, actually tell us about how to move forward, without instead raising questions or concerns which are themselves predicated upon the vagaries of the speculation?
Yes, you mean it leaves Roden without ammunition for any kind of positive, forward-thinking philosophical or even practical agenda. It makes no claims about the future, so basically: we cannot predict the future, therefore the future is canceled, and we must live in the present, and the present is human.
Is it possible, instead of a disconnection thesis, to read it as a constant refocusing (or renegotiation)? As the human develops towards posthumanity, the point of disconnection could very well be pushed back. What this establishes is just that (a) we have a certain horizon in our speculation, and (b) there could be a true disconnection point; but that aside, this horizon is constantly shifting, and rather than saying "oh, we can't tell," the issue is that we have a certain space before the horizon, so let us focus there.
The entire critique that I'm going to levy against Roden's thesis is precisely on this issue. I would say that if we take it as a regular renegotiation, then we are on solid, robust philosophical and scientific ground. But if we call it "disconnection" while pretending it is a renegotiation, then it is simply armchair speculation, and armchair speculation really doesn't have any claim on the ethical concerns or anything else. And renegotiation comes back to this idea of critical philosophy. What is renegotiation? It is essentially dialectics combined with epistemological criteria, or epistemological methods, capable of distinguishing what is already at stake, or at least at an early stage of this renegotiation. What kind of epistemological concerns are we talking about? How are we renegotiating? How can we renegotiate ourselves and our position regarding a future intelligence if we don't have a healthy epistemological criterion, and also the so-called dialectics?
This comes back to Kant, really, I would say; or probably not only Kant, but the turn from pre-critical philosophy to critical philosophy, from Kant to Hegel and the rest of German Idealism, in the sense that a speculation without dialectics and epistemology is simply a vagary. It is what Kant calls fanaticism. We would call it "whimsicality" today, in ordinary language, and that's the whole point. So, if it is simply a matter of choice of terms, then Roden should be forced to accept that there are some dialectical concerns here, as well as epistemological concerns. And those epistemological and dialectical concerns should be as relevant to us as they are to this conception of a future intelligence. Otherwise it would be pure disconnection; it would be a pure vagary of speculation. That is exactly, I would say, one of the cornerstones of what I'm going to talk about today.
Our emphasis in this course is on certain kinds of posthumanist trends which, in one way or another, try to overcome the classical humanist conditions (for example, anthropocentrism, and so on) by way of recourse to future conditions. Usually, such future conditions amount to uncertainty, risk, singularity, and so on. These terms are not essentially synonymous, but they can be thought of as a set, or a family, of terms. They are related.
We are not putting our main focus in this session on other trends of posthumanism; we touch on them only tangentially. We will try to show, or grasp, certain aspects of singularitarian thinking: the future is uncertain, the future is imbued with risk, the future is not human, and so on. We will try to show that some aspects of this thesis overlap with what you might call a "more egalitarian form of posthumanism" as it is put forward today, like, for example, New Materialism.
This doesn't mean that they are equal. It is just that they have points of overlap. These points of overlap are extremely important, precisely because if we don't pay attention to them, the conclusions of both converge: the so-called egalitarian posthumanism and this kind of singularitarian posthumanism, which tries not simply to overcome anthropocentrism but to terminate it once and for all, to terminate the face of the human, to break from the cage of anthropocentrism in a kind of violent mode.
The conclusions of these two positions, at their overlaps, do converge, and I want to argue that the point at which they converge is the very philosophy of Neoliberal Humanism. They try to break from the cage, they try to overcome the biases and prejudices of anthropocentrism, but at the end of the day, in one way or another, there is a possibility (more than a possibility, I would say) that they converge. And that point of convergence is the square one from which we set out to escape: conservative humanism, encapsulated today by Neoliberal politics and Neoliberal thinking.
So, let me begin with David Roden's Posthuman Life. This is the book. I highly suggest it. It is a fantastic book, excellently clear. David is an astonishingly sophisticated philosopher, I have a huge amount of respect for him. But nevertheless, philosophy is an impersonal practice when it comes to criticism.
Let me read the last paragraph on page 1 to 2 from the Disconnection Thesis, which brings us back to certain points that Giancarlo was talking about:
"To take a historical analogy: the syntax of modern computer programming languages is built on the work on formal languages developed in the nineteenth century by mathematicians and philosophers like Frege and Boole. Lacking comparable industrial models, it would have been impossible for contemporary technological forecasters to predict the immense global impact of what appeared an utterly rarefied intellectual enquiry. We have no reason to suppose that we are better placed to predict the long-run effects of current scientific work than our nineteenth-century forebears (if anything the future seems more rather than less uncertain). Thus even if we enjoin selective caution to prevent worst-case outcomes from disconnection-potent technologies, we must still place ourselves in a situation in which such potential can be identified."
--Roden, David. Posthuman Life (p. 122). Taylor and Francis. Kindle Edition
The reason I started by reading this paragraph, rather than by saying what the Disconnection Thesis is, is that I think it is a great encapsulation of the Disconnection Thesis: the future is uncertain. To say that the future is uncertain means that even though we can predict certain trajectories of the present towards the future, there are also certain trajectories we cannot predict. We can only see them, and encounter their full impact or their consequences, only when and if they are realized.
What does this statement give us beyond the very trivial statement that every ordinary human being already makes: "the future is uncertain"? Hasn't this been the very formula of thinking from the dawn of time? And for that matter, why do we need to think about the technological repercussions of posthumanity?
We can go back to the dawn of evolution, the Cambrian explosion, around 541 million years ago, when we see aquatic creatures; then we flash forward to the mammals, vertebrates, great apes, among others. Evolution can also be thought of in terms of future uncertainty, but does a mollusk, or any kind of aquatic creature of the Cambrian phase, think, or have the capacity to make ethical injunctions about, a future that is going to be fully catastrophic with regard to its current existential register? No.
This poses the very question of these kinds of ethical concerns regarding the disconnections of a possible future, or possible futures. We are already the platform in this domain, essentially; otherwise there wouldn't be any kind of ethical injunction with regard to the possible catastrophes. I mean catastrophic in a very technical sense: for example, you can think of a system that changes so much that, by no definition, does it any longer resemble its substrate, like a mammal compared with some sort of unicellular organism.
This is a very good point to begin with, essentially: how can we connect to the future when the whole point is that the future is uncertain? It is a matter of fact, but it is also a trivial matter of fact. What kind of uncertainty are we talking about? Is it an uncertainty that can be epistemologically grasped, or is it an uncertainty which is completely outside the purview of any resources that we have for identifying it?
I think David is quite cautious about this. If he says that it is not fully within the purview of our current epistemological knowledge, of the resources that we have, then it would be just a radical alien. Okay: a future posthuman would be just a radical alien. In that case it falls back, first, on the question of obscurity, but also, more importantly (a point that I will expand on later), on a question Kant raises in the Critique of Pure Reason with regard to the transcendental aesthetic: namely, the intuitions of space and time.
He says that there might be some sort of extraterrestrial aliens who have a more radically alien representation of space and time than ours. But then, how are we going to talk about them? If the posthuman intelligence is a radical alien, then it is not about the future; we can pose the same problem here and now regarding some possible extraterrestrials.
Imagine that we do see some sort of alien register of intelligence. We are forced to explain why we are calling it an "intelligence". What do we mean by intelligence when we attribute such a predicate to such an alien register? In other words, if Roden goes for the full uncertainty of the future, such that the posthuman is fully epistemologically disconnected from our resources of knowledge-making, theorization, conceptualization, and so on, then there is no reason for Roden to talk about posthuman futurity. He might as well be talking about the here and now, about the possibility of extraterrestrials that are outside of our radar.
But of course, he wants to talk about posthuman intelligence, and to do that he is cautious. He argues that a disconnected future intelligence, namely an intelligence that is outside, that is simply disconnected from its current substrate, namely existing Homo sapiens, existing humans, is not uninterpretable in principle. Now, that's a smart move, I would say; it's a very sophisticated move. It is not uninterpretable in principle, because if he said that it is, in fact, uninterpretable, then the whole question of a future posthuman AI would become null and void. We could just talk about other kinds of things that are happening right now in the domain of, you know, universal cosmology, physics, and other kinds of sentience. I mean, Boltzmann already had the view that there might be inhabitants of different regions of the universe which can never contact one another, precisely because they have different representations of time: they inhabit different so-called entropy gradients or potential fields.
So, Roden wants to approach this with caution, saying that this disconnected future intelligence is not uninterpretable in principle, namely, reserving some hidden threads of connection, not from an evolutionary or similarity point of view, but in the sense of interpretability, in terms of an epistemic traction on this future intelligence. With that said, I want to show that even though he says this, the way he charts the territory of his argument, the way he develops it, his cautious account of posthuman intelligence as different from the radically alien ends up in the territory of the radically alien. The thesis loses the semblance of sophistication and caution it purports to hold.
So, the Disconnection Thesis and unbounded posthumanism are part of Roden's thesis. Roughly, this is the idea that prospective posthumans have properties that make their feasible forms of association disjoint from human, or MOSH (the acronym for Mostly Original Substrate Human), forms of association.
What is interesting in Roden’s account is that unbounded posthumans mark a discontinuity with both the biological conception of the human (Homo sapiens as an evolved natural species) and an apperceptive conception of human persons, namely, sapience as rational agency. These two are different: the first is what you might call "the natural evolutionary human, Homo sapiens," and the other is "the human as a functional diagram of the faculties, or capacities, which make the human a human." The latter can, in fact, possibly be realized by other kinds of physical substrates different from our biological substrate. Hence, what is common to both is what you might call "a certain range of necessary activities." These necessary activities can be thought of as necessary enabling constraints that make us do whatever we do: practical reasoning, theoretical reasoning, hypothetical reasoning, and so on.
So, according to Roden, the cause of such discontinuity, understood as a radical cognitive and practical asymmetry between unbounded posthumans on the one hand, and humans and their bounded descendants on the other, is technological. It is not attributed, though, to any particular technical cause, but to more general abstract tendencies for disconnection within technical systems: for example, the autonomy of such systems to functionally modify and multiply themselves, in discontinuity with any natural essence or rational law. It is important, here again, to note that there is a continuity, even if quite a restricted one, between Homo sapiens, namely biological humans, and the human as a functional abstraction of the faculties necessary for whatever existing humans in fact do, if not more.
Now, the discontinuity between Homo sapiens and the human as a functional abstraction already implies multiple realizability, in the sense that the faculties of Homo sapiens can be realized, given sufficient necessary constraints, in other kinds of physical systems: computers, for example, or machines. What Roden is implying, however, is that technological capacities and technological operations (and he is not specific about the particularities of how they can give rise to this) open up a different level of multiple realizability. With the advent of these autonomous technological systems, we can imagine a machine, a computer, or, for that matter, a synthetic biotic life form, having fundamentally different faculties and constraints than those of human-one and human-two: Homo sapiens and the human as an abstract functional diagram, simply a list of necessary abilities, faculties, or (so to speak, in a Kantian sense) conditions.
So, as I mentioned, the cause of such discontinuity, understood as a radical cognitive and practical asymmetry between unbounded posthumans on the one hand, and humans and their bounded descendants on the other, is technological, attributed not to any particular technical cause but to more general abstract tendencies for disconnection within technical systems. As a result of this radical asymmetry, we should understand the emergent behaviors of a future AGI from within a framework that is recalcitrant to, sealed off from, any well-defined hermeneutics of intelligence, because that well-defined hermeneutics of intelligence is definitely going to be provided by us, the existing humans, or the existing or future descendants of the existing humans. What Roden wants to say is that, no, as I mentioned, there will be a radical break away from both: Homo sapiens as a model, and anything that can be artificially realized or modeled on Homo sapiens.
Any questions? No. Ok, I’ll go on then.
Citing Mark Bedau and Paul Humphreys, Roden suggests that a diachronically emergent behavior or property occurs as a result of a temporally extended process but cannot be inferred from the initial state of that process. It can only be derived by allowing the process to run its course. This is what can be called the "unbounded posthuman," namely the "disconnected posthuman," as a diachronically emergent phenomenon. We are not talking about regular emergence; we are talking about diachronic emergence, and the medium of this diachronicity, its "time," is for Roden the deep time of technology. In other words, unbounded posthumans cognitively and practically reiterate what seems to be a prevalent characteristic of complex, nonlinear dynamic systems, namely, divergence from initial and boundary conditions. In other words: the disconnected posthuman, in the sense Roden is talking about, is already a register of nonlinear dynamics.
A physical system can fundamentally and radically diverge from its initial conditions, no matter what these initial conditions are. Given a few perturbations beneath the threshold of measurement, a dynamic system can evolve in an explosive manner away from its initial conditions. For example: if the human is such and such, then, given such and such technological perturbations, the posthuman can evolve catastrophically, namely in complete disconnection, in an evolutionary sense, from those initial conditions, which are here and now: the existing human, the existing Homo sapiens, or their descendants, computers or machines modeled on Homo sapiens.
Is this clear? Don't be shy. You should ask questions at this point.
So, I have a question: If I understood correctly, you are saying that nonlinear dynamic systems can predict diachronic emergence, or is it already in our epistemic grasp?
Diachronic emergentism is, in fact, a variation on the theme of nonlinear dynamics, in the sense that the trajectory of a dynamic physical system can vastly diverge from its initial conditions, in an explosive manner, given a few perturbations which might not even be measurable.
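What this looks like concretely can be sketched with a toy numerical example. The logistic map is my choice of illustration here, not an example discussed in the seminar or in Roden's book; it simply shows "a perturbation beneath the threshold of measurement" blowing up over time:

```python
# A minimal illustration of sensitive dependence on initial conditions,
# using the logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic
# regime (r = 4).

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)            # reference trajectory
b = trajectory(0.2 + 1e-10)    # perturbed far below any realistic measurement
gap = [abs(x - y) for x, y in zip(a, b)]

print(f"initial gap: {gap[0]:.1e}")
print(f"largest gap over 50 steps: {max(gap):.3f}")
```

An imperceptible difference of 1e-10 in the initial condition grows to an order-one difference within a few dozen iterations, which is the intuition the "explosive divergence" talk trades on.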
So, whatever may be the initial conditions of realization for humans, either in the sense of human-one or human-two (Homo sapiens, or their descendants modeled on them), whatever may be the initial conditions of realization for humans as a natural species or as rational persons, whether such conditions are seen as natural evolutionary causes or as logical-conceptual norms afforded by discursive linguistic activities, future posthumans can neither be predicted in an epistemically robust way nor adequately approached with reference to such initial conditions, namely, humans. Hence the disconnection, the vast divergence.
Disconnection, in this sense, signifies the global diachronicity of deep technological time, as opposed to the local emergent behaviors associated with particular technologies of the past and present. The radicality of Roden's unbounded posthuman, or future AGI (Artificial General Intelligence), lies precisely in the double-edged sword of technological time and its abstract tendencies, which cut against both any purported natural essence and any socio-culturally or rationally conceived norms of being human. In virtue of the Disconnection Thesis, when it comes to thinking unbounded posthumans, the artificiality of rational personhood (the human as a functional diagram) is as handicapped as the naturalness of the biological species, Homo sapiens.
The images of the posthuman put forward through evolutionary naturalism, or through a rational normativity built on the biological constitution of Homo sapiens or the synthetic makeup of the discursive, apperceptive sapience, are quite literally bounded; this is why Roden calls his posthuman "unbounded" with regard to the Disconnection Thesis: any other trajectory of evolution would be bounded. These bounded images are fundamentally inadequate to cope with, or engage, the ethical, cognitive, and practical ramifications of technologically unbounded posthumans, and in that sense they fall back on the very parochial humanism from which posthumanism was supposed to break away in the first place. However, despite the remarkable theoretical sophistication of Roden's arguments and the cogency of his claims regarding the cognitive-practical asymmetry of a future artificial intelligence, or artificial general intelligence (AGI), none of which should by any means be discounted, upon closer examination the Disconnection Thesis suffers from a number of glaringly loose threads and misconceptions. Coming back to what I said earlier: even though there is a great deal of sophistication in this argument, the incoherencies within it ultimately drag the thesis back into the very cradle, or swamp, that it tries to escape from, and that is conservative humanism.
Roden's account of diachronically emergent behaviors within deep technological time, and of the radical consequences for prediction and interpretation on the basis of initial conditions of realization, remains negatively metaphorical. Firstly, even if we follow Roden in ruling out the rational, linguistic-inferential, conceptual-intentional conditions necessary for the realization of human agency, it is still far from obvious how neatly a feature of nonlinear dynamic systems, i.e. divergence from initial conditions, can be extended to all conditions of realization. Not all complex systems, and not all conditions necessary for emergent behaviors, can be framed in the context of nonlinear dynamics and so-called stability analysis in the complexity sciences. Now, of course, it can be objected here that my objection to Roden is, in fact, Roden's counterargument, in the sense that he doesn't think there are such intentional states, conceptual contents, and so on, characterizing the human; in fact, all such contents, all such norms that specify the human, are for him grounded naturally. That is why he can talk about divergence from the human condition, from the human conceptual capacity, and so on, in terms of a natural physical model, namely nonlinear system dynamics.
Now, the problem with this is: yes, okay, let us "agree" tentatively that there is no such thing as intentional content, conceptual content, or norms for defining intelligence, for defining the human, and that they are all physical, all natural, so to speak, more broadly.
Setting aside that this agreement would lead to physicalist and naturalist fallacies, which I don't want to talk about, let's agree that it is okay, and that this is why he is applying nonlinear system dynamics to certain defining aspects of humans, aspects which are not in fact natural but normative, conceptual, and so on. Let's say that they are all naturally grounded and can be approached physicalistically. I would argue, from the perspective of the complexity sciences, that the very model of nonlinear system dynamics, in the sense that a system can, in sufficient time (diachronic emergence), vastly and explosively diverge from its initial conditions, is not in fact a physicalist or naturalist thesis. It is simply a mathematical idealization, which is useful, but which cannot simply be applied to natural physical systems.
Now, let me come back to my point. The main component that characterizes a nonlinear dynamic system's divergence down the road is the so-called global Lyapunov exponent, a measure for understanding how dynamic systems evolve in time and how much they can diverge from their initial conditions.
The framework of diachronically diverging emergent behaviors cannot be extended to all conditions necessary for the realization of human intelligence. For example, it does not apply to those involving computational constraints, such as resource-related constraints and the information-processing constraints associated with the instantiation of different types of computational capacities.
Secondly, the so-called radical consequences of the divergence from initial conditions (of an unbounded posthuman intelligence from its existing human substrate) for a given set of emergent behaviors within a dynamic system are themselves based on a false interpretation of the formal property of nonlinear dynamic systems known as a positive global Lyapunov exponent (Aleksandr Lyapunov made fundamental discoveries in nonlinear dynamics). This has been the root of a complexity folklore that is not only widely popular in the humanities but also prevalent in commentaries on the complexity sciences. In short: nonlinear systems are sensitive to initial conditions. The smallest amount of local instability, perturbation, or uncertainty in the initial conditions, which may arise for a variety of reasons in different systems (in Roden's case it is the technological perturbations; they are the uncertainties which create a massive uncertainty down the line), can lead to an explosive growth in uncertainty, resulting in a radical divergence of the entire future trajectory of the system from its initial conditions. This explosive growth in uncertainty is quantified by a measure of the on-average exponential growth rate for generic perturbations, called the maximal, global, or largest Lyapunov exponent. Roughly formulated, the maximal Lyapunov exponent is the time-averaged logarithmic growth rate of the distance between two neighboring points around an initial condition, where the distance, or divergence, between neighboring trajectories issuing from these two points grows exponentially. A positive global Lyapunov exponent is accordingly defined as the measure of a global and, on average, uniform deviation from initial conditions and increase of instabilities, or uncertainties, in the system. Just like the unbounded posthuman.
Now, the global Lyapunov exponent comes from linear stability analysis of trajectories of nonlinear dynamic systems (or rather, of nonlinear evolution equations) in an appropriate state space, within an infinite time limit. The idea of radical global divergence of trajectories, or uniform explosive growth of local instabilities, is therefore only valid within an idealized, infinitely long time limit. But the assumption that exponential deviations after some long but finite time can be properly represented by an infinite time limit is problematic. In other words, the radical conclusions regarding the limits of predictability and analysis, drawn from this interpretation of positive global Lyapunov exponents, hold only for a few simple mathematical models, not for actual physical systems. That is the folklore, and it is also the folklore underlying this kind of explosive divergence, or diachronic emergence, of the unbounded posthumans from their initial conditions, the existing humans, in Roden's account.
An on-average increase of instabilities, or radical divergence from initial conditions, is not guaranteed for nonlinear chaotic dynamics. In fact, linear stability analysis within a large but finite elapsed time, measured point-to-point by local Lyapunov exponents over the state space of a system, shows regions on an attractor where the nonlinearities cause uncertainties to decrease. In other words, trajectories converge rather than diverge, so long as they remain in those regions.
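The region-dependence just described has a simple one-dimensional analogue. For a map, the one-step local exponent is log|f'(x)|: where it is negative, nearby trajectories momentarily converge, even in a globally chaotic system. A sketch, once more using the logistic map as an assumed stand-in rather than anything from Roden's text:

```python
import math

r = 4.0

def f_prime(x):
    # Derivative of the logistic map f(x) = r * x * (1 - x)
    return r * (1 - 2 * x)

# One-step local Lyapunov exponent log|f'(x)| at sample points:
# positive -> locally diverging, negative -> locally converging.
for x in (0.1, 0.3, 0.45, 0.55, 0.7, 0.9):
    lle = math.log(abs(f_prime(x)))
    label = "diverging" if lle > 0 else "converging"
    print(f"x = {x:.2f}  local exponent = {lle:+.3f}  ({label})")
```

Near x = 0.5 the exponent is negative (|f'(x)| < 1), so uncertainties shrink while trajectories pass through that region, even though the global average exponent of the map is positive.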
So, this is a kind of tangential point: it is very interesting that many of these singularitarian posthumanist scenarios are in fact implicitly built on complex systems, particularly on nonlinear dynamics models. But nonlinear dynamics, as correctly understood in the complexity sciences, never suggests the conclusions they draw from it.
Yes. Can you explain why in certain regions they tend to converge?
In a simplified, perhaps even reductive way: a dynamic system, any kind of physical system, is classically defined, at the ground zero of its analysis, by its initial and boundary conditions, like a gas inside a bottle, okay? Now, with regard to dynamic systems, you want to see what happens given certain inputs into the system. These inputs can also be thought of as perturbations, because you introduce something into the system that is not already there. The claim at issue is that the smallest perturbations (in our case, some very local technological advances injected into the human system) result in a massive, explosive growth or divergence, namely, unbounded posthumanism.
Now, we can think about this massive growth in terms of a set of possible trajectories, which define or plot the evolution of the system once the perturbation is inside it, once the system is perturbed, once the input is already in the system. These trajectories of the evolution of the system can be represented in a kind of idealized space, which is essentially a bracketing of the threshold of one evolution as opposed to another trajectory, okay? It is what you might call a "state space."
Now, within the range of the state space, given a certain kind of perturbation, we can analyze the range, or the set, of these evolutionary trajectories and their corresponding state spaces. When we talk about certain state spaces, we simply mean this: given such a perturbation, there is this range of state space and of trajectories of evolution for a nonlinear dynamic system. And within a statistical certainty (not any kind of metaphysical certainty) over this range of state space, the trajectories converge more than they diverge. Essentially, the uncertainties decrease after a sufficiently long time rather than increase.
You can think of it as a diagram: imagine a box that is your system. Then imagine an arrow pointing into this box; this is your input, which perturbs the system. Then, from your box, some squiggly arrows come out. Each squiggly arrow is defined by a range or threshold, its own space. Within this range, given sufficient time, and according to the Lyapunov exponent, which is a basic tool for analyzing both linear and nonlinear dynamic systems, uncertainties, namely the effects of perturbations, decrease instead of increasing.
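To make the Lyapunov exponent at issue here concrete, the following is a minimal numerical sketch. For a one-dimensional system like the logistic map x → r·x·(1−x) (the map and parameter values are illustrative assumptions of this sketch, not anything from the discussion), the exponent is the long-run trajectory average of log|f′(x)|, and its sign tells you whether nearby trajectories reconverge or diverge.

```python
import math

def lyapunov_logistic(r, x0=0.1, n_transient=500, n_steps=5000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the long-run average of log|f'(x)| along a trajectory:
    negative => nearby trajectories converge (perturbations die out),
    positive => nearby trajectories diverge exponentially (chaos).
    """
    x = x0
    for _ in range(n_transient):  # discard transient behavior
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_steps):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))  # log |f'(x)| at this step
    return total / n_steps

# A stable regime: perturbations shrink and trajectories reconverge.
print(lyapunov_logistic(2.5))  # negative
# A chaotic regime of the very same map: perturbations grow.
print(lyapunov_logistic(4.0))  # positive, approximately ln 2
```

Note that the same map yields convergence or divergence depending only on the parameter, which is precisely the later point in the discussion: whether perturbations explode is a property of the particular system, not an a priori law.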
My response to this would be to say that the assumption that this differential system exists a priori is wrong. When scientists use differential equations, they simply use them by looking at systems which already exist. For example, if we want to consider the useful energy of Earth (maybe we can consider first a very simple set of equations), we would only consider energy which comes from the Sun, and this would explain quite a big chunk of Earth's history. But with a different set of inputs, for instance, if we start mining oil on Mars, we will get a completely different system of equations. Which system we turn our attention to, at what point in time, and in what order, can influence quite a lot, right?
I agree. The point is that when we are trying to bracket, in a certain way, a set of energy lines that are relevant to our system, the planet, we are going to turn these energy points, these energy indexes, into a system. Obviously, the first thing we need is a set of criteria for recognizing and defining the initial and boundary conditions of our system, right? That is basically the ground zero of our stability analysis. You must have criteria for initial and boundary conditions from which you can launch and study the evolution of the system. I do agree that the equations can fundamentally differ, and that is why there are different systems, different models of the system, and so on and so forth. But if we are talking about this kind of explosive divergence, then we are talking about nonlinear system dynamics, and there the Lyapunov exponent is still considered the canonical model, not of the equations but of the very space of possibilities: the mathematical space of possibilities that can occur within such a physical system, within a given order of time, such-and-such initial and boundary conditions, and so on and so forth. The whole point is that this idea of explosive growth is not necessary. It can happen in a nonlinear dynamic system, but it cannot be taken as if it were an a priori natural law, because it is a mathematical idealization.
Not every physical system, as you say, follows this trajectory. Let's assume that Roden believes we are simply physical systems, not anything spooky: norms, reasons, and the like. Then how can we talk about this coherently if such conclusions are drawn from the mathematical idealization itself rather than from the exact application of that idealization to a particular physical system? You see, that is the problem. I have no problem with there being different evolutionary forces in terms of nonlinear dynamics; it is just that the umbrella way of applying it, or the selective way of applying it to humans, doesn't hold any ground in complexity science or in physics. What are the measurements? What are the initial conditions? What are the boundary conditions? Can we see them? I have no problem playing the role of a pure rationalist and fighting the great normative fight. But I can also put the physicalist mask on and show you that this choice is completely arbitrary.
But a superbly fantastic comment. I agree with this.
So essentially, this is the first of several problems with the Disconnection Thesis (DT). The DT tries to say that we cannot infer the characteristics and features of the unbounded posthuman from its initial conditions, namely us: MOSHs (mostly original substrate humans) and their descendants. Now that Roden has gotten rid of all the intentional conceptual contents of the human and is treating the human almost like a natural kind, he needs to explain why his modeling of what you might call divergence looks awfully like a full-blown model of nonlinear dynamic systems. And if what it supposedly tells us holds only by virtue of its being an idealized mathematical model applied to a specific class of physical systems, how can we draw conclusions from it? I'm sure Roden will work more on this front, but I haven't seen anything in the Disconnection Thesis, which is in effect a very strong singularitarian hypothesis, that addresses these issues.
Another thing: this nonlinear-dynamic picture of so-called physical systems is an idealized mathematical model, and the idea that we cannot infer the future of a system from its initial conditions holds, in fact, only for dynamic systems. But there are also static systems, which can likewise be understood as complex, under a different register of complexity. Nonlinear dynamics is by no means a sufficient register of complexity in physical systems.
I would like you to read James Ladyman's paper "What is a Complex System?", where he gets rid of some of what I call the dogmas about the idea of complexity as understood in complexity science. The sense of complexity as meaning variations, differences, nonlinear dynamics: that is not correct. These are only very narrow characteristics of complexity, a special case, so to speak. We do in fact have static systems which are complex. So where is the discussion of the question whether the human can in fact be understood as a static, complex system? Why is the thesis of nonlinear dynamics, in this full-blown sense, being applied to naturally evolved humans, if such humans cannot simply be brought under the purview of nonlinear dynamic physical systems?
These are all things, I think, that should be elaborated. But regardless, there are more glaring flaws, I would say, inside the Disconnection Thesis than just this one.
Aside from the highly debatable extension of a very particular feature of physical complex systems to all conditions of realization of sapience, natural and/or rational, the main issue here is that there is simply no such thing as an emergent behavior diverging from initial conditions in an unconfined or unbounded manner. To say that the divergence is unbounded risks an idealization, an idealization which, in Roden's case, becomes a reification. There is no guarantee of uniform divergence from, or convergence toward, initial conditions. This, as Valentin was saying, is quite fundamentally relative to the specificities of the physical system in question, a specificity that should in fact be talked about in its own physical particularities. And these physical particularities are in fact constraints, structural complex constraints, which Roden tries to erase from the purview, generalizing and drawing an umbrella, overarching conclusion from them. The whole point is that Homo sapiens, if we are talking about Homo sapiens, can be understood as a physical system in its particularities, as constraints: it has enabling constraints and negative constraints; it has certain kinds of characteristics. Any application of such models to the physical human, as a substrate for the future posthuman, should be quite specific about what these constraints are, which of course we do not see in the Disconnection Thesis.
Another contentious claim in the Disconnection Thesis is that the cognitive-practical abilities of posthumans might be founded upon the abstract general tendencies of technological systems, or more broadly, on technological deep time, and I still don't understand what technological deep time actually means; it is a recurring motif among the singularitarians. What is deep technological time? How do you extrapolate such a time? What would be the launching pad from which to speculate about such deep technological time, if not the constraints of the temporal consciousness of existing humans?
So, again, a contentious claim in the Disconnection Thesis is that the cognitive-practical abilities of posthumans might be founded upon the abstract general tendencies of technological systems. Roden claims that speculating about how currently notional technologies might bring about autonomy for parts of the Wide Human affords new substantive information about posthuman lives. There is a careful consideration here: a posthumanity realized by the mere extension of current technologies would present just another form of bounded posthumanism. That is a very smart move. Not to mention that drawing conclusions from particular historically instantiated technologies or technical causes does not imply the radical claims of discontinuity and divergence that Roden seeks to underline. Being aware of these problems, Roden's solution is then to single out salient disconnecting, namely self-modifying, tendencies of technical systems and to present them as diachronically emergent behaviors of deep technological time.
So, Roden is not interested in the particularities of the technologies present here and now, their specific restrictions and constraints, and so forth. He is trying to single out an abstract tendency from these "technological advances" unfolding here and now, and then to use it to show that, given such abstract tendencies, which are common to and can be generalized across "momentous technologies" like nanotechnology, biotechnology, cognitive science, or artificial intelligence, we should be compelled to speculate about, or extrapolate, an unbounded posthuman realized by these technologies, no longer convergent with or connected to where it has come from, namely existing humans.
But there is no evidence of the methodological basis upon which these particular tendencies or salient features have been singled out and assigned such a high degree of probability or magnitude. Essentially he is making an abstraction here, a generalization about technological singularity, about technological divergences or disconnections. But where, really, is the method by which he derives, from the historically evolved technologies we have right now, this abstract tendency of deep technological time? Any formal abstraction or generalization requires an elaboration of the method of abstraction. What were your particularities, such that you now have general abstractions? Do we see them? No.
So, I repeat: there is no evidence of the methodological basis upon which these particular tendencies or salient features have been singled out and assigned such a high degree of probability or magnitude that they result in an unbounded posthuman intelligence. The selection of salient features or behaviors, in this case disconnecting tendencies, makes no sense other than through an analysis of past and present technologies, namely the particularities of technologies: an analysis that would precisely bring into play the missing question regarding particular technical causes and specific data about their frequency and context in today's technological world. Essentially, this is very much in tandem with what Hegel would have called an indeterminate or abstract negation, as opposed to a determinate or concrete negation.
The movement from abstract to concrete, the movement from particular to universal, requires precisely such determinations. And such determinations require methods of their own; they require exact historical analysis. But so far, they are nonexistent in the Disconnection Thesis. It seems as if the generalization of disconnecting technological tendencies, which defines the deep technological time that leads to unbounded posthuman intelligence, is just a kind of wishy-washy, indeterminate decision. It is pure personal arbitrariness. If it is simply an instrument of abstraction, then what exactly distinguishes it from a psychologistic determination? In the sense that if I am a madman (which usually I am) and I say that things might all come back to a certain kind of initial state, or that humans might evolve into some sort of Platonic god or something: if there is no real concrete criterion of determination, then all sorts of arbitrary determinations are allowed. Then why should we choose this one as opposed to others?
The inductive generalization of specific tendencies in such a way that they enjoy a disproportionate degree of likelihood of occurrence is a well-known type of base rate fallacy in Bayesian inference and judgment under uncertainty. You see, humans are really bad at heuristics. They think that they are distinguished by their heuristics, but heuristics are their curse. Any time they engage in some sort of inductive heuristics, they are going to be biased, because usually such judgments are made under uncertainty. And under uncertainty, literally, the curse of arbitrariness haunts your judgments. Anything that you say might, you know, be inductively true.
Bayesian inference problems comprise two types of data: the background information, which is called base rate information, and the indicant or diagnostic information. The base rate fallacy in judgment under uncertainty occurs when diagnostic information, the so-called indicators, for example causally relevant data, is allowed to dominate the base rate information in the probability assessment. In other words, the absence or weakness of calibration between base rate and indicant information results in flawed prognostic judgments. In the case of Roden's Disconnection Thesis, some diagnostic features or representatives, such as the propensity for autonomy and disconnection in certain technical systems, are taken as general tendencies of future technologies, but in a completely biased way. And that is what the whole base rate fallacy is about.
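The calibration failure described here can be sketched with the stock textbook example (the numbers below are illustrative assumptions, not data from the discussion): a highly sensitive indicant "feels" decisive until it is weighed against the base rate.

```python
def posterior(base_rate, true_pos_rate, false_pos_rate):
    """Bayes' rule: probability of the hypothesis given a positive indicant."""
    p_positive = base_rate * true_pos_rate + (1 - base_rate) * false_pos_rate
    return base_rate * true_pos_rate / p_positive

# Indicant information alone: a 99%-sensitive test "feels" like a 99% verdict.
# Calibrated against the base rate (the condition is rare), it collapses:
p = posterior(base_rate=0.001, true_pos_rate=0.99, false_pos_rate=0.05)
print(round(p, 3))  # 0.019: under 2%, nowhere near 99%
```

Letting the diagnostic datum stand in for the whole judgment, as the argument here claims the Disconnection Thesis does with "disconnecting" features, is exactly this omission of the base rate term.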
The outcomes of technological evolution are outlined precisely on the basis of the overdetermination of some representatives, i.e., the selection of certain diagnostic data or features as causally relevant as opposed to other causal factors: the problem of the arbitrariness of judgments. But it is exactly this seemingly innocent notion of relevancy, selected on the basis of a diagnosed prominent causal role or representative feature, that is problematic. It leads to judgments in which base rate data, such as other non-salient or seemingly irrelevant features of technical systems that apparently lack any explicit causal role, as well as the uncertainties associated with the specific historical conditions of technological evolution, are ignored. So it simply becomes a matter of subjective, indeed subjectivist or psychologistic, selectivity. Consequently, the final result is an overdetermined prognostic judgment regarding how the tendencies of disconnecting technologies unfold within the overall evolution of technology, i.e., the claim about the abstract tendencies of deep technological time.
Firstly, there is no proposed methodology with regard to the criteria for the selection and diagnosis of disconnecting technologies. As I mentioned, this whole movement of singling out certain concrete or abstract tendencies is quite undetermined at this point in Roden's thesis. We do not know what the criteria of selection for these technical systems are, or how their disconnecting features have been diagnosed and singled out. Instead, what we have is a tacit and vicious circularity between the diagnosed features of some emergent technical systems and the criteria used to select those systems on the basis of the proposed features. Absent this methodological, epistemological dimension, we are adhering to a psychological account of technology that is the trademark of an idle anthropocentrism habituated to relying on its evolutionarily deep-seated intuitions for making diagnostic and prognostic judgments about that which is not human.
You see, falling into the trap of anthropocentrism is not just a matter of norms and rationality. Even if you are doing probability analysis, you can fall back into the dogmas of anthropocentrism.
Questions? Even off-topic ones.
I have an off-topic comment about the ideas behind option pricing theory and statistical modeling: the idea of uncertainty behind some of these models, which seem to be nonlinear, and how the idea of volatility seems to be ever-present.
Yes, the order of financial risk. This is why I use this as a launching pad. Roberto, are you familiar with the work of Suhail Malik?
I have read some passages but I'm not fully familiar with him.
Read his essay in the latest Collapse. It is absolutely about this idea: essentially the idea of risk in the contemporary situation, and of volatility as the order of risk. The way he handles it is very similar to how Roden talks about the future uncertainties of the unbounded posthuman. They can in fact be approached and criticized, if not by the same methods of criticism, then by the same family of critical methods, because they are, I would say, fundamentally equivalent.
What I’m getting is that Roden basically lacks methodological rigor. [Reza: On different levels, I would say: the probabilistic, the epistemological, and the physical]. Ok. How would you go about solving this? Or not solving this but going for a less anthropocentric view of posthumanism.
If I can excuse myself just for the sake of today’s session. I would like to finish this critique, and I will give you the response in the beginning of the second session.
Essentially, in one very brief sentence: I think there is no way around the fact that we cannot speculate about something in the future without the current epistemological, methodological, and conceptual resources of existing humans.
This does not mean that the future cannot, as Roden says, be fundamentally different: a future in which we may no longer exist. The name of the human, whether Human I or Human II in the sense I described, has already been expunged. But to overcome Human I and Human II toward a future artificial intelligence requires a systematic exploitation and modification of our existing conceptual resources. How can we go about this? That is the question. I have a few answers which might be fundamentally rudimentary, but nevertheless I think that, at least in my current philosophical research, they can in fact point toward that kind of posthuman condition as Roden imagines it.
I think that the question of artificial intelligence, which is in fact the question of the posthuman in the most consequential sense, is really the question of a design space. How can you design an intelligence as vast as possible? Of course, you can say: oh well, it is just vast. But in the abstract, this vastness is no good. You have to make it; you have to design it. And to design a vast space, you have to start from your current constraints, your current resources, and so on and so forth.
I would say there are four dimensions, four aspects, which are currently the pillars of reaching toward the heavens, toward this unbounded posthuman, as Roden says. One is the question of probability; I'm not going to go into that, it is an extremely difficult question. The second is the question of perception as a probabilistic, self-optimizing model with regard to an environment; we see applied versions of this in the current paradigms of AI, neural networks and so on. The third is the question of conceptualization and logic, of logical inferences.
The fourth one is the key to posthuman intelligence in a Rodenian sense. The first three share something with one another, concerning either perception or conceptualization: essentially, they define the perceptual-noetic poles of any sort of cognitive agent. We can talk about different sources of perception, different accounts of probability, and we can model them differently, with different models of conceptualization, logic, and so on; that doesn't matter. The entire point is to redefine the perceptual-noetic poles themselves: essentially, to make new worlds in which, for example, a jar of honey tastes sweeter. What does that mean? It is neither sweet nor bitter: it is sweeter. It is a different combination, a different synthesis, of the existing perceptual-conceptual elements that we have. You see, until we reach the point where a robot sees, as for example Sellars or Goodman would have said, a flock of "blight" crows in the sky, we cannot claim that we have reached an unbounded posthuman.
What is a "blight" flock of crows? When we see a flock of crows in the sky, we see them as either black or white. We don't have white crows; we have black crows. But by virtue of our perceptual and conceptual capacities, the concept of black means that it is not white, that it is not green, that it is not red, and so on. But imagine that a robot could construct its perceptual-noetic, or perceptual-conceptual, poles so differently that it could see the universe in a way fundamentally different from ours. That is the question of world-building, and that is the very aspect of unbounded posthumanism. To do that, of course, the robot should be able to sufficiently master the perceptual-conceptual resources we have at hand right now: black and white. What does it mean to perceive something black? What does it mean to perceive something white? What does it mean to conceptualize something black, and what does it mean to conceptualize something white? But more importantly, what does it mean to hypothesize: to make a counterfactual argument, an unnatural predicate such as "blight" (black and white), such that it sees the crows as "blight" and is capable of making the universe something like that, with redefined perceptual-noetic poles?
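The "blight" predicate can be sketched as a Goodman-style gerrymandered predicate, in the spirit of his "grue" (the cutoff year T0 and the encoding below are assumptions of this illustration, not anything stated in the discussion):

```python
T0 = 2100  # hypothetical cutoff year; an assumption of this sketch, as in Goodman's "grue"

def is_blight(colour, year):
    # "blight" applies to things that are black if examined before T0,
    # and white if examined at or after T0
    return colour == "black" if year < T0 else colour == "white"

def is_whack(colour, year):
    # the complementary gerrymandered predicate
    return colour == "white" if year < T0 else colour == "black"

def is_black_via_blight(colour, year):
    # From inside the {blight, whack} vocabulary, it is "black" that looks
    # gerrymandered: black = blight-before-T0, whack-at-or-after-T0.
    return is_blight(colour, year) if year < T0 else is_whack(colour, year)

# The two vocabularies are interdefinable; neither is privileged by logic alone.
print(is_black_via_blight("black", 1990))  # True
print(is_black_via_blight("black", 2200))  # True
```

The interdefinability is the point: which predicates count as natural is fixed by the agent's perceptual-conceptual poles, not by the logic of the predicates themselves, which is why constructing and projecting an unnatural predicate is treated here as a mark of world-building.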
Reza, you referenced Goodman's Ways of Worldmaking.
Yes, absolutely. I think it is one of the greatest works of philosophy, though it was written for a different purpose. The point of Goodman's Ways of Worldmaking, which I suggest to all of you, is essentially a version of irrealism: any rigid concept of reality is contradictory to our conceptual, epistemological, and methodological ways of thinking; it is essentially self-contradictory. So the only way we can render our epistemological and conceptual-perceptual resources consistent, and talk about them coherently, is by thinking in terms of an irrealist thesis. This irrealist thesis is about making different worlds. And when I say "making different worlds," this is not just a capricious statement. We are not making worlds just for the sake of making a different world. Every world that we make should be a way, a new way, of knowing about other worlds. Okay? So, for example, both the Ptolemaic system and the Copernican system can be thought of in terms of world-building, and in connection with one another.
Essentially, in the Ptolemaic system you see that the Sun revolves around the Earth. You have chosen a specific frame of reference, and by virtue of that you are subject to certain perceptual-conceptual consequences. Whereas in the Copernican system you have chosen a different frame: the Sun is the center and the Earth revolves around it, and for that matter you are living in a different world. But the thing is that these two worlds are not totally disconnected. They are made from ingredients they share at the base, and the whole point is that this diversification of worlds, or world-making, the Ptolemaic and the Copernican, allows you to see the world differently, to pose questions that are not permissible in the other world. For example, in the Ptolemaic system you can ask how long the Sun takes to revolve around the Earth, and it is quite a justifiable question, a good question, which is totally scientific. Whereas in the Copernican system you are only allowed to ask how long the Earth takes to revolve around the Sun. It is only in connecting these two worlds, and again making new worlds, a kind of ramification of worlds, that you can see the movement of celestial bodies: moving from the opposition between the Ptolemaic and Copernican systems to the Keplerian system, to the Newtonian, to the Einsteinian, and so on. So this is the idea of world-making. You can think of it as an allegory for AI, a future AI.
So, let me continue with regard to Roden's thesis. I talked about the first, probabilistic or heuristic, objection: absent this methodological, epistemological dimension, we are adhering to a psychological account of technology that is the trademark of an idle anthropocentrism habituated to relying on its deep-seated intuitions for making diagnostic and prognostic judgments. Secondly, even if we accept the diagnosis of disconnecting features as a verdict obtained non-arbitrarily, non-selectively, and non-psychologistically, we are still left with a statistical fallacy in the inductive generalization of these features, in the form of an overdetermined judgment about the abstract tendencies of deep technological time. This overdetermined judgment becomes the locus of a disproportionately high probability, giving a sense of false radicality or impending gravity to its consequences.
It is kind of like 9/11 and the "unknown unknowns." When you are trying to judge under such uncertain conditions, so-called judgment under uncertainty, you are always predisposed to assign a high magnitude of probability to your consequences, your probabilistic posteriors. Here, in the Disconnection Thesis, it is the unbounded posthumans that are fundamentally disconnected from their human substrate.
But just because some represented features of technical systems may play more prominent causal roles does not mean that they are more likely to dominate the evolution of technology in the form of diachronically emergent tendencies leading to an unbounded posthuman intelligence, as defined by Roden. In other words, even if we accept that local disconnections are salient features of emerging NBIC technologies (nanotechnology, biotechnology, information technology, and cognitive science), a claim which, by the way, already calls for methodological assessment, there is no guarantee that these local representatives will become global tendencies capable of generating radical discontinuities. There is no guarantee that such salient features can be thought of as the generalized tendencies of a deep technological time. Assigning significant probabilistic weight to these features and then drawing radical conclusions and wagers from them is another form of a problem identified by Nick Szabo; I really would like you to read his blog post "Pascal's Scams."
Nick Szabo calls these Pascal's scams: scenarios in which the evidence is poor and probabilities lack robustness. In such a poor-evidence environment, the addition of new evidence, for example the defeat of a human player by a computer in the game of Go, or a breakthrough in one of the branches of academic science, can disproportionately change the probability and magnitude of outcomes. This new evidence is as likely to decrease the probability by a factor X as to increase it by a factor X, and the poorer the original evidence, the greater X is, hence the high proportional magnitude of the judgment. In such an environment, the magnitudes of possible outcomes, not just their probabilities, are overdetermined to such an extent that uncertainties become the basis of decision-making and cognitive orientation, forcing us to make ever more expensive bets and form ever more radical beliefs with regard to uncertainties and future scenarios, scenarios that can neither be falsified nor adequately investigated by analyzing the specificity of their historical conditions of realization.
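The robustness point can be sketched with a toy Beta-Bernoulli model (the model choice and the observation counts are assumptions of this illustration, not taken from Szabo's post): the poorer the existing evidence, the larger the multiplicative swing a single new observation produces.

```python
def beta_mean(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a Bernoulli rate under a uniform Beta(1, 1) prior."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

def swing_factor(successes, failures):
    """How much one new confirming observation multiplies the estimate."""
    return beta_mean(successes + 1, failures) / beta_mean(successes, failures)

# Poor evidence: one observation each way. A single new data point
# (say, a headline AI breakthrough) swings the estimate by 20%.
print(swing_factor(1, 1))      # 1.2
# Rich evidence: the same single data point barely moves the estimate.
print(swing_factor(100, 100))  # ~1.005
```

A disconfirming observation would swing the poor-evidence estimate downward by a comparably large factor, which is the symmetry Szabo points to: in a poor-evidence environment, the magnitude of the update, not just its direction, is what gets overdetermined.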
Just very much like the paranoia of the 9/11 terrorist attacks: you know, "all of these guys could be goddamn terrorists," and under this kind of judgment under uncertainty, they should all be treated like terrorists. The same thing can be applied to this discussion, with caveats of course, but nevertheless, at base these are just biased heuristics in judgment under uncertainty.
What is unlikely, insofar as it is only probable under uncertainties (methodological, semantic, paradigmatic, and epistemic), becomes likely. Then what is likely under the same uncertainties becomes plausible, and what has now become plausible, only because it is probable under implausible conditions, becomes weighty and truth-indicative; it gains some gravity. Such is the process through which the Pascal's scam is sold to the unsuspecting. A Pascal's scam is just a probabilistic equivalent of Pascal's Wager. Imagine that God were real: a judgment under uncertainty. You move from uncertainty to implausibility, from implausibility to plausibility, and so on and so forth. And then you try to scam other people: "if you don't pray to God, you will be banished to hell," because what if God were actually real?
In short, we are swindled into taking the magnitude and probability of such scenarios seriously, treating what is at best an unfounded conjecture, and at worst a flight of metaphysical fancy no more substantial than an account of the magical properties of angels in heaven, as if it were a plausible possibility, not entirely foreclosed to rational assessment and epistemological procedures.
In attempting to retain their claimed plausibility without exposing themselves to any criteria of robust analysis and assessment that might debunk their purported radicality, such extreme scenarios have to formulate their wagers not in terms of epistemological problems about posthuman intelligence and so forth, or hypotheses that can be adequately tested from the perspective of current resources, but in terms of aesthetic and ethical pseudo-problems, often structured as "but what if?": you know, "but what if God really existed?", "but what if this is really going to be the future of humanity?" Questions desperately begging for a response, an engagement, or sympathy for their plausibility. It is in this fashion that the genuine import of the artificial realization of mind, or of the consequences of posthuman intelligence, is obfuscated by pseudo-problems whose goal is to maintain a facade of significance and seriousness: the existential risk of AGI, the security analysis of posthuman intelligence, or, in the case of the Disconnection Thesis, the ethical complications arising from the advent of unbounded posthumans. In such frames, the posthuman is disconnected from the human only to be reconnected back to the human at the level of discourse, a hollow speculation that feeds on the most dogmatic forms of human affect, heuristic biases, and intuitions.
This is the first point, but I still have one more point regarding the critique of Roden's Disconnection Thesis; I will leave it for the next session. That should give us good material: a lot of you should be able to single out some of these themes, and as we move forward you will see that these themes are the undercurrent motifs of all singularitarian positions on posthuman intelligence: heuristic biases, methodological underdetermination, metaphysical overdetermination, and so on and so forth.
Contrasting all the different thinkers, I do think there is a very strong similarity, not in terms of prose, obviously, but in terms of content, between David's account of disconnection as intelligence going astray, going wild, and Nick Land's account of intelligence as well. It shows in how they respond to the challenge of intelligence leaving its human substrate: whether they want to think it through, whether they cheer it on, or whether they are scared to bits like Nick Bostrom.
This is how I think about it: Roden and Land are rather different. Essentially, there are a few components in their arguments which do not allow for a pure convergence. Yes, they overlap, but they do not converge. I think that, as you say, Nick Bostrom, or not Nick Bostrom exactly, but the whole idea of panic, AI panic, artificial intelligence panic, or AI over-optimism, the malevolent AI, a Skynet, which Land, you know, worships, set against a benevolent AI in the vein of Yudkowsky: they come from the same table of categories. It is just that we need to distinguish this table of categories and make sure that the subtleties of how they overlap, but also how they can be distinguished, are recognized. Yes, definitely, let's do it.
I have two or three criticisms left about the Disconnection Thesis, and then Peter Wolfendale has a fantastic criticism, which I think is superb, that comes back to this very idea: the whole point is that Roden tries to retain certain philosophical components such that these components allow for a posthuman future intelligence and not a radical alien here and now. In the sense that once you expose and debunk those philosophical components, posthuman intelligence no longer remains a hereditary problem, as if it were the child of the human after an extensive spacetime translation. It comes back instead to the very question of the possibility of the alien here and now. And with the possibility of the alien here and now, as Kant would have put it, you then have to explain why you are in fact capable of recognizing this alien, and how you can manage to explain what it means to identify this alien as an intelligent agent, without simply engaging in the vagaries of speculation, in fanaticism.