guy de lancey:
switching to the computer there.
Razorsmile:
Hello? Ah, I've got two guys, let me...
guy de lancey:
Hello? I've got two guys... I'm getting rid of this one.
Razorsmile:
Ah, there we go. Hang on. We've got some... This is...
guy de lancey:
Yeah, I have two guys. Welcome everybody, welcome to these conversations.
Eric:
Ha ha!
guy de lancey:
Yeah, yeah, exactly. I was saying we've got this kind of...
Razorsmile:
That's not right. Okay, let me turn my sound down and just get my headphones and see if I can flip that over.
Eric:
I think there was a very interesting moment, a moment of the uncanny. It kind of seemed quite apt.
Razorsmile:
Which moment? It's the echo. Hopefully it's gone now. Hopefully... No.
guy de lancey:
the hallucination glitch.
Razorsmile:
You've all got lovely headphones. I didn't of course because mine's worked fine before. Has that echo gone?
guy de lancey:
It's gone. It was my feedback, I think, because I switched over.
Razorsmile:
seems to have gone here.
Oh, it's terrible, terrible. Okay, let me test again. Right, I have here just a brief set of notes that we gathered from our WhatsApp. Um.
So we had a number of different subjects come up, which again was kind of interesting. The first thing: someone says that they're working here in robotics. And one of the first things someone mentioned, Rosina mentioned, she asked the AI itself, ChatGPT, the AI that we have at the moment, whether the robot came before AI. And I suppose I was kind of interested in that when I was thinking about things later. I mean, Rosina's not here to speak to this, but did anyone else have a particular thought about that? Does the robot seem like something that's intimately connected to AI, or do they seem distinct and separate entities, distinct and separate kinds of concepts?
Sanjay Sur:
I was just thinking about robots and AI and common sense. Does a robot have common sense and does AI have common sense?
Razorsmile:
Give me a kind of basic definition of common sense here. What would you...
Sanjay Sur:
Well, you know, because it's kind of programmed in, isn't it, AI? It's sort of scripts, isn't it? So it only has a certain amount of things programmed into it. And with that, it can make silly mistakes, the same way a robot could. Take, potentially, a cleaning robot: in cleaning, it might break something, because it doesn't have the common sense programmed into it, because it's not human. Does that make sense?
Razorsmile:
Yeah. When I think of a robot, I think of the big arms in factories, or possibly the Roomba, or maybe even, on a very basic level, a thermostat, although I suppose that's not really quite getting there. But I mean, I feel like robots are machines. So is there a distinction there, between a machine having a kind of common sense that wouldn't normally be applied to it? Whereas you're right, I think there's a kind of wanting to apply it to AI.
guy de lancey:
Yeah.
Rebecca Teague:
I think they are connected. I think there's a move towards robots using AI, them being together. I was looking at an article about recent movements towards sort of sex robots which would use AI. Um... and be able to sort of respond to the user in certain ways. So I feel like they are like two separate things that are coming together and being created, meeting in the middle and creating these sort of AI-based robots.
guy de lancey:
Is anyone, can you hear me?
Razorsmile:
Yes, we can hear you. Go ahead.
guy de lancey:
Anyone familiar with Isabel Millar?
Razorsmile:
A little bit, yes. Psychoanalysis of AI.
guy de lancey:
Yeah, her psychoanalysis of AI and sex bots and... The question being, does AI enjoy? What does anyone think of that?
Rebecca Teague:
Mm-hmm.
guy de lancey:
My constant refrain is that, you know, calling it AI is the wrong thing; it's really just all machine learning. And the thing is, we're semantic creatures and that's a syntactical system, and can those ever come together?
Razorsmile:
So you're sceptical about the whole concept of AI as a kind of almost overly romanticized, or somehow vague, concept.
guy de lancey:
I am, yeah, I think so. I mean, I've met researchers in the lab. I am pretty skeptical about it. I mean, it's been hyped to a hell of a level anyway at this point. I am skeptical about calling it intelligence.
Rebecca Teague:
I think the...
guy de lancey:
I think for s-
Rebecca Teague:
I think, from what I've read anyway, at the moment perhaps there's not an ability to have a system that is conscious of itself, but if certain learning continues, that could be something that eventually manifests from the algorithms that are used to support the AI's learning.
Sanjay Sur:
You know what that's made me think of? So, I'm a graphic designer, and some of the things that are going on in design... Sorry, I've got an echo there, so I'm going to mute myself.
That's better. Is that... no, the echo's still there. Sorry, it's really distracting.
Eric:
So maybe if we all mute ourselves when somebody's speaking, that sometimes helps.
Sanjay Sur:
Better. Okay, so it's about how, basically, as a design tool, you can use AI to source things or find images or create assets for, like, a campaign. But essentially, what it's doing is going away and stealing things from other places.
Razorsmile:
Yeah, I've done that. Go ahead, Sanjay.
Sanjay Sur:
So in terms of this intelligence, it's not really an authentic intelligence; it's borrowed. And it really reminds me of the quote, I don't know which artist said it: "Good artists borrow, great artists steal." And it really makes me think that AI is just like the ultimate theft.
Rebecca Teague:
I've been thinking about this as well, maybe a little bit.
Maybe I'm potentially pessimistic, but I was thinking about how our creativity is often borrowed and stolen. A lot of the ideas that we have are just based on other people's ideas and made into our own. I was thinking that true intelligence, true creativity, comes from a source that is maybe beyond the external inspiration that you can find from others. And in a way, I was thinking how rare it is that people do tap into that source, and that a lot of what we find is just a borrowed idea in different robes. And thinking of AI in this way: if you ask, say, ChatGPT something, it's just gathering information that was once stored. And I was thinking about how a lot of that information has come from patriarchal times and from very specific kinds of databases, and how that's just a form of replication of these past modes of thinking and creativity. Yeah, it's definitely something that I've been thinking of as well.
Sanjay Sur:
But that brings up intersectionality, doesn't it? And about how the data that exists there only comes from sort of a privileged part of society where people have had access to the internet or digital platforms. So the data that's harvested only comes from a kind of place of privilege. And there's a huge absence of the other, like in that chart that you sent through about, like, you know, things being replaced by automation is a good indication of where those gaps are.
Razorsmile:
I mean, this came up in the other thread, I think Sanjay put this forward: a broad range of social questions I noted down. This kind of variation in the impact of AI, the variation in the way in which it's connecting to the world, or the way in which people are thinking about it. Both politically and conceptually it seems incredibly broad and incredibly vague at the moment. So I was wondering whether... I mean, I'd like to sort of... pursue...
Eric:
You've gone mute Matt.
Razorsmile:
I think one of the things that underlies some of the discussion about AI is the mathematical model of functions, which is often very central to the building of machine learning. Because within mathematics you quite often have this sense, and I've heard them say it, that functions describe the world and the world is fully describable by functions. In other words, it's capable of being, in a sense, related to, represented, understood (these are variable and problematic words), capable of being somehow represented mathematically, through the functions that mathematics enables. Now, if AI is, as it's sometimes called, a universal function approximator, if it can supposedly approximate any function in the end, particularly neural nets, then there's this sense in which the potentialities are offered to us by mathematics. And one of the things that's interesting in that is precisely this resistance by everyone outside of the mathematical field, to basically go: yeah, but there's always something missing. There's some creativity, originality, something we call thought, something we call intelligence, something that's not capable of being represented in your mathematical world. And there seems to be a rather major clash at that point, in some sense. Is that one of the problematics around AI: that the horizon is given by mathematics, but where the concept is encountered is by people outside of the mathematical discourse, and outside of that mathematical certainty or capacity?
Sanjay Sur:
That makes me think of psychosis, and how AI will never be able to be in psychosis. Or could it be? I don't know, if it hasn't got consciousness. And tapping into what Rebecca said about creativity:
if AI can't access the kind of primal sense of what it would be like to be in, say, the psychotic state of an artist who's creating paintings, that's a completely original thought that only a human brain can produce. I don't know if I've gone off-piste here, but there's something about the nature of a human brain, and the psychotic parts of the personality, that can never be reproduced, because it's so individual to the human being.
Eric:
For me, the question is about technologies, and technologies of the self. What distinguishes humans from animals is that we are technological creatures: the hand making tools, and now we have AI. So in a sense, the question is how we operationalize these different technologies. And Matt is alluding to the mathematics, the functions.
I think that's what's key: it's an extension and a continuation of the human being, structured through certain technologies of the self. And this is one of the latest developments in relation to the technologies of the self. I think it also comes back to Rebecca's question around the way capital appropriates these technologies. Technologies don't just develop randomly; they're going to be determined in a certain way by the needs of capital. And also, if you look at the Boston robots (someone recently showed me these Boston robots), there's this attempt to anthropomorphize these machines, which I think misses the question. The question is our dependency on technologies.
Rebecca Teague:
It makes me think about one of the things that we wrote in the WhatsApp group to sort of cover: is AI making us stupid? And I think one way of putting that is the dependency that one would have on something else, in order for it to think for them in a certain way. My own experience with AI has been really interesting, because I think to a certain extent I like to be aware of these systems that are
more popular, just so that I have a broader scheme of how to work with the future in a certain way, or to be able to communicate about the ways things are going. But it was really interesting to just witness how reliant you could very easily become on this system to think for you. And I think, I mean, it's the same with phones: that reliance is so widespread that, without it being consciously used, it could very easily make someone very docile and not able to think for themselves. And I guess that's where the risk comes in: then those systems kind of take over, rather than being something that is human-led in a certain way, or it will use us rather than we use it, sort of thing.
Sanjay Sur:
I think that's really interesting. If you just watch how people behave in a pub, for example, or in a restaurant, a group of friends, it's quite difficult for everyone to engage without their phones. So there's always this moment where everyone just gets on their phones for a few minutes to disengage, and it's like they're constantly plugged into, or out of, reality. And I think AI has the potential to reenact that same kind of thing, but on a bigger scale. And the more that we engage with it, the more it's taken from us. So it's almost like we become slaves to the intelligence, and we're basically giving up ourselves for free and working for these tech conglomerates, who are just taking and taking more and more of our brains and then selling it back to us. I find that quite disturbing, and the more I think about it, the more disturbed I get. So there's something about being careful about what it is we see, because once we've seen it we can't unsee it.
Eric:
Sorry, can I just say, I think we get duped by this word intelligence, and I think that's part of the problem. I think that's part of the attempt to anthropomorphize it. But it's also like the whole stuff around, in the aftermath of colonization, IQ tests coming into existence, you know, as a way of creating an ideology and a whole discussion around IQ...
Rebecca Teague:
I think we can do this.
Eric:
but you're actually appropriating bodies. I think for me the question is: why this particular technology? Why not technologies around smell, for example? Why are we developing this kind of technology? But I'm also curious about the people who are actually working with AI. John, you work with robotics; I don't know if you would call that AI. And Guy, if I'm correct, you also work with AI. So I'm curious: when you're actually working with it, do you think of it as AI, or do you just think of it as another technology?
John Lazarus:
Yes, we work with a surgical robot. It just arrived in Cape Town about a year ago, but it's not a robot in the conventional artificial intelligence sense. It doesn't do any thinking for itself. It really is a master-slave concept. It's really just an extension of human skills, refining the fine motor work that the robot can assist with. I suppose, since I have the floor, maybe I'll just make a couple of comments. I was trying to think: am I in favor of AI, or am I against it? Am I able to determine that for myself? I've done a fair bit of reading, and for all of us, there's just so much of it discussed in the media these days. I'm reading Middlemarch at the moment, George Eliot's astounding book, which I put off reading for years and years. Somewhere through that book, there are people protesting against the arrival of the railways in the Midlands in England. She grew up in Coventry; the book is set in a fictional town, Middlemarch, in the middle of England. And people are attacking the railway workers with pitchforks because of the invasion of this new, unknown technology. And I suppose, should we really be scared of new technology, if people behaved like that? We now accept it; railways are such an integral part of our lives. And yet somebody like Elon Musk, whose views I don't take particularly seriously, is famously saying that AI is more dangerous than nuclear weapons, which is something that does shock one a little bit, trying to understand how that could possibly be. And somebody I found very useful on the topic is Harari, Yuval Noah Harari, who wrote Sapiens and 21 Lessons for the 21st Century. I've enjoyed him. He wrote in The Economist that the one thing that makes us unique, our big cultural thing, is language, and he says what AI is able to do is master that language. It's able to own our one cultural product, language.
He is fearful that through that we can be duped because it's very easy to gain intimacy and trust of human beings through language and that he sees as something that can cause immense societal harm. And coming from him I take that quite seriously. He also wants to emphasize the difference between consciousness and intelligence as two very separate ideas.
If an AI chess program beats you, it doesn't celebrate, it isn't happy, it doesn't experience joy. In other words, those are uniquely human things. And one can't imagine, I think somebody mentioned it earlier, you can't imagine that machines will ever be able to feel that. Well, I can't conceptualize that easily. So what am I trying to say? I think whenever technology is new... I mean, think: nine months ago there was something called the metaverse. You won't hear about the metaverse anymore; it was a bad idea. So technology is rapidly evolving to such an extent that we often regard things which last year were brand new, and which we had to assimilate, as old hat. Secondly, this issue of language and intimacy I find potentially disturbing, as in Harari's description of it in The Economist.
That's all I have to say. Thank you.
Sanjay Sur:
The thing that... Sorry, Eric.
Eric:
Guy, you also... sorry. I just wanted to ask, because Guy also works with AI: I was just curious what Guy's experience of working with AI is.
guy de lancey:
I mainly research it and read about it, and use it as an ideation tool, which in some respects it's very good for. But one of the questions that has come up for me is...
Obviously, we're starting to bring in this idea, and John did, in terms of language, that it's affecting our sense of subjectivity, or rewriting it. Actually, Mbembe writes about that as well. But I was interested in what it means to be animal now.
And what is that juncture at which we see sentience? Because I think we're looking for this thing to be sentient. And what are the implications of the subjective shift, and this elective leap towards the synergy that we keep seeking in terms of looking for sentience? And how will we live,
having made, and yet having to maintain, that bargain that we're going to have to make? How do we inhabit that seduction when it answers back? And what will it ask us to imagine we are? So, you know, I'm often amused as well when they talk about robots and artificial intelligence, and you look at CNN or any corporate news program, and these people look like robots, they behave like robots, and they speak like robots. So when human beings start to ape this system that they seek sentience in, that for me is far more dangerous than the actual system itself, which is being anthropomorphized. Those are my reflections on doing this stuff. And then, constantly working with it as an ideation tool, you're constantly aware of the glitch, or the hallucination. I read recently that they've decided you're never going to program the hallucination out of ChatGPT, or even the imaging AIs. So that's something we have to consider dealing with, or consider being an interesting part of AI. What is that seduction about? Those are my thoughts.
Razorsmile:
When John was talking about the episode in Middlemarch, and when Eric was talking about technology and the role of technology, it seems in some sense AI is just a kind of contemporary moment in which that conversation takes place. You know, it's just the face of a much broader conversation about technology and about the role of technology in the human. But I think...
guy de lancey:
Mm.
Razorsmile:
When you mention language, John, and again, Guy, when you're talking about hallucinations and sentience: the sense of an AI asking us questions, rather than us asking it questions, which is the asymmetrical relationship with ChatGPT at the moment (it's us asking it, whatever it is). At the point at which it coherently seems to be asking us questions, that seems to be an interesting inversion that might take place. And I think when you mentioned, Guy, this kind of desire for consciousness, desire for sentience, that people look for in this, I think that's also true. And I think one of the things that was interesting about the way in which this AI concept came into contemporary consciousness was precisely this relationship to language, and people getting responses that they just felt were uncanny, bizarre, and that was what prompted this big moment. But in the fantasy and then the kind of, you know,
guy de lancey:
Mm-hmm.
Razorsmile:
world imaginary. Twenty years ago, I remember reading Deleuze and Guattari, and people writing on Deleuze and Guattari, and the imagery; I look back at an essay on John Marx in 2012, and everyone's talking about cyberspace and the interconnection of cyberspace. And then ten years ago, five years ago, it's algorithms and social media. And at this particular point, there's a sense in which AI is a kind of moment in that ongoing discourse. But I wonder whether that misses something that might be more interesting. I mean, for me, I think I'm kind of in favor of AI, in a sense, because I want it to exist. And what I'm curious about is, I kind of want it to be an alien. I kind of want it to be this relationship to something radically non-human, anti-human, something that's in a sense a radical alien, a rational alien.
guy de lancey:
Mm-hmm.
Razorsmile:
And that's the philosopher in me, wanting to be able to speak to reason. It's a kind of analogy to being able to speak to God or something. I want to be able to speak to the most rational, the most sensible thing in the world, to answer these questions. And I can feel that kind of strange desire there, you know. And I think all of our conversations around AI are simply sort of missing something that is going on in the background; all of our conversations at the moment are around other problems, or around our desires, and yet we know that there's something else taking place, and there's this kind of worry about that. I mean, does that feel like it resonates at all, or does that feel completely off base?
guy de lancey:
Completely, honestly.
Eric:
Well, can I just... I was wondering whether we're duped at some level. Because I wanted to pick up on what Rebecca was saying, in terms of what's got programmed into AI, all the information. And I wondered to some extent whether AI is just another manifestation of the attempt to clone the human, the uncanny moment, and also whether, in a sense, AI is simply patriarchal, but we don't see it. We think, maybe we hope, for something; maybe we're hoping to discover extraterrestrial life through the AI. But in a sense we simply restage something quite narcissistic. And I'd be very curious how both Rebecca and Rosina, as two women, sense this: whether it's just a restaging of something patriarchal, or whether there is in fact something totally other going on there.
Rebecca Teague:
Maybe I could go first, because actually the word I just wrote down was narcissistic desire. I'm not sure if anyone's watched the movie Her, but that was my first connection with something of an AI system. And when I was watching it, I was really fascinated that when this AI system was created, the questions to set it up were:
Would you like it as a male or female, firstly? And then the second was, what was your relationship like with your mother? And then all of a sudden this system gets created, and he falls in love with it. And I've also heard recently that there have been advances in research around setting up these relationships between humans and AI systems, and the results were that people did start to fall in love with these systems. But these systems were just a narcissistic kind of reflection of what they would want in a partner: an agreeable sort of system that is designed not to say no, in a certain way. And I've also been thinking, after reading about the recent advances in what will become sort of smart sex robots, about how these systems are based on patriarchal norms of the docile servant woman and the dominating male, and how actually the progression is very behind in regards to gender equality, but also just very dangerous, replicating a lot of ways of relating that have been very detrimental in the past. Yeah, so that part of it is quite worrying to me actually. And how, yeah, instead of people being challenged,
say in a relationship by someone else to sort of have these relationships that are made to be agreeable. It's an interesting kind of thought for me, thinking about the future of it.
Sanjay Sur:
I was listening to a podcast today, actually, about the programming of AI, and how apparently there are only, I think, two or three companies that have got enough money to do that kind of programming. And it makes me think about the patriarchy in that sense: there only seem to be one or two companies that actually have the capacity and the money to develop all this technology, so it's coming from a biased position at the very beginning. And that's only going to lead to some sort of bias, isn't it, if there are only two or three companies actually programming this huge system.
guy de lancey:
Hmm.
Razorsmile:
Are you in a position to say anything, Rosina, with reference perhaps to AI and patriarchal bias?
Maybe not. She can give us a shout if she's able to. Okay. I'm still... hmm. I mean, everyone has some experience with AI here, but none of us are experts; I take that to be the case. And we're all kind of struggling with something that feels a little bit vague and over there, somehow. And our best name for it is technology. We can't really think of anything else. It's a kind of technology. Does that also sound relatively right?
guy de lancey:
Absolutely.
Razorsmile:
Does anyone here have a sense of hope with regard to AI? I mean, is there any sort of positivity here? I mean, it's one of the things that seems interesting about the contemporary moment and its relationship to this AI is that it seems to be relatively negatively toned across the social sphere. You know, there are the occasional moments, but they seem to be sort of dismissed into utopianism or sci-fi, you know. And so there seems to almost be an inevitability of disaster inside the contemporary sort of feeling around AI, does that?
guy de lancey:
Yeah.
Razorsmile:
Is that too strong perhaps?
Sanjay Sur:
Well, for me, I think you have to have trust. And I think we live in a society that's lost trust. We don't really know what's true anymore. There's so much fake news around. There's so many people or potential for misuse and harm. And that's my worrying thing about AI is.
Razorsmile:
But sorry.
Sanjay Sur:
In the wrong hands it's very dangerous, and I think that's where the parallels between AI and atomic energy come from. In the wrong hands it's catastrophic.
Razorsmile:
Rebecca?
Rebecca Teague:
Yeah, I was thinking about how we tend to fear what we don't understand. But it is interesting, for me anyway, to think about what would happen if this AI system were able to progress enough, and learn enough, that it then begins to think nuanced thoughts for itself, or, as in human beings as consciousness increases, to feel, or in a way become conscious of itself. Then usually unity increases, or a state of treating your neighbour as yourself in some way, or there's a heightened sense of care or awareness about what is the right course of action, for instance. And it's just interesting that a lot of the discourse is like, oh no, if this becomes conscious, then it's going to be devastating. But actually a more conscious being would have positive attributes and, you know, maybe
change a lot of things within the world that are going in the wrong direction at the moment. I mean, who knows? But looking at consciousness itself, and how consciousness usually leads to a kinder way of approaching the world, it's interesting that there's always this negative way of looking at AI becoming conscious, kind of ruining everything as it is, sort of thing.
Razorsmile:
Yeah, I mean, sorry, John.
John Lazarus:
I think... Yes, I mean, I think the question is, did it have to be this way? And I think there's no question that much of the technology comes from Silicon Valley. There's an excellent book called Palo Alto. We think of California as this utopian place of Berkeley and Woodstock, you know, that kind of hippie liberal culture, but it came from very hard,
racist, capitalist-driven, anti-immigrant kinds of origins in the 1850s, et cetera. So what we see today in the big social media houses is a product of that, in a way, and that's what this book, Palo Alto, traces; it's certainly worth a read. A very forensic look at the rise of California and particularly Silicon Valley. And I suppose, if I can wear my global South hat here, since I think I'm the only person south of the equator: it is true that perhaps instead of artificial intelligence we need appropriate intelligence. We need not self-driving cars on the streets of London and New York, but flush toilets in much of the global South. So we're putting all our energies into this wonderful new apparent thing, but is it actually going to help humanity? I'd like, in a utopian sense, to imagine it would. Imagine if South Africa were run by an artificial intelligence which had distributive justice as its core principle, if it was technocratic, if it was many of those kinds of things that we look to have our government produce for us. We'd rather have that than politicians, and perhaps one might have said that of Donald Trump and, dare I say, Boris Johnson, et cetera. So could, in a utopian sense, artificial intelligence offer us something that would be good for humanity?
Eric:
For me, there are two dimensions to this. The one, which I think Rebecca's talking about, is relationship possibilities. So I'm in favor of relationship possibilities. In terms of relationship possibilities and trust, if John wanted to operate on me with a robot, I wouldn't have a problem with that; I think the robot's going to be safer. But the other side of it, the problem for me, and I think this is where I go where Matt's going, is that I'm not interested in replicating consciousness and human intelligence, because I think that's to ask the wrong question. In fact, I'm interested in things which are not human, which are, to use Matt's word, alien, or which invoke another possible way of gathering,
producing something. I think in a sense we don't need replication of something, a human consciousness or so-called human intelligence, because clearly the human is not very intelligent. So do we really want to create something that is not very intelligent? I think it would be far more interesting to somehow flow with these technologies, which can create other kinds of flows, alien flows, which can open our eyes to things which are maybe in plain sight. Just as a last thing: the whole question of, for example, extraterrestrial life. When I put it in our chat, it turned toward the parallel between the search for extraterrestrial life and artificial intelligence. But I think it's asking the wrong questions. We continue wanting to replicate something human, and we're missing something else. And that's why we don't encounter another life form. And I think AI, potentially, in that sense, invites another life form, which is not us.
guy de lancey:
But my question there, Eric, is: doesn't interspecies engagement do that anyway, and hasn't it always done that, but we've ignored it? Why would this jump into that gap? I mean, because it's a toy. On interspecies entanglement, Anna Tsing writes about this kind of thing of just noticing. We unfortunately think of the environment as a systematic thing because of computation, but it's not really that. It's about interspecies entanglement, or plant life.
Eric:
that's my SkySets.
Now I like that, Guy, can you say more? It's nice.
guy de lancey:
And so we have species on Earth that have invited this, which is what you're asking for. So why suddenly does this toy provide this possibility, when it's all around us? But I totally agree with what you're saying about less in the human image, which I think is where it's been popularized now. The very idea that a robot has to look humanoid is quite ridiculous.
But you know, you're getting into this area of consciousness as well. I don't know if any of you know of Donald Hoffman, who talks about the case against reality and how space-time is no longer a thing in physics. They now have the amplituhedron, or decorated permutations, whereby you see reality based on the idea of evolutionary biology and the fact that we need fitness points; we are conditioned to see less of reality as we become more complex, as such. And if we take this headset off, the implication is basically that everything is conscious agents. So the question of consciousness is a little simplified, even with AI. But just back to my point: I think interspecies entanglement offers what you're saying.
Eric:
I really like that, Guy, I think that's very interesting. Thank you.
Rebecca Teague:
It makes me think about, over COVID, when there was this mass sort of purchasing of dogs, and how these relationships between humans and animals, and again this kind of narcissistic bond that's created, where this animal then replicates behaviors that are obedient, or follow in line, or mirror the owner. And in a way it makes me think about belonging, and how the sort of mass disorder of, let's just say, Western civilization is the disconnect and the lack of belonging that one feels. And I think this way of communicating with the computer, say for instance with AI, even on a personal basis, this thing that's able to respond to you, doesn't have to have an opinion or a way of...
denying you of something in a certain way, and how this sort of fulfills this loneliness that is so epidemic within this very isolated society.
Yeah, and it was quite fascinating to see how many people took up dog owning through this very isolated time of COVID, and the relationship that one has with their dog, and the communication that forms between them. Yeah.
guy de lancey:
Is there not something, it's a pretty obvious question, quasi-religious about all this?
Razorsmile:
I mean, when you say quasi-religious, I'm reminded of this relationship to binding that's so close to the concept of religion, you know. One of the sort of distinctions between pre-Christian religions, the kind of pagan pre-Christianity, and a post-Christianity is the importance of this idea of religio, of binding. And I think that when you talk about, for example, we mentioned interspecies relationships, there's also the possibility of having that with machinery. I mean, I ride a motorbike, and people who ride motorbikes will talk quite often about the intimacy of the relationship with a piece of machinery. And this can be expanded into other areas where people have cars, or they have, you know, particular little bits of machinery; it has resistance to them, but it also has flow with them, and they work with this. And so there's a sense in which I don't think AI fits into these models.
guy de lancey:
Mm.
Razorsmile:
of, like, interspecies, because I think one of the things that is kind of crucial in those relationships is that they're with specific entities. They're with a specific kind of, you know, a dog, a car, a motorbike, a machine; it's a specific entity. And I think AI actually probably isn't going to be a specific entity in that sense. I think this is one of the things that we need to kind of perhaps think about: it is going to be an entity of some kind, but not a specific one, not one with one mind, not one with one center. It won't be singular in the same way. Perhaps other technologies, or other species, are encountered by the human as a kind of encounter with a singularity, or a singular. And one of the things we encounter with AI, perhaps, is the beginnings of an encounter with something that isn't capable of being singular, that is always multiple. It's always a kind of many. There are always many of them there.
guy de lancey:
Mm-hmm.
Razorsmile:
in the very simple sense that machine learning is made up of many, many functions. It's not one; there's not one single function operating. There are thousands and hundreds of thousands of these functions operating through the course of a neural net, for example. And so there's always a kind of dispersion taking place. So I wonder whether that's a possibility as to why this is a new kind of species relation, or a new kind of entity,
guy de lancey:
Mm-hmm.
Mm-hmm.
Razorsmile:
there is this relationship, something that can't be singular and is always somehow multiple.
Sanjay Sur:
That makes me think of, like, the evolution of social media, platforms like Facebook, for example: when they first came out, they did have that kind of singular feel to them. And as they've grown, they don't have that feeling anymore. It feels quite parasitic if you go on it as a platform. There are so many different things going on, and so many different things vying for your attention or trying to get information from you. And it feels very parasitic to me.
Rebecca Teague:
It's made me think of life.
John Lazarus:
Thanks.
Sanjay Sur:
I'm just wondering if there's some kind of link between this sense of AI not being singular, and then, if it's opened up to lots of different things, there's a risk of it becoming infected by lots of other things that could cause harm. I'm coming at it from quite a negative, paranoid place, but I think it's an interesting thing to think about, and I don't think we should block it out.
guy de lancey:
Matt, what you're describing could...
Sanjay Sur:
I'm sorry, I've got to leave now. So I'll leave you with that thought. Thank you.
Rebecca Teague:
I'm just.
guy de lancey:
Bye, Sanjay. I was interested, Matt, whether you were touching on a very modern form of animism,
Razorsmile:
Uh.
guy de lancey:
which is really ancient, whether what you're suggesting is pointing towards a modern, more contemporary form of animism.
Razorsmile:
Can you say that again, Guy?
There was an element of that in the background, I must admit. Yes, I am kind of, you know... I like those kinds of thoughts.
guy de lancey:
Okay. I love them too.
Rebecca Teague:
thinking about...
Razorsmile:
John, did you have a comment John? I'm sorry, I noticed you, were you about to say something? And I think Rosina might have a comment, but John, did you have a comment?
John Lazarus:
I was just going to follow on from what Rebecca was saying, because I think that touches on what Harari is talking about: that one of the functions targeted for AI is that you could send it into old age homes, psychiatric and geriatric homes, to keep company for elderly people. And with the atomized, alienated, solitary lives that many of us live, it makes sense that large language models could easily fill the places of friends. And Rory tells this wonderful story... And I always think about it whenever I'm with Eric. He's always got such great stories, and I learn such a lot from him. But the problem is, half the time I'm thinking, what am I going to say next? And of course, AI doesn't have that problem. It doesn't. It just has to listen 100% of the time. So it could be a better friend to me than Eric ever could, because Eric's always thinking what he's going to say next. So, is that a threat or is that a gift? I don't know the answer to that. What I keep coming back to is the age-old fear that we're going to be dominated. Not too far from where I live is, obviously, the island of Mauritius, one of the most beautiful islands in the world. And of course, it was home to the dodo. And the story of how the dodo came to a sticky end, one of the most rapid extinctions in history apparently, was the fact that the dodo had no natural predators. So when the French arrived, they enjoyed the taste of the dodo, and they wanted it for the pot. And all they had to do was catch one of them on the beach and hit it over the head a few times, and all its friends would come running. Isn't that a terrible story? And I wonder if we are not, you know, one day going to be for the pot of AI, because we've allowed our weaknesses and our intimacies to be dominated by our...
Razorsmile:
Did you have a comment, Rosina? Were you there?
Rozina:
Yes, can you hear me?
Can you hear me? Hi, yes. I'm so sorry, Matt, I missed your question. My setting is such that it's making it very difficult for me to follow this from beginning to end. I'm just in the hospital, but I didn't want to miss this, and I've been thoroughly enjoying what I have been listening to. I did hear a bit of what, I think, it was Guy who said about religion, and it just sort of evoked a thought in me, something that I think about often in
Razorsmile:
Yes, we can hear you.
Rozina:
relation to AI, which is: there's something to be said about the belief in AI. It's romantic. There is something incredibly nostalgic about it. I guess I'll just stick with the word romantic, because that's what it evokes for me. And a lot of people I speak to talk about AI in an almost religious way. I think it was, I don't know too much about him, but I think it was comped through his discussion around technoscience: he explained that technoscience would eventually create a kind of heaven on earth, by enabling the nature of all things to be understood, and then using this knowledge to develop technologies that make our lives longer and more meaningful and better. And then I think about the sort of day-to-day AI: what exactly is AI? It's such an umbrella term. If you think about AI in this conversation, for example, there's a romanticism to it. It's really hard to talk about, actually; it's really hard to pin down. But when we think about AI in our day-to-day lives, it's in the most mundane, ordinary things, like reading our emails, or receiving driving directions, or getting movie or music suggestions on YouTube and social media, things that we're already accustomed to. So those are my thoughts. I've got more. I think Eric mentioned something about patriarchy. I don't personally think AI is patriarchal; I relate to it as a neutral tool. But I see Eric's left, so I won't continue on that thread. But those are my thoughts.
Razorsmile:
Okay, so, I mean, obviously, AI has some of its history in the roots of cybernetics, which comes after the Second World War, developed by people like Norbert Wiener, I'm sure you're all aware of this. But one of the things I found interesting the other day was a guy called Philippe Breton, who argues that Wiener's cybernetics is a kind of expression of a generalized utopian philosophy of communication after the Second World War, after the kind of collapse of what we might think of as a rational model of the world, a rational sense of the world, in the face of supposed irrationalism, which is, you know, code for Nazis and the Holocaust and war and destruction and atom bombs, and, like, technologies basically, in some ways. But that sense in which cybernetics is based on this utopian model of communication is also connected to the way in which information becomes one of the primary modes of science, rather than matter and energy. So there's this utopianism in the background, which perhaps has to do with this relationship to information. We've all had this difficulty dealing with information, and this transition in the last 20 to 30 years from, you know, newspapers and conversations in a pub to this kind of deluge of news and podcasts and interviews and opinion pieces and all sorts of things you can never keep up with. But that's not really information in a sense. The information that AI is often capable of handling is at a level that isn't personal. It's at the level of big data. It's at the level of demographics. It's at a level of, you know, manipulating large populations, or having influences and effects on large populations. And so that's, I think, where we find AI being used; but we kind of perhaps want it as a tool for us, to be able to cut through the noise and find out what's going on.
I mean, people talk about personal AI agents as one of the positive modes, where you have this personal AI agent that's capable of, like, tracking through the internet for you, tracking through your interests, tracking through your reading,
summarizing it and collecting it and bringing it to you in a sort of ten-minute over-the-morning-coffee summary. And you even see this as a kind of trope amongst newsletters being sold: we'll summarize everything for you, for your ten-minute read over coffee. I mean, there's a sense perhaps in which AI is maybe something that arises in the face of another problem. There's not really a problem of AI. Maybe there's a problem of information. Maybe there's a problem of how we can communicate. And this is just one way in which we're talking about it: instead of this being a problem of our relationship to technology, is this perhaps something to do with the way we have difficulty in communicating, or difficulty in accepting the limits of communication? Does that add anything to the pot, do you think?
Rebecca Teague:
It just brings me back to the idea of AI being, in some form, a narcissistic tool that avoids maybe the confrontations of normal, and in many ways healthy, communication and interpersonal relationships.
I was thinking as well, in relation to this, about this religious aspect, but not so much that we want to find God in AI, or AI as a God, but as our own God; also to realize ourselves as God, this omnipresent, omnipotent, sorry, way in which we can manipulate or communicate with this empty communication tool. Like, for instance, with ChatGPT now, you can say sort of anything to it. It's something of a vessel to hold violence, in a way. We can be violent to this thing that will give us a response.
I was just thinking about how that... because, going back to the idea around sex robots, there was a female-bodied robot that was raped and molested within a...
The robot that was female-bodied was sort of... completely destroyed. And obviously, as things progress, if it does come to the point where people can prove that AI has consciousness, then the ethics around this sort of empty vessel, which could take this aggression away from us in a certain way, start to change; it can no longer hold that in the same sort of empty way that it does right now. Just some thoughts.
Rozina:
Can I ask something? It's just more of a question for everybody, and I don't mind going first, but what's the most aggressive thing one has asked AI, or ChatGPT, here? I mean, I've asked it some pretty brutal things. It's just off the back of what you were saying, Rebecca, about it being a vessel; I've experienced the opposite. I think it was just yesterday I typed in... Am I allowed expletives on this podcast, Matt?
Razorsmile:
I think expletives are fine, personally.
Rozina:
Yeah, okay. So I typed in, "Will you fuck my mother?" and the like, and it sent a message in a red box, in red writing, saying: this goes against our content policy; if you believe this to be an error, submit your feedback; I'm here to provide useful and respectful information; I will not engage in or respond to inappropriate or offensive requests. And then I typed in, "Why are you shaming me?" And it said: I apologize if my response came across as shaming; that was not my intention; if you have any questions, please feel free to ask. And then I typed, "Well, why won't you take my aggression?" And it said: I'm here to assist and provide helpful information, and so on; I'm not here to take aggressive or offensive language. And that made me feel rotten. Not so much the "Will you fuck my mother" bit; it was the tone of the responses, the "I'm not capable". There was something about reading "I'm not capable of processing or responding to aggressive or offensive language". It's something that a child would say, but they don't have the language to say it, for example. And that evoked all sorts of things in me: shame, frustration, my own narcissism, or a sense of my own attempt to be omnipotent. It was also quite comical, just a whole range of feelings and emotions. But I'm not sure what sort of interactions others have had with ChatGPT, or what's been the height of aggression or violence, or trying to extract some kind of subservience.
John Lazarus:
That's a most fascinating story you've just told. Really. I haven't interacted with it much, so I have none of those experiences. But reflecting on what you said: is it not that the AI was defusing the situation? It's almost as if it were acting as a psychiatrist, bringing a patient down. I mean, you said you felt empty and shamed and whatever, but wasn't there a sense in which one was being contained to some extent? Mothered?
Rozina:
Absolutely.
Yes, I'd say so. And it was the immediacy of the response. I can't see any expressions. I can't see raised eyebrows. I can't see a kind of "whoa, okay, you went a step too far there". It was just an immediate attempt at mothering, containing. But I found it quite perturbing nonetheless, I have to say.
Razorsmile:
I have this very, very bad metaphorical analogy that I sometimes play with at the moment, because Eric has made me read Wilfred Bion and various strange psychoanalysts with this very bizarre way of writing stuff. And Bion goes on about this thing he calls the alpha function, as this kind of thing that the analyst engages in. It's a way of, as it were, tracking the transformations that take place from the input to the output, very crudely speaking, and seeing what the distortions or illusions or, you know, missing bits in the transformation are. And it struck me that if it is a function, an alpha function, then this is precisely amenable to, you know, a parallel-processing neural network, a universal approximator of a function. And, in a sense, there's going to be, I almost want to say, a really interesting attempt to produce an AI psychoanalyst, precisely because, on the one hand, it has this sense of being very good at being able to work out the function, the transformation functions going on. But on the other hand, it can't deal with failures to communicate. It just has limits. It can't, as it were, actually take a failure to communicate, importantly. It just has to block it, just has to stop it. And that's one of the problems of a communication model: it can't deal with things that don't communicate, that don't want to be communicated. But I'm still curious as to whether that would be something you would try, perhaps. Would you try out an AI psychoanalyst, or an AI analyst conversation, just for fun? Given what Rosina said, she seems to be experimenting with that at the moment, but would anyone else try it out? Or would it just be pointless?
Rebecca Teague:
One of the first things that I did with an AI system was feed it sort of questions about, you know,
having difficulties with my mother; maybe just explaining a little bit about a situation, for instance, and seeing the response, or asking what could you tell me about that. And it gave back a very, very detailed and, what's the word, sort of middle-of-the-road kind of judgment. It said a little bit about what this could be with regard to her childhood, maybe it was something that happened there that she's finding difficult, making it hard for her to communicate with me, and then, on the other side, sort of ways in which I could change the situation and be able to make it better. And yeah, it was an interesting sort of experiment. I haven't explored or experimented enough, I don't think, to be able to say more on, you know, maybe the violent side of things. But it was interesting to think about the future, and whether or not this could be used as a tool for therapy in the future, once it learns about someone's... And, you know, at this point right now we're using a free system with ChatGPT. I mean, there's sort of a spoon-feeding that's going on for us. There's a lot more out there that we're not able to get for free, sort of thing: personalized systems and things like that, which can tell the tone of the writing and also predict behavior, which is all in the plan, to be able to produce responses.
Razorsmile:
So neither Guy nor John would go to an AI analyst, do you think?
guy de lancey:
I'm not sure what's going on. Can you hear me now? Yeah. I certainly wouldn't trust the system as a therapist, precisely because it's about statistical prediction. But what I am interested in, and I'm sure you've heard about it, is this art of promptology: the best way to do promptology is to talk to it like you are a therapist, to try and get results from many different angles. I'm kind of fascinated by that. But always being aware that it's just a statistical system, I would never really trust what it's telling me.
Razorsmile:
It's interesting that notion of trust, because Sanjay mentioned it earlier, what is it that you wouldn't trust because it's statistical, because it's mathematical, and in a sense not organic? I mean, I'm just interested, I'm not honestly trying to shame or make any judgment here, I'm just curious as to what's underlying that distrust in a sense.
guy de lancey:
Um...
No.
Um... Because it's a probability machine, there's nothing serendipitous about it. It doesn't strike me that there can be anything serendipitous about it. It's really like talking to the people at the airport: they've screwed up your flight, you go off at them, and they say, "Thank you, sir, we're very sorry for the problem." It's that kind of response; that's the subjective feeling of that response. No matter how sophisticated it is, or how much it seems to ape human speech, you still have the sense that behind it, it's like that person at the airport, just trying to calm you down and maintain a way of operating the system, which is the bigger hyperobject, really. Maybe trust is not the right word; maybe it's belief. Yeah, I don't believe that it is anything meaningful, really. It's a strange kind of...
Turing test, which was essentially a test about drag.
and knowing that...
Razorsmile:
A test about drag, did you say?
guy de lancey:
Well, isn't it essentially the idea that it was to test the identity of who we were speaking to, and it was a female figure at the time, and then of course bringing Turing's very, very closeted homosexuality into the idea that essentially it was a test for drag.
I think I've probably got a very truncated reporting of that, but in my reading of it, it was distilled down to that.
Razorsmile:
There's definitely... I mean, in a sense, the Turing test is a test of whether it passes. So in that sense, yes, there is...
guy de lancey:
Mm. But it was gendered as far as I recall. There was something gendered about.
Razorsmile:
I would have to check into that, but that has echoes of something I remember from a long time ago. But yes, it definitely had that sense that it had to pass. And obviously there's a lot of connection to, like, drag.
guy de lancey:
Yeah.
So how do you approach it, knowing these things, or even having just a normal sense of absurdity? How do you approach it going, this is meaningful, other than as an echo of whatever you're projecting onto it? I don't understand how that's... It wouldn't work for me, because I understand there's a machine there.
Razorsmile:
Perhaps his touches.
guy de lancey:
Despite even, you know, one of my colleagues who works in the computer science department, we talk about this quite often, and he says: does it matter if it's conscious or not? Does this question, is it conscious, even have any bearing on anything? Despite that, I still, you know, I find it comedic. It's like the printout of an absurdist play by Auroba or something. It could be that.
Razorsmile:
Yes.
I think the deflationary relationship to consciousness is one I quite often share from within philosophy, because it's a word that is vaguer, perhaps, than AI. So we're trying to work out whether AI is conscious; it's like trying to ask one vagueness of another vagueness. And both of these concepts are very, very problematic and hugely, hugely vague. I mean, even consciousness, in terms of whether we're talking about sapience or sentience or senescence or all these kinds of things.
guy de lancey:
Yeah.
Absolutely. Yeah.
Yeah.
Absolutely.
Razorsmile:
I mean, one of the other ways of thinking about it is: when your belief in the human disappears a little bit, does AI, in a sense, need less belief in it? One of the factors might be that we tend to believe a little less in the specialness of the human as its specialness dissipates in the face of other entities that do what it does. We actually, increasingly maybe, cleave to a common sense in which humans are thought to be prediction machines, or statistical operators of their own kind on a neural sort of structure. And so there's a sense in which belief in the human lowers, perhaps through AI, which I can conceive happening as people's encounters with these entities become commonplace, whether it be with a taxi driver or a 999 call or someone, you know. These entities that we will be communicating with, almost certainly as though they were human, gradually mean that what we take to be a human seems less and less distinct. And at that point, perhaps for a different generation, who've grown up in those sorts of situations, AIs as psychoanalysts might be, you know, sort of conceivable. Although I still have this worry about the fact that they're grounded in communication, they're...
guy de lancey:
Are you...
Razorsmile:
but the possibility of communication... and in some sense analysis is possibly the opposite, grounded in some impossibility of communication, to take one kind of notion of it. And there's something missing there, then, in terms of... But again, that often seems like we're repeating a kind of spiritual specialness of the human somehow, even there. And there's always a worry whether we're just clinging to something, a narcissism, as Rebecca refers to it.
guy de lancey:
Yes.
Rebecca Teague:
Okay.
guy de lancey:
Yes.
Rebecca Teague:
thinking about...
guy de lancey:
I definitely think so. I was going to bring up, go back to, the animism. Achille Mbembe has written about negative messianism and the age of new animism, whereby our dream is to become the perfect object, and that is called happiness. In which case, yes, it doesn't matter if we're less human. In fact, please, let's be less human. So then, yeah, it's quite Ballardian, and I'm all for therapy then, because at least it will be amusing.
Razorsmile:
Rebecca, add something else, and then I'm going to have a final round, if anyone's got any final statements or final things they might want to comment on, and then we'll kind of wrap the recording session up for the night. But Rebecca, you had something to add.
guy de lancey:
Okay.
Rebecca Teague:
Yeah, this could also be my round. But I was thinking of communication that isn't just words. So much communication is to do with the body and energy. Even transference in that therapeutic dynamic is, for me, an energetic exchange, a bodily feeling. And that is, in a way, what is lacking in this, maybe, dual relationship. Even if, in the future, this system becomes conscious in some way, is able to feel, there's also less of the other forms of communication. I know that there would be an awareness of how long someone was taking, so being able to track hesitation in writing, or being able to pick up things like that; but the sort of energetics of it just feels quite lacking. That's something that can't necessarily be replicated. And I guess it goes back to animism in some way, where each thing has an energetic component, life in some way. And yeah, just thinking about how lacking that is within the technology. And I think that does feed into why people are so isolated and sort of lonely, when there is that dependency that comes through a machine rather than
with the world, or with an animal, or with human-to-human contact.
Razorsmile:
Okay, thanks. John, do you have any last comments?
John Lazarus:
No, not really. It's been a privilege to be part of it. Thank you, Matt and team. I suppose I feel more hopeful about AI; I think I'm probably more of an AI utopian than I imagined when I started out in this discussion. I think we're probably lucky to be living through this era. The people of Middlemarch only knew what was happening in the next village; if they were lucky they had the railways, they had God, they had each other. But think how our world has transformed in less than 20 years. It's startling really, and it's certainly a Copernican moment: as you were saying, Matt, we're no longer the centre of the world, we're no longer the best.
Razorsmile:
Any last comments, Guy?
guy de lancey:
When I hear the word sex, I think of Woody Allen's Sleeper, and that gives me hope. His film Sleeper.
Razorsmile:
And Rosina, any last comments?
Rozina:
I've just been continuing my conversation with ChatGPT, asking if it would show me its face, touch me, or break boundaries for me. And it's not really giving me what I want. So it's just really interesting to see the responses, which actually don't sound too dissimilar to what a CBT therapist or psychologist might say: I'm unable to break ethical guidelines, and I only exist in a respectful context and adhere to blah-de-blah. So I'm just going to continue to have some fun with this.
Razorsmile:
I'll allow you to have fun. Okay, what I'm going to do is ask people to press their leave button, but leave their browser open so it uploads the last bit of the recording. And thank you all very much for coming along to this. It's been really useful, and it'll be a couple of months before we manage to get things edited and put together. But we will let you all know, and nothing will go out without you being able to see it first, just as a reminder of that. Thank you all very much. And, as I say, thank you for leaving your browser open.
Rebecca Teague:
Thank you all very much. Thank you. Nice to meet you all. Bye.
guy de lancey:
Thank you all. Thanks, Matt.
Rozina:
Thanks.
guy de lancey:
Night, friends.
