
Hotline on AI

About the event:

What data is being used to train ChatGPT? Is AI really coming for your job? And how does DALL·E know how to paint? The Hmm Hotline is here to answer all your questions, speculations, or anxieties regarding AI. Join us during our live radio show, where a panel of experts—from artists to philosophers to researchers—will answer your questions live. Can’t wait to share your worries, dreams, or contemplations about artificial intelligence? Then call our hotline and leave us an anonymous voice message during the weeks leading up to the live radio show. As waves of developments in artificial intelligence keep washing over us, it can be hard to keep up. And since AI technologies are often presented as opaque “black boxes” that generate what can feel like ‘magic’ results, they can leave us with many questions. That’s why The Hmm is organising this online radio show and call-in hotline. To demystify AI, a number of guests will answer your burning questions on this complex, contentious, and exciting topic.

Introduction

SJEF: You are listening to “Ask us anything hotline on AI advice”. Hello everyone. Good evening.

Ask us anything hotline on AI advice for you tonight. The Hmm is an organization committed to a better understanding of the internet and digital culture by creating accessible events, research, and educational programs. We are broadcasting to you from Pickup Club in the NDSM warehouse in Amsterdam, as we are always traveling around with our events to make each space that we visit our own. However, we have developed our very own scent together with Cesar Majorana. It’s called Hmmosphere; it smells like The Hmm and our interpretation of the internet, and we are spraying it around in this studio right now. So if you want to join our events from home, smell-wise, too, in addition to our livestream, you can buy Hmmosphere from the web shop on our website. Tonight’s radio show is part of our larger research into artificial intelligence that we’ve been doing this year. On our website, we have published a dossier on co-creation with AI, which features a number of articles and interviews. You can read these for free on our website too. Back in May, we did an evening program around that same topic, co-creation with AI. We did a prompt battle and had someone generate the slides for a lecture with Stable Diffusion, live on stage. You can watch that back on our website, too. For now, we have a radio show ahead of us featuring three guests, all experts on AI in their own ways. They will answer questions that you sent in via the hotline over the past weeks. If you haven’t sent in a question yet, or if you come up with one right now on the spot during the show, you can send it in via the chat on our livestream website. Hopefully, we’ll have time to get to them later.

Now, I would like to ask our guests to introduce themselves. Let’s start with Eva. You aren’t here in the studio with us, but luckily you were able to call in so we can have you on the show now.

EVA: Thank you so much for having me. I was so sad when I got COVID; I slept the entire day and just woke up to talk with you guys. To introduce myself, I am an artist and an illustrator, and I finished my studies in 2019. Since then, I have been a professional freelance creature designer and artist. I make stuff for galleries, but also for clients like Vaku Eurovision and other entertainment companies. Last year, I discovered that my work was in the dataset of AI image generators. Together with a few artists, we set up EGAIR, the European Guild for Artificial Intelligence Regulation, to fight for regulation and protection for creatives against artificial intelligence. Fortunately, that has gone extremely well, and we have connected with organizations worldwide to protect creatives everywhere. That’s why I’m here today.

SJEF: Thank you for joining us. Next, we have Ahnjili ZhuParris. Could you tell our listeners who you are?

AHNJILI: Sure. I’m Ahnjili. I’m currently based in The Hague. I just submitted my PhD thesis a few weeks ago; my research focused on how to develop clinical biomarkers using smartphone and wearable data. So basically, I would analyze remotely collected data, use AI to identify a few key characteristics, and then estimate how depressed you were. Since then, I got a new job as a machine learning engineer, and I work for a plastic surgeon. So I basically use AI to show people what they would look like post-surgery, which has been a great deal of fun. Lastly, I also work as an AI artist. I like to take typical AI algorithms or AI-driven surveillance technologies and repurpose them for very strange scenarios, just to highlight or question when, where, and why we should use these technologies. One project in particular that I’m working on is called Fashion Police Drones, in which we use AI to identify fashion criminals. Yep, that’s it for me. Exciting. Thanks.

SJEF: Our final advisor is Derek Lomas. Hi, Derek, could you tell the audience what you do?

DEREK: Sure. So I’m a professor at TU Delft. I recently received tenure, which means I can say whatever I want on the show. Looking forward (laughs). My title is professor of positive AI, and I’m in the department of human-centered design. My background’s in cognitive science and human-computer interaction. I have a company called Play Power Labs that makes AI-infused educational software, AI math tutors, and that sort of thing. And what does positive AI mean, exactly? Yeah, we get to define that ourselves. But essentially what we’re trying to do is take principles of positive psychology, the science of what makes people happy, what supports flourishing and long-term wellbeing, and make sure that that’s present in the AI systems that we’re building.

Chapter 1

SJEF: Well, let’s get started with our first question that was sent in by Sam. We’re going to listen to it now.

SAM: Hello, I’m Sam. I’m an application manager. I see a big hype around AI, and the companies that work with it are getting really high valuations. But for now, I think that AI is really in its beginning. In my view, the first jobs that will absorb a lot of artificial intelligence are jobs like lawyers or doctors: complicated jobs, but ones that involve a huge amount of data where AI could be able to get the best answer, and it’s easily checkable. For lawyers, AI could have all the laws, easily access them, and get a good answer out of them, just like coding with GPT is already quite useful: there’s just a lot of good data on it, and it’s easily checkable. I do notice, saying this, that when I say AI, I mostly think about GPT. But mostly my question is: are there already jobs that are getting easier with AI at the moment? What sort of jobs specifically will be the easiest to replace with the help of AI, and in what kind of time span will that change actually happen? Looking forward to an answer. Have a nice show.

SJEF: Thank you, Sam, for sending in that question. Being replaced by AI is maybe a very common fear. So, who wants to take a swing at it?

AHNJILI: I can go ahead. Personally, AI, more specifically generative AI, has made my life easier in terms of jobs, because I use it to debug my code and to summarize research articles. I see a lot of my colleagues doing the same. But I don’t think anyone is afraid of GPT or any other AI taking their job anytime soon, because people are finding new ways to co-work with or integrate these algorithms into their daily lives rather than using them to completely automate their work.

EVA: I think that generative AI in its current form would be able to take a few jobs away, especially in the artist fields. We can already see that entire art departments have been fired, or people have been fired and replaced with Midjourney, for example. But the quality of these generative AI models depends on what is in the dataset. If you remove all the copyrighted data from the datasets, you will be left with something a lot less impressive than we have right now. So what I’m estimating is that as soon as these models need to get permission from, and give compensation to, the people whose work generative AI is trained on, suddenly these models won’t be as impressive as they are right now. Also, these generative AI models take a lot of energy, and I don’t think it is actually profitable to keep them going. So right now a lot of people are very excited about AI, which I get, it is very interesting and incredible technology, but in the end, I don’t think it’ll take a lot of jobs away. I do think that a lot of employers will think that it will be able to replace people. But it can’t actually do that. You can ask GPT to generate a beautiful image, but if you ask for revisions, that is a lot more difficult to do. I have personally seen a lot of people being fired; for example, an anorexia helpline fired their entire staff and replaced them with GPT, and then GPT gave people with anorexia tips on how to eat less. So I think that a lot of people think AI can do a lot of things. But in the end, I think humans should be kept in the workforce, and AI should only be used for tasks like summarizing text or other mundane tasks, not to replace workers entirely.

DEREK: I might jump in and just say that current generations of AI are a lot more powerful than we thought they were gonna be three years ago. I mean, it’s just way better, and it’s moving way faster than we thought. But it still can’t do the last 10% of the job. And the last 10% of the job is more than half the work; if you do anything, the last 10% is really hard to do. So it can make a really incredible picture, as Eva says, but if you need it to do something specific, it’s bloody impossible. And it will get better. Right now I find myself doing work twice: first I try to do it with AI, and then I have to actually do it. I think that the job that it doesn’t replace but enhances the most is the intern. I think that interns with ChatGPT are this incredible superpower. And the thing is that you don’t wanna fire your interns because you have ChatGPT; you wanna hire as many interns as you can, because they’re unbelievable. I mean, they can do everything a lot better than they could two years ago. And all of the examples of people firing people to replace them with AI, so far as I know, it’s been a bad idea every single time. All of the examples Eva gave are really great examples of why you shouldn’t fire people and replace them with AI. It’s not a good idea. Instead, the people will become a lot more productive. But then again, this is just the very beginning of the AI apocalypse. So we’re still speculating; check back in five years. It’s gonna be really weird.

SJEF: Yeah. I think we’ll get to the AI apocalypse with a later question maybe, but so far the lawyers and doctors are safe, you think?

DEREK: Oh yeah, definitely. Yes.

EVA: Yes. Actually, there’s a lot of private data that should not be in ChatGPT or anywhere near it.

DEREK: Oh yes, I completely agree.

AHNJILI: And I wanted to bring up an example that relates to the energy efficiency point that Eva brought up as well. I think about five years ago, the state-of-the-art AI cancer detection algorithm that used mammograms to detect tumor malignancy was around 77% accurate, and a radiologist at the time was around 80% accurate. But they also found that if you train pigeons over the course of a few weeks, they would also achieve 80% accuracy. And not just that: if you train four pigeons and then use the majority vote, you could achieve around 98% accuracy. (I love this. That’s awesome.) It is so cool. Beautiful, actually, because maybe someday the algorithm will also achieve around 98% accuracy, but you need to feed it millions of images, and it’s completely not cost-efficient to train these models, whereas for pigeons, you just need to feed them. (A few crumbs of bread.) Yes, exactly. Just get pigeons. And you know, radiologists aren’t that energy efficient either, because they have to go to school for five to ten years in order to gain that level. So perhaps AI won’t be the only thing threatening radiologists’ jobs.
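[Editor’s note] Ahnjili’s flock of pigeons is essentially ensemble majority voting. As a rough, hypothetical sketch (assuming the voters are independent and equally accurate, which real pigeons and real models are not, and using an odd-sized flock so there are no ties), the binomial distribution gives the accuracy of the vote:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent voters,
    each correct with probability p, picks the right answer."""
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    # Sum the binomial probabilities of more than half being correct.
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

one_pigeon = 0.80
flock_of_five = majority_vote_accuracy(0.80, 5)  # ≈ 0.942
```

Under this toy model, five 80%-accurate voters reach about 94%; the even higher figure quoted in the show reportedly came from pooling the birds’ confidence scores rather than a strict majority vote.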

Chapter 2

SJEF: I think that’s a great image to go on to the next question with, which I think sort of ties into this one. Sam mentioned that when he says AI, he’s mostly thinking about GPT, and about the way these tools shape our thinking. I think that connects to our next question, which was sent in by Florian and which we can listen to now.

FLORIAN: Hi, I’m Florian. I’m from Sweden. I have the following question about AI. When we speak of AI, I get the feeling that it’s such a broad topic that it’s important to specify what type of AI, or what field of AI, we are discussing. When you talk about AI, there is a certain fear, or there’s a certain potential that people see. But in my view, it really depends on the context in which AI is used. If it’s used as a medical service, it can be very helpful in detecting diseases, for example, while in other fields, like the military, you can expect it to be more problematic. So how do you approach discussing AI, and how far do you specify what field you are discussing it in, in order to make a point?

AHNJILI: I worked in big pharma for a few years, and I noticed that when we applied for grants, we threw the term AI anywhere and everywhere. But then when it comes to research papers, we’re like, it’s not really AI; it’s actually just linear regression, or it’s just a random forest, or it’s just a GAN. So, yeah, I noticed that when it comes to wanting to reap the financial rewards, AI is a hype term, but when it comes to actually convincing other researchers or scientists of what you’re capable of doing, the term AI is never used, for the exact reason that it is just too broad and it actually doesn’t mean anything. And even within the medical field, or within the aforementioned fields like the military, it’s still quite hard to define what the reach or impact of these algorithms is. For example, I might propose a survival model for someone’s diabetes treatment. But that model was probably only trained on adults. So if I use the same model for children, I put those children at risk, because I haven’t actually accounted for their BMI, their growth rate, and so on. And so when I talk about AI, not only do I talk about the field I’m in, but also the population that I’m working with.

DEREK: AI was invented as a term for grant applications; that’s where it came from. We used to call it cybernetics, and I like cybernetics. First of all, it sounds cool, but it also doesn’t require things to be artificial. In a cybernetic system, you can have people as part of the system, because it’s about a feedback loop. The first cybernetic system discussed was by a Dutch guy back in the late 1500s. He was like, if we take over these islands and set up these canals, we can completely control the water level. And he was like, it will just work. And it didn’t work, but he tried it. So when I think about AI, I think about it in a systems way. I’m not so concerned with, oh, is this deep learning, is this a regression? No, no, no: let’s look at the whole system. The problem isn’t GPT on its own. When we were talking about GPT, like, oh, is it dangerous? No. The problem is when you take GPT and you plug it into Facebook’s recommendations, and then you plug that into the 2020 election, you see what I’m saying? It’s not that one algorithm is bad. It’s that when you put all of these algorithms together into this huge system, it’s like putting on the Ring of Power. Okay, you have the Ring of Power, great. But then you put it on, and now you’re invisible, and now you want to kill people. You see what I’m saying? So for me, the field thing is a little bit of a red herring, because we’re already in this big old cybernetic system. It’s a little bit too late to be like, okay, is this good, is this bad? We’re already here; let’s look at the whole thing. So for me, that’s what I’m thinking about.
I hated the way that the term AI was used, and now I find myself… Well, it’s really convenient for talking about what’s going on with ChatGPT, but I want to clarify that however we were talking about AI two years ago, it’s a really different thing now. And it’s actually a lot closer to the misconception that people had around it, and that is weird.

SJEF: And then, sort of… We haven’t discussed the term neural networks yet, right? I feel like when I hear people who go to study AI, they also talk about this. I think it’s gone a little out of fashion to use that term, but what does it mean? And how does it come into play in this?

EVA: Oh, sorry. Uh, I was a bit… can you repeat that question? Sorry.

SJEF: No, yeah, we were talking about the terminology, and I was wondering about neural networks. It’s also this term I hear thrown around, and sort of, how would you specify that? Is that artificial intelligence? Or is that a certain mode of…

AHNJILI: Yeah. I would say that it’s one of, let’s say, the baby steps within AI. Basically, within a neural network, you’ll have an algorithm like a linear regression, and then if you want to create a deep learning model, you stack multiple of these layers together to get your…
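[Editor’s note] Ahnjili’s description can be made concrete with a toy sketch (hypothetical, hand-picked numbers, no real training): each neuron computes a weighted sum of its inputs, much like linear regression, a nonlinearity is applied, and a “deep” model simply feeds the output of one layer into the next:

```python
def linear(weights, bias, x):
    # A single "neuron" is essentially linear regression over its inputs.
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def relu(z):
    # The nonlinearity is what lets stacked layers model more than a line.
    return max(0.0, z)

def layer(weight_matrix, biases, x):
    # One layer = several neurons applied to the same input vector.
    return [relu(linear(w, b, x)) for w, b in zip(weight_matrix, biases)]

# A "deep" model is just layers feeding into layers (toy fixed weights).
hidden = layer([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1], [2.0, 1.0])
output = linear([1.0, 1.0], 0.0, hidden)  # final linear read-out
```

In a real framework the weights would be learned from data rather than written by hand; the structure, linear steps separated by nonlinearities, is the same.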

SJEF: Yeah, deep learning, that’s also one of those terms. But now everything is AI.

AHNJILI: Yeah, exactly.

EVA: It has become a bit of a buzzword.

SJEF: Yeah. And it’s also just like this big umbrella term where like we throw everything in and then whether we talk about…

EVA: Yeah, when an organization says they are using AI, their stocks go oof! AI is a legitimate field of research; my best friend is doing AI neuroscience, and she’s very smart. But generative AI is completely not the same field; it’s completely something else. I think that the context is very important, because when I talk about AI, some people think I’m against all AI. I certainly am not. I think there are great uses for it in all different kinds of fields. But with art, for example, I don’t think it has a place in the way it is right now. It would be different if it had been trained on copyrighted work with the consent of the owners. So, context is very important.

SJEF: Okay. Do you have anything to add or should we go on?

AHNJILI: If I can ask a minor question of Derek, actually: I’ve noticed that a few companies are trying to move away from the term AI and towards terms that highlight, let’s say, the human connection. IBM, for example, loves to call their algorithms cognitive computing, and they have their robot Watson as well. So there are all these terms for basically renaming your AI with a human name; Alexa is also an example. How do you feel about this kind of humanization of these AI systems?

EVA: I hate it. Have you read the paper from Timnit Gebru, “Stochastic Parrots”? Sorry, I don’t know how to pronounce it, but I think it’s really dangerous to give it human characteristics, because it’s not human. For example, what we have noticed with generative AI companies like Stability AI is that they keep telling us that it learns like a human, it is like a human, it’s magic and we don’t know how it works. And by talking about AI in that way, they shove the responsibility onto the AI and not onto the people who built those systems. Well, now they’re getting sued left and right, so it’s not working properly. But it is not good to give something that is not human, that is not intelligent, human characteristics. Sorry, I cannot pronounce that word. I think it’s pretty dangerous.

AHNJILI: Yes. I agree.

DEREK: Yeah, things are really weird now, and I think they’re gonna keep getting weirder. So I think it’s important… None of these generative AI systems feel, but I think it’s appropriate to describe them as understanding. They understand certain concepts, and they don’t understand other concepts. If you’re having a conversation, even with a generative image system, sometimes they get it and sometimes they don’t, and that’s a human characteristic too. People don’t understand everything, and the way that you determine whether a person understands something is you test them. You can give them a test to find out whether they understand one concept or don’t understand another, and that’s the sort of thing that’s taking place right now with artificial intelligence. I suspect the field will be called machine psychology, but things are moving around right now. It’s useful to assume that they understand concepts. And I have some issues with the perception of it as a stochastic carrot, uh, not stochastic carrot, it’s a difficult word, Stochastic Parrot, and the notion that it’s only predicting the next token, etc. There are some things where that’s technically not quite entirely right, but these things really do operate at a conceptual level, and that’s weird.

EVA: But, for example, if you took away the entire dataset, would it still be able to understand concepts, according to you?

DEREK: Oh, definitely not. I mean, that’d be like raising a child with no input and like they wouldn’t even be…

EVA: Yeah. But a child inside a white room and a child in the world are not the same. A computer will be nothing, and a human will be human. It is not the same, and generative AI is only as impressive as the dataset. Without the dataset, it’s nothing. It cannot generate anything without human input.

AHNJILI: Yeah, just to play the devil’s advocate, I would say that a child without any input still doesn’t have any knowledge either. So in both cases, whether or not…

EVA: It will go insane.

AHNJILI: Yeah, exactly. But yeah, if you don’t feed the AI…

EVA: But a computer doesn’t go insane.

AHNJILI: Oh, no, but the computer doesn’t have a biological system that needs input, that requires input.

DEREK: I mean, humans aren’t just bodies. As organisms we use tools as well, and we are networks. We’re extended organisms, and it’s weird because we just evolved this… I like thinking of it as the exocortex: we have this whole new layer of our cognitive architecture. And we’ve been doing that for a while; we use computers as part of our cognition. I think the moral and ethical considerations of how you compensate artists are really important. Artists play a really big role in society, and artists also adapt in really interesting ways. There’s a nice aside I could give about early photography in the Netherlands, when they were using the camera obscura, and so on. Things like this have happened in the past with art, and this one is more shocking than the others because it completely violates our understanding of copyright. I totally agree with that. And I think that they are liable in a lot of interesting ways. They’re just gonna be minting billions and billions of dollars, and so they’re just gonna pay for all this stuff.

EVA: All the money of the creative industry. For example, at Midjourney there are only ten people working, and I have heard of countless artists who have lost 80% of their income. I, for example, haven’t had any jobs since last year, and that’s not normal. I haven’t had a job for, I think, ten months, and my fellow artists are in the same position.

DEREK: Like as an illustrator?

EVA: And it’s our work, yes, as an illustrator. It’s our work that is being used to replace us, and that is not okay. Actually, about copyright law: I’ve talked with a few people who know much more about this than me: lawyers, AI scientists, and people in the European Parliament. And they said we actually don’t have to change any copyright laws regarding this technology, because we already think this is unlawful. The AI Act that is currently being passed makes sure that companies need to open up their datasets. They said, we’re not gonna change anything in copyright law, but with the transparency rules in the AI Act, we can open up the datasets, and artists can look at them, see if their work is in there, and sue the companies. This is new technology, but that doesn’t mean new rules need to be put in place for it. Well, actually, a few rules do have to be put in place, but with copyright we’re actually kind of good. There are, I think, 200 lawsuits right now, and more are still coming. So I think that if AI companies had done it ethically in the first place, we could have had a really interesting conversation about how it could develop art, but now it all feels a little bit icky, in my opinion.

SJEF: Yeah, I think, we kind of drifted away from the question, which is fine.

EVA: Sorry!

SJEF: Actually, I think there were many touchpoints in this discussion that will come up again in the next questions, and we can discuss them further then. But also with the time in mind, I think it’s good to go on to the next question, which comes from an anonymous teacher.

Chapter 3

ANONYMOUS TEACHER: Hello, AI advice hotline. I am a teacher from Amsterdam, and I’m seeking guidance on how to inspire my students to understand that genuine works of art require patience, setbacks, and perseverance, in an era where AI tools are readily available. Or could it be that I am the one clinging to outdated notions and in need of adaptation?

SJEF: Yes. So which would be the case?

EVA: Can I take that one? It’s, uh, from the teacher, right?

SJEF: Yes.

EVA: Yes. Generative AI can do a lot of things, but it cannot create something new. It can only create what’s in the dataset, and art is so much more than just a pretty picture, just something pretty to look at. My favorite paintings, for example, are the paintings from artists where I know a little bit about their backstory, a little bit about their mental state, and a little bit about why they made the painting and what it did for them mentally. And I think that as soon as the current AI technologies, in the way they are now, are outlawed, if your students have only learned how to work with that kind of AI, they will not have grown as artists; they will have stayed stagnant. So I think the best advice you can give young students is to keep developing themselves and not let their creativity be dependent on companies. That is my answer.

SJEF: Anyone else on how to guide the students?

DEREK: I would take a totally opposite approach. Um, where I would encourage understanding of art as something that’s very different from contemporary art markets.

Um, and see the use of tools as being really central to the development of art historically. I completely agree with the concern around dependency, so I think it’s very important, from a creative perspective, that people are able to make use of many different approaches. I don’t think a person should be too reliant on any one medium, you know, especially when you’re teaching people; you’ve gotta be able to do things in lots of different ways. But art isn’t just pictures, and being able to generate pictures is not being able to generate art. I think computers can absolutely make up new things that are not just in the dataset, at least as much as people can, and their ability to do so is getting better and better. So I think it’s really important to stay on top of what people are able to do, and to see some of the trends that have taken place in the past. Photoshop disrupted a lot of jobs as well, but people adapted. And I think that understanding the role of effort in art is really crucial. I agree that something you didn’t put any effort into doesn’t have the same emotional value as something you put effort into. But I don’t think there’s anything fundamental that says you can’t put your soul into something that you used AI to help you make, because it’s not just gonna be the first generation. You’re gonna use it in some reflexive manner where you can demonstrate some aspect of your soul that you didn’t otherwise know about. So I don’t think it’s as black and white as: if you use AI, then you’re going to become dependent.

SJEF: Yeah, I want to get to you, Ahnjili, but since we’re going down this art-making route, maybe it’s fun to just play the next question, because I think it perfectly fits the discussion we’re having now.

ANONYMOUS CALLER: I have a question for Ahnjili, since you are a machine learning engineer and an AI artist: How do you see the relationship between artists and these new AI tools that might usurp the work artists do?

AHNJILI: Yes, actually, this question is perfect for now, because it also helps me answer the previous question. How do I frame this? Right now, if you do AI art, your art will be about AI. It’s really hard to do anything that doesn’t end up just showcasing the AI algorithm. For the AI art that I appreciate, I really enjoy it when the AI is more or less invisible. It’s not artwork that showcases, oh, this is what Midjourney can do now, or this is what Stable Diffusion can do now; it’s more about, oh, what happens when you combine these different elements, these different datasets, or what happens when you fine-tune your own algorithm. That makes it way more interesting. Also, just as an example, there was a 500k AI-generated painting from six or seven years ago, made from a bunch of images downloaded from Wikimedia, and if you were to look at that image today, it looks like shit. Honestly, there’s not a recognizable figure in any of those paintings; you’re just like, who bought this for 500k? But, uh, wait… I actually, so…

SJEF: Yeah, but also, you mentioned we’re talking about AI as a tool, right? And you said an artwork would be interesting if it were by someone making their own algorithm. Or, I would throw in, maybe people who try to subvert the systems we have, or try to get under the hood. I’m thinking of Trevor Paglen and Kate Crawford, who did these experiments with large language models and all that. You would go more in that direction than just generating an image, right?

AHNJILI: Yes, exactly. I think…

EVA: I think…

AHNJILI: Go ahead.

EVA: That is ethical AI. So for example, do you know the artist Anna Ridler? She created Mosaic Virus: she took pictures of like 10,000 tulips, made her own data set, and made her own art with that data set. And I think that is wonderful. And I don’t think that AI itself has a place… I think it has a place in the art world, but I think we should look at it ethically, so that it doesn’t take advantage of your fellow colleagues. But I think you can do incredible things with AI, and I think that, if done correctly, it can start wonderful conversations. Um, but with generative AI, I’m a bit hesitant.

AHNJILI: Oh, yeah. Uh, well, I’ll say corporate generative AI I’m a bit hesitant about, but if you’re able to build your own generative AI, then that would be ideal.

EVA: Yeah, exactly. Okay, we’re on the same page, that’s great.

AHNJILI: Um, but I would like to bring up, I guess it’s more of a question for the two speakers here, or even for the audience: how do you feel about these musicians who are now collaborating with AI to essentially license their voices to the public? So Holly Herndon is a good example, Grimes is a good example. I think last week, T-Pain and John Legend said that they were going to follow suit as well.

EVA: Did you see what Grimes said? Like, Grimes, uh, licensed her work and then she got mad. I think people were not giving her royalties or something like that.

AHNJILI: Uh, so I’m wondering in that case, who is to blame? Did she set up the right pipeline to do something like that?

EVA: If you are a creative person and you don’t mind licensing your voice or your art style to generative AI, that’s completely fine; it’s your work. You have a say in what to do with your work, but don’t force anybody else to join that. Like, if you wanna sell your voice, that is totally fine. And for example, Bruce Willis, I think he got Alzheimer’s disease or something like that.

DEREK: And quit acting right?

EVA: Yeah. He licensed the use of his face and his voice, so that after he dies, or cannot act anymore, his family can still collect royalties. I think that is completely fine, and I think we should just have very strict rules and regulations, and consent and compensation, around these technologies. And that can actually help artists in the long run.

But how it’s done right now is just take, take without asking consent and that is not good.

SJEF: I think also with T-Pain, that’s so great, because the aesthetic of his autotuned singing voice is already so computer-related. And with AI he can now merge with that machine even more. And it’s wonderful.

AHNJILI: Yeah. Yeah, exactly. Um, but yeah, which is also quite interesting, because T-Pain, I think, released an app a few years ago where you could use this autotune voice. And then I think…

SJEF: There was a voice filter or something?

AHNJILI: Yeah, exactly. But then there was like a whole like lawsuit between him and the company about like, you know, how to…

SJEF: Oh, because he was like sort of repurposing the autotune tool, essentially?

AHNJILI: Yeah, exactly.

SJEF: Interesting.

DEREK: A lot of lawyers are gonna make a lot of money over the next couple of years. And the ethics of this, they’re really weird. Like, it’s totally new territory, I think. And I guess the thing that I’m just open about is that it feels like a very natural progression. I mean, I know it’s artificial intelligence, but I feel like this was going to happen. It feels a little bit like the steam engine sort of thing.

SJEF: Like the printing press or something.

DEREK: Yeah, something just happened that we tapped into; it’s happened on other planets, right? There are a hundred billion galaxies out there. So, I mean, this is something that has emerged in a technological society, and the question is how we manage it in a competitive world. You know, there’s China, there are so many different players that have a stake in this now, and it’s gonna move really fast.

And being able to create vibrant, authentic artistic ecosystems, I think is the goal. I don’t think preserving copyright law per se, is the goal.

SJEF: I mean, maybe also with like the Disney stuff and sort of keeping copyrights on something that was made a hundred years ago and the author is already dead, maybe that sort of thing.

DEREK: I mean, I felt myself fighting against copyright law for a long time in the arts where it felt like it was all about the big players and now there’s this weird inversion. So I just, I want to keep like some....

SJEF: Or maybe also back to the T-Pain example, like let him like, uh, give everyone the auto-tune via an app instead of everyone like having to pay a big company that’s already rich for it, something like that.

DEREK: I mean, the key point for Eva, and just sort of the conversation, because, okay, there’s the Stable Diffusion stuff and there’s the image generation piece, but ChatGPT is entirely based on copyright violation. It just totally is, like totally.

EVA: Yeah, generative AI.

DEREK: Yeah, the whole thing it’s totally based on copyright violation.

EVA: It’s a complicated threat.

DEREK: Yeah. And so if that’s the case, either as a society you say, “Hey, that’s not allowed”. And you know, Italy did that for a little bit; it was like, oh, you can’t use ChatGPT. People are not going to be okay with that. And so you have this competitive world where you can’t turn off the AI thing, right? And at the same time it’s like, well, it broke all these laws, so what do you do? Well, you’re just gonna extract some massive flow of funding from these ultra-rich companies over, I mean, we don’t know. So, I mean, I think it’s really important to be advocating for this, but it’s not gonna go… I really don’t think it’s gonna go away. I mean, short of a Jihad, literally, like in Dune, right. I mean, Dune was a book about a Jihad against AI, and it was like the post-Jihad world where there were no more computers. But that’s essentially the optionality we have, which is getting in the streets and stopping this.

SJEF: Yeah, it’s sort of like the Silicon Valley machine is so big and so fast, for a sort of… Yeah, for the lawyers…

DEREK: It’s way past Silicon Valley.

EVA: There are people behind these models making decisions on how they work. Emad Mostaque, the CEO of Stability AI, said that he wants people to pay a subscription to use his models, a hundred euros per month for commercial use. And, um, people say that copyright is only for big corporations, but it is also there to protect small artists from corporations, so that somebody doesn’t take my work and put it on a billboard without my consent. Uh, also, copyright law is there to make sure that artists are not competing with their own work. So when the internet first came around… I was too young at that time, but a friend of mine told me he was an artist, and people took his work and put it on their websites. And people told him, “well, this is a new thing, there’s nothing you can do about it, you’re just gonna lie down, your artist job is done”. Well, within a few years, or less than a year I think, there were laws put in place that made sure that couldn’t happen. And I think the direction we are moving in is that AI systems trained without the consent of the makers will be treated as some kind of pirated content in the future. I think it will not go away entirely, but I think, uh, there will be laws put in place so that companies cannot use it as freely as they are right now, and there are gonna be fines if you use a dataset without the permission of the artists whose work is in there. That is the direction I see this going.

SJEF: All right. Well let’s um, yeah, maybe you’ve actually heard it, or maybe not, but we have a live audience with us in the studio as well. Uh, so before we go to the next question, I was wondering if there is anyone here in the studio who has a question for our panel. Yes, we have a, a mic over there. If you could speak your question into that then our listeners at home can hear you as well and we can continue the broadcast easily.

HAHAE: Yeah. Hi, my name is Hahae. Um, I guess there’s just this one thing I’ve been struggling with for a couple of years now, especially during the pandemic era, when a lot of universities were using Proctorio for proctoring exams and stuff like that. And, um, I actually brought it up at my own university, well, my old university, the University of Amsterdam. Um, that, you know, there could be some very problematic implications, especially for people of color, and they didn’t really listen. I got the student council to take it to court, and the court kind of just rejected that; they were like, we can’t do anything. Um, and then it was brought up again through the case of Robin Pocornie, perhaps y’all have heard of it. I’m sure y’all have. And, yeah, just like, how do we deal with these systems going into the future, when the models we already have are fundamentally racist in nature?

SJEF: Thank you for that question.

AHNJILI: Do you wanna go first?

DEREK: Yeah. I mean, the bias of AI today is that AI is biased. Like, if you ask ChatGPT about, you know, its concerns, it’s like: well, watch out, I’m biased. And I think it’s a reasonable thing to say that ChatGPT is less biased than people, like any random person; ChatGPT is less biased than them. And I personally find ChatGPT to be especially helpful in checking bias, if you ask it to. And methodologically, I think it’s not appropriate to put everything into the technology, where it’s like, we’re gonna make this perfectly unbiased technology, whatever that might be, but rather to use it with methods that allow us to check our own biases and think about the implications thereof. At the same time, it’s really good in English, and you can’t say that about all languages, so there’s a lot of work to be done to make the data set more inclusive. That brings up some of these same tensions, you know, that we’re talking about from a copyright perspective, because it’s like: should we go to all of these languages that aren’t well represented and take all the artists’ work from them and train these models, etc., or not? I think the ethics are a little bit complicated.

SJEF: But it’s also like making… You said the AI can be less biased, but it also begins with the training data, right? I’m thinking now of a tweet. Um, I can show it to you; I think my colleague maybe shared it in the chat as well. This was an image someone generated a while ago with DALL·E. They asked for an image of Homer Simpson, and then we see here Homer Simpson with a brown skin color, and he’s wearing a tag that says “ethnically ambiguous”. Apparently DALL·E is inserting these terms to make its results less biased, but is that an actual solution?

AHNJILI: Yeah, I think this is actually a very good approach, because… I mean, for people of color who are actually using these algorithms, it’s important to show these examples in public spaces, to make people aware of what the flaws of these algorithms are. Uh, so especially for me, I’m a great test for a lot of computer vision algorithms. I’m half Jamaican, half Chinese, and whenever I try these demographic recognition algorithms, I always come out as like an 18-year-old Asian male, which I don’t think I look like at all. But unfortunately, in terms of, yeah, how do you counteract these large-scale algorithms that are actually being used in applications? I actually still don’t have a good answer for that, because, for one, lots of money is involved; people have invested in these systems. And two, companies are always trying to show off how accurate their algorithms are, and if people aren’t there to say, “oh, no, actually they’re sugarcoating it”, there’s no way to argue against their algorithms. And I just wanted to make another little point, maybe a bit controversial: I think not improving the algorithms, and highlighting how inaccurate they are, especially for different populations of people, is actually a good thing, because then we have a great argument to say we shouldn’t implement these at all. Because a lot of companies are saying, “oh, you know, maybe our, let’s say, Amazon Rekognition algorithm doesn’t do great with certain demographics; now we have a good reason to collect even more data from people and then, you know, optimize our algorithms so we can sell them for more”. A good example of this is actually in China. So, you know, China was like, “Hey US, we hear your complaints about face recognition algorithms being inaccurate for certain populations. So we contacted the government in Zimbabwe and we said, can you just give us like a hundred thousand images of people from your country, and then we’ll basically make the best demographic recognition algorithm for people from Zimbabwe”. Oh, and I think in exchange Zimbabwe got like a free facial recognition system for their government CCTV cameras.

EVA: There are very specific rules on AI that make sure that kind of technology cannot happen in Europe, which is great, because that sounds scary. That’s not good. But to answer the question: there are a few very talented AI ethics people who have made a list of things we should implement to make sure that the models are ethical, or less biased. I do not have it at hand, but I recommend checking out Timnit Gebru. She’s amazing; she worked at Google, but she got fired because she was too ethical, I think. So, uh, check out her papers on how to make sure that models are less biased and more ethical. It is a difficult question.

DEREK: I think it’s surprising how ethical ChatGPT is; it was one of the very surprising things. Like, uh, its moral sensibility, and its ability to give nuance to things that are really tough topics. I mean, it’s pretty high-level. It’s also very good at non-violent communication. If you ever have an angry text message you wanna send to someone, just put it in ChatGPT.

SJEF: Oh, that’s a great tip.

DEREK: But, the idea that these things are more empathic, or can express empathy better than…

SJEF: Or know how to mimic it.

DEREK: Exactly right. I mean, they don’t feel it but they can express it in a way that’s better than people most of the time, especially in contentious situations. Um, and its ethical capabilities like its expressed ethical capabilities, it’s just, wow. It was really surprising, I didn’t anticipate that to happen so fast.

SJEF: All right. Well, I think it’s time to go on the next question, which was sent in by Malika.

Chapter 4

MALIKA: Hi, AI Hotline. This is Malika from Utrecht, and I have the following question. A recent forecast indicates that by 2026 as much as 90% of the content on the web may be generated by AI. Given this projection, what are your ideas on the implications for both the monetary value and the intrinsic qualitative value of human-generated data? Okay, bye.

SJEF: Yes. Nice. Does anyone have an idea how this is gonna play out?

EVA: Uh, I hope personally that there will be an AI-detecting tool in the future. Right now, the only way to detect AI is human verification. And usually people who are experts in their fields, like musicians, illustrators, writers, can detect AI-generated works pretty quickly. Uh, but we of course need a foolproof way to show whether something is AI-generated or not, and I hope that will be coming in the future. I saw a statistic that there have been more AI images generated this past year than there have been photographs in the entire history of photography, since the invention of the camera. The amount of slop that is being put out by these machines is incredible. And um, what I’ve noticed in the art community: in the Netherlands we are a little bit slower than America; America is, I think, four months ahead. And what I’ve noticed on, for example, Etsy or art-related websites, or Pinterest, is that people are fed up with AI images. Uh, and Etsy is absolutely overflowing because of the amount of content that is being pushed; if you post 10 images a day, you get higher in the algorithm, on Instagram as well. We humans are being pushed out; I have noticed that my traffic has gone down significantly since the arrival of AI. But fortunately, I have noticed that a lot more people are craving human-made content, more and more. Myself included: in the beginning I was pretty impressed by the AI-generated images, but now I think, if I’ve seen 10 AI-generated fantasy illustrations, I’ve seen them all. And I wanna see the weird, wacky stuff of humans, and I think most humans feel the same way. Um, so I think the value of human-made art will grow, and I hope there will be ways to detect and filter them out in the future, because my Pinterest is overflowing with AI art, and it’s terrible for reference. A collarbone is over here, and that’s not good if I’m looking for reference.

SJEF: Okay, now I want to drag it out of the arts field a little bit, maybe, because when I think of human-generated data that holds monetary value online, I think of, like, my behavioral data on Instagram, or my DM messages there, that are being monetized for micro-targeted ads, which I think are actually… Like, that data is pretty much overvalued, I would say. How do you feel about this type of human data? Like text that gets written.

AHNJILI: Uh, sorry, I’m just going to specifically answer that one point about overvalued data, because I basically analyze people’s smartphone and wearable data, and I realized a lot of that data might actually be undervalued. For example, I can learn so much about a person just by how they tap on their phone. If you’re scrolling faster or clicking faster, you know, maybe there’s a chance that you’re bipolar, and that type of activity could signal you’re manic. And when you become manic, you also become impulsive, and therefore buy more expensive things. So just by how fast you’re scrolling, I, or an advertiser, might be tempted to sell you a one-way ticket to Las Vegas, to spend all of your savings on a holiday somewhere. And so, uh, yeah, that really alarms me, because something so simple can actually be very valuable for people who are making money from people’s behaviors online. But to go back to the wider question about general content online, especially AI-generated content, I’m also gonna zoom in on a specific industry, which is, I guess, the porn industry, because deepfake porn is becoming a huge deal, or has been a huge deal for the last few years actually. A lot of women are getting their faces stolen and being essentially inserted into non-consensual porn. But it’s not just that, because now, maybe some of you are aware, there are deepfake nudes out there in the world. So, I mean, you could find websites or even Telegram channels where you basically upload a photo of someone who’s fully clothed, and then this website or a chatbot will send you a generated nude image of this person. And that’s like a whole new industry on its own.

EVA: Yeah, it’s a really big problem in high schools, which is terrifying. The most vulnerable people in our society are being taken advantage of. I’m very happy that laws are being put in place, because it’s horrifying. It’s absolutely…

AHNJILI: But do you know what kind of laws are being put into place?

EVA: Uh, in America they’re working on, I don’t know which state, but they’re working on making deepfakes illegal without the consent of the people whose faces are being used. And I think they’re also working on something in Europe. But with revenge porn there were also a few years when it was just legal. And seeing how… Like, deepfakes could previously only be made of actors and actresses, because there had to be a huge amount of images of a person, so the normal person was relatively safe. But, uh, right now you need, I think, 10 images of someone to create a deepfake, and the first people being targeted are, of course, women and children. The amount of awful stories coming out of this is horrific, but because the stories are so horrific, more laws are being put in place, and people are working on it, which is great.

SJEF: Yeah. Eva, you cannot see it, but we have another audience member who just walked up to the mic and wants to jump in.

Chapter 5

AUDIENCE MEMBER: Hi, this question is for you, Derek. You talk about AI and positive psychology, and I am curious to hear more about your work in relation to collaboration with AI, specifically in learning environments. Could you maybe share with us how this is implemented, and what the potential of such a learning environment is, both for children and adults?

DEREK: Yeah, thanks. Um, so I do think that one of the most exciting aspects of generative AI is in learning. I think it’s a really incredible learning tool. I’ve been working on generative AI in math instruction, which is exactly where generative AI is the worst, because it’s really bad at fifth-grade math. Like, you know, the whole collapse of OpenAI, you might have heard about it on the news, was because AI got so good at fifth-grade math. And that was the sign that, you know, Moloch is coming and we’re gonna have this apocalypse really soon. So, um, okay, off topic. I’m really excited about programming education with generative AI, because it’s something that is really hard for schools to offer, because it’s really hard to staff schools, period. And now it’s become possible for kids to have really meaningful programming experiences with a tutor that can answer all of their technical questions very quickly, and that’s something… It’s not just for kids; I mean, it’s for adults too, being able to upskill, like realizing that we can all program quantum computers. Like, you didn’t know that, but actually you can; you just haven’t had a good reason to, I guess. It’s those sorts of things where it’s not just at a superficial level, where you can generate something and then copy-paste, chuck it in. I mean, that’s a big part of programming, period, but now it becomes so much easier to interrogate things, and understand them, and have them explained at whatever level you’re at. That’s something that I think will have a really positive effect across the board, and will help teachers focus more on developing student interests, as opposed to sort of a knowledge transfer. I’m really hoping that we shift some curricular goals that are currently there to accommodate what kids and adults can learn.

SJEF: Interesting. Do you want to reply, or should we go to the next question?

AHNJILI: Sure. I’ll just add a little anecdote as well. I’m American, and I’m learning Dutch. Some of you might know that ChatGPT has a speech-to-text and also a text-to-speech function, so essentially I can have an audible conversation with ChatGPT, and it can speak Dutch. So I have Dutch conversations with ChatGPT. I mean, I do go to Dutch classes, I have a Dutch tutor, and I will never replace them, because my goal is to have a human conversation in Dutch. But when I want to fine-tune certain things, or have a more personalized experience where I don’t feel awkward about asking, I dunno, random questions, I feel very comfortable using ChatGPT, and it is a great learning tool in that sense for me.

SJEF: That’s nice, and I think the subject of translation sort of creates a way into our next question.

Chapter 6

HURVER: Hello, my name is Hurver. When the term “artificial intelligence” is discussed, it’s often about the word “intelligence”. What exactly does that mean? I recently read a pre-publication of an essay stating that the Chinese term for artificial intelligence, if you translate it literally, is “human-made intelligence”. So it gives people much more agency. To what extent do you think our fear of AI is caused by us naming it as if it were something external to ourselves?

SJEF: Yeah, semantics question. Who wants to take it?

AHNJILI: Wait, uh, Eva, do you know what the question is?

EVA: Uh, yeah, I was just listening to it. Uh, was it about intelligence? Sorry, just checking. Um, I think that Chinese translation is actually way better than what we have right now, because generative AI is not intelligent; it’s just mimicking humans. And I think it is very important for people to understand that it’s not human. It’s also a good reminder that we don’t understand the human brain yet, so how can we build a human brain? We can’t. So, um, it’s very good at mimicking humans and sounding intelligent, but it’s not actually intelligent, and I think that is very important for people to remember.

SJEF: I guess we also kind of had this discussion about terminology earlier already. But to Hurver’s question: to what extent do you think the term “artificial intelligence” shapes our perception of it as something external?

AHNJILI: Even now, when you Google “Artificial Intelligence”, you’re just going to see like blue numbers or white robots as well.

SJEF: Yeah, you go like from I, Robot, or the “All Is Full of Love” video, like those robots.

EVA: Holograms.

AHNJILI: Yeah, exactly. So I feel… Yeah. I also don’t… Like, if you were to ask me to visually represent what AI means to me, I don’t know, it’s just a computer screen, essentially. So I don’t know what an appropriate term would be for what it actually encompasses, but “stochastic parrot”, I think, is also insufficient now. Because, if anyone is aware of how transformers work, I mean, the algorithm that underlies ChatGPT: for me, it looks like a multi-headed dragon, basically, with each head…

EVA: Multi-head dragon is a good term. It would be lovely to draw that.

AHNJILI: Yeah, definitely. Yeah, especially because, yeah, I use the term multi-headed dragon because transformers are driven by attention. One of the most important AI papers is called “Attention Is All You Need”, and each attention head focuses on a different aspect of language. So one head focuses on punctuation, another focuses on syntax, another will focus on god knows what. But I don’t know if a multi-headed dragon is enough to encapsulate, or to translate, what AI is actually capable of doing.
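[The multi-head attention Ahnjili describes can be sketched minimally in Python with NumPy. This is a toy illustration only: it uses identity projections for the queries, keys, and values, whereas real transformers learn separate projection matrices per head. Each "dragon head" computes its own attention weights over the token sequence, and the heads' outputs are concatenated.]

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, n_heads):
    """Toy multi-head self-attention over x of shape (seq_len, d_model).

    Each head sees a different slice of the embedding (a stand-in for
    the learned per-head projections in a real transformer)."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for h in range(n_heads):
        q = k = v = x[:, h * d_head:(h + 1) * d_head]
        scores = q @ k.T / np.sqrt(d_head)   # token-to-token similarity
        weights = softmax(scores, axis=-1)   # each row sums to 1
        heads.append(weights @ v)            # weighted mix of values
    return np.concatenate(heads, axis=-1)    # heads re-joined

x = np.random.randn(4, 8)                    # 4 tokens, 8-dim embeddings
out = multi_head_attention(x, n_heads=2)
print(out.shape)                             # (4, 8): same shape as input
```

In a trained model, each head's projection matrices push it to attend to different patterns, which is exactly the "one head per aspect of language" intuition above.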

SJEF: It’s also like, any metaphor will be insufficient at some point.

EVA: So just the computer with a dragon drawn on it. That’s about it.

DEREK: So one of my favorite places in Amsterdam is the Embassy of the Free Mind, which is a library that collects all of these incredible old books from the Renaissance and early modern period. And there’s a lot of esoteric magical work there, where people were really obsessed, I think in a really interesting and appropriate way, with the nature of intelligence, the nature of consciousness, the nature of being. And I like that as the sort of philosophical provocation of our age, and I think getting us to think more about the nature of intelligence is a good thing. Sometimes they say, “oh, well, you can’t really define intelligence”, but there’s a really great paper where they collect maybe like 120 different definitions of intelligence, and they really do converge in a certain way, which is…

SJEF: But then still that’s a lot of definitions.

DEREK: There’s a lot of definitions, and some of them really focus on the sort of human characteristics, like being able to problem solve and being able to pay attention or… But a lot of it is just about being able to successfully accomplish goals, and the ability to have intentions and fulfill them. That’s really at the core of how intelligence is defined in psychology. It’s also, interestingly, very much at the core of how magic was defined as well, like being able to have an intention and manifest it. I think that it is sort of a weirdly magical age that we live in and I like leaving a little bit of room for that in our rationalist paradigm because these technologies are… Uh, like Sam Altman, he tweeted “the only explanation for this is divine Providence”, and that was a joke. That was a joke. It was a reference to a paper that made the same claim, because they just don’t know why this stuff works so well.

EVA: They do though. They do. They do know how it works. There are humans behind this technology without humans, you couldn’t do anything. It’s not magic. It’s not, it is pretty cool and it’s incredible technology but they do know how it works.

AHNJILI: I would like to play devil’s advocate on that. Uh, just a small comment: like, you know, we still can’t fully explain how aspirin works, but humans created it.

EVA: Yeah, that’s true.

AHNJILI: Yeah. And there are also other aspects of human technologies in which they just work, but we haven’t sufficiently been able to explain why they work.

EVA: I think what bothers me with that explanation, that it’s magic and that it’s a God, etc, is that it’s being used to blame someone else, it’s not being put on the companies that are doing this. It’s a way to deflect blame and I think that is why that terminology bothers me. And well, I have been affected and many, many, many, many more artists and creatives, and I think it is interesting to think about it in a psychological sense, but I think these AI companies and Sam Altman and all these people use these kind of terminology to deflect blame. And I think that is why it bothers me if people talk about it that way.

DEREK: It’s hard to credit them also, though. Like, I feel like they don’t entirely deserve the credit for what’s happened. They’ve been behind it, but these are things that are beyond just design, I think; like, no one knew it was gonna be this powerful. They started doing some of these scaling-law studies. I mean, GPT-2 was funny; I liked using GPT-2 because…

EVA: DALL·E was also funny.

DEREK: Yeah. I mean, DALL·E was funny, and then it got uncannily good.

EVA: I think why it got so good is because before, they didn’t use copyrighted data. They trained on public-domain images, royalty-free images. Uh, they didn’t have these incredibly big data sets, and then LAION-5B came around, a huge dataset filled with copyrighted data, and suddenly it was that good. But personally, I was not expecting that they would be allowed to get away with it, and fortunately they are not getting away with it, but I think the reason it’s so incredible is the input. I think most people didn’t expect this to be legal, or that companies would even try to do this. Um, well, they’re not getting away with it, but still, I think that is… Personally, that is the reason why I didn’t expect they would get to this level. Yeah.

AHNJILI: Well, I’ll just add: yes, the models did get better because of the data set, but if I were to give everyone here the same ChatGPT dataset, they wouldn’t be able to recreate what ChatGPT or DALL·E does now. They were also able to achieve their status today because they got a bunch of graphics cards, and so, yeah, Nvidia…

EVA: Yeah, that’s true.

AHNJILI: Yeah. Nvidia, as a company, is, you know, the real kingmaker in this industry, because if they give you their graphics cards, then, yeah, you’re on your way to becoming the next…

EVA: It takes a lot of energy. Yeah, that’s true.

AHNJILI: Yeah, exactly. And so if you don't get the Nvidia investment, you're basically screwed.

EVA: Yeah. It's the dataset. It's the energy. It's the power and investors' money. Yeah.

SJEF: All right. Well, in the meantime, we had a question from our online chat, where someone asks… Let's see… "Scientists working in AI are ringing the alarm that AI could go rogue and could irreversibly take control. What are your thoughts about this?" But actually, before we go there: our last prerecorded question fits perfectly with this question from one of our online listeners. So, yeah, let's go to the apocalypse and listen to our last prerecorded question.

Chapter 7

ANONYMOUS CALLER: Hey hotline. I'm calling from my kitchen table in the north of Amsterdam. Um, last week, there was an article in De Groene Amsterdammer about the X-risk movement, people who worry about the AI apocalypse, the risk of extinction by AI. And it described how the dangers of superintelligence also provide a useful frame for tech companies to shift attention from current problems to potential problems in the future, in order to avoid regulation. What do you think are current problems that we should not lose sight of? Thank you.

SJEF: Yes.

DEREK: I mean, one of the things that I think is a current problem that we need more of an understanding of is when we feel our sense of purpose being eroded. People can get really down about how powerful or effective it is, and… on some days it's like: this piece of shit doesn't really work that well, but then on other days you're like: wow, this is really happening very fast, and who am I if I'm not this really smart person that's able to make all this stuff happen? If everyone can do this, where am I? And I don't think we really know how to deal with that one, because it's weird.

SJEF: Yeah. Like you're finding your place in a new landscape, essentially, or sort of having to deal with this overwhelming new situation that we're in, or going to be in.

AHNJILI: Yeah, and I like the fact that you're using the term weird; I also hear a lot of people in the tech industry say creepy. And I think these are safe words to use, because they don't have any legal or ethical implications to them.

SJEF: Uncanny.

AHNJILI: Oh yeah, exactly. The uncanny valley. Yeah, and I think that's also intentional, because if you start saying "oh, is this legal, or is this borderline unethical?" then you get into some deep waters.

SJEF: Yeah, like if the terminology becomes too negative, some investors might get cold feet, or some lawyers will prick up their ears.

AHNJILI: Yes, exactly.

EVA: Yeah. I was at an event in Chicago, where I was giving a talk about AI and ethics, and somebody asked me for how much money I would put my work into the dataset. And I said I would not do that. Uh, and afterwards somebody was very mad at me and said, "Never say never, and you shouldn't say those things, because there are stockholders' interests, and they're in this room, and you should always keep a door open." I was like, no, I don't wanna do that. Not only as a monetary matter, but also a personal one: I don't want my work in a machine without my consent. Um, but it is exactly what you said: legal and ethical are the kind of words that they wanna avoid. But, uh, well, if you break laws and it is unethical, those are not words you can avoid. Also, um, I don't know if this is a good moment, but if there are creative people out there, artists that want to protect their art style, I recommend using Glaze. That is a tool that can protect your style against AI image generators. And sometime next week, Nightshade is gonna be released. Nightshade is a tool that, if you put it on your artwork and somebody puts that artwork in a dataset without your consent, will poison the dataset. So it'll protect your work if somebody takes it without your consent, and I wanna make people aware of that, because a lot of creative people don't want their work in there. So these are ways that artists can protect themselves from AI mimicry.

AHNJILI: I'll just add one more small tip as well. If you have a personal website: every website has a robots.txt. OpenAI has offered two lines of code that you can just copy and paste into your website's robots.txt, and they won't scrape your website.
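[Editor's note: the two lines in question, as documented by OpenAI for its GPTBot crawler, look like the following. Bot names and policies may change, so check OpenAI's current documentation before relying on this.]

```text
# Add to your site's robots.txt to opt out of OpenAI's GPTBot crawler
User-agent: GPTBot
Disallow: /
```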

SJEF: How nice of them.

AHNJILI: Yeah. So nice.

EVA: With Squarespace, if you go to the settings, you can also stop AI crawlers. They're allowed automatically, so if you have a Squarespace website, please turn that off. And Glaze: if you're an artist, Glaze protects your images and you will be relatively safe. Good tips.

AHNJILI: Back to the question: current AI apocalypse problems…

SJEF: Yeah. The question was that this sort of apocalypse narrative is distracting from current problems. Like, what current problems can you come up with that we haven't mentioned yet? I think we've had many examples already.

AHNJILI: This is a specific example, but I'm worried about people not being able to access services because they can't get past the AI checkpoint, for example.

SJEF: Like when people get replaced by....

EVA: Therapy and stuff.

AHNJILI: Oh, actually, this is more specific. So, for example, at Hong Kong International Airport, once you go through border control, you basically don't have to take out your passport anymore. Everything is facial recognition. So let's say, I don't know, uh, you need to go from point A to point B, but for some reason they won't scan your passport anymore, and the facial recognition is also not working. Now you're essentially locked out, because you don't conform to the system.

SJEF: Right, so then if there's no service person to help you, you'll miss your flight. You'll be locked forever at the Hong Kong airport.

AHNJILI: Yes, exactly. Terminal party.

EVA: But it's also good to keep in mind the people who train these models, who have to filter certain things out of ChatGPT, like certain phrases or certain images that are not allowed. In Kenya, there was a recent article about the people who trained ChatGPT: the work is incredibly mentally taxing, and they are getting paid next to nothing for it. We have to keep in mind the people behind the training of those models, who make sure that the training data is not toxic or weird. I think they recently unionized, so that is great, but we have to keep in mind the people who make these systems work the way they do and make them appear ethical, because the kind of images these generative AIs can produce can be very taxing on you mentally, and there should be services put in place to take care of those people.

SJEF: Yeah, I think it's also very comparable to the people working for Facebook reviewing reported content, right? It's sort of the same situation.

EVA: Yeah, exactly.

DEREK: I mean, I would say that one of the big challenges is just really powerful companies, you know. I mean, it's gonna be like it has been a challenge, except they're gonna be even bigger and more powerful. So, Microsoft was the boogeyman for much of my growing-up years; it was like "ah, big monopoly". And then they haven't been so much of a big monopoly… But boy, going forward, I'm telling you, they're gonna get really powerful, now that they've got OpenAI in their pocket. It's gonna be really intense. Um, and so all of these big companies are gonna be competing with each other, and you're gonna see a lot of companies producing these high-level foundation models; anyone who has the capability to invest in it is going to be doing it. And that's this arms race, right? So you've got this arms race taking place where you're trying to create the most powerful AI model; we're living in that right now. And this has been one of the main concerns from the people worried about this AI apocalypse. My philosophy professor in college was this guy, Nick Bostrom, who's famous for writing a really good article about the likelihood that we're living in a simulation, first of all, and also for the concern that we might all get turned into paperclips by an AI that's trying to maximize paperclips. You know, we're not gonna get turned into paperclips, but having runaway corporate objectives and profit objectives, that's a real concern.

SJEF: I mean, those have proven to go against human interests already.

DEREK: At times they can, exactly. And, um, at the same time, we will see things that are really surprising. Like my current bet, I made this six months ago so I've got a year and a half left: we will have Star Wars-level robots in a year and a half. The large language models are gonna transfer over to robotics, and we are gonna see things being able to manipulate objects and make peanut butter and jelly sandwiches and do things that a few years ago seemed like they were decades off.

SJEF: Sort of Boston Dynamics on steroids.

DEREK: Yeah, with weird… Yeah. And, um, you know, whether that comes into play in that radical of a timeframe or not, who knows, but I think it's gonna happen pretty fast, and we're going to get capabilities from that. We're gonna be able to do a lot of things. Will we have new capabilities for dealing with climate change? Yeah, probably; we're probably gonna be able to figure out some really hard problems. We're gonna be going to the moon again soon. We're gonna be doing all kinds of different things. The level of ambition will go up. I think the speed of society is gonna get totally exhausting. I think it's going to make a bunch of people go crazy, because things are gonna change so fast. The accelerating line is just gonna be going up, and that's more near-term than getting turned into a paperclip, but also concerning.

SJEF: One final note. Yeah, go ahead.

AHNJILI: Uh, for another current AI problem, just to move away from the computer and think about the physical elements of AI: obviously, to collect data, store data, and train these algorithms, you need these large data centers, and these data centers consume a lot of water. And recently, in a few states in the US, and also in a few South American countries, like, I think, Honduras, for example… You know, Google, Amazon, and Meta have had these contracts saying "we need X amount of water per year to keep our data centers alive", but a lot of these areas have also had droughts. So these areas have had their governments decide: are we going to honor our contracts, or are we going to provide water for our civilians? And yeah, if they don't honor the contracts, there's huge financial backlash, or implications, for these areas. So I think that is also a current AI-apocalyptic problem that we have to deal with.

SJEF: Yeah, no, that's a great point. Well, Eva, Ahnjili, and Derek, thank you so much for all your insights, answers, and discussions of all our listener questions. This marks the end of our radio show, so I have a few closing words. I would like to thank my Hmm colleagues who helped prepare this event: Lillian Stolk, Margarita Osipian, Eva van Boxel, Guus Hoeberechts, Leanna Wijnsma, and Marco Wessel. I would like to thank Karl Moubarak for creating our hotline website, where all the recorded audio messages came from, our additional radio staff for this event, Augustina Woodgate, Andrea Gonzalez, and Monty Mouw, and Pickup Club for hosting our studio tonight. A big thanks to the AFK and the Creative Industries Fund as well for generously supporting this event. We are going on a little leave for a while with our events, but we will be back next year. To stay up to date on all things Hmm-related, you can sign up for our newsletter via our website, thehmm.nl. You can also follow us on Instagram at the.hmm. Via our website as well, you can subscribe to our Telegram channel, where every other Saturday we share a selection of articles we've been reading over the past two weeks. And you can join our Discord server to share your thoughts or findings on internet and digital culture with us and get in touch with our community. Then, at last, I would like to ask you, the listener, to please fill out a short survey that we made to help us reflect on this event; it will be shared in our live chat right now. It won't take more than a couple of minutes, and it helps us tremendously to improve our events. And to all the audience here on site in our studio: you can also go to our livestream website, live.thehmm.nl, tomorrow to listen back to this event, read back the chat, and see all the links that were shared.
There's even a special button that summarizes all URLs that were sent in the chat, where you can find all these extra references that enrich this radio show. Yes, that's it. We'll be back next year. Thanks for tuning in, and have a good night.
