-
from: Hotline_on_AI (pad)
EVA: Thank you so much for having me. I was so sad when I got COVID; I slept the entire day and just woke up to do this talk with you guys. To introduce myself, I am an artist and an illustrator, and I finished my studies in 2019. Since then, I have been a professional freelance creature designer and artist. I make work for galleries, but also for clients like Vaku Eurovision and other entertainment companies. Last year, I discovered that my work was in the dataset used by AI image generators. Together with a few artists, we set up EGAIR, the European Guild for Artificial Intelligence Regulation, to fight for regulations and protections for creatives against artificial intelligence. Fortunately, that has gone extremely well, and we have connected with organizations worldwide to protect creatives everywhere. That’s why I’m here today.
-
from: Hotline_on_AI (pad)
AHNJILI: Sure. I’m Ahnjili. I’m currently based in The Hague. I just submitted my PhD thesis a few weeks ago; my research focused on how to develop clinical biomarkers using smartphone and wearable data. So basically, I would analyze remotely collected data, use AI to identify a few key characteristics, and then estimate how depressed you were. Since then, I got a new job as a machine learning engineer, and I work for a plastic surgeon. So I basically use AI to show people what they would look like post-surgery, which has been a great deal of fun. My last chapter: I also work as an AI artist. I like to take typical AI algorithms or AI-driven surveillance technologies and repurpose them for very strange scenarios, just to highlight or question when, where, and why we should use these technologies. One project in particular that I’m working on is called Fashion Police Drones, in which we use AI to identify fashion criminals. Yep, that’s it for me. Exciting. Thanks.
-
from: Hotline_on_AI (pad)
DEREK: Sure. So I’m a professor at TU Delft. I recently received tenure, which means I can say whatever I want on the show. Looking forward to it (laugh). My title is professor of positive AI, and I’m in the department of human-centered design. My background is in cognitive science and human-computer interaction. I have a company called Play Power Labs that makes AI-infused educational software, AI math tutors, and that sort of thing. And what does positive AI mean? Exactly. Yeah, we get to define that ourselves. But essentially what we’re trying to do is take principles of positive psychology, the science of what makes people happy and what supports flourishing and long-term wellbeing, and make sure that that’s present in the AI systems we’re building.
-
from: Hotline_on_AI (pad)
AHNJILI: I can go ahead. Personally, AI, more specifically generative AI, has made my life easier in terms of work, because I use it to debug my code and to summarize research articles. I see a lot of my colleagues doing the same. But I don’t think anyone is afraid of GPT or any other AI replacing their job anytime soon, because people are finding new ways to co-work with these algorithms or integrate them into their daily lives, rather than using them to completely automate their work.
-
from: Hotline_on_AI (pad)
EVA: I think that general AI in its current form would be able to take a few jobs away, especially in artistic fields. We can already see that entire art departments have been fired, or people have been fired and replaced with Midjourney, for example. But the quality of these generative AI models depends on what is in the dataset. If you remove all the copyrighted data from the datasets, you will be left with something a lot less impressive than what we have right now. So what I’m estimating is that as soon as these models need to get permission from, and give compensation to, the people whose work they are trained on, suddenly these models won’t be as impressive as they are right now. Also, these generative AI models take a lot of energy, and I don’t think it is actually profitable to keep them going. A lot of people are very excited about AI right now, and I get it, it is very interesting and incredible technology, but in the end, I don’t think it’ll take a lot of jobs away. I do think that a lot of employers will think it can replace people, but it can’t actually do that. You can ask GPT to generate a beautiful image, but if you ask for revisions, that is a lot more difficult to do. And I have personally seen a lot of people being fired; for example, an anorexia helpline fired its entire staff and replaced them with GPT, and within a day, GPT was giving people with anorexia tips on how to eat less. So a lot of people think AI can do a lot of things, but in the end, I think humans should be kept in the workforce, and AI should only be used for mundane tasks like summarizing text, not to replace workers entirely.
-
from: Hotline_on_AI (pad)
DEREK: I might jump in and just say that current generations of AI are a lot more powerful than we thought they were gonna be three years ago. I mean, it’s just way better, and it’s moving way faster than we thought. But it still can’t do the last 10% of the job, and the last 10% of the job is more than half the work. If you do anything, the last 10% is really hard, so it can make a really incredible picture, as Eva says, but if you need it to do something specific, it’s bloody impossible. And it will get better. Right now I find myself doing work twice: first I try to do it with AI, and then I have to actually do it. I think the job that it doesn’t replace but enhances the most is the intern. I think that interns with ChatGPT are this incredible superpower. And the thing is that you don’t wanna fire your interns because you have ChatGPT; you wanna hire as many interns as you can, because they’re unbelievable. I mean, they can do everything a lot better than they could two years ago. And as far as I know, all of the examples of firing people to replace them with AI have been a bad idea, every single time. All of the examples Eva gave are really great examples of why you shouldn’t fire people and replace them with AI. It’s not a good idea. Instead, the people will become a lot more productive. But then again, this is just the very beginning of the AI apocalypse, so we’re still speculating. Check back in five years. It’s gonna be really weird.
-
from: Hotline_on_AI (pad)
AHNJILI: And I wanted to bring up an example related to the energy efficiency point that Eva brought up. So I think about five years ago, the state-of-the-art AI cancer detection algorithm that used mammograms to detect tumor malignancy was around 77% accurate, while a radiologist at the time was around 80% accurate. But they also found that if you train pigeons over the course of a few weeks, they would also achieve 80% accuracy. And not just that: if you train four pigeons and then use the majority vote, you could achieve around 98% accuracy. I love this. That’s awesome. It is so cool. Beautiful, actually beautiful, because maybe someday the algorithm will also achieve around 98% accuracy, but you need to feed it millions of images, and it’s completely not cost-efficient to train these models, whereas for the pigeons, you just need to feed them. Yes, a few crumbs of bread. Yes, exactly. Just get pigeons. Yes, pigeons. And, you know, radiologists aren’t that energy efficient either, because they have to go to school five to ten years to get to that level. Those STEM doctors, yes, exactly. So perhaps AI won’t be the only thing threatening radiologists’ jobs. Yeah, yeah.
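(Editor's note: as a back-of-the-envelope illustration of why majority voting helps, here is a sketch computing the accuracy of a majority vote among n classifiers, each correct with probability p, under a naive assumption that their errors are independent. The function name and the independence assumption are ours, not from the talk; notably, independence only gets four 80%-accurate voters to about 90%, so the 98% figure quoted above implies the pooled pigeon responses did better than independent guessers would.)

```python
from math import comb

def majority_vote_accuracy(n: int, p: float) -> float:
    """Probability that a majority of n independent classifiers,
    each correct with probability p, gives the right answer.
    Ties (possible when n is even) are broken by a fair coin flip."""
    total = 0.0
    for k in range(n + 1):
        # probability that exactly k of the n classifiers are correct
        prob_k = comb(n, k) * p**k * (1 - p) ** (n - k)
        if 2 * k > n:          # strict majority correct
            total += prob_k
        elif 2 * k == n:       # tie: coin flip wins half the time
            total += 0.5 * prob_k
    return total

print(majority_vote_accuracy(1, 0.80))  # one pigeon: 0.80
print(majority_vote_accuracy(4, 0.80))  # flock of four: 0.896
```

Under independence, adding voters keeps pushing accuracy up (five voters reach about 94%), which is the same intuition behind ensemble methods in machine learning.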
-
from: Hotline_on_AI (pad)
DEREK: AI was invented as a term for grant applications; that’s where it came from. We used to call it cybernetics, and I like cybernetics. First of all, it sounds cool, but it also doesn’t require things to be artificial. In a cybernetic system, you can have people as part of the system, because it’s about a feedback loop. The first cybernetic system discussed was by a Dutch guy back in the late 1500s. He was like, if we take over these islands and set up these canals, we can completely control the water level, and it will just work. And it didn’t work, but he tried it. So I think about AI in a systems way. I’m not so concerned with, like, is this deep learning? Is this a regression? I’m like, no, no, no, let’s look at the whole system. The problem isn’t GPT on its own. When we were talking about GPT, like, oh, is it dangerous? No, no, no. The problem is when you take GPT and you plug it into Facebook’s recommendations, and then you plug that into the 2020 election, you see what I’m saying? So when I think about AI, I think about it this way: it’s not that one algorithm is bad, it’s that when you put all of these algorithms together into this huge system, it’s like putting on the Ring of Power. Okay, you have the Ring of Power, great. But then you put it on, and now you’re invisible, and now you want to kill people. You see what I’m saying? So for me, I think the definition thing is a little bit of a red herring, because we’re already in this big old cybernetic system, and it’s a little bit too late to be like, okay, well, is this good? Is this bad? It’s like, no, no, no, we’re already here; let’s look at the whole thing. So for me, that’s what I’m thinking about.
I hated the way the term AI was used, and now I am finding myself… Well, yeah, it’s really convenient for talking about what’s going on with ChatGPT, but I want to clarify that whatever we were talking about when we said AI two years ago, it’s a really different thing now. And it’s actually a lot closer to the misconception that people had around it, and that is weird.
-
from: Hotline_on_AI (pad)
EVA: Yes, uh, generative AI can do a lot of things, but it cannot create something new. It can only create what’s in the dataset. And art is so much more than just a pretty picture, than just something pretty to look at. My favorite paintings, for example, are the paintings from artists where I know a little bit about their backstory, a little bit about their mental state, and a little bit about why they made the painting and what making it did for them mentally. And I think that as soon as the current AI technologies, in the form they are in now, are outlawed, students who have learned to work with that kind of AI will not have grown as artists; they will have stayed stagnant. Um, so I think the best advice you can give young students is to keep developing themselves and not let their creativity be dependent on companies. And that is my answer.
-
from: Hotline_on_AI (pad)
Um, and I see the use of tools as being really central to the development of art historically. And I completely agree with the concern around dependency. So I think it’s very important, from a creative perspective, that people are able to make use of many different approaches. I don’t think a person should be too reliant on any one medium, you know, especially when you’re teaching people; you’ve gotta be able to do things in lots of different ways. But I think it’s really important that it’s not viewed as a… Like, art isn’t just pictures, you know, and being able to generate pictures is not being able to generate art. Um, I think computers can absolutely make up new things, you know, that are not just in the dataset, at least as much as people can, and their ability to do so is getting better and better. So I don’t know, I think it’s really important to stay on top of what people are able to do and to see some of the trends that have taken place in the past. Like, Photoshop disrupted a lot of jobs as well, but people adapted. And I think that understanding the role of effort in art is really crucial. I agree, I mean, something that you didn’t put any effort into doesn’t have the same emotional value as something that you put effort into. But I don’t think there’s anything fundamental that says you can’t put your soul into something you used AI to help you make, because it’s not just gonna be the first generation. You’re gonna use it in some reflexive manner where you can demonstrate some aspect of your soul that you didn’t otherwise know about. So I don’t think it’s as black and white as: if you use AI, then, you know, you’re going to become dependent.
-
from: Hotline_on_AI (pad)
AHNJILI: Yes, actually, this question is perfect for now, because it also helps me answer the previous question. Um, how do I frame this? Right now, if you do AI art, your art will be about AI. It’s really hard to do anything that doesn’t end up just showcasing the AI algorithm. For the AI art that I appreciate, I really enjoy it when the AI is more or less invisible; it’s not artwork that showcases, oh, this is what Midjourney can do now, or this is what Stable Diffusion can do now. It’s more about, oh, what happens when you combine these different elements, these different datasets? Or what happens when you fine-tune your own algorithm? That makes it way more interesting. Uh, also, just as an example, there was a 500k AI-generated painting from six or seven years ago that, you know, was made by downloading a bunch of images from Wikimedia, and if you were to look at that image today, it looks like shit. Honestly, there’s not a recognizable figure in any of those paintings; you’re just like, who bought this for 500k? But, uh, wait… I actually, so…
-
from: Hotline_on_AI (pad)
DEREK: Yeah. I mean, the bias of AI today is that AI is biased. If you ask ChatGPT about, you know, its concerns, it’s like, well, watch out, I’m biased. And I think it’s reasonable to say that ChatGPT is less biased than people; take any random person, and ChatGPT is less biased than them. And I personally find ChatGPT to be especially helpful in checking bias, if you ask it to. Methodologically, I think it’s not appropriate to put everything into the technology, where it’s like, we’re gonna make this perfectly unbiased technology, whatever that might be, but rather to use methods that allow us to check our own biases and think about the implications thereof. At the same time, it’s really good in English, and you can’t say that about all languages, so there’s a lot of work to be done to make it more inclusive as a dataset. That brings up some of these same tensions we were talking about from a copyright perspective, because it’s like, should we go to all of these languages that aren’t well represented and take all the artists’ work from them to train these models, or not? I think the ethics are a little bit complicated.
-
from: Hotline_on_AI (pad)
DEREK: Yeah, thanks. Um, so I do think that one of the most exciting aspects of generative AI is in learning. I think it’s a really incredible learning tool. I’ve been working on generative AI in math instruction, which is exactly where generative AI is the worst, because it’s really bad at fifth grade math. Like, you know, the whole collapse of OpenAI, you might have heard about it on the news, was because AI got so good at fifth grade math, and that was the sign that, you know, Moloch is coming and we’re gonna have this apocalypse really soon. So, um, okay, off topic. I’m really excited about programming education with generative AI, because it’s something that is really hard for schools to offer, because it’s really hard to staff schools, period. And now it’s become possible for kids to have really meaningful programming experiences with a tutor that can answer all of their technical questions very quickly, and that’s something… It’s not just for kids; it’s for adults too, like being able to upskill, like realizing that we can all program quantum computers. You didn’t know that, but actually you can, you just haven’t had a good reason to, I guess. It’s those sorts of things where it’s not just at a superficial level where you can generate something and then copy-paste it, chuck it in. I mean, that’s a big part of programming, period, but now it becomes so much easier to interrogate things, understand them, and have them explained at whatever level you’re at. That’s something that I think will have a really positive effect across the board, and it will help teachers focus more on developing student interests, as opposed to just knowledge transfer. I’m really hoping that we shift some of the curricular goals that are currently there to accommodate what kids and adults can learn.
-
from: Hotline_on_AI (pad)
EVA: I think the reason it got so good is that before, they didn’t use copyrighted data; they trained on public domain images and royalty-free images, and they didn’t have these incredibly big datasets. Then LAION-5B came around, a huge dataset filled with copyrighted data, and suddenly it was that good. But personally, I was not expecting that they would be allowed to get away with it, and fortunately they are not getting away with it. But I think that the reason it’s so incredible is the input. I think most people didn’t expect this to be legal, or that companies would even try to do this. Um, well, they’re not getting away with it, but still, that is the reason why, personally, I didn’t expect they would get to this level. Yeah.
-
from: Hotline_on_AI (pad)
EVA: But it’s also good to keep in mind the people who train these models, who have to filter certain things out of ChatGPT, like certain phrases or certain images that are not allowed. There was a recent article about the people in Kenya who trained ChatGPT: the work is incredibly mentally taxing, and they are getting paid next to nothing. We have to keep in mind the people behind the training of those models, the ones making sure the training data is not toxic or otherwise disturbing, the people who make these systems work the way they do and make them appear ethical. I think they recently unionized, so that is great. But the kind of images this work exposes you to can be very taxing on you mentally, and there should be services put in place to take care of those people.