label
Big Tech
Linked to 5 items
-
from: Hotline_on_AI (pad)
AHNJILI: And I wanted to bring up an example that was related to the energy efficiency point that Eva brought up as well. So I think about five years ago, the state-of-the-art AI cancer detection algorithm that used mammograms to detect tumor malignancy was around 77% accurate, and a radiologist at the time was around 80% accurate. But they also found that if you train pigeons over the course of a few weeks, then they would also achieve 80% accuracy. And not just that: if you train four pigeons and then use the majority vote, you could achieve around 98% accuracy. I love this. That's awesome. It is so cool. Beautiful, actually beautiful, because not only that, maybe someday the algorithm will also achieve around 98% accuracy. But with AI, you need to feed it millions of images, and it's completely not cost-efficient to train these models, whereas for pigeons, you just need to feed them. Yes, a few crumbs of bread. Yes, exactly. Just get pigeons. Yes, pigeons. And you know, radiologists aren't that energy efficient either, because they have to go to school five to ten years in order to gain that level. Those STEM doctors, yes, exactly. So perhaps AI won't be the only option that will threaten radiologists' jobs. Yeah, yeah.
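(Editor's aside, not from the episode: a minimal sketch of the majority-vote math, under the naive assumption that each voter is independently right 80% of the time on a binary call. Under that assumption four voters only reach about 90%, not 98%, so the study's pooling of pigeon responses, and the correlations among their errors, must account for the rest; the snippet just computes the independence-assumption numbers.)

```python
from math import comb

def majority_vote_accuracy(n_voters: int, p_correct: float) -> float:
    """Probability that a majority of independent voters is correct.

    Assumes each voter is right with probability p_correct, independently,
    on a binary call (malignant vs. benign). Ties, possible when n_voters
    is even, are resolved by a fair coin flip (counted as half-credit).
    """
    strict = sum(
        comb(n_voters, k) * p_correct**k * (1 - p_correct) ** (n_voters - k)
        for k in range(n_voters // 2 + 1, n_voters + 1)
    )
    if n_voters % 2 == 0:
        k = n_voters // 2
        strict += 0.5 * comb(n_voters, k) * p_correct**k * (1 - p_correct) ** k
    return strict

for n in (1, 4, 15):
    print(n, round(majority_vote_accuracy(n, 0.80), 3))
# 1 -> 0.8, 4 -> 0.896, 15 -> 0.996
```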
-
from: Hotline_on_AI (pad)
AHNJILI: I worked in big pharma for a few years, and I noticed that when we applied for grants, we threw the term AI anywhere and everywhere. But then when it comes to research papers, we're like, it's not really AI, it's actually just linear regression, or it's just a random forest, or it's just a GAN. And so, yeah, I noticed that, you know, when it comes to wanting to reap the financial rewards, AI is a hype term, but when it comes to actually convincing, uh, other researchers or scientists of what you're actually capable of doing, the term AI is never used, for the exact reason that it is just too broad and it actually doesn't mean anything. And so yeah, even within the medical field, or maybe even within the other fields mentioned, like the military, it's still quite hard to define what the reach or impact of these algorithms is. So, for example, I might propose a survival model for someone's diabetes treatments. But that model probably was only trained on adults. So if I use the same model for children, then I put those children at risk, because I haven't actually accounted for their own BMI, their growth rate, and so on. And so when I talk about AI, not only do I talk about the field I'm in, but also the population that I'm working with.
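(Editor's aside, not from the episode: a minimal sketch of the population caveat, with hypothetical names. One way to make "trained on adults only" explicit is to record the training covariate ranges and refuse to predict outside them, rather than silently extrapolating to children; `PopulationGuardedModel` and the age/BMI features are illustrative, not a real pharma pipeline.)

```python
import numpy as np

class PopulationGuardedModel:
    """Wrap a fitted model and record the covariate ranges it was trained on.

    Predictions for inputs outside those ranges (say, a pediatric cohort fed
    to a model trained only on adults) are rejected instead of silently
    extrapolated. Purely illustrative.
    """

    def __init__(self, model, X_train: np.ndarray, feature_names: list[str]):
        self.model = model
        self.feature_names = feature_names
        self.lo = X_train.min(axis=0)
        self.hi = X_train.max(axis=0)

    def predict(self, X: np.ndarray) -> np.ndarray:
        outside = (X < self.lo) | (X > self.hi)  # per-sample, per-feature
        if outside.any():
            bad = [self.feature_names[j] for j in np.where(outside.any(axis=0))[0]]
            raise ValueError(f"input falls outside the training population on: {bad}")
        return self.model.predict(X)

# Hypothetical usage: a survival-style model fitted on adults aged 18-80.
# guarded = PopulationGuardedModel(fitted_model, X_adults, ["age", "bmi"])
# guarded.predict(np.array([[9, 16.0]]))  # raises: age 9 is out of range
```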
-
from: Hotline_on_AI (pad)
AHNJILI: Oh, actually, this is more specific. So for example, at the Hong Kong International Airport, once you go through border control, basically, you don't have to take out your passport anymore; everything is facial recognition. So let's say, I don't know, uh, you need to go from point A to point B, but for some reason they won't scan your passport anymore, and your facial recognition is also not working. Now you're essentially locked out, because you don't conform to the system.
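(Editor's aside, not from the episode: a toy sketch of the lockout failure mode described above, with hypothetical names. If every automated path fails and the gate has no staffed fallback lane, the traveler has no way through by design.)

```python
from enum import Enum, auto

class GateResult(Enum):
    ADMITTED = auto()
    MANUAL_REVIEW = auto()
    LOCKED_OUT = auto()

def border_gate(face_match_ok: bool, passport_scan_ok: bool,
                has_manual_fallback: bool) -> GateResult:
    """Toy model of an automated border gate.

    If both automated checks fail and there is no staffed fallback lane,
    the traveler is locked out by design; the system has no notion of
    someone who doesn't conform to it.
    """
    if face_match_ok or passport_scan_ok:
        return GateResult.ADMITTED
    if has_manual_fallback:
        return GateResult.MANUAL_REVIEW
    return GateResult.LOCKED_OUT

print(border_gate(False, False, has_manual_fallback=False))  # GateResult.LOCKED_OUT
```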
-
from: Hotline_on_AI (pad)
DEREK: I mean, I would say that one of the big challenges is just really powerful companies, you know. I mean, it's gonna be like it has been a challenge, except they're gonna be even bigger and more powerful. So, Microsoft was like the boogeyman for much of my growing-up years; it was like, "ah, big monopoly." And then they haven't been such a big monopoly… But boy, going forward, I'm telling you, they're gonna get really powerful, now that they've got OpenAI in their pocket. It's gonna be really intense. Um, and so all of these big companies are gonna be competing with each other, and you're gonna be seeing a lot of companies producing these high-level foundation models; anyone who has the capability to invest in it is going to be doing it. And that's this arms race, right? So you've got this arms race taking place where you're trying to create the most powerful AI model, and we're living in that right now. And this has been one of the main concerns from the people worried about this AI apocalypse. My philosophy professor in college was this guy, Nick Bostrom, who's famous for writing a really good article about the likelihood that we're living in a simulation, first of all, and also for the concern that we might all get turned into paperclips by an AI that's trying to maximize paperclips. You know, we're not gonna get turned into paperclips, but having runaway corporate objectives and profit objectives, that's a real concern.
-
from: Hotline_on_AI (pad)
AHNJILI: Uh, for another current AI problem, just to move away from the computer and think about the physical elements of AI: obviously, to collect data, to store data, to train these algorithms, you need these large data centers, and these data centers consume a lot of water. And recently, in a few states in the US, and also in a few Latin American countries, I think in Honduras, for example… You know, Google, Amazon, Meta have had these contracts saying we need X amount of water per year to keep our data centers alive, but a lot of these areas have also had droughts. So these areas had to have their governments decide: are we going to honor our contracts, or are we going to provide water for our civilians? And yeah, if they don't honor the contracts, there's a huge financial backlash, or financial implications, for these areas. So I think that is also a current AI apocalyptic problem that we have to deal with.