A Statement to Ed Grefenstette on His Work on A.I.

Arnox
Ed is a very prominent researcher in the field of A.I. In an AMA with Ed on Reddit on December 7th, just yesterday, Mrbrightideas asked this question:

If you have any, what are your biggest concerns on the growing prevalence of AI?
Ed had this to say:

There is a spectrum of sorts when it comes to fears about AI, spanning practical concerns to existential ones. I do not want to dismiss the latter end of the spectrum, although I have little time for the whole killer-AI storyline (humans are already experts at destroying each other) or the whole longtermism debate. I'm more interested in, and concerned by, the practical risk that rapid technological advance will disrupt the economy so quickly that individuals and professional fields don't have time to adapt. We saw this (not directly, mind you) with the industrial revolution, as machines replaced manual labour, and the same could happen again. I don't have any easy answers to this, but when it comes to building products, services, and new ways of working and producing economic value on top of the technology we are building, I can only hope developers and inventors alike will prioritise building tools that work symbiotically with humans, that assist their work and simplify it, rather than seek to automate away human jobs (at least in the short term), giving society and the economy time to adapt.
In response to Ed, I gave this statement. Who knows if he'll read it, but here it is anyway:

This answer reminds me a little too much of the scene in Terminator 2 where Miles Dyson is telling Sarah Connor how the development of this kind of thing started and how it was covered up. And then she just unloads on him (metaphorically speaking).

Was Sarah's viewpoint on Miles right? Maybe. Maybe not. But I have to tell you, Ed, this answer you gave to the question of the possible dangers of AI is not a good or even satisfactory one. Sometimes, one has to be very brave and admit that what they're doing, even if it's their life's work, is not correct. If you are going to continue to pursue this field, then I really think you should have a better answer than, "I can only hope."
I think this is pretty important for me to say. We are no longer in a time when this kind of incredibly disruptive A.I. is a pseudoscientific pipe dream. We can't afford to be naive about this stuff any longer. And I know it's pretty easy for me to say all this as I haven't devoted my life to A.I. research, but the point still stands.
 

Arnox
Ed has responded:

That's a good callout. Let me think about this more and come back to you, as I'm in back-to-back meetings all afternoon up until the point I deal with my kids' bedtime, but I think your point deserves reflection and a response.
I'll go ahead and post his further response here as soon as it comes.
 

Arnox
Ed Grefenstette said:
Okay, I have had a little time to think about this, and would be curious to hear what is unsatisfactory, if anything, about the following explanation: I do agree that technologists have a moral responsibility for the impact of their contributions, but this responsibility is loosely weighted by the plausibility of their causing harm and the benefit they offer relative to that potential for harm (yes, I know this is just naive utilitarianism), both of which are hard to quantify and even harder to measure and predict (which is one reason naive utilitarianism fails). For example, I would not feel comfortable directly working on ML models for warfare, and would feel no moral qualms in working on ML models for, say, helping detect cancer earlier.

However, the issue here is that the more generic ML methods are not just fairly ubiquitously applicable (or at least adaptable); they are also surprisingly non-specific (once you abstract away the data they are trained on), such that it's actually conceivable that ML methods designed to detect cancer might be rapidly adapted to serve military purposes (I don't think it's plausible, but it's not an absurd thought experiment). And this really exemplifies the difficulty of disentangling the potential for harm from the potential for good: we are in the age of a class of methods where the application of the technology is really mostly just a function of where the method is applied, rather than heavily constrained by the method itself. So as technologists, we have to make a choice: do we halt progress altogether (which is impractical, as there is no guarantee all of humanity will play ball)? Or do we continue the development of these methods in lockstep with a greater organisation of society and its institutions around regulatory frameworks and the enforcement thereof, monitoring and anticipation of social and economic change, and reaction to such change, in the face of potentially deeply transformative technology? I think the latter is the only realistic approach, and so far the discussion around this is primarily driven by the technologists themselves. Therefore, I am not passing the buck by saying the responsibility is solely in the hands of technologists, but merely observing that this is currently how we are acting, when it is, in fact, by definition, a shared responsibility.
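To make the non-specificity point concrete: the same generic training code serves whichever domain supplies its data; only the dataset changes, not the method. A minimal, hypothetical sketch (using scikit-learn with synthetic stand-in data, not any real medical or military dataset):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_classifier(features, labels):
    # The method itself knows nothing about what the features mean.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(features, labels)
    return model

rng = np.random.default_rng(0)

# Stand-in for medical data: a cancer-screening classifier.
medical_X = rng.normal(size=(200, 10))
medical_y = rng.integers(0, 2, 200)
cancer_model = train_classifier(medical_X, medical_y)

# Stand-in for an unrelated (potentially military) domain:
# the identical code, just pointed at different data.
other_X = rng.normal(size=(200, 10))
other_y = rng.integers(0, 2, 200)
other_model = train_classifier(other_X, other_y)

Nothing in train_classifier constrains what it is used for; the application is determined entirely by the data it is handed, which is exactly the disentanglement problem Ed describes.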
My response:

You make good points, but I would argue that my concern actually lies much more in the realm of legislation and the leadership of countries and companies, or should I say, the lack of good legislation and good leadership.

If our society were properly set up for it, then AI could generally be a boon, but as it currently stands, we cannot really even hope for this and must confront the very ugly reality that while AI will have some positives, it is, for the most part, going to have a very negative impact. Perhaps even vastly so. And all due to completely irresponsible use and no regulation in sight. We're already beginning to see the problems with this as AI is used to replicate the works of artists or even replace them outright. We see this as governments use AI to further spy on their citizens. We see this as people potentially use deepfakes to construct scarily convincing misinformation from whole cloth.

And again, this is just the beginning. What's going to happen when governments and companies use AI even more irresponsibly to make automated drones for police and military applications and replace entire workforces? Yes, someone does need to maintain the drones, this is true, but that cannot be the entire answer to this question. New jobs will be created, but far more workers will be displaced as a result. Or what about malicious actors using AI for reverse engineering to find innumerable security holes?

And if we had the legislative infrastructure or at least competent leadership to deal with these issues, it wouldn't be nearly so bad, but could you honestly look me in the eye and say with a straight face that the current world is ready to responsibly confront these issues? I really don't think so at all. We can barely manage what we already have.

In the past, we worried about Skynet launching nukes and planning absurd robot invasions, but while we were all sitting around worrying about that, we completely overlooked all the far more subtle but still insidious implications that AI would bring to the table. Basic things that we all used to be able to rely on for decades or even centuries are now no longer valid.
 

Houseman
There's this AI thing called ChatGPT that you can prompt to give out factual information, but it's very obviously biased. It can praise and defend Mao Zedong, but it refuses to do the same for Hitler. It will tell you that there are no significant differences in IQ scores between different races, yet it'll admit that specific studies have found the opposite. It is biased and tries to lie to you.

So it's not even just military applications; this can be applied to journalism, research, academia, and literature in devious ways.


But what do you want Ed to do about it? What can he do or say about it that would make any difference? The cat's out of the bag.
 

Arnox
Houseman said:
But what do you want Ed to do about it? What can he do or say about it that would make any difference? The cat's out of the bag.
The question isn't really what he should do about it; it's what he shouldn't do. I don't think he should personally encourage the field of AI with his work. That's all. Or at least, he shouldn't until we get proper leadership and/or regulations in place for it. And it's not like he couldn't retire right now if he wanted to. From what I can see, he's already pretty set up.
 