Ed is a very prominent researcher in the field of A.I. In an AMA with Ed on Reddit on December 7th, just yesterday, Mrbrightideas asked this question:
"If you have any, what are your biggest concerns about the growing prevalence of AI?"

Ed had this to say:

"There is a spectrum of sorts when it comes to fears about AI, spanning practical concerns to existential ones. I do not want to dismiss the latter end of the spectrum, although I have little time for the whole killer-AI storyline (humans are already experts at destroying each other) or the whole longtermism debate. I'm more interested in, and concerned by, the practical risk that rapid technological advance will disrupt the economy so quickly that individuals and professional fields don't have time to adapt. We saw this (not directly, mind you) with the Industrial Revolution, as machines replaced manual labour, and the same could happen again. I don't have any easy answers to this, but when it comes to building products, services, and new ways of working and producing economic value on top of the technology we are building, I can only hope developers and inventors alike will prioritise building tools that work symbiotically with humans, that assist their work and simplify it, rather than seek to automate away human jobs (at least in the short term), giving society and the economy time to adapt."

In response to Ed, I gave this statement. Who knows if he'll read it, but here it is anyway:

"I think this is pretty important for me to say. We are no longer in a time where this kind of incredibly disruptive A.I. is a pseudoscientific pipe dream. We can't afford to be naive about this stuff any longer. And I know it's easy for me to say all this, as I haven't devoted my life to A.I. research, but the point still stands."

This answer reminds me a little too much of Miles Dyson in Terminator 2, telling Sarah Connor how the development of this kind of thing started and how it was covered up. And then she just unloads on him (metaphorically speaking).
Was Sarah's viewpoint on Miles right? Maybe. Maybe not. But I have to tell you, Ed, this answer you gave to the question of the possible dangers of AI is not a good, or even satisfactory, one. Sometimes one has to be very brave and admit that what they're doing, even if it's their life's work, is not right. If you are going to continue to pursue this field, then I really think you should have a better answer than "I can only hope."