The scene in Terminator 2 that makes me question my work in AI — and why I have bi-weekly ethics meetings.
A few weeks ago I gave a lecture on AI methodology (NLP, ML, NN, DL, IR, etc.) and cybersecurity. I spoke about how we construct models and how they, in fact, impact us.
I’ve been in this AI game for a long time, and I’ve seen everything come and go. Fifteen years ago, I already watched two chatbots talk to each other in ultra-realistic, flirtatious voices. I’ve also spent many years in life science and medicine, so the thought of bias, and of having to stumble through ethical complexities, is not new to me.
I’m not exactly worried about a SkyNet just yet. But I am worried about becoming Miles Dyson.
A few years ago, I ran an experiment that landed me in the headlines of a lot of reputable, and some highly questionable, news outlets.
I was subsequently sent heaps of emails, physical letters, and memes. They ranged from “awesome work exposing the pitfalls of academic writing” to “please stop, you’re triggering SkyNet to become self-aware.”
As for the latter, I had no idea how ANYONE could even draw that conclusion until I started re-watching the Terminator movies.
This very scene sent chills down my spine.
The scene shows a man WAY AHEAD of his time (I’m starting to think James Cameron IS from the future): a man working from home, with his two computer screens and his family complaining about his lack of work-life balance.
Miles, an engineer, is excited about building something he thinks will make the world better. He is a kind, caring person who small-talks with his colleagues and, from the few interactions I see him have at Cyberdyne, seems very much like someone who is simply happy to do what he loves and wants a better world. He is not driven by money or fame, and, worst of all, he is blinded by his enthusiasm.
In the movie, a guard knows details about his family, and the new recruit asks him for advice. Those subtle details tell me he is not one of those “academic jerks” who lives on a high horse, isolated from his colleagues. They also made me see myself in him.
The things I’ve done so far are harmless. They would have been done by someone else at some point, and they will not (much to the dismay of the article in The Sun) cause World War Three!
But as I push the boundaries of what I do, I see a similar path for me and Miles. I see the enthusiasm, and I see the willingness to press the “this is not that bad” button in my brain. Without it, I would never push boundaries; with it, there is always a risk I push too far.
There is no lack of bias and ethical complexity when it comes to working in this field. The most challenging thing has not been following basic ethical practices; that is in my “blood.”
The main challenge has been figuring out how to contribute without causing harm. I see a need for a Hippocratic oath for AI developers.
To combat this within myself, at least somewhat (especially as my projects become more advanced), I have started holding ethical advisory discussions with healthcare workers and fellow developers. I lay out my plans as transparently as possible, and they push back as much as they can and steer me in the right direction. That way, I know there are at least some independent minds out there to call me out when I think about taking things too far.
If Miles Dyson had had this fail-safe, the world would have been better off. Or would he have listened? I hope I will.
I’m so grateful for the people who make me question my place and role in AI. With their help, I will hopefully do more good than harm.