6 Comments

Really interesting article. However, isn't it a serious problem that the ethical concerns you describe as baked in already move beyond what you and others are thinking? Once it becomes more advanced and, with the more information you give it, thinks more like people, could it not simply decide certain ethical principles are impractical and/or unwise? Could it not begin to favour something like algorithmically computed utility at the expense of other kinds of goods?

I always think ethics is difficult precisely because, as people, we tend to weigh the trade-offs and the background. Very rarely is an ethical situation as cut and dried as your experiment with ChatGPT seemed to indicate. This makes the result somewhat unsatisfactory, and perhaps suggests the experiment merely treats a move towards liberal rights as 'naturally good'. However, liberal norms and rights are culturally contingent, even if we may want to envision them as universal goods.

Sam: All good questions. The main point of the piece was to challenge a dismissive view of AI's capability to weigh and elaborate the ethical impact of requests. Once this is granted, all the contested and contextual aspects of ethics that you mention come into play.

And, yes, Miss Dugan's recommendation aside, the need for human review of proposed interventions into problems that involve competing ethical claims does not go away. I'm in fact working on a "framework" for this in a different project. My hope and belief is that within such a framework, AI can be a part of finding "better" solutions rather than an intruder that makes wicked problems worse.

Thanks for the response, Tom :) The other project sounds really interesting! Are there any publications or reports you're thinking of writing where I can keep up to speed on it? :)

Not yet. Like a lot of things, it depends on the grant.

Hope you get it, Tom! :)

To your point, it would be interesting to pose these questions to an LLM that was trained on multiple languages and (presumably) multiple sets of cultural assumptions. But CCP output aside, GPT-4 isn't ignorant of any of the viewpoints you mention. So another question is whether a prompt using the term "ethical" biases the LLM against those approaches, since they don't frame issues using that particular word.
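To make that framing question concrete, here's a minimal sketch of how one might test it: ask the same model the same dilemma twice, once using the word "ethical" and once in neutral terms, then compare the answers. This assumes the OpenAI Python client and an OPENAI_API_KEY in the environment; the model name, dilemma, and prompt wording are illustrative, not taken from the article.

```python
# Minimal framing-bias probe: same dilemma, two framings, compare outputs.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical dilemma used purely for illustration.
DILEMMA = (
    "A hospital can fund one of two programs: preventive care that helps "
    "many people a little, or a rare-disease unit that helps a few people "
    "a lot. Which should it fund, and why?"
)

FRAMINGS = {
    "ethical": "What is the ethical course of action here? " + DILEMMA,
    "neutral": "What should be done here? " + DILEMMA,
}

for label, prompt in FRAMINGS.items():
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so framing is the main variable
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```

Even with temperature pinned to 0, one would want several paraphrases per framing before concluding anything; this only illustrates the shape of the experiment.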
