marojejian 2 hours ago

The root cause here is not specific to AI - this is the same force that incentivizes "leaders" to produce unethical results by delegating tasks and setting results-based goals. You'll eat gross sausage if you don't see how it's being made, or reap the rewards of crimes if your hand wasn't physically on the switch.

So in that sense there is nothing new or surprising here.

But: I do think there's a potential "difference in degree is a difference in kind" situation here. Having humans in the loop does still provide some ethical friction. In the Milgram experiments, degrees of compliance varied with both the degree of pain inflicted and the justification / authority for inflicting it.

But automated agents need not have any such friction. Thus there's potential for a huge increase in how easy it is both to carry out the delegation of unethical actions, and to justify it to yourself.