What's so special about military research or AI that the two can't be done together even though the organization is not in principle opposed to either?
Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
It's still just a platitude. Being somewhat critical still means giving it some implicit trust. If you didn't trust it at all, you wouldn't use it at all! So my read is that they endorse trusting it, exactly the opposite of what they appear to say!
It's funny how many official policies leave me thinking that it's a corporate cover-your-ass policy, and that if they really meant it they would have found a much stronger and plainer way to say it.
Blah, blah, people will simply use it as they see fit.
Feels like the useless kind of corporate policy, expressed in terms of the loftiest ideals instead of guidance on how to make real trade-offs with costs.
I found this human-oversight principle particularly interesting.
I think you're reading what you want to read into it, but that's the problem: it's too ambiguous to be useful.