Talk:Misaligned goals in artificial intelligence
This redirect does not require a rating on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The contents of the Misaligned goals in artificial intelligence page were merged into AI alignment#Misalignment on 16 December 2023. For the contribution history and old versions of the merged article, please see its history.
This article is an example of how anthropomorphizing things is a problem.
We tend to do this to simplify a thing/situation so we can more easily understand/explain it.
The irony is that "getting something wrong because it has been simplified until it lacks the information required to function properly" is the ACTUAL topic of the article.
- The goal of the article is to inform people about how improper goals result in unpredictable/unintended results in computer programming.
- We do it using language that heavily implies, or directly states, that these algorithms have "intelligence", that they can/do "learn", and/or that they work in a way ("neural") similar to the way we think.
- This language causes anyone not already aware of how iterative programming[A] works to make incorrect and wildly varying assumptions about what can/cannot be done.
- Therefore the article itself, by attempting to simplify things to be more easily understandable, is directly responsible for causing misunderstanding.
[A] I will not use the terms "Artificial Intelligence", "Machine Learning", or "Neural Network", because they are a core part of the problem.
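The point about improper goals can be made without any anthropomorphic vocabulary. Below is a minimal sketch (my own illustration, not from the article): a plain hill-climbing loop is given a mis-specified objective ("higher value is better") when the intended goal was "stay close to 10". The loop dutifully optimizes what it was told to, and the result drifts arbitrarily far from what was actually wanted. The function names and numbers are invented for this example.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def proxy_reward(x):
    """The objective the program actually optimizes (mis-specified)."""
    return x  # "bigger is better"

def intended_error(x):
    """What the author actually wanted: a value close to 10."""
    return abs(x - 10)

x = 0.0
for _ in range(1000):
    candidate = x + random.uniform(-1, 1)
    # Plain iterative search: keep any change that raises the proxy.
    # No "intelligence" or "learning" involved, just repetition.
    if proxy_reward(candidate) > proxy_reward(x):
        x = candidate

# The proxy objective is satisfied ever more strongly while the
# intended goal is missed by a growing margin.
print(x, intended_error(x))
```

Running this, `x` climbs far past 10, so the proxy score keeps improving while the distance from the intended target grows. That is the whole phenomenon the article describes, stated in terms of a loop and a badly chosen comparison.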
67.186.150.159 (talk) 16:33, 21 November 2021 (UTC) Prophes0r