Wikipedia talk:Wikipedia Signpost/2011-03-21/Technology report
Discuss this story
Sigh. Regarding labelling, people aren't just re-inventing the wheel, they're debating "round". Much of the topic was argued intensively more than a decade ago, with implementation and testing of many systems - which mostly failed, for various reasons. I suggest reading "The Net Labelling Delusion". But I know almost nobody will care about ugly facts, it's all about beautiful theories -- Seth Finkelstein (talk) 03:08, 22 March 2011 (UTC)
Assuming this picture filter thing goes through, will any images at all be blocked by default before a logged in user has changed any of their preferences?--Rockfang (talk) 04:31, 22 March 2011 (UTC)
- No. This technology would be user-centric. No images would be blocked automatically.--Jorm (WMF) (talk) 05:20, 22 March 2011 (UTC)
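A rough illustration of the "user-centric" behaviour described above: nothing is hidden unless the logged-in reader has switched a preference on themselves. The sketch below is a hypothetical approximation only; the preference key, category names, data attributes, and storage mechanism are illustrative assumptions, not the actual design.
```typescript
// Hypothetical sketch only: an opt-in, per-user image filter. Nothing is
// hidden unless the user has explicitly enabled it.

interface FilterPreferences {
  enabled: boolean;           // off by default for every reader
  hiddenCategories: string[]; // categories the user has chosen to collapse
}

const DEFAULT_PREFS: FilterPreferences = { enabled: false, hiddenCategories: [] };

function loadPreferences(): FilterPreferences {
  // A saved choice for this user; readers who never opted in get the defaults.
  const raw = window.localStorage.getItem("imageFilterPrefs"); // assumed storage key
  return raw ? (JSON.parse(raw) as FilterPreferences) : DEFAULT_PREFS;
}

function applyFilter(prefs: FilterPreferences): void {
  if (!prefs.enabled) return; // the "no images blocked automatically" guarantee

  // Assumes, for this sketch, that images carry a data-category attribute.
  document.querySelectorAll<HTMLImageElement>("img[data-category]").forEach((img) => {
    const category = img.dataset.category ?? "";
    if (prefs.hiddenCategories.includes(category)) {
      // Collapse rather than remove, so the reader can still choose to reveal it.
      img.style.display = "none";
    }
  });
}

applyFilter(loadPreferences());
```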
- If no images are blocked automatically, then virtually none of the controversial issues are addressed (except a tiny one I call "Stop me before I look again", which is not significant in practice). The point of such systems is to enable widespread blocking of material from readers not authorized to view it (note I do not use the word "censorship" in the preceding sentence). -- Seth Finkelstein (talk) 05:49, 22 March 2011 (UTC)
- If you're talking about penetrating institutions who are banning Wikipedia due to strong content restrictions, I generally favour pure client-side solutions for that, since they could (in theory) be pre-installed by an institution and made difficult to circumvent or de-activate. The proposed feature would at least have utility for self-censorship and parents of young children under 8, but it remains impractical because there's no good way to tag images. A third-party client-side solution incorporating its own tag lists would step around this issue. Dcoetzee 06:09, 22 March 2011 (UTC)
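For comparison, a third-party client-side filter of the kind mentioned above could, in rough outline, fetch its own tag list and hide any image it has tagged, independently of anything Wikimedia serves. The tag-list URL, category names, and file-name matching below are illustrative assumptions only, not any actual tool.
```typescript
// Hypothetical sketch only: a third-party, client-side filter that maintains
// its own tag list rather than relying on any labels served by the site.

// Maps image file names to the categories the tag list assigns them.
type TagList = Record<string, string[]>;

const TAG_LIST_URL = "https://example.org/tag-list.json";   // assumed third-party source
const BLOCKED_CATEGORIES = new Set(["nudity", "violence"]); // chosen by whoever installs the script

async function hideTaggedImages(): Promise<void> {
  const response = await fetch(TAG_LIST_URL);
  const tags: TagList = await response.json();

  // Hide every image whose file name is tagged with a blocked category.
  document.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
    const fileName = decodeURIComponent(img.src.split("/").pop() ?? "");
    const categories = tags[fileName] ?? [];
    if (categories.some((c) => BLOCKED_CATEGORIES.has(c))) {
      img.style.visibility = "hidden"; // keeps layout; a placeholder could go here instead
    }
  });
}

hideTaggedImages();
```
In a managed setting such as a library or school, a script like this would have to be pre-installed and locked down by the institution to be effective, which is the point made above about pure client-side solutions.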
- This feature is pretty much entirely designed around the "stop me before I look again" scenario. It could, in theory, be enlarged to provide site-local default filters, but that is outside the scope of the design parameters given.--Jorm (WMF) (talk) 06:13, 22 March 2011 (UTC)
- But it is then a feature which almost nobody wants, and does not address the major controversies. There is a vast difference between people not wishing to see things themselves, and wishing other people not to see material the readers want to see, but which is deemed harmful. This goes very far back. -- Seth Finkelstein (talk) 06:36, 22 March 2011 (UTC)
- @Jorm: Thank you for the response. Had your response been "yes", that would have been my only issue with the picture filter.--Rockfang (talk) 06:54, 22 March 2011 (UTC)
- As I said, this sort of thing has been seen before. He means that Wikimedia itself will not block automatically, as far as he knows (he can't guarantee this, that's just the current policy, which could always change in the future). He cannot state that e.g. public libraries will not block automatically (perhaps due to pre-installed censorware settings), because that is not under his control. Of course, it will be said that public libraries should not do that. Shouldn't, shouldn't, shouldn't, will be the refrain. However, they will, if history is any guide. Excuse me, I'm having bad politics flashbacks :-( -- Seth Finkelstein (talk) 07:03, 22 March 2011 (UTC)
- Ok. I'm not concerned with what libraries do or don't do.--Rockfang (talk) 07:10, 22 March 2011 (UTC)
- AFAIK, Seth, the feature would not make it any easier for libraries to block content than at the moment. - Jarry1250 [Who? Discuss.] 20:29, 22 March 2011 (UTC)
- Hmm? My understanding is that, architecturally, the idea is to have widespread labeling, optimized for content restriction. Note I said nothing there about the social "values" then applied to that content restriction. Is your point the common one that such restriction is already quite widespread? If so, then why is anything being invested in what should then be an off-the-shelf application? (n.b., the obvious answer here circles back to the problem with the nominal scenario not addressing the major controversial content issues). Again, these general issues have been played out many times before. -- Seth Finkelstein (talk) 05:42, 23 March 2011 (UTC)
- As in, this development is aimed at substantially reducing the technical know-how (and often money) required to limit one's own viewing. Although I have some ideas to the contrary, these developments don't substantially reduce the technical know-how (and often money) required to limit other people's viewing, as is the case in, say, libraries. The two are, for the most part, technically unrelated. - Jarry1250 [Who? Discuss.] 18:09, 23 March 2011 (UTC)
- And where is the outcry and pressure to limit one's own viewing (rather than a speculative use case)? I really wish people were familiar with the extensive labeling systems background. But sadly, I just don't have the endurance these days to do the debates again. At this point, I regret saying anything at all. -- Seth Finkelstein (talk) 18:21, 23 March 2011 (UTC)
- You'd have to ask the authors of the report, since they specifically recommended it, I guess. Or not :) - Jarry1250 [Who? Discuss.] 19:03, 23 March 2011 (UTC)