Commons:Village pump/Proposals
This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2024/12.
- One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
- Have you read the FAQ?
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.
CAPTCHA for many IP edits
There is a new feature that allows AbuseFilters to require a CAPTCHA before uploading an edit. I would like to enable this for many IP edits, especially IP edits on mobile. The aim of this is to reduce the huge amount of accidental and nonsense edits. Are there any concerns against this? GPSLeo (talk) 10:06, 18 August 2024 (UTC)
- No, it would be good to reduce maintenance time to free up time for other tasks. However, I doubt this is enough and have called for better vandalism/nonsense-edit detection like ClueBot does it on Wikipedia here which may also be some context for this thread. Prototyperspective (talk) 10:25, 18 August 2024 (UTC)
- Detection of nonsense after it was published is not our problem; this is possible with current filters. We do not have enough people looking at the filter hits and reverting the vandalism. We therefore need measures to reduce such edits. If we do not find a way to handle this we need to block IP edits entirely. GPSLeo (talk) 10:56, 18 August 2024 (UTC)
- I think we rather need measures to automatically revert such edits. Detection is a problem if it's not accurate enough: if it produces too many false positives, people won't implement it. The proposal is not just about detection but also about what follows from there – for example one could also automatically revert them but add the edit to a queue to check in case the revert is unwarranted. Prototyperspective (talk) 11:00, 18 August 2024 (UTC)
- We might want to experiment with mw:Moderator Tools/Automoderator. It probably won't work perfectly at a first experiment, but we will at least get some indication of where it works and where it doesn't. whym (talk) 01:18, 1 September 2024 (UTC)
- Very interesting! Thanks for the link, it's very constructive and if possible please let me know when WMC enables this or when there is some discussion about enabling it.
It could save people a lot of time and keep content here more reliable/higher quality. I think there's not even auto-detection for when e.g. 80% of a user's edits have been reverted, for checking the remainder and whether further action is due. Prototyperspective (talk) 23:18, 1 September 2024 (UTC)
- "I think we rather need measures to automatically revert such edits" – Absolutely yes. I think the risk of losing well-intentioned IP edits in Commons is quite low (for example, I had edited Wikipedia as an IP user many times before I registered, but I've never thought of editing Commons as an IP user). MGeog2022 (talk) 21:27, 26 September 2024 (UTC)
- CAPTCHAs are supposed to stop robots from spamming, right? Why would this stop some random human IP user from posting “amogus sussy balls”? Dronebogus (talk) 14:05, 18 August 2024 (UTC)
- Seconding this. CAPTCHAs should only be used to prevent automated edits, not as a means of "hazing" users making low-effort manual edits. Omphalographer (talk) 20:12, 18 August 2024 (UTC)
- Maybe candidates could be edits that are currently fully blocked but have some false positives that could be let through?
∞∞ Enhancing999 (talk) 13:59, 27 August 2024 (UTC)
- You did not consider the full rationale of OP, who wrote "huge amount of accidental […] edits", and this measure would be partly, and probably mainly, about greatly reducing accidental edits. OP however failed to name some concrete specific examples, which have been brought up in a thread elsewhere. I favor better detection of nonsense edits and automatic reverting of them, but requiring captchas for IP edits on mobile or for certain actions may still be worth testing for a month or two to see whether it actually reduces these kinds of edits. Prototyperspective (talk) 16:43, 27 August 2024 (UTC)
- I'd totally support requiring captchas for edits on mobile in general, not just for IP addresses. I know personally I make a lot of editing mistakes on mobile just because of how clunky the interface is. There have also been plenty of instances where I've seen pretty well-established users forget to sign their names or make other basic mistakes on mobile. So I think having captchas on mobile for everyone would be a really good idea. --Adamant1 (talk) 17:59, 27 August 2024 (UTC)
- In Special:Preferences there is an option "Prompt me when entering a blank edit summary (or the default undo summary)". Enabling this seems like a good way to provide a chance to briefly stop and review what you are trying to do. I wonder if it's possible to enable it by default. A captcha answer has no productive value, but a good edit summary will do. whym (talk) 01:15, 1 September 2024 (UTC)
- I'd support that as long as there's a way for normal, logged in users to disable it if they want to. I think any kind of buffer between making an edit and posting it would reduce bad edits though. Even ones that are clearly trolling. A lot of people won't waste their time if they have to take an extra step to post a message even if it's something like writing an edit summary. --Adamant1 (talk) 01:34, 1 September 2024 (UTC)
- There is something to be said for en:WP:PBAGDSWCBY and en:WP:ROPE (I know, we don't ban here, just substitute indef for ban). — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 02:01, 1 September 2024 (UTC)
- @Jeff G.: True. That's one of the main reasons I support requiring people to have an account since it seems to be much easier to track and ban editors that way. --Adamant1 (talk) 02:05, 1 September 2024 (UTC)
- @Adamant1: Like it or not, we still have "anyone can contribute" right on the main page. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 02:17, 1 September 2024 (UTC)
- @Jeff G.: Anyone can still contribute if we require accounts. I could see not requiring accounts if there was legitimate reason for it, but I've put a lot of thought into this over the last couple of years and can't think of one single legitimate reason why someone wouldn't be able to create one. We'll have to agree to disagree though. I can understand why they let IP edit Wikiprojects back in the day though, but the internet and people are just different now and the project should be able to adapt to the times. --Adamant1 (talk) 02:21, 1 September 2024 (UTC)
- This does not help in the cases this is about, as these types of edits always have auto-generated edit summaries and no way to edit the edit summary. GPSLeo (talk) 04:32, 1 September 2024 (UTC)
- Maybe that is a software problem to be fixed? It already says "(or the default undo summary)" after all. Reminding users to add a bit more to what's auto-generated seems like a natural extension. whym (talk) 18:54, 1 September 2024 (UTC)
- The Wikibase UI does not have such a feature and in the many years of Wikidata it was not considered a problem that changing the edit summary is not possible. GPSLeo (talk) 20:24, 1 September 2024 (UTC)
- Can Commons customize that in their Wikibase instance? It's not yet implemented in the Wikidata UI, but on the API level Wikibase supports edit summaries according to d:Help:Edit summary. whym (talk) 23:38, 1 September 2024 (UTC)
- I make much fewer editing mistakes on mobile when I use my new portable bluetooth mini keyboard. Touch-typing in the dark, however, can still be problematic. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 01:52, 1 September 2024 (UTC)
One week test
There is definitely no consensus to use this feature for now but there were some people suggesting to make a test. Therefore I would propose that we make a one week test and then evaluate the results. GPSLeo (talk) 19:37, 2 September 2024 (UTC)
- Why? How is that useful? No consensus to implement means no consensus to implement, period. I can guarantee it will not gain any more consensus with a test version. Dronebogus (talk) 21:08, 2 September 2024 (UTC)
- There were some people suggesting to make a test. There is also no consensus against some kind of measure. GPSLeo (talk) 14:11, 3 September 2024 (UTC)
- There is also no consensus for it. I feel like you’re just projecting whatever you like onto the discussion to make sure your proposal gets through somehow. It sucks when people don’t like your idea, but “seeing” consensus where none exists is not the way to fix that Dronebogus (talk) 19:54, 3 September 2024 (UTC)
- Oppose. As I noted above, this is not an appropriate use of CAPTCHAs - their purpose is to prevent automated edits by unauthorized bots, not to prevent "accidental or nonsense edits". Omphalographer (talk) 20:42, 3 September 2024 (UTC)
Simple edit confirmation
Instead of a CAPTCHA it is also possible to show a warning and require the user to confirm their edit. I would propose to make a one week test where we show IPs a warning "You are publicly editing the content of the page." and they have to hit the publish button again but with no CAPTCHA. GPSLeo (talk) 15:59, 26 September 2024 (UTC)
- Support Makes more sense. I think it's worth giving that a try but one week is short so somebody would need to have a good way of tracking relevant changes and creating some stats to see whether it's been effective. Or are there any better ideas what to do about Unregistered or new users often moving captions to other languages? Prototyperspective (talk) 20:58, 26 September 2024 (UTC)
- Support A month is probably better though. --Adamant1 (talk) 21:05, 26 September 2024 (UTC)
- Info @Prototyperspective and Adamant1: I made a draft for the message shown by the filter: MediaWiki:Abusefilter-warning-anon-edit. GPSLeo (talk) 09:29, 13 October 2024 (UTC)
- link to Commons:Project scope directly.
- put "If you are sure that your edit is constructive to this page please confirm it again." as last paragraph.
- We also recommend you to create an account, which allows you to upload your own photos or other free licensed content.
- RoyZuo (talk) 12:00, 18 October 2024 (UTC)
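Purely as an illustration of how these suggestions could be combined (this is a sketch, not the actual content of MediaWiki:Abusefilter-warning-anon-edit), the warning message might read something like:

```
You are publicly editing the content of this page.
Please make sure your change is within the [[Commons:Project scope|project scope]].
We also recommend you to [[Special:CreateAccount|create an account]], which allows you to upload your own photos or other free licensed content.
If you are sure that your edit is constructive to this page, please confirm it again.
```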
- I suggest the warning message includes a link to a place where users can give feedback (complain) so that we might see how many users are affected; and the test period be 1 month. 1 week is too short. collect stats over the period as often as possible (daily?). RoyZuo (talk) 06:34, 18 October 2024 (UTC)
- For the monitoring I created a tool [1]. The feedback is a problem as we had to protect all regular feedback pages due to massive vandalism and I think if we create a new page for this we would have to monitor it 24/7 and massively revert vandalism. GPSLeo (talk) 08:27, 18 October 2024 (UTC)
- Interesting tool. Please correct the typo in the page title and add a link to some wikipage about it (where is the software code, where to report issues or discuss it, is it CCBY). It does show edits that were later reverted in the charts and not edits that are reverts by date right? Prototyperspective (talk) 11:03, 18 October 2024 (UTC)
- I created a documentation page for the tool Commons:Revert and patrol monitoring. GPSLeo (talk) 11:01, 19 October 2024 (UTC)
- Can you keep the data from start to now, instead of only 1 month? RoyZuo (talk) 18:16, 20 October 2024 (UTC)
- The problem is that all edits become marked as patrolled after 30 days. It would be possible to also check the patrol log to get this data, but it would be a bit complicated and would require too many API requests for daily updates. For edit counts and revert counts it is not such a huge problem, but it is also a bit problematic when requesting data for a whole year every day, as edits might get reverted after a year. GPSLeo (talk) 18:40, 20 October 2024 (UTC)
- You could just keep the data as it was right before all edits become patrolled, right? RoyZuo (talk) 14:00, 23 October 2024 (UTC)
- When I first looked at the table, data was starting from 2024-09-17. Now you can keep, instead of erasing, data that "expires" after 1 month. RoyZuo (talk) 14:04, 23 October 2024 (UTC)
- You mean just keeping the row in the table without updating the number of reverted edits? GPSLeo (talk) 14:43, 23 October 2024 (UTC)
- From now on I will keep the data from the last day without updating it. In two months I will then need to update the design of the page; maybe I will make a sub page for the archive data. GPSLeo (talk) 16:04, 25 October 2024 (UTC)
- The feedback page is about this specific measure (double confirmation). It can be temporary, so that any existing users have a central page to complain. Imagine if you always edit without logging in, suddenly this double confirmation kicks in and you get frustrated. You want to complain, but don't know where. So if we have a link for them to write something, and if any of them bother to do so, we can see how many are affected and why, etc.
- Once the measure becomes permanent, users should just take it as it is; no point in complaining. RoyZuo (talk) 11:51, 18 October 2024 (UTC)
- We can give it a try. GPSLeo (talk) 10:30, 19 October 2024 (UTC)
- I just added the regular abuse filter error reporting link with a different text. GPSLeo (talk) 16:01, 25 October 2024 (UTC)
- If there are no concerns I would enable this for a first test. GPSLeo (talk) 06:35, 26 October 2024 (UTC)
- I just enabled this for first test run. GPSLeo (talk) 13:19, 27 October 2024 (UTC)
- I keep hitting this filter, every time I try to edit a page I keep getting this warning, which is odd. Also, we should track anonymous edits to see if this warning actually works or not, because it might just end up annoying only the productive people and not the real, actual vandals. -- Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 16:31, 27 October 2024 (UTC)
Almost everyone hit with this filter is a registered user with a Wikimedia SUL account; I barely see any unregistered users being warned at all... --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 16:34, 27 October 2024 (UTC)
- Fixed now. I accidentally had an OR and not an AND between anon and mobile condition. GPSLeo (talk) 16:57, 27 October 2024 (UTC)
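For readers not familiar with AbuseFilter rule syntax, here is a minimal sketch of the kind of boolean mistake described above. The condition names are illustrative, not a copy of the actual Commons filter, and a mobile-interface flag such as user_mobile is assumed to be available:

```
/* Unintended: OR matches every anonymous edit plus every mobile edit,
   so logged-in mobile editors were warned too. */
!("user" in user_groups) | user_mobile

/* Intended: AND matches only anonymous edits made via the mobile interface. */
!("user" in user_groups) & user_mobile
```

With the filter's action set to "warn", a match shows the message drafted at MediaWiki:Abusefilter-warning-anon-edit and the edit is only saved if the user presses publish a second time, matching the behaviour described in this thread.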
- Do you think you could make it work better for "new external links" edits? Every time those happen (often when I'm NOT attempting to add an external link), I have to put in a different CAPTCHA twice for the same edit. 2603:3021:3A96:0:4E4:95C6:BC81:D2E1 21:31, 28 October 2024 (UTC)
- If you get a CAPTCHA you triggered another anti spam mechanism that has nothing to do with AbuseFilters. GPSLeo (talk) 22:08, 28 October 2024 (UTC)
- How come this Abuse Filter seems to only happen for mobile devices? 2600:1003:B4C7:465D:0:2C:1E2C:4101 22:07, 21 December 2024 (UTC)
- @GPSLeo. RoyZuo (talk) 12:36, 5 January 2025 (UTC)
- For mobile edits from anonymous users, 5-20% of the edits are reverted. For non-mobile anonymous edits it is about 2-5%. GPSLeo (talk) 12:50, 5 January 2025 (UTC)
First results
After one week I looked at the share of reverted edits compared to all edits, and it shows a huge decrease in reverted edits while the number of edits only decreased slightly [2]. When looking at the filter hits it also shows that many nonsense edits were not sent while most useful edits were confirmed. GPSLeo (talk) 08:05, 3 November 2024 (UTC)
- @GPSLeo from when to when was the filter live? RoyZuo (talk) 18:09, 14 November 2024 (UTC)
- The filter with current settings was activated on 16:55, 27 October 2024 and is still active. As there were no complaints I would just leave it on as it seems to have at least a small positive effect. GPSLeo (talk) 19:47, 14 November 2024 (UTC)
- The graphs dont look much different before or after 27 October. ip users just happily keep on editing by tapping twice? if true that sounds like very stubborn ip users. RoyZuo (talk) 20:49, 24 November 2024 (UTC)
- Oh, I don't know...Do any of your graphs show vandalism? If most of those edits after October 27 aren't vandalism, I wouldn't call them "very stubborn IP users". (Although I might call some of them helpful.) 2603:3021:3A96:0:2101:3992:8630:68C6 14:24, 16 December 2024 (UTC)
- It doesnt imply the users are vandalising. I meant if I were a user running into the new double confirmation for every edit I would soon give up editing. RoyZuo (talk) 12:36, 5 January 2025 (UTC)
- Maybe they're taking more care to proofread their own writing than you do? I mean, you seemingly forgot to add apostrophes in "don't" & "doesn't" and you didn't capitalize "IP". Not that I'm a Grammar Nazi (I wouldn't hold those grammatical errors against you). But, maybe the IP users are being a little more meticulous than you are in your edits and/or using something like AutoComplete to type. 2603:3021:3A96:0:407D:9112:AB5D:B4D7 16:01, 7 January 2025 (UTC)
RfC: Changes to the public domain license options in the Upload Wizard menu
[edit]An editor has requested comment from other editors for this discussion. If you have an opinion regarding this issue, feel free to comment below. |
Should any default options be added or removed from the menu in the Upload Wizard's step in which a user is asked to choose which license option applies to a work not under copyright? Sdkb talk 20:19, 19 December 2024 (UTC)
Background
The WMF has been (at least ostensibly) collaborating with us during its Upload Wizard improvements project. As part of this work, we have the opportunity to reexamine the step that occurs after a user uploads a work that they declare is someone else's work but not protected by copyright law. They are then presented with several default options corresponding to public domain license tags, or a field to write in a custom tag.
It is unclear why these are the specific options presented; I do not know of the original discussion in which they were chosen. This RfC seeks to determine whether we should add or remove any of these options. I have added one proposal, but feel free to create subsections for others (using the format "Add license name" or "Remove license name" and specifying the proposed menu text). Sdkb talk 20:19, 19 December 2024 (UTC)
Add PD-textlogo
Should {{PD-textlogo}} be added, using the menu text "Logo image consisting only of simple geometric shapes or text"? Sdkb talk 20:19, 19 December 2024 (UTC)
- Support. Many organizations on Wikipedia that have simple logos do not have them uploaded to Commons and used in the article. Currently, the only way to upload such images is to choose the "enter a different license in wikitext format" option and enter "{{PD-textlogo}}" manually. Very few beginner (or even intermediate) editors will be able to navigate this process successfully, and even for experienced editors it is cumbersome. PD-textlogo is one of the most common license tags used on Commons uploads — there are more than 200,000 files that use it. As such, it ought to appear in the list. This would make it easier to upload simple logo images, benefiting Commons and the projects that use it. Sdkb talk 20:19, 19 December 2024 (UTC)
- Addressing two potential concerns. First, Sannita wrote, "the team is worried about making available too many options and confusing uploaders". I agree with the overall principle that we should not add so many options that users are overwhelmed, but I don't think we're at that point yet. Also, if we're concerned about only presenting the minimum number of relevant options, we could use metadata to help customize which ones are presented to a user for a given file (e.g. an .svg file is much more likely to be a logo than a .jpg file with metadata indicating it is a recently taken photograph).
- Second, there is always the risk that users upload more complex logos above the TOO. We can link to commons:TOO to provide help/explanation, and if we find that too many users are doing this for moderators to handle, we could introduce a confirmation dialogue or other further safeguards. But we should not use the difficulty of the process to try to curb undesirable uploads any more than we should block newcomers from editing just because of the risk they'll vandalize — our filters need to be targeted enough that they don't block legitimate uploads just as much as bad ones. Sdkb talk 20:19, 19 December 2024 (UTC)
- "we could use metadata" I'd be very careful with that. The way people use media changes all the time, so making decisions about how the software behaves on something like that, I don't know... Like, if it is extracting metadata, or check on is this audio, video, or image, that's one thing, but to say 'jpg is likely not a logo and svg and png might be logos' and then steer the user into a direction based on something so likely to not be true. —TheDJ (talk • contribs) 10:52, 6 January 2025 (UTC)
- Oppose. Determining whether a logo is sufficiently simple for PD-textlogo is nontrivial, and the license is already frequently misapplied. Making it available as a first-class option would likely make that much worse. Omphalographer (talk) 02:57, 20 December 2024 (UTC)
- Comment only if this will result in it being uploaded but tagged for review. - Jmabel ! talk 07:14, 20 December 2024 (UTC)
- That should definitely be possible to implement. Sdkb talk 15:13, 20 December 2024 (UTC)
- Support Assuming there's some kind of review involved. Otherwise Oppose, but I don't see why it wouldn't be possible to implement a review tag or something. --Adamant1 (talk) 19:10, 20 December 2024 (UTC)
- Support for experienced users only. Sjoerd de Bruin (talk) 20:20, 22 December 2024 (UTC)
- Oppose per Omphalographer. {{PD-textlogo}} can be used with a logo that is sufficiently simple in the majority of countries per COM:Copyright rules (first sentence in the USA and both countries per COM:TOO); my opinion (Google Translate). AbchyZa22 (talk) 11:02, 25 December 2024 (UTC)
- Oppose in any case. We have enough backlogs and don't need another thing to review. --Krd 09:57, 3 January 2025 (UTC)
- How about we just disable uploads entirely to eliminate the backlogs once and for all?[Sarcasm] The entire point of Commons is to create a repository of media, and that project necessarily will entail some level of work. Reflexively opposing due to that work without even attempting (at least in your posted rationale) to weigh that cost against the added value of the potential contributions is about as stark an illustration of the anti-newcomer bias at Commons as I can conceive. Sdkb talk 21:36, 3 January 2025 (UTC)
- Oppose. I think the template is often misapplied, so I do not want to encourage its use. There are many odd cases. Paper textures do not matter. Shading does not matter. An image with just a few polygons can be copyrighted. Glrx (talk) 19:47, 6 January 2025 (UTC)
- Support adding this to the upload wizard, basically per Sdkb (including the first two sentences of their response to Krd). Indifferent to whether there should be a review process: on one hand, it'd be another backlog that will basically grow without bound, on the other, it could be nice for the reviewed ones. —Mdaniels5757 (talk • contribs) 23:57, 6 January 2025 (UTC)
General discussion
Courtesy pinging @Sannita (WMF), the WMF community liaison for the Upload Wizard improvements project. Sdkb talk 20:19, 19 December 2024 (UTC)
- Thanks for the ping. Quick note: I will be on vacation starting tomorrow until January 1, therefore I will probably not be able to answer until 2025 starts, if needed. I'll catch up when I have a working connection again, but be also aware that new changes to code will need to wait at least until mid-January. Sannita (WMF) (talk) 22:02, 19 December 2024 (UTC)
- Can we please add a warning message for PDF uploads in general? This is currently enforced by an abuse filter, and is the second most common report at Commons talk:Abuse filter. And if they use pd-textlogo or PD-simple (or any AI tag) it should add a tracking category that is searched by User:GogologoBot. All the Best -- Chuck Talk 23:21, 19 December 2024 (UTC)
- Yes, please. Even with the abuse filter in place, the vast majority of PDF uploads by new users are accidental, copyright violations, and/or out of scope. There are only a few appropriate use cases for the format, and they tend to be uploaded by a very small number of experienced users. Omphalographer (talk) 03:11, 20 December 2024 (UTC)
- Comment, the current version of the MediaWiki Upload Wizard contains the words "To ensure the works you upload are copyright-free, please provide the following information.", but Creative Commons (CC) isn't "copyright-free", it is a free copyright ©️ license, not a copyright-free license. I'm sure that Sannita is keeping an eye on this, so I didn't ping <s>her</s> him. It should read along the lines of "To ensure the works you upload are free to use and share, please provide the following information.". --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 12:19, 24 December 2024 (UTC)
- @Donald Trung: Sannita (WMF) presents as male, and uses pronouns he/him/his. Please don't make such assumptions about pronouns. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:02, 24 December 2024 (UTC)
- My bad, I've corrected it above. For whatever reason I thought that he was a German woman because I remember seeing the profile of someone on that team and I probably confused them in my head, I just clicked on their user page and saw that it's an Italian man. Hopefully he won't feel offended by this mistake. Just saw that he's a fellow Whovian, but the rest of the comment remains unaltered as I think that the wording misrepresents "free" as "copyright-free", which are separate concepts. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 14:09, 24 December 2024 (UTC)
- (Hello, I'm back in office) Not offended at all, it happens sometimes on Italian Wikipedia too. Words and names ending in -a are usually feminine in Italian, with some exceptions like my name and my nickname that both end in -a, but are masculine. :) Sannita (WMF) (talk) 13:15, 2 January 2025 (UTC)
- Wiki markup: {{gender:Sannita (WMF)|male|female|unknown}} → male. Glrx (talk) 03:07, 3 January 2025 (UTC)
New template PD-textflag
Hello admins, a proposal: is it possible to create a new template for simple flags like this one (File:Bandera de Colina (Falcón).svg)? That flag contains text (below too). Do you agree with creating a new template "PD-textflag"? AbchyZa22 (talk) 17:22, 25 December 2024 (UTC)
- Comment @Glrx: any opinion? (Google Translate). AbchyZa22 (talk) 12:26, 6 January 2025 (UTC)
- I don't see the need. For flags, it's usually the individual drawings which have copyright, not the design. The particular vector instructions in an SVG might have a copyright, even if the visual result does not, for example. So we should keep that licensing statement on that SVG. {{PD-text}} is fine if there is a very particular circumstance where it makes sense (it's really just another name for PD-ineligible anyways). Carl Lindberg (talk) 12:51, 6 January 2025 (UTC)
- Oppose per Clindberg. Also, the affected illustrations would be few. Glrx (talk) 19:33, 6 January 2025 (UTC)
Category naming for proper names
There are currently multiple CfD disputes on the naming of categories for proper names (Commons:Categories for discussion/2024/12/Category:FC Bayern Munich and Commons:Categories for discussion/2024/12/Category:Polonia Warszawa). The problem is caused by an unclear guideline. At COM:CAT the guideline says: "Category names should generally be in English. However, there are exceptions such as some proper names, biological taxa and names for which the non-English name is most commonly used in the English language". The first problem is that sometimes people do not notice that there is no comma before the "for" and think that the condition applies to all cases. This might also be caused by some wrong translations. The other problem is the "some", as there are no conditions defined for when this applies and when it does not. I think we have four options:
- Translate all proper names
- Translate proper names when English version is commonly used (enwiki uses a translated name)
- Do not translate proper names but transcribe non Latin alphabets
- Always use the original proper name
Redirects can exist anyway. The question of what to do with locations that have multiple official local names in multilingual regions is a different topic, to be discussed after there is a decision on the main question. GPSLeo (talk) 11:40, 28 December 2024 (UTC)
- I don't think it's a bad thing that the rule gives room for case-by-case decisions. The discussions about this are very long, but it's rarely about a real problem with finding or organising content. So my personal rule would be something like ‘If it's understandable to an English speaker, is part of a subtree curated by other users on an ongoing basis, and you otherwise have no engagement in that subtree, don't suggest a move just because of a principle that makes no difference here.’ Rudolph Buch (talk) 14:37, 28 December 2024 (UTC)
- 100% That should be the standard. People are too <s>limp wristed</s> weak when it comes to dealing with obviously disingenuous behavior or enforcing any kind of standards on here though. But 99% of the time this is only a problem because someone wants to use category names as their personal nationalist project. It's just that no one is willing to put their foot down by telling the person that's not what categories are for. Otherwise this would be a nonissue. But the guideline should be clear that category names shouldn't be in the "native language" if it doesn't follow the category tree and/or is only being done for personal, nationalistic reasons. --Adamant1 (talk) 18:40, 28 December 2024 (UTC)
- I think that at least in most cases the right answer is something like #2 except:
- I wouldn't always trust en-wiki to get it right, especially on topics where only one or two editors have ever been involved there, and we might well have broader and more knowledgeable involvement here.
- Non-Latin alphabets should be transliterated.
- The thing is, of course, that is exactly the one that frequently requires judgement calls, so we are back where we started.
- Aside: in my experience, some nationalities (e.g. German) have a fair number of people who will resist the use of an English translation no matter how common, while others (e.g. Romanian) will "overtranslate". On the latter, as an American who has spent some time in Romania, I'm always amazed when I see Romanians opt for English translations for things where I've always heard English-speakers use the Romanian (e.g. "Roman Square" for "Piața Romana"; to my ear, it is like calling the composer Giuseppe Verdi "Joseph Green"). - Jmabel ! talk 18:59, 28 December 2024 (UTC)
- I've made the sentence in COM:CAT, quoted by OP, into a list, to remove ambiguity. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:56, 12 January 2025 (UTC)
- Oppose all four suggested solutions because they would be too disruptive, each. Several outcomes are possible: top-down regulations to be mass-executed by a handful of editors who could instead do more meaningful work, while a large group of editors rises up in protest against a consensus that they didn't participate in (see recently: "Historical images of..."), is one possible outcome. Another possibility is a toothless rule that is generally ignored in practice, except when it can be wielded to cudgel others.
The current rule for proper names is sufficient and remains flexible enough to be handled by contributors of all kinds. My arguments against each of the four general ideas: Solutions 1 and 2 are hardheaded English WP supremacy, those should be discarded right away; Commons is a multilingual community project. Solution 3 sounds best at first, but it is again inflexible: Non-Latin category names exist by the thousands - Cyrillic and East Asian publications most prominently (Category:武州豊嶋郡江戸庄図, note how the Japanese title uses Kanji), and it's not as if transliterated-to-Pinyin Chinese is easier to understand for non-writers. That means (imo) that on the lowest levels in the category tree, native names should be allowed in whatever script, as long as the generic parent categories like "Category:Books from Russia about military" that would be used to navigate the cat-tree are still in English. Regarding solution #4: "Always proper names" could be interpreted by some to raise language barriers against foreign editors on a much higher level: I prefer to find Chinese provinces under English category names like "Anhui" and "Guangdong", not as Category:北京. I prefer Arabic personal names transliterated (by whatever method, even), and so on. --Enyavar (talk) 22:10, 27 January 2025 (UTC)
RfC: Should Commons ban AI-generated images?
[edit]An editor has requested comment from other editors for this discussion. If you have an opinion regarding this issue, feel free to comment below. |
Should Commons policy change to disallow the uploading of AI-generated images from programs such as DALLE, Midjourney, Grok, etc per Commons:Fair use?
Background
AI-generated images are a big thing lately and I think we need to address the elephant in the room: they have unclear copyright implications. We do know that in the US, AI-generated images are not copyrighted because they have no human author, but they are still very likely considered derivative works of existing works.
AI generators use existing images and texts in their datasets and draw from those works to generate derivatives. There is no debate about that, that is how they work. There are multiple ongoing lawsuits against AI generator companies for copyright violation. According to this Washington Post article, the main defense of AI generation rests on the question of if these derivative works qualify as fair use. If they are fair use, they may be legal. If they are not fair use, they may be illegal copyright violations.
However, as far as Commons is concerned, either ruling would make AI images go against Commons policy. Per Commons:Fair use, fair use media files are not allowed on Commons. Obviously, copyright violations are not allowed either. This means that of the two possible legal decisions about AI images, both cannot be used on Commons. There is no possible scenario where AI generated images are not considered derivative in some way of copyrighted works; it's just a matter of if it's fair use or not. As such, I think that AI-generated images should be explicitly disallowed in Commons policy.
Discussion
Should Commons explicitly disallow the uploading of AI-generated images (and by proxy, should all existing files be deleted)? Please discuss below. Di (they-them) (talk) 05:00, 3 January 2025 (UTC)
- Enough. It is a great waste of time to have the same discussion over and over and over. I find it absurd to think that most AI creations are going to be considered derivative works. The AI programs may fail that test, but what they produce clearly isn't. Why don't we wait until something new has happened in the legal sphere before we start this discussion all over?--Prosfilaes (talk) 06:21, 3 January 2025 (UTC)
- Oppose No, it shouldn't, and they are not derivative works; and if they are uploaded by the person who prompted them they are also not fair use but PD (or maybe CC BY). They are not derived from millions of images, just as images you draw are not "derived" from public works you previously saw (like movies, public exhibitions, and online art) that inspired or at least influenced you.
"There is no debate about that, that is how they work." – False.
"the main defense of AI generation rests on the question of if these derivative works qualify as fair use." – Also false. Prototyperspective (talk) 09:52, 3 January 2025 (UTC)
- Most AI-generated images, unless the AI is explicitly told to imitate a certain work, are not "derivative works" in the sense of copyright, because the AI does a thing similar to humans when they create new works: Humans have knowledge of a lot of pre-existing works and create new works that are inspired by them. AI, too, "learns" for example what the characteristics of Impressionist art are through the input of a lot of Impressionist paintings, and is then able to create a new image in Impressionist style, without that image being a derivative work of any specific work where copyright regulations would apply - apart from the fact, of course, that in this specific example, most of the original works from the Impressionist period are public domain by now anyway. The latter would also be an argument against the proposal: Even if it were the case that AI creates nothing but "derivative works" in the sense of copyright, derivative works of public domain original art would still be absolutely fine, so this would be no argument for completely banning AI images. Having said all that, I think that we should handle the upload of AI images restrictively, allow them only selectively, and Commons:AI-generated media could be a bit stricter. But a blanket ban wouldn't be a reasonable approach, I think. Gestumblindi (talk) 11:12, 3 January 2025 (UTC)
- We want images for a given purpose. It's a user who uploads such an image. He is responsible for his work. We shouldn't care how much assistance he had in the creation process. But I'd appreciate an agreement on banning photorealistic images designed for deceiving the viewer. AI empowers users to create images of public (prominent) people and have these people appear more heroic, evil, clean, dirty, important or whatever than they are. But we have this problem with photoshop already. I don't want such images in Wikimedia even if most people know a given image to be a hoax (such as those of Evil Bert from sesame street). Vollbracht (talk) 01:42, 4 January 2025 (UTC)
- This discussion isn't about deception or usefulness of the images, it's about them being derivative works. Di (they-them) (talk) 02:12, 4 January 2025 (UTC)
- You got the answer on "derivative works" already. I can't see a legal difference between a photoshopped image and an image altered by AI or a legal difference between a paintbrush artwork and an AI generated "artwork". Still as Germans say: "Kunst kommt von können." (Art comes from artistic abilities.) It's not worth more than the work that went into it. If you spend no more than 5 min. "manpower" in defining what the AI shall generate, you shouldn't expect to have created something worthy of any copyright protection or anything new in comparison to an underlying work of art. We don't need more rules on this. When deriving something keep the copyright in mind - no matter what tool you use. Vollbracht (talk) 03:34, 4 January 2025 (UTC)
- Look at other free-upload platforms and you get to the inevitable conclusion that AI uploads will ultimately overwhelm Commons by legal issues or sheer volume. Because people. But with no new legal impulses and no cry for action from tech Commons, I see no need for a new discussion at this point. Alexpl (talk) 05:59, 4 January 2025 (UTC)
- As I understand it, there are three aspects of an AI image:
- The creations caused by the computer algorithm. Probably not copyrighted anywhere because an algorithm is not an animal.
- An AI prompt, entered by a human. This potentially exceeds the threshold of originality, in which case the AI output probably is a derivative work of the prompt. Maybe we need a licence of the AI prompt from the person who wrote it, unless the prompt itself is provided and determined to be below the threshold of originality.
- Sometimes an AI image or text is a derivative work of an unknown work which the AI software found somewhere on the Internet. Here it might be better to assume good faith and only delete if an underlying work is found. --Stefan2 (talk) 11:54, 7 January 2025 (UTC)
- Re 2: note that short quotes can also be put onto Wikipedia, which is CC BY-SA, and Wikiquote. Moreover, that applies to the prompt, but media files can also be uploaded without the input prompt attached. In any case, if the prompt engineer licenses the image under CC BY or PD then it can be uploaded, and I only upload these kinds of AI images even if further ones may also be PD. Re 3: that depends on the prompt; if you're tailoring the prompt in some specific way so it produces an image like that, then it may create an image looking very similar... e.g. if you prompt "La Vie, 1903 painting by Pablo Picasso, in the style of Pablo Picasso, the life" it's likely to produce an image looking like the original. I also don't think it would be good to assume that active contributors would do so without disclosing it. Prototyperspective (talk) 12:16, 7 January 2025 (UTC)
- If you ask for "La Vie, 1903 painting by Pablo Picasso, in the style of Pablo Picasso, the life", then you are very likely to get a derivative work.
- If you ask for "a picture of a cat", then there is no problem with #2, but you have no way of knowing how the AI tool produced the picture, so you are maybe in violation of #3 (you'll find out if the copyright holder sues you). --Stefan2 (talk) 12:53, 7 January 2025 (UTC)
- Oppose Whatever the details of AI artwork and derivatives are, there's a serious lack of people checking for copyright violations to begin with, and anyone who tries to follow any kind of standards when it comes to AI artwork just gets cry-bullied, threatened, and/or sanctioned for supposedly causing drama. So there's really no point in banning it or even moderating it in any way whatsoever to begin with. The more important thing is properly labeling it as such and not letting people pass AI artwork off on here as legitimate, historically accurate images. The only other alternative would be for the WMF to take a stance on it one way or another, but I don't really see that happening. There's nothing that can or will be done about all the AI slop on here until then though. --Adamant1 (talk) 06:51, 9 January 2025 (UTC)
- Conditional Support. I do not support an outright and total ban of any and all AI generated imagery (in short: AI file) on Commons, that's going too far. But I would support a strict enforcement and a strict interpretation of our scope policy in regards to such imagery. By that, I mean the following.
- I support the concept that any upload of AI generated imagery has to satisfy the existence and demonstration of a concise and legitimate use case on Wikimedia projects before uploading the data on Commons. If any AI file is not used, then it's blanketly out of scope. Reasoning: Most Wikimedia projects have a rule of only holding verifiable information. AI files have a fundamental issue with this requirement of verifiability, as the LLM models (Large Language Models) used do not allow for a correlation between input and output. This is exemplified by the inability of the LLM creators to remove the results of rights-infringing training data from the processing algorithms; they can only tweak the output to forbid the LLM outputting infringing material like song or journalistic texts.
- I support a complete ban of AI generated imagery that depicts real-life celebrities or historical personages. For celebrities, the training data is most likely made of copyrighted imagery, at least partly. For historical personages, AI files will likely deceive a viewer or reader in that the AI file is historically accurate. Such a result, deceiving, is against our project scope, see COM:EDUSE.
- I support the notion of using AI files to illustrate concepts that fall within the purview of e.g. social sciences. I could very well see a good use case to illustrate e.g. poverty, homelessness, sexuality topics and other potentially contentious themes at the discretion of the writing Wikipedian. AI files may offer the advantage in that most likely no personality rights will get touched by the depiction. For this use case, AI files would have to strictly satisfy our COM:Redundant policy: as soon as there is an actual human made media file, a photograph, movie or sound recording that actually fulfils the same purpose as the AI file, then the AI file gets blanketly out of scope.
- I am aware that these opinions are quite strict and more against AI generated imagery. That's due to my background thoughts about the known limitations of generative software and a currently unclear IP right situation about the training data and the output of these LLM. I lack the imagination on how AI files could currently serve to improve the mission of disseminating knowledge, save for some limited use cases. Regards, Grand-Duc (talk) 19:07, 9 January 2025 (UTC) PS. For further reference: Commons:Deletion requests/Files in Category:AI-generated portraits.
- Re 1.: some people complain that people upload images without use case, other people complain when people add useful media they created themselves to articles – it's impossible to make it right. Moreover, Commons isn't just there as a hosting site for Wikipedia & Co but also a standalone site. Your point about LLM is good and I agree but this discussion is not about LLMs but AI media creation tools.
- Re 2.: paintings are also inaccurate. Images made or modified with AI (or made with AI and then edited with eg Photoshop) are not necessarily inaccurate. I'm also very concerned about the known limitations of generative software but that doesn't really support your points and doesn't support that Commons should censor images produced with a novel toolset. Prototyperspective (talk) 19:34, 9 January 2025 (UTC)
- All the AI media creation tools, be they Midjourney, Grok, Dall-E or the plethora of other offerings, are based upon LLMs. So, any discussion about current "AI media creation tools" is the same as discussing the implications of LLMs in practice, IP law and society. And yes, Commons wants to also serve other sites and usages (like school homework for my son, did so in the past and will do in the future). But as anybody may employ generative AI, there is no need to use Commons to endorse any and all potential use - as I tried to demonstrate, AI files are only seldom useful to disseminate knowledge, see Commons:Project scope.
- Paintings are often idealized, yes, introducing inaccuracies. But in that case, the work is vouched for by a human artist, who employed his creativity and his knowledge based upon the learnings in his life to produce a given result. These actions cannot be duplicated at the moment by generative AI, only imitated. And while mostly educated humans will recognize a painting as a creation of a fellow human that will certainly contain inaccuracies, the stories about "alternative facts", news bubbles, deepfakes etc. show that generative AI products are often not recognized as such and are taken at face value. Regards, Grand-Duc (talk) 19:56, 9 January 2025 (UTC)
- No, those are not the same implication. You however got closer to understanding the concept and basics of prompt engineering which is about getting the result you intend or imagined despite all the flaws LLMs have.
People have developed all sorts of techniques and tricks to make these tools produce the images they have in mind at a quality they'd like. If you think people ask AI generator tools to illustrate a subject by just providing the concept's name, like "Cosmic distance ladder", and then assume it produces an accurate, good image showing that, you'd be wrong. Moreover, most AI images do look like digital art and not photos and are generally labelled as such. Prototyperspective (talk) 22:05, 9 January 2025 (UTC)
- Oppose per Prosfilaes; it is not at all guaranteed that we're in a Hobson's choice here. Some AI images may well be bad, but banning them all just in case is ridiculous. --GRuban (talk) 21:53, 9 January 2025 (UTC)
- Support with the possible exception of images that are themselves notable. Blythwood (talk) 20:22, 11 January 2025 (UTC)
- Mostly support, but not for the reasons proposed. While I don't disagree with the argument that AI-generated content could potentially be considered a derivative work, this argument isn't currently supported by law, and I don't think that's likely to change in the near future. However, very few AI-generated images have realistic educational use. Generated images of real people, places, or objects are inherently inferior to actual photos of those subjects, and often contain misleading inaccuracies; images of speculative topics tend towards clichés (like holograms and blue glowing robots for predictions of the future, or ethnic stereotypes for historical subjects); and AI-generated diagrams and abstract illustrations are inevitably meaningless gibberish. The vast majority of AI-generated images inserted into Wikimedia projects are rejected by editors on those projects and subsequently deleted from Commons; those that aren't removed tend to be more the result of indifference (especially on low activity projects) than because they actually provide substantial educational value. Omphalographer (talk) 20:15, 20 January 2025 (UTC)
- Oppose - No evidence there is actual legal risk. The U.S. Copyright office has declared many times now that A.I. generated images are not copyrighted unless they are clearly derivative works. Any case of an image being a derivative work needs to be handled on a case-by-case basis, just like any other artwork. If Commons is actually concerned about copyright issues with derivative works, we need to delete about a thousand cosplay images first. (No, I'm not saying that all cosplay images are copyrighted derivative works, but a lot of them are.) Nosferattus (talk) 02:02, 21 January 2025 (UTC)
- Oppose if a replacement sister project is not established. Support if a sister project like meta:Wikimedia Commons AI is introduced and AI images can be moved to that project. As I already stated in 2024, I believe that AI-generated images and human-made images should be kept separate in order to protect, preserve, defend and encourage the integrity of purely human creativity. S. Perquin (talk) 20:33, 22 January 2025 (UTC)
- Oppose A blanket ban because "If they are fair use, they may be legal. If they are not fair use, they may be illegal copyright violations." is not accurate: as courts have already found in the United States, intellectual property such as a copyright can only be applied to a human person intentionally making a creative work, not a software process or an elephant or a hurricane in a paint store. Individual AI-generated works may well be copyright infractions, but that would be for the same reasons as if a human person made a work that was influenced by existing copyrighted works, such as being virtually identical to the source work. I am not a lawyer and nothing I write here or anywhere should be taken as any kind of proper financial, legal, or medical advice. —Justin (koavf)❤T☮C☺M☯ 22:54, 22 January 2025 (UTC)
- A blanket ban for copyright reasons would likely encompass a number of uses that would not violate copyright. If this is unwarranted, there may be smaller categories that are more reasonable to consider per Commons:PRECAUTIONARY. For example, AI images of living individuals we do not have a free copy of might be one area where this could apply. I recall there was an AI image of an en:Brinicle discussed previously and deleted, which very obviously resembled the BBC footage of a Brinicle, likely as that was the first ever footage of this phenomenon and remains part of a very limited set, but it's hard to work that into a general prohibition. CMD (talk) 02:28, 23 January 2025 (UTC)
- @Koavf: One issue with AI generated images is that we don't usually have a way of knowing the country of origin, and they are currently copyrighted in the United Kingdom although they are PD in the United States. The issue is that policy requires something not be copyrighted in the country of origin, not just the United States. That's not to say AI generated images can't just be nominated for deletion on a "per image" basis when (or if) it's determined that said image was created in a country where they are copyrighted, but that goes against Commons:PRECAUTIONARY and no other images get a free pass from it in the same way that AI generated artwork seems to. I.e., some people have made the argument that there doesn't need to be a source for AI generated images "because AI", which clearly goes against the guidelines. Not to say I think there should be a blanket ban either, but there should at least be more scrutiny when it comes to where AI artwork on here originates from, and enforcement of Commons:PRECAUTIONARY when it isn't clear. --Adamant1 (talk) 08:13, 23 January 2025 (UTC)
- Oppose nothing new provided here. Until there is a broad legal consensus they’re copyright violations, they’re legal. And we can’t ban an entire medium just because there’s a lot of justifiable controversy around it— how are we supposed to illustrate DALL-E itself in that case? Dronebogus (talk) 09:08, 23 January 2025 (UTC)
- Like Grand-Duc, Omphalographer and others said, I think it's better to argue from a COM:SCOPE standpoint. I think it's worthwhile to add illustrative examples to the said policy or to a subpage of it, if necessary - examples where an AI image is unlikely to be in scope, perhaps along with other similar materials like amateur artworks. --whym (talk) 08:56, 25 January 2025 (UTC)
- Oppose per Blythwood. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:39, 25 January 2025 (UTC)
- Oppose a blanket ban as per the OP; illustrations made by AI are potentially useful. However if we want to keep them, AI creations have to adhere to the Commons:Scope and should be judged more critically than other content. AI images that are just created and uploaded for no educative purpose should get deleted, especially if someone makes them en masse. AI images intended for misinformation should also lead to user bans (on repeated offenses after fair warnings). Best, --Enyavar (talk) 22:23, 27 January 2025 (UTC)
Upload of preview images for existing svg files
[edit]If we allowed the original uploader of an SVG file to provide manually generated reference preview PNG files, we'd have a number of advantages:
- The uploader could provide resolutions optimized for the purpose the SVG file was designed for.
- The reference preview could show how rsvg-convert should have rendered the SVG file in case of unexpected problems. If it's the uploader's fault, we, the community, could give helpful hints. Otherwise we could suggest workarounds or find an admin who might solve the problem.
- The reference preview could reveal how the user agent (Firefox, e.g.) should render the SVG file. In case of differences the user might recognize the necessity to install a given font (listed in meta:SVG fonts) to have their user agent render the file the intended way.
- The file could then probably be used for its purpose, e.g. in WP, in spite of rendering problems with rsvg-convert.
Current example: I just uploaded file:Arab Wikimedia SVG fonts.svg. It started with <svg version="1.1" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="210mm" height="594mm" viewBox="0 0 3535 9999">. Client-side rendering allows printing this file on two A4 pages with perfectly rendered characters. It was impossible, though, to get an automatically generated preview file with even a single character being readable. I had to change the attributes to <svg ... width="3535" height="9999" viewBox="0 0 3535 9999">. But now the information on intended image and font sizes is gone. Vollbracht (talk) 00:58, 4 January 2025 (UTC)
- Oppose. SVG intends to be scalable (i.e., no fixed size), so the notion of fixed or minimum sizes is counter to the intent. In general, SVG does not scale fonts linearly. Text that is a few pixels high will not be readable. Furthermore, font specifications and substitutions are problematic. SVG files that use "text" elements should expect font substitutions rather than exact rendering. Setting "width" and "height" is also problematic: do you want to specify a fixed size, do you want the image to fill its designated container, or do you want to be able to pan/zoom in that container? SVG also does not have the notion of a "page". Glrx (talk) 01:31, 4 January 2025 (UTC)
- SVG intends to stay in high quality when scaling. It's not limited to presentations that are size independent or tied to specific output media. SVG allows drawing a ruler that in original size has correct dimensions when printed or shown on a correctly installed monitor. Sometimes I want to pan/zoom my container. Sometimes I don't. SVG allows both (even scaling one axis only).
- And how does SVG scale fonts if not linearly? What are pixels but a unit of measurement based on 96 dpi monitors? And, yes, SVG files that use "text" elements should expect font substitutions, but within limits.
- My problem in the current example was that rsvg-convert took my mm information as based on 96 dpi monitors as well. But per definition they are to be applied to different output media. So ideally preview images should have been generated for 220 dots horizontally in total (for WP thumbs), 96 dpi (for classic low-resolution monitors) and at least 300 dpi (for low-resolution laser printers). Vollbracht (talk) 02:42, 4 January 2025 (UTC)
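As an illustration of the kind of reference previews discussed above, here is a minimal sketch of generating the three renderings locally. It assumes librsvg's rsvg-convert command-line tool is installed and uses a hypothetical input file named input.svg; it is only an example of the idea, not an existing Commons workflow.

```python
import subprocess

SVG = "input.svg"  # hypothetical input file

# 220 px wide rendering, roughly the default Wikipedia thumbnail width.
subprocess.run(["rsvg-convert", "-w", "220", "-o", "preview_220px.png", SVG], check=True)

# Physical units (mm, pt) interpreted at 96 dpi, the classic low-resolution monitor assumption.
subprocess.run(["rsvg-convert", "--dpi-x", "96", "--dpi-y", "96",
                "-o", "preview_96dpi.png", SVG], check=True)

# Physical units interpreted at 300 dpi, roughly what a low-resolution laser printer expects.
subprocess.run(["rsvg-convert", "--dpi-x", "300", "--dpi-y", "300",
                "-o", "preview_300dpi.png", SVG], check=True)
```

The --dpi-x/--dpi-y options only change how physical units such as mm are mapped to pixels, which is exactly the interpretation issue described in the comment above.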
- Oppose, mostly. Providing a preview at upload time of how Wikimedia's servers will render an SVG (and a warning if it fails to render) is a good idea, and one which I think should be followed up on. But allowing uploaders to override that preview with a custom image is not viable - it'd inevitably lead to situations where there are mismatches between SVG content and its previews, especially as files are updated. If you're unhappy with how an SVG is rendered, change the SVG to render properly, or file a bug if something is unambiguously wrong. Omphalographer (talk) 02:36, 4 January 2025 (UTC)
- Yes! Mismatches between custom preview and updated SVG files are a problem. So in most cases we will avoid that rather than accepting such problems. But at least in some cases a solution could be defining a custom set of preview resolution definitions. What are the chances of getting such a possibility sometime in the future? Vollbracht (talk) 02:53, 4 January 2025 (UTC)
- What are you trying to accomplish, precisely? MediaWiki generates image thumbnails on demand - the set of resolutions listed on the file page is just a couple of guesses at sizes that users might want to look at, not the sum total of all sizes that can be generated. Omphalographer (talk) 03:30, 4 January 2025 (UTC)
- The user provided an example of the problem and proposed a solution. It seems to me that they were perfectly clear about what they are "trying to accomplish". The MediaWiki software is faulty WRT SVG, and they propose a fine workaround that can be adopted immediately, while the SVG problem has been there for many years and will probably stay for many more years. C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 10:00, 4 January 2025 (UTC)
Bø
[edit]Happy new year folks! "Category:Bø, Midt-Telemark" should be merged with "Category:Bø i Telemark", because it is the same city. Tollef Salemann (talk) 23:12, 4 January 2025 (UTC)
- Some of the pictures in "Bø, Telemark" refer to the former municipality, but the rest are just the city. Not sure what to do with some of it and what the easiest way to solve it is. Maps are of the municipality, but most of the stuff is the city and people from the city. Tollef Salemann (talk) 23:15, 4 January 2025 (UTC)
Restrict administrators from blocking or sanctioning users in certain instances
[edit]I'm not going to point fingers, but there have been multiple instances over the last couple of years where I've seen administrators block people in cases where they were clearly involved in a dispute with the user at the time and/or had very little participation in the project to begin with. Probably in the second instance it was because they were canvassed off-site, which should never be acceptable. So I'm proposing two things here.
1. An administrator should not be able to block or sanction a user that they are clearly involved in a personal dispute with at the time.
2. Administrators who have little or no participation in the project should not be doing "drive by" blocks or sanctions, period.
Nor should an administrator who meets either criteria be able to deny an unblock request.
In both cases the block, sanction, or denial of an unblock request should be reversed as invalid. There's absolutely no instance where an administrator should be able to block someone to win an editing dispute or do so as a way to prove a point because they don't like the user or how they communicate. Much less should an administrator who only superficially participates in the project be able to block or sanction people. There are enough well-established, uninvolved administrators to block or sanction a user if their behavior is actually that much of an issue. Adamant1 (talk) 08:30, 9 January 2025 (UTC)
- Oppose There's clearly a specific incident that you see as a problem. If that's the case, you should go to Commons:Administrators' noticeboard and ask for a review of that specific administrator's actions. This proposal, as written, is a) too vague to be enforceable (what constitutes "involved" and "little participation"?), and b) already reflects community norms (if an admin is blocking someone to "win" a personal dispute, that's already a problem, hence me suggesting you go to COM:AN). The Squirrel Conspiracy (talk) 08:46, 9 January 2025 (UTC)
- @The Squirrel Conspiracy: It shouldn't be that hard to figure out when an administrator blocked a user to get their way in a personal dispute. It's not that vague of a word. Also, you can say it's already a problem, but it happens pretty frequently on here and it's never reverted because people play defense for the admin or act like the user is making excuses for their behavior. There's no reason the block would be reverted if there's no guideline saying it's not acceptable anyway. By "little participation" I mean an administrator who has only made a few edits in the last year and/or has very little experience with the project outside of that issue. Again, it shouldn't be that hard to determine if an administrator is established here or not. Just look at their edit history. If it's essentially non-existent and they're clearly here just to block the user, but not do any other editing, then they aren't established enough. It's not that complicated. --Adamant1 (talk) 09:34, 9 January 2025 (UTC)
- i would give weak support to this. but i agree with squirrel. modern_primat ඞඞඞ ----TALK 14:12, 9 January 2025 (UTC)
- and also, an admin should not block a user they have had (personal) trouble with in the past. that admin should just report them at com:an/u. modern_primat ඞඞඞ ----TALK 14:15, 9 January 2025 (UTC)
- Oppose. Admins are already elected partially due to their activity. Admins who are inactive are already automatically removed, and we already have deadminship procedures. Specific Admin actions may already be addressed at COM:AN. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 09:47, 9 January 2025 (UTC)
- @Jeff G.: If this stuff already isn't acceptable why not make it a part of Commons:Blocking policy then? Seriously, if the proposal already reflects community norms then what's the difference if it's part of the blocking policy? --Adamant1 (talk) 10:06, 9 January 2025 (UTC)
- I think it could be added somehow but not with the strict wording you proposed. Most blocks by involved admins are emergency blocks to stop ongoing harassment or edit wars. Such blocks need to be allowed as we do not have enough admins to always get a second opinion within a very short time. For unblocks we already have an uninvolved admin guideline. I think we should make the inactivity guidelines a bit more strict so that the technical number of admins gets closer to the number of really available admins. GPSLeo (talk) 12:07, 9 January 2025 (UTC)
- You make a fair point. I'm not necessarily looking to keep admins from being able to do blocks in cases of edit warring or harassment. So I don't have an issue with the specific wording being loosened or otherwise modified if this is approved. Usually proposals are rough drafts of the final wording in the guideline anyway. Now that you mention it though, the "drive by" blocking could probably be solved by just making the inactivity guidelines a little more strict. I don't have an issue with doing that either. --Adamant1 (talk) 12:22, 9 January 2025 (UTC)
- Oppose When I give a warning to an ill-behaved user, there's about a one-in-five chance that they then attack me. There is no way in the world that should disqualify me from blocking them because they have created a "conflict" or "dispute" with me.
- On the other point: if someone has qualified as an admin, the community has decided that they generally trust this person's judgment. If they are now less active on Commons, that doesn't mean their judgment has deteriorated. I'm not terribly active on en-wiki, where I remain an admin. I still would have no hesitancy to block someone there if I ran across something egregious. - Jmabel ! talk 17:46, 9 January 2025 (UTC)
- I will add, though: there is a problem with certain admins using a block when they are in a content dispute with someone, something where a block should never have entered the picture. At most, they should have brought that to COM:AN/U and let someone else make a decision. If some admin has a pattern of doing that repeatedly, someone should make the case to have them de-admin'd. But the problem isn't that they blocked someone they were in conflict with, the problem is that they blocked someone because they were in conflict with them. - Jmabel ! talk 17:51, 9 January 2025 (UTC)
- @Jmabel: The problem is that someone who's blocked inherently can't take it to ANU. Then you end up with situations like what happened with Enhancing999 where he stopped contributing because his complaints after the fact weren't taken seriously. I've run into similar situations myself. The fact is that it's much harder (if not impossible) to deal with a bad block after the person is unblocked. You can't call foul while blocked either because admins just play defense for each other and reject unblock requests by default regardless of the actual merits. So involved blocks just shouldn't happen to begin with. It's certainly not something that's worth losing otherwise productive editors over. --Adamant1 (talk) 22:50, 9 January 2025 (UTC)
- Clearly, the person who has been blocked can't take it to AN/U, at least not for the duration of the block. The point is that someone else who sees a pattern of abuse by an administrator can.
- @Adamant1: unless I'm mistaken, you've been banned from bringing issues to AN/U yourself, because it was perceived that you abused that. (Correct me if I am wrong about the ban.) I think that you are skating on thin ice here discussing particular AN/U cases here. I was going to let it slide because your initial proposal made a point of not singling anyone out, but now you have.
- Since you bring up that specific case, I will briefly address it here but, again, I'd prefer you drop the matter for the reasons just stated. The only time User:Enhancing999 has ever been blocked, they were blocked for a week. I see nothing wrong with the process. There was a broad consensus to block. An uninvolved administrator, Taivo, came in and decided the length of the block, and decided precisely that only this short block was in order. Frankly, Taivo may have been doing Enhancing999 a relative favor: someone else might have blocked them a lot longer. There was certainly nothing wrong with him coming in, sizing up the discussion, and making a determination. That is a lot of what admins constantly do on DRs and the like. It is no less appropriate on AN/U. - Jmabel ! talk 23:08, 9 January 2025 (UTC)
- @Jmabel: I specifically avoided mentioning ANU in the original proposal and none of the instances that I have in mind specifically involve ANU. I'm not topic banned from discussing administrator behavior in general either, and if an administrator blocks someone that they're in a dispute with, it inherently doesn't involve ANU. THAT'S THE PROBLEM!!!!! So I don't see what the issue with this proposal is in that regard. The same goes for me referring to ANU in an offhand way. Correct me if I'm wrong, but it's not a violation of a topic ban to say ANU isn't the appropriate way to deal with something if someone else brings it up.
- With User:Enhancing999 my issue is purely with how it was handled on his talk page after he was blocked. I don't care about, nor was I involved in the ANU complaint. But it's not an ANU issue at that point as far as I'm concerned. Say it is though, cool. Then I'll purely speak about my own experiences. At least in my experience I was blocked by a clearly involved admin (again, in a way that didn't involve ANU what-so-ever) and there wasn't any way to deal with it either at the time or after the fact. But apparently I should just accept that and not talk about it because I was topic banned from ANU a year later. Even though again, it had absolutely nothing to do with ANU. Right. --Adamant1 (talk) 23:22, 9 January 2025 (UTC)
- @Adamant1, I'm not involved, and I'm not an admin, but I can warn you to be extremely careful when dealing with your topic ban. @Jmabel has been incredibly patient and mellow with you, but you are reaching the end of the ROPE. Be careful with what you say next, and I would recommend taking a walk after writing your next post, but before posting it. All the Best -- Chuck Talk 01:02, 10 January 2025 (UTC)
- I don't have anything else to say about it. The fact is that there aren't and never will be even the most basic standards for how admins behave or use their privileges on here. I have a right to say that something has absolutely nothing what-so-ever to do with ANU on my end if someone claims I'm violating the topic ban in the meantime though. I didn't say crap about ANU and I'm not responsible for what other people decide to talk about. Have fun shooting the messenger though. It's impossible to discuss anything on here from a general perspective without it turning personal.
- No other website deals with problems in the same super pedantic, needlessly personal way that things are constantly discussed on here. There have been a ton of discussions over the years about admins unilaterally using their privileges to push their own personal opinions or way of doing things. Nothing is ever done about it though because this is exactly how every single conversation goes. All I'm asking for here is for there to be minor, basic standards for when admins are allowed to unilaterally block someone. But let's not do that even though it's clearly a problem and leading people to not contribute to the website, just because I'm topic banned from an unrelated area that has absolutely nothing to do with this. Adamant1 (talk) 01:41, 10 January 2025 (UTC)
- BTW with Enhancing999, I had gotten into it with him over the exact same thing that he was blocked for a couple of days before he was blocked. It's absolutely within my right to discuss something that I was involved in, and it's not my problem that other people decided to escalate things or take it to a different forum after that. My bad for mentioning a conflict that I was personally involved in though. I wasn't aware it would be such a big no-no. --Adamant1 (talk) 01:48, 10 January 2025 (UTC)
- I'm not trying to tell you to stop talking about that issue, I just don't want you to get blocked. Friends don't let friends get sanctioned, as Barkeep49 put it. All the Best -- Chuck Talk 04:49, 10 January 2025 (UTC)
- OK. Fair enough. --Adamant1 (talk) 05:07, 10 January 2025 (UTC)
Require VRT permission from nude models
[edit]There are currently many cases of nude models requesting the deletion of photos where they are visible. We do not have a clear policy how to handle such cases and every solution has problems. I want to propose a new process to minimize this problem for future uploads.
I would propose a new guideline like the following:
"Photos of nude people need explicit permission from the model verified through the VRT process. This applies to photos of primary genitalia and photos of identifiable people in sexually explicit/erotic poses also if only partial nude. This also applies to photos form external sources with an exception for trustworthy medical journals or similar. This does not apply to public nudity at protests, fairs and shows where photographing was allowed. For photos of such events only the regular rules on photos of identifiable people apply. This applies to all photos uploaded after Date X. Within the process the people are reminded that the permission is irrevocable. Having such permission does not automatically put the photo within the scope."
As I think that would not be more than a handful of cases per month, this could be handled by the VRT team. If this new task is a problem for the VRT we could also ask if the T&S team could help in this sensitive area. GPSLeo (talk) 10:09, 11 January 2025 (UTC)
- I thought the GDPR right "to be forgotten" makes a "irrevocable" model release impossible? What would such a guideline mean for fotos from pride parades? At pride parade there are regularly people with visible primary genitalia. C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 11:33, 11 January 2025 (UTC)
- I think there is no higher court decision on "model contract" vs. "right to be forgotten", but I would assume that the model contract is the superior right. Otherwise we would already have cases of well-known movies where some actors got themselves removed from the movie. I will add a sentence on public nudity. I had this in mind but then forgot it when writing the draft. GPSLeo (talk) 11:41, 11 January 2025 (UTC)
Before putting new tasks on the VRT, please consider speaking with the VRT. Their current policy is not to process any personality rights releases, which also includes model contracts, not least because they are unable to reasonably verify such releases. --Krd 11:50, 11 January 2025 (UTC)
- I am aware that this is often more complicated than for copyright. But I think it is better to make a "delete if not verified" policy instead of keeping everything and handling all the removal requests, which also require identity confirmation. GPSLeo (talk) 12:07, 11 January 2025 (UTC)
- How many removal requests have there been in the last 2 years? Krd 12:29, 11 January 2025 (UTC)
- I do not know how many cases were handled privately by VRT and Oversight but for the cases starting as regular deletion requests I would estimate around ten to twenty cases in the last two years. GPSLeo (talk) 12:55, 11 January 2025 (UTC)
- No offence, but can we make sure we are addressing a problem, and not a non-problem, before we take such an expensive approach? I for sure don't see all such VRT requests, but I think I see at least half of them, and I have no memory of any relevant issue. If they happen, they are mostly cases where consent initially was given and is going to be revoked later, which is a situation other than the one addressed by the proposal.
- Who is going to ask the oversighters, so that we know what we are talking about? Krd 17:26, 11 January 2025 (UTC)
The proposal seems sensible to me, as long as the VRT would be actually willing to handle such permissions, see Krd's comment. I would, however, add something exempting historical photographs too (for example, the photographs in Category:19th-century photographs of nude people), or photographs of now deceased people in general (photos taken when the person was alive). In Switzerland, for example, the "right to one's own picture" (Recht am eigenen Bild) basically ends with death, see de:Recht am eigenen Bild (Schweiz) and can't be claimed by family members; in Germany, family members can claim it for up to 10 years after the person's death (per de:Postmortales Persönlichkeitsrecht). Gestumblindi (talk) 14:41, 11 January 2025 (UTC)
- I’d say if a nude model legitimately requests their picture be taken down, we just take it down; requiring VRT for each and every non-historical, non-nudist, non-self-shot photo of a nude person seems tedious and unnecessary. 10-20 cases is a non-trivial amount, but I’d think it’s a pretty small percentage of all the nude photography we host here. Dronebogus (talk) 15:24, 11 January 2025 (UTC)
Something like this may be reasonable, but the considerations raised by User:Gestumblindi and User:Dronebogus are relevant. To list three exceptions I see:
- Historical photos, especially photos that were routinely published in their own era and whose copyrights have now expired. E.g. I cannot imagine doubting appropriate consent on a nude photo of actress Louise Brooks.
- Photos from societies and cultures where what is in the West considered "nudity" is simply considered normal (e.g. places in Africa or Pacific Islands where women do not routinely cover their breasts).
- Photos taken at public events in countries where appearing in public is de facto consent for photography. E.g. the many people who appear naked at the Folsom Street Fair, or Fremont Solstice Parade, or Mardi Gras in New Orleans. It is not practical to get VRT from a person walking by in a parade, nor do I see any need to do so in a situation where they have no legal expectation of privacy.
I would not be surprised if there are other equally strong cases for exceptions. - Jmabel ! talk 19:50, 11 January 2025 (UTC)
- Yes, the part for historical photos should definitely be added and defined in a very broad sense (all photos older than 25? years). The second point is the reason why I made the complicated definition to exclude female breasts in non-sexual contexts from the guideline. GPSLeo (talk) 20:37, 11 January 2025 (UTC)
- The proposal was about primary genitalia. Now you introduce secondary gender-specific body parts like breasts or a beard. There are societies that forbid a man to show a shaven face. Should we also require VRT permission for images of Iranian or Afghan men without a beard? C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 23:10, 11 January 2025 (UTC)
- The proposal would create very big issues (increase in work for VRT, increase of DRs towards nude pictures) to potentially solve very few (almost non-existent) issues. Christian Ferrer (talk) 08:41, 12 January 2025 (UTC)
- Oppose A solution in search of a problem. The Squirrel Conspiracy (talk) 11:28, 12 January 2025 (UTC)
- Comment I'm very active in VRT and I've never see a case like this -nude models requesting the deletion of photos where they are visible-. I think we can handle it when the moment arrives. --Ganímedes (talk) 22:49, 12 January 2025 (UTC)
- Oppose Unneeded and would cause an increase in deletion requests and an increase in work for VRT Isla (talk) 23:08, 14 January 2025 (UTC)
- Support Assuming Jmabel's suggestions are implemented if it passes. Regardless, this seems like a reasonable proposal and I don't really think the arguments against it are compelling. God forbid the VRT team has to do a couple of more VRT permissions every now and then. --Adamant1 (talk) 03:16, 16 January 2025 (UTC)
- It's not about workload; the comment above was
"Their current policy is not to process any personality rights releases, which also included model contacts, not least because they are unable to reasonably verify such releases."
Unless this issue is adequately addressed, the proposal is a non-starter. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:12, 16 January 2025 (UTC)
- Fair enough. I must have missed that. I agree the proposal is probably a non-starter if it's not worked out though. --Adamant1 (talk) 13:29, 16 January 2025 (UTC)
- If we see a need for something we are currently not able to do we have to show this to get support from the WMF. The WMF will only help us finding a solution if there is a consensus in the community that there is need for this. We need the community decision that there is a need before we can talk about finding solutions. GPSLeo (talk) 13:41, 16 January 2025 (UTC)
- Oppose mandatory VRT permission. My proposal is: if a model makes a legitimate request to remove an image, we remove it, no questions asked. Dronebogus (talk) 18:27, 16 January 2025 (UTC)
- As far as your proposal is concerned, I suspect that's more or less the case already, if someone knows who or where to ask - but I'd absolutely support a more substantial proposal to make that an official policy, and to make it better known. Omphalographer (talk) 02:41, 17 January 2025 (UTC)
- Commons:Contact us/Problems mentions info-commons for issues about "Images of yourself". So in a sense, it's already handled in VRT, or at least our documentation says so.
- We have 2 Commons-related community queues in VRT: info-commons mentioned in Commons:Contact us/Problems and permissions-commons described in COM:VRT. The latter page might make it look like permissions-commons=VRT, but that is not true. whym (talk) 10:32, 18 January 2025 (UTC)
- Support with Jmabel's exceptions. Nosferattus (talk) 02:07, 21 January 2025 (UTC)
Enable two new tools
[edit]Hi! As part of a project with User:Scann (WDU), I developed two new tools to edit Commons:
- AddFileCaption - Adds captions to the structured data of files in a given category
- AddFileDescription - Adds descriptions to the files in a given category
The tools are already enabled on MediaWiki.org and eswiki, and we'd like to enable them on Commons too. For that, I'd need to add two new gadgets (technical details explained on the links above and here). I have the necessary permissions (I'm a global interface editor) but would like to ask the community for support, questions, ideas or concerns. Thanks! Sophivorus (talk) 13:30, 21 January 2025 (UTC)
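For background on what such an edit does at the API level: file captions in structured data are labels on the file's MediaInfo entity (ID "M" followed by the page ID), and they can be written with the Wikibase action=wbsetlabel module. The following is only a rough sketch of that underlying call, not necessarily how the gadget itself is implemented; it assumes an already authenticated requests session and a hypothetical example file.

```python
import requests

API = "https://commons.wikimedia.org/w/api.php"
session = requests.Session()  # assumed to be already logged in (e.g. via OAuth or action=login)

def add_caption(file_title, language, caption):
    # Look up the page ID; the MediaInfo entity ID is "M" followed by it.
    # (Assumes the file exists.)
    info = session.get(API, params={
        "action": "query", "titles": file_title, "format": "json"}).json()
    page = next(iter(info["query"]["pages"].values()))
    entity_id = f"M{page['pageid']}"

    # Fetch a CSRF token for the write.
    token = session.get(API, params={
        "action": "query", "meta": "tokens", "format": "json"
    }).json()["query"]["tokens"]["csrftoken"]

    # Set the caption (a MediaInfo label) in the given language.
    return session.post(API, data={
        "action": "wbsetlabel", "id": entity_id, "language": language,
        "value": caption, "token": token, "format": "json"}).json()

# Hypothetical usage:
# add_caption("File:Example.jpg", "en", "A short descriptive caption")
```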
- Hi! Thanks for the mention. Just to clarify, this work was funded by Wikimedistas de Uruguay. Hope the community finds the tools useful! Scann (WDU) (talk) 13:52, 21 January 2025 (UTC)
- Support Please make them both autopatrolled-only to avoid issues with SDC vandalism. All the Best -- Chuck Talk 19:24, 21 January 2025 (UTC)
- Support Looks interesting. Although I agree with Alachuckthebuck that probably only autopatrollers should be able to use the gadgets. --Adamant1 (talk) 18:10, 22 January 2025 (UTC)
- Support I don't think it needs to be autopatrolled however: these are captions and descriptions, and until it becomes a problem -- adding requirements like autopatrolled prevents it from being useful for campaigns and other newcomer activities. Like the ISA tool, the way to prevent bad behavior on SDC, is training the people using it, not limiting who can use it, Sadads (talk) 12:34, 24 January 2025 (UTC)
- Comment The tools should work the same way the ISA tool does, anyone that has an account can use it. These are not massive edits, there's no reason or purpose to limit who can use them -- they are precisely to make the Wikimedia Commons interface more intelligible and easier to use for newcomers. Scann (WDU) (talk) 12:49, 24 January 2025 (UTC)
- It's usually just better to limit something new to a specific group of editors until it's been tested. That's less of an issue in this case since the tools are already in use on other projects though. --Adamant1 (talk) 12:52, 24 January 2025 (UTC)
- The ISA tool is exactly an example for a tool that created lots of bad edits because people used it without reading the guidelines. GPSLeo (talk) 13:05, 24 January 2025 (UTC)
- Support Very useful tools. As a campaign organizer I'd love to have these tools at hand to invite people to participate in new ways and engage with SDC, which is a powerful but neglected way of collaboration and engagement for new comers. That's why I think these tools should be available to everyone. The power that they potentially might give to bad actors for making vandalism, will be balanced giving anyone the chance to fix problems, and overall, to increase and improve the use of SDC. Mariana Fossatti (WK?) (talk) 16:44, 24 January 2025 (UTC)
Done Well, I just enabled both tools, see Template:Add file caption and Template:Add file description. I also went ahead and added a "group" parameter that basically allows limiting the use of the tool to a specific group (e.g. user, autoconfirmed, autopatrolled, etc). The current default is empty (no restriction), but if vandalism starts to occur, the default can be set to some group. Hope they bring many valuable edits, cheers! Sophivorus (talk) 15:51, 31 January 2025 (UTC)
Major damage to Wikimedia Commons
[edit]As far as I can see, major damage has successively been done to Wikimedia Commons over the last few years by chopping up categories about people into individual "by year" categories making it
- virtually impossible to find the best image to use for a certain purpose, and
- virtually impossible to avoid uploading duplicates since searching/matching images has become virtually impossible.
Here is a perfect example. I have a really good, rare picture of her, but I'll be damned if I'm willing to wade through all the "by-year" categories to try to see if Commons already has it. The user who uploaded this didn't even bother to place it in a personal category. Why should they, with all the work required to try to find the category at all & fit the image in there?
I am not objecting to the existence of categories "by year"; searching is the problem.
What if anything can be done about this mess which is steadily getting worse all the time? Could some kind of bot fix it?
I really feel that this is urgent now and cannot be ignored any longer. The project has become worth much, much less because of the problem described. Or have I missed/misunderstood something here? SergeWoodzing (talk) 10:00, 23 January 2025 (UTC)
- This is a duplicate discussion of Commons:Administrators' noticeboard#URGENT! Major damage to this project. CMD (talk) 12:22, 23 January 2025 (UTC)
- Yes, but the user was told there to bring it here. - Jmabel ! talk 17:28, 23 January 2025 (UTC)
- Contemporary VIPs produce a ton of images. Sorting them by year makes sense - otherwise you would have to deal with hundreds of files in one category. As for Wikipedia: Go to the most recent usable photo of "Sophie" and use that. And if it is not the most flattering... well, that's life. Alexpl (talk) 12:33, 23 January 2025 (UTC)
- Uh, I don't think that's really what we ought to do. I've tried for many years always to use the best possible images. --SergeWoodzing (talk) 17:52, 23 January 2025 (UTC)
- It feels to me like the fear is a bit exaggerated, but (if you're looking for images of Donald Trump), you can enter deepcategory:"Donald Trump by year" in the regular search for example, et voilà! You can see many Donald images at once without looking into each subcat :) (Also see COM:Search for tags and flags) --PantheraLeo1359531 😺 (talk) 12:51, 23 January 2025 (UTC)
- Splitting images by year is often counterproductive, but that's necessary when there are a lot of them for one person. Yann (talk) 15:45, 23 January 2025 (UTC)
- See Help:Gadget-DeepcatSearch and TinEye image reverse search among other things. Prototyperspective (talk) 16:04, 23 January 2025 (UTC)
Please! I have not suggested that images should not (also) be sorted by year, so there is no need to defend that kind of sorting. I've asked for a search remedy & will now try the tips we've been given here. Thank you for them! --SergeWoodzing (talk) 17:52, 23 January 2025 (UTC)
- @SergeWoodzing: Another solution I've seen is addition of a flat category for all of a topic's files, to achieve your purpose but still allow for the granularity that others like to achieve. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 18:31, 23 January 2025 (UTC)
- I agree with the general issue brought up by SergeWoodzing. "By year" in many cases only makes sense if a large number of files cannot be meaningfully sorted in another way. Non-recurring events, certainly. But things that undergo little changes from year to year, do not necessarily need atomized categories, and I also noticed that by-year categories are steady on the rise since 2018. Not always for the better. The following examples are not what literally exists right now, but it would be easy to find real cases just like them.
- I have recently encountered more and more "books about <topic> by year", and that means that either a very broad topic like "biology" is split up by year (which later makes it much harder to split the topic by "botany" vs. "zoology", or as "by continent", "by language", or to meaningfully search the publications) or that a very narrow topic like "American-Mexican War" gets splintered into single-file categories. How can two books about the same war be categorically different because the one was published in December 1857 and the other in March 1858?
- "Maps by year" are my pet peeve. Nearly all maps before the 21st century had many-years-long production processes. The further back we go in time, the less the publication year of an (old) map matters, the categories should rather differentiate the location and topic, not the day/month/year of publication. An old city plan of Chennai, an old topo map of Rajasthan, and an old geological map of Bengal all from the same year, have so little in common that it is pointless to primarily group them under "<year> maps of India". (I have been vocal about this several times here on the pump already, and got some support too).
- "Tigers by year": I think everyone should see the absurdity. Photos of tigers should be grouped by location (zoo, country) or by growth stage (juvenile, adult...), not whether they were photographed in 1998 vs. 2015.
- Location by year. Without exif/metadata, one photo of the Taj Mahal looks like the other, with no telling that one was created in 2008 and the other in 2021. It makes much more sense to primarily sort them by architectural elements (main building, gates, interior...) than by year. Thankfully, this is done too, but not always. And sure, with some events even two years make a huge difference, like Verdun 1913 and Verdun 1915; but many "non-changing" locations are fine without a by-year split-up.
- "Person by year" is the OP topic already. Actually, in my opinion this makes mostly sense as long as MANY files exist. If there are just 25 files of a celebrity, please do NOT split that collection up into 12 by-year subcategories. Doing so is a case of well-intentioned obstruction, as access just gets harder with no further benefit.
- Much more could be said. So while I am cautiously supportive of several important use cases of by-year categories, atomization has to stop. --Enyavar (talk) 02:12, 24 January 2025 (UTC)
- Comment I did a proposal a few months ago to confine "by year" categories to images that show a meaningful distinction by year. For instance something like a yearly event where there's actually a difference between the years. Whereas, say, images of tigers per Enyavar's example aren't worth organizing per year because there's no meaningful difference between a tiger in 2015 and one in 2016. Anyway, it seemed like there was general support for the proposal at the time.
- The problem is that there's no actual way to enforce it because people will ignore consensus, recreate categories, and attack anyone who disagrees with them. It's made worse by the fact that admins on here seem to have no will or ability to impose any kind of standards. They just cater to people doing things their own way regardless of consensus as long as the person throws a big enough tantrum about it. There's plenty of proposals, CfD, village pump and talk page discussions, etc. that should already regulate how these types of categories are used though. They just aren't ever imposed to any meaningful degree because of all the <s>limp wristed</s> weak pandering to people who use Commons as their own personal project. --Adamant1 (talk) 06:47, 24 January 2025 (UTC)
- So should they ban you promptly for using a homophobic slur ("limp wristed"), or should they just let you continue going on your way ignoring consensus?--Prosfilaes (talk) 08:16, 24 January 2025 (UTC)
- @Prosfilaes: I didn't actually know it was a homophobic slur. I just thought it meant weak. I struck it out though. Thanks for letting me know. Not that I was saying anyone should be banned for ignoring the consensus, but if people intentionally use homophobic slurs then yes they should be banned for it. With this though it's more about the bending over backwards to accommodate people who don't care about or follow the consensus than it is about sanctioning anyone over it. Such people should just be ignored and the consensus should be followed anyway. There's no reason what-so-ever that it has to involve banning people. Just don't pander to people using Commons as their own personal project. It's not that difficult. --Adamant1 (talk) 09:28, 24 January 2025 (UTC)
- For this topic at least, I don't think I have seen actual attacks against other users, thankfully. SergeWoodzing has used some strong condemnations of the status quo in general on Commons, but I do not perceive his statement as an attack against some users. Now, Adamant points out the problem, which is that we seemingly don't have a guideline or even policy on which topics may be organized by year and which ones should rather not get a by-year categorization. I'm almost sure that people are creating by-year categories out of the best intentions, and mostly because they are boldly imitating the "best practice" of other users, ignorant of some consensus that may or may not have been formed among a dozen users in the village pump. Which means that such users have to be talked out of the idea individually once they start by-year categories for an unsuitable topic. --Enyavar (talk) 16:33, 24 January 2025 (UTC)
- There's certainly an aspect to this where people indiscriminately create by year categories because other people do. But it still comes down to a lack of will and/or mechanisms to enforce standards though. You can ask the person doing it to stop, but they can just ignore you and continue. Then what? No one is going to face repercussions for ignoring the consensus by continuing to create the categories. I've certainly never seen any, and I've been involved in plenty of conversations about overcategorization. The person usually just demagogues or outright ignores the issue and continues doing it. --Adamant1 (talk) 16:54, 24 January 2025 (UTC)
- Tbf I didn’t really know it was either. I wouldn’t even call it a “slur”— more of a general insult with homophobic connotations, like “sissy” or “pansy”. Dronebogus (talk) 17:57, 27 January 2025 (UTC)
Should a "bot cleanup kit" exist?
[edit]In the last 6 months, Commons has had 2 bots with extended issues where they created tens of thousands of invalid edits.
Both times, I cleaned up the mess with massrollback and either account creator or a bot account. But it's a less than ideal solution, as I was hitting 2 thousand EPM while performing rollbacks, and these were not marked as bot edits. So my question is:
Should we create a tool/script/playbook for doing bot cleanups? I understand bot owners are responsible for the edits made by their bots, but having dedicated tools to rapidly handle 75 thousand rollbacks without causing 5 mins of database lag would be nice. A question I have been asked frequently is why this can't be done slowly. The problem is that if, for any reason, an affected page is edited, any error introduced by the bot can't be fixed easily (rollback only works while the bot's edits are still the newest), often requiring manual correction. All the Best -- Chuck Talk 18:30, 24 January 2025 (UTC)
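For reference, the action API already has the two knobs this kind of cleanup needs: markbot (so the rollbacks don't flood recent changes) and maxlag (so the script backs off when the databases lag). Below is only a rough sketch under stated assumptions: an already authenticated requests session with rollback rights, a hypothetical bot name, and a plain text list of affected page titles; it is not an existing tool.

```python
import time
import requests

API = "https://commons.wikimedia.org/w/api.php"
session = requests.Session()     # assumed to be logged in to an account with rollback rights

BAD_BOT = "ExampleBot"           # hypothetical name of the malfunctioning bot
EDITS_PER_MINUTE = 120           # deliberately far below the rate that caused lag before

def rollback_all(titles):
    # A rollback token is distinct from the ordinary CSRF token.
    token = session.get(API, params={
        "action": "query", "meta": "tokens", "type": "rollback", "format": "json"
    }).json()["query"]["tokens"]["rollbacktoken"]

    for title in titles:
        r = session.post(API, data={
            "action": "rollback", "title": title, "user": BAD_BOT,
            "summary": "Reverting malfunctioning bot edits",
            "markbot": 1,   # mark the reverts as bot edits
            "maxlag": 5,    # let the servers tell the script to slow down
            "token": token, "format": "json"}).json()
        if "error" in r:
            # e.g. maxlag, or an intervening edit means rollback no longer applies;
            # log for manual follow-up instead of retrying blindly.
            print(title, r["error"].get("code"))
        time.sleep(60 / EDITS_PER_MINUTE)

# rollback_all(open("affected_pages.txt", encoding="utf-8").read().splitlines())
```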
- We should require bot operators to be able to clean up mistakes they made with their bot. In the bot request they have to confirm that they can also revert the edits they made with the bot. If they cannot guarantee this, the bot cannot be approved. GPSLeo (talk) 19:00, 24 January 2025 (UTC)
- What GPSLeo said. Krd 04:23, 25 January 2025 (UTC)
- @Krd, Seeing as you have handled most of the bot requests from the last 5 years, when does this check happen? And if a bot does mess up and make a bunch of junk edits, should we really be using the same bot to fix it? (unless we have a standardised script to do it, I don't think that's an option) All the Best -- Chuck Talk 04:50, 25 January 2025 (UTC)
- There is no check, but it goes without saying that if a bot operator messes up at a large scale, they are responsible for at least helping to clean it up, whatever it takes. Everybody who is running a real bot is able to do that. Perhaps we should consider being more hesitant about AWB or js gadget "bots" in the future. In order to understand the actual size of the problem, perhaps the 2 mentioned cases should be analyzed regarding what exactly went wrong. Krd 05:24, 25 January 2025 (UTC)
- @AntiCompositeNumber was looking into the flickr goof last I heard. As to WLKbot, nothing went wrong per se, but rather the operator @Kim Bach jumped the gun on implementing something, creating 3k categories that still need cleanup. Also, these were both full bots, not script/AWB bots. Their edits were fixed by a script. Also, @MolecularPilot updated the script, allowing for ratelimiting and marking bot edits (haven't tested the second part yet): User:MolecularPilot/massrollback.js. Even if Commons has its ducks in a row, once the tool exists and is documented, it can be used movement-wide, having a much larger impact. All the Best -- Chuck Talk 21:42, 25 January 2025 (UTC)
- Hi! The new version of massrollback supports ratelimiting (you tell it the max number of rollbacks to make in a minute) but it doesn't support marking them as bot edits if you're flagged. I'm working on this part now! :) MolecularPilot (talk) 23:03, 25 January 2025 (UTC)
- Actually, it already did mark bot edits if your flagged, I forgot that I coded that part. So, yeah! :) MolecularPilot (talk) 08:42, 26 January 2025 (UTC)
- Is it sufficient to use the existing mw:Manual:Pywikibot/revertbot.py? If not, what is missing? whym (talk) 12:03, 1 February 2025 (UTC)
- Not all bots use pywikibot. And also, I'm not so sure a bot that just screwed up should be doing the cleanup. All the Best -- Chuck Talk 20:32, 1 February 2025 (UTC)
- With the script linked above, you can specify the target user account whose recent edits are to be reverted. The target account doesn't need to be a Pywikibot bot, nor even a bot. whym (talk) 01:53, 2 February 2025 (UTC)
- I didn't know that existed. I have user:chuckbot kicking around with pywikibot and a working backend, so I might do a bot request for that. All the Best -- Chuck Talk 01:57, 2 February 2025 (UTC)
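As a rough illustration of the kind of cleanup bot discussed above, the following Pywikibot sketch walks a target account's recent contributions and rolls back the pages where that account is still the latest editor. The account name is hypothetical, the one-second sleep is an arbitrary throttle, and passing user and markbot through to rollbackpage() is an assumption about the installed Pywikibot version that should be verified before running anything like this for real.

```python
import time

import pywikibot

TARGET = 'ExampleMisbehavingBot'  # hypothetical account whose edits need reverting

site = pywikibot.Site('commons', 'commons')
site.login()

# Walk the target account's recent contributions (most recent first).
for contrib in site.usercontribs(user=TARGET, total=5000):
    page = pywikibot.Page(site, contrib['title'])
    # Only roll back pages where the target is still the latest editor;
    # anything edited since then needs manual review instead.
    if page.latest_revision.user != TARGET:
        continue
    try:
        # Assumption: rollbackpage() forwards 'user' and 'markbot' to the API.
        site.rollbackpage(page, user=TARGET, markbot=True)
    except Exception as error:  # e.g. protected page, already rolled back
        pywikibot.error(f'Could not roll back {page.title()}: {error}')
    time.sleep(1)  # crude throttle: at most ~60 rollbacks per minute
```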
Expanding an explanation on the De-adminship policy
I would like to propose this text to expand the de-adminship policy. This text came as a result of this discussion on meta and was also inspired by the current policies on English Wikipedia:
Administrators are expected to adhere strictly to the principles of respect and proper etiquette as outlined in the Universal Code of Conduct. Any administrator who repeatedly or egregiously violates these principles by engaging in disrespectful behavior, personal attacks, or actions that undermine the community's trust may have their administrative rights revoked. Administrators are accountable for their actions and must provide clear and prompt explanations when their conduct or decisions are questioned. Repeated failure to communicate, poor judgment, or misuse of administrative tools, as well as conduct incompatible with the responsibilities of the role, may result in sanctions. Such cases will be reviewed by a designated committee, which will evaluate the severity, frequency, and context of the violations, ensuring a fair and transparent process. Administrators must maintain proper account security, including using strong passwords and reporting any unauthorized access immediately, as compromised accounts may lead to immediate loss of administrative privileges. In cases where violations persist despite warnings or where the offenses are severe, the administrator’s rights may be permanently revoked to safeguard the integrity of the platform and its community. Reinstatement of administrative rights, if sought, will be subject to thorough evaluation by the committee, considering the administrator’s past actions, corrective measures taken, and the current trust of the community.
This additional text in the current policy aims to uphold a respectful, accountable, and secure environment, ensuring that administrators act in alignment with the values and expectations of their role in our community. Wilfredor (talk) 19:51, 24 January 2025 (UTC)
- I think the process as it is is perfectly fine, but it can certainly be improved. Clarifying that personal attacks and disrespectful behaviour are not okay seems like a good step; however, common sense applies. Also, your proposal mentions a committee whose creation you do not elaborate on further. In any case I would oppose creating a committee and leave the decision on desysopping in the hands of the community (i.e. voting). I don't mean to condone bad behaviour, but a one-off disrespectful comment or attack could be pardoned; the desysopping process should be reserved for repeated bad behaviour only. Bedivere (talk) 21:42, 24 January 2025 (UTC)
- Also, stating that "Administrators are expected to adhere strictly to the principles of respect and proper etiquette as outlined in the Universal Code of Conduct" is redundant, since all users are expected to abide by that code of conduct, including admins for obvious reasons (it is literally verbatim) Bedivere (talk) 21:46, 24 January 2025 (UTC)
- We all should keep in mind that the UCOC is the BARE MINIMUM for acceptable behavior, and we should expect admins to have a much higher standard than UCOC. All the Best -- Chuck Talk 21:58, 24 January 2025 (UTC)
- Yeah, but the proposed wording is not stronger. I would not oppose a stronger wording at all (just fyi) Bedivere (talk) 22:14, 24 January 2025 (UTC)
- @Wilfredor, Would you be willing to add a sentence to the proposed text stating that the UCOC is the bare minimum, and that admin conduct should be well above the UCOC at all times? All the Best -- Chuck Talk 22:18, 24 January 2025 (UTC)
- Unfortunately, even though this seems obvious, I do not feel that it is being fulfilled. Wilfredor (talk) 22:48, 27 January 2025 (UTC)
- Common sense isn't that common. All the Best -- Chuck Talk 01:32, 28 January 2025 (UTC)
- It can be easily weaponized to serve personal agendas, though, so a good amount of good sense and discretion is strongly recommended when using it. Darwin Ahoy! 22:17, 24 January 2025 (UTC)
- That's right. All I would support is strengthening the wording of the current policy to say that personal attacks and poor behaviour are not accepted. But as it is now, there should be a discussion (always local) before initiating a proper desysopping vote. Bedivere (talk) 22:27, 24 January 2025 (UTC)
Make Commons:Civility, Commons:Harassment and Commons:No personal attacks a policy
- @The Squirrel Conspiracy Do you think it is good to close this that fast? There are many comments mentioning that there is some need to adapt the pages for Commons. If they are now a policy, every change that is not very minor would require separate community confirmation. GPSLeo (talk) 08:45, 1 February 2025 (UTC)
- That's fair, but if the proposed changes are uncontroversial, it should be easy to get a consensus for them, and if they are controversial, they shouldn't be in the policies in the first place.
- Candidly, I'd like to think that after 15 years, I have a good sense for what does and doesn't get done on this project, and I suspect that no one is going to step up and rewrite the policies regardless of whether the discussion stays open for two weeks or not. Happy to be proven wrong, but there are lots of gaps in Commons' bureaucracy and infrastructure that have never been fixed.
- If you want to revert the close, go ahead though. The Squirrel Conspiracy (talk) 10:34, 1 February 2025 (UTC)
- As for the procedure, I would at least wait until one weekend is over, to include people who only find volunteer time on weekends. While I agree with the observation that there have been policy gaps for a long time, I think the long time span also means that it wouldn't hurt to spend a few more weeks, or at the very least the proposed 2 weeks. Using Template:Centralized_discussion or even MediaWiki:Sitenotice wouldn't be unreasonable for this, considering most users don't frequent COM:VPP but are affected. Sitenotice might be more suited for the final decision, though. whym (talk) 02:03, 2 February 2025 (UTC)
- I think Commons:No personal attacks talks too much about articles and article talk pages, which we don't have in general. And I don't know what would be the Commons counterpart to regular disagreements on Wikipedia talk pages. whym (talk) 11:58, 1 February 2025 (UTC)
- See also an ongoing discussion in Commons_talk:Civility#Pre-policy_debates (started a few days ago). whym (talk) 02:18, 2 February 2025 (UTC)
Disable the ability to nominate images with VRT permission for deletion or report them as copyright violations
I think the title is pretty self-explanatory. Every once in a while someone nominates an image that has VRT permission for deletion on copyright grounds. I've done it a few times myself. It always seems to just piss people off, but who can blame anyone for doing it when they are given the option? I've read through the guidelines and other related pages, though, and there doesn't seem to be a legitimate reason to nominate files with VRT permission for deletion. At least not from what I can find. It doesn't seem like VRT permission in general can even be challenged or questioned. It's not like pages can't already be locked for other reasons either. So nominating images with VRT permission for deletion or as copyright violations just shouldn't be an option to begin with. Adamant1 (talk) 21:49, 28 January 2025 (UTC)
- Oppose. VRT can make mistakes, and they don't make any determinations on whether files are in scope. Omphalographer (talk) 23:04, 28 January 2025 (UTC)
- @Omphalographer: Wouldn't the solution in that case be to message someone from the VRT team about it so they can deal with it then? I don't think regular users should be nominating files for deletion because they think the VRT team screwed up when they don't have access to the same information they do. At least not without talking to someone from the team first, but at that point they can just delete it on their end if there's actually an issue with the permission. --Adamant1 (talk) 23:57, 28 January 2025 (UTC)
- As a concrete example: a user can upload an advertisement to Commons and send permission to VRT, who will duly tag it as having permission. This should not obstruct Commons users from nominating that image for deletion because it is an advertisement. Omphalographer (talk) 02:02, 29 January 2025 (UTC)
- Strong oppose It saves lots of time. I use Help:VFC for the "no permission" button (you get it). When there are so many files to prove are actually copyvios, that option really helps. Just mark it! "I think the author should contact VRT". This is not a bad thing. If a person decides to upload it here, he should already know how to keep it on screen. Actually I don't care if a user doesn't know or can't read COM:VRT. modern_primat ඞඞඞ ----TALK 02:41, 29 January 2025 (UTC)
- No offense, but I think you're confused about what I'm proposing here. I'm not proposing that people shouldn't be able to mark files that don't have permission. I'm proposing that people shouldn't be able to nominate files that already have VRT permission for deletion due to copyright concerns. Files with VRT permission already have permission; that's literally what VRT permission is, and the files obviously aren't copyvios. Again, because of the VRT permission. Maybe chill and actually read the proposal next time. --Adamant1 (talk) 02:57, 29 January 2025 (UTC)
- Sorry, it was 5 am there. I was sleepy.
- Still, people have the right to nominate files even if the files have permission. modern_primat ඞඞඞ ----TALK 16:24, 29 January 2025 (UTC)
- Oppose - We've deleted plenty of files with VRT permission for being out of scope or as outright spam, per Omphalographer. I do think that anything with a VRT ticket should go through regular DR instead of speedy if the concern is copyright, but that's easily solved by the admin hitting the button to convert the speedy to a DR in the very rare cases that someone tags a VRT'd file as a copyvio. The Squirrel Conspiracy (talk) 03:14, 29 January 2025 (UTC)
- Oppose though I would agree they should never be speedy-deleted on copyright grounds. Still, VRT are capable of completely missing the point of what is problematic about a particular file (e.g. an issue of lack of FoP that didn't cross their mind when a photographer gave an otherwise valid permission). The regular deletion process should always give enough time to loop back to VRT if needed. - Jmabel 05:39, 29 January 2025 (UTC)
- @Jmabel: One of the people who voted for me to be topic banned at ANU did so because I had nominated some files for deletion a few months ago that had VRT permission, based on scope grounds, which they had a problem with. I don't really care either way, but if people are just going to be attacked, threatened and/or sanctioned because they nominated a file that has VRT permission for deletion based on scope issues, then they just shouldn't be able to do it to begin with. There was also that whole row Yann and Rosenzweig got into over the Studio Harcourt files, which probably could have been resolved by simply asking the VRT team about it to begin with instead of doing mass DRs.
- No offense, but you guys just default to whoever throws the biggest tantrum about something in any given instance. I don't even disagree that VRT are capable of completely missing the point of what is problematic. I just don't think dealing with it should then be put on regular users who will just be attacked and bullied for someone else's mistake. The VRT team is more than capable of dealing with their own problems, or at least they should be. --Adamant1 (talk) 16:00, 29 January 2025 (UTC)
- Oppose No problem shown. Please provide examples. --Krd 06:45, 29 January 2025 (UTC)
- @Krd: Commons:Deletion requests/Files in Category:Photographs by Studio Harcourt from sources which claim they are under a free license. A lot of the images in that and several other DRs related to Studio Harcourt have VRT permission. Yann and Rosenzweig got in a tiff over the whole thing and multiple users attacked Rosenzweig for questioning the VRT permission. --Adamant1 (talk) 15:50, 29 January 2025 (UTC)
- The images in the mentioned DR do not have ticket permission. I cannot say what exactly is in the ticket, as I cannot read the language, but it's no permission, as the "VRT info" template is used (which in my opinion should be deleted because it causes nothing but confusion). Krd 16:02, 29 January 2025 (UTC)
- @Krd: Fair enough. Thanks for looking into it at least. --Adamant1 (talk) 05:04, 30 January 2025 (UTC)
- Oppose A single file may involve 1/ several copyrights (e.g. in the case of a derivative work) 2/ potential out-of-scope issues 3/ privacy rights issues. All of this implies that a VRT permission, although partially valid, may not be sufficient to resolve all the problems. Everyone should have the ability to raise a previously unnoticed issue, and therefore to open a DR if necessary. Christian Ferrer (talk) 09:14, 29 January 2025 (UTC)
- Oppose. Sometimes VRT is for the derivative work and not the underlying copyright. Second out of scope and advertising concerns. Glrx (talk) 17:38, 29 January 2025 (UTC)
I think we can consider this closed on a snowball basis. - Jmabel ! talk 04:04, 30 January 2025 (UTC)
- @Jmabel: Be my guest and close it if you want to. --Adamant1 (talk) 05:08, 30 January 2025 (UTC)