Commons:Village pump/Proposals
This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2026/02.
- One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
- Have you read the FAQ?
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.
Ratify Commons:AI images of identifiable people as a guideline
In a previous discussion, consensus was found to implement a policy related to AI-generated or AI-edited images of real people, leading to the proposed guideline at Commons:AI images of identifiable people. Whether that draft should be designated a guideline is the subject of the discussion below. The proposal began on 7 December 2025, and has been open for more than two months. There have only been two new votes in February thus far, so it seems ready for closure.
Among the objections were a couple of arguments that we do not need a stand-alone guideline, but that was addressed by the earlier discussion. Several suggestions and objections were raised about the draft text, often as part of an oppose !vote, but sometimes as part of a support: for example, that publications on behalf of someone should be permitted, clarification on "legal and moral" rights, and whether people who have been dead for a long time should be excluded. None of these saw sufficient engagement to modify an otherwise clear consensus to adopt the guideline as written.
Importantly, the proposed (and now adopted) version is not set in stone, but rather the guideline's starting point. As with any other guideline, issues with specific aspects of the text can be addressed on the talk page through normal consensus-building procedures. — Rhododendrites (talk) | 17:06, 15 February 2026 (UTC)
- The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Following the discussion at Commons:Village_pump/Proposals/Archive/2025/09#Ban_AI_generated_or_edited_images_of_real_people, I prepared Commons:AI images of identifiable people.
I am now seeking to have it officially adopted as a guideline.
@GPSLeo, Josve05a, JayCubby, Dronebogus, Jmabel, Grand-Duc, Pi.1415926535, Túrelio, Raymond, Isderion, Smial, Adamant1, Infrogmation, Omphalographer, Bedivere, Masry1973, and Ooligan: I believe this is everyone that participated in the original discussion. Please feel free to ping anyone if I missed them.
Cheers, The Squirrel Conspiracy (talk) 22:28, 7 December 2025 (UTC)
Support As proposer. The Squirrel Conspiracy (talk) 22:28, 7 December 2025 (UTC)
Support As a 'canvas'-by-ping user. JayCubby (talk) 22:38, 7 December 2025 (UTC)
Support Pi.1415926535 (talk) 22:47, 7 December 2025 (UTC)
Support. Omphalographer (talk) 23:15, 7 December 2025 (UTC)
Support Abzeronow (talk) 23:29, 7 December 2025 (UTC)
Support Ooligan (talk) 00:01, 8 December 2025 (UTC)
Support Grand-Duc (talk) 00:54, 8 December 2025 (UTC)
Support --Bedivere (talk) 01:02, 8 December 2025 (UTC)
Support with one caveat "The image in question was published by the person it depicts" should be "The image in question was published by the person it depicts or with their documented permission or approval." - Jmabel ! talk 02:53, 8 December 2025 (UTC)
- @The Squirrel Conspiracy: would you have any objection to that small edit? - Jmabel ! talk 05:13, 8 December 2025 (UTC)
- In principle, I think it's fine. In practice, I'm not sure what exactly that looks like though. I'm loath to have people submit "documented permission" to VRT, because a) they're often backlogged as it is, and b) there's this loop where someone uploads a file, then it gets deleted for permissions reasons, then it goes through VRT and is restored, then it gets deleted for scope reasons (because while VRT agents can decline tickets for scope reasons, it seems like a decent number of agents are uncomfortable doing so) - it's a tremendous waste of volunteer resources and I can see a lot of AI images getting stuck in that loop. @Krd, thoughts? The Squirrel Conspiracy (talk) 05:53, 8 December 2025 (UTC)
- I think the sort of situation Jmabel is trying to address is content published on behalf of a person by a social media manager or similar. For instance, if a political figure were to post an AI-generated image on their social media, we wouldn't necessarily know whether it was personally posted by the politician or by their PR team, but it should probably be considered allowed regardless. Omphalographer (talk) 06:01, 8 December 2025 (UTC)
- @Jmabel: could the word "auspice" or a wording like "published by or on behalf of the person it depicts" work (with a footnote explaining that "on behalf of" shall mean by a person like a social media manager)? Regards, Grand-Duc (talk) 06:41, 8 December 2025 (UTC)
- That's one of two cases I had in mind. The other is after-the-fact endorsement. E.g. (this has happened) someone publishes an AI-generated image of Trump, Trump re-tweets it (or whatever you call the equivalent on Truth Social). Also (likely, but no examples offhand), someone approvingly links in social media or on their own web page, etc. to an AI-generated image of themself.
- FWIW I wasn't thinking VRT at all. I'd hope that seldom, if ever, arises. - Jmabel ! talk 22:26, 8 December 2025 (UTC)
- Also a good point. There's a lot of different ways that content can be posted on social media these days - posting, reposting, embedding offsite media, etc. IMO, we should treat all of these cases identically for the purposes of this guideline. Omphalographer (talk) 00:27, 9 December 2025 (UTC)
Neutral While I am against this as a policy/guideline, the community has spoken. So, nothing against ratifying it, but I don't want to support it. --Jonatan Svensson Glad (talk) 03:51, 8 December 2025 (UTC)
- Well, the wording suggests it would be a policy anyway, disallowing some AI materials de facto (or de jure, depending on how you interpret it). Bedivere (talk) 04:48, 8 December 2025 (UTC)
- The community has previously spoken on another proposal, not this proposal. Now, the community is hopefully speaking about this new proposal which is different from the earlier one. Prototyperspective (talk) 17:12, 8 December 2025 (UTC)
- it's somewhat insulting to imply that participants are confused or unaware; they've simply reached conclusions different from yours. Bedivere (talk) 22:08, 8 December 2025 (UTC)
- Good that I didn't imply that then. Prototyperspective (talk) 10:19, 9 December 2025 (UTC)
Support Raymond (talk) 07:44, 8 December 2025 (UTC)
Support. --Túrelio (talk) 07:58, 8 December 2025 (UTC)
Support, looks good. --Belbury (talk) 08:45, 8 December 2025 (UTC)
Support GPSLeo (talk) 09:29, 8 December 2025 (UTC)
Support --Smial (talk) 11:41, 8 December 2025 (UTC)
Strong oppose The original proposal had "AI generated photos where the description states that the photo shows an actual person are not allowed", but this new proposal now has the much more restrictive "Images of identifiable people created by AI are not allowed on Commons unless at least one of the following criteria are met [posted by the person or reliable sources cover it]". I don't know if the voters here all know about this. I think it should be changed. There are two main issues:
- Example File:King Tutankhamun brought to life using AI.gif (display was disabled)
- Information graphics and art such as caricatures relating to public officials – e.g. an information graphic or artwork pointing out problems of Trump's behavior, claims, and policies.
- It doesn't seem to exclude identifiable historic people. AI images can often make sense, especially when there is nearly no or no free media available of the person. An example is on the right.
- I think the votes were done hastily without proper deliberation and without consideration of potential uses. A policy this indiscriminate and restrictive additionally seems to violate existing policies COM:SCOPE, COM:INUSE and COM:NOTCENSORED. A constructive approach would be to edit the proposed policy but I would probably still tend toward oppose because I see no need for this – we should strive to stay as unbiased and uncensored as possible and delete files based on whether that is due per set/case. People could introduce more and more restrictions and soon you'll find yourself in a situation where you can't even upload an image critical of Trump anymore per policy (and with wider adoption of AI tools by society, this is what this policy will already achieve to a large extent).
- Prototyperspective (talk) 15:29, 8 December 2025 (UTC)
- It's bold of you to assume that everyone above you voted "hastily without proper deliberation and without consideration of potential uses". More likely, I think, is that the other participants simply disagree with you.
- Regarding the first point: "The image in question is the subject of non-trivial coverage by reliable sources" already covers the use case of "caricatures relating to public officials". The series of images that File:Trump’s arrest (2).jpg belongs to, for example, are permissible under this guideline. This guideline would not permit a random user's AI image caricature of Trump, but even without this guideline, it would be deleted as personal art.
- Regarding the second point: "It doesn't seem to exclude identifiable historic people.", that is working as designed. If it's a notable depiction, it'll be covered by "non-trivial coverage by reliable sources". If it's a random user's AI image of a historic figure, even without this guideline, it would be deleted as personal art. Keep in mind that the image you posted does not depict King Tut. It depicts what a probability engine thinks the prompter is looking for - a young boy with Arabic features in pharaoh attire. It has no way of knowing if any of what it did is accurate. This is why some projects have already banned most AI images.
- The Squirrel Conspiracy (talk) 16:45, 8 December 2025 (UTC)
- sincerely that "Tutankhamun" image is a disgusting AI slop. I can see why it is necessary to have these all (non notable) depictions banned. If someone wants to share their (prompted) art, there are venues such as Tumblr, Deviantart and Twitter (or whatever Elon Musk has decided to call it). Bedivere (talk) 16:52, 8 December 2025 (UTC)
- Nothing about it is disgusting. "why it is necessary to have these all (non notable) depictions" — ok: so why? Prototyperspective (talk) 17:05, 8 December 2025 (UTC)
- They are fictional reconstructions produced by a model, not representations of an actual person, making them potentially misleading and outside COM:SCOPE. Allowing non-notable AI depictions would open the door to massive amounts of invented imagery serving no educational purpose. Notable cases are covered by the exception. Bedivere (talk) 22:07, 8 December 2025 (UTC)
- So a public broadcast documentary showing some well-known historical figure means that segment is noneducational and the documentary is so badly disgusting because they're showing a historical person differently than s/he may have looked? Prototyperspective (talk) 22:40, 8 December 2025 (UTC)
- in that case, the key would be that the recreation would most likely be a human creation or representation, not something created by an algorithm. Bedivere (talk) 00:57, 9 December 2025 (UTC)
- "to assume that…" I didn't do so if you read my comment. This is a false statement.
  "already covers the use case of 'caricatures relating to public officials'" No, it doesn't. It means caricatures and critical works are reserved to the privileged few who got reported on in major publications. What chaos if we'd allow common citizens to release critical art and information graphics, right?
  "it would be deleted as personal art." No, it wouldn't (necessarily). It depends on how educational/useful it is.
  "a young boy with Arabic features in pharaoh attire" Exactly, and such things can be useful and interesting, especially if engineered to closely match data about the given person.
  "no way of knowing if any of what it did is accurate" Not the AI but the prompter. Prototyperspective (talk) 17:11, 8 December 2025 (UTC)
Support, with the addendum that publications on behalf of someone should also be permitted. --Carnildo (talk) 23:15, 8 December 2025 (UTC)
Support Infrogmation of New Orleans (talk) 01:21, 9 December 2025 (UTC)
Support the proposal and also
Support whacking User:Prototyperspective with a wet trout Apocheir (talk) 04:08, 9 December 2025 (UTC)
- Re trout: if I made an error, point out which by addressing it (ideally refuting it). Why do educational documentaries use fictional depictions of historical people if such can't be educationally useful? These are banned by this proposal as well. I always support truly considering and addressing points raised in every kind of community decision-making, especially when it's volunteers.
- Another point I didn't mention earlier: the policy rationalizes itself with "When dealing with photographs of people, we are required to consider the legal and moral rights of the subject […] Commons has long held that files that pose such legal or moral concerns", but why would that not apply to paintings or non-AI digital art of identifiable people? And does this really apply to neutral depictions of ancient historical people? There is no need for this policy considering the very low number of such files Commons currently has.
- Prototyperspective (talk) 10:24, 9 December 2025 (UTC)
- Personal art about notable people was always not allowed as being out of scope. That it was only handled through the regular scope rules was never a problem because of the small number of such uploads. Now with the AI tools available there are many more such uploads. To avoid long discussions and case-by-case decisions, we need this new stricter guideline. GPSLeo (talk) 11:28, 12 December 2025 (UTC)
"Personal art about notable people was always not allowed as being out of scope" False. Personal art by non-contributors is speedily deleted, so this is an additional reason why there is no need for this proposed policy. Other than that, I don't know of such a policy, especially not one that clarifies what is meant by "personal art".
"Now with the AI tools available there are much more of such uploads." Arguably false. There aren't many – currently just 99 in the cat. That's the number of files uploaded every ? two minutes maybe?
- Moreover, a significant fraction of them are COM:INUSE, underlining that these files can be useful also on Wikimedia projects, despite that the ones we have are not close to what is possible with these tools in terms of quality (and accuracy, if data on appearance is available). But Commons isn't just there for wikiprojects; it is also for e.g. documentary makers, who often show fictional imagery of historical people (as stated earlier, and which I could prove by linking to several such documentaries with example timestamps).
"To avoid long discussions and case by case decisions, we need this new stricter guideline" For personal art by non-contributors and hoaxes, files can already be speedily deleted without discussion. For files that are of low quality or not useful, there generally are no lengthy discussions. Enabling users to discuss whether a file should be deleted is a point of COM:NOTCENSORED, which this proposed policy would, as far as I can see, invalidate in terms of its current title/proposition. There are a lot of things where one may prefer to not enable discussion. I still see no need for a stricter guideline.
- Prototyperspective (talk) 11:41, 12 December 2025 (UTC)
- Re trout, if I made an error point out which by addressing it (ideally refuting it).
Oppose The page refers to "legal and moral" rights as a justification but doesn't cover cases where the legal and moral rights are expired. If there's another good reason to exclude pictures of, say Cleopatra or Genghis Khan, the policy needs to spell it out. -Nard (Hablemonos) (Let's talk) 17:27, 11 December 2025 (UTC)
- Editorial standards are moral rights too. We seldom make editorial decisions for other Wikis on Commons, but here it is needed to protect our project. Having AI-generated images of historical personalities, used to show how this person looked like, is against good journalistic standards. We still allow such images if created in the context of a relevant art project or scientific paper. But we do not want that every user can just upload such content. GPSLeo (talk) 11:37, 12 December 2025 (UTC)
"used to show how this person looked like" This is not the only use-case of such imagery. An example I made is a documentary film video about, say, Ancient Egypt, and I noted I could provide evidence that such documentaries usually do include fictional imagery of historical people.
"is against good journalistic standards" Commons is not censored based on proposed "journalistic standards". Prototyperspective (talk) 16:12, 15 December 2025 (UTC)
- I think the point is that living people have certain rights that dead people cannot have, and this proposal's main justification lies there. Editorial standards seem to be secondary to the proposal. whym (talk) 23:41, 5 January 2026 (UTC)
- Editorial standards are not moral rights; they're standards used by a certain organization. I see no evidence that journalistic standards exclude the use of tools to show how someone might have looked like. Wikipedia certainly uses much worse, random images produced by people who had no idea how the person may have looked, but by paint and not computers.--Prosfilaes (talk) 03:24, 8 January 2026 (UTC)
- FWIW, those have a certain value in terms of showing how someone was perceived in a given era. For example, all images of biblical figures are from people who had never seen them (unless we count visionaries as actual witnesses). A painting of Jesus by a notable artist has an historical significance that an AI image of Jesus does not, though it would be purely coincidental for either to be a good likeness. - Jmabel ! talk 03:47, 8 January 2026 (UTC)
Support --ReneeWrites (talk) 23:11, 13 December 2025 (UTC)
Strong oppose No reason provided why this is needed when Commons:Scope already exists. --Trade (talk) 15:59, 15 December 2025 (UTC)
- @Trade I assume you mean Support, otherwise the context is not clear for us :) --PantheraLeo1359531 😺 (talk) 16:03, 15 December 2025 (UTC)
- It might be a reaction to my deletion decision in Commons:Deletion requests/File:GPT-4o Studio Ghibli portrait of Barack Obama.png. Abzeronow (talk) 02:00, 16 December 2025 (UTC)
- "we should not have it because i dont want it" is not a very compelling argument Trade (talk) 16:35, 16 December 2025 (UTC)
- I didn't feel like posting a whole treatise for a DR close on how that AI portrait would likely violate the principles of en:WP:BLP and Obama's moral rights as well as the fact that an AI portrait is not an accurate representation of a person, and there is no educational reason why we'd need a Ghibli-style (which essentially violates the copyrights of Studio Ghibli btw) portrait of Obama when we have plenty of portraits of Obama that are educationally useful. Abzeronow (talk) 00:09, 17 December 2025 (UTC)
Oppose for its treatment of dead, especially long-dead, people. AI of living people is problematic. AI pictures of King Tut are not. That rule goes much too far in telling the other projects that depend on us what they may use as illustrations.--Prosfilaes (talk) 07:13, 17 December 2025 (UTC)
- @Prosfilaes: what would you think of a rule about some number of years after death? - Jmabel ! talk 19:17, 17 December 2025 (UTC)
- I personally am not interested in diluting the policy for one person's objection when 18 people have already approved it as is. The Squirrel Conspiracy (talk) 23:52, 17 December 2025 (UTC)
- It is not one person. Moreover, things aren't just about the relative number of votes but also about the content of what people have written. Wikipedia for example has a policy about that, en:WP:NODEMOCRACY.
No reason has been given so far for why Commons should censor/disallow/entirely-delete images of the mentioned type in apparent tension and/or contradiction with other policies – namely at least COM:SCOPE and COM:NOTCENSORED – and with so far unclear need for it (implied also by there being no stated reason). Prototyperspective (talk) 00:06, 18 December 2025 (UTC)
- It is not one person. Moreover, things aren't just about the relative number of votes but also about the content of what people have written. Wikipedia for example has a policy about that, en:WP:NODEMOCRACY.
- Sure. Life+50 or life+70 are nice round numbers, and we should generally be able to find photographic evidence of anyone within that range. There are other people who have made similar objections, and such objections don't lead to good consensus.--Prosfilaes (talk) 02:02, 18 December 2025 (UTC)
Support with Jmabel's caveat. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 01:54, 18 December 2025 (UTC)
Strong support. I don't think we should be hosting deepfakes of any kind, to prevent the spread of misinformation and out of respect towards the person being depicted, among many other ethical and social considerations. It's moon (talk) 03:35, 27 December 2025 (UTC) – Edited on 07:21, 29 December 2025 (UTC)
Support Surely one benefit of this guideline is that it will deter those who attempt to get around copyright violation by using AI-generated portraits. However, considering that the Commons version may differ from or even conflict with those of other communities, images that do not comply with this guideline should also be excluded from COM:INUSE rules. 0x0a (talk) 17:51, 28 December 2025 (UTC)
- @0x0a: that last (about this trumping INUSE) sounds like you are making a different proposal than the one about which everyone above has expressed their opinion. - Jmabel ! talk 01:46, 29 December 2025 (UTC)
- Um, I kinda believe INUSE also needs to be updated accordingly, so I opened a new discussion at
- 👉︎ Commons_talk:Project_scope#Proposed_change:_excluding_images_do_not_comply_with_COM:AIP_from_COM:INUSE_rules -- 0x0a (talk) 10:12, 29 December 2025 (UTC)
- @0x0a: I disagree. Part of the point of the guideline is to not use deepfakes and inaccurate representations of identifiable people not just on Commons but across all Wikimedia projects. Therefore all images that don't meet the proposed guideline should, in my opinion, get deleted once the guideline gets ratified, regardless of whether they are currently in use on other projects or not (with perhaps the only exceptions being images that get used to illustrate the concept of deepfake or similar itself → and even in those cases, they should probably still have been published by the person they depict). It's moon (talk) 10:58, 29 December 2025 (UTC) – Edited on 12:13, 29 December 2025 (UTC)
- Frankly, I don't know which of my statements you disagree with. I clearly support this proposal and have already opened a revision discussion at Commons_talk:Project_scope regarding the part that conflicts with the guideline. 0x0a (talk) 14:50, 29 December 2025 (UTC)
- Whoops, I misread INUSE, I thought you were saying that images used on other projects should be kept, which I disagreed on, but I am realizing you were saying they should get deleted, so turns out we both agree. It's moon (talk) 16:05, 29 December 2025 (UTC)
- I think the oppose votes, even if they are the minority, raise valid points about living people and long-dead people. I'd suggest focusing on living people (and perhaps the recently deceased) for now. This is not to say anything goes for images of the dead; it would just be left undetermined in the meantime. I think that a narrower focus would allow us to ratify some important and non-controversial part of the proposal quickly with broader support. We can continue working on the rest and additively revise the policy after that. whym (talk) 11:38, 5 January 2026 (UTC)
Oppose This seems over thought. Take the bit that's important, tweak it, and add it to COM:PIP. AI images of identifiable people are not allowed on Commons unless they have been published with the subject's permission or the image itself is the subject of significant public commentary in reputable sources.
There's no need to rehash a moral framework, define what a person is, or legislate interactions with overarching standards like SCOPE or DW. There's no need to add technical issues related to things like upscaling; wherever that needs to go, it's not specific to identifiable people. There's no need to try to define a boundary between substantially AI-edited or AI-generated. No need to get into what counts as a good source. The operative bit above sets the standard and people can sort out the finer details in vivo. GMGtalk 14:18, 5 January 2026 (UTC)
Comment Regarding AI images of long-dead people, while not necessarily problematic when it comes to legal and moral rights of the subjects, there are other factors that make these images unsuitable for an educative project like Commons. The example of Tutankhamun illustrates this perfectly. We have multiple forensic studies that reconstruct Tutankhamun's appearance based on the actual structure of his skull and mummy (see [1], [2], [3], [4], [5], [6]). However, files such as File:King Tutankhamun brought to life using AI.gif are problematic because they are historically inaccurate, overly idealized misrepresentations. This just comes to further show how Generative AI can and will make false assumptions about historical subjects and introduce misinformation. It's moon (talk) 14:50, 5 January 2026 (UTC)
- What if a Wikibooks chapter wants to discuss misinformation using AI-generated Tutankhamun images as illustrations? whym (talk) 23:38, 5 January 2026 (UTC)
- I had seen that study before my post with that gif earlier FYI and I'm well aware of scientific facial reconstruction.
- First of all you're making the false assumption that the educational function of media showing ancient people is primarily or even only to educate people on how exactly precisely the given people looked like. That is not necessarily the case, probably not even usually. If I wanted to make an educational podcast video about King Tutankhamun talking about historical facts and the peculiarity of his young age, it would be more interesting if it had some visuals. Such an animation even if not accurate to the most precise tiniest of detail would help the listener to visualize and better imagine what is being talked about plus it makes them take up more information as the content is not dull and boring but exciting. An example here is the Fall of Civilizations podcast that I sometimes enjoy listening to. It also has some visuals to it on YouTube – do you think it's accurate to the last detail? Example Ep 18 Fall of the Pharaohs (1.1 M views) such as its depiction of Ramesses. (Btw I made some educational podcast in the past and went to Commons to find free media to use which was often so gappy that I had to first upload relevant media to here from elsewhere and see how AI media can be useful for podcast&documentary-making sometimes depending on various factors such as how it's contextualized etc.)
- It depends on how the file is used. If it's used in a Wikipedia article where the text implies, or the caption basically says, 'this is exactly how Tutankhamun looked', then it's problematic. But the problem there is how it's used, not that it's on Commons.
- The gif actually looks quite similar to the scientific reconstruction. Maybe you think it's of utmost importance that even the tiniest facial detail is exactly accurate in any depiction and everything else is "misinformation". But that's not what matters to many people or in many contexts, such as when the media is not contextualized as a very realistic restoration and the subject is just e.g. the young age of Tutankhamun. Moreover, most paintings, especially historic and ancient ones, are very inaccurate.
- The question is not whether there are studies that reconstruct a given person's face – and for most notable long-dead people there aren't any – but whether the media is on Commons / free-licensed. There's basically one person (big thanks to him) who creates (static) restorations of notable people – ~150 files in Category:Works by Cícero Moraes – and sometimes some free-licensed image in a study or elsewhere to import (probably fewer than those). For many notable subjects there are no media. Key here is that just because a file is on Commons doesn't mean it has to or needs to be used. Lastly, AI tools here can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction.
- Prototyperspective (talk) 00:22, 6 January 2026 (UTC)
- @Prototyperspective: I think that most regulars understand the proposed policy not as a tool for an absolute prohibition of AI-generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without a clear-cut use case. I as a supporter certainly do.
- I see the current situation as "upload first, ask later", and without robust tools to keep an editorial overview of AI-generated imagery. It's kind of similar to "shall issue" states in the US in regard to firearm laws and concealed carry. I think that most supporters are advocating for the alternative of "ask yourself first if AI is useful, then if yes, upload", the default being "don't upload" (or delete by due process if uploaded anyway). Such a mindset in regard to AI slop and AI-generated imagery in general would be a robust tool for the needed curating. To return to the concealed-carry example: we should switch from a "shall issue" to a "may issue" style of permit. This implies that, of course, an AI-generated Tutankhamun image with a demonstrated solid use case (like the Wikibooks thing above your post) can, may stay. I'm advocating that such AI imagery imperatively needs a worked-out context in its description (prompt, use case, ideally the sources) besides the demonstrated need of actual use somewhere; otherwise it's liable to get deleted.
- Lastly, you wrote
AI tools here can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction.
As it stands now, the tools available to the general public (ChatGPT, DALL-E, Stable Diffusion...) are built in a way to generate eye candy (as you wrote on the German Forum; I could also refer to de:Klickibunti), not scientifically sound media, as that is likely what their users, the general public, expect. Some software that is specifically made for scientific reproductions (like forensic face generation, digital aging or similar) won't be within the purview of this policy. Regards, Grand-Duc (talk) 18:22, 6 January 2026 (UTC)
- Reasonable point, but I disagree: there is no flood of AI imagery, and this proposed policy probably won't be much of a help with this nonproblem even if it were a problem. It's redundant to the policies COM:SCOPE and COM:DIGNITY, while in direct contradiction with COM:NOTCENSORED and, as explained above, COM:SCOPE, where the minor potential benefits are not worth the inconsistency and problems that come with this proposed policy. People can already nominate any such files, or many such files at once, for deletion.
- The Tutankhamun animation has two educational use-cases I can readily think of (and we shouldn't assume that we can, or need to be able to, readily think of all potential use-cases):
1. as part of some video or page about Tutankhamun where the animation is not contextualized as being precise to the last facial wrinkle, but just as a rough AI visualization, e.g. showing his young age; 2. as an illustration of how AI tools can be used to visualize people, such as ancient people, in moving (non-static) format (even if some say the quality is low).
are built in a way to generate eye candy
I know they are not built to make what I described easy. That doesn't mean they can't be used for that. People could for example learn about this use-case and the current issues with it, and adjust these tools or use them in sophisticated ways to create better-quality results of that type.
Some software that is specifically made for scientific reproductions
I'm not talking about other software, though. The current models can already be used for this. It's just not easy. Many people think using AI tools is always easy, but it isn't – the way most people use them may be simple, but some people use them in more sophisticated ways that require a lot of skill and expertise. I outlined roughly how these tools, including just standard Stable Diffusion etc., can be used for reproductions of scientific accuracy, and you seem to have overread or ignored that. This can already be done; I'm just not skilled enough with these tools, and also not motivated enough to spend my time and effort on it to prove it to you right now. My prior low-effort uploads relating to this are more about (enabling) communicating the concept and idea – this again can lead to people fleshing out this application for higher-quality results via adjusting or building tools and developing workflows. But again, not every application requires that each facial detail be accurate, such as the podcast linked above, where at least one ancient person is depicted without scientific-precision-level accuracy (btw typo: it has 11 M views, not 1.1 M). Prototyperspective (talk) 19:19, 6 January 2026 (UTC)
- You repeatedly claim that editors overread or fail to deliberate whenever they disagree with your views ([1], [2], [3]).
- My stance is that we need to build policies based on how AI is currently being used, not how it could or may theoretically be used. I'm not against changing the policy later down the line if we see a change in AI accuracy or a tendency to a more responsible usage, but for now we have to address the current reality. It's moon (talk) 21:20, 6 January 2026 (UTC)
- Your claims are ad hominem argumentation, and I will not stand for them. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 21:42, 6 January 2026 (UTC)
- @Jeff G.: Could you clarify on who you are replying to? It's moon (talk) 22:00, 6 January 2026 (UTC)
- @It's moon: I was replying to Prototyperspective, referencing your characterization of their claims. Sorry for not specifying that, I thought my indentation was clear. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 22:49, 6 January 2026 (UTC)
- Understood, thanks. It's moon (talk) 23:05, 6 January 2026 (UTC)
- Absurd claim; if you ignore all I said in my comment imo it's better to not comment at all. Prototyperspective (talk) 22:54, 6 January 2026 (UTC)
- @Prototyperspective: Better for you, maybe. I didn't ignore it, I agreed with @It's moon's characterization of it. I asked you nicely in this edit 16:09, 7 November 2024 (UTC) to stop with the insults and displaying your pro-AI bias. Now, I am warning you: if you do it again, I am going to report you. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:21, 6 January 2026 (UTC)
- I'm not insulting anybody and didn't make any ad hominem argument, and I am nicely asking you to please not accuse me of things I'm not doing, thanks. Prototyperspective (talk) 23:29, 6 January 2026 (UTC)
- @Prototyperspective Did you or did you not write "you ignore all I said in my comment" 22:54, 6 January 2026 (UTC)? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:36, 6 January 2026 (UTC)
- This is not an insult. It was a rational point: your comment did not address or relate to anything I wrote (where btw imo a constructive, rational response would be to prove me wrong by pointing to the specific text segment to which your comment does relate, if there was any – but there isn't any "ad hominem" in there, let alone is it all just that). With "ignore" I meant you didn't address any of it, which of course one may do, but I'm also free to point that out even if you disagree with the assessment. Prototyperspective (talk) 23:44, 6 January 2026 (UTC)
- Re.
there is no flood of AI imagery
- my experience speaks otherwise. I've seen a ton of clearly AI-generated images uploaded to Commons, including a substantial number of AI-generated or heavily AI-retouched images of people. Omphalographer (talk) 21:51, 6 January 2026 (UTC)
- How is that a flood? People upload floods of mundane low-resolution photos of all sorts, repetitive high-size mundane photos, and so on – probably hundreds per day on average. There are just a few thousand AI files; 1089 in AI-generated humans – that's near-nothing on Commons. And the depictions of historic/ancient people are an order of magnitude below that. Prototyperspective (talk) 23:00, 6 January 2026 (UTC)
- The vast majority of new AI-generated uploads are deleted, most often under CSD F10. The files which end up categorized - and particularly those which are placed in those "AI-generated by subject" categories - are a small fraction of what's coming in. Omphalographer (talk) 23:22, 6 January 2026 (UTC)
- Good point, but it's not a small fraction in my experience (from over a year of regularly tracking all new AI uploads and categorizing probably more than half of AI-related files) – maybe around as many get deleted as are still on Commons.
- If one makes a comparatively large effort to delete low-quality AI media, then it can seem as if there's a flood, but there's days where not even one AI image got uploaded, and I don't think people are making a comparable effort to find and delete low-quality drawings and mundane low-resolution photos. I think we just keep disagreeing on that point, but it's not central to my arguments above – especially since you also say these files are already speedily deleted, so this new policy is not needed, especially not in this indiscriminate/harsh and unjustified shape. Prototyperspective (talk) 23:38, 6 January 2026 (UTC)
- Re.
there's days where not even one AI image got uploaded
- not recently! There are typically somewhere on the order of 50 to 100 AI-generated images uploaded every day. Omphalographer (talk) 21:28, 9 January 2026 (UTC)
- Re.
- I think that most regulars understand the proposed policy not as a tool for an absolute prohibition of AI-generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without a clear-cut use case. Er, what? No, we don't use policy that says these things are "not allowed" and then argue it's fine because it's not an absolute prohibition. Policy should say exactly what it means; laws saying that X is not allowed, with people in the know getting the wink and nod from people also in the know, are a good way to piss off users.--Prosfilaes (talk) 03:24, 8 January 2026 (UTC)
Support Without having read this whole discussion, I've looked at the proposed guideline as it stands today, and I agree with the proposal. It is quite restrictive, but I think we need to be restrictive in handling such AI-generated images. We should always be extremely cautious and only allow a selection of such images where there is a very good reason to host each individual image at all. Gestumblindi (talk) 09:53, 6 January 2026 (UTC)
- One of the controversial points that emerged in the discussion is whether we are legally required to protect dead people's dignity in the same way as that of living people. What do you think? whym (talk) 10:36, 7 January 2026 (UTC)
- @Whym: Well, legally required? That's a question we could discuss in great detail, as it very much depends on the jurisdiction. Germany, for example, has quite strong postmortal personality rights at least for recently deceased people, while Switzerland doesn't have quite the same concept. I don't know how this is in the US; if we applied the same principles as for copyright, we could require an image (be it real or AI generated) to not infringe postmortal personality rights in the US and in its country of origin... But I think regarding AI generated images, that's a point we don't even need to discuss, as the moral and scope issues should be enough to refrain from hosting such images in most cases. Gestumblindi (talk) 18:49, 7 January 2026 (UTC)
- Yeah, it seems like there is a territory-specific component to be considered regarding the living vs. dead issue.
- The current proposal's main justification, as it is written, seems to be the moral rights of the people depicted, though. (It's in the first paragraphs.) If there are other, more important rationales, I think the proposal needs to be revised to more clearly include them and argue based on them. Without such (major) revision, I think it would make a more solid argument if we stick with living people within this iteration. whym (talk) 01:20, 11 January 2026 (UTC)
Support Strakhov (talk) 18:27, 6 January 2026 (UTC)
Support Ternera (talk) 14:02, 7 January 2026 (UTC)
Support Chorchapu (talk) 01:38, 14 January 2026 (UTC)
Support No to AI slop. Nemoralis (talk) 12:07, 3 February 2026 (UTC)
- Agree. If that word has not lost all its meaning yet, "slop" refers to low-quality and/or useless content. However, AI images of identifiable people aren't all (necessarily) low-quality – they could for example be realistic scenes of ancient cities (such as ancient Alexandria) in which an identifiable famous ancient person is shown (such as Cleopatra), which could be used in documentary videos about the subject, to name just one of many positive use examples. (And I've already seen public-broadcast documentaries that use AI images seemingly made in collaboration with historians, which proves such educational use is realistic.)
- The policy as worded is not needed – low-quality files can simply be deleted and a dignity policy already exists – and it is unjustified, which is unprecedented in Commons community decision-making. That decision-making has imo developed in pretty unhealthy ways and is now e.g. more prone to bias and to external efforts to stimulate desired policies, such as content-deletion policies, which I'm sure many external actors such as governments and companies are quite interested in stimulating (they also stand to benefit from this one by limiting such depictions to just a very few instead of democratized widespread access and a general principle of freedom of expression). Prototyperspective (talk) 12:40, 3 February 2026 (UTC)
- I am against the use of visual content generated by AI, even if it is of high quality. Nemoralis (talk) 12:50, 3 February 2026 (UTC)
Oppose as unnecessarily restrictive. If we're using en wiki policy, en:WP:DUE may allow discussion of something with only one reliable source to it; if an AI-generated image is relevant to that discussion, it obviously passes COM:SCOPE and should be able to be uploaded here, even if there is only one reliable source. Additionally, it is unclear whether things like faceswapping are included in this guideline (see wikt:kirkification, for example). Based5290 (talk) 06:25, 12 February 2026 (UTC)
- The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Add autopatrol to file movers
Special:ListGroupRights shows what rights each group has; file mover doesn't have autopatrol now.
I briefly searched the archives and found the following: Commons:Village_pump/Proposals/Archive/2012/08#c-Philosopher-2012-08-04T23:26:00.000Z-Bundled_rights_(Filemover)_-_+1 – a 2012 decision to do exactly this, but apparently never acted upon?
Similarly, jdx also suggested the same: Commons:Village_pump/Proposals/Archive/2019/02#c-Jdx-2019-03-18T08:24:00.000Z-Add_rights_from_the_autopatrollers_user_group_to_the_rollbackers_user_group:_vot RoyZuo (talk) 17:32, 5 February 2026 (UTC)
Support for Filemovers as well as rollbackers. Shaan SenguptaTalk 15:16, 5 March 2026 (UTC)
Publicizing Commons:Uploading works by a third party as guideline
Hello,
I'd like to propose that Commons:Uploading works by a third party (COM:THIRD for short) be upgraded from an essay to a guideline. The text is already well developed, quite widely used as a reference on the COM:Help desk among other places, and is built in a way that fits very well under the description "guide-line": the page IMHO provides sound guidance for people toiling away at collecting third-party works; you may read about the do's and don'ts on that subject there. Such a consensual upgrade is surely warranted for this helpful work. Regards, Grand-Duc (talk) 20:03, 18 February 2026 (UTC)
- Hi, yes, but it should first be translated into the main languages, at least Spanish, French, Arabic, Chinese, German, Russian, etc. I will do French. Yann (talk) 20:06, 18 February 2026 (UTC)
- I suggested marking the page for translation on the talk page back then, but so far only one person has supported it as of this writing. As for the translation, I'll do Indonesian. HyperAnd [talk] 06:58, 19 February 2026 (UTC)
- Should I mark only a section for translation (with the appropriate number of translation units) or the whole page? Abzeronow (talk) 04:50, 20 February 2026 (UTC)
- it is way too long in itself. that's not gonna be helpful for its intended audience: people who don't have much knowledge of copyright matters. newbies are not gonna read the whole thing just to figure out if they can upload a few photos, so they either ignore it or give up uploading.
- i don't see how its content is not already covered by other policy or guideline pages, including com:l and com:dw...
- it contains unnecessary, problematic and esoteric jargon such as RTFM.
- in addition to trimming it down, i think it can be split into 2 pages: one that deals with copyright-expired/inherited stuff, the other for stuff whose authors can be contacted by the uploaders. they have rather different procedures.
- RoyZuo (talk) 15:12, 20 February 2026 (UTC)
- i think the best method to educate newcomers is to write as succinctly as possible and make short explanatory videos. that's way more engaging and informative than a long page of text. RoyZuo (talk) 15:30, 20 February 2026 (UTC)
- If (per RoyZuo) you want to write a guideline that departs significantly from my essay, please create it as a separate document and leave my essay as my essay. - Jmabel ! talk 19:17, 20 February 2026 (UTC)
- FWIW, my main point in writing this was to bring together all the major issues in one reference, rather than (as we had to in the past) refer people to half a dozen different documents, or write complex bespoke answers covering the particular issues that seemed relevant for that user. - Jmabel ! talk 19:20, 20 February 2026 (UTC)
- I have marked the page for translation, and started the French. Yann (talk) 20:42, 20 February 2026 (UTC)
- OK, French is mostly done (thanks to Google). It needs proofreading.
- First, thanks a lot to Jmabel for this huge help page. But from experience, I think that most people start with a wrong assumption. At least 95% of such pictures are of personalities, and people usually assume that they mostly need permission from the subject. While permission from the subject might sometimes be useful to avoid the personality requesting deletion, they first of all need permission from the copyright holder. So it needs a big warning as an introduction. Yann (talk) 23:28, 20 February 2026 (UTC)
- @Yann: I'll edit accordingly. - Jmabel ! talk 00:45, 21 February 2026 (UTC)
Captcha editing?
Hello,
@GPSLeo, Jmabel, Yann: is there now a thingy that makes all anonymous users (mobile or otherwise) complete a CAPTCHA whenever they submit an edit to a file? I remember when the AbuseFilter for mobile edits (as in, this warning here) was implemented, but right now I'm editing on a desktop, not a mobile.
You see, the first and second times I edited File:Carthamus tinctorius 050709b.JPG today, I didn't have to solve a CAPTCHA, but when I made a third edit to it, I needed to solve one, so it seems this new thing was implemented between 14:12 and 14:15 today. How come? ~2026-93563-4 (talk) 14:29, 24 February 2026 (UTC)
- Then again, I did NOT have to solve one when I edited File:CSIRO ScienceImage 10707 Safflower plant.jpg just now at 14:30. What gives? ~2026-93563-4 (talk) 14:31, 24 February 2026 (UTC)
- These are automatic filters built into MediaWiki itself, not set by abuse filters. They are configured by the server admins; we do not have detailed information on how they work. If you do not want to solve these CAPTCHAs, you have to create an account. GPSLeo (talk) 16:22, 24 February 2026 (UTC)
Mass upload proposal
I'm searching for a way to upload a big batch of pictures – either to do it myself or to get help from an experienced user to upload them.
The source website: catza.net
The licence: CC BY 3.0
The author: Heikki Siltala
The text from the website on attribution: All photos © Heikki Siltala. The photos are immediately available both for non-commercial and commercial uses under the Creative Commons Attribution 3.0 License. There is no need to get a more specific permission or to pay money. The attribution is Heikki Siltala or catza.net.
The ideal way would be to automatically file the pictures by their descriptions. For example, this picture (https://catza.net/en/view/code/MCO_g_09_22/172054/) has the description: Escape's Rihanna, JW [MCO g 09 22] . album RuRok cat show Helsinki 2011-04-23 . cat Escape's Rihanna . breeder Escape's . FI . breed MCO . lens Sigma 85mm f/1.4 EX DG HSM . f/1.8 . 1/125 s . ISO 2000 . 85 mm . 12:21:57 . id 172054
So it can be uploaded as: Name: Escape's Rihanna, JW - MCO g 09 22.jpg
== {{int:filedesc}} ==
{{Information
| Description = {{en|Escape's Rihanna, JW [MCO g 09 22] . album RuRok cat show Helsinki 2011-04-23 . cat Escape's Rihanna . breeder Escape's . FI . breed MCO . lens Sigma 85mm f/1.4 EX DG HSM . f/1.8 . 1/125 s . ISO 2000 . 85 mm . 12:21:57 . id 172054}}
| Date = 2011-04-23
| Source = https://catza.net/en/view/code/MCO_g_09_22/172054/
| Author = [https://catza.net/ Heikki Siltala]
| Permission = All photos © Heikki Siltala. The photos are immediately available for both non-commercial and commercial uses under the Creative Commons Attribution 3.0 License. There is no need to get a more specific permission or to pay money. The attribution is Heikki Siltala or catza.net. The earlier www.heikkisiltala.com is also allowed.
}}
== {{int:license-header}} ==
{{CC-BY-3.0}}
[[Category:Photographs by Heikki Siltala (Catza)]]
[[Category:EMS Code g 09 22]]
[[Category:Helsinki cat show 2011]]
If possible the breed category could also be assigned through this code list: https://catza.net/en/list/breed/a2z/
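Since every catza.net description uses the same " . " separator, the filing-by-description step above can be automated. Here is a rough Python sketch based only on the single example shown here – the field prefixes, date position, and filename scheme are inferred from that one description, so they would need checking against more pages before a real batch run:

```python
from string import Template

# Page text skeleton matching the example layout in this proposal.
PAGE = Template("""== {{int:filedesc}} ==
{{Information
| Description = {{en|$desc}}
| Date = $date
| Source = $source
| Author = [https://catza.net/ Heikki Siltala]
}}
== {{int:license-header}} ==
{{CC-BY-3.0}}
[[Category:Photographs by Heikki Siltala (Catza)]]""")

def parse_description(desc):
    """Split a catza.net description on its ' . ' separators and pick
    out the key-prefixed parts (album, cat, breeder, breed, lens, id)."""
    parts = [p.strip() for p in desc.split(" . ")]
    fields = {"title": parts[0]}  # e.g. "Escape's Rihanna, JW [MCO g 09 22]"
    for part in parts[1:]:
        for key in ("album", "cat", "breeder", "breed", "lens", "id"):
            if part.startswith(key + " "):
                fields.setdefault(key, part[len(key) + 1:])
                break
    # The show date sits at the end of the album part, e.g. "... 2011-04-23"
    album = fields.get("album", "")
    fields["date"] = album[-10:] if album[-10:].count("-") == 2 else ""
    return fields

desc = ("Escape's Rihanna, JW [MCO g 09 22] . album RuRok cat show Helsinki "
        "2011-04-23 . cat Escape's Rihanna . breeder Escape's . FI . breed MCO"
        " . lens Sigma 85mm f/1.4 EX DG HSM . f/1.8 . 1/125 s . ISO 2000 . "
        "85 mm . 12:21:57 . id 172054")
fields = parse_description(desc)
# "Escape's Rihanna, JW [MCO g 09 22]" -> "Escape's Rihanna, JW - MCO g 09 22.jpg"
filename = fields["title"].replace(" [", " - ").replace("]", "") + ".jpg"
page = PAGE.substitute(desc=desc, date=fields["date"],
                       source="https://catza.net/en/view/code/MCO_g_09_22/172054/")
```

A batch tool such as Pattypan could then take the resulting filename/wikitext pairs as its input, with the breed category looked up from the code list as a further step.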
What would be the best way to approach this upload? YukiKoKo (talk) 10:45, 25 February 2026 (UTC)
- @YukiKoKo: Hi, and welcome. COM:BATCH would be a good place to start. Please see what Yann needed to do in Special:Diff/1171701501 to mitigate the effects of your headings and templates, and avoid that need in the future. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 13:04, 25 February 2026 (UTC)
- @YukiKoKo: You indicated you wanted to try it yourself. I would recommend having a look at Commons:Pattypan --Schlurcher (talk) 07:50, 26 February 2026 (UTC)
- I've made a request for batch uploading (https://commons.wikimedia.org/wiki/Commons:Batch_uploading/Catza), so I will first wait to see how that turns out. But I will have a look at Pattypan in case the batch-uploading route isn't possible. YukiKoKo (talk) 11:52, 27 February 2026 (UTC)
- I would just manually upload useful photos instead. Photos like [1] aren't really useful, and photos like [2] and [3] require an evaluation of the local freedom-of-panorama laws. There are also a lot of duplicates, like [4] and [5], with one just being a redundant (in terms of educational value) black-and-white version of the same image. Traumnovelle (talk) 22:22, 2 March 2026 (UTC)
Narrow scope for AI on Commons
With the recent adoption of Commons:AI images of identifiable people as a guideline, along with the increasing scrutiny of and backlash against generative AI technology, I think we should consider restricting uploads of AI-generated content to only situations where it is strictly necessary. More formally, I propose adopting the following scope guidelines for AI-generated content on Commons and amending Commons:AI-generated media to include and reflect the following:
Any AI generated or modified file on Commons must meet at least one of the following requirements:
1. It is an independently notable work or part of an independently notable work
2. It is currently being used per the principles of COM:INUSE
3. It is the only example of the output of a particular piece of software (for example, Sora or Grok) or type of output (for example, music or video). Dronebogus (talk) 01:50, 1 March 2026 (UTC)
Oppose, I don't think it is a good idea for now, since it would require significant changes to Commons:AI images of identifiable people when it has just recently been adopted as a guideline, and specific aspects of the text are still being discussed in its talk page. Thanks. Tvpuppy (talk) 02:36, 1 March 2026 (UTC)
- @Tvpuppy: with respect, that’s a weak reason to oppose something. Obviously the old policy would be superseded by and folded into the new one since COM:AIIP is very short and covers a narrower part of the same topic in a very similar way. Dronebogus (talk) 06:12, 2 March 2026 (UTC)
Support per nom. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 03:01, 1 March 2026 (UTC)
Oppose I still think something I proposed over a year ago would be very much in scope and should happen. I pretty much avoid using generative AI myself so this is a "proposing someone else should do this," but here goes.
- We should identify anywhere between half a dozen and 100 different reasonably specific things that a reasonable person might ask AI to generate, e.g. "a photorealistic depiction of New York's Times Square in 1965," "a photorealistic depiction of a macaque," "an anime-style representation of Oliver Twist," "a watercolor of a European dragon," "a 32-bar musical passage in the style of Beethoven." These could be more specific if that works better. Then roughly every three months, or when a particular engine puts out a new release, we would give these same queries to a number of currently available AI engines and upload both their initial creations and what possibly better result a human can get by tweaking in dialog with the AI, with that dialog being part of the documentation. Over time, I imagine we would develop a very good history of the evolution of this technology. I would think that should certainly be in scope, and much more useful than the haphazard stabs people have taken at this sort of thing.
- This is an example of what would be precluded by the proposal here, and I imagine that is not the only thing that would be worth doing that would involve using AI. - Jmabel ! talk 03:41, 1 March 2026 (UTC)
- @Jmabel: doesn't the current framework of policies and guidelines already provide that some AI-generated media are permitted on Commons in any case, even under the assumption that in the future, new additions are unwanted for SCOPE reasons? Namely, I'm thinking along the lines of COM:IAR and COM:PORN. And isn't there a wording in law texts that is only slightly more permissive than a direct and strict prohibition, something like a "shall not" vs. a "must not"?
- So, we could say that AI-generated media are generally unwanted / not allowed / out of scope (similar to the rule for new uploads in PORN), but with a comparably small circumventing exception, using such a "shall-based" wording, which would allow only evidently good material that actually enhances our collections.
- Your example of an upload series with an actual "storyboard" and a well-thought-out concept would and should be permitted in any case, as it is designed to provide actual technological knowledge, and not by a small amount (barring developments in court decisions which could outlaw AI for our purposes).
- I'm not fundamentally opposed to an AI tool usage. In fact, in my family, we have already used AI generated imagery several times to enhance me son's homework to good effects (and the Microsoft Image Generator that we used is also good for laughs when it e.g. blocks a totally inconspicuous German prompt containing the word "Wolfsrudel", "wulf pack", I think because of Nazi associations - replacing it with "mehrere Wölfe", "several wolves", and leaving the remainder unchanged made the prompt work). But I wouldn't never think about using these tools to produce media for Commons, in my opinion, they simply don't fit with our aims, besides a few limited exceptions. Regards, Grand-Duc (talk) 05:31, 1 March 2026 (UTC)
- I don't see where the proposal offers any leeway here. COM:PORN doesn't really say anything about limiting porn: "Low-quality images of x that do not contribute anything educationally useful to our existing collection of images are not needed on Wikimedia Commons." is true for any value of x.--Prosfilaes (talk) 05:46, 1 March 2026 (UTC)
- (cross-posted) @Grand-Duc: unless I am misreading, and I do not think I am, Dronebogus's proposal here would absolutely bar what I am suggesting, so I am opposing the proposal. In terms of allowance for this sort of thing
It is the only example of the output of a particular piece of software (for example, Sora or Grok) or type of output (for example, music or video)
is much narrower than what I am suggesting here.
- As I've said before, at least at the current state of generative AI I'm pretty skeptical about the use of AI imagery to illustrate anything other than the topic of AI imagery, but Dronebogus's proposal seems possibly even a bit narrow for illustrating AI imagery in Wikipedia. Do we really mean to say that we can have no pool of illustrations of what can be done with a given AI tool beyond what is already in use in existing articles, not even something that illustrates a capability that might not otherwise be obvious? And is this going to be the one area in which Commons has virtually no interest in content of historical interest (the history of the development of generative AI)? Because that would seem to be a consequence of adopting this proposal as it stands. - Jmabel ! talk 05:53, 1 March 2026 (UTC)
- @Jmabel: You are looking for unreasonable reasons to oppose a reasonable proposal. If someone actually did whatever you’re proposing they would presumably put it in an article, no? Then it would be COM:INUSE and not a violation. Dronebogus (talk) 05:54, 1 March 2026 (UTC)
- @Dronebogus: No, they would not (mostly) be put in an article. I can't think of anywhere that files on Commons that amount to a large data set are all put in an article somewhere else. A good example of this (not AI-related) that I'm (slowly) curating at the moment is , an early 20th-century collection of mostly 19th-century photographs, mainly of Seattle, with comments by Thomas Prosch. Most of these will never make it into an article, partly because for many of them if we wanted just the photographic image (not his hand-written notes), we have a better print elsewhere. If you want, I could provide numerous other examples of content we absolutely should have on Commons that is never likely to find its way into any of our "sister projects." - Jmabel ! talk 05:45, 2 March 2026 (UTC)
- I think an exception for illustrating AI even if not INUSE could be added to the guidelines, but I’m not sure how to word it. I want Commons to be able to provide illustrations on the topic of AI art, but I don’t want AI art to be used outside of AI related topics. The purpose of this proposal is to try to stop the latter before it happens while acknowledging and working around the necessity of the former. Dronebogus (talk) 05:58, 2 March 2026 (UTC)
- @Dronebogus: we can limit how AI-generated content on Commons is categorized, but we cannot limit how other projects use our content. - Jmabel ! talk 18:52, 2 March 2026 (UTC)
- They won’t use AI if we don’t host it. Dronebogus (talk) 00:30, 3 March 2026 (UTC)
Oppose "It is the only example of the output of a particular piece of software" feels absolutely punitive. There is basically no case, besides a unique 2D piece of artwork, where two examples aren't better than one. As Jmabel says, chronological and by-subject series are valuable views into how a generative AI produces files. We shouldn't demand that one file an old version of Grok got hilariously wrong be the only image we'll store here.--Prosfilaes (talk) 05:46, 1 March 2026 (UTC)
- @Prosfilaes: the “only one example” clause could be amended to include versions of a piece of software— i.e. baz by Grok 1.0.jpg is not incompatible with baz by Grok 1.7.jpg Dronebogus (talk) 06:06, 2 March 2026 (UTC)
Oppose I didn't understand item 3 in the requirements. Please rephrase it. Gryllida (talk) 07:37, 1 March 2026 (UTC)
- I don’t know what doesn’t make sense. It states that one potential rationale for keeping an AI-generated or modified file would be that no other files exist demonstrating the output of the software used to generate it, and/or there are no other AI files of the same media type (e.g. audio or video). For example, if baz.jpg was the only file generated by foo.AI, or baz.mp4 was the only AI video on Commons, then it would be in scope because no other examples of foo.AI outputs were available on Commons, or no other examples of AI videos were on Commons. Dronebogus (talk) 20:29, 1 March 2026 (UTC)
Oppose No need for this censorship of a production method and tool increasingly common throughout society. No right to force the bias or opinions of a few as repressive restrictions onto all, instead of looking at the case(s) at hand via standard procedures and existing policies. Prototyperspective (talk) 21:19, 1 March 2026 (UTC)
- No need for this imposition of non-human-created slop on a project that features human-created human-curated works that provide an educational resource increasingly common throughout society. No right to force the pro-AI POV, bias, or opinions of one AI advocate to open the floodgates to all AI advocates. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 22:10, 1 March 2026 (UTC)
- No need to use the files if you don't like them. And it's not pro-AI POV bias; I just don't wish for this novel, increasingly common production method to be censored indiscriminately. And "floodgates" is a false description. You could start working on the actual flood of 92,000 files in Category:All media needing categories as of 2021 instead of forcing your censor-things-I-don't-like attitude onto others when there is no genuine problem so far, or flood at all. Prototyperspective (talk) 22:15, 1 March 2026 (UTC)
- The flood is already here; Category:AI-generation related deletion requests is just what we've been able to catch since 2022-12-03. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 22:48, 1 March 2026 (UTC)
- If you look at how many AI files are on Commons overall, that's a tiny fraction – e.g. much fewer than the uncategorized files of just one year, or the various kinds of useless photos, such as blurry photos or mundane photos of nothing in particular showing things there are already thousands of photos of. Moreover, the policy proposed here would increase rather than reduce the amount of work, and for no reason. At the least it wouldn't really help with this, and low-quality files by noncontributors can already be speedy deleted. There are also lots of low-quality drawings and logos, yet drawings and logos aren't all banned. Prototyperspective (talk) 10:32, 2 March 2026 (UTC)
- @Prototyperspective: So let's just ban all AI-generated content - less work, much brighter line. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 19:15, 2 March 2026 (UTC)
- Goes back to what I said there. Also people don't need to make these DRs and spend any time on them, I understand that you do not recognize any usefulness of any media produced in this way (basically called a bias) but it doesn't mean it doesn't exist, and third we'd get far more uploads with it not being declared & labelled as made using AI so it could just as well be more work. Fourth, we don't ban lots of other things with more DRs or where the fraction of useful files is low such as Category:MobileUpload-related deletion requests, Category:Nudity and sexuality-related deletion requests, etc. Things can already be easily deleted and often speedily so. Why should we ban a notable organization's logo just because it's made in a low-budget method that uses novel tools for example? But let's not continue this discussion. Prototyperspective (talk) 22:15, 2 March 2026 (UTC)
Comment - at the present time, the biggest issue I'm seeing with AI-generated content is users "retouching" photos using ChatGPT, Gemini, Apple Photos Cleanup, or other similar AI tools before uploading them to Commons. What's most in need of change right now is the user messaging around this issue, not policy - something as simple as "if you're going to upload an AI image, please upload the original first, and don't upload AI images of people" would be a huge help. Omphalographer (talk) 03:20, 4 March 2026 (UTC)
- +1 - Jmabel ! talk 03:23, 4 March 2026 (UTC)
- Maybe if this doesn’t pass we just ban AI enhancement? Dronebogus (talk) 11:12, 4 March 2026 (UTC)
- The problem is not editing with AI tools itself. The problem is how people do this. Removing a lens flare or dirt on the sensor with an AI tool in Photoshop or CaptureOne is fine. Uploading a photo to ChatGPT for the same purpose is not, as ChatGPT might change anything and not just what you wanted to be changed. GPSLeo (talk) 18:37, 4 March 2026 (UTC)
- I agree with GPSLeo. There are already a fair number of good, specialized, AI-based graphics tools, but the attempts at general-purpose tools have largely shown that it is relatively easy to build an artificial bullshit artist, and much harder (at least for now) to build an artificial expert. - Jmabel ! talk 21:42, 4 March 2026 (UTC)
- @Jmabel: I still remember bullshit artist en:User:Bad article creation bot. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:11, 4 March 2026 (UTC)
- Maybe then, if it’s not already mandatory, make it required to upload the original alongside the retouched version and disallow overwriting a non-AI modified image with an AI modified one. Any AI retouched image without the original available should be speedy deleted. Dronebogus (talk) 04:43, 5 March 2026 (UTC)
- Uploading the original for every file only because someone routinely runs a dust spot removal over all files seems to be completely exaggerated. Such a rule would be hard to fit with the workflow of most photographers. GPSLeo (talk) 05:27, 5 March 2026 (UTC)
- We could work out a common-sense exception for trusted users who upload professional grade photography and provide detailed specifications on their hardware (i.e. cameras) and software (i.e. what AI tool they used and how). I think 99% of cases where it’s even evident AI has been used are your average joe shmoe single-upload user putting a grainy 100px historical image through slop.ai to make it 200% more betterer and inadvertently adding Bigfoot into the image. Dronebogus (talk) 06:31, 5 March 2026 (UTC)
- "Uploading the original for every file only": one could require them to upload the untouched original as the first version and only upload modified ones as new revisions of the file. In the file history section users can then still see the other version(s). Prototyperspective (talk) 11:56, 5 March 2026 (UTC)
- "workflow of most photographers": still, all things being equal, barring copyright or personality rights issues, it is certainly best practice for documentary photography to make your original photo, straight from the camera, available (and, typically, overwrite that with the preferred version). I'll admit I'm not 100% on doing that myself, but I'm close. And that is entirely independent of AI-driven tools, which I don't use. Typical examples: File:Nicolae Tonitza - Portretul lui Gala Galaction (Omul unei lumi noi) (1919-1920).jpg, File:Ithaca, NY - W State Street, looking west from S Cayuga Street.jpg. I would not require this, but it certainly can be a lot clearer than a verbal description of retouching. - Jmabel ! talk 00:02, 6 March 2026 (UTC)
- We have so many complaints from good photographers, who want to contribute but fail with the technical difficulties. Requesting them to upload the original and then the edited version would make the process even more complicated. GPSLeo (talk) 07:35, 6 March 2026 (UTC)
- "required to upload the original": that would completely eliminate anything from third parties. - Jmabel ! talk 23:49, 5 March 2026 (UTC)
- I was referring to those uploads where the original is by the user who uploads, or the user has access to it. If the unedited original is not available to them because only the modified version was posted online, that obviously makes it something that can't be expected of them. Users who forgot to do so could be asked to upload it as a new revision and then revert the revision. Prototyperspective (talk) 11:25, 6 March 2026 (UTC)
- "and much harder to (at least for now) to build an artificial expert": that's the wrong way to use these tools – they are not there for any of the expertise; the expertise should be about 100% in the human who uses these tools in often sophisticated ways, not in the tool. Prototyperspective (talk) 11:54, 5 March 2026 (UTC)
- From what I've seen some users say in response to DRs, part of the problem is that many consumer AI tools (including, but not limited to, ChatGPT) simply don't behave predictably when processing images. Sometimes they'll do an acceptable job of retouching an image - e.g. removing dust and scratches, colorizing black and white photos, adjusting levels and contrast - and sometimes they'll go off the rails and completely recreate an image from "memory", introducing changes in the content of the image. It's not clear what controls how these tools will behave, or if it's even possible to reliably control them. And unless we can give users specific, reliable advice on how to use these tools responsibly, the safest option will be to advise against using them. Omphalographer (talk) 22:13, 4 March 2026 (UTC)
- As an example: the uploader of File:201A Tube characteristics.png used a "text recognition" feature in (Microsoft) "Word with Copilot" which replaced all the labels in the chart with nonsense. (Worse: it wasn't even the usual unreadable text - most of it was contextually appropriate nonsense, making the problem harder to notice.) The original has been uploaded now, but you can compare to the modified version in file history. Omphalographer (talk) 03:19, 5 March 2026 (UTC)
- "Removing a lens flare or dirt on the sensor with an AI tool in Photoshop or CaptureOne is fine. Uploading a photo to ChatGPT for the same purpose is not, as ChatGPT might change anything and not just what you wanted to be changed": this comes from inexperience with these tools – a valid point in principle, but there are now tools where you can select the part of the image to change and describe how, so it does the same as those other tools, just much easier, lower-budget, quicker and often better. Prototyperspective (talk) 11:55, 5 March 2026 (UTC)
Oppose I think we're doing ok with the slow accretion of guidelines and best practices regarding AI. This one goes far enough beyond that to be a non-starter. "It is currently being used" kind of doesn't make sense without an additional exception, as nothing is in use at the time of upload, but it must be in scope to be uploaded. — Rhododendrites talk | 14:48, 5 March 2026 (UTC)
- I agree the proposal is DOA in its current form, but this discussion has resulted in a lot of constructive criticism I’ll apply to a revised version. I still absolutely believe Wikimedia needs to take a hard line against generative AI (just like crypto and all the other toxic, kleptocrat-driven web 3.0 bullshit being forced down our throats). But we also need to talk about generative AI in an educational context. I want Commons to have a broadly anti-AI policy written down that also accommodates the necessity of hosting AI-generated content to illustrate and discuss such content, in a way that feels sensible and doesn’t rely on either being extremely vague or extremely specific. Dronebogus (talk) 14:58, 5 March 2026 (UTC)
- Millions of people and lots of countries and their education systems etc think differently. There is no reason to make Commons very biased in one way or the other and exclude lots of content or take a political stance on this. Your view of this novel technology is your opinion. Prototyperspective (talk) 15:05, 5 March 2026 (UTC)
- You are literally the only person I’ve ever encountered passionately defending AI generative garbage who doesn’t appear to have an economic stake in it. The broad consensus of the general online public that actually bothers to voice an opinion is that nearly all generative AI technology and output sucks. I’d say it’s a solution in search of a problem, but that’s too generous. It’s a “solution” to the “problem” of needing humans to produce creative works. And before you say “it gives people who can’t do x a chance to do x”— that’s a feature of being human, not a bug. If you can’t do x you either learn or ask someone else! That’s like the idea behind Wikimedia! Generative AI as it currently stands is directly contrary to this idea of human beings sharing knowledge and skills! Dronebogus (talk) 15:15, 5 March 2026 (UTC)
- That doesn't surprise me – related concepts are 'echo chamber', 'filter bubble', and 'confirmation bias'. And that's not the online consensus at all which is a bad way to assess consensus anyway. Generative AI as it currently stands is directly supportive of the idea of human beings sharing knowledge and skills as more people have access to better idea/concept visualization and more media depictions can finally enter the public domain/creative commons. Prototyperspective (talk) 15:18, 5 March 2026 (UTC)
- (Edit conflict) Well, this opinion is shared by a lot of people. We need to be very cautious about such generalizations. The dominant discourse is pro-AI, but it doesn't mean the majority of people are pro-AI. At the very least, most people I know are very skeptical or critical about AI. I don't know how we should formulate Commons policies about AI, but we should keep an independent and critical view about it. Yann (talk) 15:18, 5 March 2026 (UTC)
- The dominant discourse is pro-AI if by “dominant” you mean “rich and loud”. If you look at social media, comments sections, youtubers, artists, people on this very website, it’s overwhelmingly negative. Dronebogus (talk) 15:21, 5 March 2026 (UTC)
- If I look outside of reddit and Wikipedia, it's nuanced and/or positive. In any case, that's a bad way to gauge the public view; for example there are people stoking up divisions and polarizations, paid commenters, algorithms that drive disagreement and upset, etc. It doesn't matter either way what the majority opinion on this is. We don't censor lots of other things that people don't like – people are free to hate these things and not use them. "rich and loud": the loud ones are the ones being hyperbolic, non-nuanced haters of anything that has anything to do with generative AI. Prototyperspective (talk) 15:27, 5 March 2026 (UTC)
- "people stoking up divisions and polarizations": because they are voicing their honest dislike of this technology and what it’s doing to art and culture? "paid commenters": Yes, I’m sure there’s big money to be made trashing big tech’s new favorite thing in the whole world, something that basically prints money for free. "It doesn't matter either way what the majority opinion on this is": public opinion does actually matter. "We don't censor lots of other things that people don't like": I’m not saying we should censor it, but just like how w:wp:gratuitous states we shouldn’t use explicit images to illustrate non-explicit subjects, I don’t think we should use AI to illustrate topics unrelated to AI. "hyperbolic, non-nuanced haters of anything that has anything to do with generative AI": I don’t hate or disapprove of generative AI 100%, if w:Neuro-sama counts as generative AI. And while I don’t exactly like that he used it, ZUN also used AI in the latest Touhou game and took pains to demonstrate how to use it in an ethical manner that doesn’t negate the importance of real, serious human contribution. Dronebogus (talk) 17:11, 5 March 2026 (UTC)
- I was giving examples why it's not a good idea to base things on personal subjective impressions of online opinion. There are financial interests for and against various kinds of AI uses, and AI use in general. And if we censored away everything we feel is widely disliked, we may be moving to censoring videos of sexual intercourse, homosexuality, fetishes, religious desecration, and political caricatures next. And claiming you are not saying/proposing something does not make it so. Prototyperspective (talk) 17:59, 5 March 2026 (UTC)
- FWIW, I'm pretty neutral on the long-term potential of generalized AI, but so far we are at a phase similar to when Ambrose Bierce remarked about electricity circa 1890 that so far it had been shown that it could pull a streetcar better than a candle and light a room better than a horse. - Jmabel ! talk 00:10, 6 March 2026 (UTC)
- Yes, I feel the same way. Only it’s worse than just inferior; it’s actively harmful. AI as a concept has potential, but right now it’s being applied fast-and-loose in places it doesn’t need to be applied, or places it could be applied responsibly but isn’t. It’s more like how back in the early-mid 20th century we thought we’d be warming our hands by a lump of radium in the fireplace— yeah, radioactivity is useful, but not like THAT. Thank god no-one started putting radium fireplaces in homes by default like every tech corporation is doing with AI in everything. Dronebogus (talk) 06:07, 6 March 2026 (UTC)
- Okay, so how much have you used the latest AI? I felt like this about LLMs (because they just parrot things to sound plausible, not accurate), but these aren't LLMs, and it's not about how we feel. I doubt you have used them for coding, diagrams, creative ideas you didn't have time for, or specific images you have in mind that would take hours to create otherwise. In this area it often feels like people have super strong opinions and extensive advice to give but little experience or data underneath it. I'm not saying it's not harmful or that it isn't currently overdone, but knee-jerk reactions to e.g. companies scrambling to put AI into everything where it's not needed/wanted/useful, or to sensationalist media coverage relating to some real issues, aren't helping, and additionally would further the perception that these tools are entirely useless and a problem when reality is more nuanced than that. Prototyperspective (talk) 11:34, 6 March 2026 (UTC)
- I was just thinking about how generative AI is like nutrient paste in RimWorld: maybe you don’t care that relying on nutrient paste puts talented, passionate chefs out of a job because now everyone can be a “chef” at the push of a button. Maybe you can justify the space wasted by the room-sized dispenser by pointing out a regular stove uses slightly more electricity and is far less efficient in its output. Maybe you think a human cooking a delicious meal is functionally identical to the dispenser grinding up the ingredients into flavorless mush. Maybe you even like nutrient paste and know lots of people who do. But the fact is most people hate eating nutrient paste. They don’t like seeing a freezer stocked with nutrient paste meals. They don’t like biting into their food and finding out it’s actually just paste. They don’t like being forced out of the cooking jobs they spent years honing and getting replaced by “nutrient paste engineers” (which isn’t a real job in RimWorld, just like how “prompt engineer” isn’t a real job IRL). You can start your own colony with a cult of transhumanism that mandates that everyone eat nutrient paste, and attract lots of like-minded nutrient paste eaters to your colony, but most of us at the Wikimedia colony would just like to eat real human food. Dronebogus (talk) 12:07, 6 March 2026 (UTC)
- Why would one eat nutrient paste if the other tastes better.
If one has the option for both in a specific case like say a specific meal occasion (such as a lunch during travel on day xy) I see no reason for why to pick it. Especially when both meals are equivalent or the nutrient paste is better because eg it's healthier and tastes better then why the heck should I be forced to only eat the manmade dish with other options being prohibited? If you think for cases where both are available the latter is intrinsically better due to being manmade/handmade the traditional way then you're free to have this opinion but shouldn't insist on everybody adopting the same view. Btw, the ideas/philosophy has some resemblance to this. Prototyperspective (talk) 12:27, 6 March 2026 (UTC)- That’s the thing: nutrient paste can technically meet your colonists’ raw nutritional requirements, and extremely efficiently too, but it tastes disgusting unless you are an ascetic who doesn’t care about taste or have adopted a pro-nutrient paste ideology. To use a real world example: the w:dilberito, which was basically real life nutrient paste. It was supposed to be the next big thing in food. It (supposedly) provided everything your body needed, but it apparently tasted awful. It was only acceptable fare to people who can eat without concern for taste (and maybe like two people who actually enjoyed it). The point is AI generated content may be able to technically meet the minimum requirements of whatever it’s being used for, but most people think it’s about as palatable as nutrient paste or a diberito. And putting AI in an article or whatever is like putting nutrient paste it in a meal at a restaurant— you can order something else, but if I wanted this meal I have to eat the paste as part of it. Dronebogus (talk) 12:50, 6 March 2026 (UTC)
most people think it’s about as palatable as nutrient paste or a diberito
you think that. I don't. Millions and probably most people don't, in my country I think and it seems to be most people. Regardless of what they think, we shouldn't censor things based on taste. There's country where homosexuality is punished and acceptance of it a minority view. That files are on Commons don't mean they have to be used. It's not technical requirements but holistic all-criteria requirements which is more broad than making some criteria you personally are a fan of about production methodology a critical decisive top criteria. Prototyperspective (talk) 12:54, 6 March 2026 (UTC)millions and probably most people
uh, citation needed. I at least have anecdotal evidence a lot of people do not like AI. I can point out English Wikipedia, the biggest Wikimedia site by far and one of the biggest websites on the planet, has a laundry list of policies, essays, and guidelines on AI that are mostly negative. I could point out the lengthy “concerns” section on the AI boom article, or the existence of w:AI slop as a concept and term. I could point out the extremely negative reaction to uses of AI in the media, like w:It's the Most Terrible Time of the Year, or the backlash against w:Théâtre D'opéra Spatial. You are relying on a silent majority that possibly doesn’t even exist, and comparing hostility towards AI generated content (a new and highly controversial concept/technology) to intolerance of homosexuality (a natural, healthy behavior among humans and animals that nevertheless results in people getting marginalized, hurt, and killed by ignorant individuals and societies). Dronebogus (talk) 13:10, 6 March 2026 (UTC)- I'd say citation needed for your claims. Given that millions use these tools, it's not a stretch or near-self-explanatory. But again it's not about and should not be about what the dominant or >50% majority contemporary opinion on a subject is. I'm sadly well aware that the existence of the term "AI slop" is what many people believe is what can settle debates or a strong point or just slightly convincing. The majority goes about their day and either uses the tools at work or for fun or daily life things and/or doesn't bother about how Wikimedia projects handle this. I'm not "relying" on them because again majority taste and sentiment aren't what matters. Prototyperspective (talk) 13:29, 6 March 2026 (UTC)
- Okay, let’s assume you’re right that, yes, a majority of people like or don’t care about AI. A non-trivial minority really does not like it. There is no offense to either camp in using exclusively human-made files in the vast majority of contexts. However, the anti-AI camp is offended by the use of AI and by the pro-AI camp’s use and subsequent justification of it; both parties come out unsatisfied and hostile toward each other with no real benefit to show for it. So human content is a win for both parties and AI is a loss for both parties. Dronebogus (talk) 13:44, 6 March 2026 (UTC)
- I'd say citation needed for your claims. Given that millions use these tools, it's not a stretch; it's nearly self-explanatory. But again, it's not, and should not be, about what the dominant or >50% majority contemporary opinion on a subject is. I'm sadly well aware that the mere existence of the term "AI slop" is what many people believe can settle debates, or at least counts as a strong or slightly convincing point. The majority go about their day and either use the tools at work, for fun, or for daily-life things, and/or don't bother about how Wikimedia projects handle this. I'm not "relying" on them because, again, majority taste and sentiment aren't what matters. Prototyperspective (talk) 13:29, 6 March 2026 (UTC)
- That’s the thing: nutrient paste can technically meet your colonists’ raw nutritional requirements, and extremely efficiently too, but it tastes disgusting unless you are an ascetic who doesn’t care about taste or have adopted a pro-nutrient-paste ideology. To use a real-world example: the w:dilberito, which was basically real-life nutrient paste. It was supposed to be the next big thing in food. It (supposedly) provided everything your body needed, but it apparently tasted awful. It was only acceptable fare to people who could eat without concern for taste (and maybe like two people who actually enjoyed it). The point is AI-generated content may be able to technically meet the minimum requirements of whatever it’s being used for, but most people think it’s about as palatable as nutrient paste or a dilberito. And putting AI in an article or whatever is like putting nutrient paste in a meal at a restaurant: you can order something else, but if I want this meal I have to eat the paste as part of it. Dronebogus (talk) 12:50, 6 March 2026 (UTC)
- Why would one eat nutrient paste if the other tastes better?
- I was just thinking about how generative AI is like nutrient paste in RimWorld: maybe you don’t care that relying on nutrient paste puts talented, passionate chefs out of a job because now everyone can be a “chef” at the push of a button. Maybe you can justify the space wasted by the room-sized dispenser by pointing out a regular stove uses slightly more electricity and is far less efficient in its output. Maybe you think a human cooking a delicious meal is functionally identical to the dispenser grinding up the ingredients into flavorless mush. Maybe you even like nutrient paste and know lots of people who do. But the fact is most people hate eating nutrient paste. They don’t like seeing a freezer stocked with nutrient paste meals. They don’t like biting into their food and finding out it’s actually just paste. They don’t like being forced out of the cooking jobs they spent years honing and replaced by “nutrient paste engineers” (which isn’t a real job in RimWorld, just like how “prompt engineer” isn’t a real job IRL). You can start your own colony with a cult of transhumanism that mandates that everyone eat nutrient paste, and attract lots of like-minded nutrient paste eaters to your colony, but most of us at the Wikimedia colony would just like to eat real human food. Dronebogus (talk) 12:07, 6 March 2026 (UTC)
- FWIW, I'm pretty neutral on the long-term potential of generalized AI, but so far we are at a phase similar to when Ambrose Bierce remarked about electricity circa 1890 that so far it had been shown that it could pull a streetcar better than a candle and light a room better than a horse. - Jmabel ! talk 00:10, 6 March 2026 (UTC)
- I was giving examples of why it's not a good idea to base things on personal subjective impressions of online opinion. There are financial interests for and against various kinds of AI uses and AI use in general. And if we censored away everything we feel is widely disliked, we may be moving to censoring videos of sexual intercourse, homosexuality, fetishes, religious desecration, and political caricatures next. And claiming you are not saying/proposing something does not make it so. Prototyperspective (talk) 17:59, 5 March 2026 (UTC)
- If I look outside of reddit and Wikipedia, it's nuanced and/or positive. In any case, that's a bad way to gauge the public view; for example there are people stoking up divisions and polarizations, paid commenters, algorithms that drive disagreement and upset, etc etc. It doesn't matter either way what the majority opinion on this is. We don't censor lots of other things that people don't like – people are free to hate these things and not use them.
- The dominant discourse is pro-AI if by “dominant” you mean “rich and loud”. If you look at social media, comments sections, youtubers, artists, people on this very website, it’s overwhelmingly negative. Dronebogus (talk) 15:21, 5 March 2026 (UTC)
- (Edit conflict) Well, this opinion is shared by a lot of people. We need to be very cautious about such generalizations. The dominant discourse is pro-AI, but it doesn't mean the majority of people are pro-AI. At the very least, most people I know are very skeptical or critical about AI. I don't know how we should formulate Commons policies about AI, but we should keep an independent and critical view about it. Yann (talk) 15:18, 5 March 2026 (UTC)
- That doesn't surprise me – related concepts are 'echo chamber', 'filter bubble', and 'confirmation bias'. And that's not the online consensus at all which is a bad way to assess consensus anyway. Generative AI as it currently stands is directly supportive of the idea of human beings sharing knowledge and skills as more people have access to better idea/concept visualization and more media depictions can finally enter the public domain/creative commons. Prototyperspective (talk) 15:18, 5 March 2026 (UTC)
- You are literally the only person I’ve ever encountered passionately defending AI generative garbage who doesn’t appear to have an economic stake in it. The broad consensus of the general online public that actually bothers to voice an opinion is that nearly all generative AI technology and output sucks. I’d say it’s a solution in search of a problem, but that’s too generous. It’s a “solution” to the “problem” of needing humans to produce creative works. And before you say “it gives people who can’t do x a chance to do x”— that’s a feature of being human, not a bug. If you can’t do x you either learn or ask someone else! That’s like the idea behind Wikimedia! Generative AI as it currently stands is directly contrary to this idea of human beings sharing knowledge and skills! Dronebogus (talk) 15:15, 5 March 2026 (UTC)
- Millions of people and lots of countries and their education systems etc think differently. There is no reason to make Commons very biased in one way or the other and exclude lots of content or take a political stance on this. Your view of this novel technology is your opinion. Prototyperspective (talk) 15:05, 5 March 2026 (UTC)
- I agree the proposal is DOA in its current form, but this discussion has resulted in a lot of constructive criticism I’ll apply to a revised version. I still absolutely believe Wikimedia needs to take a hard line against generative AI (just like crypto and all the other toxic, kleptocrat-driven web 3.0 bullshit being forced down our throats). But we also need to talk about generative AI in an educational context. I want Commons to have a broadly anti-AI policy written down that also accommodates the necessity of hosting AI generated content to illustrate and discuss such content in a way that feels sensible and doesn’t rely on either being extremely vague or extremely specific. Dronebogus (talk) 14:58, 5 March 2026 (UTC)
In addition to what Dronebogus says, with which I totally agree, we cannot say that AI-generated and human-generated content are a “free” choice. Because AI use is cheap and easy for the end-user, many people are tempted to use it. But it is neither free nor easy for society in general. This is not a competition on equal terms. Yann (talk) 13:51, 6 March 2026 (UTC)
- It's not a competition to begin with. Computer use is neither free nor easy for society in general. Prototyperspective (talk) 13:59, 6 March 2026 (UTC)
- It gets easier and, measured in money, cheaper every day - to the point where you can just talk to a device and ask it to have media generated along your guidelines. The upload to Wikimedia is just a formality. So, no argument here. Alexpl (talk) 21:52, 6 March 2026 (UTC)
- That is speculation and is currently not true, except if quality and accuracy are none of your criteria and/or it's something quite simple. I did not make a new 'argument' there but just addressed two claims in the prior comment and showed how these are basically false. If it gets easier and cheaper to create good-quality, useful visual illustrations for subjects where these would be useful, then that's great. Prototyperspective (talk) 22:05, 6 March 2026 (UTC)
- That is the core of your misconception: that AI art is good, even disregarding personal taste. AI art is frequently full of errors and “hallucinations”. Even if accurate, it simply doesn’t inspire trust among anyone with critical thinking skills who has seen the utter BS it has spit out in the past. So going back to the nutrient paste analogy, it’s like there being a non-zero chance of the nutrient paste containing toxic waste to make it seem more substantial. A human chef might cook food badly or improperly, resulting in anything from a lousy meal to food poisoning, but they won’t put toxic waste in your food and lie about it meeting your nutritional requirements. Dronebogus (talk) 11:00, 7 March 2026 (UTC)
- I don't want Prototyperspective and his ilk piping in the electronic version of toxic waste. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:08, 7 March 2026 (UTC)
Proposal: Allow file movers to delete single-revision redirects during file moves
[edit]I would like to propose adding the delete-redirect right to the file mover user group on Wikimedia Commons. This would allow file movers to delete single-revision redirects when they block a file move.
Background
[edit]On Wikimedia Commons, file renaming is performed by users with the file mover or sysop right. However, when the destination title already exists as a redirect, the move can fail even if that redirect is trivial.
In such situations, file movers must request administrator assistance to delete the redirect and complete the move. In many cases, these redirects are:
- created automatically by previous file moves
- redirects with only one revision
- redirects with no meaningful history or content
Despite being technically trivial, these situations require administrator intervention, which creates unnecessary delays and additional administrative work.
Existing precedent
[edit]Similar issues have been discussed in the context of page moves on other Wikimedia projects. MediaWiki development work has recognized that single-revision redirects generally have no meaningful history and can safely be removed when they block a move operation.
The purpose of the delete-redirect capability is not to grant general deletion powers, but to allow the system to remove trivial redirects automatically during a move action.
Proposed change
[edit]Grant the delete-redirect user right to the file mover group on Wikimedia Commons.
In practice, this would allow file movers to delete redirects only when all of the following conditions are met:
- The page at the destination title is a redirect.
- The redirect has only one revision.
- The deletion occurs as part of a file move operation.
- The redirect would otherwise block the move.
This would not grant file movers general file/page deletion rights.
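If adopted, the Phabricator request would amount to a small permissions change in the site configuration. A sketch of what it might look like (the group key `filemover` and the exact placement in the Wikimedia configuration are assumptions; `delete-redirect` is an existing MediaWiki right designed for exactly this move-time case):

```php
// Sketch only: grant the existing 'delete-redirect' right to file movers.
// The right only permits deleting a single-revision redirect that stands
// in the way of a move; it confers no general deletion ability.
$wgGroupPermissions['filemover']['delete-redirect'] = true;
```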
Benefits
[edit]This change would:
- reduce routine administrator workload
- speed up routine file renaming
- eliminate many trivial admin requests
- make the file mover workflow more efficient
Commons contains millions of files and frequent renaming requests. Allowing file movers to resolve these minor redirect conflicts directly would streamline maintenance without introducing meaningful risk.
Safeguards
[edit]The proposal is intentionally limited:
- Only single-revision redirects can be removed.
- The deletion occurs only within the move process.
- File movers would not gain general deletion rights.
Request
[edit]I would like to gather community feedback on whether the file mover group should be granted the delete-redirect right for this limited purpose.
If there is consensus, a configuration change could be requested via Phabricator. Regards, ZI Jony (Talk) 08:44, 5 March 2026 (UTC)
Comments
[edit]Just a few questions:
- Which problem would be solved? Unlike articles on Wikipedia, file names on Commons can be / often are trivial. There may exist a zillion files of a woodpecker, differing by a number, situation, action of the bird, etc. If renaming is blocked, one could add a number to the file name.
- Can this have disadvantages? Such as wheelwarring about a filename?
Regards, Ellywa (talk) 11:32, 7 March 2026 (UTC)
- Ellywa, thanks for raising these points.
- I agree that this situation is probably not very common, and the proposal is not meant to solve a large systemic problem. It is more about handling those occasional cases where a technically trivial redirect blocks a move and requires unnecessary admin intervention.
- For example, in the current request to rename File:2020 New Jersey Question 1 results by county.svg to File:2020 New Jersey Question 1 results map by county.svg, the destination title already exists as a redirect pointing back to the original file. Even though this redirect has no meaningful history, the move cannot proceed unless an administrator deletes the redirect first or performs the move themselves.
- This is exactly the kind of situation the proposal tries to address. The redirect is simply a leftover technical artifact, but resolving it still requires admin involvement.
- Of course, a file mover could choose a slightly different name instead, but in cases where the requested title is the most accurate or natural one, it would be helpful if trivial single-revision redirects like this could be removed as part of the move process.
- So while the case may be rare, the idea is to make these small maintenance tasks smoother and reduce minor admin requests when the redirect involved has no real content or history. Regards, ZI Jony (Talk) 06:51, 8 March 2026 (UTC)
- A filemover could move the redirect itself to an intermediate name (without leaving another redirect), then move the original file (again without leaving a redirect), then move the intermediate-name redirect to the original source name of the move, changing it to point to the new name. Certainly, being able to delete that redirect and then do a normal move is easier, and maybe leaves a better history, so if the safeguards can be implemented to not delete redirects with history (I have little idea about that), it's probably fine. But it seems to me like it's still possible to avoid involving admins even now. Carl Lindberg (talk) 19:40, 8 March 2026 (UTC)
- Carl Lindberg, thank you for explaining that workaround. You are correct that it is technically possible to complete the move without admin involvement by moving the redirect to an intermediate title and then performing a sequence of moves. However, in practice that approach has a few drawbacks.
- First, it requires several additional steps compared to a normal move. Instead of one straightforward move, the file mover has to perform multiple moves and carefully manage redirects in between. For routine file renaming work this quickly becomes cumbersome.
- Second, it can make the page history less clear. Multiple intermediate moves may create a more complicated history that is harder to follow later, whereas deleting a trivial single-revision redirect and performing a normal move keeps the history cleaner and easier to understand.
- Third, while the workaround avoids direct admin involvement at that moment, it still creates extra maintenance work overall. File movers need to spend additional time performing the workaround, and sometimes the intermediate redirects created during the process may later require cleanup anyway.
- The intention of this proposal is simply to allow file movers to resolve these very limited situations in a straightforward way when the blocking redirect has only a single revision and no meaningful history. It would not grant general deletion rights, but would remove the need for workarounds or small admin requests in these cases.
- So while the workaround exists, the proposal aims to make the workflow simpler and cleaner for those occasional cases where a trivial redirect blocks a file move. Regards, ZI Jony (Talk) 13:11, 9 March 2026 (UTC)
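To make the trade-off above concrete, the three-move shuffle described by Carl Lindberg can be written out as a move plan. This is a rough sketch in Python, purely to show the order of operations; the intermediate title is an arbitrary unused name, and each move would be performed with "leave a redirect" unchecked:

```python
def three_move_plan(src, dst):
    """Return the sequence of (from, to) moves that gets a file past a
    blocking single-revision redirect without admin help. Every move is
    done without leaving a redirect behind."""
    tmp = dst + " (temp)"          # any unused intermediate title
    return [
        (dst, tmp),   # 1. move the blocking redirect out of the way
        (src, dst),   # 2. move the file to the now-free target title
        (tmp, src),   # 3. move the redirect back onto the old source
                      #    name, retargeting it to the new file name
    ]

plan = three_move_plan(
    "File:2020 New Jersey Question 1 results by county.svg",
    "File:2020 New Jersey Question 1 results map by county.svg",
)
for frm, to in plan:
    print(frm, "->", to)
```

Step 3 is the part that is easy to get wrong: the old redirect has to be retargeted to the new file name after the final move, otherwise it points at a vacated title.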
Support
[edit]
Support Schlurcher (talk) 08:17, 8 March 2026 (UTC)
Support with the proposed safeguards. Tvpuppy (talk) 11:55, 8 March 2026 (UTC)
Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:59, 8 March 2026 (UTC)
Support. Rehman 15:43, 8 March 2026 (UTC)
Oppose
[edit]- --
Neutral
[edit]- --
Feature Request: Revert back to the original vector 2010 design.
[edit]The new layout is horible. I understand that it came out around late 2022 for new users to read better using the website, but lets face it, it came out during a mass pandemic back when everyone was stuck inside and DEPRESSED. Also most people still have family computers (including my family) so if the redesign is related in responce to everyone trying to access wikipedia from their iphones there's an APP for that. Also the entire point of the website is for education, right? So the new design actually defeats the point of using the website to begin with. The 2022 layout actually removes side links and those fold out bars on the bottom of wikipedia pages so that you can't learn more about a topic, which prevents you from learning more something, and redesigning it would not only look horible but given the current design might not even be possible. Plus more people have started going back to the original color design by repainting their homes, so why should wikipedia be any different?
Plus, changing the current design to the vector 2010 skin would be extremely easy and wouldn't require that much effort.
If you want to support this argument, do this: Download the old wiki or old wiki redirect extension on either google chrome or mozilla firefox.
See what layout you find better, the original one with the quick links on the side and the information tabs at the bottom of the website, or the current design.
Ok, let me make a point about a few arguments I might get.
"You're just resistant to change"
Depending on where you live, you may have noticed other people repainting their houses with the color design for the same reason, because they couldn't deal with the grey modern design that just looks horrible.
Also, there's a psychological effect of the more minimalist designs, and even if the claim is that the design helps new users read because there's less space, like I mentioned before, wikipedia came out with their own app on the iphone YEARS ago.
There's no difference between a high school student using wikipedia for history class on the Gilded Age and my annoying younger cousins learning how to use the website on the family computer like I did when I was really little.
"The current skin helps reading comprehension for new (younger) users because there's less stuff on the website"
Actually, there's no difference between someone in high school using wikipedia for class and newer users using it for school. I learned how to use the internet on the family computer, so what's the difference?
"You can just change the website back to the old design on your account"
Not everyone is good with computers and knows how to do that. Plus, wikipedia stops people from creating an account on ANY public internet, so even if you go to your local library and try to create an account or do that at school, it doesn't work.
If you have to use the built-in email feature on wikipedia to create a new account so that you can change the design, and the new design slows you down from learning anything, you're probably going to end up with your parents getting pissed off because you got an F on your report card as a result.
"The people that work at the company that maintains wikipedia can just add the features found in the original design and use that on the current skin"
Not Exactly.
The new design not only prevents you from adding the links on the side of the website, but it would also look horrible.
Which slows you down from learning anything, where the original design from 2010 didn't, and had features like side links or pop-out tabs on the bottom of the page.
Also when was the last time you actually saw someone using wikipedia in high school? I haven't seen anyone use it at my school. Jelleyjelly (talk) 02:18, 6 March 2026 (UTC)
- Only a small fraction of people use the app instead of the mobile Web, and this is Commons, not Wikipedia, where an even lower fraction uses the Commons app. I think proposals would be more likely to get implemented if you were requesting specific changes to the new skin, or some new configurability for it by which Commons could adjust how it looks. Could you describe/name very briefly (this is a long post) which exact things you don't like about the new UI? The sidebar is there by default unless it has been hidden. Prototyperspective (talk) 11:39, 6 March 2026 (UTC)
- The current design is, generally speaking, insanely difficult to navigate, where the original is much easier to use and isn't in your face. I don't think there's a good way to fix the current design.
- Original (vector2010):[6]
- Current:[7] ~2026-14584-69 (talk) 00:44, 7 March 2026 (UTC)
- Well the TOC on the side does make it easier to navigate and closing the right or both panels in your screenshots would solve the narrow space issue. Prototyperspective (talk) 21:32, 8 March 2026 (UTC)
"2022 layout actually removes side links and those fold out bars on the bottom of wikipedia pages" – @Jelleyjelly perhaps you are on mobile view? I assume you are referring to English Wikipedia and I can still see the "side links" and the "fold out bars on the bottom" using the desktop view of the new 2022 layout, so they definitely did not remove them. Thanks. Tvpuppy (talk) 15:29, 6 March 2026 (UTC)
- Yeah but the vector 2010 skin had a directory of links, not a drop down menu with a ton of stuff removed and that made it easy to use ~2026-14584-69 (talk) 00:48, 7 March 2026 (UTC)
- @~2026-14584-69: It didn't support dark mode as well. However, if you want to go back to using it like it was in 2010-2021, you may use "useskin=vector" in the page URL (with "?" or "&" as appropriate) or set your appearance/rendering preferences as a logged-in user in Special:Preferences#mw-prefsection-rendering. YMMV as a temporary account; incognito, I get "Please create an account to change preferences". — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 02:21, 7 March 2026 (UTC)
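For anyone unfamiliar with the `useskin` parameter mentioned above: it is appended to any page URL as a query string and changes the skin for that page view only. Example URLs (the skin keys are `vector` for the 2010 look and `vector-2022` for the current default):

```
https://commons.wikimedia.org/wiki/Main_Page?useskin=vector        # legacy Vector (2010)
https://commons.wikimedia.org/wiki/Main_Page?useskin=vector-2022   # current default
```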
- Ok yeah but what about everyone else that's either not great with computers or doesn't know how to do that? Why not just make it the default, not just for people like me that use the useskin=vector all the time? ~2026-14584-69 (talk) 03:18, 7 March 2026 (UTC)
- @~2026-14584-69: That would not be progress. I resisted the new looks for a while in favor of Monobook, but dark mode won me over. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 10:38, 7 March 2026 (UTC)
- Yeah but not all browsers have dark mode. Also even if that were true, why not make an extension that has dark mode with the vector 2010 skin? ~2026-14678-29 (talk) 13:50, 7 March 2026 (UTC)
- this would be sick. -Nard (Hablemonos) (Let's talk) 14:34, 7 March 2026 (UTC)
Oppose You can change your preferred skin in preferences. Regarding sidelinks and "fold out bars" (I guess you are talking about navigation boxes), you can still move the sidelinks to sidebar and navboxes are still visible on Vector 2022. Nemoralis (talk) 12:54, 9 March 2026 (UTC)
Oppose From a UX POV: I agree that Wikipedia/WMC needs a more modern UI, to give users the impression that Wikipedia is becoming more modern (psychological effect: people often think something is modern when it looks more modern). Anyway, if you want another design, you can change it as proposed above --PantheraLeo1359531 😺 (talk) 16:29, 9 March 2026 (UTC)
Possible upload: Leipzig address books
[edit]Hi all,
I’ve compiled a list of public-domain Leipzig address books from the digital collections of the SLUB Dresden. They cover 1830–1937 and total about 100 PDFs. They grow with population, with the 1937 edition containing about 2000 pages.
My plan would be to upload them to Commons (with attribution to SLUB) as PDFs with included OCR (they are in Fraktur, so they require some fiddling in Tesseract; I still haven't gotten it to recognize ligatures like tz, etc.). Just being able to search them is, I think, tremendously useful. I would then like to create Wikisource index pages so that the OCR can be improved.
Before starting, I wanted to check whether:
- these are already uploaded somewhere I may have missed
- there is a preferred format (PDF vs DjVu)
- there are recommendations for batch upload tools or workflows.
I am working from a data set that looks like:
https://gist.github.com/amundo/85d2cbff9efc7e17e384c767a310b1d4
Thanks! Babbage (talk) 15:51, 6 March 2026 (UTC)
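On the Fraktur OCR point, one possible command-line workflow, as a sketch only: the file names are illustrative, and it assumes a Fraktur traineddata is installed (shipped as `frk` in older tessdata releases and as `deu_latf` in more recent ones):

```
# OCR a single scanned page into a searchable PDF with a text layer:
tesseract leipzig-1830-p001.tif leipzig-1830-p001 -l deu_latf pdf

# Or add an OCR text layer to an already-assembled PDF with OCRmyPDF:
ocrmypdf -l deu_latf leipzig-1830.pdf leipzig-1830-searchable.pdf
```

For the batch upload itself, Pattypan and the Pywikibot upload scripts are the tools usually suggested for runs of this size.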
