I'm a pretty avid member of various history groups, and one thing that has absolutely driven me nuts for the past couple of years is how many people there are that use AI for upscaling and colorization of photos - not knowing or noticing how the models fundamentally alter the photos. A couple of zooms in on the photo, and it is nightmare fuel.
A week ago some members and I spent a couple of hours trying to find a building from the early 1900s, because someone had uploaded a photo and asked about the building. We sifted through old maps, newspapers, etc. but couldn't find anything. Turns out said photo had been upscaled via AI, which in turn had added some buildings here and there.
But, yeah, for stuff like OP posted it could work out nicely.
One random example of a before/after: https://imgur.com/a/WIAYLHm
(Provenance is so important. The infinitely-recopied local history photos were never a great source anyway).
The experience made me certain that AI is going to do much more harm than good to the business of archiving historical photos.
As for the lady who is distorting photos to colorize them - I don't even understand why you would want to do that. There are other ways!
It's like saying "I love Da Vinci's art so I'm going to draw a moustache on everyone in The Last Supper" which you probably wouldn't do if you really loved Da Vinci's art.
Meh, so what if I only love Da Vinci's art to the degree that it's amusing to adulterate with mustaches?
It reminds me of the cuneiform problem. Between 500,000 and 1 million tablets have been collected. This is one of the earliest preserved writing systems. Even so, fewer than 10% of these tablets have been translated. I was surprised to learn this but it makes sense. There are several problems:
1. Scribes used a lot of shorthand;
2. Cuneiform itself changed over time;
3. Writers would use multiple languages (eg Sumerian, Akkadian), even on the same tablet. There are relatively few people fluent in these languages, particularly in more than one of them;
4. To some extent the tablets are 3D such that a 2D photo might not be sufficient to translate because you might need to physically turn the tablet to accurately see the marks; and
5. In some cases the tablets are incomplete or broken, so you may not be able to figure out how things fit together.
I wonder if AI can help make inroads into this 90%. I really wonder what is waiting to be unearthed.
If the risk of mistranslation is high, I fail to see how letting AI "take a swing at it" wouldn't reduce translation quality.
How do they ensure no drop in translation quality?
Ideally they'd always carry an "AI-generated" flag (in the db and in the frontend) until manually reviewed by a human (which may be never). If anything, this is actually in AI proponents' favor, as it would let you periodically regenerate or cross-validate (a subset of) the AI contributions some years down the line when newer and better models are released!
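A minimal sketch of what that flag-and-revisit scheme could look like, assuming a simple per-record flag plus a model version (all field and function names here are hypothetical, not from any real archive's schema):

```python
from dataclasses import dataclass

@dataclass
class Translation:
    tablet_id: str
    text: str
    ai_generated: bool      # the "AI-generated" flag, surfaced in the frontend
    model_version: int = 0  # 0 = human-authored; otherwise the model that produced it
    human_reviewed: bool = False

def needs_revisit(records, current_model_version):
    """Unreviewed AI output produced by an older model is a candidate
    for regeneration or cross-validation with the newer model."""
    return [r for r in records
            if r.ai_generated
            and not r.human_reviewed
            and r.model_version < current_model_version]

records = [
    Translation("T1", "barley ration list", ai_generated=True, model_version=1),
    Translation("T2", "temple receipt", ai_generated=True, model_version=1,
                human_reviewed=True),
    Translation("T3", "royal inscription", ai_generated=False),
]

# Only T1 qualifies: AI-generated, unreviewed, and from an older model.
stale = needs_revisit(records, current_model_version=2)
```

The point is that the flag never disappears on its own: only a human review clears a record from the revisit queue, while everything else gets re-checked whenever the model improves.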
But it's actually really cool how they used AI to better determine the locations of the photos. I love this!
https://www.oldnyc.org/#707133f-a this is supposed to be here https://www.oldnyc.org/#702487f-a
also, if folks are interested in these old depictions of NYC, check out https://1940s.nyc/ as well!
This has been true since before LLMs, but now so many more people and use cases are enabled, so much more easily. People are undisciplined, quick to take short-term gains and handwave away correctness.
(I briefly got excited that there might be a street sign _in_ the photo, but if you zoom way in it says "DENTIST")
+1 to 1940s.nyc. Very different photos — those were taken for tax assessment, while the ones on OldNYC were taken to document the city as it changed. The photographer had an arrangement where he'd get tips from demolition crews and go shoot buildings before they were gone forever.
[1]: https://digitalcollections.nypl.org/items/5a5e06a0-c539-012f...
I haven't seen an "AI edited" image that hasn't changed important details, and so the result is just yet more slop.