Tangent: there are these videos on YT of people walking through cities; the ones I like in particular go through Tokyo/Japan. I was thinking it would be cool to build a 3D map from that. It's possible, just not my field, and I think some companies have done it too. There's a lot of data in those videos. Maybe even free robot training data (walking through a crowd, like a delivery robot would).
I believe it's a combo of SLAM/photogrammetry/VIO, but you don't have an IMU, so that part would have to be estimated from the video. Maybe you could recover timing from the flicker of the lights against the frame rate, though that's probably too fast.
Similarly, it would be great to have a tool that does this with stills, like reconstructing a floor plan from real estate photos. Even if it were partially manual, it would be pretty handy.
There was a guy a long time ago who did YT videos of the tech markets in Tokyo, and it was really surprising that some of the best places to get parts for smartphones or robots were completely nondescript buildings in the heart of the city. He specifically went to places that most people wouldn't know about unless they had really great local information.
If someone were to do what you're saying, it would be a huge win for visitors trying to find these places. I would love to see this.
This would be an interesting additional layer for Google Maps search, which I often find to be lacking. For example, I was recently travelling in Gran Canaria looking for places selling artisan coffee in the south (spoiler: only one, in a hotel, which took me almost half an hour to even find). Searching for things like "pourover" and "v60" is usually my go-to signal, but unless the cafe mentions this in their description or it's mentioned in reviews, it's hard to find. I don't think they even index the text in the photos customers take (which often include the coffee menu behind the cashier).
Yeah, that can be somewhat of a problem in bigger cities ;-) It's pretty common for people to have taken a photo of the menu in cafes but as mentioned it seems google isn't ingesting or surfacing that information for text search.
GitHub of the person who prepared the data. I am curious how much compute was needed for NY. I would love to do it for my metro but I suspect it is way beyond my budget.
(The commenters below are right. It is the Maps API, not compute, that I should worry about. Using the free tier, it would have taken the author years to download all tiles. I wish I had their budget!)
The linked article mentions that they ingested 8 million panos - even if they're scraping the dynamic viewer that's $30k just in street view API fees (the static image API would probably be at least double that due to the low per-call resolution).
OCR I'd expect to be comparatively cheap, if you weren't in a hurry - a consumer GPU running PaddlePaddle server can do about 4 MP per second. If you spent a few grand on hardware that might work out to 3-6 months of processing, depending on the resolution per pano and size of your model.
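A rough sanity check of that OCR timeline, as a sketch; every input here (pano resolution, GPU count, throughput) is an assumption, not a figure from the project:

```python
# Back-of-envelope OCR timing; all inputs are assumptions from this thread.
PANOS = 8_000_000                   # pano count from the linked article
MP_PER_PANO = 8192 * 4096 / 1e6     # ~33.6 MP if OCRed at full pano resolution (assumed)
OCR_MP_PER_SEC = 4                  # one consumer GPU running an OCR server
GPUS = 4                            # "a few grand on hardware" (assumed)

# total pixels to process, divided by aggregate throughput
seconds = PANOS * MP_PER_PANO / (OCR_MP_PER_SEC * GPUS)
months = seconds / (3600 * 24 * 30)
print(f"{months:.1f} months of wall-clock OCR")  # ~6.5 at these numbers
```

Lower per-pano resolution or a faster OCR model shifts this into the 3-month range, which is roughly the spread quoted above.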
Their write up (linked at top of page below main link, and in a comment) says:
> "media artist Yufeng Zhao fed millions of publicly-available panoramas from Google Street View into a computer program that transcribes text within the images (anyone can access these Street View images; you don’t even need a Google account!)."
Maybe they used multiple IPs / devices and didn't want to mention doing something technically naughty to get around Google's free limits, or maybe they somehow didn't hit a limit doing it as a single user? Either way, it doesn't sound like they had to pay if they only mention not needing an account.
(Or maybe they just thought people didn't need to know that they had to pay, and that readers would just want the free access to look up a few images, rather than a whole city's worth?)
I just hashed out the details with Claude. Apparently it would cost me ~$8k USD to retrieve all Taipei street images from the Google Maps API at 3 m density. Expensive, but not impossible.
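The arithmetic behind a figure like that is simple to reproduce; the road-network length and per-request price below are my own guesses, not Claude's actual inputs:

```python
# Sanity-checking a ~$8k estimate; every input here is an assumption.
road_km = 3_400        # assumed drivable road network length for Taipei
spacing_m = 3          # one pano every 3 m
usd_per_1000 = 7.00    # assumed Static API price per 1,000 requests

panos = road_km * 1000 / spacing_m        # pano count along the network
cost = panos / 1000 * usd_per_1000        # total API spend in USD
print(f"{panos:,.0f} panos, ~${cost:,.0f}")  # ~1.1M panos, ~$8k
```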
The pudding.cool article has a link labeled "View the map of “F*ck”" but it leads to a search for "fuck" instead. If you search for "F*ck", you find gems such as "CONTRACTOR F CK-UP" https://www.alltext.nyc/panorama/KhzY08H72wV2ldXamZU5HA?o=76... (Strategically placed pole obscuring the word.)
"Fart bird special" is pretty funny, and "staff farting only" might be my favorite. Other good ones: "BECAUSE THE FART NEEDS," "Juice Fart," "WHOLESALE FARTS"
Reminds me of NY Cerebro, semantic search across New York City's hundreds of public street cameras: https://nycerebro.vercel.app/ (e.g. search for "scaffolding")
Nice! If you want to email hn@ycombinator.com we could send you a repost invite for https://news.ycombinator.com/item?id=44664046 - but please wait a while first. The trick is to let enough time go by for the hivemind caches to clear. Then everything old becomes new again :) - usually 2-3 months is a good interval...
This is a super cool project. But it would be 10x cooler if they had generated CLIP or some other embeddings for the images, so you could search for text but also do semantic vector search like "people fighting", "cats and dogs", "red tesla", "clown", "child playing with dog", etc.
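The retrieval side of that is straightforward once embeddings exist. A toy sketch of nearest-neighbor search over precomputed vectors (the pano IDs and 3-d vectors are made up; real CLIP embeddings are ~512-d, and at 8M panos you'd want an ANN index like FAISS rather than a linear scan):

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# pano_id -> image embedding (toy 3-d vectors standing in for CLIP output)
index = {
    "pano_a": [0.9, 0.1, 0.0],
    "pano_b": [0.1, 0.9, 0.2],
    "pano_c": [0.2, 0.8, 0.1],
}

def search(query_vec, k=2):
    # rank panos by similarity to the embedded text query, return top k
    ranked = sorted(index, key=lambda p: cosine(query_vec, index[p]), reverse=True)
    return ranked[:k]

print(search([0.0, 1.0, 0.1]))  # → ['pano_b', 'pano_c']
```

CLIP's trick is that text and image embeddings share one space, so embedding the query string "red tesla" and running this search is all the "semantic" part requires.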
I feel like street-view data is surprisingly underused for geospatial intelligence.
With current-gen multimodal LLMs, you could very easily query and plot things like "broken windows," "houses with front-yard fences," "double-parked cars," or "faded lane markers": features that are generally hard to derive from other sources.
For any reasonably-sized area, I'd guess the largest bottleneck is actually the Maps API cost vs the LLM inference. And ideally we'd have better GIS products for doing this sort of analysis smoothly.
Yes. I work at a company that is using Street View to identify high-rise apartments with dangerous cladding for the UK gov. You could also use it for grouping nearby properties that were clearly built together and share features, which helps spread known information about buildings. You can also get the models to predict age and sometimes even things like double glazing.
I made this - https://londonpublicinsights.uk - as well as operate a public records aggregator that has indexed, amongst other things, planning applications. I wonder if it could be of use?
My only suggestion would be to remove duplicates. Many of the items are just the same thing from different angles. Of course, this is a tough technical challenge to solve that most likely cannot rely on location alone.
Surprisingly, I can't seem to find any doors with notices from the sheriff's department or building department embarrassingly plastered on them. Am I misremembering how these are phrased verbatim, or are certain things censored?
The next step should be to create a Street-View-style website for navigating around New York City, where only the text is visible and everything else is left blank/white.
A game: find an English word with the fewest hits. (It must have at least one hit that is not an OCR error, but such errors do still count towards your score. Only spend a couple of minutes.) My best is "scintillating" : 3.
"Sloth" returned surprisingly many results: 92
Deviant returned 5 (cmon NY, do better)
"Sherpa": five, but with false positives. Two Gap ads about Sherpa fleece, two genuine including Sherpa consulting, which seems pretty niche
Defenestrate got zero
At first glance, there's plenty of grog to be had in NYC. But sailors will be disappointed. It all seems to be OCR errors of "Groceries" or the "Google" watermarks.
BNE is an anonymous graffiti artist known for stickers that read "BNE" or "BNE was here". The artist has left their mark in countries throughout the world, including the United States, Canada, Asia, Romania, Australia, Europe, and South America. "His accent and knowledge of local artists suggest he is from New York."
I was trying various graffiti slogans; turns out the anarchy symbol "(A)" is basically the most difficult thing in the world to search for lol, other political ideologies are much easier to find. It did amusingly lead me to search for just "anarchy", which led to 4 pages of bus ads for a show by the "Sons of Anarchy" guy.
EDIT: Lol, "communism" leads to 39 pages of Shen Yun billboards.
The word search for "fart" shows the tool's limits. No entry I saw actually said the word "fart," but they were listed as doing so -- "fart nawor" ("hearts around the world" irl), "the penny farting" ("the penny farthing" irl), etc.
Under the search button there is a drop-down. Enable "exact match" and filter out low OCR confidence. There are still many false positives, but you'll also see the "fart king".
Mamdani is just one dude's gynecology clinic. I wonder when the data was pulled?
edit: I found mentions of Gaza bombings and there's cars with like #gaza on it so my guess is sometime in the last 2 years.
I could of course look it up but this is a game now for me, like when I found a hella old atlas in a library and tried to figure out the date it was published just by looking at the maps.
Gosh! Maybe one of these days someone will take time off from this cultural wonderment to construct a simple, easy to use, text-to-audio.file program - you know, install, paste in some text, convert, start-up a player - so that the blind can listen to texts that aren't recorded in audiobooks. Without a CS degree.
I think the issue is that the compute needed for good voice models is far from free, in hardware and electricity alone, so any good text-to-audio solution likely needs to cost some money. Wiring up Google Vertex AI text-to-speech or the AWS equivalent is probably something ChatGPT could walk most people through, even without a CS degree: a simple Python script you could authenticate from a terminal command, and it would maybe cost a couple of bucks for personal usage.
A paid service of that simplicity probably doesn't exist because there are other tools that integrate better with how the blind interact with computers (I doubt it's copy-pasting text), and those tools are likely more robust, albeit expensive.