I spent a long time yesterday trying to understand the layout of some ‘shopping arcades’ (shōtengai) in Kochi, Japan, in preparation for a walking tour. Google AI insisted that the Obiyamachi arcade was perpendicular to the Harimayabashi arcade (east-west vs. north-south). I went into Street View and definitively determined that both Harimayabashi and Obiyamachi run east-west, and that they are connected by a third arcade, Kyomachi.

To understand what Google’s AI was ‘suggesting’, I asked it for map references for the entrances. Over the course of many interactions, it gave me ‘plus codes’, lat/long coordinates, and more, and each time they were way off, by hundreds of meters or more. I then went over to Perplexity AI and asked similar questions, and also got wrong answers, though different ones: Perplexity agreed that the two arcades both run east-west, but insisted they directly face each other (they are in fact connected by the third arcade). I asked it for map references as well, and it also gave me incorrect ones.

Finally, I gave Perplexity AI a Google Maps link and asked it to ‘view’ it in Street View mode and ‘observe’ that, at that point, the Harimayabashi arcade entrance is literally across the street (a 180° turn) from the Kyomachi entrance (there are distinct visual markers to confirm which arcade is which). Perplexity then confirmed I was correct and ‘thanked me’ for the correction. When I asked it why all the map references it gave me were wrong, it explained that Google map references (plus codes, lat/long coordinates, etc.) are not absolute and change with time, which I find hard to believe. I then went back to Google AI and, in similar fashion, gave it the definitive map reference; it too confirmed that I was correct, explaining that most tourist guides ‘gloss over’ the different arcade names, so its source information was incorrect.

It did, however, claim that it can ‘scan’ a Google Maps Street View scene and ‘read’ the arcade names. Both AIs told me that this ‘improved information’ would remain in force for the rest of the session, but would revert to the incorrect spatial analysis once the session was over. The incorrect information is just par for the course with AI (hallucination), but the incorrect map references were a more worrying discovery.
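For what it’s worth, the “plus codes change with time” explanation doesn’t hold up: a plus code (Open Location Code) is a deterministic encoding of latitude/longitude, so the same coordinates always produce the same code. A minimal sketch of the pair-encoding step, following the published OLC specification (standard 10-digit codes only, no padding or clipping):

```python
# Minimal sketch of Open Location Code ("plus code") encoding.
# A plus code is a pure function of lat/lng: identical coordinates
# always yield the identical code, so codes cannot drift over time.

ALPHABET = "23456789CFGHJMPQRVWX"  # the 20-symbol OLC digit set

def encode_plus_code(lat: float, lng: float) -> str:
    """Encode a latitude/longitude into a standard 10-digit plus code."""
    # Work in integer units of 1/8000 degree (the 10-digit cell size)
    # so base-20 digit extraction avoids floating-point surprises.
    lat_units = int((lat + 90) * 8000)   # 0 .. 1,440,000
    lng_units = int((lng + 180) * 8000)  # 0 .. 2,880,000
    code = ""
    for _ in range(5):  # five digit pairs, lat digit before lng digit
        code = ALPHABET[lat_units % 20] + ALPHABET[lng_units % 20] + code
        lat_units //= 20
        lng_units //= 20
    return code[:8] + "+" + code[8:]  # the '+' sits after digit 8

# Spec test vector: this coordinate encodes to "7FG49QCJ+2V",
# and re-encoding it returns the same code every time.
print(encode_plus_code(20.3700625, 2.7821875))
```

Google’s reference open-location-code implementation additionally handles code lengths other than 10, short codes, and out-of-range input, but the core property is the one above: the code is derived purely from the coordinates, not from any database that could change.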
Originally posted by u/Steerpike58 on r/ArtificialInteligence
