Hi! The Data Working Group had a look at the data and decided to revert the two pool changesets. The polygons the algorithm had drawn were consistently of poor quality, with stray nodes and nodes far outside the pool boundaries, and the imports hadn't been discussed with local communities.
I have disabled the hosted demo for now, and will remove the uploading part from the code in favor of showing a URL that opens the editor at the predicted location.
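For context, that URL can just be the standard openstreetmap.org edit link with a map fragment, so no upload or API call is involved. A minimal sketch of what I have in mind (the helper name is mine; the fragment format is the regular one the site uses):

```python
def editor_url(lat: float, lon: float, zoom: int = 19) -> str:
    """Link that opens the default OSM editor centered on a prediction."""
    return f"https://www.openstreetmap.org/edit#map={zoom}/{lat}/{lon}"

# e.g. a predicted pool location
print(editor_url(40.4168, -3.7038))
```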
If it's of any help, you can find every contributed polygon via the tag `created_by=https://github.com/mozilla-ai/osm-ai-helper`. Feel free to remove all of them (or I can do it myself once I have access to a PC).
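The uploaded ways can also be listed programmatically through the Overpass API. A rough sketch (this uses the public interpreter endpoint and queries by that exact tag; a global query like this can be slow, hence the generous timeout):

```python
import requests

# Find every way carrying the created_by tag written by the tool.
QUERY = """
[out:json][timeout:120];
way["created_by"="https://github.com/mozilla-ai/osm-ai-helper"];
out ids center;
"""

response = requests.post(
    "https://overpass-api.de/api/interpreter", data={"data": QUERY}
)
response.raise_for_status()
for element in response.json()["elements"]:
    print(element["type"], element["id"])
```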
I will be happy to continue the discussion on what makes a good prediction. I have mapped a lot of swimming pools myself, and have edited and removed a lot of (presumably) human-contributed polygons that looked worse (to my eyes) than the predictions I approved for upload.
Hi, thanks for replying! I was looking at your source code, and wondering how easy it would be to create a .osm file instead of uploading the data. The JOSM editor’s todo list plugin would make it easy to plough through all the polygons or centroids, and do any refinement necessary. For example, I’m curious to try this out to detect crosswalks, and those need to be glued to the highway being crossed.
> and wondering how easy it would be to create a .osm file instead of uploading the data. The JOSM editor’s todo list plugin would make it easy to plough through all the polygons or centroids, and do any refinement necessary. For example, I’m curious to try this out to detect crosswalks, and those need to be glued to the highway being crossed.
Hi, I didn't know about this possibility. I should have researched the different options more thoroughly. I will take a look at implementing this approach.
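To check I understand the idea, here is a rough sketch of the kind of file I would generate, using only the standard library (the helper name and the fixed `leisure=swimming_pool` tag are just for illustration; negative ids mark objects as new, so JOSM treats them as not yet uploaded):

```python
import xml.etree.ElementTree as ET

def polygons_to_osm(polygons, path):
    """Write predicted polygons (lists of (lat, lon) rings) to a .osm file."""
    root = ET.Element("osm", version="0.6", generator="osm-ai-helper")
    next_id = -1  # negative ids = new, not-yet-uploaded objects
    for ring in polygons:
        node_ids = []
        for lat, lon in ring:
            ET.SubElement(root, "node", id=str(next_id),
                          lat=f"{lat:.7f}", lon=f"{lon:.7f}")
            node_ids.append(next_id)
            next_id -= 1
        way = ET.SubElement(root, "way", id=str(next_id))
        next_id -= 1
        for ref in node_ids + [node_ids[0]]:  # repeat first node to close the ring
            ET.SubElement(way, "nd", ref=str(ref))
        ET.SubElement(way, "tag", k="leisure", v="swimming_pool")
    ET.ElementTree(root).write(path, encoding="UTF-8", xml_declaration=True)
```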
> I will be happy to continue the discussion on what makes a good prediction. I have mapped a lot of swimming pools myself, and have edited and removed a lot of (presumably) human-contributed polygons that looked worse (to my eyes) than the predictions I approved for upload.
Something else you need to be mindful of is that the Mapbox imagery may be out of date, especially for the super zoomed-in levels (which come from aerial flights). So, e.g., a pool built two years ago might not show up yet.
This is a general problem when comparing OSM data with aerial imagery. I've worked a lot with orthos from OpenAerialMap, whose stated goal is to provide high-quality imagery that's licensed for mapping. If you take the OSM features within those images' bounding boxes and use them as segmentation labels, they're often misaligned or not detailed enough. In theory those images ought to have the best corresponding data, but OAM accepts open imagery uploads generally, and not all of it has been mapped.
I've spent a lot of time building models for tree mapping. In theory you could use that as a pipeline with OAM to generate forest regions for OSM, and it would probably be better than human labels, which tend to be very coarse. I wouldn't discount AI labeling entirely, but it does need oversight, and you probably want a high confidence threshold.

One other thought: you could compare the overlap between predicted polygons and human polygons and use that as a prompt to review for refinement. This would be helpful for things like individual buildings, which tend to not be mapped particularly well (i.e., not drawn tight to the structure), whereas a modern segmentation model can probably produce very tight polygons.
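To make the overlap idea concrete, here's a small sketch using shapely (the thresholds are made up; the point is that a middling IoU means "same feature, different outline", which is exactly what's worth a human refinement pass):

```python
from shapely.geometry import Polygon

def iou(a: Polygon, b: Polygon) -> float:
    """Intersection over union of two polygons."""
    union = a.union(b).area
    return a.intersection(b).area / union if union else 0.0

def review_candidates(predicted, existing, low=0.3, high=0.8):
    """Yield predictions that overlap a mapped polygon but disagree on the outline."""
    for pred in predicted:
        best = max((iou(pred, ex) for ex in existing), default=0.0)
        if low <= best <= high:
            yield pred
```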