API Reference
osm_ai_helper.download_osm
download_osm(area, output_dir, selector, discard=None)
Download OSM elements for the given area and selector.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
output_dir | str | Output directory. | required |
selector | str | OSM tag to select elements. Uses the Overpass API. Example: "leisure=swimming_pool" | required |
area | str | Name of the area to download. Can be a city, state, country, etc. Uses the Nominatim API. | required |
discard | Optional[dict[str, str]] | Discard elements matching any of the given tags. Example: {"location": "indoor", "building": "yes"} | None |
Source code in src/osm_ai_helper/download_osm.py
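Example (a minimal sketch; the import path is inferred from the module name above, and the area, directory, and tag values are placeholders):

```python
from osm_ai_helper.download_osm import download_osm

# Download all mapped swimming pools in an example area, skipping indoor
# pools and pools tagged as part of a building.
download_osm(
    area="Galicia",
    output_dir="data/osm",
    selector="leisure=swimming_pool",
    discard={"location": "indoor", "building": "yes"},
)
```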
osm_ai_helper.group_elements_and_download_tiles
group_elements_and_download_tiles(elements_file, output_dir, mapbox_token, zoom=18)
Groups the elements by tile and downloads the satellite image corresponding to each tile.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
elements_file | str | Path to the JSON file containing OSM elements. See download_osm. | required |
output_dir | str | Output directory. The images and annotations will be saved in this directory. The images will be saved as JPEG files and the annotations as JSON files. The names of the files will be in the format … | required |
mapbox_token | str | Mapbox token. | required |
zoom | int | Zoom level of the tiles to download. See https://docs.mapbox.com/help/glossary/zoom-level/. | 18 |
Source code in src/osm_ai_helper/group_elements_and_download_tiles.py
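Example (a sketch only; the elements file name is a placeholder for whatever JSON file download_osm produced, and MAPBOX_TOKEN must hold a valid Mapbox token):

```python
import os

from osm_ai_helper.group_elements_and_download_tiles import (
    group_elements_and_download_tiles,
)

group_elements_and_download_tiles(
    elements_file="data/osm/elements.json",  # placeholder: output of download_osm
    output_dir="data/tiles",
    mapbox_token=os.environ["MAPBOX_TOKEN"],
    zoom=18,
)
```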
osm_ai_helper.convert_to_yolo_dataset
convert_to_yolo_dataset(input_dir)
Convert the output of group_elements_and_download_tiles.py
to the YOLO format.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_dir | str | Input directory containing the images and annotations. The images are expected to be in the format … | required |
Source code in src/osm_ai_helper/convert_to_yolo_dataset.py
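Example (a sketch, assuming the input directory is the one written by group_elements_and_download_tiles):

```python
from osm_ai_helper.convert_to_yolo_dataset import convert_to_yolo_dataset

# Convert the downloaded images and annotations to the YOLO detection format.
convert_to_yolo_dataset(input_dir="data/tiles")
```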
grouped_elements_to_annotation(group, zoom, tile_col, tile_row)
Output format: https://docs.ultralytics.com/datasets/detect/
Source code in src/osm_ai_helper/convert_to_yolo_dataset.py
osm_ai_helper.run_inference
run_inference(yolo_model_file, output_dir, lat_lon, margin=1, sam_model='facebook/sam2.1-hiera-small', selector='leisure=swimming_pool', zoom=18)
Run inference on a given location.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
yolo_model_file | str | Path to the YOLO model file. | required |
output_dir | str | Output directory. The images and annotations will be saved in this directory. The images will be saved as PNG files and the annotations as JSON files. The names of the files will be in the format … | required |
lat_lon | Tuple[float, float] | Latitude and longitude of the location. | required |
margin | int | Number of tiles around the location. | 1 |
sam_model | str | SAM2 model to use. | 'facebook/sam2.1-hiera-small' |
selector | str | OpenStreetMap selector. | 'leisure=swimming_pool' |
zoom | int | Zoom level. See https://docs.mapbox.com/help/glossary/zoom-level/. | 18 |
Source code in src/osm_ai_helper/run_inference.py
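Example (a sketch; the model path and coordinates are placeholders):

```python
from osm_ai_helper.run_inference import run_inference

run_inference(
    yolo_model_file="models/best.pt",         # placeholder path to a trained YOLO model
    output_dir="results",
    lat_lon=(42.2406, -8.7207),               # example (latitude, longitude)
    margin=1,                                 # 1 tile of context around the location
    sam_model="facebook/sam2.1-hiera-small",
    selector="leisure=swimming_pool",
    zoom=18,
)
```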
osm_ai_helper.upload_osm
upload_osm(results_dir, client_id, client_secret, comment='Add Swimming Pools')
Upload the results to OpenStreetMap.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
results_dir | str | Directory containing the results. The results should be in the format of … | required |
client_id | str | OpenStreetMap OAuth client ID. | required |
client_secret | str | OpenStreetMap OAuth client secret. | required |
comment | str | Comment to add to the changeset. | 'Add Swimming Pools' |
Source code in src/osm_ai_helper/upload_osm.py
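Example (a sketch; the credentials belong to an OAuth application registered on openstreetmap.org and are read from the environment here, and pointing results_dir at the run_inference output is an assumption about the intended pipeline):

```python
import os

from osm_ai_helper.upload_osm import upload_osm

upload_osm(
    results_dir="results",  # e.g. the output directory of run_inference
    client_id=os.environ["OSM_CLIENT_ID"],
    client_secret=os.environ["OSM_CLIENT_SECRET"],
    comment="Add Swimming Pools",
)
```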
osm_ai_helper.utils.inference
download_stacked_image_and_mask(bbox, grouped_elements, zoom, mapbox_token)
Download all tiles within a bounding box and stack them into a single image.
All the grouped_elements are painted on the mask.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bbox | tuple | Bounding box in the form of (south, west, north, east). | required |
grouped_elements | dict | OpenStreetMap elements grouped with group_elements_by_tile. | required |
zoom | int | Zoom level. See https://docs.mapbox.com/help/glossary/zoom-level/. | required |
mapbox_token | str | Mapbox token. See https://docs.mapbox.com/help/getting-started/access-tokens/. | required |
Returns:
Name | Type | Description |
---|---|---|
tuple | tuple[ndarray, ndarray] | Stacked image and mask. |
Source code in src/osm_ai_helper/utils/inference.py
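Example combining this helper with get_elements and group_elements_by_tile (the bounding box and environment variable are placeholders):

```python
import os

from osm_ai_helper.utils.inference import download_stacked_image_and_mask
from osm_ai_helper.utils.osm import get_elements
from osm_ai_helper.utils.tiles import group_elements_by_tile

bbox = (42.20, -8.76, 42.26, -8.68)  # (south, west, north, east)

elements = get_elements("leisure=swimming_pool", bbox=bbox)
grouped = group_elements_by_tile(elements, zoom=18)

image, mask = download_stacked_image_and_mask(
    bbox=bbox,
    grouped_elements=grouped,
    zoom=18,
    mapbox_token=os.environ["MAPBOX_TOKEN"],
)
print(image.shape, mask.shape)
```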
tile_prediction(bbox_predictor, sam_predictor, image, overlap=0.125, bbox_conf=0.4, bbox_pad=0)
Predict on a large image by splitting it into tiles.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bbox_predictor | YOLO | YOLO bounding box predictor. See https://docs.ultralytics.com/tasks/detect/. | required |
sam_predictor | SAM2ImagePredictor | Segment Anything Image Predictor. See https://github.com/facebookresearch/sam2?tab=readme-ov-file#image-prediction. | required |
image | ndarray | Image to predict on. | required |
overlap | float | Overlap between tiles. | 0.125 |
bbox_conf | float | Minimum confidence threshold for detections. | 0.4 |
bbox_pad | int | Padding added to the predicted bounding box. | 0 |
Returns:
Type | Description |
---|---|
ndarray | Stacked output. |
Source code in src/osm_ai_helper/utils/inference.py
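Example (a sketch; the YOLO weights path and image file are placeholders, and the SAM2 predictor is loaded via the upstream sam2 package's from_pretrained helper):

```python
import numpy as np
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor
from ultralytics import YOLO

from osm_ai_helper.utils.inference import tile_prediction

bbox_predictor = YOLO("models/best.pt")  # placeholder path to a trained model
sam_predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2.1-hiera-small")

# A large image, e.g. the stacked image returned by download_stacked_image_and_mask.
image = np.array(Image.open("stacked.jpg"))

output = tile_prediction(
    bbox_predictor,
    sam_predictor,
    image,
    overlap=0.125,
    bbox_conf=0.4,
    bbox_pad=0,
)
```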
osm_ai_helper.utils.osm
get_area_id(area_name)
Get the Nominatim ID of an area.
Uses the Nominatim API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
area_name | str | The name of the area. | required |
Returns:
Type | Description |
---|---|
Optional[int] | The Nominatim ID of the area. |
Source code in src/osm_ai_helper/utils/osm.py
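Example (the area name is a placeholder for any name Nominatim can resolve):

```python
from osm_ai_helper.utils.osm import get_area_id

area_id = get_area_id("Ponteareas")
if area_id is None:
    print("Area not found")
else:
    print(f"Nominatim area id: {area_id}")
```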
get_elements(selector, area=None, bbox=None)
Get elements from OpenStreetMap using the Overpass API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
selector
|
str
|
The selector to use. Example: "leisure=swimming_pool" |
required |
area
|
Optional[str]
|
The area to search in. Can be city, state, country, etc. Defaults to None. |
None
|
bbox
|
Optional[Tuple[float, float, float, float]]
|
The bounding box to search in. Defaults to None. Format: https://wiki.openstreetmap.org/wiki/Overpass_API/Language_Guide#The_bounding_box |
None
|
Returns:
Type | Description |
---|---|
list[dict] | The elements found. |
Source code in src/osm_ai_helper/utils/osm.py
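Example (searching by area name and by bounding box; the values are placeholders):

```python
from osm_ai_helper.utils.osm import get_elements

# By area name (resolved through Nominatim).
pools = get_elements("leisure=swimming_pool", area="Ponteareas")

# By bounding box, in the Overpass (south, west, north, east) order.
pools_in_bbox = get_elements(
    "leisure=swimming_pool",
    bbox=(42.15, -8.55, 42.20, -8.45),
)
print(len(pools), len(pools_in_bbox))
```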
osm_ai_helper.utils.plots
show_vlm_entry(entry)
Extracts image and points from entry and draws the points.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
entry | dict |  | required |
Returns:
Name | Type | Description |
---|---|---|
Image | Image | Image with points drawn. |
Source code in src/osm_ai_helper/utils/plots.py
osm_ai_helper.utils.tiles
group_elements_by_tile(elements, zoom)
Group elements by the tiles they belong to, based on the zoom level.
Each Mapbox tile is a 512x512 pixel image.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
elements | List[Dict] | List of elements from download_osm. | required |
zoom | int | Zoom level. See https://docs.mapbox.com/help/glossary/zoom-level/. | required |
Returns:
Type | Description |
---|---|
dict[tuple, list[dict]] | Grouped elements. |
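Example (reusing get_elements from above; the area name is a placeholder):

```python
from osm_ai_helper.utils.osm import get_elements
from osm_ai_helper.utils.tiles import group_elements_by_tile

elements = get_elements("leisure=swimming_pool", area="Ponteareas")
grouped = group_elements_by_tile(elements, zoom=18)

# Each key identifies one 512x512 Mapbox tile; the value is the list of
# elements that fall inside that tile.
for tile, tile_elements in grouped.items():
    print(tile, len(tile_elements))
```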