API Reference
osm_ai_helper.download_osm
download_osm(area, output_dir, selector, discard=None)
Download OSM elements for the given area and selector.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`area` | `str` | Name of the area to download. Can be a city, state, country, etc. Uses the Nominatim API. | required |
`output_dir` | `str` | Output directory. | required |
`selector` | `str` | OSM tag used to select elements via the Overpass API. Example: `"leisure=swimming_pool"` | required |
`discard` | `Optional[dict[str, str]]` | Discard elements matching any of the given tags. Example: `{"location": "indoor", "building": "yes"}` Defaults to None. | `None` |
Source code in src/osm_ai_helper/download_osm.py
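The `selector` translates into an Overpass QL query. A minimal sketch of the kind of query it corresponds to (illustration only; the exact query built by `download_osm` may differ):

```python
def build_overpass_query(selector: str, area_name: str) -> str:
    """Sketch of an Overpass QL query for a tag selector inside a named area.

    Illustration only; the exact query built by download_osm may differ.
    """
    key, value = selector.split("=")
    return (
        "[out:json];"
        f'area[name="{area_name}"]->.a;'
        f'(way["{key}"="{value}"](area.a);'
        f'relation["{key}"="{value}"](area.a););'
        "out geom;"
    )

query = build_overpass_query("leisure=swimming_pool", "London")
```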
osm_ai_helper.group_elements_and_download_tiles
group_elements_and_download_tiles(elements_file, output_dir, mapbox_token, zoom=18)
Groups the elements by tile and downloads the satellite image corresponding to the tile.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`elements_file` | `str` | Path to the JSON file containing OSM elements. See `download_osm`. | required |
`output_dir` | `str` | Output directory. The images and annotations will be saved in this directory, the images as JPEG files and the annotations as JSON files. The names of the files will be in the format | required |
`mapbox_token` | `str` | Mapbox token. | required |
`zoom` | `int` | Zoom level of the tiles to download. See https://docs.mapbox.com/help/glossary/zoom-level/. Defaults to 18. | `18` |
Source code in src/osm_ai_helper/group_elements_and_download_tiles.py
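The grouping relies on standard Web Mercator ("slippy map") tile coordinates. A self-contained sketch of the formula that maps a point to its `(tile_col, tile_row)` at a given zoom (the helper's own implementation may differ in details):

```python
import math

def lat_lon_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Standard Web Mercator (slippy map) tile coordinates for a point.

    Shows how an element ends up assigned to a (tile_col, tile_row)
    at a given zoom level.
    """
    n = 2 ** zoom
    col = int((lon + 180.0) / 360.0 * n)
    row = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return col, row
```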
osm_ai_helper.convert_to_yolo_dataset
convert_to_yolo_dataset(input_dir)
Convert the output of `group_elements_and_download_tiles.py` to the YOLO format.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`input_dir` | `str` | Input directory containing the images and annotations. The images are expected to be in the format | required |
Source code in src/osm_ai_helper/convert_to_yolo_dataset.py
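For reference, a YOLO detection label is one line per object: the class id followed by a center and size normalized to the image dimensions. A minimal sketch of that encoding (an illustration of the target format, not the helper's actual code):

```python
def bbox_to_yolo_line(class_id: int, x_min: float, y_min: float,
                      x_max: float, y_max: float,
                      img_w: int, img_h: int) -> str:
    """Encode a pixel bounding box as a YOLO detection label line.

    YOLO labels are: class x_center y_center width height,
    all normalized to [0, 1] relative to the image size.
    """
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A box covering the central quarter of a 512x512 tile
line = bbox_to_yolo_line(0, 128, 128, 384, 384, 512, 512)
```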
grouped_elements_to_annotation(group, zoom, tile_col, tile_row)
Output format: https://docs.ultralytics.com/datasets/detect/
Source code in src/osm_ai_helper/convert_to_yolo_dataset.py
osm_ai_helper.run_inference
run_inference(yolo_model_file, output_dir, lat_lon, margin=1, sam_model='facebook/sam2.1-hiera-small', selector='leisure=swimming_pool', zoom=18, save_full_images=True, bbox_conf=0.5, batch_size=32)
Run inference on a given location.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`yolo_model_file` | `str` | Path to the YOLO model file. | required |
`output_dir` | `str` | Output directory. The images and annotations will be saved in this directory, the images as PNG files and the annotations as JSON files. The names of the files will be in the format | required |
`lat_lon` | `Tuple[float, float]` | Latitude and longitude of the location. | required |
`margin` | `int` | Number of tiles around the location. Defaults to 1. | `1` |
`sam_model` | `str` | SAM2 model to use. Defaults to "facebook/sam2.1-hiera-small". | `'facebook/sam2.1-hiera-small'` |
`selector` | `str` | OpenStreetMap selector. Defaults to "leisure=swimming_pool". | `'leisure=swimming_pool'` |
`zoom` | `int` | Zoom level. See https://docs.mapbox.com/help/glossary/zoom-level/. Defaults to 18. | `18` |
`bbox_conf` | `float` | Minimum confidence threshold for detections. Defaults to 0.5. | `0.5` |
`batch_size` | `int` | Batch size for prediction. Defaults to 32. | `32` |
Source code in src/osm_ai_helper/run_inference.py
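Assuming `margin` counts rings of tiles on each side of the centre tile (an assumption, not confirmed by the source), the number of tiles fetched grows as `(2 * margin + 1)**2`:

```python
def tiles_covered(margin: int) -> int:
    """Tiles fetched around the centre tile for a given margin.

    Assumes margin counts rings of tiles on each side of the
    centre tile, so margin=1 yields a 3x3 grid.
    """
    side = 2 * margin + 1
    return side * side
```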
osm_ai_helper.export_osm
export_osm(results_dir, output_dir, tags=None)
Export the polygons in `results_dir` to an `.osc` file.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`results_dir` | `str` | Directory containing the results. The results should be in the format of | required |
Source code in src/osm_ai_helper/export_osm.py
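For context, an `.osc` file is an osmChange XML document. A minimal sketch of its structure, built with the standard library (the file written by `export_osm` may carry different attributes and geometry):

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the osmChange (.osc) structure the export targets.
# The id, lat, lon and tag values below are placeholders.
osc = ET.Element("osmChange", version="0.6", generator="osm-ai-helper")
create = ET.SubElement(osc, "create")
node = ET.SubElement(create, "node", id="-1", lat="38.7", lon="-9.1")
ET.SubElement(node, "tag", k="leisure", v="swimming_pool")
xml_text = ET.tostring(osc, encoding="unicode")
```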
osm_ai_helper.utils.inference
download_stacked_image_and_mask(bbox, grouped_elements, zoom, mapbox_token)
Download all tiles within a bounding box and stack them into a single image.
All the grouped_elements are painted on the mask.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`bbox` | `tuple` | Bounding box in the form (south, west, north, east). | required |
`grouped_elements` | `dict` | OpenStreetMap elements grouped with `group_elements_by_tile`. | required |
`zoom` | `int` | Zoom level. See https://docs.mapbox.com/help/glossary/zoom-level/. | required |
`mapbox_token` | `str` | Mapbox token. See https://docs.mapbox.com/help/getting-started/access-tokens/. | required |

Returns:

Name | Type | Description |
---|---|---|
`tuple` | `tuple[ndarray, ndarray]` | Stacked image and mask. |
Source code in src/osm_ai_helper/utils/inference.py
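The stacking step amounts to pasting each 512x512 tile at a pixel offset derived from its `(col, row)` key. A simplified sketch (the helper's own implementation may differ):

```python
import numpy as np

def stack_tiles(tiles: dict, tile_size: int = 512) -> np.ndarray:
    """Stack per-tile RGB images, keyed by (col, row), into one image.

    Columns map to horizontal offsets and rows to vertical offsets.
    """
    cols = sorted({c for c, _ in tiles})
    rows = sorted({r for _, r in tiles})
    out = np.zeros((len(rows) * tile_size, len(cols) * tile_size, 3), dtype=np.uint8)
    for (c, r), img in tiles.items():
        x = cols.index(c) * tile_size
        y = rows.index(r) * tile_size
        out[y:y + tile_size, x:x + tile_size] = img
    return out

# Two horizontally adjacent tiles: a white one and a black one
tiles = {
    (0, 0): np.full((512, 512, 3), 255, np.uint8),
    (1, 0): np.zeros((512, 512, 3), np.uint8),
}
stacked = stack_tiles(tiles)
```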
split_area_into_lat_lon_centers(area_name, zoom, margin)
Split the bounding box of `area_name` into a list of (lat, lon) centers.
If you iterate over the returned list with run_inference, you will cover the entire area.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`area_name` | `str` | Name of the area to split. | required |
`zoom` | `int` | Zoom level to be used in run_inference. | required |
`margin` | `int` | Margin to be used in run_inference. | required |

Returns:

Type | Description |
---|---|
`list[tuple[float, float]]` | List of (lat, lon) centers. |
Source code in src/osm_ai_helper/utils/inference.py
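The idea can be sketched as walking a grid of centers across the bounding box. Here `step` is a hypothetical stand-in for the span one `run_inference` call covers (derived from `zoom` and `margin` in the real helper):

```python
def bbox_to_centers(south: float, west: float, north: float, east: float,
                    step: float) -> list:
    """Sketch: split a bounding box into a grid of (lat, lon) centers.

    `step` is a placeholder for the span covered by one run_inference
    call; the real helper derives it from zoom and margin.
    """
    centers = []
    lat = south + step / 2
    while lat < north:
        lon = west + step / 2
        while lon < east:
            centers.append((lat, lon))
            lon += step
        lat += step
    return centers

centers = bbox_to_centers(0.0, 0.0, 1.0, 1.0, 0.5)
```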
tile_prediction(bbox_predictor, sam_predictor, image, overlap=0.125, bbox_conf=0.5, bbox_pad=0, batch_size=32)
Predict on a large image by splitting it into tiles.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`bbox_predictor` | `YOLO` | YOLO bounding box predictor. See https://docs.ultralytics.com/tasks/detect/. | required |
`sam_predictor` | `SAM2ImagePredictor` | Segment Anything Image Predictor. See https://github.com/facebookresearch/sam2?tab=readme-ov-file#image-prediction. | required |
`image` | `ndarray` | Image to predict on. | required |
`overlap` | `float` | Overlap between tiles. Defaults to 0.125. | `0.125` |
`bbox_conf` | `float` | Minimum confidence threshold for detections. Defaults to 0.5. | `0.5` |
`bbox_pad` | `int` | Padding added to the predicted bounding box. Defaults to 0. | `0` |
`batch_size` | `int` | Batch size for prediction. Defaults to 32. | `32` |

Returns:

Type | Description |
---|---|
`ndarray` | Stacked output. |
Source code in src/osm_ai_helper/utils/inference.py
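Computing where overlapping tiles start along one image axis can be sketched as follows; `tile_prediction`'s actual tiling logic may differ:

```python
def tile_origins(length: int, tile: int, overlap: float) -> list:
    """Start offsets for overlapping tiles along one image axis.

    Consecutive tiles share `overlap * tile` pixels; a final tile is
    added, shifted back, so coverage ends exactly at the image border.
    """
    stride = int(tile * (1 - overlap))
    origins = list(range(0, max(length - tile, 0) + 1, stride))
    if origins[-1] + tile < length:
        origins.append(length - tile)
    return origins

origins = tile_origins(1024, 512, 0.125)
```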
osm_ai_helper.utils.osm
get_area(area_name)
Get the area from the Nominatim API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`area_name` | `str` | The name of the area. | required |

Returns:

Name | Type | Description |
---|---|---|
`dict` | `dict` | The area found. |
Source code in src/osm_ai_helper/utils/osm.py
get_area_id(area_name)
Get the Nominatim ID of an area.
Uses the Nominatim API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`area_name` | `str` | The name of the area. | required |

Returns:

Name | Type | Description |
---|---|---|
`int` | `int` | The Nominatim ID of the area. |
Source code in src/osm_ai_helper/utils/osm.py
get_elements(selector, area=None, bbox=None)
Get elements from OpenStreetMap using the Overpass API.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`selector` | `str` | The selector to use. Example: `"leisure=swimming_pool"` | required |
`area` | `Optional[str]` | The area to search in. Can be a city, state, country, etc. Defaults to None. | `None` |
`bbox` | `Optional[Tuple[float, float, float, float]]` | The bounding box to search in. Format: https://wiki.openstreetmap.org/wiki/Overpass_API/Language_Guide#The_bounding_box. Defaults to None. | `None` |

Returns:

Type | Description |
---|---|
`list[dict]` | The elements found. |
Source code in src/osm_ai_helper/utils/osm.py
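A sketch of how a `bbox` plugs into an Overpass query, following the (south, west, north, east) ordering from the linked language guide (illustration only; the query built by `get_elements` may differ):

```python
def overpass_bbox(south: float, west: float, north: float, east: float) -> str:
    """Format a bounding box as Overpass QL expects:
    (south, west, north, east), comma-separated."""
    return f"({south},{west},{north},{east})"

def bbox_query(selector: str, bbox: str) -> str:
    # Illustration only; the real get_elements may build a different query.
    key, value = selector.split("=")
    return f'[out:json];way["{key}"="{value}"]{bbox};out geom;'

q = bbox_query("leisure=swimming_pool", overpass_bbox(38.6, -9.3, 38.8, -9.0))
```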
osm_ai_helper.utils.plots
show_vlm_entry(entry)
Extracts image and points from entry and draws the points.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`entry` | `dict` | Entry containing the image and points. | required |

Returns:

Name | Type | Description |
---|---|---|
`Image` | `Image` | Image with points drawn. |
Source code in src/osm_ai_helper/utils/plots.py
osm_ai_helper.utils.tiles
group_elements_by_tile(elements, zoom)
Group elements by the tiles they belong to, based on the zoom level.
Each MAPBOX tile is a 512x512 pixel image.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
`elements` | `List[Dict]` | List of elements from `download_osm`. | required |
`zoom` | `int` | Zoom level. See https://docs.mapbox.com/help/glossary/zoom-level/. | required |

Returns:

Type | Description |
---|---|
`dict[tuple, list[dict]]` | Grouped elements. |
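The grouping idea can be sketched with bare `(lat, lon)` points (the real helper groups full OSM elements): compute each point's slippy-map tile and bucket by that key.

```python
import math
from collections import defaultdict

def group_points_by_tile(points, zoom: int) -> dict:
    """Group (lat, lon) points by the slippy-map tile they fall in.

    Sketch of the grouping idea only; the real helper groups full
    OSM elements rather than bare points.
    """
    groups = defaultdict(list)
    n = 2 ** zoom
    for lat, lon in points:
        col = int((lon + 180.0) / 360.0 * n)
        row = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
        groups[(col, row)].append((lat, lon))
    return dict(groups)

groups = group_points_by_tile([(0.1, 0.1), (0.2, 0.2), (-40.0, 120.0)], 1)
```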